From: Alice Ryhl <aliceryhl@google.com>
Date: Thu, 25 Jan 2024 17:15:13 +0100
Subject: Re: [PATCH 2/3] rust: add typed accessors for userspace pointers
To: Arnd Bergmann
Cc: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo,
 Björn Roy Baron, Benno Lossin, Andreas Hindborg, Kees Cook, Alexander Viro,
 Andrew Morton, Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos,
 Martijn Coenen, Joel Fernandes, Carlos Llamas, Suren Baghdasaryan,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 rust-for-linux@vger.kernel.org, Christian Brauner
On Thu, Jan 25, 2024 at 5:00 PM Arnd Bergmann wrote:
>
> On Thu, Jan 25, 2024, at 13:37, Alice Ryhl wrote:
> > On Thu, Jan 25, 2024 at 1:27 PM Arnd Bergmann wrote:
> >> On Wed, Jan 24, 2024, at 12:20, Alice Ryhl wrote:
>
> >> > +EXPORT_SYMBOL_GPL(rust_helper_copy_to_user_unsafe_skip_check_object_size);
> >>
> >> These functions are almost identical to the ones in
> >> lib/usercopy.c for !defined(INLINE_COPY_TO_USER).
> >>
> >> That version has an extra memset() after a partial
> >> copy_from_user(), and you probably want to have the
> >> same thing here for consistency.
> >>
> >> I think ideally we should only have one out-of-line copy
> >> of these two functions and have that one shared between
> >> rust and architectures that want the C version out of line
> >> as well.
> >
> > I had a bit of trouble figuring out all of the copy_[to/from]_user
> > methods that are available. I was hoping that a better solution would
> > be available, and it sounds like one is. Is _copy_from_user always
> > available as an exported symbol? If it's always available and skips
> > the check, then I can just use that. I don't think the memset matters
> > for my case.
>
> At the moment, it appears that it's available on the few architectures
> that don't #define INLINE_COPY_FROM_USER: alpha, csky, powerpc,
> riscv and x86. On the other architectures it is always an inline
> function.
>
> > Otherwise, I can add a helper in rust/helpers.c that wraps
> > _copy_from_user only when INLINE_COPY_FROM_USER is defined, and call
> > the helper in those cases, and otherwise call the exported symbol
> > directly. (I need an exported symbol to call into C from Rust.)
> >
> > Would that make sense?
>
> I don't think we can have a perfect abstraction here, but rather
> than putting knowledge of INLINE_COPY_FROM_USER into the rust
> wrapper, I would suggest putting a bit of information about
> rust into lib/usercopy.c.
>
> I've tried to come up with an idea below, see if that works
> for you.
>
> Signed-off-by: Arnd Bergmann
>
> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> index 3064314f4832..835aa175d0ee 100644
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> @@ -138,13 +138,18 @@ __copy_to_user(void __user *to, const void *from, unsigned long n)
>  	return raw_copy_to_user(to, from, n);
>  }
>
> -#ifdef INLINE_COPY_FROM_USER
>  static inline __must_check unsigned long
> -_copy_from_user(void *to, const void __user *from, unsigned long n)
> +_inline_copy_from_user(void *to, const void __user *from, unsigned long n)
>  {
>  	unsigned long res = n;
>  	might_fault();
>  	if (!should_fail_usercopy() && likely(access_ok(from, n))) {
> +		/*
> +		 * Ensure that bad access_ok() speculation will not
> +		 * lead to nasty side effects *after* the copy is
> +		 * finished:
> +		 */
> +		barrier_nospec();
>  		instrument_copy_from_user_before(to, from, n);
>  		res = raw_copy_from_user(to, from, n);
>  		instrument_copy_from_user_after(to, from, n, res);
> @@ -153,14 +158,11 @@ _copy_from_user(void *to, const void __user *from, unsigned long n)
>  		memset(to + (n - res), 0, res);
>  	return res;
>  }
> -#else
>  extern __must_check unsigned long
>  _copy_from_user(void *, const void __user *, unsigned long);
> -#endif
>
> -#ifdef INLINE_COPY_TO_USER
>  static inline __must_check unsigned long
> -_copy_to_user(void __user *to, const void *from, unsigned long n)
> +_inline_copy_to_user(void __user *to, const void *from, unsigned long n)
>  {
>  	might_fault();
>  	if (should_fail_usercopy())
> @@ -171,25 +173,32 @@ _copy_to_user(void __user *to, const void *from, unsigned long n)
>  	}
>  	return n;
>  }
> -#else
>  extern __must_check unsigned long
>  _copy_to_user(void __user *, const void *, unsigned long);
> -#endif
>
>  static __always_inline unsigned long __must_check
>  copy_from_user(void *to, const void __user *from, unsigned long n)
>  {
> -	if (check_copy_size(to, n, false))
> -		n = _copy_from_user(to, from, n);
> -	return n;
> +	if (!check_copy_size(to, n, false))
> +		return n;
> +#ifdef INLINE_COPY_FROM_USER
> +	return _inline_copy_from_user(to, from, n);
> +#else
> +	return _copy_from_user(to, from, n);
> +#endif
>  }
>
>  static __always_inline unsigned long __must_check
>  copy_to_user(void __user *to, const void *from, unsigned long n)
>  {
> -	if (check_copy_size(from, n, true))
> -		n = _copy_to_user(to, from, n);
> -	return n;
> +	if (!check_copy_size(from, n, true))
> +		return n;
> +
> +#ifdef INLINE_COPY_TO_USER
> +	return _inline_copy_to_user(to, from, n);
> +#else
> +	return _copy_to_user(to, from, n);
> +#endif
>  }
>
>  #ifndef copy_mc_to_kernel
> diff --git a/lib/usercopy.c b/lib/usercopy.c
> index d29fe29c6849..503a064d79e2 100644
> --- a/lib/usercopy.c
> +++ b/lib/usercopy.c
> @@ -7,40 +7,18 @@
>
>  /* out-of-line parts */
>
> -#ifndef INLINE_COPY_FROM_USER
> +#if !defined(INLINE_COPY_FROM_USER) || defined(CONFIG_RUST)
>  unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n)
>  {
> -	unsigned long res = n;
> -	might_fault();
> -	if (!should_fail_usercopy() && likely(access_ok(from, n))) {
> -		/*
> -		 * Ensure that bad access_ok() speculation will not
> -		 * lead to nasty side effects *after* the copy is
> -		 * finished:
> -		 */
> -		barrier_nospec();
> -		instrument_copy_from_user_before(to, from, n);
> -		res = raw_copy_from_user(to, from, n);
> -		instrument_copy_from_user_after(to, from, n, res);
> -	}
> -	if (unlikely(res))
> -		memset(to + (n - res), 0, res);
> -	return res;
> +	return _inline_copy_from_user(to, from, n);
>  }
>  EXPORT_SYMBOL(_copy_from_user);
>  #endif
>
> -#ifndef INLINE_COPY_TO_USER
> +#if !defined(INLINE_COPY_TO_USER) || defined(CONFIG_RUST)
>  unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
>  {
> -	might_fault();
> -	if (should_fail_usercopy())
> -		return n;
> -	if (likely(access_ok(to, n))) {
> -		instrument_copy_to_user(to, from, n);
> -		n = raw_copy_to_user(to, from, n);
> -	}
> -	return n;
> +	return _inline_copy_to_user(to, from, n);
>  }
>  EXPORT_SYMBOL(_copy_to_user);
>  #endif

Sure, if that's okay with you, then I'm happy to do it that way in v2.

Thank you!
Alice