Date: Sat, 12 Oct 2024 17:16:22 -0700
From: Boqun Feng <boqun.feng@gmail.com>
To: Alice Ryhl
Cc: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, "Liam R. Howlett", Andrew Morton,
	Greg Kroah-Hartman, Arnd Bergmann, Alex Gaynor, Gary Guo,
	Björn Roy Baron, Benno Lossin, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, rust-for-linux@vger.kernel.org,
	Andreas Hindborg, Wedson Almeida Filho
Subject: Re: [PATCH v6 1/2] rust: mm: add abstractions for mm_struct and vm_area_struct
References: <20241010-vma-v6-0-d89039b6f573@google.com>
	<20241010-vma-v6-1-d89039b6f573@google.com>
In-Reply-To: <20241010-vma-v6-1-d89039b6f573@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
On Thu, Oct 10, 2024 at 12:56:35PM +0000, Alice Ryhl wrote:
> These abstractions allow you to manipulate vmas. Rust Binder will use
> these in a few different ways.
>
> In the mmap implementation, a VmAreaNew will be provided to the mmap
> call, which allows it to modify the vma in ways that are only okay
> during initial setup. This is the case where the most methods are
> available.
>
> However, Rust Binder needs to insert and remove pages from the vma as
> time passes. When incoming messages arrive, pages may need to be
> inserted if space is missing, and in this case that is done by using a
> stashed ARef and calling mmget_not_zero, followed by mmap_write_lock,
> vma_lookup, and vm_insert_page. In this case, since mmap_write_lock is
> used, the VmAreaMut type will be in use.
>
> Another use-case is the shrinker, where the mmap read lock is taken
> instead, and zap_page_range_single is used to remove pages from the
> vma. In this case, only the read lock is taken, so the VmAreaRef type
> will be in use.
>
> Future extensions could involve a VmAreaRcuRef for accessing vma
> methods that are okay to use when holding just the rcu read lock.
> However, these methods are not needed from Rust yet.
>
> This uses shared references even for VmAreaMut. This is preferable to
> using pinned mutable references because those are pretty inconvenient
> due to the lack of reborrowing. However, this means that VmAreaMut
> cannot be Sync. I think it is an acceptable trade-off.
>

Interesting ;-) I agree it's better than using Pin.

> This patch is based on Wedson's implementation on the old rust branch,
> but has been changed significantly. All mistakes are Alice's.
>
> Co-developed-by: Wedson Almeida Filho
> Signed-off-by: Wedson Almeida Filho
> Signed-off-by: Alice Ryhl
> ---

[...]

> +/// A wrapper for the kernel's `struct mm_struct`.
> +///
> +/// This type is like [`Mm`], but with non-zero `mm_users`. It can only be used when `mm_users` can
> +/// be proven to be non-zero at compile-time, usually because the relevant code holds an `mmget`
> +/// refcount. It can be used to access the associated address space.
> +///
> +/// The `ARef` smart pointer holds an `mmget` refcount. Its destructor may sleep.
> +///
> +/// # Invariants
> +///
> +/// Values of this type are always refcounted using `mmget`. The value of `mm_users` is non-zero.
> +#[repr(transparent)]
> +pub struct MmWithUser {
> +    mm: Mm,
> +}
> +
> +// SAFETY: It is safe to call `mmput` on another thread than where `mmget` was called.
> +unsafe impl Send for MmWithUser {}
> +// SAFETY: All methods on `MmWithUser` can be called in parallel from several threads.
> +unsafe impl Sync for MmWithUser {}
> +

[...]

> +
> +/// A guard for the mmap read lock.
> +///
> +/// # Invariants
> +///
> +/// This `MmapReadLock` guard owns the mmap read lock.
> +pub struct MmapReadLock<'a> {
> +    mm: &'a MmWithUser,

Since `MmWithUser` is `Sync`, `MmapReadLock<'a>` is `Send`?
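To make the `Send` question concrete, here is a userland sketch (all names hypothetical; `MmWithUser` is reduced to a std-only stand-in for the kernel type) of how a guard type that would otherwise be `Send` can opt out via a `PhantomData` marker:

```rust
use std::marker::PhantomData;

// Stand-in for MmWithUser: a Sync type that the guard borrows.
struct MmWithUser;

// Because `&T` is Send whenever `T` is Sync, a guard holding only
// `&MmWithUser` would be Send by default. A lock with strict owner
// semantics must be released on the acquiring thread, so the guard
// carries a zero-sized marker field that is neither Send nor Sync.
struct MmapReadGuard<'a> {
    #[allow(dead_code)]
    mm: &'a MmWithUser,
    _not_send: PhantomData<*mut ()>,
}

impl Drop for MmapReadGuard<'_> {
    fn drop(&mut self) {
        // In the kernel this is where mmap_read_unlock() would run.
        println!("mmap read lock released");
    }
}

fn main() {
    let mm = MmWithUser;
    let guard = MmapReadGuard { mm: &mm, _not_send: PhantomData };
    // fn assert_send<T: Send>(_: T) {} -- calling assert_send(guard)
    // here would fail to compile, which is exactly the property we want.
    drop(guard); // prints "mmap read lock released"
}
```

The marker is zero-sized, so the guard stays pointer-sized; the same effect could come from a dedicated not-thread-safe marker type in the kernel crate.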
However, it cannot be `Send`, because the lock must be released by the
same thread that acquired it: although ->mmap_lock is a read-write
*semaphore*, rw_semaphore by default has strict owner semantics (see
Documentation/locking/locktypes.rst).

Also, given that this type is really a lock guard, maybe name it
something like MmapReadGuard or MmapReadLockGuard?

Same `Send` issue and name suggestion for `MmapWriteLock`.

> +}
> +
> +impl<'a> MmapReadLock<'a> {
> +    /// Look up a vma at the given address.
> +    #[inline]
> +    pub fn vma_lookup(&self, vma_addr: usize) -> Option<&virt::VmAreaRef> {
> +        // SAFETY: We hold a reference to the mm, so the pointer must be valid. Any value is okay
> +        // for `vma_addr`.
> +        let vma = unsafe { bindings::vma_lookup(self.mm.as_raw(), vma_addr as _) };
> +
> +        if vma.is_null() {
> +            None
> +        } else {
> +            // SAFETY: We just checked that a vma was found, so the pointer is valid. Furthermore,
> +            // the returned area will borrow from this read lock guard, so it can only be used
> +            // while the read lock is still held.
> +            unsafe { Some(virt::VmAreaRef::from_raw(vma)) }
> +        }
> +    }
> +}
> +
> +impl Drop for MmapReadLock<'_> {
> +    #[inline]
> +    fn drop(&mut self) {
> +        // SAFETY: We hold the read lock by the type invariants.
> +        unsafe { bindings::mmap_read_unlock(self.mm.as_raw()) };
> +    }
> +}
> +
> +/// A guard for the mmap write lock.
> +///
> +/// # Invariants
> +///
> +/// This `MmapWriteLock` guard owns the mmap write lock.
> +pub struct MmapWriteLock<'a> {
> +    mm: &'a MmWithUser,
> +}
> +
> +impl<'a> MmapWriteLock<'a> {
> +    /// Look up a vma at the given address.
> +    #[inline]
> +    pub fn vma_lookup(&mut self, vma_addr: usize) -> Option<&virt::VmAreaMut> {
> +        // SAFETY: We hold a reference to the mm, so the pointer must be valid. Any value is okay
> +        // for `vma_addr`.
> +        let vma = unsafe { bindings::vma_lookup(self.mm.as_raw(), vma_addr as _) };
> +
> +        if vma.is_null() {
> +            None
> +        } else {
> +            // SAFETY: We just checked that a vma was found, so the pointer is valid. Furthermore,
> +            // the returned area will borrow from this write lock guard, so it can only be used
> +            // while the write lock is still held.
> +            unsafe { Some(virt::VmAreaMut::from_raw(vma)) }
> +        }
> +    }
> +}
> +
> +impl Drop for MmapWriteLock<'_> {
> +    #[inline]
> +    fn drop(&mut self) {
> +        // SAFETY: We hold the write lock by the type invariants.
> +        unsafe { bindings::mmap_write_unlock(self.mm.as_raw()) };
> +    }
> +}
> diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs
> new file mode 100644
> index 000000000000..7c09813e22f9
> --- /dev/null
> +++ b/rust/kernel/mm/virt.rs
> @@ -0,0 +1,264 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +// Copyright (C) 2024 Google LLC.
> +
> +//! Virtual memory.

[...]

> +impl VmAreaRef {
> +    /// Access a virtual memory area given a raw pointer.
> +    ///
> +    /// # Safety
> +    ///
> +    /// Callers must ensure that `vma` is valid for the duration of 'a, and that the mmap read lock
> +    /// (or stronger) is held for at least the duration of 'a.
> +    #[inline]
> +    pub unsafe fn from_raw<'a>(vma: *const bindings::vm_area_struct) -> &'a Self {

Unrelated to this patch, but since we have so many `from_raw`s, I want
to suggest that we should look into a #[derive(FromRaw)] ;-) For
example:

	pub trait FromRaw {
	    type RawType;

	    unsafe fn from_raw<'a>(raw: *const Self::RawType) -> &'a Self;
	}

and

	#[derive(FromRaw)]
	#[repr(transparent)] // repr(transparent) is mandatory.
	struct VmAreaRef {
	    vma: Opaque<bindings::vm_area_struct>, // Opaque is also mandatory.
	}

Regards,
Boqun

> +        // SAFETY: The caller ensures that the invariants are satisfied for the duration of 'a.
> +        unsafe { &*vma.cast() }
> +    }

[...]
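For what it's worth, here is a compilable userland sketch of that suggestion (all names hypothetical: `RawVmArea` stands in for the C struct, and the impl the derive would generate is written out by hand):

```rust
// Hypothetical stand-in for the C struct; in the kernel the wrapped
// field would be Opaque<bindings::vm_area_struct>.
#[repr(C)]
struct RawVmArea {
    vm_start: usize,
    vm_end: usize,
}

// The trait that #[derive(FromRaw)] would generate an impl of.
pub trait FromRaw {
    type RawType;

    /// # Safety
    ///
    /// `raw` must point to a valid value that outlives 'a, and any
    /// locks protecting it must be held for the duration of 'a.
    unsafe fn from_raw<'a>(raw: *const Self::RawType) -> &'a Self;
}

// repr(transparent) is what makes the pointer cast below sound:
// VmAreaRef is guaranteed to have the same layout as its single field.
#[repr(transparent)]
struct VmAreaRef {
    vma: RawVmArea,
}

// Hand-written version of what the derive would expand to.
impl FromRaw for VmAreaRef {
    type RawType = RawVmArea;

    unsafe fn from_raw<'a>(raw: *const RawVmArea) -> &'a Self {
        // SAFETY: the caller guarantees validity for 'a, and
        // repr(transparent) guarantees the layouts match.
        unsafe { &*raw.cast() }
    }
}

fn main() {
    let raw = RawVmArea { vm_start: 0x1000, vm_end: 0x2000 };
    // SAFETY: `raw` is a valid local that outlives the reference.
    let vma = unsafe { VmAreaRef::from_raw(&raw) };
    assert_eq!(vma.vma.vm_start, 0x1000);
    println!("vma spans {:#x}..{:#x}", vma.vma.vm_start, vma.vma.vm_end);
}
```

A derive macro would mainly have to check the two stated preconditions (repr(transparent), a single Opaque field) and emit the cast above.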