Date: Fri, 2 Aug 2024 08:32:21 -0700
From: Boqun Feng <boqun.feng@gmail.com>
To: Alice Ryhl
Cc: Miguel Ojeda, Andrew Morton, Alex Gaynor, Wedson Almeida Filho,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Matthew Wilcox, "Liam R. Howlett", Vlastimil Babka,
	Lorenzo Stoakes, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, rust-for-linux@vger.kernel.org
Subject: Re: [PATCH v4] rust: mm: add abstractions for mm_struct and vm_area_struct
References: <20240802-vma-v4-1-091a87058a43@google.com>
In-Reply-To: <20240802-vma-v4-1-091a87058a43@google.com>
Howlett" , Vlastimil Babka , Lorenzo Stoakes , linux-kernel@vger.kernel.org, linux-mm@kvack.org, rust-for-linux@vger.kernel.org Subject: Re: [PATCH v4] rust: mm: add abstractions for mm_struct and vm_area_struct Message-ID: References: <20240802-vma-v4-1-091a87058a43@google.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20240802-vma-v4-1-091a87058a43@google.com> X-Rspam-User: X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 41DFE4000E X-Stat-Signature: hokcopb4xdcdfm6wg8ekgfofuociw7dw X-HE-Tag: 1722612795-906003 X-HE-Meta: U2FsdGVkX19oT9oWqBl6vpcgF1oPAwkutQTg46PH5L0wJNFLWQi6ieP4n1PzLpG1PU5Slx5hhT62MCoy/5KqXQ9SnC/7FjGFNICMQTBW7SKqS98t59uRXqKwynRgg+V7Rxz3mKG1WIy+z7NMf4J5EflMdosvQnpjjbB9kQtJOLSuyi2xzmWVTuv2EPmKrsSbLUjXzdDxqh6Y+7O+fbF+wKiwSr4BPhUphZpkI80IS/koxSIUjK7X6TuwJ/HsKkXTUf3bP0hHkGGXr6qtnvjRjH8Of+yay/C2i3WspYiY+cK4QodVTw5V2PfcZ2u5mvzWESLTnIK6eNPDX9dplfoA/29DaQ05IZwNvU+9mIg7jXHDhDcehQvC9qG4ZWqySyFED1fQSxgykR9s473pMd6kZr94h21A9nHDTKDgZ4OCYfOAkJa7+9KhsJXaUNBhi/3aC5q8wdr6kTG8BHjD41wXoiTMOxthLC7G+H5LuwAT2e96u3ir9+E8vQw+x69lvJzubY8ZO97JZS4BeWYkJ8Pa8zfdIj/YeX1OdUbT5GNgsoHuvn+0+BycNnGv2CDg78EvbKVzQvJstutn/tj7u+A2f7pDNis9agn7jcbKI3VEdpGEjqeob7rErlhwngUmU+HxP8HsX9gcdFr3bNJFKP995GVuAJdgg62oeTVBSmh1lOpqOv+sXutOhCdnIOIQ0YlxPT7fgK7q6pg9adPGZuj2S5QyGnVj+3JlcKD4xyZP9Ybh5sA0CU1HxhvXv++89/iZ5BdjnKhb+pKZtKRqCW6KDqyMoSMgwYgmEVJjGXgs1FaR6wDt+396EE9XY273+nFa92ttuXDTwYbCB+e8pxyyUoiksRNTrC2sck7NXnywN4xX3pU1Sr4xGKUqXCf3wsBvQJUKhMJ/fSOKaicCUa4gq6VxhQPrPexDhGI0GG6ryCkbMcHmudHRdSlad2ATZbUNLw33BcP4phdVmQfJfKp Nhiq7ofg mfOYzOgRb/LUA9M+HPCNNn8eTomoMt3CXHfvh0vRKKq8KbY9fHwbJk8UTW82AqX5tNyWUkD5RU9fFuOYIxTBZtUCNw3Nb5yhdGYsKux+1tCS7CVi0i540l8pWzL5Cf1FsAbytVM0I/zgFU1FiDkeEQsl1kD3jKIhEQWYN6TJr8J/I/HV1Fh/SoSRd5OjcA6YvtWCvx4hIb3GwxB/GNGrtTGnbUqixqcSsbdNNGPo02LI7Xl/vs+zydiSE9D7/DLLdTtN5BbA7ZqbkPCG3jjAoC8Dy3Zq22FpwRw5WRBDnE61TIdGAlFFo6R1CFMQt2eUxGDGfoQdkfQxIO9OVn44yNpSa9f+DT/ExeCA7s4H5T5JOAZF3oWJVE9BonXPdEW4FF9RNVS850mub1mUsqyqTMK+XMBv7noIDD6L//Bu5jGAYbYvNWQ/9vXCKxuI1yyvNnmVqbwEcXwl/15uXxRYb5OXX+/+SeHqiE8yIi/rlAWN0LUXhcY/t4vA/+NElVIz1I83M X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On Fri, Aug 02, 2024 at 07:38:32AM +0000, Alice Ryhl wrote: [...] > +// These methods are safe to call even if `mm_users` is zero. > +impl Mm { > + /// Call `mmgrab` on `current.mm`. > + #[inline] > + pub fn mmgrab_current() -> Option> { > + // SAFETY: It's safe to get the `mm` field from current. > + let mm = unsafe { > + let current = bindings::get_current(); > + (*current).mm > + }; > + > + let mm = NonNull::new(mm)?; > + > + // SAFETY: We just checked that `mm` is not null. > + unsafe { bindings::mmgrab(mm.as_ptr()) }; > + > + // SAFETY: We just created an `mmgrab` refcount. Layouts are compatible due to > + // repr(transparent). > + Some(unsafe { ARef::from_raw(mm.cast()) }) We can use from_raw() + into() here. If a type `impl`s AlwaysRefcounted, we should have no chance to call the "refcount increment" function directly. > + } > + > + /// Returns a raw pointer to the inner `mm_struct`. > + #[inline] > + pub fn as_raw(&self) -> *mut bindings::mm_struct { > + self.mm.get() > + } > + > + /// Obtain a reference from a raw pointer. > + /// > + /// # Safety > + /// > + /// The caller must ensure that `ptr` points at an `mm_struct`, and that it is not deallocated > + /// during the lifetime 'a. 
> +    }
> +
> +    /// Returns a raw pointer to the inner `mm_struct`.
> +    #[inline]
> +    pub fn as_raw(&self) -> *mut bindings::mm_struct {
> +        self.mm.get()
> +    }
> +
> +    /// Obtain a reference from a raw pointer.
> +    ///
> +    /// # Safety
> +    ///
> +    /// The caller must ensure that `ptr` points at an `mm_struct`, and that it is not deallocated
> +    /// during the lifetime 'a.
> +    #[inline]
> +    pub unsafe fn from_raw<'a>(ptr: *const bindings::mm_struct) -> &'a Mm {
> +        // SAFETY: Caller promises that the pointer is valid for 'a. Layouts are compatible due to
> +        // repr(transparent).
> +        unsafe { &*ptr.cast() }
> +    }
> +
> +    /// Check whether this vma is associated with this mm.
> +    #[inline]
> +    pub fn is_same_mm(&self, area: &virt::VmArea) -> bool {
> +        // SAFETY: The `vm_mm` field of the area is immutable, so we can read it without
> +        // synchronization.
> +        let vm_mm = unsafe { (*area.as_ptr()).vm_mm };
> +
> +        ptr::eq(vm_mm, self.as_raw())
> +    }
> +
> +    /// Calls `mmget_not_zero` and returns a handle if it succeeds.
> +    #[inline]
> +    pub fn mmget_not_zero(&self) -> Option<ARef<MmWithUser>> {
> +        // SAFETY: The pointer is valid since self is a reference.
> +        let success = unsafe { bindings::mmget_not_zero(self.as_raw()) };
> +
> +        if success {
> +            // SAFETY: We just created an `mmget` refcount.
> +            Some(unsafe { ARef::from_raw(NonNull::new_unchecked(self.as_raw().cast())) })
> +        } else {
> +            None
> +        }
> +    }
> +}
> +
> +// These methods require `mm_users` to be non-zero.
> +impl MmWithUser {
> +    /// Obtain a reference from a raw pointer.
> +    ///
> +    /// # Safety
> +    ///
> +    /// The caller must ensure that `ptr` points at an `mm_struct`, and that `mm_users` remains
> +    /// non-zero for the duration of the lifetime 'a.
> +    #[inline]
> +    pub unsafe fn from_raw<'a>(ptr: *const bindings::mm_struct) -> &'a MmWithUser {
> +        // SAFETY: Caller promises that the pointer is valid for 'a. The layout is compatible due
> +        // to repr(transparent).
> +        unsafe { &*ptr.cast() }
> +    }
> +
> +    /// Use `mmput_async` when dropping this refcount.
> +    pub fn use_mmput_async(me: ARef<MmWithUser>) -> ARef<MmWithUserAsync> {
> +        // SAFETY: The layouts and invariants are compatible.
> +        unsafe { ARef::from_raw(ARef::into_raw(me).cast()) }
> +    }
> +
> +    /// Lock the mmap write lock.
> +    #[inline]
> +    pub fn mmap_write_lock(&self) -> MmapWriteLock<'_> {
> +        // SAFETY: The pointer is valid since self is a reference.
> +        unsafe { bindings::mmap_write_lock(self.as_raw()) };
> +
> +        // INVARIANT: We just acquired the write lock.
> +        MmapWriteLock { mm: self }
> +    }
> +
> +    /// Lock the mmap read lock.
> +    #[inline]
> +    pub fn mmap_read_lock(&self) -> MmapReadLock<'_> {
> +        // SAFETY: The pointer is valid since self is a reference.
> +        unsafe { bindings::mmap_read_lock(self.as_raw()) };
> +
> +        // INVARIANT: We just acquired the read lock.
> +        MmapReadLock { mm: self }
> +    }
> +
> +    /// Try to lock the mmap read lock.
> +    #[inline]
> +    pub fn mmap_read_trylock(&self) -> Option<MmapReadLock<'_>> {
> +        // SAFETY: The pointer is valid since self is a reference.
> +        let success = unsafe { bindings::mmap_read_trylock(self.as_raw()) };
> +
> +        if success {
> +            // INVARIANT: We just acquired the read lock.
> +            Some(MmapReadLock { mm: self })
> +        } else {
> +            None
> +        }
> +    }
> +}
> +
> +impl MmWithUserAsync {
> +    /// Use `mmput` when dropping this refcount.
> +    pub fn use_mmput(me: ARef<MmWithUserAsync>) -> ARef<MmWithUser> {
> +        // SAFETY: The layouts and invariants are compatible.
> +        unsafe { ARef::from_raw(ARef::into_raw(me).cast()) }
> +    }
> +}
> +
> +/// A guard for the mmap read lock.
> +///
> +/// # Invariants
> +///
> +/// This `MmapReadLock` guard owns the mmap read lock.
> +pub struct MmapReadLock<'a> {
> +    mm: &'a MmWithUser,
> +}
> +
> +impl<'a> MmapReadLock<'a> {
> +    /// Look up a vma at the given address.
> +    #[inline]
> +    pub fn vma_lookup(&self, vma_addr: usize) -> Option<&virt::VmArea> {
> +        // SAFETY: We hold a reference to the mm, so the pointer must be valid. Any value is okay
> +        // for `vma_addr`.
> +        let vma = unsafe { bindings::vma_lookup(self.mm.as_raw(), vma_addr as _) };
> +
> +        if vma.is_null() {
> +            None
> +        } else {
> +            // SAFETY: We just checked that a vma was found, so the pointer is valid. Furthermore,
> +            // the returned area will borrow from this read lock guard, so it can only be used
> +            // while the read lock is still held. The returned reference is immutable, so the
> +            // reference cannot be used to modify the area.
> +            unsafe { Some(virt::VmArea::from_raw_vma(vma)) }
> +        }
> +    }
> +}
> +
> +impl Drop for MmapReadLock<'_> {
> +    #[inline]
> +    fn drop(&mut self) {
> +        // SAFETY: We hold the read lock by the type invariants.
> +        unsafe { bindings::mmap_read_unlock(self.mm.as_raw()) };
> +    }
> +}
> +
> +/// A guard for the mmap write lock.
> +///
> +/// # Invariants
> +///
> +/// This `MmapWriteLock` guard owns the mmap write lock.
> +pub struct MmapWriteLock<'a> {
> +    mm: &'a MmWithUser,
> +}
> +
> +impl<'a> MmapWriteLock<'a> {
> +    /// Look up a vma at the given address.
> +    #[inline]
> +    pub fn vma_lookup(&mut self, vma_addr: usize) -> Option<&mut virt::VmArea> {

I think this needs to be `-> Option<Pin<&mut virt::VmArea>>`;
otherwise you could swap two VMAs (from different MMs) even though
address stability is required by the VMAs themselves (list_head) or
by others (list_head and rb_node):

        let vma1 = writer1.vma_lookup(x)?;
        let vma2 = writer2.vma_lookup(x)?;

        swap(vma1, vma2);
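
An untested sketch of what I mean (this assumes `VmArea` is `!Unpin`,
e.g. because it wraps the C struct in `Opaque`, so that `Pin` actually
prevents safe code from moving the pointee):

        use core::pin::Pin;

        pub fn vma_lookup(&mut self, vma_addr: usize) -> Option<Pin<&mut virt::VmArea>> {
            // SAFETY: We hold a reference to the mm, so the pointer must be
            // valid. Any value is okay for `vma_addr`.
            let vma = unsafe { bindings::vma_lookup(self.mm.as_raw(), vma_addr as _) };

            if vma.is_null() {
                None
            } else {
                // SAFETY: The vma is valid and borrows from this write lock
                // guard. Pinning the exclusive reference means safe code can
                // no longer mem::swap() or mem::replace() the pointee, so
                // the VMA's address stays stable.
                unsafe { Some(Pin::new_unchecked(virt::VmArea::from_raw_vma_mut(vma))) }
            }
        }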
> +        // SAFETY: We hold a reference to the mm, so the pointer must be valid. Any value is okay
> +        // for `vma_addr`.
> +        let vma = unsafe { bindings::vma_lookup(self.mm.as_raw(), vma_addr as _) };
> +
> +        if vma.is_null() {
> +            None
> +        } else {
> +            // SAFETY: We just checked that a vma was found, so the pointer is valid. Furthermore,
> +            // the returned area will borrow from this write lock guard, so it can only be used
> +            // while the write lock is still held. We hold the write lock, so mutable operations on
> +            // the area are okay.
> +            unsafe { Some(virt::VmArea::from_raw_vma_mut(vma)) }
> +        }
> +    }
> +}
> +
> +impl Drop for MmapWriteLock<'_> {
> +    #[inline]
> +    fn drop(&mut self) {
> +        // SAFETY: We hold the write lock by the type invariants.
> +        unsafe { bindings::mmap_write_unlock(self.mm.as_raw()) };
> +    }
> +}

(Will review the locking part later.)

Regards,
Boqun

[...]