* [PATCH v3 01/12] rust: xarray: minor formatting fixes
From: Andreas Hindborg @ 2026-02-09 14:38 UTC
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm, Andreas Hindborg
Fix formatting in xarray module to comply with kernel coding
guidelines:
- Update use clauses to use vertical layout with each import on its
own line.
- Add trailing empty comments to preserve formatting and prevent
rustfmt from collapsing imports.
- Break long assert_eq! statement in documentation across multiple
lines for better readability.
Reviewed-by: Gary Guo <gary@garyguo.net>
Reviewed-by: Tamir Duberstein <tamird@gmail.com>
Acked-by: Tamir Duberstein <tamird@gmail.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/kernel/xarray.rs | 36 +++++++++++++++++++++++++++++-------
1 file changed, 29 insertions(+), 7 deletions(-)
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index a49d6db288458..88625c9abf4ef 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -4,14 +4,33 @@
 //!
 //! C header: [`include/linux/xarray.h`](srctree/include/linux/xarray.h)
 
-use crate::{
-    alloc, bindings, build_assert,
-    error::{Error, Result},
+use core::{
+    iter,
+    marker::PhantomData,
+    pin::Pin,
+    ptr::NonNull, //
+};
+use kernel::{
+    alloc,
+    bindings,
+    build_assert, //
+    error::{
+        Error,
+        Result, //
+    },
     ffi::c_void,
-    types::{ForeignOwnable, NotThreadSafe, Opaque},
+    types::{
+        ForeignOwnable,
+        NotThreadSafe,
+        Opaque, //
+    },
+};
+use pin_init::{
+    pin_data,
+    pin_init,
+    pinned_drop,
+    PinInit, //
 };
-use core::{iter, marker::PhantomData, pin::Pin, ptr::NonNull};
-use pin_init::{pin_data, pin_init, pinned_drop, PinInit};
 
 /// An array which efficiently maps sparse integer indices to owned objects.
 ///
@@ -44,7 +63,10 @@
 /// *guard.get_mut(0).unwrap() = 0xffff;
 /// assert_eq!(guard.get(0).copied(), Some(0xffff));
 ///
-/// assert_eq!(guard.store(0, beef, GFP_KERNEL)?.as_deref().copied(), Some(0xffff));
+/// assert_eq!(
+///     guard.store(0, beef, GFP_KERNEL)?.as_deref().copied(),
+///     Some(0xffff)
+/// );
 /// assert_eq!(guard.get(0).copied(), Some(0xbeef));
 ///
 /// guard.remove(0);
--
2.51.2
* Re: [PATCH v3 01/12] rust: xarray: minor formatting fixes
From: Daniel Gomez @ 2026-02-10 16:44 UTC
To: Andreas Hindborg
Cc: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, rust-for-linux,
linux-kernel, linux-mm
On 2026-02-09 15:38, Andreas Hindborg wrote:
> Fix formatting in xarray module to comply with kernel coding
> guidelines:
>
> - Update use clauses to use vertical layout with each import on its
> own line.
> - Add trailing empty comments to preserve formatting and prevent
> rustfmt from collapsing imports.
> - Break long assert_eq! statement in documentation across multiple
> lines for better readability.
>
> Reviewed-by: Gary Guo <gary@garyguo.net>
> Reviewed-by: Tamir Duberstein <tamird@gmail.com>
> Acked-by: Tamir Duberstein <tamird@gmail.com>
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Reviewed-by: Daniel Gomez <da.gomez@samsung.com>
* Re: [PATCH v3 01/12] rust: xarray: minor formatting fixes
From: Liam R. Howlett @ 2026-02-10 17:44 UTC
To: Andreas Hindborg
Cc: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
* Andreas Hindborg <a.hindborg@kernel.org> [260209 14:39]:
> Fix formatting in xarray module to comply with kernel coding
> guidelines:
>
> - Update use clauses to use vertical layout with each import on its
> own line.
> - Add trailing empty comments to preserve formatting and prevent
> rustfmt from collapsing imports.
> - Break long assert_eq! statement in documentation across multiple
> lines for better readability.
>
> Reviewed-by: Gary Guo <gary@garyguo.net>
> Reviewed-by: Tamir Duberstein <tamird@gmail.com>
> Acked-by: Tamir Duberstein <tamird@gmail.com>
Acked-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
> ---
> rust/kernel/xarray.rs | 36 +++++++++++++++++++++++++++++-------
> 1 file changed, 29 insertions(+), 7 deletions(-)
>
> diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
> index a49d6db288458..88625c9abf4ef 100644
> --- a/rust/kernel/xarray.rs
> +++ b/rust/kernel/xarray.rs
> @@ -4,14 +4,33 @@
> //!
> //! C header: [`include/linux/xarray.h`](srctree/include/linux/xarray.h)
>
> -use crate::{
> - alloc, bindings, build_assert,
> - error::{Error, Result},
> +use core::{
> + iter,
> + marker::PhantomData,
> + pin::Pin,
> + ptr::NonNull, //
> +};
> +use kernel::{
> + alloc,
> + bindings,
> + build_assert, //
> + error::{
> + Error,
> + Result, //
> + },
> ffi::c_void,
> - types::{ForeignOwnable, NotThreadSafe, Opaque},
> + types::{
> + ForeignOwnable,
> + NotThreadSafe,
> + Opaque, //
> + },
> +};
> +use pin_init::{
> + pin_data,
> + pin_init,
> + pinned_drop,
> + PinInit, //
> };
> -use core::{iter, marker::PhantomData, pin::Pin, ptr::NonNull};
> -use pin_init::{pin_data, pin_init, pinned_drop, PinInit};
>
> /// An array which efficiently maps sparse integer indices to owned objects.
> ///
> @@ -44,7 +63,10 @@
> /// *guard.get_mut(0).unwrap() = 0xffff;
> /// assert_eq!(guard.get(0).copied(), Some(0xffff));
> ///
> -/// assert_eq!(guard.store(0, beef, GFP_KERNEL)?.as_deref().copied(), Some(0xffff));
> +/// assert_eq!(
> +/// guard.store(0, beef, GFP_KERNEL)?.as_deref().copied(),
> +/// Some(0xffff)
> +/// );
> /// assert_eq!(guard.get(0).copied(), Some(0xbeef));
> ///
> /// guard.remove(0);
>
> --
> 2.51.2
>
>
* Re: [PATCH v3 01/12] rust: xarray: minor formatting fixes
From: Mukesh Kumar Chaurasiya @ 2026-02-11 18:30 UTC
To: Andreas Hindborg
Cc: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, Daniel Gomez,
rust-for-linux, linux-kernel, linux-mm
On Mon, Feb 09, 2026 at 03:38:06PM +0100, Andreas Hindborg wrote:
> Fix formatting in xarray module to comply with kernel coding
> guidelines:
>
> - Update use clauses to use vertical layout with each import on its
> own line.
> - Add trailing empty comments to preserve formatting and prevent
> rustfmt from collapsing imports.
> - Break long assert_eq! statement in documentation across multiple
> lines for better readability.
>
> Reviewed-by: Gary Guo <gary@garyguo.net>
> Reviewed-by: Tamir Duberstein <tamird@gmail.com>
> Acked-by: Tamir Duberstein <tamird@gmail.com>
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
> ---
> rust/kernel/xarray.rs | 36 +++++++++++++++++++++++++++++-------
> 1 file changed, 29 insertions(+), 7 deletions(-)
>
> diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
> index a49d6db288458..88625c9abf4ef 100644
> --- a/rust/kernel/xarray.rs
> +++ b/rust/kernel/xarray.rs
> @@ -4,14 +4,33 @@
> //!
> //! C header: [`include/linux/xarray.h`](srctree/include/linux/xarray.h)
>
> -use crate::{
> - alloc, bindings, build_assert,
> - error::{Error, Result},
> +use core::{
> + iter,
> + marker::PhantomData,
> + pin::Pin,
> + ptr::NonNull, //
> +};
> +use kernel::{
> + alloc,
> + bindings,
> + build_assert, //
> + error::{
> + Error,
> + Result, //
> + },
> ffi::c_void,
> - types::{ForeignOwnable, NotThreadSafe, Opaque},
> + types::{
> + ForeignOwnable,
> + NotThreadSafe,
> + Opaque, //
> + },
> +};
> +use pin_init::{
> + pin_data,
> + pin_init,
> + pinned_drop,
> + PinInit, //
> };
> -use core::{iter, marker::PhantomData, pin::Pin, ptr::NonNull};
> -use pin_init::{pin_data, pin_init, pinned_drop, PinInit};
>
> /// An array which efficiently maps sparse integer indices to owned objects.
> ///
> @@ -44,7 +63,10 @@
> /// *guard.get_mut(0).unwrap() = 0xffff;
> /// assert_eq!(guard.get(0).copied(), Some(0xffff));
> ///
> -/// assert_eq!(guard.store(0, beef, GFP_KERNEL)?.as_deref().copied(), Some(0xffff));
> +/// assert_eq!(
> +/// guard.store(0, beef, GFP_KERNEL)?.as_deref().copied(),
> +/// Some(0xffff)
> +/// );
> /// assert_eq!(guard.get(0).copied(), Some(0xbeef));
> ///
> /// guard.remove(0);
>
> --
> 2.51.2
>
LGTM
Reviewed-by: Mukesh Kumar Chaurasiya (IBM) <mkchauras@gmail.com>
>
* [PATCH v3 02/12] rust: xarray: add debug format for `StoreError`
From: Andreas Hindborg @ 2026-02-09 14:38 UTC
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm, Andreas Hindborg
Add a `Debug` implementation for `StoreError<T>` to enable better error
reporting and debugging. The implementation only displays the `error`
field and omits the `value` field, as `T` may not implement `Debug`.
Reviewed-by: Gary Guo <gary@garyguo.net>
Acked-by: Tamir Duberstein <tamird@gmail.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/kernel/xarray.rs | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index 88625c9abf4ef..d9762c6bef19c 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -193,6 +193,14 @@ pub struct StoreError<T> {
     pub value: T,
 }
 
+impl<T> core::fmt::Debug for StoreError<T> {
+    fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
+        f.debug_struct("StoreError")
+            .field("error", &self.error)
+            .finish()
+    }
+}
+
 impl<T> From<StoreError<T>> for Error {
     fn from(value: StoreError<T>) -> Self {
         value.error
--
2.51.2
* Re: [PATCH v3 02/12] rust: xarray: add debug format for `StoreError`
From: Daniel Gomez @ 2026-02-10 16:45 UTC
To: Andreas Hindborg
Cc: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, rust-for-linux,
linux-kernel, linux-mm
On 2026-02-09 15:38, Andreas Hindborg wrote:
> Add a `Debug` implementation for `StoreError<T>` to enable better error
> reporting and debugging. The implementation only displays the `error`
> field and omits the `value` field, as `T` may not implement `Debug`.
>
> Reviewed-by: Gary Guo <gary@garyguo.net>
> Acked-by: Tamir Duberstein <tamird@gmail.com>
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
> ---
> rust/kernel/xarray.rs | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
> index 88625c9abf4ef..d9762c6bef19c 100644
> --- a/rust/kernel/xarray.rs
> +++ b/rust/kernel/xarray.rs
> @@ -193,6 +193,14 @@ pub struct StoreError<T> {
> pub value: T,
> }
>
> +impl<T> core::fmt::Debug for StoreError<T> {
> + fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
> + f.debug_struct("StoreError")
> + .field("error", &self.error)
> + .finish()
> + }
> +}
> +
Is there any best practice for when to include `use core::fmt::*`, so
you can avoid being verbose here?

I see other cases like this, but I couldn't find anything in the coding
guidelines.
Reviewed-by: Daniel Gomez <da.gomez@samsung.com>
* Re: [PATCH v3 02/12] rust: xarray: add debug format for `StoreError`
From: Tamir Duberstein @ 2026-02-10 16:55 UTC
To: Daniel Gomez
Cc: Andreas Hindborg, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, rust-for-linux,
linux-kernel, linux-mm
On Tue, Feb 10, 2026 at 8:45 AM Daniel Gomez <da.gomez@kernel.org> wrote:
>
> On 2026-02-09 15:38, Andreas Hindborg wrote:
> > Add a `Debug` implementation for `StoreError<T>` to enable better error
> > reporting and debugging. The implementation only displays the `error`
> > field and omits the `value` field, as `T` may not implement `Debug`.
> >
> > Reviewed-by: Gary Guo <gary@garyguo.net>
> > Acked-by: Tamir Duberstein <tamird@gmail.com>
> > Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
> > ---
> > rust/kernel/xarray.rs | 8 ++++++++
> > 1 file changed, 8 insertions(+)
> >
> > diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
> > index 88625c9abf4ef..d9762c6bef19c 100644
> > --- a/rust/kernel/xarray.rs
> > +++ b/rust/kernel/xarray.rs
> > @@ -193,6 +193,14 @@ pub struct StoreError<T> {
> > pub value: T,
> > }
> >
> > +impl<T> core::fmt::Debug for StoreError<T> {
> > + fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
> > + f.debug_struct("StoreError")
> > + .field("error", &self.error)
> > + .finish()
> > + }
> > +}
> > +
>
> Is there any best practice for when to include use core::fmt::*, so you can
> avoid being verbose here?
>
> I see other cases like this, but I couldn't find anything in the code
> guidelines.
It would probably be better to use `kernel::fmt::*` rather than
`core::fmt`, so that we can interpose our own trait in the future, if
we want.
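
For the `StoreError` example above, that would look something like this
(a sketch, assuming `kernel::fmt` re-exports the `core::fmt` items):

use kernel::fmt;

impl<T> fmt::Debug for StoreError<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("StoreError")
            .field("error", &self.error)
            .finish()
    }
}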
* Re: [PATCH v3 02/12] rust: xarray: add debug format for `StoreError`
From: Liam R. Howlett @ 2026-02-10 17:44 UTC
To: Andreas Hindborg
Cc: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
* Andreas Hindborg <a.hindborg@kernel.org> [260209 14:39]:
> Add a `Debug` implementation for `StoreError<T>` to enable better error
> reporting and debugging. The implementation only displays the `error`
> field and omits the `value` field, as `T` may not implement `Debug`.
>
> Reviewed-by: Gary Guo <gary@garyguo.net>
> Acked-by: Tamir Duberstein <tamird@gmail.com>
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Acked-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> ---
> rust/kernel/xarray.rs | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
> index 88625c9abf4ef..d9762c6bef19c 100644
> --- a/rust/kernel/xarray.rs
> +++ b/rust/kernel/xarray.rs
> @@ -193,6 +193,14 @@ pub struct StoreError<T> {
> pub value: T,
> }
>
> +impl<T> core::fmt::Debug for StoreError<T> {
> + fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
> + f.debug_struct("StoreError")
> + .field("error", &self.error)
> + .finish()
> + }
> +}
> +
> impl<T> From<StoreError<T>> for Error {
> fn from(value: StoreError<T>) -> Self {
> value.error
>
> --
> 2.51.2
>
>
>
* [PATCH v3 03/12] rust: xarray: add `contains_index` method
From: Andreas Hindborg @ 2026-02-09 14:38 UTC
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm, Andreas Hindborg
Add a convenience method `contains_index` to check whether an element
exists at a given index in the XArray. This method provides a more
ergonomic API compared to calling `get` and checking for `Some`.
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/kernel/xarray.rs | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index d9762c6bef19c..ede48b5e1dba3 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -218,6 +218,27 @@ fn load<F, U>(&self, index: usize, f: F) -> Option<U>
         Some(f(ptr))
     }
 
+    /// Checks if the XArray contains an element at the specified index.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// # use kernel::{alloc::{flags::GFP_KERNEL, kbox::KBox}, xarray::{AllocKind, XArray}};
+    /// let xa = KBox::pin_init(XArray::new(AllocKind::Alloc), GFP_KERNEL)?;
+    ///
+    /// let mut guard = xa.lock();
+    /// assert_eq!(guard.contains_index(42), false);
+    ///
+    /// guard.store(42, KBox::new(0u32, GFP_KERNEL)?, GFP_KERNEL)?;
+    ///
+    /// assert_eq!(guard.contains_index(42), true);
+    ///
+    /// # Ok::<(), kernel::error::Error>(())
+    /// ```
+    pub fn contains_index(&self, index: usize) -> bool {
+        self.get(index).is_some()
+    }
+
     /// Provides a reference to the element at the given index.
     pub fn get(&self, index: usize) -> Option<T::Borrowed<'_>> {
         self.load(index, |ptr| {
--
2.51.2
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
From: Daniel Gomez @ 2026-02-10 16:46 UTC
To: Andreas Hindborg
Cc: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, rust-for-linux,
linux-kernel, linux-mm
On 2026-02-09 15:38, Andreas Hindborg wrote:
> Add a convenience method `contains_index` to check whether an element
> exists at a given index in the XArray. This method provides a more
> ergonomic API compared to calling `get` and checking for `Some`.
>
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
> ---
> rust/kernel/xarray.rs | 21 +++++++++++++++++++++
> 1 file changed, 21 insertions(+)
>
> diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
> index d9762c6bef19c..ede48b5e1dba3 100644
> --- a/rust/kernel/xarray.rs
> +++ b/rust/kernel/xarray.rs
> @@ -218,6 +218,27 @@ fn load<F, U>(&self, index: usize, f: F) -> Option<U>
> Some(f(ptr))
> }
>
> + /// Checks if the XArray contains an element at the specified index.
> + ///
> + /// # Examples
> + ///
> + /// ```
> + /// # use kernel::{alloc::{flags::GFP_KERNEL, kbox::KBox}, xarray::{AllocKind, XArray}};
> + /// let xa = KBox::pin_init(XArray::new(AllocKind::Alloc), GFP_KERNEL)?;
Side comment: I'd also update the examples to the new coding style, and
it'd probably be cleanest to do that in the first commit.
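
I.e., something like this sketch of the hidden doc-test imports in the
vertical style (untested):

/// # use kernel::{
/// #     alloc::{flags::GFP_KERNEL, kbox::KBox},
/// #     xarray::{AllocKind, XArray}, //
/// # };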
> + ///
> + /// let mut guard = xa.lock();
> + /// assert_eq!(guard.contains_index(42), false);
> + ///
> + /// guard.store(42, KBox::new(0u32, GFP_KERNEL)?, GFP_KERNEL)?;
> + ///
> + /// assert_eq!(guard.contains_index(42), true);
> + ///
> + /// # Ok::<(), kernel::error::Error>(())
> + /// ```
> + pub fn contains_index(&self, index: usize) -> bool {
Nit: the method name may imply there's a contains_*() variant. I'd rename it to
contains() so we are consistent with the rest of the API.
Reviewed-by: Daniel Gomez <da.gomez@samsung.com>
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
From: Tamir Duberstein @ 2026-02-10 16:56 UTC
To: Andreas Hindborg
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, Daniel Gomez,
rust-for-linux, linux-kernel, linux-mm
On Mon, Feb 9, 2026 at 6:38 AM Andreas Hindborg <a.hindborg@kernel.org> wrote:
>
> Add a convenience method `contains_index` to check whether an element
> exists at a given index in the XArray. This method provides a more
> ergonomic API compared to calling `get` and checking for `Some`.
>
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
As I said in v1 I'm -1 on this change. As gregkh would say: it's hard
to review a new API without seeing its user.
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
From: Andreas Hindborg @ 2026-02-11 7:31 UTC
To: Tamir Duberstein
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, Daniel Gomez,
rust-for-linux, linux-kernel, linux-mm
Tamir Duberstein <tamird@gmail.com> writes:
> On Mon, Feb 9, 2026 at 6:38 AM Andreas Hindborg <a.hindborg@kernel.org> wrote:
>>
>> Add a convenience method `contains_index` to check whether an element
>> exists at a given index in the XArray. This method provides a more
>> ergonomic API compared to calling `get` and checking for `Some`.
>>
>> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
>
> As I said in v1 I'm -1 on this change. As gregkh would say: it's hard
> to review a new API without seeing its user.
I already gave you the user inline [1] and if you wish you can also see
it in a downstream tree [2]. Gary already explained why this is required
with the current implementation of the borrow checker [3].
Best regards,
Andreas Hindborg
[1] https://lore.kernel.org/r/87344gh2pk.fsf@t14s.mail-host-address-is-not-set
[2] https://github.com/metaspace/linux/blob/aa43a6ecb68a785a90e167609aa57c5a0860d123/drivers/block/rnull/disk_storage.rs#L218
[3] https://lore.kernel.org/r/DFK801ZCI1GD.34GWJ10JZBBBF@garyguo.net
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
From: Tamir Duberstein @ 2026-02-11 18:24 UTC
To: Andreas Hindborg
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, Daniel Gomez,
rust-for-linux, linux-kernel, linux-mm
On Tue, Feb 10, 2026 at 11:31 PM Andreas Hindborg <a.hindborg@kernel.org> wrote:
>
> Tamir Duberstein <tamird@gmail.com> writes:
>
> > On Mon, Feb 9, 2026 at 6:38 AM Andreas Hindborg <a.hindborg@kernel.org> wrote:
> >>
> >> Add a convenience method `contains_index` to check whether an element
> >> exists at a given index in the XArray. This method provides a more
> >> ergonomic API compared to calling `get` and checking for `Some`.
> >>
> >> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
> >
> > As I said in v1 I'm -1 on this change. As gregkh would say: it's hard
> > to review a new API without seeing its user.
>
> I already gave you the user inline [1] and if you wish you can also see
> it in a downstream tree [2]. Gary already explained why this is required
> with the current implementation of the borrow checker [3].
Yeah, that's fine, and the poor ergonomics are IMO a feature - I
should see the ugly `is_some()` call in your code, because it's a
useful signal that something non-obvious is going on. This function hides
that, which I think is not better.
>
> Best regards,
> Andreas Hindborg
>
> [1] https://lore.kernel.org/r/87344gh2pk.fsf@t14s.mail-host-address-is-not-set
> [2] https://github.com/metaspace/linux/blob/aa43a6ecb68a785a90e167609aa57c5a0860d123/drivers/block/rnull/disk_storage.rs#L218
> [3] https://lore.kernel.org/r/DFK801ZCI1GD.34GWJ10JZBBBF@garyguo.net
>
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
From: Liam R. Howlett @ 2026-02-10 17:52 UTC
To: Andreas Hindborg
Cc: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
* Andreas Hindborg <a.hindborg@kernel.org> [260209 14:38]:
> Add a convenience method `contains_index` to check whether an element
> exists at a given index in the XArray. This method provides a more
> ergonomic API compared to calling `get` and checking for `Some`.
I think this is going to result in less efficient code for most uses.
Most users use the results returned, not just checking if there is or is
not a value. So if you find the value with an xarray state and then just
throw it away and find it again, it'll be less efficient.
If there are users that do use the xarray to just check if something
exists or not (which there probably are?), then it should be in a
wrapper for that code and not the generic API. Otherwise we will have
users pop up to use this method when they should not.
>
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
> ---
> rust/kernel/xarray.rs | 21 +++++++++++++++++++++
> 1 file changed, 21 insertions(+)
>
> diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
> index d9762c6bef19c..ede48b5e1dba3 100644
> --- a/rust/kernel/xarray.rs
> +++ b/rust/kernel/xarray.rs
> @@ -218,6 +218,27 @@ fn load<F, U>(&self, index: usize, f: F) -> Option<U>
> Some(f(ptr))
> }
>
> + /// Checks if the XArray contains an element at the specified index.
> + ///
> + /// # Examples
> + ///
> + /// ```
> + /// # use kernel::{alloc::{flags::GFP_KERNEL, kbox::KBox}, xarray::{AllocKind, XArray}};
> + /// let xa = KBox::pin_init(XArray::new(AllocKind::Alloc), GFP_KERNEL)?;
> + ///
> + /// let mut guard = xa.lock();
> + /// assert_eq!(guard.contains_index(42), false);
> + ///
> + /// guard.store(42, KBox::new(0u32, GFP_KERNEL)?, GFP_KERNEL)?;
> + ///
> + /// assert_eq!(guard.contains_index(42), true);
> + ///
> + /// # Ok::<(), kernel::error::Error>(())
> + /// ```
> + pub fn contains_index(&self, index: usize) -> bool {
> + self.get(index).is_some()
> + }
> +
> /// Provides a reference to the element at the given index.
> pub fn get(&self, index: usize) -> Option<T::Borrowed<'_>> {
> self.load(index, |ptr| {
>
> --
> 2.51.2
>
>
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
From: Andreas Hindborg @ 2026-02-11 7:41 UTC
To: Liam R. Howlett
Cc: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
"Liam R. Howlett" <Liam.Howlett@oracle.com> writes:
> * Andreas Hindborg <a.hindborg@kernel.org> [260209 14:38]:
>> Add a convenience method `contains_index` to check whether an element
>> exists at a given index in the XArray. This method provides a more
>> ergonomic API compared to calling `get` and checking for `Some`.
>
> I think this is going to result in less efficient code for most uses.
>
> Most users use the results returned, not just checking if there is or is
> not a value. So if you find the value an xarray state and then just
> throw it away and find it again, it'll be less efficient.
>
> If there are users that do use the xarray to just check if something
> exists or not (which there probably are?), then it should be in a
> wrapper for that code and not the generic API. Otherwise we will have
> users pop up to use this method when they should not.
I agree in that I would prefer matching on the result of a lookup and
using that result. This is not always possible due to a limitation in
the current implementation of the borrow checker. Please see my response
to Tamir [1], it has all the pointers.
Since we cannot use a match statement in certain situations, we have to
fall back to something that does not borrow from the collections, like
`array.get(index).is_some()`. I would argue that
if array.contains_index(foo) { ... }
is easier to read than
if array.get(foo).is_some() { ... }
And I don't think `array.contains()` is going to have worse codegen than
`array.get(index).is_some()`.
Best regards,
Andreas Hindborg
[1] https://lore.kernel.org/rust-for-linux/20260209-xarray-entry-send-v3-0-f777c65b8ae2@kernel.org/T/#m95fb90870c511491f4f487dbf852c689cd0733f4
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
From: Liam R. Howlett @ 2026-02-11 18:21 UTC
To: Andreas Hindborg
Cc: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
* Andreas Hindborg <a.hindborg@kernel.org> [260211 02:41]:
> "Liam R. Howlett" <Liam.Howlett@oracle.com> writes:
>
> > * Andreas Hindborg <a.hindborg@kernel.org> [260209 14:38]:
> >> Add a convenience method `contains_index` to check whether an element
> >> exists at a given index in the XArray. This method provides a more
> >> ergonomic API compared to calling `get` and checking for `Some`.
> >
> > I think this is going to result in less efficient code for most uses.
> >
> > Most users use the results returned, not just checking if there is or is
> > not a value. So if you find the value an xarray state and then just
> > throw it away and find it again, it'll be less efficient.
> >
> > If there are users that do use the xarray to just check if something
> > exists or not (which there probably are?), then it should be in a
> > wrapper for that code and not the generic API. Otherwise we will have
> > users pop up to use this method when they should not.
>
> I agree in that I would prefer matching on the result of a lookup and
> using that result. This is not always possible due to a limitation in
> the current implementation of the borrow checker. Please see my response
> to Tamir [1], it has all the pointers.
>
> Since we cannot use a match statement in certain situations, we have to
> fall back to something that does not borrow from the collections, like
> `array.get(index).is_some()`. I would argue that
>
> if array.contains_index(foo) { ... }
>
> is easier to read than
>
> if array.get(foo).is_some() { ... }
>
> And I don't think `array.contains()` is going to have worse codegen than
> `array.get(index).is_some()`.
This is probably my lack of rust knowledge...
My concern is around API usage. I am concerned people will use xas as a
throw-away lookup with this API and cause more walks of the xarray to
the same location.
In the normal API, we have lookups like this; you take a lock, look
something up, drop the lock and return it. Since the life cycle of the
stored information is outside the scope of the xarray, the user is
dependent on the entry being stable by some other means after the xarray
lock is dropped.
In the advanced API, we do more within the locked area, usually.
Usually, applications don't just print out that there is a value, they do
something with it. So I would expect a real example to be something
like (this horrible pseudo-C/Rust mangled mess):
let entry = array.get_mut(foo);

if (entry.is_some()) {
    /* do something with entry */
    send_to_party(entry);
} else {
    /* deal with it not existing */
}
What I don't want to do:
if (array.contains_index(foo)) {
    entry = array.get_mut(foo);
} else {
    ...
}
Where contains_index(foo) sets up an xas, walks to the location, returns
the entry (or not) and then translates that into a boolean; then if it's
true we set up another xas to walk to the same location.
That is, the worst code gen would come from this:
if (array.get(foo).is_some()) { array.get_mut(foo).. }
From what you said here and the link, you are saying we need to do this
in certain situations due to rust's borrow checker and the lifetime, but
I cannot see why we would need to walk the xarray twice from the example
provided.
And making it easier to do this could result in a lot more users
doubling xarray walks without realising that it's a bad idea (unless
it's this special case).
...
>
> [1] https://lore.kernel.org/rust-for-linux/20260209-xarray-entry-send-v3-0-f777c65b8ae2@kernel.org/T/#m95fb90870c511491f4f487dbf852c689cd0733f4
>
I have trouble following 'the taken arm' in your link. I think you mean
one of the branches based on the existence of the entry, but I don't
know which one is the 'taken' arm, or how 'self' is out of scope.
Other links off the above link seem to indicate it is a problem with the
Rust borrow checker hitting a false positive.
It seems we need to look up things twice to work around the false
positive - or implement something like get_or_insert()?
Or, wait for the new checker to be released - but that doesn't fix all
the false positives, just this one?
So, do all users of the xarray suffer from this false positive?
Thanks,
Liam
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
From: Andreas Hindborg @ 2026-02-12 10:15 UTC
To: Liam R. Howlett
Cc: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
"Liam R. Howlett" <Liam.Howlett@oracle.com> writes:
> * Andreas Hindborg <a.hindborg@kernel.org> [260211 02:41]:
>> "Liam R. Howlett" <Liam.Howlett@oracle.com> writes:
>>
>> > * Andreas Hindborg <a.hindborg@kernel.org> [260209 14:38]:
>> >> Add a convenience method `contains_index` to check whether an element
>> >> exists at a given index in the XArray. This method provides a more
>> >> ergonomic API compared to calling `get` and checking for `Some`.
>> >
>> > I think this is going to result in less efficient code for most uses.
>> >
>> > Most users use the results returned, not just checking if there is or is
>> > not a value. So if you find the value an xarray state and then just
>> > throw it away and find it again, it'll be less efficient.
>> >
>> > If there are users that do use the xarray to just check if something
>> > exists or not (which there probably are?), then it should be in a
>> > wrapper for that code and not the generic API. Otherwise we will have
>> > users pop up to use this method when they should not.
>>
>> I agree in that I would prefer matching on the result of a lookup and
>> using that result. This is not always possible due to a limitation in
>> the current implementation of the borrow checker. Please see my response
>> to Tamir [1], it has all the pointers.
>>
>> Since we cannot use a match statement in certain situations, we have to
>> fall back to something that does not borrow from the collections, like
>> `array.get(index).is_some()`. I would argue that
>>
>> if array.contains_index(foo) { ... }
>>
>> is easier to read than
>>
>> if array.get(foo).is_some() { ... }
>>
>> And I don't think `array.contains()` is going to have worse codegen than
>> `array.get(index).is_some()`.
>
> This is probably my lack of rust knowledge...
>
>
> My concern is around API usage. I am concerned people will use xas as a
> throw-away lookup with this API and cause more walks of the xarray to
> the same location.
>
> In the normal API, we have lookups like this; you take a lock, look
> something up, drop the lock and return it. Since the life cycle of the
> stored information is outside the scope of the xarray, the user is
> dependent on the entry being stable by some other means after the xarray
> lock is dropped.
As discussed elsewhere, the current Rust XArray API only has fully
exclusive access, no reader/writer separation.
After a lookup, the returned object is bound to the lifetime of the lock. So
you cannot drop the lock while holding a mutable reference to something
in the collection, and the "other means" part of your paragraph is
automatically handled in Rust.
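
For instance, this illustrative sketch (using the names from the doc
examples in this series) is rejected by the borrow checker:

let xa = KBox::pin_init(XArray::new(AllocKind::Alloc), GFP_KERNEL)?;

let stale = {
    let guard = xa.lock();
    guard.get(0) // borrows `guard`
}; // error: `guard` dropped here while still borrowed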
> In the advanced API, we do more within the locked area, usually.
>
> Usually, applications don't just print out there is a value, they do
> something with it. So I would expect a real example to be something
> like (this horrible psudo-c/rust mangled mess):
>
> let entry = array.get_mut(foo);
>
> if (entry.is_some()) {
> /* do something with entry */
> send_to_party(entry);
> } else {
> /* deal with it not existing */
> }
>
> What I don't want to do:
>
> if (array.contains_index(foo)) {
> entry = array.get_mut(foo);
> } else {
> ...
> }
>
> Where contains_index(foo) sets up an xas, walks to the location, returns
> the entry (or not) and then translates into a boolean.. then if it's
> true we set up another xas to walk to the same location.
>
> That is, the worst code gen would come from this:
> if (array.get(foo).is_some()) { array.get_mut(foo).. }
You are completely right, this causes two walks to the same location,
which is not great.
> From what you said here and the link, you are saying we need to do this
> in certain situations due to rust's borrow checker and the lifetime, but
> I cannot see why we would need to walk the xarray twice from the example
> provided.
Let me try to explain with an example using user space Rust. Consider a
situation where we have two key-value mapping data structures. Since
we are in user space, I'll go with BTreeMap from the standard library,
but it could be XArray in the kernel.
If we want to implement a transaction across these two KV maps, we
probably need to take a lock to get mutable access to them. Let's
represent this with a `struct Maps` that has two fields `a` and `b` for
these two mutable references:
use std::collections::{
    BTreeMap,
    btree_map::{Entry, OccupiedEntry},
};

type MapType = BTreeMap<u32, u32>;

struct Maps<'a> {
    a: &'a mut MapType,
    b: &'a mut MapType,
}
Now we have two locked data structures and we can implement our
transaction over them. Let's say the algorithm is the following:
- Given a key, if that key exists in map A, return a mutable reference
to the value for that key in map A.
- Else, move the key and value for the lowest ordered key from map A to
map B and return a mutable reference to that value in map B.
My first attempt at implementing this algorithm would be something like
this:
fn transaction_impl1<'a>(maps: &'a mut Maps, key: u32) -> &'a mut u32 {
    if let Entry::Occupied(o) = maps.a.entry(key) {
        o.into_mut()
    } else {
        let value = maps.a.first_entry().expect("Not empty").remove();
        maps.b.entry(key).or_insert(value)
    }
}
However, this does not compile.
error[E0499]: cannot borrow `*maps.a` as mutable more than once at a time
  --> src/main.rs:51:21
   |
47 | fn transaction_impl1<'a>(maps: &'a mut Maps, key: u32) -> &'a mut u32 {
   |                      -- lifetime `'a` defined here
48 |     if let Entry::Occupied(o) = maps.a.entry(key) {
   |     -                           ------ first mutable borrow occurs here
   |  ___|
   | |
49 | |       o.into_mut()
50 | |   } else {
51 | |       let value = maps.a.first_entry().expect("Not empty").remove();
   | |                   ^^^^^^ second mutable borrow occurs here
52 | |       maps.b.entry(key).or_insert(value)
53 | |   }
   | |___- returning this value requires that `*maps.a` is borrowed for `'a`
The `into_mut()` on line 49 captures the lifetime of `o`, and because the
value is returned from the function, it must live for `'a`, which is
until the end of the function.
As far as I understand, this is a borrow checker limitation. It is easy
for us to look at this code and decide that the borrow on line 51 will
never alias with the borrow on line 49.
If we change the code so that we do not capture the lifetime of the
returned object across the if/else, the code will compile:
fn transaction_impl2<'a>(maps: &'a mut Maps, key: u32) -> &'a mut u32 {
    if maps.a.contains_key(&key) {
        maps.a.get_mut(&key).expect("Key is present")
    } else {
        let value = maps.a.first_entry().expect("Not empty").remove();
        maps.b.entry(key).or_insert(value)
    }
}
But now we do two walks in `maps.a` in the taken path, which is not efficient.
>
> And making it easier to do this could result in a lot more users
> doubling xarray walks without realising that it's a bad idea (unless
> it's this special case).
>
> ...
>>
>> [1] https://lore.kernel.org/rust-for-linux/20260209-xarray-entry-send-v3-0-f777c65b8ae2@kernel.org/T/#m95fb90870c511491f4f487dbf852c689cd0733f4
>>
>
> I have trouble following 'the taken arm' in your link. I think you mean
> one of the branches based on the existence of the entry, but I don't
> know which is the 'taken' and how 'self' is out of scope.
>
> Other links off the above link seem to indicate it is a problem with the
> rust borrow checking hitting a false positive.
>
> It seems we need to look up things twice to work around the false
> positive - or implement something like get_or_insert()?
>
> Or, wait for the new checker to be released - but that doesn't fix all
> the false positives, just this one?
>
> So, do all users of the xarray suffer from this false positive?
Not all users will suffer from this. The following code compiles fine:
fn transaction_impl3<'a>(maps: &'a mut Maps, key: u32, value: u32) -> &'a mut u32 {
    if let Entry::Occupied(o) = maps.a.entry(key) {
        o.into_mut()
    } else {
        maps.b.entry(key).or_insert(value)
    }
}
This is valid because the lifetime captured by `o.into_mut()` is not
used on the other match arm.
I hope this helps clarify the situation.
Best regards,
Andreas Hindborg
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
From: Andreas Hindborg @ 2026-02-12 10:52 UTC
To: Liam R. Howlett
Cc: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
Andreas Hindborg <a.hindborg@kernel.org> writes:
> As far as I understand, this is a borrow checker limitation. It is easy
> for us to look at this code and decide that the borrow on line 51 will
> never alias with the borrow on line 49.
I did a bit of googling, and this seems to be a well-known issue with
the current implementation of lifetime analysis in the Rust compiler.
Apparently this kind of code used to be OK [1] but the Rust devs decided
to remove the code that allowed this, because it was causing excessive
compilation times [2]. The upside is that this is solved by the new
lifetime analysis implementation called "Polonius" and it is the
intention to replace the existing implementation with Polonius at some
point [3].
Best regards,
Andreas Hindborg
[1] https://github.com/rust-lang/rust/issues/51545
[2] https://smallcultfollowing.com/babysteps/blog/2018/06/15/mir-based-borrow-check-nll-status-update/
[3] https://rust-lang.github.io/polonius/current_status.html
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
From: Alice Ryhl @ 2026-02-12 11:19 UTC
From: Alice Ryhl @ 2026-02-12 11:19 UTC (permalink / raw)
To: Andreas Hindborg
Cc: Liam R. Howlett, Tamir Duberstein, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
On Thu, Feb 12, 2026 at 11:52 AM Andreas Hindborg <a.hindborg@kernel.org> wrote:
>
> Andreas Hindborg <a.hindborg@kernel.org> writes:
>
> > As far as I understand, this is a borrow checker limitation. It is easy
> > for us to look at this code and decide that the borrow on line 51 will
> > never alias with the borrow on line 49.
>
> I did a bit of googling, and this seems to be a well known issue with
> the current implementation of lifetime analysis in the rust compiler.
> Apparently this kind of code used to be OK [1] but the Rust devs decided
> to remove the code that allowed this, because it was causing excessive
> compilation times [2]. The upside is that this is solved by the new
> lifetime analysis implementation called "Polonius" and it is the
> intention to replace the existing implementation with Polonius at some
> point [3].
I believe the standard fix for this issue is to provide an entry API
similar to HashMap::entry(). See the rbtree for an example, as it
already provides such an API.
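
For reference, the shape of that pattern in user-space Rust (a sketch
with the std HashMap; a single walk that can both inspect and insert):

use std::collections::HashMap;

fn get_or_zero(map: &mut HashMap<u32, u32>, key: u32) -> &mut u32 {
    // One lookup: the returned Entry remembers the slot, whether
    // occupied or vacant, so no second walk is needed.
    map.entry(key).or_insert(0)
}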
Alice
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
From: Andreas Hindborg @ 2026-02-12 12:39 UTC
To: Alice Ryhl
Cc: Liam R. Howlett, Tamir Duberstein, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
"Alice Ryhl" <aliceryhl@google.com> writes:
> On Thu, Feb 12, 2026 at 11:52 AM Andreas Hindborg <a.hindborg@kernel.org> wrote:
>>
>> Andreas Hindborg <a.hindborg@kernel.org> writes:
>>
>> > As far as I understand, this is a borrow checker limitation. It is easy
>> > for us to look at this code and decide that the borrow on line 51 will
>> > never alias with the borrow on line 49.
>>
>> I did a bit of googling, and this seems to be a well known issue with
>> the current implementation of lifetime analysis in the rust compiler.
>> Apparently this kind of code used to be OK [1] but the Rust devs decided
>> to remove the code that allowed this, because it was causing excessive
>> compilation times [2]. The upside is that this is solved by the new
>> lifetime analysis implementation called "Polonius" and it is the
>> intention to replace the existing implementation with Polonius at some
>> point [3].
>
> I believe the standard fix for this issue is to provide an entry api
> similar to HashMap::entry(). See the rbtree for an example, as it
> already provides such API.
The example above [1] is using the BTreeMap entry API to produce the
issue. Are the BTreeMap and HashMap entry APIs significantly different,
or is there something else I missed?
Best regards,
Andreas Hindborg
[1] https://lore.kernel.org/r/87y0kytggx.fsf@kernel.org
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
From: Liam R. Howlett @ 2026-02-12 17:49 UTC
To: Andreas Hindborg
Cc: Alice Ryhl, Tamir Duberstein, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
* Andreas Hindborg <a.hindborg@kernel.org> [260212 07:40]:
> "Alice Ryhl" <aliceryhl@google.com> writes:
>
> > On Thu, Feb 12, 2026 at 11:52 AM Andreas Hindborg <a.hindborg@kernel.org> wrote:
> >>
> >> Andreas Hindborg <a.hindborg@kernel.org> writes:
> >>
> >> > As far as I understand, this is a borrow checker limitation. It is easy
> >> > for us to look at this code and decide that the borrow on line 51 will
> >> > never alias with the borrow on line 49.
> >>
> >> I did a bit of googling, and this seems to be a well known issue with
> >> the current implementation of lifetime analysis in the rust compiler.
> >> Apparently this kind of code used to be OK [1] but the Rust devs decided
> >> to remove the code that allowed this, because it was causing excessive
> >> compilation times [2]. The upside is that this is solved by the new
> >> lifetime analysis implementation called "Polonius" and it is the
> >> intention to replace the existing implementation with Polonius at some
> >> point [3].
> >
> > I believe the standard fix for this issue is to provide an entry api
> > similar to HashMap::entry(). See the rbtree for an example, as it
> > already provides such API.
Alice, can you provide a link to the rbtree code please?
>
> The example above [1] is using the BTreeMap entry API to produce the
> issue. Are the BTreeMap and HashMap entry APIs significantly different,
> or is there something else I missed?
From what I can find, the HashMap is different specifically for this
reason.
This is where my question about get_or_insert() came from; the HashSet
has this workaround, maybe?
AFAICT, the hash workaround is done in the Entry code that takes a
different reference(?) based on the variant (enum?) returned [1].
Or maybe it's about the way branches are evaluated by the checker? Are
these different?
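
From the docs in [1], the shape seems to be roughly this (my sketch):

use std::collections::hash_map::Entry;
use std::collections::HashMap;

fn describe(map: &mut HashMap<u32, u32>, key: u32) -> &'static str {
    // One lookup; each variant of the returned enum carries a
    // different borrow back into the map.
    match map.entry(key) {
        Entry::Occupied(_) => "present",
        Entry::Vacant(_) => "absent",
    }
}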
But I'm really fumbling around this while I learn what you are all
doing! Thanks for all the education on this stuff, it's helping me
understand where we are headed. Hopefully I help along the way this
time.
Thanks,
Liam
[1]. https://doc.rust-lang.org/std/collections/hash_map/enum.Entry.html
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
From: Alice Ryhl @ 2026-02-13 8:15 UTC
To: Liam R. Howlett, Andreas Hindborg, Tamir Duberstein,
Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Trevor Gross,
Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
On Thu, Feb 12, 2026 at 12:49:09PM -0500, Liam R. Howlett wrote:
> * Andreas Hindborg <a.hindborg@kernel.org> [260212 07:40]:
> > "Alice Ryhl" <aliceryhl@google.com> writes:
> >
> > > On Thu, Feb 12, 2026 at 11:52 AM Andreas Hindborg <a.hindborg@kernel.org> wrote:
> > >>
> > >> Andreas Hindborg <a.hindborg@kernel.org> writes:
> > >>
> > >> > As far as I understand, this is a borrow checker limitation. It is easy
> > >> > for us to look at this code and decide that the borrow on line 51 will
> > >> > never alias with the borrow on line 49.
> > >>
> > >> I did a bit of googling, and this seems to be a well known issue with
> > >> the current implementation of lifetime analysis in the rust compiler.
> > >> Apparently this kind of code used to be OK [1] but the Rust devs decided
> > >> to remove the code that allowed this, because it was causing excessive
> > >> compilation times [2]. The upside is that this is solved by the new
> > >> lifetime analysis implementation called "Polonius" and it is the
> > >> intention to replace the existing implementation with Polonius at some
> > >> point [3].
> > >
> > > I believe the standard fix for this issue is to provide an entry api
> > > similar to HashMap::entry(). See the rbtree for an example, as it
> > > already provides such API.
>
> Alice, can you provide a link to the rbtree code please?
Please see rust/kernel/rbtree.rs in the kernel tree.
Alice
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
2026-02-12 12:39 ` Andreas Hindborg
2026-02-12 17:49 ` Liam R. Howlett
@ 2026-02-13 8:17 ` Alice Ryhl
1 sibling, 0 replies; 52+ messages in thread
From: Alice Ryhl @ 2026-02-13 8:17 UTC (permalink / raw)
To: Andreas Hindborg
Cc: Liam R. Howlett, Tamir Duberstein, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
On Thu, Feb 12, 2026 at 01:39:48PM +0100, Andreas Hindborg wrote:
> "Alice Ryhl" <aliceryhl@google.com> writes:
>
> > On Thu, Feb 12, 2026 at 11:52 AM Andreas Hindborg <a.hindborg@kernel.org> wrote:
> >>
> >> Andreas Hindborg <a.hindborg@kernel.org> writes:
> >>
> >> > As far as I understand, this is a borrow checker limitation. It is easy
> >> > for us to look at this code and decide that the borrow on line 51 will
> >> > never alias with the borrow on line 49.
> >>
> >> I did a bit of googling, and this seems to be a well known issue with
> >> the current implementation of lifetime analysis in the rust compiler.
> >> Apparently this kind of code used to be OK [1] but the Rust devs decided
> >> to remove the code that allowed this, because it was causing excessive
> >> compilation times [2]. The upside is that this is solved by the new
> >> lifetime analysis implementation called "Polonius" and it is the
> >> intention to replace the existing implementation with Polonius at some
> >> point [3].
> >
> > I believe the standard fix for this issue is to provide an entry API
> > similar to HashMap::entry(). See the rbtree for an example, as it
> > already provides such an API.
>
> The example above [1] is using the BTreeMap entry API to produce the
> issue. Are the BTreeMap and HashMap entry APIs significantly different,
> or is there something else I missed?
>
> Best regards,
> Andreas Hindborg
>
> [1] https://lore.kernel.org/r/87y0kytggx.fsf@kernel.org
Hrm, tricky. I think it would work if the entry type had an into_map() that
consumes the entry and returns a &mut to the original map.
fn transaction_impl1<'a>(maps: &'a mut Maps, key: u32) -> &'a mut u32 {
    match maps.a.entry(key) {
        Entry::Occupied(o) => o.into_mut(),
        Entry::Vacant(v) => {
            let map_a = v.into_map();
            let value = map_a.first_entry().expect("Not empty").remove();
            maps.b.entry(key).or_insert(value)
        }
    }
}
The HashMap and BTreeMap APIs are not particularly different.
Alice
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
2026-02-12 10:52 ` Andreas Hindborg
2026-02-12 11:19 ` Alice Ryhl
@ 2026-02-12 11:27 ` Miguel Ojeda
2026-02-12 12:47 ` Andreas Hindborg
1 sibling, 1 reply; 52+ messages in thread
From: Miguel Ojeda @ 2026-02-12 11:27 UTC (permalink / raw)
To: Andreas Hindborg
Cc: Liam R. Howlett, Tamir Duberstein, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Lorenzo Stoakes,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, Daniel Gomez,
rust-for-linux, linux-kernel, linux-mm
On Thu, Feb 12, 2026 at 11:52 AM Andreas Hindborg <a.hindborg@kernel.org> wrote:
>
> Apparently this kind of code used to be OK [1] but the Rust devs decided
I don't think it used to be OK -- I think that issue was talking about
code that required non-lexical lifetimes, which was an unstable
feature back then, and the behavior changed while they worked on it.
When NLL became stable later on (years later), the feature still did
not allow this code.
Cheers,
Miguel
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
2026-02-12 11:27 ` Miguel Ojeda
@ 2026-02-12 12:47 ` Andreas Hindborg
2026-02-12 13:34 ` Miguel Ojeda
0 siblings, 1 reply; 52+ messages in thread
From: Andreas Hindborg @ 2026-02-12 12:47 UTC (permalink / raw)
To: Miguel Ojeda
Cc: Liam R. Howlett, Tamir Duberstein, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Lorenzo Stoakes,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, Daniel Gomez,
rust-for-linux, linux-kernel, linux-mm
"Miguel Ojeda" <miguel.ojeda.sandonis@gmail.com> writes:
> On Thu, Feb 12, 2026 at 11:52 AM Andreas Hindborg <a.hindborg@kernel.org> wrote:
>>
>> Apparently this kind of code used to be OK [1] but the Rust devs decided
>
> I don't think it used to be OK -- I think that issue was talking about
> code that required non-lexical lifetimes, which was an unstable
> feature back then, and the behavior changed while they worked on it.
Right. The devil is in the details. My point is that the analysis to
support this was deemed to be too computationally expensive to roll out,
but the intention is to achieve a similar analysis at some point in the
future, via Polonius.
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 03/12] rust: xarray: add `contains_index` method
2026-02-12 12:47 ` Andreas Hindborg
@ 2026-02-12 13:34 ` Miguel Ojeda
0 siblings, 0 replies; 52+ messages in thread
From: Miguel Ojeda @ 2026-02-12 13:34 UTC (permalink / raw)
To: Andreas Hindborg
Cc: Liam R. Howlett, Tamir Duberstein, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Lorenzo Stoakes,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, Daniel Gomez,
rust-for-linux, linux-kernel, linux-mm
On Thu, Feb 12, 2026 at 1:47 PM Andreas Hindborg <a.hindborg@kernel.org> wrote:
>
> Right. The devil is in the details. My point is that the analysis to
> support this was deemed to be too computationally expensive to roll out,
> but the intention is to achieve a similar analysis at some point in the
> future, via Polonius.
Yeah, that is fine, I just wanted to avoid confusion, because it
sounded like Rust broke (a lot of) existing code, which wouldn't have
been acceptable (for the kernel and many other projects).
Cheers,
Miguel
^ permalink raw reply [flat|nested] 52+ messages in thread
* [PATCH v3 04/12] rust: xarray: add `XArrayState`
2026-02-09 14:38 [PATCH v3 00/12] rust: xarray: add entry API with preloading Andreas Hindborg
` (2 preceding siblings ...)
2026-02-09 14:38 ` [PATCH v3 03/12] rust: xarray: add `contains_index` method Andreas Hindborg
@ 2026-02-09 14:38 ` Andreas Hindborg
2026-02-10 16:48 ` Daniel Gomez
2026-02-10 16:57 ` Tamir Duberstein
2026-02-09 14:38 ` [PATCH v3 05/12] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load` Andreas Hindborg
` (7 subsequent siblings)
11 siblings, 2 replies; 52+ messages in thread
From: Andreas Hindborg @ 2026-02-09 14:38 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm, Andreas Hindborg
Add `XArrayState` as internal state for XArray iteration and entry
operations. This struct wraps the C `xa_state` structure and holds a
reference to a `Guard` to ensure exclusive access to the XArray for the
lifetime of the state object.
The `XAS_RESTART` constant is also exposed through the bindings helper
to properly initialize the `xa_node` field.
The struct and its constructor are marked with `#[expect(dead_code)]` as
there are no users yet. We will remove this annotation in a later patch.
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/bindings/bindings_helper.h | 1 +
rust/kernel/xarray.rs | 41 ++++++++++++++++++++++++++++++++++++++++-
2 files changed, 41 insertions(+), 1 deletion(-)
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index a067038b4b422..58605c32e8102 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -117,6 +117,7 @@ const xa_mark_t RUST_CONST_HELPER_XA_PRESENT = XA_PRESENT;
const gfp_t RUST_CONST_HELPER_XA_FLAGS_ALLOC = XA_FLAGS_ALLOC;
const gfp_t RUST_CONST_HELPER_XA_FLAGS_ALLOC1 = XA_FLAGS_ALLOC1;
+const size_t RUST_CONST_HELPER_XAS_RESTART = (size_t)XAS_RESTART;
const vm_flags_t RUST_CONST_HELPER_VM_MERGEABLE = VM_MERGEABLE;
const vm_flags_t RUST_CONST_HELPER_VM_READ = VM_READ;
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index ede48b5e1dba3..d1246ec114898 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -8,7 +8,10 @@
iter,
marker::PhantomData,
pin::Pin,
- ptr::NonNull, //
+ ptr::{
+ null_mut,
+ NonNull, //
+ },
};
use kernel::{
alloc,
@@ -319,6 +322,42 @@ pub fn store(
}
}
+/// Internal state for XArray iteration and entry operations.
+///
+/// # Invariants
+///
+/// - `state` is always a valid `bindings::xa_state`.
+#[expect(dead_code)]
+pub(crate) struct XArrayState<'a, 'b, T: ForeignOwnable> {
+ /// Holds a reference to the lock guard to ensure the lock is not dropped
+ /// while `Self` is live.
+ _access: PhantomData<&'b Guard<'a, T>>,
+ state: bindings::xa_state,
+}
+
+impl<'a, 'b, T: ForeignOwnable> XArrayState<'a, 'b, T> {
+ #[expect(dead_code)]
+ fn new(access: &'b Guard<'a, T>, index: usize) -> Self {
+ let ptr = access.xa.xa.get();
+ // INVARIANT: We initialize `self.state` to a valid value below.
+ Self {
+ _access: PhantomData,
+ state: bindings::xa_state {
+ xa: ptr,
+ xa_index: index,
+ xa_shift: 0,
+ xa_sibs: 0,
+ xa_offset: 0,
+ xa_pad: 0,
+ xa_node: bindings::XAS_RESTART as *mut bindings::xa_node,
+ xa_alloc: null_mut(),
+ xa_update: None,
+ xa_lru: null_mut(),
+ },
+ }
+ }
+}
+
// SAFETY: `XArray<T>` has no shared mutable state so it is `Send` iff `T` is `Send`.
unsafe impl<T: ForeignOwnable + Send> Send for XArray<T> {}
--
2.51.2
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 04/12] rust: xarray: add `XArrayState`
2026-02-09 14:38 ` [PATCH v3 04/12] rust: xarray: add `XArrayState` Andreas Hindborg
@ 2026-02-10 16:48 ` Daniel Gomez
2026-02-11 7:42 ` Andreas Hindborg
2026-02-10 16:57 ` Tamir Duberstein
1 sibling, 1 reply; 52+ messages in thread
From: Daniel Gomez @ 2026-02-10 16:48 UTC (permalink / raw)
To: Andreas Hindborg
Cc: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, rust-for-linux,
linux-kernel, linux-mm
On 2026-02-09 15:38, Andreas Hindborg wrote:
> Add `XArrayState` as internal state for XArray iteration and entry
> operations. This struct wraps the C `xa_state` structure and holds a
> reference to a `Guard` to ensure exclusive access to the XArray for the
> lifetime of the state object.
>
> The `XAS_RESTART` constant is also exposed through the bindings helper
> to properly initialize the `xa_node` field.
>
> The struct and its constructor are marked with `#[expect(dead_code)]` as
> there are no users yet. We will remove this annotation in a later patch.
It makes sense to me to merge patches 4 and 5 to avoid this.
>
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
> ---
> rust/bindings/bindings_helper.h | 1 +
> rust/kernel/xarray.rs | 41 ++++++++++++++++++++++++++++++++++++++++-
> 2 files changed, 41 insertions(+), 1 deletion(-)
>
> diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
> index a067038b4b422..58605c32e8102 100644
> --- a/rust/bindings/bindings_helper.h
> +++ b/rust/bindings/bindings_helper.h
...
> @@ -319,6 +322,42 @@ pub fn store(
...
> +impl<'a, 'b, T: ForeignOwnable> XArrayState<'a, 'b, T> {
> + #[expect(dead_code)]
> + fn new(access: &'b Guard<'a, T>, index: usize) -> Self {
> + let ptr = access.xa.xa.get();
> + // INVARIANT: We initialize `self.state` to a valid value below.
> + Self {
> + _access: PhantomData,
> + state: bindings::xa_state {
> + xa: ptr,
> + xa_index: index,
> + xa_shift: 0,
> + xa_sibs: 0,
> + xa_offset: 0,
To match the C XArray __XA_STATE() we should also pass shift and sibs,
even if the only use case we currently have is setting these to 0
(XA_STATE()).
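As a sketch (the `with_order` name and `u8` types here are
illustrative; the field layout is copied from the patch above):
impl<'a, 'b, T: ForeignOwnable> XArrayState<'a, 'b, T> {
    /// Hypothetical __XA_STATE()-style constructor; `new()` would then
    /// just call `Self::with_order(access, index, 0, 0)`.
    fn with_order(access: &'b Guard<'a, T>, index: usize, shift: u8, sibs: u8) -> Self {
        Self {
            _access: PhantomData,
            state: bindings::xa_state {
                xa: access.xa.xa.get(),
                xa_index: index,
                xa_shift: shift,
                xa_sibs: sibs,
                xa_offset: 0,
                xa_pad: 0,
                xa_node: bindings::XAS_RESTART as *mut bindings::xa_node,
                xa_alloc: null_mut(),
                xa_update: None,
                xa_lru: null_mut(),
            },
        }
    }
}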
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 04/12] rust: xarray: add `XArrayState`
2026-02-10 16:48 ` Daniel Gomez
@ 2026-02-11 7:42 ` Andreas Hindborg
0 siblings, 0 replies; 52+ messages in thread
From: Andreas Hindborg @ 2026-02-11 7:42 UTC (permalink / raw)
To: Daniel Gomez
Cc: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, rust-for-linux,
linux-kernel, linux-mm
Daniel Gomez <da.gomez@kernel.org> writes:
> On 2026-02-09 15:38, Andreas Hindborg wrote:
>> Add `XArrayState` as internal state for XArray iteration and entry
>> operations. This struct wraps the C `xa_state` structure and holds a
>> reference to a `Guard` to ensure exclusive access to the XArray for the
>> lifetime of the state object.
>>
>> The `XAS_RESTART` constant is also exposed through the bindings helper
>> to properly initialize the `xa_node` field.
>>
>> The struct and its constructor are marked with `#[expect(dead_code)]` as
>> there are no users yet. We will remove this annotation in a later patch.
>
> It makes sense to me to merge patches 4 and 5 to avoid this.
It's always a balance. When I merge things I tend to get comments that I
should split things out to make them easier to digest and review.
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 04/12] rust: xarray: add `XArrayState`
2026-02-09 14:38 ` [PATCH v3 04/12] rust: xarray: add `XArrayState` Andreas Hindborg
2026-02-10 16:48 ` Daniel Gomez
@ 2026-02-10 16:57 ` Tamir Duberstein
1 sibling, 0 replies; 52+ messages in thread
From: Tamir Duberstein @ 2026-02-10 16:57 UTC (permalink / raw)
To: Andreas Hindborg
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, Daniel Gomez,
rust-for-linux, linux-kernel, linux-mm
On Mon, Feb 9, 2026 at 6:39 AM Andreas Hindborg <a.hindborg@kernel.org> wrote:
>
> Add `XArrayState` as internal state for XArray iteration and entry
> operations. This struct wraps the C `xa_state` structure and holds a
> reference to a `Guard` to ensure exclusive access to the XArray for the
> lifetime of the state object.
>
> The `XAS_RESTART` constant is also exposed through the bindings helper
> to properly initialize the `xa_node` field.
>
> The struct and its constructor are marked with `#[expect(dead_code)]` as
> there are no users yet. We will remove this annotation in a later patch.
>
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
> ---
> rust/bindings/bindings_helper.h | 1 +
> rust/kernel/xarray.rs | 41 ++++++++++++++++++++++++++++++++++++++++-
> 2 files changed, 41 insertions(+), 1 deletion(-)
>
> diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
> index a067038b4b422..58605c32e8102 100644
> --- a/rust/bindings/bindings_helper.h
> +++ b/rust/bindings/bindings_helper.h
> @@ -117,6 +117,7 @@ const xa_mark_t RUST_CONST_HELPER_XA_PRESENT = XA_PRESENT;
>
> const gfp_t RUST_CONST_HELPER_XA_FLAGS_ALLOC = XA_FLAGS_ALLOC;
> const gfp_t RUST_CONST_HELPER_XA_FLAGS_ALLOC1 = XA_FLAGS_ALLOC1;
> +const size_t RUST_CONST_HELPER_XAS_RESTART = (size_t)XAS_RESTART;
Please add a comment to explain the cast.
>
> const vm_flags_t RUST_CONST_HELPER_VM_MERGEABLE = VM_MERGEABLE;
> const vm_flags_t RUST_CONST_HELPER_VM_READ = VM_READ;
> diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
> index ede48b5e1dba3..d1246ec114898 100644
> --- a/rust/kernel/xarray.rs
> +++ b/rust/kernel/xarray.rs
> @@ -8,7 +8,10 @@
> iter,
> marker::PhantomData,
> pin::Pin,
> - ptr::NonNull, //
> + ptr::{
> + null_mut,
> + NonNull, //
> + },
> };
> use kernel::{
> alloc,
> @@ -319,6 +322,42 @@ pub fn store(
> }
> }
>
> +/// Internal state for XArray iteration and entry operations.
> +///
> +/// # Invariants
> +///
> +/// - `state` is always a valid `bindings::xa_state`.
> +#[expect(dead_code)]
> +pub(crate) struct XArrayState<'a, 'b, T: ForeignOwnable> {
> + /// Holds a reference to the lock guard to ensure the lock is not dropped
> + /// while `Self` is live.
> + _access: PhantomData<&'b Guard<'a, T>>,
> + state: bindings::xa_state,
> +}
> +
> +impl<'a, 'b, T: ForeignOwnable> XArrayState<'a, 'b, T> {
> + #[expect(dead_code)]
> + fn new(access: &'b Guard<'a, T>, index: usize) -> Self {
> + let ptr = access.xa.xa.get();
> + // INVARIANT: We initialize `self.state` to a valid value below.
> + Self {
> + _access: PhantomData,
> + state: bindings::xa_state {
> + xa: ptr,
> + xa_index: index,
> + xa_shift: 0,
> + xa_sibs: 0,
> + xa_offset: 0,
> + xa_pad: 0,
> + xa_node: bindings::XAS_RESTART as *mut bindings::xa_node,
> + xa_alloc: null_mut(),
> + xa_update: None,
> + xa_lru: null_mut(),
> + },
> + }
> + }
> +}
> +
> // SAFETY: `XArray<T>` has no shared mutable state so it is `Send` iff `T` is `Send`.
> unsafe impl<T: ForeignOwnable + Send> Send for XArray<T> {}
>
>
> --
> 2.51.2
>
>
^ permalink raw reply [flat|nested] 52+ messages in thread
* [PATCH v3 05/12] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load`
2026-02-09 14:38 [PATCH v3 00/12] rust: xarray: add entry API with preloading Andreas Hindborg
` (3 preceding siblings ...)
2026-02-09 14:38 ` [PATCH v3 04/12] rust: xarray: add `XArrayState` Andreas Hindborg
@ 2026-02-09 14:38 ` Andreas Hindborg
2026-02-10 18:16 ` Liam R. Howlett
2026-02-09 14:38 ` [PATCH v3 06/12] rust: xarray: simplify `Guard::load` Andreas Hindborg
` (6 subsequent siblings)
11 siblings, 1 reply; 52+ messages in thread
From: Andreas Hindborg @ 2026-02-09 14:38 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm, Andreas Hindborg
Replace the call to `xa_load` with `xas_load` in `Guard::load`. The
`xa_load` function takes the RCU lock internally, which we do not need,
since the `Guard` already holds an exclusive lock on the `XArray`. The
`xas_load` function operates on `xa_state` and assumes the required locks
are already held.
This change also removes the `#[expect(dead_code)]` annotation from
`XArrayState` and its constructor, as they are now in use.
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/kernel/xarray.rs | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index d1246ec114898..eadddafb180ec 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -215,10 +215,8 @@ fn load<F, U>(&self, index: usize, f: F) -> Option<U>
where
F: FnOnce(NonNull<c_void>) -> U,
{
- // SAFETY: `self.xa.xa` is always valid by the type invariant.
- let ptr = unsafe { bindings::xa_load(self.xa.xa.get(), index) };
- let ptr = NonNull::new(ptr.cast())?;
- Some(f(ptr))
+ let mut state = XArrayState::new(self, index);
+ Some(f(state.load()?))
}
/// Checks if the XArray contains an element at the specified index.
@@ -327,7 +325,6 @@ pub fn store(
/// # Invariants
///
/// - `state` is always a valid `bindings::xa_state`.
-#[expect(dead_code)]
pub(crate) struct XArrayState<'a, 'b, T: ForeignOwnable> {
/// Holds a reference to the lock guard to ensure the lock is not dropped
/// while `Self` is live.
@@ -336,7 +333,6 @@ pub(crate) struct XArrayState<'a, 'b, T: ForeignOwnable> {
}
impl<'a, 'b, T: ForeignOwnable> XArrayState<'a, 'b, T> {
- #[expect(dead_code)]
fn new(access: &'b Guard<'a, T>, index: usize) -> Self {
let ptr = access.xa.xa.get();
// INVARIANT: We initialize `self.state` to a valid value below.
@@ -356,6 +352,13 @@ fn new(access: &'b Guard<'a, T>, index: usize) -> Self {
},
}
}
+
+ fn load(&mut self) -> Option<NonNull<c_void>> {
+ // SAFETY: `self.state` is always valid by the type invariant of
+ // `XArrayState`, and we hold the xarray lock.
+ let ptr = unsafe { bindings::xas_load(&raw mut self.state) };
+ NonNull::new(ptr.cast())
+ }
}
// SAFETY: `XArray<T>` has no shared mutable state so it is `Send` iff `T` is `Send`.
--
2.51.2
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 05/12] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load`
2026-02-09 14:38 ` [PATCH v3 05/12] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load` Andreas Hindborg
@ 2026-02-10 18:16 ` Liam R. Howlett
2026-02-10 19:53 ` Tamir Duberstein
0 siblings, 1 reply; 52+ messages in thread
From: Liam R. Howlett @ 2026-02-10 18:16 UTC (permalink / raw)
To: Andreas Hindborg
Cc: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
* Andreas Hindborg <a.hindborg@kernel.org> [260209 14:39]:
> Replace the call to `xa_load` with `xas_load` in `Guard::load`. The
> `xa_load` function takes the RCU lock internally, which we do not need,
> since the `Guard` already holds an exclusive lock on the `XArray`. The
> `xas_load` function operates on `xa_state` and assumes the required locks
> are already held.
>
> This change also removes the `#[expect(dead_code)]` annotation from
> `XArrayState` and its constructor, as they are now in use.
I don't understand the locking here.
You are saying that, since you hold the xarray write lock, you won't be
taking the rcu read lock, but then you change the api of load? That
seems wrong to me.
Any readers of the api that calls load will now need to hold the rcu
read lock externally. If you're doing this, then you should indicate
that is necessary in the function name, like the C side does. Otherwise
you are limiting the users to the advanced API, aren't you?
Or are you saying that xarray can only be used if you hold the exclusive
lock, which is now a read and write lock?
>
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
> ---
> rust/kernel/xarray.rs | 15 +++++++++------
> 1 file changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
> index d1246ec114898..eadddafb180ec 100644
> --- a/rust/kernel/xarray.rs
> +++ b/rust/kernel/xarray.rs
> @@ -215,10 +215,8 @@ fn load<F, U>(&self, index: usize, f: F) -> Option<U>
> where
> F: FnOnce(NonNull<c_void>) -> U,
> {
> - // SAFETY: `self.xa.xa` is always valid by the type invariant.
> - let ptr = unsafe { bindings::xa_load(self.xa.xa.get(), index) };
> - let ptr = NonNull::new(ptr.cast())?;
> - Some(f(ptr))
> + let mut state = XArrayState::new(self, index);
> + Some(f(state.load()?))
> }
>
> /// Checks if the XArray contains an element at the specified index.
> @@ -327,7 +325,6 @@ pub fn store(
> /// # Invariants
> ///
> /// - `state` is always a valid `bindings::xa_state`.
> -#[expect(dead_code)]
> pub(crate) struct XArrayState<'a, 'b, T: ForeignOwnable> {
> /// Holds a reference to the lock guard to ensure the lock is not dropped
> /// while `Self` is live.
> @@ -336,7 +333,6 @@ pub(crate) struct XArrayState<'a, 'b, T: ForeignOwnable> {
> }
>
> impl<'a, 'b, T: ForeignOwnable> XArrayState<'a, 'b, T> {
> - #[expect(dead_code)]
> fn new(access: &'b Guard<'a, T>, index: usize) -> Self {
> let ptr = access.xa.xa.get();
> // INVARIANT: We initialize `self.state` to a valid value below.
> @@ -356,6 +352,13 @@ fn new(access: &'b Guard<'a, T>, index: usize) -> Self {
> },
> }
> }
> +
> + fn load(&mut self) -> Option<NonNull<c_void>> {
> > + // SAFETY: `self.state` is always valid by the type invariant of
> > + // `XArrayState`, and we hold the xarray lock.
> + let ptr = unsafe { bindings::xas_load(&raw mut self.state) };
> + NonNull::new(ptr.cast())
> + }
> }
>
> // SAFETY: `XArray<T>` has no shared mutable state so it is `Send` iff `T` is `Send`.
>
> --
> 2.51.2
>
>
>
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 05/12] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load`
2026-02-10 18:16 ` Liam R. Howlett
@ 2026-02-10 19:53 ` Tamir Duberstein
2026-02-10 20:59 ` Liam R. Howlett
0 siblings, 1 reply; 52+ messages in thread
From: Tamir Duberstein @ 2026-02-10 19:53 UTC (permalink / raw)
To: Liam R. Howlett, Andreas Hindborg, Tamir Duberstein,
Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
On Tue, Feb 10, 2026 at 10:16 AM Liam R. Howlett
<Liam.Howlett@oracle.com> wrote:
>
> * Andreas Hindborg <a.hindborg@kernel.org> [260209 14:39]:
> > Replace the call to `xa_load` with `xas_load` in `Guard::load`. The
> > `xa_load` function takes the RCU lock internally, which we do not need,
> > since the `Guard` already holds an exclusive lock on the `XArray`. The
> > `xas_load` function operates on `xa_state` and assumes the required locks
> > are already held.
> >
> > This change also removes the `#[expect(dead_code)]` annotation from
> > `XArrayState` and its constructor, as they are now in use.
>
> I don't understand the locking here.
>
> You are saying that, since you hold the xarray write lock, you won't be
> taking the rcu read lock, but then you change the api of load? That
> seems wrong to me.
This patch doesn't change the API of load. Andreas is saying that the
type system already requires the caller to hold the xarray spin lock
when load is called, meaning acquiring the RCU lock isn't necessary.
>
> Any readers of the api that calls load will now need to hold the rcu
> read lock externally. If you're doing this, then you should indicate
> that is necessary in the function name, like the C side does. Otherwise
> you are limiting the users to the advanced API, aren't you?
The existing API already requires users to hold the xarray lock.
>
> Or are you saying that xarray can only be used if you hold the exclusive
> lock, which is now a read and write lock?
Yes - except for the word "now"; I'm not sure what you mean by it.
>
> >
> > Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Acked-by: Tamir Duberstein <tamird@kernel.org>
> > ---
> > rust/kernel/xarray.rs | 15 +++++++++------
> > 1 file changed, 9 insertions(+), 6 deletions(-)
> >
> > diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
> > index d1246ec114898..eadddafb180ec 100644
> > --- a/rust/kernel/xarray.rs
> > +++ b/rust/kernel/xarray.rs
> > @@ -215,10 +215,8 @@ fn load<F, U>(&self, index: usize, f: F) -> Option<U>
> > where
> > F: FnOnce(NonNull<c_void>) -> U,
> > {
> > - // SAFETY: `self.xa.xa` is always valid by the type invariant.
> > - let ptr = unsafe { bindings::xa_load(self.xa.xa.get(), index) };
> > - let ptr = NonNull::new(ptr.cast())?;
> > - Some(f(ptr))
> > + let mut state = XArrayState::new(self, index);
> > + Some(f(state.load()?))
This can probably be written as `state.load().map(f)`.
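That is, something like (same behavior, just via Option::map):
fn load<F, U>(&self, index: usize, f: F) -> Option<U>
where
    F: FnOnce(NonNull<c_void>) -> U,
{
    let mut state = XArrayState::new(self, index);
    // Map the located entry through `f` instead of using `?` + Some().
    state.load().map(f)
}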
> > }
> >
> > /// Checks if the XArray contains an element at the specified index.
> > @@ -327,7 +325,6 @@ pub fn store(
> > /// # Invariants
> > ///
> > /// - `state` is always a valid `bindings::xa_state`.
> > -#[expect(dead_code)]
> > pub(crate) struct XArrayState<'a, 'b, T: ForeignOwnable> {
> > /// Holds a reference to the lock guard to ensure the lock is not dropped
> > /// while `Self` is live.
> > @@ -336,7 +333,6 @@ pub(crate) struct XArrayState<'a, 'b, T: ForeignOwnable> {
> > }
> >
> > impl<'a, 'b, T: ForeignOwnable> XArrayState<'a, 'b, T> {
> > - #[expect(dead_code)]
> > fn new(access: &'b Guard<'a, T>, index: usize) -> Self {
> > let ptr = access.xa.xa.get();
> > // INVARIANT: We initialize `self.state` to a valid value below.
> > @@ -356,6 +352,13 @@ fn new(access: &'b Guard<'a, T>, index: usize) -> Self {
> > },
> > }
> > }
> > +
> > + fn load(&mut self) -> Option<NonNull<c_void>> {
> > + // SAFETY: `self.state` is always valid by the type invariant of
> > + // `XArrayState`, and we hold the xarray lock.
> > + let ptr = unsafe { bindings::xas_load(&raw mut self.state) };
> > + NonNull::new(ptr.cast())
> > + }
> > }
> >
> > // SAFETY: `XArray<T>` has no shared mutable state so it is `Send` iff `T` is `Send`.
> >
> > --
> > 2.51.2
> >
> >
> >
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 05/12] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load`
2026-02-10 19:53 ` Tamir Duberstein
@ 2026-02-10 20:59 ` Liam R. Howlett
2026-02-10 21:22 ` Tamir Duberstein
0 siblings, 1 reply; 52+ messages in thread
From: Liam R. Howlett @ 2026-02-10 20:59 UTC (permalink / raw)
To: Tamir Duberstein
Cc: Andreas Hindborg, Tamir Duberstein, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Lorenzo Stoakes,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, Daniel Gomez,
rust-for-linux, linux-kernel, linux-mm
* Tamir Duberstein <tamird@kernel.org> [260210 19:54]:
> On Tue, Feb 10, 2026 at 10:16 AM Liam R. Howlett
> <Liam.Howlett@oracle.com> wrote:
> >
> > * Andreas Hindborg <a.hindborg@kernel.org> [260209 14:39]:
> > > Replace the call to `xa_load` with `xas_load` in `Guard::load`. The
> > > `xa_load` function takes the RCU lock internally, which we do not need,
> > > since the `Guard` already holds an exclusive lock on the `XArray`. The
> > > `xas_load` function operates on `xa_state` and assumes the required locks
> > > are already held.
> > >
> > > This change also removes the `#[expect(dead_code)]` annotation from
> > > `XArrayState` and its constructor, as they are now in use.
> >
> > I don't understand the locking here.
> >
> > You are saying that, since you hold the xarray write lock, you won't be
> > taking the rcu read lock, but then you change the api of load? That
> > seems wrong to me.
>
> This patch doesn't change the API of load. Andreas is saying that the
> type system already requires the caller to hold the xarray spin lock
> when load is called, meaning acquiring the RCU lock isn't necessary.
What I mean is that the API can no longer be called when holding the RCU
read lock. You seem to imply this is already the case though.
>
> >
> > Any readers of the api that calls load will now need to hold the rcu
> > read lock externally. If you're doing this, then you should indicate
> > that is necessary in the function name, like the C side does. Otherwise
> > you are limiting the users to the advanced API, aren't you?
>
> The existing API already requires users to hold the xarray lock.
>
> >
> > Or are you saying that xarray can only be used if you hold the exclusive
> > lock, which is now a read and write lock?
>
> Yes - except for the word "now"; I'm not sure what you mean by it.
I'm trying to understand the locking on the rust side.
I think you answered it by telling me that all readers and writers use
the spinlock.
Is this a temporary limitation?
Thanks,
Liam
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 05/12] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load`
2026-02-10 20:59 ` Liam R. Howlett
@ 2026-02-10 21:22 ` Tamir Duberstein
2026-02-10 21:34 ` Alice Ryhl
0 siblings, 1 reply; 52+ messages in thread
From: Tamir Duberstein @ 2026-02-10 21:22 UTC (permalink / raw)
To: Liam R. Howlett, Tamir Duberstein, Andreas Hindborg,
Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
On Tue, Feb 10, 2026 at 12:59 PM Liam R. Howlett
<Liam.Howlett@oracle.com> wrote:
>
> * Tamir Duberstein <tamird@kernel.org> [260210 19:54]:
> > On Tue, Feb 10, 2026 at 10:16 AM Liam R. Howlett
> > <Liam.Howlett@oracle.com> wrote:
> > >
> > > * Andreas Hindborg <a.hindborg@kernel.org> [260209 14:39]:
> > > > Replace the call to `xa_load` with `xas_load` in `Guard::load`. The
> > > > `xa_load` function takes the RCU lock internally, which we do not need,
> > > > since the `Guard` already holds an exclusive lock on the `XArray`. The
> > > > `xas_load` function operates on `xa_state` and assumes the required locks
> > > > are already held.
> > > >
> > > > This change also removes the `#[expect(dead_code)]` annotation from
> > > > `XArrayState` and its constructor, as they are now in use.
> > >
> > > I don't understand the locking here.
> > >
> > > You are saying that, since you hold the xarray write lock, you won't be
> > > taking the rcu read lock, but then you change the api of load? That
> > > seems wrong to me.
> >
> > This patch doesn't change the API of load. Andreas is saying that the
> > type system already requires the caller to hold the xarray spin lock
> > when load is called, meaning acquiring the RCU lock isn't necessary.
>
> What I mean is that the API can no longer be called when holding the RCU
> read lock. You seem to imply this is already the case though.
>
> >
> > >
> > > Any readers of the api that calls load will now need to hold the rcu
> > > read lock externally. If you're doing this, then you should indicate
> > > that is necessary in the function name, like the C side does. Otherwise
> > > you are limiting the users to the advanced API, aren't you?
> >
> > The existing API already requires users to hold the xarray lock.
> >
> > >
> > > Or are you saying that xarray can only be used if you hold the exclusive
> > > lock, which is now a read and write lock?
> >
> > Yes - except for the word "now"; I'm not sure what you mean by it.
>
> I'm trying to understand the locking on the rust side.
>
> I think you answered it by telling me that all readers and writers use
> the spinlock.
Indeed. The current API doesn't expose load on the xarray at all; load
is only available on the "guard" type. An instance of "guard" can
exist only when the lock is held. The lock is unlocked when the guard
is dropped.
>
> Is this a temporary limitation?
Maybe? I don't think RfL has good abstractions for RCU yet. For
example, exposing load directly on the xarray using xa_load would
require a way to guarantee that the returned pointer's target isn't
being concurrently mutated (e.g. under the xarray lock). I'm not aware
of anyone asking for this, though.
>
> Thanks,
> Liam
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 05/12] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load`
2026-02-10 21:22 ` Tamir Duberstein
@ 2026-02-10 21:34 ` Alice Ryhl
2026-02-11 14:32 ` Liam R. Howlett
0 siblings, 1 reply; 52+ messages in thread
From: Alice Ryhl @ 2026-02-10 21:34 UTC (permalink / raw)
To: Tamir Duberstein
Cc: Liam R. Howlett, Andreas Hindborg, Tamir Duberstein,
Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Trevor Gross,
Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
On Tue, Feb 10, 2026 at 10:23 PM Tamir Duberstein <tamird@kernel.org> wrote:
>
> On Tue, Feb 10, 2026 at 12:59 PM Liam R. Howlett
> <Liam.Howlett@oracle.com> wrote:
> > Is this a temporary limitation?
>
> Maybe? I don't think RfL has good abstractions for RCU yet. For
> example, exposing load directly on the xarray using xa_load would
> require a way to guarantee that the returned pointer's target isn't
> being concurrently mutated (e.g. under the xarray lock). I'm not aware
> of anyone asking for this, though.
It's relatively easy to add an rcu-backed load using the RCU
abstractions we have today. I already shared an RFC containing such a
method for the maple tree, and it would not be much different for
xarray.
https://lore.kernel.org/all/20260116-rcu-box-v1-0-38ebfbcd53f0@google.com/
Alice
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 05/12] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load`
2026-02-10 21:34 ` Alice Ryhl
@ 2026-02-11 14:32 ` Liam R. Howlett
2026-02-11 18:00 ` Boqun Feng
0 siblings, 1 reply; 52+ messages in thread
From: Liam R. Howlett @ 2026-02-11 14:32 UTC (permalink / raw)
To: Alice Ryhl
Cc: Tamir Duberstein, Andreas Hindborg, Tamir Duberstein,
Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Trevor Gross,
Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
* Alice Ryhl <aliceryhl@google.com> [260210 16:34]:
> On Tue, Feb 10, 2026 at 10:23 PM Tamir Duberstein <tamird@kernel.org> wrote:
> >
> > On Tue, Feb 10, 2026 at 12:59 PM Liam R. Howlett
> > <Liam.Howlett@oracle.com> wrote:
> > > Is this a temporary limitation?
> >
> > Maybe? I don't think RfL has good abstractions for RCU yet. For
> > example, exposing load directly on the xarray using xa_load would
> > require a way to guarantee that the returned pointer's target isn't
> > being concurrently mutated (e.g. under the xarray lock). I'm not aware
> > of anyone asking for this, though.
>
> It's relatively easy to add an rcu-backed load using the RCU
> abstractions we have today. I already shared an RFC containing such a
> method for the maple tree, and it would not be much different for
> xarray.
> https://lore.kernel.org/all/20260116-rcu-box-v1-0-38ebfbcd53f0@google.com/
>
It would probably be worth having two loads then, one that does
rcu_read_lock()/unlock() and one for writer/advanced users like we have
on the C side of things.
Or at least name the load() function to indicate which is implemented
today?
At least on the maple tree side, we have both interfaces and users for
both. I just found the change to remove the rcu safety odd because I
assumed both are needed.
Thanks,
Liam
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 05/12] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load`
2026-02-11 14:32 ` Liam R. Howlett
@ 2026-02-11 18:00 ` Boqun Feng
2026-02-11 18:19 ` Tamir Duberstein
2026-02-11 18:55 ` Liam R. Howlett
0 siblings, 2 replies; 52+ messages in thread
From: Boqun Feng @ 2026-02-11 18:00 UTC (permalink / raw)
To: Liam R. Howlett, Alice Ryhl, Tamir Duberstein, Andreas Hindborg,
Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Trevor Gross,
Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
On Wed, Feb 11, 2026 at 09:32:36AM -0500, Liam R. Howlett wrote:
> * Alice Ryhl <aliceryhl@google.com> [260210 16:34]:
> > On Tue, Feb 10, 2026 at 10:23 PM Tamir Duberstein <tamird@kernel.org> wrote:
> > >
> > > On Tue, Feb 10, 2026 at 12:59 PM Liam R. Howlett
> > > <Liam.Howlett@oracle.com> wrote:
> > > > Is this a temporary limitation?
> > >
> > > Maybe? I don't think RfL has good abstractions for RCU yet. For
> > > example, exposing load directly on the xarray using xa_load would
> > > require a way to guarantee that the returned pointer's target isn't
Well, if we only return a pointer, we don't need to guarantee that,
right? Because it's up to the user to provide that guarantee. So we
could have XArray::load() (not Guard::load()) that just calls xa_load().
Also see below.
> > > being concurrently mutated (e.g. under the xarray lock). I'm not aware
> > > of anyone asking for this, though.
> >
> > It's relatively easy to add an rcu-backed load using the RCU
> > abstractions we have today. I already shared an RFC containing such a
> > method for the maple tree, and it would not be much different for
> > xarray.
> > https://lore.kernel.org/all/20260116-rcu-box-v1-0-38ebfbcd53f0@google.com/
I need to point out a difference between xas_load() and Alice's usage
(also what Tamir mentioned above): what Alice needs (at least from
her patchset) is that the existence of the object is protected by RCU,
i.e. if there is someone else dropping the object, an RCU read lock
would still guarantee the access to the object is valid.
However, the internal RCU usage of both xarray and maple tree is to
protect the *internal* data structure, if I'm not missing anything,
i.e. a writer may change the array or the tree while a reader is
reading, and the internal structure itself is still consistent and
valid. But nothing guarantees the object you read is still valid. For
example, you can have an xa_erase() racing with an xa_load():
  <writer>                      <reader>
  ptr = xa_erase(xa, idx);
                                ptr = xa_load(xa, idx);
  reclaim(ptr);
                                use(ptr); // <- object may be gone
the users of xarray need to use some other mechanism to guarantee the
existence of the object.
In Alice's case, she in fact used an RCU read side critical section
with a larger scope to protect the object as well, which is definitely
nice to have, but not the only way of using maple/xarray.
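For contrast, the xa_erase()/xa_load() interleaving above is not
expressible through the Guard API in this series, because the value
borrow is tied to the lock, not to RCU (a sketch; `xa.lock()` and
`use_value()` are illustrative names):
// `v` borrows from `guard`, and the xarray lock is held for as long as
// `guard` is live, so no writer can erase and reclaim the entry while
// the value is still in use.
let guard = xa.lock();
if let Some(v) = guard.get(0) {
    use_value(v); // OK: lock still held
}
drop(guard); // the borrow `v` has ended by this point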
> >
>
> It would probably be worth having two loads then, one that does
> rcu_read_lock()/unlock() and one for writer/advanced users like we have
> on the C side of things.
>
Agreed. But we may need more ;-)
Here IIUC what Andreas does is adding a `load()` for `Guard` of
`XArray`, which is the load for a writer, and most certainly you won't
need to take an RCU read lock for that. The load of a reader can be
added as I suggested above (similar to your "rcu_read_lock()/unlock()"
suggestion above), but with no object existence guarantee. We likely
need a third API that can provide the object existence guarantee,
similar to what Alice had in the maple tree.
> Or at least name the load() function to indicate which is implemented
> today?
>
It's a namespace thing ;-) , the function in this patch is
kernel::xarray::Guard::load(), and as I suggest here
kernel::xarray::XArray::load() should be the same as xa_load().
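As a sketch of that shape (illustrative only, not part of this series;
it reuses the pieces already in rust/kernel/xarray.rs):
impl<T: ForeignOwnable> XArray<T> {
    /// Reader-side load in the spirit of xa_load(). The returned pointer
    /// comes with no object existence guarantee; the caller must provide
    /// that through some other mechanism before using it.
    pub fn load(&self, index: usize) -> Option<NonNull<c_void>> {
        // SAFETY: `self.xa` is always valid by the type invariant;
        // xa_load() takes the RCU read lock internally.
        let ptr = unsafe { bindings::xa_load(self.xa.get(), index) };
        NonNull::new(ptr.cast())
    }
}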
Regards,
Boqun
> At least on the maple tree side, we have both interfaces and users for
> both. I just found the change to remove the rcu safety odd because I
> assumed both are needed.
>
> Thanks,
> Liam
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 05/12] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load`
2026-02-11 18:00 ` Boqun Feng
@ 2026-02-11 18:19 ` Tamir Duberstein
2026-02-11 18:24 ` Boqun Feng
2026-02-11 18:55 ` Liam R. Howlett
1 sibling, 1 reply; 52+ messages in thread
From: Tamir Duberstein @ 2026-02-11 18:19 UTC (permalink / raw)
To: Boqun Feng
Cc: Liam R. Howlett, Alice Ryhl, Andreas Hindborg, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Trevor Gross, Danilo Krummrich, Lorenzo Stoakes,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, Daniel Gomez,
rust-for-linux, linux-kernel, linux-mm
On Wed, Feb 11, 2026 at 10:00 AM Boqun Feng <boqun@kernel.org> wrote:
>
> On Wed, Feb 11, 2026 at 09:32:36AM -0500, Liam R. Howlett wrote:
> > * Alice Ryhl <aliceryhl@google.com> [260210 16:34]:
> > > On Tue, Feb 10, 2026 at 10:23 PM Tamir Duberstein <tamird@kernel.org> wrote:
> > > >
> > > > On Tue, Feb 10, 2026 at 12:59 PM Liam R. Howlett
> > > > <Liam.Howlett@oracle.com> wrote:
> > > > > Is this a temporary limitation?
> > > >
> > > > Maybe? I don't think RfL has good abstractions for RCU yet. For
> > > > example, exposing load directly on the xarray using xa_load would
> > > > require a way to guarantee that the returned pointer's target isn't
>
> Well, if we only return a pointer, we don't need to guarantee that,
> right? Because it's up to the user to provide that guarantee. So we
> could have XArray::load() (not Guard::load()) that just calls xa_load().
> Also see below.
>
> > > > being concurrently mutated (e.g. under the xarray lock). I'm not aware
> > > > of anyone asking for this, though.
> > >
> > > It's relatively easy to add an rcu-backed load using the RCU
> > > abstractions we have today. I already shared an RFC containing such a
> > > method for the maple tree, and it would not be much different for
> > > xarray.
> > > https://lore.kernel.org/all/20260116-rcu-box-v1-0-38ebfbcd53f0@google.com/
>
> I need to point out a difference between xas_load() and Alice's usage
> (also what Tamir mentioned above): what Alice needs (at least from
> her patchset) is that the existence of the object is protected by RCU,
> i.e. if there is someone else dropping the object, an RCU read lock
> would still guarantee the access to the object is valid.
>
> However, the internal RCU usage of both xarray and maple tree is to
> protect the *internal* data structure, if I'm not missing anything,
> i.e. a writer may change the array or the tree while a reader is
> reading, and the internal structure itself is still consistent and
> valid. But nothing guarantees the object you read is still valid. For
> example, you can have an xa_erase() racing with an xa_load():
>
>   <writer>                      <reader>
>   ptr = xa_erase(xa, idx);
>                                 ptr = xa_load(xa, idx);
>   reclaim(ptr);
>                                 use(ptr); // <- object may be gone
>
> the users of xarray need to use some other mechanism to guarantee the
> existence of the object.
>
> In Alice's case, she in fact used an RCU read side critical section
> with a larger scope to protect the object as well, which is definitely
> nice to have, but not the only way of using maple/xarray.
>
> > >
> >
> > It would probably be worth having two loads then, one that does
> > rcu_read_lock()/unlock() and one for writer/advanced users like we have
> > on the C side of things.
> >
>
> Agreed. But we may need more ;-)
>
> Here IIUC what Andreas does is adding a `load()` for `Guard` of
> `XArray`, which is the load for a writer, and most certainly you won't
> need to take an RCU read lock for that. The load of a reader can be
> added as I suggested above (similar to your "rcu_read_lock()/unlock()"
> suggestion above), but with no object existence guarantee. We likely
> need a third API that can provide the object existence guarantee,
> similar to what Alice had in the maple tree.
>
> > Or at least name the load() function to indicate which is implemented
> > today?
> >
>
> It's a namespace thing ;-) , the function in this patch is
> kernel::xarray::Guard::load(), and as I suggest here
> kernel::xarray::XArray::load() should be the same as xa_load().
Just to clarify: `kernel::xarray::XArray::load()` does not currently
exist and no one has yet asked for it.
>
> Regards,
> Boqun
>
> > At least on the maple tree side, we have both interfaces and users for
> > both. I just found the change to remove the rcu safety odd because I
> > assumed both are needed.
> >
> > Thanks,
> > Liam
>
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 05/12] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load`
2026-02-11 18:19 ` Tamir Duberstein
@ 2026-02-11 18:24 ` Boqun Feng
0 siblings, 0 replies; 52+ messages in thread
From: Boqun Feng @ 2026-02-11 18:24 UTC (permalink / raw)
To: Tamir Duberstein
Cc: Liam R. Howlett, Alice Ryhl, Andreas Hindborg, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Trevor Gross, Danilo Krummrich, Lorenzo Stoakes,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, Daniel Gomez,
rust-for-linux, linux-kernel, linux-mm
On Wed, Feb 11, 2026 at 01:19:23PM -0500, Tamir Duberstein wrote:
> On Wed, Feb 11, 2026 at 10:00 AM Boqun Feng <boqun@kernel.org> wrote:
> >
> > On Wed, Feb 11, 2026 at 09:32:36AM -0500, Liam R. Howlett wrote:
> > > * Alice Ryhl <aliceryhl@google.com> [260210 16:34]:
> > > > On Tue, Feb 10, 2026 at 10:23 PM Tamir Duberstein <tamird@kernel.org> wrote:
> > > > >
> > > > > On Tue, Feb 10, 2026 at 12:59 PM Liam R. Howlett
> > > > > <Liam.Howlett@oracle.com> wrote:
> > > > > > Is this a temporary limitation?
> > > > >
> > > > > Maybe? I don't think RfL has good abstractions for RCU yet. For
> > > > > example, exposing load directly on the xarray using xa_load would
> > > > > require a way to guarantee that the returned pointer's target isn't
> >
> > Well, if we only return a pointer, we don't need to guarantee that,
> > right? Because it's up to the user to provide that guarantee. So we
> > could have XArray::load() (not Guard::load()) that just calls xa_load().
> > Also see below.
> >
> > > > > being concurrently mutated (e.g. under the xarray lock). I'm not aware
> > > > > of anyone asking for this, though.
> > > >
> > > > It's relatively easy to add an rcu-backed load using the RCU
> > > > abstractions we have today. I already shared an RFC containing such a
> > > > method for the maple tree, and it would not be much different for
> > > > xarray.
> > > > https://lore.kernel.org/all/20260116-rcu-box-v1-0-38ebfbcd53f0@google.com/
> >
> > I need to point out a difference between xas_load() and Alice's usage
> > (also what Tamir mentioned above): what Alice needs (at least from
> > her patchset) is that the existence of the object is protected by RCU,
> > i.e. if there is someone else dropping the object, an RCU read lock
> > would still guarantee the access to the object is valid.
> >
> > However, the internal RCU usage of both xarray and maple tree is to
> > protect the *internal* data structure, if I'm not missing anything,
> > i.e. a writer may change the array or the tree while a reader is
> > reading, and the internal structure itself is still consistent and
> > valid. But nothing guarantees the object you read is still valid. For
> > example, you can have an xa_erase() racing with an xa_load():
> >
> >   <writer>                      <reader>
> >   ptr = xa_erase(xa, idx);
> >                                 ptr = xa_load(xa, idx);
> >   reclaim(ptr);
> >                                 use(ptr); // <- object may be gone
> >
> > the users of xarray need to use some other mechanism to guarantee the
> > existence of the object.
> >
> > In Alice's case, she in fact used an RCU read side critical section
> > with a larger scope to protect the object as well, which is definitely
> > nice to have, but not the only way of using maple/xarray.
> >
> > > >
> > >
> > > It would probably be worth having two loads then, one that does
> > > rcu_read_lock()/unlock() and one for writer/advanced users like we have
> > > on the C side of things.
> > >
> >
> > Agreed. But we may need more ;-)
> >
> > Here IIUC what Andreas does is adding a `load()` for `Guard` of
> > `XArray`, which is the load for a writer, and most certainly you won't
> > need to take an RCU read lock for that. The load of a reader can be
> > added as I suggested above (similar to your "rcu_read_lock()/unlock()"
> > suggestion above), but with no object existence guarantee. We likely
> > need a third API that can provide the object existence guarantee,
> > similar to what Alice had in the maple tree.
> >
> > > Or at least name the load() function to indicate which is implemented
> > > today?
> > >
> >
> > It's a namespace thing ;-) , the function in this patch is
> > kernel::xarray::Guard::load(), and as I suggest here
> > kernel::xarray::XArray::load() should be the same as xa_load().
>
> Just to clarify: `kernel::xarray::XArray::load()` does not currently
> exist and no one has yet asked for it.
>
Yeah ;-) I meant to say we can add it if there is a user and we can use
xa_load() to implement that.
Regards,
Boqun
> >
> > Regards,
> > Boqun
> >
> > > At least on the maple tree side, we have both interfaces and users for
> > > both. I just found the change to remove the rcu safety odd because I
> > > assumed both are needed.
> > >
> > > Thanks,
> > > Liam
> >
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 05/12] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load`
2026-02-11 18:00 ` Boqun Feng
2026-02-11 18:19 ` Tamir Duberstein
@ 2026-02-11 18:55 ` Liam R. Howlett
2026-02-11 19:45 ` Boqun Feng
1 sibling, 1 reply; 52+ messages in thread
From: Liam R. Howlett @ 2026-02-11 18:55 UTC (permalink / raw)
To: Boqun Feng
Cc: Alice Ryhl, Tamir Duberstein, Andreas Hindborg, Tamir Duberstein,
Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Trevor Gross,
Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
* Boqun Feng <boqun@kernel.org> [260211 13:00]:
> On Wed, Feb 11, 2026 at 09:32:36AM -0500, Liam R. Howlett wrote:
> > * Alice Ryhl <aliceryhl@google.com> [260210 16:34]:
> > > On Tue, Feb 10, 2026 at 10:23 PM Tamir Duberstein <tamird@kernel.org> wrote:
> > > >
> > > > On Tue, Feb 10, 2026 at 12:59 PM Liam R. Howlett
> > > > <Liam.Howlett@oracle.com> wrote:
> > > > > Is this a temporary limitation?
> > > >
> > > > Maybe? I don't think RfL has good abstractions for RCU yet. For
> > > > example, exposing load directly on the xarray using xa_load would
> > > > require a way to guarantee that the returned pointer's target isn't
>
> Well, if we only return a pointer, we don't need to guarantee that,
> right? Because it's up to the user to provide that guarantee. So we
> could have XArray::load() (not Guard::load()) that just calls xa_load().
> Also see below.
>
> > > > being concurrently mutated (e.g. under the xarray lock). I'm not aware
> > > > of anyone asking for this, though.
> > >
> > > It's relatively easy to add an rcu-backed load using the RCU
> > > abstractions we have today. I already shared an RFC containing such a
> > > method for the maple tree, and it would not be much different for
> > > xarray.
> > > https://lore.kernel.org/all/20260116-rcu-box-v1-0-38ebfbcd53f0@google.com/
>
> I need to point out a difference between xas_load() and Alice's usage
> (also what Tamir mentioned above): what Alice needs (at least from
> her patchset) is that the existence of the object is protected by RCU,
> i.e. if there is someone else dropping the object, an RCU read lock
> would still guarantee the access to the object is valid.
>
> However, the internal RCU usage of both xarray and maple tree is to
> protect the *internal* data structure, if I'm not missing anything,
> i.e. a writer may change the array or the tree while a reader is
> reading, and the internal structure itself is still consistent and
> valid. But nothing guarantees the object you read is still valid. For
> example, you can have an xa_erase() racing with an xa_load():
>
>   <writer>                      <reader>
>   ptr = xa_erase(xa, idx);
>                                 ptr = xa_load(xa, idx);
>   reclaim(ptr);
>                                 use(ptr); // <- object may be gone
>
> the users of xarray need to use some other mechanism to guarantee the
> existence of the object.
>
> In Alice's case, she in fact used an RCU read side critical section
> with a larger scope to protect the object as well, which is definitely
> nice to have, but not the only way of using maple/xarray.
The lock surrounding the ptr is only useful if your ptr is rcu
protected, which isn't always the case. So having the rcu read side
critical section extended to the life of the ptr means you've extended
the rcu window for no reason when the ptr is protected in another way.
This may perform poorly, depending on the situation.
>
> > >
> >
> > It would probably be worth having two loads then, one that does
> > rcu_read_lock()/unlock() and one for writer/advanced users like we have
> > on the C side of things.
> >
>
> Agreed. But we may need more ;-)
>
> Here IIUC what Andreas does is adding a `load()` for `Guard` of
> `XArray`, which is the load for a writer, and most certainly you won't
> need to take an RCU read lock for that. The load of a reader can be
> added as I suggested above (similar to your "rcu_read_lock()/unlock()"
> suggestion above), but with no object existence guarantee. We likely
> need a third API that can provide the object existence guarantee,
> similar to what Alice had in the maple tree.
>
> > Or at least name the load() function to indicate which is implemented
> > today?
> >
>
> It's a namespace thing ;-) , the function in this patch is
> kernel::xarray::Guard::load(), and as I suggest here
> kernel::xarray::XArray::load() should be the same as xa_load().
Ah, okay.. this might be hard to follow in code, but I guess the
compiler will catch the wrong user. That is, Guard::load() would not be
written out, we'd just see array.load() in both cases?
It's also more confusing with the borrowck false positive issue in play.
That is, using Guard::load() will do more work than necessary, I believe?
It is odd that only one of these exists, especially considering we have
users for both on the C side. I guess the other one will be added once
it's needed.
Thanks,
Liam
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v3 05/12] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load`
2026-02-11 18:55 ` Liam R. Howlett
@ 2026-02-11 19:45 ` Boqun Feng
0 siblings, 0 replies; 52+ messages in thread
From: Boqun Feng @ 2026-02-11 19:45 UTC (permalink / raw)
To: Liam R. Howlett, Alice Ryhl, Tamir Duberstein, Andreas Hindborg,
Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Trevor Gross,
Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
Harry Yoo, Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
On Wed, Feb 11, 2026 at 01:55:43PM -0500, Liam R. Howlett wrote:
> * Boqun Feng <boqun@kernel.org> [260211 13:00]:
> > On Wed, Feb 11, 2026 at 09:32:36AM -0500, Liam R. Howlett wrote:
> > > * Alice Ryhl <aliceryhl@google.com> [260210 16:34]:
> > > > On Tue, Feb 10, 2026 at 10:23 PM Tamir Duberstein <tamird@kernel.org> wrote:
> > > > >
> > > > > On Tue, Feb 10, 2026 at 12:59 PM Liam R. Howlett
> > > > > <Liam.Howlett@oracle.com> wrote:
> > > > > > Is this a temporary limitation?
> > > > >
> > > > > Maybe? I don't think RfL has good abstractions for RCU yet. For
> > > > > example, exposing load directly on the xarray using xa_load would
> > > > > require a way to guarantee that the returned pointer's target isn't
> >
> > Well, if we only return a pointer, we don't need to guarantee that,
> > right? Because it's up to the user to provide that guarantee. So we
> > could have XArray::load() (not Guard::load()) that just calls xa_load().
> > Also see below.
> >
> > > > > being concurrently mutated (e.g. under the xarray lock). I'm not aware
> > > > > of anyone asking for this, though.
> > > >
> > > > It's relatively easy to add an rcu-backed load using the RCU
> > > > abstractions we have today. I already shared an RFC containing such a
> > > > method for the maple tree, and it would not be much different for
> > > > xarray.
> > > > https://lore.kernel.org/all/20260116-rcu-box-v1-0-38ebfbcd53f0@google.com/
> >
> > I need to point out a difference between xas_load() and Alice's usage
> > (also what Tamir mentioned above) there: what Alice needs (at least from
> > her patchset) is that the existence of the object is protected by RCU, i.e.
> > if there is someone else dropping the object, an RCU read lock would
> > still guarantee the access to the object is valid.
> >
> > However, the internal RCU usage of both xarray and maple tree is to
> > protect the *internal* data structure if I'm not missing anything, i.e.
> > a writer may change the array or the tree while a reader is reading,
> > the internal structure itself is still consistent and valid. But
> > nothing guarantees the object you read is still valid. For example, you
> > can have an xa_erase() racing with an xa_load():
> >
> > <writer> <reader>
> > ptr = xa_erase(xa, idx);
> > ptr = xa_load(xa, idx);
> > reclaim(ptr);
> > use(ptr); // <- object may be gone
> >
> > the users of xarray need to use some other mechanism to guarantee the
> > existence of the object.
> >
> > In Alice's case, she in fact used an RCU read side critical section with
> > a larger scope to protect the object as well, which is definitely nice
> > to have, but not the only way of using maple/xarray.
>
> The lock surrounding the ptr is only useful if your ptr is rcu
> protected, which isn't always the case. So having the rcu read side
Right, so what Alice had in her linked patchset was introducing an RCU
protected type wrapper for an object, so that is particularly for the
case where the object is under RCU protection. (I would like to arrange
the implementation and API in a slightly different way for that patchset,
but the idea is solid)
> critical section extended to the life of the ptr means you've extended
> the rcu window for no reason when the ptr is protected in another way.
> This may perform poorly, depending on the situation.
>
> >
> > > >
> > >
> > > It would probably be worth having two loads then, one that does
> > > rcu_read_lock()/unlock() and one for writer/advanced users like we have
> > > on the C side of things.
> > >
> >
> > Agreed. But we may need more ;-)
> >
> > Here IIUC what Andreas does is adding a `load()` for `Guard` of
> > `XArray`, which is the load for a writer and most certainly you won't
> > need to take an RCU read lock for that. The load of a reader can be
> > added as I suggested above (similar to your "rcu_read_lock()/unlock()"
> > suggestion above), but with no object existence guarantee. We likely
> > need a third API that can provide the object existence similar to what
> > Alice had in maple tree.
> >
> > > Or at least name the load() function to indicate which is implemented
> > > today?
> > >
> >
> > It's a namespace thing ;-) , the function in this patch is
> > kernel::xarray::Guard::load(), and as I suggest here
> > kernel::xarray::XArray::load() should be the same as xa_load().
>
> Ah, okay.. this might be hard to follow in code, but I guess the
> compiler will catch the wrong user. That is, Guard::load() would not be
> written out, we'd just see array.load() in both cases?
>
For `Guard::load()` case, you will need to write:
let ptr = xa.lock().load();
or
let guard = xa.lock();
let ptr = guard.load();
for `XArray::load()` case, you will need to write:
let ptr = xa.load();
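And if we wanted to spell out the reader side, a rough sketch (the
`self.xa` field name and the cast are assumptions, this is not code from
this series):

    impl<T: ForeignOwnable> XArray<T> {
        /// Thin wrapper over xa_load(). No existence guarantee for the
        /// pointed-to object; providing that is on the caller.
        pub fn load(&self, index: usize) -> Option<NonNull<c_void>> {
            // SAFETY: the inner xarray pointer is valid by type invariant,
            // and xa_load() enters an RCU read-side critical section
            // internally.
            NonNull::new(unsafe { bindings::xa_load(self.xa.get(), index as _) }.cast())
        }
    }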
> It's also more confusing with the borrowck false positive issue in play.
> That is, using Guard::load() will do more work than necessary, I believe?
>
I'm not following that thread closely, but seem so.
> It is odd that only one of these exists, especially considering we have
> users for both on the C side. I guess the other one will be added once
> it's needed.
>
Yeah, this is the place where I don't agree with the "you must have a
user to upstream an API" rule ;-) A reader's `load()`, even with no
user, would complete the concept of what an XArray is.
Regards,
Boqun
> Thanks,
> Liam
^ permalink raw reply [flat|nested] 52+ messages in thread
* [PATCH v3 06/12] rust: xarray: simplify `Guard::load`
2026-02-09 14:38 [PATCH v3 00/12] rust: xarray: add entry API with preloading Andreas Hindborg
` (4 preceding siblings ...)
2026-02-09 14:38 ` [PATCH v3 05/12] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load` Andreas Hindborg
@ 2026-02-09 14:38 ` Andreas Hindborg
2026-02-09 14:38 ` [PATCH v3 07/12] rust: xarray: add `find_next` and `find_next_mut` Andreas Hindborg
` (5 subsequent siblings)
11 siblings, 0 replies; 52+ messages in thread
From: Andreas Hindborg @ 2026-02-09 14:38 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm, Andreas Hindborg
Simplify the implementation by removing the closure-based API from
`Guard::load` in favor of returning `Option<NonNull<c_void>>` directly.
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/kernel/xarray.rs | 23 +++++++++--------------
1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index eadddafb180ec..e654bf56dc97c 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -211,12 +211,8 @@ fn from(value: StoreError<T>) -> Self {
}
impl<'a, T: ForeignOwnable> Guard<'a, T> {
- fn load<F, U>(&self, index: usize, f: F) -> Option<U>
- where
- F: FnOnce(NonNull<c_void>) -> U,
- {
- let mut state = XArrayState::new(self, index);
- Some(f(state.load()?))
+ fn load(&self, index: usize) -> Option<NonNull<c_void>> {
+ XArrayState::new(self, index).load()
}
/// Checks if the XArray contains an element at the specified index.
@@ -242,18 +238,17 @@ pub fn contains_index(&self, index: usize) -> bool {
/// Provides a reference to the element at the given index.
pub fn get(&self, index: usize) -> Option<T::Borrowed<'_>> {
- self.load(index, |ptr| {
- // SAFETY: `ptr` came from `T::into_foreign`.
- unsafe { T::borrow(ptr.as_ptr()) }
- })
+ let ptr = self.load(index)?;
+ // SAFETY: `ptr` came from `T::into_foreign`.
+ Some(unsafe { T::borrow(ptr.as_ptr()) })
}
/// Provides a mutable reference to the element at the given index.
pub fn get_mut(&mut self, index: usize) -> Option<T::BorrowedMut<'_>> {
- self.load(index, |ptr| {
- // SAFETY: `ptr` came from `T::into_foreign`.
- unsafe { T::borrow_mut(ptr.as_ptr()) }
- })
+ let ptr = self.load(index)?;
+
+ // SAFETY: `ptr` came from `T::into_foreign`.
+ Some(unsafe { T::borrow_mut(ptr.as_ptr()) })
}
/// Removes and returns the element at the given index.
--
2.51.2
^ permalink raw reply [flat|nested] 52+ messages in thread
* [PATCH v3 07/12] rust: xarray: add `find_next` and `find_next_mut`
2026-02-09 14:38 [PATCH v3 00/12] rust: xarray: add entry API with preloading Andreas Hindborg
` (5 preceding siblings ...)
2026-02-09 14:38 ` [PATCH v3 06/12] rust: xarray: simplify `Guard::load` Andreas Hindborg
@ 2026-02-09 14:38 ` Andreas Hindborg
2026-02-09 14:38 ` [PATCH v3 08/12] rust: xarray: add entry API Andreas Hindborg
` (4 subsequent siblings)
11 siblings, 0 replies; 52+ messages in thread
From: Andreas Hindborg @ 2026-02-09 14:38 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm, Andreas Hindborg
Add methods to find the next element in an XArray starting from a
given index. The methods return a tuple containing the index where the
element was found and a reference to the element.
The implementation uses the XArray state API via `xas_find` to avoid taking
the RCU read lock, as an exclusive lock is already held by `Guard`.
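For example, a full scan under the lock could look like this (a sketch,
not part of this patch):

    let mut index = 0;
    while let Some((found, value)) = guard.find_next(index) {
        pr_info!("{found}: {value}\n");
        // Assumes `found < usize::MAX`; a real iterator would handle wrap.
        index = found + 1;
    }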
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/kernel/xarray.rs | 68 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 68 insertions(+)
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index e654bf56dc97c..656ec897a0c41 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -251,6 +251,67 @@ pub fn get_mut(&mut self, index: usize) -> Option<T::BorrowedMut<'_>> {
Some(unsafe { T::borrow_mut(ptr.as_ptr()) })
}
+ fn load_next(&self, index: usize) -> Option<(usize, NonNull<c_void>)> {
+ XArrayState::new(self, index).load_next()
+ }
+
+ /// Finds the next element starting from the given index.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(10, KBox::new(10u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ /// guard.store(20, KBox::new(20u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ ///
+ /// if let Some((found_index, value)) = guard.find_next(11) {
+ /// assert_eq!(found_index, 20);
+ /// assert_eq!(*value, 20);
+ /// }
+ ///
+ /// if let Some((found_index, value)) = guard.find_next(5) {
+ /// assert_eq!(found_index, 10);
+ /// assert_eq!(*value, 10);
+ /// }
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn find_next(&self, index: usize) -> Option<(usize, T::Borrowed<'_>)> {
+ self.load_next(index)
+ // SAFETY: `ptr` came from `T::into_foreign`.
+ .map(|(index, ptr)| (index, unsafe { T::borrow(ptr.as_ptr()) }))
+ }
+
+ /// Finds the next element starting from the given index, returning a mutable reference.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(10, KBox::new(10u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ /// guard.store(20, KBox::new(20u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ ///
+ /// if let Some((found_index, mut_value)) = guard.find_next_mut(5) {
+ /// assert_eq!(found_index, 10);
+ /// *mut_value = 0x99;
+ /// }
+ ///
+ /// assert_eq!(guard.get(10).copied(), Some(0x99));
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn find_next_mut(&mut self, index: usize) -> Option<(usize, T::BorrowedMut<'_>)> {
+ self.load_next(index)
+ // SAFETY: `ptr` came from `T::into_foreign`.
+ .map(move |(index, ptr)| (index, unsafe { T::borrow_mut(ptr.as_ptr()) }))
+ }
+
/// Removes and returns the element at the given index.
pub fn remove(&mut self, index: usize) -> Option<T> {
// SAFETY:
@@ -354,6 +415,13 @@ fn load(&mut self) -> Option<NonNull<c_void>> {
let ptr = unsafe { bindings::xas_load(&raw mut self.state) };
NonNull::new(ptr.cast())
}
+
+ fn load_next(&mut self) -> Option<(usize, NonNull<c_void>)> {
+ // SAFETY: `self.state` is always valid by the type invariant of
+ // `XArrayState` and we hold the xarray lock.
+ let ptr = unsafe { bindings::xas_find(&raw mut self.state, usize::MAX) };
+ NonNull::new(ptr).map(|ptr| (self.state.xa_index, ptr))
+ }
}
// SAFETY: `XArray<T>` has no shared mutable state so it is `Send` iff `T` is `Send`.
--
2.51.2
^ permalink raw reply [flat|nested] 52+ messages in thread
* [PATCH v3 08/12] rust: xarray: add entry API
2026-02-09 14:38 [PATCH v3 00/12] rust: xarray: add entry API with preloading Andreas Hindborg
` (6 preceding siblings ...)
2026-02-09 14:38 ` [PATCH v3 07/12] rust: xarray: add `find_next` and `find_next_mut` Andreas Hindborg
@ 2026-02-09 14:38 ` Andreas Hindborg
2026-02-09 14:38 ` [PATCH v3 09/12] rust: mm: add abstractions for allocating from a `sheaf` Andreas Hindborg
` (3 subsequent siblings)
11 siblings, 0 replies; 52+ messages in thread
From: Andreas Hindborg @ 2026-02-09 14:38 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm, Andreas Hindborg
Add an Entry API for XArray that provides ergonomic access to array
slots that may be vacant or occupied. The API follows the pattern of
Rust's standard library HashMap entry API, allowing efficient
conditional insertion and modification of entries.
The implementation uses the XArray state API (`xas_*` functions) for
efficient operations without requiring multiple lookups. Helper
functions are added to rust/helpers/xarray.c to wrap C macros that are
not directly accessible from Rust.
Also update MAINTAINERS to cover the new rust files.
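For a `KBox<u32>`-valued array, typical use under the lock looks roughly
like this (sketch):

    match guard.entry(index) {
        Entry::Vacant(entry) => {
            // Fails only if interior nodes on the path are missing.
            entry.insert(KBox::new(0xdead_u32, GFP_KERNEL)?)?;
        }
        Entry::Occupied(mut entry) => *entry = 0xbeef,
    }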
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
MAINTAINERS | 1 +
rust/helpers/xarray.c | 17 ++
rust/kernel/xarray.rs | 123 +++++++++++++++
rust/kernel/xarray/entry.rs | 367 ++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 508 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 0efa8cc6775b7..8202515c6065b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -28361,6 +28361,7 @@ B: https://github.com/Rust-for-Linux/linux/issues
C: https://rust-for-linux.zulipchat.com
T: git https://github.com/Rust-for-Linux/linux.git xarray-next
F: rust/kernel/xarray.rs
+F: rust/kernel/xarray/
XBOX DVD IR REMOTE
M: Benjamin Valentin <benpicco@googlemail.com>
diff --git a/rust/helpers/xarray.c b/rust/helpers/xarray.c
index 60b299f11451d..425a6cc494734 100644
--- a/rust/helpers/xarray.c
+++ b/rust/helpers/xarray.c
@@ -26,3 +26,20 @@ void rust_helper_xa_unlock(struct xarray *xa)
{
return xa_unlock(xa);
}
+
+void *rust_helper_xas_result(struct xa_state *xas, void *curr)
+{
+ if (xa_err(xas->xa_node))
+ curr = xas->xa_node;
+ return curr;
+}
+
+void *rust_helper_xa_zero_to_null(void *entry)
+{
+ return xa_is_zero(entry) ? NULL : entry;
+}
+
+int rust_helper_xas_error(const struct xa_state *xas)
+{
+ return xas_error(xas);
+}
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index 656ec897a0c41..8c10e8fd76f15 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -13,11 +13,17 @@
NonNull, //
},
};
+pub use entry::{
+ Entry,
+ OccupiedEntry,
+ VacantEntry, //
+};
use kernel::{
alloc,
bindings,
build_assert, //
error::{
+ to_result,
Error,
Result, //
},
@@ -251,6 +257,35 @@ pub fn get_mut(&mut self, index: usize) -> Option<T::BorrowedMut<'_>> {
Some(unsafe { T::borrow_mut(ptr.as_ptr()) })
}
+ /// Gets an entry for the specified index, which can be vacant or occupied.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// assert_eq!(guard.contains_index(42), false);
+ ///
+ /// match guard.entry(42) {
+ /// Entry::Vacant(entry) => {
+ /// entry.insert(KBox::new(0x1337u32, GFP_KERNEL)?)?;
+ /// }
+ /// Entry::Occupied(_) => unreachable!("We did not insert an entry yet"),
+ /// }
+ ///
+ /// assert_eq!(guard.get(42), Some(&0x1337));
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn entry<'b>(&'b mut self, index: usize) -> Entry<'a, 'b, T> {
+ match self.load(index) {
+ None => Entry::Vacant(VacantEntry::new(self, index)),
+ Some(ptr) => Entry::Occupied(OccupiedEntry::new(self, index, ptr)),
+ }
+ }
+
fn load_next(&self, index: usize) -> Option<(usize, NonNull<c_void>)> {
XArrayState::new(self, index).load_next()
}
@@ -312,6 +347,72 @@ pub fn find_next_mut(&mut self, index: usize) -> Option<(usize, T::BorrowedMut<'
.map(move |(index, ptr)| (index, unsafe { T::borrow_mut(ptr.as_ptr()) }))
}
+ /// Finds the next occupied entry starting from the given index.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(10, KBox::new(10u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ /// guard.store(20, KBox::new(20u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ ///
+ /// if let Some(entry) = guard.find_next_entry(5) {
+ /// assert_eq!(entry.index(), 10);
+ /// let value = entry.remove();
+ /// assert_eq!(*value, 10);
+ /// }
+ ///
+ /// assert_eq!(guard.get(10), None);
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn find_next_entry<'b>(&'b mut self, index: usize) -> Option<OccupiedEntry<'a, 'b, T>> {
+ let mut state = XArrayState::new(self, index);
+ let (_, ptr) = state.load_next()?;
+ Some(OccupiedEntry { state, ptr })
+ }
+
+ /// Finds the next occupied entry starting at the given index, wrapping around.
+ ///
+ /// Searches for an entry starting at `index` up to the maximum index. If no entry
+ /// is found, wraps around and searches from index 0 up to `index`.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(100, KBox::new(42u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ /// let entry = guard.find_next_entry_circular(101);
+ /// assert_eq!(entry.map(|e| e.index()), Some(100));
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn find_next_entry_circular<'b>(
+ &'b mut self,
+ index: usize,
+ ) -> Option<OccupiedEntry<'a, 'b, T>> {
+ let mut state = XArrayState::new(self, index);
+
+ // SAFETY: `state.state` is properly initialized by XArrayState::new and the caller holds
+ // the lock.
+ let ptr = NonNull::new(unsafe { bindings::xas_find(&mut state.state, usize::MAX) })
+ .or_else(|| {
+ state.state.xa_node = bindings::XAS_RESTART as *mut bindings::xa_node;
+ state.state.xa_index = 0;
+ // SAFETY: `state.state` is properly initialized and by type invariant, we hold the
+ // xarray lock.
+ NonNull::new(unsafe { bindings::xas_find(&mut state.state, index) })
+ })?;
+
+ Some(OccupiedEntry { state, ptr })
+ }
+
/// Removes and returns the element at the given index.
pub fn remove(&mut self, index: usize) -> Option<T> {
// SAFETY:
@@ -422,8 +523,30 @@ fn load_next(&mut self) -> Option<(usize, NonNull<c_void>)> {
let ptr = unsafe { bindings::xas_find(&raw mut self.state, usize::MAX) };
NonNull::new(ptr).map(|ptr| (self.state.xa_index, ptr))
}
+
+ fn status(&self) -> Result {
+ // SAFETY: `self.state` is properly initialized and valid.
+ to_result(unsafe { bindings::xas_error(&self.state) })
+ }
+
+ fn insert(&mut self, value: T) -> Result<*mut c_void, StoreError<T>> {
+ let new = T::into_foreign(value).cast();
+
+ // SAFETY: `self.state.state` is properly initialized and `new` came from `T::into_foreign`.
+ // We hold the xarray lock.
+ unsafe { bindings::xas_store(&mut self.state, new) };
+
+ self.status().map(|()| new).map_err(|error| {
+ // SAFETY: `new` came from `T::into_foreign` and `xas_store` does not take ownership of
+ // the value on error.
+ let value = unsafe { T::from_foreign(new) };
+ StoreError { value, error }
+ })
+ }
}
+mod entry;
+
// SAFETY: `XArray<T>` has no shared mutable state so it is `Send` iff `T` is `Send`.
unsafe impl<T: ForeignOwnable + Send> Send for XArray<T> {}
diff --git a/rust/kernel/xarray/entry.rs b/rust/kernel/xarray/entry.rs
new file mode 100644
index 0000000000000..1b1c21bed7022
--- /dev/null
+++ b/rust/kernel/xarray/entry.rs
@@ -0,0 +1,367 @@
+// SPDX-License-Identifier: GPL-2.0
+
+use super::{
+ Guard,
+ StoreError,
+ XArrayState, //
+};
+use core::ptr::NonNull;
+use kernel::{
+ prelude::*,
+ types::ForeignOwnable, //
+};
+
+/// Represents either a vacant or occupied entry in an XArray.
+pub enum Entry<'a, 'b, T: ForeignOwnable> {
+ /// A vacant entry that can have a value inserted.
+ Vacant(VacantEntry<'a, 'b, T>),
+ /// An occupied entry containing a value.
+ Occupied(OccupiedEntry<'a, 'b, T>),
+}
+
+impl<T: ForeignOwnable> Entry<'_, '_, T> {
+ /// Returns true if this entry is occupied.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ ///
+ /// let entry = guard.entry(42);
+ /// assert_eq!(entry.is_occupied(), false);
+ ///
+ /// guard.store(42, KBox::new(0x1337u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ /// let entry = guard.entry(42);
+ /// assert_eq!(entry.is_occupied(), true);
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn is_occupied(&self) -> bool {
+ matches!(self, Entry::Occupied(_))
+ }
+}
+
+/// A view into a vacant entry in an XArray.
+pub struct VacantEntry<'a, 'b, T: ForeignOwnable> {
+ state: XArrayState<'a, 'b, T>,
+}
+
+impl<'a, 'b, T> VacantEntry<'a, 'b, T>
+where
+ T: ForeignOwnable,
+{
+ pub(crate) fn new(guard: &'b mut Guard<'a, T>, index: usize) -> Self {
+ Self {
+ state: XArrayState::new(guard, index),
+ }
+ }
+
+ /// Inserts a value into this vacant entry.
+ ///
+ /// Returns a reference to the newly inserted value.
+ ///
+ /// - This method will fail if the nodes on the path to the index
+ /// represented by this entry are not present in the XArray.
+ /// - This method will not drop the XArray lock.
+ ///
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// assert_eq!(guard.get(42), None);
+ ///
+ /// if let Entry::Vacant(entry) = guard.entry(42) {
+ /// let value = KBox::new(0x1337u32, GFP_KERNEL)?;
+ /// let borrowed = entry.insert(value)?;
+ /// assert_eq!(*borrowed, 0x1337);
+ /// }
+ ///
+ /// assert_eq!(guard.get(42).copied(), Some(0x1337));
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn insert(mut self, value: T) -> Result<T::BorrowedMut<'b>, StoreError<T>> {
+ let new = self.state.insert(value)?;
+
+ // SAFETY: `new` came from `T::into_foreign`. The entry has exclusive
+ // ownership of `new` as it holds a mutable reference to `Guard`.
+ Ok(unsafe { T::borrow_mut(new) })
+ }
+
+ /// Inserts a value and returns an occupied entry representing the newly inserted value.
+ ///
+ /// - This method will fail if the nodes on the path to the index
+ /// represented by this entry are not present in the XArray.
+ /// - This method will not drop the XArray lock.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// assert_eq!(guard.get(42), None);
+ ///
+ /// if let Entry::Vacant(entry) = guard.entry(42) {
+ /// let value = KBox::new(0x1337u32, GFP_KERNEL)?;
+ /// let occupied = entry.insert_entry(value)?;
+ /// assert_eq!(occupied.index(), 42);
+ /// }
+ ///
+ /// assert_eq!(guard.get(42).copied(), Some(0x1337));
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn insert_entry(mut self, value: T) -> Result<OccupiedEntry<'a, 'b, T>, StoreError<T>> {
+ let new = self.state.insert(value)?;
+
+ Ok(OccupiedEntry::<'a, 'b, T> {
+ state: self.state,
+ // SAFETY: `new` came from `T::into_foreign` and is guaranteed non-null.
+ ptr: unsafe { core::ptr::NonNull::new_unchecked(new) },
+ })
+ }
+
+ /// Returns the index of this vacant entry.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// assert_eq!(guard.get(42), None);
+ ///
+ /// if let Entry::Vacant(entry) = guard.entry(42) {
+ /// assert_eq!(entry.index(), 42);
+ /// }
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn index(&self) -> usize {
+ self.state.state.xa_index
+ }
+}
+
+/// A view into an occupied entry in an XArray.
+pub struct OccupiedEntry<'a, 'b, T: ForeignOwnable> {
+ pub(crate) state: XArrayState<'a, 'b, T>,
+ pub(crate) ptr: NonNull<c_void>,
+}
+
+impl<'a, 'b, T> OccupiedEntry<'a, 'b, T>
+where
+ T: ForeignOwnable,
+{
+ pub(crate) fn new(guard: &'b mut Guard<'a, T>, index: usize, ptr: NonNull<c_void>) -> Self {
+ Self {
+ state: XArrayState::new(guard, index),
+ ptr,
+ }
+ }
+
+ /// Removes the value from this occupied entry and returns it, consuming the entry.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(42, KBox::new(0x1337u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ /// assert_eq!(guard.get(42).copied(), Some(0x1337));
+ ///
+ /// if let Entry::Occupied(entry) = guard.entry(42) {
+ /// let value = entry.remove();
+ /// assert_eq!(*value, 0x1337);
+ /// }
+ ///
+ /// assert_eq!(guard.get(42), None);
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn remove(mut self) -> T {
+ // SAFETY: `self.state.state` is properly initialized and valid for XAS operations.
+ let ptr = unsafe {
+ bindings::xas_result(
+ &mut self.state.state,
+ bindings::xa_zero_to_null(bindings::xas_store(
+ &mut self.state.state,
+ core::ptr::null_mut(),
+ )),
+ )
+ };
+
+ // SAFETY: `ptr` is a valid return value from xas_result.
+ let errno = unsafe { bindings::xa_err(ptr) };
+
+ // NOTE: Storing NULL to an occupied slot never fails. This is by design
+ // of the xarray data structure. If a slot is occupied, a store is a
+ // simple pointer swap.
+ debug_assert!(errno == 0);
+
+ // SAFETY:
+ // - `ptr` came from `T::into_foreign`.
+ // - As this method takes self by value, the lifetimes of any [`T::Borrowed`] and
+ // [`T::BorrowedMut`] we have created must have ended.
+ unsafe { T::from_foreign(ptr.cast()) }
+ }
+
+ /// Returns the index of this occupied entry.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(42, KBox::new(0x1337u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ ///
+ /// if let Entry::Occupied(entry) = guard.entry(42) {
+ /// assert_eq!(entry.index(), 42);
+ /// }
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn index(&self) -> usize {
+ self.state.state.xa_index
+ }
+
+ /// Replaces the value in this occupied entry and returns the old value.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(42, KBox::new(0x1337u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ ///
+ /// if let Entry::Occupied(mut entry) = guard.entry(42) {
+ /// let new_value = KBox::new(0x9999u32, GFP_KERNEL)?;
+ /// let old_value = entry.insert(new_value);
+ /// assert_eq!(*old_value, 0x1337);
+ /// }
+ ///
+ /// assert_eq!(guard.get(42).copied(), Some(0x9999));
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn insert(&mut self, value: T) -> T {
+ let new = T::into_foreign(value).cast();
+ // SAFETY: `new` came from `T::into_foreign` and is guaranteed non-null.
+ self.ptr = unsafe { NonNull::new_unchecked(new) };
+
+ // SAFETY: `self.state.state` is properly initialized and valid for XAS operations.
+ let old = unsafe {
+ bindings::xas_result(
+ &mut self.state.state,
+ bindings::xa_zero_to_null(bindings::xas_store(&mut self.state.state, new)),
+ )
+ };
+
+ // SAFETY: `old` is a valid return value from xas_result.
+ let errno = unsafe { bindings::xa_err(old) };
+
+ // NOTE: Storing to an occupied slot never fails. This is by design
+ // of the xarray data structure. If a slot is occupied, a store is a
+ // simple pointer swap.
+ debug_assert!(errno == 0);
+
+ // SAFETY:
+ // - `old` came from `T::into_foreign`.
+ // - As this method takes `self` by mutable reference, the lifetimes of any [`T::Borrowed`] and
+ // [`T::BorrowedMut`] we have created must have ended.
+ unsafe { T::from_foreign(old) }
+ }
+
+ /// Converts this occupied entry into a mutable reference to the value in the slot represented
+ /// by the entry.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(42, KBox::new(0x1337u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ ///
+ /// if let Entry::Occupied(entry) = guard.entry(42) {
+ /// let value_ref = entry.into_mut();
+ /// *value_ref = 0x9999;
+ /// }
+ ///
+ /// assert_eq!(guard.get(42).copied(), Some(0x9999));
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn into_mut(self) -> T::BorrowedMut<'b> {
+ // SAFETY: `ptr` came from `T::into_foreign`.
+ unsafe { T::borrow_mut(self.ptr.as_ptr()) }
+ }
+
+ /// Swaps the value in this entry with the provided value.
+ ///
+ /// Returns the old value that was in the entry.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(42, KBox::new(100u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ ///
+ /// if let Entry::Occupied(mut entry) = guard.entry(42) {
+ /// let mut other = 200u32;
+ /// entry.swap(&mut other);
+ /// assert_eq!(other, 100);
+ /// assert_eq!(*entry, 200);
+ /// }
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn swap<U>(&mut self, other: &mut U)
+ where
+ T: for<'c> ForeignOwnable<Borrowed<'c> = &'c U, BorrowedMut<'c> = &'c mut U>,
+ {
+ use core::ops::DerefMut;
+ core::mem::swap(self.deref_mut(), other);
+ }
+}
+
+impl<T, U> core::ops::Deref for OccupiedEntry<'_, '_, T>
+where
+ T: for<'a> ForeignOwnable<Borrowed<'a> = &'a U, BorrowedMut<'a> = &'a mut U>,
+{
+ type Target = U;
+
+ fn deref(&self) -> &Self::Target {
+ // SAFETY: `ptr` came from `T::into_foreign`.
+ unsafe { T::borrow(self.ptr.as_ptr()) }
+ }
+}
+
+impl<T, U> core::ops::DerefMut for OccupiedEntry<'_, '_, T>
+where
+ T: for<'a> ForeignOwnable<Borrowed<'a> = &'a U, BorrowedMut<'a> = &'a mut U>,
+{
+ fn deref_mut(&mut self) -> &mut Self::Target {
+ // SAFETY: `ptr` came from `T::into_foreign`.
+ unsafe { T::borrow_mut(self.ptr.as_ptr()) }
+ }
+}
--
2.51.2
^ permalink raw reply [flat|nested] 52+ messages in thread
* [PATCH v3 09/12] rust: mm: add abstractions for allocating from a `sheaf`
2026-02-09 14:38 [PATCH v3 00/12] rust: xarray: add entry API with preloading Andreas Hindborg
` (7 preceding siblings ...)
2026-02-09 14:38 ` [PATCH v3 08/12] rust: xarray: add entry API Andreas Hindborg
@ 2026-02-09 14:38 ` Andreas Hindborg
2026-02-09 14:38 ` [PATCH v3 10/12] rust: mm: sheaf: allow use of C initialized static caches Andreas Hindborg
` (2 subsequent siblings)
11 siblings, 0 replies; 52+ messages in thread
From: Andreas Hindborg @ 2026-02-09 14:38 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm,
Andreas Hindborg, Matthew Wilcox (Oracle)
Add Rust APIs for allocating objects from a `sheaf`.
Introduce a reduced abstraction `KMemCache` for `struct kmem_cache` to
support management of the `Sheaf`s.
Initialize objects using in-place initialization when objects are allocated
from a `Sheaf`. This is different from C which tends to do some
initialization when the cache is filled. This approach is chosen because
there is no destructor/drop capability in `struct kmem_cache` that can be
invoked when the cache is dropped.
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: linux-mm@kvack.org
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/kernel/mm.rs | 1 +
rust/kernel/mm/sheaf.rs | 407 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 408 insertions(+)
diff --git a/rust/kernel/mm.rs b/rust/kernel/mm.rs
index 4764d7b68f2a7..1aa44424b0d53 100644
--- a/rust/kernel/mm.rs
+++ b/rust/kernel/mm.rs
@@ -18,6 +18,7 @@
};
use core::{ops::Deref, ptr::NonNull};
+pub mod sheaf;
pub mod virt;
use virt::VmaRef;
diff --git a/rust/kernel/mm/sheaf.rs b/rust/kernel/mm/sheaf.rs
new file mode 100644
index 0000000000000..b8fd321335ace
--- /dev/null
+++ b/rust/kernel/mm/sheaf.rs
@@ -0,0 +1,407 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Slub allocator sheaf abstraction.
+//!
+//! Sheaves are percpu array-based caching layers for the slub allocator.
+//! They provide a mechanism for pre-allocating objects that can later
+//! be retrieved without risking allocation failure, making them useful in
+//! contexts where memory allocation must be guaranteed to succeed.
+//!
+//! The term "sheaf" is the english word for a bundle of straw. In this context
+//! it means a bundle of pre-allocated objects. A per-NUMA-node cache of sheaves
+//! is called a "barn". Because you store your sheafs in barns.
+//!
+//! # Use cases
+//!
+//! Sheaves are particularly useful when:
+//!
+//! - Allocations must be guaranteed to succeed in a restricted context (e.g.,
+//! while holding locks or in atomic context).
+//! - Multiple allocations need to be performed as a batch operation.
+//! - Fast-path allocation performance is critical, as sheaf allocations avoid
+//! atomic operations by using local locks with preemption disabled.
+//!
+//! # Architecture
+//!
+//! The sheaf system consists of three main components:
+//!
+//! - [`KMemCache`]: A slab cache configured with sheaf support.
+//! - [`Sheaf`]: A pre-filled container of objects from a specific cache.
+//! - [`SBox`]: An owned allocation from a sheaf, similar to a `Box`.
+//!
+//! # Example
+//!
+//! ```
+//! use kernel::c_str;
+//! use kernel::mm::sheaf::{KMemCache, KMemCacheInit, Sheaf, SBox};
+//! use kernel::prelude::*;
+//!
+//! struct MyObject {
+//! value: u32,
+//! }
+//!
+//! impl KMemCacheInit<MyObject> for MyObject {
+//! fn init() -> impl Init<MyObject> {
+//! init!(MyObject { value: 0 })
+//! }
+//! }
+//!
+//! // Create a cache with sheaf capacity of 16 objects.
+//! let cache = KMemCache::<MyObject>::new(c_str!("my_cache"), 16)?;
+//!
+//! // Pre-fill a sheaf with 8 objects.
+//! let mut sheaf = cache.as_arc_borrow().sheaf(8, GFP_KERNEL)?;
+//!
+//! // Allocations from the sheaf are guaranteed to succeed until empty.
+//! let obj = sheaf.alloc().unwrap();
+//!
+//! // Return the sheaf when done, attempting to refill it.
+//! sheaf.return_refill(GFP_KERNEL);
+//! # Ok::<(), Error>(())
+//! ```
+//!
+//! # Constraints
+//!
+//! - Sheaves are slower when `CONFIG_SLUB_TINY` or `CONFIG_SLUB_DEBUG` is
+//! enabled due to cpu sheaves being disabled. All prefilled sheaves become
+//! "oversize" and go through a slower allocation path.
+//! - The sheaf capacity is fixed at cache creation time.
+
+use core::{
+ convert::Infallible,
+ marker::PhantomData,
+ ops::{Deref, DerefMut},
+ ptr::NonNull,
+};
+
+use kernel::prelude::*;
+
+use crate::sync::{Arc, ArcBorrow};
+
+/// A slab cache with sheaf support.
+///
+/// This type wraps a kernel `kmem_cache` configured with a sheaf capacity,
+/// enabling pre-allocation of objects via [`Sheaf`].
+///
+/// For now, this type only exists for sheaf management.
+///
+/// # Type parameter
+///
+/// - `T`: The type of objects managed by this cache. Must implement
+/// [`KMemCacheInit`] to provide initialization logic for new allocations.
+///
+/// # Invariants
+///
+/// - `cache` is a valid pointer to a `kmem_cache` created with
+/// `__kmem_cache_create_args`.
+/// - The cache is valid for the lifetime of this struct.
+pub struct KMemCache<T: KMemCacheInit<T>> {
+ cache: NonNull<bindings::kmem_cache>,
+ _p: PhantomData<T>,
+}
+
+impl<T: KMemCacheInit<T>> KMemCache<T> {
+ /// Creates a new slab cache with sheaf support.
+ ///
+ /// Creates a kernel slab cache for objects of type `T` with the specified
+ /// sheaf capacity. The cache uses the provided `name` for identification
+ /// in `/sys/kernel/slab/` and debugging output.
+ ///
+ /// # Arguments
+ ///
+ /// - `name`: A string identifying the cache. This name appears in sysfs and
+ /// debugging output.
+ /// - `sheaf_capacity`: The maximum number of objects a sheaf from this
+ /// cache can hold. A capacity of zero disables sheaf support.
+ ///
+ /// # Errors
+ ///
+ /// Returns an error if:
+ ///
+ /// - The cache could not be created due to memory pressure.
+ /// - The size of `T` cannot be represented as a `c_uint`.
+ pub fn new(name: &CStr, sheaf_capacity: u32) -> Result<Arc<Self>>
+ where
+ T: KMemCacheInit<T>,
+ {
+ let flags = 0;
+ let mut args: bindings::kmem_cache_args = pin_init::zeroed();
+ args.sheaf_capacity = sheaf_capacity;
+
+ // NOTE: We are not initializing at object allocation time, because
+ // there is no matching teardown function on the C side machinery.
+ args.ctor = None;
+
+ // SAFETY: `name` is a valid C string, `args` is properly initialized,
+ // and the size of `T` has been validated to fit in a `c_uint`.
+ let ptr = unsafe {
+ bindings::__kmem_cache_create_args(
+ name.as_ptr().cast::<u8>(),
+ core::mem::size_of::<T>().try_into()?,
+ &mut args,
+ flags,
+ )
+ };
+
+ // INVARIANT: `ptr` was returned by `__kmem_cache_create_args` and is
+ // non-null (checked below). The cache is valid until
+ // `kmem_cache_destroy` is called in `Drop`.
+ Ok(Arc::new(
+ Self {
+ cache: NonNull::new(ptr).ok_or(ENOMEM)?,
+ _p: PhantomData,
+ },
+ GFP_KERNEL,
+ )?)
+ }
+
+ /// Creates a pre-filled sheaf from this cache.
+ ///
+ /// Allocates a sheaf and pre-fills it with `size` objects. Once created,
+ /// allocations from the sheaf via [`Sheaf::alloc`] are guaranteed to
+ /// succeed until the sheaf is depleted.
+ ///
+ /// # Arguments
+ ///
+ /// - `size`: The number of objects to pre-allocate. Must not exceed the
+ /// cache's `sheaf_capacity`.
+ /// - `gfp`: Allocation flags controlling how memory is obtained. Use
+ /// [`GFP_KERNEL`] for normal allocations that may sleep, or
+ /// [`GFP_NOWAIT`] for non-blocking allocations.
+ ///
+ /// # Errors
+ ///
+ /// Returns [`ENOMEM`] if the sheaf or its objects could not be allocated.
+ ///
+ /// # Warnings
+ ///
+ /// The kernel will warn if `size` exceeds `sheaf_capacity`.
+ pub fn sheaf(
+ self: ArcBorrow<'_, Self>,
+ size: usize,
+ gfp: kernel::alloc::Flags,
+ ) -> Result<Sheaf<T>> {
+ // SAFETY: `self.as_raw()` returns a valid cache pointer, and `size`
+ // has been validated to fit in a `c_uint`.
+ let ptr = unsafe {
+ bindings::kmem_cache_prefill_sheaf(self.as_raw(), gfp.as_raw(), size.try_into()?)
+ };
+
+ // INVARIANT: `ptr` was returned by `kmem_cache_prefill_sheaf` and is
+ // non-null (checked below). `cache` is the cache from which this sheaf
+ // was created. `dropped` is false since the sheaf has not been returned.
+ Ok(Sheaf {
+ sheaf: NonNull::new(ptr).ok_or(ENOMEM)?,
+ cache: self.into(),
+ dropped: false,
+ })
+ }
+
+ fn as_raw(&self) -> *mut bindings::kmem_cache {
+ self.cache.as_ptr()
+ }
+}
+
+impl<T: KMemCacheInit<T>> Drop for KMemCache<T> {
+ fn drop(&mut self) {
+ // SAFETY: `self.as_raw()` returns a valid cache pointer that was
+ // created by `__kmem_cache_create_args`. As all objects allocated from
+ // this hold a reference on `self`, they must have been dropped for this
+ // `drop` method to execute.
+ unsafe { bindings::kmem_cache_destroy(self.as_raw()) };
+ }
+}
+
+/// Trait for types that can be initialized in a slab cache.
+///
+/// This trait provides the initialization logic for objects allocated from a
+/// [`KMemCache`]. When the slab allocator creates new objects, it invokes the
+/// constructor to ensure objects are in a valid initial state.
+///
+/// # Implementation
+///
+/// Implementors must provide [`init`](KMemCacheInit::init), which returns
+/// an in-place initializer for the type.
+///
+/// # Example
+///
+/// ```
+/// use kernel::mm::sheaf::KMemCacheInit;
+/// use kernel::prelude::*;
+///
+/// struct MyData {
+/// counter: u32,
+/// name: [u8; 16],
+/// }
+///
+/// impl KMemCacheInit<MyData> for MyData {
+/// fn init() -> impl Init<MyData> {
+/// init!(MyData {
+/// counter: 0,
+/// name: [0; 16],
+/// })
+/// }
+/// }
+/// ```
+pub trait KMemCacheInit<T> {
+ /// Returns an initializer for creating new objects of type `T`.
+ ///
+ /// This method is called by the allocator's constructor to initialize newly
+ /// allocated objects. The initializer should set all fields to their
+ /// default or initial values.
+ fn init() -> impl Init<T, Infallible>;
+}
+
+/// A pre-filled container of slab objects.
+///
+/// A sheaf holds a set of pre-allocated objects from a [`KMemCache`].
+/// Allocations from a sheaf are guaranteed to succeed until the sheaf is
+/// depleted, making sheaves useful in contexts where allocation failure is
+/// not acceptable.
+///
+/// Sheaves provide faster allocation than direct allocation because they use
+/// local locks with preemption disabled rather than atomic operations.
+///
+/// # Lifecycle
+///
+/// Sheaves are created via [`KMemCache::sheaf`] and should be returned to the
+/// allocator when no longer needed via [`Sheaf::return_refill`]. If a sheaf is
+/// simply dropped, it is returned with `GFP_NOWAIT` flags, which may result in
+/// the sheaf being flushed and freed rather than being cached for reuse.
+///
+/// # Invariants
+///
+/// - `sheaf` is a valid pointer to a `slab_sheaf` obtained from
+/// `kmem_cache_prefill_sheaf`.
+/// - `cache` is the cache from which this sheaf was created.
+/// - `dropped` tracks whether the sheaf has been explicitly returned.
+pub struct Sheaf<T: KMemCacheInit<T>> {
+ sheaf: NonNull<bindings::slab_sheaf>,
+ cache: Arc<KMemCache<T>>,
+ dropped: bool,
+}
+
+impl<T: KMemCacheInit<T>> Sheaf<T> {
+ fn as_raw(&self) -> *mut bindings::slab_sheaf {
+ self.sheaf.as_ptr()
+ }
+
+ /// Return the sheaf and try to refill using `flags`.
+ ///
+ /// If the sheaf cannot simply become the percpu spare sheaf, but there's
+ /// space for a full sheaf in the barn, we try to refill the sheaf back to
+ /// the cache's sheaf_capacity to avoid handling partially full sheaves.
+ ///
+ /// If the refill fails because gfp is e.g. GFP_NOWAIT, or the barn is full,
+ /// the sheaf is instead flushed and freed.
+ pub fn return_refill(mut self, flags: kernel::alloc::Flags) {
+ self.dropped = true;
+ // SAFETY: `self.cache.as_raw()` and `self.as_raw()` return valid
+ // pointers to the cache and sheaf respectively.
+ unsafe {
+ bindings::kmem_cache_return_sheaf(self.cache.as_raw(), flags.as_raw(), self.as_raw())
+ };
+ drop(self);
+ }
+
+ /// Allocates an object from the sheaf.
+ ///
+ /// Returns a new [`SBox`] containing an initialized object, or [`None`]
+ /// if the sheaf is depleted. Allocations are guaranteed to succeed as
+ /// long as the sheaf contains pre-allocated objects.
+ ///
+ /// The `gfp` flags passed to `kmem_cache_alloc_from_sheaf` are set to zero,
+ /// meaning no additional flags like `__GFP_ZERO` or `__GFP_ACCOUNT` are
+ /// applied.
+ ///
+ /// The returned `T` is initialized as part of this function.
+ pub fn alloc(&mut self) -> Option<SBox<T>> {
+ // SAFETY: `self.cache.as_raw()` and `self.as_raw()` return valid
+ // pointers. The function returns NULL when the sheaf is empty.
+ let ptr = unsafe {
+ bindings::kmem_cache_alloc_from_sheaf_noprof(self.cache.as_raw(), 0, self.as_raw())
+ };
+
+ let ptr = NonNull::new(ptr.cast::<T>())?;
+
+ // SAFETY:
+ // - `ptr` is non-null and was just returned by the cache.
+ // - The initializer is infallible, so an error is never returned.
+ unsafe { T::init().__init(ptr.as_ptr()) }.expect("Initializer is infallible");
+
+ // INVARIANT: `ptr` was returned by `kmem_cache_alloc_from_sheaf_noprof`
+ // and initialized above. `cache` is the cache from which this object
+ // was allocated. The object remains valid until freed in `Drop`.
+ Some(SBox {
+ ptr,
+ cache: self.cache.clone(),
+ })
+ }
+}
+
+impl<T: KMemCacheInit<T>> Drop for Sheaf<T> {
+ fn drop(&mut self) {
+ if !self.dropped {
+ // SAFETY: `self.cache.as_raw()` and `self.as_raw()` return valid
+ // pointers. Using `GFP_NOWAIT` because the drop may occur in a
+ // context where sleeping is not permitted.
+ unsafe {
+ bindings::kmem_cache_return_sheaf(
+ self.cache.as_raw(),
+ GFP_NOWAIT.as_raw(),
+ self.as_raw(),
+ )
+ };
+ }
+ }
+}
+
+/// An owned allocation from a cache sheaf.
+///
+/// `SBox` is similar to `Box` but is backed by a slab cache allocation obtained
+/// through a [`Sheaf`]. It provides owned access to an initialized object and
+/// ensures the object is properly freed back to the cache when dropped.
+///
+/// The contained `T` is initialized when the `SBox` is returned from alloc and
+/// dropped when the `SBox` is dropped.
+///
+/// # Invariants
+///
+/// - `ptr` points to a valid, initialized object of type `T`.
+/// - `cache` is the cache from which this object was allocated.
+/// - The object remains valid for the lifetime of the `SBox`.
+pub struct SBox<T: KMemCacheInit<T>> {
+ ptr: NonNull<T>,
+ cache: Arc<KMemCache<T>>,
+}
+
+impl<T: KMemCacheInit<T>> Deref for SBox<T> {
+ type Target = T;
+
+ fn deref(&self) -> &Self::Target {
+ // SAFETY: `ptr` is valid and properly aligned per the type invariants.
+ unsafe { self.ptr.as_ref() }
+ }
+}
+
+impl<T: KMemCacheInit<T>> DerefMut for SBox<T> {
+ fn deref_mut(&mut self) -> &mut Self::Target {
+ // SAFETY: `ptr` is valid and properly aligned per the type invariants,
+ // and we have exclusive access via `&mut self`.
+ unsafe { self.ptr.as_mut() }
+ }
+}
+
+impl<T: KMemCacheInit<T>> Drop for SBox<T> {
+ fn drop(&mut self) {
+ // SAFETY: By type invariant, `ptr` points to a valid and initialized
+ // object. We do not touch `ptr` after returning it to the cache.
+ unsafe { core::ptr::drop_in_place(self.ptr.as_ptr()) };
+
+ // SAFETY: `self.ptr` was allocated from `self.cache` via
+ // `kmem_cache_alloc_from_sheaf_noprof` and is valid.
+ unsafe {
+ bindings::kmem_cache_free(self.cache.as_raw(), self.ptr.as_ptr().cast());
+ }
+ }
+}
--
2.51.2
^ permalink raw reply [flat|nested] 52+ messages in thread
* [PATCH v3 10/12] rust: mm: sheaf: allow use of C initialized static caches
2026-02-09 14:38 [PATCH v3 00/12] rust: xarray: add entry API with preloading Andreas Hindborg
` (8 preceding siblings ...)
2026-02-09 14:38 ` [PATCH v3 09/12] rust: mm: add abstractions for allocating from a `sheaf` Andreas Hindborg
@ 2026-02-09 14:38 ` Andreas Hindborg
2026-02-09 14:38 ` [PATCH v3 11/12] xarray, radix-tree: enable sheaf support for kmem_cache Andreas Hindborg
2026-02-09 14:38 ` [PATCH v3 12/12] rust: xarray: add preload API Andreas Hindborg
11 siblings, 0 replies; 52+ messages in thread
From: Andreas Hindborg @ 2026-02-09 14:38 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm,
Andreas Hindborg, Matthew Wilcox (Oracle)
Extend the sheaf abstraction to support caches initialized by C at kernel
boot time, in addition to dynamically created Rust caches.
Introduce `KMemCache<T>` as a transparent wrapper around `kmem_cache` for
static caches with `'static` lifetime. Rename the previous `KMemCache<T>`
to `KMemCacheHandle<T>` to represent dynamically created, reference-counted
caches.
Add `Static` and `Dynamic` marker types along with `StaticSheaf` and
`DynamicSheaf` type aliases to distinguish sheaves from each cache type.
The `Sheaf` type now carries lifetime and allocation mode type parameters.
Add `SBox::into_ptr()` and `SBox::static_from_ptr()` methods for passing
allocations through C code via raw pointers.
Add `KMemCache::from_raw()` for wrapping C-initialized static caches and
`Sheaf::refill()` for replenishing a sheaf to a minimum size.
Export `kmem_cache_prefill_sheaf`, `kmem_cache_return_sheaf`,
`kmem_cache_refill_sheaf`, and `kmem_cache_alloc_from_sheaf_noprof` to
allow Rust module code to use the sheaf API.
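Wrapping a C-initialized cache might then look like this (sketch; the
cache symbol and element type are placeholders):

    // SAFETY: `my_static_cachep` was created by C for objects of type
    // `MyObject` and is never destroyed.
    let cache: &'static KMemCache<MyObject> =
        unsafe { KMemCache::from_raw(bindings::my_static_cachep) };

    // Sheaves from a static cache are `StaticSheaf`s.
    let mut sheaf = cache.sheaf(8, GFP_KERNEL)?;
    let obj = sheaf.alloc();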
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: linux-mm@kvack.org
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
mm/slub.c | 4 +
rust/kernel/mm/sheaf.rs | 343 +++++++++++++++++++++++++++++++++++++++++++-----
2 files changed, 317 insertions(+), 30 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index f77b7407c51bc..7c6b1d28778d0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5428,6 +5428,7 @@ kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int size)
return sheaf;
}
+EXPORT_SYMBOL(kmem_cache_prefill_sheaf);
/*
* Use this to return a sheaf obtained by kmem_cache_prefill_sheaf()
@@ -5483,6 +5484,7 @@ void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp,
barn_put_full_sheaf(barn, sheaf);
stat(s, BARN_PUT);
}
+EXPORT_SYMBOL(kmem_cache_return_sheaf);
/*
* refill a sheaf previously returned by kmem_cache_prefill_sheaf to at least
@@ -5536,6 +5538,7 @@ int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp,
*sheafp = sheaf;
return 0;
}
+EXPORT_SYMBOL(kmem_cache_refill_sheaf);
/*
* Allocate from a sheaf obtained by kmem_cache_prefill_sheaf()
@@ -5573,6 +5576,7 @@ kmem_cache_alloc_from_sheaf_noprof(struct kmem_cache *s, gfp_t gfp,
return ret;
}
+EXPORT_SYMBOL(kmem_cache_alloc_from_sheaf_noprof);
unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf)
{
diff --git a/rust/kernel/mm/sheaf.rs b/rust/kernel/mm/sheaf.rs
index b8fd321335ace..e98879f9881c3 100644
--- a/rust/kernel/mm/sheaf.rs
+++ b/rust/kernel/mm/sheaf.rs
@@ -23,17 +23,26 @@
//!
//! # Architecture
//!
-//! The sheaf system consists of three main components:
+//! The sheaf system supports two modes of operation:
+//!
+//! - **Static caches**: [`KMemCache`] represents a cache initialized by C code at
+//! kernel boot time. These have `'static` lifetime and produce [`StaticSheaf`]
+//! instances.
+//! - **Dynamic caches**: [`KMemCacheHandle`] wraps a cache created at runtime by
+//! Rust code. These are reference-counted and produce [`DynamicSheaf`] instances.
+//!
+//! Both modes use the same core types:
//!
-//! - [`KMemCache`]: A slab cache configured with sheaf support.
//! - [`Sheaf`]: A pre-filled container of objects from a specific cache.
//! - [`SBox`]: An owned allocation from a sheaf, similar to a `Box`.
//!
//! # Example
//!
+//! Using a dynamically created cache:
+//!
//! ```
//! use kernel::c_str;
-//! use kernel::mm::sheaf::{KMemCache, KMemCacheInit, Sheaf, SBox};
+//! use kernel::mm::sheaf::{KMemCacheHandle, KMemCacheInit, Sheaf, SBox};
//! use kernel::prelude::*;
//!
//! struct MyObject {
@@ -47,7 +56,7 @@
//! }
//!
//! // Create a cache with sheaf capacity of 16 objects.
-//! let cache = KMemCache::<MyObject>::new(c_str!("my_cache"), 16)?;
+//! let cache = KMemCacheHandle::<MyObject>::new(c_str!("my_cache"), 16)?;
//!
//! // Pre-fill a sheaf with 8 objects.
//! let mut sheaf = cache.as_arc_borrow().sheaf(8, GFP_KERNEL)?;
@@ -76,7 +85,102 @@
use kernel::prelude::*;
-use crate::sync::{Arc, ArcBorrow};
+use crate::{
+ sync::{Arc, ArcBorrow},
+ types::Opaque,
+};
+
+/// A slab cache with sheaf support.
+///
+/// This type is a transparent wrapper around a kernel `kmem_cache`. It can be
+/// used with caches created either by C code or via [`KMemCacheHandle`].
+///
+/// When a reference to this type has `'static` lifetime (i.e., `&'static
+/// KMemCache<T>`), it typically represents a cache initialized by C at boot
+/// time. Such references produce [`StaticSheaf`] instances via [`sheaf`].
+///
+/// [`sheaf`]: KMemCache::sheaf
+///
+/// # Type parameter
+///
+/// - `T`: The type of objects managed by this cache. Must implement
+/// [`KMemCacheInit`] to provide initialization logic for allocations.
+#[repr(transparent)]
+pub struct KMemCache<T: KMemCacheInit<T>> {
+ inner: Opaque<bindings::kmem_cache>,
+ _p: PhantomData<T>,
+}
+
+impl<T: KMemCacheInit<T>> KMemCache<T> {
+ /// Creates a pre-filled sheaf from this cache.
+ ///
+ /// Allocates a sheaf and pre-fills it with `size` objects. Once created,
+ /// allocations from the sheaf via [`Sheaf::alloc`] are guaranteed to
+ /// succeed until the sheaf is depleted.
+ ///
+ /// # Arguments
+ ///
+ /// - `size`: The number of objects to pre-allocate. Must not exceed the
+ /// cache's `sheaf_capacity`.
+ /// - `gfp`: Allocation flags controlling how memory is obtained. Use
+ /// [`GFP_KERNEL`] for normal allocations that may sleep, or
+ /// [`GFP_NOWAIT`] for non-blocking allocations.
+ ///
+ /// # Errors
+ ///
+ /// Returns [`ENOMEM`] if the sheaf or its objects could not be allocated.
+ ///
+ /// # Warnings
+ ///
+ /// The kernel will warn if `size` exceeds `sheaf_capacity`.
+ pub fn sheaf(
+ &'static self,
+ size: usize,
+ gfp: kernel::alloc::Flags,
+ ) -> Result<Sheaf<'static, T, Static>> {
+ // SAFETY: `self.as_raw()` returns a valid cache pointer, and `size`
+ // has been validated to fit in a `c_uint`.
+ let ptr = unsafe {
+ bindings::kmem_cache_prefill_sheaf(self.inner.get(), gfp.as_raw(), size.try_into()?)
+ };
+
+ // INVARIANT: `ptr` was returned by `kmem_cache_prefill_sheaf` and is
+ // non-null (checked below). `cache` is the cache from which this sheaf
+ // was created. `dropped` is false since the sheaf has not been returned.
+ Ok(Sheaf {
+ sheaf: NonNull::new(ptr).ok_or(ENOMEM)?,
+ // SAFETY: `self` is a valid reference, so the pointer is non-null.
+ cache: CacheRef::Static(unsafe {
+ NonNull::new_unchecked((&raw const *self).cast_mut())
+ }),
+ dropped: false,
+ _p: PhantomData,
+ })
+ }
+
+ fn as_raw(&self) -> *mut bindings::kmem_cache {
+ self.inner.get()
+ }
+
+ /// Creates a reference to a [`KMemCache`] from a raw pointer.
+ ///
+ /// This is useful for wrapping a C-initialized static `kmem_cache`, such as
+ /// the global `radix_tree_node_cachep` used by XArrays.
+ ///
+ /// # Safety
+ ///
+ /// - `ptr` must be a valid pointer to a `kmem_cache` that was created for
+ /// objects of type `T`.
+ /// - The cache must remain valid for the lifetime `'a`.
+ /// - The caller must ensure that the cache was configured appropriately for
+ /// the type `T`, including proper size and alignment.
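+ ///
+ /// # Examples
+ ///
+ /// A sketch, assuming a C cache pointer exported through bindings (as is
+ /// done for `radix_tree_node_cachep` later in this series):
+ ///
+ /// ```ignore
+ /// // SAFETY: The cache is statically initialized by C, remains valid for
+ /// // the lifetime of the kernel, and is configured for objects of type
+ /// // `XArrayNode`.
+ /// let cache = unsafe { KMemCache::<XArrayNode>::from_raw(bindings::radix_tree_node_cachep) };
+ /// ```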
+ pub unsafe fn from_raw<'a>(ptr: *mut bindings::kmem_cache) -> &'a Self {
+ // SAFETY: The caller guarantees that `ptr` is a valid pointer to a
+ // `kmem_cache` created for objects of type `T`, that it remains valid
+ // for lifetime `'a`, and that the cache is properly configured for `T`.
+ unsafe { &*ptr.cast::<Self>() }
+ }
+}
/// A slab cache with sheaf support.
///
@@ -95,12 +199,12 @@
/// - `cache` is a valid pointer to a `kmem_cache` created with
/// `__kmem_cache_create_args`.
/// - The cache is valid for the lifetime of this struct.
-pub struct KMemCache<T: KMemCacheInit<T>> {
- cache: NonNull<bindings::kmem_cache>,
- _p: PhantomData<T>,
+#[repr(transparent)]
+pub struct KMemCacheHandle<T: KMemCacheInit<T>> {
+ cache: NonNull<KMemCache<T>>,
}
-impl<T: KMemCacheInit<T>> KMemCache<T> {
+impl<T: KMemCacheInit<T>> KMemCacheHandle<T> {
/// Creates a new slab cache with sheaf support.
///
/// Creates a kernel slab cache for objects of type `T` with the specified
@@ -148,8 +252,7 @@ pub fn new(name: &CStr, sheaf_capacity: u32) -> Result<Arc<Self>>
// `kmem_cache_destroy` is called in `Drop`.
Ok(Arc::new(
Self {
- cache: NonNull::new(ptr).ok_or(ENOMEM)?,
- _p: PhantomData,
+ cache: NonNull::new(ptr.cast()).ok_or(ENOMEM)?,
},
GFP_KERNEL,
)?)
@@ -176,11 +279,11 @@ pub fn new(name: &CStr, sheaf_capacity: u32) -> Result<Arc<Self>>
/// # Warnings
///
/// The kernel will warn if `size` exceeds `sheaf_capacity`.
- pub fn sheaf(
- self: ArcBorrow<'_, Self>,
+ pub fn sheaf<'a>(
+ self: ArcBorrow<'a, Self>,
size: usize,
gfp: kernel::alloc::Flags,
- ) -> Result<Sheaf<T>> {
+ ) -> Result<Sheaf<'a, T, Dynamic>> {
// SAFETY: `self.as_raw()` returns a valid cache pointer, and `size`
// has been validated to fit in a `c_uint`.
let ptr = unsafe {
@@ -192,17 +295,18 @@ pub fn sheaf(
// was created. `dropped` is false since the sheaf has not been returned.
Ok(Sheaf {
sheaf: NonNull::new(ptr).ok_or(ENOMEM)?,
- cache: self.into(),
+ cache: CacheRef::Arc(self.into()),
dropped: false,
+ _p: PhantomData,
})
}
fn as_raw(&self) -> *mut bindings::kmem_cache {
- self.cache.as_ptr()
+ self.cache.as_ptr().cast()
}
}
-impl<T: KMemCacheInit<T>> Drop for KMemCache<T> {
+impl<T: KMemCacheInit<T>> Drop for KMemCacheHandle<T> {
fn drop(&mut self) {
// SAFETY: `self.as_raw()` returns a valid cache pointer that was
// created by `__kmem_cache_create_args`. As all objects allocated from
@@ -215,13 +319,13 @@ fn drop(&mut self) {
/// Trait for types that can be initialized in a slab cache.
///
/// This trait provides the initialization logic for objects allocated from a
-/// [`KMemCache`]. When the slab allocator creates new objects, it invokes the
-/// constructor to ensure objects are in a valid initial state.
+/// [`KMemCache`]. The initializer is called when objects are allocated from a
+/// sheaf via [`Sheaf::alloc`].
///
/// # Implementation
///
-/// Implementors must provide [`init`](KMemCacheInit::init), which returns
-/// a in-place initializer for the type.
+/// Implementors must provide [`init`](KMemCacheInit::init), which returns an
+/// infallible initializer for the type.
///
/// # Example
///
@@ -252,6 +356,28 @@ pub trait KMemCacheInit<T> {
fn init() -> impl Init<T, Infallible>;
}
+/// Marker type for sheaves from static caches.
+///
+/// Used as a type parameter for [`Sheaf`] to indicate the sheaf was created
+/// from a `&'static KMemCache<T>`.
+pub enum Static {}
+
+/// Marker type for sheaves from dynamic caches.
+///
+/// Used as a type parameter for [`Sheaf`] to indicate the sheaf was created
+/// from a [`KMemCacheHandle`] via [`ArcBorrow`].
+pub enum Dynamic {}
+
+/// A sheaf from a static cache.
+///
+/// This is a [`Sheaf`] backed by a `&'static KMemCache<T>`.
+pub type StaticSheaf<'a, T> = Sheaf<'a, T, Static>;
+
+/// A sheaf from a dynamic cache.
+///
+/// This is a [`Sheaf`] backed by a reference-counted [`KMemCacheHandle`].
+pub type DynamicSheaf<'a, T> = Sheaf<'a, T, Dynamic>;
+
/// A pre-filled container of slab objects.
///
/// A sheaf holds a set of pre-allocated objects from a [`KMemCache`].
@@ -262,12 +388,23 @@ pub trait KMemCacheInit<T> {
/// Sheaves provide faster allocation than direct allocation because they use
/// local locks with preemption disabled rather than atomic operations.
///
+/// # Type parameters
+///
+/// - `'a`: The lifetime of the cache reference.
+/// - `T`: The type of objects in this sheaf.
+/// - `A`: Either [`Static`] or [`Dynamic`], indicating whether the backing
+/// cache is a static reference or a reference-counted handle.
+///
+/// For convenience, [`StaticSheaf`] and [`DynamicSheaf`] type aliases are
+/// provided.
+///
/// # Lifecycle
///
-/// Sheaves are created via [`KMemCache::sheaf`] and should be returned to the
-/// allocator when no longer needed via [`Sheaf::return_refill`]. If a sheaf is
-/// simply dropped, it is returned with `GFP_NOWAIT` flags, which may result in
-/// the sheaf being flushed and freed rather than being cached for reuse.
+/// Sheaves are created via [`KMemCache::sheaf`] or [`KMemCacheHandle::sheaf`]
+/// and should be returned to the allocator when no longer needed via
+/// [`Sheaf::return_refill`]. If a sheaf is simply dropped, it is returned with
+/// `GFP_NOWAIT` flags, which may result in the sheaf being flushed and freed
+/// rather than being cached for reuse.
///
/// # Invariants
///
@@ -275,13 +412,14 @@ pub trait KMemCacheInit<T> {
/// `kmem_cache_prefill_sheaf`.
/// - `cache` is the cache from which this sheaf was created.
/// - `dropped` tracks whether the sheaf has been explicitly returned.
-pub struct Sheaf<T: KMemCacheInit<T>> {
+pub struct Sheaf<'a, T: KMemCacheInit<T>, A> {
sheaf: NonNull<bindings::slab_sheaf>,
- cache: Arc<KMemCache<T>>,
+ cache: CacheRef<T>,
dropped: bool,
+ _p: PhantomData<(&'a KMemCache<T>, A)>,
}
-impl<T: KMemCacheInit<T>> Sheaf<T> {
+impl<'a, T: KMemCacheInit<T>, A> Sheaf<'a, T, A> {
fn as_raw(&self) -> *mut bindings::slab_sheaf {
self.sheaf.as_ptr()
}
@@ -304,6 +442,75 @@ pub fn return_refill(mut self, flags: kernel::alloc::Flags) {
drop(self);
}
+ /// Refills the sheaf to at least the specified size.
+ ///
+ /// Replenishes the sheaf by preallocating objects until it contains at
+ /// least `size` objects. If the sheaf already contains `size` or more
+ /// objects, this is a no-op. In practice, the sheaf is refilled to its
+ /// full capacity.
+ ///
+ /// # Arguments
+ ///
+ /// - `flags`: Allocation flags controlling how memory is obtained.
+ /// - `size`: The minimum number of objects the sheaf should contain after
+ /// refilling. If `size` exceeds the cache's `sheaf_capacity`, the sheaf
+ /// may be replaced with a larger one.
+ ///
+ /// # Errors
+ ///
+ /// Returns an error if the objects could not be allocated. If refilling
+ /// fails, the existing sheaf is left intact.
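+ ///
+ /// # Examples
+ ///
+ /// A minimal sketch, assuming `sheaf` was obtained from a cache earlier:
+ ///
+ /// ```ignore
+ /// // Top the sheaf back up to at least eight preallocated objects.
+ /// sheaf.refill(GFP_KERNEL, 8)?;
+ /// ```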
+ pub fn refill(&mut self, flags: kernel::alloc::Flags, size: usize) -> Result {
+ // SAFETY: `self.cache.as_raw()` returns a valid cache pointer, and
+ // `&raw mut self.sheaf` points to a valid sheaf pointer per the type
+ // invariants.
+ kernel::error::to_result(unsafe {
+ bindings::kmem_cache_refill_sheaf(
+ self.cache.as_raw(),
+ flags.as_raw(),
+ (&raw mut (self.sheaf)).cast(),
+ size.try_into()?,
+ )
+ })
+ }
+}
+
+impl<'a, T: KMemCacheInit<T>> Sheaf<'a, T, Static> {
+ /// Allocates an object from the sheaf.
+ ///
+ /// Returns a new [`SBox`] containing an initialized object, or [`None`]
+ /// if the sheaf is depleted. Allocations are guaranteed to succeed as
+ /// long as the sheaf contains pre-allocated objects.
+ ///
+ /// The `gfp` flags passed to `kmem_cache_alloc_from_sheaf` are set to zero,
+ /// meaning no additional flags like `__GFP_ZERO` or `__GFP_ACCOUNT` are
+ /// applied.
+ ///
+ /// The returned `T` is initialized as part of this function.
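+ ///
+ /// # Examples
+ ///
+ /// A minimal sketch, assuming `sheaf` is a [`StaticSheaf`] prefilled with
+ /// at least one object:
+ ///
+ /// ```ignore
+ /// // Taking an object from a prefilled sheaf cannot fail with ENOMEM.
+ /// let obj = sheaf.alloc().ok_or(ENOMEM)?;
+ /// ```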
+ pub fn alloc(&mut self) -> Option<SBox<T>> {
+ // SAFETY: `self.cache.as_raw()` and `self.as_raw()` return valid
+ // pointers. The function returns NULL when the sheaf is empty.
+ let ptr = unsafe {
+ bindings::kmem_cache_alloc_from_sheaf_noprof(self.cache.as_raw(), 0, self.as_raw())
+ };
+
+ let ptr = NonNull::new(ptr.cast::<T>())?;
+
+ // SAFETY:
+ // - `ptr` is non-null and valid, as it was just returned by the cache
+ // and checked for null above.
+ // - The initializer is infallible, so an error is never returned.
+ unsafe { T::init().__init(ptr.as_ptr()) }.expect("Initializer is infallible");
+
+ // INVARIANT: `ptr` was returned by `kmem_cache_alloc_from_sheaf_noprof`
+ // and initialized above. `cache` is the cache from which this object
+ // was allocated. The object remains valid until freed in `Drop`.
+ Some(SBox {
+ ptr,
+ cache: self.cache.clone(),
+ })
+ }
+}
+
+impl<'a, T: KMemCacheInit<T>> Sheaf<'a, T, Dynamic> {
/// Allocates an object from the sheaf.
///
/// Returns a new [`SBox`] containing an initialized object, or [`None`]
@@ -339,7 +546,7 @@ pub fn alloc(&mut self) -> Option<SBox<T>> {
}
}
-impl<T: KMemCacheInit<T>> Drop for Sheaf<T> {
+impl<'a, T: KMemCacheInit<T>, A> Drop for Sheaf<'a, T, A> {
fn drop(&mut self) {
if !self.dropped {
// SAFETY: `self.cache.as_raw()` and `self.as_raw()` return valid
@@ -356,6 +563,39 @@ fn drop(&mut self) {
}
}
+/// Internal reference to a cache, either static or reference-counted.
+///
+/// # Invariants
+///
+/// - For `CacheRef::Static`: the `NonNull` points to a valid `KMemCache<T>`
+/// with `'static` lifetime, derived from a `&'static KMemCache<T>` reference.
+enum CacheRef<T: KMemCacheInit<T>> {
+ /// A reference-counted handle to a dynamically created cache.
+ Arc(Arc<KMemCacheHandle<T>>),
+ /// A pointer to a static lifetime cache.
+ Static(NonNull<KMemCache<T>>),
+}
+
+impl<T: KMemCacheInit<T>> Clone for CacheRef<T> {
+ fn clone(&self) -> Self {
+ match self {
+ Self::Arc(cache) => Self::Arc(cache.clone()),
+ Self::Static(cache) => Self::Static(*cache),
+ }
+ }
+}
+
+impl<T: KMemCacheInit<T>> CacheRef<T> {
+ fn as_raw(&self) -> *mut bindings::kmem_cache {
+ match self {
+ CacheRef::Arc(handle) => handle.as_raw(),
+ // SAFETY: By type invariant, `ptr` points to a valid `KMemCache<T>`
+ // with `'static` lifetime.
+ CacheRef::Static(ptr) => unsafe { ptr.as_ref() }.as_raw(),
+ }
+ }
+}
+
/// An owned allocation from a cache sheaf.
///
/// `SBox` is similar to `Box` but is backed by a slab cache allocation obtained
@@ -372,7 +612,50 @@ fn drop(&mut self) {
/// - The object remains valid for the lifetime of the `SBox`.
pub struct SBox<T: KMemCacheInit<T>> {
ptr: NonNull<T>,
- cache: Arc<KMemCache<T>>,
+ cache: CacheRef<T>,
+}
+
+impl<T: KMemCacheInit<T>> SBox<T> {
+ /// Consumes the `SBox` and returns the raw pointer to the contained value.
+ ///
+ /// The caller becomes responsible for freeing the memory. The object is not
+ /// dropped and remains initialized. Use [`static_from_ptr`] to reconstruct
+ /// an `SBox` from the pointer.
+ ///
+ /// [`static_from_ptr`]: SBox::static_from_ptr
+ pub fn into_ptr(self) -> *mut T {
+ let ptr = self.ptr.as_ptr();
+ core::mem::forget(self);
+ ptr
+ }
+
+ /// Reconstructs an `SBox` from a raw pointer and cache.
+ ///
+ /// This is intended for use with objects that were previously converted to
+ /// raw pointers via [`into_ptr`], typically for passing through C code.
+ ///
+ /// [`into_ptr`]: SBox::into_ptr
+ ///
+ /// # Safety
+ ///
+ /// - `cache` must be a valid pointer to the `kmem_cache` from which `value`
+ /// was allocated.
+ /// - `value` must be a valid pointer to an initialized `T` that was
+ /// allocated from `cache`.
+ /// - The caller must ensure that no other `SBox` or reference exists for
+ /// `value`.
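+ ///
+ /// # Examples
+ ///
+ /// A round-trip sketch; `cache` is assumed to be the raw `kmem_cache`
+ /// pointer backing the original allocation:
+ ///
+ /// ```ignore
+ /// let raw = sbox.into_ptr();
+ /// // ... pass `raw` through C code ...
+ /// // SAFETY: `raw` came from `into_ptr` on an `SBox` allocated from
+ /// // `cache`, and no other `SBox` or reference to it exists.
+ /// let sbox = unsafe { SBox::static_from_ptr(cache, raw) };
+ /// ```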
+ pub unsafe fn static_from_ptr(cache: *mut bindings::kmem_cache, value: *mut T) -> Self {
+ // INVARIANT: The caller guarantees `value` points to a valid,
+ // initialized `T` allocated from `cache`.
+ Self {
+ // SAFETY: By function safety requirements, `value` is not null.
+ ptr: unsafe { NonNull::new_unchecked(value) },
+ cache: CacheRef::Static(
+ // SAFETY: By function safety requirements, `cache` is not null.
+ unsafe { NonNull::new_unchecked(cache.cast()) },
+ ),
+ }
+ }
}
impl<T: KMemCacheInit<T>> Deref for SBox<T> {
--
2.51.2
^ permalink raw reply [flat|nested] 52+ messages in thread* [PATCH v3 11/12] xarray, radix-tree: enable sheaf support for kmem_cache
2026-02-09 14:38 [PATCH v3 00/12] rust: xarray: add entry API with preloading Andreas Hindborg
` (9 preceding siblings ...)
2026-02-09 14:38 ` [PATCH v3 10/12] rust: mm: sheaf: allow use of C initialized static caches Andreas Hindborg
@ 2026-02-09 14:38 ` Andreas Hindborg
2026-02-10 16:49 ` Daniel Gomez
2026-02-09 14:38 ` [PATCH v3 12/12] rust: xarray: add preload API Andreas Hindborg
11 siblings, 1 reply; 52+ messages in thread
From: Andreas Hindborg @ 2026-02-09 14:38 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm,
Andreas Hindborg, Matthew Wilcox (Oracle)
Enable sheaf support for the radix_tree_node_cachep kmem_cache by
creating it with a sheaf_capacity of 64. The Rust null block driver
plans to rely on preloading xarray nodes from this cache.
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
lib/radix-tree.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 976b9bd02a1b5..1cf0012b15ade 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -1598,10 +1598,16 @@ void __init radix_tree_init(void)
BUILD_BUG_ON(RADIX_TREE_MAX_TAGS + __GFP_BITS_SHIFT > 32);
BUILD_BUG_ON(ROOT_IS_IDR & ~GFP_ZONEMASK);
BUILD_BUG_ON(XA_CHUNK_SIZE > 255);
- radix_tree_node_cachep = kmem_cache_create("radix_tree_node",
- sizeof(struct radix_tree_node), 0,
- SLAB_PANIC | SLAB_RECLAIM_ACCOUNT,
- radix_tree_node_ctor);
+
+ struct kmem_cache_args args = {
+ .ctor = radix_tree_node_ctor,
+ .sheaf_capacity = 64,
+ };
+
+ radix_tree_node_cachep = kmem_cache_create(
+ "radix_tree_node", sizeof(struct radix_tree_node), &args,
+ SLAB_PANIC | SLAB_RECLAIM_ACCOUNT);
+
ret = cpuhp_setup_state_nocalls(CPUHP_RADIX_DEAD, "lib/radix:dead",
NULL, radix_tree_cpu_dead);
WARN_ON(ret < 0);
--
2.51.2
^ permalink raw reply [flat|nested] 52+ messages in thread* Re: [PATCH v3 11/12] xarray, radix-tree: enable sheaf support for kmem_cache
2026-02-09 14:38 ` [PATCH v3 11/12] xarray, radix-tree: enable sheaf support for kmem_cache Andreas Hindborg
@ 2026-02-10 16:49 ` Daniel Gomez
2026-02-11 7:45 ` Andreas Hindborg
0 siblings, 1 reply; 52+ messages in thread
From: Daniel Gomez @ 2026-02-10 16:49 UTC (permalink / raw)
To: Andreas Hindborg
Cc: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, rust-for-linux,
linux-kernel, linux-mm, Matthew Wilcox (Oracle)
On 2026-02-09 15:38, Andreas Hindborg wrote:
> Enable sheaf support for the radix_tree_node_cachep kmem_cache by
> creating it with a sheaf_capacity of 64. The Rust null block driver
> plans to rely on preloading xarray nodes from this cache.
>
> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
> ---
> lib/radix-tree.c | 14 ++++++++++----
> 1 file changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/lib/radix-tree.c b/lib/radix-tree.c
> index 976b9bd02a1b5..1cf0012b15ade 100644
> --- a/lib/radix-tree.c
> +++ b/lib/radix-tree.c
> @@ -1598,10 +1598,16 @@ void __init radix_tree_init(void)
> BUILD_BUG_ON(RADIX_TREE_MAX_TAGS + __GFP_BITS_SHIFT > 32);
> BUILD_BUG_ON(ROOT_IS_IDR & ~GFP_ZONEMASK);
> BUILD_BUG_ON(XA_CHUNK_SIZE > 255);
> - radix_tree_node_cachep = kmem_cache_create("radix_tree_node",
> - sizeof(struct radix_tree_node), 0,
> - SLAB_PANIC | SLAB_RECLAIM_ACCOUNT,
> - radix_tree_node_ctor);
> +
> + struct kmem_cache_args args = {
> + .ctor = radix_tree_node_ctor,
> + .sheaf_capacity = 64,
> + };
Does the sheaf_capacity match the number of slots in an XArray node? If so,
this should be bindings::XA_CHUNK_SIZE.
^ permalink raw reply [flat|nested] 52+ messages in thread* Re: [PATCH v3 11/12] xarray, radix-tree: enable sheaf support for kmem_cache
2026-02-10 16:49 ` Daniel Gomez
@ 2026-02-11 7:45 ` Andreas Hindborg
0 siblings, 0 replies; 52+ messages in thread
From: Andreas Hindborg @ 2026-02-11 7:45 UTC (permalink / raw)
To: Daniel Gomez
Cc: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo, rust-for-linux,
linux-kernel, linux-mm, Matthew Wilcox (Oracle)
Daniel Gomez <da.gomez@kernel.org> writes:
> On 2026-02-09 15:38, Andreas Hindborg wrote:
>> Enable sheaf support for the radix_tree_node_cachep kmem_cache by
>> creating it with a sheaf_capacity of 64. The Rust null block driver
>> plans to rely on preloading xarray nodes from this cache.
>>
>> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
>> ---
>> lib/radix-tree.c | 14 ++++++++++----
>> 1 file changed, 10 insertions(+), 4 deletions(-)
>>
>> diff --git a/lib/radix-tree.c b/lib/radix-tree.c
>> index 976b9bd02a1b5..1cf0012b15ade 100644
>> --- a/lib/radix-tree.c
>> +++ b/lib/radix-tree.c
>> @@ -1598,10 +1598,16 @@ void __init radix_tree_init(void)
>> BUILD_BUG_ON(RADIX_TREE_MAX_TAGS + __GFP_BITS_SHIFT > 32);
>> BUILD_BUG_ON(ROOT_IS_IDR & ~GFP_ZONEMASK);
>> BUILD_BUG_ON(XA_CHUNK_SIZE > 255);
>> - radix_tree_node_cachep = kmem_cache_create("radix_tree_node",
>> - sizeof(struct radix_tree_node), 0,
>> - SLAB_PANIC | SLAB_RECLAIM_ACCOUNT,
>> - radix_tree_node_ctor);
>> +
>> + struct kmem_cache_args args = {
>> + .ctor = radix_tree_node_ctor,
>> + .sheaf_capacity = 64,
>> + };
>
> Is the sheaf_capacity matching the number of slots in an XArray node? If so,
> this should be bindings::XA_CHUNK_SIZE.
It is not; it is arbitrarily chosen to be large enough to gain an
advantage but small enough to not waste too much memory.
For Rust null block, it needs to be at least large enough to insert two
leaf nodes.
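With the preload API added in patch 12, that minimum would be prefilled
as follows (sketch):
	let mut sheaf = xarray_kmem_cache().sheaf(2, GFP_KERNEL)?;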
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 52+ messages in thread
* [PATCH v3 12/12] rust: xarray: add preload API
2026-02-09 14:38 [PATCH v3 00/12] rust: xarray: add entry API with preloading Andreas Hindborg
` (10 preceding siblings ...)
2026-02-09 14:38 ` [PATCH v3 11/12] xarray, radix-tree: enable sheaf support for kmem_cache Andreas Hindborg
@ 2026-02-09 14:38 ` Andreas Hindborg
11 siblings, 0 replies; 52+ messages in thread
From: Andreas Hindborg @ 2026-02-09 14:38 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm,
Andreas Hindborg, Matthew Wilcox (Oracle)
Add a preload API that allows preallocating memory for XArray
insertions. This enables insertions to proceed without allocation
failures in contexts where memory allocation is not desirable, such as
in atomic contexts.
The implementation introduces `XArrayNode` representing a single XArray
node and `XArraySheaf` as a type alias for a sheaf of preallocated
nodes.
Add the function `xarray_kmem_cache` to provide access to the global XArray
node cache for creating sheaves.
Update `VacantEntry::insert` and `VacantEntry::insert_entry` to accept
an optional sheaf argument for preloaded memory. Add a new
`Guard::insert_entry` method for inserting with preload support. When an
insertion would fail due to ENOMEM, the XArray state API automatically
consumes a preallocated node from the sheaf if available.
Export `radix_tree_node_ctor` and `radix_tree_node_cachep` from C to
enable Rust code to work with the radix tree node cache.
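A typical use, as a sketch (assuming `guard` is a locked guard of an
`XArray<KBox<u32>>` and `value` is a `KBox<u32>`):
	let mut sheaf = xarray_kmem_cache().sheaf(2, GFP_KERNEL)?;
	if let Entry::Vacant(entry) = guard.entry(42) {
		entry.insert(value, Some(&mut sheaf))?;
	}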
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
include/linux/radix-tree.h | 3 +
lib/radix-tree.c | 5 +-
rust/bindings/bindings_helper.h | 3 +
rust/kernel/xarray.rs | 172 +++++++++++++++++++++++++++++++++++-----
rust/kernel/xarray/entry.rs | 29 ++++---
5 files changed, 182 insertions(+), 30 deletions(-)
diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index eae67015ce51a..c3699f12b070c 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -469,4 +469,7 @@ static __always_inline void __rcu **radix_tree_next_slot(void __rcu **slot,
slot = radix_tree_next_slot(slot, iter, \
RADIX_TREE_ITER_TAGGED | tag))
+
+void radix_tree_node_ctor(void *arg);
+
#endif /* _LINUX_RADIX_TREE_H */
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 1cf0012b15ade..ddd67ce672f5c 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -33,6 +33,7 @@
* Radix tree node cache.
*/
struct kmem_cache *radix_tree_node_cachep;
+EXPORT_SYMBOL(radix_tree_node_cachep);
/*
* The radix tree is variable-height, so an insert operation not only has
@@ -1566,14 +1567,14 @@ void idr_destroy(struct idr *idr)
}
EXPORT_SYMBOL(idr_destroy);
-static void
-radix_tree_node_ctor(void *arg)
+void radix_tree_node_ctor(void *arg)
{
struct radix_tree_node *node = arg;
memset(node, 0, sizeof(*node));
INIT_LIST_HEAD(&node->private_list);
}
+EXPORT_SYMBOL(radix_tree_node_ctor);
static int radix_tree_cpu_dead(unsigned int cpu)
{
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 58605c32e8102..652f08ad888cd 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -118,6 +118,9 @@ const xa_mark_t RUST_CONST_HELPER_XA_PRESENT = XA_PRESENT;
const gfp_t RUST_CONST_HELPER_XA_FLAGS_ALLOC = XA_FLAGS_ALLOC;
const gfp_t RUST_CONST_HELPER_XA_FLAGS_ALLOC1 = XA_FLAGS_ALLOC1;
const size_t RUST_CONST_HELPER_XAS_RESTART = (size_t)XAS_RESTART;
+const size_t RUST_CONST_HELPER_XA_CHUNK_SHIFT = XA_CHUNK_SHIFT;
+const size_t RUST_CONST_HELPER_XA_CHUNK_SIZE = XA_CHUNK_SIZE;
+extern struct kmem_cache *radix_tree_node_cachep;
const vm_flags_t RUST_CONST_HELPER_VM_MERGEABLE = VM_MERGEABLE;
const vm_flags_t RUST_CONST_HELPER_VM_READ = VM_READ;
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index 8c10e8fd76f15..89bf531308c88 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -5,6 +5,7 @@
//! C header: [`include/linux/xarray.h`](srctree/include/linux/xarray.h)
use core::{
+ convert::Infallible,
iter,
marker::PhantomData,
pin::Pin,
@@ -23,11 +24,17 @@
bindings,
build_assert, //
error::{
+ code::*,
to_result,
Error,
Result, //
},
ffi::c_void,
+ mm::sheaf::{
+ KMemCache,
+ SBox,
+ StaticSheaf, //
+ },
types::{
ForeignOwnable,
NotThreadSafe,
@@ -35,12 +42,54 @@
},
};
use pin_init::{
+ init,
pin_data,
pin_init,
pinned_drop,
+ Init,
PinInit, //
};
+/// Sheaf of preallocated [`XArray`] nodes.
+pub type XArraySheaf<'a> = StaticSheaf<'a, XArrayNode>;
+
+/// Returns a reference to the global XArray node cache.
+///
+/// This provides access to the kernel's `radix_tree_node_cachep`, which is the
+/// slab cache used for allocating internal XArray nodes. This cache can be used
+/// to create sheaves for preallocating XArray nodes.
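+///
+/// # Examples
+///
+/// A minimal sketch:
+///
+/// ```ignore
+/// // Prefill two XArray nodes, then hand the sheaf back to the allocator.
+/// let sheaf = xarray_kmem_cache().sheaf(2, GFP_KERNEL)?;
+/// sheaf.return_refill(GFP_KERNEL);
+/// ```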
+pub fn xarray_kmem_cache() -> &'static KMemCache<XArrayNode> {
+ // SAFETY: `radix_tree_node_cachep` is a valid, statically initialized
+ // kmem_cache that remains valid for the lifetime of the kernel. The cache
+ // is configured for `xa_node` objects which match our `XArrayNode` type.
+ unsafe { KMemCache::from_raw(bindings::radix_tree_node_cachep) }
+}
+
+/// A preallocated XArray node.
+///
+/// This represents a single preallocated internal node for an XArray.
+pub struct XArrayNode {
+ node: Opaque<bindings::xa_node>,
+}
+
+impl kernel::mm::sheaf::KMemCacheInit<XArrayNode> for XArrayNode {
+ fn init() -> impl Init<Self, Infallible> {
+ init!(Self {
+ // SAFETY:
+ // - This initialization cannot fail and will never return `Err`.
+ // - The xa_node does not move during initialization.
+ node <- unsafe {
+ pin_init::init_from_closure(
+ |place: *mut Opaque<bindings::xa_node>| -> Result<(), Infallible> {
+ bindings::radix_tree_node_ctor(place.cast::<c_void>());
+ Ok(())
+ },
+ )
+ }
+ })
+ }
+}
+
/// An array which efficiently maps sparse integer indices to owned objects.
///
/// This is similar to a [`crate::alloc::kvec::Vec<Option<T>>`], but more efficient when there are
@@ -137,15 +186,22 @@ fn iter(&self) -> impl Iterator<Item = NonNull<c_void>> + '_ {
let mut index = 0;
// SAFETY: `self.xa` is always valid by the type invariant.
- iter::once(unsafe {
- bindings::xa_find(self.xa.get(), &mut index, usize::MAX, bindings::XA_PRESENT)
- })
- .chain(iter::from_fn(move || {
- // SAFETY: `self.xa` is always valid by the type invariant.
- Some(unsafe {
- bindings::xa_find_after(self.xa.get(), &mut index, usize::MAX, bindings::XA_PRESENT)
- })
- }))
+ Iterator::chain(
+ iter::once(unsafe {
+ bindings::xa_find(self.xa.get(), &mut index, usize::MAX, bindings::XA_PRESENT)
+ }),
+ iter::from_fn(move || {
+ // SAFETY: `self.xa` is always valid by the type invariant.
+ Some(unsafe {
+ bindings::xa_find_after(
+ self.xa.get(),
+ &mut index,
+ usize::MAX,
+ bindings::XA_PRESENT,
+ )
+ })
+ }),
+ )
.map_while(|ptr| NonNull::new(ptr.cast()))
}
@@ -166,7 +222,6 @@ pub fn try_lock(&self) -> Option<Guard<'_, T>> {
pub fn lock(&self) -> Guard<'_, T> {
// SAFETY: `self.xa` is always valid by the type invariant.
unsafe { bindings::xa_lock(self.xa.get()) };
-
Guard {
xa: self,
_not_send: NotThreadSafe,
@@ -270,7 +325,7 @@ pub fn get_mut(&mut self, index: usize) -> Option<T::BorrowedMut<'_>> {
///
/// match guard.entry(42) {
/// Entry::Vacant(entry) => {
- /// entry.insert(KBox::new(0x1337u32, GFP_KERNEL)?)?;
+ /// entry.insert(KBox::new(0x1337u32, GFP_KERNEL)?, None)?;
/// }
/// Entry::Occupied(_) => unreachable!("We did not insert an entry yet"),
/// }
@@ -475,6 +530,45 @@ pub fn store(
Ok(unsafe { T::try_from_foreign(old) })
}
}
+
+ /// Inserts a value and returns an occupied entry for further operations.
+ ///
+ /// If a value is already present, the operation fails.
+ ///
+ /// This method will not drop the XArray lock. If memory allocation is
+ /// required for the operation to succeed, the user should supply memory
+ /// through the `preload` argument.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// assert_eq!(guard.get(42), None);
+ ///
+ /// let value = KBox::new(0x1337u32, GFP_KERNEL)?;
+ /// let entry = guard.insert_entry(42, value, None)?;
+ /// let borrowed = entry.into_mut();
+ /// assert_eq!(borrowed, &0x1337);
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn insert_entry<'b>(
+ &'b mut self,
+ index: usize,
+ value: T,
+ preload: Option<&mut XArraySheaf<'_>>,
+ ) -> Result<OccupiedEntry<'a, 'b, T>, StoreError<T>> {
+ match self.entry(index) {
+ Entry::Vacant(entry) => entry.insert_entry(value, preload),
+ Entry::Occupied(_) => Err(StoreError {
+ error: EBUSY,
+ value,
+ }),
+ }
+ }
}
/// Internal state for XArray iteration and entry operations.
@@ -489,6 +583,25 @@ pub(crate) struct XArrayState<'a, 'b, T: ForeignOwnable> {
state: bindings::xa_state,
}
+impl<'a, 'b, T: ForeignOwnable> Drop for XArrayState<'a, 'b, T> {
+ fn drop(&mut self) {
+ if !self.state.xa_alloc.is_null() {
+ // SAFETY:
+ // - `xa_alloc` is only set via `SBox::into_ptr()` in `insert()` where
+ // the node comes from an `XArraySheaf` backed by `radix_tree_node_cachep`.
+ // - `xa_alloc` points to a valid, initialized `XArrayNode`.
+ // - `XArrayState` has exclusive ownership of `xa_alloc`, and no other
+ // `SBox` or reference exists for this value.
+ drop(unsafe {
+ SBox::<XArrayNode>::static_from_ptr(
+ bindings::radix_tree_node_cachep,
+ self.state.xa_alloc.cast(),
+ )
+ })
+ }
+ }
+}
+
impl<'a, 'b, T: ForeignOwnable> XArrayState<'a, 'b, T> {
fn new(access: &'b Guard<'a, T>, index: usize) -> Self {
let ptr = access.xa.xa.get();
@@ -529,16 +642,37 @@ fn status(&self) -> Result {
to_result(unsafe { bindings::xas_error(&self.state) })
}
- fn insert(&mut self, value: T) -> Result<*mut c_void, StoreError<T>> {
+ fn insert(
+ &mut self,
+ value: T,
+ mut preload: Option<&mut XArraySheaf<'_>>,
+ ) -> Result<*mut c_void, StoreError<T>> {
let new = T::into_foreign(value).cast();
- // SAFETY: `self.state.state` is properly initialized and `new` came from `T::into_foreign`.
- // We hold the xarray lock.
- unsafe { bindings::xas_store(&mut self.state, new) };
-
- self.status().map(|()| new).map_err(|error| {
- // SAFETY: `new` came from `T::into_foreign` and `xas_store` does not take ownership of
- // the value on error.
+ loop {
+ // SAFETY: `self.state` is properly initialized and `new` came from
+ // `T::into_foreign`. We hold the xarray lock.
+ unsafe { bindings::xas_store(&mut self.state, new) };
+
+ match self.status() {
+ Ok(()) => break Ok(new),
+ Err(ENOMEM) => {
+ debug_assert!(self.state.xa_alloc.is_null());
+ let node = match preload.as_mut().map(|sheaf| sheaf.alloc().ok_or(ENOMEM)) {
+ None => break Err(ENOMEM),
+ Some(Err(e)) => break Err(e),
+ Some(Ok(node)) => node,
+ };
+
+ self.state.xa_alloc = node.into_ptr().cast();
+ continue;
+ }
+ Err(e) => break Err(e),
+ }
+ }
+ .map_err(|error| {
+ // SAFETY: `new` came from `T::into_foreign` and `xas_store` does not take
+ // ownership of the value on error.
let value = unsafe { T::from_foreign(new) };
StoreError { value, error }
})
diff --git a/rust/kernel/xarray/entry.rs b/rust/kernel/xarray/entry.rs
index 1b1c21bed7022..ff500be3832b7 100644
--- a/rust/kernel/xarray/entry.rs
+++ b/rust/kernel/xarray/entry.rs
@@ -3,6 +3,7 @@
use super::{
Guard,
StoreError,
+ XArraySheaf,
XArrayState, //
};
use core::ptr::NonNull;
@@ -29,9 +30,9 @@ impl<T: ForeignOwnable> Entry<'_, '_, T> {
/// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
/// let mut guard = xa.lock();
///
- ///
/// let entry = guard.entry(42);
/// assert_eq!(entry.is_occupied(), false);
+ /// drop(entry);
///
/// guard.store(42, KBox::new(0x1337u32, GFP_KERNEL)?, GFP_KERNEL)?;
/// let entry = guard.entry(42);
@@ -64,7 +65,8 @@ pub(crate) fn new(guard: &'b mut Guard<'a, T>, index: usize) -> Self {
/// Returns a reference to the newly inserted value.
///
/// - This method will fail if the nodes on the path to the index
- /// represented by this entry are not present in the XArray.
+ /// represented by this entry are not present in the XArray and no memory
+ /// is available via the `preload` argument.
/// - This method will not drop the XArray lock.
///
///
@@ -79,7 +81,7 @@ pub(crate) fn new(guard: &'b mut Guard<'a, T>, index: usize) -> Self {
///
/// if let Entry::Vacant(entry) = guard.entry(42) {
/// let value = KBox::new(0x1337u32, GFP_KERNEL)?;
- /// let borrowed = entry.insert(value)?;
+ /// let borrowed = entry.insert(value, None)?;
/// assert_eq!(*borrowed, 0x1337);
/// }
///
@@ -87,8 +89,12 @@ pub(crate) fn new(guard: &'b mut Guard<'a, T>, index: usize) -> Self {
///
/// # Ok::<(), kernel::error::Error>(())
/// ```
- pub fn insert(mut self, value: T) -> Result<T::BorrowedMut<'b>, StoreError<T>> {
- let new = self.state.insert(value)?;
+ pub fn insert(
+ mut self,
+ value: T,
+ preload: Option<&mut XArraySheaf<'_>>,
+ ) -> Result<T::BorrowedMut<'b>, StoreError<T>> {
+ let new = self.state.insert(value, preload)?;
// SAFETY: `new` came from `T::into_foreign`. The entry has exclusive
// ownership of `new` as it holds a mutable reference to `Guard`.
@@ -98,7 +104,8 @@ pub fn insert(mut self, value: T) -> Result<T::BorrowedMut<'b>, StoreError<T>> {
/// Inserts a value and returns an occupied entry representing the newly inserted value.
///
/// - This method will fail if the nodes on the path to the index
- /// represented by this entry are not present in the XArray.
+ /// represented by this entry are not present in the XArray and no memory
+ /// is available via the `preload` argument.
/// - This method will not drop the XArray lock.
///
/// # Examples
@@ -112,7 +119,7 @@ pub fn insert(mut self, value: T) -> Result<T::BorrowedMut<'b>, StoreError<T>> {
///
/// if let Entry::Vacant(entry) = guard.entry(42) {
/// let value = KBox::new(0x1337u32, GFP_KERNEL)?;
- /// let occupied = entry.insert_entry(value)?;
+ /// let occupied = entry.insert_entry(value, None)?;
/// assert_eq!(occupied.index(), 42);
/// }
///
@@ -120,8 +127,12 @@ pub fn insert(mut self, value: T) -> Result<T::BorrowedMut<'b>, StoreError<T>> {
///
/// # Ok::<(), kernel::error::Error>(())
/// ```
- pub fn insert_entry(mut self, value: T) -> Result<OccupiedEntry<'a, 'b, T>, StoreError<T>> {
- let new = self.state.insert(value)?;
+ pub fn insert_entry(
+ mut self,
+ value: T,
+ preload: Option<&mut XArraySheaf<'_>>,
+ ) -> Result<OccupiedEntry<'a, 'b, T>, StoreError<T>> {
+ let new = self.state.insert(value, preload)?;
Ok(OccupiedEntry::<'a, 'b, T> {
state: self.state,
--
2.51.2
^ permalink raw reply [flat|nested] 52+ messages in thread