* [PATCH v2 01/11] rust: xarray: minor formatting fixes
2026-02-06 21:10 [PATCH v2 00/11] rust: xarray: add entry API with preloading Andreas Hindborg
@ 2026-02-06 21:10 ` Andreas Hindborg
2026-02-06 21:10 ` [PATCH v2 02/11] rust: xarray: add debug format for `StoreError` Andreas Hindborg
` (9 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Andreas Hindborg @ 2026-02-06 21:10 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm, Andreas Hindborg
Fix formatting in xarray module to comply with kernel coding
guidelines:
- Update use clauses to use vertical layout with each import on its
own line.
- Add trailing empty comments to preserve formatting and prevent
rustfmt from collapsing imports.
- Break long assert_eq! statement in documentation across multiple
lines for better readability.
Reviewed-by: Gary Guo <gary@garyguo.net>
Reviewed-by: Tamir Duberstein <tamird@gmail.com>
Acked-by: Tamir Duberstein <tamird@gmail.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/kernel/xarray.rs | 36 +++++++++++++++++++++++++++++-------
1 file changed, 29 insertions(+), 7 deletions(-)
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index a49d6db288458..88625c9abf4ef 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -4,14 +4,33 @@
//!
//! C header: [`include/linux/xarray.h`](srctree/include/linux/xarray.h)
-use crate::{
- alloc, bindings, build_assert,
- error::{Error, Result},
+use core::{
+ iter,
+ marker::PhantomData,
+ pin::Pin,
+ ptr::NonNull, //
+};
+use kernel::{
+ alloc,
+ bindings,
+ build_assert, //
+ error::{
+ Error,
+ Result, //
+ },
ffi::c_void,
- types::{ForeignOwnable, NotThreadSafe, Opaque},
+ types::{
+ ForeignOwnable,
+ NotThreadSafe,
+ Opaque, //
+ },
+};
+use pin_init::{
+ pin_data,
+ pin_init,
+ pinned_drop,
+ PinInit, //
};
-use core::{iter, marker::PhantomData, pin::Pin, ptr::NonNull};
-use pin_init::{pin_data, pin_init, pinned_drop, PinInit};
/// An array which efficiently maps sparse integer indices to owned objects.
///
@@ -44,7 +63,10 @@
/// *guard.get_mut(0).unwrap() = 0xffff;
/// assert_eq!(guard.get(0).copied(), Some(0xffff));
///
-/// assert_eq!(guard.store(0, beef, GFP_KERNEL)?.as_deref().copied(), Some(0xffff));
+/// assert_eq!(
+/// guard.store(0, beef, GFP_KERNEL)?.as_deref().copied(),
+/// Some(0xffff)
+/// );
/// assert_eq!(guard.get(0).copied(), Some(0xbeef));
///
/// guard.remove(0);
--
2.51.2
* [PATCH v2 02/11] rust: xarray: add debug format for `StoreError`
2026-02-06 21:10 [PATCH v2 00/11] rust: xarray: add entry API with preloading Andreas Hindborg
2026-02-06 21:10 ` [PATCH v2 01/11] rust: xarray: minor formatting fixes Andreas Hindborg
@ 2026-02-06 21:10 ` Andreas Hindborg
2026-02-06 21:10 ` [PATCH v2 03/11] rust: xarray: add `contains_index` method Andreas Hindborg
` (8 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Andreas Hindborg @ 2026-02-06 21:10 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm, Andreas Hindborg
Add a `Debug` implementation for `StoreError<T>` to enable better error
reporting and debugging. The implementation only displays the `error`
field and omits the `value` field, as `T` may not implement `Debug`.
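For illustration, a minimal sketch (not part of this patch) of how a caller
could use the new impl; the exact rendering of the `error` field depends on
`Error`'s own `Debug` output:

    // `guard` is a locked `XArray<KBox<u32>>` in this hypothetical caller.
    if let Err(e) = guard.store(0, KBox::new(1u32, GFP_KERNEL)?, GFP_KERNEL) {
        pr_err!("store failed: {:?}\n", e); // prints `StoreError { error: ... }`
    }
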
Reviewed-by: Gary Guo <gary@garyguo.net>
Acked-by: Tamir Duberstein <tamird@gmail.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/kernel/xarray.rs | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index 88625c9abf4ef..d9762c6bef19c 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -193,6 +193,14 @@ pub struct StoreError<T> {
pub value: T,
}
+impl<T> core::fmt::Debug for StoreError<T> {
+ fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
+ f.debug_struct("StoreError")
+ .field("error", &self.error)
+ .finish()
+ }
+}
+
impl<T> From<StoreError<T>> for Error {
fn from(value: StoreError<T>) -> Self {
value.error
--
2.51.2
* [PATCH v2 03/11] rust: xarray: add `contains_index` method
2026-02-06 21:10 [PATCH v2 00/11] rust: xarray: add entry API with preloading Andreas Hindborg
2026-02-06 21:10 ` [PATCH v2 01/11] rust: xarray: minor formatting fixes Andreas Hindborg
2026-02-06 21:10 ` [PATCH v2 02/11] rust: xarray: add debug format for `StoreError` Andreas Hindborg
@ 2026-02-06 21:10 ` Andreas Hindborg
2026-02-06 21:10 ` [PATCH v2 04/11] rust: xarray: add `XArrayState` Andreas Hindborg
` (7 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Andreas Hindborg @ 2026-02-06 21:10 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm, Andreas Hindborg
Add a convenience method `contains_index` to check whether an element
exists at a given index in the XArray. This method provides a more
ergonomic API compared to calling `get` and checking for `Some`.
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/kernel/xarray.rs | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index d9762c6bef19c..ede48b5e1dba3 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -218,6 +218,27 @@ fn load<F, U>(&self, index: usize, f: F) -> Option<U>
Some(f(ptr))
}
+ /// Checks if the XArray contains an element at the specified index.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{alloc::{flags::GFP_KERNEL, kbox::KBox}, xarray::{AllocKind, XArray}};
+ /// let xa = KBox::pin_init(XArray::new(AllocKind::Alloc), GFP_KERNEL)?;
+ ///
+ /// let mut guard = xa.lock();
+ /// assert_eq!(guard.contains_index(42), false);
+ ///
+ /// guard.store(42, KBox::new(0u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ ///
+ /// assert_eq!(guard.contains_index(42), true);
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn contains_index(&self, index: usize) -> bool {
+ self.get(index).is_some()
+ }
+
/// Provides a reference to the element at the given index.
pub fn get(&self, index: usize) -> Option<T::Borrowed<'_>> {
self.load(index, |ptr| {
--
2.51.2
* [PATCH v2 04/11] rust: xarray: add `XArrayState`
2026-02-06 21:10 [PATCH v2 00/11] rust: xarray: add entry API with preloading Andreas Hindborg
` (2 preceding siblings ...)
2026-02-06 21:10 ` [PATCH v2 03/11] rust: xarray: add `contains_index` method Andreas Hindborg
@ 2026-02-06 21:10 ` Andreas Hindborg
2026-02-06 21:10 ` [PATCH v2 05/11] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load` Andreas Hindborg
` (6 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Andreas Hindborg @ 2026-02-06 21:10 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm, Andreas Hindborg
Add `XArrayState` as internal state for XArray iteration and entry
operations. This struct wraps the C `xa_state` structure and holds a
reference to a `Guard` to ensure exclusive access to the XArray for the
lifetime of the state object.
The `XAS_RESTART` constant is also exposed through the bindings helper
to properly initialize the `xa_node` field.
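As a preview of how the new type is intended to be used (a minimal sketch
mirroring the `Guard::load` rework later in this series):

    // Inside `impl<'a, T: ForeignOwnable> Guard<'a, T>`; the guard holds the
    // xarray lock, so the state can call `xas_*` functions directly.
    fn load(&self, index: usize) -> Option<NonNull<c_void>> {
        XArrayState::new(self, index).load()
    }
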
The struct and its constructor are marked with `#[expect(dead_code)]` as
there are no users yet. We will remove this annotation in a later patch.
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/bindings/bindings_helper.h | 1 +
rust/kernel/xarray.rs | 41 ++++++++++++++++++++++++++++++++++++++++-
2 files changed, 41 insertions(+), 1 deletion(-)
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index a067038b4b422..58605c32e8102 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -117,6 +117,7 @@ const xa_mark_t RUST_CONST_HELPER_XA_PRESENT = XA_PRESENT;
const gfp_t RUST_CONST_HELPER_XA_FLAGS_ALLOC = XA_FLAGS_ALLOC;
const gfp_t RUST_CONST_HELPER_XA_FLAGS_ALLOC1 = XA_FLAGS_ALLOC1;
+const size_t RUST_CONST_HELPER_XAS_RESTART = (size_t)XAS_RESTART;
const vm_flags_t RUST_CONST_HELPER_VM_MERGEABLE = VM_MERGEABLE;
const vm_flags_t RUST_CONST_HELPER_VM_READ = VM_READ;
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index ede48b5e1dba3..d1246ec114898 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -8,7 +8,10 @@
iter,
marker::PhantomData,
pin::Pin,
- ptr::NonNull, //
+ ptr::{
+ null_mut,
+ NonNull, //
+ },
};
use kernel::{
alloc,
@@ -319,6 +322,42 @@ pub fn store(
}
}
+/// Internal state for XArray iteration and entry operations.
+///
+/// # Invariants
+///
+/// - `state` is always a valid `bindings::xa_state`.
+#[expect(dead_code)]
+pub(crate) struct XArrayState<'a, 'b, T: ForeignOwnable> {
+ /// Holds a reference to the lock guard to ensure the lock is not dropped
+ /// while `Self` is live.
+ _access: PhantomData<&'b Guard<'a, T>>,
+ state: bindings::xa_state,
+}
+
+impl<'a, 'b, T: ForeignOwnable> XArrayState<'a, 'b, T> {
+ #[expect(dead_code)]
+ fn new(access: &'b Guard<'a, T>, index: usize) -> Self {
+ let ptr = access.xa.xa.get();
+ // INVARIANT: We initialize `self.state` to a valid value below.
+ Self {
+ _access: PhantomData,
+ state: bindings::xa_state {
+ xa: ptr,
+ xa_index: index,
+ xa_shift: 0,
+ xa_sibs: 0,
+ xa_offset: 0,
+ xa_pad: 0,
+ xa_node: bindings::XAS_RESTART as *mut bindings::xa_node,
+ xa_alloc: null_mut(),
+ xa_update: None,
+ xa_lru: null_mut(),
+ },
+ }
+ }
+}
+
// SAFETY: `XArray<T>` has no shared mutable state so it is `Send` iff `T` is `Send`.
unsafe impl<T: ForeignOwnable + Send> Send for XArray<T> {}
--
2.51.2
* [PATCH v2 05/11] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load`
2026-02-06 21:10 [PATCH v2 00/11] rust: xarray: add entry API with preloading Andreas Hindborg
` (3 preceding siblings ...)
2026-02-06 21:10 ` [PATCH v2 04/11] rust: xarray: add `XArrayState` Andreas Hindborg
@ 2026-02-06 21:10 ` Andreas Hindborg
2026-02-06 21:10 ` [PATCH v2 06/11] rust: xarray: simplify `Guard::load` Andreas Hindborg
` (5 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Andreas Hindborg @ 2026-02-06 21:10 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm, Andreas Hindborg
Replace the call to `xa_load` with `xas_load` in `Guard::load`. The
`xa_load` function takes the RCU lock internally, which we do not need,
since the `Guard` already holds an exclusive lock on the `XArray`. The
`xas_load` function operates on `xa_state` and assumes the required locks
are already held.
This change also removes the `#[expect(dead_code)]` annotation from
`XArrayState` and its constructor, as they are now in use.
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/kernel/xarray.rs | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index d1246ec114898..eadddafb180ec 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -215,10 +215,8 @@ fn load<F, U>(&self, index: usize, f: F) -> Option<U>
where
F: FnOnce(NonNull<c_void>) -> U,
{
- // SAFETY: `self.xa.xa` is always valid by the type invariant.
- let ptr = unsafe { bindings::xa_load(self.xa.xa.get(), index) };
- let ptr = NonNull::new(ptr.cast())?;
- Some(f(ptr))
+ let mut state = XArrayState::new(self, index);
+ Some(f(state.load()?))
}
/// Checks if the XArray contains an element at the specified index.
@@ -327,7 +325,6 @@ pub fn store(
/// # Invariants
///
/// - `state` is always a valid `bindings::xa_state`.
-#[expect(dead_code)]
pub(crate) struct XArrayState<'a, 'b, T: ForeignOwnable> {
/// Holds a reference to the lock guard to ensure the lock is not dropped
/// while `Self` is live.
@@ -336,7 +333,6 @@ pub(crate) struct XArrayState<'a, 'b, T: ForeignOwnable> {
}
impl<'a, 'b, T: ForeignOwnable> XArrayState<'a, 'b, T> {
- #[expect(dead_code)]
fn new(access: &'b Guard<'a, T>, index: usize) -> Self {
let ptr = access.xa.xa.get();
// INVARIANT: We initialize `self.state` to a valid value below.
@@ -356,6 +352,13 @@ fn new(access: &'b Guard<'a, T>, index: usize) -> Self {
},
}
}
+
+ fn load(&mut self) -> Option<NonNull<c_void>> {
+ // SAFETY: `self.state` is always valid by the type invariant of
+ // `XArrayState` and we hold the xarray lock.
+ let ptr = unsafe { bindings::xas_load(&raw mut self.state) };
+ NonNull::new(ptr.cast())
+ }
}
// SAFETY: `XArray<T>` has no shared mutable state so it is `Send` iff `T` is `Send`.
--
2.51.2
* [PATCH v2 06/11] rust: xarray: simplify `Guard::load`
2026-02-06 21:10 [PATCH v2 00/11] rust: xarray: add entry API with preloading Andreas Hindborg
` (4 preceding siblings ...)
2026-02-06 21:10 ` [PATCH v2 05/11] rust: xarray: use `xas_load` instead of `xa_load` in `Guard::load` Andreas Hindborg
@ 2026-02-06 21:10 ` Andreas Hindborg
2026-02-06 21:10 ` [PATCH v2 07/11] rust: xarray: add `find_next` and `find_next_mut` Andreas Hindborg
` (4 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Andreas Hindborg @ 2026-02-06 21:10 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm, Andreas Hindborg
Simplify the implementation by removing the closure-based API from
`Guard::load` in favor of returning `Option<NonNull<c_void>>` directly.
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/kernel/xarray.rs | 23 +++++++++--------------
1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index eadddafb180ec..e654bf56dc97c 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -211,12 +211,8 @@ fn from(value: StoreError<T>) -> Self {
}
impl<'a, T: ForeignOwnable> Guard<'a, T> {
- fn load<F, U>(&self, index: usize, f: F) -> Option<U>
- where
- F: FnOnce(NonNull<c_void>) -> U,
- {
- let mut state = XArrayState::new(self, index);
- Some(f(state.load()?))
+ fn load(&self, index: usize) -> Option<NonNull<c_void>> {
+ XArrayState::new(self, index).load()
}
/// Checks if the XArray contains an element at the specified index.
@@ -242,18 +238,17 @@ pub fn contains_index(&self, index: usize) -> bool {
/// Provides a reference to the element at the given index.
pub fn get(&self, index: usize) -> Option<T::Borrowed<'_>> {
- self.load(index, |ptr| {
- // SAFETY: `ptr` came from `T::into_foreign`.
- unsafe { T::borrow(ptr.as_ptr()) }
- })
+ let ptr = self.load(index)?;
+ // SAFETY: `ptr` came from `T::into_foreign`.
+ Some(unsafe { T::borrow(ptr.as_ptr()) })
}
/// Provides a mutable reference to the element at the given index.
pub fn get_mut(&mut self, index: usize) -> Option<T::BorrowedMut<'_>> {
- self.load(index, |ptr| {
- // SAFETY: `ptr` came from `T::into_foreign`.
- unsafe { T::borrow_mut(ptr.as_ptr()) }
- })
+ let ptr = self.load(index)?;
+
+ // SAFETY: `ptr` came from `T::into_foreign`.
+ Some(unsafe { T::borrow_mut(ptr.as_ptr()) })
}
/// Removes and returns the element at the given index.
--
2.51.2
* [PATCH v2 07/11] rust: xarray: add `find_next` and `find_next_mut`
2026-02-06 21:10 [PATCH v2 00/11] rust: xarray: add entry API with preloading Andreas Hindborg
` (5 preceding siblings ...)
2026-02-06 21:10 ` [PATCH v2 06/11] rust: xarray: simplify `Guard::load` Andreas Hindborg
@ 2026-02-06 21:10 ` Andreas Hindborg
2026-02-06 21:10 ` [PATCH v2 08/11] rust: xarray: add entry API Andreas Hindborg
` (3 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Andreas Hindborg @ 2026-02-06 21:10 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm, Andreas Hindborg
Add methods to find the next element in an XArray starting from a
given index. The methods return a tuple containing the index where the
element was found and a reference to the element.
The implementation uses the XArray state API via `xas_find` to avoid taking
the RCU lock, as an exclusive lock is already held by `Guard`.
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/kernel/xarray.rs | 68 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 68 insertions(+)
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index e654bf56dc97c..656ec897a0c41 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -251,6 +251,67 @@ pub fn get_mut(&mut self, index: usize) -> Option<T::BorrowedMut<'_>> {
Some(unsafe { T::borrow_mut(ptr.as_ptr()) })
}
+ fn load_next(&self, index: usize) -> Option<(usize, NonNull<c_void>)> {
+ XArrayState::new(self, index).load_next()
+ }
+
+ /// Finds the next element starting from the given index.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(10, KBox::new(10u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ /// guard.store(20, KBox::new(20u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ ///
+ /// if let Some((found_index, value)) = guard.find_next(11) {
+ /// assert_eq!(found_index, 20);
+ /// assert_eq!(*value, 20);
+ /// }
+ ///
+ /// if let Some((found_index, value)) = guard.find_next(5) {
+ /// assert_eq!(found_index, 10);
+ /// assert_eq!(*value, 10);
+ /// }
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn find_next(&self, index: usize) -> Option<(usize, T::Borrowed<'_>)> {
+ self.load_next(index)
+ // SAFETY: `ptr` came from `T::into_foreign`.
+ .map(|(index, ptr)| (index, unsafe { T::borrow(ptr.as_ptr()) }))
+ }
+
+ /// Finds the next element starting from the given index, returning a mutable reference.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(10, KBox::new(10u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ /// guard.store(20, KBox::new(20u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ ///
+ /// if let Some((found_index, mut_value)) = guard.find_next_mut(5) {
+ /// assert_eq!(found_index, 10);
+ /// *mut_value = 0x99;
+ /// }
+ ///
+ /// assert_eq!(guard.get(10).copied(), Some(0x99));
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn find_next_mut(&mut self, index: usize) -> Option<(usize, T::BorrowedMut<'_>)> {
+ self.load_next(index)
+ // SAFETY: `ptr` came from `T::into_foreign`.
+ .map(move |(index, ptr)| (index, unsafe { T::borrow_mut(ptr.as_ptr()) }))
+ }
+
/// Removes and returns the element at the given index.
pub fn remove(&mut self, index: usize) -> Option<T> {
// SAFETY:
@@ -354,6 +415,13 @@ fn load(&mut self) -> Option<NonNull<c_void>> {
let ptr = unsafe { bindings::xas_load(&raw mut self.state) };
NonNull::new(ptr.cast())
}
+
+ fn load_next(&mut self) -> Option<(usize, NonNull<c_void>)> {
+ // SAFETY: `self.state` is always valid by the type invariant of
+ // `XArrayState` and we hold the xarray lock.
+ let ptr = unsafe { bindings::xas_find(&raw mut self.state, usize::MAX) };
+ NonNull::new(ptr).map(|ptr| (self.state.xa_index, ptr))
+ }
}
// SAFETY: `XArray<T>` has no shared mutable state so it is `Send` iff `T` is `Send`.
--
2.51.2
* [PATCH v2 08/11] rust: xarray: add entry API
2026-02-06 21:10 [PATCH v2 00/11] rust: xarray: add entry API with preloading Andreas Hindborg
` (6 preceding siblings ...)
2026-02-06 21:10 ` [PATCH v2 07/11] rust: xarray: add `find_next` and `find_next_mut` Andreas Hindborg
@ 2026-02-06 21:10 ` Andreas Hindborg
2026-02-06 21:10 ` [PATCH v2 09/11] rust: mm: add abstractions for allocating from a `sheaf` Andreas Hindborg
` (2 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Andreas Hindborg @ 2026-02-06 21:10 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm, Andreas Hindborg
Add an Entry API for XArray that provides ergonomic access to array
slots that may be vacant or occupied. The API follows the pattern of
Rust's standard library HashMap entry API, allowing efficient
conditional insertion and modification of entries.
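For example, a hedged sketch (illustrative only, not taken from the diff) of
the update-or-insert pattern this enables, with `guard` being a locked
`XArray<KBox<u32>>`:

    match guard.entry(index) {
        Entry::Occupied(mut entry) => *entry += 1,
        Entry::Vacant(entry) => {
            entry.insert(KBox::new(1u32, GFP_KERNEL)?)?;
        }
    }
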
The implementation uses the XArray state API (`xas_*` functions) for
efficient operations without requiring multiple lookups. Helper
functions are added to rust/helpers/xarray.c to wrap C macros that are
not directly accessible from Rust.
Also update MAINTAINERS to cover the new rust files.
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
MAINTAINERS | 1 +
rust/helpers/xarray.c | 17 ++
rust/kernel/xarray.rs | 123 +++++++++++++++
rust/kernel/xarray/entry.rs | 367 ++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 508 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 0efa8cc6775b7..8202515c6065b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -28361,6 +28361,7 @@ B: https://github.com/Rust-for-Linux/linux/issues
C: https://rust-for-linux.zulipchat.com
T: git https://github.com/Rust-for-Linux/linux.git xarray-next
F: rust/kernel/xarray.rs
+F: rust/kernel/xarray/
XBOX DVD IR REMOTE
M: Benjamin Valentin <benpicco@googlemail.com>
diff --git a/rust/helpers/xarray.c b/rust/helpers/xarray.c
index 60b299f11451d..425a6cc494734 100644
--- a/rust/helpers/xarray.c
+++ b/rust/helpers/xarray.c
@@ -26,3 +26,20 @@ void rust_helper_xa_unlock(struct xarray *xa)
{
return xa_unlock(xa);
}
+
+void *rust_helper_xas_result(struct xa_state *xas, void *curr)
+{
+ if (xa_err(xas->xa_node))
+ curr = xas->xa_node;
+ return curr;
+}
+
+void *rust_helper_xa_zero_to_null(void *entry)
+{
+ return xa_is_zero(entry) ? NULL : entry;
+}
+
+int rust_helper_xas_error(const struct xa_state *xas)
+{
+ return xas_error(xas);
+}
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index 656ec897a0c41..8c10e8fd76f15 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -13,11 +13,17 @@
NonNull, //
},
};
+pub use entry::{
+ Entry,
+ OccupiedEntry,
+ VacantEntry, //
+};
use kernel::{
alloc,
bindings,
build_assert, //
error::{
+ to_result,
Error,
Result, //
},
@@ -251,6 +257,35 @@ pub fn get_mut(&mut self, index: usize) -> Option<T::BorrowedMut<'_>> {
Some(unsafe { T::borrow_mut(ptr.as_ptr()) })
}
+ /// Gets an entry for the specified index, which can be vacant or occupied.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// assert_eq!(guard.contains_index(42), false);
+ ///
+ /// match guard.entry(42) {
+ /// Entry::Vacant(entry) => {
+ /// entry.insert(KBox::new(0x1337u32, GFP_KERNEL)?)?;
+ /// }
+ /// Entry::Occupied(_) => unreachable!("We did not insert an entry yet"),
+ /// }
+ ///
+ /// assert_eq!(guard.get(42), Some(&0x1337));
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn entry<'b>(&'b mut self, index: usize) -> Entry<'a, 'b, T> {
+ match self.load(index) {
+ None => Entry::Vacant(VacantEntry::new(self, index)),
+ Some(ptr) => Entry::Occupied(OccupiedEntry::new(self, index, ptr)),
+ }
+ }
+
fn load_next(&self, index: usize) -> Option<(usize, NonNull<c_void>)> {
XArrayState::new(self, index).load_next()
}
@@ -312,6 +347,72 @@ pub fn find_next_mut(&mut self, index: usize) -> Option<(usize, T::BorrowedMut<'
.map(move |(index, ptr)| (index, unsafe { T::borrow_mut(ptr.as_ptr()) }))
}
+ /// Finds the next occupied entry starting from the given index.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(10, KBox::new(10u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ /// guard.store(20, KBox::new(20u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ ///
+ /// if let Some(entry) = guard.find_next_entry(5) {
+ /// assert_eq!(entry.index(), 10);
+ /// let value = entry.remove();
+ /// assert_eq!(*value, 10);
+ /// }
+ ///
+ /// assert_eq!(guard.get(10), None);
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn find_next_entry<'b>(&'b mut self, index: usize) -> Option<OccupiedEntry<'a, 'b, T>> {
+ let mut state = XArrayState::new(self, index);
+ let (_, ptr) = state.load_next()?;
+ Some(OccupiedEntry { state, ptr })
+ }
+
+ /// Finds the next occupied entry starting at the given index, wrapping around.
+ ///
+ /// Searches for an entry starting at `index` up to the maximum index. If no entry
+ /// is found, wraps around and searches from index 0 up to `index`.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(100, KBox::new(42u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ /// let entry = guard.find_next_entry_circular(101);
+ /// assert_eq!(entry.map(|e| e.index()), Some(100));
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn find_next_entry_circular<'b>(
+ &'b mut self,
+ index: usize,
+ ) -> Option<OccupiedEntry<'a, 'b, T>> {
+ let mut state = XArrayState::new(self, index);
+
+ // SAFETY: `state.state` is properly initialized by XArrayState::new and the caller holds
+ // the lock.
+ let ptr = NonNull::new(unsafe { bindings::xas_find(&mut state.state, usize::MAX) })
+ .or_else(|| {
+ state.state.xa_node = bindings::XAS_RESTART as *mut bindings::xa_node;
+ state.state.xa_index = 0;
+ // SAFETY: `state.state` is properly initialized and by type invariant, we hold the
+ // xarray lock.
+ NonNull::new(unsafe { bindings::xas_find(&mut state.state, index) })
+ })?;
+
+ Some(OccupiedEntry { state, ptr })
+ }
+
/// Removes and returns the element at the given index.
pub fn remove(&mut self, index: usize) -> Option<T> {
// SAFETY:
@@ -422,8 +523,30 @@ fn load_next(&mut self) -> Option<(usize, NonNull<c_void>)> {
let ptr = unsafe { bindings::xas_find(&raw mut self.state, usize::MAX) };
NonNull::new(ptr).map(|ptr| (self.state.xa_index, ptr))
}
+
+ fn status(&self) -> Result {
+ // SAFETY: `self.state` is properly initialized and valid.
+ to_result(unsafe { bindings::xas_error(&self.state) })
+ }
+
+ fn insert(&mut self, value: T) -> Result<*mut c_void, StoreError<T>> {
+ let new = T::into_foreign(value).cast();
+
+ // SAFETY: `self.state` is properly initialized and `new` came from `T::into_foreign`.
+ // We hold the xarray lock.
+ unsafe { bindings::xas_store(&mut self.state, new) };
+
+ self.status().map(|()| new).map_err(|error| {
+ // SAFETY: `new` came from `T::into_foreign` and `xas_store` does not take ownership of
+ // the value on error.
+ let value = unsafe { T::from_foreign(new) };
+ StoreError { value, error }
+ })
+ }
}
+mod entry;
+
// SAFETY: `XArray<T>` has no shared mutable state so it is `Send` iff `T` is `Send`.
unsafe impl<T: ForeignOwnable + Send> Send for XArray<T> {}
diff --git a/rust/kernel/xarray/entry.rs b/rust/kernel/xarray/entry.rs
new file mode 100644
index 0000000000000..1b1c21bed7022
--- /dev/null
+++ b/rust/kernel/xarray/entry.rs
@@ -0,0 +1,367 @@
+// SPDX-License-Identifier: GPL-2.0
+
+use super::{
+ Guard,
+ StoreError,
+ XArrayState, //
+};
+use core::ptr::NonNull;
+use kernel::{
+ prelude::*,
+ types::ForeignOwnable, //
+};
+
+/// Represents either a vacant or occupied entry in an XArray.
+pub enum Entry<'a, 'b, T: ForeignOwnable> {
+ /// A vacant entry that can have a value inserted.
+ Vacant(VacantEntry<'a, 'b, T>),
+ /// An occupied entry containing a value.
+ Occupied(OccupiedEntry<'a, 'b, T>),
+}
+
+impl<T: ForeignOwnable> Entry<'_, '_, T> {
+ /// Returns true if this entry is occupied.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ ///
+ /// let entry = guard.entry(42);
+ /// assert_eq!(entry.is_occupied(), false);
+ ///
+ /// guard.store(42, KBox::new(0x1337u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ /// let entry = guard.entry(42);
+ /// assert_eq!(entry.is_occupied(), true);
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn is_occupied(&self) -> bool {
+ matches!(self, Entry::Occupied(_))
+ }
+}
+
+/// A view into a vacant entry in an XArray.
+pub struct VacantEntry<'a, 'b, T: ForeignOwnable> {
+ state: XArrayState<'a, 'b, T>,
+}
+
+impl<'a, 'b, T> VacantEntry<'a, 'b, T>
+where
+ T: ForeignOwnable,
+{
+ pub(crate) fn new(guard: &'b mut Guard<'a, T>, index: usize) -> Self {
+ Self {
+ state: XArrayState::new(guard, index),
+ }
+ }
+
+ /// Inserts a value into this vacant entry.
+ ///
+ /// Returns a reference to the newly inserted value.
+ ///
+ /// - This method will fail if the nodes on the path to the index
+ /// represented by this entry are not present in the XArray.
+ /// - This method will not drop the XArray lock.
+ ///
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// assert_eq!(guard.get(42), None);
+ ///
+ /// if let Entry::Vacant(entry) = guard.entry(42) {
+ /// let value = KBox::new(0x1337u32, GFP_KERNEL)?;
+ /// let borrowed = entry.insert(value)?;
+ /// assert_eq!(*borrowed, 0x1337);
+ /// }
+ ///
+ /// assert_eq!(guard.get(42).copied(), Some(0x1337));
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn insert(mut self, value: T) -> Result<T::BorrowedMut<'b>, StoreError<T>> {
+ let new = self.state.insert(value)?;
+
+ // SAFETY: `new` came from `T::into_foreign`. The entry has exclusive
+ // ownership of `new` as it holds a mutable reference to `Guard`.
+ Ok(unsafe { T::borrow_mut(new) })
+ }
+
+ /// Inserts a value and returns an occupied entry representing the newly inserted value.
+ ///
+ /// - This method will fail if the nodes on the path to the index
+ /// represented by this entry are not present in the XArray.
+ /// - This method will not drop the XArray lock.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// assert_eq!(guard.get(42), None);
+ ///
+ /// if let Entry::Vacant(entry) = guard.entry(42) {
+ /// let value = KBox::new(0x1337u32, GFP_KERNEL)?;
+ /// let occupied = entry.insert_entry(value)?;
+ /// assert_eq!(occupied.index(), 42);
+ /// }
+ ///
+ /// assert_eq!(guard.get(42).copied(), Some(0x1337));
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn insert_entry(mut self, value: T) -> Result<OccupiedEntry<'a, 'b, T>, StoreError<T>> {
+ let new = self.state.insert(value)?;
+
+ Ok(OccupiedEntry::<'a, 'b, T> {
+ state: self.state,
+ // SAFETY: `new` came from `T::into_foreign` and is guaranteed non-null.
+ ptr: unsafe { core::ptr::NonNull::new_unchecked(new) },
+ })
+ }
+
+ /// Returns the index of this vacant entry.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// assert_eq!(guard.get(42), None);
+ ///
+ /// if let Entry::Vacant(entry) = guard.entry(42) {
+ /// assert_eq!(entry.index(), 42);
+ /// }
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn index(&self) -> usize {
+ self.state.state.xa_index
+ }
+}
+
+/// A view into an occupied entry in an XArray.
+pub struct OccupiedEntry<'a, 'b, T: ForeignOwnable> {
+ pub(crate) state: XArrayState<'a, 'b, T>,
+ pub(crate) ptr: NonNull<c_void>,
+}
+
+impl<'a, 'b, T> OccupiedEntry<'a, 'b, T>
+where
+ T: ForeignOwnable,
+{
+ pub(crate) fn new(guard: &'b mut Guard<'a, T>, index: usize, ptr: NonNull<c_void>) -> Self {
+ Self {
+ state: XArrayState::new(guard, index),
+ ptr,
+ }
+ }
+
+ /// Removes the value from this occupied entry and returns it, consuming the entry.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(42, KBox::new(0x1337u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ /// assert_eq!(guard.get(42).copied(), Some(0x1337));
+ ///
+ /// if let Entry::Occupied(entry) = guard.entry(42) {
+ /// let value = entry.remove();
+ /// assert_eq!(*value, 0x1337);
+ /// }
+ ///
+ /// assert_eq!(guard.get(42), None);
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn remove(mut self) -> T {
+ // SAFETY: `self.state.state` is properly initialized and valid for XAS operations.
+ let ptr = unsafe {
+ bindings::xas_result(
+ &mut self.state.state,
+ bindings::xa_zero_to_null(bindings::xas_store(
+ &mut self.state.state,
+ core::ptr::null_mut(),
+ )),
+ )
+ };
+
+ // SAFETY: `ptr` is a valid return value from xas_result.
+ let errno = unsafe { bindings::xa_err(ptr) };
+
+ // NOTE: Storing NULL to an occupied slot never fails. This is by design
+ // of the xarray data structure. If a slot is occupied, a store is a
+ // simple pointer swap.
+ debug_assert!(errno == 0);
+
+ // SAFETY:
+ // - `ptr` came from `T::into_foreign`.
+ // - As this method takes self by value, the lifetimes of any [`T::Borrowed`] and
+ // [`T::BorrowedMut`] we have created must have ended.
+ unsafe { T::from_foreign(ptr.cast()) }
+ }
+
+ /// Returns the index of this occupied entry.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(42, KBox::new(0x1337u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ ///
+ /// if let Entry::Occupied(entry) = guard.entry(42) {
+ /// assert_eq!(entry.index(), 42);
+ /// }
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn index(&self) -> usize {
+ self.state.state.xa_index
+ }
+
+ /// Replaces the value in this occupied entry and returns the old value.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(42, KBox::new(0x1337u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ ///
+ /// if let Entry::Occupied(mut entry) = guard.entry(42) {
+ /// let new_value = KBox::new(0x9999u32, GFP_KERNEL)?;
+ /// let old_value = entry.insert(new_value);
+ /// assert_eq!(*old_value, 0x1337);
+ /// }
+ ///
+ /// assert_eq!(guard.get(42).copied(), Some(0x9999));
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn insert(&mut self, value: T) -> T {
+ let new = T::into_foreign(value).cast();
+ // SAFETY: `new` came from `T::into_foreign` and is guaranteed non-null.
+ self.ptr = unsafe { NonNull::new_unchecked(new) };
+
+ // SAFETY: `self.state.state` is properly initialized and valid for XAS operations.
+ let old = unsafe {
+ bindings::xas_result(
+ &mut self.state.state,
+ bindings::xa_zero_to_null(bindings::xas_store(&mut self.state.state, new)),
+ )
+ };
+
+ // SAFETY: `old` is a valid return value from xas_result.
+ let errno = unsafe { bindings::xa_err(old) };
+
+ // NOTE: Storing to an occupied slot never fails. This is by design
+ // of the xarray data structure. If a slot is occupied, a store is a
+ // simple pointer swap.
+ debug_assert!(errno == 0);
+
+ // SAFETY:
+ // - `old` came from `T::into_foreign`.
+ // - As this method takes `&mut self`, the lifetimes of any [`T::Borrowed`] and
+ // [`T::BorrowedMut`] we have created must have ended.
+ unsafe { T::from_foreign(old) }
+ }
+
+ /// Converts this occupied entry into a mutable reference to the value in the slot represented
+ /// by the entry.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(42, KBox::new(0x1337u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ ///
+ /// if let Entry::Occupied(entry) = guard.entry(42) {
+ /// let value_ref = entry.into_mut();
+ /// *value_ref = 0x9999;
+ /// }
+ ///
+ /// assert_eq!(guard.get(42).copied(), Some(0x9999));
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn into_mut(self) -> T::BorrowedMut<'b> {
+ // SAFETY: `ptr` came from `T::into_foreign`.
+ unsafe { T::borrow_mut(self.ptr.as_ptr()) }
+ }
+
+ /// Swaps the value in this entry with the provided value.
+ ///
+ /// Returns the old value that was in the entry.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray, Entry}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// guard.store(42, KBox::new(100u32, GFP_KERNEL)?, GFP_KERNEL)?;
+ ///
+ /// if let Entry::Occupied(mut entry) = guard.entry(42) {
+ /// let mut other = 200u32;
+ /// entry.swap(&mut other);
+ /// assert_eq!(other, 100);
+ /// assert_eq!(*entry, 200);
+ /// }
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn swap<U>(&mut self, other: &mut U)
+ where
+ T: for<'c> ForeignOwnable<Borrowed<'c> = &'c U, BorrowedMut<'c> = &'c mut U>,
+ {
+ use core::ops::DerefMut;
+ core::mem::swap(self.deref_mut(), other);
+ }
+}
+
+impl<T, U> core::ops::Deref for OccupiedEntry<'_, '_, T>
+where
+ T: for<'a> ForeignOwnable<Borrowed<'a> = &'a U, BorrowedMut<'a> = &'a mut U>,
+{
+ type Target = U;
+
+ fn deref(&self) -> &Self::Target {
+ // SAFETY: `ptr` came from `T::into_foreign`.
+ unsafe { T::borrow(self.ptr.as_ptr()) }
+ }
+}
+
+impl<T, U> core::ops::DerefMut for OccupiedEntry<'_, '_, T>
+where
+ T: for<'a> ForeignOwnable<Borrowed<'a> = &'a U, BorrowedMut<'a> = &'a mut U>,
+{
+ fn deref_mut(&mut self) -> &mut Self::Target {
+ // SAFETY: `ptr` came from `T::into_foreign`.
+ unsafe { T::borrow_mut(self.ptr.as_ptr()) }
+ }
+}
--
2.51.2
* [PATCH v2 09/11] rust: mm: add abstractions for allocating from a `sheaf`
2026-02-06 21:10 [PATCH v2 00/11] rust: xarray: add entry API with preloading Andreas Hindborg
` (7 preceding siblings ...)
2026-02-06 21:10 ` [PATCH v2 08/11] rust: xarray: add entry API Andreas Hindborg
@ 2026-02-06 21:10 ` Andreas Hindborg
2026-02-06 21:10 ` [PATCH v2 10/11] rust: mm: sheaf: allow use of C initialized static caches Andreas Hindborg
2026-02-06 21:10 ` [PATCH v2 11/11] rust: xarray: add preload API Andreas Hindborg
10 siblings, 0 replies; 14+ messages in thread
From: Andreas Hindborg @ 2026-02-06 21:10 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm,
Andreas Hindborg, Matthew Wilcox (Oracle)
Add Rust APIs for allocating objects from a `sheaf`.
Introduce a reduced abstraction `KMemCache` for `struct kmem_cache` to
support management of `Sheaf`s.
Initialize objects using in-place initialization when they are allocated
from a `Sheaf`. This differs from C, which tends to do some initialization
when the cache is filled. This approach is chosen because `struct
kmem_cache` has no destructor/drop capability that can be invoked when the
cache is dropped.
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: linux-mm@kvack.org
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/kernel/mm.rs | 2 +
rust/kernel/mm/sheaf.rs | 406 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 408 insertions(+)
diff --git a/rust/kernel/mm.rs b/rust/kernel/mm.rs
index 4764d7b68f2a7..fcfa5a97ebf0a 100644
--- a/rust/kernel/mm.rs
+++ b/rust/kernel/mm.rs
@@ -18,6 +18,8 @@
};
use core::{ops::Deref, ptr::NonNull};
+#[cfg(not(any(CONFIG_SLUB_TINY, CONFIG_SLUB_DEBUG)))]
+pub mod sheaf;
pub mod virt;
use virt::VmaRef;
diff --git a/rust/kernel/mm/sheaf.rs b/rust/kernel/mm/sheaf.rs
new file mode 100644
index 0000000000000..c92750eaf1c4a
--- /dev/null
+++ b/rust/kernel/mm/sheaf.rs
@@ -0,0 +1,406 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Slub allocator sheaf abstraction.
+//!
+//! Sheaves are percpu array-based caching layers for the slub allocator.
+//! They provide a mechanism for pre-allocating objects that can later
+//! be retrieved without risking allocation failure, making them useful in
+//! contexts where memory allocation must be guaranteed to succeed.
+//!
+//! The term "sheaf" is the english word for a bundle of straw. In this context
+//! it means a bundle of pre-allocated objects. A per-NUMA-node cache of sheaves
+//! is called a "barn". Because you store your sheafs in barns.
+//!
+//! # Use cases
+//!
+//! Sheaves are particularly useful when:
+//!
+//! - Allocations must be guaranteed to succeed in a restricted context (e.g.,
+//! while holding locks or in atomic context).
+//! - Multiple allocations need to be performed as a batch operation.
+//! - Fast-path allocation performance is critical, as sheaf allocations avoid
+//! atomic operations by using local locks with preemption disabled.
+//!
+//! # Architecture
+//!
+//! The sheaf system consists of three main components:
+//!
+//! - [`KMemCache`]: A slab cache configured with sheaf support.
+//! - [`Sheaf`]: A pre-filled container of objects from a specific cache.
+//! - [`SBox`]: An owned allocation from a sheaf, similar to a `Box`.
+//!
+//! # Example
+//!
+//! ```
+//! use kernel::c_str;
+//! use kernel::mm::sheaf::{KMemCache, KMemCacheInit, Sheaf, SBox};
+//! use kernel::prelude::*;
+//!
+//! struct MyObject {
+//! value: u32,
+//! }
+//!
+//! impl KMemCacheInit<MyObject> for MyObject {
+//! fn init() -> impl Init<MyObject> {
+//! init!(MyObject { value: 0 })
+//! }
+//! }
+//!
+//! // Create a cache with sheaf capacity of 16 objects.
+//! let cache = KMemCache::<MyObject>::new(c_str!("my_cache"), 16)?;
+//!
+//! // Pre-fill a sheaf with 8 objects.
+//! let mut sheaf = cache.as_arc_borrow().sheaf(8, GFP_KERNEL)?;
+//!
+//! // Allocations from the sheaf are guaranteed to succeed until empty.
+//! let obj = sheaf.alloc().unwrap();
+//!
+//! // Return the sheaf when done, attempting to refill it.
+//! sheaf.return_refill(GFP_KERNEL);
+//! # Ok::<(), Error>(())
+//! ```
+//!
+//! # Constraints
+//!
+//! - Sheaves are disabled when `CONFIG_SLUB_TINY` is enabled.
+//! - Sheaves are disabled when slab debugging (`slub_debug`) is active.
+//! - The sheaf capacity is fixed at cache creation time.
+
+use core::{
+ convert::Infallible,
+ marker::PhantomData,
+ ops::{Deref, DerefMut},
+ ptr::NonNull,
+};
+
+use kernel::prelude::*;
+
+use crate::sync::{Arc, ArcBorrow};
+
+/// A slab cache with sheaf support.
+///
+/// This type wraps a kernel `kmem_cache` configured with a sheaf capacity,
+/// enabling pre-allocation of objects via [`Sheaf`].
+///
+/// For now, this type only exists for sheaf management.
+///
+/// # Type parameter
+///
+/// - `T`: The type of objects managed by this cache. Must implement
+/// [`KMemCacheInit`] to provide initialization logic for new allocations.
+///
+/// # Invariants
+///
+/// - `cache` is a valid pointer to a `kmem_cache` created with
+/// `__kmem_cache_create_args`.
+/// - The cache is valid for the lifetime of this struct.
+pub struct KMemCache<T: KMemCacheInit<T>> {
+ cache: NonNull<bindings::kmem_cache>,
+ _p: PhantomData<T>,
+}
+
+impl<T: KMemCacheInit<T>> KMemCache<T> {
+ /// Creates a new slab cache with sheaf support.
+ ///
+ /// Creates a kernel slab cache for objects of type `T` with the specified
+ /// sheaf capacity. The cache uses the provided `name` for identification
+ /// in `/sys/kernel/slab/` and debugging output.
+ ///
+ /// # Arguments
+ ///
+ /// - `name`: A string identifying the cache. This name appears in sysfs and
+ /// debugging output.
+ /// - `sheaf_capacity`: The maximum number of objects a sheaf from this
+ /// cache can hold. A capacity of zero disables sheaf support.
+ ///
+ /// # Errors
+ ///
+ /// Returns an error if:
+ ///
+ /// - The cache could not be created due to memory pressure.
+ /// - The size of `T` cannot be represented as a `c_uint`.
+ pub fn new(name: &CStr, sheaf_capacity: u32) -> Result<Arc<Self>>
+ where
+ T: KMemCacheInit<T>,
+ {
+ let flags = 0;
+ let mut args: bindings::kmem_cache_args = pin_init::zeroed();
+ args.sheaf_capacity = sheaf_capacity;
+
+ // NOTE: We do not install a cache constructor, because there is no
+ // matching teardown hook on the C side; objects are initialized in `Sheaf::alloc`.
+ args.ctor = None;
+
+ // SAFETY: `name` is a valid C string, `args` is properly initialized,
+ // and the size of `T` has been validated to fit in a `c_uint`.
+ let ptr = unsafe {
+ bindings::__kmem_cache_create_args(
+ name.as_ptr().cast::<u8>(),
+ core::mem::size_of::<T>().try_into()?,
+ &mut args,
+ flags,
+ )
+ };
+
+ // INVARIANT: `ptr` was returned by `__kmem_cache_create_args` and is
+ // non-null (checked below). The cache is valid until
+ // `kmem_cache_destroy` is called in `Drop`.
+ Ok(Arc::new(
+ Self {
+ cache: NonNull::new(ptr).ok_or(ENOMEM)?,
+ _p: PhantomData,
+ },
+ GFP_KERNEL,
+ )?)
+ }
+
+ /// Creates a pre-filled sheaf from this cache.
+ ///
+ /// Allocates a sheaf and pre-fills it with `size` objects. Once created,
+ /// allocations from the sheaf via [`Sheaf::alloc`] are guaranteed to
+ /// succeed until the sheaf is depleted.
+ ///
+ /// # Arguments
+ ///
+ /// - `size`: The number of objects to pre-allocate. Must not exceed the
+ /// cache's `sheaf_capacity`.
+ /// - `gfp`: Allocation flags controlling how memory is obtained. Use
+ /// [`GFP_KERNEL`] for normal allocations that may sleep, or
+ /// [`GFP_NOWAIT`] for non-blocking allocations.
+ ///
+ /// # Errors
+ ///
+ /// Returns [`ENOMEM`] if the sheaf or its objects could not be allocated.
+ ///
+ /// # Warnings
+ ///
+ /// The kernel will warn if `size` exceeds `sheaf_capacity`.
+ pub fn sheaf(
+ self: ArcBorrow<'_, Self>,
+ size: usize,
+ gfp: kernel::alloc::Flags,
+ ) -> Result<Sheaf<T>> {
+ // SAFETY: `self.as_raw()` returns a valid cache pointer, and `size`
+ // has been validated to fit in a `c_uint`.
+ let ptr = unsafe {
+ bindings::kmem_cache_prefill_sheaf(self.as_raw(), gfp.as_raw(), size.try_into()?)
+ };
+
+ // INVARIANT: `ptr` was returned by `kmem_cache_prefill_sheaf` and is
+ // non-null (checked below). `cache` is the cache from which this sheaf
+ // was created. `dropped` is false since the sheaf has not been returned.
+ Ok(Sheaf {
+ sheaf: NonNull::new(ptr).ok_or(ENOMEM)?,
+ cache: self.into(),
+ dropped: false,
+ })
+ }
+
+ fn as_raw(&self) -> *mut bindings::kmem_cache {
+ self.cache.as_ptr()
+ }
+}
+
+impl<T: KMemCacheInit<T>> Drop for KMemCache<T> {
+ fn drop(&mut self) {
+ // SAFETY: `self.as_raw()` returns a valid cache pointer that was
+ // created by `__kmem_cache_create_args`. As all objects allocated from
+ // this hold a reference on `self`, they must have been dropped for this
+ // `drop` method to execute.
+ unsafe { bindings::kmem_cache_destroy(self.as_raw()) };
+ }
+}
+
+/// Trait for types that can be initialized in a slab cache.
+///
+/// This trait provides the initialization logic for objects allocated from a
+/// [`KMemCache`]. When the slab allocator creates new objects, it invokes the
+/// constructor to ensure objects are in a valid initial state.
+///
+/// # Implementation
+///
+/// Implementors must provide [`init`](KMemCacheInit::init), which returns
+/// an in-place initializer for the type.
+///
+/// # Example
+///
+/// ```
+/// use kernel::mm::sheaf::KMemCacheInit;
+/// use kernel::prelude::*;
+///
+/// struct MyData {
+/// counter: u32,
+/// name: [u8; 16],
+/// }
+///
+/// impl KMemCacheInit<MyData> for MyData {
+/// fn init() -> impl Init<MyData> {
+/// init!(MyData {
+/// counter: 0,
+/// name: [0; 16],
+/// })
+/// }
+/// }
+/// ```
+pub trait KMemCacheInit<T> {
+ /// Returns an initializer for creating new objects of type `T`.
+ ///
+ /// This method is called by the allocator's constructor to initialize newly
+ /// allocated objects. The initializer should set all fields to their
+ /// default or initial values.
+ fn init() -> impl Init<T, Infallible>;
+}
+
+/// A pre-filled container of slab objects.
+///
+/// A sheaf holds a set of pre-allocated objects from a [`KMemCache`].
+/// Allocations from a sheaf are guaranteed to succeed until the sheaf is
+/// depleted, making sheaves useful in contexts where allocation failure is
+/// not acceptable.
+///
+/// Sheaves provide faster allocation than direct allocation because they use
+/// local locks with preemption disabled rather than atomic operations.
+///
+/// # Lifecycle
+///
+/// Sheaves are created via [`KMemCache::sheaf`] and should be returned to the
+/// allocator when no longer needed via [`Sheaf::return_refill`]. If a sheaf is
+/// simply dropped, it is returned with `GFP_NOWAIT` flags, which may result in
+/// the sheaf being flushed and freed rather than being cached for reuse.
+///
+/// # Invariants
+///
+/// - `sheaf` is a valid pointer to a `slab_sheaf` obtained from
+/// `kmem_cache_prefill_sheaf`.
+/// - `cache` is the cache from which this sheaf was created.
+/// - `dropped` tracks whether the sheaf has been explicitly returned.
+pub struct Sheaf<T: KMemCacheInit<T>> {
+ sheaf: NonNull<bindings::slab_sheaf>,
+ cache: Arc<KMemCache<T>>,
+ dropped: bool,
+}
+
+impl<T: KMemCacheInit<T>> Sheaf<T> {
+ fn as_raw(&self) -> *mut bindings::slab_sheaf {
+ self.sheaf.as_ptr()
+ }
+
+ /// Return the sheaf and try to refill using `flags`.
+ ///
+ /// If the sheaf cannot simply become the percpu spare sheaf, but there's
+ /// space for a full sheaf in the barn, we try to refill the sheaf back to
+ /// the cache's sheaf_capacity to avoid handling partially full sheaves.
+ ///
+ /// If the refill fails because gfp is e.g. GFP_NOWAIT, or the barn is full,
+ /// the sheaf is instead flushed and freed.
+ pub fn return_refill(mut self, flags: kernel::alloc::Flags) {
+ self.dropped = true;
+ // SAFETY: `self.cache.as_raw()` and `self.as_raw()` return valid
+ // pointers to the cache and sheaf respectively.
+ unsafe {
+ bindings::kmem_cache_return_sheaf(self.cache.as_raw(), flags.as_raw(), self.as_raw())
+ };
+ drop(self);
+ }
+
+ /// Allocates an object from the sheaf.
+ ///
+ /// Returns a new [`SBox`] containing an initialized object, or [`None`]
+ /// if the sheaf is depleted. Allocations are guaranteed to succeed as
+ /// long as the sheaf contains pre-allocated objects.
+ ///
+ /// The `gfp` flags passed to `kmem_cache_alloc_from_sheaf` are set to zero,
+ /// meaning no additional flags like `__GFP_ZERO` or `__GFP_ACCOUNT` are
+ /// applied.
+ ///
+ /// The returned `T` is initialized as part of this function.
+ pub fn alloc(&mut self) -> Option<SBox<T>> {
+ // SAFETY: `self.cache.as_raw()` and `self.as_raw()` return valid
+ // pointers. The function returns NULL when the sheaf is empty.
+ let ptr = unsafe {
+ bindings::kmem_cache_alloc_from_sheaf_noprof(self.cache.as_raw(), 0, self.as_raw())
+ };
+
+ // SAFETY:
+ // - `ptr` is a valid pointer as it was just returned by the cache.
+ // - The initializer is infallible, so an error is never returned.
+ unsafe { T::init().__init(ptr.cast()) }.expect("Initializer is infallible");
+
+ let ptr = NonNull::new(ptr.cast::<T>())?;
+
+ // INVARIANT: `ptr` was returned by `kmem_cache_alloc_from_sheaf_noprof`
+ // and initialized above. `cache` is the cache from which this object
+ // was allocated. The object remains valid until freed in `Drop`.
+ Some(SBox {
+ ptr,
+ cache: self.cache.clone(),
+ })
+ }
+}
+
+impl<T: KMemCacheInit<T>> Drop for Sheaf<T> {
+ fn drop(&mut self) {
+ if !self.dropped {
+ // SAFETY: `self.cache.as_raw()` and `self.as_raw()` return valid
+ // pointers. Using `GFP_NOWAIT` because the drop may occur in a
+ // context where sleeping is not permitted.
+ unsafe {
+ bindings::kmem_cache_return_sheaf(
+ self.cache.as_raw(),
+ GFP_NOWAIT.as_raw(),
+ self.as_raw(),
+ )
+ };
+ }
+ }
+}
+
+/// An owned allocation from a cache sheaf.
+///
+/// `SBox` is similar to `Box` but is backed by a slab cache allocation obtained
+/// through a [`Sheaf`]. It provides owned access to an initialized object and
+/// ensures the object is properly freed back to the cache when dropped.
+///
+/// The contained `T` is initialized when the `SBox` is returned from alloc and
+/// dropped when the `SBox` is dropped.
+///
+/// # Invariants
+///
+/// - `ptr` points to a valid, initialized object of type `T`.
+/// - `cache` is the cache from which this object was allocated.
+/// - The object remains valid for the lifetime of the `SBox`.
+pub struct SBox<T: KMemCacheInit<T>> {
+ ptr: NonNull<T>,
+ cache: Arc<KMemCache<T>>,
+}
+
+impl<T: KMemCacheInit<T>> Deref for SBox<T> {
+ type Target = T;
+
+ fn deref(&self) -> &Self::Target {
+ // SAFETY: `ptr` is valid and properly aligned per the type invariants.
+ unsafe { self.ptr.as_ref() }
+ }
+}
+
+impl<T: KMemCacheInit<T>> DerefMut for SBox<T> {
+ fn deref_mut(&mut self) -> &mut Self::Target {
+ // SAFETY: `ptr` is valid and properly aligned per the type invariants,
+ // and we have exclusive access via `&mut self`.
+ unsafe { self.ptr.as_mut() }
+ }
+}
+
+impl<T: KMemCacheInit<T>> Drop for SBox<T> {
+ fn drop(&mut self) {
+ // SAFETY: By type invariant, `ptr` points to a valid and initialized
+ // object. We do not touch `ptr` after returning it to the cache.
+ unsafe { core::ptr::drop_in_place(self.ptr.as_ptr()) };
+
+ // SAFETY: `self.ptr` was allocated from `self.cache` via
+ // `kmem_cache_alloc_from_sheaf_noprof` and is valid.
+ unsafe {
+ bindings::kmem_cache_free(self.cache.as_raw(), self.ptr.as_ptr().cast());
+ }
+ }
+}
--
2.51.2
* [PATCH v2 10/11] rust: mm: sheaf: allow use of C initialized static caches
2026-02-06 21:10 [PATCH v2 00/11] rust: xarray: add entry API with preloading Andreas Hindborg
` (8 preceding siblings ...)
2026-02-06 21:10 ` [PATCH v2 09/11] rust: mm: add abstractions for allocating from a `sheaf` Andreas Hindborg
@ 2026-02-06 21:10 ` Andreas Hindborg
2026-02-06 21:10 ` [PATCH v2 11/11] rust: xarray: add preload API Andreas Hindborg
10 siblings, 0 replies; 14+ messages in thread
From: Andreas Hindborg @ 2026-02-06 21:10 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm,
Andreas Hindborg, Matthew Wilcox (Oracle)
Extend the sheaf abstraction to support caches initialized by C at kernel
boot time, in addition to dynamically created Rust caches.
Introduce `KMemCache<T>` as a transparent wrapper around `kmem_cache` for
static caches with `'static` lifetime. Rename the previous `KMemCache<T>`
to `KMemCacheHandle<T>` to represent dynamically created, reference-counted
caches.
Add `Static` and `Dynamic` marker types along with `StaticSheaf` and
`DynamicSheaf` type aliases to distinguish sheaves from each cache type.
The `Sheaf` type now carries lifetime and allocation mode type parameters.
Add `SBox::into_ptr()` and `SBox::static_from_ptr()` methods for passing
allocations through C code via raw pointers.
Add `KMemCache::from_raw()` for wrapping C-initialized static caches and
`Sheaf::refill()` for replenishing a sheaf to a minimum size.
Export `kmem_cache_prefill_sheaf`, `kmem_cache_return_sheaf`,
`kmem_cache_refill_sheaf`, and `kmem_cache_alloc_from_sheaf_noprof` to
allow Rust module code to use the sheaf API.
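For illustration only, a rough sketch (not part of the patch itself) of how a
C-initialized static cache might be used; `my_static_cachep` and `MyObject`
are hypothetical placeholder names:

    // SAFETY: assume `my_static_cachep` is valid for the kernel's lifetime
    // and was created for objects of type `MyObject`.
    let cache: &'static KMemCache<MyObject> =
        unsafe { KMemCache::from_raw(bindings::my_static_cachep) };

    // Pre-fill a `StaticSheaf` with 8 objects; allocations from it cannot
    // fail until it is depleted.
    let mut sheaf = cache.sheaf(8, GFP_KERNEL)?;
    if let Some(obj) = sheaf.alloc() {
        // `obj` is an `SBox<MyObject>`; dropping it frees the object back
        // to the cache.
        drop(obj);
    }

    // Return the sheaf to the allocator, refilling it for reuse.
    sheaf.return_refill(GFP_KERNEL);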
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: linux-mm@kvack.org
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
mm/slub.c | 4 +
rust/kernel/mm/sheaf.rs | 343 +++++++++++++++++++++++++++++++++++++++++++-----
2 files changed, 317 insertions(+), 30 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index f77b7407c51bc..7c6b1d28778d0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5428,6 +5428,7 @@ kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int size)
return sheaf;
}
+EXPORT_SYMBOL(kmem_cache_prefill_sheaf);
/*
* Use this to return a sheaf obtained by kmem_cache_prefill_sheaf()
@@ -5483,6 +5484,7 @@ void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp,
barn_put_full_sheaf(barn, sheaf);
stat(s, BARN_PUT);
}
+EXPORT_SYMBOL(kmem_cache_return_sheaf);
/*
* refill a sheaf previously returned by kmem_cache_prefill_sheaf to at least
@@ -5536,6 +5538,7 @@ int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp,
*sheafp = sheaf;
return 0;
}
+EXPORT_SYMBOL(kmem_cache_refill_sheaf);
/*
* Allocate from a sheaf obtained by kmem_cache_prefill_sheaf()
@@ -5573,6 +5576,7 @@ kmem_cache_alloc_from_sheaf_noprof(struct kmem_cache *s, gfp_t gfp,
return ret;
}
+EXPORT_SYMBOL(kmem_cache_alloc_from_sheaf_noprof);
unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf)
{
diff --git a/rust/kernel/mm/sheaf.rs b/rust/kernel/mm/sheaf.rs
index c92750eaf1c4a..a604246714f7b 100644
--- a/rust/kernel/mm/sheaf.rs
+++ b/rust/kernel/mm/sheaf.rs
@@ -23,17 +23,26 @@
//!
//! # Architecture
//!
-//! The sheaf system consists of three main components:
+//! The sheaf system supports two modes of operation:
+//!
+//! - **Static caches**: [`KMemCache`] represents a cache initialized by C code at
+//! kernel boot time. These have `'static` lifetime and produce [`StaticSheaf`]
+//! instances.
+//! - **Dynamic caches**: [`KMemCacheHandle`] wraps a cache created at runtime by
+//! Rust code. These are reference-counted and produce [`DynamicSheaf`] instances.
+//!
+//! Both modes use the same core types:
//!
-//! - [`KMemCache`]: A slab cache configured with sheaf support.
//! - [`Sheaf`]: A pre-filled container of objects from a specific cache.
//! - [`SBox`]: An owned allocation from a sheaf, similar to a `Box`.
//!
//! # Example
//!
+//! Using a dynamically created cache:
+//!
//! ```
//! use kernel::c_str;
-//! use kernel::mm::sheaf::{KMemCache, KMemCacheInit, Sheaf, SBox};
+//! use kernel::mm::sheaf::{KMemCacheHandle, KMemCacheInit, Sheaf, SBox};
//! use kernel::prelude::*;
//!
//! struct MyObject {
@@ -47,7 +56,7 @@
//! }
//!
//! // Create a cache with sheaf capacity of 16 objects.
-//! let cache = KMemCache::<MyObject>::new(c_str!("my_cache"), 16)?;
+//! let cache = KMemCacheHandle::<MyObject>::new(c_str!("my_cache"), 16)?;
//!
//! // Pre-fill a sheaf with 8 objects.
//! let mut sheaf = cache.as_arc_borrow().sheaf(8, GFP_KERNEL)?;
@@ -75,7 +84,102 @@
use kernel::prelude::*;
-use crate::sync::{Arc, ArcBorrow};
+use crate::{
+ sync::{Arc, ArcBorrow},
+ types::Opaque,
+};
+
+/// A slab cache with sheaf support.
+///
+/// This type is a transparent wrapper around a kernel `kmem_cache`. It can be
+/// used with caches created either by C code or via [`KMemCacheHandle`].
+///
+/// When a reference to this type has `'static` lifetime (i.e., `&'static
+/// KMemCache<T>`), it typically represents a cache initialized by C at boot
+/// time. Such references produce [`StaticSheaf`] instances via [`sheaf`].
+///
+/// [`sheaf`]: KMemCache::sheaf
+///
+/// # Type parameter
+///
+/// - `T`: The type of objects managed by this cache. Must implement
+/// [`KMemCacheInit`] to provide initialization logic for allocations.
+#[repr(transparent)]
+pub struct KMemCache<T: KMemCacheInit<T>> {
+ inner: Opaque<bindings::kmem_cache>,
+ _p: PhantomData<T>,
+}
+
+impl<T: KMemCacheInit<T>> KMemCache<T> {
+ /// Creates a pre-filled sheaf from this cache.
+ ///
+ /// Allocates a sheaf and pre-fills it with `size` objects. Once created,
+ /// allocations from the sheaf via [`Sheaf::alloc`] are guaranteed to
+ /// succeed until the sheaf is depleted.
+ ///
+ /// # Arguments
+ ///
+ /// - `size`: The number of objects to pre-allocate. Must not exceed the
+ /// cache's `sheaf_capacity`.
+ /// - `gfp`: Allocation flags controlling how memory is obtained. Use
+ /// [`GFP_KERNEL`] for normal allocations that may sleep, or
+ /// [`GFP_NOWAIT`] for non-blocking allocations.
+ ///
+ /// # Errors
+ ///
+ /// Returns [`ENOMEM`] if the sheaf or its objects could not be allocated.
+ ///
+ /// # Warnings
+ ///
+ /// The kernel will warn if `size` exceeds `sheaf_capacity`.
+ pub fn sheaf(
+ &'static self,
+ size: usize,
+ gfp: kernel::alloc::Flags,
+ ) -> Result<Sheaf<'static, T, Static>> {
+ // SAFETY: `self.as_raw()` returns a valid cache pointer, and `size`
+ // has been validated to fit in a `c_uint`.
+ let ptr = unsafe {
+ bindings::kmem_cache_prefill_sheaf(self.inner.get(), gfp.as_raw(), size.try_into()?)
+ };
+
+ // INVARIANT: `ptr` was returned by `kmem_cache_prefill_sheaf` and is
+ // non-null (checked below). `cache` is the cache from which this sheaf
+ // was created. `dropped` is false since the sheaf has not been returned.
+ Ok(Sheaf {
+ sheaf: NonNull::new(ptr).ok_or(ENOMEM)?,
+ // SAFETY: `self` is a valid reference, so the pointer is non-null.
+ cache: CacheRef::Static(unsafe {
+ NonNull::new_unchecked((&raw const *self).cast_mut())
+ }),
+ dropped: false,
+ _p: PhantomData,
+ })
+ }
+
+ fn as_raw(&self) -> *mut bindings::kmem_cache {
+ self.inner.get()
+ }
+
+ /// Creates a reference to a [`KMemCache`] from a raw pointer.
+ ///
+ /// This is useful for wrapping a C-initialized static `kmem_cache`, such as
+ /// the global `radix_tree_node_cachep` used by XArrays.
+ ///
+ /// # Safety
+ ///
+ /// - `ptr` must be a valid pointer to a `kmem_cache` that was created for
+ /// objects of type `T`.
+ /// - The cache must remain valid for the lifetime `'a`.
+ /// - The caller must ensure that the cache was configured appropriately for
+ /// the type `T`, including proper size and alignment.
+ pub unsafe fn from_raw<'a>(ptr: *mut bindings::kmem_cache) -> &'a Self {
+ // SAFETY: The caller guarantees that `ptr` is a valid pointer to a
+ // `kmem_cache` created for objects of type `T`, that it remains valid
+ // for lifetime `'a`, and that the cache is properly configured for `T`.
+ unsafe { &*ptr.cast::<Self>() }
+ }
+}
/// A slab cache with sheaf support.
///
@@ -94,12 +198,12 @@
/// - `cache` is a valid pointer to a `kmem_cache` created with
/// `__kmem_cache_create_args`.
/// - The cache is valid for the lifetime of this struct.
-pub struct KMemCache<T: KMemCacheInit<T>> {
- cache: NonNull<bindings::kmem_cache>,
- _p: PhantomData<T>,
+#[repr(transparent)]
+pub struct KMemCacheHandle<T: KMemCacheInit<T>> {
+ cache: NonNull<KMemCache<T>>,
}
-impl<T: KMemCacheInit<T>> KMemCache<T> {
+impl<T: KMemCacheInit<T>> KMemCacheHandle<T> {
/// Creates a new slab cache with sheaf support.
///
/// Creates a kernel slab cache for objects of type `T` with the specified
@@ -147,8 +251,7 @@ pub fn new(name: &CStr, sheaf_capacity: u32) -> Result<Arc<Self>>
// `kmem_cache_destroy` is called in `Drop`.
Ok(Arc::new(
Self {
- cache: NonNull::new(ptr).ok_or(ENOMEM)?,
- _p: PhantomData,
+ cache: NonNull::new(ptr.cast()).ok_or(ENOMEM)?,
},
GFP_KERNEL,
)?)
@@ -175,11 +278,11 @@ pub fn new(name: &CStr, sheaf_capacity: u32) -> Result<Arc<Self>>
/// # Warnings
///
/// The kernel will warn if `size` exceeds `sheaf_capacity`.
- pub fn sheaf(
- self: ArcBorrow<'_, Self>,
+ pub fn sheaf<'a>(
+ self: ArcBorrow<'a, Self>,
size: usize,
gfp: kernel::alloc::Flags,
- ) -> Result<Sheaf<T>> {
+ ) -> Result<Sheaf<'a, T, Dynamic>> {
// SAFETY: `self.as_raw()` returns a valid cache pointer, and `size`
// has been validated to fit in a `c_uint`.
let ptr = unsafe {
@@ -191,17 +294,18 @@ pub fn sheaf(
// was created. `dropped` is false since the sheaf has not been returned.
Ok(Sheaf {
sheaf: NonNull::new(ptr).ok_or(ENOMEM)?,
- cache: self.into(),
+ cache: CacheRef::Arc(self.into()),
dropped: false,
+ _p: PhantomData,
})
}
fn as_raw(&self) -> *mut bindings::kmem_cache {
- self.cache.as_ptr()
+ self.cache.as_ptr().cast()
}
}
-impl<T: KMemCacheInit<T>> Drop for KMemCache<T> {
+impl<T: KMemCacheInit<T>> Drop for KMemCacheHandle<T> {
fn drop(&mut self) {
// SAFETY: `self.as_raw()` returns a valid cache pointer that was
// created by `__kmem_cache_create_args`. As all objects allocated from
@@ -214,13 +318,13 @@ fn drop(&mut self) {
/// Trait for types that can be initialized in a slab cache.
///
/// This trait provides the initialization logic for objects allocated from a
-/// [`KMemCache`]. When the slab allocator creates new objects, it invokes the
-/// constructor to ensure objects are in a valid initial state.
+/// [`KMemCache`]. The initializer is called when objects are allocated from a
+/// sheaf via [`Sheaf::alloc`].
///
/// # Implementation
///
-/// Implementors must provide [`init`](KMemCacheInit::init), which returns
-/// a in-place initializer for the type.
+/// Implementors must provide [`init`](KMemCacheInit::init), which returns an
+/// infallible initializer for the type.
///
/// # Example
///
@@ -251,6 +355,28 @@ pub trait KMemCacheInit<T> {
fn init() -> impl Init<T, Infallible>;
}
+/// Marker type for sheaves from static caches.
+///
+/// Used as a type parameter for [`Sheaf`] to indicate the sheaf was created
+/// from a `&'static KMemCache<T>`.
+pub enum Static {}
+
+/// Marker type for sheaves from dynamic caches.
+///
+/// Used as a type parameter for [`Sheaf`] to indicate the sheaf was created
+/// from a [`KMemCacheHandle`] via [`ArcBorrow`].
+pub enum Dynamic {}
+
+/// A sheaf from a static cache.
+///
+/// This is a [`Sheaf`] backed by a `&'static KMemCache<T>`.
+pub type StaticSheaf<'a, T> = Sheaf<'a, T, Static>;
+
+/// A sheaf from a dynamic cache.
+///
+/// This is a [`Sheaf`] backed by a reference-counted [`KMemCacheHandle`].
+pub type DynamicSheaf<'a, T> = Sheaf<'a, T, Dynamic>;
+
/// A pre-filled container of slab objects.
///
/// A sheaf holds a set of pre-allocated objects from a [`KMemCache`].
@@ -261,12 +387,23 @@ pub trait KMemCacheInit<T> {
/// Sheaves provide faster allocation than direct allocation because they use
/// local locks with preemption disabled rather than atomic operations.
///
+/// # Type parameters
+///
+/// - `'a`: The lifetime of the cache reference.
+/// - `T`: The type of objects in this sheaf.
+/// - `A`: Either [`Static`] or [`Dynamic`], indicating whether the backing
+/// cache is a static reference or a reference-counted handle.
+///
+/// For convenience, [`StaticSheaf`] and [`DynamicSheaf`] type aliases are
+/// provided.
+///
/// # Lifecycle
///
-/// Sheaves are created via [`KMemCache::sheaf`] and should be returned to the
-/// allocator when no longer needed via [`Sheaf::return_refill`]. If a sheaf is
-/// simply dropped, it is returned with `GFP_NOWAIT` flags, which may result in
-/// the sheaf being flushed and freed rather than being cached for reuse.
+/// Sheaves are created via [`KMemCache::sheaf`] or [`KMemCacheHandle::sheaf`]
+/// and should be returned to the allocator when no longer needed via
+/// [`Sheaf::return_refill`]. If a sheaf is simply dropped, it is returned with
+/// `GFP_NOWAIT` flags, which may result in the sheaf being flushed and freed
+/// rather than being cached for reuse.
///
/// # Invariants
///
@@ -274,13 +411,14 @@ pub trait KMemCacheInit<T> {
/// `kmem_cache_prefill_sheaf`.
/// - `cache` is the cache from which this sheaf was created.
/// - `dropped` tracks whether the sheaf has been explicitly returned.
-pub struct Sheaf<T: KMemCacheInit<T>> {
+pub struct Sheaf<'a, T: KMemCacheInit<T>, A> {
sheaf: NonNull<bindings::slab_sheaf>,
- cache: Arc<KMemCache<T>>,
+ cache: CacheRef<T>,
dropped: bool,
+ _p: PhantomData<(&'a KMemCache<T>, A)>,
}
-impl<T: KMemCacheInit<T>> Sheaf<T> {
+impl<'a, T: KMemCacheInit<T>, A> Sheaf<'a, T, A> {
fn as_raw(&self) -> *mut bindings::slab_sheaf {
self.sheaf.as_ptr()
}
@@ -303,6 +441,75 @@ pub fn return_refill(mut self, flags: kernel::alloc::Flags) {
drop(self);
}
+ /// Refills the sheaf to at least the specified size.
+ ///
+ /// Replenishes the sheaf by preallocating objects until it contains at
+ /// least `size` objects. If the sheaf already contains `size` or more
+ /// objects, this is a no-op. In practice, the sheaf is refilled to its
+ /// full capacity.
+ ///
+ /// # Arguments
+ ///
+ /// - `flags`: Allocation flags controlling how memory is obtained.
+ /// - `size`: The minimum number of objects the sheaf should contain after
+ /// refilling. If `size` exceeds the cache's `sheaf_capacity`, the sheaf
+ /// may be replaced with a larger one.
+ ///
+ /// # Errors
+ ///
+ /// Returns an error if the objects could not be allocated. If refilling
+ /// fails, the existing sheaf is left intact.
+ pub fn refill(&mut self, flags: kernel::alloc::Flags, size: usize) -> Result {
+ // SAFETY: `self.cache.as_raw()` returns a valid cache pointer and
+ // `&raw mut self.sheaf` points to a valid sheaf per the type invariants.
+ kernel::error::to_result(unsafe {
+ bindings::kmem_cache_refill_sheaf(
+ self.cache.as_raw(),
+ flags.as_raw(),
+ (&raw mut (self.sheaf)).cast(),
+ size.try_into()?,
+ )
+ })
+ }
+}
+
+impl<'a, T: KMemCacheInit<T>> Sheaf<'a, T, Static> {
+ /// Allocates an object from the sheaf.
+ ///
+ /// Returns a new [`SBox`] containing an initialized object, or [`None`]
+ /// if the sheaf is depleted. Allocations are guaranteed to succeed as
+ /// long as the sheaf contains pre-allocated objects.
+ ///
+ /// The `gfp` flags passed to `kmem_cache_alloc_from_sheaf` are set to zero,
+ /// meaning no additional flags like `__GFP_ZERO` or `__GFP_ACCOUNT` are
+ /// applied.
+ ///
+ /// The returned `T` is initialized as part of this function.
+ pub fn alloc(&mut self) -> Option<SBox<T>> {
+ // SAFETY: `self.cache.as_raw()` and `self.as_raw()` return valid
+ // pointers. The function returns NULL when the sheaf is empty.
+ let ptr = unsafe {
+ bindings::kmem_cache_alloc_from_sheaf_noprof(self.cache.as_raw(), 0, self.as_raw())
+ };
+
+        // Bail out before running the initializer if the sheaf was depleted
+        // and the allocation returned NULL.
+        let ptr = NonNull::new(ptr.cast::<T>())?;
+
+        // SAFETY: `ptr` is non-null and points to memory just returned by the
+        // cache. The initializer is infallible, so an error is never returned.
+        unsafe { T::init().__init(ptr.as_ptr()) }.expect("Initializer is infallible");
+
+ // INVARIANT: `ptr` was returned by `kmem_cache_alloc_from_sheaf_noprof`
+ // and initialized above. `cache` is the cache from which this object
+ // was allocated. The object remains valid until freed in `Drop`.
+ Some(SBox {
+ ptr,
+ cache: self.cache.clone(),
+ })
+ }
+}
+
+impl<'a, T: KMemCacheInit<T>> Sheaf<'a, T, Dynamic> {
/// Allocates an object from the sheaf.
///
/// Returns a new [`SBox`] containing an initialized object, or [`None`]
@@ -338,7 +545,7 @@ pub fn alloc(&mut self) -> Option<SBox<T>> {
}
}
-impl<T: KMemCacheInit<T>> Drop for Sheaf<T> {
+impl<'a, T: KMemCacheInit<T>, A> Drop for Sheaf<'a, T, A> {
fn drop(&mut self) {
if !self.dropped {
// SAFETY: `self.cache.as_raw()` and `self.as_raw()` return valid
@@ -355,6 +562,39 @@ fn drop(&mut self) {
}
}
+/// Internal reference to a cache, either static or reference-counted.
+///
+/// # Invariants
+///
+/// - For `CacheRef::Static`: the `NonNull` points to a valid `KMemCache<T>`
+/// with `'static` lifetime, derived from a `&'static KMemCache<T>` reference.
+enum CacheRef<T: KMemCacheInit<T>> {
+ /// A reference-counted handle to a dynamically created cache.
+ Arc(Arc<KMemCacheHandle<T>>),
+ /// A pointer to a static lifetime cache.
+ Static(NonNull<KMemCache<T>>),
+}
+
+impl<T: KMemCacheInit<T>> Clone for CacheRef<T> {
+ fn clone(&self) -> Self {
+ match self {
+ Self::Arc(arg0) => Self::Arc(arg0.clone()),
+ Self::Static(arg0) => Self::Static(*arg0),
+ }
+ }
+}
+
+impl<T: KMemCacheInit<T>> CacheRef<T> {
+ fn as_raw(&self) -> *mut bindings::kmem_cache {
+ match self {
+ CacheRef::Arc(handle) => handle.as_raw(),
+ // SAFETY: By type invariant, `ptr` points to a valid `KMemCache<T>`
+ // with `'static` lifetime.
+ CacheRef::Static(ptr) => unsafe { ptr.as_ref() }.as_raw(),
+ }
+ }
+}
+
/// An owned allocation from a cache sheaf.
///
/// `SBox` is similar to `Box` but is backed by a slab cache allocation obtained
@@ -371,7 +611,50 @@ fn drop(&mut self) {
/// - The object remains valid for the lifetime of the `SBox`.
pub struct SBox<T: KMemCacheInit<T>> {
ptr: NonNull<T>,
- cache: Arc<KMemCache<T>>,
+ cache: CacheRef<T>,
+}
+
+impl<T: KMemCacheInit<T>> SBox<T> {
+ /// Consumes the `SBox` and returns the raw pointer to the contained value.
+ ///
+ /// The caller becomes responsible for freeing the memory. The object is not
+ /// dropped and remains initialized. Use [`static_from_ptr`] to reconstruct
+ /// an `SBox` from the pointer.
+ ///
+ /// [`static_from_ptr`]: SBox::static_from_ptr
+ pub fn into_ptr(self) -> *mut T {
+ let ptr = self.ptr.as_ptr();
+ core::mem::forget(self);
+ ptr
+ }
+
+ /// Reconstructs an `SBox` from a raw pointer and cache.
+ ///
+ /// This is intended for use with objects that were previously converted to
+ /// raw pointers via [`into_ptr`], typically for passing through C code.
+ ///
+ /// [`into_ptr`]: SBox::into_ptr
+ ///
+ /// # Safety
+ ///
+ /// - `cache` must be a valid pointer to the `kmem_cache` from which `value`
+ /// was allocated.
+ /// - `value` must be a valid pointer to an initialized `T` that was
+ /// allocated from `cache`.
+ /// - The caller must ensure that no other `SBox` or reference exists for
+ /// `value`.
+ pub unsafe fn static_from_ptr(cache: *mut bindings::kmem_cache, value: *mut T) -> Self {
+ // INVARIANT: The caller guarantees `value` points to a valid,
+ // initialized `T` allocated from `cache`.
+ Self {
+ // SAFETY: By function safety requirements, `value` is not null.
+ ptr: unsafe { NonNull::new_unchecked(value) },
+ cache: CacheRef::Static(
+ // SAFETY: By function safety requirements, `cache` is not null.
+ unsafe { NonNull::new_unchecked(cache.cast()) },
+ ),
+ }
+ }
}
impl<T: KMemCacheInit<T>> Deref for SBox<T> {
--
2.51.2
^ permalink raw reply [flat|nested] 14+ messages in thread* [PATCH v2 11/11] rust: xarray: add preload API
2026-02-06 21:10 [PATCH v2 00/11] rust: xarray: add entry API with preloading Andreas Hindborg
` (9 preceding siblings ...)
2026-02-06 21:10 ` [PATCH v2 10/11] rust: mm: sheaf: allow use of C initialized static caches Andreas Hindborg
@ 2026-02-06 21:10 ` Andreas Hindborg
2026-02-06 21:43 ` Andreas Hindborg
2026-02-07 5:04 ` kernel test robot
10 siblings, 2 replies; 14+ messages in thread
From: Andreas Hindborg @ 2026-02-06 21:10 UTC (permalink / raw)
To: Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm,
Andreas Hindborg, Matthew Wilcox (Oracle)
Add a preload API that allows preallocating memory for XArray
insertions. This enables insertions to proceed without allocation
failures in contexts where memory allocation is not desirable, such as
in atomic contexts or where reliability is critical.
The API includes:
- `XArraySheaf`, a sheaf of preallocated XArray nodes, with `XArrayNode`
representing a single preallocated node.
- `xarray_kmem_cache()` for accessing the global radix tree node cache used
to fill such a sheaf.
- Integration with the entry API, allowing `VacantEntry::insert` and
`VacantEntry::insert_entry` to accept an optional preload sheaf.
- A new `Guard::insert_entry` method for inserting with preload support.
Preallocated nodes are managed through the slab sheaf abstraction. When an
insertion would otherwise fail with ENOMEM, the XArray state API consumes a
preallocated node from the sheaf, if one is available.
Export `radix_tree_node_ctor` from C to enable Rust code to work with the
radix tree node cache.
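As a rough usage sketch (illustrative only; it assumes a `Result`-returning
context and an arbitrary sheaf size of 4):

    // Pre-fill a sheaf of XArray nodes from the global radix tree node cache.
    let cache = kernel::xarray::xarray_kmem_cache();
    let mut sheaf = cache.sheaf(4, GFP_KERNEL)?;

    let xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
    let value = KBox::new(0x1337u32, GFP_KERNEL)?;

    // All allocations happened above; with nodes available in the sheaf, the
    // insertion under the lock can fall back to the preallocated nodes
    // instead of failing with ENOMEM.
    let mut guard = xa.lock();
    guard.insert_entry(42, value, Some(&mut sheaf))?;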
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
include/linux/radix-tree.h | 3 +
lib/radix-tree.c | 5 +-
rust/bindings/bindings_helper.h | 3 +
rust/kernel/xarray.rs | 172 +++++++++++++++++++++++++++++++++++-----
rust/kernel/xarray/entry.rs | 29 ++++---
rust/kernel/xarray/preload.rs | 3 +
6 files changed, 185 insertions(+), 30 deletions(-)
diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index eae67015ce51a..c3699f12b070c 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -469,4 +469,7 @@ static __always_inline void __rcu **radix_tree_next_slot(void __rcu **slot,
slot = radix_tree_next_slot(slot, iter, \
RADIX_TREE_ITER_TAGGED | tag))
+
+void radix_tree_node_ctor(void *arg);
+
#endif /* _LINUX_RADIX_TREE_H */
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 976b9bd02a1b5..b642f2775e89c 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -33,6 +33,7 @@
* Radix tree node cache.
*/
struct kmem_cache *radix_tree_node_cachep;
+EXPORT_SYMBOL(radix_tree_node_cachep);
/*
* The radix tree is variable-height, so an insert operation not only has
@@ -1566,14 +1567,14 @@ void idr_destroy(struct idr *idr)
}
EXPORT_SYMBOL(idr_destroy);
-static void
-radix_tree_node_ctor(void *arg)
+void radix_tree_node_ctor(void *arg)
{
struct radix_tree_node *node = arg;
memset(node, 0, sizeof(*node));
INIT_LIST_HEAD(&node->private_list);
}
+EXPORT_SYMBOL(radix_tree_node_ctor);
static int radix_tree_cpu_dead(unsigned int cpu)
{
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 58605c32e8102..652f08ad888cd 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -118,6 +118,9 @@ const xa_mark_t RUST_CONST_HELPER_XA_PRESENT = XA_PRESENT;
const gfp_t RUST_CONST_HELPER_XA_FLAGS_ALLOC = XA_FLAGS_ALLOC;
const gfp_t RUST_CONST_HELPER_XA_FLAGS_ALLOC1 = XA_FLAGS_ALLOC1;
const size_t RUST_CONST_HELPER_XAS_RESTART = (size_t)XAS_RESTART;
+const size_t RUST_CONST_HELPER_XA_CHUNK_SHIFT = XA_CHUNK_SHIFT;
+const size_t RUST_CONST_HELPER_XA_CHUNK_SIZE = XA_CHUNK_SIZE;
+extern struct kmem_cache *radix_tree_node_cachep;
const vm_flags_t RUST_CONST_HELPER_VM_MERGEABLE = VM_MERGEABLE;
const vm_flags_t RUST_CONST_HELPER_VM_READ = VM_READ;
diff --git a/rust/kernel/xarray.rs b/rust/kernel/xarray.rs
index 8c10e8fd76f15..89bf531308c88 100644
--- a/rust/kernel/xarray.rs
+++ b/rust/kernel/xarray.rs
@@ -5,6 +5,7 @@
//! C header: [`include/linux/xarray.h`](srctree/include/linux/xarray.h)
use core::{
+ convert::Infallible,
iter,
marker::PhantomData,
pin::Pin,
@@ -23,11 +24,17 @@
bindings,
build_assert, //
error::{
+ code::*,
to_result,
Error,
Result, //
},
ffi::c_void,
+ mm::sheaf::{
+ KMemCache,
+ SBox,
+ StaticSheaf, //
+ },
types::{
ForeignOwnable,
NotThreadSafe,
@@ -35,12 +42,54 @@
},
};
use pin_init::{
+ init,
pin_data,
pin_init,
pinned_drop,
+ Init,
PinInit, //
};
+/// Sheaf of preallocated [`XArray`] nodes.
+pub type XArraySheaf<'a> = StaticSheaf<'a, XArrayNode>;
+
+/// Returns a reference to the global XArray node cache.
+///
+/// This provides access to the kernel's `radix_tree_node_cachep`, which is the
+/// slab cache used for allocating internal XArray nodes. This cache can be used
+/// to create sheaves for preallocating XArray nodes.
+pub fn xarray_kmem_cache() -> &'static KMemCache<XArrayNode> {
+ // SAFETY: `radix_tree_node_cachep` is a valid, statically initialized
+ // kmem_cache that remains valid for the lifetime of the kernel. The cache
+ // is configured for `xa_node` objects which match our `XArrayNode` type.
+ unsafe { KMemCache::from_raw(bindings::radix_tree_node_cachep) }
+}
+
+/// A preallocated XArray node.
+///
+/// This represents a single preallocated internal node for an XArray.
+pub struct XArrayNode {
+ node: Opaque<bindings::xa_node>,
+}
+
+impl kernel::mm::sheaf::KMemCacheInit<XArrayNode> for XArrayNode {
+ fn init() -> impl Init<Self, Infallible> {
+ init!(Self {
+ // SAFETY:
+ // - This initialization cannot fail and will never return `Err`.
+            // - The xa_node does not move during initialization.
+ node <- unsafe {
+ pin_init::init_from_closure(
+ |place: *mut Opaque<bindings::xa_node>| -> Result<(), Infallible> {
+ bindings::radix_tree_node_ctor(place.cast::<c_void>());
+ Ok(())
+ },
+ )
+ }
+ })
+ }
+}
+
/// An array which efficiently maps sparse integer indices to owned objects.
///
/// This is similar to a [`crate::alloc::kvec::Vec<Option<T>>`], but more efficient when there are
@@ -137,15 +186,22 @@ fn iter(&self) -> impl Iterator<Item = NonNull<c_void>> + '_ {
let mut index = 0;
// SAFETY: `self.xa` is always valid by the type invariant.
- iter::once(unsafe {
- bindings::xa_find(self.xa.get(), &mut index, usize::MAX, bindings::XA_PRESENT)
- })
- .chain(iter::from_fn(move || {
- // SAFETY: `self.xa` is always valid by the type invariant.
- Some(unsafe {
- bindings::xa_find_after(self.xa.get(), &mut index, usize::MAX, bindings::XA_PRESENT)
- })
- }))
+ Iterator::chain(
+ iter::once(unsafe {
+ bindings::xa_find(self.xa.get(), &mut index, usize::MAX, bindings::XA_PRESENT)
+ }),
+ iter::from_fn(move || {
+ // SAFETY: `self.xa` is always valid by the type invariant.
+ Some(unsafe {
+ bindings::xa_find_after(
+ self.xa.get(),
+ &mut index,
+ usize::MAX,
+ bindings::XA_PRESENT,
+ )
+ })
+ }),
+ )
.map_while(|ptr| NonNull::new(ptr.cast()))
}
@@ -166,7 +222,6 @@ pub fn try_lock(&self) -> Option<Guard<'_, T>> {
pub fn lock(&self) -> Guard<'_, T> {
// SAFETY: `self.xa` is always valid by the type invariant.
unsafe { bindings::xa_lock(self.xa.get()) };
-
Guard {
xa: self,
_not_send: NotThreadSafe,
@@ -270,7 +325,7 @@ pub fn get_mut(&mut self, index: usize) -> Option<T::BorrowedMut<'_>> {
///
/// match guard.entry(42) {
/// Entry::Vacant(entry) => {
- /// entry.insert(KBox::new(0x1337u32, GFP_KERNEL)?)?;
+ /// entry.insert(KBox::new(0x1337u32, GFP_KERNEL)?, None)?;
/// }
/// Entry::Occupied(_) => unreachable!("We did not insert an entry yet"),
/// }
@@ -475,6 +530,45 @@ pub fn store(
Ok(unsafe { T::try_from_foreign(old) })
}
}
+
+ /// Inserts a value and returns an occupied entry for further operations.
+ ///
+ /// If a value is already present, the operation fails.
+ ///
+ /// This method will not drop the XArray lock. If memory allocation is
+ /// required for the operation to succeed, the user should supply memory
+ /// through the `preload` argument.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// # use kernel::{prelude::*, xarray::{AllocKind, XArray}};
+ /// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
+ /// let mut guard = xa.lock();
+ ///
+ /// assert_eq!(guard.get(42), None);
+ ///
+ /// let value = KBox::new(0x1337u32, GFP_KERNEL)?;
+ /// let entry = guard.insert_entry(42, value, None)?;
+ /// let borrowed = entry.into_mut();
+ /// assert_eq!(borrowed, &0x1337);
+ ///
+ /// # Ok::<(), kernel::error::Error>(())
+ /// ```
+ pub fn insert_entry<'b>(
+ &'b mut self,
+ index: usize,
+ value: T,
+ preload: Option<&mut XArraySheaf<'_>>,
+ ) -> Result<OccupiedEntry<'a, 'b, T>, StoreError<T>> {
+ match self.entry(index) {
+ Entry::Vacant(entry) => entry.insert_entry(value, preload),
+ Entry::Occupied(_) => Err(StoreError {
+ error: EBUSY,
+ value,
+ }),
+ }
+ }
}
/// Internal state for XArray iteration and entry operations.
@@ -489,6 +583,25 @@ pub(crate) struct XArrayState<'a, 'b, T: ForeignOwnable> {
state: bindings::xa_state,
}
+impl<'a, 'b, T: ForeignOwnable> Drop for XArrayState<'a, 'b, T> {
+ fn drop(&mut self) {
+ if !self.state.xa_alloc.is_null() {
+ // SAFETY:
+ // - `xa_alloc` is only set via `SBox::into_ptr()` in `insert()` where
+ // the node comes from an `XArraySheaf` backed by `radix_tree_node_cachep`.
+ // - `xa_alloc` points to a valid, initialized `XArrayNode`.
+ // - `XArrayState` has exclusive ownership of `xa_alloc`, and no other
+ // `SBox` or reference exists for this value.
+ drop(unsafe {
+ SBox::<XArrayNode>::static_from_ptr(
+ bindings::radix_tree_node_cachep,
+ self.state.xa_alloc.cast(),
+ )
+ })
+ }
+ }
+}
+
impl<'a, 'b, T: ForeignOwnable> XArrayState<'a, 'b, T> {
fn new(access: &'b Guard<'a, T>, index: usize) -> Self {
let ptr = access.xa.xa.get();
@@ -529,16 +642,37 @@ fn status(&self) -> Result {
to_result(unsafe { bindings::xas_error(&self.state) })
}
- fn insert(&mut self, value: T) -> Result<*mut c_void, StoreError<T>> {
+ fn insert(
+ &mut self,
+ value: T,
+ mut preload: Option<&mut XArraySheaf<'_>>,
+ ) -> Result<*mut c_void, StoreError<T>> {
let new = T::into_foreign(value).cast();
- // SAFETY: `self.state.state` is properly initialized and `new` came from `T::into_foreign`.
- // We hold the xarray lock.
- unsafe { bindings::xas_store(&mut self.state, new) };
-
- self.status().map(|()| new).map_err(|error| {
- // SAFETY: `new` came from `T::into_foreign` and `xas_store` does not take ownership of
- // the value on error.
+ loop {
+ // SAFETY: `self.state` is properly initialized and `new` came from
+ // `T::into_foreign`. We hold the xarray lock.
+ unsafe { bindings::xas_store(&mut self.state, new) };
+
+ match self.status() {
+ Ok(()) => break Ok(new),
+ Err(ENOMEM) => {
+ debug_assert!(self.state.xa_alloc.is_null());
+ let node = match preload.as_mut().map(|sheaf| sheaf.alloc().ok_or(ENOMEM)) {
+ None => break Err(ENOMEM),
+ Some(Err(e)) => break Err(e),
+ Some(Ok(node)) => node,
+ };
+
+ self.state.xa_alloc = node.into_ptr().cast();
+ continue;
+ }
+ Err(e) => break Err(e),
+ }
+ }
+ .map_err(|error| {
+ // SAFETY: `new` came from `T::into_foreign` and `xas_store` does not take
+ // ownership of the value on error.
let value = unsafe { T::from_foreign(new) };
StoreError { value, error }
})
diff --git a/rust/kernel/xarray/entry.rs b/rust/kernel/xarray/entry.rs
index 1b1c21bed7022..ff500be3832b7 100644
--- a/rust/kernel/xarray/entry.rs
+++ b/rust/kernel/xarray/entry.rs
@@ -3,6 +3,7 @@
use super::{
Guard,
StoreError,
+ XArraySheaf,
XArrayState, //
};
use core::ptr::NonNull;
@@ -29,9 +30,9 @@ impl<T: ForeignOwnable> Entry<'_, '_, T> {
/// let mut xa = KBox::pin_init(XArray::<KBox<u32>>::new(AllocKind::Alloc), GFP_KERNEL)?;
/// let mut guard = xa.lock();
///
- ///
/// let entry = guard.entry(42);
/// assert_eq!(entry.is_occupied(), false);
+ /// drop(entry);
///
/// guard.store(42, KBox::new(0x1337u32, GFP_KERNEL)?, GFP_KERNEL)?;
/// let entry = guard.entry(42);
@@ -64,7 +65,8 @@ pub(crate) fn new(guard: &'b mut Guard<'a, T>, index: usize) -> Self {
/// Returns a reference to the newly inserted value.
///
/// - This method will fail if the nodes on the path to the index
- /// represented by this entry are not present in the XArray.
+ /// represented by this entry are not present in the XArray and no memory
+ /// is available via the `preload` argument.
/// - This method will not drop the XArray lock.
///
///
@@ -79,7 +81,7 @@ pub(crate) fn new(guard: &'b mut Guard<'a, T>, index: usize) -> Self {
///
/// if let Entry::Vacant(entry) = guard.entry(42) {
/// let value = KBox::new(0x1337u32, GFP_KERNEL)?;
- /// let borrowed = entry.insert(value)?;
+ /// let borrowed = entry.insert(value, None)?;
/// assert_eq!(*borrowed, 0x1337);
/// }
///
@@ -87,8 +89,12 @@ pub(crate) fn new(guard: &'b mut Guard<'a, T>, index: usize) -> Self {
///
/// # Ok::<(), kernel::error::Error>(())
/// ```
- pub fn insert(mut self, value: T) -> Result<T::BorrowedMut<'b>, StoreError<T>> {
- let new = self.state.insert(value)?;
+ pub fn insert(
+ mut self,
+ value: T,
+ preload: Option<&mut XArraySheaf<'_>>,
+ ) -> Result<T::BorrowedMut<'b>, StoreError<T>> {
+ let new = self.state.insert(value, preload)?;
// SAFETY: `new` came from `T::into_foreign`. The entry has exclusive
// ownership of `new` as it holds a mutable reference to `Guard`.
@@ -98,7 +104,8 @@ pub fn insert(mut self, value: T) -> Result<T::BorrowedMut<'b>, StoreError<T>> {
/// Inserts a value and returns an occupied entry representing the newly inserted value.
///
/// - This method will fail if the nodes on the path to the index
- /// represented by this entry are not present in the XArray.
+ /// represented by this entry are not present in the XArray and no memory
+ /// is available via the `preload` argument.
/// - This method will not drop the XArray lock.
///
/// # Examples
@@ -112,7 +119,7 @@ pub fn insert(mut self, value: T) -> Result<T::BorrowedMut<'b>, StoreError<T>> {
///
/// if let Entry::Vacant(entry) = guard.entry(42) {
/// let value = KBox::new(0x1337u32, GFP_KERNEL)?;
- /// let occupied = entry.insert_entry(value)?;
+ /// let occupied = entry.insert_entry(value, None)?;
/// assert_eq!(occupied.index(), 42);
/// }
///
@@ -120,8 +127,12 @@ pub fn insert(mut self, value: T) -> Result<T::BorrowedMut<'b>, StoreError<T>> {
///
/// # Ok::<(), kernel::error::Error>(())
/// ```
- pub fn insert_entry(mut self, value: T) -> Result<OccupiedEntry<'a, 'b, T>, StoreError<T>> {
- let new = self.state.insert(value)?;
+ pub fn insert_entry(
+ mut self,
+ value: T,
+ preload: Option<&mut XArraySheaf<'_>>,
+ ) -> Result<OccupiedEntry<'a, 'b, T>, StoreError<T>> {
+ let new = self.state.insert(value, preload)?;
Ok(OccupiedEntry::<'a, 'b, T> {
state: self.state,
diff --git a/rust/kernel/xarray/preload.rs b/rust/kernel/xarray/preload.rs
new file mode 100644
index 0000000000000..745709579a265
--- /dev/null
+++ b/rust/kernel/xarray/preload.rs
@@ -0,0 +1,3 @@
+// SPDX-License-Identifier: GPL-2.0
+
+use kernel::prelude::*;
--
2.51.2
^ permalink raw reply [flat|nested] 14+ messages in thread* Re: [PATCH v2 11/11] rust: xarray: add preload API
2026-02-06 21:10 ` [PATCH v2 11/11] rust: xarray: add preload API Andreas Hindborg
@ 2026-02-06 21:43 ` Andreas Hindborg
2026-02-07 5:04 ` kernel test robot
1 sibling, 0 replies; 14+ messages in thread
From: Andreas Hindborg @ 2026-02-06 21:43 UTC (permalink / raw)
To: Matthew Wilcox (Oracle),
Tamir Duberstein, Miguel Ojeda, Alex Gaynor, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Andrew Morton, Christoph Lameter,
David Rientjes, Roman Gushchin, Harry Yoo
Cc: Daniel Gomez, rust-for-linux, linux-kernel, linux-mm
Andreas Hindborg <a.hindborg@kernel.org> writes:
> Add a preload API that allows preallocating memory for XArray
> insertions. This enables insertions to proceed without allocation
> failures in contexts where memory allocation is not desirable, such as
> in atomic contexts or where reliability is critical.
>
> The API includes:
>
> - `XArraySheaf`, a sheaf of preallocated XArray nodes, with `XArrayNode`
> representing a single preallocated node.
> - `xarray_kmem_cache()` for accessing the global radix tree node cache used
> to fill such a sheaf.
> - Integration with the entry API, allowing `VacantEntry::insert` and
> `VacantEntry::insert_entry` to accept an optional preload sheaf.
> - A new `Guard::insert_entry` method for inserting with preload support.
>
> Preallocated nodes are managed through the slab sheaf abstraction. When an
> insertion would otherwise fail with ENOMEM, the XArray state API consumes a
> preallocated node from the sheaf, if one is available.
>
> Export `radix_tree_node_ctor` from C to enable Rust code to work with the
> radix tree node cache.
>
> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
I somehow managed to not include this last bit of detail:
commit 09bdfa18f6f879eb42df2e032ad5224eed29eb25
Author: Andreas Hindborg <a.hindborg@kernel.org>
Date: Fri Feb 6 22:38:09 2026 +0100
radix-tree: enable sheaf support for kmem_cache
The rust null block driver plans to rely on preloading xarray nodes from the
radix_tree_node_cachep kmem_cache.
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index b642f2775e89c..ddd67ce672f5c 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -1599,10 +1599,16 @@ void __init radix_tree_init(void)
BUILD_BUG_ON(RADIX_TREE_MAX_TAGS + __GFP_BITS_SHIFT > 32);
BUILD_BUG_ON(ROOT_IS_IDR & ~GFP_ZONEMASK);
BUILD_BUG_ON(XA_CHUNK_SIZE > 255);
- radix_tree_node_cachep = kmem_cache_create("radix_tree_node",
- sizeof(struct radix_tree_node), 0,
- SLAB_PANIC | SLAB_RECLAIM_ACCOUNT,
- radix_tree_node_ctor);
+
+ struct kmem_cache_args args = {
+ .ctor = radix_tree_node_ctor,
+ .sheaf_capacity = 64,
+ };
+
+ radix_tree_node_cachep = kmem_cache_create(
+ "radix_tree_node", sizeof(struct radix_tree_node), &args,
+ SLAB_PANIC | SLAB_RECLAIM_ACCOUNT);
+
ret = cpuhp_setup_state_nocalls(CPUHP_RADIX_DEAD, "lib/radix:dead",
NULL, radix_tree_cpu_dead);
WARN_ON(ret < 0);
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 14+ messages in thread* Re: [PATCH v2 11/11] rust: xarray: add preload API
2026-02-06 21:10 ` [PATCH v2 11/11] rust: xarray: add preload API Andreas Hindborg
2026-02-06 21:43 ` Andreas Hindborg
@ 2026-02-07 5:04 ` kernel test robot
1 sibling, 0 replies; 14+ messages in thread
From: kernel test robot @ 2026-02-07 5:04 UTC (permalink / raw)
To: Andreas Hindborg, Tamir Duberstein, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Andrew Morton,
Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo
Cc: llvm, oe-kbuild-all, Linux Memory Management List, Daniel Gomez,
rust-for-linux, linux-kernel, Andreas Hindborg,
Matthew Wilcox (Oracle)
Hi Andreas,
kernel test robot noticed the following build errors:
[auto build test ERROR on 18f7fcd5e69a04df57b563360b88be72471d6b62]
url: https://github.com/intel-lab-lkp/linux/commits/Andreas-Hindborg/rust-xarray-minor-formatting-fixes/20260207-051500
base: 18f7fcd5e69a04df57b563360b88be72471d6b62
patch link: https://lore.kernel.org/r/20260206-xarray-entry-send-v2-11-91c41673fd30%40kernel.org
patch subject: [PATCH v2 11/11] rust: xarray: add preload API
config: arm64-randconfig-001-20260207 (https://download.01.org/0day-ci/archive/20260207/202602071349.6yvINDvm-lkp@intel.com/config)
compiler: clang version 22.0.0git (https://github.com/llvm/llvm-project 9b8addffa70cee5b2acc5454712d9cf78ce45710)
rustc: rustc 1.88.0 (6b00bc388 2025-06-23)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260207/202602071349.6yvINDvm-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602071349.6yvINDvm-lkp@intel.com/
All errors (new ones prefixed by >>):
>> error[E0432]: unresolved import `kernel::mm::sheaf`
--> rust/kernel/xarray.rs:33:9
|
33 | mm::sheaf::{
| ^^^^^ could not find `sheaf` in `mm`
|
note: found an item that was configured out
--> rust/kernel/mm.rs:22:9
|
22 | pub mod sheaf;
| ^^^^^
note: the item is gated here
--> rust/kernel/mm.rs:21:1
|
21 | #[cfg(not(any(CONFIG_SLUB_TINY, CONFIG_SLUB_DEBUG)))]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--
>> error[E0433]: failed to resolve: could not find `sheaf` in `mm`
--> rust/kernel/xarray.rs:75:18
|
75 | impl kernel::mm::sheaf::KMemCacheInit<XArrayNode> for XArrayNode {
| ^^^^^ could not find `sheaf` in `mm`
|
note: found an item that was configured out
--> rust/kernel/mm.rs:22:9
|
22 | pub mod sheaf;
| ^^^^^
note: the item is gated here
--> rust/kernel/mm.rs:21:1
|
21 | #[cfg(not(any(CONFIG_SLUB_TINY, CONFIG_SLUB_DEBUG)))]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 14+ messages in thread