* [RFC PATCH] slub: spill refill leftover objects into percpu sheaves
@ 2026-04-10 11:16 Hao Li
2026-04-14 8:39 ` Harry Yoo (Oracle)
0 siblings, 1 reply; 3+ messages in thread
From: Hao Li @ 2026-04-10 11:16 UTC (permalink / raw)
To: vbabka, harry, akpm
Cc: cl, rientjes, roman.gushchin, linux-mm, linux-kernel, Hao Li
When performing an object refill, we optimistically assume that more
allocation requests will follow; this is the fundamental assumption
behind this optimization.
When __refill_objects_node() isolates a partial slab and satisfies a
bulk allocation from its freelist, the slab can still have a small tail
of free objects left over. Today those objects are freed back to the
slab immediately.
If the leftover tail is local and small enough to fit, keep it in the
current CPU's sheaves instead. This avoids pushing those objects back
through the __slab_free slowpath.
Add a helper to obtain both the freelist and its free-object count, and
then spill the remaining objects into a percpu sheaf when:
- the tail fits in a sheaf
- the slab is local to the current CPU
- the slab is not pfmemalloc
- the target sheaf has enough free space
Otherwise keep the existing fallback and free the tail back to the slab.
Also add a SHEAF_SPILL stat so the new path can be observed in SLUB
stats.
On the mmap2 case in the will-it-scale benchmark suite, this patch can
improve performance by about 2~5%.
Signed-off-by: Hao Li <hao.li@linux.dev>
---
This patch is an exploratory attempt to address the leftover objects and
partial slab issues in the refill path, and it is marked as RFC to warmly
welcome any feedback, suggestions, and discussion!
---
mm/slub.c | 107 ++++++++++++++++++++++++++++++++++++++++++++----------
1 file changed, 88 insertions(+), 19 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 2b2d33cc735c..fe6351ba0e60 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -353,6 +353,7 @@ enum stat_item {
SHEAF_REFILL, /* Objects refilled to a sheaf */
SHEAF_ALLOC, /* Allocation of an empty sheaf */
SHEAF_FREE, /* Freeing of an empty sheaf */
+ SHEAF_SPILL, /* Objects spilled into a sheaf during refill */
BARN_GET, /* Got full sheaf from barn */
BARN_GET_FAIL, /* Failed to get full sheaf from barn */
BARN_PUT, /* Put full sheaf to barn */
@@ -4279,7 +4280,9 @@ static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags)
* Assumes this is performed only for caches without debugging so we
* don't need to worry about adding the slab to the full list.
*/
-static inline void *get_freelist_nofreeze(struct kmem_cache *s, struct slab *slab)
+static inline void *__get_freelist_nofreeze(struct kmem_cache *s,
+ struct slab *slab, int *freecount,
+ const char *n)
{
struct freelist_counters old, new;
@@ -4293,11 +4296,26 @@ static inline void *get_freelist_nofreeze(struct kmem_cache *s, struct slab *sla
new.inuse = old.objects;
- } while (!slab_update_freelist(s, slab, &old, &new, "get_freelist_nofreeze"));
+ } while (!slab_update_freelist(s, slab, &old, &new, n));
+
+ if (freecount)
+ *freecount = old.objects - old.inuse;
return old.freelist;
}
+static inline void *get_freelist_nofreeze(struct kmem_cache *s, struct slab *slab)
+{
+ return __get_freelist_nofreeze(s, slab, NULL, "get_freelist_nofreeze");
+}
+
+static inline void *get_freelist_and_freecount_nofreeze(struct kmem_cache *s,
+ struct slab *slab,
+ int *freecount)
+{
+ return __get_freelist_nofreeze(s, slab, freecount, "get_freelist_and_freecount_nofreeze");
+}
+
/*
* If the object has been wiped upon free, make sure it's fully initialized by
* zeroing out freelist pointer.
@@ -7028,10 +7046,15 @@ __refill_objects_node(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int mi
return 0;
list_for_each_entry_safe(slab, slab2, &pc.slabs, slab_list) {
+ void *head;
+ void *tail;
+ struct slub_percpu_sheaves *pcs;
+ int freecount, local_node, i, cnt = 0;
+ struct slab_sheaf *spill;
list_del(&slab->slab_list);
- object = get_freelist_nofreeze(s, slab);
+ object = get_freelist_and_freecount_nofreeze(s, slab, &freecount);
while (object && refilled < max) {
p[refilled] = object;
@@ -7039,28 +7062,72 @@ __refill_objects_node(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int mi
maybe_wipe_obj_freeptr(s, p[refilled]);
refilled++;
+ freecount--;
}
+ if (!freecount) {
+ if (refilled >= max)
+ break;
+ continue;
+ }
/*
- * Freelist had more objects than we can accommodate, we need to
- * free them back. We can treat it like a detached freelist, just
- * need to find the tail object.
+ * Freelist had more objects than we can accommodate, we first
+ * try to spill them into a percpu sheaf.
*/
- if (unlikely(object)) {
- void *head = object;
- void *tail;
- int cnt = 0;
-
- do {
- tail = object;
- cnt++;
- object = get_freepointer(s, object);
- } while (object);
- __slab_free(s, slab, head, tail, cnt, _RET_IP_);
+ if (freecount > s->sheaf_capacity)
+ goto skip_spill;
+ if (slab_test_pfmemalloc(slab))
+ goto skip_spill;
+
+ if (!local_trylock(&s->cpu_sheaves->lock))
+ goto skip_spill;
+
+ local_node = numa_mem_id();
+ if (slab_nid(slab) != local_node) {
+ local_unlock(&s->cpu_sheaves->lock);
+ goto skip_spill;
}
- if (refilled >= max)
- break;
+ pcs = this_cpu_ptr(s->cpu_sheaves);
+ if (pcs->spare &&
+ (freecount <= (s->sheaf_capacity - pcs->spare->size)))
+ spill = pcs->spare;
+ else if (freecount <= (s->sheaf_capacity - pcs->main->size))
+ spill = pcs->main;
+ else {
+ local_unlock(&s->cpu_sheaves->lock);
+ goto skip_spill;
+ }
+
+ if (freecount > (s->sheaf_capacity - spill->size)) {
+ local_unlock(&s->cpu_sheaves->lock);
+ goto skip_spill;
+ }
+
+ for (i = 0; i < freecount; i++) {
+ spill->objects[spill->size] = object;
+ object = get_freepointer(s, object);
+ maybe_wipe_obj_freeptr(s, spill->objects[spill->size]);
+ spill->size++;
+ }
+
+ local_unlock(&s->cpu_sheaves->lock);
+ stat(s, SHEAF_SPILL);
+ break;
+skip_spill:
+ /*
+ * Freelist had more objects than we can accommodate or spill,
+ * we need to free them back. We can treat it like a detached freelist,
+ * just need to find the tail object.
+ */
+ head = object;
+ do {
+ tail = object;
+ cnt++;
+ object = get_freepointer(s, object);
+ } while (object);
+ __slab_free(s, slab, head, tail, cnt, _RET_IP_);
+ break;
}
if (unlikely(!list_empty(&pc.slabs))) {
@@ -9247,6 +9314,7 @@ STAT_ATTR(SHEAF_FLUSH, sheaf_flush);
STAT_ATTR(SHEAF_REFILL, sheaf_refill);
STAT_ATTR(SHEAF_ALLOC, sheaf_alloc);
STAT_ATTR(SHEAF_FREE, sheaf_free);
+STAT_ATTR(SHEAF_SPILL, sheaf_spill);
STAT_ATTR(BARN_GET, barn_get);
STAT_ATTR(BARN_GET_FAIL, barn_get_fail);
STAT_ATTR(BARN_PUT, barn_put);
@@ -9335,6 +9403,7 @@ static struct attribute *slab_attrs[] = {
&sheaf_refill_attr.attr,
&sheaf_alloc_attr.attr,
&sheaf_free_attr.attr,
+ &sheaf_spill_attr.attr,
&barn_get_attr.attr,
&barn_get_fail_attr.attr,
&barn_put_attr.attr,
--
2.50.1
* Re: [RFC PATCH] slub: spill refill leftover objects into percpu sheaves
2026-04-10 11:16 [RFC PATCH] slub: spill refill leftover objects into percpu sheaves Hao Li
@ 2026-04-14 8:39 ` Harry Yoo (Oracle)
2026-04-14 9:59 ` Hao Li
0 siblings, 1 reply; 3+ messages in thread
From: Harry Yoo (Oracle) @ 2026-04-14 8:39 UTC (permalink / raw)
To: Hao Li
Cc: vbabka, akpm, cl, rientjes, roman.gushchin, linux-mm,
linux-kernel, Liam R. Howlett
On Fri, Apr 10, 2026 at 07:16:57PM +0800, Hao Li wrote:
> When performing an object refill, we optimistically assume that more
> allocation requests will follow; this is the fundamental assumption
> behind this optimization.
I think the reason why currently we have two sheaves per CPU instead of
one bigger sheaf is to avoid unfairly pessimizing when the alloc/free
pattern frequently changes.
By refilling more objects, frees are more likely to hit the slowpath.
How can it be argued that this optimization is beneficial to have
in general, not just for caches with specific alloc/free patterns?
> When __refill_objects_node() isolates a partial slab and satisfies a
> bulk allocation from its freelist, the slab can still have a small tail
> of free objects left over. Today those objects are freed back to the
> slab immediately.
>
> If the leftover tail is local and small enough to fit, keep it in the
> current CPU's sheaves instead. This avoids pushing those objects back
> through the __slab_free slowpath.
So there are two different paths:
1. When refilling prefilled sheaves, spill objects into ->main and
->spare.
2. When refilling ->main sheaf, spill objects into ->spare.
> Add a helper to obtain both the freelist and its free-object count, and
> then spill the remaining objects into a percpu sheaf when:
> - the tail fits in a sheaf
> - the slab is local to the current CPU
> - the slab is not pfmemalloc
> - the target sheaf has enough free space
>
> Otherwise keep the existing fallback and free the tail back to the slab.
>
> Also add a SHEAF_SPILL stat so the new path can be observed in SLUB
> stats.
>
> On the mmap2 case in the will-it-scale benchmark suite,
> this patch can improve performance by about 2~5%.
Where do you think the improvement comes from? (hopefully w/ some data)
e.g.:
1. the benefit is from largely or partly from
reduced contention on n->list_lock.
2. this change reduces # of alloc slowpath at the cost of increased
of free slowpath hits, but that's better because the slowpath frees
are mostly lockless.
3. the alloc/free pattern of the workload is benefiting from
spilling objects to the CPU's sheaves.
or something else?
> Signed-off-by: Hao Li <hao.li@linux.dev>
> ---
>
> This patch is an exploratory attempt to address the leftover objects and
> partial slab issues in the refill path, and it is marked as RFC to warmly
> welcome any feedback, suggestions, and discussion!
Yeah, let's discuss!
By the way, have you also been considering having min-max capacity
for sheaves? (that I think Vlastimil suggested somewhere)
--
Cheers,
Harry / Hyeonggon
* Re: [RFC PATCH] slub: spill refill leftover objects into percpu sheaves
2026-04-14 8:39 ` Harry Yoo (Oracle)
@ 2026-04-14 9:59 ` Hao Li
0 siblings, 0 replies; 3+ messages in thread
From: Hao Li @ 2026-04-14 9:59 UTC (permalink / raw)
To: Harry Yoo (Oracle)
Cc: vbabka, akpm, cl, rientjes, roman.gushchin, linux-mm,
linux-kernel, Liam R. Howlett
On Tue, Apr 14, 2026 at 05:39:40PM +0900, Harry Yoo (Oracle) wrote:
> On Fri, Apr 10, 2026 at 07:16:57PM +0800, Hao Li wrote:
> > When performing an object refill, we optimistically assume that more
> > allocation requests will follow; this is the fundamental assumption
> > behind this optimization.
>
> I think the reason why currently we have two sheaves per CPU instead of
> one bigger sheaf is to avoid unfairly pessimizing when the alloc/free
> pattern frequently changes.
Yes.
>
> By refilling more objects, frees are more likely to hit the slowpath.
> How can it be argued that this optimization is beneficial to have
> in general, not just for caches with specific alloc/free patterns?
Yes, that's a very valid concern. My thinking here is that the leftover objects
have to be kept somewhere after all, so in this current experimental
implementation I'm trading off future free-path performance for better
allocation performance. It's a pretty tough trade-off either way :/
>
> > When __refill_objects_node() isolates a partial slab and satisfies a
> > bulk allocation from its freelist, the slab can still have a small tail
> > of free objects left over. Today those objects are freed back to the
> > slab immediately.
> >
> > If the leftover tail is local and small enough to fit, keep it in the
> > current CPU's sheaves instead. This avoids pushing those objects back
> > through the __slab_free slowpath.
>
> So there are two different paths:
>
> 1. When refilling prefilled sheaves, spill objects into ->main and
> ->spare.
> 2. When refilling ->main sheaf, spill objects into ->spare.
The current experimental code is biased toward spilling into the spare sheaf
when possible.
For kernels without preemption enabled, or !RT kernels, the spare sheaf is
generally NULL at that point, so the main sheaf may still end up being the
primary place to absorb the spill...
>
> > Add a helper to obtain both the freelist and its free-object count, and
> > then spill the remaining objects into a percpu sheaf when:
> > - the tail fits in a sheaf
> > - the slab is local to the current CPU
> > - the slab is not pfmemalloc
> > - the target sheaf has enough free space
> >
> > Otherwise keep the existing fallback and free the tail back to the slab.
> >
> > Also add a SHEAF_SPILL stat so the new path can be observed in SLUB
> > stats.
> >
> > On the mmap2 case in the will-it-scale benchmark suite,
>
> > this patch can improve performance by about 2~5%.
>
> Where do you think the improvement comes from? (hopefully w/ some data)
Yes, this is necessary.
>
> e.g.:
> 1. the benefit is from largely or partly from
> reduced contention on n->list_lock.
Before this patch is applied, the mmap benchmark shows the following hot path:
- 7.85% native_queued_spin_lock_slowpath
   - 7.85% _raw_spin_lock_irqsave
      - 3.69% __slab_free
         + 1.84% __refill_objects_node
         + 1.77% __kmem_cache_free_bulk
      + 3.27% __refill_objects_node
With the patch applied, the __refill_objects_node -> __slab_free hotspot goes
away, and the native_queued_spin_lock_slowpath drops to roughly 3.5%. The
remaining lock contention is mostly between __refill_objects_node ->
add_partial and __kmem_cache_free_bulk -> __slab_free.
>
> 2. this change reduces # of alloc slowpath at the cost of increased
> of free slowpath hits, but that's better because the slowpath frees
> are mostly lockless.
The alloc slowpath count remains at 0 both with and without the patch,
whereas free slowpath hits increase by 2x after applying the patch.
>
> 3. the alloc/free pattern of the workload is benefiting from
> spilling objects to the CPU's sheaves.
>
> or something else?
The 2-5% throughput improvement does seem to come with some trade-offs.
The main one is that leftover objects now get hidden in the percpu sheaves,
which reduces the objects on the node partial list and thus indirectly
increases slab alloc/free frequency to about 4x the baseline.
This is a drawback of the current approach. :/
I experimented with several alternative ideas, and the pattern seems fairly
consistent: as soon as leftover objects are hidden at the percpu level, slab
alloc/free churn tends to go up.
>
> > Signed-off-by: Hao Li <hao.li@linux.dev>
> > ---
> >
> > This patch is an exploratory attempt to address the leftover objects and
> > partial slab issues in the refill path, and it is marked as RFC to warmly
> > welcome any feedback, suggestions, and discussion!
>
> Yeah, let's discuss!
Sure! Thanks for the discussion!
>
> By the way, have you also been considering having min-max capacity
> for sheaves? (that I think Vlastimil suggested somewhere)
Yes, I also tried it.
I experimented with using a manually chosen threshold to allow refill to leave
the sheaf in a partially filled state. However, since concurrent frees are
inherently unpredictable, this can only reduce the probability of generating
leftover objects, while at the same time hurting alloc-side throughput. In my
testing the results were not very encouraging: it was hard to observe any
improvement, and in most cases it ended up causing a performance regression.
My impression is that it could be difficult to prevent leftovers proactively.
It may be easier to deal with them after they appear.
Besides, I also tried another idea: maintaining a dedicated spill sheaf in the
barn, protected by the barn lock, and placing leftover objects there. Then,
during refill, barn_replace_empty_sheaf() would first try the spill sheaf, and
if it contained objects, it would swap spill and main, avoiding consumption
from barn->full_list.
With this approach, I still couldn't observe a meaningful performance
change. Slab alloc/free churn was still present, although the increase was
relatively small, at around 1.x
My guess is that while this approach pulls leftovers up to the barn level and
avoids the cost of pushing them back down to the node partial list level, the
serialized nature of the barn lock means leftovers cannot be deposited into the
spill sheaf with high concurrency. As a result, the placement is not fast
enough, and the performance gain remains limited.
--
Thanks,
Hao