From: Vlastimil Babka <vbabka@suse.cz>
To: Harry Yoo <harry.yoo@oracle.com>,
Petr Tesarik <ptesarik@suse.com>,
Christoph Lameter <cl@gentwo.org>,
David Rientjes <rientjes@google.com>,
Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hao Li <hao.li@linux.dev>,
Andrew Morton <akpm@linux-foundation.org>,
Uladzislau Rezki <urezki@gmail.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Suren Baghdasaryan <surenb@google.com>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Alexei Starovoitov <ast@kernel.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-rt-devel@lists.linux.dev, bpf@vger.kernel.org,
kasan-dev@googlegroups.com
Subject: Re: [PATCH RFC v2 08/20] slab: add optimized sheaf refill from partial list
Date: Thu, 15 Jan 2026 15:25:59 +0100
Message-ID: <38de0039-e0ea-41c4-a293-400798390ea1@suse.cz>
In-Reply-To: <20260112-sheaves-for-all-v2-8-98225cfb50cf@suse.cz>

On 1/12/26 16:17, Vlastimil Babka wrote:
> At this point we have sheaves enabled for all caches, but their refill
> is done via __kmem_cache_alloc_bulk() which relies on cpu (partial)
> slabs - now a redundant caching layer that we are about to remove.
>
> The refill will thus be done from slabs on the node partial list.
> Introduce new functions that can do that in an optimized way as it's
> easier than modifying the __kmem_cache_alloc_bulk() call chain.
>
> Extend struct partial_context so it can return a list of slabs from the
> partial list with the sum of free objects in them within the requested
> min and max.
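(For reference, a rough sketch of the struct as used below -- existing
fields elided, exact layout per the full patch:

	struct partial_context {
		gfp_t flags;
		...
		unsigned int min_objects;
		unsigned int max_objects;
		/* output: slabs removed from the node partial list */
		struct list_head slabs;
	};
)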
>
> Introduce get_partial_node_bulk() that removes the slabs from the
> partial list and returns them in the list.
>
> Introduce get_freelist_nofreeze() which grabs the freelist without
> freezing the slab.
>
> Introduce alloc_from_new_slab() which can allocate multiple objects from
> a newly allocated slab where we don't need to synchronize with freeing.
> In some aspects it's similar to alloc_single_from_new_slab() but assumes
> the cache is a non-debug one so it can avoid some actions.
>
> Introduce __refill_objects() that uses the functions above to fill an
> array of objects. It has to handle the possibility that the slabs will
> contain more objects than were requested, due to concurrent freeing of
> objects to those slabs. When no more slabs on partial lists are
> available, it will allocate new slabs. It is intended to be used only
> in contexts where spinning is allowed, so add a WARN_ON_ONCE check there.
>
> Finally, switch refill_sheaf() to use __refill_objects(). Sheaves are
> only refilled from contexts that allow spinning, or even blocking.
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
...
> +static unsigned int alloc_from_new_slab(struct kmem_cache *s, struct slab *slab,
> + void **p, unsigned int count, bool allow_spin)
> +{
> + unsigned int allocated = 0;
> + struct kmem_cache_node *n;
> + unsigned long flags;
> + void *object;
> +
> + if (!allow_spin && (slab->objects - slab->inuse) > count) {
> +
> + n = get_node(s, slab_nid(slab));
> +
> + if (!spin_trylock_irqsave(&n->list_lock, flags)) {
> + /* Unlucky, discard newly allocated slab */
> + defer_deactivate_slab(slab, NULL);
This actually does dec_slabs_node() only when slab->frozen is set, which
we don't set here.
> + return 0;
> + }
> + }
> +
> + object = slab->freelist;
> + while (object && allocated < count) {
> + p[allocated] = object;
> + object = get_freepointer(s, object);
> + maybe_wipe_obj_freeptr(s, p[allocated]);
> +
> + slab->inuse++;
> + allocated++;
> + }
> + slab->freelist = object;
> +
> + if (slab->freelist) {
> +
> + if (allow_spin) {
> + n = get_node(s, slab_nid(slab));
> + spin_lock_irqsave(&n->list_lock, flags);
> + }
> + add_partial(n, slab, DEACTIVATE_TO_HEAD);
> + spin_unlock_irqrestore(&n->list_lock, flags);
> + }
So we should do inc_slabs_node() only here, once we know we keep the slab.
This also addresses the problem in 9/20 that Hao Li pointed out...
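IOW something like this (untested sketch, moving the accounting into
alloc_from_new_slab(); inc_slabs_node() is atomic so it shouldn't need
the list_lock, IIUC):

	if (!allow_spin && (slab->objects - slab->inuse) > count) {

		n = get_node(s, slab_nid(slab));

		if (!spin_trylock_irqsave(&n->list_lock, flags)) {
			/* Unlucky, discard newly allocated slab */
			defer_deactivate_slab(slab, NULL);
			return 0;
		}
	}

	/* the slab is ours now, account it; a discarded one is never counted */
	inc_slabs_node(s, slab_nid(slab), slab->objects);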
> + return allocated;
> +}
> +
...
> +static unsigned int
> +__refill_objects(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int min,
> + unsigned int max)
> +{
> + struct slab *slab, *slab2;
> + struct partial_context pc;
> + unsigned int refilled = 0;
> + unsigned long flags;
> + void *object;
> + int node;
> +
> + pc.flags = gfp;
> + pc.min_objects = min;
> + pc.max_objects = max;
> +
> + node = numa_mem_id();
> +
> + if (WARN_ON_ONCE(!gfpflags_allow_spinning(gfp)))
> + return 0;
> +
> + /* TODO: consider also other nodes? */
> + if (!get_partial_node_bulk(s, get_node(s, node), &pc))
> + goto new_slab;
> +
> + list_for_each_entry_safe(slab, slab2, &pc.slabs, slab_list) {
> +
> + list_del(&slab->slab_list);
> +
> + object = get_freelist_nofreeze(s, slab);
> +
> + while (object && refilled < max) {
> + p[refilled] = object;
> + object = get_freepointer(s, object);
> + maybe_wipe_obj_freeptr(s, p[refilled]);
> +
> + refilled++;
> + }
> +
> + /*
> + * Freelist had more objects than we can accommodate, we need to
> + * free them back. We can treat it like a detached freelist, just
> + * need to find the tail object.
> + */
> + if (unlikely(object)) {
> + void *head = object;
> + void *tail;
> + int cnt = 0;
> +
> + do {
> + tail = object;
> + cnt++;
> + object = get_freepointer(s, object);
> + } while (object);
> + do_slab_free(s, slab, head, tail, cnt, _RET_IP_);
> + }
> +
> + if (refilled >= max)
> + break;
> + }
> +
> + if (unlikely(!list_empty(&pc.slabs))) {
> + struct kmem_cache_node *n = get_node(s, node);
> +
> + spin_lock_irqsave(&n->list_lock, flags);
> +
> + list_for_each_entry_safe(slab, slab2, &pc.slabs, slab_list) {
> +
> + if (unlikely(!slab->inuse && n->nr_partial >= s->min_partial))
> + continue;
> +
> + list_del(&slab->slab_list);
> + add_partial(n, slab, DEACTIVATE_TO_HEAD);
> + }
> +
> + spin_unlock_irqrestore(&n->list_lock, flags);
> +
> + /* any slabs left are completely free and for discard */
> + list_for_each_entry_safe(slab, slab2, &pc.slabs, slab_list) {
> +
> + list_del(&slab->slab_list);
> + discard_slab(s, slab);
> + }
> + }
> +
> +
> + if (likely(refilled >= min))
> + goto out;
> +
> +new_slab:
> +
> + slab = new_slab(s, pc.flags, node);
> + if (!slab)
> + goto out;
> +
> + stat(s, ALLOC_SLAB);
> + inc_slabs_node(s, slab_nid(slab), slab->objects);
And remove it from here.
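i.e. just keep

	stat(s, ALLOC_SLAB);

here, with the accounting done in alloc_from_new_slab() as sketched above.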
> +
> + /*
> + * TODO: possible optimization - if we know we will consume the whole
> + * slab we might skip creating the freelist?
> + */
> + refilled += alloc_from_new_slab(s, slab, p + refilled, max - refilled,
> + /* allow_spin = */ true);
> +
> + if (refilled < min)
> + goto new_slab;
> +out:
> +
> + return refilled;
> +}
> +
> static inline
> int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> void **p)
>