From: Harry Yoo <harry.yoo@oracle.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Christoph Lameter <cl@gentwo.org>,
David Rientjes <rientjes@google.com>,
Roman Gushchin <roman.gushchin@linux.dev>,
Uladzislau Rezki <urezki@gmail.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Suren Baghdasaryan <surenb@google.com>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Alexei Starovoitov <ast@kernel.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-rt-devel@lists.linux.dev, bpf@vger.kernel.org,
kasan-dev@googlegroups.com
Subject: Re: [PATCH RFC 09/19] slab: add optimized sheaf refill from partial list
Date: Mon, 27 Oct 2025 16:20:56 +0900
Message-ID: <aP8dWDNiHVpAe7ak@hyeyoo>
In-Reply-To: <20251023-sheaves-for-all-v1-9-6ffa2c9941c0@suse.cz>
On Thu, Oct 23, 2025 at 03:52:31PM +0200, Vlastimil Babka wrote:
> At this point we have sheaves enabled for all caches, but their refill
> is done via __kmem_cache_alloc_bulk() which relies on cpu (partial)
> slabs - now a redundant caching layer that we are about to remove.
>
> The refill will thus be done from slabs on the node partial list.
> Introduce new functions that can do that in an optimized way as it's
> easier than modifying the __kmem_cache_alloc_bulk() call chain.
>
> Extend struct partial_context so it can return a list of slabs from the
> partial list with the sum of free objects in them within the requested
> min and max.
>
> Introduce get_partial_node_bulk() that removes slabs from the node's
> partial list and returns them in the list.
>
> Introduce get_freelist_nofreeze() which grabs the freelist without
> freezing the slab.
>
> Introduce __refill_objects() that uses the functions above to fill an
> array of objects. It has to handle the possibility that the slabs will
> contain more objects than were requested, due to concurrent freeing of
> objects to those slabs. When no more slabs on partial lists are
> available, it will allocate new slabs.
>
> Finally, switch refill_sheaf() to use __refill_objects().
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> mm/slub.c | 235 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 230 insertions(+), 5 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index a84027fbca78..e2b052657d11 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3508,6 +3511,69 @@ static inline void put_cpu_partial(struct kmem_cache *s, struct slab *slab,
> #endif
> static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags);
>
> +static bool get_partial_node_bulk(struct kmem_cache *s,
> + struct kmem_cache_node *n,
> + struct partial_context *pc)
> +{
> + struct slab *slab, *slab2;
> + unsigned int total_free = 0;
> + unsigned long flags;
> +
> + /*
> + * Racy check. If we mistakenly see no partial slabs then we
> + * just allocate an empty slab. If we mistakenly try to get a
> + * partial slab and there is none available then get_partial()
> + * will return NULL.
> + */
> + if (!n || !n->nr_partial)
> + return false;
> +
> + INIT_LIST_HEAD(&pc->slabs);
> +
> + if (gfpflags_allow_spinning(pc->flags))
> + spin_lock_irqsave(&n->list_lock, flags);
> + else if (!spin_trylock_irqsave(&n->list_lock, flags))
> + return false;
> +
> + list_for_each_entry_safe(slab, slab2, &n->partial, slab_list) {
> + struct slab slab_counters;
> + unsigned int slab_free;
> +
> + if (!pfmemalloc_match(slab, pc->flags))
> + continue;
> +
> + /*
> + * due to atomic updates done by a racing free we should not
> + * read garbage here, but do a sanity check anyway
> + *
> + * slab_free is a lower bound due to subsequent concurrent
> + * freeing, the caller might get more objects than requested and
> + * must deal with it
> + */
> + slab_counters.counters = data_race(READ_ONCE(slab->counters));
> + slab_free = slab_counters.objects - slab_counters.inuse;
> +
> + if (unlikely(slab_free > oo_objects(s->oo)))
> + continue;
> +
> +	/* we already have the min and this would get us over the max */
> + if (total_free >= pc->min_objects
> + && total_free + slab_free > pc->max_objects)
> + continue;
> +
> + remove_partial(n, slab);
> +
> + list_add(&slab->slab_list, &pc->slabs);
> +
> + total_free += slab_free;
> + if (total_free >= pc->max_objects)
> + break;
Since slabs that would get us over the max are skipped, doesn't this
end up iterating over the whole n->partial list unless the sum of free
objects lands exactly on pc->max_objects?
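E.g. an untested sketch of what I mean, reusing the existing check:

	/* we already have the min and this would get us over the max */
	if (total_free >= pc->min_objects
	    && total_free + slab_free > pc->max_objects)
		break;	/* was: continue */

That would keep the scan bounded, at the cost of possibly returning
fewer objects when a later slab could still have fit under the max.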
> + }
> +
> + spin_unlock_irqrestore(&n->list_lock, flags);
> + return total_free > 0;
> +}
> +
> /*
> * Try to allocate a partial slab from a specific node.
> */
> @@ -4436,6 +4502,38 @@ static inline void *get_freelist(struct kmem_cache *s, struct slab *slab)
> return freelist;
> }
>
> /*
> * Freeze the partial slab and return the pointer to the freelist.
> */
> @@ -5373,6 +5471,9 @@ static int __prefill_sheaf_pfmemalloc(struct kmem_cache *s,
> return ret;
> }
>
> +static int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
> + size_t size, void **p);
> +
> /*
> * returns a sheaf that has at least the requested size
> * when prefilling is needed, do so with given gfp flags
> @@ -7409,6 +7510,130 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
> }
> EXPORT_SYMBOL(kmem_cache_free_bulk);
>
> +static unsigned int
> +__refill_objects(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int min,
> + unsigned int max)
> +{
> + struct slab *slab, *slab2;
> + struct partial_context pc;
> + unsigned int refilled = 0;
> + unsigned long flags;
> + void *object;
> + int node;
> +
> + pc.flags = gfp;
> + pc.min_objects = min;
> + pc.max_objects = max;
> +
> + node = numa_mem_id();
> +
> + /* TODO: consider also other nodes? */
> + if (!get_partial_node_bulk(s, get_node(s, node), &pc))
> + goto new_slab;
> +
> + list_for_each_entry_safe(slab, slab2, &pc.slabs, slab_list) {
> +
> + list_del(&slab->slab_list);
> +
> + object = get_freelist_nofreeze(s, slab);
> +
> + while (object && refilled < max) {
> + p[refilled] = object;
> + object = get_freepointer(s, object);
> + maybe_wipe_obj_freeptr(s, p[refilled]);
> +
> + refilled++;
> + }
> +
> + /*
> +		 * Freelist had more objects than we can accommodate, we need to
> + * free them back. We can treat it like a detached freelist, just
> + * need to find the tail object.
> + */
> + if (unlikely(object)) {
> + void *head = object;
> + void *tail;
> + int cnt = 0;
> +
> + do {
> + tail = object;
> + cnt++;
> + object = get_freepointer(s, object);
> + } while (object);
> + do_slab_free(s, slab, head, tail, cnt, _RET_IP_);
> + }
Maybe we wouldn't have to do this if we put the slabs into a singly
linked list and used the other list word to record the number of free
objects in each slab.
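I.e. something like this on the producer side (purely hypothetical
sketch; pc->head and reusing the list words this way are made up for
illustration):

	/* under n->list_lock in get_partial_node_bulk() */
	remove_partial(n, slab);
	/* single-link the slab and stash the free count we saw */
	slab->slab_list.next = (struct list_head *)pc->head;
	slab->slab_list.prev = (struct list_head *)(unsigned long)slab_free;
	pc->head = slab;

	/* consumer side: lower-bound count recorded for this slab */
	slab_free = (unsigned long)slab->slab_list.prev;

The count is of course stale once we drop the lock (concurrent frees
can grow the freelist), so it's only a lower bound.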
> +
> + if (refilled >= max)
> + break;
> + }
> +
> + if (unlikely(!list_empty(&pc.slabs))) {
> + struct kmem_cache_node *n = get_node(s, node);
> +
> + spin_lock_irqsave(&n->list_lock, flags);
Do we know for sure that a trylock would succeed here, given that we
succeeded in acquiring the lock in get_partial_node_bulk()?
I think the answer is yes, but just to double check :)
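For reference, the acquire side above does:

	if (gfpflags_allow_spinning(pc->flags))
		spin_lock_irqsave(&n->list_lock, flags);
	else if (!spin_trylock_irqsave(&n->list_lock, flags))
		return false;

so if a trylock can in fact fail here, this path would presumably need
a similar non-spinning fallback rather than the unconditional
spin_lock_irqsave().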
> + list_for_each_entry_safe(slab, slab2, &pc.slabs, slab_list) {
> +
> + if (unlikely(!slab->inuse && n->nr_partial >= s->min_partial))
> + continue;
> +
> + list_del(&slab->slab_list);
> + add_partial(n, slab, DEACTIVATE_TO_HEAD);
> + }
> +
> + spin_unlock_irqrestore(&n->list_lock, flags);
> +
> + /* any slabs left are completely free and for discard */
> + list_for_each_entry_safe(slab, slab2, &pc.slabs, slab_list) {
> +
> + list_del(&slab->slab_list);
> + discard_slab(s, slab);
> + }
> + }
> +
> +
> + if (likely(refilled >= min))
> + goto out;
> +
> +new_slab:
> +
> + slab = new_slab(s, pc.flags, node);
> + if (!slab)
> + goto out;
> +
> + stat(s, ALLOC_SLAB);
> + inc_slabs_node(s, slab_nid(slab), slab->objects);
> +
> + /*
> + * TODO: possible optimization - if we know we will consume the whole
> + * slab we might skip creating the freelist?
> + */
> + object = slab->freelist;
> + while (object && refilled < max) {
> + p[refilled] = object;
> + object = get_freepointer(s, object);
> + maybe_wipe_obj_freeptr(s, p[refilled]);
> +
> + slab->inuse++;
> + refilled++;
> + }
> + slab->freelist = object;
> +
> + if (slab->freelist) {
> + struct kmem_cache_node *n = get_node(s, slab_nid(slab));
> +
> + spin_lock_irqsave(&n->list_lock, flags);
> + add_partial(n, slab, DEACTIVATE_TO_HEAD);
> + spin_unlock_irqrestore(&n->list_lock, flags);
If slab_nid(slab) != node, we should check gfpflags_allow_spinning()
and call defer_deactivate_slab() if it returns false?
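Roughly (sketch only; assuming defer_deactivate_slab() from the
kmalloc_nolock work is usable here, and I'm not sure about its second
argument):

	if (slab->freelist) {
		struct kmem_cache_node *n = get_node(s, slab_nid(slab));

		if (gfpflags_allow_spinning(pc.flags)) {
			spin_lock_irqsave(&n->list_lock, flags);
			add_partial(n, slab, DEACTIVATE_TO_HEAD);
			spin_unlock_irqrestore(&n->list_lock, flags);
		} else {
			/* arguments are a guess */
			defer_deactivate_slab(slab, NULL);
		}
	}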
> + }
> +
> + if (refilled < min)
> + goto new_slab;
> +out:
> +
> + return refilled;
> +}
> +
> static inline
> int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> void **p)
>
> --
> 2.51.1
>
--
Cheers,
Harry / Hyeonggon