From: Suren Baghdasaryan <surenb@google.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>,
Christoph Lameter <cl@linux.com>,
David Rientjes <rientjes@google.com>,
Roman Gushchin <roman.gushchin@linux.dev>,
Harry Yoo <harry.yoo@oracle.com>,
Uladzislau Rezki <urezki@gmail.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
rcu@vger.kernel.org, maple-tree@lists.infradead.org
Subject: Re: [PATCH v4 3/9] slab: sheaf prefilling for guaranteed allocations
Date: Tue, 6 May 2025 15:54:29 -0700
Message-ID: <CAJuCfpHF3xHiDqzSMLUiR+RTG0Y-D+s0TfPchu8bOOyT4K-9TA@mail.gmail.com>
In-Reply-To: <20250425-slub-percpu-caches-v4-3-8a636982b4a4@suse.cz>
On Fri, Apr 25, 2025 at 1:28 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> Add functions for efficient guaranteed allocations e.g. in a critical
> section that cannot sleep, when the exact number of allocations is not
> known beforehand, but an upper limit can be calculated.
>
> kmem_cache_prefill_sheaf() returns a sheaf containing at least given
> number of objects.
>
> kmem_cache_alloc_from_sheaf() will allocate an object from the sheaf
> and is guaranteed not to fail until depleted.
>
> kmem_cache_return_sheaf() is for giving the sheaf back to the slab
> allocator after the critical section. This will also attempt to refill
> it to the cache's sheaf capacity for better efficiency of sheaves
> handling, but it's not strictly necessary for this refill to succeed.
>
> kmem_cache_refill_sheaf() can be used to refill a previously obtained
> sheaf to requested size. If the current size is sufficient, it does
> nothing. If the requested size exceeds cache's sheaf_capacity and the
> sheaf's current capacity, the sheaf will be replaced with a new one,
> hence the indirect pointer parameter.
>
> kmem_cache_sheaf_size() can be used to query the current size.
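Maybe worth showing the expected calling pattern in the commit log or
kerneldoc? A minimal sketch of my understanding below - my_cache,
my_lock and calc_upper_bound() are made-up placeholders, not part of
this patch:

  struct slab_sheaf *sheaf;
  void *obj;
  /* upper bound of allocations inside the critical section */
  unsigned int max_needed = calc_upper_bound();

  sheaf = kmem_cache_prefill_sheaf(my_cache, GFP_KERNEL, max_needed);
  if (!sheaf)
          return -ENOMEM;

  spin_lock(&my_lock);    /* no sleeping from here on */

  /* cannot fail until max_needed objects have been taken */
  /* gfp here only conveys __GFP_ZERO / __GFP_ACCOUNT */
  obj = kmem_cache_alloc_from_sheaf(my_cache, __GFP_ZERO, sheaf);

  spin_unlock(&my_lock);

  /* attempts to refill the sheaf for reuse, or flushes and frees it */
  kmem_cache_return_sheaf(my_cache, GFP_KERNEL, sheaf);

Is that right?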
>
> The implementation supports requesting sizes that exceed cache's
> sheaf_capacity, but it is not efficient - such "oversize" sheaves are
> allocated fresh in kmem_cache_prefill_sheaf() and flushed and freed
> immediately by kmem_cache_return_sheaf(). kmem_cache_refill_sheaf()
> might be especially inefficient when replacing a sheaf with a new one of
> a larger capacity. It is therefore better to size the cache's
> sheaf_capacity appropriately so that oversize sheaves are exceptional.
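And for the indirect sheafp parameter - IIUC a caller that later finds
it needs more objects than it prefilled would do something like this
(the sizes are hypothetical):

  /* prefilled for 10 objects earlier, now up to 100 are needed */
  if (kmem_cache_refill_sheaf(my_cache, GFP_KERNEL, &sheaf, 100))
          /* on failure the existing sheaf is left intact */
          return -ENOMEM;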
>
> CONFIG_SLUB_STATS counters are added for sheaf prefill and return
> operations. A prefill or return is considered _fast when it is able to
> grab or return a percpu spare sheaf (even if the sheaf needs a refill to
> satisfy the request, as those should amortize over time), and _slow
> otherwise (when the barn or even sheaf allocation/freeing has to be
> involved). sheaf_prefill_oversize is provided to determine how many
> prefills were oversize (a counter for oversize returns is not necessary
> as every oversize prefill results in an oversize return).
>
> When slub_debug is enabled for a cache with sheaves, no percpu sheaves
> exist for it, but the prefill functionality is still provided simply by
> all prefilled sheaves becoming oversize. If percpu sheaves are not
> created for a cache due to not passing the sheaf_capacity argument on
> cache creation, the prefills also work through oversize sheaves, but
> there's a WARN_ON_ONCE() to indicate the omission.
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> Reviewed-by: Suren Baghdasaryan <surenb@google.com>
> ---
> include/linux/slab.h | 16 ++++
> mm/slub.c | 265 +++++++++++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 281 insertions(+)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 4cb495d55fc58c70a992ee4782d7990ce1c55dc6..b0a9ba33abae22bf38cbf1689e3c08bb0b05002f 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -829,6 +829,22 @@ void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t flags,
> int node) __assume_slab_alignment __malloc;
> #define kmem_cache_alloc_node(...) alloc_hooks(kmem_cache_alloc_node_noprof(__VA_ARGS__))
>
> +struct slab_sheaf *
> +kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int size);
> +
> +int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp,
> + struct slab_sheaf **sheafp, unsigned int size);
> +
> +void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp,
> + struct slab_sheaf *sheaf);
> +
> +void *kmem_cache_alloc_from_sheaf_noprof(struct kmem_cache *cachep, gfp_t gfp,
> + struct slab_sheaf *sheaf) __assume_slab_alignment __malloc;
> +#define kmem_cache_alloc_from_sheaf(...) \
> + alloc_hooks(kmem_cache_alloc_from_sheaf_noprof(__VA_ARGS__))
> +
> +unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf);
> +
> /*
> * These macros allow declaring a kmem_buckets * parameter alongside size, which
> * can be compiled out with CONFIG_SLAB_BUCKETS=n so that a large number of call
> diff --git a/mm/slub.c b/mm/slub.c
> index 6f31a27b5d47fa6621fa8af6d6842564077d4b60..724266fdd996c091f1f0b34012c5179f17dfa422 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -384,6 +384,11 @@ enum stat_item {
> BARN_GET_FAIL, /* Failed to get full sheaf from barn */
> BARN_PUT, /* Put full sheaf to barn */
> BARN_PUT_FAIL, /* Failed to put full sheaf to barn */
> + SHEAF_PREFILL_FAST, /* Sheaf prefill grabbed the spare sheaf */
> + SHEAF_PREFILL_SLOW, /* Sheaf prefill found no spare sheaf */
> + SHEAF_PREFILL_OVERSIZE, /* Allocation of oversize sheaf for prefill */
> + SHEAF_RETURN_FAST, /* Sheaf return reattached spare sheaf */
> + SHEAF_RETURN_SLOW, /* Sheaf return could not reattach spare */
> NR_SLUB_STAT_ITEMS
> };
>
> @@ -445,6 +450,8 @@ struct slab_sheaf {
> union {
> struct rcu_head rcu_head;
> struct list_head barn_list;
> + /* only used for prefilled sheaves */
> + unsigned int capacity;
> };
> struct kmem_cache *cache;
> unsigned int size;
> @@ -2795,6 +2802,30 @@ static void barn_put_full_sheaf(struct node_barn *barn, struct slab_sheaf *sheaf
> spin_unlock_irqrestore(&barn->lock, flags);
> }
>
> +static struct slab_sheaf *barn_get_full_or_empty_sheaf(struct node_barn *barn)
> +{
> + struct slab_sheaf *sheaf = NULL;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&barn->lock, flags);
> +
> + if (barn->nr_full) {
> + sheaf = list_first_entry(&barn->sheaves_full, struct slab_sheaf,
> + barn_list);
> + list_del(&sheaf->barn_list);
> + barn->nr_full--;
> + } else if (barn->nr_empty) {
> + sheaf = list_first_entry(&barn->sheaves_empty,
> + struct slab_sheaf, barn_list);
> + list_del(&sheaf->barn_list);
> + barn->nr_empty--;
> + }
> +
> + spin_unlock_irqrestore(&barn->lock, flags);
> +
> + return sheaf;
> +}
> +
> /*
> * If a full sheaf is available, return it and put the supplied empty one to
> * barn. We ignore the limit on empty sheaves as the number of sheaves doesn't
> @@ -4905,6 +4936,230 @@ void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t gfpflags, int nod
> }
> EXPORT_SYMBOL(kmem_cache_alloc_node_noprof);
>
> +/*
> + * returns a sheaf that has least the requested size
s/least/at least ?
> + * when prefilling is needed, do so with given gfp flags
> + *
> + * return NULL if sheaf allocation or prefilling failed
> + */
> +struct slab_sheaf *
> +kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int size)
> +{
> + struct slub_percpu_sheaves *pcs;
> + struct slab_sheaf *sheaf = NULL;
> +
> + if (unlikely(size > s->sheaf_capacity)) {
> +
> + /*
> + * slab_debug disables cpu sheaves intentionally so all
> + * prefilled sheaves become "oversize" and we give up on
> + * performance for the debugging.
> + * Creating a cache without sheaves and then requesting a
> + * prefilled sheaf is however not expected, so warn.
> + */
> + WARN_ON_ONCE(s->sheaf_capacity == 0 &&
> + !(s->flags & SLAB_DEBUG_FLAGS));
> +
> + sheaf = kzalloc(struct_size(sheaf, objects, size), gfp);
> + if (!sheaf)
> + return NULL;
> +
> + stat(s, SHEAF_PREFILL_OVERSIZE);
> + sheaf->cache = s;
> + sheaf->capacity = size;
> +
> + if (!__kmem_cache_alloc_bulk(s, gfp, size,
> + &sheaf->objects[0])) {
> + kfree(sheaf);
Not sure if we should have SHEAF_PREFILL_OVERSIZE_FAIL accounting as well here.
> + return NULL;
> + }
> +
> + sheaf->size = size;
> +
> + return sheaf;
> + }
> +
> + local_lock(&s->cpu_sheaves->lock);
> + pcs = this_cpu_ptr(s->cpu_sheaves);
> +
> + if (pcs->spare) {
> + sheaf = pcs->spare;
> + pcs->spare = NULL;
> + stat(s, SHEAF_PREFILL_FAST);
> + } else {
> + stat(s, SHEAF_PREFILL_SLOW);
> + sheaf = barn_get_full_or_empty_sheaf(pcs->barn);
> + if (sheaf && sheaf->size)
> + stat(s, BARN_GET);
> + else
> + stat(s, BARN_GET_FAIL);
> + }
> +
> + local_unlock(&s->cpu_sheaves->lock);
> +
> + if (!sheaf)
> + sheaf = alloc_empty_sheaf(s, gfp);
> +
> + if (sheaf && sheaf->size < size) {
> + if (refill_sheaf(s, sheaf, gfp)) {
> + sheaf_flush_unused(s, sheaf);
> + free_empty_sheaf(s, sheaf);
> + sheaf = NULL;
> + }
> + }
> +
> + if (sheaf)
> + sheaf->capacity = s->sheaf_capacity;
> +
> + return sheaf;
> +}
> +
> +/*
> + * Use this to return a sheaf obtained by kmem_cache_prefill_sheaf()
> + *
> + * If the sheaf cannot simply become the percpu spare sheaf, but there's space
> + * for a full sheaf in the barn, we try to refill the sheaf back to the cache's
> + * sheaf_capacity to avoid handling partially full sheaves.
> + *
> + * If the refill fails because gfp is e.g. GFP_NOWAIT, or the barn is full, the
> + * sheaf is instead flushed and freed.
> + */
> +void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp,
> + struct slab_sheaf *sheaf)
> +{
> + struct slub_percpu_sheaves *pcs;
> + bool refill = false;
> + struct node_barn *barn;
> +
> + if (unlikely(sheaf->capacity != s->sheaf_capacity)) {
> + sheaf_flush_unused(s, sheaf);
> + kfree(sheaf);
> + return;
> + }
> +
> + local_lock(&s->cpu_sheaves->lock);
> + pcs = this_cpu_ptr(s->cpu_sheaves);
> +
> + if (!pcs->spare) {
> + pcs->spare = sheaf;
> + sheaf = NULL;
> + stat(s, SHEAF_RETURN_FAST);
> + } else if (data_race(pcs->barn->nr_full) < MAX_FULL_SHEAVES) {
> + barn = pcs->barn;
> + refill = true;
> + }
> +
> + local_unlock(&s->cpu_sheaves->lock);
> +
> + if (!sheaf)
> + return;
> +
> + stat(s, SHEAF_RETURN_SLOW);
> +
> + /*
> + * if the barn is full of full sheaves or we fail to refill the sheaf,
> + * simply flush and free it
> + */
> + if (!refill || refill_sheaf(s, sheaf, gfp)) {
> + sheaf_flush_unused(s, sheaf);
> + free_empty_sheaf(s, sheaf);
> + return;
> + }
> +
> + /* we racily determined the sheaf would fit, so now force it */
> + barn_put_full_sheaf(barn, sheaf);
> + stat(s, BARN_PUT);
> +}
> +
> +/*
> + * refill a sheaf previously returned by kmem_cache_prefill_sheaf to at least
> + * the given size
> + *
> + * the sheaf might be replaced by a new one when requesting more than
> + * s->sheaf_capacity objects. If such a replacement is necessary but the
> + * refill fails (returning -ENOMEM), the existing sheaf is left intact
> + *
> + * In practice we always refill to the sheaf's full capacity.
> + */
> +int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp,
> + struct slab_sheaf **sheafp, unsigned int size)
> +{
> + struct slab_sheaf *sheaf;
> +
> + /*
> + * TODO: do we want to support *sheaf == NULL to be equivalent of
> + * kmem_cache_prefill_sheaf() ?
> + */
> + if (!sheafp || !(*sheafp))
> + return -EINVAL;
> +
> + sheaf = *sheafp;
> + if (sheaf->size >= size)
> + return 0;
> +
> + if (likely(sheaf->capacity >= size)) {
> + if (likely(sheaf->capacity == s->sheaf_capacity))
> + return refill_sheaf(s, sheaf, gfp);
> +
> + if (!__kmem_cache_alloc_bulk(s, gfp, sheaf->capacity - sheaf->size,
> + &sheaf->objects[sheaf->size])) {
> + return -ENOMEM;
> + }
> + sheaf->size = sheaf->capacity;
> +
> + return 0;
> + }
> +
> + /*
> + * We had a regular sized sheaf and need an oversize one, or we had an
> + * oversize one already but need a larger one now.
> + * This should be a very rare path so let's not complicate it.
> + */
> + sheaf = kmem_cache_prefill_sheaf(s, gfp, size);
> + if (!sheaf)
> + return -ENOMEM;
> +
> + kmem_cache_return_sheaf(s, gfp, *sheafp);
> + *sheafp = sheaf;
> + return 0;
> +}
> +
> +/*
> + * Allocate from a sheaf obtained by kmem_cache_prefill_sheaf()
> + *
> + * Guaranteed not to fail for as many allocations as the requested prefill size.
> + * After the sheaf is emptied, it fails - no fallback to the slab cache itself.
> + *
> + * The gfp parameter is meant only to specify __GFP_ZERO or __GFP_ACCOUNT;
> + * memcg charging is forced over the limit if necessary, to avoid failure.
> + */
> +void *
> +kmem_cache_alloc_from_sheaf_noprof(struct kmem_cache *s, gfp_t gfp,
> + struct slab_sheaf *sheaf)
> +{
> + void *ret = NULL;
> + bool init;
> +
> + if (sheaf->size == 0)
> + goto out;
> +
> + ret = sheaf->objects[--sheaf->size];
> +
> + init = slab_want_init_on_alloc(gfp, s);
> +
> + /* add __GFP_NOFAIL to force successful memcg charging */
> + slab_post_alloc_hook(s, NULL, gfp | __GFP_NOFAIL, 1, &ret, init, s->object_size);
> +out:
> + trace_kmem_cache_alloc(_RET_IP_, ret, s, gfp, NUMA_NO_NODE);
> +
> + return ret;
> +}
> +
> +unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf)
> +{
> + return sheaf->size;
> +}
> /*
> * To avoid unnecessary overhead, we pass through large allocation requests
> * directly to the page allocator. We use __GFP_COMP, because we will need to
> @@ -8423,6 +8678,11 @@ STAT_ATTR(BARN_GET, barn_get);
> STAT_ATTR(BARN_GET_FAIL, barn_get_fail);
> STAT_ATTR(BARN_PUT, barn_put);
> STAT_ATTR(BARN_PUT_FAIL, barn_put_fail);
> +STAT_ATTR(SHEAF_PREFILL_FAST, sheaf_prefill_fast);
> +STAT_ATTR(SHEAF_PREFILL_SLOW, sheaf_prefill_slow);
> +STAT_ATTR(SHEAF_PREFILL_OVERSIZE, sheaf_prefill_oversize);
> +STAT_ATTR(SHEAF_RETURN_FAST, sheaf_return_fast);
> +STAT_ATTR(SHEAF_RETURN_SLOW, sheaf_return_slow);
> #endif /* CONFIG_SLUB_STATS */
>
> #ifdef CONFIG_KFENCE
> @@ -8523,6 +8783,11 @@ static struct attribute *slab_attrs[] = {
> &barn_get_fail_attr.attr,
> &barn_put_attr.attr,
> &barn_put_fail_attr.attr,
> + &sheaf_prefill_fast_attr.attr,
> + &sheaf_prefill_slow_attr.attr,
> + &sheaf_prefill_oversize_attr.attr,
> + &sheaf_return_fast_attr.attr,
> + &sheaf_return_slow_attr.attr,
> #endif
> #ifdef CONFIG_FAILSLAB
> &failslab_attr.attr,
>
> --
> 2.49.0
>