From: Vlastimil Babka <vbabka@suse.cz>
To: Harry Yoo <harry.yoo@oracle.com>
Cc: Suren Baghdasaryan <surenb@google.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Christoph Lameter <cl@linux.com>,
David Rientjes <rientjes@google.com>,
Roman Gushchin <roman.gushchin@linux.dev>,
Uladzislau Rezki <urezki@gmail.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
rcu@vger.kernel.org, maple-tree@lists.infradead.org
Subject: Re: [PATCH RFC v3 2/8] slab: add opt-in caching layer of percpu sheaves
Date: Thu, 3 Apr 2025 16:11:22 +0200
Message-ID: <81ffcfee-8f18-4392-a9ce-ff3f60f7b5b1@suse.cz>
In-Reply-To: <Z-5HWApFjrOr7Q8_@harry>
On 4/3/25 10:31, Harry Yoo wrote:
>> +/*
>> + * Bulk free objects to the percpu sheaves.
>> + * Unlike free_to_pcs() this includes the calls to all necessary hooks
>> + * and the fallback to freeing to slab pages.
>> + */
>> +static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
>> +{
>
> [...snip...]
>
>> +next_batch:
>> + if (!localtry_trylock(&s->cpu_sheaves->lock))
>> + goto fallback;
>> +
>> + pcs = this_cpu_ptr(s->cpu_sheaves);
>> +
>> + if (unlikely(pcs->main->size == s->sheaf_capacity)) {
>> +
>> + struct slab_sheaf *empty;
>> +
>> + if (!pcs->spare) {
>> + empty = barn_get_empty_sheaf(pcs->barn);
>> + if (empty) {
>> + pcs->spare = pcs->main;
>> + pcs->main = empty;
>> + goto do_free;
>> + }
>> + goto no_empty;
>
> Maybe a silly question, but if neither alloc_from_pcs_bulk() nor
> free_to_pcs_bulk() allocates empty sheaves (they only sometimes put empty
> or full sheaves in the barn), shouldn't we usually expect no sheaves to be
> in the barn when using the bulk interfaces?
Hm maybe, but with patch 5/8 it becomes cheap to check? And there might be
caches mixing both bulk and individual allocs?
But maybe I should at least add the GFP_NOWAIT attempt to allocate an empty
sheaf to the bulk free path? Can't recall whether I left it out intentionally
or just forgot.
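Something along these lines, i.e. an opportunistic attempt before giving up
(completely untested sketch; alloc_empty_sheaf() here stands for whatever
helper the non-bulk path uses to allocate an empty sheaf, and it might be
better to drop the local lock around the allocation instead of doing it with
the lock held as shown):

	if (!pcs->spare) {
		empty = barn_get_empty_sheaf(pcs->barn);

		if (!empty)
			/* no empty sheaf in the barn, try a cheap allocation */
			empty = alloc_empty_sheaf(s, GFP_NOWAIT);

		if (empty) {
			pcs->spare = pcs->main;
			pcs->main = empty;
			goto do_free;
		}
		goto no_empty;
	}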
>> -static void
>> -init_kmem_cache_node(struct kmem_cache_node *n)
>> +static bool
>> +init_kmem_cache_node(struct kmem_cache_node *n, struct node_barn *barn)
>> {
>
> Why is the return type bool, when it always succeeds?
I guess leftover from earlier versions. Will fix.
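I.e. back to returning void, roughly (sketch only - assuming the node keeps a
pointer to its barn and barn_init() is shorthand for whatever sets up the
barn's lists and lock in the actual patch):

static void
init_kmem_cache_node(struct kmem_cache_node *n, struct node_barn *barn)
{
	/* ... existing n->list_lock / partial list initialization ... */

	if (barn)
		barn_init(barn);
	n->barn = barn;
}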
>> @@ -5421,20 +6295,27 @@ static int init_kmem_cache_nodes(struct kmem_cache *s)
>>
>> for_each_node_mask(node, slab_nodes) {
>> struct kmem_cache_node *n;
>> + struct node_barn *barn = NULL;
>>
>> if (slab_state == DOWN) {
>> early_kmem_cache_node_alloc(node);
>> continue;
>> }
>> +
>> + if (s->cpu_sheaves) {
>> + barn = kmalloc_node(sizeof(*barn), GFP_KERNEL, node);
>> +
>> + if (!barn)
>> + return 0;
>> + }
>> +
>> n = kmem_cache_alloc_node(kmem_cache_node,
>> GFP_KERNEL, node);
>> -
>> - if (!n) {
>> - free_kmem_cache_nodes(s);
>> + if (!n)
>> return 0;
>> - }
>
> Looks like it's leaking the barn
> if the allocation of kmem_cache_node fails?
Oops right, will add kfree(barn) before return 0;
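I.e. the loop body in init_kmem_cache_nodes() would end up roughly like
(untested, only the kfree() is new):

		if (s->cpu_sheaves) {
			barn = kmalloc_node(sizeof(*barn), GFP_KERNEL, node);

			if (!barn)
				return 0;
		}

		n = kmem_cache_alloc_node(kmem_cache_node,
					  GFP_KERNEL, node);
		if (!n) {
			kfree(barn);
			return 0;
		}

		init_kmem_cache_node(n, barn);

		s->node[node] = n;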
>
>> - init_kmem_cache_node(n);
>> + init_kmem_cache_node(n, barn);
>> +
>> s->node[node] = n;
>> }
>> return 1;
>> @@ -6005,12 +6891,24 @@ static int slab_mem_going_online_callback(void *arg)
>> */
>> mutex_lock(&slab_mutex);
>> list_for_each_entry(s, &slab_caches, list) {
>> + struct node_barn *barn = NULL;
>> +
>> /*
>> * The structure may already exist if the node was previously
>> * onlined and offlined.
>> */
>> if (get_node(s, nid))
>> continue;
>> +
>> + if (s->cpu_sheaves) {
>> + barn = kmalloc_node(sizeof(*barn), GFP_KERNEL, nid);
>> +
>> + if (!barn) {
>> + ret = -ENOMEM;
>> + goto out;
>> + }
>> + }
>> +
>
> Ditto.
>
> Otherwise looks good to me :)
Thanks a lot!
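BTW, for the slab_mem_going_online_callback() hunk above, the rest of that
loop body isn't quoted here, but assuming it allocates the kmem_cache_node
right after the barn like init_kmem_cache_nodes() does, its failure path
would get the same treatment, roughly:

		n = kmem_cache_alloc_node(kmem_cache_node, GFP_KERNEL, nid);
		if (!n) {
			kfree(barn);
			ret = -ENOMEM;
			goto out;
		}

		init_kmem_cache_node(n, barn);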