From: Shakeel Butt <shakeel.butt@linux.dev>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org,
bpf <bpf@vger.kernel.org>, Christoph Lameter <cl@linux.com>,
David Rientjes <rientjes@google.com>,
Hyeonggon Yoo <42.hyeyoo@gmail.com>,
"Uladzislau Rezki (Sony)" <urezki@gmail.com>,
Alexei Starovoitov <ast@kernel.org>
Subject: Re: [LSF/MM/BPF TOPIC] SLUB allocator, mainly the sheaves caching layer
Date: Mon, 24 Feb 2025 10:02:09 -0800 [thread overview]
Message-ID: <e2fz26kcbni37rp2rdqvac7mljvrglvtzmkivfpsnibubu3g3t@blz27xo4honn> (raw)
In-Reply-To: <14422cf1-4a63-4115-87cb-92685e7dd91b@suse.cz>
On Mon, Feb 24, 2025 at 05:13:25PM +0100, Vlastimil Babka wrote:
> Hi,
>
> I'd like to propose a session about the SLUB allocator.
>
> Mainly I would like to discuss the addition of the sheaves caching layer,
> the latest RFC posted at [1].
>
> The goals of that work is to:
>
> - Reduce fastpath overhead. The current freeing fastpath only can be used if
> the same target slab is still the cpu slab, which can be only expected for a
> very short term allocations. Further improvements should come from the new
> local_trylock_t primitive.
>
> - Improve efficiency of users such as like maple tree, thanks to more
> efficient preallocations, and kfree_rcu batching/reusal
>
> - Hopefully also facilitate further changes needed for bpf allocations, also
> via local_trylock_t, that could possibly extend to the other parts of the
> implementation as needed.
>
> The controversial discussion points I expect about this approach are:
>
> - Either sheaves will not support NUMA restrictions (as in current RFC), or
> bring back the alien cache flushing issues of SLAB (or there's a better idea?)
>
> - Will it be possible to eventually have sheaves enabled for every cache and
> replace the current slub's fastpaths with it? Arguably these are also not
> very efficient when NUMA-restricted allocations are requested for varying
> NUMA nodes (cpu slab is flushed if it's from a wrong node, to load a slab
> from the requested node).
>
> Besides sheaves, I'd like to summarize recent kfree_rcu() changes and we
> could discuss further improvements to that.
>
> Also we can discuss what's needed to support bpf allocations. I've talked
> about it last year, but then focused on other things, so Alexei has been
> driving that recently (so far in the page allocator).
What about pre-memcg-charged sheaves? We had to disable memcg charging
of some kernel allocations and I think sheaves can help in reenabling
it.
next prev parent reply other threads:[~2025-02-24 18:02 UTC|newest]
Thread overview: 10+ messages / expand[flat|nested] mbox.gz Atom feed top
2025-02-24 16:13 Vlastimil Babka
2025-02-24 18:02 ` Shakeel Butt [this message]
2025-02-24 18:15 ` Vlastimil Babka
2025-02-24 20:52 ` Shakeel Butt
2025-02-24 18:46 ` Mateusz Guzik
2025-02-24 21:12 ` Shakeel Butt
2025-02-24 22:21 ` Mateusz Guzik
2025-02-26 0:17 ` Christoph Lameter (Ampere)
2025-03-05 10:26 ` Vlastimil Babka
2025-03-25 17:43 ` Vlastimil Babka
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=e2fz26kcbni37rp2rdqvac7mljvrglvtzmkivfpsnibubu3g3t@blz27xo4honn \
--to=shakeel.butt@linux.dev \
--cc=42.hyeyoo@gmail.com \
--cc=ast@kernel.org \
--cc=bpf@vger.kernel.org \
--cc=cl@linux.com \
--cc=linux-mm@kvack.org \
--cc=lsf-pc@lists.linux-foundation.org \
--cc=rientjes@google.com \
--cc=urezki@gmail.com \
--cc=vbabka@suse.cz \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox