linux-mm.kvack.org archive mirror
From: Vlastimil Babka <vbabka@suse.cz>
To: Michal Hocko <mhocko@suse.com>
Cc: lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org,
	bpf <bpf@vger.kernel.org>
Subject: Re: [Lsf-pc] [LSF/MM/BPF TOPIC] SLUB: what's next?
Date: Thu, 2 May 2024 11:26:34 +0200	[thread overview]
Message-ID: <67304905-57d7-47f5-937b-2c4fb95d13ba@suse.cz> (raw)
In-Reply-To: <ZjNH9oerqOyxDokU@tiehlicka>



On 5/2/24 09:59, Michal Hocko wrote:
> On Tue 30-04-24 17:42:18, Vlastimil Babka wrote:
>> Hi,
>>
>> I'd like to propose a session about the next steps for SLUB. This is
>> different from the BOF about sheaves that Matthew suggested, which would
>> not be suitable for the whole group since it's not fleshed out enough yet.
>> But the session could be scheduled after the BOF, so if we do brainstorm
>> something promising there, the result could be discussed as part of the
>> full session.
>>
>> Aside from that my preliminary plan is to discuss:
>>
>> - what was made possible by reducing the slab allocator implementations to
>> a single one, and what else could now be done with a single implementation
>>
>> - the in-progress work (for now in the context of the maple tree) on SLUB
>> per-cpu array caches and preallocation
>>
>> - what functionality would SLUB need to gain so that the extra caching done
>> by the bpf allocator on top wouldn't be necessary? (kernel/bpf/memalloc.c)
>>
>> - similar wrt lib/objpool.c (did you even notice it was added? :)
>>
>> - maybe the mempool functionality could be better integrated as well?
>>
>> - are there more cases where people have invented layers outside mm that
>> could be integrated with some effort? IIRC io_uring also has some caching
>> on top currently...
>>
>> - better/more efficient memcg integration?
>>
>> - any other features people would like SLUB to have?
> 
> Thanks a lot, Vlastimil. This is quite a list. Do you think this fits
> into a single time slot, or would it benefit from being split into 2
> slots?

I think a single slot is fine; we could schedule another one later if we
don't fit.



Thread overview: 6+ messages
2024-04-30 15:42 Vlastimil Babka
2024-05-01  9:23 ` Alexei Starovoitov
2024-05-02  7:59 ` [Lsf-pc] " Michal Hocko
2024-05-02  9:26   ` Vlastimil Babka [this message]
2024-05-06 21:04     ` Roman Gushchin
2024-05-20 20:52 ` Vlastimil Babka
