From: Vlastimil Babka <vbabka@suse.cz>
To: Roman Gushchin <roman.gushchin@linux.dev>
Cc: lsf-pc@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-block@vger.kernel.org,
	bpf@vger.kernel.org, linux-xfs@vger.kernel.org,
	David Rientjes <rientjes@google.com>,
	Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: Re: [LSF/MM/BPF TOPIC] SLOB+SLAB allocators removal and future SLUB improvements
Date: Thu, 16 Mar 2023 09:18:11 +0100
Message-ID: <c87d4f6c-e947-70b2-f74f-2e5145572123@suse.cz>
In-Reply-To: <ZBEzUN35gOK5igmT@P9FQF9L96D>

On 3/15/23 03:54, Roman Gushchin wrote:
> On Tue, Mar 14, 2023 at 09:05:13AM +0100, Vlastimil Babka wrote:
>> As you're probably aware, my plan is to get rid of SLOB and SLAB, leaving
>> only SLUB going forward. The removal of SLOB seems to be going well: there
>> were no objections to the deprecation, and I've posted v1 of the removal
>> itself [1], so it could be in -next soon.
>> 
>> The immediate benefit of that is that we can allow kfree() (and kfree_rcu())
>> to free objects from kmem_cache_alloc() - something that IIRC at least the
>> xfs people have wanted in the past, and which SLOB was incompatible with.
>> 
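
(For illustration, a minimal sketch of the pattern this enables; "struct foo",
"foo_cache" and foo_example() are made-up names, not anything from an existing
patch:)

#include <linux/slab.h>

struct foo {
        struct rcu_head rcu;
        int bar;
};

static struct kmem_cache *foo_cache;

static int foo_example(void)
{
        struct foo *p;

        foo_cache = kmem_cache_create("foo", sizeof(struct foo), 0,
                                      SLAB_HWCACHE_ALIGN, NULL);
        if (!foo_cache)
                return -ENOMEM;

        p = kmem_cache_alloc(foo_cache, GFP_KERNEL);
        if (!p) {
                kmem_cache_destroy(foo_cache);
                return -ENOMEM;
        }

        /*
         * With SLOB gone this is allowed; previously one had to call
         * kmem_cache_free(foo_cache, p). kfree_rcu(p, rcu) would
         * likewise work for cache-allocated objects.
         */
        kfree(p);

        kmem_cache_destroy(foo_cache);
        return 0;
}
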
>> For SLAB removal I haven't heard any objections yet (but I also haven't
>> deprecated it yet). If there are users who stay on SLAB because particular
>> workloads do better with it than with SLUB, we can discuss why those would
>> regress and what can be done about that in SLUB.
>> 
>> Once we have just one slab allocator in the kernel, we can take a closer
>> look at what users are missing from it that forces them to create their own
>> allocators (e.g. BPF), and consider adding that as a generic implementation
>> in SLUB.
> 
> I guess eventually we want to merge the percpu allocator too.

What exactly do you mean here? Probably not mm/percpu.c, which is too
different from slab, but rather some kind of per-cpu object cache on top of
slab?
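
(Just to make the question concrete, a rough sketch of what "a per-cpu object
cache on top of slab" could look like; all names here are invented for
illustration, assume a single backing kmem_cache, and are not an existing
kernel API:)

#include <linux/percpu.h>
#include <linux/slab.h>

#define PCPU_CACHE_SIZE 16

/* small per-CPU stash of free objects in front of a kmem_cache */
struct obj_cpu_cache {
        unsigned int nr;
        void *objs[PCPU_CACHE_SIZE];
};

static DEFINE_PER_CPU(struct obj_cpu_cache, obj_cache);

static void *cached_alloc(struct kmem_cache *s, gfp_t gfp)
{
        struct obj_cpu_cache *c = get_cpu_ptr(&obj_cache);
        void *obj = NULL;

        if (c->nr)
                obj = c->objs[--c->nr];         /* fast path, no slab call */
        put_cpu_ptr(&obj_cache);

        /* slow path: fall back to the slab allocator (may sleep) */
        return obj ? obj : kmem_cache_alloc(s, gfp);
}

static void cached_free(struct kmem_cache *s, void *obj)
{
        struct obj_cpu_cache *c = get_cpu_ptr(&obj_cache);

        if (c->nr < PCPU_CACHE_SIZE) {
                c->objs[c->nr++] = obj;         /* keep it on this CPU */
                obj = NULL;
        }
        put_cpu_ptr(&obj_cache);

        if (obj)
                kmem_cache_free(s, obj);        /* overflow back to slab */
}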


Thread overview: 14+ messages
2023-03-14  8:05 Vlastimil Babka
2023-03-14 13:06 ` Matthew Wilcox
2023-03-15  2:54 ` Roman Gushchin
2023-03-16  8:18   ` Vlastimil Babka [this message]
2023-03-16 20:20     ` Roman Gushchin
2023-03-22 12:15 ` Binder Makin
2023-03-22 13:02   ` Hyeonggon Yoo
2023-03-22 13:24     ` Binder Makin
2023-03-22 13:30     ` Binder Makin
2023-03-22 12:30 ` Binder Makin
2023-04-04 16:03   ` Vlastimil Babka
2023-04-05 19:54     ` Binder Makin
2023-04-27  8:29       ` Vlastimil Babka
2023-05-05 19:44         ` Binder Makin
