linux-mm.kvack.org archive mirror
From: Vlastimil Babka <42.hyeyoo@gmail.com>
To: Vlastimil Babka <vbabka@suse.cz>, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, patches@lists.linux.dev
Subject: Re: [PATCH] mm: remove all the slab allocators
Date: Sat, 1 Apr 2023 20:38:39 +0900
Message-ID: <51b8ae95-da5f-f426-feef-ff10c8c3e583@gmail.com>
In-Reply-To: <20230401094658.11146-1-vbabka@suse.cz>

On 4/1/2023 6:46 PM, Vlastimil Babka wrote:

> As the SLOB removal is on track and the SLAB removal is planned, I have
> realized - why should we stop there and not also remove SLUB? What's a
> slab allocator good for in 2023? RAM sizes are getting larger and the
> modules cheaper [1]. The object constructor trick was perhaps
> interesting in 1994, but not with contemporary CPUs. So all the slab
> allocator does today is add an unnecessary layer of complexity over
> the page allocator.

Oh my goodness, I am on vacation right now and it looks like someone is
pretending to be me.

My account was probably hacked. Please ignore this whole thread.

> Thus, with this patch, all three slab allocators are removed, and only a
> layer that passes everything to the page allocator remains in the slab.h
> and mm/slab_common.c files. This will allow users to gradually
> transition away and use the page allocator directly. To summarize the
> advantages:
>
> - Less code to maintain: over 13k lines are removed by this patch, and
>    more could be removed if I wast^Wspent more time on this, and later as
>    users transition away from the legacy layer. With no separate subsystem
>    left, remove the entry from MAINTAINERS (I hope I can keep the
>    kernel.org account anyway, though).
>
> - Simplified MEMCG_KMEM accounting: while I was lazy and just marked it
>    BROKEN in this patch, it should be trivial to use the page memcg
>    accounting now that we use the page allocator. The per-object
>    accounting went through several iterations in the past and was always
>    complex and added overhead. Page accounting is much simpler by
>    comparison.
>
> - Simplified KASAN and friends: I was also lazy here, so they can't be
>    enabled in this patch, but they should be easy to fix up to work just
>    at the page level.
>
> - Simpler debugging: just use debug_pagealloc=on, no need to look up the
>    exact syntax of the absurdly complex slub_debug parameter.
>
> - Speed: I didn't measure, but the page allocator has pcplists, so it
>    should scale just fine. No need for SLUB's cmpxchg_double() craziness.
>    Maybe that thing could now be removed too? Yeah, I can see just two
>    remaining users.
>
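
In case anyone wonders what would actually be left after such a patch:
as far as I can tell from the description, the surviving kmalloc() and
kfree() in slab.h would boil down to a thin page allocator wrapper,
something like the sketch below. This is my own guess at its shape, not
code from the patch; the __GFP_COMP/compound_order() trick is an
assumption, and ZERO_SIZE_PTR/ZERO_OR_NULL_PTR are assumed to stay
defined in slab.h as they are today.

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Hypothetical pass-through layer: every allocation gets its own
 * power-of-two block of pages straight from the page allocator.
 */
static inline void *kmalloc(size_t size, gfp_t flags)
{
	struct page *page;

	if (unlikely(!size))
		return ZERO_SIZE_PTR;

	/* __GFP_COMP so kfree() can read the order back from the page. */
	page = alloc_pages(flags | __GFP_COMP, get_order(size));

	return page ? page_address(page) : NULL;
}

static inline void kfree(const void *ptr)
{
	struct page *page;

	if (unlikely(ZERO_OR_NULL_PTR(ptr)))
		return;

	page = virt_to_head_page(ptr);
	__free_pages(page, compound_order(page));
}

With a scheme like that, every object smaller than a page occupies at
least one full page, which is presumably where the "Slab:" numbers
below come from.
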
> Any downsides? Let's look at memory usage after virtme boot:
>
> Before (with SLUB):
> Slab:              26304 kB
>
> After:
> Slab:             295592 kB
>
> Well, that's not so bad, see [1].
>
> [1] https://www.theregister.com/2023/03/29/dram_prices_crash/
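
Just to put a number on "not so bad": that is 295592 - 26304 = 269288 kB,
i.e. roughly 263 MiB more, an ~11x increase, for a minimal virtme boot.
Cheap DRAM indeed.
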
> ---
>   MAINTAINERS              |   15 -
>   include/linux/slab.h     |  211 +-
>   include/linux/slab_def.h |  124 -
>   include/linux/slub_def.h |  198 --
>   init/Kconfig             |    2 +-
>   mm/Kconfig               |  134 +-
>   mm/Makefile              |   10 -
>   mm/slab.c                | 4046 ------------------------
>   mm/slab.h                |  426 ---
>   mm/slab_common.c         |  876 ++---
>   mm/slob.c                |  757 -----
>   mm/slub.c                | 6506 --------------------------------------
>   12 files changed, 228 insertions(+), 13077 deletions(-)
>   delete mode 100644 include/linux/slab_def.h
>   delete mode 100644 include/linux/slub_def.h
>   delete mode 100644 mm/slab.c
>   delete mode 100644 mm/slob.c
>   delete mode 100644 mm/slub.c

> diff --git a/MAINTAINERS b/MAINTAINERS
> index 1dc8bd26b6cf..40b05ad03cd0 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -19183,21 +19183,6 @@ F:	drivers/irqchip/irq-sl28cpld.c
>   F:	drivers/pwm/pwm-sl28cpld.c
>   F:	drivers/watchdog/sl28cpld_wdt.c
>   
> -SLAB ALLOCATOR
> -M:	Christoph Lameter <cl@linux.com>
> -M:	Pekka Enberg <penberg@kernel.org>
> -M:	David Rientjes <rientjes@google.com>
> -M:	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> -M:	Andrew Morton <akpm@linux-foundation.org>
> -M:	Vlastimil Babka <vbabka@suse.cz>
> -R:	Roman Gushchin <roman.gushchin@linux.dev>
> -R:	Hyeonggon Yoo <42.hyeyoo@gmail.com>
> -L:	linux-mm@kvack.org
> -S:	Maintained
> -T:	git git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git
> -F:	include/linux/sl?b*.h
> -F:	mm/sl?b



Thread overview: 15+ messages
2023-04-01  9:46 Vlastimil Babka
2023-04-01 10:00 ` Lorenzo Stoakes
2023-04-01 10:25 ` Yosry Ahmed
2023-04-01 11:33 ` Petr Tesařík
2023-04-01 11:38 ` Vlastimil Babka [this message]
2023-04-01 15:15 ` Matthew Wilcox
2023-04-01 18:33 ` David Laight
2023-04-01 18:45   ` Christophe Leroy
2023-04-01 22:04     ` David Laight
2023-04-02  5:09 ` Jeff Xie
2023-04-02  9:09 ` 郭辉
2023-04-02 11:04   ` Matthew Wilcox
2023-04-03  3:51     ` Fabio M. De Francesco
2023-04-03  4:04       ` Fabio M. De Francesco
2023-04-03  4:13       ` Matthew Wilcox
