From: Binder Makin <merimus@google.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: lsf-pc@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org,
	 linux-mm@kvack.org, linux-block@vger.kernel.org,
	bpf@vger.kernel.org,  linux-xfs@vger.kernel.org,
	David Rientjes <rientjes@google.com>,
	 Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	 Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	Roman Gushchin <roman.gushchin@linux.dev>
Subject: Re: [LSF/MM/BPF TOPIC] SLOB+SLAB allocators removal and future SLUB improvements
Date: Fri, 5 May 2023 15:44:00 -0400	[thread overview]
Message-ID: <CAANmLty+yVqN74p_w8VX6=LBTioVKS+b6SHMwoJonoUXgqeXng@mail.gmail.com> (raw)
In-Reply-To: <19acbdbb-fc2f-e198-3d31-850ef53f544e@suse.cz>

Here are the results of my research.
One doc is an overview of the data and the other is a PDF of the raw data.

https://drive.google.com/file/d/1DE8QMri1Rsr7L27fORHFCmwgrMtdfPfu/view?usp=share_link

https://drive.google.com/file/d/1UwnTeqsKB0jgpnZodJ0_cM2bOHx5aR_v/view?usp=share_link

On Thu, Apr 27, 2023 at 4:29 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 4/5/23 21:54, Binder Makin wrote:
> > I'm still running tests to explore some of these questions.
> > The machines I am using are roughly as follows.
> >
> > Intel dual socket 56 total cores
> > 192-384GB ram
> > LEVEL1_ICACHE_SIZE                 32768
> > LEVEL1_DCACHE_SIZE                 32768
> > LEVEL2_CACHE_SIZE                  1048576
> > LEVEL3_CACHE_SIZE                  40370176
> >
> > AMD dual socket 128 total cores
> > 1TB ram
> > LEVEL1_ICACHE_SIZE                 32768
> > LEVEL1_DCACHE_SIZE                 32768
> > LEVEL2_CACHE_SIZE                  524288
> > LEVEL3_CACHE_SIZE                  268435456
> >
> > Arm single socket 64 total cores
> > 256GB ram
> > LEVEL1_ICACHE_SIZE                 65536
> > LEVEL1_DCACHE_SIZE                 65536
> > LEVEL2_CACHE_SIZE                  1048576
> > LEVEL3_CACHE_SIZE                  33554432
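
The LEVEL*_CACHE_SIZE names above are the getconf(1)/sysconf(3)
variables, so the numbers can be reproduced programmatically; a minimal
sketch, assuming glibc's _SC_LEVEL* extensions (they are not in POSIX):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* glibc-specific cache-geometry queries; -1 means unknown */
    printf("L1 icache: %ld\n", sysconf(_SC_LEVEL1_ICACHE_SIZE));
    printf("L1 dcache: %ld\n", sysconf(_SC_LEVEL1_DCACHE_SIZE));
    printf("L2 cache:  %ld\n", sysconf(_SC_LEVEL2_CACHE_SIZE));
    printf("L3 cache:  %ld\n", sysconf(_SC_LEVEL3_CACHE_SIZE));
    return 0;
}

The same values are also visible via "getconf -a | grep CACHE".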
>
> So with "some artifact of different cache layout" I didn't mean the
> different cache sizes of the processors, but possible differences how
> objects end up placed in memory by SLAB vs SLUB causing them to collide in
> the cache of cause false sharing less or more. This kind of interference can
> make interpreting (micro)benchmark results hard.
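
A minimal user-space sketch of that false-sharing effect, for anyone who
wants to reproduce it locally; the 64-byte line size and all names here
are assumptions for illustration, not allocator code:

#include <pthread.h>
#include <stdio.h>

#define ITERS 100000000L

/* two counters on one cache line vs. forced onto separate lines */
static struct { volatile long a, b; } same_line;
static struct { volatile long a; char pad[64]; volatile long b; } split;

static void *bump(void *arg)
{
    volatile long *ctr = arg;

    for (long i = 0; i < ITERS; i++)
        (*ctr)++;
    return NULL;
}

static void race(volatile long *x, volatile long *y, const char *name)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, bump, (void *)x);
    pthread_create(&t2, NULL, bump, (void *)y);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%s done\n", name);  /* time each phase with e.g. perf stat */
}

int main(void)
{
    race(&same_line.a, &same_line.b, "same line");
    race(&split.a, &split.b, "separate lines");
    return 0;
}

Built with "cc -O2 -pthread", the same-line phase typically runs several
times slower; if SLAB and SLUB place hot objects differently relative to
line boundaries, a benchmark can swing the same way without either
allocator being inherently faster.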
>
> Anyway, how I'd hope to approach this topic would be that SLAB removal is
> proposed, and anyone who opposes that because they can't switch from SLAB to
> SLUB would describe why they can't. I'd hope the "why" to be based on
> testing with actual workloads, not just benchmarks. Benchmarks are then of
> course useful if they can indeed distill the reason why the actual workload
> regresses, as then anyone can reproduce that locally and develop/test fixes
> etc. My hope is that if some kind of regression is found (e.g. due to the
> lack of a percpu array in SLUB), it can be dealt with by improving SLUB.
>
> Historically I recall that we (SUSE) objected somewhat to SLAB removal as
> our distro kernels were using it, but we have switched since. Then
> networking had concerns (possibly related to the lack of a percpu array),
> but it seems bulk allocations helped and they use SLUB these days [1].
> And IIRC Google was also sticking to SLAB, which led to some attempts to
> augment SLUB for those workloads years ago, but those were never
> finished. So I'd be curious if we should restart those efforts or can
> just remove SLAB now.
>
> [1] https://lore.kernel.org/all/93665604-5420-be5d-2104-17850288b955@redhat.com/
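
For reference, the bulk API mentioned above is kmem_cache_alloc_bulk() /
kmem_cache_free_bulk() from <linux/slab.h>; a minimal kernel-style sketch
of a caller batching allocations (illustrative only, not taken from any
real user):

#include <linux/errno.h>
#include <linux/slab.h>

#define BATCH 16

static int use_batch(struct kmem_cache *cache)
{
	void *objs[BATCH];
	int n;

	/*
	 * One call instead of BATCH round-trips through
	 * kmem_cache_alloc(); returns the number of objects
	 * allocated, 0 on failure.
	 */
	n = kmem_cache_alloc_bulk(cache, GFP_KERNEL, BATCH, objs);
	if (!n)
		return -ENOMEM;

	/* ... use the objects ... */

	kmem_cache_free_bulk(cache, n, objs);
	return 0;
}

This one-call-per-batch shape is what reportedly helped networking absorb
the loss of SLAB's percpu arrays.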
>
>


Thread overview: 14+ messages
2023-03-14  8:05 Vlastimil Babka
2023-03-14 13:06 ` Matthew Wilcox
2023-03-15  2:54 ` Roman Gushchin
2023-03-16  8:18   ` Vlastimil Babka
2023-03-16 20:20     ` Roman Gushchin
2023-03-22 12:15 ` Binder Makin
2023-03-22 13:02   ` Hyeonggon Yoo
2023-03-22 13:24     ` Binder Makin
2023-03-22 13:30     ` Binder Makin
2023-03-22 12:30 ` Binder Makin
2023-04-04 16:03   ` Vlastimil Babka
2023-04-05 19:54     ` Binder Makin
2023-04-27  8:29       ` Vlastimil Babka
2023-05-05 19:44         ` Binder Makin [this message]
