From: "Vlastimil Babka (SUSE)" <vbabka@kernel.org>
To: Ming Lei <ming.lei@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-block@vger.kernel.org, Harry Yoo <harry.yoo@oracle.com>,
Hao Li <hao.li@linux.dev>, Christoph Hellwig <hch@infradead.org>
Subject: Re: [Regression] mm:slab/sheaves: severe performance regression in cross-CPU slab allocation
Date: Wed, 25 Feb 2026 12:29:26 +0100 [thread overview]
Message-ID: <f3a26a8a-a3db-4133-83c0-7a70faacee9c@kernel.org> (raw)
In-Reply-To: <aZ7BbosIr2FvZFAe@fedora>
On 2/25/26 10:31, Ming Lei wrote:
> Hi Vlastimil,
>
> On Wed, Feb 25, 2026 at 09:45:03AM +0100, Vlastimil Babka (SUSE) wrote:
>> On 2/24/26 21:27, Vlastimil Babka wrote:
>> >
>> > It made sense to me not to refill sheaves when we can't reclaim, but I
>> > didn't anticipate this interaction with mempools. We could change the
>> > mempools, but there might be other users with a similar pattern. Maybe it
>> > would be best to just drop that heuristic from __pcs_replace_empty_main()
>> > (but carefully, as some deadlock avoidance depends on it; we might need
>> > to replace it with e.g. gfpflags_allow_spinning()). I'll send a patch
>> > tomorrow to test this theory, unless someone beats me to it (feel free to).
>> Could you try this then, please? Thanks!
>
> Thanks for working on this issue!
>
> Unfortunately the patch doesn't make a difference in IOPS in the perf test;
> the collected perf profile on Linus' tree (basically 7.0-rc1 with your patch)
> follows:
Hm, that's weird, the slowpath is still prominent in your profile.
I followed your reproducer instructions, although only with a small
virtme-ng-based setup. What's the output of "numactl -H" on yours, btw?
Anyway, what I saw was that my patch raised IOPS substantially, and with
CONFIG_SLUB_STATS=y I could see that
/sys/kernel/slab/bio-248/alloc_slowpath had substantial counts before the
patch and zero afterwards.
Maybe you could also enable CONFIG_SLUB_STATS=y and check which cache(s)
still show significant alloc_slowpath after the patch; that could help.
Thanks!
> ```
> 04cb971e2d28 (HEAD -> master) mm:slab/sheaves: severe performance regression in cross-CPU slab allocation
> a5a9cf3f020f mm: fix NULL NODE_DATA dereference for memoryless nodes on boot
> 7dff99b35460 (origin/master) Remove WARN_ALL_UNSEEDED_RANDOM kernel config option
> 551d44200152 default_gfp(): avoid using the "newfangled" __VA_OPT__ trick
> 6de23f81a5e0 (tag: v7.0-rc1) Linux 7.0-rc1
> ```
>
> + 49.03% 2.00% io_uring [kernel.kallsyms] [k] __blkdev_direct_IO_async
> - 38.66% 1.16% io_uring [kernel.kallsyms] [k] bio_alloc_bioset
> - 37.51% bio_alloc_bioset
> - 34.98% mempool_alloc_noprof
> - 34.87% kmem_cache_alloc_noprof
> - 33.82% ___slab_alloc
> - 30.25% get_from_any_partial
> - 29.59% get_from_partial_node
> - 28.42% __raw_spin_lock_irqsave
> native_queued_spin_lock_slowpath
> + 2.16% allocate_slab
> + 0.60% alloc_from_new_slab
> 0.51% __pcs_replace_empty_main
> 1.58% bio_associate_blkg
> + 1.16% submitter_uring_fn
> + 35.16% 0.30% io_uring [kernel.kallsyms] [k] kmem_cache_alloc_noprof
> + 35.13% 0.12% io_uring [kernel.kallsyms] [k] mempool_alloc_noprof
>
>
> Thanks,
> Ming
>
Thread overview: 21+ messages
2026-02-24 2:52 Ming Lei
2026-02-24 5:00 ` Harry Yoo
2026-02-24 9:07 ` Ming Lei
2026-02-25 5:32 ` Hao Li
2026-02-25 6:54 ` Harry Yoo
2026-02-25 7:06 ` Hao Li
2026-02-25 7:19 ` Harry Yoo
2026-02-25 8:19 ` Hao Li
2026-02-25 8:41 ` Harry Yoo
2026-02-25 8:54 ` Hao Li
2026-02-25 8:21 ` Harry Yoo
2026-02-24 6:51 ` Hao Li
2026-02-24 7:10 ` Harry Yoo
2026-02-24 7:41 ` Hao Li
2026-02-24 20:27 ` Vlastimil Babka
2026-02-25 5:24 ` Harry Yoo
2026-02-25 8:45 ` Vlastimil Babka (SUSE)
2026-02-25 9:31 ` Ming Lei
2026-02-25 11:29 ` Vlastimil Babka (SUSE) [this message]
2026-02-25 12:24 ` Ming Lei
2026-02-25 13:22 ` Vlastimil Babka (SUSE)