linux-mm.kvack.org archive mirror
From: Zhao Liu <zhao1.liu@intel.com>
To: Hao Li <hao.li@linux.dev>
Cc: Vlastimil Babka <vbabka@suse.cz>, Hao Li <haolee.swjtu@gmail.com>,
	akpm@linux-foundation.org, harry.yoo@oracle.com, cl@gentwo.org,
	rientjes@google.com, roman.gushchin@linux.dev,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	tim.c.chen@intel.com, yu.c.chen@intel.com, zhao1.liu@intel.com
Subject: Re: [PATCH v2] slub: keep empty main sheaf as spare in __pcs_replace_empty_main()
Date: Tue, 20 Jan 2026 16:21:16 +0800	[thread overview]
Message-ID: <aW86/Nc2+bkopFd7@intel.com> (raw)
In-Reply-To: <3ozekmmsscrarwoa7vcytwjn5rxsiyxjrcsirlu3bhmlwtdxzn@s7a6rcxnqadc>

> 1. Machine Configuration
> 
> The topology of my machine is as follows:
> 
> CPU(s):              384
> On-line CPU(s) list: 0-383
> Thread(s) per core:  2
> Core(s) per socket:  96
> Socket(s):           2
> NUMA node(s):        2

It seems like this is a GNR machine - maybe SNC could be enabled.

> Since my machine only has 192 cores when counting physical cores, I had to
> enable SMT to support the higher number of tasks in the LKP test cases. My
> configuration was as follows:
> 
> will-it-scale:
>   mode: process
>   test: mmap2
>   no_affinity: 0
>   smt: 1

For lkp, the smt parameter is disabled by default. I tried with smt=1
locally, and the difference between "with fix" and "w/o fix" is not
significant. Maybe the smt parameter could be set to 0.

On another machine (2 sockets with SNC3 enabled - 6 NUMA nodes), a
similar regression happens once tasks fill up a socket and there are
more get_partial_node() calls.

> Here's the "perf report --no-children -g" output with the patch:
> 
> ```
> +   30.36%  mmap2_processes  [kernel.kallsyms]     [k] perf_iterate_ctx
> -   28.80%  mmap2_processes  [kernel.kallsyms]     [k] native_queued_spin_lock_slowpath
>    - 24.72% testcase
>       - 24.71% __mmap
>          - 24.68% entry_SYSCALL_64_after_hwframe
>             - do_syscall_64
>                - 24.61% ksys_mmap_pgoff
>                   - 24.57% vm_mmap_pgoff
>                      - 24.51% do_mmap
>                         - 24.30% __mmap_region
>                            - 18.33% mas_preallocate
>                               - 18.30% mas_alloc_nodes
>                                  - 18.30% kmem_cache_alloc_noprof
>                                     - 18.28% __pcs_replace_empty_main
>                                        + 9.06% barn_replace_empty_sheaf
>                                        + 6.12% barn_get_empty_sheaf
>                                        + 3.09% refill_sheaf

This is the difference from my previous perf report: here the
proportion of refill_sheaf is low, which indicates the sheaves are
sufficient most of the time.

Back to my previous test, I'm guessing that with this fix, under
extreme conditions of massive mmap usage, each CPU now keeps an empty
spare sheaf locally, whereas previously each CPU's spare sheaf was
NULL. So memory pressure increases with more spare sheaves held
locally. And in that extreme scenario, cross-socket remote NUMA access
incurs significant overhead, which is why the regression occurs here.

However, testing from 1 task to max tasks (nr_tasks = nr_logical_cpus)
shows overall significant improvements in most scenarios. Regressions
only occur at the specific topology boundaries described above.

I believe the cases with performance gains are more common, so I think
the regression is a corner case. If it does impact certain workloads
in the future, we may need to revisit the optimization then; for now
this data can serve as a reference.

Thanks,
Zhao




Thread overview: 16+ messages
2025-12-10  0:26 Hao Li
2025-12-15 14:30 ` Vlastimil Babka
2025-12-16  2:34   ` Hao Lee
2025-12-22 10:20   ` Harry Yoo
2026-01-05 15:58     ` Vlastimil Babka
2026-01-15 10:12   ` Zhao Liu
2026-01-15 16:19     ` Vlastimil Babka
2026-01-16  9:07       ` Zhao Liu
2026-01-16  9:11         ` Hao Li
2026-01-16  4:06     ` Hao Li
2026-01-16  9:16       ` Zhao Liu
2026-01-16  9:09         ` Hao Li
2026-01-19  6:07     ` Hao Li
2026-01-20  8:21       ` Zhao Liu [this message]
2026-01-21  3:15         ` Hao Li
2026-01-21 13:17           ` Zhao Liu
