linux-mm.kvack.org archive mirror
From: kernel test robot <oliver.sang@intel.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: <oe-lkp@lists.linux.dev>, <lkp@intel.com>,
	<linux-kernel@vger.kernel.org>, Harry Yoo <harry.yoo@oracle.com>,
	Hao Li <hao.li@linux.dev>,
	"Suren Baghdasaryan" <surenb@google.com>,
	Alexei Starovoitov <ast@kernel.org>, <linux-mm@kvack.org>,
	<oliver.sang@intel.com>
Subject: [linus:master] [slab]  17c38c8829: will-it-scale.per_process_ops 60.5% regression
Date: Thu, 26 Feb 2026 15:26:16 +0800	[thread overview]
Message-ID: <202602261526.dc2fdec-lkp@intel.com> (raw)



Hello,

kernel test robot noticed a 60.5% regression of will-it-scale.per_process_ops on:


commit: 17c38c88294d75506c67cae378c9e940d1ce55e3 ("slab: remove cpu (partial) slabs usage from allocation paths")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master

[regression still present on linus/master      8bf22c33e7a172fbc72464f4cc484d23a6b412ba]
[regression still present on linux-next/master 44982d352c33767cd8d19f8044e7e1161a587ff7]

testcase: will-it-scale
config: x86_64-rhel-9.4
compiler: gcc-14
test machine: 48 threads 2 sockets Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz (Ivy Bridge-EP) with 64G memory
parameters:

	nr_task: 100%
	mode: process
	test: brk1
	cpufreq_governor: performance



If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202602261526.dc2fdec-lkp@intel.com


Details are as below:
-------------------------------------------------------------------------------------------------->


The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20260226/202602261526.dc2fdec-lkp@intel.com

=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
  gcc-14/performance/x86_64-rhel-9.4/process/100%/debian-13-x86_64-20250902.cgz/lkp-ivb-2ep2/brk1/will-it-scale

commit: 
  ed30c4adfc ("slab: add optimized sheaf refill from partial list")
  17c38c8829 ("slab: remove cpu (partial) slabs usage from allocation paths")

ed30c4adfc2b5690 17c38c88294d75506c67cae378c 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
  22625956           -60.5%    8947026        will-it-scale.48.processes
    471373           -60.5%     186395        will-it-scale.per_process_ops
  22625956           -60.5%    8947026        will-it-scale.workload
     20.65           -74.9%       5.19        vmstat.cpu.us
     10502 ±  3%     -87.0%       1366 ±  9%  vmstat.system.cs
      1.31            -0.1        1.20        mpstat.cpu.all.irq%
      4.13            -1.9        2.21        mpstat.cpu.all.soft%
     73.24           +17.4       90.61        mpstat.cpu.all.sys%
     20.67           -15.3        5.36        mpstat.cpu.all.usr%
    179.60            -4.1%     172.24        turbostat.CorWatt
      0.39           -40.0%       0.23 ±  2%  turbostat.IPC
    211.17            -4.4%     201.86        turbostat.PkgWatt
     21.36           -45.0%      11.76        turbostat.RAMWatt
      0.07 ±  6%     -38.7%       0.04 ±  9%  perf-sched.sch_delay.avg.ms.[unknown].[unknown].[unknown].[unknown].[unknown]
      0.07 ±  6%     -38.7%       0.04 ±  9%  perf-sched.total_sch_delay.average.ms
     18.17 ±  3%    +262.0%      65.77 ±  8%  perf-sched.total_wait_and_delay.average.ms
     47429 ±  4%     -84.9%       7139 ± 10%  perf-sched.total_wait_and_delay.count.ms
     18.10 ±  3%    +263.2%      65.72 ±  8%  perf-sched.total_wait_time.average.ms
     18.17 ±  3%    +262.0%      65.77 ±  8%  perf-sched.wait_and_delay.avg.ms.[unknown].[unknown].[unknown].[unknown].[unknown]
     47429 ±  4%     -84.9%       7139 ± 10%  perf-sched.wait_and_delay.count.[unknown].[unknown].[unknown].[unknown].[unknown]
     18.10 ±  3%    +263.2%      65.72 ±  8%  perf-sched.wait_time.avg.ms.[unknown].[unknown].[unknown].[unknown].[unknown]
      0.68           -83.8%       0.11 ±  6%  perf-stat.i.MPKI
 1.657e+10           -35.2%  1.073e+10        perf-stat.i.branch-instructions
      0.27 ±  2%      +0.1        0.34 ±  2%  perf-stat.i.branch-miss-rate%
  43387409           -17.1%   35980500 ±  2%  perf-stat.i.branch-misses
     38.35           -35.8        2.53 ±  6%  perf-stat.i.cache-miss-rate%
  56795946           -90.4%    5475678 ±  6%  perf-stat.i.cache-misses
  1.48e+08           +48.5%  2.198e+08        perf-stat.i.cache-references
     10520 ±  3%     -87.4%       1322 ±  9%  perf-stat.i.context-switches
      1.70           +65.9%       2.81        perf-stat.i.cpi
      2499         +1000.8%      27508 ±  6%  perf-stat.i.cycles-between-cache-misses
 8.369e+10           -39.7%  5.048e+10        perf-stat.i.instructions
      0.59           -39.7%       0.36        perf-stat.i.ipc
      0.68           -84.0%       0.11 ±  6%  perf-stat.overall.MPKI
      0.26            +0.1        0.34 ±  2%  perf-stat.overall.branch-miss-rate%
     38.37           -35.9        2.49 ±  6%  perf-stat.overall.cache-miss-rate%
      1.70           +65.9%       2.81        perf-stat.overall.cpi
      2500          +942.6%      26067 ±  6%  perf-stat.overall.cycles-between-cache-misses
      0.59           -39.7%       0.36        perf-stat.overall.ipc
   1117006           +52.3%    1700887        perf-stat.overall.path-length
 1.651e+10           -35.2%  1.069e+10        perf-stat.ps.branch-instructions
  43243442           -17.1%   35861582 ±  2%  perf-stat.ps.branch-misses
  56607711           -90.4%    5457714 ±  6%  perf-stat.ps.cache-misses
 1.475e+08           +48.5%  2.191e+08        perf-stat.ps.cache-references
     10485 ±  3%     -87.4%       1318 ±  9%  perf-stat.ps.context-switches
 8.341e+10           -39.7%  5.031e+10        perf-stat.ps.instructions
 2.527e+13           -39.8%  1.522e+13        perf-stat.total.instructions




Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki



