From: kernel test robot <oliver.sang@intel.com>
To: Shakeel Butt <shakeel.butt@linux.dev>
Cc: <oe-lkp@lists.linux.dev>, <lkp@intel.com>,
Linux Memory Management List <linux-mm@kvack.org>,
Andrew Morton <akpm@linux-foundation.org>,
Yosry Ahmed <yosryahmed@google.com>,
"T.J. Mercier" <tjmercier@google.com>,
"Roman Gushchin" <roman.gushchin@linux.dev>,
Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@kernel.org>,
Muchun Song <muchun.song@linux.dev>, <cgroups@vger.kernel.org>,
<ying.huang@intel.com>, <feng.tang@intel.com>,
<fengwei.yin@intel.com>, <oliver.sang@intel.com>
Subject: [linux-next:master] [memcg] 70a64b7919: will-it-scale.per_process_ops -11.9% regression
Date: Fri, 17 May 2024 13:56:30 +0800
Message-ID: <202405171353.b56b845-oliver.sang@intel.com>
Hello,
the kernel test robot noticed a -11.9% regression of will-it-scale.per_process_ops on:
commit: 70a64b7919cbd6c12306051ff2825839a9d65605 ("memcg: dynamically allocate lruvec_stats")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
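For context on the bisected commit: per its title, lruvec_stats moves from
being embedded in the per-node memcg structure to being dynamically
allocated and reached through a pointer. A minimal sketch of the shape of
that change (stand-in type and field names, not the exact upstream
definitions):

    /* Simplified stand-in types for illustration; not the upstream code. */
    #define NR_STATS 64                     /* placeholder counter count */

    struct lruvec_stats {
            long state[NR_STATS];           /* per-node vmstat counters */
    };

    /* Before: stats embedded directly in the per-node struct. */
    struct mem_cgroup_per_node_old {
            struct lruvec_stats lruvec_stats;       /* inline */
    };

    /* After: stats allocated separately (e.g. at memcg creation), which
     * shrinks the per-node struct but adds a pointer dereference on hot
     * stat-update paths such as __lruvec_stat_mod_folio(). */
    struct mem_cgroup_per_node_new {
            struct lruvec_stats *lruvec_stats;      /* dynamically allocated */
    };

The __lruvec_stat_mod_folio self-cycles growth in the profile below
(0.40% -> 0.98%) is consistent with such an extra dereference, though most
of the regression shows up as lruvec lock contention.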
testcase: will-it-scale
test machine: 104 threads 2 sockets (Skylake) with 192G memory
parameters:
nr_task: 100%
mode: process
test: page_fault2
cpufreq_governor: performance
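For reference, page_fault2 is, per process, a tight map/fault/unmap loop
over a file-backed region. A hedged sketch of one iteration (region size,
setup, and error handling are assumptions/simplifications, not the
verbatim will-it-scale source):

    /* Hedged sketch of one page_fault2 iteration. */
    #include <stddef.h>
    #include <sys/mman.h>

    #define MEMSIZE (128UL << 20)           /* assumed region size */
    #define PAGESZ  4096UL

    static void one_iteration(int fd)       /* fd: pre-created temp file */
    {
            char *p = mmap(NULL, MEMSIZE, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
            for (size_t off = 0; off < MEMSIZE; off += PAGESZ)
                    p[off] = 1;             /* write-fault every page */
            munmap(p, MEMSIZE);             /* bulk unmap/free of all pages */
    }

Each counted op corresponds roughly to one such per-page write fault, so
both the fault path and the munmap/page-free path dominate the profile
below.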
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202405171353.b56b845-oliver.sang@intel.com
Details are as below:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240517/202405171353.b56b845-oliver.sang@intel.com
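(For reference, the usual lkp-tests flow is roughly the following; "job.yaml"
stands for whatever job file ships in the archive above, and exact steps may
vary by release:

    git clone https://github.com/intel/lkp-tests.git
    cd lkp-tests
    sudo bin/lkp install job.yaml           # install dependencies for the job
    sudo bin/lkp run job.yaml

See the wiki linked at the end of this report for authoritative
instructions.)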
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
gcc-13/performance/x86_64-rhel-8.3/process/100%/debian-12-x86_64-20240206.cgz/lkp-skl-fpga01/page_fault2/will-it-scale
commit:
59142d87ab ("memcg: reduce memory size of mem_cgroup_events_index")
70a64b7919 ("memcg: dynamically allocate lruvec_stats")
59142d87ab03b8ff 70a64b7919cbd6c12306051ff28
---------------- ---------------------------
     %stddev         %change        %stddev
  (59142d87ab)          |        (70a64b7919)
7.14 -0.8 6.32 mpstat.cpu.all.usr%
245257 ± 7% -13.8% 211354 ± 4% sched_debug.cfs_rq:/.avg_vruntime.stddev
245258 ± 7% -13.8% 211353 ± 4% sched_debug.cfs_rq:/.min_vruntime.stddev
21099 ± 5% -14.9% 17946 ± 5% perf-c2c.DRAM.local
4025 ± 2% +29.1% 5197 ± 3% perf-c2c.HITM.local
105.17 ± 8% -12.7% 91.83 ± 6% perf-c2c.HITM.remote
9538291 -11.9% 8402170 will-it-scale.104.processes
91713 -11.9% 80789 will-it-scale.per_process_ops
9538291 -11.9% 8402170 will-it-scale.workload
1.438e+09 -11.2% 1.276e+09 numa-numastat.node0.local_node
1.44e+09 -11.3% 1.278e+09 numa-numastat.node0.numa_hit
83001 ± 15% -68.9% 25774 ± 34% numa-numastat.node0.other_node
1.453e+09 -12.5% 1.271e+09 numa-numastat.node1.local_node
1.454e+09 -12.5% 1.272e+09 numa-numastat.node1.numa_hit
24752 ± 51% +230.9% 81910 ± 10% numa-numastat.node1.other_node
1.44e+09 -11.3% 1.278e+09 numa-vmstat.node0.numa_hit
1.438e+09 -11.3% 1.276e+09 numa-vmstat.node0.numa_local
83001 ± 15% -68.9% 25774 ± 34% numa-vmstat.node0.numa_other
1.454e+09 -12.5% 1.272e+09 numa-vmstat.node1.numa_hit
1.453e+09 -12.5% 1.271e+09 numa-vmstat.node1.numa_local
24752 ± 51% +230.9% 81910 ± 10% numa-vmstat.node1.numa_other
14952 -3.2% 14468 proc-vmstat.nr_mapped
2.894e+09 -11.9% 2.55e+09 proc-vmstat.numa_hit
2.891e+09 -11.9% 2.548e+09 proc-vmstat.numa_local
2.88e+09 -11.8% 2.539e+09 proc-vmstat.pgalloc_normal
2.869e+09 -11.9% 2.529e+09 proc-vmstat.pgfault
2.88e+09 -11.8% 2.539e+09 proc-vmstat.pgfree
17.51 -2.6% 17.05 perf-stat.i.MPKI
9.457e+09 -9.2% 8.585e+09 perf-stat.i.branch-instructions
45022022 -8.2% 41340795 perf-stat.i.branch-misses
84.38 -4.9 79.51 perf-stat.i.cache-miss-rate%
8.353e+08 -12.1% 7.345e+08 perf-stat.i.cache-misses
9.877e+08 -6.7% 9.216e+08 perf-stat.i.cache-references
6.06 +10.8% 6.72 perf-stat.i.cpi
136.25 -1.2% 134.59 perf-stat.i.cpu-migrations
348.56 +13.9% 396.93 perf-stat.i.cycles-between-cache-misses
4.763e+10 -9.7% 4.302e+10 perf-stat.i.instructions
0.17 -9.6% 0.15 perf-stat.i.ipc
182.56 -11.9% 160.88 perf-stat.i.metric.K/sec
9494393 -11.9% 8368012 perf-stat.i.minor-faults
9494393 -11.9% 8368012 perf-stat.i.page-faults
17.54 -2.6% 17.08 perf-stat.overall.MPKI
0.47 +0.0 0.48 perf-stat.overall.branch-miss-rate%
84.57 -4.9 79.71 perf-stat.overall.cache-miss-rate%
6.07 +10.8% 6.73 perf-stat.overall.cpi
346.33 +13.8% 393.97 perf-stat.overall.cycles-between-cache-misses
0.16 -9.7% 0.15 perf-stat.overall.ipc
1503802 +2.6% 1542599 perf-stat.overall.path-length
9.424e+09 -9.2% 8.553e+09 perf-stat.ps.branch-instructions
44739120 -8.3% 41034189 perf-stat.ps.branch-misses
8.326e+08 -12.1% 7.321e+08 perf-stat.ps.cache-misses
9.846e+08 -6.7% 9.185e+08 perf-stat.ps.cache-references
134.98 -1.3% 133.26 perf-stat.ps.cpu-migrations
4.747e+10 -9.7% 4.286e+10 perf-stat.ps.instructions
9463902 -11.9% 8339836 perf-stat.ps.minor-faults
9463902 -11.9% 8339836 perf-stat.ps.page-faults
1.434e+13 -9.6% 1.296e+13 perf-stat.total.instructions
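(Side note: the path-length metric above appears to be total instructions
divided by workload operations, and the arithmetic checks out:
1.434e+13 / 9538291 ~= 1503802 and 1.296e+13 / 8402170 ~= 1542599. Each
fault/unmap iteration thus retires ~2.6% more instructions after the
change, on top of the ~9.7% IPC drop.)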
64.15 -2.4 61.72 perf-profile.calltrace.cycles-pp.testcase
58.30 -1.9 56.41 perf-profile.calltrace.cycles-pp.asm_exc_page_fault.testcase
52.64 -1.4 51.28 perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.testcase
52.50 -1.3 51.16 perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
50.81 -1.0 49.86 perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
49.86 -0.8 49.02 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
9.27 -0.8 8.45 ± 3% perf-profile.calltrace.cycles-pp.copy_page.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
49.21 -0.8 48.43 perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
5.15 -0.5 4.68 perf-profile.calltrace.cycles-pp.__irqentry_text_end.testcase
3.24 -0.5 2.77 perf-profile.calltrace.cycles-pp.folio_prealloc.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.82 -0.3 0.51 perf-profile.calltrace.cycles-pp.lock_vma_under_rcu.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
1.68 -0.3 1.42 perf-profile.calltrace.cycles-pp.vma_alloc_folio_noprof.folio_prealloc.do_fault.__handle_mm_fault.handle_mm_fault
2.52 -0.2 2.28 perf-profile.calltrace.cycles-pp.error_entry.testcase
1.50 ± 2% -0.2 1.30 perf-profile.calltrace.cycles-pp.__mem_cgroup_charge.folio_prealloc.do_fault.__handle_mm_fault.handle_mm_fault
1.85 -0.1 1.70 ± 3% perf-profile.calltrace.cycles-pp.__pte_offset_map_lock.finish_fault.do_fault.__handle_mm_fault.handle_mm_fault
0.68 -0.1 0.55 ± 2% perf-profile.calltrace.cycles-pp.lru_add_fn.folio_batch_move_lru.folio_add_lru_vma.set_pte_range.finish_fault
1.55 -0.1 1.44 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock.__pte_offset_map_lock.finish_fault.do_fault.__handle_mm_fault
0.55 -0.1 0.43 ± 44% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_noprof.alloc_pages_mpol_noprof.vma_alloc_folio_noprof.folio_prealloc
1.07 -0.1 0.98 perf-profile.calltrace.cycles-pp.alloc_pages_mpol_noprof.vma_alloc_folio_noprof.folio_prealloc.do_fault.__handle_mm_fault
0.90 -0.1 0.81 perf-profile.calltrace.cycles-pp.sync_regs.asm_exc_page_fault.testcase
0.89 -0.0 0.86 perf-profile.calltrace.cycles-pp.__alloc_pages_noprof.alloc_pages_mpol_noprof.vma_alloc_folio_noprof.folio_prealloc.do_fault
1.00 +0.1 1.05 perf-profile.calltrace.cycles-pp.zap_present_ptes.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas
3.85 +0.2 4.10 perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap
3.85 +0.2 4.10 perf-profile.calltrace.cycles-pp.__tlb_batch_free_encoded_pages.tlb_finish_mmu.unmap_region.do_vmi_align_munmap.do_vmi_munmap
3.85 +0.2 4.10 perf-profile.calltrace.cycles-pp.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages.tlb_finish_mmu.unmap_region.do_vmi_align_munmap
3.82 +0.3 4.07 perf-profile.calltrace.cycles-pp.folios_put_refs.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages.tlb_finish_mmu.unmap_region
3.68 +0.3 3.94 perf-profile.calltrace.cycles-pp.__page_cache_release.folios_put_refs.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages.tlb_finish_mmu
0.83 +0.3 1.10 ± 2% perf-profile.calltrace.cycles-pp.__lruvec_stat_mod_folio.set_pte_range.finish_fault.do_fault.__handle_mm_fault
0.00 +0.5 0.54 perf-profile.calltrace.cycles-pp.__lruvec_stat_mod_folio.folio_remove_rmap_ptes.zap_present_ptes.zap_pte_range.zap_pmd_range
0.00 +0.7 0.66 perf-profile.calltrace.cycles-pp.folio_remove_rmap_ptes.zap_present_ptes.zap_pte_range.zap_pmd_range.unmap_page_range
32.87 +0.7 33.62 perf-profile.calltrace.cycles-pp.set_pte_range.finish_fault.do_fault.__handle_mm_fault.handle_mm_fault
29.54 +2.3 31.80 perf-profile.calltrace.cycles-pp.__tlb_batch_free_encoded_pages.tlb_flush_mmu.zap_pte_range.zap_pmd_range.unmap_page_range
29.54 +2.3 31.80 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas
29.53 +2.3 31.80 perf-profile.calltrace.cycles-pp.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages.tlb_flush_mmu.zap_pte_range.zap_pmd_range
30.66 +2.3 32.93 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.do_vmi_align_munmap.do_vmi_munmap
30.66 +2.3 32.93 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap
30.66 +2.3 32.93 perf-profile.calltrace.cycles-pp.zap_pmd_range.unmap_page_range.unmap_vmas.unmap_region.do_vmi_align_munmap
30.66 +2.3 32.93 perf-profile.calltrace.cycles-pp.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas.unmap_region
29.26 +2.3 31.60 perf-profile.calltrace.cycles-pp.folios_put_refs.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages.tlb_flush_mmu.zap_pte_range
28.41 +2.4 30.78 perf-profile.calltrace.cycles-pp.__page_cache_release.folios_put_refs.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages.tlb_flush_mmu
34.56 +2.5 37.08 perf-profile.calltrace.cycles-pp.__munmap
34.56 +2.5 37.08 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
34.56 +2.5 37.08 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap
34.55 +2.5 37.07 perf-profile.calltrace.cycles-pp.unmap_region.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap.__x64_sys_munmap
34.55 +2.5 37.08 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
34.55 +2.5 37.08 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
34.55 +2.5 37.08 perf-profile.calltrace.cycles-pp.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
34.55 +2.5 37.08 perf-profile.calltrace.cycles-pp.do_vmi_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
31.41 +2.8 34.20 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.__page_cache_release.folios_put_refs.free_pages_and_swap_cache
31.42 +2.8 34.23 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.__page_cache_release.folios_put_refs.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages
31.38 +2.8 34.19 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.__page_cache_release.folios_put_refs
65.26 -2.5 62.73 perf-profile.children.cycles-pp.testcase
56.09 -1.7 54.41 perf-profile.children.cycles-pp.asm_exc_page_fault
52.66 -1.4 51.30 perf-profile.children.cycles-pp.exc_page_fault
52.52 -1.3 51.18 perf-profile.children.cycles-pp.do_user_addr_fault
50.83 -1.0 49.88 perf-profile.children.cycles-pp.handle_mm_fault
49.87 -0.8 49.02 perf-profile.children.cycles-pp.__handle_mm_fault
9.35 -0.8 8.53 ± 3% perf-profile.children.cycles-pp.copy_page
49.23 -0.8 48.45 perf-profile.children.cycles-pp.do_fault
5.15 -0.5 4.68 perf-profile.children.cycles-pp.__irqentry_text_end
3.27 -0.5 2.80 perf-profile.children.cycles-pp.folio_prealloc
0.82 -0.3 0.52 perf-profile.children.cycles-pp.lock_vma_under_rcu
0.57 -0.3 0.32 perf-profile.children.cycles-pp.mas_walk
1.69 -0.3 1.43 perf-profile.children.cycles-pp.vma_alloc_folio_noprof
2.54 -0.2 2.30 perf-profile.children.cycles-pp.error_entry
1.52 ± 2% -0.2 1.31 perf-profile.children.cycles-pp.__mem_cgroup_charge
0.95 -0.2 0.79 ± 4% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
1.87 -0.2 1.72 ± 3% perf-profile.children.cycles-pp.__pte_offset_map_lock
0.60 ± 4% -0.1 0.46 ± 6% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.70 -0.1 0.56 ± 2% perf-profile.children.cycles-pp.lru_add_fn
1.57 -0.1 1.45 ± 3% perf-profile.children.cycles-pp._raw_spin_lock
1.16 -0.1 1.04 perf-profile.children.cycles-pp.native_irq_return_iret
1.12 -0.1 1.01 perf-profile.children.cycles-pp.alloc_pages_mpol_noprof
0.44 -0.1 0.35 perf-profile.children.cycles-pp.get_vma_policy
0.94 -0.1 0.85 perf-profile.children.cycles-pp.sync_regs
0.96 -0.1 0.87 perf-profile.children.cycles-pp.__perf_sw_event
0.43 -0.1 0.34 ± 2% perf-profile.children.cycles-pp.free_unref_folios
0.21 ± 3% -0.1 0.13 ± 3% perf-profile.children.cycles-pp._compound_head
0.75 -0.1 0.68 perf-profile.children.cycles-pp.___perf_sw_event
0.31 -0.1 0.25 perf-profile.children.cycles-pp.__mem_cgroup_uncharge_folios
0.94 -0.0 0.90 perf-profile.children.cycles-pp.__alloc_pages_noprof
0.41 ± 4% -0.0 0.37 ± 4% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.44 ± 5% -0.0 0.40 ± 5% perf-profile.children.cycles-pp.__count_memcg_events
0.17 ± 2% -0.0 0.13 ± 4% perf-profile.children.cycles-pp.uncharge_batch
0.57 -0.0 0.53 ± 2% perf-profile.children.cycles-pp.get_page_from_freelist
0.13 ± 2% -0.0 0.09 ± 5% perf-profile.children.cycles-pp.__mod_zone_page_state
0.19 ± 3% -0.0 0.16 ± 6% perf-profile.children.cycles-pp.cgroup_rstat_updated
0.15 ± 2% -0.0 0.12 ± 3% perf-profile.children.cycles-pp.free_unref_page_commit
0.10 ± 3% -0.0 0.07 ± 5% perf-profile.children.cycles-pp.mem_cgroup_update_lru_size
0.08 -0.0 0.05 perf-profile.children.cycles-pp.policy_nodemask
0.13 ± 3% -0.0 0.10 ± 3% perf-profile.children.cycles-pp.page_counter_uncharge
0.32 ± 3% -0.0 0.30 ± 2% perf-profile.children.cycles-pp.__mod_node_page_state
0.17 ± 2% -0.0 0.15 ± 3% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.16 ± 2% -0.0 0.14 ± 2% perf-profile.children.cycles-pp.shmem_get_policy
0.16 -0.0 0.14 ± 2% perf-profile.children.cycles-pp.handle_pte_fault
0.16 ± 4% -0.0 0.14 ± 4% perf-profile.children.cycles-pp.__pte_offset_map
0.09 -0.0 0.07 ± 5% perf-profile.children.cycles-pp.get_pfnblock_flags_mask
0.12 ± 3% -0.0 0.10 ± 4% perf-profile.children.cycles-pp.uncharge_folio
0.36 -0.0 0.34 perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.10 ± 3% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.pte_offset_map_nolock
0.30 -0.0 0.28 ± 2% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.09 ± 4% -0.0 0.08 perf-profile.children.cycles-pp.down_read_trylock
0.08 -0.0 0.07 ± 5% perf-profile.children.cycles-pp.folio_unlock
0.40 +0.0 0.43 perf-profile.children.cycles-pp.__mod_lruvec_state
1.02 +0.0 1.06 perf-profile.children.cycles-pp.zap_present_ptes
0.47 +0.2 0.67 perf-profile.children.cycles-pp.folio_remove_rmap_ptes
3.87 +0.3 4.12 perf-profile.children.cycles-pp.tlb_finish_mmu
1.17 +0.5 1.71 ± 2% perf-profile.children.cycles-pp.__lruvec_stat_mod_folio
32.88 +0.8 33.63 perf-profile.children.cycles-pp.set_pte_range
29.54 +2.3 31.80 perf-profile.children.cycles-pp.tlb_flush_mmu
30.66 +2.3 32.93 perf-profile.children.cycles-pp.zap_pte_range
30.66 +2.3 32.94 perf-profile.children.cycles-pp.unmap_page_range
30.66 +2.3 32.94 perf-profile.children.cycles-pp.zap_pmd_range
30.66 +2.3 32.94 perf-profile.children.cycles-pp.unmap_vmas
33.41 +2.5 35.92 perf-profile.children.cycles-pp.__tlb_batch_free_encoded_pages
33.40 +2.5 35.92 perf-profile.children.cycles-pp.free_pages_and_swap_cache
34.56 +2.5 37.08 perf-profile.children.cycles-pp.__munmap
34.56 +2.5 37.08 perf-profile.children.cycles-pp.__vm_munmap
34.56 +2.5 37.08 perf-profile.children.cycles-pp.__x64_sys_munmap
34.56 +2.5 37.09 perf-profile.children.cycles-pp.do_vmi_munmap
34.56 +2.5 37.09 perf-profile.children.cycles-pp.do_vmi_align_munmap
34.67 +2.5 37.20 perf-profile.children.cycles-pp.do_syscall_64
34.67 +2.5 37.20 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
34.56 +2.5 37.09 perf-profile.children.cycles-pp.unmap_region
33.22 +2.6 35.80 perf-profile.children.cycles-pp.folios_put_refs
32.12 +2.6 34.75 perf-profile.children.cycles-pp.__page_cache_release
61.97 +3.3 65.27 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
61.94 +3.3 65.26 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
61.98 +3.3 65.30 perf-profile.children.cycles-pp.folio_lruvec_lock_irqsave
9.32 -0.8 8.49 ± 3% perf-profile.self.cycles-pp.copy_page
5.15 -0.5 4.68 perf-profile.self.cycles-pp.__irqentry_text_end
0.56 -0.3 0.31 perf-profile.self.cycles-pp.mas_walk
2.58 -0.2 2.33 perf-profile.self.cycles-pp.testcase
2.53 -0.2 2.30 perf-profile.self.cycles-pp.error_entry
0.60 ± 4% -0.2 0.44 ± 6% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.85 -0.1 0.71 ± 4% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
1.54 -0.1 1.43 ± 3% perf-profile.self.cycles-pp._raw_spin_lock
1.15 -0.1 1.04 perf-profile.self.cycles-pp.native_irq_return_iret
0.94 -0.1 0.85 perf-profile.self.cycles-pp.sync_regs
0.20 ± 3% -0.1 0.13 ± 3% perf-profile.self.cycles-pp._compound_head
0.27 ± 3% -0.1 0.20 ± 3% perf-profile.self.cycles-pp.free_pages_and_swap_cache
0.26 -0.1 0.18 ± 2% perf-profile.self.cycles-pp.get_vma_policy
0.26 -0.1 0.19 ± 2% perf-profile.self.cycles-pp.__page_cache_release
0.16 -0.1 0.09 ± 5% perf-profile.self.cycles-pp.vma_alloc_folio_noprof
0.28 ± 2% -0.1 0.22 ± 3% perf-profile.self.cycles-pp.zap_present_ptes
0.66 -0.1 0.60 perf-profile.self.cycles-pp.___perf_sw_event
0.32 -0.1 0.27 ± 5% perf-profile.self.cycles-pp.lru_add_fn
0.47 -0.0 0.43 ± 2% perf-profile.self.cycles-pp.__handle_mm_fault
0.16 ± 4% -0.0 0.12 perf-profile.self.cycles-pp.lock_vma_under_rcu
0.20 -0.0 0.16 ± 4% perf-profile.self.cycles-pp.free_unref_folios
0.30 -0.0 0.26 perf-profile.self.cycles-pp.handle_mm_fault
0.10 ± 4% -0.0 0.07 perf-profile.self.cycles-pp.zap_pte_range
0.09 ± 5% -0.0 0.06 ± 6% perf-profile.self.cycles-pp.mem_cgroup_update_lru_size
0.14 ± 2% -0.0 0.11 ± 4% perf-profile.self.cycles-pp.mem_cgroup_commit_charge
0.14 ± 3% -0.0 0.12 ± 4% perf-profile.self.cycles-pp.folio_remove_rmap_ptes
0.12 ± 4% -0.0 0.09 ± 7% perf-profile.self.cycles-pp.__mod_zone_page_state
0.10 ± 4% -0.0 0.08 ± 6% perf-profile.self.cycles-pp.alloc_pages_mpol_noprof
0.11 -0.0 0.08 ± 5% perf-profile.self.cycles-pp.free_unref_page_commit
0.22 ± 2% -0.0 0.19 perf-profile.self.cycles-pp.__pte_offset_map_lock
0.21 -0.0 0.18 ± 2% perf-profile.self.cycles-pp.__perf_sw_event
0.21 -0.0 0.18 ± 2% perf-profile.self.cycles-pp.do_user_addr_fault
0.31 ± 2% -0.0 0.29 perf-profile.self.cycles-pp.__mod_node_page_state
0.16 ± 2% -0.0 0.14 ± 5% perf-profile.self.cycles-pp.cgroup_rstat_updated
0.17 ± 2% -0.0 0.15 ± 2% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.11 -0.0 0.09 ± 4% perf-profile.self.cycles-pp.page_counter_uncharge
0.09 -0.0 0.07 perf-profile.self.cycles-pp.get_pfnblock_flags_mask
0.28 ± 2% -0.0 0.26 ± 2% perf-profile.self.cycles-pp.xas_load
0.16 ± 2% -0.0 0.14 ± 3% perf-profile.self.cycles-pp.get_page_from_freelist
0.12 -0.0 0.10 ± 3% perf-profile.self.cycles-pp.uncharge_folio
0.16 ± 4% -0.0 0.14 ± 3% perf-profile.self.cycles-pp.__pte_offset_map
0.20 ± 2% -0.0 0.19 ± 2% perf-profile.self.cycles-pp.shmem_get_folio_gfp
0.16 ± 3% -0.0 0.14 ± 3% perf-profile.self.cycles-pp.shmem_get_policy
0.14 ± 3% -0.0 0.12 ± 4% perf-profile.self.cycles-pp.do_fault
0.08 -0.0 0.07 ± 7% perf-profile.self.cycles-pp.folio_unlock
0.12 ± 3% -0.0 0.11 perf-profile.self.cycles-pp.folio_add_new_anon_rmap
0.09 -0.0 0.08 perf-profile.self.cycles-pp.down_read_trylock
0.07 -0.0 0.06 perf-profile.self.cycles-pp.folio_prealloc
0.38 ± 2% +0.0 0.42 ± 3% perf-profile.self.cycles-pp.filemap_get_entry
0.26 +0.1 0.36 perf-profile.self.cycles-pp.folios_put_refs
0.33 +0.1 0.44 ± 3% perf-profile.self.cycles-pp.folio_batch_move_lru
0.40 ± 5% +0.6 0.98 perf-profile.self.cycles-pp.__lruvec_stat_mod_folio
61.94 +3.3 65.26 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki