linux-mm.kvack.org archive mirror
From: Feng Tang <feng.tang@intel.com>
To: Chuck Lever III <chuck.lever@oracle.com>
Cc: "Sang, Oliver" <oliver.sang@intel.com>,
	"oe-lkp@lists.linux.dev" <oe-lkp@lists.linux.dev>,
	lkp <lkp@intel.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Christian Brauner <brauner@kernel.org>,
	linux-mm <linux-mm@kvack.org>,
	"Huang, Ying" <ying.huang@intel.com>,
	"Yin, Fengwei" <fengwei.yin@intel.com>
Subject: Re: [linus:master] [shmem]  a2e459555c:  aim9.disk_src.ops_per_sec -19.0% regression
Date: Tue, 12 Sep 2023 23:14:42 +0800	[thread overview]
Message-ID: <ZQCAYpqu+5iD0rhh@feng-clx> (raw)
In-Reply-To: <84984801-F885-4739-B4B3-DE8DE4ABE378@oracle.com>

Hi Chuck Lever, 

On Tue, Sep 12, 2023 at 09:01:29PM +0800, Chuck Lever III wrote:
> 
> 
> > On Sep 11, 2023, at 9:25 PM, Oliver Sang <oliver.sang@intel.com> wrote:
> > 
> > hi, Chuck Lever,
> > 
> > On Fri, Sep 08, 2023 at 02:43:22PM +0000, Chuck Lever III wrote:
> >> 
> >> 
> >>> On Sep 8, 2023, at 1:26 AM, kernel test robot <oliver.sang@intel.com> wrote:
> >>> 
> >>> 
> >>> 
> >>> Hello,
> >>> 
> >>> kernel test robot noticed a -19.0% regression of aim9.disk_src.ops_per_sec on:
> >>> 
> >>> 
> >>> commit: a2e459555c5f9da3e619b7e47a63f98574dc75f1 ("shmem: stable directory offsets")
> >>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
> >>> 
> >>> testcase: aim9
> >>> test machine: 48 threads 2 sockets Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz (Ivy Bridge-EP) with 112G memory
> >>> parameters:
> >>> 
> >>> testtime: 300s
> >>> test: disk_src
> >>> cpufreq_governor: performance
> >>> 
> >>> 
> >>> In addition to that, the commit also has significant impact on the following tests:
> >>> 
> >>> +------------------+-------------------------------------------------------------------------------------------------+
> >>> | testcase: change | aim9: aim9.disk_src.ops_per_sec -14.6% regression                                               |
> >>> | test machine     | 48 threads 2 sockets Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz (Ivy Bridge-EP) with 112G memory |
> >>> | test parameters  | cpufreq_governor=performance                                                                    |
> >>> |                  | test=all                                                                                        |
> >>> |                  | testtime=5s                                                                                     |
> >>> +------------------+-------------------------------------------------------------------------------------------------+
> >>> 
> >>> 
> >>> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> >>> the same patch/commit), kindly add following tags
> >>> | Reported-by: kernel test robot <oliver.sang@intel.com>
> >>> | Closes: https://lore.kernel.org/oe-lkp/202309081306.3ecb3734-oliver.sang@intel.com
 
> >> But, I'm still in a position where I can't run this test,
> >> and the results don't really indicate where the problem
> >> is. So I can't possibly address this issue.
> >> 
> >> Any suggestions, advice, or help would be appreciated.
> > 
> > if you have further fix patch, could you let us know? I will test it.
> 
> Well that's the problem. Since I can't run the reproducer, there's
> nothing I can do to troubleshoot the problem myself.

We dug more into the perf and other profiling data from the 0Day server
running this case, and it looks like the new simple_offset_add() called
from shmem_mknod() adds extra slab-related cost, specifically around the
'radix_tree_node' cache, which causes the regression.
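
To make the extra work concrete: each file creation in a shmem directory
now does one xarray insertion for the directory-offset mapping, and the
xarray allocates its internal nodes from the same 'radix_tree_node' kmem
cache. A rough sketch of simple_offset_add() (paraphrased from fs/libfs.c
at that commit; helper names and the limit values are approximate, not
copied verbatim):

/* Paraphrased sketch only; see fs/libfs.c at a2e459555c5f for the real code */
int simple_offset_add(struct offset_ctx *octx, struct dentry *dentry)
{
	static const struct xa_limit limit = XA_LIMIT(2, U32_MAX);
	u32 offset;
	int ret;

	/*
	 * Find a free offset and store the dentry in the per-directory
	 * xarray.  Internally __xa_alloc()/xas_store() may have to
	 * allocate xa_node objects, which come from the shared
	 * 'radix_tree_node' slab cache -- that is the allocation showing
	 * up in the slabinfo and perf data below.
	 */
	ret = xa_alloc_cyclic(&octx->xa, &offset, dentry, limit,
			      &octx->next_offset, GFP_KERNEL);
	if (ret < 0)
		return ret;

	offset_set(dentry, offset);	/* remember the offset for this dentry */
	return 0;
}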

Here is the slabinfo diff between commit a2e459555c5f and its parent:

	23a31d87645c6527 a2e459555c5f9da3e619b7e47a6 
	---------------- --------------------------- 
 
     26363           +40.2%      36956        slabinfo.radix_tree_node.active_objs
    941.00           +40.4%       1321        slabinfo.radix_tree_node.active_slabs
     26363           +40.3%      37001        slabinfo.radix_tree_node.num_objs
    941.00           +40.4%       1321        slabinfo.radix_tree_node.num_slabs

The perf profile also shows some differences:

      0.01 ±223%      +0.1        0.10 ± 28%  pp.self.shuffle_freelist
      0.00            +0.1        0.11 ± 40%  pp.self.xas_create
      0.00            +0.1        0.12 ± 27%  pp.self.xas_find_marked
      0.00            +0.1        0.14 ± 18%  pp.self.xas_alloc
      0.03 ±103%      +0.1        0.17 ± 29%  pp.self.xas_descend
      0.00            +0.2        0.16 ± 23%  pp.self.xas_expand
      0.10 ± 22%      +0.2        0.27 ± 16%  pp.self.rcu_segcblist_enqueue
      0.92 ± 35%      +0.3        1.22 ± 11%  pp.self.kmem_cache_free
      0.00            +0.4        0.36 ± 16%  pp.self.xas_store
      0.32 ± 30%      +0.4        0.71 ± 12%  pp.self.__call_rcu_common
      0.18 ± 27%      +0.5        0.65 ±  8%  pp.self.kmem_cache_alloc_lru
      0.36 ± 79%      +0.6        0.96 ± 15%  pp.self.__slab_free
      0.00            +0.8        0.80 ± 14%  pp.self.radix_tree_node_rcu_free
      0.00            +1.0        1.01 ± 16%  pp.self.radix_tree_node_ctor

Part of the perf call graph from a2e459555c5f:

-   17.09%     0.09%  singleuser       [kernel.kallsyms]            [k] path_openat   
   - 16.99% path_openat                
      - 12.23% open_last_lookups      
         - 11.33% lookup_open.isra.0
            - 9.05% shmem_mknod
               - 5.11% simple_offset_add
                  - 4.95% __xa_alloc_cyclic 
                     - 4.88% __xa_alloc
                        - 4.76% xas_store 
                           - xas_create
                              - 2.40% xas_expand.constprop.0
                                 - 2.01% xas_alloc
                                    - kmem_cache_alloc_lru
                                       - 1.28% ___slab_alloc
                                          - 1.22% allocate_slab 
                                             - 1.19% shuffle_freelist 
                                                - 1.04% setup_object
                                                     radix_tree_node_ctor
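
The tail of that chain (allocate_slab -> shuffle_freelist -> setup_object
-> radix_tree_node_ctor) is SLUB growing the radix_tree_node cache:
whenever a brand-new slab is populated, the freelist is shuffled (with
CONFIG_SLAB_FREELIST_RANDOM) and the cache's constructor runs on every
object in it.  A minimal, purely hypothetical illustration of where that
per-object cost comes from (the "demo_obj" cache below is made up and not
code from the commit):

/* Hypothetical example; "demo_obj" is invented for illustration only */
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/string.h>

struct demo_obj {
	unsigned long slots[16];
};

static struct kmem_cache *demo_cachep;

/* Runs once per object when SLUB populates a new slab (setup_object()) */
static void demo_ctor(void *addr)
{
	memset(addr, 0, sizeof(struct demo_obj));
}

static int __init demo_init(void)
{
	demo_cachep = kmem_cache_create("demo_obj", sizeof(struct demo_obj),
					0, SLAB_HWCACHE_ALIGN, demo_ctor);
	return demo_cachep ? 0 : -ENOMEM;
}

static void __exit demo_exit(void)
{
	kmem_cache_destroy(demo_cachep);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");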

Please let me know if you need more info.

> 
> Is there any hope in getting this reproducer to run on Fedora?

I haven't managed to reproduce it locally myself yet; I will keep trying
tomorrow.
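
In case it helps with reproducing outside of the 0Day/LKP setup, a
minimal userspace loop that should hit the same open(O_CREAT) ->
shmem_mknod() -> simple_offset_add() path is sketched below.  It is a
rough, unbenchmarked stand-in, not the real aim9 disk_src workload, and
it assumes /dev/shm is a tmpfs mount (or pass another tmpfs directory as
argv[1]):

/* Rough create/unlink loop on tmpfs; not the actual aim9 disk_src test */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *dir = argc > 1 ? argv[1] : "/dev/shm";
	long i, iters = argc > 2 ? atol(argv[2]) : 1000000;
	char path[4096];

	for (i = 0; i < iters; i++) {
		snprintf(path, sizeof(path), "%s/lkp-test-%ld", dir, i % 1024);
		/* O_CREAT on a missing name on tmpfs goes through shmem_mknod() */
		int fd = open(path, O_CREAT | O_RDWR, 0600);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		close(fd);
		unlink(path);	/* exercises the corresponding offset removal path */
	}
	return 0;
}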

Thanks,
Feng

> 
> --
> Chuck Lever
> 
> 


Thread overview: 15+ messages
2023-09-08  5:26 kernel test robot
2023-09-08 14:43 ` Chuck Lever III
2023-09-12  1:25   ` Oliver Sang
2023-09-12 13:01     ` Chuck Lever III
2023-09-12 13:19       ` Oliver Sang
2023-09-12 15:14       ` Feng Tang [this message]
2023-09-12 15:26         ` Chuck Lever III
2023-09-12 16:01         ` Matthew Wilcox
2023-09-12 16:27           ` Chuck Lever III
2023-09-13 17:45           ` Chuck Lever III
2024-01-04 19:33           ` Chuck Lever III
2024-01-05 16:27             ` Liam R. Howlett
2024-01-05 16:33               ` Chuck Lever III
2023-09-13  6:47         ` Feng Tang
2023-09-13 13:32           ` Chuck Lever III
