From: Shakeel Butt <shakeelb@google.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: kernel test robot <oliver.sang@intel.com>,
oe-lkp@lists.linux.dev, lkp@intel.com,
linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
Marek Szyprowski <m.szyprowski@samsung.com>,
linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org,
ying.huang@intel.com, feng.tang@intel.com,
zhengjun.xing@linux.intel.com, fengwei.yin@intel.com
Subject: Re: [linus:master] [mm] f1a7941243: unixbench.score -19.2% regression
Date: Tue, 31 Jan 2023 05:57:43 +0000
Message-ID: <20230131055743.tsilxx5vfl6gx4dj@google.com>
In-Reply-To: <Y9iq8fRT4sDgIwQN@casper.infradead.org>
On Tue, Jan 31, 2023 at 05:45:21AM +0000, Matthew Wilcox wrote:
[...]
> > I ran perf and it seems like percpu counter allocation is the additional
> > cost with this patch; see the report below. However, when I made spawn a
> > bit more sophisticated by adding an mmap() of a GiB, the page table copy
> > became the significant cost and there was no difference with or without
> > the given patch.
> >
> > I am now wondering whether this fork ping-pong is really an important
> > enough workload that we should revert the patch, or whether we should
> > ignore it for now and work on improving the performance of the
> > __alloc_percpu_gfp() code.
> >
> >
> > - 90.97% 0.06% spawn [kernel.kallsyms] [k] entry_SYSCALL_64_after_hwframe
> > - 90.91% entry_SYSCALL_64_after_hwframe
> > - 90.86% do_syscall_64
> > - 80.03% __x64_sys_clone
> > - 79.98% kernel_clone
> > - 75.97% copy_process
> > + 46.04% perf_event_init_task
> > - 21.50% copy_mm
> > - 10.05% mm_init
> > ----------------------> - 8.92% __percpu_counter_init
> > - 8.67% __alloc_percpu_gfp
> > - 5.70% pcpu_alloc
>
> 5.7% of our time spent in pcpu_alloc seems excessive. Are we contending
> on pcpu_alloc_mutex perhaps? Also, are you doing this on a 4-socket
> machine like the kernel test robot ran on?
I ran this on a 2-socket machine. I am not sure about pcpu_alloc_mutex
contention, but I doubt it, because I ran a single instance of the spawn
test, i.e. a single fork ping-pong.
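
For context, the spawn test boils down to a tight fork/wait loop; roughly
the following (a simplified sketch from memory, not the actual unixbench
source, and the MAP_POPULATE detail is just my guess at how the GiB
mapping gets pre-faulted in the modified variant mentioned above):

#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	/*
	 * The "more sophisticated" variant: map (and pre-fault) a 1 GiB
	 * anonymous region so copy_page_range() dominates the fork cost.
	 */
	void *p = mmap(NULL, 1UL << 30, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	for (;;) {
		pid_t pid = fork();

		if (pid == 0)
			_exit(0);		/* child exits immediately */
		else if (pid > 0)
			waitpid(pid, NULL, 0);	/* parent reaps the child */
		else
			return 1;		/* fork failed */
	}
}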
>
> We could cut down the number of calls to pcpu_alloc() by a factor of 4
> by having a pcpu_alloc_bulk() that would allocate all four RSS counters
> at once.
>
> Just throwing out ideas ...
Thanks, I will take a stab at pcpu_alloc_bulk() and share the results
tomorrow.
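
Roughly, what I have in mind is something like the following (a very rough,
untested sketch; the name and details are hypothetical, and it skips the
lockdep key and CONFIG_HOTPLUG_CPU list registration that the real
__percpu_counter_init() takes care of):

#include <linux/percpu.h>
#include <linux/percpu_counter.h>

/* Hypothetical: back @nr counters with a single percpu allocation. */
static int percpu_counter_init_bulk(struct percpu_counter *fbc, int nr,
				    s64 amount, gfp_t gfp)
{
	s32 __percpu *counters;
	int i;

	/* One pcpu_alloc() call for the whole array of @nr counters. */
	counters = __alloc_percpu_gfp(nr * sizeof(*counters),
				      sizeof(*counters), gfp);
	if (!counters)
		return -ENOMEM;

	for (i = 0; i < nr; i++) {
		raw_spin_lock_init(&fbc[i].lock);
		fbc[i].count = amount;
		/* Each counter gets its own s32 slot in the shared chunk. */
		fbc[i].counters = &counters[i];
	}
	return 0;
}

mm_init() could then make one such call for the NR_MM_COUNTERS rss counters
instead of four separate percpu_counter_init() calls.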
thanks,
Shakeel