From: Gilad Ben-Yossef <gilad@benyossef.com>
To: Shaohua Li <shaohua.li@intel.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Frederic Weisbecker <fweisbec@gmail.com>,
Russell King <linux@arm.linux.org.uk>,
Chris Metcalf <cmetcalf@tilera.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
Christoph Lameter <cl@linux-foundation.org>,
Pekka Enberg <penberg@kernel.org>, Matt Mackall <mpm@selenic.com>
Subject: Re: [PATCH 4/5] mm: Only IPI CPUs to drain local pages if they exist
Date: Mon, 26 Sep 2011 09:47:10 +0300
Message-ID: <CAOtvUMddUAATZcU_5jLgY10ocsHNnOO2GC2c4ecYO9KGt-U7VQ@mail.gmail.com>
In-Reply-To: <1317001924.29510.160.camel@sli10-conroe>
Hi Li,
Thank you for the feedback!
On Mon, Sep 26, 2011 at 4:52 AM, Shaohua Li <shaohua.li@intel.com> wrote:
> On Sun, 2011-09-25 at 16:54 +0800, Gilad Ben-Yossef wrote:
>> Use a cpumask to track CPUs with per-cpu pages in any zone
>> and only send an IPI requesting CPUs to drain these pages
>> to the buddy allocator if they actually have pages.
> Did you have evaluation why the fine-grained ipi is required? I suppose
> every CPU has local pages here.
I have given it a lot of thought and I believe it's a question of
workload - in a "classic" symmetric workload on a small SMP system I
would indeed expect each CPU to have per-CPU pages cached in some
zone. However, we are seeing more and more push towards massively
multi-core systems, and we are adding support for using them (e.g.
cpusets, Frederic's dynamic tick patch set, etc.). For these
workloads, things can be different:
In a system where you have many cores (or hardware threads) and you
dedicate processors to running a single CPU-bound task that performs
virtually no system calls (quite typical for some high performance
computing setups), you can very well have situations where the per-CPU
page lists are empty on many processors, since the working set per CPU
rarely changes and no pages have been freed since the last drain.
Or just consider a multi-core machine where a lot of processors are
simply idle (and we now have chips with 8 cores / 128 hw threads in a
single package) - again, there are no per-CPU local pages since there
has been no activity since the last drain, but the IPI will still yank
those cores out of low-power states just to perform the check.
I do not know if these scenarios warrant the additional overhead,
certainly not in all situations. Maybe the right thing is to make it
dependent on a config option. As I stated in the patch description,
that is one of the things I'm interested in feedback on.
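In case it makes the idea more concrete, here is a rough sketch of
what I have in mind (not the posted patch itself - the patch keeps the
cpumask up to date on the page free path rather than scanning every
zone as done below; on_each_cpu_mask() is the helper introduced in
patch 1/5, and the function name here is made up; assume
mm/page_alloc.c context):

/*
 * Illustrative sketch only: collect the CPUs that actually hold
 * per-cpu pages in some zone, then IPI only those to drain them.
 */
static void drain_all_pages_sketch(void)
{
	int cpu;
	struct zone *zone;
	cpumask_var_t cpus_with_pcp;

	if (!zalloc_cpumask_var(&cpus_with_pcp, GFP_ATOMIC))
		return;	/* could fall back to IPIing every CPU here */

	for_each_online_cpu(cpu) {
		for_each_populated_zone(zone) {
			struct per_cpu_pageset *pset =
				per_cpu_ptr(zone->pageset, cpu);

			if (pset->pcp.count) {
				cpumask_set_cpu(cpu, cpus_with_pcp);
				break;
			}
		}
	}

	/* IPI only the CPUs that have something to drain */
	on_each_cpu_mask(cpus_with_pcp, drain_local_pages, NULL, 1);

	free_cpumask_var(cpus_with_pcp);
}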
Thanks,
Gilad
--
Gilad Ben-Yossef
Chief Coffee Drinker
gilad@benyossef.com
Israel Cell: +972-52-8260388
US Cell: +1-973-8260388
http://benyossef.com
"I've seen things you people wouldn't believe. Goto statements used to
implement co-routines. I watched C structures being stored in
registers. All those moments will be lost in time... like tears in
rain... Time to die. "