From: Gilad Ben-Yossef <gilad@benyossef.com>
To: Christoph Lameter <cl@gentwo.org>
Cc: linux-kernel@vger.kernel.org,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Frederic Weisbecker <fweisbec@gmail.com>,
Russell King <linux@arm.linux.org.uk>,
linux-mm@kvack.org, Pekka Enberg <penberg@kernel.org>,
Matt Mackall <mpm@selenic.com>,
Sasha Levin <levinsasha928@gmail.com>
Subject: Re: [PATCH v2 6/6] slub: only preallocate cpus_with_slabs if offstack
Date: Fri, 28 Oct 2011 11:09:52 +0200 [thread overview]
Message-ID: <CAOtvUMcHOysen7betBOwEJAjL-UVzvBfCf0fzmmBERFrivkOBA@mail.gmail.com> (raw)
In-Reply-To: <alpine.DEB.2.00.1110272304020.14619@router.home>
On Fri, Oct 28, 2011 at 6:06 AM, Christoph Lameter <cl@gentwo.org> wrote:
> On Sun, 23 Oct 2011, Gilad Ben-Yossef wrote:
>
>> We need a cpumask to track cpus with per cpu cache pages
>> to know which cpu to whack during flush_all. For
>> CONFIG_CPUMASK_OFFSTACK=n we allocate the mask on stack.
>> For CONFIG_CPUMASK_OFFSTACK=y we don't want to call kmalloc
>> on the flush_all path, so we preallocate per kmem_cache
>> on cache creation and use it in flush_all.
>
> I think the on stack alloc should be the default because we can then avoid
> the field in kmem_cache and the associated logic with managing the field.
> Can we do a GFP_ATOMIC allocation in flush_all()? If the alloc
> fails then you can still fallback to send an IPI to all cpus.
Yes, that is exactly what I did in the first version of this patch.
See: https://lkml.org/lkml/2011/9/25/32
Pekka E. did not like it because, in the CONFIG_CPUMASK_OFFSTACK=y
case, it allocates from a kmem_cache in a code path whose whole point
is to shrink kmem_caches. I certainly see his point, so I tried to
work around that. On the other hand, the code-complexity cost of
avoiding that allocation is non-trivial.
I gave it some more thought. Since flush_all() is called on a
per-kmem_cache basis, allocating off of the cpumask kmem_cache while
shrinking *another* cache is fine. A little weird, maybe, but fine.
Trouble might lurk only if some code path ever tried to flush the
cpumask kmem_cache itself. That can happen if something destroys the
cpumask kmem_cache, which I find very unlikely, or if someone
explicitly shrinks it. Right now the only in-tree user of
kmem_cache_shrink() I found is the ACPI code, and even there it
happens only for a few specific caches and only during boot. I don't
see that changing.
If it were up to me, I would recommend going the simpler route: do
the allocation in flush_all() with GFP_ATOMIC for
CONFIG_CPUMASK_OFFSTACK=y and fall back to sending an IPI to all CPUs
if the allocation fails. It is simpler code, and in the end I believe
it is also correct.
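For concreteness, here is a rough, untested sketch of what I mean. It
assumes the on_each_cpu_mask() helper introduced in patch 1/6 of this
series; the c->page test mirrors what flush_cpu_slab() already checks,
and the exact form is approximate:

static void flush_all(struct kmem_cache *s)
{
	cpumask_var_t cpus;
	int cpu;

	/*
	 * Try an atomic allocation: flush_all() may not sleep. With
	 * CONFIG_CPUMASK_OFFSTACK=n the cpumask lives on the stack and
	 * zalloc_cpumask_var() cannot fail, so only the =y case can
	 * ever take the fallback path.
	 */
	if (likely(zalloc_cpumask_var(&cpus, GFP_ATOMIC))) {
		/* IPI only the CPUs that actually hold a per-cpu slab. */
		for_each_online_cpu(cpu)
			if (per_cpu_ptr(s->cpu_slab, cpu)->page)
				cpumask_set_cpu(cpu, cpus);
		on_each_cpu_mask(cpus, flush_cpu_slab, s, 1);
		free_cpumask_var(cpus);
	} else {
		/* Allocation failed: fall back to IPIing every CPU. */
		on_each_cpu(flush_cpu_slab, s, 1);
	}
}

The fallback branch is just the current flush_all() body, so a failed
allocation only costs us the optimization, never correctness.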
What do you guys think?
Thanks!
Gilad
--
Gilad Ben-Yossef
Chief Coffee Drinker
gilad@benyossef.com
Israel Cell: +972-52-8260388
US Cell: +1-973-8260388
http://benyossef.com
"I've seen things you people wouldn't believe. Goto statements used to
implement co-routines. I watched C structures being stored in
registers. All those moments will be lost in time... like tears in
rain... Time to die. "