From: Roman Gushchin <klamm@yandex-team.ru>
To: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@gentwo.org>,
penberg@kernel.org, mpm@selenic.com, akpm@linux-foundation.org,
mgorman@suse.de, glommer@parallels.com, hannes@cmpxchg.org,
minchan@kernel.org, jiang.liu@huawei.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] slub: Avoid direct compaction if possible
Date: Mon, 17 Jun 2013 16:34:23 +0400
Message-ID: <51BF024F.2080609@yandex-team.ru>
In-Reply-To: <alpine.DEB.2.02.1306141322490.17237@chino.kir.corp.google.com>
On 15.06.2013 00:26, David Rientjes wrote:
> On Fri, 14 Jun 2013, Christoph Lameter wrote:
>
>>> It's possible to avoid such problems (or at least to make them less probable)
>>> by avoiding direct compaction. If it's not possible to allocate a contiguous
>>> page without compaction, slub will fall back to order 0 page(s). In this case
>>> kswapd will be woken to perform asynchronous compaction. So, slub can return
>>> to default-order allocations as soon as memory is defragmented.
>>
>> Sounds like a good idea. Do you have some numbers to show the effect of
>> this patch?
>>
>
> I'm surprised you like this patch, it basically makes slub allocations
> atomic and doesn't try memory compaction or reclaim. Asynchronous
> compaction certainly isn't aggressive enough to mimic the effects of the
> old lumpy reclaim that would have resulted in less fragmented memory. If
> slub is the only thing that is doing high-order allocations, it will start
> falling back to the smallest page order much much more often.
>
> I agree that this doesn't seem like a slub issue at all but rather a page
> allocator issue; if we have many simultaneous thp faults at the same time
> and /sys/kernel/mm/transparent_hugepage/defrag is "always" then you'll get
> the same problem if deferred compaction isn't helping.
>
> So I don't think we should be patching slub in any special way here.
>
> Roman, are you using the latest kernel? If so, what does
> grep compact_ /proc/vmstat show after one or more of these events?
>
We're using 3.4, and the problem appeared when we moved from 3.2 to 3.4.
It can also be reproduced on 3.5.
I'll send the exact numbers as soon as I reproduce it again;
it can take up to a week.
Thanks!
Regards,
Roman