From: Vladimir Davydov <vdavydov@parallels.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: linux-mm@kvack.org, Michal Hocko <mhocko@suse.cz>,
Greg Thelen <gthelen@google.com>, Dave Hansen <dave@sr71.net>,
cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [patch] mm: memcontrol: support transparent huge pages under pressure
Date: Mon, 22 Sep 2014 14:18:03 +0400 [thread overview]
Message-ID: <20140922101803.GB20398@esperanza> (raw)
In-Reply-To: <1411132840-16025-1-git-send-email-hannes@cmpxchg.org>
On Fri, Sep 19, 2014 at 09:20:40AM -0400, Johannes Weiner wrote:
> In a memcg with even just moderate cache pressure, success rates for
> transparent huge page allocations drop to zero, wasting a lot of
> effort that the allocator puts into assembling these pages.
>
> The reason for this is that the memcg reclaim code was never designed
> for higher-order charges. It reclaims in small batches until there is
> room for at least one page. Huge page charges only succeed when
> these batches add up over a series of huge faults, which is unlikely
> under any significant load involving order-0 allocations in the group.
>
> Remove that loop on the memcg side in favor of passing the actual
> reclaim goal to direct reclaim, which is already set up and optimized
> to meet higher-order goals efficiently.
>
> This brings memcg's THP policy in line with the system policy: if the
> allocator painstakingly assembles a hugepage, memcg will at least make
> an honest effort to charge it. As a result, transparent hugepage
> allocation rates amid cache activity are drastically improved:
>
>                           vanilla               patched
> pgalloc             4717530.80 ( +0.00%)  4451376.40 (  -5.64%)
> pgfault              491370.60 ( +0.00%)   225477.40 ( -54.11%)
> pgmajfault                 2.00 ( +0.00%)        1.80 (  -6.67%)
> thp_fault_alloc            0.00 ( +0.00%)      531.60 (+100.00%)
> thp_fault_fallback       749.00 ( +0.00%)      217.40 ( -70.88%)
>
> [ Note: this may in turn increase memory consumption from internal
> fragmentation, which is an inherent risk of transparent hugepages.
> Some setups may have to adjust the memcg limits accordingly to
> accommodate this - or, if the machine is already packed to capacity,
> disable the transparent huge page feature. ]
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
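To make the behavioural difference concrete, here is a stand-alone
userspace model of the charging logic the changelog describes. It is
only a sketch under simplifying assumptions: the memcg is reduced to a
plain page counter, reclaim() just drops cache, and the helper names
(try_charge, charge_batched, charge_targeted) are made up for
illustration - this is not the kernel code the patch touches.

#include <stdbool.h>
#include <stdio.h>

#define SWAP_CLUSTER_MAX 32UL	/* size of an order-0 reclaim batch */
#define HPAGE_NR_PAGES	 512UL	/* pages backing one 2MB THP on x86-64 */

struct memcg {
	unsigned long limit;		/* hard limit, in pages */
	unsigned long usage;		/* pages currently charged */
	unsigned long reclaimable;	/* clean cache we could drop */
};

/* Charge @nr pages if they fit under the limit. */
static bool try_charge(struct memcg *m, unsigned long nr)
{
	if (m->usage + nr > m->limit)
		return false;
	m->usage += nr;
	return true;
}

/* Drop up to @goal pages of reclaimable cache from the group. */
static void reclaim(struct memcg *m, unsigned long goal)
{
	unsigned long freed = goal < m->reclaimable ? goal : m->reclaimable;

	m->reclaimable -= freed;
	m->usage -= freed;
}

/* Old behaviour: retry a few times, reclaiming a small fixed batch. */
static bool charge_batched(struct memcg *m, unsigned long nr)
{
	for (int retries = 0; retries < 5; retries++) {
		if (try_charge(m, nr))
			return true;
		reclaim(m, SWAP_CLUSTER_MAX);
	}
	return false;
}

/* New behaviour: tell reclaim how much room the charge actually needs. */
static bool charge_targeted(struct memcg *m, unsigned long nr)
{
	if (try_charge(m, nr))
		return true;
	reclaim(m, nr);
	return try_charge(m, nr);
}

int main(void)
{
	/* Group near its limit but with plenty of droppable cache. */
	struct memcg a = { .limit = 1024, .usage = 1000, .reclaimable = 900 };
	struct memcg b = a;

	printf("batched THP charge:  %s\n",
	       charge_batched(&a, HPAGE_NR_PAGES) ? "succeeds" : "fails");
	printf("targeted THP charge: %s\n",
	       charge_targeted(&b, HPAGE_NR_PAGES) ? "succeeds" : "fails");
	return 0;
}

In this toy setup the fixed 32-page batches free only 160 pages over
five retries, far short of the 512 the hugepage needs, while a single
reclaim pass sized to the charge makes room for it at once.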
Looks like a really nice change to me. FWIW,
Reviewed-by: Vladimir Davydov <vdavydov@parallels.com>