From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Ning Zhang <ningzhang@linux.alibaba.com>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@kernel.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
Yu Zhao <yuzhao@google.com>
Subject: Re: [RFC 0/6] Reclaim zero subpages of thp to avoid memory bloat
Date: Thu, 28 Oct 2021 17:13:33 +0300 [thread overview]
Message-ID: <20211028141333.kgcjgsnrrjuq4hjx@box.shutemov.name> (raw)
In-Reply-To: <1635422215-99394-1-git-send-email-ningzhang@linux.alibaba.com>
On Thu, Oct 28, 2021 at 07:56:49PM +0800, Ning Zhang wrote:
> As we know, THP may lead to memory bloat, which may cause OOM.
> Through testing with some apps, we found that the cause of the
> memory bloat is that a huge page may contain some zero subpages
> (whether accessed or not). We also found that most zero subpages
> are concentrated in a few huge pages.
>
> Following is a text_classification_rnn case for tensorflow:
>
> zero_subpages huge_pages waste
> [ 0, 1) 186 0.00%
> [ 1, 2) 23 0.01%
> [ 2, 4) 36 0.02%
> [ 4, 8) 67 0.08%
> [ 8, 16) 80 0.23%
> [ 16, 32) 109 0.61%
> [ 32, 64) 44 0.49%
> [ 64, 128) 12 0.30%
> [ 128, 256) 28 1.54%
> [ 256, 513) 159 18.03%
>
> In this case, there are 187 huge pages (25% of the total huge pages)
> which contain more than 128 zero subpages. These huge pages account
> for 19.57% waste of the total RSS. It means we can reclaim 19.57%
> of memory by splitting the 187 huge pages and reclaiming their
> zero subpages.
>
> This patchset introduces a new mechanism to split huge pages which
> have zero subpages and to reclaim those zero subpages.
>
> We add anonymous huge pages to a list to reduce the cost of finding
> them. When memory reclaim is triggered, the list is walked and huge
> pages containing enough zero subpages may be reclaimed. Meanwhile,
> the zero subpages are replaced by ZERO_PAGE(0).
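A minimal userspace sketch of the zero-subpage detection the cover letter describes (the constant and helper names are illustrative, not kernel API; in-kernel code would more likely use memchr_inv()):

```c
#include <stdbool.h>
#include <stddef.h>

#define SUBPAGE_SIZE 4096

/* Hypothetical helper: returns true if a 4K subpage is entirely zero,
 * mimicking the scan a kernel-side walker would perform before
 * replacing the subpage with ZERO_PAGE(0). */
static bool subpage_is_zero(const void *subpage)
{
	const unsigned long *p = subpage;
	size_t n = SUBPAGE_SIZE / sizeof(unsigned long);

	for (size_t i = 0; i < n; i++)
		if (p[i])
			return false;
	return true;
}

/* Count zero subpages in a 2M huge page (512 subpages of 4K),
 * the quantity bucketed in the table above. */
static int count_zero_subpages(const char *hpage, size_t nr_subpages)
{
	int zeros = 0;

	for (size_t i = 0; i < nr_subpages; i++)
		if (subpage_is_zero(hpage + i * SUBPAGE_SIZE))
			zeros++;
	return zeros;
}
```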
Does it actually help your workload?
I mean this will only be triggered via vmscan, which was going to split
the pages and free them anyway.
You prioritize splitting THPs and freeing zero subpages over reclaiming
other pages. That may or may not be the right thing to do, depending on
the workload.
Maybe it makes more sense to check for all-zero pages just after
split_huge_page_to_list() in vmscan and free such pages immediately,
rather than add all this complexity?
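The simpler flow suggested here, scanning subpages right after the split and freeing the all-zero ones, could be sketched in userspace terms as follows (free_subpage() and the page array are stand-ins for the allocator and the post-split subpage list, not real kernel API):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define SUBPAGE_SIZE 4096

/* Stand-in for returning a subpage to the page allocator. */
static int freed;
static void free_subpage(void *subpage)
{
	(void)subpage;
	freed++;
}

static bool is_all_zero(const void *p)
{
	static const char zeros[SUBPAGE_SIZE];

	return memcmp(p, zeros, SUBPAGE_SIZE) == 0;
}

/* After split_huge_page_to_list() has broken a huge page into 4K
 * subpages, free the all-zero ones immediately instead of keeping a
 * separate list of THP candidates and walking it during reclaim. */
static int reclaim_zero_subpages(char *subpages[], size_t n)
{
	int reclaimed = 0;

	for (size_t i = 0; i < n; i++) {
		if (is_all_zero(subpages[i])) {
			free_subpage(subpages[i]);
			reclaimed++;
		}
	}
	return reclaimed;
}
```

The trade-off is that this only reclaims zero subpages of THPs that vmscan already chose to split, whereas the patchset's list lets zero-heavy THPs be targeted first.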
> Yu Zhao has done some similar work to accelerate the case where a
> huge page is swapped out or migrated [1]. In contrast, we do this in
> the normal memory shrink path, e.g. for the swapoff scenario, to
> avoid OOM.
>
> In the future, we will reclaim "cold" huge pages proactively. This
> is to keep the performance benefit of THP as far as possible. In
> addition, some users want the memory usage with THP to be equal to
> the usage with 4K pages.
Proactive reclaim can be harmful if your max_ptes_none setting allows
the THP to be recreated.
--
Kirill A. Shutemov
Thread overview: 17+ messages
2021-10-28 11:56 Ning Zhang
2021-10-28 11:56 ` [RFC 1/6] mm, thp: introduce thp zero subpages reclaim Ning Zhang
2021-10-28 12:53 ` Matthew Wilcox
2021-10-29 12:16 ` ning zhang
2021-10-28 11:56 ` [RFC 2/6] mm, thp: add a global interface for zero subapges reclaim Ning Zhang
2021-10-28 11:56 ` [RFC 3/6] mm, thp: introduce zero subpages reclaim threshold Ning Zhang
2021-10-28 11:56 ` [RFC 4/6] mm, thp: introduce a controller to trigger zero subpages reclaim Ning Zhang
2021-10-28 11:56 ` [RFC 5/6] mm, thp: add some statistics for " Ning Zhang
2021-10-28 11:56 ` [RFC 6/6] mm, thp: add document " Ning Zhang
2021-10-28 14:13 ` Kirill A. Shutemov [this message]
2021-10-29 12:07 ` [RFC 0/6] Reclaim zero subpages of thp to avoid memory bloat ning zhang
2021-10-29 16:56 ` Yang Shi
2021-11-01 2:50 ` ning zhang
2021-10-29 13:38 ` Michal Hocko
2021-10-29 16:12 ` ning zhang
2021-11-01 9:20 ` Michal Hocko
2021-11-08 3:24 ` ning zhang