From: Yang Shi <yang.shi@linux.alibaba.com>
To: David Rientjes <rientjes@google.com>
Cc: ktkhai@virtuozzo.com, hannes@cmpxchg.org, mhocko@suse.com,
kirill.shutemov@linux.intel.com, hughd@google.com,
shakeelb@google.com, Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 0/3] Make deferred split shrinker memcg aware
Date: Thu, 30 May 2019 11:22:21 +0800 [thread overview]
Message-ID: <9af25d50-576a-3cc3-20a3-c0c61cf3e494@linux.alibaba.com> (raw)
In-Reply-To: <alpine.DEB.2.21.1905291402360.242480@chino.kir.corp.google.com>
On 5/30/19 5:07 AM, David Rientjes wrote:
> On Wed, 29 May 2019, Yang Shi wrote:
>
>>> Right, we've also encountered this. I talked to Kirill about it a week or
>>> so ago, where the suggestion was to split all compound pages on the
>>> deferred split queues in the presence of any memory pressure.
>>>
>>> That breaks cgroup isolation and perhaps unfairly penalizes workloads that
>>> are running attached to other memcg hierarchies that are not under
>>> pressure because their compound pages are now split as a side effect.
>>> There is a benefit to keeping these compound pages around while not under
>>> memory pressure if all pages are subsequently mapped again.
>> Yes, I agree. I tried other approaches too; it sounds like making the
>> deferred split queue per-memcg is the optimal one.
>>
> The approach we went with was to track the actual counts of compound
> pages on the deferred split queue for each pgdat for each memcg, and then
> invoke the shrinker for memcg reclaim and iterate those not charged to the
> hierarchy under reclaim. That's suboptimal and was a stop-gap measure
> under time pressure: it's refreshing to see the optimal method being
> pursued, thanks!
We did exactly the same thing as a temporary hotfix.
>
>>> I'm curious if your internal applications team is also asking for
>>> statistics on how much memory can be freed if the deferred split queues
>>> can be shrunk? We have applications that monitor their own memory usage
>> No, but this reminds me: the THPs on the deferred split queue should be
>> counted toward available memory too.
>>
> Right, and we have also seen this for users of MADV_FREE that see both
> increased rss and memcg usage and don't realize that the memory is freed
> under pressure. I'm thinking that we need some kind of MemAvailable for
> memcg hierarchies to be the authoritative source of what can be reclaimed
> under pressure.
It sounds useful. We also need to know the available memory at memcg scope
in our containers.
>
>>> through memcg stats or usage and proactively try to reduce that usage when
>>> it is growing too large. The deferred split queues have significantly
>>> increased both memcg usage and rss when they've upgraded kernels.
>>>
>>> How are your applications monitoring how much memory from deferred split
>>> queues can be freed on memory pressure? Any thoughts on providing it as a
>>> memcg stat?
>> I don't think they have such monitoring. I saw rss_huge stay abnormally
>> high in memcg stat even after the application was killed by the OOM killer,
>> so I realized the deferred split queue may play a role here.
>>
> Exactly the same in my case :) We were likely looking at the exact same
> issue at the same time.
Yes, it seems so. :-)
>> The memcg stat doesn't have counters for available memory the way global
>> vmstat does. It may be better to have such statistics, or to extend
>> reclaimable "slab" to shrinkable/reclaimable "memory".
>>
> Have you considered following how NR_ANON_MAPPED is tracked for each pgdat
> and using that as an indicator of when to modify a memcg stat to track
> the amount of memory on a compound page? I think this would be necessary
> for userspace to know what their true memory usage is.
No, I haven't. Do you mean subtracting MADV_FREE and deferred split THP from
NR_ANON_MAPPED? It looks like they have already been decremented from
NR_ANON_MAPPED when the rmap is removed.
Thread overview: 20+ messages
2019-05-28 12:44 Yang Shi
2019-05-28 12:44 ` [PATCH 1/3] mm: thp: make " Yang Shi
2019-05-28 14:42 ` Kirill Tkhai
2019-05-29 2:43 ` Yang Shi
2019-05-29 8:14 ` Kirill Tkhai
2019-05-29 11:25 ` Yang Shi
2019-06-10 8:23 ` Kirill Tkhai
2019-06-10 17:25 ` Yang Shi
2019-06-13 8:19 ` Kirill Tkhai
2019-06-13 17:53 ` Yang Shi
2019-05-30 12:07 ` Kirill A. Shutemov
2019-05-30 13:29 ` Yang Shi
2019-05-28 12:44 ` [PATCH 2/3] mm: thp: remove THP destructor Yang Shi
2019-05-28 12:44 ` [PATCH 3/3] mm: shrinker: make shrinker not depend on memcg kmem Yang Shi
2019-05-30 12:08 ` Kirill A. Shutemov
2019-05-30 13:20 ` Yang Shi
2019-05-29 1:22 ` [RFC PATCH 0/3] Make deferred split shrinker memcg aware David Rientjes
2019-05-29 2:34 ` Yang Shi
2019-05-29 21:07 ` David Rientjes
2019-05-30 3:22 ` Yang Shi [this message]