linux-mm.kvack.org archive mirror
From: "Huang, Ying" <ying.huang@intel.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>,
	 Dave Hansen <dave.hansen@intel.com>,
	Yang Shi <shy828301@gmail.com>,  Wei Xu <weixugc@google.com>,
	 Andrew Morton <akpm@linux-foundation.org>,
	 linux-mm@kvack.org,  LKML <linux-kernel@vger.kernel.org>
Subject: Re: memcg reclaim demotion wrt. isolation
Date: Fri, 16 Dec 2022 11:16:26 +0800	[thread overview]
Message-ID: <877cys9gxh.fsf@yhuang6-desk2.ccr.corp.intel.com> (raw)
In-Reply-To: <Y5rZSRxcgQzQQVbS@cmpxchg.org> (Johannes Weiner's message of "Thu, 15 Dec 2022 09:22:33 +0100")

Johannes Weiner <hannes@cmpxchg.org> writes:

> On Thu, Dec 15, 2022 at 02:17:13PM +0800, Huang, Ying wrote:
>> Michal Hocko <mhocko@suse.com> writes:
>> 
>> > On Tue 13-12-22 17:14:48, Johannes Weiner wrote:
>> >> On Tue, Dec 13, 2022 at 04:41:10PM +0100, Michal Hocko wrote:
>> >> > Hi,
>> >> > I have just noticed that that pages allocated for demotion targets
>> >> > includes __GFP_KSWAPD_RECLAIM (through GFP_NOWAIT). This is the case
>> >> > since the code has been introduced by 26aa2d199d6f ("mm/migrate: demote
>> >> > pages during reclaim"). I suspect the intention is to trigger the aging
>> >> > on the fallback node and either drop or further demote oldest pages.
>> >> > 
>> >> > This makes sense but I suspect that this wasn't intended also for
>> >> > memcg triggered reclaim. This would mean that a memory pressure in one
>> >> > hierarchy could trigger paging out pages of a different hierarchy if the
>> >> > demotion target is close to full.
>> >> 
>> >> This is also true if you don't do demotion. If a cgroup tries to
>> >> allocate memory on a full node (i.e. mbind()), it may wake kswapd or
>> >> enter global reclaim directly which may push out the memory of other
>> >> cgroups, regardless of the respective cgroup limits.
>> >
>> > You are right on this. But this is describing a slightly different
>> > situation IMO.
>> >
>> >> The demotion allocations don't strike me as any different. They're
>> >> just allocations on behalf of a cgroup. I would expect them to wake
>> >> kswapd and reclaim physical memory as needed.
>> >
>> > I am not sure this is an expected behavior. Consider the currently
>> > discussed memory.demote interface where userspace can trigger
>> > (almost) arbitrary demotions. This can deplete fallback nodes without
>> > over-committing the memory overall yet push out demoted memory from
>> > other workloads. From the user POV it would look like a reclaim while
>> > the overall memory is far from depleted so it would be considered as
>> > premature and would warrant a bug report.
>> >
>> > The reclaim behavior would make more sense to me if it was constrained
>> > to the allocating memcg hierarchy so unrelated lruvecs wouldn't be
>> > disrupted.
>> 
>> When we reclaim/demote some pages from a memcg proactively, what is our
>> goal?  To free up some memory in this memcg for other memcgs to use?  If
>> so, it sounds reasonable to keep as many pages of other memcgs as
>> possible.
>
> The goal of proactive aging is to free up any resources that aren't
> needed to meet the SLAs (e.g. end-to-end response time of webserver).
> Meaning, to run things as leanly as possible within spec. Into that
> free space, another container can then be co-located.
>
> This means that the goal is to free up as many resources as possible,
> starting with the coveted hightier. If a container has been using
> all-hightier memory but is able to demote to lowtier, there are 3 options
> for existing memory in the lower tier:
>
> 1) Colder/stale memory - should be displaced
>
> 2) Memory that can be promoted once the hightier is free -
>    reclaim/demotion of the coldest pages needs to happen at least
>    temporarily, or the tierswap is in a stalemate.
>
> 3) Equally hot memory - if this exceeds capacity of the lower tier,
>    the hottest overall pages should stay, the excess demoted/reclaimed.
>
> You can't know what scenario you're in until you put the demoted pages
> in direct LRU competition with what's already there. And in all three
> scenarios, direct LRU competition also produces the optimal outcome.

If I understand correctly, your preferred semantics are memcg-specific
in the higher tier, and global in the lower tier.

Another choice is to add a global "memory.reclaim" knob per memory
tier, for example
/sys/devices/virtual/memory_tiering/memory_tier<N>/memory.reclaim.
Then we can first trigger global memory reclaim in the lower tiers, and
then trigger memcg-specific memory reclaim in the higher tier for the
specified memcg.

The con of this choice is that it takes two steps to finish the work.
The pro is that you don't need to combine memcg-specific and global
behavior in one interface.
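
The two-step flow above could look roughly like this (a sketch only:
the per-tier memory.reclaim file is a hypothetical interface proposed
here, and the tier number, cgroup path, and sizes are illustrative):

```shell
# Step 1 (hypothetical interface): globally reclaim up to 1G from the
# lower tier, so demotion targets have free space.
echo 1G > /sys/devices/virtual/memory_tiering/memory_tier2/memory.reclaim

# Step 2 (existing cgroup v2 interface): reclaim/demote up to 1G from
# the specified memcg; its hot higher-tier pages can then demote into
# the space freed in step 1.
echo 1G > /sys/fs/cgroup/workload/memory.reclaim
```

With this split, step 1 is explicitly global while step 2 stays
confined to the allocating memcg hierarchy, so the two behaviors are
never mixed in a single knob.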

Best Regards,
Huang, Ying


Thread overview: 13+ messages
2022-12-13 15:41 Michal Hocko
2022-12-13 16:14 ` Johannes Weiner
2022-12-14  9:42   ` Michal Hocko
2022-12-14 12:40     ` Johannes Weiner
2022-12-14 15:29       ` Michal Hocko
2022-12-14 17:40         ` Johannes Weiner
2022-12-15  6:17     ` Huang, Ying
2022-12-15  8:22       ` Johannes Weiner
2022-12-16  3:16         ` Huang, Ying [this message]
2022-12-13 22:26 ` Dave Hansen
2022-12-14  9:45   ` Michal Hocko
2022-12-14  2:57 ` Huang, Ying
2022-12-14  9:49   ` Michal Hocko
