From: Mina Almasry <almasrymina@google.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Yang Shi <yang.shi@linux.alibaba.com>,
Yosry Ahmed <yosryahmed@google.com>,
Tim Chen <tim.c.chen@linux.intel.com>,
weixugc@google.com, shakeelb@google.com, gthelen@google.com,
fvdl@google.com, Michal Hocko <mhocko@kernel.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
Muchun Song <songmuchun@bytedance.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
linux-mm@kvack.org
Subject: Re: [RFC PATCH V1] mm: Disable demotion from proactive reclaim
Date: Thu, 1 Dec 2022 18:06:18 -0800 [thread overview]
Message-ID: <CAHS8izO3HZOpACJV0zqGZ-OpsNWYat3H-Adp9Vg7mtVO+5C3fw@mail.gmail.com> (raw)
In-Reply-To: <87tu2e36nw.fsf@yhuang6-desk2.ccr.corp.intel.com>
On Thu, Dec 1, 2022 at 6:02 PM Huang, Ying <ying.huang@intel.com> wrote:
>
> Mina Almasry <almasrymina@google.com> writes:
>
> > On Tue, Nov 29, 2022 at 7:56 PM Huang, Ying <ying.huang@intel.com> wrote:
> >>
> >> Johannes Weiner <hannes@cmpxchg.org> writes:
> >>
> >> > Hello Ying,
> >> >
> >> > On Thu, Nov 24, 2022 at 01:51:20PM +0800, Huang, Ying wrote:
> >> >> Johannes Weiner <hannes@cmpxchg.org> writes:
> >> >> > The fallback to reclaim actually strikes me as wrong.
> >> >> >
> >> >> > Think of reclaim as 'demoting' the pages to the storage tier. If we
> >> >> > have a RAM -> CXL -> storage hierarchy, we should demote from RAM to
> >> >> > CXL and from CXL to storage. If we reclaim a page from RAM, it means
> >> >> > we 'demote' it directly from RAM to storage, bypassing potentially a
> >> >> > huge amount of pages colder than it in CXL. That doesn't seem right.
> >> >> >
> >> >> > If demotion fails, IMO it shouldn't satisfy the reclaim request by
> >> >> > breaking the layering. Rather it should deflect that pressure to the
> >> >> > lower layers to make room. This makes sure we maintain an aging
> >> >> > pipeline that honors the memory tier hierarchy.
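
[The layering rule above can be sketched as a toy model; the tier numbering and function names are illustrative only, not kernel code:]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a RAM(0) -> CXL(1) -> storage(2) hierarchy. */
enum { TIER_RAM = 0, TIER_CXL = 1, TIER_STORAGE = 2 };

/* A page on tier t may only ever move one step down, to tier t + 1. */
static int demotion_target(int tier)
{
    return tier + 1;
}

/*
 * Reclaim is "demotion to storage": it is only legal from the tier
 * directly above storage, so a page never skips past colder pages
 * sitting on an intermediate tier.
 */
static bool may_reclaim(int tier)
{
    return demotion_target(tier) == TIER_STORAGE;
}
```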
> >> >>
> >> >> Yes. I think that we should avoid falling back to reclaim as much as
> >> >> possible too. Now, when we allocate memory for demotion
> >> >> (alloc_demote_page()), __GFP_KSWAPD_RECLAIM is used. So we will trigger
> >> >> kswapd reclaim on the lower-tier node to free some memory and avoid
> >> >> falling back to reclaim on the current (higher-tier) node. This may not
> >> >> be good enough; for example, the following patch from Hasan may help by
> >> >> waking up kswapd earlier.
> >> >>
> >> >> https://lore.kernel.org/linux-mm/b45b9bf7cd3e21bca61d82dcd1eb692cd32c122c.1637778851.git.hasanalmaruf@fb.com/
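
[For context: alloc_demote_page() in mm/vmscan.c at the time of this thread builds its gfp_mask as roughly (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) | GFP_NOWAIT | ..., and GFP_NOWAIT is just __GFP_KSWAPD_RECLAIM, so the demotion allocation may wake kswapd on the target node but never direct-reclaims there. A toy model of that flag arithmetic, with made-up bit values -- only the set relationships matter:]

```c
#include <assert.h>

/* Illustrative stand-ins for the real gfp bits (values are made up). */
#define __GFP_DIRECT_RECLAIM  (1u << 0)
#define __GFP_KSWAPD_RECLAIM  (1u << 1)
#define __GFP_RECLAIM         (__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)

/*
 * Clear all reclaim bits from the base mask, then allow only kswapd
 * reclaim: the demotion allocation can nudge background reclaim on
 * the lower-tier node but fails fast instead of stalling.
 */
static unsigned int demote_gfp(unsigned int base)
{
    return (base & ~__GFP_RECLAIM) | __GFP_KSWAPD_RECLAIM;
}
```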
> >> >>
> >> >> Do you know what is the next step plan for this patch?
> >> >>
> >> >> Should we do even more?
> >> >>
> >> >> From another point of view, I still think that we can use falling back
> >> >> to reclaim as the last resort to avoid OOM in some special situations,
> >> >> for example, when most pages in the lowest-tier node are mlock()ed or
> >> >> too hot to be reclaimed.
> >> >
> >> > If they're hotter than reclaim candidates on the toptier, shouldn't
> >> > they get promoted instead and make room that way? We may have to tweak
> >> > the watermark logic a bit to facilitate that (allow promotions where
> >> > regular allocations already fail?). But this sort of resorting would
> >> > be preferable to age inversions.
> >>
> >> Now it's legal to enable demotion and disable promotion. Yes, this is a
> >> wrong configuration in general, but should we trigger OOM for these
> >> users?
> >>
> >> And promotion currently only works for the default NUMA policy (and for
> >> MPOL_BIND to both the promotion source and target nodes with
> >> MPOL_F_NUMA_BALANCING). If we use some other NUMA policy, the pages
> >> cannot be promoted either.
> >>
> >> > The mlock scenario sounds possible. In that case, it wouldn't be an
> >> > aging inversion, since there is nothing colder on the CXL node.
> >> >
> >> > Maybe a bypass check should explicitly consult the demotion target
> >> > watermarks against its evictable pages (similar to the file_is_tiny
> >> > check in prepare_scan_count)?
> >>
> >> Yes. This sounds doable.
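
[A sketch of the bypass check proposed above, with invented names: fall back to top-tier reclaim only when the demotion target could not make room even if every evictable page on it were pushed out, e.g. because the node is dominated by mlocked memory.]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical bypass check, in the spirit of the file_is_tiny test
 * in prepare_scan_count(): the demotion target is considered
 * exhausted only if freeing all of its evictable pages still could
 * not lift it above its watermark.
 */
static bool demotion_target_exhausted(unsigned long nr_free,
                                      unsigned long nr_evictable,
                                      unsigned long watermark)
{
    return nr_free + nr_evictable < watermark;
}
```

[Only in the exhausted case would reclaim from the top tier be allowed; otherwise pressure keeps getting deflected down the hierarchy.]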
> >>
> >> > Because in any other scenario, if there is a bug in the promo/demo
> >> > coordination, I think we'd rather have the OOM than deal with age
> >> > inversions causing intermittent performance issues that are incredibly
> >> > hard to track down.
> >>
> >> Previously, I thought that people would always prefer a performance
> >> regression to OOM. Apparently, I was wrong.
> >>
> >> Anyway, I think that we first need to reduce the possibility of OOM or
> >> of falling back to reclaim as much as possible. Do you agree?
> >>
> >
> > I've been discussing this with a few folks here. FWIW, I think the
> > general feeling here is that demoting from top-tier nodes is preferred,
> > and even in extreme circumstances we would rather run with a
> > performance issue than OOM a customer VM. I wonder if there is another
> > way to debug mis-tiered pages than triggering an OOM.
> >
> > One thing I think/hope we can trivially agree on is that proactive
> > reclaim/demotion is _not_ an extreme circumstance. I would like me or
> > someone from the team to follow up with a patch that disables the
> > fallback to reclaim during proactive reclaim/demotion (sc->proactive).
>
> Yes. This makes sense to me.
>
Glad to hear it. The patch is already out for review, btw:
https://lore.kernel.org/linux-mm/20221201233317.1394958-1-almasrymina@google.com/T/
> Best Regards,
> Huang, Ying
>
> >> One possibility: can we fall back to reclaim only when sc->priority is
> >> small enough (maybe even 0)?
> >>
> >
> > This makes sense to me.
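
[Taken together, the two proposals in this thread amount to a policy like the following sketch. The field names mirror struct scan_control, but the helper itself is hypothetical:]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical policy: proactive (memory.reclaim-driven) scans never
 * fall back from demotion to reclaim; reactive scans fall back only
 * once reclaim priority has dropped to its most desperate value, 0.
 */
static bool may_fall_back_to_reclaim(bool proactive, int priority)
{
    if (proactive)
        return false;
    return priority == 0;
}
```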
> >
> >> Best Regards,
> >> Huang, Ying
> >>
>