From: Yosry Ahmed <yosryahmed@google.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
Shakeel Butt <shakeelb@google.com>,
Muchun Song <songmuchun@bytedance.com>,
Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Vlastimil Babka <vbabka@suse.cz>,
David Hildenbrand <david@redhat.com>,
Miaohe Lin <linmiaohe@huawei.com>, NeilBrown <neilb@suse.de>,
Alistair Popple <apopple@nvidia.com>,
Suren Baghdasaryan <surenb@google.com>,
Peter Xu <peterx@redhat.com>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Cgroups <cgroups@vger.kernel.org>, Linux-MM <linux-mm@kvack.org>
Subject: Re: [PATCH] mm: vmpressure: don't count userspace-induced reclaim as memory pressure
Date: Thu, 23 Jun 2022 09:22:35 -0700
Message-ID: <CAJD7tkadsLOV7GMFAm+naX4Y1WpZ-4=NkAhAMxNw60iaRPWx=w@mail.gmail.com>
In-Reply-To: <YrQ1o3CeaZWhm+h4@dhcp22.suse.cz>
On Thu, Jun 23, 2022 at 2:43 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Thu 23-06-22 01:35:59, Yosry Ahmed wrote:
> > On Thu, Jun 23, 2022 at 1:05 AM Michal Hocko <mhocko@suse.com> wrote:
> > >
> > > On Thu 23-06-22 00:05:30, Yosry Ahmed wrote:
> > > > Commit e22c6ed90aa9 ("mm: memcontrol: don't count limit-setting reclaim
> > > > as memory pressure") made sure that memory reclaim that is induced by
> > > > userspace (limit-setting, proactive reclaim, ...) is not counted as
> > > > memory pressure for the purposes of psi.
> > > >
> > > > Instead of counting psi inside try_to_free_mem_cgroup_pages(), callers
> > > > from try_charge() and reclaim_high() wrap the call to
> > > > try_to_free_mem_cgroup_pages() with psi handlers.
> > > >
> > > > However, vmpressure is still counted in these cases where reclaim is
> > > > directly induced by userspace. This patch makes sure vmpressure is not
> > > > counted in those operations, in the same way as psi. Since vmpressure
> > > > calls need to happen deeper within the reclaim path, the same approach
> > > > could not be followed. Hence, a new "controlled" flag is added to struct
> > > > scan_control to flag a reclaim operation that is controlled by
> > > > userspace. This flag is set by limit-setting and proactive reclaim
> > > > operations, and is used to count vmpressure correctly.
> > > >
> > > > To prevent future divergence of psi and vmpressure, commit e22c6ed90aa9
> > > > ("mm: memcontrol: don't count limit-setting reclaim as memory pressure")
> > > > is effectively reverted and the same flag is used to control psi as
> > > > well.
> > >
> > > Why do we need to add this to a legacy interface now? Are there any
> > > pre-existing users who realized this is bugging them? Please be more
> > > specific about the usecase.
> >
> > Sorry if I wasn't clear enough. Unfortunately we still have userspace
> > workloads at Google that use vmpressure notifications.
> >
> > In our internal version of memory.reclaim that we recently upstreamed,
> > we do not account vmpressure during proactive reclaim (similar to how
> > psi is handled upstream). We want to make sure this behavior also
> > exists in the upstream version so that consolidating them does not
> > break our users who rely on vmpressure and will start seeing increased
> > pressure due to proactive reclaim.
>
> These are good reasons to have this patch in your tree. But why is this
> patch beneficial for the upstream kernel? It clearly adds some code and
> some special casing which will add maintenance overhead.
It is not just Google: any existing vmpressure users will start seeing
false pressure notifications with memory.reclaim. The main goal of the
patch is to make sure memory.reclaim does not break pre-existing users
of vmpressure, and doing it in a way that is consistent with psi makes
sense.
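
To make the mechanism concrete, here is a minimal standalone sketch of
the idea (names are illustrative, not the actual patch; in the patch the
flag lives in struct scan_control and is set by the limit-setting and
proactive reclaim entry points):

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the relevant bit of struct scan_control. */
struct scan_control_sketch {
	/* Reclaim was requested by userspace (limit-setting,
	 * proactive reclaim via memory.reclaim, ...). */
	bool controlled;
};

/* Models the accounting decision: userspace-induced reclaim is
 * not counted as memory pressure. The same check can gate psi. */
static void vmpressure_sketch(const struct scan_control_sketch *sc,
			      unsigned long scanned,
			      unsigned long reclaimed)
{
	if (sc->controlled)
		return;
	printf("vmpressure: scanned=%lu reclaimed=%lu\n",
	       scanned, reclaimed);
}

int main(void)
{
	struct scan_control_sketch proactive = { .controlled = true };
	struct scan_control_sketch direct = { .controlled = false };

	vmpressure_sketch(&proactive, 512, 128); /* silent */
	vmpressure_sketch(&direct, 512, 128);    /* counted */
	return 0;
}

The point of carrying a single flag is that vmpressure and psi cannot
diverge again: both accounting paths consult the same bit.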
> --
> Michal Hocko
> SUSE Labs