From: Johannes Weiner <hannes@cmpxchg.org>
To: Shakeel Butt <shakeelb@google.com>
Cc: Rik van Riel <riel@surriel.com>,
lsf-pc@lists.linux-foundation.org, Linux MM <linux-mm@kvack.org>,
Michal Hocko <mhocko@kernel.org>, Roman Gushchin <guro@fb.com>
Subject: Re: [LSF/MM TOPIC] Proactive Memory Reclaim
Date: Tue, 23 Apr 2019 13:49:20 -0400
Message-ID: <20190423174920.GA5613@cmpxchg.org>
In-Reply-To: <CALvZod44yAJTLuvg9jtqHF9uKuKNtXL9p_=3Ld+eakSijAbo1A@mail.gmail.com>
On Tue, Apr 23, 2019 at 10:04:19AM -0700, Shakeel Butt wrote:
> On Tue, Apr 23, 2019 at 9:08 AM Rik van Riel <riel@surriel.com> wrote:
> > On Tue, 2019-04-23 at 08:30 -0700, Shakeel Butt wrote:
> > This sounds similar to a project Johannes has
> > been working on, except he is not tracking which
> > memory is idle at all, but only the pressure on
> > each cgroup, through the PSI interface:
> >
> > https://facebookmicrosites.github.io/psi/docs/overview
> >
>
> I think both techniques are orthogonal and can be used concurrently.
> This technique proactively reclaims memory in the hope that we never
> enter direct reclaim; in the worst case, if we do trigger direct
> reclaim, we can use PSI to detect early when to give up on reclaim
> and trigger an oom-kill.
>
> Another thing I want to point out is our usage model: this proactive
> memory reclaim is transparent to the jobs. The admin (infrastructure
> owner) is using proactive reclaim to create more schedulable memory
> transparently to the job owners.
That's our motivation too.
We want a more accurate sense of the RAM each job actually "requires",
as determined by the job's latency expectations, its access frequency
curve, and the IO latency (or compression and CPU latency, whatever is
used for secondary storage). The latter two change dynamically with
memory and IO access patterns, but psi factors that in.
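To make that concrete, here is a rough sketch of the kind of userspace
control loop this enables. This is not our actual tool, just an
illustration of the shape of such a loop; the cgroup path, pressure
target and step size below are made up. It keeps squeezing a job's
memory.high while the job's memory.pressure stays negligible, and backs
off as soon as the job starts to stall:

  import time

  CGROUP = "/sys/fs/cgroup/myjob"   # hypothetical cgroup2 path
  PRESSURE_TARGET = 0.1             # "some" avg10 stall %, illustrative
  STEP = 16 * 1024 * 1024           # adjust memory.high by 16M per tick

  def some_avg10(cgroup):
      # memory.pressure: "some avg10=0.12 avg60=... avg300=... total=..."
      with open(cgroup + "/memory.pressure") as f:
          fields = f.readline().split()
      return float(fields[1].split("=")[1])

  def current_usage(cgroup):
      with open(cgroup + "/memory.current") as f:
          return int(f.read())

  while True:
      usage = current_usage(CGROUP)
      if some_avg10(CGROUP) < PRESSURE_TARGET:
          # No meaningful stalls yet: push the limit below current
          # usage so the kernel reclaims the coldest pages.
          new_high = max(usage - STEP, STEP)
      else:
          # The job is starting to wait on refaults/IO: give memory back.
          new_high = usage + STEP
      with open(CGROUP + "/memory.high", "w") as f:
          f.write(str(new_high))
      time.sleep(5)

The limit such a loop converges on is a decent estimate of how much RAM
the job really needs at its current pressure tolerance.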
All of this is supposed to be transparent to the job owners and not
impact their performance, while helping them understand their own
memory requirements and how well they utilize their resource
allotment. A better sense of utilization also helps with fleet-wide
capacity planning.
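On the quoted point about using PSI to decide when to give up on
reclaim: a minimal sketch of such a watchdog could look like the
following. The cgroup path, threshold and window are illustrative, and
killing every task in the cgroup is just a placeholder policy:

  import os, signal, time

  CGROUP = "/sys/fs/cgroup/myjob"   # hypothetical cgroup2 path
  FULL_LIMIT = 25.0                 # "full" avg10 stall %, illustrative
  SUSTAIN = 30                      # seconds the stall must persist

  def full_avg10(cgroup):
      with open(cgroup + "/memory.pressure") as f:
          f.readline()                    # skip the "some" line
          fields = f.readline().split()   # "full avg10=... avg60=..."
      return float(fields[1].split("=")[1])

  stalled_since = None
  while True:
      if full_avg10(CGROUP) > FULL_LIMIT:
          stalled_since = stalled_since or time.time()
          if time.time() - stalled_since > SUSTAIN:
              # Reclaim is clearly not keeping the job runnable; kill
              # it instead of letting it thrash (placeholder action).
              with open(CGROUP + "/cgroup.procs") as f:
                  for pid in f.read().split():
                      os.kill(int(pid), signal.SIGKILL)
              stalled_since = None
      else:
          stalled_since = None
      time.sleep(1)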