From: Ganesan Rajagopal <rganesan@arista.com>
To: Shakeel Butt <shakeelb@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@kernel.org>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	Cgroups <cgroups@vger.kernel.org>, Linux MM <linux-mm@kvack.org>
Subject: Re: [PATCH] mm/memcontrol: Export memcg->watermark via sysfs for v2 memcg
Date: Thu, 5 May 2022 22:51:52 +0530
Message-ID: <CAPD3tpGqd3kX7J=Y5uvAnD2c1hfYOA-8N7WfqX_nXNbfjkHbrg@mail.gmail.com>
In-Reply-To: <CALvZod5xiSuJaDjGb+NM18puejwhnPWweSj+N=0RGQrjpjfxbw@mail.gmail.com>


On Thu, May 5, 2022 at 9:42 PM Shakeel Butt <shakeelb@google.com> wrote:

> On Thu, May 5, 2022 at 5:13 AM Ganesan Rajagopal <rganesan@arista.com>
> wrote:
> >
> > v1 memcg exports memcg->watermark as "memory.mem_usage_in_bytes" in
>
> *max_usage_in_bytes
>

Oops, thanks for the correction.

> > sysfs. This is missing for v2 memcg though "memory.current" is exported.
> > There is no other easy way of getting this information in Linux.
> > getrusage() returns ru_maxrss but that's the max RSS of a single process
> > instead of the aggregated max RSS of all the processes. Hence, expose
> > memcg->watermark as "memory.watermark" for v2 memcg.
> >
> > Signed-off-by: Ganesan Rajagopal <rganesan@arista.com>
>
> Can you please explain the use-case for which you need this metric?
> Also note that this is not really an aggregated RSS of all the
> processes in the cgroup. So, do you want max RSS or max charge and for
> what use-case?
>

We run a lot of automated tests when building our software and used to
hit OOM scenarios when the tests ran unbounded. We use this metric to
heuristically limit how many tests run in parallel, based on per-test
historical data.

I understand this isn't really aggregated RSS; max charge works for us.
We just need some metric that accounts for peak memory usage. It doesn't
need to be super accurate because there's significant variance between
test runs anyway; we conservatively use the historical max to limit
parallelism.
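
Concretely, all we need after a test finishes is a single read of the
proposed file. A minimal sketch, assuming the file lands as
"memory.watermark"; the helper name and per-test cgroup path below are
illustrative, not anything from the patch:

/* Sketch: read the peak memory charge for a finished test's cgroup.
 * Assumes the proposed memory.watermark file exists; the cgroup path
 * is made up for illustration. */
#include <stdio.h>

static long long read_peak(const char *cgroup)
{
	char path[256];
	long long peak;
	FILE *f;

	snprintf(path, sizeof(path), "%s/memory.watermark", cgroup);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%lld", &peak) != 1)
		peak = -1;
	fclose(f);
	return peak;	/* peak charge in bytes, or -1 on error */
}

int main(void)
{
	long long peak = read_peak("/sys/fs/cgroup/tests/test42");

	if (peak >= 0)
		printf("peak charge: %lld bytes\n", peak);
	return 0;
}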

Since this metric is not exposed in v2 memcg, the only alternative is to
poll "memory.current", which would be both inefficient and grossly
inaccurate: a polling loop burns CPU and still misses any usage spike
shorter than its sampling interval.
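
For reference, the kernel already tracks this high watermark in the
memcg's page counter; the patch only needs to surface it. Roughly the
following shape, shown here as a sketch of how a read-only v2 file
could be wired up in mm/memcontrol.c, not the exact posted patch:

/* Sketch only, not the posted patch: expose the page counter's
 * watermark as a read-only cgroup v2 file. */
static u64 memory_watermark_read(struct cgroup_subsys_state *css,
				 struct cftype *cft)
{
	struct mem_cgroup *memcg = mem_cgroup_from_css(css);

	/* the watermark is tracked in pages; report bytes */
	return (u64)memcg->memory.watermark * PAGE_SIZE;
}

/* entry added to the memory controller's v2 cftype table */
{
	.name = "watermark",
	.flags = CFTYPE_NOT_ON_ROOT,
	.read_u64 = memory_watermark_read,
},

This mirrors how "memory.current" reads the same counter, so fetching
the peak becomes a single file read instead of a polling loop.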

Ganesan


