linux-mm.kvack.org archive mirror
* [PATCH v2] mm/memcontrol: Export memcg->watermark via sysfs for v2 memcg
@ 2022-05-07  5:09 Ganesan Rajagopal
  2022-05-07 15:33 ` Shakeel Butt
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Ganesan Rajagopal @ 2022-05-07  5:09 UTC (permalink / raw)
  To: hannes, mhocko, roman.gushchin, shakeelb; +Cc: cgroups, linux-mm, rganesan

We run a lot of automated tests when building our software and run into
OOM scenarios when the tests run unbounded. v1 memcg exports
memcg->watermark as "memory.max_usage_in_bytes" in sysfs. We use this
metric to heuristically limit the number of tests that can run in
parallel based on per-test historical data.

This metric is currently not exported for v2 memcg and there is no
other easy way of getting this information. The getrusage() syscall
returns "ru_maxrss", which can be used as an approximation, but that is
the peak RSS of the single largest child process rather than the
aggregated peak across all child processes. The only workaround is to
periodically poll "memory.current", but polling is not practical for
short-lived, one-off cgroups.

Hence, expose memcg->watermark as "memory.peak" for v2 memcg.

Signed-off-by: Ganesan Rajagopal <rganesan@arista.com>
---
 Documentation/admin-guide/cgroup-v2.rst |  7 +++++++
 mm/memcontrol.c                         | 13 +++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 69d7a6983f78..828ce037fb2a 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1208,6 +1208,13 @@ PAGE_SIZE multiple when read back.
 	high limit is used and monitored properly, this limit's
 	utility is limited to providing the final safety net.
 
+  memory.peak
+	A read-only single value file which exists on non-root
+	cgroups.
+
+	The max memory usage recorded for the cgroup and its
+	descendants since the creation of the cgroup.
+
   memory.oom.group
 	A read-write single value file which exists on non-root
 	cgroups.  The default value is "0".
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 725f76723220..88fa70b5d8af 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6098,6 +6098,14 @@ static u64 memory_current_read(struct cgroup_subsys_state *css,
 	return (u64)page_counter_read(&memcg->memory) * PAGE_SIZE;
 }
 
+static u64 memory_peak_read(struct cgroup_subsys_state *css,
+			    struct cftype *cft)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+
+	return (u64)memcg->memory.watermark * PAGE_SIZE;
+}
+
 static int memory_min_show(struct seq_file *m, void *v)
 {
 	return seq_puts_memcg_tunable(m,
@@ -6361,6 +6369,11 @@ static struct cftype memory_files[] = {
 		.flags = CFTYPE_NOT_ON_ROOT,
 		.read_u64 = memory_current_read,
 	},
+	{
+		.name = "peak",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.read_u64 = memory_peak_read,
+	},
 	{
 		.name = "min",
 		.flags = CFTYPE_NOT_ON_ROOT,
-- 
2.28.0
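
For illustration, here is roughly how a test harness could consume the
new file. This is a minimal sketch, not part of the patch: the
/sys/fs/cgroup mount point and permission to create child cgroups there
are assumed, the helper name run_test_and_get_peak() is made up for the
example, and most error handling is trimmed.

/*
 * Sketch: run one test command in a transient cgroup and report its
 * peak memory usage from memory.peak after it exits.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static long long run_test_and_get_peak(const char *cg, char *const argv[])
{
        char path[256], buf[32];
        long long peak = -1;
        pid_t pid;
        FILE *f;
        int fd;

        snprintf(path, sizeof(path), "/sys/fs/cgroup/%s", cg);
        mkdir(path, 0755);      /* transient cgroup for this one test */

        pid = fork();
        if (pid == 0) {
                /* move the child into the cgroup before exec'ing the test */
                snprintf(path, sizeof(path), "/sys/fs/cgroup/%s/cgroup.procs", cg);
                fd = open(path, O_WRONLY);
                snprintf(buf, sizeof(buf), "%d", getpid());
                write(fd, buf, strlen(buf));
                close(fd);
                execv(argv[0], argv);
                _exit(127);
        }
        waitpid(pid, NULL, 0);

        /* one read after exit; no need to poll memory.current */
        snprintf(path, sizeof(path), "/sys/fs/cgroup/%s/memory.peak", cg);
        f = fopen(path, "r");
        if (f) {
                if (fscanf(f, "%lld", &peak) != 1)
                        peak = -1;
                fclose(f);
        }

        snprintf(path, sizeof(path), "/sys/fs/cgroup/%s", cg);
        rmdir(path);    /* empty once the test and its descendants exit */
        return peak;
}

int main(int argc, char *argv[])
{
        if (argc < 3) {
                fprintf(stderr, "usage: %s <cgroup> <test-cmd> [args...]\n",
                        argv[0]);
                return 2;
        }
        printf("peak: %lld bytes\n", run_test_and_get_peak(argv[1], &argv[2]));
        return 0;
}

Because memory.peak is a high-water mark that the kernel updates on
every charge, a single read after the workload exits captures the peak
even for very short-lived tests, with no polling window to miss.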




* Re: [PATCH v2] mm/memcontrol: Export memcg->watermark via sysfs for v2 memcg
  2022-05-07  5:09 [PATCH v2] mm/memcontrol: Export memcg->watermark via sysfs for v2 memcg Ganesan Rajagopal
@ 2022-05-07 15:33 ` Shakeel Butt
  2022-05-09 13:44 ` Johannes Weiner
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Shakeel Butt @ 2022-05-07 15:33 UTC (permalink / raw)
  To: Ganesan Rajagopal
  Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Cgroups, Linux MM

On Fri, May 6, 2022 at 10:09 PM Ganesan Rajagopal <rganesan@arista.com> wrote:
>
> We run a lot of automated tests when building our software and run into
> OOM scenarios when the tests run unbounded. v1 memcg exports
> memcg->watermark as "memory.max_usage_in_bytes" in sysfs. We use this
> metric to heuristically limit the number of tests that can run in
> parallel based on per-test historical data.
>
> This metric is currently not exported for v2 memcg and there is no
> other easy way of getting this information. The getrusage() syscall
> returns "ru_maxrss", which can be used as an approximation, but that is
> the peak RSS of the single largest child process rather than the
> aggregated peak across all child processes. The only workaround is to
> periodically poll "memory.current", but polling is not practical for
> short-lived, one-off cgroups.
>
> Hence, expose memcg->watermark as "memory.peak" for v2 memcg.
>
> Signed-off-by: Ganesan Rajagopal <rganesan@arista.com>

Acked-by: Shakeel Butt <shakeelb@google.com>



* Re: [PATCH v2] mm/memcontrol: Export memcg->watermark via sysfs for v2 memcg
  2022-05-07  5:09 [PATCH v2] mm/memcontrol: Export memcg->watermark via sysfs for v2 memcg Ganesan Rajagopal
  2022-05-07 15:33 ` Shakeel Butt
@ 2022-05-09 13:44 ` Johannes Weiner
  2022-05-11  2:48 ` Roman Gushchin
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Johannes Weiner @ 2022-05-09 13:44 UTC (permalink / raw)
  To: Ganesan Rajagopal; +Cc: mhocko, roman.gushchin, shakeelb, cgroups, linux-mm

On Fri, May 06, 2022 at 10:09:16PM -0700, Ganesan Rajagopal wrote:
> We run a lot of automated tests when building our software and run into
> OOM scenarios when the tests run unbounded. v1 memcg exports
> memcg->watermark as "memory.max_usage_in_bytes" in sysfs. We use this
> metric to heuristically limit the number of tests that can run in
> parallel based on per-test historical data.
>
> This metric is currently not exported for v2 memcg and there is no
> other easy way of getting this information. The getrusage() syscall
> returns "ru_maxrss", which can be used as an approximation, but that is
> the peak RSS of the single largest child process rather than the
> aggregated peak across all child processes. The only workaround is to
> periodically poll "memory.current", but polling is not practical for
> short-lived, one-off cgroups.
> 
> Hence, expose memcg->watermark as "memory.peak" for v2 memcg.
> 
> Signed-off-by: Ganesan Rajagopal <rganesan@arista.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>



* Re: [PATCH v2] mm/memcontrol: Export memcg->watermark via sysfs for v2 memcg
  2022-05-07  5:09 [PATCH v2] mm/memcontrol: Export memcg->watermark via sysfs for v2 memcg Ganesan Rajagopal
  2022-05-07 15:33 ` Shakeel Butt
  2022-05-09 13:44 ` Johannes Weiner
@ 2022-05-11  2:48 ` Roman Gushchin
  2022-05-11  3:47   ` Ganesan Rajagopal
  2022-05-11  7:13 ` Michal Hocko
  2022-05-11 17:49 ` Michal Koutný
  4 siblings, 1 reply; 10+ messages in thread
From: Roman Gushchin @ 2022-05-11  2:48 UTC (permalink / raw)
  To: Ganesan Rajagopal; +Cc: hannes, mhocko, shakeelb, cgroups, linux-mm

On Fri, May 06, 2022 at 10:09:16PM -0700, Ganesan Rajagopal wrote:
> We run a lot of automated tests when building our software and run into
> OOM scenarios when the tests run unbounded. v1 memcg exports
> memcg->watermark as "memory.max_usage_in_bytes" in sysfs. We use this
> metric to heuristically limit the number of tests that can run in
> parallel based on per-test historical data.
>
> This metric is currently not exported for v2 memcg and there is no
> other easy way of getting this information. The getrusage() syscall
> returns "ru_maxrss", which can be used as an approximation, but that is
> the peak RSS of the single largest child process rather than the
> aggregated peak across all child processes. The only workaround is to
> periodically poll "memory.current", but polling is not practical for
> short-lived, one-off cgroups.
> 
> Hence, expose memcg->watermark as "memory.peak" for v2 memcg.
> 
> Signed-off-by: Ganesan Rajagopal <rganesan@arista.com>

Acked-by: Roman Gushchin <roman.gushchin@linux.dev>

I've been asked a couple of times about this feature, so I think it's indeed
useful.

Thank you for adding it!



* Re: [PATCH v2] mm/memcontrol: Export memcg->watermark via sysfs for v2 memcg
  2022-05-11  2:48 ` Roman Gushchin
@ 2022-05-11  3:47   ` Ganesan Rajagopal
  0 siblings, 0 replies; 10+ messages in thread
From: Ganesan Rajagopal @ 2022-05-11  3:47 UTC (permalink / raw)
  To: Roman Gushchin; +Cc: hannes, mhocko, shakeelb, cgroups, linux-mm

On Wed, May 11, 2022 at 8:18 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> On Fri, May 06, 2022 at 10:09:16PM -0700, Ganesan Rajagopal wrote:
> > We run a lot of automated tests when building our software and run into
> > OOM scenarios when the tests run unbounded. v1 memcg exports
> > memcg->watermark as "memory.max_usage_in_bytes" in sysfs. We use this
> > metric to heuristically limit the number of tests that can run in
> > parallel based on per-test historical data.
> >
> > This metric is currently not exported for v2 memcg and there is no
> > other easy way of getting this information. The getrusage() syscall
> > returns "ru_maxrss", which can be used as an approximation, but that is
> > the peak RSS of the single largest child process rather than the
> > aggregated peak across all child processes. The only workaround is to
> > periodically poll "memory.current", but polling is not practical for
> > short-lived, one-off cgroups.
> >
> > Hence, expose memcg->watermark as "memory.peak" for v2 memcg.
> >
> > Signed-off-by: Ganesan Rajagopal <rganesan@arista.com>
>
> Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
>
> I've been asked a couple of times about this feature, so I think it's indeed
> useful.
>
> Thank you for adding it!

You're welcome, and thank you for the Ack. Thanks also to Shakeel and
Johannes for their reviews and Acks. The patch has been picked up for
mm-unstable.

Ganesan



* Re: [PATCH v2] mm/memcontrol: Export memcg->watermark via sysfs for v2 memcg
  2022-05-07  5:09 [PATCH v2] mm/memcontrol: Export memcg->watermark via sysfs for v2 memcg Ganesan Rajagopal
                   ` (2 preceding siblings ...)
  2022-05-11  2:48 ` Roman Gushchin
@ 2022-05-11  7:13 ` Michal Hocko
  2022-05-11  7:22   ` Ganesan Rajagopal
  2022-05-11 17:49 ` Michal Koutný
  4 siblings, 1 reply; 10+ messages in thread
From: Michal Hocko @ 2022-05-11  7:13 UTC (permalink / raw)
  To: Ganesan Rajagopal; +Cc: hannes, roman.gushchin, shakeelb, cgroups, linux-mm

On Fri 06-05-22 22:09:16, Ganesan Rajagopal wrote:
> We run a lot of automated tests when building our software and run into
> OOM scenarios when the tests run unbounded. v1 memcg exports
> memcg->watermark as "memory.max_usage_in_bytes" in sysfs. We use this
> metric to heuristically limit the number of tests that can run in
> parallel based on per-test historical data.
>
> This metric is currently not exported for v2 memcg and there is no
> other easy way of getting this information. The getrusage() syscall
> returns "ru_maxrss", which can be used as an approximation, but that is
> the peak RSS of the single largest child process rather than the
> aggregated peak across all child processes. The only workaround is to
> periodically poll "memory.current", but polling is not practical for
> short-lived, one-off cgroups.
> 
> Hence, expose memcg->watermark as "memory.peak" for v2 memcg.

Yes, I can imagine that a very short-lived process can easily escape
the monitoring. Its memory consumption can still be significant,
though.

The v1 interface allows resetting the value by writing to the file. Have
you considered that as well?
 
> Signed-off-by: Ganesan Rajagopal <rganesan@arista.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  Documentation/admin-guide/cgroup-v2.rst |  7 +++++++
>  mm/memcontrol.c                         | 13 +++++++++++++
>  2 files changed, 20 insertions(+)
> 
> diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> index 69d7a6983f78..828ce037fb2a 100644
> --- a/Documentation/admin-guide/cgroup-v2.rst
> +++ b/Documentation/admin-guide/cgroup-v2.rst
> @@ -1208,6 +1208,13 @@ PAGE_SIZE multiple when read back.
>  	high limit is used and monitored properly, this limit's
>  	utility is limited to providing the final safety net.
>  
> +  memory.peak
> +	A read-only single value file which exists on non-root
> +	cgroups.
> +
> +	The max memory usage recorded for the cgroup and its
> +	descendants since the creation of the cgroup.
> +
>    memory.oom.group
>  	A read-write single value file which exists on non-root
>  	cgroups.  The default value is "0".
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 725f76723220..88fa70b5d8af 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6098,6 +6098,14 @@ static u64 memory_current_read(struct cgroup_subsys_state *css,
>  	return (u64)page_counter_read(&memcg->memory) * PAGE_SIZE;
>  }
>  
> +static u64 memory_peak_read(struct cgroup_subsys_state *css,
> +			    struct cftype *cft)
> +{
> +	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
> +
> +	return (u64)memcg->memory.watermark * PAGE_SIZE;
> +}
> +
>  static int memory_min_show(struct seq_file *m, void *v)
>  {
>  	return seq_puts_memcg_tunable(m,
> @@ -6361,6 +6369,11 @@ static struct cftype memory_files[] = {
>  		.flags = CFTYPE_NOT_ON_ROOT,
>  		.read_u64 = memory_current_read,
>  	},
> +	{
> +		.name = "peak",
> +		.flags = CFTYPE_NOT_ON_ROOT,
> +		.read_u64 = memory_peak_read,
> +	},
>  	{
>  		.name = "min",
>  		.flags = CFTYPE_NOT_ON_ROOT,
> -- 
> 2.28.0

-- 
Michal Hocko
SUSE Labs
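
For context, the v1 reset mentioned above works by writing any value to
the file, which resets the watermark to the current usage. A minimal
sketch, assuming a v1 memory controller mounted at
/sys/fs/cgroup/memory; the path and helper name are illustrative:

/* Sketch: reset the v1 peak watermark before a measurement window. */
#include <fcntl.h>
#include <unistd.h>

static void reset_v1_peak(const char *file)
{
        int fd = open(file, O_WRONLY);

        if (fd >= 0) {
                write(fd, "0", 1);      /* any value resets to current usage */
                close(fd);
        }
}

/* e.g.: reset_v1_peak("/sys/fs/cgroup/memory/tests/memory.max_usage_in_bytes"); */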



* Re: [PATCH v2] mm/memcontrol: Export memcg->watermark via sysfs for v2 memcg
  2022-05-11  7:13 ` Michal Hocko
@ 2022-05-11  7:22   ` Ganesan Rajagopal
  0 siblings, 0 replies; 10+ messages in thread
From: Ganesan Rajagopal @ 2022-05-11  7:22 UTC (permalink / raw)
  To: Michal Hocko; +Cc: hannes, roman.gushchin, shakeelb, cgroups, linux-mm

On Wed, May 11, 2022 at 12:43 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Fri 06-05-22 22:09:16, Ganesan Rajagopal wrote:
> > We run a lot of automated tests when building our software and run into
> > OOM scenarios when the tests run unbounded. v1 memcg exports
> > memcg->watermark as "memory.max_usage_in_bytes" in sysfs. We use this
> > metric to heuristically limit the number of tests that can run in
> > parallel based on per-test historical data.
> >
> > This metric is currently not exported for v2 memcg and there is no
> > other easy way of getting this information. The getrusage() syscall
> > returns "ru_maxrss", which can be used as an approximation, but that is
> > the peak RSS of the single largest child process rather than the
> > aggregated peak across all child processes. The only workaround is to
> > periodically poll "memory.current", but polling is not practical for
> > short-lived, one-off cgroups.
> >
> > Hence, expose memcg->watermark as "memory.peak" for v2 memcg.
>
> Yes, I can imagine that a very short-lived process can easily escape
> the monitoring. Its memory consumption can still be significant,
> though.
>
> The v1 interface allows resetting the value by writing to the file. Have
> you considered that as well?

I hadn't originally, but this was discussed and dropped when I posted
the first version of this patch. See
https://www.spinics.net/lists/cgroups/msg32476.html

Ganesan

>
> > Signed-off-by: Ganesan Rajagopal <rganesan@arista.com>
>
> Acked-by: Michal Hocko <mhocko@suse.com>
>
> > ---
> >  Documentation/admin-guide/cgroup-v2.rst |  7 +++++++
> >  mm/memcontrol.c                         | 13 +++++++++++++
> >  2 files changed, 20 insertions(+)
> >
> > diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> > index 69d7a6983f78..828ce037fb2a 100644
> > --- a/Documentation/admin-guide/cgroup-v2.rst
> > +++ b/Documentation/admin-guide/cgroup-v2.rst
> > @@ -1208,6 +1208,13 @@ PAGE_SIZE multiple when read back.
> >       high limit is used and monitored properly, this limit's
> >       utility is limited to providing the final safety net.
> >
> > +  memory.peak
> > +     A read-only single value file which exists on non-root
> > +     cgroups.
> > +
> > +     The max memory usage recorded for the cgroup and its
> > +     descendants since the creation of the cgroup.
> > +
> >    memory.oom.group
> >       A read-write single value file which exists on non-root
> >       cgroups.  The default value is "0".
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 725f76723220..88fa70b5d8af 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -6098,6 +6098,14 @@ static u64 memory_current_read(struct cgroup_subsys_state *css,
> >       return (u64)page_counter_read(&memcg->memory) * PAGE_SIZE;
> >  }
> >
> > +static u64 memory_peak_read(struct cgroup_subsys_state *css,
> > +                         struct cftype *cft)
> > +{
> > +     struct mem_cgroup *memcg = mem_cgroup_from_css(css);
> > +
> > +     return (u64)memcg->memory.watermark * PAGE_SIZE;
> > +}
> > +
> >  static int memory_min_show(struct seq_file *m, void *v)
> >  {
> >       return seq_puts_memcg_tunable(m,
> > @@ -6361,6 +6369,11 @@ static struct cftype memory_files[] = {
> >               .flags = CFTYPE_NOT_ON_ROOT,
> >               .read_u64 = memory_current_read,
> >       },
> > +     {
> > +             .name = "peak",
> > +             .flags = CFTYPE_NOT_ON_ROOT,
> > +             .read_u64 = memory_peak_read,
> > +     },
> >       {
> >               .name = "min",
> >               .flags = CFTYPE_NOT_ON_ROOT,
> > --
> > 2.28.0
>
> --
> Michal Hocko
> SUSE Labs



* Re: [PATCH v2] mm/memcontrol: Export memcg->watermark via sysfs for v2 memcg
  2022-05-07  5:09 [PATCH v2] mm/memcontrol: Export memcg->watermark via sysfs for v2 memcg Ganesan Rajagopal
                   ` (3 preceding siblings ...)
  2022-05-11  7:13 ` Michal Hocko
@ 2022-05-11 17:49 ` Michal Koutný
  2022-05-12  2:48   ` Ganesan Rajagopal
  4 siblings, 1 reply; 10+ messages in thread
From: Michal Koutný @ 2022-05-11 17:49 UTC (permalink / raw)
  To: Ganesan Rajagopal
  Cc: hannes, mhocko, roman.gushchin, shakeelb, cgroups, linux-mm

Hi.

On Fri, May 06, 2022 at 10:09:16PM -0700, Ganesan Rajagopal <rganesan@arista.com> wrote:
> We run a lot of automated tests when building our software and run into
> OOM scenarios when the tests run unbounded. v1 memcg exports
> memcg->watermark as "memory.max_usage_in_bytes" in sysfs. We use this
> metric to heuristically limit the number of tests that can run in
> parallel based on per-test historical data.
>
> This metric is currently not exported for v2 memcg and there is no
> other easy way of getting this information. The getrusage() syscall
> returns "ru_maxrss", which can be used as an approximation, but that is
> the peak RSS of the single largest child process rather than the
> aggregated peak across all child processes. The only workaround is to
> periodically poll "memory.current", but polling is not practical for
> short-lived, one-off cgroups.
> 
> Hence, expose memcg->watermark as "memory.peak" for v2 memcg.

It'll save some future indirection if the commit message includes the
argument about multiple readers and the purposeful irresettability.

> 
> Signed-off-by: Ganesan Rajagopal <rganesan@arista.com>
> ---
>  Documentation/admin-guide/cgroup-v2.rst |  7 +++++++
>  mm/memcontrol.c                         | 13 +++++++++++++
>  2 files changed, 20 insertions(+)

Besides that, it looks useful and correct; feel free to add
Reviewed-by: Michal Koutný <mkoutny@suse.com>



* Re: [PATCH v2] mm/memcontrol: Export memcg->watermark via sysfs for v2 memcg
  2022-05-11 17:49 ` Michal Koutný
@ 2022-05-12  2:48   ` Ganesan Rajagopal
  2022-05-12  9:11     ` Michal Koutný
  0 siblings, 1 reply; 10+ messages in thread
From: Ganesan Rajagopal @ 2022-05-12  2:48 UTC (permalink / raw)
  To: Michal Koutný
  Cc: hannes, mhocko, roman.gushchin, shakeelb, cgroups, linux-mm

On Wed, May 11, 2022 at 11:19 PM Michal Koutný <mkoutny@suse.com> wrote:
>
> Hi.
>
> On Fri, May 06, 2022 at 10:09:16PM -0700, Ganesan Rajagopal <rganesan@arista.com> wrote:
> > We run a lot of automated tests when building our software and run into
> > OOM scenarios when the tests run unbounded. v1 memcg exports
> > memcg->watermark as "memory.max_usage_in_bytes" in sysfs. We use this
> > metric to heuristically limit the number of tests that can run in
> > parallel based on per-test historical data.
> >
> > This metric is currently not exported for v2 memcg and there is no
> > other easy way of getting this information. The getrusage() syscall
> > returns "ru_maxrss", which can be used as an approximation, but that is
> > the peak RSS of the single largest child process rather than the
> > aggregated peak across all child processes. The only workaround is to
> > periodically poll "memory.current", but polling is not practical for
> > short-lived, one-off cgroups.
> >
> > Hence, expose memcg->watermark as "memory.peak" for v2 memcg.
>
> It'll save some future indirection if the commit message includes the
> argument about multiple readers and the purposeful irresettability.

Good point. The patch has already been picked up for mm-unstable. I don't
know what the process is in this situation. Should I post a "[PATCH v3]"
with an updated commit message?

>
> >
> > Signed-off-by: Ganesan Rajagopal <rganesan@arista.com>
> > ---
> >  Documentation/admin-guide/cgroup-v2.rst |  7 +++++++
> >  mm/memcontrol.c                         | 13 +++++++++++++
> >  2 files changed, 20 insertions(+)
>
> Besides that, it looks useful and correct; feel free to add
> Reviewed-by: Michal Koutný <mkoutny@suse.com>

Thank you.

Ganesan



* Re: [PATCH v2] mm/memcontrol: Export memcg->watermark via sysfs for v2 memcg
  2022-05-12  2:48   ` Ganesan Rajagopal
@ 2022-05-12  9:11     ` Michal Koutný
  0 siblings, 0 replies; 10+ messages in thread
From: Michal Koutný @ 2022-05-12  9:11 UTC (permalink / raw)
  To: Ganesan Rajagopal
  Cc: hannes, mhocko, roman.gushchin, shakeelb, cgroups, linux-mm

On Thu, May 12, 2022 at 08:18:01AM +0530, Ganesan Rajagopal <rganesan@arista.com> wrote:
> Good point. The patch has already been picked up for mm-unstable.

Oh, I didn't notice that.

> I don't know what the process is in this situation. Should I post a
> "[PATCH v3]" with an updated commit message?

Or you can send a fixup for folding? (I see this is something new;
you'd better ask Andrew.)

Michal


