linux-mm.kvack.org archive mirror
From: Andrew Morton <akpm@linux-foundation.org>
To: Roman Gushchin <guro@fb.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
	Michal Hocko <mhocko@kernel.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Kernel Team <Kernel-team@fb.com>
Subject: Re: [PATCH] mm: memcontrol: flush slab vmstats on kmem offlining
Date: Thu, 8 Aug 2019 16:02:57 -0700
Message-ID: <20190808160257.bce08b5ae1574414f96ee26b@linux-foundation.org>
In-Reply-To: <20190808214706.GA24864@tower.dhcp.thefacebook.com>

On Thu, 8 Aug 2019 21:47:11 +0000 Roman Gushchin <guro@fb.com> wrote:

> On Thu, Aug 08, 2019 at 02:21:46PM -0700, Andrew Morton wrote:
> > On Thu, 8 Aug 2019 13:36:04 -0700 Roman Gushchin <guro@fb.com> wrote:
> > 
> > > I've noticed that the "slab" value in memory.stat is sometimes 0,
> > > even if some child memory cgroups have a non-zero "slab" value.
> > > A subsequent investigation showed that this is the result
> > > of kmem_cache reparenting in combination with the per-cpu
> > > batching of slab vmstats.
> > > 
> > > At offlining, some vmstat values may be left in the percpu cache,
> > > never propagated up the cgroup hierarchy. As a result, the stats on
> > > ancestor levels are lower than the actual numbers. Later, when the
> > > slab pages are released, the precise number of pages is subtracted
> > > on the parent level, driving the value negative. Negative values are
> > > not shown; 0 is printed instead.
> > > 
> > > To fix this issue, let's flush percpu slab memcg and lruvec stats
> > > on memcg offlining. This guarantees that numbers on all ancestor
> > > levels are accurate and match the actual number of outstanding
> > > slab pages.
> > > 
> > 
> > Looks expensive.  How frequently can these functions be called?
> 
> Once per memcg lifetime.

iirc there are some workloads in which this can be rapid?
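
(To make the failure mode described in the changelog above concrete, with
made-up numbers: suppose a child memcg charges 10 slab pages, of which 7
have been propagated upwards while 3 are still sitting in the percpu batch
when the child is offlined and its kmem_caches are reparented.  When all
10 pages are later freed, 10 is subtracted at the parent level, leaving
7 - 10 = -3, which memory.stat clamps and reports as 0.)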

> > > +	for_each_node(node)
> > > +		memcg_flush_slab_node_stats(memcg, node);
> > 
> > This loops across all possible CPUs once for each possible node.  Ouch.
> > 
> > Implementing hotplug handlers in here (which is surprisingly simple)
> > brings this down to num_online_nodes * num_online_cpus which is, I
> > think, potentially vastly better.
> >
> 
> Hm, maybe I'm biased because we don't play much with offlining, and
> don't have many NUMA nodes. What's the real world scenario? Disabling
> hyperthreading?

I assume it's machines which could take a large number of CPUs but in
fact have few.  I've asked this in response to many patches down the
ages and have never really got a clear answer.
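
(For reference, a rough sketch of what the quoted per-node flush could
expand to; the struct fields, stat items and the ancestor walk below are
illustrative assumptions, not necessarily the actual mm/memcontrol.c code.
It mainly makes the cost visible: the inner loop touches every possible
CPU for every node.)

/*
 * Illustrative sketch only.  Assumes the percpu batch holds deltas that
 * have not yet been applied at this or any ancestor level, and folds
 * them into the atomic per-node counters so nothing is lost once the
 * memcg goes away.
 */
static void memcg_flush_slab_node_stats(struct mem_cgroup *memcg, int node)
{
	struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
	long reclaimable = 0, unreclaimable = 0;
	struct mem_cgroup *mi;
	int cpu;

	/* Collect whatever is still sitting in the percpu batches. */
	for_each_possible_cpu(cpu) {		/* <- the expensive part */
		struct lruvec_stat *st = per_cpu_ptr(pn->lruvec_stat_cpu, cpu);

		reclaimable   += st->count[NR_SLAB_RECLAIMABLE];
		unreclaimable += st->count[NR_SLAB_UNRECLAIMABLE];
	}

	/*
	 * Credit the cached deltas to this memcg and every ancestor, so
	 * the hierarchical counters keep matching the outstanding slab
	 * pages after the reparenting.
	 */
	for (mi = memcg; mi; mi = parent_mem_cgroup(mi)) {
		struct mem_cgroup_per_node *pi = mi->nodeinfo[node];

		atomic_long_add(reclaimable,
				&pi->lruvec_stat[NR_SLAB_RECLAIMABLE]);
		atomic_long_add(unreclaimable,
				&pi->lruvec_stat[NR_SLAB_UNRECLAIMABLE]);
	}
}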

A concern is that if such machines do exist, it will take a long time
for the regression reports to get to us.  Especially if such machines
are rare.

> Idk, given that it happens once per memcg lifetime, and memcg destruction
> isn't cheap anyway, I'm not sure it's worth it. But if you are, I'm happy
> to add hotplug handlers.

I think it's worth taking a look.  As I mentioned, it can turn out to
be stupidly simple.
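
(For the record, a minimal sketch of the kind of hotplug handler meant
here, assuming a dynamic cpuhp state; memcg_flush_slab_cpu_stats() is a
hypothetical helper, and the real registration would live next to the
other memcg init code.)

/*
 * Sketch only: with a dead-CPU callback in place, an offlined CPU's
 * cached deltas are folded in when it goes down, so the flush at memcg
 * offlining can iterate online CPUs instead of all possible ones.
 */
static int memcg_slab_stats_cpu_dead(unsigned int cpu)
{
	struct mem_cgroup *memcg;

	for_each_mem_cgroup(memcg)
		memcg_flush_slab_cpu_stats(memcg, cpu);	/* hypothetical */

	return 0;
}

static int __init memcg_slab_stats_hotplug_init(void)
{
	int ret;

	/* A dynamically allocated state is enough; no startup callback. */
	ret = cpuhp_setup_state_nocalls(CPUHP_BP_PREPARE_DYN,
					"mm/memcg_slab:dead",
					NULL, memcg_slab_stats_cpu_dead);
	return ret < 0 ? ret : 0;
}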

> I also thought about merging per-memcg stats and per-memcg-per-node stats
> (the reading side could aggregate over 2? 4? NUMA nodes each time). That
> would make everything cheaper overall. But it's a separate topic.
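
(A rough sketch of that idea with illustrative names: keep only the
per-node atomics and let the read side sum them, so there is a single
set of counters to maintain and flush.)

/*
 * Sketch of the "merge the counters" idea: drop the separate per-memcg
 * counter and aggregate the per-node atomics when a value is read.
 * Names are illustrative, not the current mm/memcontrol.c interface.
 */
static long memcg_node_stat_sum(struct mem_cgroup *memcg,
				enum node_stat_item idx)
{
	long sum = 0;
	int node;

	for_each_node(node)
		sum += atomic_long_read(&memcg->nodeinfo[node]->lruvec_stat[idx]);

	return sum;
}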



Thread overview: 4+ messages
2019-08-08 20:36 Roman Gushchin
2019-08-08 21:21 ` Andrew Morton
2019-08-08 21:47   ` Roman Gushchin
2019-08-08 23:02     ` Andrew Morton [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20190808160257.bce08b5ae1574414f96ee26b@linux-foundation.org \
    --to=akpm@linux-foundation.org \
    --cc=Kernel-team@fb.com \
    --cc=guro@fb.com \
    --cc=hannes@cmpxchg.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=mhocko@kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
  Be sure your reply has a Subject: header at the top and a blank line
  before the message body.