From: Andrew Morton <akpm@linux-foundation.org>
To: Roman Gushchin <guro@fb.com>
Cc: <linux-mm@kvack.org>, Michal Hocko <mhocko@kernel.org>,
Johannes Weiner <hannes@cmpxchg.org>,
<linux-kernel@vger.kernel.org>, <kernel-team@fb.com>
Subject: Re: [PATCH] mm: memcontrol: flush slab vmstats on kmem offlining
Date: Thu, 8 Aug 2019 14:21:46 -0700
Message-ID: <20190808142146.a328cd673c66d5fdbca26f79@linux-foundation.org>
In-Reply-To: <20190808203604.3413318-1-guro@fb.com>

On Thu, 8 Aug 2019 13:36:04 -0700 Roman Gushchin <guro@fb.com> wrote:
> I've noticed that the "slab" value in memory.stat is sometimes 0,
> even if some children memory cgroups have a non-zero "slab" value.
> The following investigation showed that this is the result
> of the kmem_cache reparenting in combination with the per-cpu
> batching of slab vmstats.
>
> At offlining, some vmstat values may be left in the percpu cache
> without being propagated up the cgroup hierarchy. This means that
> stats on ancestor levels are lower than the actual values. Later,
> when slab pages are released, the precise number of pages is
> subtracted on the parent level, making the value negative. We don't
> show negative values; 0 is printed instead.
>
> To fix this issue, let's flush the percpu slab memcg and lruvec
> stats on memcg offlining. This guarantees that the numbers on all
> ancestor levels are accurate and match the actual number of
> outstanding slab pages.
>
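For context, the "0 is printed" part comes from the read side, which
clamps negative aggregates before reporting them. Roughly (a simplified
sketch of memcg_page_state() from this era; details may differ):

static unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx)
{
	long x = atomic_long_read(&memcg->vmstats[idx]);

	/* Unflushed percpu deltas can drive the sum negative; hide it. */
	if (x < 0)
		x = 0;
	return x;
}

So an inaccurate (too-low) ancestor count silently degrades into a
plain 0 once the children's pages are freed, which is exactly the
symptom described above.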
Looks expensive. How frequently can these functions be called?
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3412,6 +3412,50 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
> return 0;
> }
>
> +static void memcg_flush_slab_node_stats(struct mem_cgroup *memcg, int node)
> +{
> + struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
> + struct mem_cgroup_per_node *pi;
> + unsigned long recl = 0, unrecl = 0;
> + int cpu;
> +
> + for_each_possible_cpu(cpu) {
> + recl += raw_cpu_read(
> + pn->lruvec_stat_cpu->count[NR_SLAB_RECLAIMABLE]);
> + unrecl += raw_cpu_read(
> + pn->lruvec_stat_cpu->count[NR_SLAB_UNRECLAIMABLE]);
> + }
> +
> + for (pi = pn; pi; pi = parent_nodeinfo(pi, node)) {
> + atomic_long_add(recl,
> + &pi->lruvec_stat[NR_SLAB_RECLAIMABLE]);
> + atomic_long_add(unrecl,
> + &pi->lruvec_stat[NR_SLAB_UNRECLAIMABLE]);
> + }
> +}
> +
> +static void memcg_flush_slab_vmstats(struct mem_cgroup *memcg)
> +{
> + struct mem_cgroup *mi;
> + unsigned long recl = 0, unrecl = 0;
> + int node, cpu;
> +
> + for_each_possible_cpu(cpu) {
> + recl += raw_cpu_read(
> + memcg->vmstats_percpu->stat[NR_SLAB_RECLAIMABLE]);
> + unrecl += raw_cpu_read(
> + memcg->vmstats_percpu->stat[NR_SLAB_UNRECLAIMABLE]);
> + }
> +
> + for (mi = memcg; mi; mi = parent_mem_cgroup(mi)) {
> + atomic_long_add(recl, &mi->vmstats[NR_SLAB_RECLAIMABLE]);
> + atomic_long_add(unrecl, &mi->vmstats[NR_SLAB_UNRECLAIMABLE]);
> + }
> +
> + for_each_node(node)
> + memcg_flush_slab_node_stats(memcg, node);

This loops across all possible CPUs once for each possible node. Ouch.
Implementing hotplug handlers in here (which is surprisingly simple)
brings this down to num_online_nodes * num_online_cpus which is, I
think, potentially vastly better.
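
To illustrate that direction, a minimal, untested sketch (everything
below is an assumption layered on top of the patch, not existing code;
for_each_mem_cgroup() is the memcg iterator in mm/memcontrol.c, and the
CPUHP_MM_MEMCQ_DEAD state already exists for draining the memcg stock):

static int memcg_slab_cpu_dead(unsigned int cpu)
{
	struct mem_cgroup *memcg, *mi;

	/*
	 * Fold the dying CPU's pending slab deltas into the atomic
	 * counters, so the offlining flush only has to walk online
	 * CPUs.
	 */
	for_each_mem_cgroup(memcg) {
		struct memcg_vmstats_percpu *st;
		long recl, unrecl;

		st = per_cpu_ptr(memcg->vmstats_percpu, cpu);

		recl = st->stat[NR_SLAB_RECLAIMABLE];
		st->stat[NR_SLAB_RECLAIMABLE] = 0;
		unrecl = st->stat[NR_SLAB_UNRECLAIMABLE];
		st->stat[NR_SLAB_UNRECLAIMABLE] = 0;

		/* Propagate upwards, matching the batching code. */
		for (mi = memcg; mi; mi = parent_mem_cgroup(mi)) {
			atomic_long_add(recl,
					&mi->vmstats[NR_SLAB_RECLAIMABLE]);
			atomic_long_add(unrecl,
					&mi->vmstats[NR_SLAB_UNRECLAIMABLE]);
		}
	}
	return 0;
}

registered once at init via cpuhp_setup_state_nocalls() on a dead
state, plus an analogous per-node pass for the lruvec_stat_cpu
counters.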