From: Michal Hocko <mhocko@suse.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Tejun Heo <tj@kernel.org>, Roman Gushchin <guro@fb.com>,
Shakeel Butt <shakeelb@google.com>,
linux-mm@kvack.org, cgroups@vger.kernel.org,
linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH 7/8] mm: memcontrol: consolidate lruvec stat flushing
Date: Mon, 8 Feb 2021 14:54:23 +0100
Message-ID: <YCFCj0QSemahSMP1@dhcp22.suse.cz>
In-Reply-To: <20210205182806.17220-8-hannes@cmpxchg.org>
On Fri 05-02-21 13:28:05, Johannes Weiner wrote:
> There are two functions that flush the per-cpu data of an lruvec into
> the rest of the cgroup tree: one runs when the cgroup is being freed,
> the other when a CPU disappears during hotplug. The difference is
> whether all CPUs or just one are being collected, but the rest of the
> flushing code is the same. Merge them into one function and share the
> common code.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Yes, this looks much better/cleaner.
Acked-by: Michal Hocko <mhocko@suse.com>
Thanks!
> ---
> mm/memcontrol.c | 74 +++++++++++++++++++------------------------------
> 1 file changed, 28 insertions(+), 46 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 5dc0bd53b64a..490357945f2c 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2410,39 +2410,39 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
> mutex_unlock(&percpu_charge_mutex);
> }
>
> -static int memcg_hotplug_cpu_dead(unsigned int cpu)
> +static void memcg_flush_lruvec_page_state(struct mem_cgroup *memcg, int cpu)
> {
> - struct memcg_stock_pcp *stock;
> - struct mem_cgroup *memcg;
> -
> - stock = &per_cpu(memcg_stock, cpu);
> - drain_stock(stock);
> + int nid;
>
> - for_each_mem_cgroup(memcg) {
> + for_each_node(nid) {
> + struct mem_cgroup_per_node *pn = memcg->nodeinfo[nid];
> + unsigned long stat[NR_VM_NODE_STAT_ITEMS];
> + struct batched_lruvec_stat *lstatc;
> int i;
>
> + lstatc = per_cpu_ptr(pn->lruvec_stat_cpu, cpu);
> for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
> - int nid;
> + stat[i] = lstatc->count[i];
> + lstatc->count[i] = 0;
> + }
>
> - for_each_node(nid) {
> - struct batched_lruvec_stat *lstatc;
> - struct mem_cgroup_per_node *pn;
> - long x;
> + do {
> + for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
> + atomic_long_add(stat[i], &pn->lruvec_stat[i]);
> + } while ((pn = parent_nodeinfo(pn, nid)));
> + }
> +}
>
> - pn = memcg->nodeinfo[nid];
> - lstatc = per_cpu_ptr(pn->lruvec_stat_cpu, cpu);
> +static int memcg_hotplug_cpu_dead(unsigned int cpu)
> +{
> + struct memcg_stock_pcp *stock;
> + struct mem_cgroup *memcg;
>
> - x = lstatc->count[i];
> - lstatc->count[i] = 0;
> + stock = &per_cpu(memcg_stock, cpu);
> + drain_stock(stock);
>
> - if (x) {
> - do {
> - atomic_long_add(x, &pn->lruvec_stat[i]);
> - } while ((pn = parent_nodeinfo(pn, nid)));
> - }
> - }
> - }
> - }
> + for_each_mem_cgroup(memcg)
> + memcg_flush_lruvec_page_state(memcg, cpu);
>
> return 0;
> }
> @@ -3636,27 +3636,6 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
> }
> }
>
> -static void memcg_flush_lruvec_page_state(struct mem_cgroup *memcg)
> -{
> - int node;
> -
> - for_each_node(node) {
> - struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
> - unsigned long stat[NR_VM_NODE_STAT_ITEMS] = { 0 };
> - struct mem_cgroup_per_node *pi;
> - int cpu, i;
> -
> - for_each_online_cpu(cpu)
> - for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
> - stat[i] += per_cpu(
> - pn->lruvec_stat_cpu->count[i], cpu);
> -
> - for (pi = pn; pi; pi = parent_nodeinfo(pi, node))
> - for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
> - atomic_long_add(stat[i], &pi->lruvec_stat[i]);
> - }
> -}
> -
> #ifdef CONFIG_MEMCG_KMEM
> static int memcg_online_kmem(struct mem_cgroup *memcg)
> {
> @@ -5192,12 +5171,15 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg)
>
> static void mem_cgroup_free(struct mem_cgroup *memcg)
> {
> + int cpu;
> +
> memcg_wb_domain_exit(memcg);
> /*
> * Flush percpu lruvec stats to guarantee the value
> * correctness on parent's and all ancestor levels.
> */
> - memcg_flush_lruvec_page_state(memcg);
> + for_each_online_cpu(cpu)
> + memcg_flush_lruvec_page_state(memcg, cpu);
> __mem_cgroup_free(memcg);
> }
>
> --
> 2.30.0
>
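For anyone who wants to play with the pattern outside the kernel, below is a
minimal userspace sketch of what the consolidation buys: a single flush helper
that drains one CPU's batch and propagates it up the parent chain, called from
both the "CPU went away" path and the "group is being freed" path. All names,
types and the toy hierarchy are illustrative only, not the kernel's API.

/*
 * Userspace sketch (not kernel code) of the batching scheme the patch
 * consolidates: per-CPU counters are drained by one helper that adds the
 * batch to the group and every ancestor, and both teardown paths reuse it.
 */
#include <stdio.h>

#define NR_CPUS   4
#define NR_ITEMS  2

struct group {
	struct group *parent;
	long stat[NR_ITEMS];          /* aggregated totals */
	long pcpu[NR_CPUS][NR_ITEMS]; /* per-CPU batches */
};

/* Drain @cpu's batch of @g and propagate it to @g and all ancestors. */
static void flush_group_cpu(struct group *g, int cpu)
{
	long batch[NR_ITEMS];
	struct group *p;
	int i;

	for (i = 0; i < NR_ITEMS; i++) {
		batch[i] = g->pcpu[cpu][i];
		g->pcpu[cpu][i] = 0;
	}

	for (p = g; p; p = p->parent)
		for (i = 0; i < NR_ITEMS; i++)
			p->stat[i] += batch[i];
}

/* "CPU hotplug" path: flush only the departing CPU. */
static void cpu_dead(struct group *g, int cpu)
{
	flush_group_cpu(g, cpu);
}

/* "Group free" path: flush every CPU before tearing the group down. */
static void group_free(struct group *g)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		flush_group_cpu(g, cpu);
}

int main(void)
{
	struct group root = { 0 }, child = { .parent = &root };

	child.pcpu[1][0] = 5;	/* pretend CPU 1 batched 5 events */
	child.pcpu[3][0] = 2;

	cpu_dead(&child, 1);	/* hotplug path flushes CPU 1 only */
	group_free(&child);	/* teardown flushes the remaining CPUs */

	printf("child=%ld root=%ld\n", child.stat[0], root.stat[0]);
	return 0;
}

The point is just that both teardown paths share flush_group_cpu(), mirroring
how the hotplug and free paths both end up in memcg_flush_lruvec_page_state()
in the patch above.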
--
Michal Hocko
SUSE Labs