From mboxrd@z Thu Jan 1 00:00:00 1970
From: Johannes Weiner
To: Andrew Morton
Cc: Tejun Heo, Michal Hocko, Roman Gushchin, Shakeel Butt,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v3 7/8] mm: memcontrol: consolidate lruvec stat flushing
Date: Tue, 9 Feb 2021 11:33:03 -0500
Message-Id: <20210209163304.77088-8-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20210209163304.77088-1-hannes@cmpxchg.org>
References: <20210209163304.77088-1-hannes@cmpxchg.org>
MIME-Version: 1.0

There are two functions that flush the per-cpu data of an lruvec into
the rest of the cgroup tree: one when the cgroup is being freed, the
other when a CPU disappears during hotplug. The difference is whether
all CPUs or just one is being collected, but the rest of the flushing
code is the same. Merge them into one function and share the common
code.

Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
Acked-by: Michal Hocko
---
 mm/memcontrol.c | 74 +++++++++++++++++++------------------------------
 1 file changed, 28 insertions(+), 46 deletions(-)
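
Note for readers of this archive (illustration only, not part of the
patch): the consolidated helper boils down to "snapshot one CPU's
batched deltas, clear them, then add the snapshot at every level of the
ancestor chain". Here is a minimal user-space model of that pattern in
plain C11; the names (struct node, flush_cpu, NR_CPUS, NR_ITEMS) are
invented for this sketch and are not the kernel's:

#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS		4
#define NR_ITEMS	2

struct node {
	long pcpu[NR_CPUS][NR_ITEMS];		/* per-CPU batched deltas */
	atomic_long lruvec_stat[NR_ITEMS];	/* hierarchical totals */
	struct node *parent;
};

/* Flush one CPU's deltas of @n into @n and all of its ancestors. */
static void flush_cpu(struct node *n, int cpu)
{
	long stat[NR_ITEMS];
	struct node *p;
	int i;

	/* snapshot and clear the per-CPU batch */
	for (i = 0; i < NR_ITEMS; i++) {
		stat[i] = n->pcpu[cpu][i];
		n->pcpu[cpu][i] = 0;
	}
	/* propagate the snapshot up the parent chain */
	for (p = n; p; p = p->parent)
		for (i = 0; i < NR_ITEMS; i++)
			atomic_fetch_add(&p->lruvec_stat[i], stat[i]);
}

int main(void)
{
	struct node root = { .parent = NULL };
	struct node child = { .parent = &root };
	int cpu;

	child.pcpu[0][0] = 5;
	child.pcpu[2][0] = 7;

	/* "cgroup free" shape: flush every CPU of one node */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		flush_cpu(&child, cpu);

	/* prints child=12 root=12 */
	printf("child=%ld root=%ld\n",
	       atomic_load(&child.lruvec_stat[0]),
	       atomic_load(&root.lruvec_stat[0]));
	return 0;
}

The hotplug path in the diff below is the other shape: the one dead CPU,
looped over all memcgs.
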
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index bc0979257551..51778fa9b462 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2410,39 +2410,39 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
 	mutex_unlock(&percpu_charge_mutex);
 }
 
-static int memcg_hotplug_cpu_dead(unsigned int cpu)
+static void memcg_flush_lruvec_page_state(struct mem_cgroup *memcg, int cpu)
 {
-	struct memcg_stock_pcp *stock;
-	struct mem_cgroup *memcg;
-
-	stock = &per_cpu(memcg_stock, cpu);
-	drain_stock(stock);
+	int nid;
 
-	for_each_mem_cgroup(memcg) {
+	for_each_node(nid) {
+		struct mem_cgroup_per_node *pn = memcg->nodeinfo[nid];
+		unsigned long stat[NR_VM_NODE_STAT_ITEMS];
+		struct batched_lruvec_stat *lstatc;
 		int i;
 
+		lstatc = per_cpu_ptr(pn->lruvec_stat_cpu, cpu);
 		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
-			int nid;
+			stat[i] = lstatc->count[i];
+			lstatc->count[i] = 0;
+		}
 
-			for_each_node(nid) {
-				struct batched_lruvec_stat *lstatc;
-				struct mem_cgroup_per_node *pn;
-				long x;
+		do {
+			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+				atomic_long_add(stat[i], &pn->lruvec_stat[i]);
+		} while ((pn = parent_nodeinfo(pn, nid)));
+	}
+}
 
-				pn = memcg->nodeinfo[nid];
-				lstatc = per_cpu_ptr(pn->lruvec_stat_cpu, cpu);
+static int memcg_hotplug_cpu_dead(unsigned int cpu)
+{
+	struct memcg_stock_pcp *stock;
+	struct mem_cgroup *memcg;
 
-				x = lstatc->count[i];
-				lstatc->count[i] = 0;
+	stock = &per_cpu(memcg_stock, cpu);
+	drain_stock(stock);
 
-				if (x) {
-					do {
-						atomic_long_add(x, &pn->lruvec_stat[i]);
-					} while ((pn = parent_nodeinfo(pn, nid)));
-				}
-			}
-		}
-	}
+	for_each_mem_cgroup(memcg)
+		memcg_flush_lruvec_page_state(memcg, cpu);
 
 	return 0;
 }
@@ -3635,27 +3635,6 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
 	}
 }
 
-static void memcg_flush_lruvec_page_state(struct mem_cgroup *memcg)
-{
-	int node;
-
-	for_each_node(node) {
-		struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
-		unsigned long stat[NR_VM_NODE_STAT_ITEMS] = { 0 };
-		struct mem_cgroup_per_node *pi;
-		int cpu, i;
-
-		for_each_online_cpu(cpu)
-			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
-				stat[i] += per_cpu(
-					pn->lruvec_stat_cpu->count[i], cpu);
-
-		for (pi = pn; pi; pi = parent_nodeinfo(pi, node))
-			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
-				atomic_long_add(stat[i], &pi->lruvec_stat[i]);
-	}
-}
-
 #ifdef CONFIG_MEMCG_KMEM
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
@@ -5191,12 +5170,15 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg)
 
 static void mem_cgroup_free(struct mem_cgroup *memcg)
 {
+	int cpu;
+
 	memcg_wb_domain_exit(memcg);
 	/*
 	 * Flush percpu lruvec stats to guarantee the value
 	 * correctness on parent's and all ancestor levels.
 	 */
-	memcg_flush_lruvec_page_state(memcg);
+	for_each_online_cpu(cpu)
+		memcg_flush_lruvec_page_state(memcg, cpu);
 	__mem_cgroup_free(memcg);
 }
 
-- 
2.30.0