From: Muchun Song <songmuchun@bytedance.com>
To: hannes@cmpxchg.org, mhocko@kernel.org, vdavydov.dev@gmail.com,
    akpm@linux-foundation.org, shakeelb@google.com, guro@fb.com,
    sfr@canb.auug.org.au, chris@chrisdown.name, laoar.shao@gmail.com,
    richard.weiyang@gmail.com
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org,
    Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3] mm: memcontrol: optimize per-lruvec stats counter memory usage
Date: Thu, 10 Dec 2020 12:21:21 +0800
Message-Id: <20201210042121.39665-1-songmuchun@bytedance.com>

The vmstat threshold is 32 (MEMCG_CHARGE_BATCH), but the actual threshold
can be as big as MEMCG_CHARGE_BATCH * PAGE_SIZE, which still fits into an
s32. So introduce struct batched_lruvec_stat to optimize memory usage.

The size of struct lruvec_stat is 304 bytes on 64-bit systems, and it is
allocated per-cpu. With this patch we can save 304 / 2 * ncpu bytes
per-memcg per-node, where ncpu is the number of possible CPUs. If there
are c memory cgroups (including dying cgroups) and n NUMA nodes in the
system, we save (152 * ncpu * c * n) bytes in total.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
---
Changes in v2 -> v3:
 - Rename per_cpu_lruvec_stat to batched_lruvec_stat. Thanks Shakeel.
 - Update commit log. Thanks Roman.

Changes in v1 -> v2:
 - Update the commit log to point out how many bytes we can save.

 include/linux/memcontrol.h | 14 ++++++++++++--
 mm/memcontrol.c            | 10 +++++++++-
 2 files changed, 21 insertions(+), 3 deletions(-)
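A quick way to sanity-check the size arithmetic above is the stand-alone
user-space sketch below (an editorial illustration for this archive, not
part of the patch). It assumes NR_VM_NODE_STAT_ITEMS is 38, which matches
the 304-byte figure quoted in the commit log; ncpu, c and n are made-up
example values, and the structs are user-space stand-ins for the kernel
ones.

/*
 * Illustration only: user-space stand-ins for the kernel structures,
 * showing why the batched counter halves the per-cpu footprint.
 */
#include <stdint.h>
#include <stdio.h>

#define NR_VM_NODE_STAT_ITEMS   38      /* assumed, config-dependent */
#define MEMCG_CHARGE_BATCH      32U
#define PAGE_SIZE               4096UL  /* assumed 4 KiB pages */

/* Mirrors the existing per-cpu structure: one long per stat item. */
struct lruvec_stat {
        long count[NR_VM_NODE_STAT_ITEMS];
};

/* Mirrors the new batched structure: one s32 per stat item. */
struct batched_lruvec_stat {
        int32_t count[NR_VM_NODE_STAT_ITEMS];
};

int main(void)
{
        unsigned long ncpu = 64, c = 200, n = 2;        /* example system */
        size_t saved_per_cpu = sizeof(struct lruvec_stat) -
                               sizeof(struct batched_lruvec_stat);

        /* Prints 304 and 152 on LP64, matching the commit log. */
        printf("lruvec_stat: %zu bytes, batched_lruvec_stat: %zu bytes\n",
               sizeof(struct lruvec_stat),
               sizeof(struct batched_lruvec_stat));

        /* Total saving: 152 * ncpu * c * n bytes for the example values. */
        printf("saved: %zu bytes\n", saved_per_cpu * ncpu * c * n);

        /* The batched delta is bounded by MEMCG_CHARGE_BATCH * PAGE_SIZE. */
        printf("max batched delta: %lu (S32_MAX is %d)\n",
               MEMCG_CHARGE_BATCH * PAGE_SIZE, INT32_MAX);
        return 0;
}

With the example numbers this reports a total saving of
152 * 64 * 200 * 2 = 3,891,200 bytes; the real saving scales with the
actual CPU, memcg and node counts.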
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 3febf64d1b80..076512e1dc9c 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -92,6 +92,10 @@ struct lruvec_stat {
         long count[NR_VM_NODE_STAT_ITEMS];
 };
 
+struct batched_lruvec_stat {
+        s32 count[NR_VM_NODE_STAT_ITEMS];
+};
+
 /*
  * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
  * which have elements charged to this memcg.
@@ -107,11 +111,17 @@ struct memcg_shrinker_map {
 struct mem_cgroup_per_node {
         struct lruvec           lruvec;
 
-        /* Legacy local VM stats */
+        /*
+         * Legacy local VM stats. This should be struct lruvec_stat and
+         * cannot be optimized to struct batched_lruvec_stat. Because
+         * the threshold of the lruvec_stat_cpu can be as big as
+         * MEMCG_CHARGE_BATCH * PAGE_SIZE. It can fit into s32. But this
+         * field has no upper limit.
+         */
         struct lruvec_stat __percpu *lruvec_stat_local;
 
         /* Subtree VM stats (batched updates) */
-        struct lruvec_stat __percpu *lruvec_stat_cpu;
+        struct batched_lruvec_stat __percpu *lruvec_stat_cpu;
         atomic_long_t           lruvec_stat[NR_VM_NODE_STAT_ITEMS];
 
         unsigned long           lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index eec44918d373..1b01771f2600 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5198,7 +5198,7 @@ static int alloc_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
                 return 1;
         }
 
-        pn->lruvec_stat_cpu = alloc_percpu_gfp(struct lruvec_stat,
+        pn->lruvec_stat_cpu = alloc_percpu_gfp(struct batched_lruvec_stat,
                                                GFP_KERNEL_ACCOUNT);
         if (!pn->lruvec_stat_cpu) {
                 free_percpu(pn->lruvec_stat_local);
@@ -7089,6 +7089,14 @@ static int __init mem_cgroup_init(void)
 {
         int cpu, node;
 
+        /*
+         * Currently the s32 type (see struct batched_lruvec_stat) is
+         * used for per-memcg-per-cpu caching of per-node statistics. In order
+         * to work fine, we should make sure that the overfill threshold can't
+         * exceed S32_MAX / PAGE_SIZE.
+         */
+        BUILD_BUG_ON(MEMCG_CHARGE_BATCH > S32_MAX / PAGE_SIZE);
+
         cpuhp_setup_state_nocalls(CPUHP_MM_MEMCQ_DEAD, "mm/memctrl:dead", NULL,
                                   memcg_hotplug_cpu_dead);
 
-- 
2.11.0