From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yang Shi <shy828301@gmail.com>
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com,
	david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com,
	akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [v10 PATCH 09/13] mm: vmscan: add per memcg shrinker nr_deferred
Date: Thu, 11 Mar 2021 11:08:41 -0800
Message-Id: <20210311190845.9708-10-shy828301@gmail.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210311190845.9708-1-shy828301@gmail.com>
References: <20210311190845.9708-1-shy828301@gmail.com>
MIME-Version: 1.0

Currently the number of deferred objects is per shrinker, but some slabs,
for example the vfs inode/dentry caches, are per memcg. This results in
poor isolation among memcgs.

Deferred objects are typically generated by __GFP_NOFS allocations. One
memcg with excessive __GFP_NOFS allocations may blow up the deferred
count, and other innocent memcgs then suffer from over-shrinking,
excessive reclaim latency, etc.

For example, two workloads run in memcg A and memcg B respectively, and
the workload in B is vfs heavy. If the workload in A generates excessive
deferred objects, B's vfs cache may be hit heavily (half of the caches
dropped) by B's limit reclaim or by global reclaim.

We observed this hit in our production environment, which was running a
vfs heavy workload, as shown in the tracing log below:

<...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start:
super_cache_scan+0x0/0x1a0 ffff9a83046f3458: nid: 1
objects to shrink 3641681686040 gfp_flags
GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721
cache items 246404277 delta 31345 total_scan 123202138

<...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end:
super_cache_scan+0x0/0x1a0 ffff9a83046f3458: nid: 1
unused scan count 3641681686040 new scan count 3641798379189
total_scan 602 last shrinker return val 123186855

The vfs cache to page cache ratio was 10:1 on this machine, and half of
the vfs caches were dropped. This in turn dropped a significant amount of
page cache due to inode eviction.

Making nr_deferred per memcg for memcg-aware shrinkers would solve the
unfairness and bring better isolation.

The following patch will add each child's nr_deferred to its parent memcg
when the memcg goes offline. To preserve nr_deferred when reparenting
memcgs to root, the root memcg needs shrinker_info allocated too.

When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the
shrinker's own nr_deferred is used, and non-memcg-aware shrinkers use the
shrinker's nr_deferred all the time.
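For intuition, the combined allocation introduced by this patch can be
sketched in plain userspace C: a single zeroed allocation carries the
struct, then the deferred counters, then the bitmap, with nr_deferred and
map as interior pointers. This is a minimal sketch, not the kernel code:
it substitutes plain long for atomic_long_t and calloc() for
kvzalloc_node(), and shrinker_info_sketch/alloc_info are hypothetical
names used only here:

#include <stdlib.h>

#define BITS_PER_LONG		(8 * sizeof(unsigned long))
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define round_up(n, m)		(DIV_ROUND_UP(n, m) * (m))

struct shrinker_info_sketch {
	long *nr_deferred;	/* one deferred counter per shrinker id */
	unsigned long *map;	/* one bit per shrinker id */
};

static struct shrinker_info_sketch *alloc_info(int nr_items)
{
	/* counters are sized in BITS_PER_LONG batches, like the map */
	size_t defer_size = round_up(nr_items, BITS_PER_LONG) * sizeof(long);
	size_t map_size = DIV_ROUND_UP(nr_items, BITS_PER_LONG) *
			  sizeof(unsigned long);
	struct shrinker_info_sketch *info;

	/* single zeroed allocation: struct, then counters, then bitmap */
	info = calloc(1, sizeof(*info) + defer_size + map_size);
	if (!info)
		return NULL;
	info->nr_deferred = (long *)(info + 1);	/* just past the struct */
	info->map = (void *)info->nr_deferred + defer_size;
	return info;
}

With nr_items = 65, for instance, defer_size covers 128 counters (a full
extra BITS_PER_LONG batch) while map_size is two unsigned longs.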
Acked-by: Roman Gushchin <guro@fb.com>
Acked-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
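The batching check this patch adds as need_expand() can be exercised
standalone. A minimal userspace sketch, with BITS_PER_LONG hardwired to
64 purely for illustration:

#include <assert.h>

#define BITS_PER_LONG	64
#define round_up(n, m)	((((n) + (m) - 1) / (m)) * (m))

static int shrinker_nr_max = 1;	/* only shrinker id 0 registered so far */

/* mirrors need_expand() below: expand only on a batch-boundary crossing */
static int need_expand(int nr_max)
{
	return round_up(nr_max, BITS_PER_LONG) >
	       round_up(shrinker_nr_max, BITS_PER_LONG);
}

int main(void)
{
	assert(!need_expand(63 + 1));	/* id 63 still fits the first batch */
	assert(need_expand(64 + 1));	/* id 64 forces a reallocation */
	return 0;
}

Because both the map and the nr_deferred array round up to the same
BITS_PER_LONG granularity, most shrinker registrations skip the
reallocation entirely.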
 include/linux/memcontrol.h |  7 +++--
 mm/vmscan.c                | 60 ++++++++++++++++++++++++++------------
 2 files changed, 46 insertions(+), 21 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index dc7d0e2cb3ad..24e735434a46 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -114,12 +114,13 @@ struct batched_lruvec_stat {
 };
 
 /*
- * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
- * which have elements charged to this memcg.
+ * Bitmap and deferred work of shrinker::id corresponding to memcg-aware
+ * shrinkers, which have elements charged to this memcg.
  */
 struct shrinker_info {
 	struct rcu_head rcu;
-	unsigned long map[];
+	atomic_long_t *nr_deferred;
+	unsigned long *map;
 };
 
 /*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 34cf3d84309c..397f3b67bad8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -187,11 +187,17 @@ static DECLARE_RWSEM(shrinker_rwsem);
 #ifdef CONFIG_MEMCG
 static int shrinker_nr_max;
 
+/* The shrinker_info is expanded in a batch of BITS_PER_LONG */
 static inline int shrinker_map_size(int nr_items)
 {
 	return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
 }
 
+static inline int shrinker_defer_size(int nr_items)
+{
+	return (round_up(nr_items, BITS_PER_LONG) * sizeof(atomic_long_t));
+}
+
 static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
						     int nid)
 {
@@ -200,11 +206,13 @@ static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
 }
 
 static int expand_one_shrinker_info(struct mem_cgroup *memcg,
-				    int size, int old_size)
+				    int map_size, int defer_size,
+				    int old_map_size, int old_defer_size)
 {
 	struct shrinker_info *new, *old;
 	struct mem_cgroup_per_node *pn;
 	int nid;
+	int size = map_size + defer_size;
 
 	for_each_node(nid) {
 		pn = memcg->nodeinfo[nid];
@@ -217,9 +225,16 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
 		if (!new)
 			return -ENOMEM;
 
-		/* Set all old bits, clear all new bits */
-		memset(new->map, (int)0xff, old_size);
-		memset((void *)new->map + old_size, 0, size - old_size);
+		new->nr_deferred = (atomic_long_t *)(new + 1);
+		new->map = (void *)new->nr_deferred + defer_size;
+
+		/* map: set all old bits, clear all new bits */
+		memset(new->map, (int)0xff, old_map_size);
+		memset((void *)new->map + old_map_size, 0, map_size - old_map_size);
+		/* nr_deferred: copy old values, clear all new values */
+		memcpy(new->nr_deferred, old->nr_deferred, old_defer_size);
+		memset((void *)new->nr_deferred + old_defer_size, 0,
+		       defer_size - old_defer_size);
 
 		rcu_assign_pointer(pn->shrinker_info, new);
 		kvfree_rcu(old, rcu);
@@ -234,9 +249,6 @@ void free_shrinker_info(struct mem_cgroup *memcg)
 	struct shrinker_info *info;
 	int nid;
 
-	if (mem_cgroup_is_root(memcg))
-		return;
-
 	for_each_node(nid) {
 		pn = memcg->nodeinfo[nid];
 		info = shrinker_info_protected(memcg, nid);
@@ -249,12 +261,12 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
 {
 	struct shrinker_info *info;
 	int nid, size, ret = 0;
-
-	if (mem_cgroup_is_root(memcg))
-		return 0;
+	int map_size, defer_size = 0;
 
 	down_write(&shrinker_rwsem);
-	size = shrinker_map_size(shrinker_nr_max);
+	map_size = shrinker_map_size(shrinker_nr_max);
+	defer_size = shrinker_defer_size(shrinker_nr_max);
+	size = map_size + defer_size;
 	for_each_node(nid) {
 		info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid);
 		if (!info) {
@@ -262,6 +274,8 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
 			ret = -ENOMEM;
 			break;
 		}
+		info->nr_deferred = (atomic_long_t *)(info + 1);
+		info->map = (void *)info->nr_deferred + defer_size;
 		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
 	}
 	up_write(&shrinker_rwsem);
@@ -269,15 +283,21 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
 	return ret;
 }
 
+static inline bool need_expand(int nr_max)
+{
+	return round_up(nr_max, BITS_PER_LONG) >
+	       round_up(shrinker_nr_max, BITS_PER_LONG);
+}
+
 static int expand_shrinker_info(int new_id)
 {
-	int size, old_size, ret = 0;
+	int ret = 0;
 	int new_nr_max = new_id + 1;
+	int map_size, defer_size = 0;
+	int old_map_size, old_defer_size = 0;
 	struct mem_cgroup *memcg;
 
-	size = shrinker_map_size(new_nr_max);
-	old_size = shrinker_map_size(shrinker_nr_max);
-	if (size <= old_size)
+	if (!need_expand(new_nr_max))
 		goto out;
 
 	if (!root_mem_cgroup)
@@ -285,11 +305,15 @@ static int expand_shrinker_info(int new_id)
 
 	lockdep_assert_held(&shrinker_rwsem);
 
+	map_size = shrinker_map_size(new_nr_max);
+	defer_size = shrinker_defer_size(new_nr_max);
+	old_map_size = shrinker_map_size(shrinker_nr_max);
+	old_defer_size = shrinker_defer_size(shrinker_nr_max);
+
 	memcg = mem_cgroup_iter(NULL, NULL, NULL);
 	do {
-		if (mem_cgroup_is_root(memcg))
-			continue;
-		ret = expand_one_shrinker_info(memcg, size, old_size);
+		ret = expand_one_shrinker_info(memcg, map_size, defer_size,
+					       old_map_size, old_defer_size);
 		if (ret) {
 			mem_cgroup_iter_break(NULL, memcg);
 			goto out;
-- 
2.26.2