From: Yang Shi <shy828301@gmail.com>
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v8 PATCH 09/13] mm: vmscan: add per memcg shrinker nr_deferred
Date: Tue, 16 Feb 2021 16:13:18 -0800
Message-Id: <20210217001322.2226796-10-shy828301@gmail.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

Currently the number of deferred objects is kept per shrinker, but some slabs, for
example the vfs inode/dentry caches, are per memcg.  This results in poor isolation
among memcgs.  The deferred objects are typically generated by __GFP_NOFS allocations:
one memcg with excessive __GFP_NOFS allocations may blow up its deferred objects, and
other innocent memcgs then suffer from over-shrink, excessive reclaim latency, etc.

For example, two workloads run in memcgA and memcgB respectively, and the workload in B
is vfs heavy.  The workload in A generates excessive deferred objects, then B's vfs
cache might be hit heavily (half of the caches dropped) by B's limit reclaim or by
global reclaim.

We observed this in our production environment, which was running a vfs heavy workload,
as shown in the tracing log below:

<...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
nid: 1 objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721
cache items 246404277 delta 31345 total_scan 123202138

<...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
nid: 1 unused scan count 3641681686040 new scan count 3641798379189 total_scan 602
last shrinker return val 123186855

The vfs cache to page cache ratio was 10:1 on this machine, and half of the caches were
dropped.  This also caused a significant amount of page cache to be dropped due to inode
eviction.

Making nr_deferred per memcg for memcg aware shrinkers solves the unfairness and brings
better isolation.

When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the shrinker's own
nr_deferred is used.  Non memcg aware shrinkers use the shrinker's nr_deferred all the
time.
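As a rough illustration of the single-allocation layout this patch introduces, the
following minimal userspace sketch mirrors the size math and the nr_deferred/map pointer
setup (kernel types and helpers are stubbed, the shrinker count is made up, and this
sketch is not part of the patch itself):

#include <stdio.h>
#include <stdlib.h>

#define BITS_PER_LONG        (8 * sizeof(long))
#define DIV_ROUND_UP(n, d)   (((n) + (d) - 1) / (d))
#define round_up(n, d)       (DIV_ROUND_UP(n, d) * (d))

struct shrinker_info {
        /* struct rcu_head omitted in this userspace sketch */
        long *nr_deferred;              /* atomic_long_t in the kernel */
        unsigned long *map;
};

static size_t shrinker_map_size(int nr_items)
{
        return DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long);
}

static size_t shrinker_defer_size(int nr_items)
{
        return round_up(nr_items, BITS_PER_LONG) * sizeof(long);
}

int main(void)
{
        int nr_items = 70;      /* e.g. 70 registered memcg aware shrinkers */
        size_t defer_size = shrinker_defer_size(nr_items);
        size_t map_size = shrinker_map_size(nr_items);

        /* One allocation per memcg per node: header, nr_deferred[], then the bitmap. */
        struct shrinker_info *info = calloc(1, sizeof(*info) + defer_size + map_size);

        if (!info)
                return 1;

        info->nr_deferred = (long *)(info + 1);
        /* the kernel code below does the same with void * arithmetic */
        info->map = (unsigned long *)((char *)info->nr_deferred + defer_size);

        printf("defer_size=%zu map_size=%zu total=%zu\n",
               defer_size, map_size, sizeof(*info) + defer_size + map_size);
        free(info);
        return 0;
}

With this layout, growing the arrays (expand_one_shrinker_info() in the diff) only has
to memcpy the old nr_deferred values and zero the new tail, while the bitmap keeps its
old bits.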
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 include/linux/memcontrol.h |  7 +++--
 mm/vmscan.c                | 60 ++++++++++++++++++++++++++------------
 2 files changed, 46 insertions(+), 21 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4c9253896e25..c457fc7bc631 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -93,12 +93,13 @@ struct lruvec_stat {
 };
 
 /*
- * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
- * which have elements charged to this memcg.
+ * Bitmap and deferred work of shrinker::id corresponding to memcg-aware
+ * shrinkers, which have elements charged to this memcg.
  */
 struct shrinker_info {
 	struct rcu_head rcu;
-	unsigned long map[];
+	atomic_long_t *nr_deferred;
+	unsigned long *map;
 };
 
 /*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a1047ea60ecf..fcb399e18fc3 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -187,11 +187,17 @@ static DECLARE_RWSEM(shrinker_rwsem);
 #ifdef CONFIG_MEMCG
 static int shrinker_nr_max;
 
+/* The shrinker_info is expanded in a batch of BITS_PER_LONG */
 static inline int shrinker_map_size(int nr_items)
 {
 	return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
 }
 
+static inline int shrinker_defer_size(int nr_items)
+{
+	return (round_up(nr_items, BITS_PER_LONG) * sizeof(atomic_long_t));
+}
+
 static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
 						     int nid)
 {
@@ -200,10 +206,12 @@ static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
 }
 
 static int expand_one_shrinker_info(struct mem_cgroup *memcg,
-				    int size, int old_size)
+				    int map_size, int defer_size,
+				    int old_map_size, int old_defer_size)
 {
 	struct shrinker_info *new, *old;
 	int nid;
+	int size = map_size + defer_size;
 
 	for_each_node(nid) {
 		old = shrinker_info_protected(memcg, nid);
@@ -215,9 +223,16 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
 		if (!new)
 			return -ENOMEM;
 
-		/* Set all old bits, clear all new bits */
-		memset(new->map, (int)0xff, old_size);
-		memset((void *)new->map + old_size, 0, size - old_size);
+		new->nr_deferred = (atomic_long_t *)(new + 1);
+		new->map = (void *)new->nr_deferred + defer_size;
+
+		/* map: set all old bits, clear all new bits */
+		memset(new->map, (int)0xff, old_map_size);
+		memset((void *)new->map + old_map_size, 0, map_size - old_map_size);
+		/* nr_deferred: copy old values, clear all new values */
+		memcpy(new->nr_deferred, old->nr_deferred, old_defer_size);
+		memset((void *)new->nr_deferred + old_defer_size, 0,
+		       defer_size - old_defer_size);
 
 		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, new);
 		kvfree_rcu(old);
@@ -232,9 +247,6 @@ void free_shrinker_info(struct mem_cgroup *memcg)
 	struct shrinker_info *info;
 	int nid;
 
-	if (mem_cgroup_is_root(memcg))
-		return;
-
 	for_each_node(nid) {
 		pn = mem_cgroup_nodeinfo(memcg, nid);
 		info = shrinker_info_protected(memcg, nid);
@@ -247,12 +259,12 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
 {
 	struct shrinker_info *info;
 	int nid, size, ret = 0;
-
-	if (mem_cgroup_is_root(memcg))
-		return 0;
+	int map_size, defer_size = 0;
 
 	down_write(&shrinker_rwsem);
-	size = shrinker_map_size(shrinker_nr_max);
+	map_size = shrinker_map_size(shrinker_nr_max);
+	defer_size = shrinker_defer_size(shrinker_nr_max);
+	size = map_size + defer_size;
 	for_each_node(nid) {
 		info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid);
 		if (!info) {
@@ -260,6 +272,8 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
 			ret = -ENOMEM;
 			break;
 		}
+		info->nr_deferred = (atomic_long_t *)(info + 1);
+		info->map = (void *)info->nr_deferred + defer_size;
 		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
 	}
 	up_write(&shrinker_rwsem);
@@ -267,15 +281,21 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
 	return ret;
 }
 
+static inline bool need_expand(int nr_max)
+{
+	return round_up(nr_max, BITS_PER_LONG) >
+	       round_up(shrinker_nr_max, BITS_PER_LONG);
+}
+
 static int expand_shrinker_info(int new_id)
 {
-	int size, old_size, ret = 0;
+	int ret = 0;
 	int new_nr_max = new_id + 1;
+	int map_size, defer_size = 0;
+	int old_map_size, old_defer_size = 0;
 	struct mem_cgroup *memcg;
 
-	size = shrinker_map_size(new_nr_max);
-	old_size = shrinker_map_size(shrinker_nr_max);
-	if (size <= old_size)
+	if (!need_expand(new_nr_max))
 		goto out;
 
 	if (!root_mem_cgroup)
@@ -283,11 +303,15 @@ static int expand_shrinker_info(int new_id)
 
 	lockdep_assert_held(&shrinker_rwsem);
 
+	map_size = shrinker_map_size(new_nr_max);
+	defer_size = shrinker_defer_size(new_nr_max);
+	old_map_size = shrinker_map_size(shrinker_nr_max);
+	old_defer_size = shrinker_defer_size(shrinker_nr_max);
+
 	memcg = mem_cgroup_iter(NULL, NULL, NULL);
 	do {
-		if (mem_cgroup_is_root(memcg))
-			continue;
-		ret = expand_one_shrinker_info(memcg, size, old_size);
+		ret = expand_one_shrinker_info(memcg, map_size, defer_size,
+					       old_map_size, old_defer_size);
 		if (ret) {
 			mem_cgroup_iter_break(NULL, memcg);
 			goto out;
-- 
2.26.2
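[Editor's illustration, not part of the patch: the per-memcg nr_deferred slots added
above are expected to be consumed by follow-up patches in this series.  The userspace
sketch below shows the intended add/claim pattern on one slot using C11 atomics; the
helper names are invented for the example and do not come from the kernel tree.]

#include <stdatomic.h>
#include <stdio.h>

#define NR_SHRINKERS    64

/* One counter per shrinker id (per memcg and per node in the kernel). */
static atomic_long nr_deferred[NR_SHRINKERS];

/* Record work deferred by a scan (e.g. under GFP_NOFS); returns the new total. */
static long add_nr_deferred_sketch(int id, long nr)
{
        return atomic_fetch_add(&nr_deferred[id], nr) + nr;
}

/* Claim (and reset) all deferred work for this shrinker id. */
static long xchg_nr_deferred_sketch(int id)
{
        return atomic_exchange(&nr_deferred[id], 0);
}

int main(void)
{
        add_nr_deferred_sketch(3, 1000);        /* shrinker id 3 defers 1000 objects */
        printf("claimed %ld deferred objects\n", xchg_nr_deferred_sketch(3));
        return 0;
}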