From: Yang Shi
Date: Mon, 8 Mar 2021 12:30:31 -0800
Subject: Re: [v8 PATCH 09/13] mm: vmscan: add per memcg shrinker nr_deferred
To: Shakeel Butt
Cc: Roman Gushchin, Kirill Tkhai, Vlastimil Babka, Dave Chinner,
 Johannes Weiner, Michal Hocko, Andrew Morton, Linux MM,
 linux-fsdevel, LKML
References: <20210217001322.2226796-1-shy828301@gmail.com>
 <20210217001322.2226796-10-shy828301@gmail.com>

On Mon, Mar 8, 2021 at 11:12 AM Shakeel Butt wrote:
>
> On Tue, Feb 16, 2021 at 4:13 PM Yang Shi wrote:
> >
> > Currently the number of deferred objects is per shrinker, but some slabs, for example,
> > vfs inode/dentry cache, are per memcg. This results in poor isolation among memcgs.
> >
> > The deferred objects typically are generated by __GFP_NOFS allocations; one memcg with
> > excessive __GFP_NOFS allocations may blow up deferred objects, and then other innocent
> > memcgs may suffer from over shrink, excessive reclaim latency, etc.
> >
> > For example, two workloads run in memcgA and memcgB respectively, and the workload in B
> > is a vfs heavy workload. The workload in A generates excessive deferred objects, so B's
> > vfs cache might be hit heavily (drop half of the caches) by B's limit reclaim or global reclaim.
> >
> > We observed this hit in our production environment, which was running a vfs heavy workload,
> > as shown in the below tracing log:
> >
> > <...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > nid: 1 objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721
> > cache items 246404277 delta 31345 total_scan 123202138
> > <...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > nid: 1 unused scan count 3641681686040 new scan count 3641798379189 total_scan 602
> > last shrinker return val 123186855
> >
> > The vfs cache to page cache ratio was 10:1 on this machine, and half of the caches were dropped.
> > This also resulted in a significant amount of page cache being dropped due to inode eviction.
> >
> > Making nr_deferred per memcg for memcg aware shrinkers would solve the unfairness and bring
> > better isolation.
> >
> > When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the shrinker's nr_deferred
> > would be used. And non memcg aware shrinkers use the shrinker's nr_deferred all the time.
> >
> > Signed-off-by: Yang Shi
> > ---
> >  include/linux/memcontrol.h |  7 +++--
> >  mm/vmscan.c                | 60 ++++++++++++++++++++++++++------------
> >  2 files changed, 46 insertions(+), 21 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 4c9253896e25..c457fc7bc631 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -93,12 +93,13 @@ struct lruvec_stat {
> >  };
> >
> >  /*
> > - * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
> > - * which have elements charged to this memcg.
> > + * Bitmap and deferred work of shrinker::id corresponding to memcg-aware
> > + * shrinkers, which have elements charged to this memcg.
> >   */
> >  struct shrinker_info {
> >         struct rcu_head rcu;
> > -       unsigned long map[];
> > +       atomic_long_t *nr_deferred;
> > +       unsigned long *map;
> >  };
> >
> >  /*
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index a1047ea60ecf..fcb399e18fc3 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -187,11 +187,17 @@ static DECLARE_RWSEM(shrinker_rwsem);
> >  #ifdef CONFIG_MEMCG
> >  static int shrinker_nr_max;
> >
> > +/* The shrinker_info is expanded in a batch of BITS_PER_LONG */
> >  static inline int shrinker_map_size(int nr_items)
> >  {
> >         return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
> >  }
> >
> > +static inline int shrinker_defer_size(int nr_items)
> > +{
> > +       return (round_up(nr_items, BITS_PER_LONG) * sizeof(atomic_long_t));
> > +}
> > +
> >  static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
> >                                                      int nid)
> >  {
> > @@ -200,10 +206,12 @@ static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
> >  }
> >
> >  static int expand_one_shrinker_info(struct mem_cgroup *memcg,
> > -                                   int size, int old_size)
> > +                                   int map_size, int defer_size,
> > +                                   int old_map_size, int old_defer_size)
> >  {
> >         struct shrinker_info *new, *old;
> >         int nid;
> > +       int size = map_size + defer_size;
> >
> >         for_each_node(nid) {
> >                 old = shrinker_info_protected(memcg, nid);
> > @@ -215,9 +223,16 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
> >                 if (!new)
> >                         return -ENOMEM;
> >
> > -               /* Set all old bits, clear all new bits */
> > -               memset(new->map, (int)0xff, old_size);
> > -               memset((void *)new->map + old_size, 0, size - old_size);
> > +               new->nr_deferred = (atomic_long_t *)(new + 1);
> > +               new->map = (void *)new->nr_deferred + defer_size;
> > +
> > +               /* map: set all old bits, clear all new bits */
> > +               memset(new->map, (int)0xff, old_map_size);
> > +               memset((void *)new->map + old_map_size, 0, map_size - old_map_size);
> > +               /* nr_deferred: copy old values, clear all new values */
> > +               memcpy(new->nr_deferred, old->nr_deferred, old_defer_size);
> > +               memset((void *)new->nr_deferred + old_defer_size, 0,
> > +                      defer_size - old_defer_size);
> >
> >                 rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, new);
> >                 kvfree_rcu(old);
> > @@ -232,9 +247,6 @@ void free_shrinker_info(struct mem_cgroup *memcg)
> >         struct shrinker_info *info;
> >         int nid;
> >
> > -       if (mem_cgroup_is_root(memcg))
> > -               return;
> > -
> >         for_each_node(nid) {
> >                 pn = mem_cgroup_nodeinfo(memcg, nid);
> >                 info = shrinker_info_protected(memcg, nid);
> > @@ -247,12 +259,12 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
> >  {
> >         struct shrinker_info *info;
> >         int nid, size, ret = 0;
> > -
> > -       if (mem_cgroup_is_root(memcg))
> > -               return 0;
>
> Can you please comment on the consequences on allowing to allocate
> shrinker_info for root memcg?
> Why didn't we do that before but now it
> is fine (or maybe required)? Please add the explanation in the commit
> message.

Before the patchset, shrinker_info just tracked the shrinker_maps, which are
not needed for the root memcg. But the newly added nr_deferred is needed for
the root memcg, otherwise the deferred work would get lost once the child
memcgs are reparented to root. How about adding the below paragraph to the
commit log:

"To preserve nr_deferred when reparenting memcgs to root, root memcg needs
shrinker_info allocated too."

>
> > +       int map_size, defer_size = 0;
> >
> >         down_write(&shrinker_rwsem);
> > -       size = shrinker_map_size(shrinker_nr_max);
> > +       map_size = shrinker_map_size(shrinker_nr_max);
> > +       defer_size = shrinker_defer_size(shrinker_nr_max);
> > +       size = map_size + defer_size;
> >         for_each_node(nid) {
> >                 info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid);
> >                 if (!info) {
> > @@ -260,6 +272,8 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
> >                         ret = -ENOMEM;
> >                         break;
> >                 }
> > +               info->nr_deferred = (atomic_long_t *)(info + 1);
> > +               info->map = (void *)info->nr_deferred + defer_size;
> >                 rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
> >         }
> >         up_write(&shrinker_rwsem);
> > @@ -267,15 +281,21 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
> >         return ret;
> >  }
> >
> > +static inline bool need_expand(int nr_max)
> > +{
> > +       return round_up(nr_max, BITS_PER_LONG) >
> > +              round_up(shrinker_nr_max, BITS_PER_LONG);
> > +}
> > +
> >  static int expand_shrinker_info(int new_id)
> >  {
> > -       int size, old_size, ret = 0;
> > +       int ret = 0;
> >         int new_nr_max = new_id + 1;
> > +       int map_size, defer_size = 0;
> > +       int old_map_size, old_defer_size = 0;
> >         struct mem_cgroup *memcg;
> >
> > -       size = shrinker_map_size(new_nr_max);
> > -       old_size = shrinker_map_size(shrinker_nr_max);
> > -       if (size <= old_size)
> > +       if (!need_expand(new_nr_max))
> >                 goto out;
> >
> >         if (!root_mem_cgroup)
> > @@ -283,11 +303,15 @@ static int expand_shrinker_info(int new_id)
> >
> >         lockdep_assert_held(&shrinker_rwsem);
> >
> > +       map_size = shrinker_map_size(new_nr_max);
> > +       defer_size = shrinker_defer_size(new_nr_max);
> > +       old_map_size = shrinker_map_size(shrinker_nr_max);
> > +       old_defer_size = shrinker_defer_size(shrinker_nr_max);
> > +
> >         memcg = mem_cgroup_iter(NULL, NULL, NULL);
> >         do {
> > -               if (mem_cgroup_is_root(memcg))
> > -                       continue;
> > -               ret = expand_one_shrinker_info(memcg, size, old_size);
> > +               ret = expand_one_shrinker_info(memcg, map_size, defer_size,
> > +                                              old_map_size, old_defer_size);
> >                 if (ret) {
> >                         mem_cgroup_iter_break(NULL, memcg);
> >                         goto out;
> > --
> > 2.26.2
> >
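
For illustration, a rough sketch (not part of the quoted patch; the helper
names and exact call sites are assumptions) of how a memcg-aware shrinker
path could consume the per-memcg nr_deferred introduced above, instead of
the global shrinker->nr_deferred:

/*
 * Illustrative sketch only: read-and-reset / add-back of the deferred
 * count for one (shrinker, memcg, node) tuple.  The helper names are
 * assumed; shrinker_info_protected() is the lookup added by the quoted
 * patch and expects shrinker_rwsem to be held.
 */
static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker,
                                   struct mem_cgroup *memcg)
{
        struct shrinker_info *info;

        info = shrinker_info_protected(memcg, nid);
        /* Take the deferred work for this shrinker/memcg and clear it. */
        return atomic_long_xchg(&info->nr_deferred[shrinker->id], 0);
}

static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
                                  struct mem_cgroup *memcg)
{
        struct shrinker_info *info;

        info = shrinker_info_protected(memcg, nid);
        /* Return the new total so the caller can trace it. */
        return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]);
}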
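
To make the reparenting point in the reply above concrete, here is a rough
sketch (again only illustrative, not quoted from the series; the function
name and locking are assumptions) of folding an offlined memcg's deferred
counts into its parent, which is the reason the root memcg now needs
shrinker_info allocated as well:

/*
 * Illustrative sketch only: when a memcg goes offline, move its
 * per-shrinker deferred counts up to the parent (ultimately the root
 * memcg), so the deferred work is not lost.
 */
void reparent_shrinker_deferred(struct mem_cgroup *memcg)
{
        int i, nid;
        long nr;
        struct mem_cgroup *parent;
        struct shrinker_info *child_info, *parent_info;

        parent = parent_mem_cgroup(memcg);
        if (!parent)
                parent = root_mem_cgroup;

        /* Prevent a concurrent shrinker_info expansion. */
        down_read(&shrinker_rwsem);
        for_each_node(nid) {
                child_info = shrinker_info_protected(memcg, nid);
                parent_info = shrinker_info_protected(parent, nid);
                for (i = 0; i < shrinker_nr_max; i++) {
                        nr = atomic_long_read(&child_info->nr_deferred[i]);
                        atomic_long_add(nr, &parent_info->nr_deferred[i]);
                }
        }
        up_read(&shrinker_rwsem);
}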