From: Shakeel Butt
Date: Mon, 8 Mar 2021 11:12:03 -0800
Subject: Re: [v8 PATCH 09/13] mm: vmscan: add per memcg shrinker nr_deferred
To: Yang Shi
Cc: Roman Gushchin, Kirill Tkhai, Vlastimil Babka, Dave Chinner, Johannes Weiner, Michal Hocko, Andrew Morton, Linux MM, linux-fsdevel, LKML
In-Reply-To: <20210217001322.2226796-10-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com> <20210217001322.2226796-10-shy828301@gmail.com>

On Tue, Feb 16, 2021 at 4:13 PM Yang Shi wrote:
>
> Currently the number of deferred objects is per shrinker, but some slabs,
> for example the vfs inode/dentry cache, are per memcg. This results in
> poor isolation among memcgs.
>
> Deferred objects are typically generated by __GFP_NOFS allocations: one
> memcg with excessive __GFP_NOFS allocations may blow up the deferred
> count, and other innocent memcgs may then suffer over-shrinking,
> excessive reclaim latency, etc.
>
> For example, say two workloads run in memcgA and memcgB respectively,
> and the workload in B is vfs heavy. If the workload in A generates
> excessive deferred objects, B's vfs cache might be hit heavily (dropping
> half of its caches) by B's limit reclaim or by global reclaim.
>
> We observed this in our production environment, which was running a vfs
> heavy workload, as shown in the tracing log below:
>
> <...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> nid: 1 objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721
> cache items 246404277 delta 31345 total_scan 123202138
> <...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> nid: 1 unused scan count 3641681686040 new scan count 3641798379189 total_scan 602
> last shrinker return val 123186855
>
> The vfs cache to page cache ratio was 10:1 on this machine, and half of
> the caches were dropped. This also caused a significant amount of page
> cache to be dropped due to inode eviction.
>
> Making nr_deferred per memcg for memcg aware shrinkers solves the
> unfairness and brings better isolation.
>
> When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the
> shrinker's own nr_deferred is used, and non memcg aware shrinkers use
> the shrinker's nr_deferred all the time.
>
> Signed-off-by: Yang Shi
> ---
>  include/linux/memcontrol.h |  7 +++--
>  mm/vmscan.c                | 60 ++++++++++++++++++++++++++------------
>  2 files changed, 46 insertions(+), 21 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 4c9253896e25..c457fc7bc631 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -93,12 +93,13 @@ struct lruvec_stat {
>  };
>
>  /*
> - * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
> - * which have elements charged to this memcg.
> + * Bitmap and deferred work of shrinker::id corresponding to memcg-aware
> + * shrinkers, which have elements charged to this memcg.
>   */
>  struct shrinker_info {
>  	struct rcu_head rcu;
> -	unsigned long map[];
> +	atomic_long_t *nr_deferred;
> +	unsigned long *map;
>  };
>
>  /*
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index a1047ea60ecf..fcb399e18fc3 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -187,11 +187,17 @@ static DECLARE_RWSEM(shrinker_rwsem);
>  #ifdef CONFIG_MEMCG
>  static int shrinker_nr_max;
>
> +/* The shrinker_info is expanded in a batch of BITS_PER_LONG */
>  static inline int shrinker_map_size(int nr_items)
>  {
>  	return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
>  }
>
> +static inline int shrinker_defer_size(int nr_items)
> +{
> +	return (round_up(nr_items, BITS_PER_LONG) * sizeof(atomic_long_t));
> +}
> +
>  static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
>  						     int nid)
>  {
> @@ -200,10 +206,12 @@ static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
>  }
>
>  static int expand_one_shrinker_info(struct mem_cgroup *memcg,
> -				    int size, int old_size)
> +				    int map_size, int defer_size,
> +				    int old_map_size, int old_defer_size)
>  {
>  	struct shrinker_info *new, *old;
>  	int nid;
> +	int size = map_size + defer_size;
>
>  	for_each_node(nid) {
>  		old = shrinker_info_protected(memcg, nid);
> @@ -215,9 +223,16 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
>  		if (!new)
>  			return -ENOMEM;
>
> -		/* Set all old bits, clear all new bits */
> -		memset(new->map, (int)0xff, old_size);
> -		memset((void *)new->map + old_size, 0, size - old_size);
> +		new->nr_deferred = (atomic_long_t *)(new + 1);
> +		new->map = (void *)new->nr_deferred + defer_size;
> +
> +		/* map: set all old bits, clear all new bits */
> +		memset(new->map, (int)0xff, old_map_size);
> +		memset((void *)new->map + old_map_size, 0, map_size - old_map_size);
> +		/* nr_deferred: copy old values, clear all new values */
> +		memcpy(new->nr_deferred, old->nr_deferred, old_defer_size);
> +		memset((void *)new->nr_deferred + old_defer_size, 0,
> +		       defer_size - old_defer_size);
>
>  		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, new);
>  		kvfree_rcu(old);
> @@ -232,9 +247,6 @@ void free_shrinker_info(struct mem_cgroup *memcg)
>  	struct shrinker_info *info;
>  	int nid;
>
> -	if (mem_cgroup_is_root(memcg))
> -		return;
> -
>  	for_each_node(nid) {
>  		pn = mem_cgroup_nodeinfo(memcg, nid);
>  		info = shrinker_info_protected(memcg, nid);
> @@ -247,12 +259,12 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
>  {
>  	struct shrinker_info *info;
>  	int nid, size, ret = 0;
> -
> -	if (mem_cgroup_is_root(memcg))
> -		return 0;

Can you please comment on the consequences of allowing shrinker_info to
be allocated for the root memcg? Why didn't we do that before, and why
is it fine (or maybe required) now? Please add the explanation to the
commit message.
> +	int map_size, defer_size = 0;
>
>  	down_write(&shrinker_rwsem);
> -	size = shrinker_map_size(shrinker_nr_max);
> +	map_size = shrinker_map_size(shrinker_nr_max);
> +	defer_size = shrinker_defer_size(shrinker_nr_max);
> +	size = map_size + defer_size;
>  	for_each_node(nid) {
>  		info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid);
>  		if (!info) {
> @@ -260,6 +272,8 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
>  			ret = -ENOMEM;
>  			break;
>  		}
> +		info->nr_deferred = (atomic_long_t *)(info + 1);
> +		info->map = (void *)info->nr_deferred + defer_size;
>  		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
>  	}
>  	up_write(&shrinker_rwsem);
> @@ -267,15 +281,21 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
>  	return ret;
>  }
>
> +static inline bool need_expand(int nr_max)
> +{
> +	return round_up(nr_max, BITS_PER_LONG) >
> +	       round_up(shrinker_nr_max, BITS_PER_LONG);
> +}
> +
>  static int expand_shrinker_info(int new_id)
>  {
> -	int size, old_size, ret = 0;
> +	int ret = 0;
>  	int new_nr_max = new_id + 1;
> +	int map_size, defer_size = 0;
> +	int old_map_size, old_defer_size = 0;
>  	struct mem_cgroup *memcg;
>
> -	size = shrinker_map_size(new_nr_max);
> -	old_size = shrinker_map_size(shrinker_nr_max);
> -	if (size <= old_size)
> +	if (!need_expand(new_nr_max))
>  		goto out;
>
>  	if (!root_mem_cgroup)
> @@ -283,11 +303,15 @@ static int expand_shrinker_info(int new_id)
>
>  	lockdep_assert_held(&shrinker_rwsem);
>
> +	map_size = shrinker_map_size(new_nr_max);
> +	defer_size = shrinker_defer_size(new_nr_max);
> +	old_map_size = shrinker_map_size(shrinker_nr_max);
> +	old_defer_size = shrinker_defer_size(shrinker_nr_max);
> +
>  	memcg = mem_cgroup_iter(NULL, NULL, NULL);
>  	do {
> -		if (mem_cgroup_is_root(memcg))
> -			continue;
> -		ret = expand_one_shrinker_info(memcg, size, old_size);
> +		ret = expand_one_shrinker_info(memcg, map_size, defer_size,
> +					       old_map_size, old_defer_size);
>  		if (ret) {
>  			mem_cgroup_iter_break(NULL, memcg);
>  			goto out;
> --
> 2.26.2
>
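
While reviewing the layout above I found it easiest to check the size
arithmetic with a small userspace program. This is just my own sketch,
not the kernel code: DIV_ROUND_UP/round_up are reimplemented,
atomic_long_t is stubbed as a plain long, and the rcu_head is dropped.
It shows how the single kvzalloc'd block is carved into the nr_deferred
array followed by the bitmap:

#include <stdio.h>
#include <stdlib.h>

/* Userspace stand-ins for the kernel helpers the patch relies on. */
#define BITS_PER_LONG		(8 * sizeof(unsigned long))
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define round_up(n, d)		(DIV_ROUND_UP(n, d) * (d))

typedef long atomic_long_t;	/* stub; the real type is atomic */

struct shrinker_info {
	/* struct rcu_head omitted in this sketch */
	atomic_long_t *nr_deferred;	/* one deferred count per shrinker id */
	unsigned long *map;		/* one bit per shrinker id */
};

int main(void)
{
	int nr_items = 100;	/* pretend 100 shrinker ids are registered */

	/* 1 bit per id, whole longs: DIV_ROUND_UP(100, 64) * 8 = 16 bytes */
	size_t map_size = DIV_ROUND_UP(nr_items, BITS_PER_LONG) *
			  sizeof(unsigned long);
	/* 1 counter per id, padded to a BITS_PER_LONG multiple so both
	 * arrays grow in the same batches: 128 * 8 = 1024 bytes */
	size_t defer_size = round_up(nr_items, BITS_PER_LONG) *
			    sizeof(atomic_long_t);

	/* One allocation: header, then the counters, then the bitmap. */
	struct shrinker_info *info = calloc(1, sizeof(*info) + defer_size +
					       map_size);
	if (!info)
		return 1;
	info->nr_deferred = (atomic_long_t *)(info + 1);
	info->map = (void *)info->nr_deferred + defer_size; /* GNU void * math */

	printf("map=%zu bytes, nr_deferred=%zu bytes, total=%zu bytes\n",
	       map_size, defer_size, sizeof(*info) + defer_size + map_size);
	free(info);
	return 0;
}

If I read it right, the round_up() padding of defer_size is also why
need_expand() compares round_up()'d values: with 100 ids in use, both
arrays already have room for 128 entries, so nothing is reallocated
until id 128 is crossed.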