From: Yang Shi
Date: Fri, 29 Jan 2021 10:04:38 -0800
Subject: Re: [v5 PATCH 07/11] mm: vmscan: add per memcg shrinker nr_deferred
To: Vlastimil Babka
Cc: Roman Gushchin, Kirill Tkhai, Shakeel Butt, Dave Chinner,
 Johannes Weiner, Michal Hocko, Andrew Morton, Linux MM,
 Linux FS-devel Mailing List, Linux Kernel Mailing List
References: <20210127233345.339910-1-shy828301@gmail.com>
 <20210127233345.339910-8-shy828301@gmail.com>
 <6b0638ba-2513-67f5-8ef1-9e60a7d9ded6@suse.cz>

On Fri, Jan 29, 2021 at 9:20 AM Yang Shi wrote:
>
> On Fri, Jan 29, 2021 at 5:00 AM Vlastimil Babka wrote:
> >
> > On 1/28/21 12:33 AM, Yang Shi wrote:
> > > Currently the number of deferred objects is per shrinker, but some slabs,
> > > for example the vfs inode/dentry caches, are per memcg. This results in
> > > poor isolation among memcgs.
> > >
> > > Deferred objects are typically generated by __GFP_NOFS allocations. One
> > > memcg with excessive __GFP_NOFS allocations may blow up the deferred
> > > count, and other innocent memcgs then suffer over-shrinking, excessive
> > > reclaim latency, etc.
> > >
> > > For example, two workloads run in memcgA and memcgB respectively, and
> > > the workload in B is vfs heavy. If the workload in A generates excessive
> > > deferred objects, B's vfs caches might be hit heavily (half of the
> > > caches dropped) by B's limit reclaim or by global reclaim.
> > >
> > > We observed this in our production environment, which was running a vfs
> > > heavy workload, as shown by the tracing log below:
> > >
> > > <...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > > nid: 1 objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721
> > > cache items 246404277 delta 31345 total_scan 123202138
> > > <...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > > nid: 1 unused scan count 3641681686040 new scan count 3641798379189 total_scan 602
> > > last shrinker return val 123186855
> > >
> > > The vfs cache to page cache ratio was 10:1 on this machine, and half of
> > > the vfs caches were dropped. This in turn caused a significant amount of
> > > page cache to be dropped due to inode eviction.
> > >
> > > Making nr_deferred per memcg for memcg aware shrinkers would solve the
> > > unfairness and bring better isolation.
> > >
> > > When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the
> > > shrinker's nr_deferred is used. Non memcg aware shrinkers use the
> > > shrinker's nr_deferred all the time.
> > >
> > > Signed-off-by: Yang Shi
> > > ---
> > >  include/linux/memcontrol.h |  7 +++---
> > >  mm/vmscan.c                | 48 +++++++++++++++++++++++++-------------
> > >  2 files changed, 36 insertions(+), 19 deletions(-)
> > >
> > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > index 62b888b88a5f..e0384367e07d 100644
> > > --- a/include/linux/memcontrol.h
> > > +++ b/include/linux/memcontrol.h
> > > @@ -93,12 +93,13 @@ struct lruvec_stat {
> > >  };
> > >
> > >  /*
> > > - * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
> > > - * which have elements charged to this memcg.
> > > + * Bitmap and deferred work of shrinker::id corresponding to memcg-aware
> > > + * shrinkers, which have elements charged to this memcg.
> > >   */
> > >  struct shrinker_info {
> > >         struct rcu_head rcu;
> > > -       unsigned long map[];
> > > +       unsigned long *map;
> > > +       atomic_long_t *nr_deferred;
> > >  };
> > >
> > >  /*
> > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > index 256896d157d4..20be0db291fe 100644
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -187,16 +187,21 @@ static DECLARE_RWSEM(shrinker_rwsem);
> > >  #ifdef CONFIG_MEMCG
> > >  static int shrinker_nr_max;
> > >
> > > +#define NR_MAX_TO_SHR_MAP_SIZE(nr_max) \
> > > +       ((nr_max / BITS_PER_LONG + 1) * sizeof(unsigned long))
> >
> > Could have been part of patch 4 already. And yeah, using DIV_ROUND_UP(), as
> > being hidden in a macro makes the "shorter statement" benefit disappear :)
> >
> > > +
> > >  static void free_shrinker_info_rcu(struct rcu_head *head)
> > >  {
> > >         kvfree(container_of(head, struct shrinker_info, rcu));
> > >  }
> > >
> > >  static int expand_one_shrinker_info(struct mem_cgroup *memcg,
> > > -                                   int size, int old_size)
> > > +                                   int m_size, int d_size,
> > > +                                   int old_m_size, int old_d_size)
> > >  {
> > >         struct shrinker_info *new, *old;
> > >         int nid;
> > > +       int size = m_size + d_size;
> > >
> > >         for_each_node(nid) {
> > >                 old = rcu_dereference_protected(
> > > @@ -209,9 +214,15 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
> > >                 if (!new)
> > >                         return -ENOMEM;
> > >
> > > -               /* Set all old bits, clear all new bits */
> > > -               memset(new->map, (int)0xff, old_size);
> > > -               memset((void *)new->map + old_size, 0, size - old_size);
> > > +               new->map = (unsigned long *)(new + 1);
> > > +               new->nr_deferred = (void *)new->map + m_size;
> >
> > This better be aligned to sizeof(atomic_long_t). Can we be sure about that?
>
> Good point. No, not if unsigned long is 32 bit on some 64 bit machines. I
> think we could just change map to "u64" and guarantee struct shrinker_info
> is aligned to 64 bit.
>
> > Also it's all quite ugly and complex. Is it worth it? What about just leaving
> > map as it is and allocating a nr_deferred array separately, i.e.:
> >
> > struct shrinker_info {
> >         struct rcu_head rcu;
> >         atomic_long_t *nr_deferred; // allocated separately
> >         unsigned long map[];
> > };
>
> So, you mean we allocate the shrinker info with the map array in the first
> step, then allocate nr_deferred? That is ok, but I'm afraid the error
> handling may make the code not as clean as you expect, since we have to
> call kvmalloc() twice. And we still need to do all the initialization and
> copy work. So, eventually we just trade the pointer assignments for error
> handling. I'm not quite sure it is worth it.

The nested error handling might be more error prone.
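
To make the trade-off concrete, here is a rough sketch of what the
two-allocation approach could look like (untested, and
alloc_one_shrinker_info() is a made-up helper name, not something in this
patch):

static int alloc_one_shrinker_info(struct mem_cgroup *memcg,
                                   int m_size, int d_size, int nid)
{
        struct shrinker_info *info;

        /* first allocation: the struct plus the map[] flexible array */
        info = kvzalloc_node(sizeof(*info) + m_size, GFP_KERNEL, nid);
        if (!info)
                return -ENOMEM;

        /* second allocation: the per-memcg deferred counters */
        info->nr_deferred = kvzalloc_node(d_size, GFP_KERNEL, nid);
        if (!info->nr_deferred) {
                kvfree(info);
                return -ENOMEM;
        }

        rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
        return 0;
}

The unwind stays contained in one helper, but free_shrinker_info() and
free_shrinker_info_rcu() would then also have to free nr_deferred
separately, which is exactly the extra cleanup being weighed above.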
> > > +               /* map: set all old bits, clear all new bits */
> > > +               memset(new->map, (int)0xff, old_m_size);
> > > +               memset((void *)new->map + old_m_size, 0, m_size - old_m_size);
> > > +               /* nr_deferred: copy old values, clear all new values */
> > > +               memcpy(new->nr_deferred, old->nr_deferred, old_d_size);
> > > +               memset((void *)new->nr_deferred + old_d_size, 0, d_size - old_d_size);
> > >
> > >                 rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, new);
> > >                 call_rcu(&old->rcu, free_shrinker_info_rcu);
> > > @@ -226,9 +237,6 @@ void free_shrinker_info(struct mem_cgroup *memcg)
> > >         struct shrinker_info *info;
> > >         int nid;
> > >
> > > -       if (mem_cgroup_is_root(memcg))
> > > -               return;
> > > -
> > >         for_each_node(nid) {
> > >                 pn = mem_cgroup_nodeinfo(memcg, nid);
> > >                 info = rcu_dereference_protected(pn->shrinker_info, true);
> > > @@ -242,12 +250,13 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
> > >  {
> > >         struct shrinker_info *info;
> > >         int nid, size, ret = 0;
> > > -
> > > -       if (mem_cgroup_is_root(memcg))
> > > -               return 0;
> > > +       int m_size, d_size = 0;
> > >
> > >         down_write(&shrinker_rwsem);
> > > -       size = (shrinker_nr_max / BITS_PER_LONG + 1) * sizeof(unsigned long);
> > > +       m_size = NR_MAX_TO_SHR_MAP_SIZE(shrinker_nr_max);
> > > +       d_size = shrinker_nr_max * sizeof(atomic_long_t);
> > > +       size = m_size + d_size;
> > > +
> > >         for_each_node(nid) {
> > >                 info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid);
> > >                 if (!info) {
> > > @@ -255,6 +264,8 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
> > >                         ret = -ENOMEM;
> > >                         break;
> > >                 }
> > > +               info->map = (unsigned long *)(info + 1);
> > > +               info->nr_deferred = (void *)info->map + m_size;
> > >                 rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
> > >         }
> > >         up_write(&shrinker_rwsem);
> > > @@ -266,10 +277,16 @@ static int expand_shrinker_info(int new_id)
> > >  {
> > >         int size, old_size, ret = 0;
> > >         int new_nr_max = new_id + 1;
> > > +       int m_size, d_size = 0;
> > > +       int old_m_size, old_d_size = 0;
> > >         struct mem_cgroup *memcg;
> > >
> > > -       size = (new_nr_max / BITS_PER_LONG + 1) * sizeof(unsigned long);
> > > -       old_size = (shrinker_nr_max / BITS_PER_LONG + 1) * sizeof(unsigned long);
> > > +       m_size = NR_MAX_TO_SHR_MAP_SIZE(new_nr_max);
> > > +       d_size = new_nr_max * sizeof(atomic_long_t);
> > > +       size = m_size + d_size;
> > > +       old_m_size = NR_MAX_TO_SHR_MAP_SIZE(shrinker_nr_max);
> > > +       old_d_size = shrinker_nr_max * sizeof(atomic_long_t);
> > > +       old_size = old_m_size + old_d_size;
> > >         if (size <= old_size)
> > >                 goto out;
> > >
> > > @@ -278,9 +295,8 @@ static int expand_shrinker_info(int new_id)
> > >
> > >         memcg = mem_cgroup_iter(NULL, NULL, NULL);
> > >         do {
> > > -               if (mem_cgroup_is_root(memcg))
> > > -                       continue;
> > > -               ret = expand_one_shrinker_info(memcg, size, old_size);
> > > +               ret = expand_one_shrinker_info(memcg, m_size, d_size,
> > > +                                              old_m_size, old_d_size);
> > >                 if (ret) {
> > >                         mem_cgroup_iter_break(NULL, memcg);
> > >                         goto out;
> > >
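
As an aside on the alignment question above: if the single-allocation
layout is kept, one way to avoid depending on the relative sizes of
unsigned long and atomic_long_t would be to round the map size up before
carving out the nr_deferred array. A minimal sketch (untested, not part
of this patch) against the alloc_shrinker_info() loop:

        /* round the bitmap size up so nr_deferred starts aligned */
        m_size = ALIGN(NR_MAX_TO_SHR_MAP_SIZE(shrinker_nr_max),
                       sizeof(atomic_long_t));
        d_size = shrinker_nr_max * sizeof(atomic_long_t);

        info = kvzalloc_node(sizeof(*info) + m_size + d_size, GFP_KERNEL, nid);
        if (!info) {
                ret = -ENOMEM;
                break;
        }
        info->map = (unsigned long *)(info + 1);
        info->nr_deferred = (void *)info->map + m_size;

This still assumes (info + 1) is itself suitably aligned, which holds as
long as sizeof(struct shrinker_info) is a multiple of the alignment of
atomic_long_t; the separate-allocation layout sidesteps the question
entirely.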