From: Yang Shi
Date: Tue, 15 Dec 2020 13:57:47 -0800
Subject: Re: [v2 PATCH 5/9] mm: memcontrol: add per memcg shrinker nr_deferred
To: Johannes Weiner
Cc: Dave Chinner, Roman Gushchin, Kirill Tkhai, Shakeel Butt, Michal Hocko,
	Andrew Morton, Linux MM, Linux FS-devel Mailing List,
	Linux Kernel Mailing List
In-Reply-To: <20201215144516.GE379720@cmpxchg.org>
References: <20201214223722.232537-1-shy828301@gmail.com>
	<20201214223722.232537-6-shy828301@gmail.com>
	<20201215022233.GL3913616@dread.disaster.area>
	<20201215144516.GE379720@cmpxchg.org>

On Tue, Dec 15, 2020 at 6:47 AM Johannes Weiner wrote:
>
> On Tue, Dec 15, 2020 at 01:22:33PM +1100, Dave Chinner wrote:
> > On Mon, Dec 14, 2020 at 02:37:18PM -0800, Yang Shi wrote:
> > > Currently the number of deferred objects is per shrinker, but some
> > > slabs, for example the vfs inode/dentry caches, are per memcg. This
> > > results in poor isolation among memcgs.
> > >
> > > The deferred objects are typically generated by __GFP_NOFS
> > > allocations. One memcg with excessive __GFP_NOFS allocations may blow
> > > up its deferred count, and other innocent memcgs may then suffer from
> > > over-shrinking, excessive reclaim latency, etc.
> > >
> > > For example, two workloads run in memcgA and memcgB respectively, and
> > > the workload in B is vfs heavy. If the workload in A generates
> > > excessive deferred objects, B's vfs caches might be hit heavily (half
> > > of the caches dropped) by B's limit reclaim or by global reclaim.
> > >
> > > We observed this hit in our production environment, which was running
> > > a vfs heavy workload, as shown in the tracing log below:
> > >
> > > <...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > > nid: 1 objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721
> > > cache items 246404277 delta 31345 total_scan 123202138
> > > <...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > > nid: 1 unused scan count 3641681686040 new scan count 3641798379189 total_scan 602
> > > last shrinker return val 123186855
> > >
> > > The vfs cache to page cache ratio was 10:1 on this machine, and half
> > > of the vfs caches were dropped. This also caused a significant amount
> > > of page cache to be dropped due to inode eviction.
> > >
> > > Making nr_deferred per memcg for memcg aware shrinkers would solve the
> > > unfairness and bring better isolation.
> > >
> > > When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the
> > > shrinker's nr_deferred is used. Non memcg aware shrinkers use the
> > > shrinker's nr_deferred all the time.
> > >
> > > Signed-off-by: Yang Shi
> > > ---
> > >  include/linux/memcontrol.h |   9 +++
> > >  mm/memcontrol.c            | 110 ++++++++++++++++++++++++++++++++++++-
> > >  mm/vmscan.c                |   4 ++
> > >  3 files changed, 120 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > index 922a7f600465..1b343b268359 100644
> > > --- a/include/linux/memcontrol.h
> > > +++ b/include/linux/memcontrol.h
> > > @@ -92,6 +92,13 @@ struct lruvec_stat {
> > >  	long count[NR_VM_NODE_STAT_ITEMS];
> > >  };
> > >
> > > +
> > > +/* Shrinker::id indexed nr_deferred of memcg-aware shrinkers. */
> > > +struct memcg_shrinker_deferred {
> > > +	struct rcu_head rcu;
> > > +	atomic_long_t nr_deferred[];
> > > +};
> >
> > So you're effectively copying and pasting the memcg_shrinker_map
> > infrastructure and doubling the number of allocations/frees required
> > to set up/tear down a memcg? Why not add it to struct
> > memcg_shrinker_map like this:
> >
> > struct memcg_shrinker_map {
> > 	struct rcu_head	rcu;
> > 	unsigned long	*map;
> > 	atomic_long_t	*nr_deferred;
> > };
> >
> > And when you dynamically allocate the structure, set the map and
> > nr_deferred pointers to the correct offsets in the allocated range.
> >
> > Then this patch really only changes the size of the chunk being
> > allocated, sets up the pointers, and copies the relevant data from
> > the old chunk to the new one.
>
> Fully agreed.

Thanks, folks. This idea was discussed with Roman in the earlier emails.
I agree it would make the code neater. Will do it in v3 (a rough sketch
of what I have in mind is at the bottom of this mail).

> In the longer term, it may be nice to further expand this and make it
> the generalized intersection between cgroup, node and shrinkers.
>
> There is large overlap with list_lru, e.g. data of identical scope and
> lifetime, but duplicative callbacks and management. If we folded
> list_lru_memcg into the above data structure, we could also generalize
> and reuse the existing callbacks.

Yes, agreed. We should look further into combining and deduplicating all
these pieces for the long run.
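
To make "set the pointers to the correct offsets" concrete, here is a
rough, untested sketch of the combined allocation. The helper name
alloc_shrinker_map() and its parameters are my assumptions for v3, not
code from this patch:

struct memcg_shrinker_map {
	struct rcu_head rcu;
	unsigned long *map;
	atomic_long_t *nr_deferred;
};

static struct memcg_shrinker_map *alloc_shrinker_map(int nid, int map_size,
						     int defer_size)
{
	struct memcg_shrinker_map *new;

	/* One chunk holds the struct, the bitmap and the counters. */
	new = kvzalloc_node(sizeof(*new) + map_size + defer_size,
			    GFP_KERNEL, nid);
	if (!new)
		return NULL;

	/*
	 * The bitmap sits right behind the struct, and the deferred
	 * counters right behind the bitmap. map_size is a multiple of
	 * sizeof(unsigned long), so the atomic_long_t array that follows
	 * stays naturally aligned.
	 */
	new->map = (unsigned long *)(new + 1);
	new->nr_deferred = (atomic_long_t *)((void *)new->map + map_size);

	return new;
}

With this layout the expand path stays cheap when the shrinker id space
grows: one allocation, two memcpy()s (map and nr_deferred) and one RCU
pointer swap, essentially the same shape as the current map-only code.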
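
And for the read side, a sketch of how a memcg aware shrinker could then
batch-exchange its per-memcg deferred count. Field and function names
here are assumptions, modeled on the existing shrinker_map lookup in
shrink_slab_memcg():

static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker,
				   struct mem_cgroup *memcg)
{
	struct memcg_shrinker_map *map;
	long nr;

	/* Same RCU protocol the shrinker bitmap already uses. */
	rcu_read_lock();
	map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map);
	nr = atomic_long_xchg(&map->nr_deferred[shrinker->id], 0);
	rcu_read_unlock();

	return nr;
}

Since map and nr_deferred live in one RCU-managed chunk, a single
dereference covers both, which is another small win of the combined
layout suggested above.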