Date: Tue, 15 Dec 2020 15:45:16 +0100
From: Johannes Weiner
To: Dave Chinner
Cc: Yang Shi, guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com,
	mhocko@suse.com, akpm@linux-foundation.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [v2 PATCH 5/9] mm: memcontrol: add per memcg shrinker nr_deferred
Message-ID: <20201215144516.GE379720@cmpxchg.org>
References: <20201214223722.232537-1-shy828301@gmail.com>
	<20201214223722.232537-6-shy828301@gmail.com>
	<20201215022233.GL3913616@dread.disaster.area>
In-Reply-To: <20201215022233.GL3913616@dread.disaster.area>

On Tue, Dec 15, 2020 at 01:22:33PM +1100, Dave Chinner wrote:
> On Mon, Dec 14, 2020 at 02:37:18PM -0800, Yang Shi wrote:
> > Currently the number of deferred objects is per shrinker, but some slabs,
> > for example the vfs inode/dentry caches, are per memcg. This results in
> > poor isolation among memcgs.
> >
> > Deferred objects are typically generated by __GFP_NOFS allocations; one
> > memcg with excessive __GFP_NOFS allocations may blow up its deferred
> > objects, and other, innocent memcgs may then suffer from over-shrinking,
> > excessive reclaim latency, etc.
> >
> > For example, say two workloads run in memcgA and memcgB respectively,
> > and the workload in B is vfs-heavy. If the workload in A generates
> > excessive deferred objects, B's vfs cache might be hit heavily (dropping
> > half of the caches) by B's limit reclaim or by global reclaim.
> >
> > We observed this in our production environment, which was running a
> > vfs-heavy workload, as shown in the tracing log below:
> >
> > <...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > nid: 1 objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721
> > cache items 246404277 delta 31345 total_scan 123202138
> > <...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > nid: 1 unused scan count 3641681686040 new scan count 3641798379189 total_scan 602
> > last shrinker return val 123186855
> >
> > The vfs cache to page cache ratio was 10:1 on this machine, and half of
> > the caches were dropped. This also caused a significant amount of page
> > cache to be dropped due to inode eviction.
> >
> > Making nr_deferred per memcg for memcg-aware shrinkers would solve the
> > unfairness and bring better isolation.
> >
> > When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the
> > shrinker's own nr_deferred is used. Non-memcg-aware shrinkers use the
> > shrinker's own nr_deferred all the time.
> >
> > Signed-off-by: Yang Shi
> > ---
> >  include/linux/memcontrol.h |   9 +++
> >  mm/memcontrol.c            | 110 ++++++++++++++++++++++++++++++++++++-
> >  mm/vmscan.c                |   4 ++
> >  3 files changed, 120 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 922a7f600465..1b343b268359 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -92,6 +92,13 @@ struct lruvec_stat {
> >  	long count[NR_VM_NODE_STAT_ITEMS];
> >  };
> >
> > +
> > +/* Shrinker::id indexed nr_deferred of memcg-aware shrinkers. */
> > +struct memcg_shrinker_deferred {
> > +	struct rcu_head rcu;
> > +	atomic_long_t nr_deferred[];
> > +};
>
> So you're effectively copying and pasting the memcg_shrinker_map
> infrastructure and doubling the number of allocations/frees required
> to set up/tear down a memcg? Why not add it to struct
> memcg_shrinker_map like this:
>
> 	struct memcg_shrinker_map {
> 		struct rcu_head	rcu;
> 		unsigned long	*map;
> 		atomic_long_t	*nr_deferred;
> 	};
>
> And when you dynamically allocate the structure, set the map and
> nr_deferred pointers to the correct offsets in the allocated range.
>
> Then this patch really only changes the size of the chunk being
> allocated, sets up the pointers, and copies the relevant data from the
> old structure to the new one.

Fully agreed; a minimal sketch of that layout is appended below.

In the longer term, it may be nice to expand this further and make it
the generalized intersection between cgroup, node, and shrinkers. There
is large overlap with list_lru, for example: data of identical scope
and lifetime, but duplicative callbacks and management. If we folded
list_lru_memcg into the above data structure, we could also generalize
and reuse the existing callbacks.
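
To make the single-allocation idea concrete, here is a minimal
userspace sketch of the layout Dave describes. It is illustrative
only: the helper name and the plain long counters are stand-ins (the
kernel side would use kvzalloc(), atomic_long_t and RCU to publish the
map), but the pointer fix-ups into one chunk are the technique itself:

	#include <stdio.h>
	#include <stdlib.h>

	struct memcg_shrinker_map {
		/* struct rcu_head rcu;  (kernel-only, omitted here) */
		unsigned long *map;	/* bitmap of registered shrinker IDs */
		long *nr_deferred;	/* per-shrinker deferred counts;
					   atomic_long_t in the kernel */
	};

	/*
	 * One allocation carries the struct, the bitmap and the counters;
	 * the two pointers are then aimed at the right offsets in the tail.
	 */
	static struct memcg_shrinker_map *alloc_shrinker_map(int shrinker_nr_max)
	{
		size_t bits = 8 * sizeof(unsigned long);
		size_t map_size = (shrinker_nr_max + bits - 1) / bits *
				  sizeof(unsigned long);
		size_t defer_size = shrinker_nr_max * sizeof(long);
		struct memcg_shrinker_map *m;

		m = calloc(1, sizeof(*m) + map_size + defer_size);
		if (!m)
			return NULL;

		m->map = (unsigned long *)(m + 1);
		m->nr_deferred = (long *)((char *)m->map + map_size);
		return m;
	}

	int main(void)
	{
		struct memcg_shrinker_map *m = alloc_shrinker_map(64);

		if (!m)
			return 1;
		m->map[0] |= 1UL << 3;	/* shrinker ID 3 is registered */
		m->nr_deferred[3] = 42;	/* and has deferred work */
		printf("deferred for shrinker 3: %ld\n", m->nr_deferred[3]);
		free(m);
		return 0;
	}

Growing the arrays when shrinker_nr_max increases is then one bigger
allocation plus two memcpy()s from the old offsets, which is the
"copies the relevant data" step mentioned above.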
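
As for folding list_lru_memcg in, purely as an illustration of the
direction (nothing like this exists today; the struct and field names
below are made up), the generalized per-memcg, per-node intersection
could bundle all shrinker-related state into one RCU-managed
allocation:

	/* Hypothetical consolidation; not actual kernel code. */
	struct memcg_shrinker_info {
		struct rcu_head rcu;
		unsigned long *map;		/* active shrinker IDs */
		atomic_long_t *nr_deferred;	/* deferred counts */
		struct list_lru_one *lrus;	/* per-shrinker object lists,
						   as list_lru_memcg keeps now */
	};

That would give the bitmap, the deferred counts and the LRU lists one
shared lifetime and a single resize path when the shrinker ID space
grows.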