Subject: Re: [v2 PATCH 9/9] mm: vmscan: shrink deferred objects proportional to priority
From: Yang Shi
To: Dave Chinner
Cc: Roman Gushchin, Kirill Tkhai, Shakeel Butt, Johannes Weiner, Michal Hocko,
    Andrew Morton, Linux MM, Linux FS-devel Mailing List,
    Linux Kernel Mailing List
Date: Tue, 15 Dec 2020 15:59:59 -0800
In-Reply-To: <20201215032337.GP3913616@dread.disaster.area>
References: <20201214223722.232537-1-shy828301@gmail.com>
            <20201214223722.232537-10-shy828301@gmail.com>
            <20201215032337.GP3913616@dread.disaster.area>

On Mon, Dec 14, 2020 at 7:23 PM Dave Chinner wrote:
>
> On Mon, Dec 14, 2020 at 02:37:22PM -0800, Yang Shi wrote:
> > The number of deferred objects might wind up to an absurd number, and
> > that results in clamping of slab objects. It is undesirable for
> > sustaining the working set.
> >
> > So shrink deferred objects proportionally to priority and cap
> > nr_deferred to twice the number of cache items.
>
> This completely changes the work accrual algorithm without any
> explanation of how it works, what the theory behind the algorithm
> is, what the work accrual ramp up and damp down curve looks like,
> what workloads it is designed to benefit, how it affects page
> cache vs slab cache balance and system performance, what OOM stress
> testing has been done to ensure pure slab cache pressure workloads
> don't easily trigger OOM kills, etc.

Actually this patch does two things:

1. Take priority into account when scanning nr_deferred.
2. Cap nr_deferred to twice the number of freeable objects.

The idea is borrowed from your patch:
https://lore.kernel.org/linux-xfs/20191031234618.15403-13-david@fromorbit.com/
The difference is that your patch restricts the change to kswapd only,
while mine extends it to direct reclaim and limit reclaim.

>
> You're going to need a lot more supporting evidence that this is a
> well thought out algorithm that doesn't obviously introduce
> regressions. The current code might fall down in one corner case,
> but there are an awful lot of corner cases where it does work.
> Please provide some evidence that it not only works in your corner
> case, but also doesn't introduce regressions for other slab cache
> intensive and mixed cache intensive workloads...

I agree the change may regress some workloads out of the blue. I tested
with kernel build and VFS-metadata-heavy workloads; I wish I could cover
more. But I'm not a filesystem developer, so do you have any typical
workloads I could run to see whether they regress?
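[Editor's note: for readers skimming the thread, here is a minimal sketch of
what those two rules look like when applied together. It is an illustrative
reconstruction from the commit message and the hunk quoted below, not the
literal patch; nr, freeable, priority and total_scan follow do_shrink_slab()
in mm/vmscan.c, while the struct and function names are made up for
illustration and the placement of the cap is an assumption.]

/*
 * Illustrative model of the two proposed rules, not the literal patch.
 * Variable names follow do_shrink_slab() in mm/vmscan.c.
 */
struct deferred_model {
	long total_scan;	/* deferred work replayed on this pass */
	long next_deferred;	/* backlog carried to the next pass */
};

static struct deferred_model apply_rules(long nr, long freeable, int priority)
{
	struct deferred_model m;

	/*
	 * 1. Replay only a priority-scaled slice of the deferred backlog:
	 *    a tiny slice under light pressure (priority near 12), the
	 *    whole backlog at priority 0.
	 */
	m.total_scan = nr >> priority;

	/*
	 * 2. Never carry forward more than twice the number of freeable
	 *    objects, so the backlog can no longer wind up unbounded.
	 *    (Inferred from the commit message; the real code would also
	 *    subtract the work actually done this pass, omitted here.)
	 */
	m.next_deferred = nr < 2 * freeable ? nr : 2 * freeable;

	return m;
}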
> >
> > Signed-off-by: Yang Shi
> > ---
> >  mm/vmscan.c | 40 +++++-----------------------------------
> >  1 file changed, 5 insertions(+), 35 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 693a41e89969..58f4a383f0df 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -525,7 +525,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> >        */
> >       nr = count_nr_deferred(shrinker, shrinkctl);
> >
> > -     total_scan = nr;
> >       if (shrinker->seeks) {
> >               delta = freeable >> priority;
> >               delta *= 4;
> > @@ -539,37 +538,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> >               delta = freeable / 2;
> >       }
> >
> > +     total_scan = nr >> priority;
>
> When there is low memory pressure, this will throw away a large
> amount of the work that is deferred. If we are not deferring in
> amounts larger than ~4000 items, every pass through this code will
> zero the deferred work.
>
> Hence when we do get substantial pressure, that deferred work is no
> longer being tracked. While it may help your specific corner case,
> it's likely to significantly change the reclaim balance of slab
> caches, especially under GFP_NOFS intensive workloads where we can
> only defer the work to kswapd.
>
> Hence I think this is still a problematic approach as it doesn't
> address the reason why deferred counts are increasing out of
> control in the first place....

Per my analysis, the deferred counts for our workload are mainly
contributed by limit reclaim in multiple memcgs. So the most crucial step
is to make nr_deferred memcg aware, so that the auxiliary memcgs don't
interfere with the main workload.

If the testing would take too long, I'd prefer to drop this patch for now
since it is not that critical to our workload; I really hope to get the
nr_deferred memcg-aware part into upstream soon.

>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
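[Editor's note: to make the "~4000 items" figure above concrete, background
reclaim starts at DEF_PRIORITY, which is 12 upstream, so total_scan =
nr >> priority discards any backlog smaller than 2^12 = 4096 on a
light-pressure pass. The toy userspace calculation below (not kernel code;
the numbers are only illustrative) shows the curve.]

#include <stdio.h>

/*
 * Toy model of the proposed accounting: total_scan = nr >> priority.
 * DEF_PRIORITY is 12 upstream; reclaim walks priority from 12 down
 * toward 0 as memory pressure increases.
 */
int main(void)
{
	long nr = 4000;		/* deferred backlog just under 2^12 */
	int priority;

	for (priority = 12; priority >= 0; priority -= 3)
		printf("priority %2d -> total_scan = %ld\n",
		       priority, nr >> priority);
	return 0;
}

[At priority 12 the whole 4000-item backlog shifts to zero and is never
replayed, which is the behaviour Dave is warning about; only at priority 0
is the full backlog scanned.]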