From: Yang Shi <shy828301@gmail.com>
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v4 PATCH 11/11] mm: vmscan: shrink deferred objects proportional to priority
Date: Thu, 21 Jan 2021 15:06:21 -0800
Message-Id: <20210121230621.654304-12-shy828301@gmail.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210121230621.654304-1-shy828301@gmail.com>
References: <20210121230621.654304-1-shy828301@gmail.com>

The number of deferred objects might wind up to an absurd value, which
results in the slab caches being clamped hard. This is undesirable for
sustaining the working set.

So shrink deferred objects proportionally to the reclaim priority and cap
nr_deferred to twice the number of cache items.

The idea is borrowed from Dave Chinner's patch:
https://lore.kernel.org/linux-xfs/20191031234618.15403-13-david@fromorbit.com/

Tested with a kernel build and a VFS-metadata-heavy workload; no
regression has been spotted so far, though some corner cases may still
regress.
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/vmscan.c | 40 +++++-----------------------------------
 1 file changed, 5 insertions(+), 35 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index e73f200ffd2d..bb254d39339f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -659,7 +659,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 */
 	nr = count_nr_deferred(shrinker, shrinkctl);
 
-	total_scan = nr;
 	if (shrinker->seeks) {
 		delta = freeable >> priority;
 		delta *= 4;
@@ -673,37 +672,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		delta = freeable / 2;
 	}
 
+	total_scan = nr >> priority;
 	total_scan += delta;
-	if (total_scan < 0) {
-		pr_err("shrink_slab: %pS negative objects to delete nr=%ld\n",
-		       shrinker->scan_objects, total_scan);
-		total_scan = freeable;
-		next_deferred = nr;
-	} else
-		next_deferred = total_scan;
-
-	/*
-	 * We need to avoid excessive windup on filesystem shrinkers
-	 * due to large numbers of GFP_NOFS allocations causing the
-	 * shrinkers to return -1 all the time. This results in a large
-	 * nr being built up so when a shrink that can do some work
-	 * comes along it empties the entire cache due to nr >>>
-	 * freeable. This is bad for sustaining a working set in
-	 * memory.
-	 *
-	 * Hence only allow the shrinker to scan the entire cache when
-	 * a large delta change is calculated directly.
-	 */
-	if (delta < freeable / 4)
-		total_scan = min(total_scan, freeable / 2);
-
-	/*
-	 * Avoid risking looping forever due to too large nr value:
-	 * never try to free more than twice the estimate number of
-	 * freeable entries.
-	 */
-	if (total_scan > freeable * 2)
-		total_scan = freeable * 2;
+	total_scan = min(total_scan, (2 * freeable));
 
 	trace_mm_shrink_slab_start(shrinker, shrinkctl, nr, freeable, delta,
 				   total_scan, priority);
@@ -742,10 +713,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		cond_resched();
 	}
 
-	if (next_deferred >= scanned)
-		next_deferred -= scanned;
-	else
-		next_deferred = 0;
+	next_deferred = max_t(long, (nr - scanned), 0) + total_scan;
+	next_deferred = min(next_deferred, (2 * freeable));
+
 	/*
 	 * move the unused scan count back into the shrinker in a
 	 * manner that handles concurrent updates.
-- 
2.26.2