Date: Wed, 6 Jan 2021 15:56:02 -0800
From: Andrew Morton
To: Sudarshan Rajagopalan
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vladimir Davydov, Dave Chinner
Subject: Re: [PATCH] mm: vmscan: support complete shrinker reclaim
Message-Id: <20210106155602.6ce48dfe88ca7b94986b329b@linux-foundation.org>
In-Reply-To: <2d1f1dbb7e018ad02a9e7af36a8c86397a1598a7.1609892546.git.sudaraja@codeaurora.org>
References: <2d1f1dbb7e018ad02a9e7af36a8c86397a1598a7.1609892546.git.sudaraja@codeaurora.org>

(cc's added)

On Tue, 5 Jan 2021 16:43:38 -0800 Sudarshan Rajagopalan wrote:

> Ensure that shrinkers are given the option to completely drop
> their caches even when their caches are smaller than the batch size.
> This change helps improve memory headroom by ensuring that under
> significant memory pressure shrinkers can drop all of their caches.
> This change only attempts to more aggressively call the shrinkers
> during background memory reclaim, in order to avoid hurting the
> performance of direct memory reclaim.
>
> ...
>
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -424,6 +424,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  	long batch_size = shrinker->batch ? shrinker->batch
>  					  : SHRINK_BATCH;
>  	long scanned = 0, next_deferred;
> +	long min_cache_size = batch_size;
> +
> +	if (current_is_kswapd())
> +		min_cache_size = 0;
>
>  	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
>  		nid = 0;
> @@ -503,7 +507,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  	 * scanning at high prio and therefore should try to reclaim as much as
>  	 * possible.
>  	 */
> -	while (total_scan >= batch_size ||
> +	while (total_scan > min_cache_size ||
>  	       total_scan >= freeable) {
>  		unsigned long ret;
>  		unsigned long nr_to_scan = min(batch_size, total_scan);

I don't really see the need to exclude direct reclaim from this fix.

And if we're leaving unscanned objects behind in this situation, the
current code simply isn't working as intended, and 0b1fb40a3b1 ("mm:
vmscan: shrink all slab objects if tight on memory") either failed to
achieve its objective or was later broken?

Vladimir, could you please take a look?