Date: Thu, 3 Sep 2020 21:10:59 -0700
From: Andrew Morton
To: Roman Gushchin
Cc: Shakeel Butt, Johannes Weiner, Michal Hocko
Subject: Re: [PATCH] mm: workingset: ignore slab memory size when calculating shadows pressure
Message-Id: <20200903211059.7dc9530e6d988eaeefe53cf7@linux-foundation.org>
In-Reply-To: <20200903230055.1245058-1-guro@fb.com>
References: <20200903230055.1245058-1-guro@fb.com>

On Thu, 3 Sep 2020 16:00:55 -0700 Roman Gushchin wrote:

> In the memcg case count_shadow_nodes() sums the number of pages in lru
> lists and the amount of slab memory (reclaimable and non-reclaimable)
> as a baseline for the allowed number of shadow entries.
>
> It seems to be a good analogy for the !memcg case, where
> node_present_pages() is used. However, it's not quite true, as there
> are two problems:
>
> 1) Due to slab reparenting introduced by commit fb2f2b0adb98 ("mm:
>    memcg/slab: reparent memcg kmem_caches on cgroup removal") local
>    per-lruvec slab counters might be inaccurate on non-leaf levels.
>    It's the only place where local slab counters are used.
>
> 2) Shadow nodes are themselves backed by slabs. So there is a loop
>    dependency: the more shadow entries there are, the less pressure the
>    kernel applies to reclaim them.
>
> Fortunately, there is a simple way to solve both problems: slab
> counters shouldn't be taken into account by count_shadow_nodes().
>
> ...
>
> --- a/mm/workingset.c
> +++ b/mm/workingset.c
> @@ -495,10 +495,6 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
>  		for (pages = 0, i = 0; i < NR_LRU_LISTS; i++)
>  			pages += lruvec_page_state_local(lruvec,
>  							 NR_LRU_BASE + i);
> -		pages += lruvec_page_state_local(
> -			lruvec, NR_SLAB_RECLAIMABLE_B) >> PAGE_SHIFT;
> -		pages += lruvec_page_state_local(
> -			lruvec, NR_SLAB_UNRECLAIMABLE_B) >> PAGE_SHIFT;
>  	} else
>  #endif
>  		pages = node_present_pages(sc->nid);

Did this have any observable runtime effects?