From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v10 03/14] mm/vmscan.c: refactor shrink_node()
From: Miaohe Lin <linmiaohe@huawei.com>
To: Yu Zhao, Stephen Rothwell
CC: Andi Kleen, Andrew Morton, Aneesh Kumar, Barry Song <21cnbao@gmail.com>,
    Catalin Marinas, Dave Hansen, Hillf Danton, Jens Axboe, Jesse Barnes,
    Johannes Weiner, Jonathan Corbet, Linus Torvalds, Matthew Wilcox,
    Mel Gorman, Michael Larabel, Michal Hocko, Mike Rapoport, Rik van Riel,
    Vlastimil Babka, Will Deacon, Ying Huang, Barry Song, Brian Geffon,
    Jan Alexander Steffens, Oleksandr Natalenko, Steven Barrett,
    Suleiman Souhlal, Daniel Byrne, Donald Carr, Holger Hoffstätte,
    Konstantin Kharlamov, Shuang Zhai, Sofia Trinh, Vaibhav Jain
References: <20220407031525.2368067-1-yuzhao@google.com> <20220407031525.2368067-4-yuzhao@google.com>
Message-ID: <195d4677-e033-e124-144c-9ede270b4f70@huawei.com>
In-Reply-To: <20220407031525.2368067-4-yuzhao@google.com>
Date: Sat, 16 Apr 2022 14:48:10 +0800

On 2022/4/7 11:15, Yu Zhao wrote:
> This patch refactors shrink_node() to improve readability for the
> upcoming changes to mm/vmscan.c.
>
> Signed-off-by: Yu Zhao
> Reviewed-by: Barry Song
> Acked-by: Brian Geffon
> Acked-by: Jan Alexander Steffens (heftig)
> Acked-by: Oleksandr Natalenko
> Acked-by: Steven Barrett
> Acked-by: Suleiman Souhlal
> Tested-by: Daniel Byrne
> Tested-by: Donald Carr
> Tested-by: Holger Hoffstätte
> Tested-by: Konstantin Kharlamov
> Tested-by: Shuang Zhai
> Tested-by: Sofia Trinh
> Tested-by: Vaibhav Jain
> ---
>  mm/vmscan.c | 198 +++++++++++++++++++++++++++-------------------
>  1 file changed, 104 insertions(+), 94 deletions(-)
>

Looks good to me. Thanks!

Reviewed-by: Miaohe Lin

> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 1678802e03e7..2232cb55af41 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2644,6 +2644,109 @@ enum scan_balance {
>  	SCAN_FILE,
> };
>
> +static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
> +{
> +	unsigned long file;
> +	struct lruvec *target_lruvec;
> +
> +	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
> +
> +	/*
> +	 * Flush the memory cgroup stats, so that we read accurate per-memcg
> +	 * lruvec stats for heuristics.
> +	 */
> +	mem_cgroup_flush_stats();
> +
> +	/*
> +	 * Determine the scan balance between anon and file LRUs.
> +	 */
> +	spin_lock_irq(&target_lruvec->lru_lock);
> +	sc->anon_cost = target_lruvec->anon_cost;
> +	sc->file_cost = target_lruvec->file_cost;
> +	spin_unlock_irq(&target_lruvec->lru_lock);
> +
> +	/*
> +	 * Target desirable inactive:active list ratios for the anon
> +	 * and file LRU lists.
> +	 */
> +	if (!sc->force_deactivate) {
> +		unsigned long refaults;
> +
> +		refaults = lruvec_page_state(target_lruvec,
> +				WORKINGSET_ACTIVATE_ANON);
> +		if (refaults != target_lruvec->refaults[0] ||
> +			inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
> +			sc->may_deactivate |= DEACTIVATE_ANON;
> +		else
> +			sc->may_deactivate &= ~DEACTIVATE_ANON;
> +
> +		/*
> +		 * When refaults are being observed, it means a new
> +		 * workingset is being established. Deactivate to get
> +		 * rid of any stale active pages quickly.
> +		 */
> +		refaults = lruvec_page_state(target_lruvec,
> +				WORKINGSET_ACTIVATE_FILE);
> +		if (refaults != target_lruvec->refaults[1] ||
> +			inactive_is_low(target_lruvec, LRU_INACTIVE_FILE))
> +			sc->may_deactivate |= DEACTIVATE_FILE;
> +		else
> +			sc->may_deactivate &= ~DEACTIVATE_FILE;
> +	} else
> +		sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE;
> +
> +	/*
> +	 * If we have plenty of inactive file pages that aren't
> +	 * thrashing, try to reclaim those first before touching
> +	 * anonymous pages.
> +	 */
> +	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
> +	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
> +		sc->cache_trim_mode = 1;
> +	else
> +		sc->cache_trim_mode = 0;
> +
> +	/*
> +	 * Prevent the reclaimer from falling into the cache trap: as
> +	 * cache pages start out inactive, every cache fault will tip
> +	 * the scan balance towards the file LRU. And as the file LRU
> +	 * shrinks, so does the window for rotation from references.
> +	 * This means we have a runaway feedback loop where a tiny
> +	 * thrashing file LRU becomes infinitely more attractive than
> +	 * anon pages. Try to detect this based on file LRU size.
> +	 */
> +	if (!cgroup_reclaim(sc)) {
> +		unsigned long total_high_wmark = 0;
> +		unsigned long free, anon;
> +		int z;
> +
> +		free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
> +		file = node_page_state(pgdat, NR_ACTIVE_FILE) +
> +			node_page_state(pgdat, NR_INACTIVE_FILE);
> +
> +		for (z = 0; z < MAX_NR_ZONES; z++) {
> +			struct zone *zone = &pgdat->node_zones[z];
> +
> +			if (!managed_zone(zone))
> +				continue;
> +
> +			total_high_wmark += high_wmark_pages(zone);
> +		}
> +
> +		/*
> +		 * Consider anon: if that's low too, this isn't a
> +		 * runaway file reclaim problem, but rather just
> +		 * extreme pressure. Reclaim as per usual then.
> +		 */
> +		anon = node_page_state(pgdat, NR_INACTIVE_ANON);
> +
> +		sc->file_is_tiny =
> +			file + free <= total_high_wmark &&
> +			!(sc->may_deactivate & DEACTIVATE_ANON) &&
> +			anon >> sc->priority;
> +	}
> +}
> +
>  /*
>   * Determine how aggressively the anon and file LRU lists should be
>   * scanned. The relative value of each set of LRU lists is determined
> @@ -3114,109 +3217,16 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>  	unsigned long nr_reclaimed, nr_scanned;
>  	struct lruvec *target_lruvec;
>  	bool reclaimable = false;
> -	unsigned long file;
>
>  	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
>
>  again:
> -	/*
> -	 * Flush the memory cgroup stats, so that we read accurate per-memcg
> -	 * lruvec stats for heuristics.
> -	 */
> -	mem_cgroup_flush_stats();
> -
>  	memset(&sc->nr, 0, sizeof(sc->nr));
>
>  	nr_reclaimed = sc->nr_reclaimed;
>  	nr_scanned = sc->nr_scanned;
>
> -	/*
> -	 * Determine the scan balance between anon and file LRUs.
> -	 */
> -	spin_lock_irq(&target_lruvec->lru_lock);
> -	sc->anon_cost = target_lruvec->anon_cost;
> -	sc->file_cost = target_lruvec->file_cost;
> -	spin_unlock_irq(&target_lruvec->lru_lock);
> -
> -	/*
> -	 * Target desirable inactive:active list ratios for the anon
> -	 * and file LRU lists.
> -	 */
> -	if (!sc->force_deactivate) {
> -		unsigned long refaults;
> -
> -		refaults = lruvec_page_state(target_lruvec,
> -				WORKINGSET_ACTIVATE_ANON);
> -		if (refaults != target_lruvec->refaults[0] ||
> -			inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
> -			sc->may_deactivate |= DEACTIVATE_ANON;
> -		else
> -			sc->may_deactivate &= ~DEACTIVATE_ANON;
> -
> -		/*
> -		 * When refaults are being observed, it means a new
> -		 * workingset is being established. Deactivate to get
> -		 * rid of any stale active pages quickly.
> -		 */
> -		refaults = lruvec_page_state(target_lruvec,
> -				WORKINGSET_ACTIVATE_FILE);
> -		if (refaults != target_lruvec->refaults[1] ||
> -			inactive_is_low(target_lruvec, LRU_INACTIVE_FILE))
> -			sc->may_deactivate |= DEACTIVATE_FILE;
> -		else
> -			sc->may_deactivate &= ~DEACTIVATE_FILE;
> -	} else
> -		sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE;
> -
> -	/*
> -	 * If we have plenty of inactive file pages that aren't
> -	 * thrashing, try to reclaim those first before touching
> -	 * anonymous pages.
> -	 */
> -	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
> -	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
> -		sc->cache_trim_mode = 1;
> -	else
> -		sc->cache_trim_mode = 0;
> -
> -	/*
> -	 * Prevent the reclaimer from falling into the cache trap: as
> -	 * cache pages start out inactive, every cache fault will tip
> -	 * the scan balance towards the file LRU. And as the file LRU
> -	 * shrinks, so does the window for rotation from references.
> -	 * This means we have a runaway feedback loop where a tiny
> -	 * thrashing file LRU becomes infinitely more attractive than
> -	 * anon pages. Try to detect this based on file LRU size.
> -	 */
> -	if (!cgroup_reclaim(sc)) {
> -		unsigned long total_high_wmark = 0;
> -		unsigned long free, anon;
> -		int z;
> -
> -		free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
> -		file = node_page_state(pgdat, NR_ACTIVE_FILE) +
> -			node_page_state(pgdat, NR_INACTIVE_FILE);
> -
> -		for (z = 0; z < MAX_NR_ZONES; z++) {
> -			struct zone *zone = &pgdat->node_zones[z];
> -			if (!managed_zone(zone))
> -				continue;
> -
> -			total_high_wmark += high_wmark_pages(zone);
> -		}
> -
> -		/*
> -		 * Consider anon: if that's low too, this isn't a
> -		 * runaway file reclaim problem, but rather just
> -		 * extreme pressure. Reclaim as per usual then.
> -		 */
> -		anon = node_page_state(pgdat, NR_INACTIVE_ANON);
> -
> -		sc->file_is_tiny =
> -			file + free <= total_high_wmark &&
> -			!(sc->may_deactivate & DEACTIVATE_ANON) &&
> -			anon >> sc->priority;
> -	}
> +	prepare_scan_count(pgdat, sc);
>
>  	shrink_node_memcgs(pgdat, sc);
>
>
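
For anyone skimming the thread: with the scan-balance heuristics moved into
prepare_scan_count(), the top of shrink_node() after this patch reads roughly
as the sketch below (taken from the hunk context above; the retry loop and the
rest of the function, which this patch does not touch, are elided).

	static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
	{
		unsigned long nr_reclaimed, nr_scanned;
		struct lruvec *target_lruvec;
		bool reclaimable = false;

		target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
	again:
		memset(&sc->nr, 0, sizeof(sc->nr));

		nr_reclaimed = sc->nr_reclaimed;
		nr_scanned = sc->nr_scanned;

		/* anon/file scan-balance heuristics now live in one helper */
		prepare_scan_count(pgdat, sc);

		shrink_node_memcgs(pgdat, sc);

		/* ... remainder of shrink_node() unchanged by this patch ... */
	}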