Date: Thu, 22 Feb 2024 19:01:29 +0900
From: Byungchul Park <byungchul@sk.com>
To: "Huang, Ying"
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, kernel_team@skhynix.com, yuzhao@google.com,
	mgorman@suse.de
Subject: Re: [PATCH v2] mm, vmscan: don't turn on cache_trim_mode at high
 scan priorities
Message-ID: <20240222100129.GB13076@system.software.com>
References: <20240222070817.70515-1-byungchul@sk.com>
 <87sf1kj3nn.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <20240222092042.GA33967@system.software.com>
 <20240222094900.GA13076@system.software.com>
In-Reply-To: <20240222094900.GA13076@system.software.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
On Thu, Feb 22, 2024 at 06:49:00PM +0900, Byungchul Park wrote:
> On Thu, Feb 22, 2024 at 06:20:42PM +0900, Byungchul Park wrote:
> > On Thu, Feb 22, 2024 at 04:37:16PM +0800, Huang, Ying wrote:
> > > Byungchul Park writes:
> > > 
> > > > Changes from v1:
> > > > 	1. Add a comment in code describing why this change is
> > > > 	   necessary, and rewrite the commit message with how to
> > > > 	   reproduce the problem and what the result looks like in
> > > > 	   vmstat. (feedback from Andrew Morton and Yu Zhao)
> > > > 	2. Change the condition for avoiding cache_trim_mode from
> > > > 	   'sc->priority != 1' to 'sc->priority > 1' to cover cases
> > > > 	   where the priority goes all the way down to zero.
> > > > 	   (feedback from Yu Zhao)
> > > > 
> > > > --->8---
> > > > From 07e0baab368160e50b6ca35d95745168aa60e217 Mon Sep 17 00:00:00 2001
> > > > From: Byungchul Park
> > > > Date: Thu, 22 Feb 2024 14:50:17 +0900
> > > > Subject: [PATCH v2] mm, vmscan: don't turn on cache_trim_mode at high
> > > >  scan priorities
> > > > 
> > > > With cache_trim_mode on, reclaim logic doesn't bother reclaiming
> > > > anon pages.  However, it should be more careful about turning the
> > > > mode on, because the mode prevents anon pages from being reclaimed
> > > > even when there is a huge number of cold anon pages that should be
> > > > reclaimed.  Even worse, that can cause kswapd_failures to reach
> > > > MAX_RECLAIM_RETRIES, stopping kswapd until direct reclaim
> > > > eventually succeeds and resumes it.  So this is more of a bug fix
> > > > than a performance improvement.
> > > > 
> > > > The problematic behavior can be reproduced by:
> > > > 
> > > >    CONFIG_NUMA_BALANCING enabled
> > > >    sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
> > > > 
> > > >    numa node0 (8GB local memory, 16 CPUs)
> > > >    numa node1 (8GB slow tier memory, no CPUs)
> > > > 
> > > > Sequence:
> > > > 
> > > >    1) echo 3 > /proc/sys/vm/drop_caches
> > > >    2) To emulate a system whose local DRAM is full of cold memory,
> > > >       run the following dummy program and never touch the region:
> > > > 
> > > >          mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
> > > >               MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
> > > > 
> > > >    3) Run any memory-intensive workload, e.g. XSBench.
> > > >    4) Check whether numa balancing is working, i.e. whether
> > > >       promotion/demotion happens.
> > > >    5) Iterate 1) ~ 4) until kswapd stops.
> > > > 
> > > > With this, you can eventually see that promotion/demotion stop
> > > > working because kswapd has stopped due to ->kswapd_failures >=
> > > > MAX_RECLAIM_RETRIES.
> > > > 
> > > > Interesting vmstat deltas between before and after this patch:
> > > > 
> > > > -nr_inactive_anon 321935
> > > > -nr_active_anon 1780700
> > > > -nr_inactive_file 30425
> > > > -nr_active_file 14961
> > > > -pgpromote_success 356
> > > > -pgpromote_candidate 21953245
> > > > -pgactivate 1844523
> > > > -pgdeactivate 50634
> > > > -pgfault 31100294
> > > > -pgdemote_kswapd 30856
> > > > -pgscan_kswapd 1861981
> > > > -pgscan_anon 1822930
> > > > -pgscan_file 39051
> > > > -pgsteal_anon 386
> > > > -pgsteal_file 30470
> > > > -pageoutrun 30
> > > > -numa_hint_faults 27418279
> > > > -numa_pages_migrated 356
> > > > 
> > > > +nr_inactive_anon 1662306
> > > > +nr_active_anon 440303
> > > > +nr_inactive_file 27669
> > > > +nr_active_file 1654
> > > > +pgpromote_success 1314102
> > > > +pgpromote_candidate 1892525
> > > > +pgactivate 3284457
> > > > +pgdeactivate 1527504
> > > > +pgfault 6847775
> > > > +pgdemote_kswapd 2142047
> > > > +pgscan_kswapd 7496588
> > > > +pgscan_anon 7462488
> > > > +pgscan_file 34100
> > > > +pgsteal_anon 2115661
> > > > +pgsteal_file 26386
> > > > +pageoutrun 378
> > > > +numa_hint_faults 3220891
> > > > +numa_pages_migrated 1314102
> > > > 
> > > > where -: before this patch, +: after this patch
> > > > 
> > > > Signed-off-by: Byungchul Park
> > > > ---
> > > >  mm/vmscan.c | 10 +++++++++-
> > > >  1 file changed, 9 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > > index bba207f41b14..6eda59fce5ee 100644
> > > > --- a/mm/vmscan.c
> > > > +++ b/mm/vmscan.c
> > > > @@ -2266,9 +2266,17 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
> > > >  	 * If we have plenty of inactive file pages that aren't
> > > >  	 * thrashing, try to reclaim those first before touching
> > > >  	 * anonymous pages.
> > > > +	 *
> > > > +	 * However, keeping 'sc->cache_trim_mode == 1' all through
> > > > +	 * the scan priorities might lead to reclaim failure.  If
> > > > +	 * that repeats MAX_RECLAIM_RETRIES times, kswapd gets
> > > > +	 * stopped even though there are still plenty of anon pages
> > > > +	 * to reclaim, which is not desirable.  So do not use
> > > > +	 * cache_trim_mode when reclaim is not smooth, i.e. at high
> > > > +	 * scan priority.
> > > >  	 */
> > > >  	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
> > > > -	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
> > > > +	if (sc->priority > 1 && file >> sc->priority &&
> > > > +	    !(sc->may_deactivate & DEACTIVATE_FILE))
> > > >  		sc->cache_trim_mode = 1;
> > > >  	else
> > > >  		sc->cache_trim_mode = 0;
> > > 
> > > In get_scan_count(), there's the following code:
> > > 
> > > 	/*
> > > 	 * Do not apply any pressure balancing cleverness when the
> > > 	 * system is close to OOM, scan both anon and file equally
> > > 	 * (unless the swappiness setting disagrees with swapping).
> > > 	 */
> > > 	if (!sc->priority && swappiness) {
> > > 		scan_balance = SCAN_EQUAL;
> > > 		goto out;
> > > 	}
> > > 
> > > So, is swappiness 0 on your system?  Please check it.  If it's not
> > > 0, please check why this doesn't help.
> > 
> > Nice information!  Then the change should be:
> > 
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index bba207f41b14..91f9bab86e92 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2357,7 +2357,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
> >  	 * system is close to OOM, scan both anon and file equally
> >  	 * (unless the swappiness setting disagrees with swapping).
> >  	 */
> > -	if (!sc->priority && swappiness) {
> > +	if (sc->priority <= 1 && swappiness) {
> >  		scan_balance = SCAN_EQUAL;
> >  		goto out;
> >  	}
> 
> Or:
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index bba207f41b14..c54371a398b1 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -6896,7 +6896,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> 
>  		if (raise_priority || !nr_reclaimed)
>  			sc.priority--;
> -	} while (sc.priority >= 1);
> +	} while (sc.priority >= 0);
> 
>  	if (!sc.nr_reclaimed)
>  		pgdat->kswapd_failures++;

+cc Mel Gorman

I just found that this was intended.  See commit 9aa41348a8d11 ("mm:
vmscan: do not allow kswapd to scan at maximum priority").  Mel Gorman
didn't want to make kswapd too aggressive.

However, does it make sense to stop kswapd even though there are plenty
of cold anon pages to reclaim, and to make the system wait for direct
reclaim?

Thoughts?

	Byungchul

> ---
> 
> 	Byungchul
> 
> > Worth noting that the priority goes from DEF_PRIORITY down to 1 in
> > balance_pgdat() of kswapd.  I will change the fix to this one if it
> > looks more reasonable.
> > 
> > 	Byungchul