From: "Huang, Ying"
To: Byungchul Park
Subject: Re: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure
In-Reply-To: <20240305065846.GA37850@system.software.com> (Byungchul Park's message of "Tue, 5 Mar 2024 15:58:46 +0900")
References: <20240304033611.GD13332@system.software.com>
        <20240304082118.20499-1-byungchul@sk.com>
        <87zfvda1f8.fsf@yhuang6-desk2.ccr.corp.intel.com>
        <20240305023708.GA60719@system.software.com>
        <20240305024345.GB60719@system.software.com>
        <20240305040930.GA21107@system.software.com>
        <87le6x9p6u.fsf@yhuang6-desk2.ccr.corp.intel.com>
        <20240305065846.GA37850@system.software.com>
Date: Tue, 05 Mar 2024 15:04:48 +0800
Message-ID: <87cys99n1r.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13)

Byungchul Park writes:

> On Tue, Mar 05, 2024 at 02:18:33PM +0800, Huang, Ying wrote:
>> Byungchul Park writes:
>>
>> > On Tue, Mar 05, 2024 at 11:43:45AM +0900, Byungchul Park wrote:
>> >> On Tue, Mar 05, 2024 at 11:37:08AM +0900, Byungchul Park wrote:
>> >> > On Tue, Mar 05, 2024 at 09:54:19AM +0800, Huang, Ying wrote:
>> >> > > Byungchul Park writes:
>> >> > >
>> >> > > > Changes from v5:
>> >> > > > 1. Make it retry kswapd's scan priority loop with
>> >> > > >    cache_trim_mode off *only if* the mode didn't work in the
>> >> > > >    previous loop. (feedback from Huang Ying)
>> >> > > > 2. Take into account 'break's from the priority loop when making
>> >> > > >    the decision whether to retry.
>> >> > > >    (feedback from Huang Ying)
>> >> > > > 3. Update the test result in the commit message.
>> >> > > >
>> >> > > > Changes from v4:
>> >> > > > 1. Make other scans start with may_cache_trim_mode = 1.
>> >> > > >
>> >> > > > Changes from v3:
>> >> > > > 1. Update the test result in the commit message with v4.
>> >> > > > 2. Retry the whole priority loop with cache_trim_mode off again,
>> >> > > >    rather than forcing the mode off at the highest priority,
>> >> > > >    when the mode doesn't work. (feedback from Johannes Weiner)
>> >> > > >
>> >> > > > Changes from v2:
>> >> > > > 1. Change the condition to stop cache_trim_mode.
>> >> > > >
>> >> > > >    From - Stop it if it's at high scan priorities, 0 or 1.
>> >> > > >    To   - Stop it if it's at high scan priorities, 0 or 1, and
>> >> > > >           the mode didn't work in the previous turn.
>> >> > > >
>> >> > > >    (feedback from Huang Ying)
>> >> > > >
>> >> > > > 2. Change the test result in the commit message after testing
>> >> > > >    with the new logic.
>> >> > > >
>> >> > > > Changes from v1:
>> >> > > > 1. Add a comment describing why this change is necessary in code
>> >> > > >    and rewrite the commit message with how to reproduce and what
>> >> > > >    the result is using vmstat. (feedback from Andrew Morton and
>> >> > > >    Yu Zhao)
>> >> > > > 2. Change the condition to avoid cache_trim_mode from
>> >> > > >    'sc->priority != 1' to 'sc->priority > 1' to reflect cases
>> >> > > >    where the priority goes to zero all the way. (feedback from
>> >> > > >    Yu Zhao)
>> >> > > >
>> >> > > > --->8---
>> >> > > > From f811ee583158fd53d0e94d32ce5948fac4b17cfe Mon Sep 17 00:00:00 2001
>> >> > > > From: Byungchul Park
>> >> > > > Date: Mon, 4 Mar 2024 15:27:37 +0900
>> >> > > > Subject: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure
>> >> > > >
>> >> > > > With cache_trim_mode on, reclaim logic doesn't bother reclaiming anon
>> >> > > > pages. However, the mode should be used with more care because it
>> >> > > > prevents anon pages from being reclaimed even when there are huge
>> >> > > > numbers of cold anon pages that should be reclaimed. Even worse, it
>> >> > > > can drive kswapd_failures up to MAX_RECLAIM_RETRIES, stopping kswapd
>> >> > > > from functioning until direct reclaim eventually works and kswapd
>> >> > > > resumes.
>> >> > > >
>> >> > > > So kswapd needs to retry its scan priority loop with cache_trim_mode
>> >> > > > off again if the mode doesn't work for reclaim.
>> >> > > >
>> >> > > > The problematic behavior can be reproduced by:
>> >> > > >
>> >> > > >    CONFIG_NUMA_BALANCING enabled
>> >> > > >    sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
>> >> > > >    numa node0 (8GB local memory, 16 CPUs)
>> >> > > >    numa node1 (8GB slow tier memory, no CPUs)
>> >> > > >
>> >> > > > Sequence:
>> >> > > >
>> >> > > >    1) echo 3 > /proc/sys/vm/drop_caches
>> >> > > >    2) To emulate a system whose local DRAM is full of cold memory,
>> >> > > >       run the following dummy program and never touch the region
>> >> > > >       (a standalone sketch follows right after this list):
>> >> > > >
>> >> > > >          mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
>> >> > > >               MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
>> >> > > >
>> >> > > >    3) Run any memory-intensive workload, e.g. XSBench.
>> >> > > >    4) Check whether numa balancing, i.e. promotion/demotion, is working.
>> >> > > >    5) Iterate 1) ~ 4) until numa balancing stops.
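
As an aside, a minimal standalone version of the step-2 dummy program could
look like the sketch below.  Only the mmap() arguments are taken from the
changelog above; the wrapper, the error handling, and the final pause() are
illustrative additions, and the 8 GB size assumes a 64-bit build.

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /* Dummy program: populate 8 GB of anonymous memory, then never touch it. */
  int main(void)
  {
          size_t len = 8UL * 1024 * 1024 * 1024;
          void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);

          if (p == MAP_FAILED) {
                  perror("mmap");
                  return 1;
          }

          /* Keep the mapping alive without touching it again, so the
           * populated pages stay resident and cold in local DRAM. */
          pause();
          return 0;
  }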
>> >> > > >
>> >> > > > With this, you could see that promotion/demotion are not working because
>> >> > > > kswapd has stopped due to ->kswapd_failures >= MAX_RECLAIM_RETRIES.
>> >> > > >
>> >> > > > Interesting vmstat deltas before and after are:
>> >> > > >
>> >> > > >    +---------------------+----------+---------+
>> >> > > >    | interesting vmstat  | before   | after   |
>> >> > > >    +---------------------+----------+---------+
>> >> > > >    | nr_inactive_anon    | 321935   | 1664772 |
>> >> > > >    | nr_active_anon      | 1780700  | 437834  |
>> >> > > >    | nr_inactive_file    | 30425    | 40882   |
>> >> > > >    | nr_active_file      | 14961    | 3012    |
>> >> > > >    | pgpromote_success   | 356      | 1293122 |
>> >> > > >    | pgpromote_candidate | 21953245 | 1824148 |
>> >> > > >    | pgactivate          | 1844523  | 3311907 |
>> >> > > >    | pgdeactivate        | 50634    | 1554069 |
>> >> > > >    | pgfault             | 31100294 | 6518806 |
>> >> > > >    | pgdemote_kswapd     | 30856    | 2230821 |
>> >> > > >    | pgscan_kswapd       | 1861981  | 7667629 |
>> >> > > >    | pgscan_anon         | 1822930  | 7610583 |
>> >> > > >    | pgscan_file         | 39051    | 57046   |
>> >> > > >    | pgsteal_anon        | 386      | 2192033 |
>> >> > > >    | pgsteal_file        | 30470    | 38788   |
>> >> > > >    | pageoutrun          | 30       | 412     |
>> >> > > >    | numa_hint_faults    | 27418279 | 2875955 |
>> >> > > >    | numa_pages_migrated | 356      | 1293122 |
>> >> > > >    +---------------------+----------+---------+
>> >> > > >
>> >> > > > Signed-off-by: Byungchul Park
>> >> > > > ---
>> >> > > >  mm/vmscan.c | 21 ++++++++++++++++++++-
>> >> > > >  1 file changed, 20 insertions(+), 1 deletion(-)
>> >> > > >
>> >> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
>> >> > > > index bba207f41b14..6fe45eca7766 100644
>> >> > > > --- a/mm/vmscan.c
>> >> > > > +++ b/mm/vmscan.c
>> >> > > > @@ -108,6 +108,12 @@ struct scan_control {
>> >> > > >          /* Can folios be swapped as part of reclaim? */
>> >> > > >          unsigned int may_swap:1;
>> >> > > >
>> >> > > > +        /* Not allow cache_trim_mode to be turned on as part of reclaim? */
>> >> > > > +        unsigned int no_cache_trim_mode:1;
>> >> > > > +
>> >> > > > +        /* Has cache_trim_mode failed at least once? */
>> >> > > > +        unsigned int cache_trim_mode_failed:1;
>> >> > > > +
>> >> > > >          /* Proactive reclaim invoked by userspace through memory.reclaim */
>> >> > > >          unsigned int proactive:1;
>> >> > > >
>> >> > > > @@ -2268,7 +2274,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
>> >> > > >           * anonymous pages.
>> >> > > >           */
>> >> > > >          file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
>> >> > > > -        if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
>> >> > > > +        if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
>> >> > > > +            !sc->no_cache_trim_mode)
>> >> > > >                  sc->cache_trim_mode = 1;
>> >> > > >          else
>> >> > > >                  sc->cache_trim_mode = 0;
>> >> > > > @@ -5967,6 +5974,8 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>> >> > > >           */
>> >> > > >          if (reclaimable)
>> >> > > >                  pgdat->kswapd_failures = 0;
>> >> > > > +        else if (sc->cache_trim_mode)
>> >> > > > +                sc->cache_trim_mode_failed = 1;
>> >> > > >  }
>> >> > > >
>> >> > > >  /*
>> >> > > > @@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
>> >> > > >                  sc.priority--;
>> >> > > >          } while (sc.priority >= 1);
>> >> > > >
>> >> > > > +        /*
>> >> > > > +         * Restart only if it went through the priority loop all the way,
>> >> > > > +         * but cache_trim_mode didn't work.
>> >> > > > + */ >> >> > > > + if (!sc.nr_reclaimed && sc.priority < 1 && >> >> > > > + !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) { >> >> > > >> >> > > Can we just use sc.cache_trim_mode (instead of >> >> > > sc.cache_trim_mode_failed) here? That is, if cache_trim_mode is enabled >> >> > >> >> > As Johannes mentioned, within a priority scan, all the numa nodes are >> >> > scanned each with its own value of cache_trim_mode. So we cannot use >> >> > cache_trim_mode for that purpose. >> >> >> >> Ah, okay. Confining to kswapd, that might make sense. I will apply it if >> >> there's no objection to it. Thanks. >> > >> > I didn't want to introduce two additional flags either, but it was >> > possible to make it do exactly what we want it to do thanks to the flags. >> > I'd like to keep this version if possible unless there are any other >> > objections on it. >> >> Sorry, I'm confused. Whether does "cache_trim_mode == 1" do the trick? >> If so, why not? If not, why? > > kswapd might happen to go through: > > priority 12(== DEF_PRIORITY) + cache_trim_mode on -> fail > priority 11 + cache_trim_mode on -> fail > priority 10 + cache_trim_mode on -> fail > priority 9 + cache_trim_mode on -> fail > priority 8 + cache_trim_mode on -> fail > priority 7 + cache_trim_mode on -> fail > priority 6 + cache_trim_mode on -> fail > priority 5 + cache_trim_mode on -> fail > priority 4 + cache_trim_mode on -> fail > priority 3 + cache_trim_mode on -> fail > priority 2 + cache_trim_mode on -> fail > priority 1 + cache_trim_mode off -> fail > > I'd like to retry even in this case. I don't think that we should retry in this case. If the following case fails, > priority 1 + cache_trim_mode off -> fail Why will we succeed after retrying? -- Best Regards, Huang, Ying > Am I missing something? > > Byungchul > >> -- >> Best Regards, >> Huang, Ying >> >> > Byungchul >> > >> >> Byungchul >> >> > >> >> > Byungchul >> >> > >> >> > > for priority == 1 and failed to reclaim, we will restart. If this >> >> > > works, we can avoid to add another flag. >> >> > > >> >> > > > + sc.no_cache_trim_mode = 1; >> >> > > > + goto restart; >> >> > > > + } >> >> > > > + >> >> > > > if (!sc.nr_reclaimed) >> >> > > > pgdat->kswapd_failures++; >> >> > > >> >> > > -- >> >> > > Best Regards, >> >> > > Huang, Ying