From: "Huang, Ying"
To: Byungchul Park
Subject: Re: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure
In-Reply-To: <20240305071232.GB37850@system.software.com> (Byungchul Park's message of "Tue, 5 Mar 2024 16:12:32 +0900")
References: <20240304033611.GD13332@system.software.com>
	<20240304082118.20499-1-byungchul@sk.com>
	<87zfvda1f8.fsf@yhuang6-desk2.ccr.corp.intel.com>
	<20240305023708.GA60719@system.software.com>
	<20240305024345.GB60719@system.software.com>
	<20240305040930.GA21107@system.software.com>
	<87le6x9p6u.fsf@yhuang6-desk2.ccr.corp.intel.com>
	<20240305065846.GA37850@system.software.com>
	<87cys99n1r.fsf@yhuang6-desk2.ccr.corp.intel.com>
	<20240305071232.GB37850@system.software.com>
Date: Tue, 05 Mar 2024 15:35:35 +0800
Message-ID: <874jdl9lmg.fsf@yhuang6-desk2.ccr.corp.intel.com>

Byungchul Park writes:

> On Tue, Mar 05, 2024 at 03:04:48PM +0800, Huang, Ying wrote:
>> Byungchul Park writes:
>> 
>> > On Tue, Mar 05, 2024 at 02:18:33PM +0800, Huang, Ying wrote:
>> >> Byungchul Park writes:
>> >> 
>> >> > On Tue, Mar 05, 2024 at 11:43:45AM +0900, Byungchul Park wrote:
>> >> >> On Tue, Mar 05, 2024 at 11:37:08AM +0900, Byungchul Park wrote:
>> >> >> > On Tue, Mar 05, 2024 at 09:54:19AM +0800, Huang, Ying wrote:
>> >> >> > > Byungchul Park writes:
>> >> >> > > 
>> >> >> > > > Changes from v5:
>> >> >> > > > 	1. Make it retry the kswapd's scan priority loop with
>> >> >> > > > 	   cache_trim_mode off *only if* the mode didn't work in the
>> >> >> > > > 	   previous loop. (feedbacked by Huang Ying)
>> >> >> > > > 	2. Take into account 'break's from the priority loop when making
>> >> >> > > > 	   the decision whether to retry. (feedbacked by Huang Ying)
>> >> >> > > > 	3. Update the test result in the commit message.
>> >> >> > > > 
>> >> >> > > > Changes from v4:
>> >> >> > > > 	1. Make other scans start with may_cache_trim_mode = 1.
>> >> >> > > > 
>> >> >> > > > Changes from v3:
>> >> >> > > > 	1. Update the test result in the commit message with v4.
>> >> >> > > > 	2. Retry the whole priority loop with cache_trim_mode off again,
>> >> >> > > > 	   rather than forcing the mode off at the highest priority,
>> >> >> > > > 	   when the mode doesn't work. (feedbacked by Johannes Weiner)
>> >> >> > > > 
>> >> >> > > > Changes from v2:
>> >> >> > > > 	1. Change the condition to stop cache_trim_mode.
>> >> >> > > > 
>> >> >> > > > 	   From - Stop it if it's at high scan priorities, 0 or 1.
>> >> >> > > > 	   To   - Stop it if it's at high scan priorities, 0 or 1, and
>> >> >> > > > 	          the mode didn't work in the previous turn.
>> >> >> > > > 
>> >> >> > > > 	   (feedbacked by Huang Ying)
>> >> >> > > > 
>> >> >> > > > 	2. Change the test result in the commit message after testing
>> >> >> > > > 	   with the new logic.
>> >> >> > > > 
>> >> >> > > > Changes from v1:
>> >> >> > > > 	1. Add a comment describing why this change is necessary in code
>> >> >> > > > 	   and rewrite the commit message with how to reproduce and what
>> >> >> > > > 	   the result is using vmstat. (feedbacked by Andrew Morton and
>> >> >> > > > 	   Yu Zhao)
>> >> >> > > > 	2. Change the condition to avoid cache_trim_mode from
>> >> >> > > > 	   'sc->priority != 1' to 'sc->priority > 1' to reflect cases
>> >> >> > > > 	   where the priority goes to zero all the way. (feedbacked by
>> >> >> > > > 	   Yu Zhao)
>> >> >> > > > 
>> >> >> > > > --->8---
>> >> >> > > > From f811ee583158fd53d0e94d32ce5948fac4b17cfe Mon Sep 17 00:00:00 2001
>> >> >> > > > From: Byungchul Park
>> >> >> > > > Date: Mon, 4 Mar 2024 15:27:37 +0900
>> >> >> > > > Subject: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure
>> >> >> > > > 
>> >> >> > > > With cache_trim_mode on, reclaim logic doesn't bother reclaiming anon
>> >> >> > > > pages. However, the mode should be used more carefully because it
>> >> >> > > > prevents anon pages from being reclaimed even when there are a huge
>> >> >> > > > number of cold anon pages that should be reclaimed. Even worse, that
>> >> >> > > > can drive kswapd_failures up to MAX_RECLAIM_RETRIES, stopping kswapd
>> >> >> > > > from functioning until direct reclaim eventually works and resumes it.
>> >> >> > > > 
>> >> >> > > > So kswapd needs to retry its scan priority loop with cache_trim_mode
>> >> >> > > > off again if the mode doesn't work for reclaim.
>> >> >> > > > 
>> >> >> > > > The problematic behavior can be reproduced by:
>> >> >> > > > 
>> >> >> > > > 	CONFIG_NUMA_BALANCING enabled
>> >> >> > > > 	sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
>> >> >> > > > 	numa node0 (8GB local memory, 16 CPUs)
>> >> >> > > > 	numa node1 (8GB slow tier memory, no CPUs)
>> >> >> > > > 
>> >> >> > > > Sequence:
>> >> >> > > > 
>> >> >> > > > 	1) echo 3 > /proc/sys/vm/drop_caches
>> >> >> > > > 	2) To emulate a system whose local DRAM is full of cold memory,
>> >> >> > > > 	   run the following dummy program and never touch the region:
>> >> >> > > > 
>> >> >> > > > 	   mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
>> >> >> > > > 	        MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
>> >> >> > > > 
>> >> >> > > > 	3) Run any memory-intensive workload, e.g. XSBench.
>> >> >> > > > 	4) Check if numa balancing is working, i.e. promotion/demotion.
>> >> >> > > > 	5) Iterate 1) ~ 4) until numa balancing stops.
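
[Note, not part of the quoted patch: the "dummy program" in step 2) is only
sketched by its mmap() call. A minimal stand-alone version might look like
the following; the 8 GiB size matching node0's DRAM and the pause() keeping
the process alive are assumptions, so that the mapping stays resident but is
never touched again.]

	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		size_t len = 8ULL << 30;	/* 8 GiB of anonymous memory */
		void *p;

		/* MAP_POPULATE faults the whole range in up front... */
		p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		/* ...and the region is never touched again, so it stays cold. */
		pause();
		return 0;
	}
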
>> >> >> > > > 
>> >> >> > > > With this, you can see that promotion/demotion are not working because
>> >> >> > > > kswapd has stopped due to ->kswapd_failures >= MAX_RECLAIM_RETRIES.
>> >> >> > > > 
>> >> >> > > > The interesting differences in the vmstat deltas between before and
>> >> >> > > > after are:
>> >> >> > > > 
>> >> >> > > > +---------------------+----------+----------+
>> >> >> > > > | interesting vmstat  | before   | after    |
>> >> >> > > > +---------------------+----------+----------+
>> >> >> > > > | nr_inactive_anon    | 321935   | 1664772  |
>> >> >> > > > | nr_active_anon      | 1780700  | 437834   |
>> >> >> > > > | nr_inactive_file    | 30425    | 40882    |
>> >> >> > > > | nr_active_file      | 14961    | 3012     |
>> >> >> > > > | pgpromote_success   | 356      | 1293122  |
>> >> >> > > > | pgpromote_candidate | 21953245 | 1824148  |
>> >> >> > > > | pgactivate          | 1844523  | 3311907  |
>> >> >> > > > | pgdeactivate        | 50634    | 1554069  |
>> >> >> > > > | pgfault             | 31100294 | 6518806  |
>> >> >> > > > | pgdemote_kswapd     | 30856    | 2230821  |
>> >> >> > > > | pgscan_kswapd       | 1861981  | 7667629  |
>> >> >> > > > | pgscan_anon         | 1822930  | 7610583  |
>> >> >> > > > | pgscan_file         | 39051    | 57046    |
>> >> >> > > > | pgsteal_anon        | 386      | 2192033  |
>> >> >> > > > | pgsteal_file        | 30470    | 38788    |
>> >> >> > > > | pageoutrun          | 30       | 412      |
>> >> >> > > > | numa_hint_faults    | 27418279 | 2875955  |
>> >> >> > > > | numa_pages_migrated | 356      | 1293122  |
>> >> >> > > > +---------------------+----------+----------+
>> >> >> > > > 
>> >> >> > > > Signed-off-by: Byungchul Park
>> >> >> > > > ---
>> >> >> > > >  mm/vmscan.c | 21 ++++++++++++++++++++-
>> >> >> > > >  1 file changed, 20 insertions(+), 1 deletion(-)
>> >> >> > > > 
>> >> >> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
>> >> >> > > > index bba207f41b14..6fe45eca7766 100644
>> >> >> > > > --- a/mm/vmscan.c
>> >> >> > > > +++ b/mm/vmscan.c
>> >> >> > > > @@ -108,6 +108,12 @@ struct scan_control {
>> >> >> > > >  	/* Can folios be swapped as part of reclaim? */
>> >> >> > > >  	unsigned int may_swap:1;
>> >> >> > > >  
>> >> >> > > > +	/* Not allow cache_trim_mode to be turned on as part of reclaim? */
>> >> >> > > > +	unsigned int no_cache_trim_mode:1;
>> >> >> > > > +
>> >> >> > > > +	/* Has cache_trim_mode failed at least once? */
>> >> >> > > > +	unsigned int cache_trim_mode_failed:1;
>> >> >> > > > +
>> >> >> > > >  	/* Proactive reclaim invoked by userspace through memory.reclaim */
>> >> >> > > >  	unsigned int proactive:1;
>> >> >> > > >  
>> >> >> > > > @@ -2268,7 +2274,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
>> >> >> > > >  	 * anonymous pages.
>> >> >> > > >  	 */
>> >> >> > > >  	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
>> >> >> > > > -	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
>> >> >> > > > +	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
>> >> >> > > > +	    !sc->no_cache_trim_mode)
>> >> >> > > >  		sc->cache_trim_mode = 1;
>> >> >> > > >  	else
>> >> >> > > >  		sc->cache_trim_mode = 0;
>> >> >> > > > @@ -5967,6 +5974,8 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>> >> >> > > >  	 */
>> >> >> > > >  	if (reclaimable)
>> >> >> > > >  		pgdat->kswapd_failures = 0;
>> >> >> > > > +	else if (sc->cache_trim_mode)
>> >> >> > > > +		sc->cache_trim_mode_failed = 1;
>> >> >> > > >  }
>> >> >> > > >  
>> >> >> > > >  /*
>> >> >> > > > @@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
>> >> >> > > >  		sc.priority--;
>> >> >> > > >  	} while (sc.priority >= 1);
>> >> >> > > >  
>> >> >> > > > +	/*
>> >> >> > > > +	 * Restart only if it went through the priority loop all the way,
>> >> >> > > > +	 * but cache_trim_mode didn't work.
>> >> >> > > > +	 */
>> >> >> > > > +	if (!sc.nr_reclaimed && sc.priority < 1 &&
>> >> >> > > > +	    !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) {
>> >> >> > > 
>> >> >> > > Can we just use sc.cache_trim_mode (instead of
>> >> >> > > sc.cache_trim_mode_failed) here? That is, if cache_trim_mode is enabled
>> >> >> > 
>> >> >> > As Johannes mentioned, within a priority scan, all the numa nodes are
>> >> >> > scanned, each with its own value of cache_trim_mode. So we cannot use
>> >> >> > cache_trim_mode for that purpose.
>> >> >> 
>> >> >> Ah, okay. Confining this to kswapd, that might make sense. I will apply
>> >> >> it if there's no objection to it. Thanks.
>> >> > 
>> >> > I didn't want to introduce two additional flags either, but the flags
>> >> > make it possible to do exactly what we want. I'd like to keep this
>> >> > version if possible, unless there are any other objections to it.
>> >> 
>> >> Sorry, I'm confused. Does "cache_trim_mode == 1" do the trick? If so,
>> >> why not use it? If not, why not?
>> > 
>> > kswapd might happen to go through:
>> > 
>> > priority 12(== DEF_PRIORITY) + cache_trim_mode on -> fail
>> > priority 11 + cache_trim_mode on -> fail
>> > priority 10 + cache_trim_mode on -> fail
>> > priority 9 + cache_trim_mode on -> fail
>> > priority 8 + cache_trim_mode on -> fail
>> > priority 7 + cache_trim_mode on -> fail
>> > priority 6 + cache_trim_mode on -> fail
>> > priority 5 + cache_trim_mode on -> fail
>> > priority 4 + cache_trim_mode on -> fail
>> > priority 3 + cache_trim_mode on -> fail
>> > priority 2 + cache_trim_mode on -> fail
>> > priority 1 + cache_trim_mode off -> fail
>> > 
>> > I'd like to retry even in this case.
>> 
>> I don't think that we should retry in this case. If the following case
>> fails,
>> 
>> > priority 1 + cache_trim_mode off -> fail
>> 
>> why will we succeed after retrying?

> At priority 1, anon pages will be partially scanned. However, there
> might be anon pages that have never been scanned but can be reclaimed.
> 
> Do I get it wrong?

Yes. In theory, that's possible. But do you think that will be a
practical issue, such that pgdat->kswapd_failures will reach its max
value?

--
Best Regards,
Huang, Ying

> Byungchul
> 
>> --
>> Best Regards,
>> Huang, Ying
>> 
>> > Am I missing something?
>> > 
>> > Byungchul
>> 
>> >> --
>> >> Best Regards,
>> >> Huang, Ying
>> >> 
>> >> > Byungchul
>> >> > 
>> >> >> Byungchul
>> >> >> > 
>> >> >> > Byungchul
>> >> >> > 
>> >> >> > > for priority == 1 and failed to reclaim, we will restart. If this
>> >> >> > > works, we can avoid to add another flag.
>> >> >> > > 
>> >> >> > > > +		sc.no_cache_trim_mode = 1;
>> >> >> > > > +		goto restart;
>> >> >> > > > +	}
>> >> >> > > > +
>> >> >> > > >  	if (!sc.nr_reclaimed)
>> >> >> > > >  		pgdat->kswapd_failures++;
>> >> >> > > 
>> >> >> > > --
>> >> >> > > Best Regards,
>> >> >> > > Huang, Ying
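
[Appendix, not part of the exchange above: a small stand-alone sketch of the
retry flow being debated, under the v6 semantics. The names
no_cache_trim_mode and cache_trim_mode_failed mirror the patch; everything
else (sc_sketch, shrink_once, the always-failing reclaim, the priority > 1
condition standing in for the file >> sc->priority heuristic) is made up for
illustration. It walks priorities 12..1 once, records that cache_trim_mode
failed, and then restarts exactly once with the mode forced off.]

	#include <stdbool.h>
	#include <stdio.h>

	#define DEF_PRIORITY 12

	/* Only the bits of scan_control that matter for the retry decision. */
	struct sc_sketch {
		bool no_cache_trim_mode;     /* retry pass: keep the mode off */
		bool cache_trim_mode_failed; /* mode was on, nothing reclaimed */
		unsigned long nr_reclaimed;
	};

	/*
	 * Stand-in for one pass of shrink_node() at a given priority. It
	 * pretends nothing is reclaimable, mirroring the failing sequence
	 * quoted above.
	 */
	static unsigned long shrink_once(struct sc_sketch *sc, int priority)
	{
		bool cache_trim_mode = !sc->no_cache_trim_mode && priority > 1;
		unsigned long reclaimed = 0;

		if (!reclaimed && cache_trim_mode)
			sc->cache_trim_mode_failed = true;
		return reclaimed;
	}

	int main(void)
	{
		struct sc_sketch sc = { 0 };
		int passes = 0;

	restart:
		passes++;
		sc.nr_reclaimed = 0;
		for (int priority = DEF_PRIORITY; priority >= 1; priority--)
			sc.nr_reclaimed += shrink_once(&sc, priority);

		/*
		 * A simplified version of the check the v6 patch adds at the
		 * end of balance_pgdat(); the real one also requires
		 * sc.priority < 1, i.e. that the loop was not cut short.
		 */
		if (!sc.nr_reclaimed && !sc.no_cache_trim_mode &&
		    sc.cache_trim_mode_failed) {
			sc.no_cache_trim_mode = true;
			goto restart;
		}

		printf("passes=%d (second pass ran with cache_trim_mode off)\n",
		       passes);
		return 0;
	}

Running it prints passes=2, i.e. one ordinary pass plus one retry with
cache_trim_mode forced off, which is the behavior the patch argues for.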