From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Huang, Ying" <ying.huang@intel.com>
To: Byungchul Park
Subject: Re: [PATCH v2] mm, vmscan: don't turn on cache_trim_mode at high scan priorities
In-Reply-To: <20240222100129.GB13076@system.software.com> (Byungchul Park's message of "Thu, 22 Feb 2024 19:01:29 +0900")
References: <20240222070817.70515-1-byungchul@sk.com> <87sf1kj3nn.fsf@yhuang6-desk2.ccr.corp.intel.com> <20240222092042.GA33967@system.software.com> <20240222094900.GA13076@system.software.com> <20240222100129.GB13076@system.software.com>
Date: Fri, 23 Feb 2024 09:03:39 +0800
Message-ID: <87o7c8htzo.fsf@yhuang6-desk2.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii
Byungchul Park writes:

> On Thu, Feb 22, 2024 at 06:49:00PM +0900, Byungchul Park wrote:
>> On Thu, Feb 22, 2024 at 06:20:42PM +0900, Byungchul Park wrote:
>> > On Thu, Feb 22, 2024 at 04:37:16PM +0800, Huang, Ying wrote:
>> > > Byungchul Park writes:
>> > >
>> > > > Changes from v1:
>> > > >    1. Add a comment in code describing why this change is
>> > > >       necessary, and rewrite the commit message with how to
>> > > >       reproduce the problem and what the result is, using
>> > > >       vmstat. (per feedback from Andrew Morton and Yu Zhao)
>> > > >    2. Change the condition to avoid cache_trim_mode from
>> > > >       'sc->priority != 1' to 'sc->priority > 1' to cover cases
>> > > >       where the priority goes to zero all the way. (per
>> > > >       feedback from Yu Zhao)
>> > > >
>> > > > --->8---
>> > > > From 07e0baab368160e50b6ca35d95745168aa60e217 Mon Sep 17 00:00:00 2001
>> > > > From: Byungchul Park
>> > > > Date: Thu, 22 Feb 2024 14:50:17 +0900
>> > > > Subject: [PATCH v2] mm, vmscan: don't turn on cache_trim_mode at high scan priorities
>> > > >
>> > > > With cache_trim_mode on, reclaim logic doesn't bother reclaiming
>> > > > anon pages. However, it should be more careful about turning the
>> > > > mode on, because the mode prevents anon pages from being
>> > > > reclaimed even if there is a huge number of cold anon pages that
>> > > > should be reclaimed. Even worse, that can cause kswapd_failures
>> > > > to reach MAX_RECLAIM_RETRIES, stopping kswapd until direct
>> > > > reclaim eventually works to resume it.
>> > > > So this is more like a bug fix than a performance improvement.
>> > > >
>> > > > The problematic behavior can be reproduced with:
>> > > >
>> > > >    CONFIG_NUMA_BALANCING enabled
>> > > >    sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
>> > > >
>> > > >    numa node0 (8GB local memory, 16 CPUs)
>> > > >    numa node1 (8GB slow tier memory, no CPUs)
>> > > >
>> > > > Sequence:
>> > > >
>> > > >    1) echo 3 > /proc/sys/vm/drop_caches
>> > > >    2) To emulate a system whose local DRAM is full of cold
>> > > >       memory, run the following dummy program and never touch
>> > > >       the region:
>> > > >
>> > > >          mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
>> > > >               MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
>> > > >
>> > > >    3) Run any memory-intensive workload, e.g. XSBench.
>> > > >    4) Check whether numa balancing, i.e. promotion/demotion, is
>> > > >       working.
>> > > >    5) Iterate 1) ~ 4) until kswapd stops.
>> > > >
>> > > > With this, you can eventually see that promotion/demotion stop
>> > > > working because kswapd has stopped due to ->kswapd_failures >=
>> > > > MAX_RECLAIM_RETRIES.
>> > > >
>> > > > Interesting vmstat deltas, before (-) and after (+) this patch:
>> > > >
>> > > >    -nr_inactive_anon 321935
>> > > >    -nr_active_anon 1780700
>> > > >    -nr_inactive_file 30425
>> > > >    -nr_active_file 14961
>> > > >    -pgpromote_success 356
>> > > >    -pgpromote_candidate 21953245
>> > > >    -pgactivate 1844523
>> > > >    -pgdeactivate 50634
>> > > >    -pgfault 31100294
>> > > >    -pgdemote_kswapd 30856
>> > > >    -pgscan_kswapd 1861981
>> > > >    -pgscan_anon 1822930
>> > > >    -pgscan_file 39051
>> > > >    -pgsteal_anon 386
>> > > >    -pgsteal_file 30470
>> > > >    -pageoutrun 30
>> > > >    -numa_hint_faults 27418279
>> > > >    -numa_pages_migrated 356
>> > > >
>> > > >    +nr_inactive_anon 1662306
>> > > >    +nr_active_anon 440303
>> > > >    +nr_inactive_file 27669
>> > > >    +nr_active_file 1654
>> > > >    +pgpromote_success 1314102
>> > > >    +pgpromote_candidate 1892525
>> > > >    +pgactivate 3284457
>> > > >    +pgdeactivate 1527504
>> > > >    +pgfault 6847775
>> > > >    +pgdemote_kswapd 2142047
>> > > >    +pgscan_kswapd 7496588
>> > > >    +pgscan_anon 7462488
>> > > >    +pgscan_file 34100
>> > > >    +pgsteal_anon 2115661
>> > > >    +pgsteal_file 26386
>> > > >    +pageoutrun 378
>> > > >    +numa_hint_faults 3220891
>> > > >    +numa_pages_migrated 1314102
>> > > >
>> > > > where -: before this patch, +: after this patch
>> > > >
>> > > > Signed-off-by: Byungchul Park
>> > > > ---
>> > > >  mm/vmscan.c | 10 +++++++++-
>> > > >  1 file changed, 9 insertions(+), 1 deletion(-)
>> > > >
>> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
>> > > > index bba207f41b14..6eda59fce5ee 100644
>> > > > --- a/mm/vmscan.c
>> > > > +++ b/mm/vmscan.c
>> > > > @@ -2266,9 +2266,17 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
>> > > >  	 * If we have plenty of inactive file pages that aren't
>> > > >  	 * thrashing, try to reclaim those first before touching
>> > > >  	 * anonymous pages.
>> > > > +	 *
>> > > > +	 * However, keeping 'sc->cache_trim_mode == 1' all through
>> > > > +	 * the scan priorities might lead to reclaim failure. If
>> > > > +	 * that repeats MAX_RECLAIM_RETRIES times, kswapd gets
>> > > > +	 * stopped even if there are still plenty of anon pages to
>> > > > +	 * reclaim, which is not desirable. So do not use
>> > > > +	 * cache_trim_mode when reclaim is not smooth, i.e. at high
>> > > > +	 * scan priority.
>> > > >  	 */
>> > > >  	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
>> > > > -	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
>> > > > +	if (sc->priority > 1 && file >> sc->priority &&
>> > > > +	    !(sc->may_deactivate & DEACTIVATE_FILE))
>> > > >  		sc->cache_trim_mode = 1;
>> > > >  	else
>> > > >  		sc->cache_trim_mode = 0;
>> > >
>> > > In get_scan_count(), there's the following code,
>> > >
>> > > 	/*
>> > > 	 * Do not apply any pressure balancing cleverness when the
>> > > 	 * system is close to OOM, scan both anon and file equally
>> > > 	 * (unless the swappiness setting disagrees with swapping).
>> > > 	 */
>> > > 	if (!sc->priority && swappiness) {
>> > > 		scan_balance = SCAN_EQUAL;
>> > > 		goto out;
>> > > 	}
>> > >
>> > > So, is swappiness 0 on your system? Please check it. If it's not
>> > > 0, please check why this code doesn't help.
>> >
>> > Nice information! Then the change should be:
>> >
>> > diff --git a/mm/vmscan.c b/mm/vmscan.c
>> > index bba207f41b14..91f9bab86e92 100644
>> > --- a/mm/vmscan.c
>> > +++ b/mm/vmscan.c
>> > @@ -2357,7 +2357,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
>> >  	 * system is close to OOM, scan both anon and file equally
>> >  	 * (unless the swappiness setting disagrees with swapping).
>> >  	 */
>> > -	if (!sc->priority && swappiness) {
>> > +	if (sc->priority <= 1 && swappiness) {
>> >  		scan_balance = SCAN_EQUAL;
>> >  		goto out;
>> >  	}
>>
>> Or:
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index bba207f41b14..c54371a398b1 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -6896,7 +6896,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
>>
>>  		if (raise_priority || !nr_reclaimed)
>>  			sc.priority--;
>> -	} while (sc.priority >= 1);
>> +	} while (sc.priority >= 0);
>>
>>  	if (!sc.nr_reclaimed)
>>  		pgdat->kswapd_failures++;
>
> +cc Mel Gorman
>
> I just found this was intended. See commit 9aa41348a8d11 ("mm: vmscan:
> do not allow kswapd to scan at maximum priority"). Mel Gorman didn't
> want to make kswapd too aggressive. However, does it make sense to
> stop kswapd even if there are plenty of cold anon pages to reclaim,
> making the system wait for direct reclaim?

Maybe we can play with cache_trim_mode, for example, if
sc.nr_reclaimed == 0 and sc.cache_trim_mode == true, force disabling
cache_trim_mode in the next round?

--
Best Regards,
Huang, Ying

> Thoughts?
>
> Byungchul
>
>> ---
>>
>> Byungchul
>>
>> > Worth noting that the priority goes from DEF_PRIORITY to 1 in
>> > balance_pgdat() of kswapd. I will change the fix to this one if it
>> > looks more reasonable.
>> >
>> > Byungchul