From: "Huang, Ying" <ying.huang@intel.com>
To: Byungchul Park
Subject: Re: [PATCH v2] mm: let kswapd work again for node that used to be hopeless but may not now
In-Reply-To: <20240605015021.GB75311@system.software.com> (Byungchul Park's message of "Wed, 5 Jun 2024 10:50:21 +0900")
References: <20240604072323.10886-1-byungchul@sk.com>
	<87bk4hcf7h.fsf@yhuang6-desk2.ccr.corp.intel.com>
	<20240604084533.GA68919@system.software.com>
	<8734ptccgi.fsf@yhuang6-desk2.ccr.corp.intel.com>
	<20240605015021.GB75311@system.software.com>
Date: Wed, 05 Jun 2024 10:02:07 +0800
Message-ID: <87tti8b10g.fsf@yhuang6-desk2.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii

Byungchul Park writes:

> On Tue, Jun 04, 2024 at 04:57:17PM +0800, Huang, Ying wrote:
>> Byungchul Park writes:
>>
>> > On Tue, Jun 04, 2024 at 03:57:54PM +0800, Huang, Ying wrote:
>> >> Byungchul Park writes:
>> >>
>> >> > Changes from v1:
>> >> >	1. Don't allow kswapd to resume if the system is under memory
>> >> >	   pressure that might affect direct reclaim, e.g. if
>> >> >	   NR_FREE_PAGES is less than (low wmark + min wmark)/2.
>> >> >
>> >> > --->8---
>> >> > From 6c73fc16b75907f5da9e6b33aff86bf7d7c9dd64 Mon Sep 17 00:00:00 2001
>> >> > From: Byungchul Park
>> >> > Date: Tue, 4 Jun 2024 15:27:56 +0900
>> >> > Subject: [PATCH v2] mm: let kswapd work again for node that used to
>> >> >  be hopeless but may not now
>> >> >
>> >> > A system should run with kswapd running in the background when under
>> >> > memory pressure, such as when the available memory level is below
>> >> > the low watermark and there are reclaimable folios.
>> >> >
>> >> > However, the current code lets the system keep running with kswapd
>> >> > stopped once kswapd has been stopped after MAX_RECLAIM_RETRIES
>> >> > failures, leaving everything to direct reclaim, even if there are
>> >> > reclaimable folios that kswapd could reclaim. This case was observed
>> >> > in the following scenario:
>> >> >
>> >> >    CONFIG_NUMA_BALANCING enabled
>> >> >    sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
>> >> >    numa node0 (500GB local DRAM, 128 CPUs)
>> >> >    numa node1 (100GB CXL memory, no CPUs)
>> >> >    swap off
>> >> >
>> >> > 1) Run a workload with big anon pages, e.g. mmap(200GB).
>> >> > 2) Continue adding the same workload to the system.
>> >> > 3) The anon pages are placed in node0 by promotion/demotion.
>> >> > 4) kswapd0 stops because of the unreclaimable anon pages in node0.
>> >> > 5) Kill the memory hoggers to restore the system.
>> >> >
>> >> > After restoring the system at 5), the system starts to run without
>> >> > kswapd. Even worse, the tiering mechanism is no longer able to work,
>> >> > since it relies on kswapd for demotion.
>> >>
>> >> We have run into the situation where kswapd is kept in the failure
>> >> state for a long time in a multi-tier system. I think that your
>> >> solution is too
>> >
>> > My solution just gives kswapd a chance to work again even if
>> > kswapd_failures >= MAX_RECLAIM_RETRIES, if there are potentially
>> > reclaimable folios. That's it.
>> >
>> >> limited, because OOM killing may not happen, while the access pattern
>> >> of
>> >
>> > I don't get this. OOM will happen as is, through direct reclaim.
>>
>> A system that fails to reclaim via kswapd may succeed in reclaiming via
>> direct reclaim, because more CPUs are used to scan the page tables.
>
> Honestly, I don't think so with this description.
>
> The fact that the system hit MAX_RECLAIM_RETRIES means the system is
> currently hopeless unless folios are reclaimed in a stronger way by
> *direct reclaim*. The solution for this situation should not be about
> letting more CPUs participate in reclaiming, again, *at least in this
> situation*.
>
> What you described here is true only in a normal state where the more
> CPUs work on reclaiming, the more reclaimable folios can be reclaimed.
> kswapd can be a helper *only* when there are kswapd-reclaimable folios.

Sometimes, we cannot reclaim just because we don't scan fast enough, so
the Accessed-bit is set again during scanning. With more CPUs, we can
scan faster and make some progress. But, yes, this only covers one
situation; there are other situations too.

--
Best Regards,
Huang, Ying

> Byungchul
>
>> In a system with NUMA-balancing-based page promotion and page demotion
>> enabled, page promotion will wake up kswapd, but kswapd may fail in
>> some situations. But page promotion will not trigger direct reclaim or
>> OOM.
>>
>> >> the workloads may change. We have a preliminary and simple solution
>> >> for this as follows,
>> >>
>> >> https://git.kernel.org/pub/scm/linux/kernel/git/vishal/tiering.git/commit/?h=tiering-0.8&id=17a24a354e12d4d4675d78481b358f668d5a6866
>> >
>> > Whether tiering is involved or not, the same problem can arise if
>> > kswapd gets stopped due to kswapd_failures >= MAX_RECLAIM_RETRIES.
>>
>> Your description is about tiering too. Can you describe a situation
>> without tiering?
>>
>> --
>> Best Regards,
>> Huang, Ying
>>
>> > Byungchul
>> >
>> >> where we will try to wake up kswapd every 10 seconds to check whether
>> >> kswapd is in the failure state. This is another possible solution.
>> >>
>> >> > However, node0 has pages newly allocated after 5), which might or
>> >> > might not be reclaimable. Since those are potentially reclaimable,
>> >> > it's worth trying to reclaim them by allowing kswapd to work again.
>> >> >
>> >>
>> >> [snip]
>> >>
>> >> --
>> >> Best Regards,
>> >> Huang, Ying