From: "Huang, Ying" <ying.huang@intel.com>
To: Wei Xu
Cc: Mina Almasry, Andrew Morton, Johannes Weiner, Michal Hocko, Roman Gushchin,
 Shakeel Butt, Muchun Song, Yang Shi, Yosry Ahmed, fvdl@google.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v1] [mm-unstable] mm: Fix memcg reclaim on memory tiered systems
References: <20221203011120.2361610-1-almasrymina@google.com>
Date: Mon, 05 Dec 2022 10:57:04 +0800
In-Reply-To: (Wei Xu's message of "Fri, 2 Dec 2022 20:14:25 -0800")
Message-ID: <87cz8y1rsv.fsf@yhuang6-desk2.ccr.corp.intel.com>

Wei Xu writes:

> On Fri, Dec 2, 2022 at 5:11 PM Mina Almasry wrote:
>>
>> commit 3f1509c57b1b ("Revert "mm/vmscan: never demote for memcg
>> reclaim"") enabled demotion in memcg reclaim, which is the right thing
>> to do; however, I suspect it introduced a regression in the behavior of
>> try_to_free_mem_cgroup_pages().
>>
>> The callers of try_to_free_mem_cgroup_pages() expect it to attempt to
>> reclaim - not demote - nr_pages from the cgroup, i.e. the memory usage
>> of the cgroup should drop by nr_pages. The callers also expect
>> try_to_free_mem_cgroup_pages() to return the number of pages
>> reclaimed, not demoted.
>>
>> However, try_to_free_mem_cgroup_pages() actually unconditionally counts
>> demoted pages as reclaimed pages. So in practice, when it is called it
>> will often demote nr_pages and return the number of demoted pages to
>> the caller. Demoted pages do not lower the memcg usage, so I think
>> try_to_free_mem_cgroup_pages() is not actually doing what the callers
>> want it to do.
>>
>> I suspect various things work suboptimally, or do not work at all, on
>> memory tiered systems due to this:
>>
>> - memory.high enforcement likely doesn't work (it just demotes nr_pages
>>   instead of lowering the memcg usage by nr_pages).
>> - try_charge_memcg() will keep retrying the charge while
>>   try_to_free_mem_cgroup_pages() is just demoting pages and not actually
>>   making any room for the charge.
>> - memory.reclaim has a wonky interface. It advertises to the user that it
>>   reclaims the provided amount, but it will actually demote that amount.
>>
>> There may be more effects of this issue.
>>
>> To fix these issues I propose that shrink_folio_list() only count pages
>> demoted from inside of sc->nodemask to outside of sc->nodemask as
>> 'reclaimed'.
>>
>> For callers such as reclaim_high() or try_charge_memcg() that set
>> sc->nodemask to NULL, try_to_free_mem_cgroup_pages() will try to
>> actually reclaim nr_pages and return the number of pages reclaimed. No
>> demoted pages would count towards the nr_pages requirement.
>>
>> For callers such as memory_reclaim() that set sc->nodemask,
>> try_to_free_mem_cgroup_pages() will free nr_pages from that nodemask
>> with either reclaim or demotion.
>>
>> Tested this change using the memory.reclaim interface. With this change,
>>
>>   echo "1m" > memory.reclaim
>>
>> will cause freeing of 1m of memory from the cgroup regardless of the
>> demotions happening inside.
>>
>>   echo "1m nodes=0" > memory.reclaim
>>
>> will cause freeing of 1m of node 0 by demotion if a demotion target is
>> available, and by reclaim if no demotion target is available.
>>
>> Signed-off-by: Mina Almasry
>>
>> ---
>>
>> This is developed on top of mm-unstable largely because I need the
>> memory.reclaim nodes= arg to test it properly.
>> ---
>>  mm/vmscan.c | 13 ++++++++++++-
>>  1 file changed, 12 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 2b42ac9ad755..8f6e993b870d 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1653,6 +1653,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>>         LIST_HEAD(free_folios);
>>         LIST_HEAD(demote_folios);
>>         unsigned int nr_reclaimed = 0;
>> +       unsigned int nr_demoted = 0;
>>         unsigned int pgactivate = 0;
>>         bool do_demote_pass;
>>         struct swap_iocb *plug = NULL;
>> @@ -2085,7 +2086,17 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>>         /* 'folio_list' is always empty here */
>>
>>         /* Migrate folios selected for demotion */
>> -       nr_reclaimed += demote_folio_list(&demote_folios, pgdat);
>> +       nr_demoted = demote_folio_list(&demote_folios, pgdat);
>> +
>> +       /*
>> +        * Only count demoted folios as reclaimed if we demoted them from
>> +        * inside of the nodemask to outside of the nodemask, hence reclaiming
>> +        * pages in the nodemask.
>> +        */
>> +       if (sc->nodemask && node_isset(pgdat->node_id, *sc->nodemask) &&
>> +           !node_isset(next_demotion_node(pgdat->node_id), *sc->nodemask))
>
> next_demotion_node() is just the first demotion target node. Demotion
> can fall back to other allowed target nodes returned by
> node_get_allowed_targets(). When a page is demoted to a fallback
> node and this fallback node is in sc->nodemask, nr_demoted should not
> be added to nr_reclaimed, either.
>
> One way to address this issue is to pass sc->nodemask into
> demote_folio_list() and exclude sc->nodemask from the allowed target
> demotion nodes.

I don't think this is a good idea, because it may break the fast ->
slow -> storage aging order. A warm page in a fast memory node may then
be reclaimed to storage directly, instead of being demoted to the slow
memory node.

If necessary, we can account "nr_demoted" in alloc_demote_page() and a
to-be-added free_demote_page().

Best Regards,
Huang, Ying

>> +               nr_reclaimed += nr_demoted;
>> +
>>         /* Folios that could not be demoted are still in @demote_folios */
>>         if (!list_empty(&demote_folios)) {
>>                 /* Folios which weren't demoted go back on @folio_list */
>> --
>> 2.39.0.rc0.267.gcb52ba06e7-goog
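
For reference, a rough sketch of what accounting at the demotion
allocation/free callbacks could look like. This is only an illustration of
the idea, not code from this patch or from mm-unstable: the struct and the
two *_counted callbacks below are invented names, and only
alloc_migration_target(), node_isset(), page_to_nid() and put_page() are
existing kernel helpers.

/*
 * Illustrative sketch only. Instead of guessing the destination with
 * next_demotion_node(), account a demoted page as "reclaimed" only when
 * its actual target page is allocated on a node outside sc->nodemask.
 */
struct demote_private {
	struct migration_target_control *mtc;	/* normal demotion target control */
	nodemask_t *src_nodemask;		/* sc->nodemask, may be NULL */
	unsigned int nr_demoted_outside;	/* demotions that left the nodemask */
};

/* new_page_t callback for migrate_pages() */
static struct page *alloc_demote_page_counted(struct page *page,
					      unsigned long private)
{
	struct demote_private *dp = (void *)private;
	struct page *target;

	target = alloc_migration_target(page, (unsigned long)dp->mtc);

	/* Count now; free_demote_page_counted() undoes this if migration fails. */
	if (target && (!dp->src_nodemask ||
		       !node_isset(page_to_nid(target), *dp->src_nodemask)))
		dp->nr_demoted_outside++;

	return target;
}

/* free_page_t callback: migration of this page failed, undo the accounting */
static void free_demote_page_counted(struct page *page, unsigned long private)
{
	struct demote_private *dp = (void *)private;

	if (!dp->src_nodemask ||
	    !node_isset(page_to_nid(page), *dp->src_nodemask))
		dp->nr_demoted_outside--;

	put_page(page);		/* freeing details for compound targets omitted */
}

demote_folio_list() would then pass these two callbacks to migrate_pages(),
and shrink_folio_list() would add nr_demoted_outside, rather than the whole
migrate_pages() success count, to nr_reclaimed.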