From: Mina Almasry
Date: Mon, 5 Dec 2022 16:04:07 -0800
Subject: Re: [PATCH v1] [mm-unstable] mm: Fix memcg reclaim on memory tiered systems
To: "Huang, Ying"
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Roman Gushchin,
	Shakeel Butt, Muchun Song, Yang Shi, Yosry Ahmed, weixugc@google.com,
	fvdl@google.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <87lenm1soh.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <20221203011120.2361610-1-almasrymina@google.com>
	<87lenm1soh.fsf@yhuang6-desk2.ccr.corp.intel.com>

On Sun, Dec 4, 2022 at 6:39 PM Huang, Ying wrote:
>
> Mina Almasry writes:
>
> > commit 3f1509c57b1b ("Revert "mm/vmscan: never demote for memcg
> > reclaim"") enabled demotion in memcg reclaim, which is the right thing
> > to do, however, I suspect it introduced a regression in the behavior of
> > try_to_free_mem_cgroup_pages().
> >
> > The callers of try_to_free_mem_cgroup_pages() expect it to attempt to
> > reclaim - not demote - nr_pages from the cgroup. I.e. the memory usage
> > of the cgroup should reduce by nr_pages. The callers expect
> > try_to_free_mem_cgroup_pages() to also return the number of pages
> > reclaimed, not demoted.
> >
> > However, what try_to_free_mem_cgroup_pages() actually does is it
> > unconditionally counts demoted pages as reclaimed pages. So in practice
> > when it is called it will often demote nr_pages and return the number of
> > demoted pages to the caller. Demoted pages don't lower the memcg usage,
> > and so I think try_to_free_mem_cgroup_pages() is not actually doing what
> > the callers want it to do.
> >
> > I suspect various things work suboptimally on memory tiered systems or
> > don't work at all due to this:
> >
> > - memory.high enforcement likely doesn't work (it just demotes nr_pages
> >   instead of lowering the memcg usage by nr_pages).
> > - try_charge_memcg() will keep retrying the charge while
> >   try_to_free_mem_cgroup_pages() is just demoting pages and not actually
> >   making any room for the charge.
> > - memory.reclaim has a wonky interface. It advertises to the user it
> >   reclaims the provided amount but it will actually demote that amount.
> >
> > There may be more effects to this issue.
> >
> > To fix these issues I propose that shrink_folio_list() only count pages
> > demoted from inside of sc->nodemask to outside of sc->nodemask as
> > 'reclaimed'.
> >
> > For callers such as reclaim_high() or try_charge_memcg() that set
> > sc->nodemask to NULL, try_to_free_mem_cgroup_pages() will try to
> > actually reclaim nr_pages and return the number of pages reclaimed. No
> > demoted pages would count towards the nr_pages requirement.
> >
> > For callers such as memory_reclaim() that set sc->nodemask,
> > try_to_free_mem_cgroup_pages() will free nr_pages from that nodemask
> > with either reclaim or demotion.
>
> Have you checked all callers? For example, IIUC, in
> reclaim_clean_pages_from_list(), although sc.nodemask == NULL, the
> demoted pages should be counted as reclaimed.

I checked all call stacks leading to shrink_folio_list() now (at least I
hope). Here is what I think they do and how I propose to handle them:

- reclaim_clean_pages_from_list() & __node_reclaim() & balance_pgdat():
  these try to free memory from a specific node, and both demotion and
  reclaim from that node should be counted. I propose these calls set
  sc->nodemask = pgdat.node_id to signal to shrink_folio_list() that both
  demotion and reclaim from this node should be counted.

- try_to_free_pages(): tries to free pages from a specific nodemask. It
  sets sc->nodemask to ac->nodemask. In this case pages demoted within
  the nodemask should not count. Pages demoted outside of the nodemask
  should count, which this patch already tries to do.

- mem_cgroup_shrink_node(): this is memcg soft limit reclaim. AFAIU only
  reclaim should be counted. It already sets sc->nodemask = NULL to
  indicate that it requires reclaim from all nodes and that only
  reclaimed memory should be counted, which this patch already tries to
  do.

- try_to_free_mem_cgroup_pages(): this is covered in the commit message.
  Many callers set nodemask=NULL, indicating they want actual reclaim and
  that demotion should not count. memory.reclaim sets nodemask depending
  on the 'nodes=' arg and wants demotion and reclaim from that nodemask.

- reclaim_folio_list(): sets no_demotion = 1. No ambiguity here, it only
  reclaims and counts reclaimed pages.

If agreeable I can fix the reclaim_clean_pages_from_list(),
__node_reclaim() and balance_pgdat() call sites in v3 (a rough sketch of
what I mean is at the bottom of this mail).

> How about count both "demoted" and "reclaimed" in struct scan_control,
> and let callers determine how to use the numbers?
>

I don't think this is by itself enough. Pages demoted between 2 nodes
that are both in sc->nodemask should not count, I think. So 'demoted'
needs to be specifically pages demoted outside of the nodemask. We can do
2 things:

1. Only allow the kernel to demote outside the nodemask (which you don't
   prefer).
2. Allow the kernel to demote inside the nodemask but not count them.

I will see if I can implement #2.
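To make #2 concrete, roughly what I have in mind is the below. This is a
sketch only, untested and on top of mm-unstable: demotion_leaves_nodemask()
is a made-up helper name and sc->nr_demoted is purely illustrative (it does
not exist today); next_demotion_node(), node_isset() and demote_folio_list()
are the helpers already in mm/vmscan.c.

/*
 * Sketch for option #2 (untested): never block demotion based on the
 * nodemask, but only count demoted folios as reclaimed when the demotion
 * target lies outside of sc->nodemask.
 */
static bool demotion_leaves_nodemask(struct pglist_data *pgdat,
                                     struct scan_control *sc)
{
        int target = next_demotion_node(pgdat->node_id);

        /* No demotion target, or no nodemask: never count demotion as reclaim. */
        if (target == NUMA_NO_NODE || !sc->nodemask)
                return false;

        return node_isset(pgdat->node_id, *sc->nodemask) &&
               !node_isset(target, *sc->nodemask);
}

        /* ... and in shrink_folio_list(), after the demotion pass: */
        nr_demoted = demote_folio_list(&demote_folios, pgdat);
        sc->nr_demoted += nr_demoted;   /* illustrative counter, per your suggestion */
        if (demotion_leaves_nodemask(pgdat, sc))
                nr_reclaimed += nr_demoted;

That would keep your idea of reporting demotions separately while making
nr_reclaimed mean "memory actually freed from the nodemask".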
> > Tested this change using the memory.reclaim interface. With this change,
> >
> > echo "1m" > memory.reclaim
> >
> > Will cause freeing of 1m of memory from the cgroup regardless of the
> > demotions happening inside.
> >
> > echo "1m nodes=0" > memory.reclaim
>
> Have you run these tests on the original kernel? If so, does the issue
> you suspected above occur during testing?
>

Yes. I set up a test case where I allocate 500m in a cgroup, and then do:

echo "50m" > memory.reclaim

Without my fix, my kernel demotes 70mb and reclaims 4mb. With my v1 fix,
my kernel demotes all the memory it can and reclaims 60mb. I will add
this to the commit message in the next version.

> Best Regards,
> Huang, Ying
>
> > Will cause freeing of 1m of node 0 by demotion if a demotion target is
> > available, and by reclaim if no demotion target is available.
> >
> > Signed-off-by: Mina Almasry
> >
> > ---
> >
> > This is developed on top of mm-unstable largely because I need the
> > memory.reclaim nodes= arg to test it properly.
> > ---
> >  mm/vmscan.c | 13 ++++++++++++-
> >  1 file changed, 12 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 2b42ac9ad755..8f6e993b870d 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1653,6 +1653,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
> >  	LIST_HEAD(free_folios);
> >  	LIST_HEAD(demote_folios);
> >  	unsigned int nr_reclaimed = 0;
> > +	unsigned int nr_demoted = 0;
> >  	unsigned int pgactivate = 0;
> >  	bool do_demote_pass;
> >  	struct swap_iocb *plug = NULL;
> > @@ -2085,7 +2086,17 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
> >  	/* 'folio_list' is always empty here */
> >
> >  	/* Migrate folios selected for demotion */
> > -	nr_reclaimed += demote_folio_list(&demote_folios, pgdat);
> > +	nr_demoted = demote_folio_list(&demote_folios, pgdat);
> > +
> > +	/*
> > +	 * Only count demoted folios as reclaimed if we demoted them from
> > +	 * inside of the nodemask to outside of the nodemask, hence reclaiming
> > +	 * pages in the nodemask.
> > +	 */
> > +	if (sc->nodemask && node_isset(pgdat->node_id, *sc->nodemask) &&
> > +	    !node_isset(next_demotion_node(pgdat->node_id), *sc->nodemask))
> > +		nr_reclaimed += nr_demoted;
> > +
> >  	/* Folios that could not be demoted are still in @demote_folios */
> >  	if (!list_empty(&demote_folios)) {
> >  		/* Folios which weren't demoted go back on @folio_list */
> > --
> > 2.39.0.rc0.267.gcb52ba06e7-goog
>
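And for reference, the call-site change I mentioned above for
__node_reclaim() (balance_pgdat() and reclaim_clean_pages_from_list()
would be analogous) would be roughly the below. Again just a sketch of
the idea, not what v3 will necessarily look like: the scan_control fields
I'm not touching are elided, 'single_node' is an illustrative name, and
nodemask_of_node() is the existing nodemask.h helper for building a
single-node mask.

/*
 * Sketch only: give __node_reclaim() a single-node nodemask so that
 * shrink_folio_list() counts both reclaim and demotion from this node.
 * Fields not shown stay exactly as they are today.
 */
static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask,
                          unsigned int order)
{
        const unsigned long nr_pages = 1 << order;
        /* On-stack mask with only this node; sc does not outlive this frame. */
        nodemask_t single_node = nodemask_of_node(pgdat->node_id);
        struct scan_control sc = {
                .nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
                .gfp_mask = current_gfp_context(gfp_mask),
                .order = order,
                .priority = NODE_RECLAIM_PRIORITY,
                /* new: both reclaim and demotion from this node should count */
                .nodemask = &single_node,
                /* remaining fields (.may_writepage etc.) unchanged */
        };

        /* rest of __node_reclaim() stays as it is today */
}

The point is just that a caller that means "free memory on this node, by
reclaim or by demotion" says so explicitly via sc->nodemask, instead of
relying on a NULL nodemask, which this patch reserves for "actually
reclaim".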