From: Wei Xu
Date: Fri, 2 Dec 2022 20:14:25 -0800
Subject: Re: [PATCH v1] [mm-unstable] mm: Fix memcg reclaim on memory tiered systems
To: Mina Almasry
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Roman Gushchin,
 Shakeel Butt, Muchun Song, Huang Ying, Yang Shi, Yosry Ahmed,
 fvdl@google.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <20221203011120.2361610-1-almasrymina@google.com>
References: <20221203011120.2361610-1-almasrymina@google.com>

On Fri, Dec 2, 2022 at 5:11 PM Mina Almasry wrote:
>
> commit 3f1509c57b1b ("Revert "mm/vmscan: never demote for memcg
> reclaim"") enabled demotion in memcg reclaim, which is the right thing
> to do, however, I suspect it introduced a regression in the behavior of
> try_to_free_mem_cgroup_pages().
>
> The callers of try_to_free_mem_cgroup_pages() expect it to attempt to
> reclaim - not demote - nr_pages from the cgroup. I.e. the memory usage
> of the cgroup should reduce by nr_pages. The callers expect
> try_to_free_mem_cgroup_pages() to also return the number of pages
> reclaimed, not demoted.
>
> However, what try_to_free_mem_cgroup_pages() actually does is it
> unconditionally counts demoted pages as reclaimed pages. So in practice
> when it is called it will often demote nr_pages and return the number of
> demoted pages to the caller. Demoted pages don't lower the memcg usage,
> and so I think try_to_free_mem_cgroup_pages() is not actually doing what
> the callers want it to do.
>
> I suspect various things work suboptimally on memory systems or don't
> work at all due to this:
>
> - memory.high enforcement likely doesn't work (it just demotes nr_pages
>   instead of lowering the memcg usage by nr_pages).
> - try_charge_memcg() will keep retrying the charge while
>   try_to_free_mem_cgroup_pages() is just demoting pages and not actually
>   making any room for the charge.
> - memory.reclaim has a wonky interface. It advertises to the user it
>   reclaims the provided amount but it will actually demote that amount.
>
> There may be more effects to this issue.
>
> To fix these issues I propose shrink_folio_list() to only count pages
> demoted from inside of sc->nodemask to outside of sc->nodemask as
> 'reclaimed'.
>
> For callers such as reclaim_high() or try_charge_memcg() that set
> sc->nodemask to NULL, try_to_free_mem_cgroup_pages() will try to
> actually reclaim nr_pages and return the number of pages reclaimed. No
> demoted pages would count towards the nr_pages requirement.
>
> For callers such as memory_reclaim() that set sc->nodemask,
> try_to_free_mem_cgroup_pages() will free nr_pages from that nodemask
> with either reclaim or demotion.
>
> Tested this change using memory.reclaim interface. With this change,
>
> echo "1m" > memory.reclaim
>
> Will cause freeing of 1m of memory from the cgroup regardless of the
> demotions happening inside.
>
> echo "1m nodes=0" > memory.reclaim
>
> Will cause freeing of 1m of node 0 by demotion if a demotion target is
> available, and by reclaim if no demotion target is available.
>
> Signed-off-by: Mina Almasry
>
> ---
>
> This is developed on top of mm-unstable largely because I need the
> memory.reclaim nodes= arg to test it properly.
> ---
>  mm/vmscan.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 2b42ac9ad755..8f6e993b870d 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1653,6 +1653,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>         LIST_HEAD(free_folios);
>         LIST_HEAD(demote_folios);
>         unsigned int nr_reclaimed = 0;
> +       unsigned int nr_demoted = 0;
>         unsigned int pgactivate = 0;
>         bool do_demote_pass;
>         struct swap_iocb *plug = NULL;
> @@ -2085,7 +2086,17 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>         /* 'folio_list' is always empty here */
>
>         /* Migrate folios selected for demotion */
> -       nr_reclaimed += demote_folio_list(&demote_folios, pgdat);
> +       nr_demoted = demote_folio_list(&demote_folios, pgdat);
> +
> +       /*
> +        * Only count demoted folios as reclaimed if we demoted them from
> +        * inside of the nodemask to outside of the nodemask, hence reclaiming
> +        * pages in the nodemask.
> +        */
> +       if (sc->nodemask && node_isset(pgdat->node_id, *sc->nodemask) &&
> +           !node_isset(next_demotion_node(pgdat->node_id), *sc->nodemask))

next_demotion_node() is just the first demotion target node.  Demotion
can fall back to other allowed target nodes returned by
node_get_allowed_targets().  When the page is demoted to a fallback node
and this fallback node is in sc->nodemask, nr_demoted should not be
added into nr_reclaimed, either.

One way to address this issue is to pass sc->nodemask into
demote_folio_list() and exclude sc->nodemask from the allowed target
demotion nodes; a rough sketch follows at the end of this mail.

> +               nr_reclaimed += nr_demoted;
> +
>         /* Folios that could not be demoted are still in @demote_folios */
>         if (!list_empty(&demote_folios)) {
>                 /* Folios which weren't demoted go back on @folio_list */
> --
> 2.39.0.rc0.267.gcb52ba06e7-goog
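
For reference, a rough and untested sketch of that suggestion. The
helpers (node_get_allowed_targets(), nodes_andnot(), first_node()) are
the existing memory-tiers and nodemask ones; the added nodemask
parameter and the elided migrate_pages() part are only illustrative,
not a tested implementation:

/*
 * Sketch only: shrink_folio_list() would pass sc->nodemask in, e.g.
 *
 *         nr_demoted = demote_folio_list(&demote_folios, pgdat, sc->nodemask);
 *
 * and the caller's nodes would be dropped from the allowed demotion
 * targets, so a fallback demotion inside sc->nodemask never happens
 * and thus is never counted as reclaim.
 */
static unsigned int demote_folio_list(struct list_head *demote_folios,
                                      struct pglist_data *pgdat,
                                      nodemask_t *nodemask)
{
        int target_nid = next_demotion_node(pgdat->node_id);
        nodemask_t allowed_mask;

        if (list_empty(demote_folios))
                return 0;

        if (target_nid == NUMA_NO_NODE)
                return 0;

        node_get_allowed_targets(pgdat, &allowed_mask);

        /* Exclude the nodes the caller is reclaiming from. */
        if (nodemask) {
                nodes_andnot(allowed_mask, allowed_mask, *nodemask);
                if (nodes_empty(allowed_mask))
                        return 0;

                /* The preferred target may itself now be excluded. */
                if (node_isset(target_nid, *nodemask))
                        target_nid = first_node(allowed_mask);
        }

        /*
         * ... unchanged from the current code: set up the
         * migration_target_control with .nid = target_nid and
         * .nmask = &allowed_mask, call migrate_pages(), and return
         * the number of successfully demoted folios ...
         */
        return 0;
}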