From: "Huang, Ying"
To: Michal Hocko
Cc: Mina Almasry, Andrew Morton, Johannes Weiner, Roman Gushchin,
 Shakeel Butt, Muchun Song, Yang Shi, Yosry Ahmed, weixugc@google.com,
 fvdl@google.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3] [mm-unstable] mm: Fix memcg reclaim on memory tiered systems
References: <20221206023406.3182800-1-almasrymina@google.com>
Date: Wed, 07 Dec 2022 09:22:25 +0800
In-Reply-To: (Michal Hocko's message of "Tue, 6 Dec 2022 20:55:27 +0100")
Message-ID: <875yeo80tq.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii

Michal Hocko writes:

> On Tue 06-12-22 08:06:51, Mina Almasry wrote:
>> On Tue, Dec 6, 2022 at 4:20 AM Michal Hocko wrote:
>> >
>> > On Mon 05-12-22 18:34:05, Mina Almasry wrote:
>> > > commit 3f1509c57b1b ("Revert "mm/vmscan: never demote for memcg
>> > > reclaim"") enabled demotion in memcg reclaim, which is the right thing
>> > > to do, however, it introduced a regression in the behavior of
>> > > try_to_free_mem_cgroup_pages().
>> > >
>> > > The callers of try_to_free_mem_cgroup_pages() expect it to attempt to
>> > > reclaim - not demote - nr_pages from the cgroup. I.e. the memory usage
>> > > of the cgroup should reduce by nr_pages. The callers expect
>> > > try_to_free_mem_cgroup_pages() to also return the number of pages
>> > > reclaimed, not demoted.
>> > >
>> > > However, what try_to_free_mem_cgroup_pages() actually does is it
>> > > unconditionally counts demoted pages as reclaimed pages. So in practice
>> > > when it is called it will often demote nr_pages and return the number of
>> > > demoted pages to the caller. Demoted pages don't lower the memcg usage,
>> > > and so try_to_free_mem_cgroup_pages() is not actually doing what the
>> > > callers want it to do.
>> > >
>> > > Various things work suboptimally on memory tiered systems or don't work
>> > > at all due to this:
>> > >
>> > > - memory.high enforcement likely doesn't work (it just demotes nr_pages
>> > >   instead of lowering the memcg usage by nr_pages).
>> > > - try_charge_memcg() will keep retrying the charge while
>> > >   try_to_free_mem_cgroup_pages() is just demoting pages and not actually
>> > >   making any room for the charge.
>> >
>> > This has been brought up during the review
>> > https://lore.kernel.org/all/YoYTEDD+c4GT0xYY@dhcp22.suse.cz/
>> >
>>
>> Ah, I did indeed miss this. Thanks for the pointer. However, I don't
>> understand this bit from your email (sorry, I'm probably missing
>> something):
>>
>> "I suspect this is rather unlikely situation, though. The last tier
>> (without any fallback) should have some memory to reclaim most of
>> the time."
>>
>> Reading the code in try_charge_memcg(), I don't see the last retry of
>> try_to_free_mem_cgroup_pages() do anything special. My concern here is
>> that try_charge_memcg() calls try_to_free_mem_cgroup_pages()
>> MAX_RECLAIM_RETRIES times. Each time that call may demote pages and
>> report back that it was able to 'reclaim' memory, but the charge keeps
>> failing because the memcg reclaim didn't actually make room for the
>> charge. What happens in this case? My understanding is that the memcg
>> oom-killer gets wrongly invoked.
>
> The memcg reclaim shrinks from all zones in the allowed zonelist. In
> general from all nodes. So unless the lower tier is outside of this
> zonelist then there is a zone to reclaim from which cannot demote.
> Correct?
>
>> > > - memory.reclaim has a wonky interface. It advertises to the user that it
>> > >   reclaims the provided amount but it will actually often demote that
>> > >   amount.
>> > >
>> > > There may be more effects to this issue.
>> > >
>> > > To fix these issues I propose shrink_folio_list() to only count pages
>> > > demoted from inside of sc->nodemask to outside of sc->nodemask as
>> > > 'reclaimed'.
>> >
>> > Could you expand on why the node mask matters? From the charge point of
>> > view it should be completely uninteresting as the charge remains.
>> >
>> > I suspect we really need to change the reclaim metrics for memcg reclaim.
>> > In the memory balancing reclaim we can indeed consider demotions as
>> > reclaim because the memory is freed in the end, but for the memcg reclaim
>> > we really should be counting discharges instead. No demotion/migration will
>> > free up charges.
>>
>> I think what you're describing is exactly what this patch aims to do.
>> I'm proposing an interface change to shrink_folio_list() such that it
>> only counts demoted pages as reclaimed iff sc->nodemask is provided by
>> the caller and the demotion removed pages from inside sc->nodemask to
>> outside sc->nodemask. In this case:
>>
>> 1. memory balancing reclaim would pass sc->nodemask=nid to
>> shrink_folio_list() indicating that it should count pages demoted from
>> sc->nodemask as reclaimed.
>>
>> 2. memcg reclaim would pass sc->nodemask=NULL to shrink_folio_list()
>> indicating that it is looking for reclaim across all nodes and no
>> demoted pages should count as reclaimed.
>>
>> Sorry if the commit message was not clear. I can try making it clearer
>> in the next version, but it's already very long.
>
> Either I am missing something or I simply do not understand why you are
> hooked on the nodemask so much. Why can't we have a simple rule that
> only global reclaim considers demotions as nr_reclaimed?

Yes. This sounds reasonable to me, and it simplifies the logic greatly!

Best Regards,
Huang, Ying
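[Editor's note] The rule the thread converges on - count demoted pages
toward nr_reclaimed only for global reclaim, because demotion moves pages
to a lower tier without releasing memcg charges - can be sketched in plain
C. This is a minimal userspace model for illustration only; the struct
fields and the function name account_reclaimed() are hypothetical
stand-ins, not the actual code in mm/vmscan.c.

```c
#include <stdbool.h>

/* Hypothetical, stripped-down stand-in for the kernel's struct
 * scan_control; only the field relevant to the rule is modeled. */
struct scan_control {
	bool cgroup_reclaim;	/* true when reclaim targets a memcg */
};

/*
 * Sketch of the accounting rule: pages actually freed always count as
 * reclaimed; demoted pages count only for global reclaim, since they
 * free memory on the source node but leave the memcg charge in place.
 */
static unsigned long account_reclaimed(const struct scan_control *sc,
				       unsigned long nr_freed,
				       unsigned long nr_demoted)
{
	if (sc->cgroup_reclaim)
		return nr_freed;	/* memcg reclaim: ignore demotions */
	return nr_freed + nr_demoted;	/* global reclaim: demotions count */
}
```

Under this rule, a memcg-targeted call that only demotes pages reports
zero progress, so callers such as try_charge_memcg() no longer mistake
demotion for a reduction in memcg usage.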