From: "Huang, Ying" <ying.huang@intel.com>
To: Michal Hocko
Cc: Mina Almasry, Johannes Weiner, Tejun Heo, Zefan Li, Jonathan Corbet,
    Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton, Yang Shi,
    Yosry Ahmed, weixugc@google.com, fvdl@google.com, bagasdotme@gmail.com,
    cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v3] mm: Add nodes= arg to memory.reclaim
Date: Fri, 16 Dec 2022 11:02:22 +0800
Message-ID: <87bko49hkx.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: (Michal Hocko's message of "Thu, 15 Dec 2022 10:21:25 +0100")
References: <20221202223533.1785418-1-almasrymina@google.com>
    <87k02volwe.fsf@yhuang6-desk2.ccr.corp.intel.com>
    <87mt7pdxm1.fsf@yhuang6-desk2.ccr.corp.intel.com>

Michal Hocko writes:

> On Thu 15-12-22 13:50:14, Huang, Ying wrote:
>> Michal Hocko writes:
>>
>> > On Tue 13-12-22 11:29:45, Mina Almasry wrote:
>> >> On Tue, Dec 13, 2022 at 6:03 AM Michal Hocko wrote:
>> >> >
>> >> > On Tue 13-12-22 14:30:40, Johannes Weiner wrote:
>> >> > > On Tue, Dec 13, 2022 at 02:30:57PM +0800, Huang, Ying wrote:
>> >> > [...]
>> >> > > > After this discussion, I think the solution may be to use
>> >> > > > different interfaces for "proactive demote" and "proactive
>> >> > > > reclaim".  That is, reconsider "memory.demote".  In this way,
>> >> > > > we will always uncharge the cgroup for "memory.reclaim".  This
>> >> > > > avoids the possible confusion there.  And, because demotion is
>> >> > > > considered aging, we don't need to disable demotion for
>> >> > > > "memory.reclaim", just don't count it.
>> >> > >
>> >> > > Hm, so in summary:
>> >> > >
>> >> > > 1) memory.reclaim would demote and reclaim like today, but it
>> >> > > would change to only count reclaimed pages against the goal.
>> >> > >
>> >> > > 2) memory.demote would only demote.
>> >>
>> >> If the above 2 points are agreeable then yes, this sounds good to me
>> >> and does address our use case.
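To make the proposed split concrete, here is a purely illustrative
userspace sketch in C.  memory.reclaim exists today and the "nodes="
argument is what the patch under discussion adds; "memory.demote", the
cgroup path, and the exact request strings are assumptions taken from the
discussion above, not an existing kernel interface.

/*
 * Illustrative sketch only.  "memory.demote" is a proposal in this thread
 * and does not exist; the cgroup path and request syntax are assumptions.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int cgroup_write(const char *path, const char *req)
{
	int fd = open(path, O_WRONLY);
	ssize_t ret;

	if (fd < 0) {
		perror(path);
		return -1;
	}
	ret = write(fd, req, strlen(req));
	close(fd);
	/* Treat anything other than a full successful write as failure. */
	return ret == (ssize_t)strlen(req) ? 0 : -1;
}

int main(void)
{
	/* 1) Proactive reclaim: only reclaimed pages would count against 512M. */
	if (cgroup_write("/sys/fs/cgroup/workload/memory.reclaim",
			 "512M nodes=0") != 0)
		fprintf(stderr, "reclaim request failed or fell short\n");

	/* 2) Proposed proactive demotion: demote only, never fall back to reclaim. */
	if (cgroup_write("/sys/fs/cgroup/workload/memory.demote", "512M") != 0)
		fprintf(stderr, "demote request failed\n");

	return 0;
}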
>> >>
>> >> > > a) What if the demotion targets are full? Would it reclaim or fail?
>> >> > >
>> >>
>> >> Wei will chime in if he disagrees, but I think we _require_ that it
>> >> fails, not fall back to reclaim.  The interface is asking for
>> >> demotion, and is called memory.demote.  For such an interface to fall
>> >> back to reclaim would be very confusing to userspace and may trigger
>> >> reclaim on a high-priority job that we want to shield from proactive
>> >> reclaim.
>> >
>> > But what should happen if the immediate demotion target is full but
>> > lower tiers are still usable?  Should the first one demote before
>> > allowing demotion from the top tier?
>> >
>> >> > > 3) Would memory.reclaim and memory.demote still need nodemasks?
>> >>
>> >> memory.demote will need a nodemask, for sure.  Today the nodemask
>> >> would be useful if there is a specific node in the top tier that is
>> >> overloaded and we want to reduce the pressure by demoting.  In the
>> >> future there will be N tiers and the nodemask says which tier to
>> >> demote from.
>> >
>> > OK, so what is the exact semantic of the node mask?  Does it control
>> > where to demote from, or to, or both?
>> >
>> >> I don't think memory.reclaim would need a nodemask anymore?  At least
>> >> I no longer see the use for it for us.
>> >>
>> >> > > Would they return -EINVAL if a) memory.reclaim gets passed only
>> >> > > toptier nodes or b) memory.demote gets passed any lasttier nodes?
>> >> >
>> >>
>> >> Honestly it would be great if memory.reclaim could force reclaim from
>> >> top tier nodes.  It breaks the aging pipeline, yes, but if the user
>> >> is specifically asking for that because they decided in their use
>> >> case it's a good idea, then the kernel should comply IMO.  Not a
>> >> strict requirement for us.  Wei will chime in if he disagrees.
>> >
>> > That would require a nodemask to say which nodes to reclaim, no?  The
>> > default behavior should be in line with what standard memory reclaim
>> > does.  If demotion is a part of that process, so should it be a part
>> > of memory.reclaim.  If we want finer control, then a nodemask is
>> > really a must, and the nodemask should constrain both aging and
>> > reclaim.
>> >
>> >> memory.demote returning -EINVAL for lasttier nodes makes sense to me.
>> >>
>> >> > I would also add
>> >> > 4) Do we want to allow control of the demotion path (e.g. which
>> >> > node to demote from and to) and how to achieve that?
>> >>
>> >> We care deeply about specifying which node to demote _from_.  That
>> >> would be some node that is approaching pressure and we're looking for
>> >> proactive savings from.  So far I haven't seen any reason to control
>> >> which nodes to demote _to_.  The kernel deciding that based on the
>> >> aging pipeline and the node distances sounds good to me.  Obviously
>> >> someone else may find that useful.
>> >
>> > Please keep in mind that the interface should be really prepared for
>> > future extensions, so try to abstract from your immediate use cases.
>>
>> I see two requirements here.  One is to control the demotion source,
>> that is, which nodes to free memory on.  The other is to control the
>> demotion path.  I think that we can use two different parameters for
>> them, for example, "from=<source nodes>" and "to=<target nodes>".  In
>> most cases we don't need to control the demotion path, because in the
>> current implementation the nodes in the lower tiers in the same socket
>> (local nodes) will be preferred.  I think that this is the desired
>> behavior in most cases.
>
> Even if the demotion path is not really required at the moment, we
> should keep in mind potential future extensions, e.g. when
> userspace-based balancing is to be implemented because the default
> behavior cannot capture userspace policies (one example would be
> enforcing a prioritization of containers when some container's demoted
> pages would need to be demoted further to free up space for a different
> workload).

Yes.  We should consider the potential requirements.
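In case it helps, here is another purely illustrative C sketch of how a
userspace tool might build such a request.  The "from="/"to=" parameters,
the node lists, and the request string are only the suggestion above, not
an existing interface; the overlap check reflects the loop-avoidance
concern discussed further down in this mail.

/*
 * Illustrative sketch only.  The "from=<source nodes>"/"to=<target nodes>"
 * parameters are just a suggestion from this thread, not an existing
 * kernel interface.
 */
#include <stdio.h>
#include <string.h>

#define MAX_NODES 64

/* Parse a node list such as "0,2-3" into a bitmask.  Returns 0 on success. */
static int parse_nodes(const char *s, unsigned long *mask)
{
	char buf[128], *tok, *save;

	*mask = 0;
	snprintf(buf, sizeof(buf), "%s", s);
	for (tok = strtok_r(buf, ",", &save); tok;
	     tok = strtok_r(NULL, ",", &save)) {
		int lo, hi;
		int n = sscanf(tok, "%d-%d", &lo, &hi);

		if (n <= 0)
			return -1;
		if (n == 1)
			hi = lo;
		if (lo < 0 || hi >= MAX_NODES || lo > hi)
			return -1;
		while (lo <= hi)
			*mask |= 1UL << lo++;
	}
	return 0;
}

int main(void)
{
	unsigned long from, to;

	if (parse_nodes("0-1", &from) || parse_nodes("2", &to)) {
		fprintf(stderr, "bad node list\n");
		return 1;
	}
	if (from & to) {
		/* Overlapping sets could demote pages back and forth in a loop. */
		fprintf(stderr, "from/to node sets overlap; refusing\n");
		return 1;
	}
	/* The request string such a tool might write to the proposed memory.demote. */
	printf("1G from=0-1 to=2\n");
	return 0;
}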
>> >> > 5) Is the demotion api restricted to multi-tier systems or any
>> >> > numa configuration allowed as well?
>> >> >
>> >>
>> >> Demotion will of course not work on single-tiered systems.  The
>> >> interface may return some failure on such systems or not be
>> >> available at all.
>> >
>> > Is there any strong reason for that?  We do not have any interface to
>> > control NUMA balancing from userspace.  Why can't we use this
>> > interface for that purpose?
>>
>> Do you mean to demote the cold pages from the specified source nodes to
>> the specified target nodes in different sockets?  We don't do that, to
>> avoid loops in the demotion path.  If we prevent the target nodes from
>> demoting cold pages to the source nodes at the same time, it seems
>> doable.
>
> Loops could be avoided by properly specifying from and to nodes if this
> is going to be a fine-grained interface to control demotion.

Yes.

Best Regards,
Huang, Ying