Date: Thu, 15 Dec 2022 10:21:25 +0100
From: Michal Hocko <mhocko@suse.com>
To: "Huang, Ying"
Cc: Mina Almasry, Johannes Weiner, Tejun Heo, Zefan Li, Jonathan Corbet,
 Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton, Yang Shi,
 Yosry Ahmed, weixugc@google.com, fvdl@google.com, bagasdotme@gmail.com,
 cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v3] mm: Add nodes= arg to memory.reclaim
References: <20221202223533.1785418-1-almasrymina@google.com>
 <87k02volwe.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87mt7pdxm1.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <87mt7pdxm1.fsf@yhuang6-desk2.ccr.corp.intel.com>

On Thu 15-12-22 13:50:14, Huang, Ying wrote:
> Michal Hocko writes:
>
> > On Tue 13-12-22 11:29:45, Mina Almasry wrote:
> >> On Tue, Dec 13, 2022 at 6:03 AM Michal Hocko wrote:
> >> >
> >> > On Tue 13-12-22 14:30:40, Johannes Weiner wrote:
> >> > > On Tue, Dec 13, 2022 at 02:30:57PM +0800, Huang, Ying wrote:
> >> > [...]
> >> > > > After this discussion, I think the solution may be to use different
> >> > > > interfaces for "proactive demote" and "proactive reclaim". That is,
> >> > > > reconsider "memory.demote". In this way, we will always uncharge the
> >> > > > cgroup for "memory.reclaim". This avoids the possible confusion there.
> >> > > > And, because demotion is considered aging, we don't need to disable
> >> > > > demotion for "memory.reclaim", just don't count it.
> >> > >
> >> > > Hm, so in summary:
> >> > >
> >> > > 1) memory.reclaim would demote and reclaim like today, but it would
> >> > > change to only count reclaimed pages against the goal.
> >> > >
> >> > > 2) memory.demote would only demote.
> >> > >
> >>
> >> If the above 2 points are agreeable then yes, this sounds good to me
> >> and does address our use case.
> >>
> >> > > a) What if the demotion targets are full? Would it reclaim or fail?
> >> >
> >>
> >> Wei will chime in if he disagrees, but I think we _require_ that it
> >> fails, not falls back to reclaim. The interface is asking for
> >> demotion, and is called memory.demote. For such an interface to fall
> >> back to reclaim would be very confusing to userspace and may trigger
> >> reclaim on a high priority job that we want to shield from proactive
> >> reclaim.
> >
> > But what should happen if the immediate demotion target is full but
> > lower tiers are still usable? Should the first one demote before
> > allowing demotion from the top tier?
> >
> >> > > 3) Would memory.reclaim and memory.demote still need nodemasks?
> >>
> >> memory.demote will need a nodemask, for sure. Today the nodemask would
> >> be useful if there is a specific node in the top tier that is
> >> overloaded and we want to reduce the pressure by demoting. In the
> >> future there will be N tiers and the nodemask says which tier to
> >> demote from.
> >
> > OK, so what is the exact semantic of the nodemask? Does it control
> > where to demote from, or to, or both?
> >
> >> I don't think memory.reclaim would need a nodemask anymore? At least I
> >> no longer see the use for it for us.
> >>
> >> > > Would
> >> > > they return -EINVAL if a) memory.reclaim gets passed only toptier
> >> > > nodes or b) memory.demote gets passed any lasttier nodes?
> >> >
> >>
> >> Honestly it would be great if memory.reclaim could force reclaim from
> >> top tier nodes. It breaks the aging pipeline, yes, but if the user is
> >> specifically asking for that because they decided in their use case
> >> it's a good idea then the kernel should comply IMO. Not a strict
> >> requirement for us. Wei will chime in if he disagrees.
> >
> > That would require a nodemask to say which nodes to reclaim from, no?
> > The default behavior should be in line with what standard memory
> > reclaim does. If demotion is a part of that process then so should it
> > be a part of memory.reclaim. If we want finer control then a nodemask
> > is really a must, and then the nodemask should constrain both aging
> > and reclaim.
> >
> >> memory.demote returning -EINVAL for lasttier nodes makes sense to me.
> >>
> >> > I would also add
> >> > 4) Do we want to allow control of the demotion path (e.g. which node
> >> > to demote from and to) and how to achieve that?
> >>
> >> We care deeply about specifying which node to demote _from_. That
> >> would be some node that is approaching pressure and we're looking for
> >> proactive savings from. So far I haven't seen any reason to control
> >> which nodes to demote _to_. The kernel deciding that based on the
> >> aging pipeline and the node distances sounds good to me. Obviously
> >> someone else may find that useful.
> >
> > Please keep in mind that the interface should be really prepared for
> > future extensions, so try to abstract from your immediate use cases.
>
> I see two requirements here: one is to control the demotion source,
> that is, which nodes to free memory from. The other is to control the
> demotion path. I think that we can use two different parameters for
> them, for example, "from=<source nodes>" and "to=<target nodes>". In
> most cases we don't need to control the demotion path.
> Because in the current implementation, the nodes in the lower tiers in
> the same socket (local nodes) will be preferred. I think that this is
> the desired behavior in most cases.

Even if the demotion path is not really required at the moment, we should
keep future potential extensions in mind, e.g. when userspace-based
balancing is to be implemented because the default behavior cannot capture
userspace policies (one example would be enforcing a prioritization of
containers when some container's demoted pages would need to be demoted
further to free up space for a different workload).

> >> > 5) Is the demotion api restricted to multi-tier systems or any numa
> >> > configuration allowed as well?
> >> >
> >>
> >> demotion will of course not work on single tiered systems. The
> >> interface may return some failure on such systems or not be available
> >> at all.
> >
> > Is there any strong reason for that? We do not have any interface to
> > control NUMA balancing from userspace. Why can't we use the interface
> > for that purpose?
>
> Do you mean to demote the cold pages from the specified source nodes to
> the specified target nodes in different sockets? We don't do that to
> avoid loops in the demotion path. If we prevent the target nodes from
> demoting cold pages to the source nodes at the same time, it seems
> doable.

Loops could be avoided by properly specifying the from and to nodes if
this is going to be a fine-grained interface to control demotion.

-- 
Michal Hocko
SUSE Labs
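
For reference, the nodes= argument added by the patch under discussion is
exercised from userspace by writing a reclaim target plus an optional node
list into the cgroup's memory.reclaim file. The snippet below is only a
minimal sketch of that: the cgroup path, the reclaim amount and the node
numbers are made-up values chosen for illustration.

/* Sketch: ask the kernel to proactively reclaim up to 512M from the
 * "workload" cgroup, restricted to NUMA nodes 2-3 via the proposed
 * nodes= argument.  Path, size and nodes are hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/fs/cgroup/workload/memory.reclaim";
	const char *req = "512M nodes=2-3";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* The write fails if the requested amount cannot be reclaimed
	 * or if the argument string is rejected. */
	if (write(fd, req, strlen(req)) < 0)
		perror("write");
	close(fd);
	return 0;
}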
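
By contrast, memory.demote does not exist; it is only being proposed in
this thread. The following is a purely hypothetical sketch of what the
from=/to= semantics debated above could look like from userspace. Every
name in it (the file, the from=/to= keywords, the node numbers) is an
assumption made for illustration, not an existing kernel interface.

/* Hypothetical sketch of the memory.demote interface discussed above:
 * demote up to 256M of cold pages from top-tier node 0, allowing only
 * lower-tier nodes 2-3 as demotion targets.  Nothing here exists in
 * the kernel today. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/fs/cgroup/workload/memory.demote";
	const char *req = "256M from=0 to=2-3";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Per the discussion, such a write would fail rather than fall
	 * back to reclaim if the target nodes cannot take the pages. */
	if (write(fd, req, strlen(req)) < 0)
		perror("write");
	close(fd);
	return 0;
}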