From: Michal Hocko <mhocko@suse.com>
To: Feng Tang <feng.tang@intel.com>
Cc: linux-mm@kvack.org, Andrew Morton, David Rientjes, Dave Hansen,
	Ben Widawsky, linux-kernel@vger.kernel.org, Andrea Arcangeli,
	Mel Gorman, Mike Kravetz, Randy Dunlap, Vlastimil Babka,
	Andi Kleen, Dan Williams, ying.huang@intel.com
Date: Mon, 31 May 2021 09:00:25 +0200
Subject: Re: [PATCH v1 4/4] mm/mempolicy: kill MPOL_F_LOCAL bit
In-Reply-To: <20210528043954.GA32292@shbuild999.sh.intel.com>
References: <1622005302-23027-1-git-send-email-feng.tang@intel.com>
 <1622005302-23027-5-git-send-email-feng.tang@intel.com>
 <20210527121041.GA7743@shbuild999.sh.intel.com>
 <20210527133436.GD7743@shbuild999.sh.intel.com>
 <20210528043954.GA32292@shbuild999.sh.intel.com>

On Fri 28-05-21 12:39:54, Feng Tang wrote:
> On Thu, May 27, 2021 at 05:34:56PM +0200, Michal Hocko wrote:
> > On Thu 27-05-21 21:34:36, Feng Tang wrote:
> > > On Thu, May 27, 2021 at 02:26:24PM +0200, Michal Hocko wrote:
> > > > On Thu 27-05-21 20:10:41, Feng Tang wrote:
> > > > > On Thu, May 27, 2021 at 10:20:08AM +0200, Michal Hocko wrote:
> > > > > > On Wed 26-05-21 13:01:42, Feng Tang wrote:
> > > > > > > Now the only remaining case of a real 'local' policy faked by a
> > > > > > > 'prefer' policy plus the MPOL_F_LOCAL bit is:
> > > > > > >
> > > > > > > A valid 'prefer' policy with a valid preferred node is rebound
> > > > > > > to a nodemask which doesn't contain the preferred node; it will
> > > > > > > then handle allocations with the 'local' policy.
> > > > > > >
> > > > > > > Add a new MPOL_F_LOCAL_TEMP bit for this case, and kill the
> > > > > > > MPOL_F_LOCAL bit, which simplifies the code considerably.
> > > > > >
> > > > > > As I've pointed out in the reply to the previous patch, it would
> > > > > > have been much better if most of the MPOL_F_LOCAL usage were gone
> > > > > > by this patch.
> > > > > >
> > > > > > I also dislike the new MPOL_F_LOCAL_TEMP. This smells like
> > > > > > sneaking the hack back in after you have painstakingly removed
> > > > > > it, so this looks like a step backwards to me. I also do not
> > > > > > understand why we need the rebind callback for the local policy
> > > > > > at all. There is no nodemask for local, so what is going on here?
> > > > >
> > > > > This is special case 4 for the 'prefer' policy with the
> > > > > MPOL_F_STATIC_NODES flag set. Say it prefers node 1; when it is
> > > > > later rebound to a new nodemask of nodes 2-3, the current code
> > > > > will add the MPOL_F_LOCAL bit and actually perform the 'local'
> > > > > policy. And if it is later rebound again to a nodemask of nodes
> > > > > 1-2, it will be restored back to the 'prefer' policy with
> > > > > preferred node 1.
> > > >
> > > > Honestly, I still do not follow the actual problem.
> > >
> > > I was confused too, and don't know the original thought behind it.
> > > This case 4 was just inferred by reading the code.
> > >
> > > > A preferred node is a _hint_. If you rebind the task to a different
> > > > cpuset then why should we actually care? The allocator will fall
> > > > back to the closest node according to the distance metric. Maybe
> > > > the original code was trying to handle that in some way, but I
> > > > really do fail to understand that code and I strongly suspect it is
> > > > more likely overengineered than backed by a real use case. I might
> > > > be wrong here, but then this is an excellent opportunity to clarify
> > > > all those subtleties.
> > >
> > > From the code, the original special handling may be needed in 3
> > > places:
> > >   get_policy_nodemask()
> > >   policy_node()
> > >   mempolicy_slab_node()
> > > so as not to return the preset prefer_nid.
> >
> > I am sorry but I do not follow. What is actually wrong if the preferred
> > node is outside of the cpuset nodemask?
>
> Sorry, I didn't make it clear. With the current code logic it will
> perform as the 'local' policy, but its mode is kept as 'prefer', so the
> code still has this tricky bit checking when these APIs are called for
> such a policy. I agree with you that this ping-pong rebind() may be
> overengineering, so for this case can we just change the policy from
> 'prefer' to 'local' and drop the tricky bit manipulation? As 'prefer'
> is just a hint, if a rebind misses the target node there is no need to
> stick with the 'prefer' policy.

Again, I really do not understand why we should rebind or mark anything
as local here. Is this a documented/expected behavior? What if somebody
just changes the cpuset to include the preferred node again? Is it
expected to have a local preference now?

I can see you have posted a newer version, which I haven't looked at
yet, but this is really better to get resolved before building more on
top of it. And let me be explicit: I do believe that rebinding a
preferred policy is just bogus and should be dropped altogether, on the
grounds that a preference is a mere hint from userspace about where to
start the allocation. Unless I am missing something, cpusets will always
be authoritative for the final placement. The preferred node just acts
as a starting point and it should really be preserved when the cpuset
changes. Otherwise we have very subtle behavioral corner cases.
-- 
Michal Hocko
SUSE Labs
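
[Editorial addendum] For readers following the "case 4" ping-pong that
Feng Tang describes above, here is a standalone toy model of that
behavior. This is illustrative C, not the kernel's mm/mempolicy.c; the
names toy_mempolicy and toy_rebind_preferred are invented for the
sketch and nodemasks are plain bitmasks. It shows a 'prefer' policy
with MPOL_F_STATIC_NODES degrading to 'local' when a rebind drops its
preferred node, and being restored when a later rebind brings the node
back:

#include <stdio.h>

/* Toy model only -- NOT the kernel implementation. */
#define MPOL_F_STATIC_NODES	0x1u
#define MPOL_F_LOCAL		0x2u	/* the "fake local" marker under debate */

struct toy_mempolicy {
	unsigned int flags;
	int preferred_node;	/* the user's original preference */
};

/* If the preferred node survives the cpuset rebind, keep (or restore)
 * the 'prefer' behavior; otherwise degrade to 'local'. */
static void toy_rebind_preferred(struct toy_mempolicy *pol,
				 unsigned int new_mask)
{
	if (!(pol->flags & MPOL_F_STATIC_NODES))
		return;

	if (new_mask & (1u << pol->preferred_node))
		pol->flags &= ~MPOL_F_LOCAL;	/* restore 'prefer' */
	else
		pol->flags |= MPOL_F_LOCAL;	/* behaves as 'local' now */
}

int main(void)
{
	struct toy_mempolicy pol = {
		.flags = MPOL_F_STATIC_NODES,
		.preferred_node = 1,		/* prefer node 1 */
	};

	toy_rebind_preferred(&pol, (1u << 2) | (1u << 3));	/* rebind to 2-3 */
	printf("rebind to {2,3}: acts as local? %d\n",
	       !!(pol.flags & MPOL_F_LOCAL));			/* prints 1 */

	toy_rebind_preferred(&pol, (1u << 1) | (1u << 2));	/* rebind to 1-2 */
	printf("rebind to {1,2}: acts as local? %d\n",
	       !!(pol.flags & MPOL_F_LOCAL));			/* prints 0 */
	return 0;
}

Under the semantics Michal argues for, toy_rebind_preferred() would
simply leave the policy untouched and let the cpuset filter placement
at allocation time.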
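
Michal's point that a preference is a mere hint is also visible from
userspace. A minimal sketch using the real set_mempolicy(2) syscall
(it assumes <numaif.h> from the libnuma development headers; the file
name hint_demo.c is only for the compile line):

/* Build with: gcc hint_demo.c -lnuma
 * On a machine without a NUMA node 1 the syscall fails with EINVAL,
 * so treat this purely as a sketch. */
#include <numaif.h>
#include <stdio.h>

int main(void)
{
	unsigned long nodemask = 1UL << 1;	/* node 1 */

	/* MPOL_PREFERRED installs a *hint*: allocations start from node 1
	 * but silently fall back to other nodes (by distance) when node 1
	 * cannot satisfy them. The kernel never fails an allocation merely
	 * because the preferred node is unavailable. */
	if (set_mempolicy(MPOL_PREFERRED, &nodemask, 8 * sizeof(nodemask))) {
		perror("set_mempolicy");
		return 1;
	}
	puts("policy installed: prefer node 1 (hint only)");
	return 0;
}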