From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 11 May 2022 17:35:43 -0700
From: "Paul E. McKenney" <paulmck@kernel.org>
McKenney" To: Minchan Kim Cc: John Hubbard , Andrew Morton , linux-mm , LKML , John Dias , David Hildenbrand Subject: Re: [PATCH v4] mm: fix is_pinnable_page against on cma page Message-ID: <20220512003543.GA2043693@paulmck-ThinkPad-P17-Gen-1> Reply-To: paulmck@kernel.org References: <54b5d177-f2f4-cef2-3a68-cd3b0b276f86@nvidia.com> <8f083802-7ab0-15ec-b37d-bc9471eea0b1@nvidia.com> <20220511234534.GG1790663@paulmck-ThinkPad-P17-Gen-1> <0d90390c-3624-4f93-f8bd-fb29e92237d3@nvidia.com> <20220512002207.GJ1790663@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 597E8400A6 X-Stat-Signature: gj7xn413f399xpgs6sktgstxce1o9tor Authentication-Results: imf11.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=Mnk3xql1; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf11.hostedemail.com: domain of "SRS0=2I+2=VU=paulmck-ThinkPad-P17-Gen-1.home=paulmck@kernel.org" designates 145.40.73.55 as permitted sender) smtp.mailfrom="SRS0=2I+2=VU=paulmck-ThinkPad-P17-Gen-1.home=paulmck@kernel.org" X-Rspam-User: X-HE-Tag: 1652315743-510349 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Wed, May 11, 2022 at 05:26:55PM -0700, Minchan Kim wrote: > On Wed, May 11, 2022 at 05:22:07PM -0700, Paul E. McKenney wrote: > > On Wed, May 11, 2022 at 05:12:32PM -0700, John Hubbard wrote: > > > On 5/11/22 16:57, John Hubbard wrote: > > > > On 5/11/22 16:45, Paul E. McKenney wrote: > > > > > > > > > > > > Well no, because the "&" operation is a single operation on the CPU, and > > > > > > isn't going to get split up like that. > > > > > > > > > > Chiming in a bit late... > > > > > > > > Much appreciated! > > > > > > > > > > > > > > The usual way that this sort of thing causes trouble is if there is a > > > > > single store instruction that changes the value from MIGRATE_ISOLATE > > > > > to MIGRATE_CMA, and if the compiler decides to fetch twice, AND twice, > > > > > > > > Doing an AND twice for "x & constant" this definitely blows my mind. Is > > > > nothing sacred? :) > > > > > > > > > and then combine the results.  This could give a zero outcome where the > > > > > underlying variable never had the value zero. > > > > > > > > > > Is this sort of thing low probability? > > > > > > > > > > Definitely. > > > > > > > > > > Isn't this sort of thing prohibited? > > > > > > > > > > Definitely not. > > > > > > > > > > So what you have will likely work for at least a while longer, but it > > > > > is not guaranteed and it forces you to think a lot harder about what > > > > > the current implementations of the compiler can and cannot do to you. > > > > > > > > > > The following LWN article goes through some of the possible optimizations > > > > > (vandalisms?) in this area: https://lwn.net/Articles/793253/ > > > > > > > > > > > > > hmm, I don't think we hit any of those  cases, do we? Because here, the > > > > "write" side is via a non-inline function that I just don't believe the > > > > compiler is allowed to call twice. Or is it? 
> > > >
> > > > Minchan's earlier summary:
> > > >
> > > > CPU 0                         CPU1
> > > >
> > > >                               set_pageblock_migratetype(MIGRATE_ISOLATE)
> > > >
> > > > if (get_pageblock_migrate(page) & MIGRATE_CMA)
> > > >
> > > >                               set_pageblock_migratetype(MIGRATE_CMA)
> > > >
> > > > if (get_pageblock_migrate(page) & MIGRATE_ISOLATE)
> > > >
> > > > ...where set_pageblock_migratetype() is not inline.
> > > >
> > > > thanks,
> > >
> > > Let me try to say this more clearly: I don't think that the following
> > > __READ_ONCE() statement can actually help anything, given that
> > > get_pageblock_migratetype() is non-inlined:
> > >
> > > +		int __mt = get_pageblock_migratetype(page);
> > > +		int mt = __READ_ONCE(__mt);
> > > +
> > > +		if (mt & (MIGRATE_CMA | MIGRATE_ISOLATE))
> > > +			return false;
> > >
> > > Am I missing anything here?
> >
> > In the absence of future aggression from link-time optimizations (LTO),
> > you are missing nothing.
>
> One thing I want to note is that the Android kernel uses LTO full mode.

I doubt that current LTO can do this sort of optimized inlining, at
least, not yet.

							Thanx, Paul
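
To make the double-fetch scenario above concrete, here is a minimal
userspace sketch of the transformation Paul describes. All names here
(fake_flags, FAKE_MIGRATE_*, cpu1_writer) are hypothetical stand-ins, not
the kernel's; the writer call simply simulates CPU1's single store landing
between the two fetches that a compiler is permitted to emit for one plain
(non-READ_ONCE) load:

	#include <stdio.h>

	#define FAKE_MIGRATE_CMA	0x1
	#define FAKE_MIGRATE_ISOLATE	0x2

	static int fake_flags = FAKE_MIGRATE_ISOLATE;	/* shared state, never 0 */

	static void cpu1_writer(void)
	{
		/* Single store moving the pageblock from ISOLATE to CMA. */
		fake_flags = FAKE_MIGRATE_CMA;
	}

	int main(void)
	{
		/*
		 * The C source says one load and one AND:
		 *
		 *	fake_flags & (FAKE_MIGRATE_CMA | FAKE_MIGRATE_ISOLATE)
		 *
		 * and the variable is never zero.  But a conforming compiler
		 * may split the plain load in two; written out by hand:
		 */
		int a = fake_flags & FAKE_MIGRATE_CMA;		/* fetch #1 sees ISOLATE: 0 */
		cpu1_writer();					/* the store lands in between */
		int b = fake_flags & FAKE_MIGRATE_ISOLATE;	/* fetch #2 sees CMA: 0 */

		printf("combined result: %d\n", a | b);		/* 0: both bits missed */
		return 0;
	}

The zero outcome is exactly the "value the variable never had" from the
message above: each fetch observes a different state, and ANDing each
against its own mask discards the bit the other fetch would have seen.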
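And a sketch of why wrapping a local variable in READ_ONCE() changes
nothing, on John's point that the racy load happens inside a non-inline
helper. READ_ONCE() constrains only the access it wraps, so it has to wrap
the load of the shared location itself; the helper and variable names here
are again hypothetical, and the macro is a userspace stand-in for the
kernel's:

	#include <stdio.h>

	/* Stand-in for the kernel's READ_ONCE(): a volatile access forces
	 * the compiler to emit exactly one load, untorn and unrepeated. */
	#define READ_ONCE(x) (*(const volatile typeof(x) *)&(x))

	static int shared_flags;	/* hypothetical shared word */

	static int helper(void)		/* non-inline getter, as in the hunk */
	{
		return shared_flags;	/* plain load: the race happens HERE */
	}

	int main(void)
	{
		/* The posted hunk's pattern: by the time READ_ONCE() runs,
		 * the shared load is already done and __mt is a private
		 * stack copy, so wrapping it constrains nothing. */
		int __mt = helper();
		int mt = READ_ONCE(__mt);	/* no effect on the load in helper() */

		/* The single-load guarantee has to be applied to the shared
		 * location itself, at the point where it is read: */
		int once = READ_ONCE(shared_flags);

		printf("%d %d\n", mt, once);
		return 0;
	}

As long as helper() stays opaque to the compiler, John's reasoning holds
and the call cannot be duplicated; Paul's closing caveat is that LTO can
make it visible, at which point the plain load inside it is fair game
again.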