Date: Wed, 11 May 2022 17:49:49 -0700
From: "Paul E. McKenney"
McKenney" To: John Hubbard Cc: Minchan Kim , Andrew Morton , linux-mm , LKML , John Dias , David Hildenbrand Subject: Re: [PATCH v4] mm: fix is_pinnable_page against on cma page Message-ID: <20220512004949.GK1790663@paulmck-ThinkPad-P17-Gen-1> Reply-To: paulmck@kernel.org References: <8f083802-7ab0-15ec-b37d-bc9471eea0b1@nvidia.com> <20220511234534.GG1790663@paulmck-ThinkPad-P17-Gen-1> <0d90390c-3624-4f93-f8bd-fb29e92237d3@nvidia.com> <20220512002207.GJ1790663@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Stat-Signature: e3if1kprq9ababbasffg5gsbkxx8r55x X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 588161000A9 Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=k6d9rbL1; spf=pass (imf14.hostedemail.com: domain of "SRS0=2I+2=VU=paulmck-ThinkPad-P17-Gen-1.home=paulmck@kernel.org" designates 145.40.68.75 as permitted sender) smtp.mailfrom="SRS0=2I+2=VU=paulmck-ThinkPad-P17-Gen-1.home=paulmck@kernel.org"; dmarc=pass (policy=none) header.from=kernel.org X-Rspam-User: X-HE-Tag: 1652316591-37608 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Wed, May 11, 2022 at 05:34:52PM -0700, John Hubbard wrote: > On 5/11/22 17:26, Minchan Kim wrote: > > > > Let me try to say this more clearly: I don't think that the following > > > > __READ_ONCE() statement can actually help anything, given that > > > > get_pageblock_migratetype() is non-inlined: > > > > > > > > + int __mt = get_pageblock_migratetype(page); > > > > + int mt = __READ_ONCE(__mt); > > > > + > > > > + if (mt & (MIGRATE_CMA | MIGRATE_ISOLATE)) > > > > + return false; > > > > > > > > > > > > Am I missing anything here? > > > > > > In the absence of future aggression from link-time optimizations (LTO), > > > you are missing nothing. > > > > A thing I want to note is Android kernel uses LTO full mode. > > Thanks Paul for explaining the state of things. > > Minchan, how about something like very close to your original draft, > then, but with a little note, and the "&" as well: > > int __mt = get_pageblock_migratetype(page); > > /* > * Defend against future compiler LTO features, or code refactoring > * that inlines the above function, by forcing a single read. Because, this > * routine races with set_pageblock_migratetype(), and we want to avoid > * reading zero, when actually one or the other flags was set. > */ > int mt = __READ_ONCE(__mt); > > if (mt & (MIGRATE_CMA | MIGRATE_ISOLATE)) > return false; > > > ...which should make everyone comfortable and protected from the > future sins of the compiler and linker teams? :) This would work, but it would force a store to the stack and an immediate reload. Which might be OK on this code path. But using READ_ONCE() in (I think?) __get_pfnblock_flags_mask() would likely generate the same code that is produced today. word = READ_ONCE(bitmap[word_bitidx]); But I could easily have missed a turn in that cascade of functions. ;-) Or there might be some code path that really hates a READ_ONCE() in that place. Thanx, Paul