Date: Wed, 11 May 2022 16:45:34 -0700
From: "Paul E. McKenney"
Reply-To: paulmck@kernel.org
To: John Hubbard
Cc: Minchan Kim, Andrew Morton, linux-mm, LKML, John Dias, David Hildenbrand
Subject: Re: [PATCH v4] mm: fix is_pinnable_page against on cma page
Message-ID: <20220511234534.GG1790663@paulmck-ThinkPad-P17-Gen-1>
References: <2ffa7670-04ea-bb28-28f8-93a9b9eea7e8@nvidia.com> <54b5d177-f2f4-cef2-3a68-cd3b0b276f86@nvidia.com> <8f083802-7ab0-15ec-b37d-bc9471eea0b1@nvidia.com>
In-Reply-To: <8f083802-7ab0-15ec-b37d-bc9471eea0b1@nvidia.com>

On Wed, May 11, 2022 at 04:13:10PM -0700, John Hubbard wrote:
> On 5/11/22 16:08, Minchan Kim wrote:
> > > OK, so the code checks the wrong item each time. But the code really
> > > only needs to know "is either _CMA or _ISOLATE set?".
> > > And so you
> >
> > Yes.
> >
> > > can just sidestep the entire question by writing it like this:
> > >
> > > 	int mt = get_pageblock_migratetype(page);
> > >
> > > 	if (mt & (MIGRATE_ISOLATE | MIGRATE_CMA))
> > > 		return false;
> >
> > I am confused. Isn't it same question?
> >
> > set_pageblock_migratetype(MIGRATE_ISOLATE)
> > if (get_pageblock_migrate(page) & MIGRATE_CMA)
> >
> > set_pageblock_migratetype(MIGRATE_CMA)
> >
> > if (get_pageblock_migrate(page) & MIGRATE_ISOLATE)
>
> Well no, because the "&" operation is a single operation on the CPU, and
> isn't going to get split up like that.

Chiming in a bit late...

The usual way that this sort of thing causes trouble is if there is a
single store instruction that changes the value from MIGRATE_ISOLATE
to MIGRATE_CMA, and if the compiler decides to fetch twice, AND twice,
and then combine the results.  This could give a zero outcome where the
underlying variable never had the value zero.

Is this sort of thing low probability?  Definitely.

Isn't this sort of thing prohibited?  Definitely not.

So what you have will likely work for at least a while longer, but it
is not guaranteed and it forces you to think a lot harder about what
the current implementations of the compiler can and cannot do to you.
The following LWN article goes through some of the possible
optimizations (vandalisms?) in this area:

	https://lwn.net/Articles/793253/

In the end, it is your code, so you get to decide how much you would
like to keep track of what compilers get up to over time.  ;-)

							Thanx, Paul