Date: Sat, 7 May 2022 12:23:01 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: Minchan Kim <minchan@kernel.org>
Cc: linux-mm, LKML, "Paul E. McKenney", John Hubbard, John Dias, David Hildenbrand
Subject: Re: [PATCH v2] mm: fix is_pinnable_page against on cma page
Message-Id: <20220507122301.3b50eb030f9cd6f047f14352@linux-foundation.org>
In-Reply-To: <20220505064429.2818496-1-minchan@kernel.org>
References: <20220505064429.2818496-1-minchan@kernel.org>

On Wed, 4 May 2022 23:44:29 -0700 Minchan Kim <minchan@kernel.org> wrote:

> Pages on CMA area could have MIGRATE_ISOLATE as well as MIGRATE_CMA
> so current is_pinnable_page could miss CMA pages which has MIGRATE_
> ISOLATE. It ends up putting CMA pages longterm pinning possible on
> pin_user_pages APIs so CMA allocation fails.
>
> The CMA allocation path protects the migration type change race
> using zone->lock but what GUP path need to know is just whether the
> page is on CMA area or not rather than exact type. Thus, we don't
> need zone->lock but just checks the migratype in either of
> (MIGRATE_ISOLATE and MIGRATE_CMA).
>
> Adding the MIGRATE_ISOLATE check in is_pinnable_page could cause
> rejecting of pinning the page on MIGRATE_ISOLATE pageblock even
> thouth it's neither CMA nor movable zone if the page is temporarily

"though"

> unmovable. However, the migration failure is general issue, not
> only come from MIGRATE_ISOLATE and the MIGRATE_ISOLATE is also
> transient state like other temporal refcount holding of pages.
>
> ...
>
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1625,8 +1625,18 @@ static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
>  #ifdef CONFIG_MIGRATION
>  static inline bool is_pinnable_page(struct page *page)
>  {
> -	return !(is_zone_movable_page(page) || is_migrate_cma_page(page)) ||
> -		is_zero_pfn(page_to_pfn(page));
> +#ifdef CONFIG_CMA
> +	/*
> +	 * use volatile to use local variable mt instead of
> +	 * refetching mt value.
> +	 */
> +	volatile int mt = get_pageblock_migratetype(page);
> +
> +	if (mt == MIGRATE_CMA || mt == MIGRATE_ISOLATE)
> +		return false;
> +#endif

Open-coded use of `volatile' draws unwelcome attention. What are we
trying to do here? Prevent the compiler from rerunning all of
get_pageblock_migratetype() (really __get_pfnblock_flags_mask())
twice? That would be pretty dumb of it?

Would a suitably-commented something like

	int __mt = get_pageblock_migratetype(page);
	int mt = __READ_ONCE(__mt);

express this better?

> +
> +	return !(is_zone_movable_page(page) || is_zero_pfn(page_to_pfn(page)));
>  }
>  #else
>  static inline bool is_pinnable_page(struct page *page)