Date: Tue, 9 Mar 2021 12:49:49 +0000
From: Matthew Wilcox
To: Alistair Popple
Cc: linux-mm@kvack.org, nouveau@lists.freedesktop.org, bskeggs@redhat.com,
    akpm@linux-foundation.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, kvm-ppc@vger.kernel.org,
    dri-devel@lists.freedesktop.org, jhubbard@nvidia.com,
    rcampbell@nvidia.com, jglisse@redhat.com
Subject: Re: [PATCH v5 1/8] mm: Remove special swap entry functions
Message-ID: <20210309124949.GJ3479805@casper.infradead.org>
References: <20210309121505.23608-1-apopple@nvidia.com>
 <20210309121505.23608-2-apopple@nvidia.com>
In-Reply-To: <20210309121505.23608-2-apopple@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Tue, Mar 09, 2021 at 11:14:58PM +1100, Alistair Popple wrote:
> -static inline struct page *migration_entry_to_page(swp_entry_t entry)
> -{
> -        struct page *p = pfn_to_page(swp_offset(entry));
> -        /*
> -         * Any use of migration entries may only occur while the
> -         * corresponding page is locked
> -         */
> -        BUG_ON(!PageLocked(compound_head(p)));
> -        return p;
> -}
> +static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
> +{
> +        struct page *p = pfn_to_page(swp_offset(entry));
> +
> +        /*
> +         * Any use of migration entries may only occur while the
> +         * corresponding page is locked
> +         */
> +        BUG_ON(is_migration_entry(entry) && !PageLocked(compound_head(p)));
> +
> +        return p;
> +}

I appreciate you're only moving this code, but PageLocked includes an
implicit compound_head():

1. __PAGEFLAG(Locked, locked, PF_NO_TAIL)

2. #define __PAGEFLAG(uname, lname, policy)                        \
           TESTPAGEFLAG(uname, lname, policy)                      \

3. #define TESTPAGEFLAG(uname, lname, policy)                      \
   static __always_inline int Page##uname(struct page *page)       \
           { return test_bit(PG_##lname, &policy(page, 0)->flags); }

4. #define PF_NO_TAIL(page, enforce) ({                            \
           VM_BUG_ON_PGFLAGS(enforce && PageTail(page), page);     \
           PF_POISONED_CHECK(compound_head(page)); })

5. #define PF_POISONED_CHECK(page) ({                              \
           VM_BUG_ON_PGFLAGS(PagePoisoned(page), page);            \
           page; })

This macrology isn't easy to understand the first time you read it
(nor, indeed, the tenth time), so let me decode it:

Substitute 5 into 4 and remove irrelevancies:

6. #define PF_NO_TAIL(page, enforce) compound_head(page)

Expand 1 in 2:

7. TESTPAGEFLAG(Locked, locked, PF_NO_TAIL)

Expand 7 in 3:

8. static __always_inline int PageLocked(struct page *page)
           { return test_bit(PG_locked, &PF_NO_TAIL(page, 0)->flags); }

Expand 6 in 8:

9. static __always_inline int PageLocked(struct page *page)
           { return test_bit(PG_locked, &compound_head(page)->flags); }

(in case it's not clear, compound_head() is idempotent.  that is:
 f(f(a)) == f(a))
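
In other words, the explicit compound_head() in the new BUG_ON buys
nothing, because PageLocked() already resolves the head page itself.
A minimal sketch of how the helper could read with that call dropped
(everything else kept exactly as posted; this is just an illustration,
not a request to change anything beyond the redundant lookup):

        static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
        {
                struct page *p = pfn_to_page(swp_offset(entry));

                /*
                 * Any use of migration entries may only occur while the
                 * corresponding page is locked.  PageLocked() already
                 * operates on compound_head(p), so no need to call it here.
                 */
                BUG_ON(is_migration_entry(entry) && !PageLocked(p));

                return p;
        }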