From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 2 Nov 2020 17:18:29 +0200
From: Mike Rapoport
To: David Hildenbrand
Cc: Andrew Morton, Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt,
	Borislav Petkov, Catalin Marinas, Christian Borntraeger,
	Christoph Lameter, "David S. Miller", Dave Hansen, David Rientjes,
	"Edgecombe, Rick P", "H. Peter Anvin", Heiko Carstens, Ingo Molnar,
	Joonsoo Kim, "Kirill A. Shutemov", Len Brown, Michael Ellerman,
	Mike Rapoport, Palmer Dabbelt, Paul Mackerras, Paul Walmsley,
	Pavel Machek, Pekka Enberg, Peter Zijlstra, "Rafael J. Wysocki",
	Thomas Gleixner, Vasily Gorbik, Will Deacon,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-pm@vger.kernel.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org,
	x86@kernel.org
Subject: Re: [PATCH v3 4/4] arch, mm: make kernel_page_present() always available
Message-ID: <20201102151829.GC4879@kernel.org>
References: <20201101170815.9795-1-rppt@kernel.org>
	<20201101170815.9795-5-rppt@kernel.org>
	<08db307a-b093-d7aa-7364-045f328ab147@redhat.com>
In-Reply-To: <08db307a-b093-d7aa-7364-045f328ab147@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Mon, Nov 02, 2020 at 10:28:14AM +0100, David Hildenbrand wrote:
> On 01.11.20 18:08, Mike Rapoport wrote:
> > From: Mike Rapoport
> > 
> > For architectures that enable ARCH_HAS_SET_MEMORY, having the ability to
> > verify that a page is mapped in the kernel direct map can be useful
> > regardless of hibernation.
> > 
> > Add a RISC-V implementation of kernel_page_present(), update its forward
> > declarations and stubs to be a part of the set_memory API, and remove the
> > ugly ifdefery in include/linux/mm.h around the current declarations of
> > kernel_page_present().
> > 
> > Signed-off-by: Mike Rapoport
> > ---
> >  arch/arm64/include/asm/cacheflush.h |  1 +
> >  arch/arm64/mm/pageattr.c            |  4 +---
> >  arch/riscv/include/asm/set_memory.h |  1 +
> >  arch/riscv/mm/pageattr.c            | 29 +++++++++++++++++++++++++++++
> >  arch/x86/include/asm/set_memory.h   |  1 +
> >  arch/x86/mm/pat/set_memory.c        |  4 +---
> >  include/linux/mm.h                  |  7 -------
> >  include/linux/set_memory.h          |  5 +++++
> >  8 files changed, 39 insertions(+), 13 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> > index 9384fd8fc13c..45217f21f1fe 100644
> > --- a/arch/arm64/include/asm/cacheflush.h
> > +++ b/arch/arm64/include/asm/cacheflush.h
> > @@ -140,6 +140,7 @@ int set_memory_valid(unsigned long addr, int numpages, int enable);
> >  
> >  int set_direct_map_invalid_noflush(struct page *page);
> >  int set_direct_map_default_noflush(struct page *page);
> > +bool kernel_page_present(struct page *page);
> >  
> >  #include <asm-generic/cacheflush.h>
> > 
> > diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> > index 439325532be1..92eccaf595c8 100644
> > --- a/arch/arm64/mm/pageattr.c
> > +++ b/arch/arm64/mm/pageattr.c
> > @@ -186,8 +186,8 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
> >  
> >  	set_memory_valid((unsigned long)page_address(page), numpages, enable);
> >  }
> > +#endif /* CONFIG_DEBUG_PAGEALLOC */
> >  
> > -#ifdef CONFIG_HIBERNATION
> >  /*
> >   * This function is used to determine if a linear map page has been marked as
> >   * not-valid. Walk the page table and check the PTE_VALID bit. This is based
> > @@ -234,5 +234,3 @@ bool kernel_page_present(struct page *page)
> >  	ptep = pte_offset_kernel(pmdp, addr);
> >  	return pte_valid(READ_ONCE(*ptep));
> >  }
> > -#endif /* CONFIG_HIBERNATION */
> > -#endif /* CONFIG_DEBUG_PAGEALLOC */
> > 
> > diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
> > index 4c5bae7ca01c..d690b08dff2a 100644
> > --- a/arch/riscv/include/asm/set_memory.h
> > +++ b/arch/riscv/include/asm/set_memory.h
> > @@ -24,6 +24,7 @@ static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
> >  
> >  int set_direct_map_invalid_noflush(struct page *page);
> >  int set_direct_map_default_noflush(struct page *page);
> > +bool kernel_page_present(struct page *page);
> >  
> >  #endif /* __ASSEMBLY__ */
> > 
> > diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
> > index 321b09d2e2ea..87ba5a68bbb8 100644
> > --- a/arch/riscv/mm/pageattr.c
> > +++ b/arch/riscv/mm/pageattr.c
> > @@ -198,3 +198,32 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
> >  			      __pgprot(0), __pgprot(_PAGE_PRESENT));
> >  }
> >  #endif
> > +
> > +bool kernel_page_present(struct page *page)
> > +{
> > +	unsigned long addr = (unsigned long)page_address(page);
> > +	pgd_t *pgd;
> > +	pud_t *pud;
> > +	p4d_t *p4d;
> > +	pmd_t *pmd;
> > +	pte_t *pte;
> > +
> > +	pgd = pgd_offset_k(addr);
> > +	if (!pgd_present(*pgd))
> > +		return false;
> > +
> > +	p4d = p4d_offset(pgd, addr);
> > +	if (!p4d_present(*p4d))
> > +		return false;
> > +
> > +	pud = pud_offset(p4d, addr);
> > +	if (!pud_present(*pud))
> > +		return false;
> > +
> > +	pmd = pmd_offset(pud, addr);
> > +	if (!pmd_present(*pmd))
> > +		return false;
> > +
> > +	pte = pte_offset_kernel(pmd, addr);
> > +	return pte_present(*pte);
> > +}
> > 
> > diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
> > index 5948218f35c5..4352f08bfbb5 100644
> > --- a/arch/x86/include/asm/set_memory.h
> > +++ b/arch/x86/include/asm/set_memory.h
> > @@ -82,6 +82,7 @@ int set_pages_rw(struct page *page, int numpages);
> >  
> >  int set_direct_map_invalid_noflush(struct page *page);
> >  int set_direct_map_default_noflush(struct page *page);
> > +bool kernel_page_present(struct page *page);
> >  
> >  extern int kernel_set_to_readonly;
> > 
> > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> > index bc9be96b777f..16f878c26667 100644
> > --- a/arch/x86/mm/pat/set_memory.c
> > +++ b/arch/x86/mm/pat/set_memory.c
> > @@ -2226,8 +2226,8 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
> >  
> >  	arch_flush_lazy_mmu_mode();
> >  }
> > +#endif /* CONFIG_DEBUG_PAGEALLOC */
> >  
> > -#ifdef CONFIG_HIBERNATION
> >  bool kernel_page_present(struct page *page)
> >  {
> >  	unsigned int level;
> > @@ -2239,8 +2239,6 @@ bool kernel_page_present(struct page *page)
> >  	pte = lookup_address((unsigned long)page_address(page), &level);
> >  	return (pte_val(*pte) & _PAGE_PRESENT);
> >  }
> > -#endif /* CONFIG_HIBERNATION */
> > -#endif /* CONFIG_DEBUG_PAGEALLOC */
> >  
> >  int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
> >  				   unsigned numpages, unsigned long page_flags)
> > 
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index ab0ef6bd351d..44b82f22e76a 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -2937,16 +2937,9 @@ static inline void debug_pagealloc_map_pages(struct page *page,
> >  	if (debug_pagealloc_enabled_static())
> >  		__kernel_map_pages(page, numpages, enable);
> >  }
> > -
> > -#ifdef CONFIG_HIBERNATION
> > -extern bool kernel_page_present(struct page *page);
> > -#endif /* CONFIG_HIBERNATION */
> >  #else  /* CONFIG_DEBUG_PAGEALLOC */
> >  static inline void debug_pagealloc_map_pages(struct page *page,
> >  					     int numpages, int enable) {}
> > -#ifdef CONFIG_HIBERNATION
> > -static inline bool kernel_page_present(struct page *page) { return true; }
> > -#endif /* CONFIG_HIBERNATION */
> >  #endif /* CONFIG_DEBUG_PAGEALLOC */
> >  
> >  #ifdef __HAVE_ARCH_GATE_AREA
> > 
> > diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
> > index 860e0f843c12..fe1aa4e54680 100644
> > --- a/include/linux/set_memory.h
> > +++ b/include/linux/set_memory.h
> > @@ -23,6 +23,11 @@ static inline int set_direct_map_default_noflush(struct page *page)
> >  {
> >  	return 0;
> >  }
> > +
> > +static inline bool kernel_page_present(struct page *page)
> > +{
> > +	return true;
> > +}
> >  #endif
> >  
> >  #ifndef set_mce_nospec
> 
> It's somewhat weird to move this to set_memory.h - it's only one possible
> user. I think include/linux/mm.h is a better fit. Ack to making it
> independent of CONFIG_HIBERNATION.

Semantically this is a part of direct map manipulation; that's primarily
why I put it into set_memory.h.

> In include/linux/mm.h, I'd prefer:
> 
> #if defined(CONFIG_DEBUG_PAGEALLOC) || \
> 	defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)

The second reason was to avoid this ^ and the third is -7 lines in
include/linux/mm.h :)

> bool kernel_page_present(struct page *page);
> #else
> static inline bool kernel_page_present(struct page *page)
> {
> 	return true;
> }
> #endif
> 
> -- 
> Thanks,
> 
> David / dhildenb

-- 
Sincerely yours,
Mike.