From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v3 4/4] arch, mm: make kernel_page_present() always available
From: David Hildenbrand
Organization: Red Hat GmbH
To: Mike Rapoport, Andrew Morton
Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt, Borislav Petkov,
 Catalin Marinas, Christian Borntraeger, Christoph Lameter, "David S. Miller",
 Dave Hansen, David Rientjes, "Edgecombe, Rick P", "H. Peter Anvin",
 Heiko Carstens, Ingo Molnar, Joonsoo Kim, "Kirill A. Shutemov", Len Brown,
 Michael Ellerman, Mike Rapoport, Palmer Dabbelt, Paul Mackerras,
 Paul Walmsley, Pavel Machek, Pekka Enberg, Peter Zijlstra,
 "Rafael J. Wysocki", Thomas Gleixner, Vasily Gorbik, Will Deacon,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-pm@vger.kernel.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, x86@kernel.org
References: <20201101170815.9795-1-rppt@kernel.org> <20201101170815.9795-5-rppt@kernel.org>
In-Reply-To: <20201101170815.9795-5-rppt@kernel.org>
Message-ID: <08db307a-b093-d7aa-7364-045f328ab147@redhat.com>
Date: Mon, 2 Nov 2020 10:28:14 +0100

On 01.11.20 18:08, Mike Rapoport wrote:
> From: Mike Rapoport
> 
> For architectures that enable ARCH_HAS_SET_MEMORY, having the ability to
> verify that a page is mapped in the kernel direct map can be useful
> regardless of hibernation.
> 
> Add a RISC-V implementation of kernel_page_present(), update its forward
> declarations and stubs to be part of the set_memory API, and remove the
> ugly ifdefery in include/linux/mm.h around the current declarations of
> kernel_page_present().
> 
> Signed-off-by: Mike Rapoport
> ---
>  arch/arm64/include/asm/cacheflush.h |  1 +
>  arch/arm64/mm/pageattr.c            |  4 +---
>  arch/riscv/include/asm/set_memory.h |  1 +
>  arch/riscv/mm/pageattr.c            | 29 +++++++++++++++++++++++++++++
>  arch/x86/include/asm/set_memory.h   |  1 +
>  arch/x86/mm/pat/set_memory.c        |  4 +---
>  include/linux/mm.h                  |  7 -------
>  include/linux/set_memory.h          |  5 +++++
>  8 files changed, 39 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index 9384fd8fc13c..45217f21f1fe 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -140,6 +140,7 @@ int set_memory_valid(unsigned long addr, int numpages, int enable);
>  
>  int set_direct_map_invalid_noflush(struct page *page);
>  int set_direct_map_default_noflush(struct page *page);
> +bool kernel_page_present(struct page *page);
>  
>  #include <asm-generic/cacheflush.h>
>  
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 439325532be1..92eccaf595c8 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -186,8 +186,8 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
>  
>  	set_memory_valid((unsigned long)page_address(page), numpages, enable);
>  }
> +#endif /* CONFIG_DEBUG_PAGEALLOC */
>  
> -#ifdef CONFIG_HIBERNATION
>  /*
>   * This function is used to determine if a linear map page has been marked as
>   * not-valid. Walk the page table and check the PTE_VALID bit. This is based
> @@ -234,5 +234,3 @@ bool kernel_page_present(struct page *page)
>  	ptep = pte_offset_kernel(pmdp, addr);
>  	return pte_valid(READ_ONCE(*ptep));
>  }
> -#endif /* CONFIG_HIBERNATION */
> -#endif /* CONFIG_DEBUG_PAGEALLOC */
> diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
> index 4c5bae7ca01c..d690b08dff2a 100644
> --- a/arch/riscv/include/asm/set_memory.h
> +++ b/arch/riscv/include/asm/set_memory.h
> @@ -24,6 +24,7 @@ static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
>  
>  int set_direct_map_invalid_noflush(struct page *page);
>  int set_direct_map_default_noflush(struct page *page);
> +bool kernel_page_present(struct page *page);
>  
>  #endif /* __ASSEMBLY__ */
>  
> diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
> index 321b09d2e2ea..87ba5a68bbb8 100644
> --- a/arch/riscv/mm/pageattr.c
> +++ b/arch/riscv/mm/pageattr.c
> @@ -198,3 +198,32 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
>  			     __pgprot(0), __pgprot(_PAGE_PRESENT));
>  }
>  #endif
> +
> +bool kernel_page_present(struct page *page)
> +{
> +	unsigned long addr = (unsigned long)page_address(page);
> +	pgd_t *pgd;
> +	pud_t *pud;
> +	p4d_t *p4d;
> +	pmd_t *pmd;
> +	pte_t *pte;
> +
> +	pgd = pgd_offset_k(addr);
> +	if (!pgd_present(*pgd))
> +		return false;
> +
> +	p4d = p4d_offset(pgd, addr);
> +	if (!p4d_present(*p4d))
> +		return false;
> +
> +	pud = pud_offset(p4d, addr);
> +	if (!pud_present(*pud))
> +		return false;
> +
> +	pmd = pmd_offset(pud, addr);
> +	if (!pmd_present(*pmd))
> +		return false;
> +
> +	pte = pte_offset_kernel(pmd, addr);
> +	return pte_present(*pte);
> +}
> diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
> index 5948218f35c5..4352f08bfbb5 100644
> --- a/arch/x86/include/asm/set_memory.h
> +++ b/arch/x86/include/asm/set_memory.h
> @@ -82,6 +82,7 @@ int set_pages_rw(struct page *page, int numpages);
>  
>  int set_direct_map_invalid_noflush(struct page *page);
>  int set_direct_map_default_noflush(struct page *page);
> +bool kernel_page_present(struct page *page);
>  
>  extern int kernel_set_to_readonly;
>  
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index bc9be96b777f..16f878c26667 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -2226,8 +2226,8 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
>  
>  	arch_flush_lazy_mmu_mode();
>  }
> +#endif /* CONFIG_DEBUG_PAGEALLOC */
>  
> -#ifdef CONFIG_HIBERNATION
>  bool kernel_page_present(struct page *page)
>  {
>  	unsigned int level;
> @@ -2239,8 +2239,6 @@ bool kernel_page_present(struct page *page)
>  	pte = lookup_address((unsigned long)page_address(page), &level);
>  	return (pte_val(*pte) & _PAGE_PRESENT);
>  }
> -#endif /* CONFIG_HIBERNATION */
> -#endif /* CONFIG_DEBUG_PAGEALLOC */
>  
>  int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
>  				   unsigned numpages, unsigned long page_flags)
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ab0ef6bd351d..44b82f22e76a 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2937,16 +2937,9 @@ static inline void debug_pagealloc_map_pages(struct page *page,
>  	if (debug_pagealloc_enabled_static())
>  		__kernel_map_pages(page, numpages, enable);
>  }
> -
> -#ifdef CONFIG_HIBERNATION
> -extern bool kernel_page_present(struct page *page);
> -#endif /* CONFIG_HIBERNATION */
>  #else /* CONFIG_DEBUG_PAGEALLOC */
>  static inline void debug_pagealloc_map_pages(struct page *page,
>  					     int numpages, int enable) {}
> -#ifdef CONFIG_HIBERNATION
> -static inline bool kernel_page_present(struct page *page) { return true; }
> -#endif /* CONFIG_HIBERNATION */
>  #endif /* CONFIG_DEBUG_PAGEALLOC */
>  
>  #ifdef __HAVE_ARCH_GATE_AREA
> diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
> index 860e0f843c12..fe1aa4e54680 100644
> --- a/include/linux/set_memory.h
> +++ b/include/linux/set_memory.h
> @@ -23,6 +23,11 @@ static inline int set_direct_map_default_noflush(struct page *page)
>  {
>  	return 0;
>  }
> +
> +static inline bool kernel_page_present(struct page *page)
> +{
> +	return true;
> +}
>  #endif
>  
>  #ifndef set_mce_nospec
> 

It's somewhat weird to move this to set_memory.h - it's only one possible 
user. I think include/linux/mm.h is a better fit.

Ack to making it independent of CONFIG_HIBERNATION.

In include/linux/mm.h, I'd prefer:

#if defined(CONFIG_DEBUG_PAGEALLOC) || \
    defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
bool kernel_page_present(struct page *page);
#else
static inline bool kernel_page_present(struct page *page) { return true; }
#endif

-- 
Thanks,

David / dhildenb