Subject: Re: [RFC V2] mm: Enable generic pfn_valid() to handle early sections with memmap holes
To: Mike Rapoport, Anshuman Khandual
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, Catalin Marinas,
 Will Deacon, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org
References: <20210422061902.21614-1-rppt@kernel.org>
 <1619077823-3819-1-git-send-email-anshuman.khandual@arm.com>
 <10e5eecf-3ef5-f691-f38a-7ca305b707c1@arm.com>
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
Message-ID: <1f603aab-0577-7e44-e0fe-6563ed424918@redhat.com>
Date: Tue, 25 May 2021 12:04:40 +0200

On 25.05.21 12:03, Mike Rapoport wrote:
> On Tue, May 25, 2021 at 03:22:53PM +0530, Anshuman Khandual wrote:
>>
>>
>> On 5/25/21 12:02 PM, Mike Rapoport wrote:
>>> On Tue, May 25, 2021 at 11:30:15AM +0530, Anshuman Khandual wrote:
>>>>
>>>> On 5/24/21 12:22 PM, Mike Rapoport wrote:
>>>>> Hello Anshuman,
>>>>>
>>>>> On Mon, May 24, 2021 at 10:28:32AM +0530, Anshuman Khandual wrote:
>>>>>>
>>>>>> On 4/22/21 1:20 PM, Anshuman Khandual wrote:
>>>>>>> Platforms like arm and arm64 have redefined pfn_valid() because their
>>>>>>> early memory sections might have contained memmap holes after freeing
>>>>>>> parts of them during boot, which should be skipped while validating a
>>>>>>> pfn for struct page backing. This scenario, on platforms where the
>>>>>>> memmap is not contiguous, could be captured with a new option
>>>>>>> CONFIG_HAVE_EARLY_SECTION_MEMMAP_HOLES. Then the generic pfn_valid()
>>>>>>> can be improved to accommodate such platforms. This reduces overall
>>>>>>> code footprint and also improves maintainability.
>>>>>>>
>>>>>>> free_unused_memmap() and pfn_to_online_page() have been updated to
>>>>>>> include such cases. This also exports memblock_is_memory() for all
>>>>>>> drivers that use pfn_valid() but lack the required visibility. After
>>>>>>> the new config is in place, drop CONFIG_HAVE_ARCH_PFN_VALID from
>>>>>>> arm64 platforms.
>>>>>>>
>>>>>>> Cc: Catalin Marinas
>>>>>>> Cc: Will Deacon
>>>>>>> Cc: Andrew Morton
>>>>>>> Cc: Mike Rapoport
>>>>>>> Cc: David Hildenbrand
>>>>>>> Cc: linux-arm-kernel@lists.infradead.org
>>>>>>> Cc: linux-kernel@vger.kernel.org
>>>>>>> Cc: linux-mm@kvack.org
>>>>>>> Suggested-by: David Hildenbrand
>>>>>>> Signed-off-by: Anshuman Khandual
>>>>>>> ---
>>>>>>> This patch applies on the latest mainline kernel after Mike's series
>>>>>>> regarding arm64 based pfn_valid().
>>>>>>>
>>>>>>> https://lore.kernel.org/linux-mm/20210422061902.21614-1-rppt@kernel.org/T/#t
>>>>>>>
>>>>>>> Changes in RFC V2:
>>>>>>>
>>>>>>> - Dropped support for arm (32 bit)
>>>>>>> - Replaced memblock_is_map_memory() check with memblock_is_memory()
>>>>>>> - MEMBLOCK_NOMAP memory is no longer skipped for pfn_valid()
>>>>>>> - Updated pfn_to_online_page() per David
>>>>>>> - Updated free_unused_memmap() to preserve existing semantics per Mike
>>>>>>> - Exported memblock_is_memory() instead of memblock_is_map_memory()
>>>>>>>
>>>>>>> Changes in RFC V1:
>>>>>>>
>>>>>>> - https://patchwork.kernel.org/project/linux-mm/patch/1615174073-10520-1-git-send-email-anshuman.khandual@arm.com/
>>>>>>>
>>>>>>>  arch/arm64/Kconfig            |  2 +-
>>>>>>>  arch/arm64/include/asm/page.h |  1 -
>>>>>>>  arch/arm64/mm/init.c          | 41 -----------------------------------
>>>>>>>  include/linux/mmzone.h        | 18 ++++++++++++++-
>>>>>>>  mm/Kconfig                    |  9 ++++++++
>>>>>>>  mm/memblock.c                 |  8 +++++--
>>>>>>>  mm/memory_hotplug.c           |  5 +++++
>>>>>>>  7 files changed, 38 insertions(+), 46 deletions(-)
>>>>>>>
>>>>>>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>>>>>>> index b4a9b493ce72..4cdc3570ffa9 100644
>>>>>>> --- a/arch/arm64/Kconfig
>>>>>>> +++ b/arch/arm64/Kconfig
>>>>>>> @@ -144,7 +144,6 @@ config ARM64
>>>>>>>  	select HAVE_ARCH_KGDB
>>>>>>>  	select HAVE_ARCH_MMAP_RND_BITS
>>>>>>>  	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
>>>>>>> -	select HAVE_ARCH_PFN_VALID
>>>>>>>  	select HAVE_ARCH_PREL32_RELOCATIONS
>>>>>>>  	select HAVE_ARCH_SECCOMP_FILTER
>>>>>>>  	select HAVE_ARCH_STACKLEAK
>>>>>>> @@ -167,6 +166,7 @@ config ARM64
>>>>>>>  		if $(cc-option,-fpatchable-function-entry=2)
>>>>>>>  	select FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY \
>>>>>>>  		if DYNAMIC_FTRACE_WITH_REGS
>>>>>>> +	select HAVE_EARLY_SECTION_MEMMAP_HOLES
>>>>>>>  	select HAVE_EFFICIENT_UNALIGNED_ACCESS
>>>>>>>  	select HAVE_FAST_GUP
>>>>>>>  	select HAVE_FTRACE_MCOUNT_RECORD
>>>>>>> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
>>>>>>> index 75ddfe671393..fcbef3eec4b2 100644
>>>>>>> --- a/arch/arm64/include/asm/page.h
>>>>>>> +++ b/arch/arm64/include/asm/page.h
>>>>>>> @@ -37,7 +37,6 @@ void copy_highpage(struct page *to, struct page *from);
>>>>>>>
>>>>>>>  typedef struct page *pgtable_t;
>>>>>>>
>>>>>>> -int pfn_valid(unsigned long pfn);
>>>>>>>  int pfn_is_map_memory(unsigned long pfn);
>>>>>>>
>>>>>>>  #include <asm/memory.h>
>>>>>>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>>>>>>> index f431b38d0837..5731a11550d8 100644
>>>>>>> --- a/arch/arm64/mm/init.c
>>>>>>> +++ b/arch/arm64/mm/init.c
>>>>>>> @@ -217,47 +217,6 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
>>>>>>>  	free_area_init(max_zone_pfns);
>>>>>>>  }
>>>>>>>
>>>>>>> -int pfn_valid(unsigned long pfn)
>>>>>>> -{
>>>>>>> -	phys_addr_t addr = PFN_PHYS(pfn);
>>>>>>> -
>>>>>>> -	/*
>>>>>>> -	 * Ensure the upper PAGE_SHIFT bits are clear in the
>>>>>>> -	 * pfn. Else it might lead to false positives when
>>>>>>> -	 * some of the upper bits are set, but the lower bits
>>>>>>> -	 * match a valid pfn.
>>>>>>> -	 */
>>>>>>> -	if (PHYS_PFN(addr) != pfn)
>>>>>>> -		return 0;
>>>>>>> -
>>>>>>> -#ifdef CONFIG_SPARSEMEM
>>>>>>> -{
>>>>>>> -	struct mem_section *ms;
>>>>>>> -
>>>>>>> -	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
>>>>>>> -		return 0;
>>>>>>> -
>>>>>>> -	ms = __pfn_to_section(pfn);
>>>>>>> -	if (!valid_section(ms))
>>>>>>> -		return 0;
>>>>>>> -
>>>>>>> -	/*
>>>>>>> -	 * ZONE_DEVICE memory does not have the memblock entries.
>>>>>>> -	 * memblock_is_memory() check for ZONE_DEVICE based
>>>>>>> -	 * addresses will always fail. Even the normal hotplugged
>>>>>>> -	 * memory will never have MEMBLOCK_NOMAP flag set in their
>>>>>>> -	 * memblock entries. Skip memblock search for all non early
>>>>>>> -	 * memory sections covering all of hotplug memory including
>>>>>>> -	 * both normal and ZONE_DEVICE based.
>>>>>>> -	 */
>>>>>>> -	if (!early_section(ms))
>>>>>>> -		return pfn_section_valid(ms, pfn);
>>>>>>> -}
>>>>>>> -#endif
>>>>>>> -	return memblock_is_memory(addr);
>>>>>>> -}
>>>>>>> -EXPORT_SYMBOL(pfn_valid);
>>>>>>> -
>>>>>>>  int pfn_is_map_memory(unsigned long pfn)
>>>>>>>  {
>>>>>>>  	phys_addr_t addr = PFN_PHYS(pfn);
>>>>>>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>>>>>>> index 961f0eeefb62..18bf71665211 100644
>>>>>>> --- a/include/linux/mmzone.h
>>>>>>> +++ b/include/linux/mmzone.h
>>>>>>> @@ -1421,10 +1421,22 @@ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
>>>>>>>   *
>>>>>>>   * Return: 1 for PFNs that have memory map entries and 0 otherwise
>>>>>>>   */
>>>>>>> +bool memblock_is_memory(phys_addr_t addr);
>>>>>>> +
>>>>>>>  static inline int pfn_valid(unsigned long pfn)
>>>>>>>  {
>>>>>>> +	phys_addr_t addr = PFN_PHYS(pfn);
>>>>>>>  	struct mem_section *ms;
>>>>>>>
>>>>>>> +	/*
>>>>>>> +	 * Ensure the upper PAGE_SHIFT bits are clear in the
>>>>>>> +	 * pfn. Else it might lead to false positives when
>>>>>>> +	 * some of the upper bits are set, but the lower bits
>>>>>>> +	 * match a valid pfn.
>>>>>>> +	 */
>>>>>>> +	if (PHYS_PFN(addr) != pfn)
>>>>>>> +		return 0;
>>>>>>> +
>>>>>>>  	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
>>>>>>>  		return 0;
>>>>>>>  	ms = __nr_to_section(pfn_to_section_nr(pfn));
>>>>>>> @@ -1434,7 +1446,11 @@ static inline int pfn_valid(unsigned long pfn)
>>>>>>>  	 * Traditionally early sections always returned pfn_valid() for
>>>>>>>  	 * the entire section-sized span.
>>>>>>>  	 */
>>>>>>> -	return early_section(ms) || pfn_section_valid(ms, pfn);
>>>>>>> +	if (early_section(ms))
>>>>>>> +		return IS_ENABLED(CONFIG_HAVE_EARLY_SECTION_MEMMAP_HOLES) ?
>>>>>>> +			memblock_is_memory(pfn << PAGE_SHIFT) : 1;
>>>>>>> +
>>>>>>> +	return pfn_section_valid(ms, pfn);
>>>>>>>  }
>>>>>>>  #endif
>>>>>>
>>>>>> Hello David/Mike,
>>>>>>
>>>>>> Now that pfn_is_map_memory() usage has been decoupled from pfn_valid()
>>>>>> and SPARSEMEM_VMEMMAP is the only available memory model on arm64, I am
>>>>>> wondering if we still need this HAVE_EARLY_SECTION_MEMMAP_HOLES
>>>>>> proposal? Please kindly suggest. Thank you.
>>>>>
>>>>> Even now arm64 still frees parts of the memory map, and pfn_valid()
>>>>> should be able to tell if a part of a section is freed or not.
>>>>>
>>>>> For instance, for the following memory configuration
>>>>>
>>>>> |<----section---->|<----hole---->|<----section---->|
>>>>> +--------+--------+--------------+--------+--------+
>>>>> | bank 0 | unused |              | bank 1 | unused |
>>>>> +--------+--------+--------------+--------+--------+
>>>>>
>>>>> the memory map corresponding to the "unused" areas is freed, but the
>>>>> generic pfn_valid() will still return 1 there.
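To make that concrete, here is a toy userspace model of the behaviour Mike
describes. The single-section layout and all numbers are invented for
illustration, and the helpers only mimic the shape of the real checks; this
is a sketch, not the kernel implementation:

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy layout: one "early" section spans pfns [0x0, 0x8000), but only
 * [0x0, 0x4000) is backed by real memory (bank 0); the memmap for the
 * rest was freed by free_unused_memmap().
 */
#define SECTION_END	0x8000UL
#define BANK0_END	0x4000UL

/* Mimics a memblock lookup: only bank 0 is registered memory. */
static bool toy_memblock_is_memory(unsigned long pfn)
{
	return pfn < BANK0_END;
}

/* Old generic behaviour: an early section is valid across its whole span. */
static bool toy_pfn_valid_old(unsigned long pfn)
{
	return pfn < SECTION_END;
}

/* With the proposed option: early sections also consult memblock. */
static bool toy_pfn_valid_patched(unsigned long pfn)
{
	return pfn < SECTION_END && toy_memblock_is_memory(pfn);
}

int main(void)
{
	unsigned long hole = 0x6000;	/* pfn inside the freed area */

	printf("old generic pfn_valid: %d\n", toy_pfn_valid_old(hole));	/* 1, wrong */
	printf("with memblock check:   %d\n", toy_pfn_valid_patched(hole));	/* 0 */
	return 0;
}

The proposed CONFIG_HAVE_EARLY_SECTION_MEMMAP_HOLES essentially adds that
memblock lookup on the early-section path of the generic pfn_valid().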
>>>>
>>>> But doesn't free_unused_memmap() return early when CONFIG_SPARSEMEM_VMEMMAP
>>>> is enabled, which is the only option now on arm64? Then how can the memmap
>>>> have holes (from unused areas) anymore? Am I missing something here?
>>>
>>> Ah, you are right, I missed this detail myself :)
>>>
>>> With CONFIG_SPARSEMEM_VMEMMAP as the only memory model for arm64, we can
>>> simply get rid of arm64::pfn_valid() without any changes to the generic
>>> version.
>>
>> Though I just moved the pfn bits sanity check into the generic pfn_valid().
>> I hope this looks okay.
>>
>> From 7a63f460bcb6ae171c2081bfad81edd9e8f3b7a0 Mon Sep 17 00:00:00 2001
>> From: Anshuman Khandual
>> Date: Tue, 25 May 2021 10:27:09 +0100
>> Subject: [PATCH] arm64/mm: Drop HAVE_ARCH_PFN_VALID
>>
>> CONFIG_SPARSEMEM_VMEMMAP is now the only available memory model on arm64
>> platforms, and free_unused_memmap() would just return without creating any
>> holes in the memmap mapping. There is no need for any special handling in
>> pfn_valid(), so HAVE_ARCH_PFN_VALID can just be dropped. This also moves
>> the pfn upper bits sanity check into the generic pfn_valid().
>>
>> Signed-off-by: Anshuman Khandual
>
> Acked-by: Mike Rapoport
>

Indeed, looks good.

Acked-by: David Hildenbrand <david@redhat.com>
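One side note on the sanity check being moved below: PFN_PHYS() shifts the
pfn left by PAGE_SHIFT, so any of the top PAGE_SHIFT bits get shifted out
and PHYS_PFN() cannot recover them, which is exactly what the round-trip
comparison catches. A standalone sketch of just that arithmetic, assuming
64-bit addresses and 4K pages (these are plain reimplementations for
demonstration, not the kernel macros):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12	/* assumed 4K pages for this example */

/* Same arithmetic as the kernel's PFN_PHYS()/PHYS_PFN() macros. */
static uint64_t pfn_phys(uint64_t pfn)  { return pfn << PAGE_SHIFT; }
static uint64_t phys_pfn(uint64_t addr) { return addr >> PAGE_SHIFT; }

int main(void)
{
	uint64_t good = 0x80000;		/* upper PAGE_SHIFT bits clear */
	uint64_t bad  = good | (1ULL << 53);	/* stray upper bit set */

	/* Round-trip survives, so the pfn proceeds to the section checks. */
	printf("good: %d\n", phys_pfn(pfn_phys(good)) == good);	/* prints 1 */

	/* Bit 53 is shifted out by pfn_phys(), so the round-trip fails. */
	printf("bad:  %d\n", phys_pfn(pfn_phys(bad)) == bad);	/* prints 0 */
	return 0;
}

Without that check, a pfn with stray upper bits could alias a valid lower
pfn after the shift and wrongly pass the section lookup.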
>> ---
>>  arch/arm64/Kconfig            |  1 -
>>  arch/arm64/include/asm/page.h |  1 -
>>  arch/arm64/mm/init.c          | 37 -----------------------------------
>>  include/linux/mmzone.h        |  9 +++++++++
>>  4 files changed, 9 insertions(+), 39 deletions(-)
>>
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index d7dc8698cf8e..7904728befcc 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -154,7 +154,6 @@ config ARM64
>>  	select HAVE_ARCH_KGDB
>>  	select HAVE_ARCH_MMAP_RND_BITS
>>  	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
>> -	select HAVE_ARCH_PFN_VALID
>>  	select HAVE_ARCH_PREL32_RELOCATIONS
>>  	select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
>>  	select HAVE_ARCH_SECCOMP_FILTER
>> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
>> index 75ddfe671393..fcbef3eec4b2 100644
>> --- a/arch/arm64/include/asm/page.h
>> +++ b/arch/arm64/include/asm/page.h
>> @@ -37,7 +37,6 @@ void copy_highpage(struct page *to, struct page *from);
>>
>>  typedef struct page *pgtable_t;
>>
>> -int pfn_valid(unsigned long pfn);
>>  int pfn_is_map_memory(unsigned long pfn);
>>
>>  #include <asm/memory.h>
>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>> index 725aa84f2faa..49019ea0c8a8 100644
>> --- a/arch/arm64/mm/init.c
>> +++ b/arch/arm64/mm/init.c
>> @@ -219,43 +219,6 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
>>  	free_area_init(max_zone_pfns);
>>  }
>>
>> -int pfn_valid(unsigned long pfn)
>> -{
>> -	phys_addr_t addr = PFN_PHYS(pfn);
>> -	struct mem_section *ms;
>> -
>> -	/*
>> -	 * Ensure the upper PAGE_SHIFT bits are clear in the
>> -	 * pfn. Else it might lead to false positives when
>> -	 * some of the upper bits are set, but the lower bits
>> -	 * match a valid pfn.
>> -	 */
>> -	if (PHYS_PFN(addr) != pfn)
>> -		return 0;
>> -
>> -	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
>> -		return 0;
>> -
>> -	ms = __pfn_to_section(pfn);
>> -	if (!valid_section(ms))
>> -		return 0;
>> -
>> -	/*
>> -	 * ZONE_DEVICE memory does not have the memblock entries.
>> -	 * memblock_is_map_memory() check for ZONE_DEVICE based
>> -	 * addresses will always fail. Even the normal hotplugged
>> -	 * memory will never have MEMBLOCK_NOMAP flag set in their
>> -	 * memblock entries. Skip memblock search for all non early
>> -	 * memory sections covering all of hotplug memory including
>> -	 * both normal and ZONE_DEVICE based.
>> -	 */
>> -	if (!early_section(ms))
>> -		return pfn_section_valid(ms, pfn);
>> -
>> -	return memblock_is_memory(addr);
>> -}
>> -EXPORT_SYMBOL(pfn_valid);
>> -
>>  int pfn_is_map_memory(unsigned long pfn)
>>  {
>>  	phys_addr_t addr = PFN_PHYS(pfn);
>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>> index a9b263d4cf9d..d0c4fc506fa3 100644
>> --- a/include/linux/mmzone.h
>> +++ b/include/linux/mmzone.h
>> @@ -1443,6 +1443,15 @@ static inline int pfn_valid(unsigned long pfn)
>>  {
>>  	struct mem_section *ms;
>>
>> +	/*
>> +	 * Ensure the upper PAGE_SHIFT bits are clear in the
>> +	 * pfn. Else it might lead to false positives when
>> +	 * some of the upper bits are set, but the lower bits
>> +	 * match a valid pfn.
>> +	 */
>> +	if (PHYS_PFN(PFN_PHYS(pfn)) != pfn)
>> +		return 0;
>> +
>>  	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
>>  		return 0;
>>  	ms = __nr_to_section(pfn_to_section_nr(pfn));
>> --
>> 2.20.1
>

-- 
Thanks,

David / dhildenb