From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Andrew Morton, Anshuman Khandual, Ard Biesheuvel, Catalin Marinas,
	David Hildenbrand, Marc Zyngier, Mark Rutland, Mike Rapoport,
	Will Deacon, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 3/4] arm64: decouple check whether pfn is in linear map from pfn_valid()
Date: Thu, 22 Apr 2021 09:19:01 +0300
Message-Id: <20210422061902.21614-4-rppt@kernel.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20210422061902.21614-1-rppt@kernel.org>
References: <20210422061902.21614-1-rppt@kernel.org>
From: Mike Rapoport

The intended semantics of pfn_valid() is to verify whether there is a
struct page for the pfn in question and nothing else.

Yet, on arm64 it is used to distinguish memory areas that are mapped in
the linear map vs those that require ioremap() to access them.

Introduce a dedicated pfn_is_map_memory() wrapper for
memblock_is_map_memory() to perform such a check and use it where
appropriate. Using a wrapper allows us to avoid cyclic include
dependencies.

While at it, also update the style of pfn_valid() so that the
pfn_valid() and pfn_is_map_memory() declarations are consistent.
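To make the intended split concrete, here is a hypothetical caller (an
illustrative sketch only, not part of this patch; example_map_pfn() is an
invented name) that mirrors the ioremap_cache() change below: reuse the
linear map for normal RAM, fall back to ioremap() for everything else:

/* Illustrative sketch only -- not part of this patch. */
#include <linux/io.h>
#include <linux/pfn.h>

#include <asm/page.h>

static void __iomem *example_map_pfn(unsigned long pfn, size_t size)
{
	phys_addr_t phys = PFN_PHYS(pfn);

	/* Normal RAM is already mapped cacheable in the linear map. */
	if (pfn_is_map_memory(pfn))
		return (void __iomem *)phys_to_virt(phys);

	/* Anything else (device memory, NOMAP regions) needs ioremap(). */
	return ioremap(phys, size);
}

Note that with this split, a pfn in a MEMBLOCK_NOMAP region can still be
pfn_valid() -- it has a struct page -- while pfn_is_map_memory() reports
that it is not accessible through the linear map. Keeping the helper out
of line in mm/init.c is what avoids the include cycle: asm/page.h only
carries the declaration and never needs to pull in memblock.h.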
Signed-off-by: Mike Rapoport
---
 arch/arm64/include/asm/memory.h |  2 +-
 arch/arm64/include/asm/page.h   |  3 ++-
 arch/arm64/kvm/mmu.c            |  2 +-
 arch/arm64/mm/init.c            | 12 ++++++++++++
 arch/arm64/mm/ioremap.c         |  4 ++--
 arch/arm64/mm/mmu.c             |  2 +-
 6 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 0aabc3be9a75..194f9f993d30 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -351,7 +351,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 
 #define virt_addr_valid(addr)	({					\
 	__typeof__(addr) __addr = __tag_reset(addr);			\
-	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
+	__is_lm_address(__addr) && pfn_is_map_memory(virt_to_pfn(__addr));	\
 })
 
 void dump_mem_limit(void);
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 012cffc574e8..75ddfe671393 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -37,7 +37,8 @@ void copy_highpage(struct page *to, struct page *from);
 
 typedef struct page *pgtable_t;
 
-extern int pfn_valid(unsigned long);
+int pfn_valid(unsigned long pfn);
+int pfn_is_map_memory(unsigned long pfn);
 
 #include <asm/memory.h>
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 8711894db8c2..23dd99e29b23 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -85,7 +85,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 
 static bool kvm_is_device_pfn(unsigned long pfn)
 {
-	return !pfn_valid(pfn);
+	return !pfn_is_map_memory(pfn);
 }
 
 /*
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 3685e12aba9b..966a7a18d528 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -258,6 +258,18 @@ int pfn_valid(unsigned long pfn)
 }
 EXPORT_SYMBOL(pfn_valid);
 
+int pfn_is_map_memory(unsigned long pfn)
+{
+	phys_addr_t addr = PFN_PHYS(pfn);
+
+	/* avoid false positives for bogus PFNs, see comment in pfn_valid() */
+	if (PHYS_PFN(addr) != pfn)
+		return 0;
+
+	return memblock_is_map_memory(addr);
+}
+EXPORT_SYMBOL(pfn_is_map_memory);
+
 static phys_addr_t memory_limit = PHYS_ADDR_MAX;
 
 /*
diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index b5e83c46b23e..b7c81dacabf0 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -43,7 +43,7 @@ static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
 	/*
 	 * Don't allow RAM to be mapped.
 	 */
-	if (WARN_ON(pfn_valid(__phys_to_pfn(phys_addr))))
+	if (WARN_ON(pfn_is_map_memory(__phys_to_pfn(phys_addr))))
 		return NULL;
 
 	area = get_vm_area_caller(size, VM_IOREMAP, caller);
@@ -84,7 +84,7 @@ EXPORT_SYMBOL(iounmap);
 void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
 {
 	/* For normal memory we already have a cacheable mapping. */
-	if (pfn_valid(__phys_to_pfn(phys_addr)))
+	if (pfn_is_map_memory(__phys_to_pfn(phys_addr)))
 		return (void __iomem *)__phys_to_virt(phys_addr);
 
 	return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5d9550fdb9cf..26045e9adbd7 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -81,7 +81,7 @@ void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 			      unsigned long size, pgprot_t vma_prot)
 {
-	if (!pfn_valid(pfn))
+	if (!pfn_is_map_memory(pfn))
 		return pgprot_noncached(vma_prot);
 	else if (file->f_flags & O_SYNC)
 		return pgprot_writecombine(vma_prot);
-- 
2.28.0