From: David Hildenbrand
Organization: Red Hat
To: Mike Rapoport, linux-arm-kernel@lists.infradead.org
Cc: Anshuman Khandual, Ard Biesheuvel, Catalin Marinas, Marc Zyngier, Mark Rutland, Mike Rapoport, Will Deacon, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v1 3/4] arm64: decouple check whether pfn is in linear map from pfn_valid()
Date: Tue, 20 Apr 2021 17:57:57 +0200
Message-ID: <29b51a80-1543-fcec-6f5b-5ae21b78e1e9@redhat.com>
In-Reply-To: <20210420090925.7457-4-rppt@kernel.org>
References: <20210420090925.7457-1-rppt@kernel.org> <20210420090925.7457-4-rppt@kernel.org>

On
20.04.21 11:09, Mike Rapoport wrote:
> From: Mike Rapoport
>
> The intended semantics of pfn_valid() is to verify whether there is a
> struct page for the pfn in question and nothing else.
>
> Yet, on arm64 it is used to distinguish memory areas that are mapped in the
> linear map vs those that require ioremap() to access them.
>
> Introduce a dedicated pfn_is_map_memory() wrapper for
> memblock_is_map_memory() to perform such check and use it where
> appropriate.
>
> Using a wrapper allows to avoid cyclic include dependencies.
>
> Signed-off-by: Mike Rapoport
> ---
>  arch/arm64/include/asm/memory.h | 2 +-
>  arch/arm64/include/asm/page.h   | 1 +
>  arch/arm64/kvm/mmu.c            | 2 +-
>  arch/arm64/mm/init.c            | 6 ++++++
>  arch/arm64/mm/ioremap.c         | 4 ++--
>  arch/arm64/mm/mmu.c             | 2 +-
>  6 files changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 0aabc3be9a75..194f9f993d30 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -351,7 +351,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>  
>  #define virt_addr_valid(addr)	({					\
>  	__typeof__(addr) __addr = __tag_reset(addr);			\
> -	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
> +	__is_lm_address(__addr) && pfn_is_map_memory(virt_to_pfn(__addr)); \
>  })
>  
>  void dump_mem_limit(void);
> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
> index 012cffc574e8..99a6da91f870 100644
> --- a/arch/arm64/include/asm/page.h
> +++ b/arch/arm64/include/asm/page.h
> @@ -38,6 +38,7 @@ void copy_highpage(struct page *to, struct page *from);
>  typedef struct page *pgtable_t;
>  
>  extern int pfn_valid(unsigned long);
> +extern int pfn_is_map_memory(unsigned long);
>  
>  #include
>  
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 8711894db8c2..23dd99e29b23 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -85,7 +85,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
>  
>  static bool kvm_is_device_pfn(unsigned long pfn)
>  {
> -	return !pfn_valid(pfn);
> +	return !pfn_is_map_memory(pfn);
>  }
>  
>  /*
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 3685e12aba9b..c54e329aca15 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -258,6 +258,12 @@ int pfn_valid(unsigned long pfn)
>  }
>  EXPORT_SYMBOL(pfn_valid);
>  
> +int pfn_is_map_memory(unsigned long pfn)
> +{

I think you might have to add (see pfn_valid())

	if (PHYS_PFN(PFN_PHYS(pfn)) != pfn)
		return 0;

to catch false positives.

> +	return memblock_is_map_memory(PFN_PHYS(pfn));
> +}
> +EXPORT_SYMBOL(pfn_is_map_memory);
> +
>  static phys_addr_t memory_limit = PHYS_ADDR_MAX;
>  
>  /*
> diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
> index b5e83c46b23e..b7c81dacabf0 100644
> --- a/arch/arm64/mm/ioremap.c
> +++ b/arch/arm64/mm/ioremap.c
> @@ -43,7 +43,7 @@ static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
>  	/*
>  	 * Don't allow RAM to be mapped.
>  	 */
> -	if (WARN_ON(pfn_valid(__phys_to_pfn(phys_addr))))
> +	if (WARN_ON(pfn_is_map_memory(__phys_to_pfn(phys_addr))))
>  		return NULL;
>  
>  	area = get_vm_area_caller(size, VM_IOREMAP, caller);
> @@ -84,7 +84,7 @@ EXPORT_SYMBOL(iounmap);
>  void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
>  {
>  	/* For normal memory we already have a cacheable mapping. */
> -	if (pfn_valid(__phys_to_pfn(phys_addr)))
> +	if (pfn_is_map_memory(__phys_to_pfn(phys_addr)))
>  		return (void __iomem *)__phys_to_virt(phys_addr);
>  
>  	return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 5d9550fdb9cf..26045e9adbd7 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -81,7 +81,7 @@ void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
>  pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
>  			      unsigned long size, pgprot_t vma_prot)
>  {
> -	if (!pfn_valid(pfn))
> +	if (!pfn_is_map_memory(pfn))
>  		return pgprot_noncached(vma_prot);
>  	else if (file->f_flags & O_SYNC)
>  		return pgprot_writecombine(vma_prot);
>

As discussed, in the future it would be nice if we could just rely on
the memmap state. There are cases where pfn_is_map_memory() will now be
slower than pfn_valid() -- e.g., we don't check for valid_section() in
case of CONFIG_SPARSEMEM. This would apply where pfn_valid() would have
returned "0".

As we're not changing how the direct map is created here,
kern_addr_valid() shouldn't need any attention.

It would be somewhat ugly if generic code used by arm64 were relying on
pfn_valid() returning the old result on arm64, but I doubt any does.

Acked-by: David Hildenbrand

-- 
Thanks,

David / dhildenb