From: Ryan Roberts <ryan.roberts@arm.com>
Date: Thu, 9 Mar 2023 11:03:10 +0000
Subject: Re: [PATCH v3 08/34] arm64: Implement the new page table range API
To: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Catalin Marinas, linux-arm-kernel@lists.infradead.org
References: <20230228213738.272178-1-willy@infradead.org> <20230228213738.272178-9-willy@infradead.org>
In-Reply-To: <20230228213738.272178-9-willy@infradead.org>

On 28/02/2023 21:37, Matthew Wilcox (Oracle) wrote:
> Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio().
> Change the PG_dcache_clean flag from being per-page to per-folio.
> 
> Signed-off-by: Matthew Wilcox (Oracle)
> Reviewed-by: Catalin Marinas
> Cc: linux-arm-kernel@lists.infradead.org
> ---
>  arch/arm64/include/asm/cacheflush.h |  4 +++-
>  arch/arm64/include/asm/pgtable.h    | 25 ++++++++++++++++++-------
>  arch/arm64/mm/flush.c               | 36 ++++++++++++----------------------
>  3 files changed, 35 insertions(+), 30 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index 37185e978aeb..d115451ed263 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -114,7 +114,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
>  #define copy_to_user_page copy_to_user_page
>  
>  /*
> - * flush_dcache_page is used when the kernel has written to the page
> + * flush_dcache_folio is used when the kernel has written to the page
>   * cache page at virtual address page->virtual.
>   *
>   * If this page isn't mapped (ie, page_mapping == NULL), or it might
> @@ -127,6 +127,8 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
>   */
>  #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
>  extern void flush_dcache_page(struct page *);
> +void flush_dcache_folio(struct folio *);
> +#define flush_dcache_folio flush_dcache_folio
>  
>  static __always_inline void icache_inval_all_pou(void)
>  {
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 69765dc697af..4d1b79dbff16 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -355,12 +355,21 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
>  	set_pte(ptep, pte);
>  }
>  
> -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
> -			      pte_t *ptep, pte_t pte)
> -{
> -	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
> -	return __set_pte_at(mm, addr, ptep, pte);
> +static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
> +			      pte_t *ptep, pte_t pte, unsigned int nr)
> +{
> +	page_table_check_ptes_set(mm, addr, ptep, pte, nr);
> +
> +	for (;;) {
> +		__set_pte_at(mm, addr, ptep, pte);
> +		if (--nr == 0)
> +			break;
> +		ptep++;
> +		addr += PAGE_SIZE;
> +		pte_val(pte) += PAGE_SIZE;

For systems that support >48-bit PA, arm64 places the high bits [51:48] of
the PA at a low position in the PTE, so a carry out of PA bit [47] from
`pte_val(pte) += PAGE_SIZE` would not propagate into them. I think I've
convinced myself that this is OK though, because set_ptes() promises that
the range is always within a single PMD, and therefore it's guaranteed
that the ptes for a single call will never straddle the 48-bit PA
boundary?

Also, it's not clear to me whether set_ptes() could be called for a range
of non-present ptes (i.e. clearing a pte range, setting swap entries,
etc.)? If so, then I guess you would only want to advance the pte's output
address when pte_present(pte) is true. I'm guessing that batch-clearing of
ptes might appear in the near future, so it might be sensible to support
that now? (A rough sketch of what I have in mind is at the bottom of this
mail.)

Regardless, a comment making these assumptions clear would be useful.

Thanks,
Ryan

> +	}
>  }
> +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
>  
>  /*
>   * Huge pte definitions.
> @@ -1059,8 +1068,8 @@ static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
>  /*
>   * On AArch64, the cache coherency is handled via the set_pte_at() function.
>   */
> -static inline void update_mmu_cache(struct vm_area_struct *vma,
> -				    unsigned long addr, pte_t *ptep)
> +static inline void update_mmu_cache_range(struct vm_area_struct *vma,
> +		unsigned long addr, pte_t *ptep, unsigned int nr)
>  {
>  	/*
>  	 * We don't do anything here, so there's a very small chance of
> @@ -1069,6 +1078,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
>  	 */
>  }
>  
> +#define update_mmu_cache(vma, addr, ptep) \
> +	update_mmu_cache_range(vma, addr, ptep, 1)
>  #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0)
>  
>  #ifdef CONFIG_ARM64_PA_BITS_52
> diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> index 5f9379b3c8c8..deb781af0a3a 100644
> --- a/arch/arm64/mm/flush.c
> +++ b/arch/arm64/mm/flush.c
> @@ -50,20 +50,13 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
>  
>  void __sync_icache_dcache(pte_t pte)
>  {
> -	struct page *page = pte_page(pte);
> +	struct folio *folio = page_folio(pte_page(pte));
>  
> -	/*
> -	 * HugeTLB pages are always fully mapped, so only setting head page's
> -	 * PG_dcache_clean flag is enough.
> -	 */
> -	if (PageHuge(page))
> -		page = compound_head(page);
> -
> -	if (!test_bit(PG_dcache_clean, &page->flags)) {
> -		sync_icache_aliases((unsigned long)page_address(page),
> -				    (unsigned long)page_address(page) +
> -				    page_size(page));
> -		set_bit(PG_dcache_clean, &page->flags);
> +	if (!test_bit(PG_dcache_clean, &folio->flags)) {
> +		sync_icache_aliases((unsigned long)folio_address(folio),
> +				    (unsigned long)folio_address(folio) +
> +				    folio_size(folio));
> +		set_bit(PG_dcache_clean, &folio->flags);
>  	}
>  }
>  EXPORT_SYMBOL_GPL(__sync_icache_dcache);
> @@ -73,17 +66,16 @@ EXPORT_SYMBOL_GPL(__sync_icache_dcache);
>   * it as dirty for later flushing when mapped in user space (if executable,
>   * see __sync_icache_dcache).
>   */
> -void flush_dcache_page(struct page *page)
> +void flush_dcache_folio(struct folio *folio)
>  {
> -	/*
> -	 * HugeTLB pages are always fully mapped and only head page will be
> -	 * set PG_dcache_clean (see comments in __sync_icache_dcache()).
> -	 */
> -	if (PageHuge(page))
> -		page = compound_head(page);
> +	if (test_bit(PG_dcache_clean, &folio->flags))
> +		clear_bit(PG_dcache_clean, &folio->flags);
> +}
> +EXPORT_SYMBOL(flush_dcache_folio);
>  
> -	if (test_bit(PG_dcache_clean, &page->flags))
> -		clear_bit(PG_dcache_clean, &page->flags);
> +void flush_dcache_page(struct page *page)
> +{
> +	flush_dcache_folio(page_folio(page));
>  }
>  EXPORT_SYMBOL(flush_dcache_page);
> 
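To make the pte_present() point above concrete, here is a rough, untested
sketch of the kind of helper I have in mind. pte_advance() is a
hypothetical name (nothing by that name exists in the tree today); it
reuses the existing __pte_to_phys()/__phys_to_pte_val() helpers from
arch/arm64/include/asm/pgtable.h so that PA bits [51:48] are handled
correctly when CONFIG_ARM64_PA_BITS_52 places them at low PTE bits:

/*
 * Sketch only: advance a pte's output address by one page.
 * - Non-present ptes (e.g. swap entries) are returned unmodified, since
 *   incrementing pte_val() would corrupt the swap entry encoding.
 * - Going via the phys helpers handles the 52-bit PA layout, where a
 *   plain pte_val(pte) += PAGE_SIZE cannot carry into PA bits [51:48].
 */
static inline pte_t pte_advance(pte_t pte)
{
	phys_addr_t pa;

	if (!pte_present(pte))
		return pte;

	pa = __pte_to_phys(pte) + PAGE_SIZE;
	return __pte(__phys_to_pte_val(pa) | (pte_val(pte) & ~PTE_ADDR_MASK));
}

The loop in set_ptes() would then do `pte = pte_advance(pte)` instead of
`pte_val(pte) += PAGE_SIZE`. Again, just a sketch to illustrate the
concern, not a tested patch.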