From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 04070C48BEB for ; Thu, 15 Feb 2024 11:31:07 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8D93E8D0013; Thu, 15 Feb 2024 06:31:06 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 886C18D0001; Thu, 15 Feb 2024 06:31:06 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 74E568D0013; Thu, 15 Feb 2024 06:31:06 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id 6353A8D0001 for ; Thu, 15 Feb 2024 06:31:06 -0500 (EST) Received: from smtpin30.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 47D6BC09DE for ; Thu, 15 Feb 2024 11:31:06 +0000 (UTC) X-FDA: 81793821732.30.264EF0C Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf25.hostedemail.com (Postfix) with ESMTP id 9BC97A001E for ; Thu, 15 Feb 2024 11:31:04 +0000 (UTC) Authentication-Results: imf25.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf25.hostedemail.com: domain of mark.rutland@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=mark.rutland@arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1707996664; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=0EryzdAqVAM7cg1PRsGZZUP4smOhcq0jHtff0QlbZk0=; b=I8EruoEtUOnagJhJRTTahHfwLlCdCGfeUPwgVPnPJ+kYfT41fX3Kc7E18hF8s3/9khkDvH bXEoaZq6kc4bY/PTae5NX0nxu/kCzkuXaHrosIvlzhKQT8AwsdsqUtkl9HnIRPuncTQKzH V6iODszdChQVerKPuwkjmfDMN9pWuvY= ARC-Authentication-Results: i=1; imf25.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf25.hostedemail.com: domain of mark.rutland@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=mark.rutland@arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1707996664; a=rsa-sha256; cv=none; b=5v49813rUyyxsnVrCr9+tYqNBaNPHzHkTXC4qhOmG3tZmFCo93nOso+efVF0wJwzgdoP55 jzpTwO86uisBreYxyZlcBaEgXJj6sChInCySTiBlHLJT9bb3zu3lgJVBsqYBfxtk5Sv7RU yxF8fE2AzY7uS4JKqWxaF8zGgcislg0= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A5D23DA7; Thu, 15 Feb 2024 03:31:44 -0800 (PST) Received: from FVFF77S0Q05N (unknown [10.57.68.11]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2BCF13F7B4; Thu, 15 Feb 2024 03:31:00 -0800 (PST) Date: Thu, 15 Feb 2024 11:30:56 +0000 From: Mark Rutland To: Ryan Roberts Cc: Catalin Marinas , Will Deacon , Ard Biesheuvel , Marc Zyngier , James Morse , Andrey Ryabinin , Andrew Morton , Matthew Wilcox , David Hildenbrand , Kefeng Wang , John Hubbard , Zi Yan , Barry Song <21cnbao@gmail.com>, Alistair Popple , Yang Shi , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , "H. Peter Anvin" , linux-arm-kernel@lists.infradead.org, x86@kernel.org, linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: Re: [PATCH v6 18/18] arm64/mm: Automatically fold contpte mappings Message-ID: References: <20240215103205.2607016-1-ryan.roberts@arm.com> <20240215103205.2607016-19-ryan.roberts@arm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20240215103205.2607016-19-ryan.roberts@arm.com> X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 9BC97A001E X-Stat-Signature: 1f9g6yy8jgbnkk8kfakb5hfat9mfzeaq X-HE-Tag: 1707996664-980344 X-HE-Meta: U2FsdGVkX19+UEPkWZp8kjL6R3uk+aLKGJXgBuRoEB0kQHJv3464TjRPy0kU8+il6OJZgQPLsOHsUESt3O8fGngAn5JxNWiUd6j+BtxSc7cyM4g8yznl8janNKU48tSWIFXuv1pxANwA+ia2lLPu0CN651Q++CyZETqrUwVWvDHsEEWde5R1IpogFV5t5JTjy1iKtl7iC1PZqFUNpqL4Gn3gB/tOSppapulr/xFdbNSGB5RZZiTw+VrB7Q0iNMmJXRfsTYxAz9k3T4NEwObxnPfZOY+WufAIIyiIFeX8M/VmVMfJkn+NcH9Es5XRDq4aMlzYHXbPTX5gxarevTXIhfSXGapVLT0dSvGCNLlFBRMWoAkH/fP/lYQHyWpeRi4kR9VyuQnPk4RcXRL/6Yzy1eV979L4txb2S/BuJRCB89d2R57NaznqAn/oU12KkKkYh7EZsQBAdOLTNwBCxLyD1L0U9vL2jPSvxjxHKLC/fiDHXqI8kTMCU6C0Bzqbqqk/bG5FwrP9M75ASbfSltrn8n4Q250Hy1gVVcX7OmkGZ2V5KTtKgSkFkB89f4Jux2GrRXF8bVcc3v1ExHRHMY2cZDIO1f3Lx2yHX5wWjua/SJQ91OJODWdRtv6qzVBSEAM3ThGB+FClwDdHht1gYR3FaIXDTy95j6ZsdqMCGcOl80K24GhrYUEH1o3ZB5gcj0yItS2U00EwxD6G+4wDGR1U5bcVMn4WoBHtomRtGOz0fiRFM8+9D14YcIvNhLy9OyqJF5/5R2vqUAihUtEIeV8dS5vIlohFvOu9bzcjMiDqwHwR8atrl/ACo4nA20q/ozQ87hvEc8XzC/bNs/6yGpYvU+PIZPXrLmTlj8JpGm5JYCYYhzgjKuBgwWN+3WPNCbNemQKRkFtIFy9A5qWJwK+rJE0fMNklIgsByrl0lokBQV3ML/BkhvLiYhTFaPx2tnzGgA3dJSPBs4v8gEfoYhL 8coxcDKg +M1Ngb1L9uFhGSN7j/5LVZfqUVHZKouN6yA5ga2v7VdSfPRdPc7YPbtOnNCrceTBRwVh6st4JMAQDf2SXt4BaHTrPhut2bBLLW2UlhdE2O0FtsnVGY83hAuKlmuU10eKniRdK9NKHO648YO5np4mPXZEZePgwEAnINXu4c+yGy2Wa0bV7Z1jzal4opF6otjZCsZ+zmAyNMNGOsHyVwqbY4IDz6MZcg4fTXE4CrJdC43cHy7Q= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On Thu, Feb 15, 2024 at 10:32:05AM +0000, Ryan Roberts wrote: > There are situations where a change to a single PTE could cause the > contpte block in which it resides to become foldable (i.e. could be > repainted with the contiguous bit). Such situations arise, for example, > when user space temporarily changes protections, via mprotect, for > individual pages, such can be the case for certain garbage collectors. > > We would like to detect when such a PTE change occurs. However this can > be expensive due to the amount of checking required. Therefore only > perform the checks when an indiviual PTE is modified via mprotect > (ptep_modify_prot_commit() -> set_pte_at() -> set_ptes(nr=1)) and only > when we are setting the final PTE in a contpte-aligned block. > > Signed-off-by: Ryan Roberts Acked-by: Mark Rutland Mark. > --- > arch/arm64/include/asm/pgtable.h | 26 +++++++++++++ > arch/arm64/mm/contpte.c | 64 ++++++++++++++++++++++++++++++++ > 2 files changed, 90 insertions(+) > > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h > index 8310875133ff..401087e8a43d 100644 > --- a/arch/arm64/include/asm/pgtable.h > +++ b/arch/arm64/include/asm/pgtable.h > @@ -1185,6 +1185,8 @@ extern void ptep_modify_prot_commit(struct vm_area_struct *vma, > * where it is possible and makes sense to do so. The PTE_CONT bit is considered > * a private implementation detail of the public ptep API (see below). > */ > +extern void __contpte_try_fold(struct mm_struct *mm, unsigned long addr, > + pte_t *ptep, pte_t pte); > extern void __contpte_try_unfold(struct mm_struct *mm, unsigned long addr, > pte_t *ptep, pte_t pte); > extern pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte); > @@ -1206,6 +1208,29 @@ extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma, > unsigned long addr, pte_t *ptep, > pte_t entry, int dirty); > > +static __always_inline void contpte_try_fold(struct mm_struct *mm, > + unsigned long addr, pte_t *ptep, pte_t pte) > +{ > + /* > + * Only bother trying if both the virtual and physical addresses are > + * aligned and correspond to the last entry in a contig range. The core > + * code mostly modifies ranges from low to high, so this is the likely > + * the last modification in the contig range, so a good time to fold. > + * We can't fold special mappings, because there is no associated folio. > + */ > + > + const unsigned long contmask = CONT_PTES - 1; > + bool valign = ((addr >> PAGE_SHIFT) & contmask) == contmask; > + > + if (unlikely(valign)) { > + bool palign = (pte_pfn(pte) & contmask) == contmask; > + > + if (unlikely(palign && > + pte_valid(pte) && !pte_cont(pte) && !pte_special(pte))) > + __contpte_try_fold(mm, addr, ptep, pte); > + } > +} > + > static __always_inline void contpte_try_unfold(struct mm_struct *mm, > unsigned long addr, pte_t *ptep, pte_t pte) > { > @@ -1286,6 +1311,7 @@ static __always_inline void set_ptes(struct mm_struct *mm, unsigned long addr, > if (likely(nr == 1)) { > contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep)); > __set_ptes(mm, addr, ptep, pte, 1); > + contpte_try_fold(mm, addr, ptep, pte); > } else { > contpte_set_ptes(mm, addr, ptep, pte, nr); > } > diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c > index 50e0173dc5ee..16788f07716d 100644 > --- a/arch/arm64/mm/contpte.c > +++ b/arch/arm64/mm/contpte.c > @@ -73,6 +73,70 @@ static void contpte_convert(struct mm_struct *mm, unsigned long addr, > __set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES); > } > > +void __contpte_try_fold(struct mm_struct *mm, unsigned long addr, > + pte_t *ptep, pte_t pte) > +{ > + /* > + * We have already checked that the virtual and pysical addresses are > + * correctly aligned for a contpte mapping in contpte_try_fold() so the > + * remaining checks are to ensure that the contpte range is fully > + * covered by a single folio, and ensure that all the ptes are valid > + * with contiguous PFNs and matching prots. We ignore the state of the > + * access and dirty bits for the purpose of deciding if its a contiguous > + * range; the folding process will generate a single contpte entry which > + * has a single access and dirty bit. Those 2 bits are the logical OR of > + * their respective bits in the constituent pte entries. In order to > + * ensure the contpte range is covered by a single folio, we must > + * recover the folio from the pfn, but special mappings don't have a > + * folio backing them. Fortunately contpte_try_fold() already checked > + * that the pte is not special - we never try to fold special mappings. > + * Note we can't use vm_normal_page() for this since we don't have the > + * vma. > + */ > + > + unsigned long folio_start, folio_end; > + unsigned long cont_start, cont_end; > + pte_t expected_pte, subpte; > + struct folio *folio; > + struct page *page; > + unsigned long pfn; > + pte_t *orig_ptep; > + pgprot_t prot; > + > + int i; > + > + if (!mm_is_user(mm)) > + return; > + > + page = pte_page(pte); > + folio = page_folio(page); > + folio_start = addr - (page - &folio->page) * PAGE_SIZE; > + folio_end = folio_start + folio_nr_pages(folio) * PAGE_SIZE; > + cont_start = ALIGN_DOWN(addr, CONT_PTE_SIZE); > + cont_end = cont_start + CONT_PTE_SIZE; > + > + if (folio_start > cont_start || folio_end < cont_end) > + return; > + > + pfn = ALIGN_DOWN(pte_pfn(pte), CONT_PTES); > + prot = pte_pgprot(pte_mkold(pte_mkclean(pte))); > + expected_pte = pfn_pte(pfn, prot); > + orig_ptep = ptep; > + ptep = contpte_align_down(ptep); > + > + for (i = 0; i < CONT_PTES; i++) { > + subpte = pte_mkold(pte_mkclean(__ptep_get(ptep))); > + if (!pte_same(subpte, expected_pte)) > + return; > + expected_pte = pte_advance_pfn(expected_pte, 1); > + ptep++; > + } > + > + pte = pte_mkcont(pte); > + contpte_convert(mm, addr, orig_ptep, pte); > +} > +EXPORT_SYMBOL(__contpte_try_fold); > + > void __contpte_try_unfold(struct mm_struct *mm, unsigned long addr, > pte_t *ptep, pte_t pte) > { > -- > 2.25.1 >