From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>, Ard Biesheuvel <ardb@kernel.org>,
Marc Zyngier <maz@kernel.org>,
Oliver Upton <oliver.upton@linux.dev>,
James Morse <james.morse@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Zenghui Yu <yuzenghui@huawei.com>,
Andrey Ryabinin <ryabinin.a.a@gmail.com>,
Alexander Potapenko <glider@google.com>,
Andrey Konovalov <andreyknvl@gmail.com>,
Dmitry Vyukov <dvyukov@google.com>,
Vincenzo Frascino <vincenzo.frascino@arm.com>,
Andrew Morton <akpm@linux-foundation.org>,
Anshuman Khandual <anshuman.khandual@arm.com>,
Matthew Wilcox <willy@infradead.org>, Yu Zhao <yuzhao@google.com>,
Mark Rutland <mark.rutland@arm.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v1 12/14] arm64/mm: Add ptep_get_and_clear_full() to optimize process teardown
Date: Thu, 22 Jun 2023 15:42:07 +0100 [thread overview]
Message-ID: <20230622144210.2623299-13-ryan.roberts@arm.com> (raw)
In-Reply-To: <20230622144210.2623299-1-ryan.roberts@arm.com>
ptep_get_and_clear_full() takes a 'full' parameter which is not present
in the fallback ptep_get_and_clear() function. 'full' is set to 1 when a
full address space teardown is in progress. Use this information to
optimize arm64_sys_exit_group() by avoiding the unfolding of contiguous
ranges (and therefore the tlbi that unfolding requires). Instead, just
clear the PTE and allow all its contiguous neighbours to keep their
contig bit set, since we know the rest are about to be cleared too.
Before this optimization, the cost of arm64_sys_exit_group() when
compiling the kernel exploded to 32x its value before PTE_CONT support
was wired up. With this optimization in place, we are back down to the
original cost.
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/pgtable.h | 18 ++++++++-
arch/arm64/mm/contpte.c | 68 ++++++++++++++++++++++++++++++++
2 files changed, 84 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 17ea534bc5b0..5963da651da7 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1128,6 +1128,8 @@ extern pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte);
extern pte_t contpte_ptep_get_lockless(pte_t *orig_ptep);
extern void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte, unsigned int nr);
+extern pte_t contpte_ptep_get_and_clear_full(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep);
extern int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep);
extern int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
@@ -1252,12 +1254,24 @@ static inline void pte_clear(struct mm_struct *mm,
__pte_clear(mm, addr, ptep);
}
+#define __HAVE_ARCH_PTEP_GET_AND_CLEAR_FULL
+static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep, int full)
+{
+ pte_t orig_pte = __ptep_get(ptep);
+
+ if (!pte_present(orig_pte) || !pte_cont(orig_pte) || !full) {
+ contpte_try_unfold(mm, addr, ptep, orig_pte);
+ return __ptep_get_and_clear(mm, addr, ptep);
+ } else
+ return contpte_ptep_get_and_clear_full(mm, addr, ptep);
+}
+
#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
{
- contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
- return __ptep_get_and_clear(mm, addr, ptep);
+ return ptep_get_and_clear_full(mm, addr, ptep, 0);
}
#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index e8e4a298fd53..0b585d1c4c94 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -241,6 +241,74 @@ void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
} while (addr != end);
}
+pte_t contpte_ptep_get_and_clear_full(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep)
+{
+ /*
+ * When doing a full address space teardown, we can avoid unfolding the
+ * contiguous range, and therefore avoid the associated tlbi. Instead,
+ * just clear the pte. The caller is promising to call us for every pte,
+ * so every pte in the range will be cleared by the time the tlbi is
+ * issued.
+ *
+ * However, this approach will leave the ptes in an inconsistent state
+ * until ptep_get_and_clear_full() has been called for every pte in the
+ * range. This could cause ptep_get() to fail to return the correct
+ * access/dirty bits, if ptep_get() calls are interleaved with
+ * ptep_get_and_clear_full() (which they are). Solve this by copying the
+ * access/dirty bits to every pte in the range so that ptep_get() still
+ * sees them if we have already cleared the pte that the hw chose to
+ * update. Note that a full teardown will only happen when the process
+ * is exiting, so we do not expect any more accesses, and therefore no
+ * more access/dirty bit updates, so there is no race here.
+ */
+
+ pte_t *orig_ptep = ptep;
+ pte_t pte;
+ bool flags_propagated = false;
+ bool dirty = false;
+ bool young = false;
+ int i;
+
+ /* First, gather access and dirty bits. */
+ ptep = contpte_align_down(orig_ptep);
+ for (i = 0; i < CONT_PTES; i++, ptep++) {
+ pte = __ptep_get(ptep);
+
+ /*
+ * If we find a zeroed PTE, contpte_ptep_get_and_clear_full()
+ * must have already been called for it, so we have already
+ * propagated the flags to the other ptes.
+ */
+ if (pte_val(pte) == 0) {
+ flags_propagated = true;
+ break;
+ }
+
+ if (pte_dirty(pte))
+ dirty = true;
+
+ if (pte_young(pte))
+ young = true;
+ }
+
+ /* Now copy the access and dirty bits into each pte in the range. */
+ if (!flags_propagated) {
+ ptep = contpte_align_down(orig_ptep);
+ for (i = 0; i < CONT_PTES; i++, ptep++) {
+ pte = __ptep_get(ptep);
+
+ if (dirty)
+ pte = pte_mkdirty(pte);
+
+ if (young)
+ pte = pte_mkyoung(pte);
+
+ __set_pte(ptep, pte);
+ }
+ }
+
+ return __ptep_get_and_clear(mm, addr, orig_ptep);
+}
+
int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep)
{
--
2.25.1
Thread overview: 23+ messages
2023-06-22 14:41 [PATCH v1 00/14] Transparent Contiguous PTEs for User Mappings Ryan Roberts
2023-06-22 14:41 ` [PATCH v1 01/14] arm64/mm: set_pte(): New layer to manage contig bit Ryan Roberts
2023-06-22 14:41 ` [PATCH v1 02/14] arm64/mm: set_ptes()/set_pte_at(): " Ryan Roberts
2023-06-22 14:41 ` [PATCH v1 03/14] arm64/mm: pte_clear(): " Ryan Roberts
2023-06-22 14:41 ` [PATCH v1 04/14] arm64/mm: ptep_get_and_clear(): " Ryan Roberts
2023-06-22 14:42 ` [PATCH v1 05/14] arm64/mm: ptep_test_and_clear_young(): " Ryan Roberts
2023-06-22 14:42 ` [PATCH v1 06/14] arm64/mm: ptep_clear_flush_young(): " Ryan Roberts
2023-06-22 14:42 ` [PATCH v1 07/14] arm64/mm: ptep_set_wrprotect(): " Ryan Roberts
2023-06-22 14:42 ` [PATCH v1 08/14] arm64/mm: ptep_set_access_flags(): " Ryan Roberts
2023-06-22 14:42 ` [PATCH v1 09/14] arm64/mm: ptep_get(): " Ryan Roberts
2023-06-22 14:42 ` [PATCH v1 10/14] arm64/mm: Split __flush_tlb_range() to elide trailing DSB Ryan Roberts
2023-06-22 14:42 ` [PATCH v1 11/14] arm64/mm: Wire up PTE_CONT for user mappings Ryan Roberts
2023-06-30 1:54 ` John Hubbard
2023-07-03 9:48 ` Ryan Roberts
2023-07-03 15:17 ` Catalin Marinas
2023-07-04 11:09 ` Ryan Roberts
2023-07-05 13:13 ` Ryan Roberts
2023-07-16 15:09 ` Catalin Marinas
2023-06-22 14:42 ` Ryan Roberts [this message]
2023-06-22 14:42 ` [PATCH v1 13/14] mm: Batch-copy PTE ranges during fork() Ryan Roberts
2023-06-22 14:42 ` [PATCH v1 14/14] arm64/mm: Implement ptep_set_wrprotects() to optimize fork() Ryan Roberts
2023-07-10 12:05 ` [PATCH v1 00/14] Transparent Contiguous PTEs for User Mappings Barry Song
2023-07-10 13:28 ` Ryan Roberts