From: Ryan Roberts <ryan.roberts@arm.com>
Date: Tue, 28 Nov 2023 16:55:11 +0000
Subject: Re: [PATCH v2 14/14] arm64/mm: Add ptep_get_and_clear_full() to optimize process teardown
To: Alistair Popple
Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Andrew Morton, Anshuman Khandual, Matthew Wilcox, Yu Zhao, Mark Rutland, David Hildenbrand, Kefeng Wang, John Hubbard, Zi Yan, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20231115163018.1303287-1-ryan.roberts@arm.com> <20231115163018.1303287-15-ryan.roberts@arm.com>

>>> So if we do need to deal with racing HW, I'm pretty sure my v1
>>> implementation is buggy because it iterated through the PTEs,
>>> getting and accumulating. Then iterated again, writing that final set of
>>> bits to all the PTEs. And the HW could have modified the bits during
>>> those loops. I think it would be possible to fix the race, but intuition
>>> says it would be expensive.
>>
>> So the issue as I understand it is that subsequent iterations would see a
>> clean PTE after the first iteration returned a dirty PTE. In
>> ptep_get_and_clear_full(), why couldn't you just copy the dirty/accessed
>> bit (if set) from the PTE being cleared to an adjacent PTE rather than to
>> all the PTEs?
>
> The raciness I'm describing is the race between reading access/dirty from
> one pte and applying it to another. But yes, I like your suggestion. If we
> do:
>
>   pte = __ptep_get_and_clear_full(ptep)
>
> on the target pte, then we have grabbed access/dirty from it in a race-free
> manner. We can then loop from the current pte up towards the top of the
> block until we find a valid entry (and I guess wrap at the top to make us
> robust against future callers clearing in an arbitrary order). Then
> atomically accumulate the access/dirty bits we have just saved into that
> new entry. I guess that's just a cmpxchg loop - there are already examples
> of how to do that correctly when racing the TLB.
>
> For most entries, we will just be copying up to the next pte. For the last
> pte, we would end up reading all ptes and determining that we are the last
> one.
>
> What do you think?

OK, here is an attempt at something which solves the fragility. I think this
is now robust and will always return the correct access/dirty state from
ptep_get_and_clear_full() and ptep_get(). But I'm not sure about performance;
each call to ptep_get_and_clear_full() for each pte in a contpte block will
cause a ptep_get() to gather the access/dirty bits from across the contpte
block - which requires reading each pte in the contpte block. So it's O(n^2)
in that sense. I'll benchmark it and report back.

Was this the type of thing you were thinking of, Alistair?
--8<--
 arch/arm64/include/asm/pgtable.h | 23 ++++++++-
 arch/arm64/mm/contpte.c          | 81 ++++++++++++++++++++++++++++++++
 arch/arm64/mm/fault.c            | 38 +++++++++------
 3 files changed, 125 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 9bd2f57a9e11..6c295d277784 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -851,6 +851,7 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
 	return pte_pmd(pte_modify(pmd_pte(pmd), newprot));
 }
 
+extern int __ptep_set_access_flags_notlbi(pte_t *ptep, pte_t entry);
 extern int __ptep_set_access_flags(struct vm_area_struct *vma,
 				   unsigned long address, pte_t *ptep,
 				   pte_t entry, int dirty);
@@ -1145,6 +1146,8 @@ extern pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte);
 extern pte_t contpte_ptep_get_lockless(pte_t *orig_ptep);
 extern void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
 				pte_t *ptep, pte_t pte, unsigned int nr);
+extern pte_t contpte_ptep_get_and_clear_full(struct mm_struct *mm,
+				unsigned long addr, pte_t *ptep);
 extern int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
 				unsigned long addr, pte_t *ptep);
 extern int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
@@ -1270,12 +1273,28 @@ static inline void pte_clear(struct mm_struct *mm,
 	__pte_clear(mm, addr, ptep);
 }
 
+#define __HAVE_ARCH_PTEP_GET_AND_CLEAR_FULL
+static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
+				unsigned long addr, pte_t *ptep, int full)
+{
+	pte_t orig_pte = __ptep_get(ptep);
+
+	if (!pte_valid_cont(orig_pte))
+		return __ptep_get_and_clear(mm, addr, ptep);
+
+	if (!full) {
+		contpte_try_unfold(mm, addr, ptep, orig_pte);
+		return __ptep_get_and_clear(mm, addr, ptep);
+	}
+
+	return contpte_ptep_get_and_clear_full(mm, addr, ptep);
+}
+
 #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				unsigned long addr, pte_t *ptep)
 {
-	contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
-	return __ptep_get_and_clear(mm, addr, ptep);
+	return ptep_get_and_clear_full(mm, addr, ptep, 0);
 }
 
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 2a57df16bf58..99b211118d93 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -145,6 +145,14 @@ pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte)
 	for (i = 0; i < CONT_PTES; i++, ptep++) {
 		pte = __ptep_get(ptep);
 
+		/*
+		 * Deal with the partial contpte_ptep_get_and_clear_full() case,
+		 * where some of the ptes in the range may be cleared but others
+		 * are still to do. See contpte_ptep_get_and_clear_full().
+		 */
+		if (!pte_valid(pte))
+			continue;
+
 		if (pte_dirty(pte))
 			orig_pte = pte_mkdirty(orig_pte);
 
@@ -257,6 +265,79 @@ void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL(contpte_set_ptes);
 
+pte_t contpte_ptep_get_and_clear_full(struct mm_struct *mm,
+					unsigned long addr, pte_t *ptep)
+{
+	/*
+	 * When doing a full address space teardown, we can avoid unfolding the
+	 * contiguous range, and therefore avoid the associated tlbi. Instead,
+	 * just get and clear the pte. The caller is promising to call us for
+	 * every pte, so every pte in the range will be cleared by the time the
+	 * final tlbi is issued.
+	 *
+	 * This approach requires some complex hoop jumping though, as for the
+	 * duration between returning from the first call to
+	 * ptep_get_and_clear_full() and making the final call, the contpte
+	 * block is in an intermediate state, where some ptes are cleared and
+	 * others are still set with the PTE_CONT bit. If any other APIs are
+	 * called for the ptes in the contpte block during that time, we have to
+	 * be very careful. The core code currently interleaves calls to
+	 * ptep_get_and_clear_full() with ptep_get() and so ptep_get() must be
+	 * careful to ignore the cleared entries when accumulating the access
+	 * and dirty bits - the same goes for ptep_get_lockless(). The only
+	 * other calls we might reasonably expect are to set markers in the
+	 * previously cleared ptes. (We shouldn't see valid entries being set
+	 * until after the tlbi, at which point we are no longer in the
+	 * intermediate state). Since markers are not valid, this is safe;
+	 * set_ptes() will see the old, invalid entry and will not attempt to
+	 * unfold. And the new pte is also invalid so it won't attempt to fold.
+	 * We shouldn't see pte markers being set for the 'full' case anyway
+	 * since the address space is being torn down.
+	 *
+	 * The last remaining issue is returning the access/dirty bits. That
+	 * info could be present in any of the ptes in the contpte block.
+	 * ptep_get() will gather those bits from across the contpte block (for
+	 * the remaining valid entries). So below, if the pte we are clearing
+	 * has dirty or young set, we need to stash it into a pte that we are
+	 * yet to clear. This allows future calls to return the correct state
+	 * even when the info was stored in a different pte. Since the core-mm
+	 * calls from low to high address, we prefer to stash in the last pte of
+	 * the contpte block - this means we are not "dragging" the bits up
+	 * through all ptes and increases the chances that we can exit early
+	 * because a given pte will have neither dirty nor young set.
+	 */
+
+	pte_t orig_pte = __ptep_get_and_clear(mm, addr, ptep);
+	bool dirty = pte_dirty(orig_pte);
+	bool young = pte_young(orig_pte);
+	pte_t *start;
+
+	if (!dirty && !young)
+		return contpte_ptep_get(ptep, orig_pte);
+
+	start = contpte_align_down(ptep);
+	ptep = start + CONT_PTES - 1;
+
+	for (; ptep >= start; ptep--) {
+		pte_t pte = __ptep_get(ptep);
+
+		if (!pte_valid(pte))
+			continue;
+
+		if (dirty)
+			pte = pte_mkdirty(pte);
+
+		if (young)
+			pte = pte_mkyoung(pte);
+
+		__ptep_set_access_flags_notlbi(ptep, pte);
+		return contpte_ptep_get(ptep, orig_pte);
+	}
+
+	return orig_pte;
+}
+EXPORT_SYMBOL(contpte_ptep_get_and_clear_full);
+
 int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
 					unsigned long addr, pte_t *ptep)
 {
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index d63f3a0a7251..b22216a8153c 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -199,19 +199,7 @@ static void show_pte(unsigned long addr)
 	pr_cont("\n");
 }
 
-/*
- * This function sets the access flags (dirty, accessed), as well as write
- * permission, and only to a more permissive setting.
- *
- * It needs to cope with hardware update of the accessed/dirty state by other
- * agents in the system and can safely skip the __sync_icache_dcache() call as,
- * like __set_ptes(), the PTE is never changed from no-exec to exec here.
- *
- * Returns whether or not the PTE actually changed.
- */
-int __ptep_set_access_flags(struct vm_area_struct *vma,
-			    unsigned long address, pte_t *ptep,
-			    pte_t entry, int dirty)
+int __ptep_set_access_flags_notlbi(pte_t *ptep, pte_t entry)
 {
 	pteval_t old_pteval, pteval;
 	pte_t pte = __ptep_get(ptep);
@@ -238,10 +226,30 @@ int __ptep_set_access_flags(struct vm_area_struct *vma,
 		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
 	} while (pteval != old_pteval);
 
+	return 1;
+}
+
+/*
+ * This function sets the access flags (dirty, accessed), as well as write
+ * permission, and only to a more permissive setting.
+ *
+ * It needs to cope with hardware update of the accessed/dirty state by other
+ * agents in the system and can safely skip the __sync_icache_dcache() call as,
+ * like __set_ptes(), the PTE is never changed from no-exec to exec here.
+ *
+ * Returns whether or not the PTE actually changed.
+ */
+int __ptep_set_access_flags(struct vm_area_struct *vma,
+			    unsigned long address, pte_t *ptep,
+			    pte_t entry, int dirty)
+{
+	int changed = __ptep_set_access_flags_notlbi(ptep, entry);
+
 	/* Invalidate a stale read-only entry */
-	if (dirty)
+	if (changed && dirty)
 		flush_tlb_page(vma, address);
-	return 1;
+
+	return changed;
 }
 
 static bool is_el1_instruction_abort(unsigned long esr)
--8<--