Date: Tue, 13 Feb 2024 16:43:11 +0000
From: Mark Rutland <mark.rutland@arm.com>
To: Ryan Roberts
Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier, James Morse,
 Andrey Ryabinin, Andrew Morton, Matthew Wilcox, David Hildenbrand,
 Kefeng Wang, John Hubbard, Zi Yan, Barry Song <21cnbao@gmail.com>,
 Alistair Popple, Yang Shi, Nicholas Piggin, Christophe Leroy,
 "Aneesh Kumar K.V", "Naveen N. Rao", Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, Dave Hansen, "H. Peter Anvin",
 linux-arm-kernel@lists.infradead.org, x86@kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 21/25] arm64/mm: Implement new [get_and_]clear_full_ptes() batch APIs
References: <20240202080756.1453939-1-ryan.roberts@arm.com>
 <20240202080756.1453939-22-ryan.roberts@arm.com>
In-Reply-To: <20240202080756.1453939-22-ryan.roberts@arm.com>
On Fri, Feb 02, 2024 at 08:07:52AM +0000, Ryan Roberts wrote:
> Optimize the contpte implementation to fix some of the
> exit/munmap/dontneed performance regression introduced by the initial
> contpte commit. Subsequent patches will solve it entirely.
>
> During exit(), munmap() or madvise(MADV_DONTNEED), mappings must be
> cleared. Previously this was done 1 PTE at a time. But the core-mm
> supports batched clear via the new [get_and_]clear_full_ptes() APIs. So
> let's implement those APIs and for fully covered contpte mappings, we no
> longer need to unfold the contpte. This significantly reduces unfolding
> operations, reducing the number of tlbis that must be issued.
>
> Tested-by: John Hubbard
> Signed-off-by: Ryan Roberts
> ---
>  arch/arm64/include/asm/pgtable.h | 67 ++++++++++++++++++++++++++++++++
>  arch/arm64/mm/contpte.c          | 17 ++++++++
>  2 files changed, 84 insertions(+)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index c07f0d563733..ad04adb7b87f 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -965,6 +965,37 @@ static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
>  	return pte;
>  }
>
> +static inline void __clear_full_ptes(struct mm_struct *mm, unsigned long addr,
> +				pte_t *ptep, unsigned int nr, int full)
> +{
> +	for (;;) {
> +		__ptep_get_and_clear(mm, addr, ptep);
> +		if (--nr == 0)
> +			break;
> +		ptep++;
> +		addr += PAGE_SIZE;
> +	}
> +}

The loop construct is a bit odd; can't this be:

	while (nr--) {
		__ptep_get_and_clear(mm, addr, ptep);
		ptep++;
		addr += PAGE_SIZE;
	}

... or:

	do {
		__ptep_get_and_clear(mm, addr, ptep);
		ptep++;
		addr += PAGE_SIZE;
	} while (--nr);

... ?

Otherwise, this looks good to me.

Mark.
> +
> +static inline pte_t __get_and_clear_full_ptes(struct mm_struct *mm,
> +				unsigned long addr, pte_t *ptep,
> +				unsigned int nr, int full)
> +{
> +	pte_t pte, tmp_pte;
> +
> +	pte = __ptep_get_and_clear(mm, addr, ptep);
> +	while (--nr) {
> +		ptep++;
> +		addr += PAGE_SIZE;
> +		tmp_pte = __ptep_get_and_clear(mm, addr, ptep);
> +		if (pte_dirty(tmp_pte))
> +			pte = pte_mkdirty(pte);
> +		if (pte_young(tmp_pte))
> +			pte = pte_mkyoung(pte);
> +	}
> +	return pte;
> +}
> +
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  #define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR
>  static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
> @@ -1167,6 +1198,11 @@ extern pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte);
>  extern pte_t contpte_ptep_get_lockless(pte_t *orig_ptep);
>  extern void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
>  				pte_t *ptep, pte_t pte, unsigned int nr);
> +extern void contpte_clear_full_ptes(struct mm_struct *mm, unsigned long addr,
> +				pte_t *ptep, unsigned int nr, int full);
> +extern pte_t contpte_get_and_clear_full_ptes(struct mm_struct *mm,
> +				unsigned long addr, pte_t *ptep,
> +				unsigned int nr, int full);
>  extern int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
> 				unsigned long addr, pte_t *ptep);
>  extern int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
> @@ -1254,6 +1290,35 @@ static inline void pte_clear(struct mm_struct *mm,
>  	__pte_clear(mm, addr, ptep);
>  }
>
> +#define clear_full_ptes clear_full_ptes
> +static inline void clear_full_ptes(struct mm_struct *mm, unsigned long addr,
> +				pte_t *ptep, unsigned int nr, int full)
> +{
> +	if (likely(nr == 1)) {
> +		contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
> +		__clear_full_ptes(mm, addr, ptep, nr, full);
> +	} else {
> +		contpte_clear_full_ptes(mm, addr, ptep, nr, full);
> +	}
> +}
> +
> +#define get_and_clear_full_ptes get_and_clear_full_ptes
> +static inline pte_t get_and_clear_full_ptes(struct mm_struct *mm,
> +				unsigned long addr, pte_t *ptep,
> +				unsigned int nr, int full)
> +{
> +	pte_t pte;
> +
> +	if (likely(nr == 1)) {
> +		contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
> +		pte = __get_and_clear_full_ptes(mm, addr, ptep, nr, full);
> +	} else {
> +		pte = contpte_get_and_clear_full_ptes(mm, addr, ptep, nr, full);
> +	}
> +
> +	return pte;
> +}
> +
>  #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
>  static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>  				unsigned long addr, pte_t *ptep)
> @@ -1338,6 +1403,8 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
>  #define set_pte				__set_pte
>  #define set_ptes			__set_ptes
>  #define pte_clear			__pte_clear
> +#define clear_full_ptes			__clear_full_ptes
> +#define get_and_clear_full_ptes		__get_and_clear_full_ptes
>  #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
>  #define ptep_get_and_clear		__ptep_get_and_clear
>  #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
> index c85e64baf03b..80346108450b 100644
> --- a/arch/arm64/mm/contpte.c
> +++ b/arch/arm64/mm/contpte.c
> @@ -207,6 +207,23 @@ void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
>  }
>  EXPORT_SYMBOL(contpte_set_ptes);
>
> +void contpte_clear_full_ptes(struct mm_struct *mm, unsigned long addr,
> +			pte_t *ptep, unsigned int nr, int full)
> +{
> +	contpte_try_unfold_partial(mm, addr, ptep, nr);
> +	__clear_full_ptes(mm, addr, ptep, nr, full);
> +}
> +EXPORT_SYMBOL(contpte_clear_full_ptes);
> +
> +pte_t contpte_get_and_clear_full_ptes(struct mm_struct *mm,
> +			unsigned long addr, pte_t *ptep,
> +			unsigned int nr, int full)
> +{
> +	contpte_try_unfold_partial(mm, addr, ptep, nr);
> +	return __get_and_clear_full_ptes(mm, addr, ptep, nr, full);
> +}
> +EXPORT_SYMBOL(contpte_get_and_clear_full_ptes);
> +
>  int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
> 				unsigned long addr, pte_t *ptep)
>  {
> --
> 2.25.1
>