Message-ID: <44ca1e89-c0f3-4916-9bd7-99a3fc626073@arm.com>
Date: Wed, 20 Dec 2023 08:42:49 +0000
From: Ryan Roberts <ryan.roberts@arm.com>
Subject: Re: [PATCH v4 16/16] arm64/mm: Implement clear_ptes() to optimize exit, munmap, dontneed
To: Alistair Popple
Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Andrew Morton, Anshuman Khandual, Matthew Wilcox, Yu Zhao, Mark Rutland, David Hildenbrand, Kefeng Wang, John Hubbard, Zi Yan, Barry Song <21cnbao@gmail.com>, Yang Shi, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20231218105100.172635-1-ryan.roberts@arm.com> <20231218105100.172635-17-ryan.roberts@arm.com> <87v88tzbfe.fsf@nvdebian.thelocal>
In-Reply-To: <87v88tzbfe.fsf@nvdebian.thelocal>

On 20/12/2023 05:28, Alistair Popple wrote:
>
> Ryan Roberts writes:
>
>> With the core-mm changes in place to batch-clear ptes during
>> zap_pte_range(), we can take advantage of this in
>> arm64 to greatly reduce the number of tlbis we have to issue, and
>> recover the lost performance in exit, munmap and madvise(DONTNEED)
>> incurred when adding support for transparent contiguous ptes.
>>
>> If we are clearing a whole contpte range, we can elide first unfolding
>> that range and save the tlbis. We just clear the whole range.
>>
>> The following microbenchmark results demonstrate the effect of this
>> change on madvise(DONTNEED) performance for large pte-mapped folios.
>> madvise(dontneed) is called for each page of a 1G populated mapping and
>> the total time is measured. 100 iterations per run, 8 runs performed on
>> both Apple M2 (VM) and Ampere Altra (bare metal). Tests performed for
>> case where 1G memory is comprised of pte-mapped order-9 folios.
>>
>> | dontneed      |    Apple M2 VM    |    Ampere Altra   |
>> | order-9       |-------------------|-------------------|
>> | (pte-map)     |    mean |   stdev |    mean |   stdev |
>> |---------------|---------|---------|---------|---------|
>> | baseline      |    0.0% |    7.9% |    0.0% |    0.0% |
>> | before-change |   -1.3% |    7.0% |   13.0% |    0.0% |
>> | after-change  |   -9.9% |    0.9% |   14.1% |    0.0% |
>>
>> The memory is initially all contpte-mapped and has to be unfolded (which
>> requires tlbi for the whole block) when the first page is touched (since
>> the test is madvise-ing 1 page at a time). Ampere Altra has a high cost
>> for tlbi; this is why cost increases there.
>>
>> The following microbenchmark results demonstrate the recovery (and
>> overall improvement) of munmap performance for large pte-mapped folios.
>> munmap is called for a 1G populated mapping and the function runtime is
>> measured. 100 iterations per run, 8 runs performed on both Apple M2 (VM)
>> and Ampere Altra (bare metal). Tests performed for case where 1G memory
>> is comprised of pte-mapped order-9 folios.
>> Negative is faster, positive is slower, compared to baseline upon
>> which the series is based:
>>
>> | munmap        |    Apple M2 VM    |    Ampere Altra   |
>> | order-9       |-------------------|-------------------|
>> | (pte-map)     |    mean |   stdev |    mean |   stdev |
>> |---------------|---------|---------|---------|---------|
>> | baseline      |    0.0% |    6.4% |    0.0% |    0.1% |
>> | before-change |   43.3% |    1.9% |  375.2% |    0.0% |
>> | after-change  |   -6.0% |    1.4% |   -0.6% |    0.2% |
>>
>> Tested-by: John Hubbard
>> Signed-off-by: Ryan Roberts
>> ---
>>  arch/arm64/include/asm/pgtable.h | 42 +++++++++++++++++++++++++++++
>>  arch/arm64/mm/contpte.c          | 45 ++++++++++++++++++++++++++++++++
>>  2 files changed, 87 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index d4805f73b9db..f5bf059291c3 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -953,6 +953,29 @@ static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
>>  	return pte;
>>  }
>>
>> +static inline pte_t __clear_ptes(struct mm_struct *mm,
>> +				unsigned long address, pte_t *ptep,
>> +				unsigned int nr, int full)
>
> Ping on my previous comment - why not just use the generic version
> defined in patch 3 which is basically identical to this?

Perhaps I misunderstood your original comment - I thought this was what
you were suggesting - i.e. move this code out of the arm64 clear_ptes()
impl into its own __clear_ptes() helper, and always define an arm64
clear_ptes(), even when ARM64_CONTPTE is not enabled.

I can use (and was in v3) the generic version when ARM64_CONTPTE is
disabled. But I can't use it when it's enabled, because then arm64 needs
its own implementation to manage the contpte bit. And once it defines
its own version, by defining the macro clear_ptes(), then the generic
version is no longer defined, so I can't call it as part of this
implementation. Even if I could, that would be recursive.
Or perhaps I'm still not understanding your suggestion?

>
>> +{
>> +	pte_t orig_pte = __ptep_get_and_clear(mm, address, ptep);
>> +	unsigned int i;
>> +	pte_t pte;
>> +
>> +	for (i = 1; i < nr; i++) {
>> +		address += PAGE_SIZE;
>> +		ptep++;
>> +		pte = __ptep_get_and_clear(mm, address, ptep);
>> +
>> +		if (pte_dirty(pte))
>> +			orig_pte = pte_mkdirty(orig_pte);
>> +
>> +		if (pte_young(pte))
>> +			orig_pte = pte_mkyoung(orig_pte);
>> +	}
>> +
>> +	return orig_pte;
>> +}
>> +
>>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>  #define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR
>>  static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
>> @@ -1151,6 +1174,8 @@ extern pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte);
>>  extern pte_t contpte_ptep_get_lockless(pte_t *orig_ptep);
>>  extern void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
>>  				pte_t *ptep, pte_t pte, unsigned int nr);
>> +extern pte_t contpte_clear_ptes(struct mm_struct *mm, unsigned long addr,
>> +				pte_t *ptep, unsigned int nr, int full);
>>  extern int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
>>  				unsigned long addr, pte_t *ptep);
>>  extern int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
>> @@ -1279,6 +1304,22 @@ static inline void pte_clear(struct mm_struct *mm,
>>  	__pte_clear(mm, addr, ptep);
>>  }
>>
>> +#define clear_ptes clear_ptes
>> +static inline pte_t clear_ptes(struct mm_struct *mm,
>> +				unsigned long addr, pte_t *ptep,
>> +				unsigned int nr, int full)
>> +{
>> +	pte_t pte;
>> +
>> +	if (nr == 1) {
>> +		contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
>> +		pte = __ptep_get_and_clear(mm, addr, ptep);
>> +	} else
>> +		pte = contpte_clear_ptes(mm, addr, ptep, nr, full);
>> +
>> +	return pte;
>> +}
>> +
>>  #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
>>  static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>  				unsigned long addr, pte_t *ptep)
>> @@ -1366,6 +1407,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
>>  #define set_pte			__set_pte
>>  #define set_ptes		__set_ptes
>>  #define pte_clear		__pte_clear
>> +#define clear_ptes		__clear_ptes
>>  #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
>>  #define ptep_get_and_clear	__ptep_get_and_clear
>>  #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
>>
>> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
>> index 72e672024785..6f2a15ac5163 100644
>> --- a/arch/arm64/mm/contpte.c
>> +++ b/arch/arm64/mm/contpte.c
>> @@ -293,6 +293,51 @@ void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
>>  }
>>  EXPORT_SYMBOL(contpte_set_ptes);
>>
>> +pte_t contpte_clear_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
>> +			unsigned int nr, int full)
>> +{
>> +	/*
>> +	 * If we cover a partial contpte block at the beginning or end of the
>> +	 * batch, unfold if currently folded. This makes it safe to clear some
>> +	 * of the entries while keeping others. contpte blocks in the middle of
>> +	 * the range, which are fully covered don't need to be unfolded because
>> +	 * we will clear the full block.
>> +	 */
>> +
>> +	unsigned int i;
>> +	pte_t pte;
>> +	pte_t tail;
>> +
>> +	if (!mm_is_user(mm))
>> +		return __clear_ptes(mm, addr, ptep, nr, full);
>> +
>> +	if (ptep != contpte_align_down(ptep) || nr < CONT_PTES)
>> +		contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
>> +
>> +	if (ptep + nr != contpte_align_down(ptep + nr))
>> +		contpte_try_unfold(mm, addr + PAGE_SIZE * (nr - 1),
>> +				   ptep + nr - 1,
>> +				   __ptep_get(ptep + nr - 1));
>> +
>> +	pte = __ptep_get_and_clear(mm, addr, ptep);
>> +
>> +	for (i = 1; i < nr; i++) {
>> +		addr += PAGE_SIZE;
>> +		ptep++;
>> +
>> +		tail = __ptep_get_and_clear(mm, addr, ptep);
>> +
>> +		if (pte_dirty(tail))
>> +			pte = pte_mkdirty(pte);
>> +
>> +		if (pte_young(tail))
>> +			pte = pte_mkyoung(pte);
>> +	}
>> +
>> +	return pte;
>> +}
>> +EXPORT_SYMBOL(contpte_clear_ptes);
>> +
>>  int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
>>  				unsigned long addr, pte_t *ptep)
>>  {
>