From: Ryan Roberts <ryan.roberts@arm.com>
To: Alistair Popple <apopple@nvidia.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>, Ard Biesheuvel <ardb@kernel.org>,
Marc Zyngier <maz@kernel.org>,
Oliver Upton <oliver.upton@linux.dev>,
James Morse <james.morse@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Zenghui Yu <yuzenghui@huawei.com>,
Andrey Ryabinin <ryabinin.a.a@gmail.com>,
Alexander Potapenko <glider@google.com>,
Andrey Konovalov <andreyknvl@gmail.com>,
Dmitry Vyukov <dvyukov@google.com>,
Vincenzo Frascino <vincenzo.frascino@arm.com>,
Andrew Morton <akpm@linux-foundation.org>,
Anshuman Khandual <anshuman.khandual@arm.com>,
Matthew Wilcox <willy@infradead.org>, Yu Zhao <yuzhao@google.com>,
Mark Rutland <mark.rutland@arm.com>,
David Hildenbrand <david@redhat.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
John Hubbard <jhubbard@nvidia.com>, Zi Yan <ziy@nvidia.com>,
Barry Song <21cnbao@gmail.com>, Yang Shi <shy828301@gmail.com>,
linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 14/15] arm64/mm: Implement ptep_set_wrprotects() to optimize fork()
Date: Fri, 15 Dec 2023 14:05:28 +0000
Message-ID: <f2a3b1f0-0823-4c3a-b0e8-20ac83e855b2@arm.com>
In-Reply-To: <87fs0413rx.fsf@nvdebian.thelocal>
On 15/12/2023 04:32, Alistair Popple wrote:
>
> Ryan Roberts <ryan.roberts@arm.com> writes:
>
>> On 08/12/2023 01:37, Alistair Popple wrote:
>>>
>>> Ryan Roberts <ryan.roberts@arm.com> writes:
>>>
>>>> With the core-mm changes in place to batch-copy ptes during fork, we can
>>>> take advantage of this in arm64 to greatly reduce the number of tlbis we
>>>> have to issue, and recover the fork performance lost (incurred) when adding
>>>> support for transparent contiguous ptes.
>>>>
>>>> If we are write-protecting a whole contig range, we can apply the
>>>> write-protection to the whole range and know that it won't change
>>>> whether the range should have the contiguous bit set or not. For ranges
>>>> smaller than the contig range, we will still have to unfold, apply the
>>>> write-protection, then fold if the change now means the range is
>>>> foldable.
>>>>
>>>> This optimization is possible thanks to the tightening of the Arm ARM with
>>>> respect to the definition and behaviour when 'Misprogramming the
>>>> Contiguous bit'. See section D21194 at
>>>> https://developer.arm.com/documentation/102105/latest/
>>>>
>>>> Performance tested with the following test written for the will-it-scale
>>>> framework:
>>>>
>>>> -------
>>>>
>>>> /*
>>>>  * Headers and the SZ_128M constant are spelled out here so the testcase
>>>>  * stands alone; the will-it-scale harness supplies main() and the runner
>>>>  * glue.
>>>>  */
>>>> #include <assert.h>
>>>> #include <stdlib.h>
>>>> #include <string.h>
>>>> #include <unistd.h>
>>>> #include <sys/wait.h>
>>>>
>>>> #define SZ_128M (128 * 1024 * 1024)
>>>>
>>>> char *testcase_description = "fork and exit";
>>>>
>>>> void testcase(unsigned long long *iterations, unsigned long nr)
>>>> {
>>>>         int pid;
>>>>         char *mem;
>>>>
>>>>         mem = malloc(SZ_128M);
>>>>         assert(mem);
>>>>         memset(mem, 1, SZ_128M);
>>>>
>>>>         while (1) {
>>>>                 pid = fork();
>>>>                 assert(pid >= 0);
>>>>
>>>>                 if (!pid)
>>>>                         exit(0);
>>>>
>>>>                 waitpid(pid, NULL, 0);
>>>>
>>>>                 (*iterations)++;
>>>>         }
>>>> }
>>>>
>>>> -------
>>>>
>>>> I see a huge performance regression when PTE_CONT support is added; the
>>>> regression is mostly fixed with the addition of this change. The
>>>> following shows the regression relative to before PTE_CONT was enabled
>>>> (a bigger negative value is a bigger regression):
>>>>
>>>> | cpus | before opt | after opt |
>>>> |-------:|-------------:|------------:|
>>>> | 1 | -10.4% | -5.2% |
>>>> | 8 | -15.4% | -3.5% |
>>>> | 16 | -38.7% | -3.7% |
>>>> | 24 | -57.0% | -4.4% |
>>>> | 32 | -65.8% | -5.4% |
>>>>
>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>>> ---
>>>> arch/arm64/include/asm/pgtable.h | 30 ++++++++++++++++++++---
>>>> arch/arm64/mm/contpte.c | 42 ++++++++++++++++++++++++++++++++
>>>> 2 files changed, 69 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>>> index 15bc9cf1eef4..9bd2f57a9e11 100644
>>>> --- a/arch/arm64/include/asm/pgtable.h
>>>> +++ b/arch/arm64/include/asm/pgtable.h
>>>> @@ -984,6 +984,16 @@ static inline void __ptep_set_wrprotect(struct mm_struct *mm,
>>>>         } while (pte_val(pte) != pte_val(old_pte));
>>>> }
>>>>
>>>> +static inline void __ptep_set_wrprotects(struct mm_struct *mm,
>>>> +                                         unsigned long address, pte_t *ptep,
>>>> +                                         unsigned int nr)
>>>> +{
>>>> +        unsigned int i;
>>>> +
>>>> +        for (i = 0; i < nr; i++, address += PAGE_SIZE, ptep++)
>>>> +                __ptep_set_wrprotect(mm, address, ptep);
>>>> +}
>>>> +
>>>> +
>>>> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>> #define __HAVE_ARCH_PMDP_SET_WRPROTECT
>>>> static inline void pmdp_set_wrprotect(struct mm_struct *mm,
>>>> @@ -1139,6 +1149,8 @@ extern int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
>>>>                                              unsigned long addr, pte_t *ptep);
>>>> extern int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
>>>>                                           unsigned long addr, pte_t *ptep);
>>>> +extern void contpte_set_wrprotects(struct mm_struct *mm, unsigned long addr,
>>>> +                                   pte_t *ptep, unsigned int nr);
>>>> extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
>>>>                                          unsigned long addr, pte_t *ptep,
>>>>                                          pte_t entry, int dirty);
>>>> @@ -1290,13 +1302,25 @@ static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
>>>>         return contpte_ptep_clear_flush_young(vma, addr, ptep);
>>>> }
>>>>
>>>> +#define ptep_set_wrprotects ptep_set_wrprotects
>>>> +static inline void ptep_set_wrprotects(struct mm_struct *mm, unsigned long addr,
>>>> +                                       pte_t *ptep, unsigned int nr)
>>>> +{
>>>> +        if (!contpte_is_enabled(mm))
>>>> +                __ptep_set_wrprotects(mm, addr, ptep, nr);
>>>> +        else if (nr == 1) {
>>>
>>> Why do we need the special case here? Couldn't we just call
>>> contpte_set_wrprotects() with nr == 1?
>>
>> My intention is for this to be a fast path for ptep_set_wrprotect(). I'm having
>> to work hard to prevent regressing the order-0 folios case.
>
> This ends up calling three functions anyway, so I'm curious - does
> removing the one function call really make that much of a difference?
Yes; big time. All the functions in the fast path are inlined. The version
without this special case regresses a fork() microbenchmark that David gave me
by ~30%. I've had to work quite hard to reduce that to 2%, even from this
starting point. There is so little in the inner loop that even the
__ptep_get(ptep) (which is a READ_ONCE()) makes a measurable difference.
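
For context on that last point: with the layering introduced earlier in this
series, __ptep_get() is essentially the old arm64 ptep_get(), i.e. a
READ_ONCE() of the PTE. A minimal sketch (the exact form in the series may
differ slightly):

static inline pte_t __ptep_get(pte_t *ptep)
{
        /* A volatile read that the compiler cannot elide or combine. */
        return READ_ONCE(*ptep);
}

So each extra call in the per-PTE inner loop is a real memory access, which is
why it shows up so clearly in a tight fork() microbenchmark.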
Anyway, I'll be posting v4 with these optimizations and all the supporting
benchmark data on Monday.
>
> Either way I think a comment justifying the special case (ie. that this
> is simply a fast path for nr == 1) would be good.
I've added a comment here in v4.
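Something along these lines (a sketch; the exact v4 wording may differ - the
code itself is as in the hunk above):

        if (!contpte_is_enabled(mm))
                __ptep_set_wrprotects(mm, addr, ptep, nr);
        else if (nr == 1) {
                /*
                 * Optimization: the common nr == 1 case (i.e. the
                 * ptep_set_wrprotect() wrapper) is kept fully inline as a
                 * fast path; routing it through contpte_set_wrprotects()
                 * adds a function call that measurably regresses the
                 * order-0 folio case.
                 */
                contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
                __ptep_set_wrprotects(mm, addr, ptep, 1);
                contpte_try_fold(mm, addr, ptep, __ptep_get(ptep));
        } else
                contpte_set_wrprotects(mm, addr, ptep, nr);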
>
> Thanks.
>
>>>
>>>> +                contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
>>>> +                __ptep_set_wrprotects(mm, addr, ptep, 1);
>>>> +                contpte_try_fold(mm, addr, ptep, __ptep_get(ptep));
>>>> +        } else
>>>> +                contpte_set_wrprotects(mm, addr, ptep, nr);
>>>> +}
>>>> +
>>>> #define __HAVE_ARCH_PTEP_SET_WRPROTECT
>>>> static inline void ptep_set_wrprotect(struct mm_struct *mm,
>>>>                                       unsigned long addr, pte_t *ptep)
>>>> {
>>>> -        contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
>>>> -        __ptep_set_wrprotect(mm, addr, ptep);
>>>> -        contpte_try_fold(mm, addr, ptep, __ptep_get(ptep));
>>>> +        ptep_set_wrprotects(mm, addr, ptep, 1);
>>>> }
>>>>
>>>> #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
>>>> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
>>>> index e079ec61d7d1..2a57df16bf58 100644
>>>> --- a/arch/arm64/mm/contpte.c
>>>> +++ b/arch/arm64/mm/contpte.c
>>>> @@ -303,6 +303,48 @@ int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
>>>> }
>>>> EXPORT_SYMBOL(contpte_ptep_clear_flush_young);
>>>>
>>>> +void contpte_set_wrprotects(struct mm_struct *mm, unsigned long addr,
>>>> +                            pte_t *ptep, unsigned int nr)
>>>> +{
>>>> +        unsigned long next;
>>>> +        unsigned long end = addr + (nr << PAGE_SHIFT);
>>>> +
>>>> +        do {
>>>> +                next = pte_cont_addr_end(addr, end);
>>>> +                nr = (next - addr) >> PAGE_SHIFT;
>>>> +
>>>> +                /*
>>>> +                 * If wrprotecting an entire contig range, we can avoid
>>>> +                 * unfolding. Just set wrprotect and wait for the later
>>>> +                 * mmu_gather flush to invalidate the tlb. Until the flush,
>>>> +                 * the page may or may not be wrprotected. After the flush,
>>>> +                 * it is guaranteed wrprotected. If it's a partial range
>>>> +                 * though, we must unfold, because we can't have a case
>>>> +                 * where CONT_PTE is set but wrprotect applies to a subset
>>>> +                 * of the PTEs; this would cause it to continue to be
>>>> +                 * unpredictable after the flush.
>>>> +                 */
>>>> +                if (nr != CONT_PTES)
>>>> +                        contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
>>>> +
>>>> +                __ptep_set_wrprotects(mm, addr, ptep, nr);
>>>> +
>>>> +                addr = next;
>>>> +                ptep += nr;
>>>> +
>>>> +                /*
>>>> +                 * If applying to a partial contig range, the change could
>>>> +                 * have made the range foldable. Use the last pte in the
>>>> +                 * range we just set for comparison, since
>>>> +                 * contpte_try_fold() only triggers when acting on the last
>>>> +                 * pte in the contig range.
>>>> +                 */
>>>> +                if (nr != CONT_PTES)
>>>> +                        contpte_try_fold(mm, addr - PAGE_SIZE, ptep - 1,
>>>> +                                         __ptep_get(ptep - 1));
>>>> +
>>>> +        } while (addr != end);
>>>> +}
>>>> +EXPORT_SYMBOL(contpte_set_wrprotects);
>>>> +
>>>> int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
>>>>                                   unsigned long addr, pte_t *ptep,
>>>>                                   pte_t entry, int dirty)
>>>
>
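
For reference, pte_cont_addr_end() in contpte_set_wrprotects() above clamps
each step to the next CONT_PTE_SIZE boundary, so every loop iteration covers
at most one contpte block. A minimal sketch, assuming it follows the same
pattern as the generic pmd_addr_end():

#define pte_cont_addr_end(addr, end)                                          \
({      unsigned long __boundary = ((addr) + CONT_PTE_SIZE) & CONT_PTE_MASK;  \
        (__boundary - 1 < (end) - 1) ? __boundary : (end);                    \
})

With that, nr == CONT_PTES exactly when [addr, next) spans a whole
naturally-aligned contig block, which is the case where the unfold/fold pair
can be skipped.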