From: David Hildenbrand <david@redhat.com>
To: Ryan Roberts <ryan.roberts@arm.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>, Ard Biesheuvel <ardb@kernel.org>,
Marc Zyngier <maz@kernel.org>,
Oliver Upton <oliver.upton@linux.dev>,
James Morse <james.morse@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Zenghui Yu <yuzenghui@huawei.com>,
Andrey Ryabinin <ryabinin.a.a@gmail.com>,
Alexander Potapenko <glider@google.com>,
Andrey Konovalov <andreyknvl@gmail.com>,
Dmitry Vyukov <dvyukov@google.com>,
Vincenzo Frascino <vincenzo.frascino@arm.com>,
Andrew Morton <akpm@linux-foundation.org>,
Anshuman Khandual <anshuman.khandual@arm.com>,
Matthew Wilcox <willy@infradead.org>, Yu Zhao <yuzhao@google.com>,
Mark Rutland <mark.rutland@arm.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
John Hubbard <jhubbard@nvidia.com>, Zi Yan <ziy@nvidia.com>,
Barry Song <21cnbao@gmail.com>,
Alistair Popple <apopple@nvidia.com>,
Yang Shi <shy828301@gmail.com>
Cc: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 02/16] mm: Batch-copy PTE ranges during fork()
Date: Wed, 20 Dec 2023 10:17:45 +0100
Message-ID: <7c0236ad-01f3-437f-8b04-125d69e90dc0@redhat.com>
In-Reply-To: <dd227e51-c4b2-420b-a92a-65da85ab4018@arm.com>
On 19.12.23 18:42, Ryan Roberts wrote:
> On 19/12/2023 17:22, David Hildenbrand wrote:
>> On 19.12.23 09:30, Ryan Roberts wrote:
>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>>> Convert copy_pte_range() to copy a batch of ptes in one go. A given
>>>>> batch is determined by the architecture with the new helper,
>>>>> pte_batch_remaining(), and maps a physically contiguous block of memory,
>>>>> all belonging to the same folio. A pte batch is then write-protected in
>>>>> one go in the parent using the new helper, ptep_set_wrprotects() and is
>>>>> set in one go in the child using the new helper, set_ptes_full().
>>>>>
>>>>> The primary motivation for this change is to reduce the number of tlb
>>>>> maintenance operations that the arm64 backend has to perform during
>>>>> fork, as it is about to add transparent support for the "contiguous bit"
>>>>> in its ptes. By write-protecting the parent using the new
>>>>> ptep_set_wrprotects() (note the 's' at the end) function, the backend
>>>>> can avoid having to unfold contig ranges of PTEs, which is expensive,
>>>>> when all ptes in the range are being write-protected. Similarly, by
>>>>> using set_ptes_full() rather than set_pte_at() to set up ptes in the
>>>>> child, the backend does not need to fold a contiguous range once they
>>>>> are all populated - they can be initially populated as a contiguous
>>>>> range in the first place.
>>>>>
>>>>> This code is very performance sensitive, and a significant amount of
>>>>> effort has been put into not regressing performance for the order-0
>>>>> folio case. By default, pte_batch_remaining() is a compile-time
>>>>> constant 1, which enables the compiler to simplify the extra loops that
>>>>> are added for batching and produce code that is equivalent to (and as
>>>>> performant as) the previous implementation.
>>>>>
>>>>> This change addresses the core-mm refactoring only and a separate change
>>>>> will implement pte_batch_remaining(), ptep_set_wrprotects() and
>>>>> set_ptes_full() in the arm64 backend to realize the performance
>>>>> improvement as part of the work to enable contpte mappings.
>>>>>
>>>>> To ensure that arm64 remains performant once these helpers are
>>>>> implemented, this change is careful to call ptep_get() only once per
>>>>> pte batch.
>>>>>
>>>>> The following microbenchmark results demonstrate that there is no
>>>>> significant performance change after this patch. Fork is called in a
>>>>> tight loop in a process with 1G of populated memory and the time for the
>>>>> function to execute is measured. 100 iterations per run, 8 runs
>>>>> performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests
>>>>> were performed both for the case where the 1G of memory is composed of
>>>>> order-0 folios and for the case where it is composed of pte-mapped
>>>>> order-9 folios. Negative is faster, positive is slower, compared to the
>>>>> baseline upon which the series is based:
>>>>>
>>>>> | Apple M2 VM | order-0 (pte-map) | order-9 (pte-map) |
>>>>> | fork |-------------------|-------------------|
>>>>> | microbench | mean | stdev | mean | stdev |
>>>>> |---------------|---------|---------|---------|---------|
>>>>> | baseline | 0.0% | 1.1% | 0.0% | 1.2% |
>>>>> | after-change | -1.0% | 2.0% | -0.1% | 1.1% |
>>>>>
>>>>> | Ampere Altra | order-0 (pte-map) | order-9 (pte-map) |
>>>>> | fork |-------------------|-------------------|
>>>>> | microbench | mean | stdev | mean | stdev |
>>>>> |---------------|---------|---------|---------|---------|
>>>>> | baseline | 0.0% | 1.0% | 0.0% | 0.1% |
>>>>> | after-change | -0.1% | 1.2% | -0.1% | 0.1% |
>>>>>
>>>>> Tested-by: John Hubbard <jhubbard@nvidia.com>
>>>>> Reviewed-by: Alistair Popple <apopple@nvidia.com>
>>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>>>> ---
>>>>>   include/linux/pgtable.h | 80 +++++++++++++++++++++++++++++++++++
>>>>>   mm/memory.c             | 92 ++++++++++++++++++++++++++---------------
>>>>>   2 files changed, 139 insertions(+), 33 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>> index af7639c3b0a3..db93fb81465a 100644
>>>>> --- a/include/linux/pgtable.h
>>>>> +++ b/include/linux/pgtable.h
>>>>> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>>>>> #define arch_flush_lazy_mmu_mode() do {} while (0)
>>>>> #endif
>>>>> +#ifndef pte_batch_remaining
>>>>> +/**
>>>>> + * pte_batch_remaining - Number of pages from addr to next batch boundary.
>>>>> + * @pte: Page table entry for the first page.
>>>>> + * @addr: Address of the first page.
>>>>> + * @end: Batch ceiling (e.g. end of vma).
>>>>> + *
>>>>> + * Some architectures (arm64) can efficiently modify a contiguous batch of ptes.
>>>>> + * In such cases, this function returns the remaining number of pages to the end
>>>>> + * of the current batch, as defined by addr. This can be useful when iterating
>>>>> + * over ptes.
>>>>> + *
>>>>> + * May be overridden by the architecture, else batch size is always 1.
>>>>> + */
>>>>> +static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
>>>>> +					       unsigned long end)
>>>>> +{
>>>>> + return 1;
>>>>> +}
>>>>> +#endif
>>>>
>>>> It's a shame we now lose the optimization for all other architectures.
>>>>
>>>> Was there no way to have some basic batching mechanism that doesn't require arch
>>>> specifics?
>>>
>>> I tried a bunch of things but ultimately the way I've done it was the only way
>>> to reduce the order-0 fork regression to 0.
>>>
>>> My original v3 posting was costing 5% extra and even my first attempt at an
>>> arch-specific version that didn't resolve to a compile-time constant 1 still
>>> cost an extra 3%.
>>>
>>>
>>>>
>>>> I'd have thought that something very basic would have worked like:
>>>>
>>>> * Check that the PTE bits are identical once the PFN is masked off.
>>>> * Check that PFN is consecutive
>>>> * Check that all PFNs belong to the same folio
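
For reference, the check I had in mind is roughly the following (untested
sketch; the helper name is made up, and the caller would look up the folio
and decide the batch ceiling):

/*
 * Untested sketch: count how many of the first @max_nr ptes map
 * consecutive pfns of @folio, with all other pte bits identical.
 * pte_next_pfn() advances the pfn encoded in a pte by one.
 */
static inline int pte_batch_count(struct folio *folio, pte_t *ptep,
				  pte_t pte, int max_nr)
{
	pte_t expected = pte_next_pfn(pte);
	int nr = 1;

	while (nr < max_nr) {
		pte = ptep_get(ptep + nr);
		/* Same pte bits and consecutive pfn? */
		if (!pte_same(pte, expected))
			break;
		/* Stop at the folio boundary. */
		if (pfn_folio(pte_pfn(pte)) != folio)
			break;
		expected = pte_next_pfn(expected);
		nr++;
	}
	return nr;
}
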
>>>
>>> I haven't tried this exact approach, but I'd be surprised if I can get the
>>> regression under 4% with this. Further along the series I spent a lot of time
>>> having to fiddle with the arm64 implementation; every conditional and every
>>> memory read (even when in cache) was a problem. There is just so little in the
>>> inner loop that every instruction matters. (At least on Ampere Altra and Apple
>>> M2).
>>>
>>> Of course if you're willing to pay that 4-5% for order-0 then the benefit to
>>> order-9 is around 10% in my measurements. Personally though, I'd prefer to play
>>> safe and ensure the common order-0 case doesn't regress, as you previously
>>> suggested.
>>>
>>
>> I just hacked something up, on top of my beloved rmap cleanup/batching series. I
>> implemented very generic and simple batching for large folios (all PTE bits
>> except the PFN have to match).
>>
>> Some very quick testing (don't trust every last %) on an Intel(R) Xeon(R)
>> Silver 4210R CPU.
>>
>> order-0: 0.014210 -> 0.013969
>>
>> -> Around 1.7 % faster
>>
>> order-9: 0.014373 -> 0.009149
>>
>> -> Around 36.3 % faster
>
> Well I guess that shows me :)
>
> I'll do a review and run the tests on my HW to see if it concurs.
I pushed a simple compile fixup (we need pte_next_pfn()).
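
For completeness, a generic pte_next_pfn() fallback could look something
like this (untested; assumes pte_pgprot() is available, which is not the
case on every architecture):

#ifndef pte_next_pfn
static inline pte_t pte_next_pfn(pte_t pte)
{
	/* Build the pte for the next pfn, preserving all other bits. */
	return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
}
#endif
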
Note that we should probably handle ptep_set_wrprotects() rather like set_ptes():
#ifndef wrprotect_ptes
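/* Generic fallback: write-protect @nr consecutive ptes, one at a time. */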
static inline void wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
				  pte_t *ptep, unsigned int nr)
{
	for (;;) {
		ptep_set_wrprotect(mm, addr, ptep);
		if (--nr == 0)
			break;
		ptep++;
		addr += PAGE_SIZE;
	}
}
#endif
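
A hypothetical caller in the fork path would then do something like this
(sketch only; the variable names are made up):

	/* Write-protect the batch in the parent ... */
	wrprotect_ptes(src_mm, addr, src_pte, nr);
	/* ... and map the same, now write-protected, range in the child. */
	pte = pte_wrprotect(pte);
	set_ptes(dst_mm, addr, dst_pte, pte, nr);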
--
Cheers,
David / dhildenb