From: David Hildenbrand <david@redhat.com>
To: Ryan Roberts <ryan.roberts@arm.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>, Ard Biesheuvel <ardb@kernel.org>,
Marc Zyngier <maz@kernel.org>,
Oliver Upton <oliver.upton@linux.dev>,
James Morse <james.morse@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Zenghui Yu <yuzenghui@huawei.com>,
Andrey Ryabinin <ryabinin.a.a@gmail.com>,
Alexander Potapenko <glider@google.com>,
Andrey Konovalov <andreyknvl@gmail.com>,
Dmitry Vyukov <dvyukov@google.com>,
Vincenzo Frascino <vincenzo.frascino@arm.com>,
Andrew Morton <akpm@linux-foundation.org>,
Anshuman Khandual <anshuman.khandual@arm.com>,
Matthew Wilcox <willy@infradead.org>, Yu Zhao <yuzhao@google.com>,
Mark Rutland <mark.rutland@arm.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
John Hubbard <jhubbard@nvidia.com>, Zi Yan <ziy@nvidia.com>,
Barry Song <21cnbao@gmail.com>,
Alistair Popple <apopple@nvidia.com>,
Yang Shi <shy828301@gmail.com>
Cc: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 01/15] mm: Batch-copy PTE ranges during fork()
Date: Tue, 5 Dec 2023 13:04:15 +0100
Message-ID: <df99bb4e-f69d-4d94-baa5-2fc336df1a7b@redhat.com>
In-Reply-To: <a81dc390-8b10-4ce9-b72f-57f253e77af3@arm.com>
On 05.12.23 12:30, Ryan Roberts wrote:
> On 04/12/2023 17:27, David Hildenbrand wrote:
>>>
>>> With rmap batching from [1] -- rebased+changed on top of that -- we could
>>> effectively turn that into the following (untested):
>>>
>>>         if (page && folio_test_anon(folio)) {
>>> +               nr = folio_nr_pages_cont_mapped(folio, page, src_pte, addr, end,
>>> +                                               pte, enforce_uffd_wp, &nr_dirty,
>>> +                                               &nr_writable);
>>>                 /*
>>>                  * If this page may have been pinned by the parent process,
>>>                  * copy the page immediately for the child so that we'll always
>>>                  * guarantee the pinned page won't be randomly replaced in the
>>>                  * future.
>>>                  */
>>> -               folio_get(folio);
>>> -               if (unlikely(folio_try_dup_anon_rmap_pte(folio, page,
>>> -                                                        src_vma))) {
>>> +               folio_ref_add(folio, nr);
>>> +               if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page, nr,
>>> +                                                         src_vma))) {
>>>                         /* Page may be pinned, we have to copy. */
>>> -                       folio_put(folio);
>>> -                       return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
>>> -                                                addr, rss, prealloc, page);
>>> +                       folio_ref_sub(folio, nr);
>>> +                       ret = copy_present_page(dst_vma, src_vma, dst_pte,
>>> +                                               src_pte, addr, rss, prealloc,
>>> +                                               page);
>>> +                       return ret == 0 ? 1 : ret;
>>>                 }
>>> -               rss[MM_ANONPAGES]++;
>>> +               rss[MM_ANONPAGES] += nr;
>>>         } else if (page) {
>>> -               folio_get(folio);
>>> -               folio_dup_file_rmap_pte(folio, page);
>>> -               rss[mm_counter_file(page)]++;
>>> +               nr = folio_nr_pages_cont_mapped(folio, page, src_pte, addr, end,
>>> +                                               pte, enforce_uffd_wp, &nr_dirty,
>>> +                                               &nr_writable);
>>> +               folio_ref_add(folio, nr);
>>> +               folio_dup_file_rmap_ptes(folio, page, nr);
>>> +               rss[mm_counter_file(page)] += nr;
>>>         }
>>>
>>>
>>> We'll have to test performance, but it could be that we want to specialize
>>> more on !folio_test_large(). That code is very performance-sensitive.
>>>
>>>
>>> [1] https://lkml.kernel.org/r/20231204142146.91437-1-david@redhat.com
>>
>> So, on top of [1] without rmap batching but with a slightly modified version of
>
> Can you clarify what you mean by "without rmap batching"? I thought [1]
> implicitly adds rmap batching? (e.g. folio_dup_file_rmap_ptes(), which you've
> added in the code snippet above).
Not calling the batched variants but essentially doing what your code
does (with some minor improvements, like updating the rss counters only
once).
The snippet above only shows what it would look like with the rmap
batching from [1] on top. I had the performance numbers for [1] ready,
so I gave it a test on top of that.
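What I measured is closer to the following shape (untested sketch,
showing only the file-backed path; the anon path additionally needs the
pinning/failure handling): batch the refcount and rss updates, but keep
calling the non-batched rmap variant per page:

        nr = folio_nr_pages_cont_mapped(folio, page, src_pte, addr, end,
                                        pte, enforce_uffd_wp, &nr_dirty,
                                        &nr_writable);
        folio_ref_add(folio, nr);
        /* Non-batched rmap variant, called once per page. */
        for (i = 0; i < nr; i++)
                folio_dup_file_rmap_pte(folio, page + i);
        /* rss counter updated only once for the whole batch. */
        rss[mm_counter_file(page)] += nr;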
To keep it simple, you might just benchmark with and without your patches.
>
>> yours (that keeps the existing code structure as pointed out and, e.g.,
>> batches the counter updates), running my fork() microbenchmark with 1 GiB of memory:
>>
>> Compared to [1], with all order-0 pages it gets 13--14% _slower_ and with all
>> PTE-mapped THP (order-9) it gets ~29--30% _faster_.
>
> What test are you running - I'd like to reproduce if possible, since it sounds
> like I've got some work to do to remove the order-0 regression.
Essentially just allocating 1 GiB of memory and measuring how long it
takes to call fork().
order-0 benchmarks:
https://gitlab.com/davidhildenbrand/scratchspace/-/raw/main/order-0-benchmarks.c?ref_type=heads
e.g.,: $ ./order-0-benchmarks fork 100
pte-mapped-thp benchmarks:
https://gitlab.com/davidhildenbrand/scratchspace/-/raw/main/pte-mapped-thp-benchmarks.c?ref_type=heads
e.g.,: $ ./pte-mapped-thp-benchmarks fork 100
Ideally, pin to one CPU and get stable performance numbers by disabling
SMT+turbo etc.
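If you just want the gist, the core of the fork() measurement is
essentially the following (simplified sketch, not the exact code from
the links above -- the real benchmarks iterate and average):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define SIZE (1024ul * 1024 * 1024)

int main(void)
{
        struct timespec start, end;
        char *mem;
        pid_t pid;

        mem = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        /* Fault everything in so fork() actually has PTEs to copy. */
        memset(mem, 1, SIZE);

        clock_gettime(CLOCK_MONOTONIC, &start);
        pid = fork();
        if (pid < 0) {
                perror("fork");
                return 1;
        }
        if (pid == 0)
                _exit(0);
        clock_gettime(CLOCK_MONOTONIC, &end);
        waitpid(pid, NULL, 0);

        printf("fork(): %ld us\n",
               (end.tv_sec - start.tv_sec) * 1000000 +
               (end.tv_nsec - start.tv_nsec) / 1000);
        return 0;
}

For the PTE-mapped THP case you'd additionally madvise(MADV_HUGEPAGE)
the range before faulting it in, and run pinned, e.g. via taskset -c 0.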
>
>>
>> So it looks like we really want a completely separate code path for
>> "!folio_test_large()" to keep that case as fast as possible. And "Likely" we
>> want to use "likely(!folio_test_large())". ;)
>
> Yuk, but fair enough. If I can repro the perf numbers, I'll have a go at
> reworking this.
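FWIW, the kind of split I have in mind looks roughly like this
(untested; the batched branch would continue as in the snippet from my
earlier mail):

        if (likely(!folio_test_large(folio))) {
                /* Order-0 folio: keep the existing path unchanged. */
                folio_get(folio);
                if (unlikely(folio_try_dup_anon_rmap_pte(folio, page, src_vma))) {
                        /* Page may be pinned, we have to copy. */
                        folio_put(folio);
                        return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
                                                 addr, rss, prealloc, page);
                }
                rss[MM_ANONPAGES]++;
        } else {
                /* Only large folios pay for the batching logic. */
                nr = folio_nr_pages_cont_mapped(folio, page, src_pte, addr, end,
                                                pte, enforce_uffd_wp, &nr_dirty,
                                                &nr_writable);
                folio_ref_add(folio, nr);
                /* ... batched rmap/rss handling as in the earlier snippet ... */
        }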
>
> I think you're also implicitly suggesting that this change needs to depend on
> [1]? Which is a shame...
Not necessarily. It certainly cleans up the code, but we can do that in
any reasonable order.
>
> I guess I should also go through a similar exercise for patch 2 in this series.
Yes. There are "unmap" and "pte-dontneed" benchmarks contained in both
files above.
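Presumably the same invocation style applies, e.g.,: $ ./order-0-benchmarks
unmap 100 (check the files for the exact benchmark names).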
--
Cheers,
David / dhildenb