From: David Hildenbrand <david@redhat.com>
To: Ryan Roberts <ryan.roberts@arm.com>, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Russell King <linux@armlinux.org.uk>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>, Dinh Nguyen <dinguyen@kernel.org>,
Michael Ellerman <mpe@ellerman.id.au>,
Nicholas Piggin <npiggin@gmail.com>,
Christophe Leroy <christophe.leroy@csgroup.eu>,
"Aneesh Kumar K.V" <aneesh.kumar@kernel.org>,
"Naveen N. Rao" <naveen.n.rao@linux.ibm.com>,
Paul Walmsley <paul.walmsley@sifive.com>,
Palmer Dabbelt <palmer@dabbelt.com>,
Albert Ou <aou@eecs.berkeley.edu>,
Alexander Gordeev <agordeev@linux.ibm.com>,
Gerald Schaefer <gerald.schaefer@linux.ibm.com>,
Heiko Carstens <hca@linux.ibm.com>,
Vasily Gorbik <gor@linux.ibm.com>,
Christian Borntraeger <borntraeger@linux.ibm.com>,
Sven Schnelle <svens@linux.ibm.com>,
"David S. Miller" <davem@davemloft.net>,
linux-arm-kernel@lists.infradead.org,
linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
linux-s390@vger.kernel.org, sparclinux@vger.kernel.org
Subject: Re: [PATCH v1 09/11] mm/memory: optimize fork() with PTE-mapped THP
Date: Tue, 23 Jan 2024 13:19:35 +0100
Message-ID: <31a0661e-fa69-419c-9936-98bfe168d5a7@redhat.com>
In-Reply-To: <63be0c3c-bf34-4cbb-b47b-7c9be0e65058@arm.com>
[...]
>
> I wrote some documentation for this (based on Matthew's docs for set_ptes())
> in my version. Perhaps it makes sense to add it here, given this is
> overridable by the arch.
>
> /**
> * wrprotect_ptes - Write-protect a consecutive set of pages.
> * @mm: Address space that the pages are mapped into.
> * @addr: Address of the first page to write-protect.
> * @ptep: Page table pointer for the first entry.
> * @nr: Number of pages to write-protect.
> *
> * May be overridden by the architecture, else implemented as a loop over
> * ptep_set_wrprotect().
> *
> * Context: The caller holds the page table lock. The PTEs are all in the same
> * PMD.
> */
>
I could have sworn I had documentation for this at some point. Let me
add some, thanks.
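
For context, the generic fallback really is just a loop over
ptep_set_wrprotect(), roughly like this (a sketch; the final version
might look slightly different):

#ifndef wrprotect_ptes
/* Generic fallback: write-protect nr consecutive PTEs one by one. */
static inline void wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
		pte_t *ptep, unsigned int nr)
{
	for (;;) {
		ptep_set_wrprotect(mm, addr, ptep);
		if (--nr == 0)
			break;
		ptep++;
		addr += PAGE_SIZE;
	}
}
#endif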
[...]
>> +
>> + /*
>> + * If we likely have to copy, just don't bother with batching. Make
>> + * sure that the common "small folio" case stays as fast as possible
>> + * by keeping the batching logic separate.
>> + */
>> + if (unlikely(!*prealloc && folio_test_large(folio) && max_nr != 1)) {
>> + nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr);
>> + if (folio_test_anon(folio)) {
>> + folio_ref_add(folio, nr);
>> + if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
>> + nr, src_vma))) {
>
> What happens if it's not the first page of the batch that fails here? Aren't you
> signalling that you need a prealloc'ed page for the wrong pte? Shouldn't you
> still batch copy all the way up to the failing page first? Perhaps it all comes
> out in the wash and these events are so infrequent that we don't care about the
> lost batching opportunity?
I assume you mean the weird corner case where some folio pages in the
range have PAE (PageAnonExclusive) set and others don't -- and the folio
may be pinned.

In that case, we fall back to individual pages, and might have
preallocated a page although we wouldn't have to preallocate one for
processing the next page (the one that doesn't have PAE set).

It should all work, although it's not optimized to the extreme; as it's
a corner case, we don't particularly care. Hopefully, in the future
we'll only have a single PAE flag per folio.
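
To spell that out (a heavily simplified, hypothetical sketch of the
retry logic in copy_pte_range(); most details elided):

again:
	/* ... for each (batch of) PTE(s) ... */
	ret = copy_present_ptes(/* ... */);
	if (ret == -EAGAIN)
		break;	/* stop at the failing entry */
	/* ... */

	if (ret == -EAGAIN) {
		/* Preallocate for the first PTE of the failed batch ... */
		prealloc = folio_prealloc(src_mm, src_vma, addr, false);
		if (!prealloc)
			return -ENOMEM;
		/*
		 * ... and retry from the same address. With *prealloc set,
		 * batching is skipped, so we go page by page; if the first
		 * page doesn't have PAE set, the preallocation simply stays
		 * unused for now.
		 */
		goto again;
	}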
Or am I missing something?
>
>> + folio_ref_sub(folio, nr);
>> + return -EAGAIN;
>> + }
>> + rss[MM_ANONPAGES] += nr;
>> + VM_WARN_ON_FOLIO(PageAnonExclusive(page), folio);
>> + } else {
>> + folio_ref_add(folio, nr);
>
> Perhaps hoist this out to immediately after folio_pte_batch() since you're
> calling it on both branches?
Makes sense, thanks.
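
I.e., roughly (a sketch against the snippet above):

	nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr);
	folio_ref_add(folio, nr);	/* hoisted, common to both branches */
	if (folio_test_anon(folio)) {
		if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
							  nr, src_vma))) {
			folio_ref_sub(folio, nr);
			return -EAGAIN;
		}
		rss[MM_ANONPAGES] += nr;
		VM_WARN_ON_FOLIO(PageAnonExclusive(page), folio);
	} else {
		/* pagecache case, as before */
		...
	}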
--
Cheers,
David / dhildenb