From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Ryan Roberts <ryan.roberts@arm.com>,
Russell King <linux@armlinux.org.uk>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>, Dinh Nguyen <dinguyen@kernel.org>,
Michael Ellerman <mpe@ellerman.id.au>,
Nicholas Piggin <npiggin@gmail.com>,
Christophe Leroy <christophe.leroy@csgroup.eu>,
"Aneesh Kumar K.V" <aneesh.kumar@kernel.org>,
"Naveen N. Rao" <naveen.n.rao@linux.ibm.com>,
Paul Walmsley <paul.walmsley@sifive.com>,
Palmer Dabbelt <palmer@dabbelt.com>,
Albert Ou <aou@eecs.berkeley.edu>,
Alexander Gordeev <agordeev@linux.ibm.com>,
Gerald Schaefer <gerald.schaefer@linux.ibm.com>,
Heiko Carstens <hca@linux.ibm.com>,
Vasily Gorbik <gor@linux.ibm.com>,
Christian Borntraeger <borntraeger@linux.ibm.com>,
Sven Schnelle <svens@linux.ibm.com>,
"David S. Miller" <davem@davemloft.net>,
linux-arm-kernel@lists.infradead.org,
linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
linux-s390@vger.kernel.org, sparclinux@vger.kernel.org
Subject: Re: [PATCH v2 13/15] mm/memory: optimize fork() with PTE-mapped THP
Date: Fri, 26 Jan 2024 20:11:19 +0100 [thread overview]
Message-ID: <5b5b2ceb-9099-4acc-a995-d979ad414c6a@redhat.com> (raw)
In-Reply-To: <20240125193227.444072-14-david@redhat.com>
On 25.01.24 20:32, David Hildenbrand wrote:
> Let's implement PTE batching when consecutive (present) PTEs map
> consecutive pages of the same large folio, and all other PTE bits besides
> the PFNs are equal.
>
> We will optimize folio_pte_batch() separately, to ignore selected
> PTE bits. This patch is based on work by Ryan Roberts.
>
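A condensed sketch of how such a batch can be detected (simplified from the
patch: the real folio_pte_batch() takes additional parameters, and later
patches in this series teach it to ignore selected PTE bits, so treat this
as an illustration rather than the exact code):

static inline int folio_pte_batch(struct folio *folio, pte_t *start_ptep,
		pte_t pte, int max_nr)
{
	/* PFN just past the last page of this folio. */
	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
	pte_t expected_pte = pte_next_pfn(pte);
	pte_t *ptep = start_ptep + 1;

	while (ptep != start_ptep + max_nr) {
		pte = ptep_get(ptep);

		/* Expect the next consecutive PFN; all other bits must match. */
		if (!pte_same(pte, expected_pte))
			break;
		/*
		 * Stop at the folio boundary: the next PFN might belong to
		 * a different folio.
		 */
		if (pte_pfn(pte) >= folio_end_pfn)
			break;

		expected_pte = pte_next_pfn(expected_pte);
		ptep++;
	}

	/* Number of consecutive PTEs that map this folio. */
	return ptep - start_ptep;
}

The caller clamps max_nr so a batch never crosses a page table or VMA
boundary, as noted below.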
> Use __always_inline for __copy_present_ptes() and keep the handling for
> single PTEs completely separate from the multi-PTE case: we really want
> the compiler to optimize for the single-PTE case with small folios, to
> not degrade performance.
>
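A rough sketch of the shape this takes, condensed from the patch (the real
helper also handles uffd-wp bits in the child; details omitted here):

static __always_inline void __copy_present_ptes(struct vm_area_struct *dst_vma,
		struct vm_area_struct *src_vma, pte_t *dst_pte, pte_t *src_pte,
		pte_t pte, unsigned long addr, int nr)
{
	struct mm_struct *src_mm = src_vma->vm_mm;

	/* If it's a COW mapping, write-protect it in both processes. */
	if (is_cow_mapping(src_vma->vm_flags) && pte_write(pte)) {
		wrprotect_ptes(src_mm, addr, src_pte, nr);
		pte = pte_wrprotect(pte);
	}

	/* If it's a shared mapping, mark it clean in the child. */
	if (src_vma->vm_flags & VM_SHARED)
		pte = pte_mkclean(pte);
	pte = pte_mkold(pte);

	set_ptes(dst_vma->vm_mm, addr, dst_pte, pte, nr);
}

With __always_inline and a constant nr == 1 on the single-PTE path, the
compiler can specialize away the loops in wrprotect_ptes() and set_ptes(),
keeping the small-folio case as fast as before.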
> Note that PTE batching will never exceed a single page table and will
> always stay within VMA boundaries.
>
> Further, processing PTE-mapped THP that may be pinned and have
> PageAnonExclusive set on at least one subpage should work as expected,
> but there is room for improvement: we will repeatedly (1) detect a PTE
> batch, (2) detect that we have to copy a page, and (3) fall back and
> allocate a single page to copy a single page. For now we won't care, as
> pinned pages are a corner case, and we should rather look into
> maintaining only a single PageAnonExclusive bit for large folios.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
> include/linux/pgtable.h | 31 +++++++++++
> mm/memory.c | 112 +++++++++++++++++++++++++++++++++-------
> 2 files changed, 124 insertions(+), 19 deletions(-)
>
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 351cd9dc7194f..891ed246978a4 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -650,6 +650,37 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
> }
> #endif
>
> +#ifndef wrprotect_ptes
> +/**
> + * wrprotect_ptes - Write-protect consecutive pages that are mapped to a
> + * contiguous range of addresses.
> + * @mm: Address space to map the pages into.
> + * @addr: Address the first page is mapped at.
> + * @ptep: Page table pointer for the first entry.
> + * @nr: Number of pages to write-protect.
> + *
> + * May be overridden by the architecture; otherwise, implemented as a simple
> + * loop over ptep_set_wrprotect().
> + *
> + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> + * some PTEs might already be write-protected.
> + *
> + * Context: The caller holds the page table lock. The pages all belong
> + * to the same folio. The PTEs are all in the same PMD.
> + */
After writing documentation for another two such functions, I'll change this to:
/**
* wrprotect_ptes - Write-protect PTEs that map consecutive pages of the same
* folio.
* @mm: Address space the pages are mapped into.
* @addr: Address the first page is mapped at.
* @ptep: Page table pointer for the first entry.
* @nr: Number of entries to write-protect.
*
* May be overridden by the architecture; otherwise, implemented as a simple
* loop over ptep_set_wrprotect().
*
* Note that PTE bits in the PTE range besides the PFN can differ. For example,
* some PTEs might be write-protected.
*
* Context: The caller holds the page table lock. The PTEs map consecutive
* pages that belong to the same folio. The PTEs are all in the same PMD.
*/
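For reference, the generic fallback this comment documents amounts to a loop
of the following shape (a sketch; the exact code in the patch may differ in
minor details):

#ifndef wrprotect_ptes
static inline void wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
		pte_t *ptep, unsigned int nr)
{
	for (;;) {
		/* Write-protect one entry at a time. */
		ptep_set_wrprotect(mm, addr, ptep);
		if (--nr == 0)
			break;
		ptep++;
		addr += PAGE_SIZE;
	}
}
#endif

Architectures that can write-protect a range of entries more efficiently can
provide their own definition instead.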
--
Cheers,
David / dhildenb
Thread overview: 22+ messages
2024-01-25 19:32 [PATCH v2 00/15] " David Hildenbrand
2024-01-25 19:32 ` [PATCH v2 01/15] arm64/mm: Make set_ptes() robust when OAs cross 48-bit boundary David Hildenbrand
2024-01-25 19:32 ` [PATCH v2 02/15] arm/pgtable: define PFN_PTE_SHIFT David Hildenbrand
2024-01-25 19:32 ` [PATCH v2 03/15] nios2/pgtable: " David Hildenbrand
2024-01-25 19:32 ` [PATCH v2 04/15] powerpc/pgtable: " David Hildenbrand
2024-01-25 21:27 ` Christophe Leroy
2024-01-25 19:32 ` [PATCH v2 05/15] riscv/pgtable: " David Hildenbrand
2024-01-25 19:32 ` [PATCH v2 06/15] s390/pgtable: " David Hildenbrand
2024-01-25 19:32 ` [PATCH v2 07/15] sparc/pgtable: " David Hildenbrand
2024-01-25 19:32 ` [PATCH v2 08/15] mm/pgtable: make pte_next_pfn() independent of set_ptes() David Hildenbrand
2024-01-25 21:29 ` Christophe Leroy
2024-01-25 19:32 ` [PATCH v2 09/15] arm/mm: use pte_next_pfn() in set_ptes() David Hildenbrand
2024-01-25 19:32 ` [PATCH v2 10/15] powerpc/mm: " David Hildenbrand
2024-01-25 21:27 ` Christophe Leroy
2024-01-25 19:32 ` [PATCH v2 11/15] mm/memory: factor out copying the actual PTE in copy_present_pte() David Hildenbrand
2024-01-25 19:32 ` [PATCH v2 12/15] mm/memory: pass PTE to copy_present_pte() David Hildenbrand
2024-01-25 19:32 ` [PATCH v2 13/15] mm/memory: optimize fork() with PTE-mapped THP David Hildenbrand
2024-01-26 19:11 ` David Hildenbrand [this message]
2024-01-29 10:46 ` Ryan Roberts
2024-01-25 19:32 ` [PATCH v2 14/15] mm/memory: ignore dirty/accessed/soft-dirty bits in folio_pte_batch() David Hildenbrand
2024-01-29 11:05 ` Ryan Roberts
2024-01-25 19:32 ` [PATCH v2 15/15] mm/memory: ignore writable bit " David Hildenbrand