From: David Hildenbrand <david@redhat.com>
To: Ryan Roberts <ryan.roberts@arm.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>, Ard Biesheuvel <ardb@kernel.org>,
Marc Zyngier <maz@kernel.org>,
Oliver Upton <oliver.upton@linux.dev>,
James Morse <james.morse@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Zenghui Yu <yuzenghui@huawei.com>,
Andrey Ryabinin <ryabinin.a.a@gmail.com>,
Alexander Potapenko <glider@google.com>,
Andrey Konovalov <andreyknvl@gmail.com>,
Dmitry Vyukov <dvyukov@google.com>,
Vincenzo Frascino <vincenzo.frascino@arm.com>,
Andrew Morton <akpm@linux-foundation.org>,
Anshuman Khandual <anshuman.khandual@arm.com>,
Matthew Wilcox <willy@infradead.org>, Yu Zhao <yuzhao@google.com>,
Mark Rutland <mark.rutland@arm.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
John Hubbard <jhubbard@nvidia.com>, Zi Yan <ziy@nvidia.com>
Cc: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 01/14] mm: Batch-copy PTE ranges during fork()
Date: Thu, 16 Nov 2023 15:13:14 +0100
Message-ID: <2618b024-6a95-4bfc-a08d-59d86e9931e5@redhat.com>
In-Reply-To: <2d027a8d-adfb-481d-89ea-c99139e669aa@arm.com>
On 16.11.23 14:49, Ryan Roberts wrote:
> On 16/11/2023 13:20, David Hildenbrand wrote:
>> On 16.11.23 12:20, Ryan Roberts wrote:
>>> On 16/11/2023 11:03, David Hildenbrand wrote:
>>>> On 15.11.23 17:30, Ryan Roberts wrote:
>>>>> Convert copy_pte_range() to copy a set of ptes in a batch. A given batch
>>>>> maps a physically contiguous block of memory, all belonging to the same
>>>>> folio, with the same permissions, and for shared mappings, the same
>>>>> dirty state. This will likely improve performance by a tiny amount due
>>>>> to batching the folio reference count management and calling set_ptes()
>>>>> rather than making individual calls to set_pte_at().
>>>>>
>>>>> However, the primary motivation for this change is to reduce the number
>>>>> of TLB maintenance operations that the arm64 backend has to perform
>>>>> during fork, as it is about to add transparent support for the
>>>>> "contiguous bit" in its ptes. By write-protecting the parent using the
>>>>> new ptep_set_wrprotects() (note the 's' at the end) function, the
>>>>> backend can avoid having to unfold contig ranges of PTEs, which is
>>>>> expensive, when all ptes in the range are being write-protected.
>>>>> Similarly, by using set_ptes() rather than set_pte_at() to set up ptes
>>>>> in the child, the backend does not need to fold a contiguous range once
>>>>> they are all populated - they can be initially populated as a contiguous
>>>>> range in the first place.
>>>>>
>>>>> This change addresses the core-mm refactoring only, and introduces
>>>>> ptep_set_wrprotects() with a default implementation that calls
>>>>> ptep_set_wrprotect() for each pte in the range. A separate change will
>>>>> implement ptep_set_wrprotects() in the arm64 backend to realize the
>>>>> performance improvement as part of the work to enable contpte mappings.
>>>>>
>>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>>>> ---
>>>>> include/linux/pgtable.h |  13 +++
>>>>> mm/memory.c             | 175 +++++++++++++++++++++++++++++++---------
>>>>> 2 files changed, 150 insertions(+), 38 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>> index af7639c3b0a3..1c50f8a0fdde 100644
>>>>> --- a/include/linux/pgtable.h
>>>>> +++ b/include/linux/pgtable.h
>>>>> @@ -622,6 +622,19 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
>>>>> }
>>>>> #endif
>>>>> +#ifndef ptep_set_wrprotects
>>>>> +struct mm_struct;
>>>>> +static inline void ptep_set_wrprotects(struct mm_struct *mm,
>>>>> + unsigned long address, pte_t *ptep,
>>>>> + unsigned int nr)
>>>>> +{
>>>>> + unsigned int i;
>>>>> +
>>>>> + for (i = 0; i < nr; i++, address += PAGE_SIZE, ptep++)
>>>>> + ptep_set_wrprotect(mm, address, ptep);
>>>>> +}
>>>>> +#endif
>>>>> +
>>>>> /*
>>>>> * On some architectures hardware does not set page access bit when
>>>>> accessing
>>>>> * memory page, it is responsibility of software setting this bit. It brings
>>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>>> index 1f18ed4a5497..b7c8228883cf 100644
>>>>> --- a/mm/memory.c
>>>>> +++ b/mm/memory.c
>>>>> @@ -921,46 +921,129 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
>>>>> /* Uffd-wp needs to be delivered to dest pte as well */
>>>>> pte = pte_mkuffd_wp(pte);
>>>>> set_pte_at(dst_vma->vm_mm, addr, dst_pte, pte);
>>>>> - return 0;
>>>>> + return 1;
>>>>> +}
>>>>> +
>>>>> +static inline unsigned long page_cont_mapped_vaddr(struct page *page,
>>>>> + struct page *anchor, unsigned long anchor_vaddr)
>>>>> +{
>>>>> + unsigned long offset;
>>>>> + unsigned long vaddr;
>>>>> +
>>>>> + offset = (page_to_pfn(page) - page_to_pfn(anchor)) << PAGE_SHIFT;
>>>>> + vaddr = anchor_vaddr + offset;
>>>>> +
>>>>> + if (anchor > page) {
>>>>> + if (vaddr > anchor_vaddr)
>>>>> + return 0;
>>>>> + } else {
>>>>> + if (vaddr < anchor_vaddr)
>>>>> + return ULONG_MAX;
>>>>> + }
>>>>> +
>>>>> + return vaddr;
>>>>> +}
>>>>> +
>>>>> +static int folio_nr_pages_cont_mapped(struct folio *folio,
>>>>> + struct page *page, pte_t *pte,
>>>>> + unsigned long addr, unsigned long end,
>>>>> + pte_t ptent, bool *any_dirty)
>>>>> +{
>>>>> + int floops;
>>>>> + int i;
>>>>> + unsigned long pfn;
>>>>> + pgprot_t prot;
>>>>> + struct page *folio_end;
>>>>> +
>>>>> + if (!folio_test_large(folio))
>>>>> + return 1;
>>>>> +
>>>>> + folio_end = &folio->page + folio_nr_pages(folio);
>>>>> + end = min(page_cont_mapped_vaddr(folio_end, page, addr), end);
>>>>> + floops = (end - addr) >> PAGE_SHIFT;
>>>>> + pfn = page_to_pfn(page);
>>>>> + prot = pte_pgprot(pte_mkold(pte_mkclean(ptent)));
>>>>> +
>>>>> + *any_dirty = pte_dirty(ptent);
>>>>> +
>>>>> + pfn++;
>>>>> + pte++;
>>>>> +
>>>>> + for (i = 1; i < floops; i++) {
>>>>> + ptent = ptep_get(pte);
>>>>> + ptent = pte_mkold(pte_mkclean(ptent));
>>>>> +
>>>>> + if (!pte_present(ptent) || pte_pfn(ptent) != pfn ||
>>>>> + pgprot_val(pte_pgprot(ptent)) != pgprot_val(prot))
>>>>> + break;
>>>>> +
>>>>> + if (pte_dirty(ptent))
>>>>> + *any_dirty = true;
>>>>> +
>>>>> + pfn++;
>>>>> + pte++;
>>>>> + }
>>>>> +
>>>>> + return i;
>>>>> }
>>>>> /*
>>>>> - * Copy one pte. Returns 0 if succeeded, or -EAGAIN if one preallocated page
>>>>> - * is required to copy this pte.
>>>>> + * Copy set of contiguous ptes. Returns number of ptes copied if succeeded
>>>>> + * (always gte 1), or -EAGAIN if one preallocated page is required to copy the
>>>>> + * first pte.
>>>>> */
>>>>> static inline int
>>>>> -copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>>>>> - pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
>>>>> - struct folio **prealloc)
>>>>> +copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>>>>> + pte_t *dst_pte, pte_t *src_pte,
>>>>> + unsigned long addr, unsigned long end,
>>>>> + int *rss, struct folio **prealloc)
>>>>> {
>>>>> struct mm_struct *src_mm = src_vma->vm_mm;
>>>>> unsigned long vm_flags = src_vma->vm_flags;
>>>>> pte_t pte = ptep_get(src_pte);
>>>>> struct page *page;
>>>>> struct folio *folio;
>>>>> + int nr = 1;
>>>>> + bool anon;
>>>>> + bool any_dirty = pte_dirty(pte);
>>>>> + int i;
>>>>> page = vm_normal_page(src_vma, addr, pte);
>>>>> - if (page)
>>>>> + if (page) {
>>>>> folio = page_folio(page);
>>>>> - if (page && folio_test_anon(folio)) {
>>>>> - /*
>>>>> - * If this page may have been pinned by the parent process,
>>>>> - * copy the page immediately for the child so that we'll always
>>>>> - * guarantee the pinned page won't be randomly replaced in the
>>>>> - * future.
>>>>> - */
>>>>> - folio_get(folio);
>>>>> - if (unlikely(page_try_dup_anon_rmap(page, false, src_vma))) {
>>>>> - /* Page may be pinned, we have to copy. */
>>>>> - folio_put(folio);
>>>>> - return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
>>>>> - addr, rss, prealloc, page);
>>>>> + anon = folio_test_anon(folio);
>>>>> + nr = folio_nr_pages_cont_mapped(folio, page, src_pte, addr,
>>>>> + end, pte, &any_dirty);
>>>>> +
>>>>> + for (i = 0; i < nr; i++, page++) {
>>>>> + if (anon) {
>>>>> + /*
>>>>> + * If this page may have been pinned by the
>>>>> + * parent process, copy the page immediately for
>>>>> + * the child so that we'll always guarantee the
>>>>> + * pinned page won't be randomly replaced in the
>>>>> + * future.
>>>>> + */
>>>>> + if (unlikely(page_try_dup_anon_rmap(
>>>>> + page, false, src_vma))) {
>>>>> + if (i != 0)
>>>>> + break;
>>>>> + /* Page may be pinned, we have to copy. */
>>>>> + return copy_present_page(
>>>>> + dst_vma, src_vma, dst_pte,
>>>>> + src_pte, addr, rss, prealloc,
>>>>> + page);
>>>>> + }
>>>>> + rss[MM_ANONPAGES]++;
>>>>> + VM_BUG_ON(PageAnonExclusive(page));
>>>>> + } else {
>>>>> + page_dup_file_rmap(page, false);
>>>>> + rss[mm_counter_file(page)]++;
>>>>> + }
>>>>> }
>>>>> - rss[MM_ANONPAGES]++;
>>>>> - } else if (page) {
>>>>> - folio_get(folio);
>>>>> - page_dup_file_rmap(page, false);
>>>>> - rss[mm_counter_file(page)]++;
>>>>> +
>>>>> + nr = i;
>>>>> + folio_ref_add(folio, nr);
>>>>> }
>>>>> /*
>>>>> @@ -968,24 +1051,28 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>>>>> * in the parent and the child
>>>>> */
>>>>> if (is_cow_mapping(vm_flags) && pte_write(pte)) {
>>>>> - ptep_set_wrprotect(src_mm, addr, src_pte);
>>>>> + ptep_set_wrprotects(src_mm, addr, src_pte, nr);
>>>>> pte = pte_wrprotect(pte);
>>>>
>>>> You likely want an "any_pte_writable" check here instead, no?
>>>>
>>>> Any operations that target a single individual PTE while multiple PTEs are
>>>> adjusted are suspicious :)
>>>
>>> The idea is that I've already constrained the batch of pages such that the
>>> permissions are all the same (see folio_nr_pages_cont_mapped()). So if the first
>>> pte is writable, then they all are - something has gone badly wrong if some are
>>> writable and others are not.
>>
>> I wonder if it would be cleaner and easier to not do that, though.
>>
>> Simply record if any pte is writable. Afterwards they will *all* be R/O and you
>> can set the cont bit, correct?
>
> Oh I see what you mean - that only works for cow mappings though. If you have a
> shared mapping, you won't be making it read-only at fork. So if we ignore
> pte_write() state when demarcating the batches, we will end up with a batch of
> pages with a mix of RO and RW in the parent, but then we set_ptes() for the
> child and those pages will all have the permissions of the first page of the batch.
I see what you mean.

After fork(), all anon pages will be R/O in the parent and the child.
Easy. If any PTE is writable, wrprotect all in the parent and the child.

After fork(), all shared pages can be R/O or R/W in the parent. For
simplicity, I think you can simply set them all R/O in the child. So if
any PTE is writable, wrprotect all in the child.

Why? In the default case, fork() does not even care about MAP_SHARED
mappings; it does not copy the page tables/ptes. See vma_needs_copy().

Only in corner cases (e.g., uffd-wp, VM_PFNMAP, VM_MIXEDMAP), or in
MAP_PRIVATE mappings, can you even end up in that code.

In MAP_PRIVATE mappings, only anon pages can be R/W; other pages can
never be writable, so it does not matter. In VM_PFNMAP/VM_MIXEDMAP,
likely all permissions match either way.

So you might just make the !anon pages R/O in the child, and nobody
should really notice it; write faults will resolve it.
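
Roughly, as an untested sketch against this patch (illustrative only;
the any_writable plumbing is hypothetical, reusing the helper names
from the patch above):

/*
 * Variant of folio_nr_pages_cont_mapped() that no longer treats the
 * write bit as batch-defining: it is normalized away for the pgprot
 * comparison and reported back via *any_writable instead.
 */
static int folio_nr_pages_cont_mapped(struct folio *folio,
				      struct page *page, pte_t *pte,
				      unsigned long addr, unsigned long end,
				      pte_t ptent, bool *any_writable,
				      bool *any_dirty)
{
	int floops;
	int i;
	unsigned long pfn;
	pgprot_t prot;
	struct page *folio_end;

	*any_writable = pte_write(ptent);
	*any_dirty = pte_dirty(ptent);

	if (!folio_test_large(folio))
		return 1;

	folio_end = &folio->page + folio_nr_pages(folio);
	end = min(page_cont_mapped_vaddr(folio_end, page, addr), end);
	floops = (end - addr) >> PAGE_SHIFT;
	pfn = page_to_pfn(page);

	/* Normalize access, dirty *and* write bits for the comparison. */
	prot = pte_pgprot(pte_wrprotect(pte_mkold(pte_mkclean(ptent))));

	pfn++;
	pte++;

	for (i = 1; i < floops; i++) {
		ptent = pte_mkold(pte_mkclean(ptep_get(pte)));

		if (!pte_present(ptent) || pte_pfn(ptent) != pfn ||
		    pgprot_val(pte_pgprot(pte_wrprotect(ptent))) !=
		    pgprot_val(prot))
			break;

		if (pte_write(ptent))
			*any_writable = true;
		if (pte_dirty(ptent))
			*any_dirty = true;

		pfn++;
		pte++;
	}

	return i;
}

And then, in copy_present_ptes(), something like:

	if (is_cow_mapping(vm_flags) && any_writable) {
		/* CoW: write-protect the whole batch in the parent ... */
		ptep_set_wrprotects(src_mm, addr, src_pte, nr);
		/* ... and hand the child a read-only template pte. */
		pte = pte_wrprotect(pte);
	} else if (any_writable) {
		/*
		 * !CoW corner cases (uffd-wp, VM_PFNMAP, VM_MIXEDMAP):
		 * leave the parent alone, but map the whole batch R/O in
		 * the child; write faults will resolve it.
		 */
		pte = pte_wrprotect(pte);
	}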
Famous last words :)
--
Cheers,
David / dhildenb