linux-mm.kvack.org archive mirror
From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand <david@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Yu Zhao <yuzhao@google.com>,
	"Yin, Fengwei" <fengwei.yin@intel.com>
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [RFC v2 PATCH 00/17] variable-order, large folios for anonymous memory
Date: Mon, 17 Apr 2023 16:38:06 +0100	[thread overview]
Message-ID: <cc0b1dcc-abe0-7cc2-d84f-ca4a299bc29d@arm.com> (raw)
In-Reply-To: <568b5b73-f0e9-c385-f628-93e45825fb7b@redhat.com>

On 17/04/2023 15:05, David Hildenbrand wrote:
> [...]
> 
>>> Just a note (that you maybe already know) that we have to be a bit careful in
>>> the wp_copy path with replacing sub-pages that are marked exclusive.
>>
>> Ahh, no I wasn't aware of this - thanks for taking the time to explain it. I
>> think I have a bug.
>>
>> (I'm guessing the GUP fast path assumes that if it sees an exclusive page then
>> that page can't go away? And if I then wp_copy it, I'm breaking the assumption?
>> But surely user space can munmap it at any time and the consequences are
>> similar? It's probably clear that I don't know much about the GUP implementation
>> details...)
> 
> If GUP finds a read-only PTE pointing at an exclusive subpage, it assumes that
> this page cannot randomly be replaced by core MM due to COW. See
> gup_must_unshare(). So it can go ahead and pin the page. As long as user space
> doesn't do something stupid with the mapping (MADV_DONTNEED, munmap()) the
> pinned page must correspond to the mapped page.
> 
> If GUP finds a writeable PTE, it assumes that this page cannot randomly be
> replaced by core MM due to COW -- because writable implies exclusive. See, for
> example the VM_BUG_ON_PAGE() in follow_page_pte(). So, similarly, GUP can simply
> go ahead and pin the page.
> 
> GUP-fast runs lockless, not even taking the PT locks. It syncs against
> concurrent fork() using a special seqlock, and essentially unpins whatever it
> temporarily pinned when it detects that fork() was running concurrently. But it
> might result in some pages temporarily being flagged as "maybe pinned".
> 
> In other cases (!fork()), GUP-fast synchronizes against concurrent sharing (KSM)
> or unmapping (migration, swapout) that implies clearing of the PG_anon_exclusive
> flag of the subpage by first unmapping the PTE and conditionally remapping it. See
> mm/ksm.c:write_protect_page() as an example for the sharing side (especially: if
> page_try_share_anon_rmap() fails because the page may be pinned).
> 
> Long story short: replacing a r-o "maybe shared" (!exclusive) PTE is easy.
> Replacing an exclusive PTE (including writable PTEs) requires some work to sync
> with GUP-fast and goes rather in the "maybe just don't bother" territory.

Yep agreed. I'll plan to fix this by adding the constraint that all pages of the
copy range (calc_anon_folio_order_copy()) must be "maybe shared".
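As a sanity check on that rule, here is a toy userspace model of the planned
constraint (not kernel code; the struct and function names are mine, purely for
illustration): the copy range only extends while every subpage is still "maybe
shared", and stops at the first exclusive subpage, since that one could be
pinned concurrently by GUP-fast.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for a subpage; anon_exclusive models PG_anon_exclusive. */
struct subpage {
	bool anon_exclusive;
};

/*
 * Return the number of leading subpages that are safe to replace with a
 * fresh folio: the scan stops at the first exclusive subpage, which
 * GUP-fast might pin concurrently.
 */
size_t copy_range_maybe_shared(const struct subpage *pages, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (pages[i].anon_exclusive)
			break;	/* exclusive page: must not be replaced */
	}
	return i;
}
```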

> 
>>
>> My current patch always prefers reuse over copy, and expands the size of the
>> reuse to the biggest set of pages that are all exclusive (determined either by
>> the presence of the anon_exclusive flag or from the refcount), and covered by
>> the same folio (and a few other bounds constraints - see
>> calc_anon_folio_range_reuse()).
>>
>> If I determine I must copy (because the "anchor" page is not exclusive), then I
>> determine the size of the copy region based on a few constraints (see
>> calc_anon_folio_order_copy()). But I think you are saying that no pages in that
>> region are allowed to have the anon_exclusive flag set? In which case, this
>> could be fixed by adding that check in the function.
> 
> Yes, changing a PTE that points at an anonymous subpage that has the "exclusive"
> flag set requires more care.
> 
>>
>>>
>>> Currently, we always only replace a single shared anon (sub)page by a fresh
>>> exclusive base-page during a write-fault/unsharing. As the sub-page is already
>>> marked "maybe shared", it cannot get pinned concurrently and everybody is happy.
>>
>> When you say, "maybe shared" is that determined by the absence of the
>> "exclusive" flag?
> 
> Yes. Semantics of PG_anon_exclusive are "exclusive" vs. "maybe shared". Once
> "maybe shared", we must only go back to "exclusive" (set the flag) if we are sure
> that there are no other references to the page.
> 
>>
>>>
>>> If you now decide to replace more subpages, you have to be careful that none of
>>> them are still exclusive -- because they could get pinned concurrently and
>>> replacing them would result in memory corruptions.
>>>
>>> There are scenarios (most prominently MADV_WIPEONFORK, but also a failed
>>> partial fork()) that could result in something like that.
>>
>> Are there any test cases that stress the kernel in this area that I could use to
>> validate my fix?
> 
> tools/testing/selftests/mm/cow.c does extensive tests (including some
> MADV_DONTFORK -- that's what I actually meant -- and partial mremap tests), but
> mostly focuses on ordinary base pages (order-0), THP, and hugetlb.
> 
> We don't have any "GUP-fast racing with fork()" tests or similar yet (tests that
> rely on races are not a good candidate for selftests).
> 
> We might want to extend tools/testing/selftests/mm/cow.c to test for some of the
> cases you extend.
> 
> We may also change the detection of THP (I think, by luck, it would currently
> also test your patches to some degree, given the way it tests for THP)
> 
> if (!pagemap_is_populated(pagemap_fd, mem + pagesize)) {
>     ksft_test_result_skip("Did not get a THP populated\n");
>     goto munmap;
> }
> 
> Would have to be, for example,
> 
> if (!pagemap_is_populated(pagemap_fd, mem + thpsize - pagesize)) {
>     ksft_test_result_skip("Did not get a THP populated\n");
>     goto munmap;
> }
> 
> Because we touch the first PTE in a PMD and want to test if core-mm gave us a
> full THP (last PTE also populated).
> 
> 
> Extending the tests to cover other anon THP sizes could work by aligning a VMA
> to THP/2 size (so we're sure we don't get a full THP), and then testing if we
> get more PTEs populated -> your code active.

Thanks. I'll run all these and make sure they pass and look at adding new
variants for the next rev.
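For my own notes, the probe arithmetic above as a standalone sketch (the helper
names here are mine, not the selftest's): touch the first PTE of a PMD-sized
region, but probe the *last* base page to confirm core-mm populated a full THP.

```c
#include <stdint.h>

/* Round addr up to the given power-of-two alignment. */
static inline uintptr_t align_up(uintptr_t addr, uintptr_t align)
{
	return (addr + align - 1) & ~(align - 1);
}

/*
 * Offset of the last base page in a THP-sized region: the page the
 * selftest should probe (mem + thpsize - pagesize) after touching the
 * first one, to check whether the whole THP got populated.
 */
static inline uintptr_t last_page_offset(uintptr_t thpsize,
					 uintptr_t pagesize)
{
	return thpsize - pagesize;
}
```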

> 
>>
>>>
>>> Further, we have to be a bit careful regarding replacing ranges that are backed
>>> by different anon pages (for example, due to fork() deciding to copy some
>>> sub-pages of a PTE-mapped folio instead of sharing all sub-pages).
>>
>> I don't understand this statement; do you mean "different anon _folios_"? I am
>> scanning the page table to expand the region that I reuse/copy and as part of
>> that scan, make sure that I only cover a single folio. So I think I conform here
>> - the scan would give up once it gets to the hole.
> 
> During fork(), what could happen (temporary detection of pinned page resulting
> in a copy) is something weird like:
> 
> PTE 0: subpage0 of anon page #1 (maybe shared)
> PTE 1: subpage1 of anon page #1 (maybe shared)
> PTE 2: anon page #2 (exclusive)
> PTE 3: subpage2 of anon page #1 (maybe shared)

Hmm... I can see how this could happen if you mremap PTE2 to PTE3, then mmap
something new in PTE2. But I don't see how it happens at fork. For PTE3, did you
mean subpage _3_?

> 
> Of course, any combination of above.
> 
> Further, with mremap() we might get completely crazy layouts, randomly mapping
> sub-pages of anon pages, mixed with other sub-pages or base-page folios.
> 
> Maybe it's all handled already by your code, just pointing out which kind of
> mess we might get :)

Yep, this is already handled; the scan to expand the range ensures that all the
PTEs map to the expected contiguous pages in the same folio.
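To illustrate what I mean by the scan (a toy userspace model; the names are
mine, not the actual series code): expansion stops as soon as a PTE breaks pfn
contiguity or steps outside the folio, which handles the mremap()-shuffled
layouts you describe.

```c
#include <stddef.h>

/* Toy stand-in for a PTE: just the pfn it maps. */
struct pte_model {
	unsigned long pfn;
};

/*
 * Return how many leading PTEs map consecutive pfns that all lie
 * within the folio [folio_start_pfn, folio_start_pfn + folio_nr_pages).
 * Any discontinuity (e.g. from mremap() shuffling subpages) or
 * departure from the folio terminates the scan.
 */
size_t scan_contig_in_folio(const struct pte_model *ptes, size_t n,
			    unsigned long folio_start_pfn,
			    unsigned long folio_nr_pages)
{
	size_t i;

	for (i = 0; i < n; i++) {
		unsigned long expect = ptes[0].pfn + i;

		if (ptes[i].pfn != expect)
			break;		/* not contiguous */
		if (expect < folio_start_pfn ||
		    expect >= folio_start_pfn + folio_nr_pages)
			break;		/* left the folio */
	}
	return i;
}
```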

> 
>>
>>>
>>>
>>> So what should be safe is replacing all sub-pages of a folio that are marked
>>> "maybe shared" by a new folio under PT lock. However, I wonder if it's really
>>> worth the complexity. For THP we were happy so far to *not* optimize this,
>>> implying that maybe we shouldn't worry about optimizing the fork() case for now
>>> that heavily.
>>
>> I don't have the exact numbers to hand, but I'm pretty sure I remember enabling
>> large copies was contributing a measurable amount to the performance
>> improvement. (Certainly, the zero-page copy case is definitely a big
>> contributor.) I don't have access to the HW at the moment but can rerun later
>> with and without to double check.
> 
> In which test exactly? Some micro-benchmark?

The kernel compile benchmark that I quoted numbers for in the cover letter. I
have some trace points (not part of the submitted series) that tell me how many
mappings of each order we get for each code path. I'm pretty sure I remember all
of these 4 code paths contributing non-negligible amounts.

> 
>>
>>>
>>>
>>> One optimization one could think of instead (that I raised previously in other
>>> context) is the detection of exclusivity after fork()+exit in the child (IOW,
>>> only the parent continues to exist). Once PG_anon_exclusive was cleared for all
>>> sub-pages of the THP-mapped folio during fork(), we'd always decide to copy
>>> instead of reuse (because page_count() > 1, as the folio is PTE mapped).
>>> Scanning the surrounding page table if it makes sense (e.g., page_count() <=
>>> folio_nr_pages()), to test if all page references are from the current process
>>> would allow for reusing the folio (setting PG_anon_exclusive) for the sub-pages.
>>> The smaller the folio order, the cheaper this "scan surrounding PTEs" scan is.
>>> For THP, which are usually PMD-mapped even after fork()+exit, we didn't add this
>>> optimization.
>>
>> Yes, I have already implemented this in my series; see patch 10.
> 
> Oh, good! That's the most important part.
> 
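For anyone following along, the cheap gate for that scan can be modelled like
this (toy code; the name is mine, not the patch-10 code): only when the folio's
total refcount does not exceed its number of subpages can all references
plausibly come from this process's own PTEs, making the scan worthwhile.

```c
#include <stdbool.h>

/*
 * Heuristic gate for the "re-detect exclusivity after fork()+exit"
 * scan: if page_count() exceeds folio_nr_pages(), someone other than
 * this process's PTEs must hold a reference, so copy without scanning.
 * Otherwise a PTE scan may prove all references are ours, allowing
 * reuse (setting PG_anon_exclusive on the subpages).
 */
bool worth_scanning_for_reuse(unsigned long page_count,
			      unsigned long folio_nr_pages)
{
	return page_count <= folio_nr_pages;
}
```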




Thread overview: 44+ messages
2023-04-14 13:02 Ryan Roberts
2023-04-14 13:02 ` [RFC v2 PATCH 01/17] mm: Expose clear_huge_page() unconditionally Ryan Roberts
2023-04-14 13:02 ` [RFC v2 PATCH 02/17] mm: pass gfp flags and order to vma_alloc_zeroed_movable_folio() Ryan Roberts
2023-04-14 13:02 ` [RFC v2 PATCH 03/17] mm: Introduce try_vma_alloc_movable_folio() Ryan Roberts
2023-04-17  8:49   ` Yin, Fengwei
2023-04-17 10:11     ` Ryan Roberts
2023-04-14 13:02 ` [RFC v2 PATCH 04/17] mm: Implement folio_add_new_anon_rmap_range() Ryan Roberts
2023-04-14 13:02 ` [RFC v2 PATCH 05/17] mm: Routines to determine max anon folio allocation order Ryan Roberts
2023-04-14 14:09   ` Kirill A. Shutemov
2023-04-14 14:38     ` Ryan Roberts
2023-04-14 15:37       ` Kirill A. Shutemov
2023-04-14 16:06         ` Ryan Roberts
2023-04-14 16:18           ` Matthew Wilcox
2023-04-14 16:31             ` Ryan Roberts
2023-04-14 13:02 ` [RFC v2 PATCH 06/17] mm: Allocate large folios for anonymous memory Ryan Roberts
2023-04-14 13:02 ` [RFC v2 PATCH 07/17] mm: Allow deferred splitting of arbitrary large anon folios Ryan Roberts
2023-04-14 13:02 ` [RFC v2 PATCH 08/17] mm: Implement folio_move_anon_rmap_range() Ryan Roberts
2023-04-14 13:02 ` [RFC v2 PATCH 09/17] mm: Update wp_page_reuse() to operate on range of pages Ryan Roberts
2023-04-14 13:02 ` [RFC v2 PATCH 10/17] mm: Reuse large folios for anonymous memory Ryan Roberts
2023-04-14 13:02 ` [RFC v2 PATCH 11/17] mm: Split __wp_page_copy_user() into 2 variants Ryan Roberts
2023-04-14 13:02 ` [RFC v2 PATCH 12/17] mm: ptep_clear_flush_range_notify() macro for batch operation Ryan Roberts
2023-04-14 13:02 ` [RFC v2 PATCH 13/17] mm: Implement folio_remove_rmap_range() Ryan Roberts
2023-04-14 13:03 ` [RFC v2 PATCH 14/17] mm: Copy large folios for anonymous memory Ryan Roberts
2023-04-14 13:03 ` [RFC v2 PATCH 15/17] mm: Convert zero page to large folios on write Ryan Roberts
2023-04-14 13:03 ` [RFC v2 PATCH 16/17] mm: mmap: Align unhinted maps to highest anon folio order Ryan Roberts
2023-04-17  8:25   ` Yin, Fengwei
2023-04-17 10:13     ` Ryan Roberts
2023-04-14 13:03 ` [RFC v2 PATCH 17/17] mm: Batch-zap large anonymous folio PTE mappings Ryan Roberts
2023-04-17  8:04 ` [RFC v2 PATCH 00/17] variable-order, large folios for anonymous memory Yin, Fengwei
2023-04-17 10:19   ` Ryan Roberts
2023-04-17  8:19 ` Yin, Fengwei
2023-04-17 10:28   ` Ryan Roberts
2023-04-17 10:54 ` David Hildenbrand
2023-04-17 11:43   ` Ryan Roberts
2023-04-17 14:05     ` David Hildenbrand
2023-04-17 15:38       ` Ryan Roberts [this message]
2023-04-17 15:44         ` David Hildenbrand
2023-04-17 16:15           ` Ryan Roberts
2023-04-26 10:41           ` Ryan Roberts
2023-05-17 13:58             ` David Hildenbrand
2023-05-18 11:23               ` Ryan Roberts
2023-04-19 10:12       ` Ryan Roberts
2023-04-19 10:51         ` David Hildenbrand
2023-04-19 11:13           ` Ryan Roberts
