From: David Hildenbrand <david@redhat.com>
To: Ryan Roberts <ryan.roberts@arm.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Yu Zhao <yuzhao@google.com>,
	"Yin, Fengwei" <fengwei.yin@intel.com>
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [RFC v2 PATCH 00/17] variable-order, large folios for anonymous memory
Date: Mon, 17 Apr 2023 17:44:25 +0200	[thread overview]
Message-ID: <c1e63072-4b7b-fcbd-8de5-9dbcb6ad60fc@redhat.com> (raw)
In-Reply-To: <cc0b1dcc-abe0-7cc2-d84f-ca4a299bc29d@arm.com>

>>>
>>>>
>>>> Further, we have to be a bit careful regarding replacing ranges that are backed
>>>> by different anon pages (for example, due to fork() deciding to copy some
>>>> sub-pages of a PTE-mapped folio instead of sharing all sub-pages).
>>>
>>> I don't understand this statement; do you mean "different anon _folios_"? I am
>>> scanning the page table to expand the region that I reuse/copy and as part of
>>> that scan, make sure that I only cover a single folio. So I think I conform here
>>> - the scan would give up once it gets to the hole.
>>
>> During fork(), what could happen (temporary detection of pinned page resulting
>> in a copy) is something weird like:
>>
>> PTE 0: subpage0 of anon page #1 (maybe shared)
>> PTE 1: subpage1 of anon page #1 (maybe shared)
>> PTE 2: anon page #2 (exclusive)
>> PTE 3: subpage2 of anon page #1 (maybe shared)
> 
> Hmm... I can see how this could happen if you mremap PTE2 to PTE3, then mmap
> something new in PTE2. But I don't see how it happens at fork. For PTE3, did you
> mean subpage _3_?
>

Yes, fat fingers :) Thanks for paying attention!

The above could be optimized by processing all consecutive PTEs at once: 
that is, we check whether the page is maybe pinned only once, and then 
either copy all PTEs or share all PTEs. Though I guess it's unlikely to 
happen in practice.


>>
>> Of course, any combination of above.
>>
>> Further, with mremap() we might get completely crazy layouts, randomly mapping
>> sub-pages of anon pages, mixed with other sub-pages or base-page folios.
>>
>> Maybe it's all handled already by your code, just pointing out which kind of
>> mess we might get :)
> 
> Yep, this is already handled; the scan to expand the range ensures that all the
> PTEs map to the expected contiguous pages in the same folio.

Okay, great.

> 
>>
>>>
>>>>
>>>>
>>>> So what should be safe is replacing all sub-pages of a folio that are marked
>>>> "maybe shared" by a new folio under PT lock. However, I wonder if it's really
>>>> worth the complexity. For THP we were happy so far to *not* optimize this,
>>>> implying that maybe we shouldn't worry about optimizing the fork() case for now
>>>> that heavily.
>>>
>>> I don't have the exact numbers to hand, but I'm pretty sure I remember enabling
>>> large copies was contributing a measurable amount to the performance
>>> improvement. (Certainly, the zero-page copy case is definitely a big
>>> contributor.) I don't have access to the HW at the moment but can rerun later
>>> with and without to double check.
>>
>> In which test exactly? Some micro-benchmark?
> 
> The kernel compile benchmark that I quoted numbers for in the cover letter. I
> have some trace points (not part of the submitted series) that tell me how many
> mappings of each order we get for each code path. I'm pretty sure I remember all
> of these 4 code paths contributing non-negligible amounts.

Interesting! It would be great to see whether there is an actual 
difference with only patch #10 applied, without the other COW replacement.

-- 
Thanks,

David / dhildenb


