From: David Hildenbrand <david@redhat.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Ryan Roberts <ryan.roberts@arm.com>,
	Matthew Wilcox <willy@infradead.org>,
	Hugh Dickins <hughd@google.com>,
	Yin Fengwei <fengwei.yin@intel.com>,
	Yang Shi <shy828301@gmail.com>, Ying Huang <ying.huang@intel.com>,
	Zi Yan <ziy@nvidia.com>, Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>, Will Deacon <will@kernel.org>,
	Waiman Long <longman@redhat.com>,
	"Paul E. McKenney" <paulmck@kernel.org>
Subject: Re: [PATCH WIP v1 00/20] mm: precise "mapped shared" vs. "mapped exclusively" detection for PTE-mapped THP / partially-mappable folios
Date: Sat, 25 Nov 2023 18:02:04 +0100
Message-ID: <a9922f58-8129-4f15-b160-e0ace581bcbe@redhat.com>
In-Reply-To: <CAHk-=wgFRaS9FLJZEv0pbASVJ8rPSrTWHkYTmj83vRJh9Ehepw@mail.gmail.com>

On 24.11.23 21:55, Linus Torvalds wrote:
> On Fri, 24 Nov 2023 at 05:26, David Hildenbrand <david@redhat.com> wrote:
>>
>> Are you interested in some made-up math, new locking primitives and
>> slightly unpleasant performance numbers on first sight? :)

Hi Linus,

First of all -- wow -- thanks for the blazing-fast feedback! You really 
had to work through quite a lot of text and code to understand what's 
happening.

Thanks for prioritizing that over Black Friday shopping ;)

> 
> Ugh. I'm not loving the "I have a proof, but it's too big to fit in
> the margin" model  of VM development.
> 
> This does seem to be very subtle.

Yes, compared to other kernel subsystems, this level of math in the VM 
is really new.

The main reason I excluded the proof from this WIP series is not its 
size, though. I wanted to get the implementation out after talking about 
it (and optimizing it ...) for way too long, and (a) proofs involving 
infinite sequences in pure ASCII are just horrible to read; (b) I think 
the proof can be cleaned up / simplified, especially after I came up 
with the "intuition" in the patch a few days ago and decided to use that 
one instead for now.

To be clear: if this ever gets discussed for actual merging, it will 
only be with a public, reviewed proof available.

[Most of the "magic" goes away once one simply uses one rmap value for 
each bit of mm->mm_rmap_id; 22 bits -> 22 rmap values. Of course, 22 
values per folio are undesirable.]
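
To illustrate that per-bit variant (toy code, made-up names, ignoring 
atomicity/ordering and how to read the values consistently; this is not 
the actual patch code):

	#define RMAP_ID_BITS	22

	struct perbit_folio {
		long total_mapcount;
		long rmap_per_bit[RMAP_ID_BITS];
	};

	/* The MM with the given rmap_id maps nr_pages pages of the folio. */
	static void perbit_add(struct perbit_folio *f, unsigned long rmap_id,
			       int nr_pages)
	{
		int i;

		f->total_mapcount += nr_pages;
		for (i = 0; i < RMAP_ID_BITS; i++)
			if (rmap_id & (1UL << i))
				f->rmap_per_bit[i] += nr_pages;
	}

	/* Unmapping simply subtracts the same values again. */

	/*
	 * Exclusively mapped by the MM with the given rmap_id iff each
	 * counter for a set bit equals the total mapcount and each other
	 * counter is 0: any other MM differs in at least one bit and
	 * makes one of these checks fail.
	 */
	static bool perbit_mapped_exclusively(struct perbit_folio *f,
					      unsigned long rmap_id)
	{
		int i;

		for (i = 0; i < RMAP_ID_BITS; i++) {
			long val = f->rmap_per_bit[i];

			if (rmap_id & (1UL << i) ? val != f->total_mapcount
						 : val != 0)
				return false;
		}
		return true;
	}

With one value per bit the check is trivially correct; the math in the 
series is all about getting away with far fewer values.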

> 
> Also, please benchmark what your rmap changes do to just plain regular
> pages - it *looks* like maybe all you did was to add some
> VM_WARN_ON_FOLIO() for those cases, but I have this strong memory of
> that
> 
>          if (likely(!compound)) {
> 
> case being very critical on all the usual cases (and the cleanups by
> Hugh last year were nice).

Yes, indeed. I separated small vs. large folio handling cleanly, such 
that we always have a pattern like:

+       if (likely(!folio_test_large(folio)))
+               return atomic_add_negative(-1, &page->_mapcount);

So, the fast default path is "small folio".
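
To give an idea of the overall shape (simplified sketch; the helper and 
field names here are illustrative, not the exact patch code):

	static bool __my_folio_remove_rmap_pte(struct folio *folio,
					       struct page *page)
	{
		/* Fast path: small folios only have the per-page _mapcount. */
		if (likely(!folio_test_large(folio)))
			return atomic_add_negative(-1, &page->_mapcount);

		/* Large folios additionally maintain a total mapcount etc. */
		atomic_add_negative(-1, &page->_mapcount);
		return atomic_add_negative(-1, &folio->_total_mapcount);
	}

The small-folio case remains a single atomic RMW, just like today.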

As stated, I want to do much more benchmarking to better understand all 
performance impacts; especially on top of Ryan's work on THPs that are 
always PTE-mapped, where we don't have to "artificially" force a 
PTE-mapped THP.

> 
> I get the feeling that you are trying to optimize a particular case
> that is special enough that some less complicated model might work.
> 
> Just by looking at your benchmarks, I *think* the case you actually
> want to optimize is "THP -> fork -> child exit/execve -> parent write
> COW reuse" where the THP page was really never in more than two VM's,
> and the second VM was an almost accidental temporary thing that is
> just about the whole "fork->exec/exit" model.

That's the most obvious/important case regarding COW reuse, agreed. And 
also where I originally started, because it looked like the low-hanging 
fruit (below).

For the benchmarks I have so far, I focused mostly on the 
performance/harm of individual operations. Conceptually, with rmap IDs, 
it makes no difference whether you end up reusing a THP in the parent or 
in the child; the performance is the same either way, so I didn't add 
that case manually to the micro-benchmarks.

> 
> Which makes me really feel like your rmap_id is very over-engineered.
> It seems to be designed to handle all the generic cases, but it seems
> like the main cause for it is a very specific case that I _feel_
> should be something that could be tracked with *way* less information
> (eg just have a "pointer to owner vma, and a simple counter of
> non-owners").

That's precisely where I originally started [1], but I quickly wondered 
(already in that mail):

(a) How to cleanly and safely stabilize refcount vs. mapcount, without
     playing tricks, such that it's just "obvious" that it is correct
     and race-free in the COW reuse path.
(b) How to extend it to !anon folios, where we don't have a clean entry
     point like folio_add_new_anon_rmap(); primarily to get a sane
     replacement for folio_estimated_sharers(), which I just dislike at
     this point (see the sketch right after this list).
(c) Whether it's possible to easily and cleanly change owners (creators
     in my mail) without involving locks.
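
For reference, folio_estimated_sharers() is (roughly) just a guess based 
on the first subpage:

	static inline int folio_estimated_sharers(struct folio *folio)
	{
		return page_mapcount(folio_page(folio, 0));
	}

For a partially-mapped large folio that guess can be wrong in both 
directions; the series converts it to folio_mapped_shared(), which can 
give a precise answer once rmap IDs are in place.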

So I started thinking about the possibility of a precise and possibly 
more universal/cleaner way of handling it that doesn't add too much 
runtime overhead: a way to get for large folios what we already have for 
small folios.

I was surprised to find an approach that gives a precise answer and 
simply changes the owner implicitly, primarily just by 
adding/subtracting numbers.
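
Roughly like this (again made-up names, and glossing over how the per-MM 
sub-values are constructed, how many rmap values we need, and the 
seqcount that makes reading them consistent):

	#define NR_RMAP_VALUES	6	/* 3..6 in this series, depending on the arch */

	struct rid_mm {
		/* precomputed from the MM's rmap ID */
		long rmap_subid[NR_RMAP_VALUES];
	};

	struct rid_folio {
		long total_mapcount;
		long rmap_val[NR_RMAP_VALUES];
	};

	/* Mapping nr_pages pages into "mm" just adds numbers ... */
	static void rid_add(struct rid_folio *f, struct rid_mm *mm, int nr_pages)
	{
		int i;

		f->total_mapcount += nr_pages;
		for (i = 0; i < NR_RMAP_VALUES; i++)
			f->rmap_val[i] += (long)nr_pages * mm->rmap_subid[i];
	}

	/*
	 * ... unmapping subtracts them again, and "mapped exclusively by
	 * mm" boils down to a comparison:
	 */
	static bool rid_mapped_exclusively(struct rid_folio *f, struct rid_mm *mm)
	{
		int i;

		for (i = 0; i < NR_RMAP_VALUES; i++)
			if (f->rmap_val[i] !=
			    f->total_mapcount * mm->rmap_subid[i])
				return false;
		return true;
	}

Note that there is no explicit "owner" field that would have to be 
transferred anywhere: whichever MM ends up being the only mapper passes 
the check. The proof is about choosing the sub-values such that no 
combination of other MMs can produce the same sums.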

[I'll note that having a universal way to stabilize the mapcount vs. 
refcount could be quite valuable. But achieving that also for small 
folios would require e.g., having shared, hashed atomic seqcounts. And 
I'm not too interested in harming small-folio performance at this point :)]

Now, whether we want all that sooner, later, or maybe never is a 
different question. This WIP version primarily tries to show what's 
possible, at what price, and what the limitations are.

> 
> I dunno. I was cc'd, I looked at the patches, but I suspect I'm not
> really the target audience. If Hugh is ok with this kind of

Well, I really value your feedback, and you are always in my CC list 
when I'm messing with COW and mapcounts.

That said, it's encouraging that you went over the patches (thanks 
again!) and nothing immediately jumped out at you (well, besides the 
proof, but that will be fixed if this ever gets merged).

> complexity, I bow to a higher authority. This *does* seem to add a lot
> of conceptual complexity to something that is already complicated.

I'll note that while it all sounds complicated, in the end it's "just" 
adding/subtracting numbers plus a clean scheme to detect concurrent 
(un)mapping. Further, it's all handled without any new rmap hooks.
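
For the "detect concurrent (un)mapping" part, the reader side follows 
the usual seqcount pattern (sketch using the stock seqcount API just to 
show the idea; the series adds an "atomic seqcount" variant that also 
allows concurrent writers):

	/* hypothetical seqcount protecting a folio's rmap values */
	static seqcount_t rmap_seqcount = SEQCNT_ZERO(rmap_seqcount);

	static void read_rmap_values(void)
	{
		unsigned int seq;

		do {
			seq = read_seqcount_begin(&rmap_seqcount);
			/* read the total mapcount and the rmap values here */
		} while (read_seqcount_retry(&rmap_seqcount, seq));
	}

If an (un)map operation raced with us, the retry catches it and we read 
again until we get a consistent snapshot.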

But yes, there sure is added code and complexity, and personally I 
dislike having to go from 3 to 6 rmap values to support arm64 with 512 
MiB THP. If we could just squeeze it all into a single rmap value, it 
would all look much nicer: one total mapcount, one rmap value.

Before this could get merged, a lot more has to happen. Most of the rmap 
batching (possibly including the exclusive atomic seqcount) could be 
beneficial even without the rmap ID handling, so it's natural to start 
with that independently.

Thanks again!

[1] 
https://lore.kernel.org/all/6cec6f68-248e-63b4-5615-9e0f3f819a0a@redhat.com/T/#u

-- 
Cheers,

David / dhildenb



Thread overview: 26+ messages
2023-11-24 13:26 David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 01/20] mm/rmap: factor out adding folio range into __folio_add_rmap_range() David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 02/20] mm: add a total mapcount for large folios David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 03/20] mm: convert folio_estimated_sharers() to folio_mapped_shared() and improve it David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 04/20] mm/rmap: pass dst_vma to page_try_dup_anon_rmap() and page_dup_file_rmap() David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 05/20] mm/rmap: abstract total mapcount operations for partially-mappable folios David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 06/20] atomic_seqcount: new (raw) seqcount variant to support concurrent writers David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 07/20] mm/rmap_id: track if one ore multiple MMs map a partially-mappable folio David Hildenbrand
2023-12-17 19:13   ` Nadav Amit
2023-12-18 14:04     ` David Hildenbrand
2023-12-18 14:34       ` Nadav Amit
2023-11-24 13:26 ` [PATCH WIP v1 08/20] mm: pass MM to folio_mapped_shared() David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 09/20] mm: improve folio_mapped_shared() for partially-mappable folios using rmap IDs David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 10/20] mm/memory: COW reuse support for PTE-mapped THP with " David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 11/20] mm/rmap_id: support for 1, 2 and 3 values by manual calculation David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 12/20] mm/rmap: introduce folio_add_anon_rmap_range() David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 13/20] mm/huge_memory: batch rmap operations in __split_huge_pmd_locked() David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 14/20] mm/huge_memory: avoid folio_refcount() < folio_mapcount() " David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 15/20] mm/rmap_id: verify precalculated subids with CONFIG_DEBUG_VM David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 16/20] atomic_seqcount: support a single exclusive writer in the absence of other writers David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 17/20] mm/rmap_id: reduce atomic RMW operations when we are the exclusive writer David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 18/20] atomic_seqcount: use atomic add-return instead of atomic cmpxchg on 64bit David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 19/20] mm/rmap: factor out removing folio range into __folio_remove_rmap_range() David Hildenbrand
2023-11-24 13:26 ` [PATCH WIP v1 20/20] mm/rmap: perform all mapcount operations of large folios under the rmap seqcount David Hildenbrand
2023-11-24 20:55 ` [PATCH WIP v1 00/20] mm: precise "mapped shared" vs. "mapped exclusively" detection for PTE-mapped THP / partially-mappable folios Linus Torvalds
2023-11-25 17:02   ` David Hildenbrand [this message]
