From: Byungchul Park <byungchul@sk.com>
To: David Hildenbrand <david@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>,
Byungchul Park <lkml.byungchul.park@gmail.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
kernel_team@skhynix.com, akpm@linux-foundation.org,
ying.huang@intel.com, vernhao@tencent.com,
mgorman@techsingularity.net, hughd@google.com,
willy@infradead.org, peterz@infradead.org, luto@kernel.org,
tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: Re: [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped
Date: Mon, 3 Jun 2024 18:35:05 +0900 [thread overview]
Message-ID: <20240603093505.GA12549@system.software.com> (raw)
In-Reply-To: <26dc4594-430b-483c-a26c-7e68bade74b0@redhat.com>
On Sat, Jun 01, 2024 at 09:22:17AM +0200, David Hildenbrand wrote:
> On 31.05.24 23:46, Dave Hansen wrote:
> > On 5/31/24 11:04, Byungchul Park wrote:
> > ...
> > > I don't believe you do not agree with the concept itself. Thing is
> > > the current version is not good enough. I will do my best by doing
> > > what I can do.
> >
> > More performance is good. I agree with that.
> >
> > But it has to be weighed against the risk and the complexity. The more
> > I look at this approach, the more I think this is not a good trade off.
> > There's a lot of risk and a lot of complexity and we haven't seen the
> > full complexity picture. The gaps are being fixed by adding complexity
> > in new subsystems (the VFS in this case).
> >
> > There are going to be winners and losers, and this version for example
> > makes file writes lose performance.
> >
> > Just to be crystal clear: I disagree with the concept of leaving stale
> > TLB entries in place in an attempt to gain performance.
>
> There is the inherent problem that a CPU reading from such (unmapped but not
> flushed yet) memory will not get a page fault, which I think is the most
> controversial part here (besides interaction with other deferred TLB
> flushing, and how this glues into the buddy).
>
> What we used to do so far was limiting the timeframe where that could
> happen, under well-controlled circumstances. On the common unmap/zap path,
> we perform the batched TLB flush before any page faults / VMA changes would
> have been possible and munmap() would have returned with "success". Now that
> time frame could be significantly longer.
>
> So in current code, at the point in time where we would process a page
> fault, mmap()/munmap()/... the TLB would have been flushed already.
>
> To "mimic" the old behavior, we'd essentially have to force any page
> faults/mmap/whatsoever to perform the deferred flush such that the CPU will
> see the "reality" again. Not sure how that could be done in a *consistent*

From luf's point of view, the points where the deferred flush should be
performed are simply:

   1. when changing the vma maps, that might be luf'ed.
   2. when updating data of the pages, that might be luf'ed.

All we need to do is to identify those points:

   1. when changing the vma maps, that might be luf'ed.
      a) mmap and munmap, i.e. fault handler or unmap_region().
      b) changing permission to writable, i.e. mprotect or fault handler.
      c) anything I'm missing.
   2. when updating data of the pages, that might be luf'ed.
      a) updating files through vfs, e.g. file_end_write().
      b) updating files through writable maps, i.e. 1-a) or 1-b).
      c) anything I'm missing.

Some of these paths already perform the necessary tlb flush and the
others do not. luf has to handle the others, which is what I've been
focusing on. Of course, there might be something I'm missing.

Worth noting again, luf currently works only on *migration* and
*reclaim*. The open question is when to complete the pending flushes
that migration or reclaim initiated through luf.

Byungchul
> way (check whenever we take the mmap/vma lock etc ...) and if there would
> still be a performance win.
>
> --
> Cheers,
>
> David / dhildenb
Thread overview: 36+ messages (newest: 2024-06-03 9:35 UTC)
2024-05-31 9:19 [PATCH v11 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
2024-05-31 9:19 ` [PATCH v11 01/12] x86/tlb: add APIs manipulating tlb batch's arch data Byungchul Park
2024-05-31 9:19 ` [PATCH v11 02/12] arm64: tlbflush: " Byungchul Park
2024-05-31 9:19 ` [PATCH v11 03/12] riscv, tlb: " Byungchul Park
2024-05-31 9:19 ` [PATCH v11 04/12] x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_flush() Byungchul Park
2024-05-31 9:19 ` [PATCH v11 05/12] mm: buddy: make room for a new variable, ugen, in struct page Byungchul Park
2024-05-31 9:19 ` [PATCH v11 06/12] mm: add folio_put_ugen() to deliver unmap generation number to pcp or buddy Byungchul Park
2024-05-31 9:19 ` [PATCH v11 07/12] mm: add a parameter, unmap generation number, to free_unref_folios() Byungchul Park
2024-05-31 9:19 ` [PATCH v11 08/12] mm/rmap: recognize read-only tlb entries during batched tlb flush Byungchul Park
2024-05-31 9:19 ` [PATCH v11 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped Byungchul Park
2024-05-31 16:12 ` Dave Hansen
2024-05-31 18:04 ` Byungchul Park
2024-05-31 21:46 ` Dave Hansen
2024-05-31 22:09 ` Matthew Wilcox
2024-06-01 2:20 ` Byungchul Park
2024-06-01 7:22 ` David Hildenbrand
2024-06-03 9:35 ` Byungchul Park [this message]
2024-06-03 13:23 ` Dave Hansen
2024-06-03 16:05 ` David Hildenbrand
2024-06-03 16:37 ` Dave Hansen
2024-06-03 17:01 ` Matthew Wilcox
2024-06-03 18:00 ` David Hildenbrand
2024-06-04 8:16 ` Huang, Ying
2024-06-04 0:34 ` Byungchul Park
2024-06-10 13:23 ` Michal Hocko
2024-06-11 0:55 ` Byungchul Park
2024-06-11 11:55 ` Michal Hocko
2024-06-14 2:45 ` Byungchul Park
2024-06-04 1:53 ` Byungchul Park
2024-06-04 4:43 ` Byungchul Park
2024-06-06 8:33 ` David Hildenbrand
2024-06-14 1:57 ` Byungchul Park
2024-06-11 9:12 ` Byungchul Park
2024-05-31 9:19 ` [PATCH v11 10/12] mm: separate move/undo parts from migrate_pages_batch() Byungchul Park
2024-05-31 9:20 ` [PATCH v11 11/12] mm, migrate: apply luf mechanism to unmapping during migration Byungchul Park
2024-05-31 9:20 ` [PATCH v11 12/12] mm, vmscan: apply luf mechanism to unmapping during folio reclaim Byungchul Park