linux-mm.kvack.org archive mirror
From: David Hildenbrand <david@redhat.com>
To: Byungchul Park <byungchul@sk.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org,
	ying.huang@intel.com, vernhao@tencent.com,
	mgorman@techsingularity.net, hughd@google.com,
	willy@infradead.org, peterz@infradead.org, luto@kernel.org,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: Re: [RESEND PATCH v8 0/8] Reduce TLB flushes by 94% by improving folio migration
Date: Thu, 29 Feb 2024 10:33:44 +0100	[thread overview]
Message-ID: <54053f0d-024b-4064-8d82-235cc71b61f8@redhat.com> (raw)
In-Reply-To: <20240229092810.GC64252@system.software.com>

On 29.02.24 10:28, Byungchul Park wrote:
> On Mon, Feb 26, 2024 at 12:06:05PM +0900, Byungchul Park wrote:
>> Hi everyone,
>>
>> While working with a tiered memory system, e.g. CXL memory, I have
>> been facing migration overhead, especially TLB shootdowns on
>> promotion or demotion between tiers. Most TLB shootdowns on
>> migration through hinting faults can already be avoided thanks to
>> Huang Ying's work, commit 4d4b6d66db ("mm,unmap: avoid flushing TLB
>> in batch if PTE is inaccessible"). See the following link:
>>
>> https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/
>>
>> However, that only covers migrations driven by hinting faults. It
>> would be much better to have a general mechanism that reduces the
>> number of TLB flushes and TLB misses and can ultimately be applied
>> to any type of migration, though I have tried it only for tiering
>> so far.
>>
>> I'm suggesting a mechanism called MIGRC, which stands for 'Migration
>> Read Copy', that reduces TLB flushes by keeping both the source and
>> destination folios of a migration around until all required TLB
>> flushes have been done, as long as those folios are not mapped by
>> any write-permission PTE.
>>
>> To achieve that:
>>
>>     1. For folios mapped only by non-writable TLB entries, skip the
>>        TLB flush at migration by keeping both the source and
>>        destination folios; the flush is handled later, at a better
>>        time.
>>
>>     2. When any non-writable TLB entry becomes writable, e.g. through
>>        the fault handler, give up the migrc mechanism and perform the
>>        required TLB flush right away.
>>
>> I observed a big improvement in the number of TLB flushes and TLB
>> misses in the following evaluation using XSBench:
>>
>>     1. itlb flushes were reduced by 93.9%.
>>     2. dtlb thread flushes were reduced by 43.5%.
>>     3. stlb flushes were reduced by 24.9%.
> 
> Hi guys,

Hi,

> 
> The TLB flush reduction is 25% ~ 94%; IMO, it's unbelievable.

Can't we find at least one benchmark that shows an actual improvement on 
some system?

Staring at the number of TLB flushes is nice, but if it does not 
affect the actual performance of at least one benchmark, why do we 
even care?

"12 files changed, 597 insertions(+), 59 deletions(-)"

is not negligible and needs proper review.

That review needs motivation. The current numbers do not seem to be 
motivating enough :)

-- 
Cheers,

David / dhildenb




Thread overview: 14+ messages
2024-02-26  3:06 Byungchul Park
2024-02-26  3:06 ` [RESEND PATCH v8 1/8] x86/tlb: Add APIs manipulating tlb batch's arch data Byungchul Park
2024-02-26  3:06 ` [RESEND PATCH v8 2/8] arm64: tlbflush: " Byungchul Park
2024-02-26  3:06 ` [RESEND PATCH v8 3/8] mm/rmap: Recognize read-only TLB entries during batched TLB flush Byungchul Park
2024-02-26  3:06 ` [RESEND PATCH v8 4/8] x86/tlb, mm/rmap: Separate arch_tlbbatch_clear() out of arch_tlbbatch_flush() Byungchul Park
2024-02-26  3:06 ` [RESEND PATCH v8 5/8] mm: Separate move/undo doing on folio list from migrate_pages_batch() Byungchul Park
2024-02-26  3:06 ` [RESEND PATCH v8 6/8] mm: Add APIs to free a folio directly to the buddy bypassing pcp Byungchul Park
2024-02-26  3:06 ` [RESEND PATCH v8 7/8] mm: Defer TLB flush by keeping both src and dst folios at migration Byungchul Park
2024-02-26  3:06 ` [RESEND PATCH v8 8/8] mm: Pause migrc mechanism at high memory pressure Byungchul Park
2024-02-29  9:28 ` [RESEND PATCH v8 0/8] Reduce TLB flushes by 94% by improving folio migration Byungchul Park
2024-02-29  9:33   ` David Hildenbrand [this message]
2024-03-01  0:33     ` Huang, Ying
2024-03-04  2:51       ` Byungchul Park
2024-03-04  2:39     ` Byungchul Park
