linux-mm.kvack.org archive mirror
From: Dave Hansen <dave.hansen@intel.com>
To: Rik van Riel <riel@surriel.com>, x86@kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com,
	dave.hansen@linux.intel.com, luto@kernel.org,
	peterz@infradead.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, hpa@zytor.com, akpm@linux-foundation.org,
	nadav.amit@gmail.com, zhengqi.arch@bytedance.com,
	linux-mm@kvack.org
Subject: Re: [PATCH 06/12] x86/mm: use INVLPGB for kernel TLB flushes
Date: Thu, 9 Jan 2025 13:18:57 -0800	[thread overview]
Message-ID: <426011a9-1fbc-415c-bac7-df5d67417df3@intel.com> (raw)
In-Reply-To: <855298e6e981378c3afeab93b8c3cb821a7a5b88.camel@surriel.com>

On 1/9/25 12:16, Rik van Riel wrote:
> On Mon, 2025-01-06 at 09:21 -0800, Dave Hansen wrote:
>> On 12/30/24 09:53, Rik van Riel wrote:
>>
>>
>>> +static void broadcast_kernel_range_flush(unsigned long start,
>>> +					 unsigned long end)
>>> +{
>>> +	unsigned long addr;
>>> +	unsigned long maxnr = invlpgb_count_max;
>>> +	unsigned long threshold = tlb_single_page_flush_ceiling * maxnr;
>>
>> The 'tlb_single_page_flush_ceiling' value was determined by
>> looking at _local_ invalidation cost. Could you talk a bit about
>> why it's also a good value to use for remote invalidations? Does it
>> hold up for INVLPGB the same way it did for good ol' INVLPG? Has
>> there been any explicit testing here to find a good value?
>>
>> I'm also confused by the multiplication here. Let's say
>> invlpgb_count_max==20 and tlb_single_page_flush_ceiling==30.
>>
>> You would need to switch away from single-address invalidation
>> when the number of addresses is >20 for INVLPGB functional reasons.
>> But you'd also need to switch away when >30 for performance
>> reasons (tlb_single_page_flush_ceiling).
>>
>> But I don't understand how that would make the threshold 20*30=600
>> invalidations.
> 
> I have not done any measurement to see how
> flushing with INVLPGB stacks up versus
> local TLB flushes.
> 
> What makes INVLPGB potentially slower:
> - These flushes are done globally
> 
> What makes INVLPGB potentially faster:
> - Multiple flushes can be pending simultaneously,
>   and executed in any convenient order by the CPUs.
> - Wait once on completion of all the queued flushes.
> 
> Another thing that makes things interesting is the 
> TLB entry coalescing done by AMD CPUs.
> 
> When multiple pages are both virtually and physically
> contiguous in memory (which is fairly common), the
> CPU can use a single TLB entry to map up to 8 of them.
> 
> That means if we issue eg. 20 INVLPGB flushes for
> 8 4kB pages each, instead of the CPUs needing to
> remove 160 TLB entries, there might only be 50.

I honestly don't expect there to be any real difference in INVLPGB
execution on the sender side based on what the receivers have in their TLB.

> I just guessed at the numbers used in my code,
> while trying to sort out the details elsewhere
> in the code.
> 
> How should we go about measuring the tradeoffs
> between invalidation time, and the time spent
> in TLB misses from flushing unnecessary stuff?

Well, we did a bunch of benchmarks for INVLPG. We could dig that back up
and repeat some of it.

But actually I think INVLPGB is *WAY* better than INVLPG here.  INVLPG
doesn't have ranged invalidation. It will only architecturally
invalidate multiple 4K entries when the hardware fractured them in the
first place. I think we should probably take advantage of what INVLPGB
can do instead of following the INVLPG approach.

INVLPGB will invalidate a range no matter where the underlying entries
came from. Its "increment the virtual address at the 2M boundary" mode
will invalidate entries of any size. That's my reading of the docs at
least. Is that everyone else's reading too?

So, let's pick a number "Z" which is >= invlpgb_count_max. Z could
arguably be set to tlb_single_page_flush_ceiling. Then do this:

	   4k -> Z*4k => use 4k step
	>Z*4k -> Z*2M => use 2M step
	>Z*2M	      => invalidate everything

Invalidations <=Z*4k are exact. They never zap extra TLB entries.

Invalidations that use the 2M step *might* unnecessarily zap some extra
4k mappings in the last 2M, but this is *WAY* better than invalidating
everything.

"Invalidate everything" obviously stinks, but it should only be for
pretty darn big invalidations. This approach can also do a true ranged
INVLPGB for many more cases than the existing proposal. The only issue
would be if the 2M step is substantially more expensive than the 4k step.
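The step selection above could be sketched roughly as follows. This is a minimal userspace illustration, not kernel code: the constants, the `Z` value, and the `pick_flush()` helper are all hypothetical names chosen for this example, with Z standing in for something >= invlpgb_count_max (arguably tlb_single_page_flush_ceiling):

```c
#include <assert.h>

/* Illustrative constants, not the kernel's definitions. */
#define PAGE_SIZE_4K	(4096UL)
#define PMD_SIZE_2M	(2UL * 1024 * 1024)

/* Z >= invlpgb_count_max; could arguably be tlb_single_page_flush_ceiling. */
static const unsigned long Z = 33;

enum flush_kind {
	FLUSH_4K_STEP,	/* exact: never zaps extra TLB entries */
	FLUSH_2M_STEP,	/* may over-zap some 4k entries in the last 2M */
	FLUSH_ALL,	/* very large range: invalidate everything */
};

/* Hypothetical helper: pick the INVLPGB stride for [start, end). */
static enum flush_kind pick_flush(unsigned long start, unsigned long end)
{
	unsigned long len = end - start;

	if (len <= Z * PAGE_SIZE_4K)
		return FLUSH_4K_STEP;
	if (len <= Z * PMD_SIZE_2M)
		return FLUSH_2M_STEP;
	return FLUSH_ALL;
}
```

With Z = 33 this keeps small flushes exact, covers everything up to 66M with a ranged 2M-step INVLPGB, and only falls back to a full flush beyond that.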

...
>> I also wonder if this would all get simpler if we give in and 
>> *always* call get_flush_tlb_info(). That would provide a nice
>> single place to consolidate the "all vs. ranged" flush logic.
> 
> Possibly. That might be a good way to unify that threshold check?
> 
> That should probably be a separate patch, though.

Yes, it should be part of refactoring that comes before the INVLPGB
enabling.




