From: Dave Hansen <dave.hansen@intel.com>
To: Rik van Riel <riel@surriel.com>, x86@kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com,
	dave.hansen@linux.intel.com, luto@kernel.org,
	peterz@infradead.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, hpa@zytor.com, akpm@linux-foundation.org,
	nadav.amit@gmail.com, zhengqi.arch@bytedance.com,
	linux-mm@kvack.org
Subject: Re: [PATCH 06/12] x86/mm: use INVLPGB for kernel TLB flushes
Date: Mon, 6 Jan 2025 09:21:11 -0800	[thread overview]
Message-ID: <bbe96ff4-3633-4008-b524-6b183a64caf6@intel.com> (raw)
In-Reply-To: <20241230175550.4046587-7-riel@surriel.com>

On 12/30/24 09:53, Rik van Riel wrote:
> Use broadcast TLB invalidation for kernel addresses when available.
> 
> This stops us from having to send IPIs for kernel TLB flushes.

Could this be changed to the imperative mood, please?

	Remove the need to send IPIs for kernel TLB flushes.

> +static void broadcast_kernel_range_flush(unsigned long start, unsigned long end)
> +{
> +	unsigned long addr;
> +	unsigned long maxnr = invlpgb_count_max;
> +	unsigned long threshold = tlb_single_page_flush_ceiling * maxnr;

The 'tlb_single_page_flush_ceiling' value was determined by looking at
_local_ invalidation cost. Could you talk a bit about why it's also a
good value to use for remote invalidations? Does it hold up for INVLPGB
the same way it did for good ol' INVLPG? Has there been any explicit
testing here to find a good value?

I'm also confused by the multiplication here. Let's say
invlpgb_count_max==20 and tlb_single_page_flush_ceiling==30.

You would need to switch away from single-address invalidation when the
number of addresses is >20 for INVLPGB functional reasons. But you'd
also need to switch away when >30 for performance reasons
(tlb_single_page_flush_ceiling).

But I don't understand how that would make the threshold 20*30=600
invalidations.

> +	/*
> +	 * TLBSYNC only waits for flushes originating on the same CPU.
> +	 * Disabling migration allows us to wait on all flushes.
> +	 */

Imperative mood here too, please:

	Disable migration to wait on all flushes.

> +	guard(preempt)();
> +
> +	if (end == TLB_FLUSH_ALL ||
> +	    (end - start) > threshold << PAGE_SHIFT) {

This is basically a copy-and-paste of the "range vs. global" flush
logic, but taking 'invlpgb_count_max' into account.

It would be ideal if those limit checks could be consolidated. I suspect
that once the 'threshold' calculation above gets clarified, they will be
easier to consolidate.

BTW, what is a typical value for 'invlpgb_count_max'? Is it more or less
than the typical value for 'tlb_single_page_flush_ceiling'?

Maybe we should just lower 'tlb_single_page_flush_ceiling' if
'invlpgb_count_max' falls below it so we only have _one_ runtime value
to consider.


> +		invlpgb_flush_all();
> +	} else {
> +		unsigned long nr;
> +		for (addr = start; addr < end; addr += nr << PAGE_SHIFT) {
> +			nr = min((end - addr) >> PAGE_SHIFT, maxnr);
> +			invlpgb_flush_addr(addr, nr);
> +		}
> +	}
> +
> +	tlbsync();
> +}
> +
>  static void do_kernel_range_flush(void *info)
>  {
>  	struct flush_tlb_info *f = info;
> @@ -1089,6 +1115,11 @@ static void do_kernel_range_flush(void *info)
>  
>  void flush_tlb_kernel_range(unsigned long start, unsigned long end)
>  {
> +	if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
> +		broadcast_kernel_range_flush(start, end);
> +		return;
> +	}
> +
>  	/* Balance as user space task's flush, a bit conservative */
>  	if (end == TLB_FLUSH_ALL ||
>  	    (end - start) > tlb_single_page_flush_ceiling << PAGE_SHIFT) {

I also wonder if this would all get simpler if we give in and *always*
call get_flush_tlb_info(). That would provide a nice single place to
consolidate the "all vs. ranged" flush logic.


