From: Borislav Petkov <bp@kernel.org>
To: riel@surriel.com
Cc: Manali.Shukla@amd.com, akpm@linux-foundation.org,
	andrew.cooper3@citrix.com, jackmanb@google.com, jannh@google.com,
	kernel-team@meta.com, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, mhklinux@outlook.com, nadav.amit@gmail.com,
	thomas.lendacky@amd.com, x86@kernel.org,
	zhengqi.arch@bytedance.com, Dave Hansen <dave.hansen@intel.com>,
	Borislav Petkov <bp@alien8.de>
Subject: [PATCH v15 01/11] x86/mm: Consolidate full flush threshold decision
Date: Tue,  4 Mar 2025 14:58:06 +0100
Message-ID: <20250304135816.12356-2-bp@kernel.org>
In-Reply-To: <20250304135816.12356-1-bp@kernel.org>

From: Rik van Riel <riel@surriel.com>

Reduce code duplication by consolidating the decision of whether to issue
individual invalidations or a full flush into get_flush_tlb_info().
Previously, flush_tlb_mm_range() and flush_tlb_kernel_range() each
open-coded a variant of the same threshold check.
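
For context, the check compares the number of stride-sized pages in the
range against tlb_single_page_flush_ceiling, a debugfs-tunable knob that
has historically defaulted to 33 on x86. A minimal userspace sketch of the
arithmetic, with the page shift and ceiling hard-coded as assumptions
purely for illustration (this is not the kernel code, just the shape of
the comparison):

  #include <stdbool.h>
  #include <stdio.h>

  /* Assumed values for illustration: 4 KiB pages, historical x86 ceiling. */
  #define ASSUMED_PAGE_SHIFT	12
  #define ASSUMED_CEILING	33UL

  /*
   * Same shape as the consolidated check: promote to a full flush once
   * invalidating the pages one by one would likely be slower.
   */
  static bool wants_full_flush(unsigned long start, unsigned long end,
  			       unsigned int stride_shift)
  {
  	return ((end - start) >> stride_shift) > ASSUMED_CEILING;
  }

  int main(void)
  {
  	/* 33 pages: still flushed individually (prints 0). */
  	printf("%d\n", wants_full_flush(0, 33UL << ASSUMED_PAGE_SHIFT, ASSUMED_PAGE_SHIFT));
  	/* 34 pages: promoted to a full flush (prints 1). */
  	printf("%d\n", wants_full_flush(0, 34UL << ASSUMED_PAGE_SHIFT, ASSUMED_PAGE_SHIFT));
  	return 0;
  }

A side effect of the consolidation, visible in the diff below, is that
flush_tlb_kernel_range() now passes PAGE_SHIFT as the stride and inherits
the same page-count comparison that flush_tlb_mm_range() used, instead of
its previous byte-count variant.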

Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Link: https://lore.kernel.org/r/20250226030129.530345-2-riel@surriel.com
---
 arch/x86/mm/tlb.c | 41 +++++++++++++++++++----------------------
 1 file changed, 19 insertions(+), 22 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index ffc25b348041..dbcb5c968ff9 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1000,6 +1000,15 @@ static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
 	BUG_ON(this_cpu_inc_return(flush_tlb_info_idx) != 1);
 #endif
 
+	/*
+	 * If the number of flushes is so large that a full flush
+	 * would be faster, do a full flush.
+	 */
+	if ((end - start) >> stride_shift > tlb_single_page_flush_ceiling) {
+		start = 0;
+		end = TLB_FLUSH_ALL;
+	}
+
 	info->start		= start;
 	info->end		= end;
 	info->mm		= mm;
@@ -1026,17 +1035,8 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				bool freed_tables)
 {
 	struct flush_tlb_info *info;
+	int cpu = get_cpu();
 	u64 new_tlb_gen;
-	int cpu;
-
-	cpu = get_cpu();
-
-	/* Should we flush just the requested range? */
-	if ((end == TLB_FLUSH_ALL) ||
-	    ((end - start) >> stride_shift) > tlb_single_page_flush_ceiling) {
-		start = 0;
-		end = TLB_FLUSH_ALL;
-	}
 
 	/* This is also a barrier that synchronizes with switch_mm(). */
 	new_tlb_gen = inc_mm_tlb_gen(mm);
@@ -1089,22 +1089,19 @@ static void do_kernel_range_flush(void *info)
 
 void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
-	/* Balance as user space task's flush, a bit conservative */
-	if (end == TLB_FLUSH_ALL ||
-	    (end - start) > tlb_single_page_flush_ceiling << PAGE_SHIFT) {
-		on_each_cpu(do_flush_tlb_all, NULL, 1);
-	} else {
-		struct flush_tlb_info *info;
+	struct flush_tlb_info *info;
+
+	guard(preempt)();
 
-		preempt_disable();
-		info = get_flush_tlb_info(NULL, start, end, 0, false,
-					  TLB_GENERATION_INVALID);
+	info = get_flush_tlb_info(NULL, start, end, PAGE_SHIFT, false,
+				  TLB_GENERATION_INVALID);
 
+	if (info->end == TLB_FLUSH_ALL)
+		on_each_cpu(do_flush_tlb_all, NULL, 1);
+	else
 		on_each_cpu(do_kernel_range_flush, info, 1);
 
-		put_flush_tlb_info();
-		preempt_enable();
-	}
+	put_flush_tlb_info();
 }
 
 /*
-- 
2.43.0



