From: Rik van Riel <riel@surriel.com>
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org,
dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com,
nadav.amit@gmail.com, thomas.lendacky@amd.com,
kernel-team@meta.com, linux-mm@kvack.org,
akpm@linux-foundation.org, jannh@google.com,
mhklinux@outlook.com, andrew.cooper3@citrix.com,
Rik van Riel <riel@surriel.com>,
Dave Hansen <dave.hansen@intel.com>
Subject: [PATCH v8 03/12] x86/mm: consolidate full flush threshold decision
Date: Tue, 4 Feb 2025 20:39:52 -0500
Message-ID: <20250205014033.3626204-4-riel@surriel.com>
In-Reply-To: <20250205014033.3626204-1-riel@surriel.com>
Reduce code duplication by consolidating, inside get_flush_tlb_info(),
the decision of whether to do individual invalidations or a full
flush.
Signed-off-by: Rik van Riel <riel@surriel.com>
Suggested-by: Dave Hansen <dave.hansen@intel.com>
---
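Notes for reviewers (not part of the commit message):

The consolidated check escalates to a full flush once a range spans
more than tlb_single_page_flush_ceiling pages of the given stride.
Below is a minimal userspace sketch of that decision, assuming the
current default ceiling of 33 and stubbing TLB_FLUSH_ALL; it is an
illustration, not the kernel code itself:

#include <stdio.h>

#define TLB_FLUSH_ALL	(~0UL)	/* stand-in for the kernel's sentinel */

/* Current default in arch/x86/mm/tlb.c; tunable via debugfs. */
static unsigned long tlb_single_page_flush_ceiling = 33;

/*
 * Mirrors the consolidated logic in get_flush_tlb_info(): round the
 * range out to the stride so partial pages are fully invalidated,
 * then escalate to a full flush when per-page invalidations would
 * exceed the ceiling.
 */
static void decide_flush(unsigned long *start, unsigned long *end,
			 unsigned int stride_shift)
{
	unsigned long stride = 1UL << stride_shift;

	*start &= ~(stride - 1);			/* round_down() */
	*end = (*end + stride - 1) & ~(stride - 1);	/* round_up() */

	if ((*end - *start) >> stride_shift > tlb_single_page_flush_ceiling) {
		*start = 0;
		*end = TLB_FLUSH_ALL;
	}
}

int main(void)
{
	unsigned long start = 0x100000, end = 0x140000;	/* 256 KiB */

	decide_flush(&start, &end, 12);			/* 4 KiB stride */
	printf("full flush: %s\n", end == TLB_FLUSH_ALL ? "yes" : "no");
	return 0;
}

With a 4 KiB stride, a 256 KiB range is 64 pages, which exceeds the
ceiling, so the whole range collapses to a full flush. This is also
why the two callers that previously passed a stride_shift of 0 now
pass PAGE_SHIFT: the consolidated math shifts by stride_shift, and a
shift of 0 would count bytes rather than pages.
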
arch/x86/mm/tlb.c | 52 ++++++++++++++++++++++++-----------------------
1 file changed, 27 insertions(+), 25 deletions(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 6cf881a942bb..02e1f5c5bca3 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1000,8 +1000,13 @@ static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
BUG_ON(this_cpu_inc_return(flush_tlb_info_idx) != 1);
#endif
- info->start = start;
- info->end = end;
+ /*
+ * Round the start and end addresses to the page size specified
+ * by the stride shift. This ensures partial pages at the end of
+ * a range get fully invalidated.
+ */
+ info->start = round_down(start, 1 << stride_shift);
+ info->end = round_up(end, 1 << stride_shift);
info->mm = mm;
info->stride_shift = stride_shift;
info->freed_tables = freed_tables;
@@ -1009,6 +1014,15 @@ static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
info->initiating_cpu = smp_processor_id();
info->trim_cpumask = 0;
+ /*
+ * If the number of flushes is so large that a full flush
+ * would be faster, do a full flush.
+ */
+ if ((end - start) >> stride_shift > tlb_single_page_flush_ceiling) {
+ info->start = 0;
+ info->end = TLB_FLUSH_ALL;
+ }
+
return info;
}
@@ -1026,17 +1040,8 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
bool freed_tables)
{
struct flush_tlb_info *info;
+ int cpu = get_cpu();
u64 new_tlb_gen;
- int cpu;
-
- cpu = get_cpu();
-
- /* Should we flush just the requested range? */
- if ((end == TLB_FLUSH_ALL) ||
- ((end - start) >> stride_shift) > tlb_single_page_flush_ceiling) {
- start = 0;
- end = TLB_FLUSH_ALL;
- }
/* This is also a barrier that synchronizes with switch_mm(). */
new_tlb_gen = inc_mm_tlb_gen(mm);
@@ -1089,22 +1094,19 @@ static void do_kernel_range_flush(void *info)
void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
- /* Balance as user space task's flush, a bit conservative */
- if (end == TLB_FLUSH_ALL ||
- (end - start) > tlb_single_page_flush_ceiling << PAGE_SHIFT) {
- on_each_cpu(do_flush_tlb_all, NULL, 1);
- } else {
- struct flush_tlb_info *info;
+ struct flush_tlb_info *info;
- preempt_disable();
- info = get_flush_tlb_info(NULL, start, end, 0, false,
- TLB_GENERATION_INVALID);
+ guard(preempt)();
+
+ info = get_flush_tlb_info(NULL, start, end, PAGE_SHIFT, false,
+ TLB_GENERATION_INVALID);
+ if (info->end == TLB_FLUSH_ALL)
+ on_each_cpu(do_flush_tlb_all, NULL, 1);
+ else
on_each_cpu(do_kernel_range_flush, info, 1);
- put_flush_tlb_info();
- preempt_enable();
- }
+ put_flush_tlb_info();
}
/*
@@ -1276,7 +1278,7 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
int cpu = get_cpu();
- info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, 0, false,
+ info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, PAGE_SHIFT, false,
TLB_GENERATION_INVALID);
/*
* flush_tlb_multi() is not optimized for the common case in which only
--
2.47.1
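
For reviewers unfamiliar with the scope-based cleanup API used in the
flush_tlb_kernel_range() hunk: guard(preempt)() comes from the
kernel's cleanup helpers (include/linux/cleanup.h; the preempt guard
itself is defined in include/linux/preempt.h) and re-enables
preemption automatically on every exit from the enclosing scope. A
rough userspace model of the mechanism, built on the same compiler
cleanup attribute the kernel macros use; every name below is
hypothetical:

#include <stdio.h>

static int preempt_count;

static void fake_preempt_disable(void) { preempt_count++; }
static void fake_preempt_enable(void)  { preempt_count--; }

/*
 * Destructor invoked by the compiler whenever the guard variable
 * goes out of scope, on any return path.
 */
static void preempt_guard_cleanup(int *unused)
{
	(void)unused;
	fake_preempt_enable();
}

#define guard_preempt() \
	__attribute__((cleanup(preempt_guard_cleanup), unused)) \
	int __guard = (fake_preempt_disable(), 0)

static void demo(int bail_early)
{
	guard_preempt();	/* "preemption" disabled from here on */

	if (bail_early)
		return;		/* cleanup still runs on this path */

	/* ... the flush work would happen here ... */
}

int main(void)
{
	demo(1);
	demo(0);
	/* Balanced on both paths, so this prints 0. */
	printf("preempt_count after both calls: %d\n", preempt_count);
	return 0;
}

The practical effect in the patch is that flush_tlb_kernel_range() no
longer needs an explicit preempt_enable() pairing, and any future
early return stays balanced automatically.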