From: Valentin Schneider <vschneid@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, rcu@vger.kernel.org,
	x86@kernel.org, linux-arm-kernel@lists.infradead.org,
	loongarch@lists.linux.dev, linux-riscv@lists.infradead.org,
	linux-arch@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	"H. Peter Anvin", Andy Lutomirski, Peter Zijlstra,
	Arnaldo Carvalho de Melo, Josh Poimboeuf, Paolo Bonzini,
	Arnd Bergmann, Frederic Weisbecker, "Paul E. McKenney",
	Jason Baron, Steven Rostedt, Ard Biesheuvel, Sami Tolvanen,
	"David S. Miller", Neeraj Upadhyay, Joel Fernandes, Josh Triplett,
	Boqun Feng, Uladzislau Rezki, Mathieu Desnoyers, Mel Gorman,
	Andrew Morton, Masahiro Yamada, Han Shen, Rik van Riel, Jann Horn,
	Dan Carpenter, Oleg Nesterov, Juri Lelli, Clark Williams,
	Yair Podemsky, Marcelo Tosatti, Daniel Wagner, Petr Tesarik,
	Shrikanth Hegde
Subject: [RFC PATCH v7 30/31] x86/mm, mm/vmalloc: Defer kernel TLB flush IPIs under CONFIG_COALESCE_TLBI=y
Date: Fri, 14 Nov 2025 16:14:27 +0100
Message-ID: <20251114151428.1064524-10-vschneid@redhat.com>
In-Reply-To: <20251114150133.1056710-1-vschneid@redhat.com>
References: <20251114150133.1056710-1-vschneid@redhat.com>

Previous commits have added an unconditional TLB flush right after
switching to the kernel CR3 on NOHZ_FULL CPUs, and a software signal to
determine whether a CPU has its kernel CR3 loaded.

Using these two components, we can now safely defer kernel TLB flush
IPIs targeting NOHZ_FULL CPUs executing in userspace (i.e. with the
user CR3 loaded).

Note that the COALESCE_TLBI config option is introduced in a later
commit, when the whole feature is implemented.

Signed-off-by: Valentin Schneider <vschneid@redhat.com>
---
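Illustrative note (not part of the patch): the deferral decision boils
down to "only IPI a CPU that can currently hold kernel translations".
The toy userspace model below sketches that check; the arrays and the
should_send_flush_ipi() helper are hypothetical stand-ins for
housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE) and the per-CPU
kernel_cr3_loaded flag, chosen for illustration only.

	#include <stdbool.h>
	#include <stdio.h>

	#define NR_CPUS 4

	/* Stand-in for per_cpu(kernel_cr3_loaded, cpu): true while the
	 * CPU runs with the kernel CR3 loaded. */
	static bool kernel_cr3_loaded[NR_CPUS] = { true, true, false, true };

	/* Stand-in for housekeeping_cpu(): CPUs outside nohz_full=
	 * always take the IPI. */
	static bool housekeeping[NR_CPUS] = { true, false, false, false };

	/* Mirrors the logic of flush_tlb_kernel_cond(): the IPI is
	 * skipped only for isolated CPUs executing in userspace
	 * (user CR3 loaded). */
	static bool should_send_flush_ipi(int cpu)
	{
		return housekeeping[cpu] || kernel_cr3_loaded[cpu];
	}

	int main(void)
	{
		for (int cpu = 0; cpu < NR_CPUS; cpu++)
			printf("CPU%d: %s\n", cpu,
			       should_send_flush_ipi(cpu) ?
			       "send IPI now" : "defer to next kernel entry");
		return 0;
	}

A CPU skipped by this check cannot consume stale kernel translations:
the previous commits make it flush its TLB unconditionally when it next
switches to the kernel CR3.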

 arch/x86/include/asm/tlbflush.h |  3 +++
 arch/x86/mm/tlb.c               | 34 ++++++++++++++++++++++++++-------
 mm/vmalloc.c                    | 34 ++++++++++++++++++++++++++++-----
 3 files changed, 59 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index e39ae95b85072..6d533afd70952 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -321,6 +321,9 @@ extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				unsigned long end, unsigned int stride_shift,
 				bool freed_tables);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
+#ifdef CONFIG_COALESCE_TLBI
+extern void flush_tlb_kernel_range_deferrable(unsigned long start, unsigned long end);
+#endif
 
 static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
 {
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5d221709353e0..1ce80f8775e7a 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -1529,23 +1530,24 @@ static void do_kernel_range_flush(void *info)
 		flush_tlb_one_kernel(addr);
 }
 
-static void kernel_tlb_flush_all(struct flush_tlb_info *info)
+static void kernel_tlb_flush_all(smp_cond_func_t cond, struct flush_tlb_info *info)
 {
 	if (cpu_feature_enabled(X86_FEATURE_INVLPGB))
 		invlpgb_flush_all();
 	else
-		on_each_cpu(do_flush_tlb_all, NULL, 1);
+		on_each_cpu_cond(cond, do_flush_tlb_all, NULL, 1);
 }
 
-static void kernel_tlb_flush_range(struct flush_tlb_info *info)
+static void kernel_tlb_flush_range(smp_cond_func_t cond, struct flush_tlb_info *info)
 {
 	if (cpu_feature_enabled(X86_FEATURE_INVLPGB))
 		invlpgb_kernel_range_flush(info);
 	else
-		on_each_cpu(do_kernel_range_flush, info, 1);
+		on_each_cpu_cond(cond, do_kernel_range_flush, info, 1);
 }
 
-void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+static inline void
+__flush_tlb_kernel_range(smp_cond_func_t cond, unsigned long start, unsigned long end)
 {
 	struct flush_tlb_info *info;
 
@@ -1555,13 +1557,31 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 				  TLB_GENERATION_INVALID);
 
 	if (info->end == TLB_FLUSH_ALL)
-		kernel_tlb_flush_all(info);
+		kernel_tlb_flush_all(cond, info);
 	else
-		kernel_tlb_flush_range(info);
+		kernel_tlb_flush_range(cond, info);
 
 	put_flush_tlb_info();
 }
 
+void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+{
+	__flush_tlb_kernel_range(NULL, start, end);
+}
+
+#ifdef CONFIG_COALESCE_TLBI
+static bool flush_tlb_kernel_cond(int cpu, void *info)
+{
+	return housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE) ||
+	       per_cpu(kernel_cr3_loaded, cpu);
+}
+
+void flush_tlb_kernel_range_deferrable(unsigned long start, unsigned long end)
+{
+	__flush_tlb_kernel_range(flush_tlb_kernel_cond, start, end);
+}
+#endif
+
 /*
  * This can be used from process context to figure out what the value of
  * CR3 is without needing to do a (slow) __read_cr3().
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 798b2ed21e460..76ec10d56623b 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -494,6 +494,30 @@ void vunmap_range_noflush(unsigned long start, unsigned long end)
 	__vunmap_range_noflush(start, end);
 }
 
+#ifdef CONFIG_COALESCE_TLBI
+/*
+ * !!! BIG FAT WARNING !!!
+ *
+ * The CPU is free to cache any part of the paging hierarchy it wants at any
+ * time. It's also free to set accessed and dirty bits at any time, even for
+ * instructions that may never execute architecturally.
+ *
+ * This means that deferring a TLB flush affecting freed page-table-pages (IOW,
+ * keeping them in a CPU's paging hierarchy cache) is a recipe for disaster.
+ *
+ * This isn't a problem for deferral of TLB flushes in vmalloc, because
+ * page-table-pages used for vmap() mappings are never freed - see how
+ * __vunmap_range_noflush() walks the whole mapping but only clears the leaf PTEs.
+ * If this ever changes, TLB flush deferral will cause misery.
+ */
+void __weak flush_tlb_kernel_range_deferrable(unsigned long start, unsigned long end)
+{
+	flush_tlb_kernel_range(start, end);
+}
+#else
+#define flush_tlb_kernel_range_deferrable(start, end) flush_tlb_kernel_range(start, end)
+#endif
+
 /**
  * vunmap_range - unmap kernel virtual addresses
  * @addr: start of the VM area to unmap
@@ -507,7 +531,7 @@ void vunmap_range(unsigned long addr, unsigned long end)
 {
 	flush_cache_vunmap(addr, end);
 	vunmap_range_noflush(addr, end);
-	flush_tlb_kernel_range(addr, end);
+	flush_tlb_kernel_range_deferrable(addr, end);
 }
 
 static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
@@ -2339,7 +2363,7 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end,
 	nr_purge_nodes = cpumask_weight(&purge_nodes);
 	if (nr_purge_nodes > 0) {
-		flush_tlb_kernel_range(start, end);
+		flush_tlb_kernel_range_deferrable(start, end);
 
 		/* One extra worker is per a lazy_max_pages() full set minus one. */
 		nr_purge_helpers = atomic_long_read(&vmap_lazy_nr) / lazy_max_pages();
@@ -2442,7 +2466,7 @@ static void free_unmap_vmap_area(struct vmap_area *va)
 	flush_cache_vunmap(va->va_start, va->va_end);
 	vunmap_range_noflush(va->va_start, va->va_end);
 	if (debug_pagealloc_enabled_static())
-		flush_tlb_kernel_range(va->va_start, va->va_end);
+		flush_tlb_kernel_range_deferrable(va->va_start, va->va_end);
 
 	free_vmap_area_noflush(va);
 }
@@ -2890,7 +2914,7 @@ static void vb_free(unsigned long addr, unsigned long size)
 	vunmap_range_noflush(addr, addr + size);
 
 	if (debug_pagealloc_enabled_static())
-		flush_tlb_kernel_range(addr, addr + size);
+		flush_tlb_kernel_range_deferrable(addr, addr + size);
 
 	spin_lock(&vb->lock);
@@ -2955,7 +2979,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 	free_purged_blocks(&purge_list);
 
 	if (!__purge_vmap_area_lazy(start, end, false) && flush)
-		flush_tlb_kernel_range(start, end);
+		flush_tlb_kernel_range_deferrable(start, end);
 	mutex_unlock(&vmap_purge_lock);
 }
-- 
2.51.0