From: Peter Zijlstra <peterz@infradead.org>
To: Rik van Riel <riel@surriel.com>
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, bp@alien8.de,
	dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com,
	nadav.amit@gmail.com, thomas.lendacky@amd.com,
	kernel-team@meta.com, linux-mm@kvack.org,
	akpm@linux-foundation.org, jannh@google.com,
	mhklinux@outlook.com, andrew.cooper3@citrix.com,
	Mark Rutland <mark.rutland@arm.com>,
	Will Deacon <will@kernel.org>
Subject: Re: [PATCH v6 09/12] x86/mm: enable broadcast TLB invalidation for multi-threaded processes
Date: Thu, 23 Jan 2025 10:07:20 +0100
Message-ID: <20250123090720.GI3808@noisy.programming.kicks-ass.net>
In-Reply-To: <5820b18ef0ba48c33a62553fcc444c47f963b907.camel@surriel.com>

On Wed, Jan 22, 2025 at 08:13:03PM -0500, Rik van Riel wrote:
> On Wed, 2025-01-22 at 09:38 +0100, Peter Zijlstra wrote:
> > 
> > Looking at this more... I'm left wondering, did 'we' look at any
> > other architecture code at all?
> > 
> > For example, look at arch/arm64/mm/context.c and see how their
> > reset works. Notably, they are not at all limited to reclaiming
> > free'd ASIDs, but will very aggressively take back all ASIDs
> > except for the currently running ones.
> > 
> I did look at the ARM64 code, and while their reset
> is much nicer, it looks like that comes at a cost for
> each process at context switch time.
> 
> In new_context(), there is a call to check_update_reserved_asid(),
> which will iterate over all CPUs to check whether this
> process's ASID is part of the reserved list that got
> carried over during the rollover.
> 
> I don't know if that would scale well enough to work
> on systems with thousands of CPUs.
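
For reference, the check being described is roughly the following (a
sketch paraphrasing check_update_reserved_asid() in
arch/arm64/mm/context.c, not the exact code):

static bool check_update_reserved_asid(u64 asid, u64 newasid)
{
	bool hit = false;
	int cpu;

	/*
	 * Scan every possible CPU's reserved slot; if the old ASID is
	 * found, rewrite it to the new one.  The walk cannot stop early,
	 * because several CPUs may hold a copy of the same reserved ASID.
	 */
	for_each_possible_cpu(cpu) {
		if (per_cpu(reserved_asids, cpu) == asid) {
			hit = true;
			per_cpu(reserved_asids, cpu) = newasid;
		}
	}

	return hit;
}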

So assuming something like 1k CPUs and !PTI, we only have like 4 PCIDs
per CPU on average, and rollover could be frequent.

An ARM64 system with 1k CPUs and !PTI, on the other hand, would have an
average of 64 ASIDs per CPU, and rollover would be far less frequent.
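
To make the arithmetic explicit (assuming the full ID space is usable,
i.e. !PTI on both sides):

  x86:   2^12 PCIDs = 4096;   4096 / 1024 CPUs ~= 4 per CPU
  arm64: 2^16 ASIDs = 65536; 65536 / 1024 CPUs  = 64 per CPU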

That is to say, their larger ASID space (16 bits, vs our 12) definitely
helps. But at some point yeah, this will become a problem.

Notably, I think a 2-socket Epyc Turin with 192C is one of the larger
off-the-shelf systems atm; that gets you 768 CPUs, which is already
uncomfortably tight with our PCID space.
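
(Spelling that out: 2 sockets * 192 cores * 2 SMT threads = 768 CPUs,
so even with the full !PTI space that is 4096 / 768 ~= 5 PCIDs per CPU
on average.)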





Thread overview: 36+ messages
2025-01-20  2:40 [PATCH v6 00/12] AMD broadcast TLB invalidation Rik van Riel
2025-01-20  2:40 ` [PATCH v6 01/12] x86/mm: make MMU_GATHER_RCU_TABLE_FREE unconditional Rik van Riel
2025-01-20 19:32   ` David Hildenbrand
2025-01-20  2:40 ` [PATCH v6 02/12] x86/mm: remove pv_ops.mmu.tlb_remove_table call Rik van Riel
2025-01-20 19:47   ` David Hildenbrand
2025-01-21  1:03     ` Rik van Riel
2025-01-21  7:46       ` David Hildenbrand
2025-01-21  8:54         ` Peter Zijlstra
2025-01-22 15:48           ` Rik van Riel
2025-01-20  2:40 ` [PATCH v6 03/12] x86/mm: consolidate full flush threshold decision Rik van Riel
2025-01-20  2:40 ` [PATCH v6 04/12] x86/mm: get INVLPGB count max from CPUID Rik van Riel
2025-01-20  2:40 ` [PATCH v6 05/12] x86/mm: add INVLPGB support code Rik van Riel
2025-01-21  9:45   ` Peter Zijlstra
2025-01-22 16:58     ` Rik van Riel
2025-01-20  2:40 ` [PATCH v6 06/12] x86/mm: use INVLPGB for kernel TLB flushes Rik van Riel
2025-01-20  2:40 ` [PATCH v6 07/12] x86/tlb: use INVLPGB in flush_tlb_all Rik van Riel
2025-01-20  2:40 ` [PATCH v6 08/12] x86/mm: use broadcast TLB flushing for page reclaim TLB flushing Rik van Riel
2025-01-20  2:40 ` [PATCH v6 09/12] x86/mm: enable broadcast TLB invalidation for multi-threaded processes Rik van Riel
2025-01-20 14:02   ` Nadav Amit
2025-01-20 16:09     ` Rik van Riel
2025-01-20 20:04       ` Nadav Amit
2025-01-20 22:44         ` Rik van Riel
2025-01-21  7:31           ` Nadav Amit
2025-01-21  9:55   ` Peter Zijlstra
2025-01-21 10:33     ` Peter Zijlstra
2025-01-23  1:40       ` Rik van Riel
2025-01-21 18:48     ` Dave Hansen
2025-01-22  8:38   ` Peter Zijlstra
2025-01-23  1:13     ` Rik van Riel
2025-01-23  9:07       ` Peter Zijlstra [this message]
2025-01-23 12:42         ` Rik van Riel
2025-01-20  2:40 ` [PATCH v6 10/12] x86,tlb: do targeted broadcast flushing from tlbbatch code Rik van Riel
2025-01-20  2:40 ` [PATCH v6 11/12] x86/mm: enable AMD translation cache extensions Rik van Riel
2025-01-20  2:40 ` [PATCH v6 12/12] x86/mm: only invalidate final translations with INVLPGB Rik van Riel
2025-01-20  5:58 ` [PATCH v6 00/12] AMD broadcast TLB invalidation Michael Kelley
2025-01-24 11:41 ` Manali Shukla
