From: Mark Rutland <mark.rutland@arm.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Nadav Amit <nadav.amit@gmail.com>,
Uladzislau Rezki <urezki@gmail.com>,
"Russell King (Oracle)" <linux@armlinux.org.uk>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm <linux-mm@kvack.org>, Christoph Hellwig <hch@lst.de>,
Lorenzo Stoakes <lstoakes@gmail.com>,
Peter Zijlstra <peterz@infradead.org>,
Baoquan He <bhe@redhat.com>, John Ogness <jogness@linutronix.de>,
linux-arm-kernel@lists.infradead.org,
Marc Zyngier <maz@kernel.org>,
x86@kernel.org
Subject: Re: Excessive TLB flush ranges
Date: Wed, 17 May 2023 15:43:36 +0100 [thread overview]
Message-ID: <ZGTngQhcJ19/dMbm@FVFF77S0Q05N.cambridge.arm.com> (raw)
In-Reply-To: <87ttwb5jx3.ffs@tglx>
On Wed, May 17, 2023 at 12:31:04PM +0200, Thomas Gleixner wrote:
> On Tue, May 16 2023 at 18:23, Nadav Amit wrote:
> >> On May 16, 2023, at 5:23 PM, Thomas Gleixner <tglx@linutronix.de> wrote:
> > My experience with non-IPI based TLB invalidations is more limited. IIUC
> > the usage model is that the TLB shootdowns should be invoked ASAP
> > (perhaps each range can be batched, but there is no sense of batching
> > multiple ranges), and then later you would issue some barrier to ensure
> > prior TLB shootdown invocations have been completed.
> >
> > If that is the (use) case, I am not sure the abstraction you used in
> > your prototype is the best one.
>
> The way how arm/arm64 implement that in software is:
>
> magic_barrier1();
> flush_range_with_magic_opcodes();
> magic_barrier2();
FWIW, on arm64 that sequence (for leaf entries only) is:
	/*
	 * Make sure prior writes to the page table entries are visible to all
	 * CPUs, so that *subsequent* page table walks will see the latest
	 * values.
	 *
	 * This is roughly __smp_wmb().
	 */
	dsb(ishst)	// AKA magic_barrier1()

	/*
	 * The "TLBI *IS, <addr>" instructions send a message to all other
	 * CPUs, essentially saying "please start invalidating entries for
	 * <addr>".
	 *
	 * The "TLBI *ALL*IS" instructions send a message to all other CPUs,
	 * essentially saying "please start invalidating all entries".
	 *
	 * In theory, this could be for discontiguous ranges.
	 */
	flush_range_with_magic_opcodes()

	/*
	 * Wait for acknowledgement that all prior TLBIs have completed. This
	 * ensures that all accesses using those translations have also
	 * completed.
	 *
	 * This waits for all relevant CPUs to acknowledge completion of any
	 * prior TLBIs sent by this CPU.
	 */
	dsb(ish)	// AKA magic_barrier2()
	isb()
So you can batch a bunch of "TLBI *IS, <addr>" with a single barrier for
completion, or you can use a single "TLBI *ALL*IS" to invalidate everything.

It can still be worth using the latter, as arm64 has done since commit:

  05ac65305437e8ef ("arm64: fix soft lockup due to large tlb flush range")

... as for a large range, issuing a bunch of "TLBI *IS, <addr>" can take a
while, and can require the recipient CPUs to do more work than they might have
to do for a single "TLBI *ALL*IS".

The point at which invalidating everything is better depends on a number of
factors (e.g. the impact of all CPUs needing to make new page table walks), and
currently we have an arbitrary boundary where we choose to invalidate
everything (which has been tweaked a bit over time); there isn't really a
one-size-fits-all best answer.
Thanks,
Mark.