* [PATCH 1/1] mm/mmu_gather: replace IPI with synchronize_rcu() when batch allocation fails
@ 2026-02-23 3:36 Lance Yang
2026-02-23 9:29 ` David Hildenbrand (Arm)
0 siblings, 1 reply; 7+ messages in thread
From: Lance Yang @ 2026-02-23 3:36 UTC (permalink / raw)
To: akpm, peterz
Cc: david, dave.hansen, will, aneesh.kumar, npiggin, linux-arch,
linux-mm, linux-kernel, Lance Yang
From: Lance Yang <lance.yang@linux.dev>
When freeing page tables, we try to batch them. If allocating the batch fails
(GFP_NOWAIT), __tlb_remove_table_one() falls back to freeing the table
immediately, without batching.
On !CONFIG_PT_RECLAIM, this fallback sends an IPI to all CPUs via
tlb_remove_table_sync_one(). That disrupts every CPU even when only a single
process is unmapping memory, and the IPI broadcast was reported to hurt RT
workloads[1].
tlb_remove_table_sync_one() synchronizes with lockless page-table walkers
(e.g. GUP-fast) that rely on IRQ disabling. These walkers use
local_irq_disable(), which is also an RCU read-side critical section.
synchronize_rcu() waits for all such sections to complete, providing the
same guarantee as IPI but without disrupting all CPUs.
Since batch allocation has already failed, we are on a slow path anyway, so
replacing the IPI with synchronize_rcu() is fine.
We are in process context (unmap_region, exit_mmap) with only mmap_lock held,
which is a sleeping lock. synchronize_rcu() will catch any invalid context
via might_sleep().
[1] https://lore.kernel.org/linux-mm/1b27a3fa-359a-43d0-bdeb-c31341749367@kernel.org/
Link: https://lore.kernel.org/linux-mm/20260202150957.GD1282955@noisy.programming.kicks-ass.net/
Link: https://lore.kernel.org/linux-mm/dfdfeac9-5cd5-46fc-a5c1-9ccf9bd3502a@intel.com/
Link: https://lore.kernel.org/linux-mm/bc489455-bb18-44dc-8518-ae75abda6bec@kernel.org/
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Dave Hansen <dave.hansen@intel.com>
Suggested-by: David Hildenbrand (Arm) <david@kernel.org>
Signed-off-by: Lance Yang <lance.yang@linux.dev>
---
mm/mmu_gather.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index fe5b6a031717..df670c219260 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -339,7 +339,8 @@ static inline void __tlb_remove_table_one(void *table)
#else
static inline void __tlb_remove_table_one(void *table)
{
- tlb_remove_table_sync_one();
+ if (IS_ENABLED(CONFIG_MMU_GATHER_RCU_TABLE_FREE))
+ synchronize_rcu();
__tlb_remove_table(table);
}
#endif /* CONFIG_PT_RECLAIM */
--
2.49.0
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH 1/1] mm/mmu_gather: replace IPI with synchronize_rcu() when batch allocation fails
2026-02-23 3:36 [PATCH 1/1] mm/mmu_gather: replace IPI with synchronize_rcu() when batch allocation fails Lance Yang
@ 2026-02-23 9:29 ` David Hildenbrand (Arm)
2026-02-23 12:58 ` Lance Yang
0 siblings, 1 reply; 7+ messages in thread
From: David Hildenbrand (Arm) @ 2026-02-23 9:29 UTC (permalink / raw)
To: Lance Yang, akpm, peterz
Cc: dave.hansen, will, aneesh.kumar, npiggin, linux-arch, linux-mm,
linux-kernel
On 2/23/26 04:36, Lance Yang wrote:
> From: Lance Yang <lance.yang@linux.dev>
>
> When freeing page tables, we try to batch them. If batch allocation fails
> (GFP_NOWAIT), __tlb_remove_table_one() immediately frees the one without
> batching.
>
> On !CONFIG_PT_RECLAIM, the fallback sends an IPI to all CPUs via
> tlb_remove_table_sync_one(). It disrupts all CPUs even when only a single
> process is unmapping memory. IPI broadcast was reported to hurt RT
> workloads[1].
>
> tlb_remove_table_sync_one() synchronizes with lockless page-table walkers
> (e.g. GUP-fast) that rely on IRQ disabling. These walkers use
> local_irq_disable(), which is also an RCU read-side critical section.
> synchronize_rcu() waits for all such sections to complete, providing the
> same guarantee as IPI but without disrupting all CPUs.
>
> Since batch allocation has already failed, we are on a slow path anyway, so
> replacing the IPI with synchronize_rcu() is fine.
>
> We are in process context (unmap_region, exit_mmap) with only mmap_lock
> held, a sleeping lock. synchronize_rcu() will catch any invalid context
> via might_sleep().
>
> [1] https://lore.kernel.org/linux-mm/1b27a3fa-359a-43d0-bdeb-c31341749367@kernel.org/
>
> Link: https://lore.kernel.org/linux-mm/20260202150957.GD1282955@noisy.programming.kicks-ass.net/
> Link: https://lore.kernel.org/linux-mm/dfdfeac9-5cd5-46fc-a5c1-9ccf9bd3502a@intel.com/
> Link: https://lore.kernel.org/linux-mm/bc489455-bb18-44dc-8518-ae75abda6bec@kernel.org/
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Suggested-by: Dave Hansen <dave.hansen@intel.com>
> Suggested-by: David Hildenbrand (Arm) <david@kernel.org>
I think it was primarily Peter and Dave suggesting that :)
> Signed-off-by: Lance Yang <lance.yang@linux.dev>
> ---
> mm/mmu_gather.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index fe5b6a031717..df670c219260 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -339,7 +339,8 @@ static inline void __tlb_remove_table_one(void *table)
> #else
> static inline void __tlb_remove_table_one(void *table)
> {
> - tlb_remove_table_sync_one();
> + if (IS_ENABLED(CONFIG_MMU_GATHER_RCU_TABLE_FREE))
> + synchronize_rcu();
That should work.
Reading all the comments for tlb_remove_table_smp_sync(), I wonder
whether we should wrap that in a tlb_remove_table_sync_rcu() function,
with proper kerneldoc for the CONFIG_MMU_GATHER_RCU_TABLE_FREE variant
where we discuss how this relates to tlb_remove_table_sync_one() (and
tlb_remove_table_smp_sync()).
--
Cheers,
David
* Re: [PATCH 1/1] mm/mmu_gather: replace IPI with synchronize_rcu() when batch allocation fails
2026-02-23 9:29 ` David Hildenbrand (Arm)
@ 2026-02-23 12:58 ` Lance Yang
2026-02-23 13:02 ` David Hildenbrand (Arm)
2026-02-23 15:31 ` Dave Hansen
0 siblings, 2 replies; 7+ messages in thread
From: Lance Yang @ 2026-02-23 12:58 UTC (permalink / raw)
To: david
Cc: akpm, aneesh.kumar, dave.hansen, lance.yang, linux-arch,
linux-kernel, linux-mm, npiggin, peterz, will
On Mon, Feb 23, 2026 at 10:29:56AM +0100, David Hildenbrand (Arm) wrote:
>On 2/23/26 04:36, Lance Yang wrote:
>> From: Lance Yang <lance.yang@linux.dev>
>>
>> When freeing page tables, we try to batch them. If batch allocation fails
>> (GFP_NOWAIT), __tlb_remove_table_one() immediately frees the one without
>> batching.
>>
>> On !CONFIG_PT_RECLAIM, the fallback sends an IPI to all CPUs via
>> tlb_remove_table_sync_one(). It disrupts all CPUs even when only a single
>> process is unmapping memory. IPI broadcast was reported to hurt RT
>> workloads[1].
>>
>> tlb_remove_table_sync_one() synchronizes with lockless page-table walkers
>> (e.g. GUP-fast) that rely on IRQ disabling. These walkers use
>> local_irq_disable(), which is also an RCU read-side critical section.
>> synchronize_rcu() waits for all such sections to complete, providing the
>> same guarantee as IPI but without disrupting all CPUs.
>>
>> Since batch allocation has already failed, we are on a slow path anyway, so
>> replacing the IPI with synchronize_rcu() is fine.
>>
>> We are in process context (unmap_region, exit_mmap) with only mmap_lock
>> held, a sleeping lock. synchronize_rcu() will catch any invalid context
>> via might_sleep().
>>
>> [1] https://lore.kernel.org/linux-mm/1b27a3fa-359a-43d0-bdeb-c31341749367@kernel.org/
>>
>> Link: https://lore.kernel.org/linux-mm/20260202150957.GD1282955@noisy.programming.kicks-ass.net/
>> Link: https://lore.kernel.org/linux-mm/dfdfeac9-5cd5-46fc-a5c1-9ccf9bd3502a@intel.com/
>> Link: https://lore.kernel.org/linux-mm/bc489455-bb18-44dc-8518-ae75abda6bec@kernel.org/
>> Suggested-by: Peter Zijlstra <peterz@infradead.org>
>> Suggested-by: Dave Hansen <dave.hansen@intel.com>
>> Suggested-by: David Hildenbrand (Arm) <david@kernel.org>
>
>I think it was primarily Peter and Dave suggesting that :)
:)
>> Signed-off-by: Lance Yang <lance.yang@linux.dev>
>> ---
>> mm/mmu_gather.c | 3 ++-
>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
>> index fe5b6a031717..df670c219260 100644
>> --- a/mm/mmu_gather.c
>> +++ b/mm/mmu_gather.c
>> @@ -339,7 +339,8 @@ static inline void __tlb_remove_table_one(void *table)
>> #else
>> static inline void __tlb_remove_table_one(void *table)
>> {
>> - tlb_remove_table_sync_one();
>> + if (IS_ENABLED(CONFIG_MMU_GATHER_RCU_TABLE_FREE))
>> + synchronize_rcu();
>
>That should work.
>
>Reading all the comments for tlb_remove_table_smp_sync(), I wonder
>whether we should wrap that in a tlb_remove_table_sync_rcu() function,
>with proper kerneldoc for the CONFIG_MMU_GATHER_RCU_TABLE_FREE variant
>where we discuss how this relates to tlb_remove_table_sync_one() (and
>tlb_remove_table_smp_sync()).
Good point! That would be cleaner and better ;)
How about the following:
---8<---
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index fe5b6a031717..ea5503d3e650 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -296,6 +296,24 @@ static void tlb_remove_table_free(struct mmu_table_batch *batch)
call_rcu(&batch->rcu, tlb_remove_table_rcu);
}
+/**
+ * tlb_remove_table_sync_rcu() - synchronize with software page-table walkers
+ *
+ * Like tlb_remove_table_sync_one() but uses RCU grace period instead of IPI
+ * broadcast. Should be used in slow paths where sleeping is acceptable.
+ *
+ * Software/Lockless page-table walkers use local_irq_disable(), which is also
+ * an RCU read-side critical section. synchronize_rcu() waits for all such
+ * sections, providing the same guarantee as tlb_remove_table_sync_one() but
+ * without disrupting all CPUs with IPIs.
+ *
+ * Context: Can sleep/block. Cannot be called from any atomic context.
+ */
+static void tlb_remove_table_sync_rcu(void)
+{
+ synchronize_rcu();
+}
+
#else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
static void tlb_remove_table_free(struct mmu_table_batch *batch)
@@ -303,6 +321,10 @@ static void tlb_remove_table_free(struct mmu_table_batch *batch)
__tlb_remove_table_free(batch);
}
+static void tlb_remove_table_sync_rcu(void)
+{
+}
+
#endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
/*
@@ -339,7 +361,7 @@ static inline void __tlb_remove_table_one(void *table)
#else
static inline void __tlb_remove_table_one(void *table)
{
- tlb_remove_table_sync_one();
+ tlb_remove_table_sync_rcu();
__tlb_remove_table(table);
}
#endif /* CONFIG_PT_RECLAIM */
---
Thanks for the suggestion!
Lance
* Re: [PATCH 1/1] mm/mmu_gather: replace IPI with synchronize_rcu() when batch allocation fails
2026-02-23 12:58 ` Lance Yang
@ 2026-02-23 13:02 ` David Hildenbrand (Arm)
2026-02-23 15:31 ` Dave Hansen
1 sibling, 0 replies; 7+ messages in thread
From: David Hildenbrand (Arm) @ 2026-02-23 13:02 UTC (permalink / raw)
To: Lance Yang
Cc: akpm, aneesh.kumar, dave.hansen, linux-arch, linux-kernel,
linux-mm, npiggin, peterz, will
On 2/23/26 13:58, Lance Yang wrote:
>
> On Mon, Feb 23, 2026 at 10:29:56AM +0100, David Hildenbrand (Arm) wrote:
>> On 2/23/26 04:36, Lance Yang wrote:
>>> From: Lance Yang <lance.yang@linux.dev>
>>>
>>> When freeing page tables, we try to batch them. If batch allocation fails
>>> (GFP_NOWAIT), __tlb_remove_table_one() immediately frees the one without
>>> batching.
>>>
>>> On !CONFIG_PT_RECLAIM, the fallback sends an IPI to all CPUs via
>>> tlb_remove_table_sync_one(). It disrupts all CPUs even when only a single
>>> process is unmapping memory. IPI broadcast was reported to hurt RT
>>> workloads[1].
>>>
>>> tlb_remove_table_sync_one() synchronizes with lockless page-table walkers
>>> (e.g. GUP-fast) that rely on IRQ disabling. These walkers use
>>> local_irq_disable(), which is also an RCU read-side critical section.
>>> synchronize_rcu() waits for all such sections to complete, providing the
>>> same guarantee as IPI but without disrupting all CPUs.
>>>
>>> Since batch allocation has already failed, we are on a slow path anyway, so
>>> replacing the IPI with synchronize_rcu() is fine.
>>>
>>> We are in process context (unmap_region, exit_mmap) with only mmap_lock
>>> held, a sleeping lock. synchronize_rcu() will catch any invalid context
>>> via might_sleep().
>>>
>>> [1] https://lore.kernel.org/linux-mm/1b27a3fa-359a-43d0-bdeb-c31341749367@kernel.org/
>>>
>>> Link: https://lore.kernel.org/linux-mm/20260202150957.GD1282955@noisy.programming.kicks-ass.net/
>>> Link: https://lore.kernel.org/linux-mm/dfdfeac9-5cd5-46fc-a5c1-9ccf9bd3502a@intel.com/
>>> Link: https://lore.kernel.org/linux-mm/bc489455-bb18-44dc-8518-ae75abda6bec@kernel.org/
>>> Suggested-by: Peter Zijlstra <peterz@infradead.org>
>>> Suggested-by: Dave Hansen <dave.hansen@intel.com>
>>> Suggested-by: David Hildenbrand (Arm) <david@kernel.org>
>>
>> I think it was primarily Peter and Dave suggesting that :)
>
> :)
>
>>> Signed-off-by: Lance Yang <lance.yang@linux.dev>
>>> ---
>>> mm/mmu_gather.c | 3 ++-
>>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
>>> index fe5b6a031717..df670c219260 100644
>>> --- a/mm/mmu_gather.c
>>> +++ b/mm/mmu_gather.c
>>> @@ -339,7 +339,8 @@ static inline void __tlb_remove_table_one(void *table)
>>> #else
>>> static inline void __tlb_remove_table_one(void *table)
>>> {
>>> - tlb_remove_table_sync_one();
>>> + if (IS_ENABLED(CONFIG_MMU_GATHER_RCU_TABLE_FREE))
>>> + synchronize_rcu();
>>
>> That should work.
>>
>> Reading all the comments for tlb_remove_table_smp_sync(), I wonder
>> whether we should wrap that in a tlb_remove_table_sync_rcu() function,
>> with proper kerneldoc for the CONFIG_MMU_GATHER_RCU_TABLE_FREE variant
>> where we discuss how this relates to tlb_remove_table_sync_one() (and
>> tlb_remove_table_smp_sync()).
>
> Good point! That would be cleaner and better ;)
>
> How about the following:
>
> ---8<---
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index fe5b6a031717..ea5503d3e650 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -296,6 +296,24 @@ static void tlb_remove_table_free(struct mmu_table_batch *batch)
> call_rcu(&batch->rcu, tlb_remove_table_rcu);
> }
>
> +/**
> + * tlb_remove_table_sync_rcu() - synchronize with software page-table walkers
> + *
> + * Like tlb_remove_table_sync_one() but uses RCU grace period instead of IPI
> + * broadcast. Should be used in slow paths where sleeping is acceptable.
> + *
> + * Software/Lockless page-table walkers use local_irq_disable(), which is also
> + * an RCU read-side critical section. synchronize_rcu() waits for all such
> + * sections, providing the same guarantee as tlb_remove_table_sync_one() but
> + * without disrupting all CPUs with IPIs.
> + *
> + * Context: Can sleep/block. Cannot be called from any atomic context.
> + */
> +static void tlb_remove_table_sync_rcu(void)
> +{
> + synchronize_rcu();
> +}
> +
> #else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
>
> static void tlb_remove_table_free(struct mmu_table_batch *batch)
> @@ -303,6 +321,10 @@ static void tlb_remove_table_free(struct mmu_table_batch *batch)
> __tlb_remove_table_free(batch);
> }
>
> +static void tlb_remove_table_sync_rcu(void)
> +{
> +}
> +
> #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
>
> /*
> @@ -339,7 +361,7 @@ static inline void __tlb_remove_table_one(void *table)
> #else
> static inline void __tlb_remove_table_one(void *table)
> {
> - tlb_remove_table_sync_one();
> + tlb_remove_table_sync_rcu();
> __tlb_remove_table(table);
> }
> #endif /* CONFIG_PT_RECLAIM */
> ---
>
> Thanks for the suggestion!
> Lance
LGTM, but let's hear other opinions.
--
Cheers,
David
* Re: [PATCH 1/1] mm/mmu_gather: replace IPI with synchronize_rcu() when batch allocation fails
2026-02-23 12:58 ` Lance Yang
2026-02-23 13:02 ` David Hildenbrand (Arm)
@ 2026-02-23 15:31 ` Dave Hansen
2026-02-23 16:29 ` Lance Yang
1 sibling, 1 reply; 7+ messages in thread
From: Dave Hansen @ 2026-02-23 15:31 UTC (permalink / raw)
To: Lance Yang, david
Cc: akpm, aneesh.kumar, linux-arch, linux-kernel, linux-mm, npiggin,
peterz, will
On 2/23/26 04:58, Lance Yang wrote:
...
> +/**
> + * tlb_remove_table_sync_rcu() - synchronize with software page-table walkers
> + *
> + * Like tlb_remove_table_sync_one() but uses RCU grace period instead of IPI
> + * broadcast. Should be used in slow paths where sleeping is acceptable.
Just a nit on comments: Use imperative voice:
... Use in slow paths where sleeping is acceptable.
> + * Software/Lockless page-table walkers use local_irq_disable(), which is also
> + * an RCU read-side critical section. synchronize_rcu() waits for all such
> + * sections, providing the same guarantee as tlb_remove_table_sync_one() but
> + * without disrupting all CPUs with IPIs.
Yep, synchronize_rcu() is likely slower (longer wall clock time) but
less disruptive to other CPUs.
Is it worth explaining here that this should be used when code really
needs to _wait_ and *not* for freeing memory? Freeing memory should use
RCU callbacks that don't cause latency spikes in this thread, not this.
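The split Dave describes maps onto the two RCU primitives mm/mmu_gather.c
already uses; roughly (a kernel-context sketch, not the exact code):

```c
/*
 * Freeing memory: defer the free to an RCU callback so this thread does
 * not block for a grace period (what tlb_remove_table_free() does on
 * CONFIG_MMU_GATHER_RCU_TABLE_FREE).
 */
call_rcu(&batch->rcu, tlb_remove_table_rcu);

/*
 * Waiting: block this thread until all pre-existing read-side critical
 * sections (lockless walkers) have finished. Only for code that really
 * needs to wait before proceeding.
 */
synchronize_rcu();
```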
> + * Context: Can sleep/block. Cannot be called from any atomic context.
As a general rule, expressing constraints like this is best done in
code, not comments, so:
might_sleep();
or
WARN_ON_ONCE(in_atomic());
seem appropriate.
I didn't see any obvious warning like that in the top levels of
synchronize_rcu().
> +static void tlb_remove_table_sync_rcu(void)
> +{
> + synchronize_rcu();
> +}
> +
> #else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
>
> static void tlb_remove_table_free(struct mmu_table_batch *batch)
> @@ -303,6 +321,10 @@ static void tlb_remove_table_free(struct mmu_table_batch *batch)
> __tlb_remove_table_free(batch);
> }
>
> +static void tlb_remove_table_sync_rcu(void)
> +{
> +}
> +
> #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
This seems a _little_ dangerous to even define. We don't want this
sneaking into use when it doesn't do anything.
* Re: [PATCH 1/1] mm/mmu_gather: replace IPI with synchronize_rcu() when batch allocation fails
2026-02-23 15:31 ` Dave Hansen
@ 2026-02-23 16:29 ` Lance Yang
2026-02-23 16:35 ` Dave Hansen
0 siblings, 1 reply; 7+ messages in thread
From: Lance Yang @ 2026-02-23 16:29 UTC (permalink / raw)
To: Dave Hansen
Cc: akpm, aneesh.kumar, linux-arch, linux-kernel, linux-mm, npiggin,
peterz, will, david
On 2026/2/23 23:31, Dave Hansen wrote:
> On 2/23/26 04:58, Lance Yang wrote:
> ...
>> +/**
>> + * tlb_remove_table_sync_rcu() - synchronize with software page-table walkers
>> + *
>> + * Like tlb_remove_table_sync_one() but uses RCU grace period instead of IPI
>> + * broadcast. Should be used in slow paths where sleeping is acceptable.
>
> Just a nit on comments: Use imperative voice:
>
> ... Use in slow paths where sleeping is acceptable.
Okay, thanks.
>> + * Software/Lockless page-table walkers use local_irq_disable(), which is also
>> + * an RCU read-side critical section. synchronize_rcu() waits for all such
>> + * sections, providing the same guarantee as tlb_remove_table_sync_one() but
>> + * without disrupting all CPUs with IPIs.
>
> Yep, synchronize_rcu() is likely slower (longer wall clock time) but
> less disruptive to other CPUs.
>
> Is it worth explaining here that this should be used when code really
> needs to _wait_ and *not* for freeing memory? Freeing memory should use
> RCU callbacks that don't cause latency spikes in this thread, not this.
Good point! Worth clarifying. Something like:
Note: Use this when code really needs to wait for synchronization,
*not* for freeing memory. Memory freeing should use RCU callbacks
that don't cause latency spikes in this thread.
>> + * Context: Can sleep/block. Cannot be called from any atomic context.
>
> As a general rule, expressing constraints like this is best done in
> code, not comments, so:
>
> might_sleep();
> or
> WARN_ON_ONCE(in_atomic());
>
> seem appropriate.
>
> I didn't see any obvious warning like that in the top levels of
> synchronize_rcu().
Yep, synchronize_rcu() does call might_sleep() internally:
synchronize_rcu()
-> synchronize_rcu_normal()
-> wait_rcu_gp() -> __wait_rcu_gp()
-> might_sleep()
But adding an explicit might_sleep() here makes the constraint
more obvious. I'll add it :)
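With that, the RCU-side variant might end up looking like this (an untested
sketch of the CONFIG_MMU_GATHER_RCU_TABLE_FREE side only):

```c
static void tlb_remove_table_sync_rcu(void)
{
	/*
	 * Redundant with synchronize_rcu() internals, but documents the
	 * non-atomic-context requirement in code rather than in comments.
	 */
	might_sleep();
	synchronize_rcu();
}
```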
>> +static void tlb_remove_table_sync_rcu(void)
>> +{
>> + synchronize_rcu();
>> +}
>> +
>> #else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
>>
>> static void tlb_remove_table_free(struct mmu_table_batch *batch)
>> @@ -303,6 +321,10 @@ static void tlb_remove_table_free(struct mmu_table_batch *batch)
>> __tlb_remove_table_free(batch);
>> }
>>
>> +static void tlb_remove_table_sync_rcu(void)
>> +{
>> +}
>> +
>> #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
> This seems a _little_ dangerous to even define. We don't want this
> sneaking into use when it doesn't do anything.
This follows the same pattern as tlb_remove_table_sync_one(), which
also has an empty stub in the !CONFIG_MMU_GATHER_RCU_TABLE_FREE case.
That said, the stub should live in tlb.h next to
tlb_remove_table_sync_one(). I'll put it there.
Thanks,
Lance
* Re: [PATCH 1/1] mm/mmu_gather: replace IPI with synchronize_rcu() when batch allocation fails
2026-02-23 16:29 ` Lance Yang
@ 2026-02-23 16:35 ` Dave Hansen
0 siblings, 0 replies; 7+ messages in thread
From: Dave Hansen @ 2026-02-23 16:35 UTC (permalink / raw)
To: Lance Yang
Cc: akpm, aneesh.kumar, linux-arch, linux-kernel, linux-mm, npiggin,
peterz, will, david
On 2/23/26 08:29, Lance Yang wrote:
>
> Note: Use this when code really needs to wait for synchronization,
> *not* for freeing memory. Memory freeing should use RCU callbacks
> that don't cause latency spikes in this thread.
Yeah, but I'd probably mix it in with the other chit chat about
non-atomic contexts. I'd also make it something like:
Do not use for freeing memory. Use RCU callbacks instead to
avoid latency spikes.
to make the commands clear.
end of thread, other threads:[~2026-02-23 16:35 UTC | newest]
Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
2026-02-23 3:36 [PATCH 1/1] mm/mmu_gather: replace IPI with synchronize_rcu() when batch allocation fails Lance Yang
2026-02-23 9:29 ` David Hildenbrand (Arm)
2026-02-23 12:58 ` Lance Yang
2026-02-23 13:02 ` David Hildenbrand (Arm)
2026-02-23 15:31 ` Dave Hansen
2026-02-23 16:29 ` Lance Yang
2026-02-23 16:35 ` Dave Hansen