From: Vlastimil Babka <vbabka@suse.cz>
To: Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>, Will Deacon <will@kernel.org>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
Linus Torvalds <torvalds@linux-foundation.org>,
Waiman Long <longman@redhat.com>,
Mel Gorman <mgorman@techsingularity.net>,
Matthew Wilcox <willy@infradead.org>,
Steven Rostedt <rostedt@goodmis.org>
Subject: [RFC] making nested spin_trylock() work on UP?
Date: Fri, 13 Feb 2026 12:57:43 +0100
Message-ID: <d762c46b-36f0-471a-b5b4-23c8cf5628ae@suse.cz>
Hi,
this is not a real RFC PATCH, but more of a discussion about a possible
direction. I wanted to have a patch at hand, but the layers of spinlock APIs
are rather complex for me to untangle, so I'd rather know first whether it's
even worth trying.
The page allocator has been using a locking scheme for its percpu page
caches (pcp) for years now, based on spin_trylock() with no _irqsave() part.
The point is that if an interrupt hits the locked section and the handler
attempts the same lock, the trylock fails and we fall back to something more
expensive; but that's rare, so we don't pay the irqsave cost all the time in
the fastpaths.
It's similar to, but not the same as, local_trylock_t (which is also newer
anyway), because in some cases we lock the pcp of a non-local cpu to flush
it, which is cheaper than an IPI or queue_work_on().
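For illustration, the fastpath pattern is roughly the following (a
simplified sketch with a made-up helper, not the actual page_alloc.c code):

	/* Sketch of the pcp fastpath: no irqsave, just a trylock. */
	static struct page *pcp_alloc_sketch(struct zone *zone, unsigned int order)
	{
		/* the real code also pins the cpu to avoid migration, omitted here */
		struct per_cpu_pages *pcp = this_cpu_ptr(zone->per_cpu_pageset);
		struct page *page;

		/*
		 * If we interrupted a locked section on this cpu (or a remote
		 * drain holds the lock), the trylock must fail and we bail out.
		 */
		if (!spin_trylock(&pcp->lock))
			return NULL;	/* caller falls back to the zone->lock path */

		page = take_page_from_pcp(pcp, order);	/* made-up helper */
		spin_unlock(&pcp->lock);
		return page;
	}

Obviously this only works if the trylock can actually fail.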
The complication of this scheme has been the UP non-debug spinlock
implementation, which assumes spin_trylock() can't fail on UP and keeps no
state to track whether the lock is held. It just doesn't anticipate this
usage scenario. So to work around that we disable IRQs on UP, complicating
the implementation. Also, we recently found a years-old bug in the
implementation - see commit 038a102535eb ("mm/page_alloc: prevent pcp
corruption with SMP=n").
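For context, the problematic interleaving on UP (had we not disabled IRQs)
is roughly this, all on the one and only cpu:

	spin_lock(&pcp->lock);			/* e.g. draining the pcp */
	  <interrupt>
	    __free_frozen_pages()
	      spin_trylock(&pcp->lock);		/* "succeeds" - the non-debug UP
						   implementation keeps no state,
						   so it cannot fail */
	      /* pcp lists now modified behind the back of the interrupted
	         drain -> corruption */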
So my question is whether we could have a spinlock implementation that
supports this nested spin_trylock() usage, or whether the UP optimization is
still considered too important to lose. I was thinking:
- remove the UP implementation completely - would it increase the overhead
  on SMP=n systems too much, and do we still care?
- make the non-debug implementation a bit like the debug one, so we do have
  the 'locked' state (see include/linux/spinlock_up.h and lock->slock, and
  the rough sketch below). This also adds some overhead, but not as much as
  the full SMP implementation?
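Very roughly, the second option could reuse what the debug variant in
spinlock_up.h / spinlock_types_up.h already does, just without requiring
CONFIG_DEBUG_SPINLOCK (untested sketch):

	typedef struct {
		volatile unsigned int slock;
	} arch_spinlock_t;

	#define __ARCH_SPIN_LOCK_UNLOCKED { 1 }

	static inline void arch_spin_lock(arch_spinlock_t *lock)
	{
		lock->slock = 0;
		barrier();
	}

	static inline int arch_spin_trylock(arch_spinlock_t *lock)
	{
		char oldval = lock->slock;

		/* track the locked state so a nested trylock can actually fail */
		lock->slock = 0;
		barrier();

		return oldval > 0;
	}

	static inline void arch_spin_unlock(arch_spinlock_t *lock)
	{
		barrier();
		lock->slock = 1;
	}

Compared to the current empty struct that's one int per lock and a
load/store pair in the fast paths, which is hopefully acceptable even on the
systems where SMP=n still matters.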
Below is how this would simplify page_alloc.c.
Thanks,
Vlastimil
From 7a0f233ec0ae46324b2db6a09944e93c7cb14459 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka@suse.cz>
Date: Fri, 13 Feb 2026 12:51:02 +0100
Subject: [PATCH] mm/page_alloc: simplify as if UP spin_trylock() were reliable
---
mm/page_alloc.c | 111 +++++++++++++-----------------------------------
1 file changed, 30 insertions(+), 81 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d312ebaa1e77..f147126b6c06 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -95,23 +95,6 @@ typedef int __bitwise fpi_t;
static DEFINE_MUTEX(pcp_batch_high_lock);
#define MIN_PERCPU_PAGELIST_HIGH_FRACTION (8)
-#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT)
-/*
- * On SMP, spin_trylock is sufficient protection.
- * On PREEMPT_RT, spin_trylock is equivalent on both SMP and UP.
- * Pass flags to a no-op inline function to typecheck and silence the unused
- * variable warning.
- */
-static inline void __pcp_trylock_noop(unsigned long *flags) { }
-#define pcp_trylock_prepare(flags) __pcp_trylock_noop(&(flags))
-#define pcp_trylock_finish(flags) __pcp_trylock_noop(&(flags))
-#else
-
-/* UP spin_trylock always succeeds so disable IRQs to prevent re-entrancy. */
-#define pcp_trylock_prepare(flags) local_irq_save(flags)
-#define pcp_trylock_finish(flags) local_irq_restore(flags)
-#endif
-
/*
* Locking a pcp requires a PCP lookup followed by a spinlock. To avoid
* a migration causing the wrong PCP to be locked and remote memory being
@@ -151,48 +134,22 @@ static inline void __pcp_trylock_noop(unsigned long *flags) { }
})
/* struct per_cpu_pages specific helpers. */
-#define pcp_spin_trylock(ptr, UP_flags) \
+#define pcp_spin_trylock(ptr) \
({ \
struct per_cpu_pages *__ret; \
- pcp_trylock_prepare(UP_flags); \
__ret = pcpu_spin_trylock(struct per_cpu_pages, lock, ptr); \
- if (!__ret) \
- pcp_trylock_finish(UP_flags); \
__ret; \
})
-#define pcp_spin_unlock(ptr, UP_flags) \
+#define pcp_spin_unlock(ptr) \
({ \
pcpu_spin_unlock(lock, ptr); \
- pcp_trylock_finish(UP_flags); \
})
-/*
- * With the UP spinlock implementation, when we spin_lock(&pcp->lock) (for i.e.
- * a potentially remote cpu drain) and get interrupted by an operation that
- * attempts pcp_spin_trylock(), we can't rely on the trylock failure due to UP
- * spinlock assumptions making the trylock a no-op. So we have to turn that
- * spin_lock() to a spin_lock_irqsave(). This works because on UP there are no
- * remote cpu's so we can only be locking the only existing local one.
- */
-#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT)
-static inline void __flags_noop(unsigned long *flags) { }
-#define pcp_spin_lock_maybe_irqsave(ptr, flags) \
-({ \
- __flags_noop(&(flags)); \
- spin_lock(&(ptr)->lock); \
-})
-#define pcp_spin_unlock_maybe_irqrestore(ptr, flags) \
-({ \
- spin_unlock(&(ptr)->lock); \
- __flags_noop(&(flags)); \
-})
-#else
-#define pcp_spin_lock_maybe_irqsave(ptr, flags) \
- spin_lock_irqsave(&(ptr)->lock, flags)
-#define pcp_spin_unlock_maybe_irqrestore(ptr, flags) \
- spin_unlock_irqrestore(&(ptr)->lock, flags)
-#endif
+#define pcp_spin_lock_nopin(ptr) \
+ spin_lock(&(ptr)->lock)
+#define pcp_spin_unlock_nopin(ptr) \
+ spin_unlock(&(ptr)->lock)
#ifdef CONFIG_USE_PERCPU_NUMA_NODE_ID
DEFINE_PER_CPU(int, numa_node);
@@ -2583,7 +2540,6 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
bool decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
{
int high_min, to_drain, to_drain_batched, batch;
- unsigned long UP_flags;
bool todo = false;
high_min = READ_ONCE(pcp->high_min);
@@ -2603,9 +2559,9 @@ bool decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
to_drain = pcp->count - pcp->high;
while (to_drain > 0) {
to_drain_batched = min(to_drain, batch);
- pcp_spin_lock_maybe_irqsave(pcp, UP_flags);
+ pcp_spin_lock_nopin(pcp);
free_pcppages_bulk(zone, to_drain_batched, pcp, 0);
- pcp_spin_unlock_maybe_irqrestore(pcp, UP_flags);
+ pcp_spin_unlock_nopin(pcp);
todo = true;
to_drain -= to_drain_batched;
@@ -2622,15 +2578,14 @@ bool decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
*/
void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
{
- unsigned long UP_flags;
int to_drain, batch;
batch = READ_ONCE(pcp->batch);
to_drain = min(pcp->count, batch);
if (to_drain > 0) {
- pcp_spin_lock_maybe_irqsave(pcp, UP_flags);
+ pcp_spin_lock_nopin(pcp);
free_pcppages_bulk(zone, to_drain, pcp, 0);
- pcp_spin_unlock_maybe_irqrestore(pcp, UP_flags);
+ pcp_spin_unlock_nopin(pcp);
}
}
#endif
@@ -2641,11 +2596,10 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
static void drain_pages_zone(unsigned int cpu, struct zone *zone)
{
struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
- unsigned long UP_flags;
int count;
do {
- pcp_spin_lock_maybe_irqsave(pcp, UP_flags);
+ pcp_spin_lock_nopin(pcp);
count = pcp->count;
if (count) {
int to_drain = min(count,
@@ -2654,7 +2608,7 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone)
free_pcppages_bulk(zone, to_drain, pcp, 0);
count -= to_drain;
}
- pcp_spin_unlock_maybe_irqrestore(pcp, UP_flags);
+ pcp_spin_unlock_nopin(pcp);
} while (count);
}
@@ -2853,7 +2807,7 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
*/
static bool free_frozen_page_commit(struct zone *zone,
struct per_cpu_pages *pcp, struct page *page, int migratetype,
- unsigned int order, fpi_t fpi_flags, unsigned long *UP_flags)
+ unsigned int order, fpi_t fpi_flags)
{
int high, batch;
int to_free, to_free_batched;
@@ -2913,9 +2867,9 @@ static bool free_frozen_page_commit(struct zone *zone,
if (to_free == 0 || pcp->count == 0)
break;
- pcp_spin_unlock(pcp, *UP_flags);
+ pcp_spin_unlock(pcp);
- pcp = pcp_spin_trylock(zone->per_cpu_pageset, *UP_flags);
+ pcp = pcp_spin_trylock(zone->per_cpu_pageset);
if (!pcp) {
ret = false;
break;
@@ -2927,7 +2881,7 @@ static bool free_frozen_page_commit(struct zone *zone,
* returned in an unlocked state.
*/
if (smp_processor_id() != cpu) {
- pcp_spin_unlock(pcp, *UP_flags);
+ pcp_spin_unlock(pcp);
ret = false;
break;
}
@@ -2959,7 +2913,6 @@ static bool free_frozen_page_commit(struct zone *zone,
static void __free_frozen_pages(struct page *page, unsigned int order,
fpi_t fpi_flags)
{
- unsigned long UP_flags;
struct per_cpu_pages *pcp;
struct zone *zone;
unsigned long pfn = page_to_pfn(page);
@@ -2995,12 +2948,12 @@ static void __free_frozen_pages(struct page *page, unsigned int order,
add_page_to_zone_llist(zone, page, order);
return;
}
- pcp = pcp_spin_trylock(zone->per_cpu_pageset, UP_flags);
+ pcp = pcp_spin_trylock(zone->per_cpu_pageset);
if (pcp) {
if (!free_frozen_page_commit(zone, pcp, page, migratetype,
- order, fpi_flags, &UP_flags))
+ order, fpi_flags))
return;
- pcp_spin_unlock(pcp, UP_flags);
+ pcp_spin_unlock(pcp);
} else {
free_one_page(zone, page, pfn, order, fpi_flags);
}
@@ -3021,7 +2974,6 @@ void free_frozen_pages_nolock(struct page *page, unsigned int order)
*/
void free_unref_folios(struct folio_batch *folios)
{
- unsigned long UP_flags;
struct per_cpu_pages *pcp = NULL;
struct zone *locked_zone = NULL;
int i, j;
@@ -3064,7 +3016,7 @@ void free_unref_folios(struct folio_batch *folios)
if (zone != locked_zone ||
is_migrate_isolate(migratetype)) {
if (pcp) {
- pcp_spin_unlock(pcp, UP_flags);
+ pcp_spin_unlock(pcp);
locked_zone = NULL;
pcp = NULL;
}
@@ -3083,7 +3035,7 @@ void free_unref_folios(struct folio_batch *folios)
* trylock is necessary as folios may be getting freed
* from IRQ or SoftIRQ context after an IO completion.
*/
- pcp = pcp_spin_trylock(zone->per_cpu_pageset, UP_flags);
+ pcp = pcp_spin_trylock(zone->per_cpu_pageset);
if (unlikely(!pcp)) {
free_one_page(zone, &folio->page, pfn,
order, FPI_NONE);
@@ -3101,14 +3053,14 @@ void free_unref_folios(struct folio_batch *folios)
trace_mm_page_free_batched(&folio->page);
if (!free_frozen_page_commit(zone, pcp, &folio->page,
- migratetype, order, FPI_NONE, &UP_flags)) {
+ migratetype, order, FPI_NONE)) {
pcp = NULL;
locked_zone = NULL;
}
}
if (pcp)
- pcp_spin_unlock(pcp, UP_flags);
+ pcp_spin_unlock(pcp);
folio_batch_reinit(folios);
}
@@ -3359,10 +3311,9 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
struct per_cpu_pages *pcp;
struct list_head *list;
struct page *page;
- unsigned long UP_flags;
/* spin_trylock may fail due to a parallel drain or IRQ reentrancy. */
- pcp = pcp_spin_trylock(zone->per_cpu_pageset, UP_flags);
+ pcp = pcp_spin_trylock(zone->per_cpu_pageset);
if (!pcp)
return NULL;
@@ -3374,7 +3325,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
pcp->free_count >>= 1;
list = &pcp->lists[order_to_pindex(migratetype, order)];
page = __rmqueue_pcplist(zone, order, migratetype, alloc_flags, pcp, list);
- pcp_spin_unlock(pcp, UP_flags);
+ pcp_spin_unlock(pcp);
if (page) {
__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
zone_statistics(preferred_zone, zone, 1);
@@ -5062,7 +5013,6 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
struct page **page_array)
{
struct page *page;
- unsigned long UP_flags;
struct zone *zone;
struct zoneref *z;
struct per_cpu_pages *pcp;
@@ -5156,7 +5106,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
goto failed;
/* spin_trylock may fail due to a parallel drain or IRQ reentrancy. */
- pcp = pcp_spin_trylock(zone->per_cpu_pageset, UP_flags);
+ pcp = pcp_spin_trylock(zone->per_cpu_pageset);
if (!pcp)
goto failed;
@@ -5175,7 +5125,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
if (unlikely(!page)) {
/* Try and allocate at least one page */
if (!nr_account) {
- pcp_spin_unlock(pcp, UP_flags);
+ pcp_spin_unlock(pcp);
goto failed;
}
break;
@@ -5187,7 +5137,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
page_array[nr_populated++] = page;
}
- pcp_spin_unlock(pcp, UP_flags);
+ pcp_spin_unlock(pcp);
__count_zid_vm_events(PGALLOC, zone_idx(zone), nr_account);
zone_statistics(zonelist_zone(ac.preferred_zoneref), zone, nr_account);
@@ -6144,7 +6094,6 @@ static void zone_pcp_update_cacheinfo(struct zone *zone, unsigned int cpu)
{
struct per_cpu_pages *pcp;
struct cpu_cacheinfo *cci;
- unsigned long UP_flags;
pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
cci = get_cpu_cacheinfo(cpu);
@@ -6155,12 +6104,12 @@ static void zone_pcp_update_cacheinfo(struct zone *zone, unsigned int cpu)
* This can reduce zone lock contention without hurting
* cache-hot pages sharing.
*/
- pcp_spin_lock_maybe_irqsave(pcp, UP_flags);
+ pcp_spin_lock_nopin(pcp);
if ((cci->per_cpu_data_slice_size >> PAGE_SHIFT) > 3 * pcp->batch)
pcp->flags |= PCPF_FREE_HIGH_BATCH;
else
pcp->flags &= ~PCPF_FREE_HIGH_BATCH;
- pcp_spin_unlock_maybe_irqrestore(pcp, UP_flags);
+ pcp_spin_unlock_nopin(pcp);
}
void setup_pcp_cacheinfo(unsigned int cpu)
--
2.53.0