* [PATCH mm] mm/page_alloc: Avoid second trylock of zone->lock
@ 2025-03-31  0:28 Alexei Starovoitov
From: Alexei Starovoitov @ 2025-03-31  0:28 UTC
  To: Linus Torvalds
  Cc: bpf, daniel, andrii, martin.lau, akpm, peterz, vbabka, bigeasy,
	rostedt, shakeel.butt, mhocko, linux-mm, linux-kernel

From: Alexei Starovoitov <ast@kernel.org>

spin_trylock followed by spin_lock causes an extra write cache
access. If the lock is contended it may cause unnecessary cache
line bouncing, and it also executes a redundant irq restore/save
pair. Therefore, check alloc/fpi_flags first and use either
spin_trylock or spin_lock.
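
For illustration only, a minimal userspace sketch of the pattern
(not kernel code; the MY_TRYLOCK flag and acquire() helper are
hypothetical names). The point is that each path touches the lock
word with exactly one acquisition call, instead of a trylock
followed by a second blocking lock on the contended path:

  #include <pthread.h>
  #include <stdbool.h>

  #define MY_TRYLOCK 0x1   /* hypothetical "opportunistic" caller flag */

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  static bool acquire(unsigned int flags)
  {
          if (flags & MY_TRYLOCK) {
                  /* opportunistic path: a single trylock, no fallback */
                  return pthread_mutex_trylock(&lock) == 0;
          }
          /* normal path: a single blocking lock, no prior trylock */
          pthread_mutex_lock(&lock);
          return true;
  }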

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Fixes: 97769a53f117 ("mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 mm/page_alloc.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e3ea5bf5c459..ffbb5678bc2f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1268,11 +1268,12 @@ static void free_one_page(struct zone *zone, struct page *page,
 	struct llist_head *llhead;
 	unsigned long flags;
 
-	if (!spin_trylock_irqsave(&zone->lock, flags)) {
-		if (unlikely(fpi_flags & FPI_TRYLOCK)) {
+	if (unlikely(fpi_flags & FPI_TRYLOCK)) {
+		if (!spin_trylock_irqsave(&zone->lock, flags)) {
 			add_page_to_zone_llist(zone, page, order);
 			return;
 		}
+	} else {
 		spin_lock_irqsave(&zone->lock, flags);
 	}
 
@@ -2341,9 +2342,10 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 	unsigned long flags;
 	int i;
 
-	if (!spin_trylock_irqsave(&zone->lock, flags)) {
-		if (unlikely(alloc_flags & ALLOC_TRYLOCK))
+	if (unlikely(alloc_flags & ALLOC_TRYLOCK)) {
+		if (!spin_trylock_irqsave(&zone->lock, flags))
 			return 0;
+	} else {
 		spin_lock_irqsave(&zone->lock, flags);
 	}
 	for (i = 0; i < count; ++i) {
@@ -2964,9 +2966,10 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 
 	do {
 		page = NULL;
-		if (!spin_trylock_irqsave(&zone->lock, flags)) {
-			if (unlikely(alloc_flags & ALLOC_TRYLOCK))
+		if (unlikely(alloc_flags & ALLOC_TRYLOCK)) {
+			if (!spin_trylock_irqsave(&zone->lock, flags))
 				return NULL;
+		} else {
 			spin_lock_irqsave(&zone->lock, flags);
 		}
 		if (alloc_flags & ALLOC_HIGHATOMIC)
-- 
2.47.1
