linux-mm.kvack.org archive mirror
* [PATCH 0/8] mm: introduce zone lock guards
@ 2026-03-06 16:05 Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock() Dmitry Ilvokhin
                   ` (8 more replies)
  0 siblings, 9 replies; 16+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin, Steven Rostedt

This series defines DEFINE_LOCK_GUARD_1 for zone_lock_irqsave and uses
it across several mm functions to replace explicit lock/unlock patterns
with automatic scope-based cleanup.

This simplifies the control flow by removing 'flags' variables, goto
labels, and redundant unlock calls.

Patches are ordered by decreasing value. The first six patches simplify
the control flow by removing gotos, multiple unlock paths, or 'ret'
variables. The last two are simpler lock/unlock pair conversions that
only remove 'flags' and can be dropped if considered unnecessary churn.

Based on mm-new.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>

Dmitry Ilvokhin (8):
  mm: use zone lock guard in reserve_highatomic_pageblock()
  mm: use zone lock guard in unset_migratetype_isolate()
  mm: use zone lock guard in unreserve_highatomic_pageblock()
  mm: use zone lock guard in set_migratetype_isolate()
  mm: use zone lock guard in take_page_off_buddy()
  mm: use zone lock guard in put_page_back_buddy()
  mm: use zone lock guard in free_pcppages_bulk()
  mm: use zone lock guard in __offline_isolated_pages()

 include/linux/mmzone_lock.h |  9 +++++
 mm/page_alloc.c             | 50 +++++++++------------------
 mm/page_isolation.c         | 67 ++++++++++++++++---------------------
 3 files changed, 53 insertions(+), 73 deletions(-)

-- 
2.47.3



^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
@ 2026-03-06 16:05 ` Dmitry Ilvokhin
  2026-03-06 17:53   ` Andrew Morton
  2026-03-06 16:05 ` [PATCH 2/8] mm: use zone lock guard in unset_migratetype_isolate() Dmitry Ilvokhin
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 16+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin, Steven Rostedt

Use the newly introduced zone_lock_irqsave lock guard in
reserve_highatomic_pageblock() to replace the explicit lock/unlock and
goto out_unlock pattern with automatic scope-based cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 include/linux/mmzone_lock.h |  9 +++++++++
 mm/page_alloc.c             | 13 +++++--------
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/include/linux/mmzone_lock.h b/include/linux/mmzone_lock.h
index 6bd8b026029f..fe399a4505ba 100644
--- a/include/linux/mmzone_lock.h
+++ b/include/linux/mmzone_lock.h
@@ -97,4 +97,13 @@ static inline void zone_unlock_irq(struct zone *zone)
 	spin_unlock_irq(&zone->_lock);
 }
 
+DEFINE_LOCK_GUARD_1(zone_lock_irqsave, struct zone,
+		zone_lock_irqsave(_T->lock, _T->flags),
+		zone_unlock_irqrestore(_T->lock, _T->flags),
+		unsigned long flags)
+DECLARE_LOCK_GUARD_1_ATTRS(zone_lock_irqsave,
+		__acquires(_T), __releases(*(struct zone **)_T))
+#define class_zone_lock_irqsave_constructor(_T) \
+	WITH_LOCK_GUARD_1_ATTRS(zone_lock_irqsave, _T)
+
 #endif /* _LINUX_MMZONE_LOCK_H */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 75ee81445640..260fb003822a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3407,7 +3407,7 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
 					 struct zone *zone)
 {
 	int mt;
-	unsigned long max_managed, flags;
+	unsigned long max_managed;
 
 	/*
 	 * The number reserved as: minimum is 1 pageblock, maximum is
@@ -3421,29 +3421,26 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
 	if (zone->nr_reserved_highatomic >= max_managed)
 		return;
 
-	zone_lock_irqsave(zone, flags);
+	guard(zone_lock_irqsave)(zone);
 
 	/* Recheck the nr_reserved_highatomic limit under the lock */
 	if (zone->nr_reserved_highatomic >= max_managed)
-		goto out_unlock;
+		return;
 
 	/* Yoink! */
 	mt = get_pageblock_migratetype(page);
 	/* Only reserve normal pageblocks (i.e., they can merge with others) */
 	if (!migratetype_is_mergeable(mt))
-		goto out_unlock;
+		return;
 
 	if (order < pageblock_order) {
 		if (move_freepages_block(zone, page, mt, MIGRATE_HIGHATOMIC) == -1)
-			goto out_unlock;
+			return;
 		zone->nr_reserved_highatomic += pageblock_nr_pages;
 	} else {
 		change_pageblock_range(page, order, MIGRATE_HIGHATOMIC);
 		zone->nr_reserved_highatomic += 1 << order;
 	}
-
-out_unlock:
-	zone_unlock_irqrestore(zone, flags);
 }
 
 /*
-- 
2.47.3




* [PATCH 2/8] mm: use zone lock guard in unset_migratetype_isolate()
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock() Dmitry Ilvokhin
@ 2026-03-06 16:05 ` Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock() Dmitry Ilvokhin
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 16+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin, Steven Rostedt

Use zone_lock_irqsave lock guard in unset_migratetype_isolate() to
replace the explicit lock/unlock and goto pattern with automatic
scope-based cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_isolation.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index e8414e9a718a..dc1e18124228 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -224,15 +224,14 @@ static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
 static void unset_migratetype_isolate(struct page *page)
 {
 	struct zone *zone;
-	unsigned long flags;
 	bool isolated_page = false;
 	unsigned int order;
 	struct page *buddy;
 
 	zone = page_zone(page);
-	zone_lock_irqsave(zone, flags);
+	guard(zone_lock_irqsave)(zone);
 	if (!is_migrate_isolate_page(page))
-		goto out;
+		return;
 
 	/*
 	 * Because freepage with more than pageblock_order on isolated
@@ -280,8 +279,6 @@ static void unset_migratetype_isolate(struct page *page)
 		__putback_isolated_page(page, order, get_pageblock_migratetype(page));
 	}
 	zone->nr_isolate_pageblock--;
-out:
-	zone_unlock_irqrestore(zone, flags);
 }
 
 static inline struct page *
-- 
2.47.3




* [PATCH 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock()
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock() Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 2/8] mm: use zone lock guard in unset_migratetype_isolate() Dmitry Ilvokhin
@ 2026-03-06 16:05 ` Dmitry Ilvokhin
  2026-03-06 16:10   ` Steven Rostedt
  2026-03-06 16:05 ` [PATCH 4/8] mm: use zone lock guard in set_migratetype_isolate() Dmitry Ilvokhin
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 16+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin, Steven Rostedt

Use zone_lock_irqsave lock guard in unreserve_highatomic_pageblock()
to replace the explicit lock/unlock pattern with automatic scope-based
cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_alloc.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 260fb003822a..2857daf6ebfd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3456,7 +3456,6 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 						bool force)
 {
 	struct zonelist *zonelist = ac->zonelist;
-	unsigned long flags;
 	struct zoneref *z;
 	struct zone *zone;
 	struct page *page;
@@ -3473,7 +3472,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 					pageblock_nr_pages)
 			continue;
 
-		zone_lock_irqsave(zone, flags);
+		guard(zone_lock_irqsave)(zone);
 		for (order = 0; order < NR_PAGE_ORDERS; order++) {
 			struct free_area *area = &(zone->free_area[order]);
 			unsigned long size;
@@ -3521,11 +3520,9 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 			 */
 			WARN_ON_ONCE(ret == -1);
 			if (ret > 0) {
-				zone_unlock_irqrestore(zone, flags);
 				return ret;
 			}
 		}
-		zone_unlock_irqrestore(zone, flags);
 	}
 
 	return false;
-- 
2.47.3




* [PATCH 4/8] mm: use zone lock guard in set_migratetype_isolate()
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
                   ` (2 preceding siblings ...)
  2026-03-06 16:05 ` [PATCH 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock() Dmitry Ilvokhin
@ 2026-03-06 16:05 ` Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 5/8] mm: use zone lock guard in take_page_off_buddy() Dmitry Ilvokhin
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 16+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin, Steven Rostedt

Use zone_lock_irqsave scoped lock guard in set_migratetype_isolate() to
replace the explicit lock/unlock pattern with automatic scope-based
cleanup. The scoped variant is used to keep dump_page() outside the
locked section to avoid a lockdep splat.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_isolation.c | 60 ++++++++++++++++++++-------------------------
 1 file changed, 26 insertions(+), 34 deletions(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index dc1e18124228..e7f006e8870c 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -168,48 +168,40 @@ static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
 {
 	struct zone *zone = page_zone(page);
 	struct page *unmovable;
-	unsigned long flags;
 	unsigned long check_unmovable_start, check_unmovable_end;
 
 	if (PageUnaccepted(page))
 		accept_page(page);
 
-	zone_lock_irqsave(zone, flags);
-
-	/*
-	 * We assume the caller intended to SET migrate type to isolate.
-	 * If it is already set, then someone else must have raced and
-	 * set it before us.
-	 */
-	if (is_migrate_isolate_page(page)) {
-		zone_unlock_irqrestore(zone, flags);
-		return -EBUSY;
-	}
-
-	/*
-	 * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
-	 * We just check MOVABLE pages.
-	 *
-	 * Pass the intersection of [start_pfn, end_pfn) and the page's pageblock
-	 * to avoid redundant checks.
-	 */
-	check_unmovable_start = max(page_to_pfn(page), start_pfn);
-	check_unmovable_end = min(pageblock_end_pfn(page_to_pfn(page)),
-				  end_pfn);
-
-	unmovable = has_unmovable_pages(check_unmovable_start, check_unmovable_end,
-			mode);
-	if (!unmovable) {
-		if (!pageblock_isolate_and_move_free_pages(zone, page)) {
-			zone_unlock_irqrestore(zone, flags);
+	scoped_guard(zone_lock_irqsave, zone) {
+		/*
+		 * We assume the caller intended to SET migrate type to
+		 * isolate. If it is already set, then someone else must have
+		 * raced and set it before us.
+		 */
+		if (is_migrate_isolate_page(page))
 			return -EBUSY;
+
+		/*
+		 * FIXME: Now, memory hotplug doesn't call shrink_slab() by
+		 * itself. We just check MOVABLE pages.
+		 *
+		 * Pass the intersection of [start_pfn, end_pfn) and the page's
+		 * pageblock to avoid redundant checks.
+		 */
+		check_unmovable_start = max(page_to_pfn(page), start_pfn);
+		check_unmovable_end = min(pageblock_end_pfn(page_to_pfn(page)),
+					  end_pfn);
+
+		unmovable = has_unmovable_pages(check_unmovable_start,
+				check_unmovable_end, mode);
+		if (!unmovable) {
+			if (!pageblock_isolate_and_move_free_pages(zone, page))
+				return -EBUSY;
+			zone->nr_isolate_pageblock++;
+			return 0;
 		}
-		zone->nr_isolate_pageblock++;
-		zone_unlock_irqrestore(zone, flags);
-		return 0;
 	}
-
-	zone_unlock_irqrestore(zone, flags);
 	if (mode == PB_ISOLATE_MODE_MEM_OFFLINE) {
 		/*
 		 * printk() with zone lock held will likely trigger a
-- 
2.47.3




* [PATCH 5/8] mm: use zone lock guard in take_page_off_buddy()
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
                   ` (3 preceding siblings ...)
  2026-03-06 16:05 ` [PATCH 4/8] mm: use zone lock guard in set_migratetype_isolate() Dmitry Ilvokhin
@ 2026-03-06 16:05 ` Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 6/8] mm: use zone lock guard in put_page_back_buddy() Dmitry Ilvokhin
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 16+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin, Steven Rostedt

Use zone_lock_irqsave lock guard in take_page_off_buddy() to replace
the explicit lock/unlock pattern with automatic scope-based cleanup.

This also allows returning directly from the loop, removing the 'ret'
variable.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_alloc.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2857daf6ebfd..92fa922911d5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7493,11 +7493,9 @@ bool take_page_off_buddy(struct page *page)
 {
 	struct zone *zone = page_zone(page);
 	unsigned long pfn = page_to_pfn(page);
-	unsigned long flags;
 	unsigned int order;
-	bool ret = false;
 
-	zone_lock_irqsave(zone, flags);
+	guard(zone_lock_irqsave)(zone);
 	for (order = 0; order < NR_PAGE_ORDERS; order++) {
 		struct page *page_head = page - (pfn & ((1 << order) - 1));
 		int page_order = buddy_order(page_head);
@@ -7512,14 +7510,12 @@ bool take_page_off_buddy(struct page *page)
 			break_down_buddy_pages(zone, page_head, page, 0,
 						page_order, migratetype);
 			SetPageHWPoisonTakenOff(page);
-			ret = true;
-			break;
+			return true;
 		}
 		if (page_count(page_head) > 0)
 			break;
 	}
-	zone_unlock_irqrestore(zone, flags);
-	return ret;
+	return false;
 }
 
 /*
-- 
2.47.3




* [PATCH 6/8] mm: use zone lock guard in put_page_back_buddy()
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
                   ` (4 preceding siblings ...)
  2026-03-06 16:05 ` [PATCH 5/8] mm: use zone lock guard in take_page_off_buddy() Dmitry Ilvokhin
@ 2026-03-06 16:05 ` Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 7/8] mm: use zone lock guard in free_pcppages_bulk() Dmitry Ilvokhin
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 16+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin, Steven Rostedt

Use zone_lock_irqsave lock guard in put_page_back_buddy() to replace the
explicit lock/unlock pattern with automatic scope-based cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_alloc.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 92fa922911d5..28b06baa4075 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7524,23 +7524,19 @@ bool take_page_off_buddy(struct page *page)
 bool put_page_back_buddy(struct page *page)
 {
 	struct zone *zone = page_zone(page);
-	unsigned long flags;
-	bool ret = false;
 
-	zone_lock_irqsave(zone, flags);
+	guard(zone_lock_irqsave)(zone);
 	if (put_page_testzero(page)) {
 		unsigned long pfn = page_to_pfn(page);
 		int migratetype = get_pfnblock_migratetype(page, pfn);
 
 		ClearPageHWPoisonTakenOff(page);
 		__free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE);
-		if (TestClearPageHWPoison(page)) {
-			ret = true;
-		}
+		if (TestClearPageHWPoison(page))
+			return true;
 	}
-	zone_unlock_irqrestore(zone, flags);
 
-	return ret;
+	return false;
 }
 #endif
 
-- 
2.47.3




* [PATCH 7/8] mm: use zone lock guard in free_pcppages_bulk()
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
                   ` (5 preceding siblings ...)
  2026-03-06 16:05 ` [PATCH 6/8] mm: use zone lock guard in put_page_back_buddy() Dmitry Ilvokhin
@ 2026-03-06 16:05 ` Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 8/8] mm: use zone lock guard in __offline_isolated_pages() Dmitry Ilvokhin
  2026-03-06 16:15 ` [PATCH 0/8] mm: introduce zone lock guards Steven Rostedt
  8 siblings, 0 replies; 16+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin, Steven Rostedt

Use zone_lock_irqsave lock guard in free_pcppages_bulk() to replace the
explicit lock/unlock pattern with automatic scope-based cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_alloc.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 28b06baa4075..2759e02340fa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1455,7 +1455,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 					struct per_cpu_pages *pcp,
 					int pindex)
 {
-	unsigned long flags;
 	unsigned int order;
 	struct page *page;
 
@@ -1468,7 +1467,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	/* Ensure requested pindex is drained first. */
 	pindex = pindex - 1;
 
-	zone_lock_irqsave(zone, flags);
+	guard(zone_lock_irqsave)(zone);
 
 	while (count > 0) {
 		struct list_head *list;
@@ -1500,8 +1499,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 			trace_mm_page_pcpu_drain(page, order, mt);
 		} while (count > 0 && !list_empty(list));
 	}
-
-	zone_unlock_irqrestore(zone, flags);
 }
 
 /* Split a multi-block free page into its individual pageblocks. */
-- 
2.47.3




* [PATCH 8/8] mm: use zone lock guard in __offline_isolated_pages()
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
                   ` (6 preceding siblings ...)
  2026-03-06 16:05 ` [PATCH 7/8] mm: use zone lock guard in free_pcppages_bulk() Dmitry Ilvokhin
@ 2026-03-06 16:05 ` Dmitry Ilvokhin
  2026-03-06 16:15 ` [PATCH 0/8] mm: introduce zone lock guards Steven Rostedt
  8 siblings, 0 replies; 16+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin, Steven Rostedt

Use zone_lock_irqsave lock guard in __offline_isolated_pages() to
replace the explicit lock/unlock pattern with automatic scope-based
cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_alloc.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2759e02340fa..6f7420e4431f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7380,7 +7380,7 @@ void zone_pcp_reset(struct zone *zone)
 unsigned long __offline_isolated_pages(unsigned long start_pfn,
 		unsigned long end_pfn)
 {
-	unsigned long already_offline = 0, flags;
+	unsigned long already_offline = 0;
 	unsigned long pfn = start_pfn;
 	struct page *page;
 	struct zone *zone;
@@ -7388,7 +7388,7 @@ unsigned long __offline_isolated_pages(unsigned long start_pfn,
 
 	offline_mem_sections(pfn, end_pfn);
 	zone = page_zone(pfn_to_page(pfn));
-	zone_lock_irqsave(zone, flags);
+	guard(zone_lock_irqsave)(zone);
 	while (pfn < end_pfn) {
 		page = pfn_to_page(pfn);
 		/*
@@ -7418,7 +7418,6 @@ unsigned long __offline_isolated_pages(unsigned long start_pfn,
 		del_page_from_free_list(page, zone, order, MIGRATE_ISOLATE);
 		pfn += (1 << order);
 	}
-	zone_unlock_irqrestore(zone, flags);
 
 	return end_pfn - start_pfn - already_offline;
 }
-- 
2.47.3




* Re: [PATCH 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock()
  2026-03-06 16:05 ` [PATCH 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock() Dmitry Ilvokhin
@ 2026-03-06 16:10   ` Steven Rostedt
  0 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-03-06 16:10 UTC (permalink / raw)
  To: Dmitry Ilvokhin
  Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm, linux-kernel, kernel-team

On Fri,  6 Mar 2026 16:05:37 +0000
Dmitry Ilvokhin <d@ilvokhin.com> wrote:

>  			 */
>  			WARN_ON_ONCE(ret == -1);
>  			if (ret > 0) {
> -				zone_unlock_irqrestore(zone, flags);
>  				return ret;
>  			}

You can lose the braces here too:

			if (ret > 0)
				return ret;

-- Steve

>  		}
> -		zone_unlock_irqrestore(zone, flags);
>  	}
>  



* Re: [PATCH 0/8] mm: introduce zone lock guards
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
                   ` (7 preceding siblings ...)
  2026-03-06 16:05 ` [PATCH 8/8] mm: use zone lock guard in __offline_isolated_pages() Dmitry Ilvokhin
@ 2026-03-06 16:15 ` Steven Rostedt
  8 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-03-06 16:15 UTC (permalink / raw)
  To: Dmitry Ilvokhin
  Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm, linux-kernel, kernel-team

On Fri,  6 Mar 2026 16:05:34 +0000
Dmitry Ilvokhin <d@ilvokhin.com> wrote:

> This series defines DEFINE_LOCK_GUARD_1 for zone_lock_irqsave and uses
> it across several mm functions to replace explicit lock/unlock patterns
> with automatic scope-based cleanup.
> 
> This simplifies the control flow by removing 'flags' variables, goto
> labels, and redundant unlock calls.
> 
> Patches are ordered by decreasing value. The first six patches simplify
> the control flow by removing gotos, multiple unlock paths, or 'ret'
> variables. The last two are simpler lock/unlock pair conversions that
> only remove 'flags' and can be dropped if considered unnecessary churn.
> 
> Based on mm-new.
> 
> Suggested-by: Steven Rostedt <rostedt@goodmis.org>

Thanks, the code looks much cleaner.

Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>

-- Steve



* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-06 16:05 ` [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock() Dmitry Ilvokhin
@ 2026-03-06 17:53   ` Andrew Morton
  2026-03-06 18:00     ` Steven Rostedt
  0 siblings, 1 reply; 16+ messages in thread
From: Andrew Morton @ 2026-03-06 17:53 UTC (permalink / raw)
  To: Dmitry Ilvokhin
  Cc: David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
	Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
	Brendan Jackman, Johannes Weiner, Zi Yan, linux-mm, linux-kernel,
	kernel-team, Steven Rostedt

On Fri,  6 Mar 2026 16:05:35 +0000 Dmitry Ilvokhin <d@ilvokhin.com> wrote:

> Use the newly introduced zone_lock_irqsave lock guard in
> reserve_highatomic_pageblock() to replace the explicit lock/unlock and
> goto out_unlock pattern with automatic scope-based cleanup.
> 
> ...
>
> -	zone_lock_irqsave(zone, flags);
> +	guard(zone_lock_irqsave)(zone);

guard() is cute, but this patch adds a little overhead - defconfig
page_alloc.o text increases by 32 bytes, presumably all in
reserve_highatomic_pageblock().  More instructions, larger cache
footprint.

So we're adding a little overhead to every user's Linux machine for all
time.  In return for which the developers get a little convenience and
maintainability.

Is it worth it?



* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-06 17:53   ` Andrew Morton
@ 2026-03-06 18:00     ` Steven Rostedt
  2026-03-06 18:24       ` Vlastimil Babka
  0 siblings, 1 reply; 16+ messages in thread
From: Steven Rostedt @ 2026-03-06 18:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Dmitry Ilvokhin, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm, linux-kernel, kernel-team,
	Peter Zijlstra


[ Adding Peter ]

On Fri, 6 Mar 2026 09:53:36 -0800
Andrew Morton <akpm@linux-foundation.org> wrote:

> On Fri,  6 Mar 2026 16:05:35 +0000 Dmitry Ilvokhin <d@ilvokhin.com> wrote:
> 
> > Use the newly introduced zone_lock_irqsave lock guard in
> > reserve_highatomic_pageblock() to replace the explicit lock/unlock and
> > goto out_unlock pattern with automatic scope-based cleanup.
> > 
> > ...
> >
> > -	zone_lock_irqsave(zone, flags);
> > +	guard(zone_lock_irqsave)(zone);  
> 
> guard() is cute, but this patch adds a little overhead - defconfig
> page_alloc.o text increases by 32 bytes, presumably all in
> reserve_highatomic_pageblock().  More instructions, larger cache
> footprint.
> 
> So we're adding a little overhead to every user's Linux machine for all
> time.  In return for which the developers get a little convenience and
> maintainability.

I think maintainability is important here. Is there any measurable slowdown?
Or are we only worried about the text size increase?

> 
> Is it worth it?

This is being done all over the kernel. Perhaps we should look at ways to
make the generic infrastructure more performant?

-- Steve



* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-06 18:00     ` Steven Rostedt
@ 2026-03-06 18:24       ` Vlastimil Babka
  2026-03-06 18:33         ` Andrew Morton
  0 siblings, 1 reply; 16+ messages in thread
From: Vlastimil Babka @ 2026-03-06 18:24 UTC (permalink / raw)
  To: Steven Rostedt, Andrew Morton
  Cc: Dmitry Ilvokhin, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
	Brendan Jackman, Johannes Weiner, Zi Yan, linux-mm, linux-kernel,
	kernel-team, Peter Zijlstra

On 3/6/26 19:00, Steven Rostedt wrote:
> 
> [ Adding Peter ]
> 
> On Fri, 6 Mar 2026 09:53:36 -0800
> Andrew Morton <akpm@linux-foundation.org> wrote:
> 
>> On Fri,  6 Mar 2026 16:05:35 +0000 Dmitry Ilvokhin <d@ilvokhin.com> wrote:
>> 
>> > Use the newly introduced zone_lock_irqsave lock guard in
>> > reserve_highatomic_pageblock() to replace the explicit lock/unlock and
>> > goto out_unlock pattern with automatic scope-based cleanup.
>> > 
>> > ...
>> >
>> > -	zone_lock_irqsave(zone, flags);
>> > +	guard(zone_lock_irqsave)(zone);  
>> 
>> guard() is cute, but this patch adds a little overhead - defconfig
>> page_alloc.o text increases by 32 bytes, presumably all in
>> reserve_highatomic_pageblock().  More instructions, larger cache
>> footprint.

I get this:

Function                                     old     new   delta
get_page_from_freelist                      6389    6452     +63

>> So we're adding a little overhead to every user's Linux machine for all
>> time.  In return for which the developers get a little convenience and
>> maintainability.
> 
> I think maintainability is of importance. Is there any measurable slowdown?
> Or are we only worried about the text size increase?
> 
>> 
>> Is it worth it?
> 
> This is being done all over the kernel. Perhaps we should look at ways to
> make the generic infrastructure more performant?

Yeah, I don't think the guard construct in this case should be doing anything
here that would prevent the compiler from producing exactly the same result
as before. Either there's some problem with the infra, or we're just victims
of compiler heuristics. In both cases it's imho worth looking into rather than
rejecting the construct.

> -- Steve




* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-06 18:24       ` Vlastimil Babka
@ 2026-03-06 18:33         ` Andrew Morton
  2026-03-06 18:46           ` Steven Rostedt
  0 siblings, 1 reply; 16+ messages in thread
From: Andrew Morton @ 2026-03-06 18:33 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Steven Rostedt, Dmitry Ilvokhin, David Hildenbrand,
	Lorenzo Stoakes, Liam R. Howlett, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm, linux-kernel, kernel-team,
	Peter Zijlstra

On Fri, 6 Mar 2026 19:24:56 +0100 Vlastimil Babka <vbabka@kernel.org> wrote:

> >> 
> >> Is it worth it?
> > 
> > This is being done all over the kernel. Perhaps we should look at ways to
> > make the generic infrastructure more performant?
> 
> Yeah, I don't think the guard construct in this case should be doing anything
> here that would prevent the compiler from producing exactly the same result
> as before. Either there's some problem with the infra, or we're just victims
> of compiler heuristics.

Sure, it'd be good to figure this out.

> In both cases imho worth looking into rather than
> rejecting the construct.

I'm not enjoying the idea of penalizing billions of machines all of the
time in order to make life a little easier for the developers.  Seems
like a poor tradeoff.



* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-06 18:33         ` Andrew Morton
@ 2026-03-06 18:46           ` Steven Rostedt
  0 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-03-06 18:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vlastimil Babka, Dmitry Ilvokhin, David Hildenbrand,
	Lorenzo Stoakes, Liam R. Howlett, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm, linux-kernel, kernel-team,
	Peter Zijlstra

On Fri, 6 Mar 2026 10:33:07 -0800
Andrew Morton <akpm@linux-foundation.org> wrote:

> I'm not enjoying the ides of penalizing billions of machines all of the
> time in order to make life a little easier for the developers.  Seems
> like a poor tradeoff.

But if a bug slips in because the code is less maintainable, that too will
affect billions of machines! To me, that balances the tradeoff.

-- Steve



end of thread, other threads:[~2026-03-06 18:46 UTC | newest]

Thread overview: 16+ messages
2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
2026-03-06 16:05 ` [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock() Dmitry Ilvokhin
2026-03-06 17:53   ` Andrew Morton
2026-03-06 18:00     ` Steven Rostedt
2026-03-06 18:24       ` Vlastimil Babka
2026-03-06 18:33         ` Andrew Morton
2026-03-06 18:46           ` Steven Rostedt
2026-03-06 16:05 ` [PATCH 2/8] mm: use zone lock guard in unset_migratetype_isolate() Dmitry Ilvokhin
2026-03-06 16:05 ` [PATCH 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock() Dmitry Ilvokhin
2026-03-06 16:10   ` Steven Rostedt
2026-03-06 16:05 ` [PATCH 4/8] mm: use zone lock guard in set_migratetype_isolate() Dmitry Ilvokhin
2026-03-06 16:05 ` [PATCH 5/8] mm: use zone lock guard in take_page_off_buddy() Dmitry Ilvokhin
2026-03-06 16:05 ` [PATCH 6/8] mm: use zone lock guard in put_page_back_buddy() Dmitry Ilvokhin
2026-03-06 16:05 ` [PATCH 7/8] mm: use zone lock guard in free_pcppages_bulk() Dmitry Ilvokhin
2026-03-06 16:05 ` [PATCH 8/8] mm: use zone lock guard in __offline_isolated_pages() Dmitry Ilvokhin
2026-03-06 16:15 ` [PATCH 0/8] mm: introduce zone lock guards Steven Rostedt
