linux-mm.kvack.org archive mirror
* [PATCH v3 0/5] mm: zone lock tracepoint instrumentation
@ 2026-02-26 18:26 Dmitry Ilvokhin
  2026-02-26 18:26 ` [PATCH v3 1/5] mm: introduce zone lock wrappers Dmitry Ilvokhin
                   ` (4 more replies)
  0 siblings, 5 replies; 14+ messages in thread
From: Dmitry Ilvokhin @ 2026-02-26 18:26 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Axel Rasmussen, Yuanchu Xie,
	Wei Xu, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	Brendan Jackman, Johannes Weiner, Zi Yan, Oscar Salvador,
	Qi Zheng, Shakeel Butt
  Cc: linux-kernel, linux-mm, linux-trace-kernel, linux-cxl,
	kernel-team, Benjamin Cheatham, Dmitry Ilvokhin

Zone lock contention can significantly impact allocation and
reclaim latency, as it is a central synchronization point in
the page allocator and reclaim paths. Improved visibility into
its behavior is therefore important for diagnosing performance
issues in memory-intensive workloads.

On some production workloads at Meta, we have observed noticeable
zone lock contention. Deeper analysis of lock holders and waiters
is currently difficult with existing instrumentation.

While generic lock contention_begin/contention_end tracepoints
cover the slow path, they do not provide sufficient visibility
into lock hold times. In particular, the lack of a release-side
event makes it difficult to identify long lock holders and
correlate them with waiters. As a result, distinguishing between
short bursts of contention and pathological long hold times
requires additional instrumentation.
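To make the gap concrete: once both an acquire-side and a release-side event exist, per-holder hold times fall out of a simple pairing pass over the event stream, which a contention-only tracepoint cannot provide. The sketch below is illustrative only; the event layout and names are assumptions, not the series' actual trace format.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-lock event record with acquire/release pairs. */
enum ev_type { EV_ACQUIRE, EV_RELEASE };

struct lock_event {
	enum ev_type type;
	uint64_t ts_ns;
};

/*
 * Return the longest hold time in a stream of alternating
 * acquire/release events for a single lock. This is the analysis
 * that a release-side event enables: long holders become visible
 * directly, instead of only the waiters who queued behind them.
 */
static uint64_t max_hold_ns(const struct lock_event *ev, int n)
{
	uint64_t acquired_at = 0, max_hold = 0;

	for (int i = 0; i < n; i++) {
		if (ev[i].type == EV_ACQUIRE) {
			acquired_at = ev[i].ts_ns;
		} else {
			uint64_t hold = ev[i].ts_ns - acquired_at;

			if (hold > max_hold)
				max_hold = hold;
		}
	}
	return max_hold;
}
```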

This patch series adds dedicated tracepoint instrumentation to
zone lock, following the existing mmap_lock tracing model.

The goal is to enable detailed holder/waiter analysis and lock
hold time measurements without affecting the fast path when
tracing is disabled.

The series is structured as follows:

  1. Introduce zone lock wrappers.
  2. Mechanically convert zone lock users to the wrappers.
  3. Convert compaction to use the wrappers (requires minor
     restructuring of compact_lock_irqsave()).
  4. Rename zone->lock to zone->_lock.
  5. Add zone lock tracepoints.

The tracepoints are added via lightweight inline helpers in the
wrappers. When tracing is disabled, the fast path remains
unchanged.
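As a rough userspace model of the wrapper-plus-hook shape (patch 5's actual helper names and mechanism may differ; the kernel version would use trace_* tracepoints behind a static key rather than a plain boolean, and a real spinlock rather than a flag):

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace stand-in for struct zone; 'locked' models spinlock_t lock. */
struct zone {
	bool locked;
};

static bool trace_enabled;	/* the kernel would use a static key */
static int nr_acquire_events, nr_release_events;

static void trace_zone_lock_acquire(struct zone *zone)
{
	(void)zone;
	nr_acquire_events++;
}

static void trace_zone_lock_release(struct zone *zone)
{
	(void)zone;
	nr_release_events++;
}

/*
 * Wrapper: with tracing off, the fast path pays only a
 * predicted-not-taken branch; with static keys it would be a
 * patched-out no-op site, leaving the fast path unchanged.
 */
static void zone_lock(struct zone *zone)
{
	if (trace_enabled)
		trace_zone_lock_acquire(zone);
	zone->locked = true;	/* spin_lock(&zone->lock) in the kernel */
}

static void zone_unlock(struct zone *zone)
{
	zone->locked = false;	/* spin_unlock(&zone->lock) */
	if (trace_enabled)
		trace_zone_lock_release(zone);
}
```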

Changes in v3:
- Split compact_lock_irqsave() into compact_zone_lock_irqsave() and
  compact_lruvec_lock_irqsave().
- Rename zone->lock to zone->_lock.

Changes in v2:
- Moved mechanical changes in mm/compaction.c to a separate commit.
- Removed compact_do_zone_trylock() and compact_do_raw_trylock_irqsave().

v1: https://lore.kernel.org/all/cover.1770821420.git.d@ilvokhin.com/
v2: https://lore.kernel.org/all/cover.1772030186.git.d@ilvokhin.com/

Dmitry Ilvokhin (5):
  mm: introduce zone lock wrappers
  mm: convert zone lock users to wrappers
  mm: convert compaction to zone lock wrappers
  mm: rename zone->lock to zone->_lock
  mm: add tracepoints for zone lock

 MAINTAINERS                      |   3 +
 include/linux/mmzone.h           |   7 ++-
 include/linux/zone_lock.h        | 100 +++++++++++++++++++++++++++++++
 include/trace/events/zone_lock.h |  64 ++++++++++++++++++++
 mm/Makefile                      |   2 +-
 mm/compaction.c                  |  58 +++++++++++-------
 mm/internal.h                    |   2 +-
 mm/memory_hotplug.c              |   9 +--
 mm/mm_init.c                     |   3 +-
 mm/page_alloc.c                  |  89 +++++++++++++--------------
 mm/page_isolation.c              |  23 +++----
 mm/page_owner.c                  |   2 +-
 mm/page_reporting.c              |  13 ++--
 mm/show_mem.c                    |   5 +-
 mm/vmscan.c                      |   5 +-
 mm/vmstat.c                      |   9 +--
 mm/zone_lock.c                   |  31 ++++++++++
 17 files changed, 326 insertions(+), 99 deletions(-)
 create mode 100644 include/linux/zone_lock.h
 create mode 100644 include/trace/events/zone_lock.h
 create mode 100644 mm/zone_lock.c

-- 
2.47.3



^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v3 1/5] mm: introduce zone lock wrappers
  2026-02-26 18:26 [PATCH v3 0/5] mm: zone lock tracepoint instrumentation Dmitry Ilvokhin
@ 2026-02-26 18:26 ` Dmitry Ilvokhin
  2026-02-26 18:26 ` [PATCH v3 2/5] mm: convert zone lock users to wrappers Dmitry Ilvokhin
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 14+ messages in thread
From: Dmitry Ilvokhin @ 2026-02-26 18:26 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Axel Rasmussen, Yuanchu Xie,
	Wei Xu, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	Brendan Jackman, Johannes Weiner, Zi Yan, Oscar Salvador,
	Qi Zheng, Shakeel Butt
  Cc: linux-kernel, linux-mm, linux-trace-kernel, linux-cxl,
	kernel-team, Benjamin Cheatham, Dmitry Ilvokhin

Add thin wrappers around zone lock acquire/release operations. This
prepares the code for future tracepoint instrumentation without
modifying individual call sites.

Centralizing zone lock operations behind wrappers allows future
instrumentation or debugging hooks to be added without touching
all users.

No functional change intended. The wrappers are introduced in
preparation for subsequent patches and are not yet used.

Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
---
 MAINTAINERS               |  1 +
 include/linux/zone_lock.h | 38 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+)
 create mode 100644 include/linux/zone_lock.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 55af015174a5..61e3d1f5bf43 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -16680,6 +16680,7 @@ F:	include/linux/pgtable.h
 F:	include/linux/ptdump.h
 F:	include/linux/vmpressure.h
 F:	include/linux/vmstat.h
+F:	include/linux/zone_lock.h
 F:	kernel/fork.c
 F:	mm/Kconfig
 F:	mm/debug.c
diff --git a/include/linux/zone_lock.h b/include/linux/zone_lock.h
new file mode 100644
index 000000000000..c531e26280e6
--- /dev/null
+++ b/include/linux/zone_lock.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_ZONE_LOCK_H
+#define _LINUX_ZONE_LOCK_H
+
+#include <linux/mmzone.h>
+#include <linux/spinlock.h>
+
+static inline void zone_lock_init(struct zone *zone)
+{
+	spin_lock_init(&zone->lock);
+}
+
+#define zone_lock_irqsave(zone, flags)				\
+do {								\
+	spin_lock_irqsave(&(zone)->lock, flags);		\
+} while (0)
+
+#define zone_trylock_irqsave(zone, flags)			\
+({								\
+	spin_trylock_irqsave(&(zone)->lock, flags);		\
+})
+
+static inline void zone_unlock_irqrestore(struct zone *zone, unsigned long flags)
+{
+	spin_unlock_irqrestore(&zone->lock, flags);
+}
+
+static inline void zone_lock_irq(struct zone *zone)
+{
+	spin_lock_irq(&zone->lock);
+}
+
+static inline void zone_unlock_irq(struct zone *zone)
+{
+	spin_unlock_irq(&zone->lock);
+}
+
+#endif /* _LINUX_ZONE_LOCK_H */
-- 
2.47.3




* [PATCH v3 2/5] mm: convert zone lock users to wrappers
  2026-02-26 18:26 [PATCH v3 0/5] mm: zone lock tracepoint instrumentation Dmitry Ilvokhin
  2026-02-26 18:26 ` [PATCH v3 1/5] mm: introduce zone lock wrappers Dmitry Ilvokhin
@ 2026-02-26 18:26 ` Dmitry Ilvokhin
  2026-02-26 18:26 ` [PATCH v3 3/5] mm: convert compaction to zone lock wrappers Dmitry Ilvokhin
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 14+ messages in thread
From: Dmitry Ilvokhin @ 2026-02-26 18:26 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Axel Rasmussen, Yuanchu Xie,
	Wei Xu, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	Brendan Jackman, Johannes Weiner, Zi Yan, Oscar Salvador,
	Qi Zheng, Shakeel Butt
  Cc: linux-kernel, linux-mm, linux-trace-kernel, linux-cxl,
	kernel-team, Benjamin Cheatham, Dmitry Ilvokhin

Replace direct zone lock acquire/release operations with the
newly introduced wrappers.

The changes are purely mechanical substitutions. No functional change
intended. Locking semantics and ordering remain unchanged.

The compaction path is left unchanged for now and will be
handled separately in the following patch due to additional
non-trivial modifications.

Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
---
 mm/compaction.c     | 25 +++++++++-------
 mm/memory_hotplug.c |  9 +++---
 mm/mm_init.c        |  3 +-
 mm/page_alloc.c     | 73 +++++++++++++++++++++++----------------------
 mm/page_isolation.c | 19 ++++++------
 mm/page_reporting.c | 13 ++++----
 mm/show_mem.c       |  5 ++--
 mm/vmscan.c         |  5 ++--
 mm/vmstat.c         |  9 +++---
 9 files changed, 86 insertions(+), 75 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 1e8f8eca318c..47b26187a5df 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -24,6 +24,7 @@
 #include <linux/page_owner.h>
 #include <linux/psi.h>
 #include <linux/cpuset.h>
+#include <linux/zone_lock.h>
 #include "internal.h"
 
 #ifdef CONFIG_COMPACTION
@@ -530,11 +531,14 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
  * Returns true if compaction should abort due to fatal signal pending.
  * Returns false when compaction can continue.
  */
-static bool compact_unlock_should_abort(spinlock_t *lock,
-		unsigned long flags, bool *locked, struct compact_control *cc)
+
+static bool compact_unlock_should_abort(struct zone *zone,
+					unsigned long flags,
+					bool *locked,
+					struct compact_control *cc)
 {
 	if (*locked) {
-		spin_unlock_irqrestore(lock, flags);
+		zone_unlock_irqrestore(zone, flags);
 		*locked = false;
 	}
 
@@ -582,9 +586,8 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 		 * contention, to give chance to IRQs. Abort if fatal signal
 		 * pending.
 		 */
-		if (!(blockpfn % COMPACT_CLUSTER_MAX)
-		    && compact_unlock_should_abort(&cc->zone->lock, flags,
-								&locked, cc))
+		if (!(blockpfn % COMPACT_CLUSTER_MAX) &&
+		    compact_unlock_should_abort(cc->zone, flags, &locked, cc))
 			break;
 
 		nr_scanned++;
@@ -649,7 +652,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 	}
 
 	if (locked)
-		spin_unlock_irqrestore(&cc->zone->lock, flags);
+		zone_unlock_irqrestore(cc->zone, flags);
 
 	/*
 	 * Be careful to not go outside of the pageblock.
@@ -1555,7 +1558,7 @@ static void fast_isolate_freepages(struct compact_control *cc)
 		if (!area->nr_free)
 			continue;
 
-		spin_lock_irqsave(&cc->zone->lock, flags);
+		zone_lock_irqsave(cc->zone, flags);
 		freelist = &area->free_list[MIGRATE_MOVABLE];
 		list_for_each_entry_reverse(freepage, freelist, buddy_list) {
 			unsigned long pfn;
@@ -1614,7 +1617,7 @@ static void fast_isolate_freepages(struct compact_control *cc)
 			}
 		}
 
-		spin_unlock_irqrestore(&cc->zone->lock, flags);
+		zone_unlock_irqrestore(cc->zone, flags);
 
 		/* Skip fast search if enough freepages isolated */
 		if (cc->nr_freepages >= cc->nr_migratepages)
@@ -1988,7 +1991,7 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
 		if (!area->nr_free)
 			continue;
 
-		spin_lock_irqsave(&cc->zone->lock, flags);
+		zone_lock_irqsave(cc->zone, flags);
 		freelist = &area->free_list[MIGRATE_MOVABLE];
 		list_for_each_entry(freepage, freelist, buddy_list) {
 			unsigned long free_pfn;
@@ -2021,7 +2024,7 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
 				break;
 			}
 		}
-		spin_unlock_irqrestore(&cc->zone->lock, flags);
+		zone_unlock_irqrestore(cc->zone, flags);
 	}
 
 	cc->total_migrate_scanned += nr_scanned;
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index bc805029da51..cfc0103fa50e 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -36,6 +36,7 @@
 #include <linux/rmap.h>
 #include <linux/module.h>
 #include <linux/node.h>
+#include <linux/zone_lock.h>
 
 #include <asm/tlbflush.h>
 
@@ -1190,9 +1191,9 @@ int online_pages(unsigned long pfn, unsigned long nr_pages,
 	 * Fixup the number of isolated pageblocks before marking the sections
 	 * onlining, such that undo_isolate_page_range() works correctly.
 	 */
-	spin_lock_irqsave(&zone->lock, flags);
+	zone_lock_irqsave(zone, flags);
 	zone->nr_isolate_pageblock += nr_pages / pageblock_nr_pages;
-	spin_unlock_irqrestore(&zone->lock, flags);
+	zone_unlock_irqrestore(zone, flags);
 
 	/*
 	 * If this zone is not populated, then it is not in zonelist.
@@ -2041,9 +2042,9 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
 	 * effectively stale; nobody should be touching them. Fixup the number
 	 * of isolated pageblocks, memory onlining will properly revert this.
 	 */
-	spin_lock_irqsave(&zone->lock, flags);
+	zone_lock_irqsave(zone, flags);
 	zone->nr_isolate_pageblock -= nr_pages / pageblock_nr_pages;
-	spin_unlock_irqrestore(&zone->lock, flags);
+	zone_unlock_irqrestore(zone, flags);
 
 	lru_cache_enable();
 	zone_pcp_enable(zone);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 61d983d23f55..6dd37621248b 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -32,6 +32,7 @@
 #include <linux/vmstat.h>
 #include <linux/kexec_handover.h>
 #include <linux/hugetlb.h>
+#include <linux/zone_lock.h>
 #include "internal.h"
 #include "slab.h"
 #include "shuffle.h"
@@ -1425,7 +1426,7 @@ static void __meminit zone_init_internals(struct zone *zone, enum zone_type idx,
 	zone_set_nid(zone, nid);
 	zone->name = zone_names[idx];
 	zone->zone_pgdat = NODE_DATA(nid);
-	spin_lock_init(&zone->lock);
+	zone_lock_init(zone);
 	zone_seqlock_init(zone);
 	zone_pcp_init(zone);
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fcc32737f451..c5d13fe9b79f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -54,6 +54,7 @@
 #include <linux/delayacct.h>
 #include <linux/cacheinfo.h>
 #include <linux/pgalloc_tag.h>
+#include <linux/zone_lock.h>
 #include <asm/div64.h>
 #include "internal.h"
 #include "shuffle.h"
@@ -1500,7 +1501,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	/* Ensure requested pindex is drained first. */
 	pindex = pindex - 1;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	zone_lock_irqsave(zone, flags);
 
 	while (count > 0) {
 		struct list_head *list;
@@ -1533,7 +1534,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 		} while (count > 0 && !list_empty(list));
 	}
 
-	spin_unlock_irqrestore(&zone->lock, flags);
+	zone_unlock_irqrestore(zone, flags);
 }
 
 /* Split a multi-block free page into its individual pageblocks. */
@@ -1577,12 +1578,12 @@ static void free_one_page(struct zone *zone, struct page *page,
 	unsigned long flags;
 
 	if (unlikely(fpi_flags & FPI_TRYLOCK)) {
-		if (!spin_trylock_irqsave(&zone->lock, flags)) {
+		if (!zone_trylock_irqsave(zone, flags)) {
 			add_page_to_zone_llist(zone, page, order);
 			return;
 		}
 	} else {
-		spin_lock_irqsave(&zone->lock, flags);
+		zone_lock_irqsave(zone, flags);
 	}
 
 	/* The lock succeeded. Process deferred pages. */
@@ -1600,7 +1601,7 @@ static void free_one_page(struct zone *zone, struct page *page,
 		}
 	}
 	split_large_buddy(zone, page, pfn, order, fpi_flags);
-	spin_unlock_irqrestore(&zone->lock, flags);
+	zone_unlock_irqrestore(zone, flags);
 
 	__count_vm_events(PGFREE, 1 << order);
 }
@@ -2553,10 +2554,10 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 	int i;
 
 	if (unlikely(alloc_flags & ALLOC_TRYLOCK)) {
-		if (!spin_trylock_irqsave(&zone->lock, flags))
+		if (!zone_trylock_irqsave(zone, flags))
 			return 0;
 	} else {
-		spin_lock_irqsave(&zone->lock, flags);
+		zone_lock_irqsave(zone, flags);
 	}
 	for (i = 0; i < count; ++i) {
 		struct page *page = __rmqueue(zone, order, migratetype,
@@ -2576,7 +2577,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 		 */
 		list_add_tail(&page->pcp_list, list);
 	}
-	spin_unlock_irqrestore(&zone->lock, flags);
+	zone_unlock_irqrestore(zone, flags);
 
 	return i;
 }
@@ -3246,10 +3247,10 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 	do {
 		page = NULL;
 		if (unlikely(alloc_flags & ALLOC_TRYLOCK)) {
-			if (!spin_trylock_irqsave(&zone->lock, flags))
+			if (!zone_trylock_irqsave(zone, flags))
 				return NULL;
 		} else {
-			spin_lock_irqsave(&zone->lock, flags);
+			zone_lock_irqsave(zone, flags);
 		}
 		if (alloc_flags & ALLOC_HIGHATOMIC)
 			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
@@ -3268,11 +3269,11 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 				page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
 
 			if (!page) {
-				spin_unlock_irqrestore(&zone->lock, flags);
+				zone_unlock_irqrestore(zone, flags);
 				return NULL;
 			}
 		}
-		spin_unlock_irqrestore(&zone->lock, flags);
+		zone_unlock_irqrestore(zone, flags);
 	} while (check_new_pages(page, order));
 
 	__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
@@ -3459,7 +3460,7 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
 	if (zone->nr_reserved_highatomic >= max_managed)
 		return;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	zone_lock_irqsave(zone, flags);
 
 	/* Recheck the nr_reserved_highatomic limit under the lock */
 	if (zone->nr_reserved_highatomic >= max_managed)
@@ -3481,7 +3482,7 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
 	}
 
 out_unlock:
-	spin_unlock_irqrestore(&zone->lock, flags);
+	zone_unlock_irqrestore(zone, flags);
 }
 
 /*
@@ -3514,7 +3515,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 					pageblock_nr_pages)
 			continue;
 
-		spin_lock_irqsave(&zone->lock, flags);
+		zone_lock_irqsave(zone, flags);
 		for (order = 0; order < NR_PAGE_ORDERS; order++) {
 			struct free_area *area = &(zone->free_area[order]);
 			unsigned long size;
@@ -3562,11 +3563,11 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 			 */
 			WARN_ON_ONCE(ret == -1);
 			if (ret > 0) {
-				spin_unlock_irqrestore(&zone->lock, flags);
+				zone_unlock_irqrestore(zone, flags);
 				return ret;
 			}
 		}
-		spin_unlock_irqrestore(&zone->lock, flags);
+		zone_unlock_irqrestore(zone, flags);
 	}
 
 	return false;
@@ -6446,7 +6447,7 @@ static void __setup_per_zone_wmarks(void)
 	for_each_zone(zone) {
 		u64 tmp;
 
-		spin_lock_irqsave(&zone->lock, flags);
+		zone_lock_irqsave(zone, flags);
 		tmp = (u64)pages_min * zone_managed_pages(zone);
 		tmp = div64_ul(tmp, lowmem_pages);
 		if (is_highmem(zone) || zone_idx(zone) == ZONE_MOVABLE) {
@@ -6487,7 +6488,7 @@ static void __setup_per_zone_wmarks(void)
 		zone->_watermark[WMARK_PROMO] = high_wmark_pages(zone) + tmp;
 		trace_mm_setup_per_zone_wmarks(zone);
 
-		spin_unlock_irqrestore(&zone->lock, flags);
+		zone_unlock_irqrestore(zone, flags);
 	}
 
 	/* update totalreserve_pages */
@@ -7257,7 +7258,7 @@ struct page *alloc_contig_frozen_pages_noprof(unsigned long nr_pages,
 	zonelist = node_zonelist(nid, gfp_mask);
 	for_each_zone_zonelist_nodemask(zone, z, zonelist,
 					gfp_zone(gfp_mask), nodemask) {
-		spin_lock_irqsave(&zone->lock, flags);
+		zone_lock_irqsave(zone, flags);
 
 		pfn = ALIGN(zone->zone_start_pfn, nr_pages);
 		while (zone_spans_last_pfn(zone, pfn, nr_pages)) {
@@ -7271,18 +7272,18 @@ struct page *alloc_contig_frozen_pages_noprof(unsigned long nr_pages,
 				 * allocation spinning on this lock, it may
 				 * win the race and cause allocation to fail.
 				 */
-				spin_unlock_irqrestore(&zone->lock, flags);
+				zone_unlock_irqrestore(zone, flags);
 				ret = alloc_contig_frozen_range_noprof(pfn,
 							pfn + nr_pages,
 							ACR_FLAGS_NONE,
 							gfp_mask);
 				if (!ret)
 					return pfn_to_page(pfn);
-				spin_lock_irqsave(&zone->lock, flags);
+				zone_lock_irqsave(zone, flags);
 			}
 			pfn += nr_pages;
 		}
-		spin_unlock_irqrestore(&zone->lock, flags);
+		zone_unlock_irqrestore(zone, flags);
 	}
 	/*
 	 * If we failed, retry the search, but treat regions with HugeTLB pages
@@ -7436,7 +7437,7 @@ unsigned long __offline_isolated_pages(unsigned long start_pfn,
 
 	offline_mem_sections(pfn, end_pfn);
 	zone = page_zone(pfn_to_page(pfn));
-	spin_lock_irqsave(&zone->lock, flags);
+	zone_lock_irqsave(zone, flags);
 	while (pfn < end_pfn) {
 		page = pfn_to_page(pfn);
 		/*
@@ -7466,7 +7467,7 @@ unsigned long __offline_isolated_pages(unsigned long start_pfn,
 		del_page_from_free_list(page, zone, order, MIGRATE_ISOLATE);
 		pfn += (1 << order);
 	}
-	spin_unlock_irqrestore(&zone->lock, flags);
+	zone_unlock_irqrestore(zone, flags);
 
 	return end_pfn - start_pfn - already_offline;
 }
@@ -7542,7 +7543,7 @@ bool take_page_off_buddy(struct page *page)
 	unsigned int order;
 	bool ret = false;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	zone_lock_irqsave(zone, flags);
 	for (order = 0; order < NR_PAGE_ORDERS; order++) {
 		struct page *page_head = page - (pfn & ((1 << order) - 1));
 		int page_order = buddy_order(page_head);
@@ -7563,7 +7564,7 @@ bool take_page_off_buddy(struct page *page)
 		if (page_count(page_head) > 0)
 			break;
 	}
-	spin_unlock_irqrestore(&zone->lock, flags);
+	zone_unlock_irqrestore(zone, flags);
 	return ret;
 }
 
@@ -7576,7 +7577,7 @@ bool put_page_back_buddy(struct page *page)
 	unsigned long flags;
 	bool ret = false;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	zone_lock_irqsave(zone, flags);
 	if (put_page_testzero(page)) {
 		unsigned long pfn = page_to_pfn(page);
 		int migratetype = get_pfnblock_migratetype(page, pfn);
@@ -7587,7 +7588,7 @@ bool put_page_back_buddy(struct page *page)
 			ret = true;
 		}
 	}
-	spin_unlock_irqrestore(&zone->lock, flags);
+	zone_unlock_irqrestore(zone, flags);
 
 	return ret;
 }
@@ -7636,7 +7637,7 @@ static void __accept_page(struct zone *zone, unsigned long *flags,
 	account_freepages(zone, -MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE);
 	__mod_zone_page_state(zone, NR_UNACCEPTED, -MAX_ORDER_NR_PAGES);
 	__ClearPageUnaccepted(page);
-	spin_unlock_irqrestore(&zone->lock, *flags);
+	zone_unlock_irqrestore(zone, *flags);
 
 	accept_memory(page_to_phys(page), PAGE_SIZE << MAX_PAGE_ORDER);
 
@@ -7648,9 +7649,9 @@ void accept_page(struct page *page)
 	struct zone *zone = page_zone(page);
 	unsigned long flags;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	zone_lock_irqsave(zone, flags);
 	if (!PageUnaccepted(page)) {
-		spin_unlock_irqrestore(&zone->lock, flags);
+		zone_unlock_irqrestore(zone, flags);
 		return;
 	}
 
@@ -7663,11 +7664,11 @@ static bool try_to_accept_memory_one(struct zone *zone)
 	unsigned long flags;
 	struct page *page;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	zone_lock_irqsave(zone, flags);
 	page = list_first_entry_or_null(&zone->unaccepted_pages,
 					struct page, lru);
 	if (!page) {
-		spin_unlock_irqrestore(&zone->lock, flags);
+		zone_unlock_irqrestore(zone, flags);
 		return false;
 	}
 
@@ -7724,12 +7725,12 @@ static bool __free_unaccepted(struct page *page)
 	if (!lazy_accept)
 		return false;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	zone_lock_irqsave(zone, flags);
 	list_add_tail(&page->lru, &zone->unaccepted_pages);
 	account_freepages(zone, MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE);
 	__mod_zone_page_state(zone, NR_UNACCEPTED, MAX_ORDER_NR_PAGES);
 	__SetPageUnaccepted(page);
-	spin_unlock_irqrestore(&zone->lock, flags);
+	zone_unlock_irqrestore(zone, flags);
 
 	return true;
 }
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index c48ff5c00244..56a272f38b66 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -10,6 +10,7 @@
 #include <linux/hugetlb.h>
 #include <linux/page_owner.h>
 #include <linux/migrate.h>
+#include <linux/zone_lock.h>
 #include "internal.h"
 
 #define CREATE_TRACE_POINTS
@@ -173,7 +174,7 @@ static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
 	if (PageUnaccepted(page))
 		accept_page(page);
 
-	spin_lock_irqsave(&zone->lock, flags);
+	zone_lock_irqsave(zone, flags);
 
 	/*
 	 * We assume the caller intended to SET migrate type to isolate.
@@ -181,7 +182,7 @@ static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
 	 * set it before us.
 	 */
 	if (is_migrate_isolate_page(page)) {
-		spin_unlock_irqrestore(&zone->lock, flags);
+		zone_unlock_irqrestore(zone, flags);
 		return -EBUSY;
 	}
 
@@ -200,15 +201,15 @@ static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
 			mode);
 	if (!unmovable) {
 		if (!pageblock_isolate_and_move_free_pages(zone, page)) {
-			spin_unlock_irqrestore(&zone->lock, flags);
+			zone_unlock_irqrestore(zone, flags);
 			return -EBUSY;
 		}
 		zone->nr_isolate_pageblock++;
-		spin_unlock_irqrestore(&zone->lock, flags);
+		zone_unlock_irqrestore(zone, flags);
 		return 0;
 	}
 
-	spin_unlock_irqrestore(&zone->lock, flags);
+	zone_unlock_irqrestore(zone, flags);
 	if (mode == PB_ISOLATE_MODE_MEM_OFFLINE) {
 		/*
 		 * printk() with zone->lock held will likely trigger a
@@ -229,7 +230,7 @@ static void unset_migratetype_isolate(struct page *page)
 	struct page *buddy;
 
 	zone = page_zone(page);
-	spin_lock_irqsave(&zone->lock, flags);
+	zone_lock_irqsave(zone, flags);
 	if (!is_migrate_isolate_page(page))
 		goto out;
 
@@ -280,7 +281,7 @@ static void unset_migratetype_isolate(struct page *page)
 	}
 	zone->nr_isolate_pageblock--;
 out:
-	spin_unlock_irqrestore(&zone->lock, flags);
+	zone_unlock_irqrestore(zone, flags);
 }
 
 static inline struct page *
@@ -641,9 +642,9 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 
 	/* Check all pages are free or marked as ISOLATED */
 	zone = page_zone(page);
-	spin_lock_irqsave(&zone->lock, flags);
+	zone_lock_irqsave(zone, flags);
 	pfn = __test_page_isolated_in_pageblock(start_pfn, end_pfn, mode);
-	spin_unlock_irqrestore(&zone->lock, flags);
+	zone_unlock_irqrestore(zone, flags);
 
 	ret = pfn < end_pfn ? -EBUSY : 0;
 
diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index f0042d5743af..37e54e16538b 100644
--- a/mm/page_reporting.c
+++ b/mm/page_reporting.c
@@ -7,6 +7,7 @@
 #include <linux/module.h>
 #include <linux/delay.h>
 #include <linux/scatterlist.h>
+#include <linux/zone_lock.h>
 
 #include "page_reporting.h"
 #include "internal.h"
@@ -161,7 +162,7 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 	if (list_empty(list))
 		return err;
 
-	spin_lock_irq(&zone->lock);
+	zone_lock_irq(zone);
 
 	/*
 	 * Limit how many calls we will be making to the page reporting
@@ -219,7 +220,7 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 			list_rotate_to_front(&page->lru, list);
 
 		/* release lock before waiting on report processing */
-		spin_unlock_irq(&zone->lock);
+		zone_unlock_irq(zone);
 
 		/* begin processing pages in local list */
 		err = prdev->report(prdev, sgl, PAGE_REPORTING_CAPACITY);
@@ -231,7 +232,7 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 		budget--;
 
 		/* reacquire zone lock and resume processing */
-		spin_lock_irq(&zone->lock);
+		zone_lock_irq(zone);
 
 		/* flush reported pages from the sg list */
 		page_reporting_drain(prdev, sgl, PAGE_REPORTING_CAPACITY, !err);
@@ -251,7 +252,7 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 	if (!list_entry_is_head(next, list, lru) && !list_is_first(&next->lru, list))
 		list_rotate_to_front(&next->lru, list);
 
-	spin_unlock_irq(&zone->lock);
+	zone_unlock_irq(zone);
 
 	return err;
 }
@@ -296,9 +297,9 @@ page_reporting_process_zone(struct page_reporting_dev_info *prdev,
 		err = prdev->report(prdev, sgl, leftover);
 
 		/* flush any remaining pages out from the last report */
-		spin_lock_irq(&zone->lock);
+		zone_lock_irq(zone);
 		page_reporting_drain(prdev, sgl, leftover, !err);
-		spin_unlock_irq(&zone->lock);
+		zone_unlock_irq(zone);
 	}
 
 	return err;
diff --git a/mm/show_mem.c b/mm/show_mem.c
index 24078ac3e6bc..245beca127af 100644
--- a/mm/show_mem.c
+++ b/mm/show_mem.c
@@ -14,6 +14,7 @@
 #include <linux/mmzone.h>
 #include <linux/swap.h>
 #include <linux/vmstat.h>
+#include <linux/zone_lock.h>
 
 #include "internal.h"
 #include "swap.h"
@@ -363,7 +364,7 @@ static void show_free_areas(unsigned int filter, nodemask_t *nodemask, int max_z
 		show_node(zone);
 		printk(KERN_CONT "%s: ", zone->name);
 
-		spin_lock_irqsave(&zone->lock, flags);
+		zone_lock_irqsave(zone, flags);
 		for (order = 0; order < NR_PAGE_ORDERS; order++) {
 			struct free_area *area = &zone->free_area[order];
 			int type;
@@ -377,7 +378,7 @@ static void show_free_areas(unsigned int filter, nodemask_t *nodemask, int max_z
 					types[order] |= 1 << type;
 			}
 		}
-		spin_unlock_irqrestore(&zone->lock, flags);
+		zone_unlock_irqrestore(zone, flags);
 		for (order = 0; order < NR_PAGE_ORDERS; order++) {
 			printk(KERN_CONT "%lu*%lukB ",
 			       nr[order], K(1UL) << order);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0fc9373e8251..b369e00e8415 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -58,6 +58,7 @@
 #include <linux/random.h>
 #include <linux/mmu_notifier.h>
 #include <linux/parser.h>
+#include <linux/zone_lock.h>
 
 #include <asm/tlbflush.h>
 #include <asm/div64.h>
@@ -7139,9 +7140,9 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
 
 			/* Increments are under the zone lock */
 			zone = pgdat->node_zones + i;
-			spin_lock_irqsave(&zone->lock, flags);
+			zone_lock_irqsave(zone, flags);
 			zone->watermark_boost -= min(zone->watermark_boost, zone_boosts[i]);
-			spin_unlock_irqrestore(&zone->lock, flags);
+			zone_unlock_irqrestore(zone, flags);
 		}
 
 		/*
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 86b14b0f77b5..299b461a6b4b 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -28,6 +28,7 @@
 #include <linux/mm_inline.h>
 #include <linux/page_owner.h>
 #include <linux/sched/isolation.h>
+#include <linux/zone_lock.h>
 
 #include "internal.h"
 
@@ -1535,10 +1536,10 @@ static void walk_zones_in_node(struct seq_file *m, pg_data_t *pgdat,
 			continue;
 
 		if (!nolock)
-			spin_lock_irqsave(&zone->lock, flags);
+			zone_lock_irqsave(zone, flags);
 		print(m, pgdat, zone);
 		if (!nolock)
-			spin_unlock_irqrestore(&zone->lock, flags);
+			zone_unlock_irqrestore(zone, flags);
 	}
 }
 #endif
@@ -1603,9 +1604,9 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
 				}
 			}
 			seq_printf(m, "%s%6lu ", overflow ? ">" : "", freecount);
-			spin_unlock_irq(&zone->lock);
+			zone_unlock_irq(zone);
 			cond_resched();
-			spin_lock_irq(&zone->lock);
+			zone_lock_irq(zone);
 		}
 		seq_putc(m, '\n');
 	}
-- 
2.47.3




* [PATCH v3 3/5] mm: convert compaction to zone lock wrappers
  2026-02-26 18:26 [PATCH v3 0/5] mm: zone lock tracepoint instrumentation Dmitry Ilvokhin
  2026-02-26 18:26 ` [PATCH v3 1/5] mm: introduce zone lock wrappers Dmitry Ilvokhin
  2026-02-26 18:26 ` [PATCH v3 2/5] mm: convert zone lock users to wrappers Dmitry Ilvokhin
@ 2026-02-26 18:26 ` Dmitry Ilvokhin
  2026-02-26 19:07   ` Shakeel Butt
  2026-02-26 18:26 ` [PATCH v3 4/5] mm: rename zone->lock to zone->_lock Dmitry Ilvokhin
  2026-02-26 18:26 ` [PATCH v3 5/5] mm: add tracepoints for zone lock Dmitry Ilvokhin
  4 siblings, 1 reply; 14+ messages in thread
From: Dmitry Ilvokhin @ 2026-02-26 18:26 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Axel Rasmussen, Yuanchu Xie,
	Wei Xu, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	Brendan Jackman, Johannes Weiner, Zi Yan, Oscar Salvador,
	Qi Zheng, Shakeel Butt
  Cc: linux-kernel, linux-mm, linux-trace-kernel, linux-cxl,
	kernel-team, Benjamin Cheatham, Dmitry Ilvokhin

Compaction uses compact_lock_irqsave(), which currently operates
on a raw spinlock_t pointer so it can be used for both zone->lock
and lruvec->lru_lock. Since zone lock operations are now wrapped,
compact_lock_irqsave() can no longer directly operate on a
spinlock_t when the lock belongs to a zone.

Split the helper into compact_zone_lock_irqsave() and
compact_lruvec_lock_irqsave(), duplicating the small amount of
shared logic. As there are only two call sites and both statically
know the lock type, this avoids introducing additional abstraction
or runtime dispatch in the compaction path.

No functional change intended.

Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/compaction.c | 33 ++++++++++++++++++++++++---------
 1 file changed, 24 insertions(+), 9 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 47b26187a5df..9f7997e827bd 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -503,19 +503,36 @@ static bool test_and_set_skip(struct compact_control *cc, struct page *page)
  *
  * Always returns true which makes it easier to track lock state in callers.
  */
-static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
-						struct compact_control *cc)
-	__acquires(lock)
+static bool compact_zone_lock_irqsave(struct zone *zone,
+				      unsigned long *flags,
+				      struct compact_control *cc)
+__acquires(&zone->lock)
 {
 	/* Track if the lock is contended in async mode */
 	if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
-		if (spin_trylock_irqsave(lock, *flags))
+		if (zone_trylock_irqsave(zone, *flags))
 			return true;
 
 		cc->contended = true;
 	}
 
-	spin_lock_irqsave(lock, *flags);
+	zone_lock_irqsave(zone, *flags);
+	return true;
+}
+
+static bool compact_lruvec_lock_irqsave(struct lruvec *lruvec,
+					unsigned long *flags,
+					struct compact_control *cc)
+__acquires(&lruvec->lru_lock)
+{
+	if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
+		if (spin_trylock_irqsave(&lruvec->lru_lock, *flags))
+			return true;
+
+		cc->contended = true;
+	}
+
+	spin_lock_irqsave(&lruvec->lru_lock, *flags);
 	return true;
 }
 
@@ -531,7 +548,6 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
  * Returns true if compaction should abort due to fatal signal pending.
  * Returns false when compaction can continue.
  */
-
 static bool compact_unlock_should_abort(struct zone *zone,
 					unsigned long flags,
 					bool *locked,
@@ -616,8 +632,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 
 		/* If we already hold the lock, we can skip some rechecking. */
 		if (!locked) {
-			locked = compact_lock_irqsave(&cc->zone->lock,
-								&flags, cc);
+			locked = compact_zone_lock_irqsave(cc->zone, &flags, cc);
 
 			/* Recheck this is a buddy page under lock */
 			if (!PageBuddy(page))
@@ -1163,7 +1178,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			if (locked)
 				unlock_page_lruvec_irqrestore(locked, flags);
 
-			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
+			compact_lruvec_lock_irqsave(lruvec, &flags, cc);
 			locked = lruvec;
 
 			lruvec_memcg_debug(lruvec, folio);
-- 
2.47.3




* [PATCH v3 4/5] mm: rename zone->lock to zone->_lock
  2026-02-26 18:26 [PATCH v3 0/5] mm: zone lock tracepoint instrumentation Dmitry Ilvokhin
                   ` (2 preceding siblings ...)
  2026-02-26 18:26 ` [PATCH v3 3/5] mm: convert compaction to zone lock wrappers Dmitry Ilvokhin
@ 2026-02-26 18:26 ` Dmitry Ilvokhin
  2026-02-26 19:09   ` Shakeel Butt
                     ` (2 more replies)
  2026-02-26 18:26 ` [PATCH v3 5/5] mm: add tracepoints for zone lock Dmitry Ilvokhin
  4 siblings, 3 replies; 14+ messages in thread
From: Dmitry Ilvokhin @ 2026-02-26 18:26 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Axel Rasmussen, Yuanchu Xie,
	Wei Xu, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	Brendan Jackman, Johannes Weiner, Zi Yan, Oscar Salvador,
	Qi Zheng, Shakeel Butt
  Cc: linux-kernel, linux-mm, linux-trace-kernel, linux-cxl,
	kernel-team, Benjamin Cheatham, Dmitry Ilvokhin

This intentionally breaks direct users of zone->lock at compile time so
all call sites are converted to the zone lock wrappers. Without the
rename, present and future out-of-tree code could continue using
spin_lock(&zone->lock) and bypass the wrappers and tracing
infrastructure.

No functional change intended.

Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 include/linux/mmzone.h    |  7 +++++--
 include/linux/zone_lock.h | 12 ++++++------
 mm/compaction.c           |  4 ++--
 mm/internal.h             |  2 +-
 mm/page_alloc.c           | 16 ++++++++--------
 mm/page_isolation.c       |  4 ++--
 mm/page_owner.c           |  2 +-
 7 files changed, 25 insertions(+), 22 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 3e51190a55e4..32bca655fce5 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1009,8 +1009,11 @@ struct zone {
 	/* zone flags, see below */
 	unsigned long		flags;
 
-	/* Primarily protects free_area */
-	spinlock_t		lock;
+	/*
+	 * Primarily protects free_area. Should be accessed via zone_lock_*
+	 * helpers.
+	 */
+	spinlock_t		_lock;
 
 	/* Pages to be freed when next trylock succeeds */
 	struct llist_head	trylock_free_pages;
diff --git a/include/linux/zone_lock.h b/include/linux/zone_lock.h
index c531e26280e6..5ce1aa38d500 100644
--- a/include/linux/zone_lock.h
+++ b/include/linux/zone_lock.h
@@ -7,32 +7,32 @@
 
 static inline void zone_lock_init(struct zone *zone)
 {
-	spin_lock_init(&zone->lock);
+	spin_lock_init(&zone->_lock);
 }
 
 #define zone_lock_irqsave(zone, flags)				\
 do {								\
-	spin_lock_irqsave(&(zone)->lock, flags);		\
+	spin_lock_irqsave(&(zone)->_lock, flags);		\
 } while (0)
 
 #define zone_trylock_irqsave(zone, flags)			\
 ({								\
-	spin_trylock_irqsave(&(zone)->lock, flags);		\
+	spin_trylock_irqsave(&(zone)->_lock, flags);		\
 })
 
 static inline void zone_unlock_irqrestore(struct zone *zone, unsigned long flags)
 {
-	spin_unlock_irqrestore(&zone->lock, flags);
+	spin_unlock_irqrestore(&zone->_lock, flags);
 }
 
 static inline void zone_lock_irq(struct zone *zone)
 {
-	spin_lock_irq(&zone->lock);
+	spin_lock_irq(&zone->_lock);
 }
 
 static inline void zone_unlock_irq(struct zone *zone)
 {
-	spin_unlock_irq(&zone->lock);
+	spin_unlock_irq(&zone->_lock);
 }
 
 #endif /* _LINUX_ZONE_LOCK_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index 9f7997e827bd..aed5bf468fd3 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -506,7 +506,7 @@ static bool test_and_set_skip(struct compact_control *cc, struct page *page)
 static bool compact_zone_lock_irqsave(struct zone *zone,
 				      unsigned long *flags,
 				      struct compact_control *cc)
-__acquires(&zone->lock)
+__acquires(&zone->_lock)
 {
 	/* Track if the lock is contended in async mode */
 	if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
@@ -1402,7 +1402,7 @@ static bool suitable_migration_target(struct compact_control *cc,
 		int order = cc->order > 0 ? cc->order : pageblock_order;
 
 		/*
-		 * We are checking page_order without zone->lock taken. But
+		 * We are checking page_order without zone->_lock taken. But
 		 * the only small danger is that we skip a potentially suitable
 		 * pageblock, so it's not worth to check order for valid range.
 		 */
diff --git a/mm/internal.h b/mm/internal.h
index cb0af847d7d9..6cb06e21ce15 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -710,7 +710,7 @@ static inline unsigned int buddy_order(struct page *page)
  * (d) a page and its buddy are in the same zone.
  *
  * For recording whether a page is in the buddy system, we set PageBuddy.
- * Setting, clearing, and testing PageBuddy is serialized by zone->lock.
+ * Setting, clearing, and testing PageBuddy is serialized by zone->_lock.
  *
  * For recording page's order, we use page_private(page).
  */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5d13fe9b79f..56ca27a07a62 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -815,7 +815,7 @@ compaction_capture(struct capture_control *capc, struct page *page,
 static inline void account_freepages(struct zone *zone, int nr_pages,
 				     int migratetype)
 {
-	lockdep_assert_held(&zone->lock);
+	lockdep_assert_held(&zone->_lock);
 
 	if (is_migrate_isolate(migratetype))
 		return;
@@ -2473,7 +2473,7 @@ enum rmqueue_mode {
 
 /*
  * Do the hard work of removing an element from the buddy allocator.
- * Call me with the zone->lock already held.
+ * Call me with the zone->_lock already held.
  */
 static __always_inline struct page *
 __rmqueue(struct zone *zone, unsigned int order, int migratetype,
@@ -2501,7 +2501,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 	 * fallbacks modes with increasing levels of fragmentation risk.
 	 *
 	 * The fallback logic is expensive and rmqueue_bulk() calls in
-	 * a loop with the zone->lock held, meaning the freelists are
+	 * a loop with the zone->_lock held, meaning the freelists are
 	 * not subject to any outside changes. Remember in *mode where
 	 * we found pay dirt, to save us the search on the next call.
 	 */
@@ -3203,7 +3203,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
 	struct zone *zone = page_zone(page);
 
 	/* zone lock should be held when this function is called */
-	lockdep_assert_held(&zone->lock);
+	lockdep_assert_held(&zone->_lock);
 
 	/* Return isolated page to tail of freelist. */
 	__free_one_page(page, page_to_pfn(page), zone, order, mt,
@@ -7086,7 +7086,7 @@ int alloc_contig_frozen_range_noprof(unsigned long start, unsigned long end,
 	 * pages.  Because of this, we reserve the bigger range and
 	 * once this is done free the pages we are not interested in.
 	 *
-	 * We don't have to hold zone->lock here because the pages are
+	 * We don't have to hold zone->_lock here because the pages are
 	 * isolated thus they won't get removed from buddy.
 	 */
 	outer_start = find_large_buddy(start);
@@ -7655,7 +7655,7 @@ void accept_page(struct page *page)
 		return;
 	}
 
-	/* Unlocks zone->lock */
+	/* Unlocks zone->_lock */
 	__accept_page(zone, &flags, page);
 }
 
@@ -7672,7 +7672,7 @@ static bool try_to_accept_memory_one(struct zone *zone)
 		return false;
 	}
 
-	/* Unlocks zone->lock */
+	/* Unlocks zone->_lock */
 	__accept_page(zone, &flags, page);
 
 	return true;
@@ -7813,7 +7813,7 @@ struct page *alloc_frozen_pages_nolock_noprof(gfp_t gfp_flags, int nid, unsigned
 
 	/*
 	 * Best effort allocation from percpu free list.
-	 * If it's empty attempt to spin_trylock zone->lock.
+	 * If it's empty attempt to spin_trylock zone->_lock.
 	 */
 	page = get_page_from_freelist(alloc_gfp, order, alloc_flags, &ac);
 
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 56a272f38b66..78b58dae2015 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -212,7 +212,7 @@ static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
 	zone_unlock_irqrestore(zone, flags);
 	if (mode == PB_ISOLATE_MODE_MEM_OFFLINE) {
 		/*
-		 * printk() with zone->lock held will likely trigger a
+		 * printk() with zone->_lock held will likely trigger a
 		 * lockdep splat, so defer it here.
 		 */
 		dump_page(unmovable, "unmovable page");
@@ -553,7 +553,7 @@ void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn)
 /*
  * Test all pages in the range is free(means isolated) or not.
  * all pages in [start_pfn...end_pfn) must be in the same zone.
- * zone->lock must be held before call this.
+ * zone->_lock must be held before call this.
  *
  * Returns the last tested pfn.
  */
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 8178e0be557f..54a4ba63b14f 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -799,7 +799,7 @@ static void init_pages_in_zone(struct zone *zone)
 				continue;
 
 			/*
-			 * To avoid having to grab zone->lock, be a little
+			 * To avoid having to grab zone->_lock, be a little
 			 * careful when reading buddy page order. The only
 			 * danger is that we skip too much and potentially miss
 			 * some early allocated pages, which is better than
-- 
2.47.3




* [PATCH v3 5/5] mm: add tracepoints for zone lock
  2026-02-26 18:26 [PATCH v3 0/5] mm: zone lock tracepoint instrumentation Dmitry Ilvokhin
                   ` (3 preceding siblings ...)
  2026-02-26 18:26 ` [PATCH v3 4/5] mm: rename zone->lock to zone->_lock Dmitry Ilvokhin
@ 2026-02-26 18:26 ` Dmitry Ilvokhin
  2026-02-26 19:14   ` Shakeel Butt
  4 siblings, 1 reply; 14+ messages in thread
From: Dmitry Ilvokhin @ 2026-02-26 18:26 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Axel Rasmussen, Yuanchu Xie,
	Wei Xu, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	Brendan Jackman, Johannes Weiner, Zi Yan, Oscar Salvador,
	Qi Zheng, Shakeel Butt
  Cc: linux-kernel, linux-mm, linux-trace-kernel, linux-cxl,
	kernel-team, Benjamin Cheatham, Dmitry Ilvokhin

Add tracepoint instrumentation to zone lock acquire/release operations
via the previously introduced wrappers.

The implementation follows the mmap_lock tracepoint pattern: a
lightweight inline helper checks whether the tracepoint is enabled and
calls into an out-of-line helper when tracing is active. When
CONFIG_TRACING is disabled, helpers compile to empty inline stubs.

The fast path is unaffected when tracing is disabled.

Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 MAINTAINERS                      |  2 +
 include/linux/zone_lock.h        | 64 +++++++++++++++++++++++++++++++-
 include/trace/events/zone_lock.h | 64 ++++++++++++++++++++++++++++++++
 mm/Makefile                      |  2 +-
 mm/zone_lock.c                   | 31 ++++++++++++++++
 5 files changed, 161 insertions(+), 2 deletions(-)
 create mode 100644 include/trace/events/zone_lock.h
 create mode 100644 mm/zone_lock.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 61e3d1f5bf43..b5aa2bb5d2ba 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -16681,6 +16681,7 @@ F:	include/linux/ptdump.h
 F:	include/linux/vmpressure.h
 F:	include/linux/vmstat.h
 F:	include/linux/zone_lock.h
+F:	include/trace/events/zone_lock.h
 F:	kernel/fork.c
 F:	mm/Kconfig
 F:	mm/debug.c
@@ -16700,6 +16701,7 @@ F:	mm/sparse.c
 F:	mm/util.c
 F:	mm/vmpressure.c
 F:	mm/vmstat.c
+F:	mm/zone_lock.c
 N:	include/linux/page[-_]*
 
 MEMORY MANAGEMENT - EXECMEM
diff --git a/include/linux/zone_lock.h b/include/linux/zone_lock.h
index 5ce1aa38d500..f32ff0fae266 100644
--- a/include/linux/zone_lock.h
+++ b/include/linux/zone_lock.h
@@ -4,6 +4,53 @@
 
 #include <linux/mmzone.h>
 #include <linux/spinlock.h>
+#include <linux/tracepoint-defs.h>
+
+DECLARE_TRACEPOINT(zone_lock_start_locking);
+DECLARE_TRACEPOINT(zone_lock_acquire_returned);
+DECLARE_TRACEPOINT(zone_lock_released);
+
+#ifdef CONFIG_TRACING
+
+void __zone_lock_do_trace_start_locking(struct zone *zone);
+void __zone_lock_do_trace_acquire_returned(struct zone *zone, bool success);
+void __zone_lock_do_trace_released(struct zone *zone);
+
+static inline void __zone_lock_trace_start_locking(struct zone *zone)
+{
+	if (tracepoint_enabled(zone_lock_start_locking))
+		__zone_lock_do_trace_start_locking(zone);
+}
+
+static inline void __zone_lock_trace_acquire_returned(struct zone *zone,
+						      bool success)
+{
+	if (tracepoint_enabled(zone_lock_acquire_returned))
+		__zone_lock_do_trace_acquire_returned(zone, success);
+}
+
+static inline void __zone_lock_trace_released(struct zone *zone)
+{
+	if (tracepoint_enabled(zone_lock_released))
+		__zone_lock_do_trace_released(zone);
+}
+
+#else /* !CONFIG_TRACING */
+
+static inline void __zone_lock_trace_start_locking(struct zone *zone)
+{
+}
+
+static inline void __zone_lock_trace_acquire_returned(struct zone *zone,
+						      bool success)
+{
+}
+
+static inline void __zone_lock_trace_released(struct zone *zone)
+{
+}
+
+#endif /* CONFIG_TRACING */
 
 static inline void zone_lock_init(struct zone *zone)
 {
@@ -12,26 +59,41 @@ static inline void zone_lock_init(struct zone *zone)
 
 #define zone_lock_irqsave(zone, flags)				\
 do {								\
+	bool success = true;					\
+								\
+	__zone_lock_trace_start_locking(zone);			\
 	spin_lock_irqsave(&(zone)->_lock, flags);		\
+	__zone_lock_trace_acquire_returned(zone, success);	\
 } while (0)
 
 #define zone_trylock_irqsave(zone, flags)			\
 ({								\
-	spin_trylock_irqsave(&(zone)->_lock, flags);		\
+	bool success;						\
+								\
+	__zone_lock_trace_start_locking(zone);			\
+	success = spin_trylock_irqsave(&(zone)->_lock, flags);	\
+	__zone_lock_trace_acquire_returned(zone, success);	\
+	success;						\
 })
 
 static inline void zone_unlock_irqrestore(struct zone *zone, unsigned long flags)
 {
+	__zone_lock_trace_released(zone);
 	spin_unlock_irqrestore(&zone->_lock, flags);
 }
 
 static inline void zone_lock_irq(struct zone *zone)
 {
+	bool success = true;
+
+	__zone_lock_trace_start_locking(zone);
 	spin_lock_irq(&zone->_lock);
+	__zone_lock_trace_acquire_returned(zone, success);
 }
 
 static inline void zone_unlock_irq(struct zone *zone)
 {
+	__zone_lock_trace_released(zone);
 	spin_unlock_irq(&zone->_lock);
 }
 
diff --git a/include/trace/events/zone_lock.h b/include/trace/events/zone_lock.h
new file mode 100644
index 000000000000..3df82a8c0160
--- /dev/null
+++ b/include/trace/events/zone_lock.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM zone_lock
+
+#if !defined(_TRACE_ZONE_LOCK_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_ZONE_LOCK_H
+
+#include <linux/tracepoint.h>
+#include <linux/types.h>
+
+struct zone;
+
+DECLARE_EVENT_CLASS(zone_lock,
+
+	TP_PROTO(struct zone *zone),
+
+	TP_ARGS(zone),
+
+	TP_STRUCT__entry(
+		__field(struct zone *, zone)
+	),
+
+	TP_fast_assign(
+		__entry->zone = zone;
+	),
+
+	TP_printk("zone=%p", __entry->zone)
+);
+
+#define DEFINE_ZONE_LOCK_EVENT(name)			\
+	DEFINE_EVENT(zone_lock, name,			\
+		TP_PROTO(struct zone *zone),		\
+		TP_ARGS(zone))
+
+DEFINE_ZONE_LOCK_EVENT(zone_lock_start_locking);
+DEFINE_ZONE_LOCK_EVENT(zone_lock_released);
+
+TRACE_EVENT(zone_lock_acquire_returned,
+
+	TP_PROTO(struct zone *zone, bool success),
+
+	TP_ARGS(zone, success),
+
+	TP_STRUCT__entry(
+		__field(struct zone *, zone)
+		__field(bool, success)
+	),
+
+	TP_fast_assign(
+		__entry->zone = zone;
+		__entry->success = success;
+	),
+
+	TP_printk(
+		"zone=%p success=%s",
+		__entry->zone,
+		__entry->success ? "true" : "false"
+	)
+);
+
+#endif /* _TRACE_ZONE_LOCK_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
diff --git a/mm/Makefile b/mm/Makefile
index 8ad2ab08244e..ffd06cf7a04e 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -55,7 +55,7 @@ obj-y			:= filemap.o mempool.o oom_kill.o fadvise.o \
 			   mm_init.o percpu.o slab_common.o \
 			   compaction.o show_mem.o \
 			   interval_tree.o list_lru.o workingset.o \
-			   debug.o gup.o mmap_lock.o vma_init.o $(mmu-y)
+			   debug.o gup.o mmap_lock.o zone_lock.o vma_init.o $(mmu-y)
 
 # Give 'page_alloc' its own module-parameter namespace
 page-alloc-y := page_alloc.o
diff --git a/mm/zone_lock.c b/mm/zone_lock.c
new file mode 100644
index 000000000000..f647fd2aca48
--- /dev/null
+++ b/mm/zone_lock.c
@@ -0,0 +1,31 @@
+// SPDX-License-Identifier: GPL-2.0
+#define CREATE_TRACE_POINTS
+#include <trace/events/zone_lock.h>
+
+#include <linux/zone_lock.h>
+
+EXPORT_TRACEPOINT_SYMBOL(zone_lock_start_locking);
+EXPORT_TRACEPOINT_SYMBOL(zone_lock_acquire_returned);
+EXPORT_TRACEPOINT_SYMBOL(zone_lock_released);
+
+#ifdef CONFIG_TRACING
+
+void __zone_lock_do_trace_start_locking(struct zone *zone)
+{
+	trace_zone_lock_start_locking(zone);
+}
+EXPORT_SYMBOL(__zone_lock_do_trace_start_locking);
+
+void __zone_lock_do_trace_acquire_returned(struct zone *zone, bool success)
+{
+	trace_zone_lock_acquire_returned(zone, success);
+}
+EXPORT_SYMBOL(__zone_lock_do_trace_acquire_returned);
+
+void __zone_lock_do_trace_released(struct zone *zone)
+{
+	trace_zone_lock_released(zone);
+}
+EXPORT_SYMBOL(__zone_lock_do_trace_released);
+
+#endif /* CONFIG_TRACING */
-- 
2.47.3



^ permalink raw reply	[flat|nested] 14+ messages in thread
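As a rough illustration of how the new events might be consumed, the sketch below pairs each successful acquire with the next release on the same zone to estimate lock hold times from ftrace text output. The event names, the `zone=%p` field, and `success=true`/`false` come from the TP_printk formats in the patch above; the surrounding trace line layout (task-pid, [cpu], irq flags, timestamp) is the standard ftrace text format and is an assumption here, as are the sample task names and zone pointers.

```python
import re

# Matches the assumed ftrace text layout:
#   task-pid [cpu] flags timestamp: event: zone=<ptr> ...
# Only the event names and the zone= field are taken from the patch.
LINE = re.compile(
    r"\s*\S+\s+\[\d+\]\s+\S+\s+(?P<ts>\d+\.\d+): "
    r"(?P<event>zone_lock_\w+): zone=(?P<zone>0x[0-9a-f]+)"
)

def zone_lock_hold_times(lines):
    """Return (zone, hold_seconds) pairs from an iterable of trace lines,
    pairing each successful acquire with the next release of that zone."""
    acquired = {}   # zone pointer -> timestamp of last successful acquire
    holds = []
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue
        ts, event, zone = float(m["ts"]), m["event"], m["zone"]
        if event == "zone_lock_acquire_returned" and "success=true" in line:
            acquired[zone] = ts
        elif event == "zone_lock_released" and zone in acquired:
            holds.append((zone, ts - acquired.pop(zone)))
    return holds
```

Feeding this the trace_pipe output while the three events are enabled would give per-zone hold times that can then be histogrammed to separate short contention bursts from pathological long holders, which is the analysis gap the series describes.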

* Re: [PATCH v3 3/5] mm: convert compaction to zone lock wrappers
  2026-02-26 18:26 ` [PATCH v3 3/5] mm: convert compaction to zone lock wrappers Dmitry Ilvokhin
@ 2026-02-26 19:07   ` Shakeel Butt
  0 siblings, 0 replies; 14+ messages in thread
From: Shakeel Butt @ 2026-02-26 19:07 UTC (permalink / raw)
  To: Dmitry Ilvokhin
  Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Axel Rasmussen, Yuanchu Xie,
	Wei Xu, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	Brendan Jackman, Johannes Weiner, Zi Yan, Oscar Salvador,
	Qi Zheng, linux-kernel, linux-mm, linux-trace-kernel, linux-cxl,
	kernel-team, Benjamin Cheatham

On Thu, Feb 26, 2026 at 06:26:20PM +0000, Dmitry Ilvokhin wrote:
> Compaction uses compact_lock_irqsave(), which currently operates
> on a raw spinlock_t pointer so it can be used for both zone->lock
> and lruvec->lru_lock. Since zone lock operations are now wrapped,
> compact_lock_irqsave() can no longer directly operate on a
> spinlock_t when the lock belongs to a zone.
> 
> Split the helper into compact_zone_lock_irqsave() and
> compact_lruvec_lock_irqsave(), duplicating the small amount of
> shared logic. As there are only two call sites and both statically
> know the lock type, this avoids introducing additional abstraction
> or runtime dispatch in the compaction path.
> 
> No functional change intended.
> 
> Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>

Acked-by: Shakeel Butt <shakeel.butt@linux.dev>




* Re: [PATCH v3 4/5] mm: rename zone->lock to zone->_lock
  2026-02-26 18:26 ` [PATCH v3 4/5] mm: rename zone->lock to zone->_lock Dmitry Ilvokhin
@ 2026-02-26 19:09   ` Shakeel Butt
  2026-02-26 21:48   ` kernel test robot
  2026-02-26 23:13   ` kernel test robot
  2 siblings, 0 replies; 14+ messages in thread
From: Shakeel Butt @ 2026-02-26 19:09 UTC (permalink / raw)
  To: Dmitry Ilvokhin
  Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Axel Rasmussen, Yuanchu Xie,
	Wei Xu, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	Brendan Jackman, Johannes Weiner, Zi Yan, Oscar Salvador,
	Qi Zheng, linux-kernel, linux-mm, linux-trace-kernel, linux-cxl,
	kernel-team, Benjamin Cheatham

On Thu, Feb 26, 2026 at 06:26:21PM +0000, Dmitry Ilvokhin wrote:
> This intentionally breaks direct users of zone->lock at compile time so
> all call sites are converted to the zone lock wrappers. Without the
> rename, present and future out-of-tree code could continue using
> spin_lock(&zone->lock) and bypass the wrappers and tracing
> infrastructure.
> 
> No functional change intended.
> 
> Suggested-by: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>

Acked-by: Shakeel Butt <shakeel.butt@linux.dev>



* Re: [PATCH v3 5/5] mm: add tracepoints for zone lock
  2026-02-26 18:26 ` [PATCH v3 5/5] mm: add tracepoints for zone lock Dmitry Ilvokhin
@ 2026-02-26 19:14   ` Shakeel Butt
  2026-02-26 21:25     ` Andrew Morton
  0 siblings, 1 reply; 14+ messages in thread
From: Shakeel Butt @ 2026-02-26 19:14 UTC (permalink / raw)
  To: Dmitry Ilvokhin
  Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Axel Rasmussen, Yuanchu Xie,
	Wei Xu, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	Brendan Jackman, Johannes Weiner, Zi Yan, Oscar Salvador,
	Qi Zheng, linux-kernel, linux-mm, linux-trace-kernel, linux-cxl,
	kernel-team, Benjamin Cheatham

On Thu, Feb 26, 2026 at 06:26:22PM +0000, Dmitry Ilvokhin wrote:
> Add tracepoint instrumentation to zone lock acquire/release operations
> via the previously introduced wrappers.
> 
> The implementation follows the mmap_lock tracepoint pattern: a
> lightweight inline helper checks whether the tracepoint is enabled and
> calls into an out-of-line helper when tracing is active. When
> CONFIG_TRACING is disabled, helpers compile to empty inline stubs.
> 
> The fast path is unaffected when tracing is disabled.
> 
> Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>

One nit below other than that:

Acked-by: Shakeel Butt <shakeel.butt@linux.dev>

[...]
> --- /dev/null
> +++ b/mm/zone_lock.c
> @@ -0,0 +1,31 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#define CREATE_TRACE_POINTS
> +#include <trace/events/zone_lock.h>
> +
> +#include <linux/zone_lock.h>
> +
> +EXPORT_TRACEPOINT_SYMBOL(zone_lock_start_locking);
> +EXPORT_TRACEPOINT_SYMBOL(zone_lock_acquire_returned);
> +EXPORT_TRACEPOINT_SYMBOL(zone_lock_released);
> +
> +#ifdef CONFIG_TRACING
> +
> +void __zone_lock_do_trace_start_locking(struct zone *zone)
> +{
> +	trace_zone_lock_start_locking(zone);
> +}
> +EXPORT_SYMBOL(__zone_lock_do_trace_start_locking);

No reason to not have these as EXPORT_SYMBOL_GPL (& below)

> +
> +void __zone_lock_do_trace_acquire_returned(struct zone *zone, bool success)
> +{
> +	trace_zone_lock_acquire_returned(zone, success);
> +}
> +EXPORT_SYMBOL(__zone_lock_do_trace_acquire_returned);
> +
> +void __zone_lock_do_trace_released(struct zone *zone)
> +{
> +	trace_zone_lock_released(zone);
> +}
> +EXPORT_SYMBOL(__zone_lock_do_trace_released);
> +
> +#endif /* CONFIG_TRACING */
> -- 
> 2.47.3
> 



* Re: [PATCH v3 5/5] mm: add tracepoints for zone lock
  2026-02-26 19:14   ` Shakeel Butt
@ 2026-02-26 21:25     ` Andrew Morton
  2026-02-26 21:31       ` Shakeel Butt
  0 siblings, 1 reply; 14+ messages in thread
From: Andrew Morton @ 2026-02-26 21:25 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Dmitry Ilvokhin, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Axel Rasmussen, Yuanchu Xie,
	Wei Xu, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	Brendan Jackman, Johannes Weiner, Zi Yan, Oscar Salvador,
	Qi Zheng, linux-kernel, linux-mm, linux-trace-kernel, linux-cxl,
	kernel-team, Benjamin Cheatham

On Thu, 26 Feb 2026 11:14:52 -0800 Shakeel Butt <shakeel.butt@linux.dev> wrote:

> On Thu, Feb 26, 2026 at 06:26:22PM +0000, Dmitry Ilvokhin wrote:
> > Add tracepoint instrumentation to zone lock acquire/release operations
> > via the previously introduced wrappers.
> > 
> > The implementation follows the mmap_lock tracepoint pattern: a
> > lightweight inline helper checks whether the tracepoint is enabled and
> > calls into an out-of-line helper when tracing is active. When
> > CONFIG_TRACING is disabled, helpers compile to empty inline stubs.
> > 
> > The fast path is unaffected when tracing is disabled.
> > 
> > Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
> 
> ...
>
> > +void __zone_lock_do_trace_start_locking(struct zone *zone)
> > +{
> > +	trace_zone_lock_start_locking(zone);
> > +}
> > +EXPORT_SYMBOL(__zone_lock_do_trace_start_locking);
> 
> No reason to not have these as EXPORT_SYMBOL_GPL (& below)

Do we need the exports at all?

include/linux/mmzone.h
include/linux/zone_lock.h
include/trace/events/zone_lock.h
MAINTAINERS
mm/compaction.c
mm/internal.h
mm/Makefile
mm/memory_hotplug.c
mm/mm_init.c
mm/page_alloc.c
mm/page_isolation.c
mm/page_owner.c
mm/page_reporting.c
mm/show_mem.c
mm/vmscan.c
mm/vmstat.c
mm/zone_lock.c




* Re: [PATCH v3 5/5] mm: add tracepoints for zone lock
  2026-02-26 21:25     ` Andrew Morton
@ 2026-02-26 21:31       ` Shakeel Butt
  0 siblings, 0 replies; 14+ messages in thread
From: Shakeel Butt @ 2026-02-26 21:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Dmitry Ilvokhin, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Axel Rasmussen, Yuanchu Xie,
	Wei Xu, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	Brendan Jackman, Johannes Weiner, Zi Yan, Oscar Salvador,
	Qi Zheng, linux-kernel, linux-mm, linux-trace-kernel, linux-cxl,
	kernel-team, Benjamin Cheatham

On Thu, Feb 26, 2026 at 01:25:01PM -0800, Andrew Morton wrote:
> On Thu, 26 Feb 2026 11:14:52 -0800 Shakeel Butt <shakeel.butt@linux.dev> wrote:
> 
> > On Thu, Feb 26, 2026 at 06:26:22PM +0000, Dmitry Ilvokhin wrote:
> > > Add tracepoint instrumentation to zone lock acquire/release operations
> > > via the previously introduced wrappers.
> > > 
> > > The implementation follows the mmap_lock tracepoint pattern: a
> > > lightweight inline helper checks whether the tracepoint is enabled and
> > > calls into an out-of-line helper when tracing is active. When
> > > CONFIG_TRACING is disabled, helpers compile to empty inline stubs.
> > > 
> > > The fast path is unaffected when tracing is disabled.
> > > 
> > > Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
> > 
> > ...
> >
> > > +void __zone_lock_do_trace_start_locking(struct zone *zone)
> > > +{
> > > +	trace_zone_lock_start_locking(zone);
> > > +}
> > > +EXPORT_SYMBOL(__zone_lock_do_trace_start_locking);
> > 
> > No reason to not have these as EXPORT_SYMBOL_GPL (& below)
> 
> Do we need the exports at all?

Very good point, and we don't. I think this just copies the mmap_lock
tracepoint wrappers, which may need the exports because some drivers
take the mmap_lock.

Dmitry, please confirm (test) and let us know.



* Re: [PATCH v3 4/5] mm: rename zone->lock to zone->_lock
  2026-02-26 18:26 ` [PATCH v3 4/5] mm: rename zone->lock to zone->_lock Dmitry Ilvokhin
  2026-02-26 19:09   ` Shakeel Butt
@ 2026-02-26 21:48   ` kernel test robot
  2026-02-26 22:08     ` Andrew Morton
  2026-02-26 23:13   ` kernel test robot
  2 siblings, 1 reply; 14+ messages in thread
From: kernel test robot @ 2026-02-26 21:48 UTC (permalink / raw)
  To: Dmitry Ilvokhin, Andrew Morton, David Hildenbrand,
	Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Axel Rasmussen, Yuanchu Xie,
	Wei Xu, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	Brendan Jackman, Johannes Weiner, Zi Yan, Oscar Salvador,
	Qi Zheng, Shakeel Butt
  Cc: oe-kbuild-all, Linux Memory Management List, linux-kernel,
	linux-trace-kernel, linux-cxl, kernel-team, Benjamin Cheatham,
	Dmitry Ilvokhin

Hi Dmitry,

kernel test robot noticed the following build errors:

[auto build test ERROR on linus/master]
[also build test ERROR on v7.0-rc1 next-20260226]
[cannot apply to akpm-mm/mm-everything rppt-memblock/for-next rppt-memblock/fixes]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Dmitry-Ilvokhin/mm-introduce-zone-lock-wrappers/20260227-022914
base:   linus/master
patch link:    https://lore.kernel.org/r/1221b8e7fa9f5694f3c4e411f01581b5aba9bc63.1772129168.git.d%40ilvokhin.com
patch subject: [PATCH v3 4/5] mm: rename zone->lock to zone->_lock
config: microblaze-randconfig-r073-20260227 (https://download.01.org/0day-ci/archive/20260227/202602270508.8MKXotxZ-lkp@intel.com/config)
compiler: microblaze-linux-gcc (GCC) 11.5.0
smatch version: v0.5.0-8994-gd50c5a4c
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260227/202602270508.8MKXotxZ-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602270508.8MKXotxZ-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from include/linux/mmzone.h:8,
                    from include/linux/gfp.h:7,
                    from include/linux/mm.h:8,
                    from mm/shuffle.c:4:
   mm/shuffle.c: In function '__shuffle_zone':
>> mm/shuffle.c:88:31: error: 'struct zone' has no member named 'lock'; did you mean '_lock'?
      88 |         spin_lock_irqsave(&z->lock, flags);
         |                               ^~~~
   include/linux/spinlock.h:244:48: note: in definition of macro 'raw_spin_lock_irqsave'
     244 |                 flags = _raw_spin_lock_irqsave(lock);   \
         |                                                ^~~~
   mm/shuffle.c:88:9: note: in expansion of macro 'spin_lock_irqsave'
      88 |         spin_lock_irqsave(&z->lock, flags);
         |         ^~~~~~~~~~~~~~~~~
   mm/shuffle.c:141:52: error: 'struct zone' has no member named 'lock'; did you mean '_lock'?
     141 |                         spin_unlock_irqrestore(&z->lock, flags);
         |                                                    ^~~~
         |                                                    _lock
   In file included from include/linux/mmzone.h:8,
                    from include/linux/gfp.h:7,
                    from include/linux/mm.h:8,
                    from mm/shuffle.c:4:
   mm/shuffle.c:143:47: error: 'struct zone' has no member named 'lock'; did you mean '_lock'?
     143 |                         spin_lock_irqsave(&z->lock, flags);
         |                                               ^~~~
   include/linux/spinlock.h:244:48: note: in definition of macro 'raw_spin_lock_irqsave'
     244 |                 flags = _raw_spin_lock_irqsave(lock);   \
         |                                                ^~~~
   mm/shuffle.c:143:25: note: in expansion of macro 'spin_lock_irqsave'
     143 |                         spin_lock_irqsave(&z->lock, flags);
         |                         ^~~~~~~~~~~~~~~~~
   mm/shuffle.c:146:36: error: 'struct zone' has no member named 'lock'; did you mean '_lock'?
     146 |         spin_unlock_irqrestore(&z->lock, flags);
         |                                    ^~~~
         |                                    _lock


vim +88 mm/shuffle.c

e900a918b0984e Dan Williams            2019-05-14    3  
e900a918b0984e Dan Williams            2019-05-14   @4  #include <linux/mm.h>
e900a918b0984e Dan Williams            2019-05-14    5  #include <linux/init.h>
e900a918b0984e Dan Williams            2019-05-14    6  #include <linux/mmzone.h>
e900a918b0984e Dan Williams            2019-05-14    7  #include <linux/random.h>
e900a918b0984e Dan Williams            2019-05-14    8  #include <linux/moduleparam.h>
e900a918b0984e Dan Williams            2019-05-14    9  #include "internal.h"
e900a918b0984e Dan Williams            2019-05-14   10  #include "shuffle.h"
e900a918b0984e Dan Williams            2019-05-14   11  
e900a918b0984e Dan Williams            2019-05-14   12  DEFINE_STATIC_KEY_FALSE(page_alloc_shuffle_key);
e900a918b0984e Dan Williams            2019-05-14   13  
e900a918b0984e Dan Williams            2019-05-14   14  static bool shuffle_param;
e900a918b0984e Dan Williams            2019-05-14   15  
85a34107eba913 Liu Shixin              2022-09-09   16  static __meminit int shuffle_param_set(const char *val,
e900a918b0984e Dan Williams            2019-05-14   17  		const struct kernel_param *kp)
e900a918b0984e Dan Williams            2019-05-14   18  {
85a34107eba913 Liu Shixin              2022-09-09   19  	if (param_set_bool(val, kp))
85a34107eba913 Liu Shixin              2022-09-09   20  		return -EINVAL;
85a34107eba913 Liu Shixin              2022-09-09   21  	if (*(bool *)kp->arg)
839195352d8235 David Hildenbrand       2020-08-06   22  		static_branch_enable(&page_alloc_shuffle_key);
e900a918b0984e Dan Williams            2019-05-14   23  	return 0;
e900a918b0984e Dan Williams            2019-05-14   24  }
85a34107eba913 Liu Shixin              2022-09-09   25  
85a34107eba913 Liu Shixin              2022-09-09   26  static const struct kernel_param_ops shuffle_param_ops = {
85a34107eba913 Liu Shixin              2022-09-09   27  	.set = shuffle_param_set,
85a34107eba913 Liu Shixin              2022-09-09   28  	.get = param_get_bool,
85a34107eba913 Liu Shixin              2022-09-09   29  };
85a34107eba913 Liu Shixin              2022-09-09   30  module_param_cb(shuffle, &shuffle_param_ops, &shuffle_param, 0400);
e900a918b0984e Dan Williams            2019-05-14   31  
e900a918b0984e Dan Williams            2019-05-14   32  /*
e900a918b0984e Dan Williams            2019-05-14   33   * For two pages to be swapped in the shuffle, they must be free (on a
e900a918b0984e Dan Williams            2019-05-14   34   * 'free_area' lru), have the same order, and have the same migratetype.
e900a918b0984e Dan Williams            2019-05-14   35   */
4a93025cbe4a0b David Hildenbrand       2020-08-06   36  static struct page * __meminit shuffle_valid_page(struct zone *zone,
4a93025cbe4a0b David Hildenbrand       2020-08-06   37  						  unsigned long pfn, int order)
e900a918b0984e Dan Williams            2019-05-14   38  {
4a93025cbe4a0b David Hildenbrand       2020-08-06   39  	struct page *page = pfn_to_online_page(pfn);
e900a918b0984e Dan Williams            2019-05-14   40  
e900a918b0984e Dan Williams            2019-05-14   41  	/*
e900a918b0984e Dan Williams            2019-05-14   42  	 * Given we're dealing with randomly selected pfns in a zone we
e900a918b0984e Dan Williams            2019-05-14   43  	 * need to ask questions like...
e900a918b0984e Dan Williams            2019-05-14   44  	 */
e900a918b0984e Dan Williams            2019-05-14   45  
4a93025cbe4a0b David Hildenbrand       2020-08-06   46  	/* ... is the page managed by the buddy? */
4a93025cbe4a0b David Hildenbrand       2020-08-06   47  	if (!page)
e900a918b0984e Dan Williams            2019-05-14   48  		return NULL;
e900a918b0984e Dan Williams            2019-05-14   49  
4a93025cbe4a0b David Hildenbrand       2020-08-06   50  	/* ... is the page assigned to the same zone? */
4a93025cbe4a0b David Hildenbrand       2020-08-06   51  	if (page_zone(page) != zone)
e900a918b0984e Dan Williams            2019-05-14   52  		return NULL;
e900a918b0984e Dan Williams            2019-05-14   53  
e900a918b0984e Dan Williams            2019-05-14   54  	/* ...is the page free and currently on a free_area list? */
e900a918b0984e Dan Williams            2019-05-14   55  	if (!PageBuddy(page))
e900a918b0984e Dan Williams            2019-05-14   56  		return NULL;
e900a918b0984e Dan Williams            2019-05-14   57  
e900a918b0984e Dan Williams            2019-05-14   58  	/*
e900a918b0984e Dan Williams            2019-05-14   59  	 * ...is the page on the same list as the page we will
e900a918b0984e Dan Williams            2019-05-14   60  	 * shuffle it with?
e900a918b0984e Dan Williams            2019-05-14   61  	 */
ab130f9108dcf2 Matthew Wilcox (Oracle  2020-10-15   62) 	if (buddy_order(page) != order)
e900a918b0984e Dan Williams            2019-05-14   63  		return NULL;
e900a918b0984e Dan Williams            2019-05-14   64  
e900a918b0984e Dan Williams            2019-05-14   65  	return page;
e900a918b0984e Dan Williams            2019-05-14   66  }
e900a918b0984e Dan Williams            2019-05-14   67  
e900a918b0984e Dan Williams            2019-05-14   68  /*
e900a918b0984e Dan Williams            2019-05-14   69   * Fisher-Yates shuffle the freelist which prescribes iterating through an
e900a918b0984e Dan Williams            2019-05-14   70   * array, pfns in this case, and randomly swapping each entry with another in
e900a918b0984e Dan Williams            2019-05-14   71   * the span, end_pfn - start_pfn.
e900a918b0984e Dan Williams            2019-05-14   72   *
e900a918b0984e Dan Williams            2019-05-14   73   * To keep the implementation simple it does not attempt to correct for sources
e900a918b0984e Dan Williams            2019-05-14   74   * of bias in the distribution, like modulo bias or pseudo-random number
e900a918b0984e Dan Williams            2019-05-14   75   * generator bias. I.e. the expectation is that this shuffling raises the bar
e900a918b0984e Dan Williams            2019-05-14   76   * for attacks that exploit the predictability of page allocations, but need not
e900a918b0984e Dan Williams            2019-05-14   77   * be a perfect shuffle.
e900a918b0984e Dan Williams            2019-05-14   78   */
e900a918b0984e Dan Williams            2019-05-14   79  #define SHUFFLE_RETRY 10
e900a918b0984e Dan Williams            2019-05-14   80  void __meminit __shuffle_zone(struct zone *z)
e900a918b0984e Dan Williams            2019-05-14   81  {
e900a918b0984e Dan Williams            2019-05-14   82  	unsigned long i, flags;
e900a918b0984e Dan Williams            2019-05-14   83  	unsigned long start_pfn = z->zone_start_pfn;
e900a918b0984e Dan Williams            2019-05-14   84  	unsigned long end_pfn = zone_end_pfn(z);
e900a918b0984e Dan Williams            2019-05-14   85  	const int order = SHUFFLE_ORDER;
e900a918b0984e Dan Williams            2019-05-14   86  	const int order_pages = 1 << order;
e900a918b0984e Dan Williams            2019-05-14   87  
e900a918b0984e Dan Williams            2019-05-14  @88  	spin_lock_irqsave(&z->lock, flags);
e900a918b0984e Dan Williams            2019-05-14   89  	start_pfn = ALIGN(start_pfn, order_pages);
e900a918b0984e Dan Williams            2019-05-14   90  	for (i = start_pfn; i < end_pfn; i += order_pages) {
e900a918b0984e Dan Williams            2019-05-14   91  		unsigned long j;
e900a918b0984e Dan Williams            2019-05-14   92  		int migratetype, retry;
e900a918b0984e Dan Williams            2019-05-14   93  		struct page *page_i, *page_j;
e900a918b0984e Dan Williams            2019-05-14   94  
e900a918b0984e Dan Williams            2019-05-14   95  		/*
e900a918b0984e Dan Williams            2019-05-14   96  		 * We expect page_i, in the sub-range of a zone being added
e900a918b0984e Dan Williams            2019-05-14   97  		 * (@start_pfn to @end_pfn), to more likely be valid compared to
e900a918b0984e Dan Williams            2019-05-14   98  		 * page_j randomly selected in the span @zone_start_pfn to
e900a918b0984e Dan Williams            2019-05-14   99  		 * @spanned_pages.
e900a918b0984e Dan Williams            2019-05-14  100  		 */
4a93025cbe4a0b David Hildenbrand       2020-08-06  101  		page_i = shuffle_valid_page(z, i, order);
e900a918b0984e Dan Williams            2019-05-14  102  		if (!page_i)
e900a918b0984e Dan Williams            2019-05-14  103  			continue;
e900a918b0984e Dan Williams            2019-05-14  104  
e900a918b0984e Dan Williams            2019-05-14  105  		for (retry = 0; retry < SHUFFLE_RETRY; retry++) {
e900a918b0984e Dan Williams            2019-05-14  106  			/*
e900a918b0984e Dan Williams            2019-05-14  107  			 * Pick a random order aligned page in the zone span as
e900a918b0984e Dan Williams            2019-05-14  108  			 * a swap target. If the selected pfn is a hole, retry
e900a918b0984e Dan Williams            2019-05-14  109  			 * up to SHUFFLE_RETRY attempts find a random valid pfn
e900a918b0984e Dan Williams            2019-05-14  110  			 * in the zone.
e900a918b0984e Dan Williams            2019-05-14  111  			 */
e900a918b0984e Dan Williams            2019-05-14  112  			j = z->zone_start_pfn +
e900a918b0984e Dan Williams            2019-05-14  113  				ALIGN_DOWN(get_random_long() % z->spanned_pages,
e900a918b0984e Dan Williams            2019-05-14  114  						order_pages);
4a93025cbe4a0b David Hildenbrand       2020-08-06  115  			page_j = shuffle_valid_page(z, j, order);
e900a918b0984e Dan Williams            2019-05-14  116  			if (page_j && page_j != page_i)
e900a918b0984e Dan Williams            2019-05-14  117  				break;
e900a918b0984e Dan Williams            2019-05-14  118  		}
e900a918b0984e Dan Williams            2019-05-14  119  		if (retry >= SHUFFLE_RETRY) {
e900a918b0984e Dan Williams            2019-05-14  120  			pr_debug("%s: failed to swap %#lx\n", __func__, i);
e900a918b0984e Dan Williams            2019-05-14  121  			continue;
e900a918b0984e Dan Williams            2019-05-14  122  		}
e900a918b0984e Dan Williams            2019-05-14  123  
e900a918b0984e Dan Williams            2019-05-14  124  		/*
e900a918b0984e Dan Williams            2019-05-14  125  		 * Each migratetype corresponds to its own list, make sure the
e900a918b0984e Dan Williams            2019-05-14  126  		 * types match otherwise we're moving pages to lists where they
e900a918b0984e Dan Williams            2019-05-14  127  		 * do not belong.
e900a918b0984e Dan Williams            2019-05-14  128  		 */
e900a918b0984e Dan Williams            2019-05-14  129  		migratetype = get_pageblock_migratetype(page_i);
e900a918b0984e Dan Williams            2019-05-14  130  		if (get_pageblock_migratetype(page_j) != migratetype) {
e900a918b0984e Dan Williams            2019-05-14  131  			pr_debug("%s: migratetype mismatch %#lx\n", __func__, i);
e900a918b0984e Dan Williams            2019-05-14  132  			continue;
e900a918b0984e Dan Williams            2019-05-14  133  		}
e900a918b0984e Dan Williams            2019-05-14  134  
e900a918b0984e Dan Williams            2019-05-14  135  		list_swap(&page_i->lru, &page_j->lru);
e900a918b0984e Dan Williams            2019-05-14  136  
e900a918b0984e Dan Williams            2019-05-14  137  		pr_debug("%s: swap: %#lx -> %#lx\n", __func__, i, j);
e900a918b0984e Dan Williams            2019-05-14  138  
e900a918b0984e Dan Williams            2019-05-14  139  		/* take it easy on the zone lock */
e900a918b0984e Dan Williams            2019-05-14  140  		if ((i % (100 * order_pages)) == 0) {
e900a918b0984e Dan Williams            2019-05-14  141  			spin_unlock_irqrestore(&z->lock, flags);
e900a918b0984e Dan Williams            2019-05-14  142  			cond_resched();
e900a918b0984e Dan Williams            2019-05-14  143  			spin_lock_irqsave(&z->lock, flags);
e900a918b0984e Dan Williams            2019-05-14  144  		}
e900a918b0984e Dan Williams            2019-05-14  145  	}
e900a918b0984e Dan Williams            2019-05-14  146  	spin_unlock_irqrestore(&z->lock, flags);
e900a918b0984e Dan Williams            2019-05-14  147  }
e900a918b0984e Dan Williams            2019-05-14  148  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v3 4/5] mm: rename zone->lock to zone->_lock
  2026-02-26 21:48   ` kernel test robot
@ 2026-02-26 22:08     ` Andrew Morton
  0 siblings, 0 replies; 14+ messages in thread
From: Andrew Morton @ 2026-02-26 22:08 UTC (permalink / raw)
  To: kernel test robot
  Cc: Dmitry Ilvokhin, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Axel Rasmussen, Yuanchu Xie,
	Wei Xu, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	Brendan Jackman, Johannes Weiner, Zi Yan, Oscar Salvador,
	Qi Zheng, Shakeel Butt, oe-kbuild-all,
	Linux Memory Management List, linux-kernel, linux-trace-kernel,
	linux-cxl, kernel-team, Benjamin Cheatham

On Fri, 27 Feb 2026 05:48:05 +0800 kernel test robot <lkp@intel.com> wrote:

> Hi Dmitry,
> 
> kernel test robot noticed the following build errors:
> 
>    mm/shuffle.c: In function '__shuffle_zone':

yep, thanks.  And kernel/power/snapshot.c.  I've added fixups.



* Re: [PATCH v3 4/5] mm: rename zone->lock to zone->_lock
  2026-02-26 18:26 ` [PATCH v3 4/5] mm: rename zone->lock to zone->_lock Dmitry Ilvokhin
  2026-02-26 19:09   ` Shakeel Butt
  2026-02-26 21:48   ` kernel test robot
@ 2026-02-26 23:13   ` kernel test robot
  2 siblings, 0 replies; 14+ messages in thread
From: kernel test robot @ 2026-02-26 23:13 UTC (permalink / raw)
  To: Dmitry Ilvokhin, Andrew Morton, David Hildenbrand,
	Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Axel Rasmussen, Yuanchu Xie,
	Wei Xu, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	Brendan Jackman, Johannes Weiner, Zi Yan, Oscar Salvador,
	Qi Zheng, Shakeel Butt
  Cc: oe-kbuild-all, Linux Memory Management List, linux-kernel,
	linux-trace-kernel, linux-cxl, kernel-team, Benjamin Cheatham,
	Dmitry Ilvokhin

Hi Dmitry,

kernel test robot noticed the following build errors:

[auto build test ERROR on linus/master]
[also build test ERROR on v7.0-rc1 next-20260226]
[cannot apply to akpm-mm/mm-everything rppt-memblock/for-next rppt-memblock/fixes]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting the patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Dmitry-Ilvokhin/mm-introduce-zone-lock-wrappers/20260227-022914
base:   linus/master
patch link:    https://lore.kernel.org/r/1221b8e7fa9f5694f3c4e411f01581b5aba9bc63.1772129168.git.d%40ilvokhin.com
patch subject: [PATCH v3 4/5] mm: rename zone->lock to zone->_lock
config: x86_64-defconfig (https://download.01.org/0day-ci/archive/20260227/202602270740.0RL1uwsV-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260227/202602270740.0RL1uwsV-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602270740.0RL1uwsV-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from include/linux/sched.h:37,
                    from include/linux/percpu.h:12,
                    from arch/x86/include/asm/msr.h:16,
                    from arch/x86/include/asm/tsc.h:11,
                    from arch/x86/include/asm/timex.h:6,
                    from include/linux/timex.h:67,
                    from include/linux/time32.h:13,
                    from include/linux/time.h:60,
                    from include/linux/stat.h:19,
                    from include/linux/module.h:13,
                    from kernel/power/snapshot.c:14:
   kernel/power/snapshot.c: In function 'mark_free_pages':
>> kernel/power/snapshot.c:1254:34: error: 'struct zone' has no member named 'lock'; did you mean '_lock'?
    1254 |         spin_lock_irqsave(&zone->lock, flags);
         |                                  ^~~~
   include/linux/spinlock.h:244:48: note: in definition of macro 'raw_spin_lock_irqsave'
     244 |                 flags = _raw_spin_lock_irqsave(lock);   \
         |                                                ^~~~
   kernel/power/snapshot.c:1254:9: note: in expansion of macro 'spin_lock_irqsave'
    1254 |         spin_lock_irqsave(&zone->lock, flags);
         |         ^~~~~~~~~~~~~~~~~
   kernel/power/snapshot.c:1287:39: error: 'struct zone' has no member named 'lock'; did you mean '_lock'?
    1287 |         spin_unlock_irqrestore(&zone->lock, flags);
         |                                       ^~~~
         |                                       _lock


vim +1254 kernel/power/snapshot.c

31a1b9d7fe768d Kefeng Wang     2023-05-16  1243  
31a1b9d7fe768d Kefeng Wang     2023-05-16  1244  static void mark_free_pages(struct zone *zone)
31a1b9d7fe768d Kefeng Wang     2023-05-16  1245  {
31a1b9d7fe768d Kefeng Wang     2023-05-16  1246  	unsigned long pfn, max_zone_pfn, page_count = WD_PAGE_COUNT;
31a1b9d7fe768d Kefeng Wang     2023-05-16  1247  	unsigned long flags;
31a1b9d7fe768d Kefeng Wang     2023-05-16  1248  	unsigned int order, t;
31a1b9d7fe768d Kefeng Wang     2023-05-16  1249  	struct page *page;
31a1b9d7fe768d Kefeng Wang     2023-05-16  1250  
31a1b9d7fe768d Kefeng Wang     2023-05-16  1251  	if (zone_is_empty(zone))
31a1b9d7fe768d Kefeng Wang     2023-05-16  1252  		return;
31a1b9d7fe768d Kefeng Wang     2023-05-16  1253  
31a1b9d7fe768d Kefeng Wang     2023-05-16 @1254  	spin_lock_irqsave(&zone->lock, flags);
31a1b9d7fe768d Kefeng Wang     2023-05-16  1255  
31a1b9d7fe768d Kefeng Wang     2023-05-16  1256  	max_zone_pfn = zone_end_pfn(zone);
312eca8a14c5f5 David Woodhouse 2025-04-23  1257  	for_each_valid_pfn(pfn, zone->zone_start_pfn, max_zone_pfn) {
31a1b9d7fe768d Kefeng Wang     2023-05-16  1258  		page = pfn_to_page(pfn);
31a1b9d7fe768d Kefeng Wang     2023-05-16  1259  
31a1b9d7fe768d Kefeng Wang     2023-05-16  1260  		if (!--page_count) {
31a1b9d7fe768d Kefeng Wang     2023-05-16  1261  			touch_nmi_watchdog();
31a1b9d7fe768d Kefeng Wang     2023-05-16  1262  			page_count = WD_PAGE_COUNT;
31a1b9d7fe768d Kefeng Wang     2023-05-16  1263  		}
31a1b9d7fe768d Kefeng Wang     2023-05-16  1264  
31a1b9d7fe768d Kefeng Wang     2023-05-16  1265  		if (page_zone(page) != zone)
31a1b9d7fe768d Kefeng Wang     2023-05-16  1266  			continue;
31a1b9d7fe768d Kefeng Wang     2023-05-16  1267  
31a1b9d7fe768d Kefeng Wang     2023-05-16  1268  		if (!swsusp_page_is_forbidden(page))
31a1b9d7fe768d Kefeng Wang     2023-05-16  1269  			swsusp_unset_page_free(page);
31a1b9d7fe768d Kefeng Wang     2023-05-16  1270  	}
31a1b9d7fe768d Kefeng Wang     2023-05-16  1271  
31a1b9d7fe768d Kefeng Wang     2023-05-16  1272  	for_each_migratetype_order(order, t) {
31a1b9d7fe768d Kefeng Wang     2023-05-16  1273  		list_for_each_entry(page,
31a1b9d7fe768d Kefeng Wang     2023-05-16  1274  				&zone->free_area[order].free_list[t], buddy_list) {
31a1b9d7fe768d Kefeng Wang     2023-05-16  1275  			unsigned long i;
31a1b9d7fe768d Kefeng Wang     2023-05-16  1276  
31a1b9d7fe768d Kefeng Wang     2023-05-16  1277  			pfn = page_to_pfn(page);
31a1b9d7fe768d Kefeng Wang     2023-05-16  1278  			for (i = 0; i < (1UL << order); i++) {
31a1b9d7fe768d Kefeng Wang     2023-05-16  1279  				if (!--page_count) {
31a1b9d7fe768d Kefeng Wang     2023-05-16  1280  					touch_nmi_watchdog();
31a1b9d7fe768d Kefeng Wang     2023-05-16  1281  					page_count = WD_PAGE_COUNT;
31a1b9d7fe768d Kefeng Wang     2023-05-16  1282  				}
31a1b9d7fe768d Kefeng Wang     2023-05-16  1283  				swsusp_set_page_free(pfn_to_page(pfn + i));
31a1b9d7fe768d Kefeng Wang     2023-05-16  1284  			}
31a1b9d7fe768d Kefeng Wang     2023-05-16  1285  		}
31a1b9d7fe768d Kefeng Wang     2023-05-16  1286  	}
31a1b9d7fe768d Kefeng Wang     2023-05-16  1287  	spin_unlock_irqrestore(&zone->lock, flags);
31a1b9d7fe768d Kefeng Wang     2023-05-16  1288  }
31a1b9d7fe768d Kefeng Wang     2023-05-16  1289  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki



end of thread, other threads:[~2026-02-26 23:15 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-26 18:26 [PATCH v3 0/5] mm: zone lock tracepoint instrumentation Dmitry Ilvokhin
2026-02-26 18:26 ` [PATCH v3 1/5] mm: introduce zone lock wrappers Dmitry Ilvokhin
2026-02-26 18:26 ` [PATCH v3 2/5] mm: convert zone lock users to wrappers Dmitry Ilvokhin
2026-02-26 18:26 ` [PATCH v3 3/5] mm: convert compaction to zone lock wrappers Dmitry Ilvokhin
2026-02-26 19:07   ` Shakeel Butt
2026-02-26 18:26 ` [PATCH v3 4/5] mm: rename zone->lock to zone->_lock Dmitry Ilvokhin
2026-02-26 19:09   ` Shakeel Butt
2026-02-26 21:48   ` kernel test robot
2026-02-26 22:08     ` Andrew Morton
2026-02-26 23:13   ` kernel test robot
2026-02-26 18:26 ` [PATCH v3 5/5] mm: add tracepoints for zone lock Dmitry Ilvokhin
2026-02-26 19:14   ` Shakeel Butt
2026-02-26 21:25     ` Andrew Morton
2026-02-26 21:31       ` Shakeel Butt
