linux-mm.kvack.org archive mirror
* [PATCH v3 0/7] mm: Introduce for_each_valid_pfn()
@ 2025-04-23  7:52 David Woodhouse
  2025-04-23  7:52 ` [PATCH v3 1/7] mm: Introduce for_each_valid_pfn() and use it from reserve_bootmem_region() David Woodhouse
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: David Woodhouse @ 2025-04-23  7:52 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Sauerwein, David, Anshuman Khandual,
	Ard Biesheuvel, Catalin Marinas, David Hildenbrand, Marc Zyngier,
	Mark Rutland, Mike Rapoport, Will Deacon, kvmarm,
	linux-arm-kernel, linux-kernel, linux-mm, Ruihan Li

There are cases where a naïve loop over a PFN range, calling pfn_valid() on
each one, is horribly inefficient. Ruihan Li reported the case where
memmap_init() iterates all the way from zero to a potentially large value
of ARCH_PFN_OFFSET, and we at Amazon found the reserve_bootmem_region()
case because it affects hypervisor live update. Others are more cosmetic.

By introducing a for_each_valid_pfn() helper, a lot of pointless calls to
pfn_valid() can be optimised away: the iterator skips immediately to the
next valid PFN, and also skips *all* checks within a valid (sub)region,
according to the granularity of the memory model in use.
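
For illustration, a typical conversion looks like this (do_something()
is just a placeholder here, not a function in the tree):

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		if (!pfn_valid(pfn))
			continue;
		do_something(pfn_to_page(pfn));
	}

becomes:

	for_each_valid_pfn(pfn, start_pfn, end_pfn)
		do_something(pfn_to_page(pfn));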

https://git.infradead.org/users/dwmw2/linux.git/shortlog/refs/heads/for_each_valid_pfn

v3: 
 • Fold the 'optimised' SPARSEMEM implementation into the original patch
 • Drop the use of (-1) as end marker, and use end_pfn instead.
 • Drop unused first_valid_pfn() helper for FLATMEM implementation
 • Add use case in memmap_init() from discussion at 
   https://lore.kernel.org/linux-mm/20250419122801.1752234-1-lrh2000@pku.edu.cn/

v2 [RFC]: https://lore.kernel.org/linux-mm/20250404155959.3442111-1-dwmw2@infradead.org/
 • Revised implementations with feedback from Mike
 • Add a few more use cases

v1 [RFC]: https://lore.kernel.org/linux-mm/20250402201841.3245371-1-dwmw2@infradead.org/
 • First proof of concept

David Woodhouse (7):
      mm: Introduce for_each_valid_pfn() and use it from reserve_bootmem_region()
      mm: Implement for_each_valid_pfn() for CONFIG_FLATMEM
      mm: Implement for_each_valid_pfn() for CONFIG_SPARSEMEM
      mm, PM: Use for_each_valid_pfn() in kernel/power/snapshot.c
      mm, x86: Use for_each_valid_pfn() from __ioremap_check_ram()
      mm: Use for_each_valid_pfn() in memory_hotplug
      mm/mm_init: Use for_each_valid_pfn() in init_unavailable_range()

 arch/x86/mm/ioremap.c              |  7 ++-
 include/asm-generic/memory_model.h | 26 ++++++++++-
 include/linux/mmzone.h             | 88 ++++++++++++++++++++++++++++++++++++++
 kernel/power/snapshot.c            | 42 +++++++++---------
 mm/memory_hotplug.c                |  8 +---
 mm/mm_init.c                       | 29 +++++--------
 6 files changed, 149 insertions(+), 51 deletions(-)




* [PATCH v3 1/7] mm: Introduce for_each_valid_pfn() and use it from reserve_bootmem_region()
  2025-04-23  7:52 [PATCH v3 0/7] mm: Introduce for_each_valid_pfn() David Woodhouse
@ 2025-04-23  7:52 ` David Woodhouse
  2025-04-23  7:52 ` [PATCH v3 2/7] mm: Implement for_each_valid_pfn() for CONFIG_FLATMEM David Woodhouse
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: David Woodhouse @ 2025-04-23  7:52 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Sauerwein, David, Anshuman Khandual,
	Ard Biesheuvel, Catalin Marinas, David Hildenbrand, Marc Zyngier,
	Mark Rutland, Mike Rapoport, Will Deacon, kvmarm,
	linux-arm-kernel, linux-kernel, linux-mm, Ruihan Li

From: David Woodhouse <dwmw@amazon.co.uk>

Especially since commit 9092d4f7a1f8 ("memblock: update initialization
of reserved pages"), the reserve_bootmem_region() function can spend a
significant amount of time iterating over every 4KiB PFN in a range,
calling pfn_valid() on each one, and ultimately doing absolutely nothing.

On a platform used for virtualization, with large NOMAP regions that
eventually get used for guest RAM, this leads to a significant increase
in steal time experienced during kexec for a live update.

Introduce for_each_valid_pfn() and use it from reserve_bootmem_region().
This implementation is precisely the same naïve loop that the function
used to have, but subsequent commits will provide optimised versions
for FLATMEM and SPARSEMEM, and this version will remain for those
architectures which provide their own pfn_valid() implementation,
until/unless they also provide a matching for_each_valid_pfn().
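
(An illustrative aside, not part of the change itself: because this
fallback expands to a for loop whose body is an if statement, an else
after an unbraced for_each_valid_pfn() body would bind to that hidden
if, so callers are safest always using braces:)

	for_each_valid_pfn(pfn, start_pfn, end_pfn) {
		/* do_something() is a placeholder, not a real function */
		do_something(pfn);
	}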

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
 include/linux/mmzone.h | 10 ++++++++++
 mm/mm_init.c           | 23 ++++++++++-------------
 2 files changed, 20 insertions(+), 13 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 6ccec1bf2896..230a29c2ed1a 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -2177,6 +2177,16 @@ void sparse_init(void);
 #define subsection_map_init(_pfn, _nr_pages) do {} while (0)
 #endif /* CONFIG_SPARSEMEM */
 
+/*
+ * Fallback case for when the architecture provides its own pfn_valid() but
+ * not a corresponding for_each_valid_pfn().
+ */
+#ifndef for_each_valid_pfn
+#define for_each_valid_pfn(_pfn, _start_pfn, _end_pfn)			\
+	for ((_pfn) = (_start_pfn); (_pfn) < (_end_pfn); (_pfn)++)	\
+		if (pfn_valid(_pfn))
+#endif
+
 #endif /* !__GENERATING_BOUNDS.H */
 #endif /* !__ASSEMBLY__ */
 #endif /* _LINUX_MMZONE_H */
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 9659689b8ace..41884f2155c4 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -777,22 +777,19 @@ static inline void init_deferred_page(unsigned long pfn, int nid)
 void __meminit reserve_bootmem_region(phys_addr_t start,
 				      phys_addr_t end, int nid)
 {
-	unsigned long start_pfn = PFN_DOWN(start);
-	unsigned long end_pfn = PFN_UP(end);
+	unsigned long pfn;
 
-	for (; start_pfn < end_pfn; start_pfn++) {
-		if (pfn_valid(start_pfn)) {
-			struct page *page = pfn_to_page(start_pfn);
+	for_each_valid_pfn (pfn, PFN_DOWN(start), PFN_UP(end)) {
+		struct page *page = pfn_to_page(pfn);
 
-			init_deferred_page(start_pfn, nid);
+		init_deferred_page(pfn, nid);
 
-			/*
-			 * no need for atomic set_bit because the struct
-			 * page is not visible yet so nobody should
-			 * access it yet.
-			 */
-			__SetPageReserved(page);
-		}
+		/*
+		 * no need for atomic set_bit because the struct
+		 * page is not visible yet so nobody should
+		 * access it yet.
+		 */
+		__SetPageReserved(page);
 	}
 }
 
-- 
2.49.0




* [PATCH v3 2/7] mm: Implement for_each_valid_pfn() for CONFIG_FLATMEM
  2025-04-23  7:52 [PATCH v3 0/7] mm: Introduce for_each_valid_pfn() David Woodhouse
  2025-04-23  7:52 ` [PATCH v3 1/7] mm: Introduce for_each_valid_pfn() and use it from reserve_bootmem_region() David Woodhouse
@ 2025-04-23  7:52 ` David Woodhouse
  2025-04-23  7:52 ` [PATCH v3 3/7] mm: Implement for_each_valid_pfn() for CONFIG_SPARSEMEM David Woodhouse
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: David Woodhouse @ 2025-04-23  7:52 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Sauerwein, David, Anshuman Khandual,
	Ard Biesheuvel, Catalin Marinas, David Hildenbrand, Marc Zyngier,
	Mark Rutland, Mike Rapoport, Will Deacon, kvmarm,
	linux-arm-kernel, linux-kernel, linux-mm, Ruihan Li

From: David Woodhouse <dwmw@amazon.co.uk>

In the FLATMEM case, the default pfn_valid() just checks that the PFN is
within the range [ ARCH_PFN_OFFSET .. ARCH_PFN_OFFSET + max_mapnr ).

The for_each_valid_pfn() macro can therefore be a simple for() loop
using those as min/max respectively.
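
As a rough worked example (numbers assumed purely for illustration):
with ARCH_PFN_OFFSET == 0x80 and max_mapnr == 0x100,

	for_each_valid_pfn(pfn, 0, 0x1000)
		do_something(pfn);	/* placeholder */

iterates pfn over [0x80, 0x180) and never needs to call pfn_valid().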

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
 include/asm-generic/memory_model.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/include/asm-generic/memory_model.h b/include/asm-generic/memory_model.h
index a3b5029aebbd..74d0077cc5fa 100644
--- a/include/asm-generic/memory_model.h
+++ b/include/asm-generic/memory_model.h
@@ -30,7 +30,15 @@ static inline int pfn_valid(unsigned long pfn)
 	return pfn >= pfn_offset && (pfn - pfn_offset) < max_mapnr;
 }
 #define pfn_valid pfn_valid
-#endif
+
+#ifndef for_each_valid_pfn
+#define for_each_valid_pfn(pfn, start_pfn, end_pfn)			 \
+	for ((pfn) = max_t(unsigned long, (start_pfn), ARCH_PFN_OFFSET); \
+	     (pfn) < min_t(unsigned long, (end_pfn),			 \
+			   ARCH_PFN_OFFSET + max_mapnr);		 \
+	     (pfn)++)
+#endif /* for_each_valid_pfn */
+#endif /* valid_pfn */
 
 #elif defined(CONFIG_SPARSEMEM_VMEMMAP)
 
-- 
2.49.0




* [PATCH v3 3/7] mm: Implement for_each_valid_pfn() for CONFIG_SPARSEMEM
  2025-04-23  7:52 [PATCH v3 0/7] mm: Introduce for_each_valid_pfn() David Woodhouse
  2025-04-23  7:52 ` [PATCH v3 1/7] mm: Introduce for_each_valid_pfn() and use it from reserve_bootmem_region() David Woodhouse
  2025-04-23  7:52 ` [PATCH v3 2/7] mm: Implement for_each_valid_pfn() for CONFIG_FLATMEM David Woodhouse
@ 2025-04-23  7:52 ` David Woodhouse
  2025-04-23 11:11   ` Mike Rapoport
  2025-04-23  7:52 ` [PATCH v3 4/7] mm, PM: Use for_each_valid_pfn() in kernel/power/snapshot.c David Woodhouse
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: David Woodhouse @ 2025-04-23  7:52 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Sauerwein, David, Anshuman Khandual,
	Ard Biesheuvel, Catalin Marinas, David Hildenbrand, Marc Zyngier,
	Mark Rutland, Mike Rapoport, Will Deacon, kvmarm,
	linux-arm-kernel, linux-kernel, linux-mm, Ruihan Li

From: David Woodhouse <dwmw@amazon.co.uk>

Implement for_each_valid_pfn() based on two helper functions.

The first_valid_pfn() function largely mirrors pfn_valid(), calling into
a pfn_section_first_valid() helper which is trivial for the !VMEMMAP case,
and in the VMEMMAP case will skip to the next subsection as needed.

Since next_valid_pfn() knows that its argument *is* a valid PFN, it
doesn't need to do any checking at all while iterating over the low bits
within a (sub)section mask; the whole (sub)section is either present or
not.

Note that the VMEMMAP version of pfn_section_first_valid() may return a
value *higher* than end_pfn when skipping to the next subsection, and
first_valid_pfn() happily returns that higher value. This is fine, since
the loop condition in for_each_valid_pfn() terminates the iteration as
soon as the PFN reaches end_pfn.
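
As a rough illustration (the layout is assumed purely for the example;
with 4KiB pages and VMEMMAP, a subsection covers 512 PFNs): suppose
PFNs 0x800-0x9ff form the only present subsection in the range
[0x700, 0x1000). Then:

	first_valid_pfn(0x700, 0x1000)  ->  0x800  (skips into the subsection)
	next_valid_pfn(0x800, 0x1000)   ->  0x801  (no section lookup at all)
	...
	next_valid_pfn(0x9ff, 0x1000)   -> 0x1000  (0xa00 is a subsection
	                                            boundary; the check runs
	                                            again and finds nothing)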

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Previous-revision-reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
 include/asm-generic/memory_model.h | 26 ++++++++--
 include/linux/mmzone.h             | 78 ++++++++++++++++++++++++++++++
 2 files changed, 99 insertions(+), 5 deletions(-)

diff --git a/include/asm-generic/memory_model.h b/include/asm-generic/memory_model.h
index 74d0077cc5fa..044536da3390 100644
--- a/include/asm-generic/memory_model.h
+++ b/include/asm-generic/memory_model.h
@@ -31,12 +31,28 @@ static inline int pfn_valid(unsigned long pfn)
 }
 #define pfn_valid pfn_valid
 
+static inline bool first_valid_pfn(unsigned long *pfn)
+{
+	/* avoid <linux/mm.h> include hell */
+	extern unsigned long max_mapnr;
+	unsigned long pfn_offset = ARCH_PFN_OFFSET;
+
+	if (*pfn < pfn_offset) {
+		*pfn = pfn_offset;
+		return true;
+	}
+
+	if ((*pfn - pfn_offset) < max_mapnr)
+		return true;
+
+	return false;
+}
+
 #ifndef for_each_valid_pfn
-#define for_each_valid_pfn(pfn, start_pfn, end_pfn)			 \
-	for ((pfn) = max_t(unsigned long, (start_pfn), ARCH_PFN_OFFSET); \
-	     (pfn) < min_t(unsigned long, (end_pfn),			 \
-			   ARCH_PFN_OFFSET + max_mapnr);		 \
-	     (pfn)++)
+#define for_each_valid_pfn(pfn, start_pfn, end_pfn)			       \
+	for (pfn = max_t(unsigned long, start_pfn, ARCH_PFN_OFFSET);	\
+	     pfn < min_t(unsigned long, end_pfn, ARCH_PFN_OFFSET + max_mapnr); \
+			 pfn++)
 #endif /* for_each_valid_pfn */
 #endif /* valid_pfn */
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 230a29c2ed1a..dab1d31477d7 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -2075,11 +2075,37 @@ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
 
 	return usage ? test_bit(idx, usage->subsection_map) : 0;
 }
+
+static inline bool pfn_section_first_valid(struct mem_section *ms, unsigned long *pfn)
+{
+	struct mem_section_usage *usage = READ_ONCE(ms->usage);
+	int idx = subsection_map_index(*pfn);
+	unsigned long bit;
+
+	if (!usage)
+		return false;
+
+	if (test_bit(idx, usage->subsection_map))
+		return true;
+
+	/* Find the next subsection that exists */
+	bit = find_next_bit(usage->subsection_map, SUBSECTIONS_PER_SECTION, idx);
+	if (bit == SUBSECTIONS_PER_SECTION)
+		return false;
+
+	*pfn = (*pfn & PAGE_SECTION_MASK) + (bit * PAGES_PER_SUBSECTION);
+	return true;
+}
 #else
 static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
 {
 	return 1;
 }
+
+static inline bool pfn_section_first_valid(struct mem_section *ms, unsigned long *pfn)
+{
+	return true;
+}
 #endif
 
 void sparse_init_early_section(int nid, struct page *map, unsigned long pnum,
@@ -2128,6 +2154,58 @@ static inline int pfn_valid(unsigned long pfn)
 
 	return ret;
 }
+
+/* Returns end_pfn or higher if no valid PFN remaining in range */
+static inline unsigned long first_valid_pfn(unsigned long pfn, unsigned long end_pfn)
+{
+	unsigned long nr = pfn_to_section_nr(pfn);
+
+	rcu_read_lock_sched();
+
+	while (nr <= __highest_present_section_nr && pfn < end_pfn) {
+		struct mem_section *ms = __pfn_to_section(pfn);
+
+		if (valid_section(ms) &&
+		    (early_section(ms) || pfn_section_first_valid(ms, &pfn))) {
+			rcu_read_unlock_sched();
+			return pfn;
+		}
+
+		/* Nothing left in this section? Skip to next section */
+		nr++;
+		pfn = section_nr_to_pfn(nr);
+	}
+
+	rcu_read_unlock_sched();
+	return end_pfn;
+}
+
+static inline unsigned long next_valid_pfn(unsigned long pfn, unsigned long end_pfn)
+{
+	pfn++;
+
+	if (pfn >= end_pfn)
+		return end_pfn;
+
+	/*
+	 * Either every PFN within the section (or subsection for VMEMMAP) is
+	 * valid, or none of them are. So there's no point repeating the check
+	 * for every PFN; only call first_valid_pfn() the first time, and when
+	 * crossing a (sub)section boundary (i.e. !(pfn & ~PAGE_{SUB,}SECTION_MASK)).
+	 */
+	if (pfn & (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP) ?
+		   ~PAGE_SUBSECTION_MASK : ~PAGE_SECTION_MASK))
+		return pfn;
+
+	return first_valid_pfn(pfn, end_pfn);
+}
+
+
+#define for_each_valid_pfn(_pfn, _start_pfn, _end_pfn)			\
+	for ((_pfn) = first_valid_pfn((_start_pfn), (_end_pfn));	\
+	     (_pfn) < (_end_pfn);					\
+	     (_pfn) = next_valid_pfn((_pfn), (_end_pfn)))
+
 #endif
 
 static inline int pfn_in_present_section(unsigned long pfn)
-- 
2.49.0




* [PATCH v3 4/7] mm, PM: Use for_each_valid_pfn() in kernel/power/snapshot.c
  2025-04-23  7:52 [PATCH v3 0/7] mm: Introduce for_each_valid_pfn() David Woodhouse
                   ` (2 preceding siblings ...)
  2025-04-23  7:52 ` [PATCH v3 3/7] mm: Implement for_each_valid_pfn() for CONFIG_SPARSEMEM David Woodhouse
@ 2025-04-23  7:52 ` David Woodhouse
  2025-04-23 11:12   ` Mike Rapoport
  2025-04-23  7:52 ` [PATCH v3 5/7] mm, x86: Use for_each_valid_pfn() from __ioremap_check_ram() David Woodhouse
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: David Woodhouse @ 2025-04-23  7:52 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Sauerwein, David, Anshuman Khandual,
	Ard Biesheuvel, Catalin Marinas, David Hildenbrand, Marc Zyngier,
	Mark Rutland, Mike Rapoport, Will Deacon, kvmarm,
	linux-arm-kernel, linux-kernel, linux-mm, Ruihan Li

From: David Woodhouse <dwmw@amazon.co.uk>

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 kernel/power/snapshot.c | 42 ++++++++++++++++++++---------------------
 1 file changed, 20 insertions(+), 22 deletions(-)

diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 4e6e24e8b854..f151c7a45584 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1094,16 +1094,15 @@ static void mark_nosave_pages(struct memory_bitmap *bm)
 			 ((unsigned long long) region->end_pfn << PAGE_SHIFT)
 				- 1);
 
-		for (pfn = region->start_pfn; pfn < region->end_pfn; pfn++)
-			if (pfn_valid(pfn)) {
-				/*
-				 * It is safe to ignore the result of
-				 * mem_bm_set_bit_check() here, since we won't
-				 * touch the PFNs for which the error is
-				 * returned anyway.
-				 */
-				mem_bm_set_bit_check(bm, pfn);
-			}
+		for_each_valid_pfn (pfn, region->start_pfn, region->end_pfn) {
+			/*
+			 * It is safe to ignore the result of
+			 * mem_bm_set_bit_check() here, since we won't
+			 * touch the PFNs for which the error is
+			 * returned anyway.
+			 */
+			mem_bm_set_bit_check(bm, pfn);
+		}
 	}
 }
 
@@ -1255,21 +1254,20 @@ static void mark_free_pages(struct zone *zone)
 	spin_lock_irqsave(&zone->lock, flags);
 
 	max_zone_pfn = zone_end_pfn(zone);
-	for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)
-		if (pfn_valid(pfn)) {
-			page = pfn_to_page(pfn);
+	for_each_valid_pfn(pfn, zone->zone_start_pfn, max_zone_pfn) {
+		page = pfn_to_page(pfn);
 
-			if (!--page_count) {
-				touch_nmi_watchdog();
-				page_count = WD_PAGE_COUNT;
-			}
+		if (!--page_count) {
+			touch_nmi_watchdog();
+			page_count = WD_PAGE_COUNT;
+		}
 
-			if (page_zone(page) != zone)
-				continue;
+		if (page_zone(page) != zone)
+			continue;
 
-			if (!swsusp_page_is_forbidden(page))
-				swsusp_unset_page_free(page);
-		}
+		if (!swsusp_page_is_forbidden(page))
+			swsusp_unset_page_free(page);
+	}
 
 	for_each_migratetype_order(order, t) {
 		list_for_each_entry(page,
-- 
2.49.0




* [PATCH v3 5/7] mm, x86: Use for_each_valid_pfn() from __ioremap_check_ram()
  2025-04-23  7:52 [PATCH v3 0/7] mm: Introduce for_each_valid_pfn() David Woodhouse
                   ` (3 preceding siblings ...)
  2025-04-23  7:52 ` [PATCH v3 4/7] mm, PM: Use for_each_valid_pfn() in kernel/power/snapshot.c David Woodhouse
@ 2025-04-23  7:52 ` David Woodhouse
  2025-04-23 11:13   ` Mike Rapoport
  2025-04-23  7:52 ` [PATCH v3 6/7] mm: Use for_each_valid_pfn() in memory_hotplug David Woodhouse
  2025-04-23  7:52 ` [PATCH v3 7/7] mm/mm_init: Use for_each_valid_pfn() in init_unavailable_range() David Woodhouse
  6 siblings, 1 reply; 15+ messages in thread
From: David Woodhouse @ 2025-04-23  7:52 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Sauerwein, David, Anshuman Khandual,
	Ard Biesheuvel, Catalin Marinas, David Hildenbrand, Marc Zyngier,
	Mark Rutland, Mike Rapoport, Will Deacon, kvmarm,
	linux-arm-kernel, linux-kernel, linux-mm, Ruihan Li

From: David Woodhouse <dwmw@amazon.co.uk>

Instead of calling pfn_valid() separately for every single PFN in the
range, use for_each_valid_pfn() and only look at the ones which are valid.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/mm/ioremap.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 331e101bf801..12c8180ca1ba 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -71,7 +71,7 @@ int ioremap_change_attr(unsigned long vaddr, unsigned long size,
 static unsigned int __ioremap_check_ram(struct resource *res)
 {
 	unsigned long start_pfn, stop_pfn;
-	unsigned long i;
+	unsigned long pfn;
 
 	if ((res->flags & IORESOURCE_SYSTEM_RAM) != IORESOURCE_SYSTEM_RAM)
 		return 0;
@@ -79,9 +79,8 @@ static unsigned int __ioremap_check_ram(struct resource *res)
 	start_pfn = (res->start + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	stop_pfn = (res->end + 1) >> PAGE_SHIFT;
 	if (stop_pfn > start_pfn) {
-		for (i = 0; i < (stop_pfn - start_pfn); ++i)
-			if (pfn_valid(start_pfn + i) &&
-			    !PageReserved(pfn_to_page(start_pfn + i)))
+		for_each_valid_pfn(pfn, start_pfn, stop_pfn)
+			if (!PageReserved(pfn_to_page(pfn)))
 				return IORES_MAP_SYSTEM_RAM;
 	}
 
-- 
2.49.0




* [PATCH v3 6/7] mm: Use for_each_valid_pfn() in memory_hotplug
  2025-04-23  7:52 [PATCH v3 0/7] mm: Introduce for_each_valid_pfn() David Woodhouse
                   ` (4 preceding siblings ...)
  2025-04-23  7:52 ` [PATCH v3 5/7] mm, x86: Use for_each_valid_pfn() from __ioremap_check_ram() David Woodhouse
@ 2025-04-23  7:52 ` David Woodhouse
  2025-04-23 11:13   ` Mike Rapoport
  2025-04-23  7:52 ` [PATCH v3 7/7] mm/mm_init: Use for_each_valid_pfn() in init_unavailable_range() David Woodhouse
  6 siblings, 1 reply; 15+ messages in thread
From: David Woodhouse @ 2025-04-23  7:52 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Sauerwein, David, Anshuman Khandual,
	Ard Biesheuvel, Catalin Marinas, David Hildenbrand, Marc Zyngier,
	Mark Rutland, Mike Rapoport, Will Deacon, kvmarm,
	linux-arm-kernel, linux-kernel, linux-mm, Ruihan Li

From: David Woodhouse <dwmw@amazon.co.uk>

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 mm/memory_hotplug.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 8305483de38b..8f74c55137bf 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1756,12 +1756,10 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
 {
 	unsigned long pfn;
 
-	for (pfn = start; pfn < end; pfn++) {
+	for_each_valid_pfn (pfn, start, end) {
 		struct page *page;
 		struct folio *folio;
 
-		if (!pfn_valid(pfn))
-			continue;
 		page = pfn_to_page(pfn);
 		if (PageLRU(page))
 			goto found;
@@ -1805,11 +1803,9 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 	static DEFINE_RATELIMIT_STATE(migrate_rs, DEFAULT_RATELIMIT_INTERVAL,
 				      DEFAULT_RATELIMIT_BURST);
 
-	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
+	for_each_valid_pfn (pfn, start_pfn, end_pfn) {
 		struct page *page;
 
-		if (!pfn_valid(pfn))
-			continue;
 		page = pfn_to_page(pfn);
 		folio = page_folio(page);
 
-- 
2.49.0




* [PATCH v3 7/7] mm/mm_init: Use for_each_valid_pfn() in init_unavailable_range()
  2025-04-23  7:52 [PATCH v3 0/7] mm: Introduce for_each_valid_pfn() David Woodhouse
                   ` (5 preceding siblings ...)
  2025-04-23  7:52 ` [PATCH v3 6/7] mm: Use for_each_valid_pfn() in memory_hotplug David Woodhouse
@ 2025-04-23  7:52 ` David Woodhouse
  2025-04-23  9:35   ` Ruihan Li
  2025-04-23 11:14   ` Mike Rapoport
  6 siblings, 2 replies; 15+ messages in thread
From: David Woodhouse @ 2025-04-23  7:52 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Sauerwein, David, Anshuman Khandual,
	Ard Biesheuvel, Catalin Marinas, David Hildenbrand, Marc Zyngier,
	Mark Rutland, Mike Rapoport, Will Deacon, kvmarm,
	linux-arm-kernel, linux-kernel, linux-mm, Ruihan Li

From: David Woodhouse <dwmw@amazon.co.uk>

Currently, memmap_init() initializes hole_pfn to 0 instead of
ARCH_PFN_OFFSET. Then init_unavailable_range() will start iterating each
page from the page at address zero to the first available page, but it
won't do anything for pages below ARCH_PFN_OFFSET because pfn_valid()
fails for them.

If ARCH_PFN_OFFSET is very large (e.g., something like 2^64-2GiB if the
kernel is used as a library and loaded at a very high address), the
pointless iteration for pages below ARCH_PFN_OFFSET will take a very
long time, and the kernel will look stuck at boot time.

Use for_each_valid_pfn() to skip the pointless iterations.
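
(Back-of-envelope illustration, with an assumed per-iteration cost:
with 4KiB pages, an offset of 2^64-2GiB corresponds to a PFN of roughly
2^52. Even at one nanosecond per loop iteration, a naïve walk from PFN
0 would take on the order of 2^52 ns, i.e. about 50 days, which is why
the kernel appears to hang.)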

Reported-by: Ruihan Li <lrh2000@pku.edu.cn>
Suggested-by: Mike Rapoport <rppt@kernel.org>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 mm/mm_init.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 41884f2155c4..0d1a4546825c 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -845,11 +845,7 @@ static void __init init_unavailable_range(unsigned long spfn,
 	unsigned long pfn;
 	u64 pgcnt = 0;
 
-	for (pfn = spfn; pfn < epfn; pfn++) {
-		if (!pfn_valid(pageblock_start_pfn(pfn))) {
-			pfn = pageblock_end_pfn(pfn) - 1;
-			continue;
-		}
+	for_each_valid_pfn(pfn, spfn, epfn) {
 		__init_single_page(pfn_to_page(pfn), pfn, zone, node);
 		__SetPageReserved(pfn_to_page(pfn));
 		pgcnt++;
-- 
2.49.0




* Re: [PATCH v3 7/7] mm/mm_init: Use for_each_valid_pfn() in init_unavailable_range()
  2025-04-23  7:52 ` [PATCH v3 7/7] mm/mm_init: Use for_each_valid_pfn() in init_unavailable_range() David Woodhouse
@ 2025-04-23  9:35   ` Ruihan Li
  2025-04-23 11:14   ` Mike Rapoport
  1 sibling, 0 replies; 15+ messages in thread
From: Ruihan Li @ 2025-04-23  9:35 UTC (permalink / raw)
  To: David Woodhouse
  Cc: Mike Rapoport, Andrew Morton, Sauerwein, David,
	Anshuman Khandual, Ard Biesheuvel, Catalin Marinas,
	David Hildenbrand, Marc Zyngier, Mark Rutland, Mike Rapoport,
	Will Deacon, kvmarm, linux-arm-kernel, linux-kernel, linux-mm

Hi David,

On Wed, Apr 23, 2025 at 08:52:49AM +0100, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> Currently, memmap_init() initializes hole_pfn to 0 instead of
> ARCH_PFN_OFFSET. Then init_unavailable_range() will start iterating each
> page from the page at address zero to the first available page, but it
> won't do anything for pages below ARCH_PFN_OFFSET because pfn_valid()
> fails for them.
> 
> If ARCH_PFN_OFFSET is very large (e.g., something like 2^64-2GiB if the
> kernel is used as a library and loaded at a very high address), the
> pointless iteration for pages below ARCH_PFN_OFFSET will take a very
> long time, and the kernel will look stuck at boot time.
> 
> Use for_each_valid_pfn() to skip the pointless iterations.
> 
> Reported-by: Ruihan Li <lrh2000@pku.edu.cn>
> Suggested-by: Mike Rapoport <rppt@kernel.org>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---

Thanks! I have confirmed that this worked in my scenario and fixed the
problem I reported earlier.

Tested-by: Ruihan Li <lrh2000@pku.edu.cn>

>  mm/mm_init.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
> 
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 41884f2155c4..0d1a4546825c 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -845,11 +845,7 @@ static void __init init_unavailable_range(unsigned long spfn,
>  	unsigned long pfn;
>  	u64 pgcnt = 0;
>  
> -	for (pfn = spfn; pfn < epfn; pfn++) {
> -		if (!pfn_valid(pageblock_start_pfn(pfn))) {
> -			pfn = pageblock_end_pfn(pfn) - 1;
> -			continue;
> -		}
> +	for_each_valid_pfn(pfn, spfn, epfn) {
>  		__init_single_page(pfn_to_page(pfn), pfn, zone, node);
>  		__SetPageReserved(pfn_to_page(pfn));
>  		pgcnt++;
> -- 
> 2.49.0

Thanks,
Ruihan Li




* Re: [PATCH v3 3/7] mm: Implement for_each_valid_pfn() for CONFIG_SPARSEMEM
  2025-04-23  7:52 ` [PATCH v3 3/7] mm: Implement for_each_valid_pfn() for CONFIG_SPARSEMEM David Woodhouse
@ 2025-04-23 11:11   ` Mike Rapoport
  2025-04-23 12:05     ` David Woodhouse
  0 siblings, 1 reply; 15+ messages in thread
From: Mike Rapoport @ 2025-04-23 11:11 UTC (permalink / raw)
  To: David Woodhouse
  Cc: Andrew Morton, Sauerwein, David, Anshuman Khandual,
	Ard Biesheuvel, Catalin Marinas, David Hildenbrand, Marc Zyngier,
	Mark Rutland, Mike Rapoport, Will Deacon, kvmarm,
	linux-arm-kernel, linux-kernel, linux-mm, Ruihan Li

On Wed, Apr 23, 2025 at 08:52:45AM +0100, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> Implement for_each_valid_pfn() based on two helper functions.
> 
> The first_valid_pfn() function largely mirrors pfn_valid(), calling into
> a pfn_section_first_valid() helper which is trivial for the !VMEMMAP case,
> and in the VMEMMAP case will skip to the next subsection as needed.
> 
> Since next_valid_pfn() knows that its argument *is* a valid PFN, it
> doesn't need to do any checking at all while iterating over the low bits
> within a (sub)section mask; the whole (sub)section is either present or
> not.
> 
> Note that the VMEMMAP version of pfn_section_first_valid() may return a
> value *higher* than end_pfn when skipping to the next subsection, and
> first_valid_pfn() happily returns that higher value. This is fine.
> 
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> Previous-revision-reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> ---
>  include/asm-generic/memory_model.h | 26 ++++++++--
>  include/linux/mmzone.h             | 78 ++++++++++++++++++++++++++++++
>  2 files changed, 99 insertions(+), 5 deletions(-)
> 
> diff --git a/include/asm-generic/memory_model.h b/include/asm-generic/memory_model.h
> index 74d0077cc5fa..044536da3390 100644
> --- a/include/asm-generic/memory_model.h
> +++ b/include/asm-generic/memory_model.h
> @@ -31,12 +31,28 @@ static inline int pfn_valid(unsigned long pfn)
>  }
>  #define pfn_valid pfn_valid
>  
> +static inline bool first_valid_pfn(unsigned long *pfn)
> +{
> +	/* avoid <linux/mm.h> include hell */
> +	extern unsigned long max_mapnr;
> +	unsigned long pfn_offset = ARCH_PFN_OFFSET;
> +
> +	if (*pfn < pfn_offset) {
> +		*pfn = pfn_offset;
> +		return true;
> +	}
> +
> +	if ((*pfn - pfn_offset) < max_mapnr)
> +		return true;
> +
> +	return false;
> +}
> +

Looks like it's a leftover from one of the previous versions.

>  #ifndef for_each_valid_pfn
> -#define for_each_valid_pfn(pfn, start_pfn, end_pfn)			 \
> -	for ((pfn) = max_t(unsigned long, (start_pfn), ARCH_PFN_OFFSET); \
> -	     (pfn) < min_t(unsigned long, (end_pfn),			 \
> -			   ARCH_PFN_OFFSET + max_mapnr);		 \
> -	     (pfn)++)
> +#define for_each_valid_pfn(pfn, start_pfn, end_pfn)			       \
> +	for (pfn = max_t(unsigned long, start_pfn, ARCH_PFN_OFFSET);	\
> +	     pfn < min_t(unsigned long, end_pfn, ARCH_PFN_OFFSET + max_mapnr); \
> +			 pfn++)

And this one is probably a rebase artifact? 

With FLATMEM changes dropped
This-revision-also-reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>

>  #endif /* for_each_valid_pfn */
>  #endif /* valid_pfn */
>  

-- 
Sincerely yours,
Mike.



* Re: [PATCH v3 4/7] mm, PM: Use for_each_valid_pfn() in kernel/power/snapshot.c
  2025-04-23  7:52 ` [PATCH v3 4/7] mm, PM: Use for_each_valid_pfn() in kernel/power/snapshot.c David Woodhouse
@ 2025-04-23 11:12   ` Mike Rapoport
  0 siblings, 0 replies; 15+ messages in thread
From: Mike Rapoport @ 2025-04-23 11:12 UTC (permalink / raw)
  To: David Woodhouse
  Cc: Andrew Morton, Sauerwein, David, Anshuman Khandual,
	Ard Biesheuvel, Catalin Marinas, David Hildenbrand, Marc Zyngier,
	Mark Rutland, Mike Rapoport, Will Deacon, kvmarm,
	linux-arm-kernel, linux-kernel, linux-mm, Ruihan Li

On Wed, Apr 23, 2025 at 08:52:46AM +0100, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>

Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>

> ---
>  kernel/power/snapshot.c | 42 ++++++++++++++++++++---------------------
>  1 file changed, 20 insertions(+), 22 deletions(-)
> 

-- 
Sincerely yours,
Mike.



* Re: [PATCH v3 5/7] mm, x86: Use for_each_valid_pfn() from __ioremap_check_ram()
  2025-04-23  7:52 ` [PATCH v3 5/7] mm, x86: Use for_each_valid_pfn() from __ioremap_check_ram() David Woodhouse
@ 2025-04-23 11:13   ` Mike Rapoport
  0 siblings, 0 replies; 15+ messages in thread
From: Mike Rapoport @ 2025-04-23 11:13 UTC (permalink / raw)
  To: David Woodhouse
  Cc: Andrew Morton, Sauerwein, David, Anshuman Khandual,
	Ard Biesheuvel, Catalin Marinas, David Hildenbrand, Marc Zyngier,
	Mark Rutland, Mike Rapoport, Will Deacon, kvmarm,
	linux-arm-kernel, linux-kernel, linux-mm, Ruihan Li

On Wed, Apr 23, 2025 at 08:52:47AM +0100, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> Instead of calling pfn_valid() separately for every single PFN in the
> range, use for_each_valid_pfn() and only look at the ones which are valid.
> 
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>

Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>

> ---
>  arch/x86/mm/ioremap.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)

-- 
Sincerely yours,
Mike.



* Re: [PATCH v3 6/7] mm: Use for_each_valid_pfn() in memory_hotplug
  2025-04-23  7:52 ` [PATCH v3 6/7] mm: Use for_each_valid_pfn() in memory_hotplug David Woodhouse
@ 2025-04-23 11:13   ` Mike Rapoport
  0 siblings, 0 replies; 15+ messages in thread
From: Mike Rapoport @ 2025-04-23 11:13 UTC (permalink / raw)
  To: David Woodhouse
  Cc: Andrew Morton, Sauerwein, David, Anshuman Khandual,
	Ard Biesheuvel, Catalin Marinas, David Hildenbrand, Marc Zyngier,
	Mark Rutland, Mike Rapoport, Will Deacon, kvmarm,
	linux-arm-kernel, linux-kernel, linux-mm, Ruihan Li

On Wed, Apr 23, 2025 at 08:52:48AM +0100, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>

Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>

> ---
>  mm/memory_hotplug.c | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)

-- 
Sincerely yours,
Mike.



* Re: [PATCH v3 7/7] mm/mm_init: Use for_each_valid_pfn() in init_unavailable_range()
  2025-04-23  7:52 ` [PATCH v3 7/7] mm/mm_init: Use for_each_valid_pfn() in init_unavailable_range() David Woodhouse
  2025-04-23  9:35   ` Ruihan Li
@ 2025-04-23 11:14   ` Mike Rapoport
  1 sibling, 0 replies; 15+ messages in thread
From: Mike Rapoport @ 2025-04-23 11:14 UTC (permalink / raw)
  To: David Woodhouse
  Cc: Andrew Morton, Sauerwein, David, Anshuman Khandual,
	Ard Biesheuvel, Catalin Marinas, David Hildenbrand, Marc Zyngier,
	Mark Rutland, Mike Rapoport, Will Deacon, kvmarm,
	linux-arm-kernel, linux-kernel, linux-mm, Ruihan Li

On Wed, Apr 23, 2025 at 08:52:49AM +0100, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> Currently, memmap_init() initializes hole_pfn to 0 instead of
> ARCH_PFN_OFFSET. Then init_unavailable_range() will start iterating each
> page from the page at address zero to the first available page, but it
> won't do anything for pages below ARCH_PFN_OFFSET because pfn_valid()
> fails for them.
> 
> If ARCH_PFN_OFFSET is very large (e.g., something like 2^64-2GiB if the
> kernel is used as a library and loaded at a very high address), the
> pointless iteration for pages below ARCH_PFN_OFFSET will take a very
> long time, and the kernel will look stuck at boot time.
> 
> Use for_each_valid_pfn() to skip the pointless iterations.
> 
> Reported-by: Ruihan Li <lrh2000@pku.edu.cn>
> Suggested-by: Mike Rapoport <rppt@kernel.org>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>

Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>

> ---
>  mm/mm_init.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
> 

-- 
Sincerely yours,
Mike.



* Re: [PATCH v3 3/7] mm: Implement for_each_valid_pfn() for CONFIG_SPARSEMEM
  2025-04-23 11:11   ` Mike Rapoport
@ 2025-04-23 12:05     ` David Woodhouse
  0 siblings, 0 replies; 15+ messages in thread
From: David Woodhouse @ 2025-04-23 12:05 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Sauerwein, David, Anshuman Khandual,
	Ard Biesheuvel, Catalin Marinas, David Hildenbrand, Marc Zyngier,
	Mark Rutland, Mike Rapoport, Will Deacon, kvmarm,
	linux-arm-kernel, linux-kernel, linux-mm, Ruihan Li


On Wed, 2025-04-23 at 14:11 +0300, Mike Rapoport wrote:
> 
> Looks like it's a leftover from one of the previous versions.
> 
> >   #ifndef for_each_valid_pfn
> > -#define for_each_valid_pfn(pfn, start_pfn, end_pfn)			 \
> > -	for ((pfn) = max_t(unsigned long, (start_pfn), ARCH_PFN_OFFSET); \
> > -	     (pfn) < min_t(unsigned long, (end_pfn),			 \
> > -			   ARCH_PFN_OFFSET + max_mapnr);		 \
> > -	     (pfn)++)
> > +#define for_each_valid_pfn(pfn, start_pfn, end_pfn)			       \
> > +	for (pfn = max_t(unsigned long, start_pfn, ARCH_PFN_OFFSET);	\
> > +	     pfn < min_t(unsigned long, end_pfn, ARCH_PFN_OFFSET + max_mapnr); \
> > +			 pfn++)
> 
> And this one is probably a rebase artifact? 
> 
> With FLATMEM changes dropped

Oops, that was a result of me attempting to keep the SPARSEMEM thing in
two commits — the one you'd previously reviewed, and then the
'optimisation', as discussed.

And then giving up on it and just resetting to the previous 'optimised'
version in a single commit... and failing to realise that in doing so I
was also reverting the cleanups I'd done to the flatmem version.

Will fix that; thanks.

> This-revision-also-reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>




end of thread, other threads:[~2025-04-23 12:05 UTC | newest]

Thread overview: 15+ messages
2025-04-23  7:52 [PATCH v3 0/7] mm: Introduce for_each_valid_pfn() David Woodhouse
2025-04-23  7:52 ` [PATCH v3 1/7] mm: Introduce for_each_valid_pfn() and use it from reserve_bootmem_region() David Woodhouse
2025-04-23  7:52 ` [PATCH v3 2/7] mm: Implement for_each_valid_pfn() for CONFIG_FLATMEM David Woodhouse
2025-04-23  7:52 ` [PATCH v3 3/7] mm: Implement for_each_valid_pfn() for CONFIG_SPARSEMEM David Woodhouse
2025-04-23 11:11   ` Mike Rapoport
2025-04-23 12:05     ` David Woodhouse
2025-04-23  7:52 ` [PATCH v3 4/7] mm, PM: Use for_each_valid_pfn() in kernel/power/snapshot.c David Woodhouse
2025-04-23 11:12   ` Mike Rapoport
2025-04-23  7:52 ` [PATCH v3 5/7] mm, x86: Use for_each_valid_pfn() from __ioremap_check_ram() David Woodhouse
2025-04-23 11:13   ` Mike Rapoport
2025-04-23  7:52 ` [PATCH v3 6/7] mm: Use for_each_valid_pfn() in memory_hotplug David Woodhouse
2025-04-23 11:13   ` Mike Rapoport
2025-04-23  7:52 ` [PATCH v3 7/7] mm/mm_init: Use for_each_valid_pfn() in init_unavailable_range() David Woodhouse
2025-04-23  9:35   ` Ruihan Li
2025-04-23 11:14   ` Mike Rapoport
