linux-mm.kvack.org archive mirror
* [PATCH v8 0/3] Optimize zone->contiguous update and issue fix
@ 2026-01-20 14:33 Tianyou Li
  2026-01-20 14:33 ` [PATCH v8 1/3] mm/memory hotplug: Fix zone->contiguous always false when hotplug Tianyou Li
                   ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Tianyou Li @ 2026-01-20 14:33 UTC (permalink / raw)
  To: David Hildenbrand, Oscar Salvador, Mike Rapoport, Wei Yang, Michal Hocko
  Cc: linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Tianyou Li, Chen Zhang, linux-kernel

This series contains 3 patches. The first one fixes an issue where
zone->contiguous was always false when it was checked while the zone
grows. The second one encapsulates mhp_init_memmap_on_memory() and
online_pages() into online_memory_block_pages(), and
mhp_deinit_memmap_on_memory() and offline_pages() into
offline_memory_block_pages(). The last one adds a fast path for checking
zone->contiguous. The issue fixed by the first patch exists in the
original code path.

Changes History
===============
v2 changes:
   Add check_zone_contiguous_fast function to check zone contiguity for
   new memory PFN ranges.

v3 changes:
   Add zone contiguity check for empty zones.

v4 changes:
   1. Improve coding style.
   2. Add fast path for zone contiguity check in memory unplugged cases,
      and update test results.
   3. Refactor set_zone_contiguous: the new set_zone_contiguous updates
      zone contiguity based on the fast path results.

v5 changes:
   1. Improve coding style.
   2. Fix an issue in which zone->contiguous was always false when adding
      new memory, leveraging the fast path optimization.

v6 changes:
   1. Improve coding style.
   2. Add comments.

v7 changes:
   1. Rebased to 6.19-rc1
   2. Reorder the patches so that the fix will be the first in the series. 

v8 changes:
   1. Rebased to 6.19-rc6
   2. Add online_memory_block_pages() and offline_memory_block_pages()

Tianyou Li (2):
  mm/memory hotplug/unplug: Add online_memory_block_pages() and
    offline_memory_block_pages()
  mm/memory hotplug/unplug: Optimize zone->contiguous update when
    changes pfn range

Yuan Liu (1):
  mm/memory hotplug: Fix zone->contiguous always false when hotplug


 drivers/base/memory.c          |  53 +++----------
 include/linux/memory_hotplug.h |  18 ++---
 mm/internal.h                  |   8 +-
 mm/memory_hotplug.c            | 136 +++++++++++++++++++++++++++++++--
 mm/mm_init.c                   |  15 +++-
 5 files changed, 167 insertions(+), 63 deletions(-)

-- 
2.47.1



^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v8 1/3] mm/memory hotplug: Fix zone->contiguous always false when hotplug
  2026-01-20 14:33 [PATCH v8 0/3] Optimize zone->contiguous update and issue fix Tianyou Li
@ 2026-01-20 14:33 ` Tianyou Li
  2026-01-22 11:16   ` Mike Rapoport
  2026-01-20 14:33 ` [PATCH v8 2/3] mm/memory hotplug/unplug: Add online_memory_block_pages() and offline_memory_block_pages() Tianyou Li
  2026-01-20 14:33 ` [PATCH v8 3/3] mm/memory hotplug/unplug: Optimize zone->contiguous update when changes pfn range Tianyou Li
  2 siblings, 1 reply; 18+ messages in thread
From: Tianyou Li @ 2026-01-20 14:33 UTC (permalink / raw)
  To: David Hildenbrand, Oscar Salvador, Mike Rapoport, Wei Yang, Michal Hocko
  Cc: linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Tianyou Li, Chen Zhang, linux-kernel

From: Yuan Liu <yuan1.liu@intel.com>

set_zone_contiguous() uses __pageblock_pfn_to_page() to detect
pageblocks that either do not exist (hole) or that do not belong
to the same zone.

__pageblock_pfn_to_page(), however, relies on pfn_to_online_page(),
effectively always returning NULL for memory ranges that were not
onlined yet. So when called on a range-to-be-onlined, it indicates
a memory hole to set_zone_contiguous().

Consequently, the set_zone_contiguous() call in move_pfn_range_to_zone(),
which happens early during memory onlining, will never detect a
zone as being contiguous. Bad.

To fix the issue, move the set_zone_contiguous() call to a later
stage in memory onlining, where pfn_to_online_page() will succeed:
after we mark the memory sections to be online.

Fixes: 2d070eab2e82 ("mm: consider zone which is not fully populated to have holes")
Cc: Michal Hocko <mhocko@suse.com>
Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Signed-off-by: Tianyou Li <tianyou.li@intel.com>
---
 mm/memory_hotplug.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a63ec679d861..c8f492b5daf0 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -782,8 +782,6 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
 	memmap_init_range(nr_pages, nid, zone_idx(zone), start_pfn, 0,
 			 MEMINIT_HOTPLUG, altmap, migratetype,
 			 isolate_pageblock);
-
-	set_zone_contiguous(zone);
 }
 
 struct auto_movable_stats {
@@ -1205,6 +1203,13 @@ int online_pages(unsigned long pfn, unsigned long nr_pages,
 	}
 
 	online_pages_range(pfn, nr_pages);
+
+	/*
+	 * Now that the ranges are indicated as online, check whether the whole
+	 * zone is contiguous.
+	 */
+	set_zone_contiguous(zone);
+
 	adjust_present_page_count(pfn_to_page(pfn), group, nr_pages);
 
 	if (node_arg.nid >= 0)
-- 
2.47.1




* [PATCH v8 2/3] mm/memory hotplug/unplug: Add online_memory_block_pages() and offline_memory_block_pages()
  2026-01-20 14:33 [PATCH v8 0/3] Optimize zone->contiguous update and issue fix Tianyou Li
  2026-01-20 14:33 ` [PATCH v8 1/3] mm/memory hotplug: Fix zone->contiguous always false when hotplug Tianyou Li
@ 2026-01-20 14:33 ` Tianyou Li
  2026-01-22 11:32   ` Mike Rapoport
  2026-01-20 14:33 ` [PATCH v8 3/3] mm/memory hotplug/unplug: Optimize zone->contiguous update when changes pfn range Tianyou Li
  2 siblings, 1 reply; 18+ messages in thread
From: Tianyou Li @ 2026-01-20 14:33 UTC (permalink / raw)
  To: David Hildenbrand, Oscar Salvador, Mike Rapoport, Wei Yang, Michal Hocko
  Cc: linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Tianyou Li, Chen Zhang, linux-kernel

Encapsulate mhp_init_memmap_on_memory() and online_pages() into
online_memory_block_pages(). This allows a later patch to optimize
set_zone_contiguous() to check the whole memory block range at once,
instead of checking zone contiguity for each separate range.

Correspondingly, encapsulate the mhp_deinit_memmap_on_memory() and
offline_pages() into offline_memory_block_pages().

Tested-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Yuan Liu <yuan1.liu@intel.com>
Signed-off-by: Tianyou Li <tianyou.li@intel.com>
---
 drivers/base/memory.c          | 53 ++++++---------------------
 include/linux/memory_hotplug.h | 18 +++++-----
 mm/memory_hotplug.c            | 65 +++++++++++++++++++++++++++++++---
 3 files changed, 80 insertions(+), 56 deletions(-)

diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index 751f248ca4a8..ea4d6fbf34fd 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -246,31 +246,12 @@ static int memory_block_online(struct memory_block *mem)
 		nr_vmemmap_pages = mem->altmap->free;
 
 	mem_hotplug_begin();
-	if (nr_vmemmap_pages) {
-		ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages, zone);
-		if (ret)
-			goto out;
-	}
-
-	ret = online_pages(start_pfn + nr_vmemmap_pages,
-			   nr_pages - nr_vmemmap_pages, zone, mem->group);
-	if (ret) {
-		if (nr_vmemmap_pages)
-			mhp_deinit_memmap_on_memory(start_pfn, nr_vmemmap_pages);
-		goto out;
-	}
-
-	/*
-	 * Account once onlining succeeded. If the zone was unpopulated, it is
-	 * now already properly populated.
-	 */
-	if (nr_vmemmap_pages)
-		adjust_present_page_count(pfn_to_page(start_pfn), mem->group,
-					  nr_vmemmap_pages);
-
-	mem->zone = zone;
-out:
+	ret = online_memory_block_pages(start_pfn, nr_pages, nr_vmemmap_pages,
+					zone, mem->group);
+	if (!ret)
+		mem->zone = zone;
 	mem_hotplug_done();
+
 	return ret;
 }
 
@@ -295,26 +276,12 @@ static int memory_block_offline(struct memory_block *mem)
 		nr_vmemmap_pages = mem->altmap->free;
 
 	mem_hotplug_begin();
-	if (nr_vmemmap_pages)
-		adjust_present_page_count(pfn_to_page(start_pfn), mem->group,
-					  -nr_vmemmap_pages);
-
-	ret = offline_pages(start_pfn + nr_vmemmap_pages,
-			    nr_pages - nr_vmemmap_pages, mem->zone, mem->group);
-	if (ret) {
-		/* offline_pages() failed. Account back. */
-		if (nr_vmemmap_pages)
-			adjust_present_page_count(pfn_to_page(start_pfn),
-						  mem->group, nr_vmemmap_pages);
-		goto out;
-	}
-
-	if (nr_vmemmap_pages)
-		mhp_deinit_memmap_on_memory(start_pfn, nr_vmemmap_pages);
-
-	mem->zone = NULL;
-out:
+	ret = offline_memory_block_pages(start_pfn, nr_pages, nr_vmemmap_pages,
+					 mem->zone, mem->group);
+	if (!ret)
+		mem->zone = NULL;
 	mem_hotplug_done();
+
 	return ret;
 }
 
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index f2f16cdd73ee..1f8d5edd820d 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -106,11 +106,9 @@ extern void adjust_present_page_count(struct page *page,
 				      struct memory_group *group,
 				      long nr_pages);
 /* VM interface that may be used by firmware interface */
-extern int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
-				     struct zone *zone);
-extern void mhp_deinit_memmap_on_memory(unsigned long pfn, unsigned long nr_pages);
-extern int online_pages(unsigned long pfn, unsigned long nr_pages,
-			struct zone *zone, struct memory_group *group);
+extern int online_memory_block_pages(unsigned long start_pfn,
+		unsigned long nr_pages, unsigned long nr_vmemmap_pages,
+		struct zone *zone, struct memory_group *group);
 extern unsigned long __offline_isolated_pages(unsigned long start_pfn,
 		unsigned long end_pfn);
 
@@ -261,8 +259,9 @@ static inline void pgdat_resize_init(struct pglist_data *pgdat) {}
 #ifdef CONFIG_MEMORY_HOTREMOVE
 
 extern void try_offline_node(int nid);
-extern int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
-			 struct zone *zone, struct memory_group *group);
+extern int offline_memory_block_pages(unsigned long start_pfn,
+		unsigned long nr_pages, unsigned long nr_vmemmap_pages,
+		struct zone *zone, struct memory_group *group);
 extern int remove_memory(u64 start, u64 size);
 extern void __remove_memory(u64 start, u64 size);
 extern int offline_and_remove_memory(u64 start, u64 size);
@@ -270,8 +269,9 @@ extern int offline_and_remove_memory(u64 start, u64 size);
 #else
 static inline void try_offline_node(int nid) {}
 
-static inline int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
-				struct zone *zone, struct memory_group *group)
+static inline int offline_memory_block_pages(unsigned long start_pfn,
+		unsigned long nr_pages, unsigned long nr_vmemmap_pages,
+		struct zone *zone, struct memory_group *group)
 {
 	return -EINVAL;
 }
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c8f492b5daf0..8793a83702c5 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1085,7 +1085,7 @@ void adjust_present_page_count(struct page *page, struct memory_group *group,
 		group->present_kernel_pages += nr_pages;
 }
 
-int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
+static int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
 			      struct zone *zone)
 {
 	unsigned long end_pfn = pfn + nr_pages;
@@ -1116,7 +1116,7 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
 	return ret;
 }
 
-void mhp_deinit_memmap_on_memory(unsigned long pfn, unsigned long nr_pages)
+static void mhp_deinit_memmap_on_memory(unsigned long pfn, unsigned long nr_pages)
 {
 	unsigned long end_pfn = pfn + nr_pages;
 
@@ -1139,7 +1139,7 @@ void mhp_deinit_memmap_on_memory(unsigned long pfn, unsigned long nr_pages)
 /*
  * Must be called with mem_hotplug_lock in write mode.
  */
-int online_pages(unsigned long pfn, unsigned long nr_pages,
+static int online_pages(unsigned long pfn, unsigned long nr_pages,
 		       struct zone *zone, struct memory_group *group)
 {
 	struct memory_notify mem_arg = {
@@ -1254,6 +1254,37 @@ int online_pages(unsigned long pfn, unsigned long nr_pages,
 	return ret;
 }
 
+int online_memory_block_pages(unsigned long start_pfn,
+		unsigned long nr_pages, unsigned long nr_vmemmap_pages,
+		struct zone *zone, struct memory_group *group)
+{
+	int ret;
+
+	if (nr_vmemmap_pages) {
+		ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages, zone);
+		if (ret)
+			return ret;
+	}
+
+	ret = online_pages(start_pfn + nr_vmemmap_pages,
+			   nr_pages - nr_vmemmap_pages, zone, group);
+	if (ret) {
+		if (nr_vmemmap_pages)
+			mhp_deinit_memmap_on_memory(start_pfn, nr_vmemmap_pages);
+		return ret;
+	}
+
+	/*
+	 * Account once onlining succeeded. If the zone was unpopulated, it is
+	 * now already properly populated.
+	 */
+	if (nr_vmemmap_pages)
+		adjust_present_page_count(pfn_to_page(start_pfn), group,
+					  nr_vmemmap_pages);
+
+	return ret;
+}
+
 /* we are OK calling __meminit stuff here - we have CONFIG_MEMORY_HOTPLUG */
 static pg_data_t *hotadd_init_pgdat(int nid)
 {
@@ -1896,7 +1927,7 @@ static int count_system_ram_pages_cb(unsigned long start_pfn,
 /*
  * Must be called with mem_hotplug_lock in write mode.
  */
-int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
+static int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
 			struct zone *zone, struct memory_group *group)
 {
 	unsigned long pfn, managed_pages, system_ram_pages = 0;
@@ -2101,6 +2132,32 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
 	return ret;
 }
 
+int offline_memory_block_pages(unsigned long start_pfn,
+		unsigned long nr_pages, unsigned long nr_vmemmap_pages,
+		struct zone *zone, struct memory_group *group)
+{
+	int ret;
+
+	if (nr_vmemmap_pages)
+		adjust_present_page_count(pfn_to_page(start_pfn), group,
+					  -nr_vmemmap_pages);
+
+	ret = offline_pages(start_pfn + nr_vmemmap_pages,
+			    nr_pages - nr_vmemmap_pages, zone, group);
+	if (ret) {
+		/* offline_pages() failed. Account back. */
+		if (nr_vmemmap_pages)
+			adjust_present_page_count(pfn_to_page(start_pfn),
+						  group, nr_vmemmap_pages);
+		return ret;
+	}
+
+	if (nr_vmemmap_pages)
+		mhp_deinit_memmap_on_memory(start_pfn, nr_vmemmap_pages);
+
+	return ret;
+}
+
 static int check_memblock_offlined_cb(struct memory_block *mem, void *arg)
 {
 	int *nid = arg;
-- 
2.47.1




* [PATCH v8 3/3] mm/memory hotplug/unplug: Optimize zone->contiguous update when changes pfn range
  2026-01-20 14:33 [PATCH v8 0/3] Optimize zone->contiguous update and issue fix Tianyou Li
  2026-01-20 14:33 ` [PATCH v8 1/3] mm/memory hotplug: Fix zone->contiguous always false when hotplug Tianyou Li
  2026-01-20 14:33 ` [PATCH v8 2/3] mm/memory hotplug/unplug: Add online_memory_block_pages() and offline_memory_block_pages() Tianyou Li
@ 2026-01-20 14:33 ` Tianyou Li
  2026-01-22 11:43   ` Mike Rapoport
  2 siblings, 1 reply; 18+ messages in thread
From: Tianyou Li @ 2026-01-20 14:33 UTC (permalink / raw)
  To: David Hildenbrand, Oscar Salvador, Mike Rapoport, Wei Yang, Michal Hocko
  Cc: linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Tianyou Li, Chen Zhang, linux-kernel

When move_pfn_range_to_zone() or remove_pfn_range_from_zone() is invoked,
it updates zone->contiguous by scanning the new zone's pfn range from
beginning to end, regardless of the zone's previous state. When the zone's
pfn range is large, the cost of traversing the pfn range to update
zone->contiguous can be significant.

Add fast paths to quickly detect cases where the zone is definitely not
contiguous, without scanning the new zone. The cases are: when the new
range does not overlap the previous range, contiguous should be false;
when the new range is adjacent to the previous range, only the new range
needs to be checked; when the newly added pages cannot fill the hole in
the previous zone, contiguous should be false.

The following test cases of memory hotplug for a VM [1], tested in the
environment [2], show that this optimization can significantly reduce the
memory hotplug time [3].

+----------------+------+---------------+--------------+----------------+
|                | Size | Time (before) | Time (after) | Time Reduction |
|                +------+---------------+--------------+----------------+
| Plug Memory    | 256G |      10s      |      2s      |       80%      |
|                +------+---------------+--------------+----------------+
|                | 512G |      33s      |      6s      |       81%      |
+----------------+------+---------------+--------------+----------------+

+----------------+------+---------------+--------------+----------------+
|                | Size | Time (before) | Time (after) | Time Reduction |
|                +------+---------------+--------------+----------------+
| Unplug Memory  | 256G |      10s      |      2s      |       80%      |
|                +------+---------------+--------------+----------------+
|                | 512G |      34s      |      6s      |       82%      |
+----------------+------+---------------+--------------+----------------+

[1] Qemu commands to hotplug 256G/512G memory for a VM:
    object_add memory-backend-ram,id=hotmem0,size=256G/512G,share=on
    device_add virtio-mem-pci,id=vmem1,memdev=hotmem0,bus=port1
    qom-set vmem1 requested-size 256G/512G (Plug Memory)
    qom-set vmem1 requested-size 0G (Unplug Memory)

[2] Hardware     : Intel Icelake server
    Guest Kernel : v6.18-rc2
    Qemu         : v9.0.0

    Launch VM    :
    qemu-system-x86_64 -accel kvm -cpu host \
    -drive file=./Centos10_cloud.qcow2,format=qcow2,if=virtio \
    -drive file=./seed.img,format=raw,if=virtio \
    -smp 3,cores=3,threads=1,sockets=1,maxcpus=3 \
    -m 2G,slots=10,maxmem=2052472M \
    -device pcie-root-port,id=port1,bus=pcie.0,slot=1,multifunction=on \
    -device pcie-root-port,id=port2,bus=pcie.0,slot=2 \
    -nographic -machine q35 \
    -nic user,hostfwd=tcp::3000-:22

    Guest kernel auto-onlines newly added memory blocks:
    echo online > /sys/devices/system/memory/auto_online_blocks

[3] The time from typing the QEMU commands in [1] to when the output of
    'grep MemTotal /proc/meminfo' on Guest reflects that all hotplugged
    memory is recognized.

Reported-by: Nanhai Zou <nanhai.zou@intel.com>
Reported-by: Chen Zhang <zhangchen.kidd@jd.com>
Tested-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
Reviewed-by: Yu C Chen <yu.c.chen@intel.com>
Reviewed-by: Pan Deng <pan.deng@intel.com>
Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
Reviewed-by: Yuan Liu <yuan1.liu@intel.com>
Signed-off-by: Tianyou Li <tianyou.li@intel.com>
---
 mm/internal.h       |  8 ++++-
 mm/memory_hotplug.c | 86 +++++++++++++++++++++++++++++++++++++--------
 mm/mm_init.c        | 15 ++++++--
 3 files changed, 92 insertions(+), 17 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index e430da900430..828aed5c2fef 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -730,7 +730,13 @@ static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
 	return __pageblock_pfn_to_page(start_pfn, end_pfn, zone);
 }
 
-void set_zone_contiguous(struct zone *zone);
+enum zone_contig_state {
+	ZONE_CONTIG_YES,
+	ZONE_CONTIG_NO,
+	ZONE_CONTIG_MAYBE,
+};
+
+void set_zone_contiguous(struct zone *zone, enum zone_contig_state state);
 bool pfn_range_intersects_zones(int nid, unsigned long start_pfn,
 			   unsigned long nr_pages);
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 8793a83702c5..7b8feaca0d63 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -544,6 +544,25 @@ static void update_pgdat_span(struct pglist_data *pgdat)
 	pgdat->node_spanned_pages = node_end_pfn - node_start_pfn;
 }
 
+static enum zone_contig_state zone_contig_state_after_shrinking(struct zone *zone,
+				unsigned long start_pfn, unsigned long nr_pages)
+{
+	const unsigned long end_pfn = start_pfn + nr_pages;
+
+	/*
+	 * If we cut a hole into the zone span, then the zone is
+	 * certainly not contiguous.
+	 */
+	if (start_pfn > zone->zone_start_pfn && end_pfn < zone_end_pfn(zone))
+		return ZONE_CONTIG_NO;
+
+	/* Removing from the start/end of the zone will not change anything. */
+	if (start_pfn == zone->zone_start_pfn || end_pfn == zone_end_pfn(zone))
+		return zone->contiguous ? ZONE_CONTIG_YES : ZONE_CONTIG_MAYBE;
+
+	return ZONE_CONTIG_MAYBE;
+}
+
 void remove_pfn_range_from_zone(struct zone *zone,
 				      unsigned long start_pfn,
 				      unsigned long nr_pages)
@@ -551,6 +570,7 @@ void remove_pfn_range_from_zone(struct zone *zone,
 	const unsigned long end_pfn = start_pfn + nr_pages;
 	struct pglist_data *pgdat = zone->zone_pgdat;
 	unsigned long pfn, cur_nr_pages;
+	enum zone_contig_state new_contiguous_state;
 
 	/* Poison struct pages because they are now uninitialized again. */
 	for (pfn = start_pfn; pfn < end_pfn; pfn += cur_nr_pages) {
@@ -571,12 +591,14 @@ void remove_pfn_range_from_zone(struct zone *zone,
 	if (zone_is_zone_device(zone))
 		return;
 
+	new_contiguous_state = zone_contig_state_after_shrinking(zone, start_pfn,
+								 nr_pages);
 	clear_zone_contiguous(zone);
 
 	shrink_zone_span(zone, start_pfn, start_pfn + nr_pages);
 	update_pgdat_span(pgdat);
 
-	set_zone_contiguous(zone);
+	set_zone_contiguous(zone, new_contiguous_state);
 }
 
 /**
@@ -736,6 +758,32 @@ static inline void section_taint_zone_device(unsigned long pfn)
 }
 #endif
 
+static enum zone_contig_state zone_contig_state_after_growing(struct zone *zone,
+				unsigned long start_pfn, unsigned long nr_pages)
+{
+	const unsigned long end_pfn = start_pfn + nr_pages;
+
+	if (zone_is_empty(zone))
+		return ZONE_CONTIG_YES;
+
+	/*
+	 * If the moved pfn range does not intersect with the original zone span
+	 * the zone is surely not contiguous.
+	 */
+	if (end_pfn < zone->zone_start_pfn || start_pfn > zone_end_pfn(zone))
+		return ZONE_CONTIG_NO;
+
+	/* Adding to the start/end of the zone will not change anything. */
+	if (end_pfn == zone->zone_start_pfn || start_pfn == zone_end_pfn(zone))
+		return zone->contiguous ? ZONE_CONTIG_YES : ZONE_CONTIG_NO;
+
+	/* If we cannot fill the hole, the zone stays not contiguous. */
+	if (nr_pages < (zone->spanned_pages - zone->present_pages))
+		return ZONE_CONTIG_NO;
+
+	return ZONE_CONTIG_MAYBE;
+}
+
 /*
  * Associate the pfn range with the given zone, initializing the memmaps
  * and resizing the pgdat/zone data to span the added pages. After this
@@ -1165,7 +1213,6 @@ static int online_pages(unsigned long pfn, unsigned long nr_pages,
 			 !IS_ALIGNED(pfn + nr_pages, PAGES_PER_SECTION)))
 		return -EINVAL;
 
-
 	/* associate pfn range with the zone */
 	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_MOVABLE,
 			       true);
@@ -1203,13 +1250,6 @@ static int online_pages(unsigned long pfn, unsigned long nr_pages,
 	}
 
 	online_pages_range(pfn, nr_pages);
-
-	/*
-	 * Now that the ranges are indicated as online, check whether the whole
-	 * zone is contiguous.
-	 */
-	set_zone_contiguous(zone);
-
 	adjust_present_page_count(pfn_to_page(pfn), group, nr_pages);
 
 	if (node_arg.nid >= 0)
@@ -1254,16 +1294,25 @@ static int online_pages(unsigned long pfn, unsigned long nr_pages,
 	return ret;
 }
 
-int online_memory_block_pages(unsigned long start_pfn,
-		unsigned long nr_pages, unsigned long nr_vmemmap_pages,
-		struct zone *zone, struct memory_group *group)
+int online_memory_block_pages(unsigned long start_pfn, unsigned long nr_pages,
+			unsigned long nr_vmemmap_pages, struct zone *zone,
+			struct memory_group *group)
 {
+	const bool contiguous = zone->contiguous;
+	enum zone_contig_state new_contiguous_state;
 	int ret;
 
+	/*
+	 * Calculate the new zone contig state before move_pfn_range_to_zone()
+	 * sets the zone temporarily to non-contiguous.
+	 */
+	new_contiguous_state = zone_contig_state_after_growing(zone, start_pfn,
+							       nr_pages);
+
 	if (nr_vmemmap_pages) {
 		ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages, zone);
 		if (ret)
-			return ret;
+			goto restore_zone_contig;
 	}
 
 	ret = online_pages(start_pfn + nr_vmemmap_pages,
@@ -1271,7 +1320,7 @@ int online_memory_block_pages(unsigned long start_pfn,
 	if (ret) {
 		if (nr_vmemmap_pages)
 			mhp_deinit_memmap_on_memory(start_pfn, nr_vmemmap_pages);
-		return ret;
+		goto restore_zone_contig;
 	}
 
 	/*
@@ -1282,6 +1331,15 @@ int online_memory_block_pages(unsigned long start_pfn,
 		adjust_present_page_count(pfn_to_page(start_pfn), group,
 					  nr_vmemmap_pages);
 
+	/*
+	 * Now that the ranges are indicated as online, check whether the whole
+	 * zone is contiguous.
+	 */
+	set_zone_contiguous(zone, new_contiguous_state);
+	return 0;
+
+restore_zone_contig:
+	zone->contiguous = contiguous;
 	return ret;
 }
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index fc2a6f1e518f..5ed3fbd5c643 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2263,11 +2263,22 @@ void __init init_cma_pageblock(struct page *page)
 }
 #endif
 
-void set_zone_contiguous(struct zone *zone)
+void set_zone_contiguous(struct zone *zone, enum zone_contig_state state)
 {
 	unsigned long block_start_pfn = zone->zone_start_pfn;
 	unsigned long block_end_pfn;
 
+	/* We expect an earlier call to clear_zone_contiguous(). */
+	VM_WARN_ON_ONCE(zone->contiguous);
+
+	if (state == ZONE_CONTIG_YES) {
+		zone->contiguous = true;
+		return;
+	}
+
+	if (state == ZONE_CONTIG_NO)
+		return;
+
 	block_end_pfn = pageblock_end_pfn(block_start_pfn);
 	for (; block_start_pfn < zone_end_pfn(zone);
 			block_start_pfn = block_end_pfn,
@@ -2348,7 +2359,7 @@ void __init page_alloc_init_late(void)
 		shuffle_free_memory(NODE_DATA(nid));
 
 	for_each_populated_zone(zone)
-		set_zone_contiguous(zone);
+		set_zone_contiguous(zone, ZONE_CONTIG_MAYBE);
 
 	/* Initialize page ext after all struct pages are initialized. */
 	if (deferred_struct_pages)
-- 
2.47.1




* Re: [PATCH v8 1/3] mm/memory hotplug: Fix zone->contiguous always false when hotplug
  2026-01-20 14:33 ` [PATCH v8 1/3] mm/memory hotplug: Fix zone->contiguous always false when hotplug Tianyou Li
@ 2026-01-22 11:16   ` Mike Rapoport
  2026-01-24 12:18     ` Li, Tianyou
  0 siblings, 1 reply; 18+ messages in thread
From: Mike Rapoport @ 2026-01-22 11:16 UTC (permalink / raw)
  To: Tianyou Li
  Cc: David Hildenbrand, Oscar Salvador, Wei Yang, Michal Hocko,
	linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Chen Zhang, linux-kernel

Hi,

On Tue, Jan 20, 2026 at 10:33:44PM +0800, Tianyou Li wrote:
> From: Yuan Liu <yuan1.liu@intel.com>
> 
> set_zone_contiguous() uses __pageblock_pfn_to_page() to detect
> pageblocks that either do not exist (hole) or that do not belong
> to the same zone.
> 
> __pageblock_pfn_to_page(), however, relies on pfn_to_online_page(),
> effectively always returning NULL for memory ranges that were not
> onlined yet. So when called on a range-to-be-onlined, it indicates
> a memory hole to set_zone_contiguous().
> 
> Consequently, the set_zone_contiguous() call in move_pfn_range_to_zone(),
> which happens early during memory onlining, will never detect a
> zone as being contiguous. Bad.
> 
> To fix the issue, move the set_zone_contiguous() call to a later
> stage in memory onlining, where pfn_to_online_page() will succeed:
> after we mark the memory sections to be online.
> 
> Fixes: 2d070eab2e82 ("mm: consider zone which is not fully populated to have holes")

cc stable@ perhaps?

> Cc: Michal Hocko <mhocko@suse.com>
> Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
> Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
> Signed-off-by: Tianyou Li <tianyou.li@intel.com>
> ---
>  mm/memory_hotplug.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index a63ec679d861..c8f492b5daf0 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -782,8 +782,6 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
>  	memmap_init_range(nr_pages, nid, zone_idx(zone), start_pfn, 0,
>  			 MEMINIT_HOTPLUG, altmap, migratetype,
>  			 isolate_pageblock);
> -
> -	set_zone_contiguous(zone);

move_pfn_range_to_zone() is also called from memremap::pagemap_range().
Shouldn't we add set_zone_contiguous() there as well?

>  }
>  
>  struct auto_movable_stats {
> @@ -1205,6 +1203,13 @@ int online_pages(unsigned long pfn, unsigned long nr_pages,
>  	}
>  
>  	online_pages_range(pfn, nr_pages);
> +
> +	/*
> +	 * Now that the ranges are indicated as online, check whether the whole
> +	 * zone is contiguous.
> +	 */
> +	set_zone_contiguous(zone);
> +
>  	adjust_present_page_count(pfn_to_page(pfn), group, nr_pages);
>  
>  	if (node_arg.nid >= 0)
> -- 
> 2.47.1
> 

-- 
Sincerely yours,
Mike.



* Re: [PATCH v8 2/3] mm/memory hotplug/unplug: Add online_memory_block_pages() and offline_memory_block_pages()
  2026-01-20 14:33 ` [PATCH v8 2/3] mm/memory hotplug/unplug: Add online_memory_block_pages() and offline_memory_block_pages() Tianyou Li
@ 2026-01-22 11:32   ` Mike Rapoport
  2026-01-24 12:30     ` Li, Tianyou
  0 siblings, 1 reply; 18+ messages in thread
From: Mike Rapoport @ 2026-01-22 11:32 UTC (permalink / raw)
  To: Tianyou Li
  Cc: David Hildenbrand, Oscar Salvador, Wei Yang, Michal Hocko,
	linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Chen Zhang, linux-kernel

Hi,

On Tue, Jan 20, 2026 at 10:33:45PM +0800, Tianyou Li wrote:
> Encapsulate mhp_init_memmap_on_memory() and online_pages() into
> online_memory_block_pages(). This allows a later patch to optimize
> set_zone_contiguous() to check the whole memory block range at once,
> instead of checking zone contiguity for each separate range.
> 
> Correspondingly, encapsulate the mhp_deinit_memmap_on_memory() and
> offline_pages() into offline_memory_block_pages().
> 
> Tested-by: Yuan Liu <yuan1.liu@intel.com>
> Reviewed-by: Yuan Liu <yuan1.liu@intel.com>
> Signed-off-by: Tianyou Li <tianyou.li@intel.com>
> ---
>  drivers/base/memory.c          | 53 ++++++---------------------
>  include/linux/memory_hotplug.h | 18 +++++-----
>  mm/memory_hotplug.c            | 65 +++++++++++++++++++++++++++++++---
>  3 files changed, 80 insertions(+), 56 deletions(-)
> 
> diff --git a/drivers/base/memory.c b/drivers/base/memory.c
> index 751f248ca4a8..ea4d6fbf34fd 100644
> --- a/drivers/base/memory.c
> +++ b/drivers/base/memory.c
> @@ -246,31 +246,12 @@ static int memory_block_online(struct memory_block *mem)
>  		nr_vmemmap_pages = mem->altmap->free;
>  
>  	mem_hotplug_begin();
> -	if (nr_vmemmap_pages) {
> -		ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages, zone);
> -		if (ret)
> -			goto out;
> -	}
> -
> -	ret = online_pages(start_pfn + nr_vmemmap_pages,
> -			   nr_pages - nr_vmemmap_pages, zone, mem->group);
> -	if (ret) {
> -		if (nr_vmemmap_pages)
> -			mhp_deinit_memmap_on_memory(start_pfn, nr_vmemmap_pages);
> -		goto out;
> -	}
> -
> -	/*
> -	 * Account once onlining succeeded. If the zone was unpopulated, it is
> -	 * now already properly populated.
> -	 */
> -	if (nr_vmemmap_pages)
> -		adjust_present_page_count(pfn_to_page(start_pfn), mem->group,
> -					  nr_vmemmap_pages);
> -
> -	mem->zone = zone;
> -out:
> +	ret = online_memory_block_pages(start_pfn, nr_pages, nr_vmemmap_pages,
> +					zone, mem->group);
> +	if (!ret)
> +		mem->zone = zone;

I think we can move most of memory_block_online() to the new function and
pass struct memory_block to it.
I'd suggest 
	
	int mhp_block_online(struct memory_block *block)

and 

	int mhp_block_offline(struct memory_block *block)

Other than that LGTM.

>  	mem_hotplug_done();
> +
>  	return ret;
>  }
>  

-- 
Sincerely yours,
Mike.


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v8 3/3] mm/memory hotplug/unplug: Optimize zone->contiguous update when changes pfn range
  2026-01-20 14:33 ` [PATCH v8 3/3] mm/memory hotplug/unplug: Optimize zone->contiguous update when changes pfn range Tianyou Li
@ 2026-01-22 11:43   ` Mike Rapoport
  2026-01-24 12:43     ` Li, Tianyou
  0 siblings, 1 reply; 18+ messages in thread
From: Mike Rapoport @ 2026-01-22 11:43 UTC (permalink / raw)
  To: Tianyou Li
  Cc: David Hildenbrand, Oscar Salvador, Wei Yang, Michal Hocko,
	linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Chen Zhang, linux-kernel

Hi,

On Tue, Jan 20, 2026 at 10:33:46PM +0800, Tianyou Li wrote:
> When move_pfn_range_to_zone() or remove_pfn_range_from_zone() is invoked,
> it updates zone->contiguous by checking the new zone's pfn range from
> beginning to end, regardless of the previous state of the old zone. When
> the zone's pfn range is large, the cost of traversing the pfn range to
> update zone->contiguous can be significant.
> 
> Add fast paths to quickly detect cases where the zone is definitely not
> contiguous, without scanning the new zone. The cases are: when the new
> range does not overlap the previous range, contiguous should be false; if
> the new range is adjacent to the previous range, only the new range needs
> to be checked; if the newly added pages cannot fill the hole in the
> previous zone, contiguous should be false.
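[Editorial note: the three fast-path cases above can be sketched as a small
userspace model. This is not the kernel's code; `struct zone_model`,
`contig_after_growing()` and the enum names are illustrative stand-ins for
the patch's `zone_contig_state_after_growing()` helper.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified model of a zone's pfn span (not kernel code). */
struct zone_model {
	unsigned long start_pfn;	/* first pfn spanned by the zone */
	unsigned long spanned_pages;	/* total span, including holes */
	unsigned long present_pages;	/* pages actually present */
	bool contiguous;		/* cached contiguity state */
};

enum contig_state {
	CONTIG_NO,		/* definitely not contiguous, no scan needed */
	CONTIG_CHECK_NEW_ONLY,	/* only the new range has to be scanned */
	CONTIG_FULL_SCAN	/* no shortcut applies: scan the whole zone */
};

/*
 * Decide, without a full scan, what growing the zone by
 * [start_pfn, start_pfn + nr_pages) means for contiguity.
 */
static enum contig_state contig_after_growing(const struct zone_model *z,
					      unsigned long start_pfn,
					      unsigned long nr_pages)
{
	unsigned long zone_end = z->start_pfn + z->spanned_pages;
	unsigned long new_end = start_pfn + nr_pages;

	if (z->spanned_pages == 0)	/* empty zone: the new range decides */
		return CONTIG_CHECK_NEW_ONLY;

	/* Case 1: a disjoint, non-adjacent range leaves a hole. */
	if (new_end < z->start_pfn || start_pfn > zone_end)
		return CONTIG_NO;

	/* Case 2: the range merely extends an already-contiguous span. */
	if ((new_end == z->start_pfn || start_pfn == zone_end) &&
	    z->contiguous)
		return CONTIG_CHECK_NEW_ONLY;

	/* Case 3: too few new pages to fill the existing hole. */
	if (z->spanned_pages - z->present_pages > nr_pages)
		return CONTIG_NO;

	return CONTIG_FULL_SCAN;
}
```

In the first two cases the expensive pageblock walk is skipped entirely,
which is where the reported 80% hotplug-time reduction comes from.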
> 
> The following test cases of memory hotplug for a VM [1], tested in the
> environment [2], show that this optimization can significantly reduce the
> memory hotplug time [3].
> 
> +----------------+------+---------------+--------------+----------------+
> |                | Size | Time (before) | Time (after) | Time Reduction |
> |                +------+---------------+--------------+----------------+
> | Plug Memory    | 256G |      10s      |      2s      |       80%      |
> |                +------+---------------+--------------+----------------+
> |                | 512G |      33s      |      6s      |       81%      |
> +----------------+------+---------------+--------------+----------------+
> 
> +----------------+------+---------------+--------------+----------------+
> |                | Size | Time (before) | Time (after) | Time Reduction |
> |                +------+---------------+--------------+----------------+
> | Unplug Memory  | 256G |      10s      |      2s      |       80%      |
> |                +------+---------------+--------------+----------------+
> |                | 512G |      34s      |      6s      |       82%      |
> +----------------+------+---------------+--------------+----------------+
> 
> [1] Qemu commands to hotplug 256G/512G memory for a VM:
>     object_add memory-backend-ram,id=hotmem0,size=256G/512G,share=on
>     device_add virtio-mem-pci,id=vmem1,memdev=hotmem0,bus=port1
>     qom-set vmem1 requested-size 256G/512G (Plug Memory)
>     qom-set vmem1 requested-size 0G (Unplug Memory)
> 
> [2] Hardware     : Intel Icelake server
>     Guest Kernel : v6.18-rc2
>     Qemu         : v9.0.0
> 
>     Launch VM    :
>     qemu-system-x86_64 -accel kvm -cpu host \
>     -drive file=./Centos10_cloud.qcow2,format=qcow2,if=virtio \
>     -drive file=./seed.img,format=raw,if=virtio \
>     -smp 3,cores=3,threads=1,sockets=1,maxcpus=3 \
>     -m 2G,slots=10,maxmem=2052472M \
>     -device pcie-root-port,id=port1,bus=pcie.0,slot=1,multifunction=on \
>     -device pcie-root-port,id=port2,bus=pcie.0,slot=2 \
>     -nographic -machine q35 \
>     -nic user,hostfwd=tcp::3000-:22
> 
>     Guest kernel auto-onlines newly added memory blocks:
>     echo online > /sys/devices/system/memory/auto_online_blocks
> 
> [3] The time from typing the QEMU commands in [1] to when the output of
>     'grep MemTotal /proc/meminfo' on Guest reflects that all hotplugged
>     memory is recognized.
> 
> Reported-by: Nanhai Zou <nanhai.zou@intel.com>
> Reported-by: Chen Zhang <zhangchen.kidd@jd.com>
> Tested-by: Yuan Liu <yuan1.liu@intel.com>
> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
> Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
> Reviewed-by: Yu C Chen <yu.c.chen@intel.com>
> Reviewed-by: Pan Deng <pan.deng@intel.com>
> Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
> Reviewed-by: Yuan Liu <yuan1.liu@intel.com>
> Signed-off-by: Tianyou Li <tianyou.li@intel.com>
> ---

...

> +int online_memory_block_pages(unsigned long start_pfn, unsigned long nr_pages,
> +			unsigned long nr_vmemmap_pages, struct zone *zone,
> +			struct memory_group *group)
>  {
> +	const bool contiguous = zone->contiguous;
> +	enum zone_contig_state new_contiguous_state;
>  	int ret;
>  
> +	/*
> +	 * Calculate the new zone contig state before move_pfn_range_to_zone()
> +	 * sets the zone temporarily to non-contiguous.
> +	 */
> +	new_contiguous_state = zone_contig_state_after_growing(zone, start_pfn,
> +							       nr_pages);
> +
>  	if (nr_vmemmap_pages) {
>  		ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages, zone);
>  		if (ret)
> -			return ret;
> +			goto restore_zone_contig;

But zone_contig_state_after_growing() does not change zone->contiguous. Why
do we need to save and restore it?

>  	}
>  
>  	ret = online_pages(start_pfn + nr_vmemmap_pages,
> @@ -1271,7 +1320,7 @@ int online_memory_block_pages(unsigned long start_pfn,
>  	if (ret) {
>  		if (nr_vmemmap_pages)
>  			mhp_deinit_memmap_on_memory(start_pfn, nr_vmemmap_pages);
> -		return ret;
> +		goto restore_zone_contig;
>  	}
>  
>  	/*
> @@ -1282,6 +1331,15 @@ int online_memory_block_pages(unsigned long start_pfn,
>  		adjust_present_page_count(pfn_to_page(start_pfn), group,
>  					  nr_vmemmap_pages);
>  
> +	/*
> +	 * Now that the ranges are indicated as online, check whether the whole
> +	 * zone is contiguous.
> +	 */
> +	set_zone_contiguous(zone, new_contiguous_state);
> +	return 0;
> +
> +restore_zone_contig:
> +	zone->contiguous = contiguous;
>  	return ret;
>  }

-- 
Sincerely yours,
Mike.



* Re: [PATCH v8 1/3] mm/memory hotplug: Fix zone->contiguous always false when hotplug
  2026-01-22 11:16   ` Mike Rapoport
@ 2026-01-24 12:18     ` Li, Tianyou
  2026-01-26 21:58       ` Andrew Morton
  2026-01-27  6:53       ` Mike Rapoport
  0 siblings, 2 replies; 18+ messages in thread
From: Li, Tianyou @ 2026-01-24 12:18 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: David Hildenbrand, Oscar Salvador, Wei Yang, Michal Hocko,
	linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Chen Zhang, linux-kernel

Appreciated for your review, Mike.

On 1/22/2026 7:16 PM, Mike Rapoport wrote:
> Hi,
>
> On Tue, Jan 20, 2026 at 10:33:44PM +0800, Tianyou Li wrote:
>> From: Yuan Liu <yuan1.liu@intel.com>
>>
>> set_zone_contiguous() uses __pageblock_pfn_to_page() to detect
>> pageblocks that either do not exist (hole) or that do not belong
>> to the same zone.
>>
>> __pageblock_pfn_to_page(), however, relies on pfn_to_online_page(),
>> effectively always returning NULL for memory ranges that were not
>> onlined yet. So when called on a range-to-be-onlined, it indicates
>> a memory hole to set_zone_contiguous().
>>
>> Consequently, the set_zone_contiguous() call in move_pfn_range_to_zone(),
>> which happens early during memory onlining, will never detect a
>> zone as being contiguous. Bad.
>>
>> To fix the issue, move the set_zone_contiguous() call to a later
>> stage in memory onlining, where pfn_to_online_page() will succeed:
>> after we mark the memory sections to be online.
>>
>> Fixes: 2d070eab2e82 ("mm: consider zone which is not fully populated to have holes")
> cc stable@ perhaps?

Yes, will do. Thanks.

>
>> Cc: Michal Hocko <mhocko@suse.com>
>> Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
>> Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
>> Signed-off-by: Tianyou Li <tianyou.li@intel.com>
>> ---
>>   mm/memory_hotplug.c | 9 +++++++--
>>   1 file changed, 7 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index a63ec679d861..c8f492b5daf0 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -782,8 +782,6 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
>>   	memmap_init_range(nr_pages, nid, zone_idx(zone), start_pfn, 0,
>>   			 MEMINIT_HOTPLUG, altmap, migratetype,
>>   			 isolate_pageblock);
>> -
>> -	set_zone_contiguous(zone);
> move_pfn_range_to_zone() is also called from memremap::pagemap_range().
> Shouldn't we add set_zone_contiguous() there as well?

I could not find where online_pages() is invoked along the path of
memremap::pagemap_range(). Are there other functions that online the
remapped pages? Much appreciated for the guidance.

I left the zone contiguous state unchanged, as if the optimization had not
taken place, to avoid any unexpected behavior.

>>   }
>>   
>>   struct auto_movable_stats {
>> @@ -1205,6 +1203,13 @@ int online_pages(unsigned long pfn, unsigned long nr_pages,
>>   	}
>>   
>>   	online_pages_range(pfn, nr_pages);
>> +
>> +	/*
>> +	 * Now that the ranges are indicated as online, check whether the whole
>> +	 * zone is contiguous.
>> +	 */
>> +	set_zone_contiguous(zone);
>> +
>>   	adjust_present_page_count(pfn_to_page(pfn), group, nr_pages);
>>   
>>   	if (node_arg.nid >= 0)
>> -- 
>> 2.47.1
>>



* Re: [PATCH v8 2/3] mm/memory hotplug/unplug: Add online_memory_block_pages() and offline_memory_block_pages()
  2026-01-22 11:32   ` Mike Rapoport
@ 2026-01-24 12:30     ` Li, Tianyou
  2026-01-27  6:58       ` Mike Rapoport
  0 siblings, 1 reply; 18+ messages in thread
From: Li, Tianyou @ 2026-01-24 12:30 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: David Hildenbrand, Oscar Salvador, Wei Yang, Michal Hocko,
	linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Chen Zhang, linux-kernel


On 1/22/2026 7:32 PM, Mike Rapoport wrote:
> Hi,
>
> On Tue, Jan 20, 2026 at 10:33:45PM +0800, Tianyou Li wrote:
>> Encapsulate mhp_init_memmap_on_memory() and online_pages() into
>> online_memory_block_pages(). This lets us further optimize
>> set_zone_contiguous() to check the whole memory block range at once,
>> instead of checking zone contiguity for each range separately.
>>
>> Correspondingly, encapsulate the mhp_deinit_memmap_on_memory() and
>> offline_pages() into offline_memory_block_pages().
>>
>> Tested-by: Yuan Liu <yuan1.liu@intel.com>
>> Reviewed-by: Yuan Liu <yuan1.liu@intel.com>
>> Signed-off-by: Tianyou Li <tianyou.li@intel.com>
>> ---
>>   drivers/base/memory.c          | 53 ++++++---------------------
>>   include/linux/memory_hotplug.h | 18 +++++-----
>>   mm/memory_hotplug.c            | 65 +++++++++++++++++++++++++++++++---
>>   3 files changed, 80 insertions(+), 56 deletions(-)
>>
>> diff --git a/drivers/base/memory.c b/drivers/base/memory.c
>> index 751f248ca4a8..ea4d6fbf34fd 100644
>> --- a/drivers/base/memory.c
>> +++ b/drivers/base/memory.c
>> @@ -246,31 +246,12 @@ static int memory_block_online(struct memory_block *mem)
>>   		nr_vmemmap_pages = mem->altmap->free;
>>   
>>   	mem_hotplug_begin();
>> -	if (nr_vmemmap_pages) {
>> -		ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages, zone);
>> -		if (ret)
>> -			goto out;
>> -	}
>> -
>> -	ret = online_pages(start_pfn + nr_vmemmap_pages,
>> -			   nr_pages - nr_vmemmap_pages, zone, mem->group);
>> -	if (ret) {
>> -		if (nr_vmemmap_pages)
>> -			mhp_deinit_memmap_on_memory(start_pfn, nr_vmemmap_pages);
>> -		goto out;
>> -	}
>> -
>> -	/*
>> -	 * Account once onlining succeeded. If the zone was unpopulated, it is
>> -	 * now already properly populated.
>> -	 */
>> -	if (nr_vmemmap_pages)
>> -		adjust_present_page_count(pfn_to_page(start_pfn), mem->group,
>> -					  nr_vmemmap_pages);
>> -
>> -	mem->zone = zone;
>> -out:
>> +	ret = online_memory_block_pages(start_pfn, nr_pages, nr_vmemmap_pages,
>> +					zone, mem->group);
>> +	if (!ret)
>> +		mem->zone = zone;
> I think we can move most of memory_block_online() to the new function and
> pass struct memory_block to it.
> I'd suggest
> 	
> 	int mhp_block_online(struct memory_block *block)
>
> and
>
> 	int mhp_block_offline(struct memory_block *block)
>
> Other than that LGTM.


It's doable; if there are no other comments, I can change the code. Would
that look like moving the functions to mm/memory_hotplug.c, renaming them
to mhp_block_online() and mhp_block_offline(), and updating the call sites
in drivers/base/memory.c where the original functions were invoked?


My prior thought was simply to break the code into pieces as small as
necessary to handle the page onlining together with the zone contiguous
state update.


>
>>   	mem_hotplug_done();
>> +
>>   	return ret;
>>   }
>>   



* Re: [PATCH v8 3/3] mm/memory hotplug/unplug: Optimize zone->contiguous update when changes pfn range
  2026-01-22 11:43   ` Mike Rapoport
@ 2026-01-24 12:43     ` Li, Tianyou
  2026-01-27  7:10       ` Mike Rapoport
  0 siblings, 1 reply; 18+ messages in thread
From: Li, Tianyou @ 2026-01-24 12:43 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: David Hildenbrand, Oscar Salvador, Wei Yang, Michal Hocko,
	linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Chen Zhang, linux-kernel


On 1/22/2026 7:43 PM, Mike Rapoport wrote:
> Hi,
>
> On Tue, Jan 20, 2026 at 10:33:46PM +0800, Tianyou Li wrote:
>> When move_pfn_range_to_zone() or remove_pfn_range_from_zone() is invoked,
>> it updates zone->contiguous by checking the new zone's pfn range from
>> beginning to end, regardless of the previous state of the old zone. When
>> the zone's pfn range is large, the cost of traversing the pfn range to
>> update zone->contiguous can be significant.
>>
>> Add fast paths to quickly detect cases where the zone is definitely not
>> contiguous, without scanning the new zone. The cases are: when the new
>> range does not overlap the previous range, contiguous should be false; if
>> the new range is adjacent to the previous range, only the new range needs
>> to be checked; if the newly added pages cannot fill the hole in the
>> previous zone, contiguous should be false.
>>
>> The following test cases of memory hotplug for a VM [1], tested in the
>> environment [2], show that this optimization can significantly reduce the
>> memory hotplug time [3].
>>
>> +----------------+------+---------------+--------------+----------------+
>> |                | Size | Time (before) | Time (after) | Time Reduction |
>> |                +------+---------------+--------------+----------------+
>> | Plug Memory    | 256G |      10s      |      2s      |       80%      |
>> |                +------+---------------+--------------+----------------+
>> |                | 512G |      33s      |      6s      |       81%      |
>> +----------------+------+---------------+--------------+----------------+
>>
>> +----------------+------+---------------+--------------+----------------+
>> |                | Size | Time (before) | Time (after) | Time Reduction |
>> |                +------+---------------+--------------+----------------+
>> | Unplug Memory  | 256G |      10s      |      2s      |       80%      |
>> |                +------+---------------+--------------+----------------+
>> |                | 512G |      34s      |      6s      |       82%      |
>> +----------------+------+---------------+--------------+----------------+
>>
>> [1] Qemu commands to hotplug 256G/512G memory for a VM:
>>      object_add memory-backend-ram,id=hotmem0,size=256G/512G,share=on
>>      device_add virtio-mem-pci,id=vmem1,memdev=hotmem0,bus=port1
>>      qom-set vmem1 requested-size 256G/512G (Plug Memory)
>>      qom-set vmem1 requested-size 0G (Unplug Memory)
>>
>> [2] Hardware     : Intel Icelake server
>>      Guest Kernel : v6.18-rc2
>>      Qemu         : v9.0.0
>>
>>      Launch VM    :
>>      qemu-system-x86_64 -accel kvm -cpu host \
>>      -drive file=./Centos10_cloud.qcow2,format=qcow2,if=virtio \
>>      -drive file=./seed.img,format=raw,if=virtio \
>>      -smp 3,cores=3,threads=1,sockets=1,maxcpus=3 \
>>      -m 2G,slots=10,maxmem=2052472M \
>>      -device pcie-root-port,id=port1,bus=pcie.0,slot=1,multifunction=on \
>>      -device pcie-root-port,id=port2,bus=pcie.0,slot=2 \
>>      -nographic -machine q35 \
>>      -nic user,hostfwd=tcp::3000-:22
>>
>>      Guest kernel auto-onlines newly added memory blocks:
>>      echo online > /sys/devices/system/memory/auto_online_blocks
>>
>> [3] The time from typing the QEMU commands in [1] to when the output of
>>      'grep MemTotal /proc/meminfo' on Guest reflects that all hotplugged
>>      memory is recognized.
>>
>> Reported-by: Nanhai Zou <nanhai.zou@intel.com>
>> Reported-by: Chen Zhang <zhangchen.kidd@jd.com>
>> Tested-by: Yuan Liu <yuan1.liu@intel.com>
>> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
>> Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
>> Reviewed-by: Yu C Chen <yu.c.chen@intel.com>
>> Reviewed-by: Pan Deng <pan.deng@intel.com>
>> Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
>> Reviewed-by: Yuan Liu <yuan1.liu@intel.com>
>> Signed-off-by: Tianyou Li <tianyou.li@intel.com>
>> ---
> ...
>
>> +int online_memory_block_pages(unsigned long start_pfn, unsigned long nr_pages,
>> +			unsigned long nr_vmemmap_pages, struct zone *zone,
>> +			struct memory_group *group)
>>   {
>> +	const bool contiguous = zone->contiguous;
>> +	enum zone_contig_state new_contiguous_state;
>>   	int ret;
>>   
>> +	/*
>> +	 * Calculate the new zone contig state before move_pfn_range_to_zone()
>> +	 * sets the zone temporarily to non-contiguous.
>> +	 */
>> +	new_contiguous_state = zone_contig_state_after_growing(zone, start_pfn,
>> +							       nr_pages);
>> +
>>   	if (nr_vmemmap_pages) {
>>   		ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages, zone);
>>   		if (ret)
>> -			return ret;
>> +			goto restore_zone_contig;
> But zone_contig_state_after_growing() does not change zone->contiguous. Why
> do we need to save and restore it?

move_pfn_range_to_zone() clears the zone contiguous state, and it is
invoked by online_pages(). If an error occurs after
move_pfn_range_to_zone() has been called, as in online_pages(), I think
we had better restore the original value in case the previous zone
contiguous state was true.
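[Editorial note: the save/restore rationale above can be modeled in a few
lines of userspace C. This is an illustrative sketch, not the kernel
function; `struct zone_model`, `clear_contig()` and `online_block_model()`
are hypothetical stand-ins for the kernel helpers being discussed.]

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-in for the zone; only the field under discussion. */
struct zone_model {
	bool contiguous;
};

/* Models move_pfn_range_to_zone() clearing the cached state. */
static void clear_contig(struct zone_model *z)
{
	z->contiguous = false;
}

/*
 * The online path saves zone->contiguous up front because an inner
 * helper clears it, then restores the saved value on any failure that
 * happens after the clear; on success it recomputes the state.
 */
static int online_block_model(struct zone_model *z, bool inner_fails)
{
	const bool saved = z->contiguous;

	clear_contig(z);		/* state temporarily lost here */
	if (inner_fails) {
		z->contiguous = saved;	/* error path: put old state back */
		return -1;
	}
	z->contiguous = true;		/* models set_zone_contiguous() */
	return 0;
}
```

Without the restore, a failed onlining of a previously contiguous zone
would leave zone->contiguous stuck at false even though nothing changed.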


>
>>   	}
>>   
>>   	ret = online_pages(start_pfn + nr_vmemmap_pages,
>> @@ -1271,7 +1320,7 @@ int online_memory_block_pages(unsigned long start_pfn,
>>   	if (ret) {
>>   		if (nr_vmemmap_pages)
>>   			mhp_deinit_memmap_on_memory(start_pfn, nr_vmemmap_pages);
>> -		return ret;
>> +		goto restore_zone_contig;
>>   	}
>>   
>>   	/*
>> @@ -1282,6 +1331,15 @@ int online_memory_block_pages(unsigned long start_pfn,
>>   		adjust_present_page_count(pfn_to_page(start_pfn), group,
>>   					  nr_vmemmap_pages);
>>   
>> +	/*
>> +	 * Now that the ranges are indicated as online, check whether the whole
>> +	 * zone is contiguous.
>> +	 */
>> +	set_zone_contiguous(zone, new_contiguous_state);
>> +	return 0;
>> +
>> +restore_zone_contig:
>> +	zone->contiguous = contiguous;
>>   	return ret;
>>   }



* Re: [PATCH v8 1/3] mm/memory hotplug: Fix zone->contiguous always false when hotplug
  2026-01-24 12:18     ` Li, Tianyou
@ 2026-01-26 21:58       ` Andrew Morton
  2026-01-28 14:16         ` Li, Tianyou
  2026-01-27  6:53       ` Mike Rapoport
  1 sibling, 1 reply; 18+ messages in thread
From: Andrew Morton @ 2026-01-26 21:58 UTC (permalink / raw)
  To: Li, Tianyou
  Cc: Mike Rapoport, David Hildenbrand, Oscar Salvador, Wei Yang,
	Michal Hocko, linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen,
	Qiuxu Zhuo, Yu C Chen, Pan Deng, Chen Zhang, linux-kernel

On Sat, 24 Jan 2026 20:18:39 +0800 "Li, Tianyou" <tianyou.li@intel.com> wrote:

> > cc stable@ perhaps?
> 
> Yes, will do. Thanks.

Please separate backportable patches from those which target the next
merge window.  They have different timing and take different routes
into mainline.

Thanks.




* Re: [PATCH v8 1/3] mm/memory hotplug: Fix zone->contiguous always false when hotplug
  2026-01-24 12:18     ` Li, Tianyou
  2026-01-26 21:58       ` Andrew Morton
@ 2026-01-27  6:53       ` Mike Rapoport
  2026-01-28 13:49         ` Li, Tianyou
  1 sibling, 1 reply; 18+ messages in thread
From: Mike Rapoport @ 2026-01-27  6:53 UTC (permalink / raw)
  To: Li, Tianyou
  Cc: David Hildenbrand, Oscar Salvador, Wei Yang, Michal Hocko,
	linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Chen Zhang, linux-kernel

Hi,

On Sat, Jan 24, 2026 at 08:18:39PM +0800, Li, Tianyou wrote:
> On 1/22/2026 7:16 PM, Mike Rapoport wrote:
> > On Tue, Jan 20, 2026 at 10:33:44PM +0800, Tianyou Li wrote:
> > > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > > index a63ec679d861..c8f492b5daf0 100644
> > > --- a/mm/memory_hotplug.c
> > > +++ b/mm/memory_hotplug.c
> > > @@ -782,8 +782,6 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
> > >   	memmap_init_range(nr_pages, nid, zone_idx(zone), start_pfn, 0,
> > >   			 MEMINIT_HOTPLUG, altmap, migratetype,
> > >   			 isolate_pageblock);
> > > -
> > > -	set_zone_contiguous(zone);
> > move_pfn_range_to_zone() is also called from memremap::pagemap_range().
> > Shouldn't we add set_zone_contiguous() there as well?
> 
> I could not find where online_pages() is invoked along the path of
> memremap::pagemap_range(). Are there other functions that online the
> remapped pages? Much appreciated for the guidance.

Currently when we do memremap_pages() we have

	memremap_pages() ->
		pagemap_range() ->
			move_pfn_range_to_zone() ->
				set_zone_contiguous();
 
Once set_zone_contiguous() is moved out from move_pfn_range_to_zone(),
memremap_pages() path never calls it.
I'm not sure if the pages added in memremap_pages() are online, but to keep
its current behaviour I think it should call set_zone_contiguous()
explicitly.

-- 
Sincerely yours,
Mike.



* Re: [PATCH v8 2/3] mm/memory hotplug/unplug: Add online_memory_block_pages() and offline_memory_block_pages()
  2026-01-24 12:30     ` Li, Tianyou
@ 2026-01-27  6:58       ` Mike Rapoport
  2026-01-28 13:56         ` Li, Tianyou
  0 siblings, 1 reply; 18+ messages in thread
From: Mike Rapoport @ 2026-01-27  6:58 UTC (permalink / raw)
  To: Li, Tianyou
  Cc: David Hildenbrand, Oscar Salvador, Wei Yang, Michal Hocko,
	linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Chen Zhang, linux-kernel

On Sat, Jan 24, 2026 at 08:30:57PM +0800, Li, Tianyou wrote:
> 
> On 1/22/2026 7:32 PM, Mike Rapoport wrote:

> > > diff --git a/drivers/base/memory.c b/drivers/base/memory.c
> > > index 751f248ca4a8..ea4d6fbf34fd 100644
> > > --- a/drivers/base/memory.c
> > > +++ b/drivers/base/memory.c
> > > @@ -246,31 +246,12 @@ static int memory_block_online(struct memory_block *mem)
> > >   		nr_vmemmap_pages = mem->altmap->free;
> > >   	mem_hotplug_begin();
> > > -	if (nr_vmemmap_pages) {
> > > -		ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages, zone);
> > > -		if (ret)
> > > -			goto out;
> > > -	}
> > > -
> > > -	ret = online_pages(start_pfn + nr_vmemmap_pages,
> > > -			   nr_pages - nr_vmemmap_pages, zone, mem->group);
> > > -	if (ret) {
> > > -		if (nr_vmemmap_pages)
> > > -			mhp_deinit_memmap_on_memory(start_pfn, nr_vmemmap_pages);
> > > -		goto out;
> > > -	}
> > > -
> > > -	/*
> > > -	 * Account once onlining succeeded. If the zone was unpopulated, it is
> > > -	 * now already properly populated.
> > > -	 */
> > > -	if (nr_vmemmap_pages)
> > > -		adjust_present_page_count(pfn_to_page(start_pfn), mem->group,
> > > -					  nr_vmemmap_pages);
> > > -
> > > -	mem->zone = zone;
> > > -out:
> > > +	ret = online_memory_block_pages(start_pfn, nr_pages, nr_vmemmap_pages,
> > > +					zone, mem->group);
> > > +	if (!ret)
> > > +		mem->zone = zone;
> > I think we can move most of memory_block_online() to the new function and
> > pass struct memory_block to it.
> > I'd suggest
> > 	
> > 	int mhp_block_online(struct memory_block *block)
> > 
> > and
> > 
> > 	int mhp_block_offline(struct memory_block *block)
> > 
> > Other than that LGTM.
> 
> 
> It's doable; if there are no other comments, I can change the code. Would
> that look like moving the functions to mm/memory_hotplug.c, renaming them
> to mhp_block_online() and mhp_block_offline(), and updating the call sites
> in drivers/base/memory.c where the original functions were invoked?

Yeah, that's what I thought about. 

Even more broadly, I think the functionality in drivers/base/memory.c
belongs to mm/ much more than to drivers/ but that's surely out of scope
for these patches.
 
> My prior thought was simply to break the code into pieces as small as
> necessary to handle the page onlining together with the zone contiguous
> state update.

IMHO moving the entire function is cleaner, let's hear what David and Oscar
think.

-- 
Sincerely yours,
Mike.



* Re: [PATCH v8 3/3] mm/memory hotplug/unplug: Optimize zone->contiguous update when changes pfn range
  2026-01-24 12:43     ` Li, Tianyou
@ 2026-01-27  7:10       ` Mike Rapoport
  2026-01-28 14:11         ` Li, Tianyou
  0 siblings, 1 reply; 18+ messages in thread
From: Mike Rapoport @ 2026-01-27  7:10 UTC (permalink / raw)
  To: Li, Tianyou
  Cc: David Hildenbrand, Oscar Salvador, Wei Yang, Michal Hocko,
	linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Chen Zhang, linux-kernel

Hi,

On Sat, Jan 24, 2026 at 08:43:51PM +0800, Li, Tianyou wrote:
> On 1/22/2026 7:43 PM, Mike Rapoport wrote:
>
> > > +int online_memory_block_pages(unsigned long start_pfn, unsigned long nr_pages,
> > > +			unsigned long nr_vmemmap_pages, struct zone *zone,
> > > +			struct memory_group *group)
> > >   {
> > > +	const bool contiguous = zone->contiguous;
> > > +	enum zone_contig_state new_contiguous_state;
> > >   	int ret;
> > > +	/*
> > > +	 * Calculate the new zone contig state before move_pfn_range_to_zone()
> > > +	 * sets the zone temporarily to non-contiguous.
> > > +	 */
> > > +	new_contiguous_state = zone_contig_state_after_growing(zone, start_pfn,
> > > +							       nr_pages);
> > > +
> > >   	if (nr_vmemmap_pages) {
> > >   		ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages, zone);
> > >   		if (ret)
> > > -			return ret;
> > > +			goto restore_zone_contig;
> > But zone_contig_state_after_growing() does not change zone->contiguous. Why
> > do we need to save and restore it?
> 
> move_pfn_range_to_zone() clears the zone contiguous state, and it is
> invoked by online_pages(). If an error occurs after
> move_pfn_range_to_zone() has been called, as in online_pages(), I think
> we had better restore the original value in case the previous zone
> contiguous state was true.
 
But after move_pfn_range_to_zone() the added pages are still offline, so I
think the zone remains contiguous and the call to
clear_zone_contiguous(zone) should not be there.

BTW, as we have set_zone_contiguous(ZONE_CONTIG_NO), I think we can use it
instead of clear_zone_contiguous() and remove the latter.
 

-- 
Sincerely yours,
Mike.



* Re: [PATCH v8 1/3] mm/memory hotplug: Fix zone->contiguous always false when hotplug
  2026-01-27  6:53       ` Mike Rapoport
@ 2026-01-28 13:49         ` Li, Tianyou
  0 siblings, 0 replies; 18+ messages in thread
From: Li, Tianyou @ 2026-01-28 13:49 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: David Hildenbrand, Oscar Salvador, Wei Yang, Michal Hocko,
	linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Chen Zhang, linux-kernel


On 1/27/2026 2:53 PM, Mike Rapoport wrote:
> Hi,
>
> On Sat, Jan 24, 2026 at 08:18:39PM +0800, Li, Tianyou wrote:
>> On 1/22/2026 7:16 PM, Mike Rapoport wrote:
>>> On Tue, Jan 20, 2026 at 10:33:44PM +0800, Tianyou Li wrote:
>>>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>>> index a63ec679d861..c8f492b5daf0 100644
>>>> --- a/mm/memory_hotplug.c
>>>> +++ b/mm/memory_hotplug.c
>>>> @@ -782,8 +782,6 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
>>>>    	memmap_init_range(nr_pages, nid, zone_idx(zone), start_pfn, 0,
>>>>    			 MEMINIT_HOTPLUG, altmap, migratetype,
>>>>    			 isolate_pageblock);
>>>> -
>>>> -	set_zone_contiguous(zone);
>>> move_pfn_range_to_zone() is also called from memremap::pagemap_range().
>>> Shouldn't we add set_zone_contiguous() there as well?
>> I did not find the place where the online_pages was invoked along path of
>> the memremap:pagemap_range() function. Would there be other functions to
>> online the pages remapped? Much appreciated for the guidance.
> Currently when we do memremap_pages() we have
>
> 	memremap_pages() ->
> 		pagemap_range() ->
> 			move_pfn_range_to_zone() ->
> 				set_zone_contiguous();
>   
> Once set_zone_contiguous() is moved out from move_pfn_range_to_zone(),
> memremap_pages() path never calls it.
> I'm not sure if the pages added in memremap_pages() are online, but to keep
> it's current behaviour I think it should call set_zone_contiguous()
> explicitly.

Thanks Mike. It's doable for me to add such a line. My concern is that
placing a set_zone_contiguous() there will not add any meaningful value:
per my understanding, the zone contiguous state will remain false because
the pages are not online.

Regards,

Tianyou




* Re: [PATCH v8 2/3] mm/memory hotplug/unplug: Add online_memory_block_pages() and offline_memory_block_pages()
  2026-01-27  6:58       ` Mike Rapoport
@ 2026-01-28 13:56         ` Li, Tianyou
  0 siblings, 0 replies; 18+ messages in thread
From: Li, Tianyou @ 2026-01-28 13:56 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: David Hildenbrand, Oscar Salvador, Wei Yang, Michal Hocko,
	linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Chen Zhang, linux-kernel


On 1/27/2026 2:58 PM, Mike Rapoport wrote:
> On Sat, Jan 24, 2026 at 08:30:57PM +0800, Li, Tianyou wrote:
>> On 1/22/2026 7:32 PM, Mike Rapoport wrote:
>>>> diff --git a/drivers/base/memory.c b/drivers/base/memory.c
>>>> index 751f248ca4a8..ea4d6fbf34fd 100644
>>>> --- a/drivers/base/memory.c
>>>> +++ b/drivers/base/memory.c
>>>> @@ -246,31 +246,12 @@ static int memory_block_online(struct memory_block *mem)
>>>>    		nr_vmemmap_pages = mem->altmap->free;
>>>>    	mem_hotplug_begin();
>>>> -	if (nr_vmemmap_pages) {
>>>> -		ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages, zone);
>>>> -		if (ret)
>>>> -			goto out;
>>>> -	}
>>>> -
>>>> -	ret = online_pages(start_pfn + nr_vmemmap_pages,
>>>> -			   nr_pages - nr_vmemmap_pages, zone, mem->group);
>>>> -	if (ret) {
>>>> -		if (nr_vmemmap_pages)
>>>> -			mhp_deinit_memmap_on_memory(start_pfn, nr_vmemmap_pages);
>>>> -		goto out;
>>>> -	}
>>>> -
>>>> -	/*
>>>> -	 * Account once onlining succeeded. If the zone was unpopulated, it is
>>>> -	 * now already properly populated.
>>>> -	 */
>>>> -	if (nr_vmemmap_pages)
>>>> -		adjust_present_page_count(pfn_to_page(start_pfn), mem->group,
>>>> -					  nr_vmemmap_pages);
>>>> -
>>>> -	mem->zone = zone;
>>>> -out:
>>>> +	ret = online_memory_block_pages(start_pfn, nr_pages, nr_vmemmap_pages,
>>>> +					zone, mem->group);
>>>> +	if (!ret)
>>>> +		mem->zone = zone;
>>> I think we can move most of memory_block_online() to the new function and
>>> pass struct memory_block to it.
>>> I'd suggest
>>> 	
>>> 	int mhp_block_online(struct memory_block *block)
>>>
>>> and
>>>
>>> 	int mhp_block_offline(struct memory_block *block)
>>>
>>> Other than that LGTM.
>>
>> It's doable; if there are no other comments, I can change the code. Would
>> that mean moving the functions to mm/memory_hotplug.c, renaming them to
>> mhp_block_online() and mhp_block_offline(), and updating the references
>> where the original functions are invoked in drivers/base/memory.c?
> Yeah, that's what I thought about.
>
> Even more broadly, I think the functionality in drivers/base/memory.c
> belongs to mm/ much more than to drivers/ but that's surely out of scope
> for these patches.
>   


Thanks, Mike, for the confirmation.


>> My prior thought on this was just to break the code into pieces as small
>> as necessary, to handle the page-onlining part together with the zone
>> contiguous state update.
> IMHO moving the entire function is cleaner, let's hear what David and Oscar
> think.


Sure. Actually, the patch is already prepared: mhp_block_online() and
mhp_block_offline() have been moved to memory_hotplug.c, while
online_memory_block_pages() and offline_memory_block_pages() remain, but
as static functions.


Once I get feedback from David and Oscar, I can change them accordingly.



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v8 3/3] mm/memory hotplug/unplug: Optimize zone->contiguous update when changes pfn range
  2026-01-27  7:10       ` Mike Rapoport
@ 2026-01-28 14:11         ` Li, Tianyou
  0 siblings, 0 replies; 18+ messages in thread
From: Li, Tianyou @ 2026-01-28 14:11 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: David Hildenbrand, Oscar Salvador, Wei Yang, Michal Hocko,
	linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Chen Zhang, linux-kernel


On 1/27/2026 3:10 PM, Mike Rapoport wrote:
> Hi,
>
> On Sat, Jan 24, 2026 at 08:43:51PM +0800, Li, Tianyou wrote:
>> On 1/22/2026 7:43 PM, Mike Rapoport wrote:
>>
>>>> +int online_memory_block_pages(unsigned long start_pfn, unsigned long nr_pages,
>>>> +			unsigned long nr_vmemmap_pages, struct zone *zone,
>>>> +			struct memory_group *group)
>>>>    {
>>>> +	const bool contiguous = zone->contiguous;
>>>> +	enum zone_contig_state new_contiguous_state;
>>>>    	int ret;
>>>> +	/*
>>>> +	 * Calculate the new zone contig state before move_pfn_range_to_zone()
>>>> +	 * sets the zone temporarily to non-contiguous.
>>>> +	 */
>>>> +	new_contiguous_state = zone_contig_state_after_growing(zone, start_pfn,
>>>> +							       nr_pages);
>>>> +
>>>>    	if (nr_vmemmap_pages) {
>>>>    		ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages, zone);
>>>>    		if (ret)
>>>> -			return ret;
>>>> +			goto restore_zone_contig;
>>> But zone_contig_state_after_growing() does not change zone->contiguous. Why
>>> do we need to save and restore it?
>> move_pfn_range_to_zone() clears the zone contiguous state, and it is
>> invoked by online_pages(). If an error occurs after
>> move_pfn_range_to_zone() is called, as in online_pages(), I think we'd
>> better restore the original value if the previous zone contiguous state
>> was true.
>   
> But after move_pfn_range_to_zone() the added pages are still offline, so I
> think the zone remains contiguous and the call to
> clear_zone_contiguous(zone) should not be there.

Since move_pfn_range_to_zone() may change the zone's contiguous state,
clear_zone_contiguous() should be invoked. If we did not call
clear_zone_contiguous() properly, in particular if we kept the zone's
contiguous state set to true while the zone is actually non-contiguous
after resizing, couldn't the code paths that rely on that state fail?

I am not sure whether this patch series should handle the
clear_zone_contiguous() call inside or outside of move_pfn_range_to_zone().
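As an aside, the fast-path idea under discussion can be illustrated with a small standalone model. This is a hypothetical simplification, not the patch's actual zone_contig_state_after_growing(): a zone spans [start_pfn, start_pfn + spanned), and growing a contiguous zone stays on the fast path only when the new range attaches flush at either end of that span.

```c
#include <stdbool.h>

/* Simplified model of the fast-path decision; the real kernel code
 * tracks more state than this. */
enum zone_contig_state {
	ZONE_CONTIG_NO,      /* zone is known non-contiguous */
	ZONE_CONTIG_YES,     /* zone is known contiguous */
	ZONE_CONTIG_UNKNOWN, /* fast path inconclusive: full rescan needed */
};

struct zone_model {
	unsigned long start_pfn;
	unsigned long spanned;  /* PFNs in the span; 0 means an empty zone */
	bool contiguous;
};

static enum zone_contig_state
contig_state_after_growing(const struct zone_model *z,
			   unsigned long start_pfn, unsigned long nr_pages)
{
	unsigned long zone_end = z->start_pfn + z->spanned;
	unsigned long new_end = start_pfn + nr_pages;

	if (z->spanned == 0)
		return ZONE_CONTIG_YES;   /* empty zone: new range is the span */
	if (!z->contiguous)
		return ZONE_CONTIG_NO;    /* adding pages cannot heal a hole */
	if (start_pfn == zone_end || new_end == z->start_pfn)
		return ZONE_CONTIG_YES;   /* attaches flush at either end */
	if (start_pfn > zone_end || new_end < z->start_pfn)
		return ZONE_CONTIG_NO;    /* the grown span now covers a gap */
	return ZONE_CONTIG_UNKNOWN;       /* overlap: fall back to a rescan */
}
```

The point of the fast path is that none of the decided cases needs to walk the zone's PFNs; only the UNKNOWN case falls back to the full contiguity scan.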

> BTW, as we have set_zone_contiguous(ZONE_CONTIG_NO) I think we can use it
> instead if clear_zone_contiguous() and remove the latter.
>   

Yes, we can. I hesitate to do so because clear_zone_contiguous() has a
self-explanatory name, and the clear/set pair seems like a pattern worth
preserving. I am OK with changing the code; let's hear other comments on
whether that is feasible. Thanks.


Regards,

Tianyou



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v8 1/3] mm/memory hotplug: Fix zone->contiguous always false when hotplug
  2026-01-26 21:58       ` Andrew Morton
@ 2026-01-28 14:16         ` Li, Tianyou
  0 siblings, 0 replies; 18+ messages in thread
From: Li, Tianyou @ 2026-01-28 14:16 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Rapoport, David Hildenbrand, Oscar Salvador, Wei Yang,
	Michal Hocko, linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen,
	Qiuxu Zhuo, Yu C Chen, Pan Deng, Chen Zhang, linux-kernel


On 1/27/2026 5:58 AM, Andrew Morton wrote:
> On Sat, 24 Jan 2026 20:18:39 +0800 "Li, Tianyou" <tianyou.li@intel.com> wrote:
>
>>> cc stable@ perhaps?
>> Yes, will do. Thanks.
> Please separate backportable patches from those which target the next
> merge window.  They have different timing and take different routes
> into mainline.
>
> Thanks.
>
>
Thank you, Andrew, for the kind explanation. My bad. I will separate it
from the patch set in the next version.

Again, thanks, Mike, for pointing it out.


Regards,

Tianyou





^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2026-01-28 14:17 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-20 14:33 [PATCH v8 0/3] Optimize zone->contiguous update and issue fix Tianyou Li
2026-01-20 14:33 ` [PATCH v8 1/3] mm/memory hotplug: Fix zone->contiguous always false when hotplug Tianyou Li
2026-01-22 11:16   ` Mike Rapoport
2026-01-24 12:18     ` Li, Tianyou
2026-01-26 21:58       ` Andrew Morton
2026-01-28 14:16         ` Li, Tianyou
2026-01-27  6:53       ` Mike Rapoport
2026-01-28 13:49         ` Li, Tianyou
2026-01-20 14:33 ` [PATCH v8 2/3] mm/memory hotplug/unplug: Add online_memory_block_pages() and offline_memory_block_pages() Tianyou Li
2026-01-22 11:32   ` Mike Rapoport
2026-01-24 12:30     ` Li, Tianyou
2026-01-27  6:58       ` Mike Rapoport
2026-01-28 13:56         ` Li, Tianyou
2026-01-20 14:33 ` [PATCH v8 3/3] mm/memory hotplug/unplug: Optimize zone->contiguous update when changes pfn range Tianyou Li
2026-01-22 11:43   ` Mike Rapoport
2026-01-24 12:43     ` Li, Tianyou
2026-01-27  7:10       ` Mike Rapoport
2026-01-28 14:11         ` Li, Tianyou
