linux-mm.kvack.org archive mirror
From: "Liu, Yuan1" <yuan1.liu@intel.com>
To: "David Hildenbrand (Arm)" <david@kernel.org>,
	Oscar Salvador <osalvador@suse.de>,
	Mike Rapoport <rppt@kernel.org>,
	Wei Yang <richard.weiyang@gmail.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
	"Hu, Yong" <yong.hu@intel.com>,
	"Zou, Nanhai" <nanhai.zou@intel.com>,
	Tim Chen <tim.c.chen@linux.intel.com>,
	"Zhuo, Qiuxu" <qiuxu.zhuo@intel.com>,
	"Chen, Yu C" <yu.c.chen@intel.com>,
	"Deng, Pan" <pan.deng@intel.com>,
	"Li, Tianyou" <tianyou.li@intel.com>,
	"Chen Zhang" <zhangchen.kidd@jd.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: RE: [PATCH v3] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range
Date: Wed, 8 Apr 2026 12:29:11 +0000	[thread overview]
Message-ID: <IA4PR11MB90099CF424CDD1E88A30504EA35BA@IA4PR11MB9009.namprd11.prod.outlook.com> (raw)
In-Reply-To: <17b821b6-0176-43d5-92f7-fe2a0c4f70cf@kernel.org>

> -----Original Message-----
> From: David Hildenbrand (Arm) <david@kernel.org>
> Sent: Wednesday, April 8, 2026 3:36 PM
> To: Liu, Yuan1 <yuan1.liu@intel.com>; Oscar Salvador <osalvador@suse.de>;
> Mike Rapoport <rppt@kernel.org>; Wei Yang <richard.weiyang@gmail.com>
> Cc: linux-mm@kvack.org; Hu, Yong <yong.hu@intel.com>; Zou, Nanhai
> <nanhai.zou@intel.com>; Tim Chen <tim.c.chen@linux.intel.com>; Zhuo, Qiuxu
> <qiuxu.zhuo@intel.com>; Chen, Yu C <yu.c.chen@intel.com>; Deng, Pan
> <pan.deng@intel.com>; Li, Tianyou <tianyou.li@intel.com>; Chen Zhang
> <zhangchen.kidd@jd.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3] mm/memory hotplug/unplug: Optimize zone contiguous
> check when changing pfn range
> 
> On 4/8/26 05:16, Yuan Liu wrote:
> > When move_pfn_range_to_zone() or remove_pfn_range_from_zone() updates a
> > zone, set_zone_contiguous() rescans the entire zone pageblock-by-pageblock
> > to rebuild zone->contiguous. For large zones this is a significant cost
> > during memory hotplug and hot-unplug.
> >
> > Add a new zone member pages_with_online_memmap that tracks the number of
> > pages within the zone span that have an online memory map (including
> > present pages and memory holes whose memory map has been initialized). When
> > spanned_pages == pages_with_online_memmap the zone is contiguous and
> > pfn_to_page() can be called on any PFN in the zone span without further
> > pfn_valid() checks.
> >
> > Only pages that fall within the current zone span are accounted towards
> > pages_with_online_memmap. A "too small" value is safe; it merely prevents
> > detecting a contiguous zone.
> >
> > The following test cases of memory hotplug for a VM [1], tested in the
> > environment [2], show that this optimization can significantly reduce the
> > memory hotplug time [3].
> >
> > +----------------+------+---------------+--------------+----------------+
> > |                | Size | Time (before) | Time (after) | Time Reduction |
> > |                +------+---------------+--------------+----------------+
> > | Plug Memory    | 256G |      10s      |      3s      |       70%      |
> > |                +------+---------------+--------------+----------------+
> > |                | 512G |      36s      |      7s      |       81%      |
> > +----------------+------+---------------+--------------+----------------+
> >
> > +----------------+------+---------------+--------------+----------------+
> > |                | Size | Time (before) | Time (after) | Time Reduction |
> > |                +------+---------------+--------------+----------------+
> > | Unplug Memory  | 256G |      11s      |      4s      |       64%      |
> > |                +------+---------------+--------------+----------------+
> > |                | 512G |      36s      |      9s      |       75%      |
> > +----------------+------+---------------+--------------+----------------+
> >
> > [1] Qemu commands to hotplug 256G/512G memory for a VM:
> >     object_add memory-backend-ram,id=hotmem0,size=256G/512G,share=on
> >     device_add virtio-mem-pci,id=vmem1,memdev=hotmem0,bus=port1
> >     qom-set vmem1 requested-size 256G/512G (Plug Memory)
> >     qom-set vmem1 requested-size 0G (Unplug Memory)
> >
> > [2] Hardware     : Intel Icelake server
> >     Guest Kernel : v7.0-rc4
> >     Qemu         : v9.0.0
> >
> >     Launch VM    :
> >     qemu-system-x86_64 -accel kvm -cpu host \
> >     -drive file=./Centos10_cloud.qcow2,format=qcow2,if=virtio \
> >     -drive file=./seed.img,format=raw,if=virtio \
> >     -smp 3,cores=3,threads=1,sockets=1,maxcpus=3 \
> >     -m 2G,slots=10,maxmem=2052472M \
> >     -device pcie-root-port,id=port1,bus=pcie.0,slot=1,multifunction=on \
> >     -device pcie-root-port,id=port2,bus=pcie.0,slot=2 \
> >     -nographic -machine q35 \
> >     -nic user,hostfwd=tcp::3000-:22
> >
> >     Guest kernel auto-onlines newly added memory blocks:
> >     echo online > /sys/devices/system/memory/auto_online_blocks
> >
> > [3] The time from typing the QEMU commands in [1] to when the output of
> >     'grep MemTotal /proc/meminfo' on Guest reflects that all hotplugged
> >     memory is recognized.
> >
> > Reported-by: Nanhai Zou <nanhai.zou@intel.com>
> > Reported-by: Chen Zhang <zhangchen.kidd@jd.com>
> > Tested-by: Yuan Liu <yuan1.liu@intel.com>
> > Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
> > Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
> > Reviewed-by: Yu C Chen <yu.c.chen@intel.com>
> > Reviewed-by: Pan Deng <pan.deng@intel.com>
> > Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
> > Co-developed-by: Tianyou Li <tianyou.li@intel.com>
> > Signed-off-by: Tianyou Li <tianyou.li@intel.com>
> > Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
> > Acked-by: David Hildenbrand (Arm) <david@kernel.org>
> > ---
> 
> [...]
> 
> > @@ -842,7 +842,7 @@ overlap_memmap_init(unsigned long zone, unsigned long *pfn)
> >   *   zone/node above the hole except for the trailing pages in the last
> >   *   section that will be appended to the zone/node below.
> >   */
> > -static void __init init_unavailable_range(unsigned long spfn,
> > +static unsigned long __init init_unavailable_range(unsigned long spfn,
> >  					  unsigned long epfn,
> >  					  int zone, int node)
> >  {
> > @@ -858,6 +858,7 @@ static void __init init_unavailable_range(unsigned long spfn,
> >  	if (pgcnt)
> >  		pr_info("On node %d, zone %s: %lld pages in unavailable ranges\n",
> >  			node, zone_names[zone], pgcnt);
> > +	return pgcnt;
> >  }
> >
> >  /*
> > @@ -956,9 +957,22 @@ static void __init memmap_init_zone_range(struct zone *zone,
> >  	memmap_init_range(end_pfn - start_pfn, nid, zone_id, start_pfn,
> >  			  zone_end_pfn, MEMINIT_EARLY, NULL, MIGRATE_MOVABLE,
> >  			  false);
> > +	zone->pages_with_online_memmap += end_pfn - start_pfn;
> >
> > -	if (*hole_pfn < start_pfn)
> > -		init_unavailable_range(*hole_pfn, start_pfn, zone_id, nid);
> > +	if (*hole_pfn < start_pfn) {
> > +		unsigned long pgcnt;
> > +
> > +		if (*hole_pfn < zone_start_pfn) {
> > +			init_unavailable_range(*hole_pfn, zone_start_pfn,
> > +					       zone_id, nid);
> > +			pgcnt = init_unavailable_range(zone_start_pfn,
> > +					start_pfn, zone_id, nid);
> 
> Indentation of parameters.

Got it, I'll fix the indentation.

> 
> > +		} else {
> > +			pgcnt = init_unavailable_range(*hole_pfn, start_pfn,
> > +					zone_id, nid);
> 
> 
> Same here.

Sure
 
> > +		}
> > +		zone->pages_with_online_memmap += pgcnt;
> > +	}
> 
> 
> Maybe something like the following could make it nicer to read, just a
> thought.
> 
> unsigned long hole_start_pfn = *hole_pfn;
> 
> if (hole_start_pfn < zone_start_pfn) {
> 	init_unavailable_range(hole_start_pfn, zone_start_pfn,
> 			       zone_id, nid);
> 	hole_start_pfn = zone_start_pfn;
> }
> pgcnt = init_unavailable_range(hole_start_pfn, start_pfn,
> 			       zone_id, nid);

Yes, this looks better. I'll apply your suggestion.

> 
> LGTM, thanks!

Thanks for the feedback; I'll include these changes in the next version.

> --
> Cheers,
> 
> David


Thread overview: 5+ messages
2026-04-08  3:16 Yuan Liu
2026-04-08  7:36 ` David Hildenbrand (Arm)
2026-04-08 12:29   ` Liu, Yuan1 [this message]
2026-04-08 12:31     ` David Hildenbrand (Arm)
2026-04-08 12:37       ` Liu, Yuan1
