linux-mm.kvack.org archive mirror
From: "Liu, Yuan1" <yuan1.liu@intel.com>
To: Sion Ji <sion.ji@samsung.com>
Cc: David Hildenbrand <david@kernel.org>,
	Oscar Salvador <osalvador@suse.de>,
	Mike Rapoport <rppt@kernel.org>,
	Wei Yang <richard.weiyang@gmail.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"Hu, Yong" <yong.hu@intel.com>,
	"Zou, Nanhai" <nanhai.zou@intel.com>,
	Tim Chen <tim.c.chen@linux.intel.com>,
	"Zhuo, Qiuxu" <qiuxu.zhuo@intel.com>,
	"Chen, Yu C" <yu.c.chen@intel.com>,
	"Deng, Pan" <pan.deng@intel.com>,
	"Li, Tianyou" <tianyou.li@intel.com>,
	"Chen Zhang" <zhangchen.kidd@jd.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: RE: [PATCH] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range
Date: Thu, 9 Apr 2026 03:20:24 +0000	[thread overview]
Message-ID: <MW4PR11MB693641052EE649F974F8CD9EA3582@MW4PR11MB6936.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20260409023552.GA2807@AE>

> -----Original Message-----
> From: Sion Ji <sion.ji@samsung.com>
> Sent: Thursday, April 9, 2026 10:36 AM
> To: Liu, Yuan1 <yuan1.liu@intel.com>
> Cc: David Hildenbrand <david@kernel.org>; Oscar Salvador
> <osalvador@suse.de>; Mike Rapoport <rppt@kernel.org>; Wei Yang
> <richard.weiyang@gmail.com>; linux-mm@kvack.org; Hu, Yong
> <yong.hu@intel.com>; Zou, Nanhai <nanhai.zou@intel.com>; Tim Chen
> <tim.c.chen@linux.intel.com>; Zhuo, Qiuxu <qiuxu.zhuo@intel.com>; Chen, Yu
> C <yu.c.chen@intel.com>; Deng, Pan <pan.deng@intel.com>; Li, Tianyou
> <tianyou.li@intel.com>; Chen Zhang <zhangchen.kidd@jd.com>; linux-
> kernel@vger.kernel.org
> Subject: Re: [PATCH] mm/memory hotplug/unplug: Optimize zone contiguous
> check when changing pfn range
> 
> On Thu, Mar 19, 2026 at 05:56:22AM -0400, Yuan Liu wrote:
> > When move_pfn_range_to_zone() or remove_pfn_range_from_zone() is invoked,
> > it updates zone->contiguous by checking the new zone's pfn range from
> > beginning to end, regardless of the previous state of the old zone. When
> > the zone's pfn range is large, the cost of traversing the pfn range to
> > update zone->contiguous could be significant.
> >
> > Add a new pages_with_memmap member to struct zone; it counts the pages
> > within the zone that have an online memmap, including present pages and
> > memory holes that have a memmap. When
> > spanned_pages == pages_with_online_memmap, pfn_to_page() can be performed
> > without further checks on any pfn within the zone span.
> >
> > The following test cases of memory hotplug for a VM [1], tested in the
> > environment [2], show that this optimization can significantly reduce the
> > memory hotplug time [3].
> >
> > +----------------+------+---------------+--------------+----------------+
> > |                | Size | Time (before) | Time (after) | Time Reduction |
> > |                +------+---------------+--------------+----------------+
> > | Plug Memory    | 256G |      10s      |      3s      |       70%      |
> > |                +------+---------------+--------------+----------------+
> > |                | 512G |      36s      |      7s      |       81%      |
> > +----------------+------+---------------+--------------+----------------+
> >
> > +----------------+------+---------------+--------------+----------------+
> > |                | Size | Time (before) | Time (after) | Time Reduction |
> > |                +------+---------------+--------------+----------------+
> > | Unplug Memory  | 256G |      11s      |      4s      |       64%      |
> > |                +------+---------------+--------------+----------------+
> > |                | 512G |      36s      |      9s      |       75%      |
> > +----------------+------+---------------+--------------+----------------+
> >
> > [1] Qemu commands to hotplug 256G/512G memory for a VM:
> >     object_add memory-backend-ram,id=hotmem0,size=256G/512G,share=on
> >     device_add virtio-mem-pci,id=vmem1,memdev=hotmem0,bus=port1
> >     qom-set vmem1 requested-size 256G/512G (Plug Memory)
> >     qom-set vmem1 requested-size 0G (Unplug Memory)
> >
> > [2] Hardware     : Intel Icelake server
> >     Guest Kernel : v7.0-rc4
> >     Qemu         : v9.0.0
> >
> >     Launch VM    :
> >     qemu-system-x86_64 -accel kvm -cpu host \
> >     -drive file=./Centos10_cloud.qcow2,format=qcow2,if=virtio \
> >     -drive file=./seed.img,format=raw,if=virtio \
> >     -smp 3,cores=3,threads=1,sockets=1,maxcpus=3 \
> >     -m 2G,slots=10,maxmem=2052472M \
> >     -device pcie-root-port,id=port1,bus=pcie.0,slot=1,multifunction=on \
> >     -device pcie-root-port,id=port2,bus=pcie.0,slot=2 \
> >     -nographic -machine q35 \
> >     -nic user,hostfwd=tcp::3000-:22
> >
> >     Guest kernel auto-onlines newly added memory blocks:
> >     echo online > /sys/devices/system/memory/auto_online_blocks
> >
> > [3] Time is measured from typing the QEMU commands in [1] until the output
> >     of 'grep MemTotal /proc/meminfo' in the guest shows that all hotplugged
> >     memory is recognized.
> 
> Since the above results were tested in a QEMU environment, we ran the
> tests in a real-world environment [1]. The kernel version is v7.0-rc4,
> the same as the QEMU guest kernel. We configured a remote memory node
> using 2 and 4 CXL devices for the 256G and 512G tests, respectively.
>
> We replaced the QEMU hotplug commands with daxctl [2]. We tested by
> changing the value of /sys/devices/system/memory/auto_online_blocks.
> 
> The following is the result when the kernel onlines added memory blocks:
> (echo online > /sys/devices/system/memory/auto_online_blocks)
> 
> +----------------+------+---------------+--------------+----------------+
> |                | Size | Time (before) | Time (after) | Time Change    |
> |                +------+---------------+--------------+----------------+
> | Plug Memory    | 256G |      6.7s     |     4.4s     |      -35%      |
> |                +------+---------------+--------------+----------------+
> |                | 512G |      18s      |     8.7s     |      -52%      |
> +----------------+------+---------------+--------------+----------------+
> 
> +----------------+------+---------------+--------------+----------------+
> |                | Size | Time (before) | Time (after) | Time Change    |
> |                +------+---------------+--------------+----------------+
> | Unplug Memory  | 256G |      2.8s     |     2.6s     |      -8%       |
> |                +------+---------------+--------------+----------------+
> |                | 512G |      5.2s     |      5s      |      -4%       |
> +----------------+------+---------------+--------------+----------------+
> 
> [1] Platform        : Intel GNR-AP
>     CPU             : Intel(R) Xeon(R) 6960P
>     Memory          : Samsung DDR5-4800
>     Hotplug devices : Samsung CXL 2.0 128G Devices
>     Kernel          : v7.0-rc4
> 
> 
> [2] daxctl commands to hotplug memory for a test:
>     time daxctl reconfigure-device --force --mode=system-ram dax0.0
>                                                            (Plug Memory)
>     time daxctl reconfigure-device --force --mode=devdax dax0.0
>                                                          (Unplug Memory)
> 
> 
> The following is the result when the value is set to online_movable:
> (echo online_movable > /sys/devices/system/memory/auto_online_blocks)
> 
> +----------------+------+---------------+--------------+----------------+
> |                | Size | Time (before) | Time (after) | Time Change    |
> |                +------+---------------+--------------+----------------+
> | Plug Memory    | 256G |      4.5s     |     4.4s     |       -3%      |
> |                +------+---------------+--------------+----------------+
> |                | 512G |     18.6s     |     6.6s     |      -65%      |
> +----------------+------+---------------+--------------+----------------+
> 
> +----------------+------+---------------+--------------+----------------+
> |                | Size | Time (before) | Time (after) | Time Change    |
> |                +------+---------------+--------------+----------------+
> | Unplug Memory  | 256G |      2.2s     |     2.6s     |      +18%      |
> |                +------+---------------+--------------+----------------+
> |                | 512G |      5.2s     |     4.2s     |      -20%      |
> +----------------+------+---------------+--------------+----------------+
> 
> 
> FYI, the following is the result when the kernel does not automatically
> online memory:
> (echo offline > /sys/devices/system/memory/auto_online_blocks)
> 
> +----------------+------+---------------+--------------+----------------+
> |                | Size | Time (before) | Time (after) | Time Change    |
> |                +------+---------------+--------------+----------------+
> | Plug Memory    | 256G |      3.3s     |     4.4s     |      +33%      |
> |                +------+---------------+--------------+----------------+
> |                | 512G |      6.7s     |     6.6s     |       -2%      |
> +----------------+------+---------------+--------------+----------------+
> 
> +----------------+------+---------------+--------------+----------------+
> |                | Size | Time (before) | Time (after) | Time Change    |
> |                +------+---------------+--------------+----------------+
> | Unplug Memory  | 256G |      2.2s     |     2.6s     |      +18%      |
> |                +------+---------------+--------------+----------------+
> |                | 512G |      4.4s     |     4.2s     |       -5%      |
> +----------------+------+---------------+--------------+----------------+
> 
> 
> We hope this result is helpful.

Thank you very much for sharing these results - they are really helpful.
From your data, it seems that for 256G the improvement is not consistent,
but for 512G there is a clear and consistent benefit. This is very useful
for understanding the behavior at larger scales. Thanks again for the
detailed testing!
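
For anyone following along, the fast path described in the commit message
above can be sketched roughly like this. It is only an illustration, not
the actual patch code: the struct, field, and helper names below
(zone_stub, span_walk_is_contiguous, update_zone_contiguous,
account_online_memmap) are made up for the example.

#include <stdbool.h>   /* so the sketch compiles outside the kernel */

struct zone_stub {
        unsigned long spanned_pages;            /* pfns covered by the zone span */
        unsigned long pages_with_online_memmap; /* pfns in the span with an online memmap */
        bool contiguous;
};

/*
 * Slow path placeholder: the existing pageblock-by-pageblock walk over
 * the whole span stays as it is today; its body is elided here.
 */
static bool span_walk_is_contiguous(struct zone_stub *zone)
{
        (void)zone;
        return false;   /* conservative placeholder for the elided walk */
}

static void update_zone_contiguous(struct zone_stub *zone)
{
        /*
         * Fast path: when every pfn in the span has an online memmap,
         * pfn_to_page() is valid across the whole span, so the expensive
         * walk can be skipped.
         */
        if (zone->spanned_pages == zone->pages_with_online_memmap) {
                zone->contiguous = true;
                return;
        }
        zone->contiguous = span_walk_is_contiguous(zone);
}

/*
 * Hotplug/unplug paths would keep the counter in sync whenever a pfn
 * range gains or loses an online memmap, then recheck contiguity.
 */
static void account_online_memmap(struct zone_stub *zone,
                                  unsigned long nr_pages, bool online)
{
        if (online)
                zone->pages_with_online_memmap += nr_pages;
        else
                zone->pages_with_online_memmap -= nr_pages;
        update_zone_contiguous(zone);
}

The main point is that the fully-populated common case becomes a single
comparison instead of a full pfn walk; anything else falls back to the
existing check.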

> Regards,
> Sion Ji




Thread overview: 17+ messages
2026-03-19  9:56 Yuan Liu
2026-03-19 10:08 ` Liu, Yuan1
2026-03-20  3:13 ` Andrew Morton
2026-03-23 10:56 ` David Hildenbrand (Arm)
2026-03-23 11:31   ` Mike Rapoport
2026-03-23 11:42     ` David Hildenbrand (Arm)
2026-03-26  7:30       ` Liu, Yuan1
2026-03-26  7:38         ` Chen, Yu C
2026-03-26  9:53           ` David Hildenbrand (Arm)
2026-03-27  7:47             ` Liu, Yuan1
2026-03-26  3:39   ` Liu, Yuan1
2026-03-26  9:23     ` David Hildenbrand (Arm)
2026-03-27  7:39       ` Liu, Yuan1
2026-03-23 11:51 ` Mike Rapoport
2026-03-26  7:32   ` Liu, Yuan1
     [not found] ` <CGME20260409023553epcas2p2e40d1d79206f0169a765fadcf180b010@epcas2p2.samsung.com>
2026-04-09  2:35   ` Sion Ji
2026-04-09  3:20     ` Liu, Yuan1 [this message]
