linux-mm.kvack.org archive mirror
From: "Liu, Yuan1" <yuan1.liu@intel.com>
To: Mike Rapoport <rppt@kernel.org>
Cc: David Hildenbrand <david@kernel.org>,
	Oscar Salvador <osalvador@suse.de>,
	Wei Yang <richard.weiyang@gmail.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"Hu, Yong" <yong.hu@intel.com>,
	"Zou, Nanhai" <nanhai.zou@intel.com>,
	Tim Chen <tim.c.chen@linux.intel.com>,
	"Zhuo, Qiuxu" <qiuxu.zhuo@intel.com>,
	"Chen, Yu C" <yu.c.chen@intel.com>,
	"Deng, Pan" <pan.deng@intel.com>,
	"Li, Tianyou" <tianyou.li@intel.com>,
	Chen Zhang <zhangchen.kidd@jd.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: RE: [PATCH v2] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range
Date: Tue, 7 Apr 2026 00:59:27 +0000
Message-ID: <IA4PR11MB90092416B620C48682C35332A35AA@IA4PR11MB9009.namprd11.prod.outlook.com> (raw)
In-Reply-To: <adDx-FYHY-si07mW@kernel.org>

> -----Original Message-----
> From: Mike Rapoport <rppt@kernel.org>
> Sent: Saturday, April 4, 2026 7:12 PM
> To: Liu, Yuan1 <yuan1.liu@intel.com>
> Subject: Re: [PATCH v2] mm/memory hotplug/unplug: Optimize zone contiguous
> check when changing pfn range
> 
> On Wed, Apr 01, 2026 at 03:01:55AM -0400, Yuan Liu wrote:
> > When move_pfn_range_to_zone() or remove_pfn_range_from_zone() updates a
> > zone, set_zone_contiguous() rescans the entire zone
> > pageblock-by-pageblock to rebuild zone->contiguous. For large zones
> > this is a significant cost during memory hotplug and hot-unplug.
> 
> ...
> 
> > diff --git a/Documentation/mm/physical_memory.rst b/Documentation/mm/physical_memory.rst
> > index b76183545e5b..e47e96ef6a6d 100644
> > --- a/Documentation/mm/physical_memory.rst
> > +++ b/Documentation/mm/physical_memory.rst
> > @@ -483,6 +483,17 @@ General
> >    ``present_pages`` should use ``get_online_mems()`` to get a stable
> >    value. It is initialized by ``calculate_node_totalpages()``.
> >
> > +``pages_with_online_memmap``
> > +  Tracks pages within the zone that have an online memmap (present
> > +  pages and
> 
> Please spell out "memory map" rather than "memmap" in the documentation
> and in the comments.

Sure, I will fix it in the next version.

> > +  memory holes whose memmap has been initialized). When
> > +  ``spanned_pages`` == ``pages_with_online_memmap``, ``pfn_to_page()``
> > +  can be performed without further checks on any PFN within the zone
> > +  span.
> > +
> > +  Note: this counter may temporarily undercount when pages with an
> > +  online memmap exist outside the current zone span. Growing the zone
> > +  to cover such pages and later shrinking it back may result in a
> > +  "too small" value. This is safe: it merely prevents detecting a
> > +  contiguous zone.
> > +
> >  ``present_early_pages``
> >    The present pages existing within the zone located on memory
> >    available since early boot, excluding hotplugged memory. Defined
> >    only when
> 
> ...
> 
> > +/*
> > + * Initialize unavailable range [spfn, epfn) while accounting only
> > + * the pages that fall within the zone span towards
> > + * pages_with_online_memmap. Pages outside the zone span are still
> > + * initialized but not accounted.
> > + */
> > +static void __init init_unavailable_range_for_zone(struct zone *zone,
> > +						   unsigned long spfn,
> > +						   unsigned long epfn)
> > +{
> > +	int nid = zone_to_nid(zone);
> > +	int zid = zone_idx(zone);
> > +	unsigned long in_zone_start;
> > +	unsigned long in_zone_end;
> > +
> > +	in_zone_start = clamp(spfn, zone->zone_start_pfn, zone_end_pfn(zone));
> > +	in_zone_end = clamp(epfn, zone->zone_start_pfn, zone_end_pfn(zone));
> > +
> > +	if (spfn < in_zone_start)
> > +		init_unavailable_range(spfn, in_zone_start, zid, nid);
> > +
> > +	if (in_zone_start < in_zone_end)
> > +		zone->pages_with_online_memmap +=
> > +			init_unavailable_range(in_zone_start, in_zone_end,
> > +					       zid, nid);
> > +
> > +	if (in_zone_end < epfn)
> > +		init_unavailable_range(in_zone_end, epfn, zid, nid);
> >  }
> 
> I think we can make it simpler, see below.
> 
> >  /*
> > @@ -956,9 +986,10 @@ static void __init memmap_init_zone_range(struct zone *zone,
> >  	memmap_init_range(end_pfn - start_pfn, nid, zone_id, start_pfn,
> >  			  zone_end_pfn, MEMINIT_EARLY, NULL, MIGRATE_MOVABLE,
> >  			  false);
> > +	zone->pages_with_online_memmap += end_pfn - start_pfn;
> >
> >  	if (*hole_pfn < start_pfn)
> > -		init_unavailable_range(*hole_pfn, start_pfn, zone_id, nid);
> > +		init_unavailable_range_for_zone(zone, *hole_pfn, start_pfn);
> 
> Here *hole_pfn is either inside the zone span or below it, and in the
> second case it's enough to adjust the page count returned by
> init_unavailable_range() by (zone_start_pfn - *hole_pfn).

Got it, I will refine it in the next version.
 
> >  	*hole_pfn = end_pfn;
> >  }
> > @@ -996,8 +1027,11 @@ static void __init memmap_init(void)
> >  #else
> >  	end_pfn = round_up(end_pfn, MAX_ORDER_NR_PAGES);
> >  #endif
> > -	if (hole_pfn < end_pfn)
> > -		init_unavailable_range(hole_pfn, end_pfn, zone_id, nid);
> > +	if (hole_pfn < end_pfn) {
> > +		struct zone *zone = &NODE_DATA(nid)->node_zones[zone_id];
> > +
> > +		init_unavailable_range_for_zone(zone, hole_pfn, end_pfn);
> 
> Here we know that the range is not in any zone span.

Indeed, the range here does not belong to any zone span.
Thank you for your review.

> > +	}
> >  }
> >
> 
> --
> Sincerely yours,
> Mike.


Thread overview: 5+ messages
2026-04-01  7:01 Yuan Liu
2026-04-02 14:57 ` David Hildenbrand (Arm)
2026-04-03 10:15   ` Liu, Yuan1
2026-04-04 11:11 ` Mike Rapoport
2026-04-07  0:59   ` Liu, Yuan1 [this message]
