From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Mike Rapoport <rppt@kernel.org>
Cc: Yuan Liu <yuan1.liu@intel.com>,
Oscar Salvador <osalvador@suse.de>,
Wei Yang <richard.weiyang@gmail.com>,
linux-mm@kvack.org, Yong Hu <yong.hu@intel.com>,
Nanhai Zou <nanhai.zou@intel.com>,
Tim Chen <tim.c.chen@linux.intel.com>,
Qiuxu Zhuo <qiuxu.zhuo@intel.com>,
Yu C Chen <yu.c.chen@intel.com>, Pan Deng <pan.deng@intel.com>,
Tianyou Li <tianyou.li@intel.com>,
Chen Zhang <zhangchen.kidd@jd.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range
Date: Mon, 23 Mar 2026 12:42:18 +0100
Message-ID: <168ab3c0-c44f-4d48-b7dc-33196b7ba6a5@kernel.org>
In-Reply-To: <acEkfaycrJI-kWjk@kernel.org>
On 3/23/26 12:31, Mike Rapoport wrote:
> On Mon, Mar 23, 2026 at 11:56:35AM +0100, David Hildenbrand (Arm) wrote:
>> On 3/19/26 10:56, Yuan Liu wrote:
>
> ...
>
>>> diff --git a/mm/mm_init.c b/mm/mm_init.c
>>> index df34797691bd..96690e550024 100644
>>> --- a/mm/mm_init.c
>>> +++ b/mm/mm_init.c
>>> @@ -946,6 +946,7 @@ static void __init memmap_init_zone_range(struct zone *zone,
>>> unsigned long zone_start_pfn = zone->zone_start_pfn;
>>> unsigned long zone_end_pfn = zone_start_pfn + zone->spanned_pages;
>>> int nid = zone_to_nid(zone), zone_id = zone_idx(zone);
>>> + unsigned long zone_hole_start, zone_hole_end;
>>>
>>> start_pfn = clamp(start_pfn, zone_start_pfn, zone_end_pfn);
>>> end_pfn = clamp(end_pfn, zone_start_pfn, zone_end_pfn);
>>> @@ -957,8 +958,19 @@ static void __init memmap_init_zone_range(struct zone *zone,
>>> zone_end_pfn, MEMINIT_EARLY, NULL, MIGRATE_MOVABLE,
>>> false);
>>>
>>> - if (*hole_pfn < start_pfn)
>>> + WRITE_ONCE(zone->pages_with_online_memmap,
>>> + READ_ONCE(zone->pages_with_online_memmap) +
>>> + (end_pfn - start_pfn));
>>> +
>>> + if (*hole_pfn < start_pfn) {
>>> init_unavailable_range(*hole_pfn, start_pfn, zone_id, nid);
>>> + zone_hole_start = clamp(*hole_pfn, zone_start_pfn, zone_end_pfn);
>>> + zone_hole_end = clamp(start_pfn, zone_start_pfn, zone_end_pfn);
>>> + if (zone_hole_start < zone_hole_end)
>>> + WRITE_ONCE(zone->pages_with_online_memmap,
>>> + READ_ONCE(zone->pages_with_online_memmap) +
>>> + (zone_hole_end - zone_hole_start));
>>> + }
>>
>> The range can have larger holes without a memmap, and I think we would be
>> missing pages handled by the other init_unavailable_range() call?
>>
>>
>> There is one question for Mike, though: couldn't it happen that the
>> init_unavailable_range() call in memmap_init() would initialize
>> the memmap outside of the node/zone span?
>
> Yes, and it most likely will.
>
> Very common example is page 0 on x86 systems:
>
> [ 0.012196] DMA [mem 0x0000000000001000-0x0000000000ffffff]
> [ 0.012221] On node 0, zone DMA: 1 pages in unavailable ranges
> [ 0.012205] Early memory node ranges
> [ 0.012206] node 0: [mem 0x0000000000001000-0x000000000009efff]
>
> The unavailable page in zone DMA is the page from 0x0 to 0x1000 that is
> neither in node 0 nor in zone DMA.
>
> For ZONE_NORMAL it would be a more pathological case when zone/node span
> ends in a middle of a section, but that's still possible.
>
>> If so, I wonder whether we would want to adjust the node+zone space to
>> include these ranges.
>>
>> Later memory onlining could make these ranges suddenly fall into the
>> node/zone span.
>
> But doesn't memory onlining always happen at section boundaries?
Sure, but assume ZONE_NORMAL ends in the middle of a section, and then
you hotplug the next section.

Then the zone spans that memmap, and zone->pages_with_online_memmap
will be wrong.

Once we unplug the hotplugged section, the zone shrinking code will
stumble over the hole pfns and assume they belong to the zone. Again,
zone->pages_with_online_memmap will be wrong.

zone->pages_with_online_memmap being wrong means that it is smaller
than it should be. I guess it would not be broken, but we would fail to
detect contiguous zones.

If there were an easy way to avoid that, it would be cleaner.
--
Cheers,
David
Thread overview: 17+ messages
2026-03-19 9:56 Yuan Liu
2026-03-19 10:08 ` Liu, Yuan1
2026-03-20 3:13 ` Andrew Morton
2026-03-23 10:56 ` David Hildenbrand (Arm)
2026-03-23 11:31 ` Mike Rapoport
2026-03-23 11:42 ` David Hildenbrand (Arm) [this message]
2026-03-26 7:30 ` Liu, Yuan1
2026-03-26 7:38 ` Chen, Yu C
2026-03-26 9:53 ` David Hildenbrand (Arm)
2026-03-27 7:47 ` Liu, Yuan1
2026-03-26 3:39 ` Liu, Yuan1
2026-03-26 9:23 ` David Hildenbrand (Arm)
2026-03-27 7:39 ` Liu, Yuan1
2026-03-23 11:51 ` Mike Rapoport
2026-03-26 7:32 ` Liu, Yuan1
[not found] ` <CGME20260409023553epcas2p2e40d1d79206f0169a765fadcf180b010@epcas2p2.samsung.com>
2026-04-09 2:35 ` Sion Ji
2026-04-09 3:20 ` Liu, Yuan1