From: Cui Chao <cuichao1753@phytium.com.cn>
To: Mike Rapoport <rppt@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Jonathan Cameron <Jonathan.Cameron@huawei.com>,
wangyinfeng@phytium.com.cn, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: numa_memblks: Identify the accurate NUMA ID of CFMW
Date: Mon, 5 Jan 2026 10:38:30 +0800
Message-ID: <dac83ef4-6e4d-421b-bd54-7090d2f963d9@phytium.com.cn>
In-Reply-To: <aVPtSNKRnnI88Euk@kernel.org>
Hi,
Thank you for your review.
On 12/30/2025 11:18 PM, Mike Rapoport wrote:
> Hi,
>
> On Tue, Dec 30, 2025 at 05:27:50PM +0800, Cui Chao wrote:
>> In some physical memory layout designs, the address space of CFMW
>> resides between multiple segments of system memory belonging to
>> the same NUMA node. In numa_cleanup_meminfo, these multiple segments
>> of system memory are merged into a larger numa_memblk. When
>> identifying which NUMA node the CFMW belongs to, it may be incorrectly
>> assigned to the NUMA node of the merged system memory. To address this
> Can you please provide an example of such memory layout?
Example memory layout:
Physical address space:
0x00000000 - 0x1FFFFFFF System RAM (node0)
0x20000000 - 0x2FFFFFFF CXL CFMW (node2)
0x40000000 - 0x5FFFFFFF System RAM (node0)
0x60000000 - 0x7FFFFFFF System RAM (node1)
After numa_cleanup_meminfo, the two node0 segments are merged into one:
0x00000000 - 0x5FFFFFFF System RAM (node0) // CFMW is inside this range
0x60000000 - 0x7FFFFFFF System RAM (node1)
So the CFMW (0x20000000-0x2FFFFFFF) will be incorrectly assigned to node0.
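With the merged layout, looking up 0x20000000 in numa_meminfo resolves to
node0, because the lookup only checks whether the address falls inside a
memblk's [start, end) range and has no notion of the CFMW hole. Roughly
(just a sketch, assuming meminfo_to_nid() is the simple range scan over
the memblks in mm/numa_memblks.c):

	static int meminfo_to_nid(struct numa_meminfo *mi, u64 start)
	{
		int i;

		for (i = 0; i < mi->nr_blks; i++)
			if (mi->blk[i].start <= start && mi->blk[i].end > start)
				return mi->blk[i].nid;

		return NUMA_NO_NODE;
	}

After the merge, blk[0] is 0x00000000-0x5FFFFFFF with nid 0, so the scan
returns node0 for the CFMW address unless numa_reserved_meminfo is also
consulted.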
>> scenario, the correct NUMA node can be identified accurately by
>> checking whether the region belongs to both numa_meminfo and
>> numa_reserved_meminfo.
>>
>> Signed-off-by: Cui Chao <cuichao1753@phytium.com.cn>
>> ---
>> mm/numa_memblks.c | 3 ++-
>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
>> index 5b009a9cd8b4..1ef037f0e0e0 100644
>> --- a/mm/numa_memblks.c
>> +++ b/mm/numa_memblks.c
>> @@ -573,7 +573,8 @@ int phys_to_target_node(u64 start)
>> * Prefer online nodes, but if reserved memory might be
>> * hot-added continue the search with reserved ranges.
>> */
>> - if (nid != NUMA_NO_NODE)
>> + if (nid != NUMA_NO_NODE &&
>> + meminfo_to_nid(&numa_reserved_meminfo, start) == NUMA_NO_NODE)
> I'd suggest assigning the result of meminfo_to_nid(&numa_reserved_meminfo,
> start) to a local variable and using that in if and return statements.
I will use a local variable named reserved_nid.
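For reference, the v2 change could look roughly like this (only a sketch
of the intended logic, assuming the rest of phys_to_target_node() stays
as in the quoted hunk):

	int phys_to_target_node(u64 start)
	{
		int nid = meminfo_to_nid(&numa_meminfo, start);
		int reserved_nid = meminfo_to_nid(&numa_reserved_meminfo, start);

		/*
		 * Prefer online nodes, but if reserved memory might be
		 * hot-added continue the search with reserved ranges.
		 */
		if (nid != NUMA_NO_NODE && reserved_nid == NUMA_NO_NODE)
			return nid;

		return reserved_nid;
	}

This keeps the current behaviour for addresses not covered by
numa_reserved_meminfo and only changes the result for ranges, like the
CFMW above, that appear in both.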
>> return nid;
>>
>> return meminfo_to_nid(&numa_reserved_meminfo, start);
>> --
>> 2.33.0
>>
>>
--
Best regards,
Cui Chao.