From: Cui Chao <cuichao1753@phytium.com.cn>
To: Pratyush Brahma <pratyush.brahma@oss.qualcomm.com>
Cc: Wang Yinfeng <wangyinfeng@phytium.com.cn>,
	linux-cxl@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	Jonathan Cameron <jonathan.cameron@huawei.com>,
	Mike Rapoport <rppt@kernel.org>
Subject: Re: [PATCH v2 1/1] mm: numa_memblks: Identify the accurate NUMA ID of CFMW
Date: Thu, 15 Jan 2026 18:06:45 +0800	[thread overview]
Message-ID: <0e1e1b5f-df0c-459d-9cba-5bde5ad56ba3@phytium.com.cn> (raw)
In-Reply-To: <CALzOmR2z0noh74aCAd=QVUBgdn7Q+Hbevs-cr1EtV_zXCuQ=PA@mail.gmail.com>


On 1/9/2026 5:35 PM, Pratyush Brahma wrote:
> On Fri, Jan 9, 2026 at 12:44 PM Cui Chao <cuichao1753@phytium.com.cn> wrote:
>> In some physical memory layouts, the CFMW address space sits between
>> multiple segments of system memory that belong to the same NUMA node.
>> numa_cleanup_meminfo() merges those segments into one larger numa_memblk
>> that also covers the CFMW range. When the NUMA node of the CFMW is then
>> looked up, it can be incorrectly assigned to the node of the merged
>> system memory.
>>
>> Example memory layout:
>>
>> Physical address space:
>>      0x00000000 - 0x1FFFFFFF  System RAM (node0)
>>      0x20000000 - 0x2FFFFFFF  CXL CFMW (node2)
>>      0x40000000 - 0x5FFFFFFF  System RAM (node0)
>>      0x60000000 - 0x7FFFFFFF  System RAM (node1)
>>
>> After numa_cleanup_meminfo, the two node0 segments are merged into one:
>>      0x00000000 - 0x5FFFFFFF  System RAM (node0) // CFMW is inside the range
>>      0x60000000 - 0x7FFFFFFF  System RAM (node1)
>>
>> So the CFMW (0x20000000-0x2FFFFFFF) will be incorrectly assigned to node0.
>>
>> To handle this scenario, determine the correct NUMA node by checking
>> whether the region is described by both numa_meminfo and
>> numa_reserved_meminfo, and prefer the node from the reserved ranges
>> when it is.
>>
>> Signed-off-by: Cui Chao <cuichao1753@phytium.com.cn>
>> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
>> ---
>>   mm/numa_memblks.c | 5 +++--
>>   1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
>> index 5b009a9cd8b4..e91908ed8661 100644
>> --- a/mm/numa_memblks.c
>> +++ b/mm/numa_memblks.c
>> @@ -568,15 +568,16 @@ static int meminfo_to_nid(struct numa_meminfo *mi, u64 start)
>>   int phys_to_target_node(u64 start)
>>   {
>>          int nid = meminfo_to_nid(&numa_meminfo, start);
>> +       int reserved_nid = meminfo_to_nid(&numa_reserved_meminfo, start);
>>
>>          /*
>>           * Prefer online nodes, but if reserved memory might be
>>           * hot-added continue the search with reserved ranges.
> It would be good to change this comment as well. With the new logic
> you’re not just "continuing the search", you’re explicitly preferring
> reserved on overlap.
> Probably something like "Prefer numa_meminfo unless the address is
> also described by reserved ranges, in which case use the reserved
> nid."

Thanks.

I will revise the comment in the next version according to your
suggestion; a rough sketch of how the updated function might read is
included after the quoted patch below.

>>           */
>> -       if (nid != NUMA_NO_NODE)
>> +       if (nid != NUMA_NO_NODE && reserved_nid == NUMA_NO_NODE)
>>                  return nid;
>>
>> -       return meminfo_to_nid(&numa_reserved_meminfo, start);
>> +       return reserved_nid;
>>   }
>>   EXPORT_SYMBOL_GPL(phys_to_target_node);
>>
>> --
>> 2.33.0
>>
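
For reference, a minimal sketch of how phys_to_target_node() might read
with your suggested comment wording folded in (not the final patch, just
to confirm I understood the suggestion):

int phys_to_target_node(u64 start)
{
	int nid = meminfo_to_nid(&numa_meminfo, start);
	int reserved_nid = meminfo_to_nid(&numa_reserved_meminfo, start);

	/*
	 * Prefer numa_meminfo unless the address is also described by
	 * reserved ranges, in which case use the reserved nid.
	 */
	if (nid != NUMA_NO_NODE && reserved_nid == NUMA_NO_NODE)
		return nid;

	return reserved_nid;
}
EXPORT_SYMBOL_GPL(phys_to_target_node);

With the example layout from the commit message, an address inside the
CFMW (0x20000000-0x2FFFFFFF) is covered both by the merged node0 range
in numa_meminfo and by the node2 range in numa_reserved_meminfo, so the
function would now return node2 instead of node0.
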
-- 
Best regards,
Cui Chao.



Thread overview: 25+ messages
2026-01-06  3:10 [PATCH v2 0/1] " Cui Chao
2026-01-06  3:10 ` [PATCH v2 1/1] mm: numa_memblks: " Cui Chao
2026-01-08 16:19   ` Jonathan Cameron
2026-01-08 17:48   ` Andrew Morton
2026-01-15  9:43     ` Cui Chao
2026-01-15 18:18       ` Andrew Morton
2026-01-15 19:50         ` dan.j.williams
2026-01-22  8:03           ` Cui Chao
2026-01-22 21:28             ` Andrew Morton
2026-01-23  8:59               ` Cui Chao
2026-01-23 16:46             ` Gregory Price
2026-01-26  9:06               ` Cui Chao
2026-02-05 22:58                 ` Andrew Morton
2026-02-05 23:10                   ` Gregory Price
2026-02-06 11:03                     ` Jonathan Cameron
2026-02-06 13:31                       ` Gregory Price
2026-02-06 15:09                         ` Jonathan Cameron
2026-02-06 15:53                           ` Gregory Price
2026-02-06 16:26                             ` Jonathan Cameron
2026-02-06 16:32                               ` Gregory Price
2026-02-19 14:19                                 ` Jonathan Cameron
2026-02-06 15:57                           ` Andrew Morton
2026-02-06 16:23                             ` Jonathan Cameron
2026-01-09  9:35   ` Pratyush Brahma
2026-01-15 10:06     ` Cui Chao [this message]

