linux-mm.kvack.org archive mirror
* [PATCH v2 0/1] Identify the accurate NUMA ID of CFMW
@ 2026-01-06  3:10 Cui Chao
  2026-01-06  3:10 ` [PATCH v2 1/1] mm: numa_memblks: " Cui Chao
  0 siblings, 1 reply; 4+ messages in thread
From: Cui Chao @ 2026-01-06  3:10 UTC (permalink / raw)
  To: Andrew Morton, Jonathan Cameron, Mike Rapoport
  Cc: Wang Yinfeng, linux-cxl, linux-kernel, linux-mm

Changes in v2:
- Added an example memory layout to changelog.
- Added linux-cxl@vger.kernel.org to CC list.
- Assigned the result of meminfo_to_nid(&numa_reserved_meminfo, start)
  to a local variable.

Cui Chao (1):
  mm: numa_memblks: Identify the accurate NUMA ID of CFMW

 mm/numa_memblks.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

-- 
2.33.0



^ permalink raw reply	[flat|nested] 4+ messages in thread

* [PATCH v2 1/1] mm: numa_memblks: Identify the accurate NUMA ID of CFMW
  2026-01-06  3:10 [PATCH v2 0/1] Identify the accurate NUMA ID of CFMW Cui Chao
@ 2026-01-06  3:10 ` Cui Chao
  2026-01-08 16:19   ` Jonathan Cameron
  2026-01-08 17:48   ` Andrew Morton
  0 siblings, 2 replies; 4+ messages in thread
From: Cui Chao @ 2026-01-06  3:10 UTC (permalink / raw)
  To: Andrew Morton, Jonathan Cameron, Mike Rapoport
  Cc: Wang Yinfeng, linux-cxl, linux-kernel, linux-mm

In some physical memory layouts, the address range of a CXL Fixed
Memory Window (CFMW) sits between multiple segments of system memory
that belong to the same NUMA node. numa_cleanup_meminfo() merges those
segments into a single, larger numa_memblk. When the kernel later
identifies which NUMA node the CFMW belongs to, the window can then be
incorrectly attributed to the node of the merged system memory.

Example memory layout:

Physical address space:
    0x00000000 - 0x1FFFFFFF  System RAM (node0)
    0x20000000 - 0x2FFFFFFF  CXL CFMW (node2)
    0x40000000 - 0x5FFFFFFF  System RAM (node0)
    0x60000000 - 0x7FFFFFFF  System RAM (node1)

After numa_cleanup_meminfo, the two node0 segments are merged into one:
    0x00000000 - 0x5FFFFFFF  System RAM (node0) // CFMW is inside the range
    0x60000000 - 0x7FFFFFFF  System RAM (node1)

So the CFMW (0x20000000-0x2FFFFFFF) will be incorrectly assigned to node0.

To address this, determine the correct NUMA node by also checking
whether the region appears in numa_reserved_meminfo: when a range is
found in both numa_meminfo and numa_reserved_meminfo, prefer the node
ID from the reserved table.

Signed-off-by: Cui Chao <cuichao1753@phytium.com.cn>
---
 mm/numa_memblks.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
index 5b009a9cd8b4..e91908ed8661 100644
--- a/mm/numa_memblks.c
+++ b/mm/numa_memblks.c
@@ -568,15 +568,16 @@ static int meminfo_to_nid(struct numa_meminfo *mi, u64 start)
 int phys_to_target_node(u64 start)
 {
 	int nid = meminfo_to_nid(&numa_meminfo, start);
+	int reserved_nid = meminfo_to_nid(&numa_reserved_meminfo, start);
 
 	/*
 	 * Prefer online nodes, but if reserved memory might be
 	 * hot-added continue the search with reserved ranges.
 	 */
-	if (nid != NUMA_NO_NODE)
+	if (nid != NUMA_NO_NODE && reserved_nid == NUMA_NO_NODE)
 		return nid;
 
-	return meminfo_to_nid(&numa_reserved_meminfo, start);
+	return reserved_nid;
 }
 EXPORT_SYMBOL_GPL(phys_to_target_node);
 
-- 
2.33.0




* Re: [PATCH v2 1/1] mm: numa_memblks: Identify the accurate NUMA ID of CFMW
  2026-01-06  3:10 ` [PATCH v2 1/1] mm: numa_memblks: " Cui Chao
@ 2026-01-08 16:19   ` Jonathan Cameron
  2026-01-08 17:48   ` Andrew Morton
  1 sibling, 0 replies; 4+ messages in thread
From: Jonathan Cameron @ 2026-01-08 16:19 UTC (permalink / raw)
  To: Cui Chao
  Cc: Andrew Morton, Mike Rapoport, Wang Yinfeng, linux-cxl,
	linux-kernel, linux-mm

On Tue, 6 Jan 2026 11:10:42 +0800
Cui Chao <cuichao1753@phytium.com.cn> wrote:

> In some physical memory layout designs, the address space of CFMW
> resides between multiple segments of system memory belonging to
> the same NUMA node. In numa_cleanup_meminfo, these multiple segments
> of system memory are merged into a larger numa_memblk. When
> identifying which NUMA node the CFMW belongs to, it may be incorrectly
> assigned to the NUMA node of the merged system memory.
> 
> Example memory layout:
> 
> Physical address space:
>     0x00000000 - 0x1FFFFFFF  System RAM (node0)
>     0x20000000 - 0x2FFFFFFF  CXL CFMW (node2)
>     0x40000000 - 0x5FFFFFFF  System RAM (node0)
>     0x60000000 - 0x7FFFFFFF  System RAM (node1)
> 
> After numa_cleanup_meminfo, the two node0 segments are merged into one:
>     0x00000000 - 0x5FFFFFFF  System RAM (node0) // CFMW is inside the range
>     0x60000000 - 0x7FFFFFFF  System RAM (node1)
> 
> So the CFMW (0x20000000-0x2FFFFFFF) will be incorrectly assigned to node0.
> 
> To address this scenario, accurately identifying the correct NUMA node
> can be achieved by checking whether the region belongs to both
> numa_meminfo and numa_reserved_meminfo.
> 
> Signed-off-by: Cui Chao <cuichao1753@phytium.com.cn>

Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>

> ---
>  mm/numa_memblks.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
> index 5b009a9cd8b4..e91908ed8661 100644
> --- a/mm/numa_memblks.c
> +++ b/mm/numa_memblks.c
> @@ -568,15 +568,16 @@ static int meminfo_to_nid(struct numa_meminfo *mi, u64 start)
>  int phys_to_target_node(u64 start)
>  {
>  	int nid = meminfo_to_nid(&numa_meminfo, start);
> +	int reserved_nid = meminfo_to_nid(&numa_reserved_meminfo, start);
>  
>  	/*
>  	 * Prefer online nodes, but if reserved memory might be
>  	 * hot-added continue the search with reserved ranges.
>  	 */
> -	if (nid != NUMA_NO_NODE)
> +	if (nid != NUMA_NO_NODE && reserved_nid == NUMA_NO_NODE)
>  		return nid;
>  
> -	return meminfo_to_nid(&numa_reserved_meminfo, start);
> +	return reserved_nid;
>  }
>  EXPORT_SYMBOL_GPL(phys_to_target_node);
>  




* Re: [PATCH v2 1/1] mm: numa_memblks: Identify the accurate NUMA ID of CFMW
  2026-01-06  3:10 ` [PATCH v2 1/1] mm: numa_memblks: " Cui Chao
  2026-01-08 16:19   ` Jonathan Cameron
@ 2026-01-08 17:48   ` Andrew Morton
  1 sibling, 0 replies; 4+ messages in thread
From: Andrew Morton @ 2026-01-08 17:48 UTC (permalink / raw)
  To: Cui Chao
  Cc: Jonathan Cameron, Mike Rapoport, Wang Yinfeng, linux-cxl,
	linux-kernel, linux-mm

On Tue,  6 Jan 2026 11:10:42 +0800 Cui Chao <cuichao1753@phytium.com.cn> wrote:

> In some physical memory layout designs, the address space of CFMW
> resides between multiple segments of system memory belonging to
> the same NUMA node. In numa_cleanup_meminfo, these multiple segments
> of system memory are merged into a larger numa_memblk. When
> identifying which NUMA node the CFMW belongs to, it may be incorrectly
> assigned to the NUMA node of the merged system memory.
> 
> Example memory layout:
> 
> Physical address space:
>     0x00000000 - 0x1FFFFFFF  System RAM (node0)
>     0x20000000 - 0x2FFFFFFF  CXL CFMW (node2)
>     0x40000000 - 0x5FFFFFFF  System RAM (node0)
>     0x60000000 - 0x7FFFFFFF  System RAM (node1)
> 
> After numa_cleanup_meminfo, the two node0 segments are merged into one:
>     0x00000000 - 0x5FFFFFFF  System RAM (node0) // CFMW is inside the range
>     0x60000000 - 0x7FFFFFFF  System RAM (node1)
> 
> So the CFMW (0x20000000-0x2FFFFFFF) will be incorrectly assigned to node0.
> 
> To address this scenario, accurately identifying the correct NUMA node
> can be achieved by checking whether the region belongs to both
> numa_meminfo and numa_reserved_meminfo.

Thanks.

Can you please help us understand the userspace-visible runtime effects
of this incorrect assignment?


