* [PATCH] mm: numa_memblks: Identify the accurate NUMA ID of CFMW
From: Cui Chao @ 2025-12-30  9:27 UTC
  To: Andrew Morton
  Cc: Jonathan Cameron, Mike Rapoport, wangyinfeng, linux-mm, linux-kernel

In some physical memory layouts, the address range of a CFMW (CXL
Fixed Memory Window) sits between multiple segments of system memory
that all belong to the same NUMA node. numa_cleanup_meminfo() merges
those segments into a single larger numa_memblk, which then also
covers the CFMW range. When phys_to_target_node() later looks up the
NUMA node of the CFMW, it matches this merged numa_memblk and
incorrectly returns the node of the surrounding system memory.

Fix this by also checking whether the address is covered by
numa_reserved_meminfo; if it is, prefer the node recorded there.
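
As a hypothetical illustration (addresses made up, not taken from
real hardware), consider:

  0x000000000-0x100000000 : system RAM, node 0 (numa_meminfo)
  0x100000000-0x180000000 : CFMW, target node 1 (numa_reserved_meminfo)
  0x180000000-0x280000000 : system RAM, node 0 (numa_meminfo)

numa_cleanup_meminfo() joins the two node 0 blocks because the hole
between them holds no memory from another node, so the merged
numa_memblk also spans the CFMW range. phys_to_target_node() then
returns node 0 for addresses inside the CFMW; with this change it
returns node 1 from numa_reserved_meminfo instead.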

Signed-off-by: Cui Chao <cuichao1753@phytium.com.cn>
---
 mm/numa_memblks.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
index 5b009a9cd8b4..1ef037f0e0e0 100644
--- a/mm/numa_memblks.c
+++ b/mm/numa_memblks.c
@@ -573,7 +573,8 @@ int phys_to_target_node(u64 start)
 	 * Prefer online nodes, but if reserved memory might be
 	 * hot-added continue the search with reserved ranges.
 	 */
-	if (nid != NUMA_NO_NODE)
+	if (nid != NUMA_NO_NODE &&
+		meminfo_to_nid(&numa_reserved_meminfo, start) == NUMA_NO_NODE)
 		return nid;
 
 	return meminfo_to_nid(&numa_reserved_meminfo, start);
-- 
2.33.0




* Re: [PATCH] mm: numa_memblks: Identify the accurate NUMA ID of CFMW
From: Mike Rapoport @ 2025-12-30 15:18 UTC
  To: Cui Chao
  Cc: Andrew Morton, Jonathan Cameron, wangyinfeng, linux-mm, linux-kernel

Hi,

On Tue, Dec 30, 2025 at 05:27:50PM +0800, Cui Chao wrote:
> In some physical memory layouts, the address range of a CFMW (CXL
> Fixed Memory Window) sits between multiple segments of system memory
> that all belong to the same NUMA node. numa_cleanup_meminfo() merges
> those segments into a single larger numa_memblk, which then also
> covers the CFMW range. When phys_to_target_node() later looks up the
> NUMA node of the CFMW, it matches this merged numa_memblk and
> incorrectly returns the node of the surrounding system memory.

Can you please provide a concrete example of such a memory layout
from a real platform?

> Fix this by also checking whether the address is covered by
> numa_reserved_meminfo; if it is, prefer the node recorded there.
> 
> Signed-off-by: Cui Chao <cuichao1753@phytium.com.cn>
> ---
>  mm/numa_memblks.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
> index 5b009a9cd8b4..1ef037f0e0e0 100644
> --- a/mm/numa_memblks.c
> +++ b/mm/numa_memblks.c
> @@ -573,7 +573,8 @@ int phys_to_target_node(u64 start)
>  	 * Prefer online nodes, but if reserved memory might be
>  	 * hot-added continue the search with reserved ranges.
>  	 */
> -	if (nid != NUMA_NO_NODE)
> +	if (nid != NUMA_NO_NODE &&
> +		meminfo_to_nid(&numa_reserved_meminfo, start) == NUMA_NO_NODE)

I'd suggest assigning the result of meminfo_to_nid(&numa_reserved_meminfo,
start) to a local variable and using it in both the if condition and the
return statement.
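
Something along these lines (untested, just to sketch the shape):

	int phys_to_target_node(u64 start)
	{
		int nid = meminfo_to_nid(&numa_meminfo, start);
		int reserved_nid = meminfo_to_nid(&numa_reserved_meminfo, start);

		/*
		 * Prefer online nodes, but if the range is also covered
		 * by a reserved (potentially hot-added) entry, trust the
		 * node id recorded there.
		 */
		if (nid != NUMA_NO_NODE && reserved_nid == NUMA_NO_NODE)
			return nid;

		return reserved_nid;
	}

That way meminfo_to_nid() is called only once for the reserved ranges
and the decision is made in a single place.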

>  		return nid;
>  
>  	return meminfo_to_nid(&numa_reserved_meminfo, start);
> -- 
> 2.33.0
> 
> 

-- 
Sincerely yours,
Mike.


