linux-mm.kvack.org archive mirror
From: Mike Rapoport <rppt@kernel.org>
To: Cui Chao <cuichao1753@phytium.com.cn>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	wangyinfeng@phytium.com.cn, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: numa_memblks: Identify the accurate NUMA ID of CFMW
Date: Mon, 5 Jan 2026 11:34:55 +0200	[thread overview]
Message-ID: <aVuFv7P6og6pY2lj@kernel.org> (raw)
In-Reply-To: <dac83ef4-6e4d-421b-bd54-7090d2f963d9@phytium.com.cn>

On Mon, Jan 05, 2026 at 10:38:30AM +0800, Cui Chao wrote:
> Hi,
> 
> Thank you for your review.
> 
> On 12/30/2025 11:18 PM, Mike Rapoport wrote:
> > Hi,
> > 
> > On Tue, Dec 30, 2025 at 05:27:50PM +0800, Cui Chao wrote:
> > > In some physical memory layout designs, the address space of CFMW
> > > resides between multiple segments of system memory belonging to
> > > the same NUMA node. In numa_cleanup_meminfo, these multiple segments
> > > of system memory are merged into a larger numa_memblk. When
> > > identifying which NUMA node the CFMW belongs to, it may be incorrectly
> > > assigned to the NUMA node of the merged system memory. To address this
> > Can you please provide an example of such memory layout?
> 
> Example memory layout:
> 
> Physical address space:
>     0x00000000 - 0x1FFFFFFF  System RAM (node0)
>     0x20000000 - 0x2FFFFFFF  CXL CFMW (node2)
>     0x40000000 - 0x5FFFFFFF  System RAM (node0)
>     0x60000000 - 0x7FFFFFFF  System RAM (node1)
> 
> After numa_cleanup_meminfo, the two node0 segments are merged into one:
>     0x00000000 - 0x5FFFFFFF  System RAM (node0)  // CFMW is inside this range
>     0x60000000 - 0x7FFFFFFF  System RAM (node1)
> 
> So the CFMW (0x20000000-0x2FFFFFFF) will be incorrectly assigned to node0.

Can you please add this example to the changelog? 
 
> > > scenario, accurately identifying the correct NUMA node can be achieved
> > > by checking whether the region belongs to both numa_meminfo and
> > > numa_reserved_meminfo.
> > > 
> > > Signed-off-by: Cui Chao <cuichao1753@phytium.com.cn>

-- 
Sincerely yours,
Mike.



Thread overview: 5+ messages
2025-12-30  9:27 Cui Chao
2025-12-30 15:18 ` Mike Rapoport
2026-01-05  2:38   ` Cui Chao
2026-01-05  9:34     ` Mike Rapoport [this message]
2026-01-05  9:59       ` Jonathan Cameron
