From: Jonathan Cameron <jonathan.cameron@huawei.com>
To: David Rientjes <rientjes@google.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>, Fan Ni <nifan.cxl@gmail.com>,
Gregory Price <gourry@gourry.net>,
Joshua Hahn <joshua.hahnjy@gmail.com>,
Raghavendra K T <rkodsara@amd.com>,
"Rao, Bharata Bhasker" <bharata@amd.com>,
SeongJae Park <sj@kernel.org>, Wei Xu <weixugc@google.com>,
Xuezheng Chu <xuezhengchu@huawei.com>,
"Yiannis Nikolakopoulos" <yiannis@zptcorp.com>,
Zi Yan <ziy@nvidia.com>, <linux-mm@kvack.org>
Subject: Re: [Linux Memory Hotness and Promotion] Notes from February 26, 2026
Date: Mon, 2 Mar 2026 12:15:35 +0000 [thread overview]
Message-ID: <20260302121535.000001ed@huawei.com> (raw)
In-Reply-To: <855370d6-811e-5864-b93f-c5bf4b6e27b3@google.com>
> ----->o-----
> We touched on the CXL Hotness Monitoring Unit (CHMU) and whether any work
> was on-going to abstract this in upstream Linux. Both Google and Meta
> were not actively looking at this. Yiannis suggested Jonathan Cameron may
> be looking at this for qemu and testing. We concluded that not having
> CHMU support upstream is not currently holding anything back and it might
> be addressed in a year or so; it might also be solving a problem that
> nobody has yet.
On this, sorry I didn't make the call. Whilst we have some minimal support
in QEMU for emulating the CHMU to allow data capture, and a tracing-style
kernel driver, my priorities currently lie elsewhere. I might get back to
this a little later in the year. I'm aware that some others have been
experimenting further, though I haven't heard how they are getting on yet.
> ----->o-----
> Yiannis brought up non-temporal stores for tiering and the possibility of
> finding time to work on it in the next few weeks. He saw great value in
> this from the compression side and was trying to determine if this brings
> value to CXL or tiered systems in general. The idea is to extend
> migrate_pages() for the demotion path so it uses non-temporal stores -- we
> don't want to warm up our cache for cold memory. Gregory noted that we
> want to ensure that when allocating the folio as the migration target that
> we would also need to make sure that's not in the cache for this cold
> memory. I noted that Shivank from AMD had previously presented to this
> group about enlightening migrate_pages() for hardware assists and using
> the "reason" field of migrate_pages() to differentiate different use
> cases.
I'm also keen to see progress in this area. There are lots of options
for bulk movement of data and tiering brings a specific set of constraints /
opportunities that perhaps don't apply so much elsewhere.
Jonathan
Thread overview: 2 messages
2026-03-01 20:35 David Rientjes
2026-03-02 12:15 ` Jonathan Cameron [this message]