From: David Rientjes <rientjes@google.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: John Hubbard <jhubbard@nvidia.com>, Zi Yan <ziy@nvidia.com>,
Bharata B Rao <bharata@amd.com>,
Dave Jiang <dave.jiang@intel.com>,
"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>,
"Huang, Ying" <ying.huang@intel.com>,
Alistair Popple <apopple@nvidia.com>,
Christoph Lameter <cl@gentwo.org>,
Andrew Morton <akpm@linux-foundation.org>,
Linus Torvalds <torvalds@linux-foundation.org>,
Dave Hansen <dave.hansen@linux.intel.com>,
Mel Gorman <mgorman@suse.de>, Jon Grimm <jon.grimm@amd.com>,
Gregory Price <gourry.memverge@gmail.com>,
Brian Morris <bsmorris@google.com>, Wei Xu <weixugc@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
SeongJae Park <sj@kernel.org>,
linux-mm@kvack.org
Subject: Re: [RFC] Memory tiering kernel alignment
Date: Thu, 25 Jan 2024 13:37:02 -0800 (PST)
Message-ID: <e351526b-0872-afcb-4eb7-a3dd6242f9f9@google.com>
In-Reply-To: <ZbLCPO7cI2LmNhnD@casper.infradead.org>

On Thu, 25 Jan 2024, Matthew Wilcox wrote:
> On Thu, Jan 25, 2024 at 12:04:37PM -0800, David Rientjes wrote:
> > On Thu, 25 Jan 2024, Matthew Wilcox wrote:
> > > On Thu, Jan 25, 2024 at 10:26:19AM -0800, David Rientjes wrote:
> > > > There is a lot of excitement around upcoming CXL type 3 memory expansion
> > > > devices and their cost savings potential. As the industry starts to
> > > > adopt this technology, one of the key components in strategic planning is
> > > > how the upstream Linux kernel will support various tiered configurations
> > > > to meet various user needs. I think it goes without saying that this is
> > > > quite interesting to cloud providers as well as other hyperscalers :)
> > >
> > > I'm not excited. I'm disappointed that people are falling for this scam.
> > > CXL is the ATM of this decade. The protocol is not fit for the purpose
> > > of accessing remote memory, adding 10ns just for an encode/decode cycle.
> > > Hands up everybody who's excited about memory latency increasing by 17%.
> >
> > Right, I don't think that anybody is claiming that we can leverage locally
> > attached CXL memory as though it were DRAM on the same or a remote socket
> > and that there won't be a noticeable impact to application performance
> > while the memory sits across the device.
> >
> > It does offer several cost-savings benefits for offloading cold memory,
> > though, if locally attached, and I think support for that use case is
> > inevitable -- in fact, Linux already has some sophisticated support for
> > the locally attached use case.
> >
> > > Then there are the lies from the vendors who want you to buy switches.
> > > Not one of them is willing to guarantee you the worst-case latency
> > > through their switches.
> >
> > I should have prefaced this thread by saying "locally attached CXL memory
> > expansion", because that's the primary focus of many of the folks on this
> > email thread :)
>
> That's a huge relief. I was not looking forward to the patches to add
> support for pooling (etc).
>
> Using CXL as cold-data-storage makes a certain amount of sense, although
> I'm not really sure why it offers an advantage over NAND. It's faster
> than NAND, but you still want to bring it back locally before operating
> on it. NAND is denser, and consumes less power while idle. NAND comes
> with a DMA controller to move the data instead of relying on the CPU to
> move the data around. And of course moving the data first to CXL and
> then to swap means that it's got to go over the memory bus multiple
> times, unless you're building a swap device which attaches to the
> other end of the CXL bus ...
>
This is **exactly** the type of discussion we're looking to have :)

There are some things that I've chatted informally with folks about that
I'd like to bring to the forum:

- Decoupling CPU migration from memory migration for NUMA Balancing (or
perhaps deprecating CPU migration entirely)
- Allowing NUMA Balancing to do migration as part of a kthread
asynchronous to the NUMA hint fault, in kernel context
- Abstraction for future hardware devices that can provide an expanded
view into page hotness that can be leveraged in different areas of the
kernel, including as a backend for NUMA Balancing to replace NUMA
hint faults (a strawman interface is sketched below)
- Per-container support for configuring balancing and memory migration
- Opting certain types of memory into NUMA Balancing (like tmpfs) while
leaving other types alone
- Utilizing hardware-accelerated memory migration as a replacement for
the traditional migrate_pages() path when available (the baseline
software path is sketched right after this list)
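
As a rough illustration of the baseline that any hardware-accelerated path
would have to beat, cold memory can already be demoted from userspace with
the existing move_pages(2) syscall. This is only a minimal sketch: node 2
below is an assumed locally attached, memory-only CXL node, so query the
real topology before relying on it. Build with -lnuma.

#include <numaif.h>	/* move_pages(), MPOL_MF_MOVE */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);
	enum { NR_PAGES = 8 };
	void *pages[NR_PAGES];
	int nodes[NR_PAGES], status[NR_PAGES];
	char *buf = aligned_alloc(page_size, NR_PAGES * page_size);

	if (!buf)
		return 1;
	memset(buf, 0, NR_PAGES * page_size);	/* fault the pages in first */

	for (int i = 0; i < NR_PAGES; i++) {
		pages[i] = buf + i * page_size;
		nodes[i] = 2;	/* assumed CXL memory-only node */
	}

	/* pid 0 == this process; MPOL_MF_MOVE moves only exclusively mapped pages */
	if (move_pages(0, NR_PAGES, pages, nodes, status, MPOL_MF_MOVE) < 0)
		perror("move_pages");
	else
		printf("page 0 is now on node %d\n", status[0]);

	free(buf);
	return 0;
}

Everything the kernel does underneath this today is essentially the
synchronous, CPU-driven copy that the last bullet would like to offload.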

I could go code all of this up and spend an enormous amount of time doing
so only to get NAKed by somebody because I'm ripping out their critical
use case that I just didn't know about :)  There's also the question of
whether DAMON should be the source of truth for this or whether it should
be decoupled.
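
On the hotness abstraction above, what I have in mind is only a strawman
at this point; none of the symbols below exist upstream. The idea is a
pluggable hotness source, so that NUMA hint faults, DAMON, or a hardware
monitor could all feed the same promotion/demotion machinery, and the
"source of truth" question becomes a question of which backend is
registered:

/* Strawman only -- nothing below exists upstream today. */
struct hotness_region {
	unsigned long start_pfn;
	unsigned long nr_pages;
	unsigned int score;		/* higher == hotter */
};

struct hotness_source_ops {
	/* Per-page hotness score for @pfn, or -ENODATA if unknown. */
	int (*page_hotness)(unsigned long pfn);
	/* Optional batch interface for hardware that reports hot regions. */
	int (*hot_regions)(struct hotness_region *regions, int max_regions);
};

int register_hotness_source(const struct hotness_source_ops *ops);
void unregister_hotness_source(const struct hotness_source_ops *ops);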

My dream world would be one where we could discuss the various use cases
for locally attached CXL memory and determine, as a group, what the
shared, comprehensive "Linux vision" for it is, and do so before
LSF/MM/BPF.  In a perfect world, we could block out an expanded MM session
in Salt Lake City to bring all of these concepts together, sort out which
approaches sound reasonable and which do not, and leave that conference
with a clear understanding of what needs to happen.