linux-mm.kvack.org archive mirror
From: David Rientjes <rientjes@google.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: John Hubbard <jhubbard@nvidia.com>, Zi Yan <ziy@nvidia.com>,
	 Bharata B Rao <bharata@amd.com>,
	Dave Jiang <dave.jiang@intel.com>,
	 "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>,
	 "Huang, Ying" <ying.huang@intel.com>,
	Alistair Popple <apopple@nvidia.com>,
	 Christoph Lameter <cl@gentwo.org>,
	 Andrew Morton <akpm@linux-foundation.org>,
	 Linus Torvalds <torvalds@linux-foundation.org>,
	 Dave Hansen <dave.hansen@linux.intel.com>,
	Mel Gorman <mgorman@suse.de>,  Jon Grimm <jon.grimm@amd.com>,
	Gregory Price <gourry.memverge@gmail.com>,
	 Brian Morris <bsmorris@google.com>, Wei Xu <weixugc@google.com>,
	 Johannes Weiner <hannes@cmpxchg.org>,
	linux-mm@kvack.org
Subject: Re: [RFC] Memory tiering kernel alignment
Date: Thu, 25 Jan 2024 12:04:37 -0800 (PST)	[thread overview]
Message-ID: <2b29dd3d-bb2c-6a8c-94d2-d5c2e035516a@google.com> (raw)
In-Reply-To: <ZbKt7jDN8XI561DO@casper.infradead.org>

On Thu, 25 Jan 2024, Matthew Wilcox wrote:

> On Thu, Jan 25, 2024 at 10:26:19AM -0800, David Rientjes wrote:
> > There is a lot of excitement around upcoming CXL type 3 memory expansion
> > devices and their cost savings potential.  As the industry starts to
> > adopt this technology, one of the key components in strategic planning is
> > how the upstream Linux kernel will support various tiered configurations
> > to meet various user needs.  I think it goes without saying that this is
> > quite interesting to cloud providers as well as other hyperscalers :)
> 
> I'm not excited.  I'm disappointed that people are falling for this scam.
> CXL is the ATM of this decade.  The protocol is not fit for the purpose
> of accessing remote memory, adding 10ns just for an encode/decode cycle.
> Hands up everybody who's excited about memory latency increasing by 17%.
> 

Right, I don't think that anybody is claiming that we can treat locally 
attached CXL memory as though it were DRAM on the same or a remote socket, 
or that there won't be a noticeable impact on application performance 
for memory accessed across the device.
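To put rough numbers on that trade-off (a back-of-envelope sketch; every 
latency and fraction below is an assumption for illustration, not a 
measurement): as long as the vast majority of accesses still land in DRAM, 
parking cold pages on a slower tier barely moves the average latency.

```python
# Back-of-envelope model of a tiered configuration; all numbers here are
# assumptions for illustration, not measurements of any real device.
dram_ns = 100.0    # assumed local DRAM load latency
cxl_ns = 250.0     # assumed locally attached CXL load latency
hot_access = 0.99  # assumed fraction of accesses that hit the DRAM tier

# Average latency is the access-weighted mix of the two tiers.
avg_ns = hot_access * dram_ns + (1.0 - hot_access) * cxl_ns
slowdown = avg_ns / dram_ns - 1.0
print(f"average latency: {avg_ns:.1f} ns ({slowdown:.1%} slower than DRAM-only)")
# → average latency: 101.5 ns (1.5% slower than DRAM-only)
```

The same arithmetic also shows the flip side: if the "cold" set turns out 
to be accessed more often than assumed, the penalty grows quickly.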

It does offer real cost savings for offloading cold memory, though, when 
locally attached, and I think support for that use case is inevitable -- 
in fact, Linux already has fairly sophisticated support for the locally 
attached case.
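For example (a sketch, not a recipe: the paths come from the upstream 
memory-tiers sysfs ABI in roughly v6.1+ kernels, and the node layout will 
differ on any given box), the existing tiering support can be inspected 
from userspace:

```shell
# Sketch: assumes a kernel with CONFIG_NUMA and memory tiering (~v6.1+).
tiers=/sys/devices/virtual/memory_tiering
if [ -d "$tiers" ]; then
    # Each memory_tierN groups NUMA nodes of similar performance; a CPU-less
    # CXL expander node lands in a lower (slower) tier than the DRAM nodes.
    for t in "$tiers"/memory_tier*; do
        printf '%s: %s\n' "$(basename "$t")" "$(cat "$t/nodelist")"
    done
else
    echo "no memory_tiering sysfs on this kernel"
fi
# Let reclaim demote cold pages down a tier instead of discarding them:
[ -w /sys/kernel/mm/numa/demotion_enabled ] \
    && echo 1 > /sys/kernel/mm/numa/demotion_enabled
echo "tiering check complete"
```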

> Then there are the lies from the vendors who want you to buy switches.
> Not one of them are willing to guarantee you the worst case latency
> through their switches.
> 

I should have prefaced this thread by saying "locally attached CXL memory 
expansion", because that's the primary focus of many of the folks on this 
email thread :)

FWIW, I fully agree with your evaluation of memory pooling and some of 
the extensions provided by CXL 2.0.  A lot of the pooling concepts are 
currently being overhyped, but that's just my personal opinion.  
Happy to talk about the advantages and disadvantages (as well as the use 
cases), but I remain unconvinced on memory pooling.

> The concept is wrong.  Nobody wants to tie all of their machines together
> into a giant single failure domain.  There's no possible redundancy
> here.  Availability is diminished; how do you upgrade firmware on a
> switch without taking it down?  Nobody can answer my contentions about
> contention either; preventing a single machine from hogging access to
> a single CXL endpoint seems like an unsolved problem.
> 
> CXL is great for its real purpose of attaching GPUs and migrating memory
> back and forth in a software-transparent way.  We should support that,
> and nothing more.
> 
> We should reject this technology before it harms our kernel and the
> entire industry.  There's a reason that SGI died.  Nobody wants to buy
> single image machines the size of a data centre.
> 
> 


