From: "Huang, Ying" <ying.huang@intel.com>
To: Gregory Price <gregory.price@memverge.com>
Cc: Gregory Price <gourry.memverge@gmail.com>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>, <linux-cxl@vger.kernel.org>,
	<akpm@linux-foundation.org>, <sthanneeru@micron.com>,
	Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>,
	Wei Xu <weixugc@google.com>,
	Alistair Popple <apopple@nvidia.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Dave Hansen <dave.hansen@intel.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	Michal Hocko <mhocko@kernel.org>,
	Tim Chen <tim.c.chen@intel.com>,
	Yang Shi <shy828301@gmail.com>
Subject: Re: [RFC PATCH v2 0/3] mm: mempolicy: Multi-tier weighted interleaving
Date: Fri, 20 Oct 2023 14:11:40 +0800	[thread overview]
Message-ID: <87fs25g6w3.fsf@yhuang6-desk2.ccr.corp.intel.com> (raw)
In-Reply-To: <ZS9HSIrblel39qrt@memverge.com> (Gregory Price's message of "Tue, 17 Oct 2023 22:47:36 -0400")

Gregory Price <gregory.price@memverge.com> writes:

[snip]

> Example 1: A single-socket system with multiple CXL memory devices
> ===
> CPU Node: node0
> CXL Nodes: node1, node2
>
> Bandwidth attributes (in theory):
> node0 - 8 channels - ~307GB/s
> node1 - x16 link - 64GB/s
> node2 - x8 link - 32GB/s
>
> In a system like this, the optimal distribution of memory on an
> interleave for maximizing bandwidth is about 76%/16%/8%.
>
> For the sake of simplicity:  --weighted-interleave=0:76,1:16,2:8
> but realistically we could expose the weights as sysfs values under each node
>
> Regardless of the mechanism to engage this, the most effective way to
> capture this in the system is by applying weights to nodes, not tiers.
> If done in tiers, each node would be assigned to its own tier, making
> the mechanism equivalent. So you might as well simplify the whole thing
> and chop the memtier component out.
>
> Is this configuration realistic? *shrug* - technically possible. And in
> fact, most hardware- or driver-based interleaving mechanisms would not
> really be able to manage an interleave region across these nodes, at
> least not without placing the x16 device in x8 mode, or just accepting
> the wrong distribution %'s.
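
As a purely illustrative sketch (not part of this patch set), the 76/16/8
split above is just the per-node bandwidth normalized to 100:

/*
 * Sketch only: derive integer interleave weights by normalizing the
 * theoretical per-node bandwidths from example 1.  In practice the
 * inputs would presumably come from HMAT/CDAT-reported bandwidth
 * rather than hard-coded numbers.
 */
#include <stdio.h>

int main(void)
{
	const double bw[] = { 307.0, 64.0, 32.0 };	/* node0, node1, node2 */
	double total = 0.0;
	int i;

	for (i = 0; i < 3; i++)
		total += bw[i];

	for (i = 0; i < 3; i++)
		printf("node%d weight ~= %.0f%%\n", i, 100.0 * bw[i] / total);

	return 0;	/* prints roughly 76%, 16%, 8% */
}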
>
>
>
> Example 2: A dual-socket system with 1 CXL device per socket
> ===
> CPU Nodes: node0, node1
> CXL Nodes: node2, node3 (on sockets 0 and 1, respectively)
>
> Bandwidth Attributes (in theory):
> nodes 0 & 1 - 8 channels - ~307GB/s ea.
> nodes 2 & 3 - x16 link - 64GB/s ea.
>
> This is similar to example #1, but with one difference:  A task running
> on node 0 should not treat nodes 0 and 1 the same, nor nodes 2 and 3.
> This is because on access to nodes 1 and 3, the cross-socket link (UPI,
> or whatever AMD calls it) becomes a bandwidth chokepoint.
>
> So from the perspective of node 0, the "real total" available bandwidth
> is about 307GB/s + 64GB/s + (41.6GB/s * UPI links) in the case of Intel, so
> the best result you could get is around 307+64+166 ~= 537GB/s if you have
> the full 4 links.
>
> You'd want to distribute the cross-socket traffic in proportion to the
> UPI bandwidth, not to those nodes' full local bandwidth.
>
> This leaves us with weights of:
>
> node0 - 57%
> node1 - 26%
> node2 - 12%
> node3 - 5%
>
> Again, nodes are naturally the place to carry the weights here. In this
> scenario, placing them in memory-tiers would require one tier per node.
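
FWIW, a minimal sketch of one way to arrive at numbers close to that split,
assuming the 4 UPI links (~41.6GB/s each) cap all cross-socket traffic and
that the capped remote bandwidth is shared between node1 and node3 in
proportion to their native bandwidths:

/*
 * Sketch only: reproduce weights close to the 57/26/12/5 split above
 * under the stated assumptions; not from the patch set.
 */
#include <stdio.h>

int main(void)
{
	const double local_ddr = 307.0, local_cxl = 64.0;	/* node0, node2 */
	const double remote_ddr = 307.0, remote_cxl = 64.0;	/* node1, node3 */
	const double upi_cap = 4 * 41.6;			/* ~166GB/s total */

	/* Remote accesses are limited by UPI, so scale them down to the cap. */
	double n1 = upi_cap * remote_ddr / (remote_ddr + remote_cxl);
	double n3 = upi_cap * remote_cxl / (remote_ddr + remote_cxl);
	double total = local_ddr + local_cxl + upi_cap;

	printf("node0 %.0f%%  node1 %.0f%%  node2 %.0f%%  node3 %.0f%%\n",
	       100 * local_ddr / total, 100 * n1 / total,
	       100 * local_cxl / total, 100 * n3 / total);
	return 0;	/* prints roughly 57%, 26%, 12%, 5% */
}

(All of that is relative to node 0; a task on node 1 would see the
mirror-image split.)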

Does the workload run on the CPUs of node 0 only?  That appears
unreasonable.  If the memory bandwidth requirement of the workload is so
large that CXL is used to expand bandwidth, why not run the workload on the
CPUs of node 1 as well and use the full memory bandwidth of node 1?

If the workload runs on the CPUs of both node 0 and node 1, then the
cross-socket traffic should be minimized if possible.  That is,
threads/processes on node 0 should interleave across the memory of node 0
and node 2, while those on node 1 should interleave across node 1 and
node 3.
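
To make that concrete, a minimal userspace sketch using the existing
libnuma API (node numbers taken from example 2; nothing below depends on
the proposed patches):

#include <numa.h>
#include <stdlib.h>

/*
 * Bind a worker thread to one socket's CPUs and interleave its future
 * allocations over that socket's local DDR + CXL nodes only, so no
 * interleaved page is fetched over the cross-socket link.
 */
static void setup_worker(int cpu_node, const char *mem_nodes)
{
	struct bitmask *bm;

	if (numa_available() < 0)
		exit(1);

	/* Run on the CPUs of the given socket ... */
	numa_run_on_node(cpu_node);

	/* ... and interleave across that socket's nodes, e.g. "0,2" for
	 * socket 0 and "1,3" for socket 1.
	 */
	bm = numa_parse_nodestring(mem_nodes);
	numa_set_interleave_mask(bm);
	numa_bitmask_free(bm);
}

/* e.g. setup_worker(0, "0,2") on socket-0 threads,
 *      setup_worker(1, "1,3") on socket-1 threads.
 */

Of course MPOL_INTERLEAVE underneath is still unweighted round-robin, so
this only removes the cross-socket traffic; it does not capture the 307 vs
64GB/s imbalance between DDR and CXL on the same socket, which is what the
weights in this series are for.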

But TBH, I lack knowledge about real-life workloads, so my understanding
may be wrong.  Please correct me if I have made any mistakes.

--
Best Regards,
Huang, Ying

[snip]


Thread overview: 22+ messages
2023-10-09 20:42 Gregory Price
2023-10-09 20:42 ` [RFC PATCH v2 1/3] mm/memory-tiers: change mutex to rw semaphore Gregory Price
2023-10-09 20:42 ` [RFC PATCH v2 2/3] mm/memory-tiers: Introduce sysfs for tier interleave weights Gregory Price
2023-10-09 20:42 ` [RFC PATCH v2 3/3] mm/mempolicy: modify interleave mempolicy to use memtier weights Gregory Price
2023-10-11 21:15 ` [RFC PATCH v2 0/3] mm: mempolicy: Multi-tier weighted interleaving Matthew Wilcox
2023-10-10  1:07   ` Gregory Price
2023-10-16  7:57 ` Huang, Ying
2023-10-17  1:28   ` Gregory Price
2023-10-18  8:29     ` Huang, Ying
2023-10-17  2:52       ` Gregory Price
2023-10-19  6:28         ` Huang, Ying
2023-10-18  2:47           ` Gregory Price
2023-10-20  6:11             ` Huang, Ying [this message]
2023-10-19 13:26               ` Gregory Price
2023-10-23  2:09                 ` Huang, Ying
2023-10-24 15:32                   ` Gregory Price
2023-10-25  1:13                     ` Huang, Ying
2023-10-25 19:51                       ` Gregory Price
2023-10-30  2:20                         ` Huang, Ying
2023-10-30  4:19                           ` Gregory Price
2023-10-30  5:23                             ` Huang, Ying
2023-10-18  8:31       ` Huang, Ying
