From: Gregory Price <gregory.price@memverge.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Gregory Price <gourry.memverge@gmail.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	hannes@cmpxchg.org, dan.j.williams@intel.com,
	dave.jiang@intel.com
Subject: Re: [RFC 1/1] mm/mempolicy: introduce system default interleave weights
Date: Tue, 27 Feb 2024 01:11:50 -0500	[thread overview]
Message-ID: <Zd19JvKrhMho20Fg@memverge.com> (raw)
In-Reply-To: <87a5nme9c1.fsf@yhuang6-desk2.ccr.corp.intel.com>

On Tue, Feb 27, 2024 at 01:59:26PM +0800, Huang, Ying wrote:
> Gregory Price <gregory.price@memverge.com> writes:
> 
> > I have to press this issue: Is this an actual, practical concern?
> 
> I don't know who has a large machine like that.  But I guess that it's
> possible in the long run.
>

Certainly possible, although that seems like a hyper-specialized case
(e.g. a supercomputer).  I suppose it's still worth considering, though.

> > I suppose another strategy is to calculate the interleave weights
> > unbounded from the raw bandwidth - but continuously force reductions
> > (through some yet-undefined algorithm) until at least one node reaches a
> > weight of `1`.  This suffers from the opposite problem: what if the top
> > node ends up with a value greater than 255?  Do we just cap it at 255?
> > That seems problematic in the opposite direction.
> >
> > (Large numbers are quite pointless, as they are essentially the antithesis
> > of interleaving.)
> 
> Yes.  So I suggest using a relatively small number as the default weight
> to start with for normal DRAM.  We will have to floor/ceiling the weight
> value.

Yeah, more concretely, I was thinking something like:

unsigned int *temp_weights; /* one weight per node, nr_node_ids entries */

copy node_bandwidth[] into temp_weights[]
while min(temp_weights) > 1:
    - attempt GCD reduction (divide every weight by the common GCD)
    - if that fails (GCD == 1), bump every odd weight up by 1 so they
      are all even, then try again

for each node N:
    iw_table[N] = (temp_weights[N] > 255) ? 255 : (unsigned char)temp_weights[N];

Something like this.  Of course this breaks if you have two nodes with a
massively different bandwidth ratio (> 255:1), but that seems
unrealistic given the intent of the devices.
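
To make that concrete, here is a rough standalone sketch of the reduction
loop as plain userspace C (not kernel code).  NR_NODES and the
node_bandwidth[] values are made up purely for illustration; the real
input would be whatever per-node bandwidth numbers the kernel ends up
using:

#include <stdio.h>

#define NR_NODES 3

static unsigned int gcd(unsigned int a, unsigned int b)
{
	while (b) {
		unsigned int t = a % b;
		a = b;
		b = t;
	}
	return a;
}

int main(void)
{
	/* made-up bandwidths (e.g. GB/s): two DRAM nodes and one CXL node */
	unsigned int node_bandwidth[NR_NODES] = { 256, 256, 64 };
	unsigned int temp_weights[NR_NODES];
	unsigned char iw_table[NR_NODES];
	unsigned int i, min, g;

	for (i = 0; i < NR_NODES; i++)
		temp_weights[i] = node_bandwidth[i];

	for (;;) {
		/* stop once at least one node has been reduced to weight 1 */
		min = temp_weights[0];
		for (i = 1; i < NR_NODES; i++)
			if (temp_weights[i] < min)
				min = temp_weights[i];
		if (min <= 1)
			break;

		/* attempt GCD reduction across all nodes */
		g = temp_weights[0];
		for (i = 1; i < NR_NODES; i++)
			g = gcd(g, temp_weights[i]);

		if (g > 1) {
			for (i = 0; i < NR_NODES; i++)
				temp_weights[i] /= g;
		} else {
			/* GCD == 1: round odd weights up so the GCD is >= 2 next pass */
			for (i = 0; i < NR_NODES; i++)
				if (temp_weights[i] & 1)
					temp_weights[i]++;
		}
	}

	/* clamp into the unsigned char weight table */
	for (i = 0; i < NR_NODES; i++) {
		iw_table[i] = (temp_weights[i] > 255) ? 255 : (unsigned char)temp_weights[i];
		printf("node %u: bandwidth %u -> weight %u\n",
		       i, node_bandwidth[i], iw_table[i]);
	}
	return 0;
}

With those example numbers the 256:256:64 bandwidths reduce to 4:4:1
weights, which is the kind of small ratio we want to land in iw_table[].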

~Gregory

