From: Gregory Price <gregory.price@memverge.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Gregory Price <gourry.memverge@gmail.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-api@vger.kernel.org, corbet@lwn.net,
akpm@linux-foundation.org, honggyu.kim@sk.com, rakie.kim@sk.com,
hyeongtak.ji@sk.com, mhocko@kernel.org, vtavarespetr@micron.com,
jgroves@micron.com, ravis.opensrc@micron.com,
sthanneeru@micron.com, emirakhur@micron.com, Hasan.Maruf@amd.com,
seungjun.ha@samsung.com, hannes@cmpxchg.org,
dan.j.williams@intel.com,
Srinivasulu Thanneeru <sthanneeru.opensrc@micron.com>
Subject: Re: [PATCH 3/3] mm/mempolicy: introduce MPOL_WEIGHTED_INTERLEAVE for weighted interleaving
Date: Wed, 17 Jan 2024 23:06:46 -0500
Message-ID: <Zaij1uA4GvWxdNNW@memverge.com>
In-Reply-To: <87fryvz6gf.fsf@yhuang6-desk2.ccr.corp.intel.com>

On Thu, Jan 18, 2024 at 11:05:52AM +0800, Huang, Ying wrote:
> Gregory Price <gourry.memverge@gmail.com> writes:
> > +static unsigned long alloc_pages_bulk_array_weighted_interleave(gfp_t gfp,
> > + struct mempolicy *pol, unsigned long nr_pages,
> > + struct page **page_array)
> > +{
> > + struct task_struct *me = current;
> > + unsigned long total_allocated = 0;
> > + unsigned long nr_allocated;
> > + unsigned long rounds;
> > + unsigned long node_pages, delta;
> > + u8 weight;
> > + struct iw_table __rcu *table;
> > + u8 *weights;
> > + unsigned int weight_total = 0;
> > + unsigned long rem_pages = nr_pages;
> > + nodemask_t nodes;
> > + int nnodes, node, weight_nodes;
> > + int prev_node = NUMA_NO_NODE;
> > + int i;
> > +
> > + nnodes = read_once_policy_nodemask(pol, &nodes);
> > + if (!nnodes)
> > + return 0;
> > +
> > + /* Continue allocating from most recent node and adjust the nr_pages */
> > + if (pol->wil.cur_weight) {
> > + node = next_node_in(me->il_prev, nodes);
> > + node_pages = pol->wil.cur_weight;
> > + if (node_pages > rem_pages)
> > + node_pages = rem_pages;
> > + nr_allocated = __alloc_pages_bulk(gfp, node, NULL, node_pages,
> > + NULL, page_array);
> > + page_array += nr_allocated;
> > + total_allocated += nr_allocated;
> > + /* if that's all the pages, no need to interleave */
> > + if (rem_pages <= pol->wil.cur_weight) {
> > + pol->wil.cur_weight -= rem_pages;
> > + return total_allocated;
> > + }
> > + /* Otherwise we adjust nr_pages down, and continue from there */
> > + rem_pages -= pol->wil.cur_weight;
> > + pol->wil.cur_weight = 0;
> > + prev_node = node;
> > + }
> > +
> > + /* fetch the weights for this operation and calculate total weight */
> > + weights = kmalloc(nnodes, GFP_KERNEL);
> > + if (!weights)
> > + return total_allocated;
> > +
> > + rcu_read_lock();
> > + table = rcu_dereference(iw_table);
> > + weight_nodes = 0;
> > + for_each_node_mask(node, nodes) {
> > + weights[weight_nodes++] = table->weights[node];
> > + weight_total += table->weights[node];
> > + }
> > + rcu_read_unlock();
> > +
> > + if (!weight_total) {
> > + kfree(weights);
> > + return total_allocated;
> > + }
> > +
> > + /* Now we can continue allocating as if from 0 instead of an offset */
> > + rounds = rem_pages / weight_total;
> > + delta = rem_pages % weight_total;
> > + for (i = 0; i < nnodes; i++) {
> > + node = next_node_in(prev_node, nodes);
> > + weight = weights[i];
> > + node_pages = weight * rounds;
> > + if (delta) {
> > + if (delta > weight) {
> > + node_pages += weight;
> > + delta -= weight;
> > + } else {
> > + node_pages += delta;
> > + delta = 0;
> > + }
> > + }
> > + nr_allocated = __alloc_pages_bulk(gfp, node, NULL, node_pages,
> > + NULL, page_array);
> > + page_array += nr_allocated;
> > + total_allocated += nr_allocated;
> > + if (total_allocated == nr_pages)
> > + break;
> > + prev_node = node;
> > + }
> > +
> > + /*
> > + * Finally, we need to update me->il_prev and pol->wil.cur_weight
> > + * if there were overflow pages, but not equivalent to the node
> > + * weight, set the cur_weight to node_weight - delta and the
> > + * me->il_prev to the previous node. Otherwise if it was perfect
> > + * we can simply set il_prev to node and cur_weight to 0
> > + */
> > + if (node_pages) {
> > + me->il_prev = prev_node;
> > + node_pages %= weight;
> > + pol->wil.cur_weight = weight - node_pages;
> > + } else {
> > + me->il_prev = node;
> > + pol->wil.cur_weight = 0;
> > + }
>
>
> It appears that we should set me->il_prev and pol->wil.cur_weight when
> delta becomes 0?  That is, the following allocation should start from
> there?
>

So the observation is that once delta reaches 0, we know what the prior
node should be. The only corner case is delta being 0 when we enter the
loop (in which case the current prev_node is already the correct
prev_node).

Eyeballing it, this seems correct, but I'll do some additional
validation tomorrow. That should clean up the last block a bit.
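
Roughly what I have in mind (untested sketch only, reusing the names
from the hunk above, not the actual respin): capture the resume
position at the moment delta runs out, instead of reconstructing it
from node_pages after the loop:

	/*
	 * Sketch: record where the next allocation should resume as
	 * soon as delta is exhausted. If delta is 0 on loop entry,
	 * the position simply wraps back to the current prev_node.
	 */
	int resume_node = prev_node;
	u8 resume_weight = 0;

	for (i = 0; i < nnodes; i++) {
		node = next_node_in(prev_node, nodes);
		weight = weights[i];
		node_pages = weight * rounds;
		if (delta > weight) {
			node_pages += weight;
			delta -= weight;
		} else if (delta) {
			/* delta runs out on this node */
			node_pages += delta;
			resume_node = (delta == weight) ? node : prev_node;
			resume_weight = weight - delta;
			delta = 0;
		}
		nr_allocated = __alloc_pages_bulk(gfp, node, NULL, node_pages,
						  NULL, page_array);
		page_array += nr_allocated;
		total_allocated += nr_allocated;
		if (total_allocated == nr_pages)
			break;
		prev_node = node;
	}
	me->il_prev = resume_node;
	pol->wil.cur_weight = resume_weight;

The full rounds allocated after delta hits 0 don't move the resume
position, since each complete cycle lands back on the same node, so
recording it at that point should be sufficient.
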
Thanks!
~Gregory