* [RFC PATCH v2 0/4] mm/mempolicy: get/set_mempolicy2 syscalls
@ 2023-10-03 0:21 Gregory Price
2023-10-03 0:21 ` [RFC PATCH v2 3/4] mm/mempolicy: implement a preferred-interleave Gregory Price
2023-10-03 0:21 ` [RFC PATCH v2 4/4] mm/mempolicy: implement a weighted-interleave Gregory Price
0 siblings, 2 replies; 3+ messages in thread
From: Gregory Price @ 2023-10-03 0:21 UTC (permalink / raw)
To: linux-mm
Cc: linux-kernel, linux-arch, linux-api, linux-cxl, luto, tglx,
mingo, bp, dave.hansen, hpa, arnd, akpm, x86, Gregory Price
v2: style updates, weighted-interleave, rename partial-interleave to
preferred-interleave, variety of bug fixes.
---
This patch set is a proposal for set_mempolicy2 and get_mempolicy2
system calls. This is an extension of the existing mempolicy
syscalls that allows for a more flexible mempolicy interface and
for new, more complex memory policies.
This RFC is broken into 4 patches for discussion:
1) A refactor of do_set_mempolicy that allows code reuse for
the new syscalls when replacing the task mempolicy.
2) The implementation of get_mempolicy2 and set_mempolicy2 which
includes a new uapi type: "struct mempolicy_args" and denotes
the original mempolicies as "legacy". This allows the existing
policies to be routed through the original interface.
(note: only implemented on x86 at this time, though it can be
hacked into other architectures somewhat trivially)
3) The implementation of "preferred-interleave", a policy which
applies a weight to the local node while interleaving.
4) The implementation of "weighted-interleave", a policy which
applies weights to all enabled nodes while interleaving.
x) Future Updates: ktest, numactl, and man page updates
Besides the obvious proposal of extending the mempolicy subsystem for
new policies, the core proposal is the addition of the new uapi type
"struct mempolicy". In this proposal, the get and set interfaces use
the same structure, and some fields may be ignored depending on the
requested operation.
This sample implementation of get_mempolicy2 allows for the retrieval
of all information that would previously have required multiple calls
to get_mempolicy, and adds an area for per-policy information.
This allows for future extensibility, and would avoid the need for
additional syscalls in the future.
struct mempolicy_args {
unsigned short mode;
unsigned long *nodemask;
unsigned long maxnode;
unsigned short flags;
struct {
/* Memory allowed */
struct {
unsigned long maxnode;
unsigned long *nodemask;
} allowed;
/* Address information */
struct {
unsigned long addr;
unsigned long node;
unsigned short mode;
unsigned short flags;
} addr;
} get;
union {
/* Interleave */
struct {
unsigned long next_node; /* get only */
} interleave;
/* Preferred Interleave */
struct {
unsigned long weight; /* get and set */
unsigned long next_node; /* get only */
} pil;
/* Weighted Interleave */
struct {
unsigned long next_node; /* get only */
unsigned char *weights; /* get and set */
} wil;
};
};
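For illustration, userspace might drive the new interface roughly as
follows. This is only a sketch: the syscall number, the exact
set_mempolicy2 argument list, and the maxnode convention are
assumptions made for the example, and struct mempolicy_args is taken
from the patched uapi header in this series.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/mempolicy.h>    /* patched header: struct mempolicy_args */

    #ifndef __NR_set_mempolicy2
    #define __NR_set_mempolicy2 454 /* placeholder; use the assigned number */
    #endif

    int main(void)
    {
            struct mempolicy_args args;
            unsigned long nodemask = 0x7;   /* nodes 0,1,2 */

            memset(&args, 0, sizeof(args));
            args.mode = MPOL_PREFERRED_INTERLEAVE;
            args.nodemask = &nodemask;
            args.maxnode = 3;               /* assumed: highest node id + 1 */
            args.pil.weight = 3;            /* local allocations per pass */

            /* argument list is a guess; patch 2/4 defines the real signature */
            if (syscall(__NR_set_mempolicy2, &args, sizeof(args)))
                    perror("set_mempolicy2");
            return 0;
    }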
In the third and fourth patch, we implement preferred and weighted
interleave policies (respectively), which could not be implemented
with the existing syscalls.
We extend the internal mempolicy structure to include
a new union area which can be used to host complex policy data.
Example:
union {
/* Preferred Interleave: Allocate local count, then interleave */
struct {
int weight;
int count;
} pil;
/* Weighted Interleave */
struct {
unsigned int il_weight;
unsigned char cur_weight;
unsigned char weights[MAX_NUMNODES];
} wil;
};
Summary of Preferred Interleave:
================================
nodeset=0,1,2
interval=3
cpunode=0
The preferred node (cpunode) is the node on which [weight] (here,
interval=3) allocations are made before an interleave pass occurs.
Over 10 consecutive allocations, the following nodes will be selected:
[0,0,0,1,2,0,0,0,1,2]
In this example, there is a 60%/20%/20% distribution of memory across
the node set.
This is a useful strategy if the goal is an even distribution of
memory across all non-local nodes for the purpose of bandwidth AND
task-node migrations are a possibility. In this case, the weight
applies to whatever the local node happens to be at the time of the
interleave, rather than a static node weight.
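The selection sequence above can be modeled with a few lines of
userspace C. This is a behavioral sketch of the intended round-robin,
not the kernel implementation:

    #include <stdio.h>

    int main(void)
    {
            const int nodes[] = { 0, 1, 2 };  /* nodemask */
            const int nr_nodes = 3;
            const int weight = 3;             /* allocations on the local node per pass */
            const int local = 0;              /* index of the local (preferred) node */
            int pos = local, count = 0;

            for (int i = 0; i < 10; i++) {
                    printf("%d ", nodes[pos]);
                    if (pos == local) {
                            /* serve the local node 'weight' times, then move on */
                            if (++count >= weight) {
                                    count = 0;
                                    pos = (pos + 1) % nr_nodes;
                            }
                    } else {
                            pos = (pos + 1) % nr_nodes;
                    }
            }
            printf("\n");   /* prints: 0 0 0 1 2 0 0 0 1 2 */
            return 0;
    }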
Summary of Weighted Interleave:
===============================
The weighted-interleave mempolicy implements weights per-node
which are used to distribute memory while interleaving.
For example:
nodes: 0,1,2
weights: 5,3,2
Over 10 consecutive allocations, the following nodes will be selected:
[0,0,0,0,0,1,1,1,2,2]
If a node is enabled, the minimum weight is 1. If an enabled node
ends up with a weight of 0 (cgroup updates can cause a runtime
recalculation), a minimum of 1 is applied during interleave.
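As with preferred-interleave, the selection sequence above can be
reproduced with a small userspace model (a sketch of the intended
behavior, not the kernel code):

    #include <stdio.h>

    int main(void)
    {
            const int nodes[]   = { 0, 1, 2 };  /* nodemask */
            const int weights[] = { 5, 3, 2 };  /* per-node weights */
            const int nr_nodes  = 3;
            int pos = 0, remaining = weights[0];

            for (int i = 0; i < 10; i++) {
                    printf("%d ", nodes[pos]);
                    if (--remaining == 0) {     /* weight exhausted, next node */
                            pos = (pos + 1) % nr_nodes;
                            remaining = weights[pos];
                    }
            }
            printf("\n");   /* prints: 0 0 0 0 0 1 1 1 2 2 */
            return 0;
    }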
This is a useful strategy if the goal is a non-even distribution of
memory across a variety of nodes AND task-node migrations are NOT
expected to occur (or the weights are approximately the same
relative to all possible target nodes).
This is because "Thread A" with weights set for best performance
from the perspective of "Socket 0" may have a less-than-optimal
interleave strategy if "Thread A" is migrated to "Socket 1". In
this scenario, the bandwidth and latency attributes of each node
will have changed, as will the local node.
In the above example, a thread migrating from node 0 to node 1 will
cause most of its memory to be allocated on remote nodes, which is
less than optimal.
Some notes for discussion
=========================
0) Why?
In the coming age of CXL and many-NUMA-node systems with memory
hosted on the PCIe bus, it is likely to be beneficial to experiment
with, and ultimately implement, new allocation-time placement
policies.
Presently, much focus is placed on memory-usage monitoring and data
migration, but these methods steal performance to accomplish what
could be optimized for up-front. For example, if maximizing bandwidth
is preferable, then a statistical distribution of memory can be
calculated fairly easily based on task location.
Getting a fair approximation of distribution at allocation can help
reduce the migration load required after the fact. This is the
intent of the included preferred-interleave example, which allows for
an approximate distribution of memory, where the local node is still
the preferred location for the majority of memory.
1) Maybe this should be a set of sysfs interfaces?
This would involve adding a /proc/pid/mempolicy interface that
allows for external processes to interrogate and change the
mempolicy of running processes. This would be a fundamental
change to the mempolicy subsystem.
I attempted this, but eventually came to the conclusion that it
would require a much more radical re-write of mempolicy.c code
due to concurrency issues.
Notably, mempolicy.c is very "current"-centric, and is not well
designed for runtime changes to the nodemask (and, consequently, the
new weights added to struct mempolicy).
I avoided that for this RFC as it seemed far more radical than
proposing a set/get_mempolicy2 interface, though technically it
could be done.
2) Why not do this in cgroups or memtier?
Both have the issue of functionally being a "global" setting,
in the sense that cgroup/memtier-implemented weights would
produce poor results for processes whose threads span multiple
sockets (or after a thread migration).
Consider the following scenario:
Node 0 - Socket 0 DRAM
Node 1 - Socket 1 DRAM
Node 2 - Socket 0 local CXL
Node 3 - Socket 1 local CXL
Weights:
[0:4, 1:2, 2:2, 3:1]
The "Tiers" in this case are essentially [0, 1-2, 3]
We have 2 tasks in our cgroup:
Thread A - socket 0
Thread B - socket 1
In this scenario, Thread B will have a very poor distribution of
memory, with most of its memory landing on the remote socket.
Instead, it's preferable for workloads to stick to a single socket
where possible, and future work will need to be done to determine
how to handle workloads which span sockets. Due to the
above-mentioned issues with concurrency, this may take quite some time.
In the meantime, there is a use case for weights to be carried per-task.
For migrations:
Weights could be recalculated based on the new location of the
task. This recalculation of weights is not included in this
patch set, but could be done as an extension to weighted
interleave, where a thread that detects it has been migrated
works with memtier.c to adjust its weights internally.
So basically even if you implement these things in cgroups/memtier,
you still require per-task information (local node) to adjust the
weights. My proposal: Just do it in mempolicy and use things like
cgroups/memtier to enrich that implementation, rather than the other
way around.
3) Do we need this level of extensibility?
Presently the ability to dictate allocation-time placement is
limited to a few primitive mechanisms:
1) existing mempolicy, and those that can be implemented using
the existing interface.
2) numa-aware applications, requiring code changes.
3) LD_PRELOAD methods, which have compatibility issues.
For the sake of compatibility, being able to extend numactl to
include newer, more complex policies would be beneficial.
Gregory Price (4):
mm/mempolicy: refactor do_set_mempolicy for code re-use
mm/mempolicy: Implement set_mempolicy2 and get_mempolicy2 syscalls
mm/mempolicy: implement a preferred-interleave mempolicy
mm/mempolicy: implement a weighted-interleave mempolicy
arch/x86/entry/syscalls/syscall_32.tbl | 2 +
arch/x86/entry/syscalls/syscall_64.tbl | 2 +
include/linux/mempolicy.h | 14 +
include/linux/syscalls.h | 4 +
include/uapi/asm-generic/unistd.h | 10 +-
include/uapi/linux/mempolicy.h | 41 ++
mm/mempolicy.c | 688 ++++++++++++++++++++++++-
7 files changed, 741 insertions(+), 20 deletions(-)
--
2.39.1
* [RFC PATCH v2 3/4] mm/mempolicy: implement a preferred-interleave
2023-10-03 0:21 [RFC PATCH v2 0/4] mm/mempolicy: get/set_mempolicy2 syscalls Gregory Price
@ 2023-10-03 0:21 ` Gregory Price
2023-10-03 0:21 ` [RFC PATCH v2 4/4] mm/mempolicy: implement a weighted-interleave Gregory Price
1 sibling, 0 replies; 3+ messages in thread
From: Gregory Price @ 2023-10-03 0:21 UTC (permalink / raw)
To: linux-mm
Cc: linux-kernel, linux-arch, linux-api, linux-cxl, luto, tglx,
mingo, bp, dave.hansen, hpa, arnd, akpm, x86, Gregory Price
The preferred-interleave mempolicy implements a single-weight
interleave mechanism where the preferred node is the local node.
If the local node is not set in the nodemask, the first node in the
nodemask is the preferred node.
When set, N (weight) pages will be allocated on the preferred node
before an interleave pass occurs.
For example:
nodes=0,1,2
interval=3
cpunode=0
Over 10 consecutive allocations, the following nodes will be selected:
[0,0,0,1,2,0,0,0,1,2]
In this example, there is a 60%/20%/20% distribution of memory.
Using this mechanism, it becomes possible to define an approximate
distribution percentage of memory across a set of nodes:
local_node% : interval/((nr_nodes-1)+interval)
other_node% : (1-local_node%)/(nr_nodes-1)
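Checking against the example above: local_node% = 3/((3-1)+3) = 3/5
= 60%, and other_node% = (1 - 0.6)/(3-1) = 20% per remote node,
matching the 60%/20%/20% split.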
This behavior can be preferable to a fully-weighted interleave (where
each node has a separate weight) when migrations or multiple sockets
may be in use. If a task migrates, the weight applies to the new
local node without a need for the task to "rebalance" its weights.
Similarly, if nodes are removed from the nodemask, no weights need
to be recalculated. The exception to this is when the local node is
removed from the nodemask, which is a rare situation.
Similarly, consider a task executing on a 2-socket system which creates
a new thread. If the first thread is scheduled to execute on socket 0
and the second thread is scheduled to execute on socket 1, weightings
set by thread 1 (which are inherited by thread 2) would very likely
be a poor interleave strategy for the new thread.
In this scheme, thread 2 would inherit the same weight, but it would
apply to the local node of thread 2, leading to more predictable
behavior for new allocations.
Signed-off-by: Gregory Price <gregory.price@memverge.com>
---
include/linux/mempolicy.h | 8 ++
include/uapi/linux/mempolicy.h | 6 +
mm/mempolicy.c | 203 ++++++++++++++++++++++++++++++++-
3 files changed, 212 insertions(+), 5 deletions(-)
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index d232de7cdc56..8f918488c61c 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -48,6 +48,14 @@ struct mempolicy {
nodemask_t nodes; /* interleave/bind/perfer */
int home_node; /* Home node to use for MPOL_BIND and MPOL_PREFERRED_MANY */
+ union {
+ /* Preferred Interleave: Weight local, then interleave */
+ struct {
+ int weight;
+ int count;
+ } pil;
+ };
+
union {
nodemask_t cpuset_mems_allowed; /* relative to these nodes */
nodemask_t user_nodemask; /* nodemask passed by user */
diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
index ea386872094b..41c35f404c5e 100644
--- a/include/uapi/linux/mempolicy.h
+++ b/include/uapi/linux/mempolicy.h
@@ -24,6 +24,7 @@ enum {
MPOL_LOCAL,
MPOL_PREFERRED_MANY,
MPOL_LEGACY, /* set_mempolicy limited to above modes */
+ MPOL_PREFERRED_INTERLEAVE,
MPOL_MAX, /* always last member of enum */
};
@@ -52,6 +53,11 @@ struct mempolicy_args {
struct {
unsigned long next_node; /* get only */
} interleave;
+ /* Preferred interleave */
+ struct {
+ unsigned long weight; /* get and set */
+ unsigned long next_node; /* get only */
+ } pil;
};
};
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 936c641f554e..6374312cef5f 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -399,6 +399,10 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
.create = mpol_new_nodemask,
.rebind = mpol_rebind_nodemask,
},
+ [MPOL_PREFERRED_INTERLEAVE] = {
+ .create = mpol_new_nodemask,
+ .rebind = mpol_rebind_nodemask,
+ },
[MPOL_PREFERRED] = {
.create = mpol_new_preferred,
.rebind = mpol_rebind_preferred,
@@ -873,7 +877,8 @@ static long replace_mempolicy(struct mempolicy *new, nodemask_t *nodes)
old = current->mempolicy;
current->mempolicy = new;
- if (new && new->mode == MPOL_INTERLEAVE)
+ if (new && (new->mode == MPOL_INTERLEAVE ||
+ new->mode == MPOL_PREFERRED_INTERLEAVE))
current->il_prev = MAX_NUMNODES-1;
out:
task_unlock(current);
@@ -915,6 +920,7 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
switch (p->mode) {
case MPOL_BIND:
case MPOL_INTERLEAVE:
+ case MPOL_PREFERRED_INTERLEAVE:
case MPOL_PREFERRED:
case MPOL_PREFERRED_MANY:
*nodes = p->nodes;
@@ -1609,6 +1615,23 @@ SYSCALL_DEFINE3(set_mempolicy, int, mode, const unsigned long __user *, nmask,
return kernel_set_mempolicy(mode, nmask, maxnode);
}
+static long do_set_preferred_interleave(struct mempolicy_args *args,
+ struct mempolicy *new,
+ nodemask_t *nodes)
+{
+ /* Preferred interleave cannot be done with no nodemask */
+ if (nodes_empty(*nodes))
+ return -EINVAL;
+
+ /* Preferred interleave weight cannot be <= 0 */
+ if (args->pil.weight <= 0)
+ return -EINVAL;
+
+ new->pil.weight = args->pil.weight;
+ new->pil.count = 0;
+ return 0;
+}
+
static long do_set_mempolicy2(struct mempolicy_args *args)
{
struct mempolicy *new = NULL;
@@ -1630,6 +1653,9 @@ static long do_set_mempolicy2(struct mempolicy_args *args)
return PTR_ERR(new);
switch (args->mode) {
+ case MPOL_PREFERRED_INTERLEAVE:
+ err = do_set_preferred_interleave(args, new, &nodes);
+ break;
default:
BUG();
}
@@ -1767,6 +1793,12 @@ static long do_get_mempolicy2(struct mempolicy_args *kargs)
pol->nodes);
rc = 0;
break;
+ case MPOL_PREFERRED_INTERLEAVE:
+ kargs->pil.next_node = next_node_in(current->il_prev,
+ pol->nodes);
+ kargs->pil.weight = pol->pil.weight;
+ rc = 0;
+ break;
default:
BUG();
}
@@ -2102,12 +2134,41 @@ static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
return nd;
}
+static unsigned int preferred_interleave_nodes(struct mempolicy *policy)
+{
+ int mynode = numa_node_id();
+ struct task_struct *me = current;
+ int next;
+
+ /*
+ * If the local node is not in the node mask, we treat the
+ * lowest node as the preferred node. This can happen if the
+ * cpu is bound to a node that is not present in the mempolicy
+ */
+ if (!node_isset(mynode, policy->nodes))
+ mynode = first_node(policy->nodes);
+
+ next = next_node_in(me->il_prev, policy->nodes);
+ if (next == mynode) {
+ if (++policy->pil.count >= policy->pil.weight) {
+ policy->pil.count = 0;
+ me->il_prev = next;
+ }
+ } else if (next < MAX_NUMNODES) {
+ me->il_prev = next;
+ }
+ return next;
+}
+
/* Do dynamic interleaving for a process */
static unsigned interleave_nodes(struct mempolicy *policy)
{
unsigned next;
struct task_struct *me = current;
+ if (policy->mode == MPOL_PREFERRED_INTERLEAVE)
+ return preferred_interleave_nodes(policy);
+
next = next_node_in(me->il_prev, policy->nodes);
if (next < MAX_NUMNODES)
me->il_prev = next;
@@ -2135,6 +2196,7 @@ unsigned int mempolicy_slab_node(void)
return first_node(policy->nodes);
case MPOL_INTERLEAVE:
+ case MPOL_PREFERRED_INTERLEAVE:
return interleave_nodes(policy);
case MPOL_BIND:
@@ -2161,6 +2223,56 @@ unsigned int mempolicy_slab_node(void)
}
}
+static unsigned int offset_pil_node(struct mempolicy *pol, unsigned long n)
+{
+ nodemask_t nodemask = pol->nodes;
+ unsigned int target, nnodes;
+ int i;
+ int nid = MAX_NUMNODES;
+ int weight = pol->pil.weight;
+
+ /*
+ * The barrier will stabilize the nodemask in a register or on
+ * the stack so that it will stop changing under the code.
+ *
+ * Between first_node() and next_node(), pol->nodes could be changed
+ * by other threads. So we put pol->nodes in a local stack.
+ */
+ barrier();
+
+ nnodes = nodes_weight(nodemask);
+
+ /*
+ * If the local node ID is not set (cpu is bound to a node
+ * but that node is not set in the memory nodemask), interleave
+ * based on the lowest set node.
+ */
+ nid = numa_node_id();
+ if (!node_isset(nid, nodemask))
+ nid = first_node(nodemask);
+ /*
+ * Mode or weight can change so default to basic interleave
+ * if the weight has become invalid. Basic interleave is
+ * equivalent to weight=1. Don't double-count the base node
+ */
+ if (weight == 0)
+ weight = 1;
+ weight -= 1;
+
+ /* If target <= the weight, no need to call next_node */
+ target = ((unsigned int)n % (nnodes + weight));
+ target -= (target > weight) ? weight : target;
+ target %= MAX_NUMNODES;
+
+ /* Target may not be the first node, so use next_node_in to wrap */
+ for (i = 0; i < target; i++) {
+ nid = next_node_in(nid, nodemask);
+ if (nid == MAX_NUMNODES)
+ nid = first_node(nodemask);
+ }
+ return nid;
+}
+
/*
* Do static interleaving for a VMA with known offset @n. Returns the n'th
* node in pol->nodes (starting from n=0), wrapping around if n exceeds the
@@ -2168,10 +2280,16 @@ unsigned int mempolicy_slab_node(void)
*/
static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
{
- nodemask_t nodemask = pol->nodes;
+ nodemask_t nodemask;
unsigned int target, nnodes;
int i;
int nid;
+
+ if (pol->mode == MPOL_PREFERRED_INTERLEAVE)
+ return offset_pil_node(pol, n);
+
+ nodemask = pol->nodes;
+
/*
* The barrier will stabilize the nodemask in a register or on
* the stack so that it will stop changing under the code.
@@ -2239,7 +2357,8 @@ int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
*nodemask = NULL;
mode = (*mpol)->mode;
- if (unlikely(mode == MPOL_INTERLEAVE)) {
+ if (unlikely(mode == MPOL_INTERLEAVE) ||
+ unlikely(mode == MPOL_PREFERRED_INTERLEAVE)) {
nid = interleave_nid(*mpol, vma, addr,
huge_page_shift(hstate_vma(vma)));
} else {
@@ -2280,6 +2399,7 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
case MPOL_PREFERRED_MANY:
case MPOL_BIND:
case MPOL_INTERLEAVE:
+ case MPOL_PREFERRED_INTERLEAVE:
*mask = mempolicy->nodes;
break;
@@ -2390,7 +2510,8 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
pol = get_vma_policy(vma, addr);
- if (pol->mode == MPOL_INTERLEAVE) {
+ if (pol->mode == MPOL_INTERLEAVE ||
+ pol->mode == MPOL_PREFERRED_INTERLEAVE) {
struct page *page;
unsigned nid;
@@ -2492,7 +2613,8 @@ struct page *alloc_pages(gfp_t gfp, unsigned order)
* No reference counting needed for current->mempolicy
* nor system default_policy
*/
- if (pol->mode == MPOL_INTERLEAVE)
+ if (pol->mode == MPOL_INTERLEAVE ||
+ pol->mode == MPOL_PREFERRED_INTERLEAVE)
page = alloc_page_interleave(gfp, order, interleave_nodes(pol));
else if (pol->mode == MPOL_PREFERRED_MANY)
page = alloc_pages_preferred_many(gfp, order,
@@ -2552,6 +2674,69 @@ static unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp,
return total_allocated;
}
+static unsigned long alloc_pages_bulk_array_pil(gfp_t gfp,
+ struct mempolicy *pol,
+ unsigned long nr_pages,
+ struct page **page_array)
+{
+ nodemask_t nodemask = pol->nodes;
+ unsigned long nr_pages_main;
+ unsigned long nr_pages_other;
+ unsigned long total_cycle;
+ unsigned long delta;
+ unsigned long weight;
+ int allocated = 0;
+ int start_nid;
+ int nnodes;
+ int prev, next;
+ int i;
+
+ /* This stabilizes nodes on the stack in case pol->nodes changes */
+ barrier();
+
+ nnodes = nodes_weight(nodemask);
+ start_nid = numa_node_id();
+
+ if (!node_isset(start_nid, nodemask))
+ start_nid = first_node(nodemask);
+
+ if (nnodes == 1) {
+ allocated = __alloc_pages_bulk(gfp, start_nid,
+ NULL, nr_pages,
+ NULL, page_array);
+ return allocated;
+ }
+ /* We don't want to double-count the main node in calculations */
+ nnodes--;
+
+ weight = pol->pil.weight;
+ total_cycle = (weight + nnodes);
+ /* Number of pages on main node: (cycles*weight + up to weight) */
+ nr_pages_main = ((nr_pages / total_cycle) * weight);
+ nr_pages_main += min(nr_pages % total_cycle, weight);
+ /* Number of pages on others: (remaining/nodes) + 1 page if delta */
+ nr_pages_other = (nr_pages - nr_pages_main) / nnodes;
+ /* Delta is number of pages beyond weight up to full cycle */
+ delta = nr_pages - (nr_pages_main + (nr_pages_other * nnodes));
+
+ /* start by allocating for the main node, then interleave rest */
+ prev = start_nid;
+ allocated = __alloc_pages_bulk(gfp, start_nid, NULL, nr_pages_main,
+ NULL, page_array);
+ for (i = 0; i < nnodes; i++) {
+ int pages = nr_pages_other + (delta ? 1 : 0);
+
+ if (delta)
+ delta--;
+
+ next = next_node_in(prev, nodemask);
+ if (next < MAX_NUMNODES)
+ prev = next;
+ allocated += __alloc_pages_bulk(gfp, next, NULL, pages,
+ NULL, page_array);
+ }
+
+ return allocated;
+}
+
static unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid,
struct mempolicy *pol, unsigned long nr_pages,
struct page **page_array)
@@ -2590,6 +2775,10 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
return alloc_pages_bulk_array_interleave(gfp, pol,
nr_pages, page_array);
+ if (pol->mode == MPOL_PREFERRED_INTERLEAVE)
+ return alloc_pages_bulk_array_pil(gfp, pol, nr_pages,
+ page_array);
+
if (pol->mode == MPOL_PREFERRED_MANY)
return alloc_pages_bulk_array_preferred_many(gfp,
numa_node_id(), pol, nr_pages, page_array);
@@ -2662,6 +2851,7 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
switch (a->mode) {
case MPOL_BIND:
case MPOL_INTERLEAVE:
+ case MPOL_PREFERRED_INTERLEAVE:
case MPOL_PREFERRED:
case MPOL_PREFERRED_MANY:
return !!nodes_equal(a->nodes, b->nodes);
@@ -2798,6 +2988,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
switch (pol->mode) {
case MPOL_INTERLEAVE:
+ case MPOL_PREFERRED_INTERLEAVE:
pgoff = vma->vm_pgoff;
pgoff += (addr - vma->vm_start) >> PAGE_SHIFT;
polnid = offset_il_node(pol, pgoff);
@@ -3185,6 +3376,7 @@ static const char * const policy_modes[] =
[MPOL_PREFERRED] = "prefer",
[MPOL_BIND] = "bind",
[MPOL_INTERLEAVE] = "interleave",
+ [MPOL_PREFERRED_INTERLEAVE] = "preferred interleave",
[MPOL_LOCAL] = "local",
[MPOL_PREFERRED_MANY] = "prefer (many)",
};
@@ -3355,6 +3547,7 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
case MPOL_PREFERRED_MANY:
case MPOL_BIND:
case MPOL_INTERLEAVE:
+ case MPOL_PREFERRED_INTERLEAVE:
nodes = pol->nodes;
break;
default:
--
2.39.1
* [RFC PATCH v2 4/4] mm/mempolicy: implement a weighted-interleave
2023-10-03 0:21 [RFC PATCH v2 0/4] mm/mempolicy: get/set_mempolicy2 syscalls Gregory Price
2023-10-03 0:21 ` [RFC PATCH v2 3/4] mm/mempolicy: implement a preferred-interleave Gregory Price
@ 2023-10-03 0:21 ` Gregory Price
1 sibling, 0 replies; 3+ messages in thread
From: Gregory Price @ 2023-10-03 0:21 UTC (permalink / raw)
To: linux-mm
Cc: linux-kernel, linux-arch, linux-api, linux-cxl, luto, tglx,
mingo, bp, dave.hansen, hpa, arnd, akpm, x86, Gregory Price
The weighted-interleave mempolicy implements weights per-node
which are used to distribute memory while interleaving.
For example:
nodes: 0,1,2
weights: 5,3,2
Over 10 consecutive allocations, the following nodes will be selected:
[0,0,0,0,0,1,1,1,2,2]
In this example there is a 50%/30%/20% distribution of memory across
the enabled nodes.
If a node is enabled, the minimum weight is expected to be 1. If an
enabled node ends up with a weight of 0 (as can happen if weights
are being recalculated due to a cgroup mask update), a minimum
of 1 is applied during the interleave mechanism.
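As a rough sketch of the intended usage from userspace (the syscall
number, the exact set_mempolicy2 argument list, the maxnode
convention, and the weight-buffer size are assumptions for the
example; the kernel side copies MAX_NUMNODES weight bytes, so the
buffer must be at least that large):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/mempolicy.h>    /* patched header: struct mempolicy_args */

    #ifndef __NR_set_mempolicy2
    #define __NR_set_mempolicy2 454 /* placeholder; use the assigned number */
    #endif

    int main(void)
    {
            struct mempolicy_args args;
            unsigned long nodemask = 0x7;           /* nodes 0,1,2 */
            static unsigned char weights[4096];     /* >= kernel MAX_NUMNODES */

            weights[0] = 5;
            weights[1] = 3;
            weights[2] = 2;

            memset(&args, 0, sizeof(args));
            args.mode = MPOL_WEIGHTED_INTERLEAVE;
            args.nodemask = &nodemask;
            args.maxnode = 3;                       /* assumed: highest node id + 1 */
            args.wil.weights = weights;

            /* argument list is a guess; patch 2/4 defines the real signature */
            if (syscall(__NR_set_mempolicy2, &args, sizeof(args)))
                    perror("set_mempolicy2");
            return 0;
    }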
Signed-off-by: Gregory Price <gregory.price@memverge.com>
---
include/linux/mempolicy.h | 6 +
include/uapi/linux/mempolicy.h | 6 +
mm/mempolicy.c | 261 ++++++++++++++++++++++++++++++++-
3 files changed, 269 insertions(+), 4 deletions(-)
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 8f918488c61c..8763e536d4a2 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -54,6 +54,12 @@ struct mempolicy {
int weight;
int count;
} pil;
+ /* weighted interleave */
+ struct {
+ unsigned int il_weight;
+ unsigned char cur_weight;
+ unsigned char weights[MAX_NUMNODES];
+ } wil;
};
union {
diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
index 41c35f404c5e..913ca9bf9af7 100644
--- a/include/uapi/linux/mempolicy.h
+++ b/include/uapi/linux/mempolicy.h
@@ -25,6 +25,7 @@ enum {
MPOL_PREFERRED_MANY,
MPOL_LEGACY, /* set_mempolicy limited to above modes */
MPOL_PREFERRED_INTERLEAVE,
+ MPOL_WEIGHTED_INTERLEAVE,
MPOL_MAX, /* always last member of enum */
};
@@ -58,6 +59,11 @@ struct mempolicy_args {
unsigned long weight; /* get and set */
unsigned long next_node; /* get only */
} pil;
+ /* Weighted interleave */
+ struct {
+ unsigned long next_node; /* get only */
+ unsigned char *weights; /* get and set */
+ } wil;
};
};
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 6374312cef5f..92be74d4c431 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -195,11 +195,43 @@ static void mpol_relative_nodemask(nodemask_t *ret, const nodemask_t *orig,
nodes_onto(*ret, tmp, *rel);
}
+static void mpol_recalculate_weights(struct mempolicy *pol)
+{
+ unsigned int il_weight = 0;
+ int node;
+
+ /* Recalculate weights to ensure minimum node weight */
+ for (node = 0; node < MAX_NUMNODES; node++) {
+ if (!node_isset(node, pol->nodes) && pol->wil.weights[node]) {
+ /* If node is not set, weight should be 0 */
+ pol->wil.weights[node] = 0;
+ } else if (!pol->wil.weights[node]) {
+ /* If node is set, weight should be minimum of 1 */
+ pol->wil.weights[node] = 1;
+ pol->wil.il_weight += 1;
+ il_weight += 1;
+ } else {
+ /* Otherwise, keep the existing weight */
+ il_weight += pol->wil.weights[node];
+ }
+ }
+ pol->wil.il_weight = il_weight;
+ /*
+ * It's possible an allocation has been occurring at this point;
+ * force it to go to the next node, since we just changed weights.
+ */
+ pol->wil.cur_weight = 0;
+}
+
static int mpol_new_nodemask(struct mempolicy *pol, const nodemask_t *nodes)
{
if (nodes_empty(*nodes))
return -EINVAL;
pol->nodes = *nodes;
+
+ if (pol->mode == MPOL_WEIGHTED_INTERLEAVE)
+ mpol_recalculate_weights(pol);
+
return 0;
}
@@ -334,6 +366,10 @@ static void mpol_rebind_nodemask(struct mempolicy *pol, const nodemask_t *nodes)
tmp = *nodes;
pol->nodes = tmp;
+
+ /* After a change to the nodemask, weights must be recalculated */
+ if (pol->mode == MPOL_WEIGHTED_INTERLEAVE)
+ mpol_recalculate_weights(pol);
}
static void mpol_rebind_preferred(struct mempolicy *pol,
@@ -403,6 +439,10 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
.create = mpol_new_nodemask,
.rebind = mpol_rebind_nodemask,
},
+ [MPOL_WEIGHTED_INTERLEAVE] = {
+ .create = mpol_new_nodemask,
+ .rebind = mpol_rebind_nodemask,
+ },
[MPOL_PREFERRED] = {
.create = mpol_new_preferred,
.rebind = mpol_rebind_preferred,
@@ -878,8 +918,10 @@ static long replace_mempolicy(struct mempolicy *new, nodemask_t *nodes)
old = current->mempolicy;
current->mempolicy = new;
if (new && (new->mode == MPOL_INTERLEAVE ||
- new->mode == MPOL_PREFERRED_INTERLEAVE))
+ new->mode == MPOL_PREFERRED_INTERLEAVE ||
+ new->mode == MPOL_WEIGHTED_INTERLEAVE))
current->il_prev = MAX_NUMNODES-1;
+
out:
task_unlock(current);
mpol_put(old);
@@ -921,6 +963,7 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
case MPOL_BIND:
case MPOL_INTERLEAVE:
case MPOL_PREFERRED_INTERLEAVE:
+ case MPOL_WEIGHTED_INTERLEAVE:
case MPOL_PREFERRED:
case MPOL_PREFERRED_MANY:
*nodes = p->nodes;
@@ -1632,6 +1675,56 @@ static long do_set_preferred_interleave(struct mempolicy_args *args,
return 0;
}
+static long do_set_weighted_interleave(struct mempolicy_args *args,
+ struct mempolicy *new,
+ nodemask_t *nodes)
+{
+ unsigned char weight;
+ unsigned char *weights;
+ int node;
+ int ret = 0;
+
+ /* Weighted interleave cannot be done with no nodemask */
+ if (nodes_empty(*nodes))
+ return -EINVAL;
+
+ /* Weighted interleave requires a set of weights */
+ if (!args->wil.weights)
+ return -EINVAL;
+
+ weights = kmalloc(MAX_NUMNODES, GFP_KERNEL);
+ if (!weights)
+ return -ENOMEM;
+
+ ret = copy_from_user(weights, args->wil.weights, MAX_NUMNODES);
+ if (ret) {
+ ret = -EFAULT;
+ goto weights_out;
+ }
+
+ new->wil.cur_weight = 0;
+ new->wil.il_weight = 0;
+ memset(new->wil.weights, 0, sizeof(new->wil.weights));
+
+ /* Weights for set nodes cannot be 0 */
+ node = first_node(*nodes);
+ while (node != MAX_NUMNODES) {
+ weight = weights[node];
+ if (!weight) {
+ ret = -EINVAL;
+ goto weights_out;
+ }
+ /* accumulate the total interleave weight across the set nodes */
+ new->wil.il_weight += weight;
+ new->wil.weights[node] = weight;
+ node = next_node(node, *nodes);
+ }
+
+weights_out:
+ kfree(weights);
+ return ret;
+}
+
static long do_set_mempolicy2(struct mempolicy_args *args)
{
struct mempolicy *new = NULL;
@@ -1656,6 +1749,9 @@ static long do_set_mempolicy2(struct mempolicy_args *args)
case MPOL_PREFERRED_INTERLEAVE:
err = do_set_preferred_interleave(args, new, &nodes);
break;
+ case MPOL_WEIGHTED_INTERLEAVE:
+ err = do_set_weighted_interleave(args, new, &nodes);
+ break;
default:
BUG();
}
@@ -1799,6 +1895,12 @@ static long do_get_mempolicy2(struct mempolicy_args *kargs)
kargs->pil.weight = pol->pil.weight;
rc = 0;
break;
+ case MPOL_WEIGHTED_INTERLEAVE:
+ kargs->wil.next_node = next_node_in(current->il_prev,
+ pol->nodes);
+ rc = copy_to_user(kargs->wil.weights, pol->wil.weights,
+ MAX_NUMNODES) ? -EFAULT : 0;
+ break;
default:
BUG();
}
@@ -2160,6 +2262,27 @@ static unsigned int preferred_interleave_nodes(struct mempolicy *policy)
return next;
}
+static unsigned int weighted_interleave_nodes(struct mempolicy *policy)
+{
+ unsigned int next;
+ unsigned char next_weight;
+ struct task_struct *me = current;
+
+ /* When weight reaches 0, we're on a new node, reset the weight */
+ next = next_node_in(me->il_prev, policy->nodes);
+ if (!policy->wil.cur_weight) {
+ /* If the node is set, at least 1 allocation is required */
+ next_weight = policy->wil.weights[next];
+ policy->wil.cur_weight = next_weight ? next_weight : 1;
+ }
+
+ policy->wil.cur_weight--;
+ if (next < MAX_NUMNODES && !policy->wil.cur_weight)
+ me->il_prev = next;
+
+ return next;
+}
+
/* Do dynamic interleaving for a process */
static unsigned interleave_nodes(struct mempolicy *policy)
{
@@ -2168,6 +2291,8 @@ static unsigned interleave_nodes(struct mempolicy *policy)
if (policy->mode == MPOL_PREFERRED_INTERLEAVE)
return preferred_interleave_nodes(policy);
+ else if (policy->mode == MPOL_WEIGHTED_INTERLEAVE)
+ return weighted_interleave_nodes(policy);
next = next_node_in(me->il_prev, policy->nodes);
if (next < MAX_NUMNODES)
@@ -2197,6 +2322,7 @@ unsigned int mempolicy_slab_node(void)
case MPOL_INTERLEAVE:
case MPOL_PREFERRED_INTERLEAVE:
+ case MPOL_WEIGHTED_INTERLEAVE:
return interleave_nodes(policy);
case MPOL_BIND:
@@ -2273,6 +2399,40 @@ static unsigned int offset_pil_node(struct mempolicy *pol, unsigned long n)
return nid;
}
+static unsigned int offset_wil_node(struct mempolicy *pol, unsigned long n)
+{
+ nodemask_t nodemask = pol->nodes;
+ unsigned int target, nnodes;
+ unsigned char weight;
+ int nid;
+
+ /*
+ * The barrier will stabilize the nodemask in a register or on
+ * the stack so that it will stop changing under the code.
+ *
+ * Between first_node() and next_node(), pol->nodes could be changed
+ * by other threads. So we put pol->nodes in a local stack.
+ */
+ barrier();
+
+ nnodes = nodes_weight(nodemask);
+ if (!nnodes)
+ return numa_node_id();
+ target = (unsigned int)n % pol->wil.il_weight;
+ nid = first_node(nodemask);
+ while (target) {
+ weight = pol->wil.weights[nid];
+ /* If weights are being recalculated, revert to interleave */
+ if (!weight)
+ weight = 1;
+ if (target < weight)
+ break;
+ target -= weight;
+ nid = next_node_in(nid, nodemask);
+ }
+ return nid;
+}
+
/*
* Do static interleaving for a VMA with known offset @n. Returns the n'th
* node in pol->nodes (starting from n=0), wrapping around if n exceeds the
@@ -2287,6 +2447,8 @@ static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
if (pol->mode == MPOL_PREFERRED_INTERLEAVE)
return offset_pil_node(pol, n);
+ else if (pol->mode == MPOL_WEIGHTED_INTERLEAVE)
+ return offset_wil_node(pol, n);
nodemask = pol->nodes;
@@ -2358,7 +2520,8 @@ int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
mode = (*mpol)->mode;
if (unlikely(mode == MPOL_INTERLEAVE) ||
- unlikely(mode == MPOL_PREFERRED_INTERLEAVE)) {
+ unlikely(mode == MPOL_PREFERRED_INTERLEAVE) ||
+ unlikely(mode == MPOL_WEIGHTED_INTERLEAVE)) {
nid = interleave_nid(*mpol, vma, addr,
huge_page_shift(hstate_vma(vma)));
} else {
@@ -2400,6 +2563,7 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
case MPOL_BIND:
case MPOL_INTERLEAVE:
case MPOL_PREFERRED_INTERLEAVE:
+ case MPOL_WEIGHTED_INTERLEAVE:
*mask = mempolicy->nodes;
break;
@@ -2511,7 +2675,8 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
pol = get_vma_policy(vma, addr);
if (pol->mode == MPOL_INTERLEAVE ||
- pol->mode == MPOL_PREFERRED_INTERLEAVE) {
+ pol->mode == MPOL_PREFERRED_INTERLEAVE ||
+ pol->mode == MPOL_WEIGHTED_INTERLEAVE) {
struct page *page;
unsigned nid;
@@ -2614,7 +2779,8 @@ struct page *alloc_pages(gfp_t gfp, unsigned order)
* nor system default_policy
*/
if (pol->mode == MPOL_INTERLEAVE ||
- pol->mode == MPOL_PREFERRED_INTERLEAVE)
+ pol->mode == MPOL_PREFERRED_INTERLEAVE ||
+ pol->mode == MPOL_WEIGHTED_INTERLEAVE)
page = alloc_page_interleave(gfp, order, interleave_nodes(pol));
else if (pol->mode == MPOL_PREFERRED_MANY)
page = alloc_pages_preferred_many(gfp, order,
@@ -2737,6 +2903,84 @@ static unsigned long alloc_pages_bulk_array_pil(gfp_t gfp,
return allocated;
}
+static unsigned long alloc_pages_bulk_array_weighted_interleave(gfp_t gfp,
+ struct mempolicy *pol, unsigned long nr_pages,
+ struct page **page_array)
+{
+ struct task_struct *me = current;
+ unsigned long total_allocated = 0;
+ unsigned long nr_allocated;
+ unsigned long rounds;
+ unsigned long node_pages, delta;
+ unsigned char weight;
+ int nnodes, node, prev_node;
+ int i;
+
+ nnodes = nodes_weight(pol->nodes);
+ prev_node = me->il_prev;
+ /* Continue allocating from most recent node and adjust the nr_pages */
+ if (pol->wil.cur_weight) {
+ node = next_node_in(me->il_prev, pol->nodes);
+ node_pages = min_t(unsigned long, nr_pages, pol->wil.cur_weight);
+ nr_allocated = __alloc_pages_bulk(gfp, node, NULL, node_pages,
+ NULL, page_array);
+ page_array += nr_allocated;
+ total_allocated += nr_allocated;
+ /* if that's all the pages, no need to interleave */
+ if (nr_pages <= pol->wil.cur_weight) {
+ pol->wil.cur_weight -= nr_pages;
+ return total_allocated;
+ }
+ /* Otherwise we adjust nr_pages down, and continue from there */
+ nr_pages -= pol->wil.cur_weight;
+ pol->wil.cur_weight = 0;
+ prev_node = node;
+ }
+
+ /* Now we can continue allocating from this point */
+ rounds = nr_pages / pol->wil.il_weight;
+ delta = nr_pages % pol->wil.il_weight;
+ for (i = 0; i < nnodes; i++) {
+ node = next_node_in(prev_node, pol->nodes);
+ weight = pol->wil.weights[node];
+ node_pages = weight * rounds;
+ if (delta) {
+ if (delta > weight) {
+ node_pages += weight;
+ delta -= weight;
+ } else {
+ node_pages += delta;
+ delta = 0;
+ }
+ }
+ /* We may not make it all the way around */
+ if (!node_pages)
+ break;
+ nr_allocated = __alloc_pages_bulk(gfp, node, NULL, node_pages,
+ NULL, page_array);
+ page_array += nr_allocated;
+ total_allocated += nr_allocated;
+ prev_node = node;
+ }
+
+ /*
+ * Finally, we need to update me->il_prev and pol->wil.cur_weight
+ * if there were overflow pages, but not equivalent to the node
+ * weight, set the cur_weight to node_weight - delta and the
+ * me->il_prev to the previous node. Otherwise if it was perfect
+ * we can simply set il_prev to node and cur_weight to 0
+ */
+ delta %= weight;
+ if (node_pages) {
+ me->il_prev = prev_node;
+ pol->wil.cur_weight = pol->wil.weights[node] - node_pages;
+ } else {
+ me->il_prev = node;
+ pol->wil.cur_weight = 0;
+ }
+
+ return total_allocated;
+}
+
static unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid,
struct mempolicy *pol, unsigned long nr_pages,
struct page **page_array)
@@ -2779,6 +3023,11 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
return alloc_pages_bulk_array_pil(gfp, pol, nr_pages,
page_array);
+ if (pol->mode == MPOL_WEIGHTED_INTERLEAVE)
+ return alloc_pages_bulk_array_weighted_interleave(gfp, pol,
+ nr_pages,
+ page_array);
+
if (pol->mode == MPOL_PREFERRED_MANY)
return alloc_pages_bulk_array_preferred_many(gfp,
numa_node_id(), pol, nr_pages, page_array);
@@ -2852,6 +3101,7 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
case MPOL_BIND:
case MPOL_INTERLEAVE:
case MPOL_PREFERRED_INTERLEAVE:
+ case MPOL_WEIGHTED_INTERLEAVE:
case MPOL_PREFERRED:
case MPOL_PREFERRED_MANY:
return !!nodes_equal(a->nodes, b->nodes);
@@ -2989,6 +3239,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
switch (pol->mode) {
case MPOL_INTERLEAVE:
case MPOL_PREFERRED_INTERLEAVE:
+ case MPOL_WEIGHTED_INTERLEAVE:
pgoff = vma->vm_pgoff;
pgoff += (addr - vma->vm_start) >> PAGE_SHIFT;
polnid = offset_il_node(pol, pgoff);
@@ -3377,6 +3628,7 @@ static const char * const policy_modes[] =
[MPOL_BIND] = "bind",
[MPOL_INTERLEAVE] = "interleave",
[MPOL_PREFERRED_INTERLEAVE] = "preferred interleave",
+ [MPOL_WEIGHTED_INTERLEAVE] = "weighted interleave",
[MPOL_LOCAL] = "local",
[MPOL_PREFERRED_MANY] = "prefer (many)",
};
@@ -3548,6 +3800,7 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
case MPOL_BIND:
case MPOL_INTERLEAVE:
case MPOL_PREFERRED_INTERLEAVE:
+ case MPOL_WEIGHTED_INTERLEAVE:
nodes = pol->nodes;
break;
default:
--
2.39.1