* [Patch v3] mm/page_alloc: add same penalty is enough to get round-robin order
@ 2022-04-12 0:13 Wei Yang
From: Wei Yang @ 2022-04-12 0:13 UTC (permalink / raw)
To: akpm
Cc: linux-mm, Wei Yang, Vlastimil Babka, David Hildenbrand, Oscar Salvador
To achieve round-robin order among nodes in the same distance group, a
penalty is added to the first node picked in each round.

To get this round-robin order, the penalty does not need to decrease
between rounds, because:

* find_next_best_node() always iterates nodes in the same order
* distance matters more than penalty in find_next_best_node()
* among nodes with the same distance, the first one is picked

So it is fine to add the same penalty each time the first node of a
distance group is picked. And since the penalty added is now just a
constant 1, it is no longer necessary to multiply by MAX_NODE_LOAD to
preserve the preference.
[vbabka@suse.cz: suggests to remove MAX_NODE_LOAD]
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
CC: Vlastimil Babka <vbabka@suse.cz>
CC: David Hildenbrand <david@redhat.com>
CC: Oscar Salvador <osalvador@suse.de>
---
v3: merge into a single patch
v2: adjust constant penalty to 1
---
mm/page_alloc.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5d71b8dcb5f4..0334c06a0a47 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6170,7 +6170,6 @@ int numa_zonelist_order_handler(struct ctl_table *table, int write,
 }
 
 
-#define MAX_NODE_LOAD (nr_online_nodes)
 static int node_load[MAX_NUMNODES];
 
 /**
@@ -6217,7 +6216,7 @@ int find_next_best_node(int node, nodemask_t *used_node_mask)
 			val += PENALTY_FOR_NODE_WITH_CPUS;
 
 		/* Slight preference for less loaded node */
-		val *= (MAX_NODE_LOAD*MAX_NUMNODES);
+		val *= MAX_NUMNODES;
 		val += node_load[n];
 
 		if (val < min_val) {
@@ -6283,13 +6282,12 @@ static void build_thisnode_zonelists(pg_data_t *pgdat)
 static void build_zonelists(pg_data_t *pgdat)
 {
 	static int node_order[MAX_NUMNODES];
-	int node, load, nr_nodes = 0;
+	int node, nr_nodes = 0;
 	nodemask_t used_mask = NODE_MASK_NONE;
 	int local_node, prev_node;
 
 	/* NUMA-aware ordering of nodes */
 	local_node = pgdat->node_id;
-	load = nr_online_nodes;
 	prev_node = local_node;
 
 	memset(node_order, 0, sizeof(node_order));
@@ -6301,11 +6299,10 @@ static void build_zonelists(pg_data_t *pgdat)
 		 */
 		if (node_distance(local_node, node) !=
 		    node_distance(local_node, prev_node))
-			node_load[node] += load;
+			node_load[node] += 1;
 
 		node_order[nr_nodes++] = node;
 		prev_node = node;
-		load--;
 	}
 
 	build_zonelists_in_node_order(pgdat, node_order, nr_nodes);
--
2.33.1
* Re: [Patch v3] mm/page_alloc: add same penalty is enough to get round-robin order
From: Vlastimil Babka @ 2022-04-12 7:33 UTC (permalink / raw)
To: Wei Yang, akpm; +Cc: linux-mm, David Hildenbrand, Oscar Salvador
On 4/12/22 02:13, Wei Yang wrote:
> To achieve round-robin order among nodes in the same distance group, a
> penalty is added to the first node picked in each round.
>
> To get this round-robin order, the penalty does not need to decrease
> between rounds, because:
>
> * find_next_best_node() always iterates nodes in the same order
> * distance matters more than penalty in find_next_best_node()
> * among nodes with the same distance, the first one is picked
>
> So it is fine to add the same penalty each time the first node of a
> distance group is picked. And since the penalty added is now just a
> constant 1, it is no longer necessary to multiply by MAX_NODE_LOAD to
> preserve the preference.
>
> [vbabka@suse.cz: suggests to remove MAX_NODE_LOAD]
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> CC: Vlastimil Babka <vbabka@suse.cz>
> CC: David Hildenbrand <david@redhat.com>
> CC: Oscar Salvador <osalvador@suse.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
> v3: merge into a single patch
> v2: adjust constant penalty to 1
> ---
> mm/page_alloc.c | 9 +++------
> 1 file changed, 3 insertions(+), 6 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 5d71b8dcb5f4..0334c06a0a47 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6170,7 +6170,6 @@ int numa_zonelist_order_handler(struct ctl_table *table, int write,
> }
>
>
> -#define MAX_NODE_LOAD (nr_online_nodes)
> static int node_load[MAX_NUMNODES];
>
> /**
> @@ -6217,7 +6216,7 @@ int find_next_best_node(int node, nodemask_t *used_node_mask)
> val += PENALTY_FOR_NODE_WITH_CPUS;
>
> /* Slight preference for less loaded node */
> - val *= (MAX_NODE_LOAD*MAX_NUMNODES);
> + val *= MAX_NUMNODES;
> val += node_load[n];
>
> if (val < min_val) {
> @@ -6283,13 +6282,12 @@ static void build_thisnode_zonelists(pg_data_t *pgdat)
> static void build_zonelists(pg_data_t *pgdat)
> {
> static int node_order[MAX_NUMNODES];
> - int node, load, nr_nodes = 0;
> + int node, nr_nodes = 0;
> nodemask_t used_mask = NODE_MASK_NONE;
> int local_node, prev_node;
>
> /* NUMA-aware ordering of nodes */
> local_node = pgdat->node_id;
> - load = nr_online_nodes;
> prev_node = local_node;
>
> memset(node_order, 0, sizeof(node_order));
> @@ -6301,11 +6299,10 @@ static void build_zonelists(pg_data_t *pgdat)
> */
> if (node_distance(local_node, node) !=
> node_distance(local_node, prev_node))
> - node_load[node] += load;
> + node_load[node] += 1;
>
> node_order[nr_nodes++] = node;
> prev_node = node;
> - load--;
> }
>
> build_zonelists_in_node_order(pgdat, node_order, nr_nodes);
* Re: [Patch v3] mm/page_alloc: add same penalty is enough to get round-robin order
From: David Hildenbrand @ 2022-04-12 7:46 UTC (permalink / raw)
To: Wei Yang, akpm; +Cc: linux-mm, Vlastimil Babka, Oscar Salvador
On 12.04.22 02:13, Wei Yang wrote:
> To achieve round-robin order among nodes in the same distance group, a
> penalty is added to the first node picked in each round.
>
> To get this round-robin order, the penalty does not need to decrease
> between rounds, because:
>
> * find_next_best_node() always iterates nodes in the same order
> * distance matters more than penalty in find_next_best_node()
> * among nodes with the same distance, the first one is picked
>
> So it is fine to add the same penalty each time the first node of a
> distance group is picked. And since the penalty added is now just a
> constant 1, it is no longer necessary to multiply by MAX_NODE_LOAD to
> preserve the preference.
>
> [vbabka@suse.cz: suggests to remove MAX_NODE_LOAD]
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> CC: Vlastimil Babka <vbabka@suse.cz>
> CC: David Hildenbrand <david@redhat.com>
> CC: Oscar Salvador <osalvador@suse.de>
> ---
> v3: merge into a single patch
> v2: adjust constant penalty to 1
> ---
Acked-by: David Hildenbrand <david@redhat.com>
--
Thanks,
David / dhildenb
* Re: [Patch v3] mm/page_alloc: add same penalty is enough to get round-robin order
From: Oscar Salvador @ 2022-04-12 8:11 UTC (permalink / raw)
To: Wei Yang; +Cc: akpm, linux-mm, Vlastimil Babka, David Hildenbrand
On Tue, Apr 12, 2022 at 12:13:19AM +0000, Wei Yang wrote:
> To achieve round-robin order among nodes in the same distance group, a
> penalty is added to the first node picked in each round.
>
> To get this round-robin order, the penalty does not need to decrease
> between rounds, because:
>
> * find_next_best_node() always iterates nodes in the same order
> * distance matters more than penalty in find_next_best_node()
> * among nodes with the same distance, the first one is picked
>
> So it is fine to add the same penalty each time the first node of a
> distance group is picked. And since the penalty added is now just a
> constant 1, it is no longer necessary to multiply by MAX_NODE_LOAD to
> preserve the preference.
>
> [vbabka@suse.cz: suggests to remove MAX_NODE_LOAD]
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> CC: Vlastimil Babka <vbabka@suse.cz>
> CC: David Hildenbrand <david@redhat.com>
> CC: Oscar Salvador <osalvador@suse.de>
Acked-by: Oscar Salvador <osalvador@suse.de>
> ---
> v3: merge into a single patch
> v2: adjust constant penalty to 1
> ---
> mm/page_alloc.c | 9 +++------
> 1 file changed, 3 insertions(+), 6 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 5d71b8dcb5f4..0334c06a0a47 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6170,7 +6170,6 @@ int numa_zonelist_order_handler(struct ctl_table *table, int write,
> }
>
>
> -#define MAX_NODE_LOAD (nr_online_nodes)
> static int node_load[MAX_NUMNODES];
>
> /**
> @@ -6217,7 +6216,7 @@ int find_next_best_node(int node, nodemask_t *used_node_mask)
> val += PENALTY_FOR_NODE_WITH_CPUS;
>
> /* Slight preference for less loaded node */
> - val *= (MAX_NODE_LOAD*MAX_NUMNODES);
> + val *= MAX_NUMNODES;
> val += node_load[n];
>
> if (val < min_val) {
> @@ -6283,13 +6282,12 @@ static void build_thisnode_zonelists(pg_data_t *pgdat)
> static void build_zonelists(pg_data_t *pgdat)
> {
> static int node_order[MAX_NUMNODES];
> - int node, load, nr_nodes = 0;
> + int node, nr_nodes = 0;
> nodemask_t used_mask = NODE_MASK_NONE;
> int local_node, prev_node;
>
> /* NUMA-aware ordering of nodes */
> local_node = pgdat->node_id;
> - load = nr_online_nodes;
> prev_node = local_node;
>
> memset(node_order, 0, sizeof(node_order));
> @@ -6301,11 +6299,10 @@ static void build_zonelists(pg_data_t *pgdat)
> */
> if (node_distance(local_node, node) !=
> node_distance(local_node, prev_node))
> - node_load[node] += load;
> + node_load[node] += 1;
>
> node_order[nr_nodes++] = node;
> prev_node = node;
> - load--;
> }
>
> build_zonelists_in_node_order(pgdat, node_order, nr_nodes);
> --
> 2.33.1
>
>
--
Oscar Salvador
SUSE Labs