From: Luiz Capitulino <luizcap@redhat.com>
To: Frank van der Linden <fvdl@google.com>,
	akpm@linux-foundation.org, muchun.song@linux.dev,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: david@redhat.com, osalvador@suse.de
Subject: Re: [PATCH] mm/hugetlb: use separate nodemask for bootmem allocations
Date: Tue, 15 Apr 2025 21:07:53 -0400	[thread overview]
Message-ID: <a7f5a4f7-1ec6-42dc-a93d-af043a01044f@redhat.com> (raw)
In-Reply-To: <20250402205613.3086864-1-fvdl@google.com>

On 2025-04-02 16:56, Frank van der Linden wrote:
> Hugetlb boot allocation has used online nodes for allocation since
> commit de55996d7188 ("mm/hugetlb: use online nodes for bootmem
> allocation"). This was needed to be able to do the allocations
> earlier in boot, before N_MEMORY was set.

Honest question: I imagine there's a reason why we can't move
x86's hugetlb_cma_reserve() and hugetlb_bootmem_alloc() calls
in setup_arch() to after x86_init.paging.pagetable_init() (which
seems to be where zone_sizes_init() is called). That way we could
go back to using N_MEMORY and avoid this dance.
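
Roughly what I mean, as a sketch only (heavily abbreviated, and
assuming I'm reading the current call order right):

	/* arch/x86/kernel/setup.c, abbreviated sketch */
	void __init setup_arch(char **cmdline_p)
	{
		...
		/* today: these run before N_MEMORY is populated */
		hugetlb_bootmem_alloc();
		hugetlb_cma_reserve(...);
		...
		x86_init.paging.pagetable_init(); /* -> zone_sizes_init() */
		/* move the two hugetlb calls here instead, once
		 * N_MEMORY reflects the nodes that have memory? */
	}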

I'm not familiar enough with vmemmap to tell if that's the reason...

- Luiz

> 
> This might lead to a different distribution of gigantic hugepages
> across NUMA nodes if there are memoryless nodes in the system.
> 
> What happens is that the memoryless nodes are tried, but then
> the memblock allocation fails and falls back, which usually means
> that the node that has the highest physical address available
> will be used (top-down allocation). While this will end up
> getting the same number of hugetlb pages, they might not be
> distributed the same way. The fallback for each memoryless
> node might not end up coming from the same node as the
> successful round-robin allocation from N_MEMORY nodes.
> 
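(Concretizing the above with invented numbers: on a 4-node machine
where node 3 is memoryless and the highest free physical addresses
sit on node 2, hugepages=4 round-robins over N_ONLINE as 0, 1, 2, 3;
the node 3 attempt fails and memblock's top-down fallback lands on
node 2, giving a 1/1/2/0 split instead of the 2/1/1/0 that
round-robin over N_MEMORY would have produced.)
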
> While administrators that rely on having a specific number of
> hugepages per node should use the hugepages=N:X syntax, it's
> better not to change the old behavior for the plain hugepages=N
> case.
> 
> To do this, construct a nodemask for hugetlb bootmem purposes
> only, containing nodes that have memory. Then use that
> for round-robin bootmem allocations.
> 
> This saves some cycles, and the added advantage here is that
> hugetlb_cma can use it too, avoiding the older issue of
> pointless attempts to create a CMA area for memoryless nodes
> (which will also cause the per-node CMA area size to be too
> small).
> 
> Fixes: de55996d7188 ("mm/hugetlb: use online nodes for bootmem allocation")
> Signed-off-by: Frank van der Linden <fvdl@google.com>
> ---
>   include/linux/hugetlb.h |  3 +++
>   mm/hugetlb.c            | 30 ++++++++++++++++++++++++++++--
>   mm/hugetlb_cma.c        | 11 +++++++----
>   3 files changed, 38 insertions(+), 6 deletions(-)
> 
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 8f3ac832ee7f..fc9166f7f679 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -14,6 +14,7 @@
>   #include <linux/pgtable.h>
>   #include <linux/gfp.h>
>   #include <linux/userfaultfd_k.h>
> +#include <linux/nodemask.h>
>   
>   struct ctl_table;
>   struct user_struct;
> @@ -176,6 +177,8 @@ extern struct list_head huge_boot_pages[MAX_NUMNODES];
>   
>   void hugetlb_bootmem_alloc(void);
>   bool hugetlb_bootmem_allocated(void);
> +extern nodemask_t hugetlb_bootmem_nodes;
> +void hugetlb_bootmem_set_nodes(void);
>   
>   /* arch callbacks */
>   
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 6fccfe6d046c..e69f6f31e082 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -58,6 +58,7 @@ int hugetlb_max_hstate __read_mostly;
>   unsigned int default_hstate_idx;
>   struct hstate hstates[HUGE_MAX_HSTATE];
>   
> +__initdata nodemask_t hugetlb_bootmem_nodes;
>   __initdata struct list_head huge_boot_pages[MAX_NUMNODES];
>   static unsigned long hstate_boot_nrinvalid[HUGE_MAX_HSTATE] __initdata;
>   
> @@ -3237,7 +3238,8 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
>   	}
>   
>   	/* allocate from next node when distributing huge pages */
> -	for_each_node_mask_to_alloc(&h->next_nid_to_alloc, nr_nodes, node, &node_states[N_ONLINE]) {
> +	for_each_node_mask_to_alloc(&h->next_nid_to_alloc, nr_nodes, node,
> +				    &hugetlb_bootmem_nodes) {
>   		m = alloc_bootmem(h, node, false);
>   		if (!m)
>   			return 0;
> @@ -3701,6 +3703,15 @@ static void __init hugetlb_init_hstates(void)
>   	struct hstate *h, *h2;
>   
>   	for_each_hstate(h) {
> +		/*
> +		 * Always reset to first_memory_node here, even if
> +		 * next_nid_to_alloc was set before - we can't
> +		 * reference hugetlb_bootmem_nodes after init, and
> +		 * first_memory_node is right for all further allocations.
> +		 */
> +		h->next_nid_to_alloc = first_memory_node;
> +		h->next_nid_to_free = first_memory_node;
> +
>   		/* oversize hugepages were init'ed in early boot */
>   		if (!hstate_is_gigantic(h))
>   			hugetlb_hstate_alloc_pages(h);
> @@ -4990,6 +5001,20 @@ static int __init default_hugepagesz_setup(char *s)
>   }
>   hugetlb_early_param("default_hugepagesz", default_hugepagesz_setup);
>   
> +void __init hugetlb_bootmem_set_nodes(void)
> +{
> +	int i, nid;
> +	unsigned long start_pfn, end_pfn;
> +
> +	if (!nodes_empty(hugetlb_bootmem_nodes))
> +		return;
> +
> +	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
> +		if (end_pfn > start_pfn)
> +			node_set(nid, hugetlb_bootmem_nodes);
> +	}
> +}
> +
>   static bool __hugetlb_bootmem_allocated __initdata;
>   
>   bool __init hugetlb_bootmem_allocated(void)
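
Just to spell out what hugetlb_bootmem_set_nodes() computes, with a
hypothetical layout (PFNs invented): on a 3-node box where node 2 is
memoryless,

	/* ranges reported by for_each_mem_pfn_range():
	 *   nid 0: [0x0,     0x80000)
	 *   nid 1: [0x80000, 0x100000)
	 * node 2 never shows up, so:
	 *   hugetlb_bootmem_nodes = { 0, 1 }
	 * i.e. an early stand-in for N_MEMORY.
	 */
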
> @@ -5005,6 +5030,8 @@ void __init hugetlb_bootmem_alloc(void)
>   	if (__hugetlb_bootmem_allocated)
>   		return;
>   
> +	hugetlb_bootmem_set_nodes();
> +
>   	for (i = 0; i < MAX_NUMNODES; i++)
>   		INIT_LIST_HEAD(&huge_boot_pages[i]);
>   
> @@ -5012,7 +5039,6 @@ void __init hugetlb_bootmem_alloc(void)
>   
>   	for_each_hstate(h) {
>   		h->next_nid_to_alloc = first_online_node;
> -		h->next_nid_to_free = first_online_node;
>   
>   		if (hstate_is_gigantic(h))
>   			hugetlb_hstate_alloc_pages(h);
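
(Note the next_nid_to_free assignment removed here isn't lost: both
iterators are now reset to first_memory_node in
hugetlb_init_hstates(), per the earlier hunk.)
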
> diff --git a/mm/hugetlb_cma.c b/mm/hugetlb_cma.c
> index e0f2d5c3a84c..f58ef4969e7a 100644
> --- a/mm/hugetlb_cma.c
> +++ b/mm/hugetlb_cma.c
> @@ -66,7 +66,7 @@ hugetlb_cma_alloc_bootmem(struct hstate *h, int *nid, bool node_exact)
>   		if (node_exact)
>   			return NULL;
>   
> -		for_each_online_node(node) {
> +		for_each_node_mask(node, hugetlb_bootmem_nodes) {
>   			cma = hugetlb_cma[node];
>   			if (!cma || node == *nid)
>   				continue;
> @@ -153,11 +153,13 @@ void __init hugetlb_cma_reserve(int order)
>   	if (!hugetlb_cma_size)
>   		return;
>   
> +	hugetlb_bootmem_set_nodes();
> +
>   	for (nid = 0; nid < MAX_NUMNODES; nid++) {
>   		if (hugetlb_cma_size_in_node[nid] == 0)
>   			continue;
>   
> -		if (!node_online(nid)) {
> +		if (!node_isset(nid, hugetlb_bootmem_nodes)) {
>   			pr_warn("hugetlb_cma: invalid node %d specified\n", nid);
>   			hugetlb_cma_size -= hugetlb_cma_size_in_node[nid];
>   			hugetlb_cma_size_in_node[nid] = 0;
> @@ -190,13 +192,14 @@ void __init hugetlb_cma_reserve(int order)
>   		 * If 3 GB area is requested on a machine with 4 numa nodes,
>   		 * let's allocate 1 GB on first three nodes and ignore the last one.
>   		 */
> -		per_node = DIV_ROUND_UP(hugetlb_cma_size, nr_online_nodes);
> +		per_node = DIV_ROUND_UP(hugetlb_cma_size,
> +					nodes_weight(hugetlb_bootmem_nodes));
>   		pr_info("hugetlb_cma: reserve %lu MiB, up to %lu MiB per node\n",
>   			hugetlb_cma_size / SZ_1M, per_node / SZ_1M);
>   	}
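
To make the "per-node CMA area size too small" point from the commit
message concrete (numbers invented): with hugetlb_cma=3G on a 4-node
machine where node 3 is memoryless,

	/* before: all online nodes counted, memoryless or not */
	per_node = DIV_ROUND_UP(SZ_1G * 3, 4);	/* 768 MiB each; node 3's
						 * share can't be reserved */

	/* after: only nodes in hugetlb_bootmem_nodes counted */
	per_node = DIV_ROUND_UP(SZ_1G * 3, 3);	/* 1 GiB per memory node */
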
>   
>   	reserved = 0;
> -	for_each_online_node(nid) {
> +	for_each_node_mask(nid, hugetlb_bootmem_nodes) {
>   		int res;
>   		char name[CMA_MAX_NAME];
>   


