From: Oscar Salvador <osalvador@suse.de>
To: Frank van der Linden <fvdl@google.com>
Cc: akpm@linux-foundation.org, muchun.song@linux.dev,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
david@redhat.com, luizcap@redhat.com
Subject: Re: [PATCH] mm/hugetlb: use separate nodemask for bootmem allocations
Date: Wed, 9 Apr 2025 09:47:45 +0200
Message-ID: <Z_YmIRpBIBtIAdfu@localhost.localdomain>
In-Reply-To: <20250402205613.3086864-1-fvdl@google.com>
On Wed, Apr 02, 2025 at 08:56:13PM +0000, Frank van der Linden wrote:
> Hugetlb boot allocation has used online nodes for allocation since
> commit de55996d7188 ("mm/hugetlb: use online nodes for bootmem
> allocation"). This was needed to be able to do the allocations
> earlier in boot, before N_MEMORY was set.
>
> This might lead to a different distribution of gigantic hugepages
> across NUMA nodes if there are memoryless nodes in the system.
>
> What happens is that the memoryless nodes are tried, but then
> the memblock allocation fails and falls back, which usually means
> that the node that has the highest physical address available
> will be used (top-down allocation). While this will end up
> getting the same number of hugetlb pages, they might not be
> distributed the same way. The fallback for each memoryless
> node might not end up coming from the same node as the
> successful round-robin allocation from N_MEMORY nodes.
>
> While administrators that rely on having a specific number of
> hugepages per node should use the hugepages=N:X syntax, it's
> better not to change the old behavior for the plain hugepages=N
> case.
>
> To do this, construct a nodemask for hugetlb bootmem purposes
> only, containing nodes that have memory. Then use that
> for round-robin bootmem allocations.
>
> This saves some cycles, and the added advantage here is that
> hugetlb_cma can use it too, avoiding the older issue of
> pointless attempts to create a CMA area for memoryless nodes
> (which will also cause the per-node CMA area size to be too
> small).
>
> Fixes: de55996d7188 ("mm/hugetlb: use online nodes for bootmem allocation")
> Signed-off-by: Frank van der Linden <fvdl@google.com>
This looks good to me
Reviewed-by: Oscar Salvador <osalvador@suse.de>
The only thing I was pondering was whether there would be a way
to keep hugetlb_bootmem_set_nodes() confined to hugetlb code,
without having to export it to hugetlb_cma.
But then again, you would have to create a function that calls
hugetlb_bootmem_set_nodes() earlier, and that would just be churn
for churn's sake.
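For readers following the thread, the idea boils down to building a
nodemask early in boot from what memblock already knows. Below is a
minimal sketch of that approach, not the patch itself; the variable
name hugetlb_bootmem_nodes and the exact iteration are assumptions
for illustration only:

	/*
	 * Sketch only (assumed names, not the actual patch): record which
	 * online nodes memblock has memory registered for, so the early
	 * round-robin bootmem allocations can use this mask instead of
	 * node_states[N_ONLINE].
	 */
	static nodemask_t hugetlb_bootmem_nodes __initdata;

	static void __init hugetlb_bootmem_set_nodes(void)
	{
		int i, nid;
		unsigned long start_pfn, end_pfn;

		nodes_clear(hugetlb_bootmem_nodes);
		for_each_node_state(nid, N_ONLINE) {
			for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
				/* Any memblock range on this node means it has memory. */
				node_set(nid, hugetlb_bootmem_nodes);
				break;
			}
		}
	}

Allocations would then round-robin over hugetlb_bootmem_nodes, which
also sidesteps the memoryless-node problem for hugetlb_cma.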
--
Oscar Salvador
SUSE Labs