From: Michal Hocko <mhocko@suse.com>
To: Gregory Price <gourry@gourry.net>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
kernel-team@meta.com, akpm@linux-foundation.org, vbabka@suse.cz,
surenb@google.com, jackmanb@google.com, hannes@cmpxchg.org,
ziy@nvidia.com, richard.weiyang@gmail.com,
David Hildenbrand <david@redhat.com>
Subject: Re: [PATCH v7] page_alloc: allow migration of smaller hugepages during contig_alloc
Date: Tue, 6 Jan 2026 15:48:20 +0100
Message-ID: <aV0gtOs5cuLu57Mk@tiehlicka>
In-Reply-To: <20251221124656.2362540-1-gourry@gourry.net>
On Sun 21-12-25 07:46:56, Gregory Price wrote:
> We presently skip regions with hugepages entirely when trying to do
> contiguous page allocation. This causes otherwise-movable 2MB HugeTLB
> pages to be treated as unmovable and makes 1GB gigantic page
> allocation less reliable on systems using both page sizes.
>
> Commit 4d73ba5fa710 ("mm: page_alloc: skip regions with hugetlbfs pages
> when allocating 1G pages") skipped all HugePage containing regions
> because it can cause significant delays in 1G allocation (as HugeTLB
> migrations may fail for a number of reasons).
>
> Instead, if hugepage migration is enabled, consider regions with
> hugepages smaller than the target contiguous allocation request
> as valid targets for allocation.
>
> We optimize for the existing behavior by searching for non-hugetlb
> regions in a first pass, then retrying the search to include hugetlb
> regions only on failure. This keeps the existing fast path as the
> default case, with a slow-path fallback to increase reliability.
>
> We only fall back to the slow path if a hugetlb region was detected,
> and we do a full re-scan because the zones/blocks may have changed
> during the first pass (and it's not worth further complexity).
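
FWIW, the two-pass flow described above boils down to roughly the
following (untested sketch only, not the actual patch;
__alloc_contig_scan() is a made-up stand-in for the existing
zone/pageblock scan in alloc_contig_pages()):

/*
 * Untested sketch of the two-pass idea -- illustration only.
 * __alloc_contig_scan() is a hypothetical stand-in for the existing
 * zone/pageblock scan.
 */
static struct page *alloc_contig_two_pass(unsigned long nr_pages,
					  gfp_t gfp, int nid,
					  nodemask_t *nodemask)
{
	bool saw_hugetlb = false;
	struct page *page;

	/* Fast pass: today's behavior, skip any range containing hugetlb. */
	page = __alloc_contig_scan(nr_pages, gfp, nid, nodemask,
				   /* skip_hugetlb */ true, &saw_hugetlb);
	if (page || !saw_hugetlb)
		return page;

	/*
	 * Slow pass: rescan from scratch (zones/blocks may have changed
	 * meanwhile) and also accept ranges whose hugetlb pages are
	 * smaller than the request, relying on migration to move them
	 * out of the way.
	 */
	return __alloc_contig_scan(nr_pages, gfp, nid, nodemask,
				   /* skip_hugetlb */ false, &saw_hugetlb);
}
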
>
> isolate_migratepages_block() has similar hugetlb filter logic, and
> the hugetlb code does a migratable check in folio_isolate_hugetlb()
> during isolation. The code servicing the allocation and migration
> already supports this exact use case.
>
> To test, allocate a bunch of 2MB HugeTLB pages (in this case 48GB)
> and then attempt to allocate some 1G HugeTLB pages (in this case 4GB);
> scale these amounts to your machine's memory capacity.
>
> echo 24576 > .../hugepages-2048kB/nr_hugepages
> echo 4 > .../hugepages-1048576kB/nr_hugepages
>
> Prior to this patch, the 1GB page reservation can fail if no contiguous
> 1GB pages remain. After this patch, the kernel will try to move 2MB
> pages and successfully allocate the 1GB pages (assuming overall
> sufficient memory is available). This was also tested while a program
> had the 2MB reservations mapped, and the 1GB reservation still succeeded.
>
> folio_alloc_gigantic() is the primary user of alloc_contig_pages();
> the other users are debug or init-time allocations and are largely
> unaffected:
> - ppc/memtrace is a debugfs interface
> - x86/tdx memory allocation occurs once on module-init
> - kfence/core happens once on module (late) init
> - THP uses it in debug_vm_pgtable_alloc_huge_page at __init time
>
> Suggested-by: David Hildenbrand <david@redhat.com>
> Link: https://lore.kernel.org/linux-mm/6fe3562d-49b2-4975-aa86-e139c535ad00@redhat.com/
> Signed-off-by: Gregory Price <gourry@gourry.net>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Sorry to be quite late with this one. Making this a two-stage process is
a reasonable compromise. Have you considered using
hugepage_movable_supported()?
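Something along these lines is what I have in mind, i.e. letting the
hstate decide whether a hugetlb page is worth trying to migrate at all
(completely untested illustration, not a real patch; the helper name is
made up, and whether it is safe to inspect the page at that point in the
scan would need checking):

/* Untested illustration only; helper name is made up. */
static bool contig_hugetlb_movable(struct page *page)
{
	if (!PageHuge(page))
		return true;
	/*
	 * IIRC hugepage_movable_supported() folds in
	 * hugepage_migration_supported() and rejects gigantic hstates,
	 * which for the usual 1GB target amounts to the same thing as
	 * "hugepage smaller than the request".
	 */
	return hugepage_movable_supported(folio_hstate(page_folio(page)));
}

That would also keep the policy in one place if the migratability rules
ever change.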
Anyway
Acked-by: Michal Hocko <mhocko@suse.com>
Thanks!
--
Michal Hocko
SUSE Labs
Thread overview: 5+ messages
2025-12-21 12:46 Gregory Price
2026-01-06 14:48 ` Michal Hocko [this message]
2026-01-06 18:46 ` David Hildenbrand (Red Hat)
2026-01-06 18:56 ` Zi Yan
2026-01-06 19:20 ` David Hildenbrand (Red Hat)