From: Zi Yan <ziy@nvidia.com>
To: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@kernel.org>,
Oscar Salvador <osalvador@suse.de>,
Muchun Song <muchun.song@linux.dev>,
linux-mm@kvack.org, sidhartha.kumar@oracle.com,
jane.chu@oracle.com, Vlastimil Babka <vbabka@suse.cz>,
Brendan Jackman <jackmanb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Matthew Wilcox <willy@infradead.org>
Subject: Re: [PATCH 2/5] mm: page_alloc: optimize pfn_range_valid_contig()
Date: Mon, 12 Jan 2026 12:02:16 -0500 [thread overview]
Message-ID: <926A149E-FE2F-4F88-92D6-FA607398605F@nvidia.com> (raw)
In-Reply-To: <20260112150954.1802953-3-wangkefeng.wang@huawei.com>

On 12 Jan 2026, at 10:09, Kefeng Wang wrote:
> alloc_contig_pages() spends a significant amount of time within
> pfn_range_valid_contig().
>
> - set_max_huge_pages
> - 99.98% alloc_pool_huge_folio
> only_alloc_fresh_hugetlb_folio.isra.0
> - alloc_contig_frozen_pages_noprof
> - 87.00% pfn_range_valid_contig
> pfn_to_online_page
> - 12.91% alloc_contig_frozen_range_noprof
> 4.51% replace_free_hugepage_folios
> - 4.02% prep_new_page
> prep_compound_page
> - 2.98% undo_isolate_page_range
> - 2.79% unset_migratetype_isolate
> - 2.75% __move_freepages_block_isolate
> 2.71% __move_freepages_block
> - 0.98% start_isolate_page_range
> 0.66% set_migratetype_isolate
>
> To optimize this process, use the new helper has_unmovable_pages()
s/has_unmovable_pages/page_is_unmovable/
> to avoid unnecessary iterations over compound pages, such as
> THP, and high-order buddy pages, which significantly improves the
s/THP/THP not on LRU/
> efficiency of contiguous memory allocation.
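Just to confirm my reading of the new loop: I am assuming the page_is_unmovable()
helper from patch 1/5 reports how far the caller may advance via its last
argument, roughly along these lines. This is only a sketch of my understanding
based on the call site below -- the _sketch name is mine and I dropped the
zone/isolation-mode arguments -- not the actual helper:

static bool page_is_unmovable_sketch(struct page *page, unsigned long *step)
{
	if (PageBuddy(page)) {
		/*
		 * A free buddy block cannot block the allocation; skip it
		 * in one step (racy order read, so the real code would
		 * need to sanity-check it).
		 */
		*step = 1UL << buddy_order_unsafe(page);
		return false;
	}

	if (PageCompound(page)) {
		struct folio *folio = page_folio(page);

		/*
		 * Sketch: step over the rest of the folio; the real helper
		 * presumably also catches unmovable compound pages (e.g.
		 * THP not on the LRU) and returns true for those instead.
		 */
		*step = folio_nr_pages(folio) - folio_page_idx(folio, page);
		return false;
	}

	if (PageReserved(page))
		return true;

	*step = 1;
	return false;
}

If that is roughly what the helper does, the pfn walk below visits each buddy
block and each compound page once instead of once per base page, which matches
the profile above.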
>
> A simple test on a machine with 114G of free memory, allocating 120 1G
> HugeTLB folios (104 successfully allocated):
>
> time echo 120 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
>
> Before: 0m3.605s
> After: 0m0.602s
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
> mm/page_alloc.c | 25 ++++++++-----------------
> 1 file changed, 8 insertions(+), 17 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d8d5379c44dc..813c5f57883f 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7157,18 +7157,20 @@ static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
> unsigned long nr_pages, bool skip_hugetlb,
> bool *skipped_hugetlb)
> {
> - unsigned long i, end_pfn = start_pfn + nr_pages;
> + unsigned long end_pfn = start_pfn + nr_pages;
> struct page *page;
>
> - for (i = start_pfn; i < end_pfn; i++) {
> - page = pfn_to_online_page(i);
> + while (start_pfn < end_pfn) {
> + unsigned long step = 1;
> +
> + page = pfn_to_online_page(start_pfn);
> if (!page)
> return false;
>
> if (page_zone(page) != z)
> return false;
>
> - if (PageReserved(page))
> + if (page_is_unmovable(z, page, PB_ISOLATE_MODE_OTHER, &step))
> return false;
>
> /*
> @@ -7183,9 +7185,6 @@ static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
> if (PageHuge(page)) {
> unsigned int order;
>
> - if (!IS_ENABLED(CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION))
> - return false;
> -
> if (skip_hugetlb) {
> *skipped_hugetlb = true;
> return false;
> @@ -7196,17 +7195,9 @@ static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
> if ((order >= MAX_FOLIO_ORDER) ||
> (nr_pages <= (1 << order)))
> return false;
How does page_is_unmovable() interact with the code inside "if (PageHuge(page))"?
page_is_unmovable() only identifies 1GB hugetlb as unmovable, so does skip_hugetlb
still work?
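Concretely, extending the sketch I posted above, this is the hugetlb branch I am
imagining inside the helper (checked before the generic compound case) -- a guess
on my side, modelled on the old has_unmovable_pages() behaviour, and the
"gigantic means unmovable" check is exactly the part I want to confirm:

	if (PageHuge(page)) {
		struct folio *folio = page_folio(page);

		/*
		 * Guess: unmigratable hugetlb is always unmovable, which
		 * would cover the CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
		 * check removed above.
		 */
		if (!hugepage_migration_supported(folio_hstate(folio)))
			return true;

		/*
		 * Guess: only gigantic folios are reported unmovable here,
		 * so smaller hugetlb still reaches the skip_hugetlb and
		 * order checks in pfn_range_valid_contig().
		 */
		if (hstate_is_gigantic(folio_hstate(folio)))
			return true;

		/* Movable hugetlb: let the caller step over the folio. */
		*step = folio_nr_pages(folio) - folio_page_idx(folio, page);
		return false;
	}

If my guess is wrong, please ignore the sketch; I mainly want to be sure the
skip_hugetlb handling stays reachable where it needs to be.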
> -
> - /*
> - * Reaching this point means we've encounted a huge page
> - * smaller than nr_pages, skip all pfn's for that page.
> - *
> - * We can't get here from a tail-PageHuge, as it implies
> - * we started a scan in the middle of a hugepage larger
> - * than nr_pages - which the prior check filters for.
> - */
> - i += (1 << order) - 1;
> }
> +
> + start_pfn += step;
> }
> return true;
> }
> --
> 2.27.0
Best Regards,
Yan, Zi