From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	David Hildenbrand <david@kernel.org>,
	Oscar Salvador <osalvador@suse.de>,
	Muchun Song <muchun.song@linux.dev>, <linux-mm@kvack.org>
Cc: <sidhartha.kumar@oracle.com>, <jane.chu@oracle.com>,
	Zi Yan <ziy@nvidia.com>, Vlastimil Babka <vbabka@suse.cz>,
	Brendan Jackman <jackmanb@google.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Matthew Wilcox <willy@infradead.org>,
	Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH 2/5] mm: page_alloc: optimize pfn_range_valid_contig()
Date: Mon, 12 Jan 2026 23:09:51 +0800	[thread overview]
Message-ID: <20260112150954.1802953-3-wangkefeng.wang@huawei.com> (raw)
In-Reply-To: <20260112150954.1802953-1-wangkefeng.wang@huawei.com>

alloc_contig_pages() spends a significant amount of time in
pfn_range_valid_contig(), as the profile below shows:

- set_max_huge_pages
   - 99.98% alloc_pool_huge_folio
        only_alloc_fresh_hugetlb_folio.isra.0
      - alloc_contig_frozen_pages_noprof
         - 87.00% pfn_range_valid_contig
              pfn_to_online_page
         - 12.91% alloc_contig_frozen_range_noprof
              4.51% replace_free_hugepage_folios
            - 4.02% prep_new_page
                 prep_compound_page
            - 2.98% undo_isolate_page_range
               - 2.79% unset_migratetype_isolate
                  - 2.75% __move_freepages_block_isolate
                       2.71% __move_freepages_block
            - 0.98% start_isolate_page_range
                 0.66% set_migratetype_isolate

To optimize this, use the new helper page_is_unmovable() so the scan
can step over compound pages (such as THP) and high-order buddy pages
instead of checking every pfn, which significantly improves the
efficiency of contiguous memory allocation.
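
page_is_unmovable() itself is introduced by patch 1/5 and is not part
of this diff. Purely as an illustration of the contract this patch
relies on (the name, parameters and checks below are a simplified,
hypothetical sketch, not the real helper), it is expected to flag
pages that block a contiguous allocation and, for free buddy pages and
compound pages, report via *step how many pfns the caller may advance
in one go:

  /*
   * Simplified sketch of the assumed page_is_unmovable() contract; the
   * real helper (patch 1/5) also takes the zone and an isolation mode
   * and performs more checks.  Return true if @page blocks a contiguous
   * allocation; otherwise set *@step to the number of pfns the caller
   * may skip in a single iteration.
   */
  static bool page_is_unmovable_sketch(struct page *page, unsigned long *step)
  {
          /* Reserved memory can never be migrated away. */
          if (PageReserved(page))
                  return true;

          /* A free buddy page is movable; skip the whole buddy block. */
          if (PageBuddy(page)) {
                  *step = 1UL << buddy_order(page);
                  return false;
          }

          /* Compound pages (THP, hugetlb) migrate as a unit; skip the tails. */
          if (PageCompound(page)) {
                  *step = compound_nr(page);
                  return false;
          }

          /* Plain order-0 page: advance by a single pfn. */
          return false;
  }

With a contract along these lines, the loop in pfn_range_valid_contig()
below does roughly one pfn_to_online_page() lookup per buddy block or
folio instead of one per pfn, which is where the speedup comes from.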

A simple test on a machine with 114G of free memory, allocating
120 1G HugeTLB folios (104 allocated successfully):

  time echo 120 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

Before: 0m3.605s
After:  0m0.602s

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/page_alloc.c | 25 ++++++++-----------------
 1 file changed, 8 insertions(+), 17 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d8d5379c44dc..813c5f57883f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7157,18 +7157,20 @@ static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
 				   unsigned long nr_pages, bool skip_hugetlb,
 				   bool *skipped_hugetlb)
 {
-	unsigned long i, end_pfn = start_pfn + nr_pages;
+	unsigned long end_pfn = start_pfn + nr_pages;
 	struct page *page;
 
-	for (i = start_pfn; i < end_pfn; i++) {
-		page = pfn_to_online_page(i);
+	while (start_pfn < end_pfn) {
+		unsigned long step = 1;
+
+		page = pfn_to_online_page(start_pfn);
 		if (!page)
 			return false;
 
 		if (page_zone(page) != z)
 			return false;
 
-		if (PageReserved(page))
+		if (page_is_unmovable(z, page, PB_ISOLATE_MODE_OTHER, &step))
 			return false;
 
 		/*
@@ -7183,9 +7185,6 @@ static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
 		if (PageHuge(page)) {
 			unsigned int order;
 
-			if (!IS_ENABLED(CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION))
-				return false;
-
 			if (skip_hugetlb) {
 				*skipped_hugetlb = true;
 				return false;
@@ -7196,17 +7195,9 @@ static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
 			if ((order >= MAX_FOLIO_ORDER) ||
 			    (nr_pages <= (1 << order)))
 				return false;
-
-			/*
-			 * Reaching this point means we've encounted a huge page
-			 * smaller than nr_pages, skip all pfn's for that page.
-			 *
-			 * We can't get here from a tail-PageHuge, as it implies
-			 * we started a scan in the middle of a hugepage larger
-			 * than nr_pages - which the prior check filters for.
-			 */
-			i += (1 << order) - 1;
 		}
+
+		start_pfn += step;
 	}
 	return true;
 }
-- 
2.27.0



Thread overview: 8+ messages
2026-01-12 15:09 [PATCH mm-new resend 0/5] mm: accelerate gigantic folio allocation Kefeng Wang
2026-01-12 15:09 ` [PATCH 1/5] mm: page_isolation: introduce page_is_unmovable() Kefeng Wang
2026-01-12 16:36   ` Zi Yan
2026-01-12 15:09 ` Kefeng Wang [this message]
2026-01-12 17:02   ` [PATCH 2/5] mm: page_alloc: optimize pfn_range_valid_contig() Zi Yan
2026-01-12 15:09 ` [PATCH 3/5] mm: hugetlb: optimize replace_free_hugepage_folios() Kefeng Wang
2026-01-12 15:09 ` [PATCH 4/5] mm: hugetlb_cma: optimize hugetlb_cma_alloc_frozen_folio() Kefeng Wang
2026-01-12 15:09 ` [PATCH 5/5] mm: hugetlb_cma: mark hugetlb_cma{_only} as __ro_after_init Kefeng Wang
