From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	David Hildenbrand <david@redhat.com>,
	Oscar Salvador <osalvador@suse.de>,
	Muchun Song <muchun.song@linux.dev>, Zi Yan <ziy@nvidia.com>,
	Matthew Wilcox <willy@infradead.org>
Cc: <sidhartha.kumar@oracle.com>, <jane.chu@oracle.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Brendan Jackman <jackmanb@google.com>,
	Johannes Weiner <hannes@cmpxchg.org>, <linux-mm@kvack.org>,
	Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH v2 2/8] mm: hugetlb: optimize replace_free_hugepage_folios()
Date: Thu, 18 Sep 2025 21:19:54 +0800	[thread overview]
Message-ID: <20250918132000.1951232-3-wangkefeng.wang@huawei.com> (raw)
In-Reply-To: <20250918132000.1951232-1-wangkefeng.wang@huawei.com>

There is no need to replace free hugepage folios when there are no free
hugetlb folios at all. We never replace gigantic folios, so use
isolate_or_dissolve_huge_folio() directly. Also skip pfn iterations over
compound pages such as THP and non-compound high-order buddy pages to
save time.

A simple test on a machine with 116G of free memory, allocating 120 * 1G
HugeTLB folios (107 successfully allocated):

  time echo 120 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

Before: 0m0.602s
After:  0m0.429s

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/hugetlb.c | 49 +++++++++++++++++++++++++++++++++++++------------
 1 file changed, 37 insertions(+), 12 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1806685ea326..bc88b659a88b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2890,26 +2890,51 @@ int isolate_or_dissolve_huge_folio(struct folio *folio, struct list_head *list)
  */
 int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn)
 {
-	struct folio *folio;
-	int ret = 0;
+	unsigned long nr = 0;
+	struct page *page;
+	struct hstate *h;
+	LIST_HEAD(list);
+
+	/* Avoid pfn iterations if no free non-gigantic huge pages */
+	for_each_hstate(h) {
+		if (!hstate_is_gigantic(h))
+			nr += h->free_huge_pages;
+	}
 
-	LIST_HEAD(isolate_list);
+	if (!nr)
+		return 0;
 
 	while (start_pfn < end_pfn) {
-		folio = pfn_folio(start_pfn);
+		page = pfn_to_page(start_pfn);
+		nr = 1;
 
-		/* Not to disrupt normal path by vainly holding hugetlb_lock */
-		if (folio_test_hugetlb(folio) && !folio_ref_count(folio)) {
-			ret = alloc_and_dissolve_hugetlb_folio(folio, &isolate_list);
-			if (ret)
-				break;
+		if (PageHuge(page) || PageCompound(page)) {
+			struct folio *folio = page_folio(page);
+
+			nr = 1UL << compound_order(page);
 
-			putback_movable_pages(&isolate_list);
+			if (folio_test_hugetlb(folio) && !folio_ref_count(folio)) {
+				if (isolate_or_dissolve_huge_folio(folio, &list))
+					return -ENOMEM;
+
+				putback_movable_pages(&list);
+			}
+		} else if (PageBuddy(page)) {
+			/*
+			 * Buddy order check without zone lock is unsafe and
+			 * the order may be stale, but the race window is
+			 * small; worst case we skip a free hugetlb folio.
+			 */
+			const unsigned int order = buddy_order_unsafe(page);
+
+			if (order <= MAX_PAGE_ORDER)
+				nr = 1UL << order;
 		}
-		start_pfn++;
+
+		start_pfn += nr;
 	}
 
-	return ret;
+	return 0;
 }
 
 void wait_for_freed_hugetlb_folios(void)
-- 
2.27.0


