linux-mm.kvack.org archive mirror
From: 李喆 <lizhe.67@bytedance.com>
To: <muchun.song@linux.dev>, <osalvador@suse.de>, <david@kernel.org>,
	 <akpm@linux-foundation.org>, <fvdl@google.com>
Cc: <linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>,
	 <lizhe.67@bytedance.com>
Subject: [PATCH 2/8] mm/hugetlb: convert to prep_account_new_hugetlb_folio()
Date: Thu, 25 Dec 2025 16:20:53 +0800
Message-ID: <20251225082059.1632-3-lizhe.67@bytedance.com>
In-Reply-To: <20251225082059.1632-1-lizhe.67@bytedance.com>

From: Li Zhe <lizhe.67@bytedance.com>

After a huge folio is instantiated, it is always initialized through
successive calls to prep_new_hugetlb_folio() and
account_new_hugetlb_folio(). To eliminate the risk that a future change
updates one routine but overlooks the other, consolidate the two
functions into a single entry point, prep_account_new_hugetlb_folio().

Signed-off-by: Li Zhe <lizhe.67@bytedance.com>
---
 mm/hugetlb.c | 29 ++++++++++-------------------
 1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d20614b1c927..63f9369789b5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1874,18 +1874,14 @@ void free_huge_folio(struct folio *folio)
 /*
  * Must be called with the hugetlb lock held
  */
-static void account_new_hugetlb_folio(struct hstate *h, struct folio *folio)
-{
-	lockdep_assert_held(&hugetlb_lock);
-	h->nr_huge_pages++;
-	h->nr_huge_pages_node[folio_nid(folio)]++;
-}
-
-static void prep_new_hugetlb_folio(struct folio *folio)
+static void prep_account_new_hugetlb_folio(struct hstate *h,
+					   struct folio *folio)
 {
 	lockdep_assert_held(&hugetlb_lock);
 	folio_clear_hugetlb_freed(folio);
 	prep_clear_zeroed(folio);
+	h->nr_huge_pages++;
+	h->nr_huge_pages_node[folio_nid(folio)]++;
 }
 
 void init_new_hugetlb_folio(struct folio *folio)
@@ -2012,8 +2008,7 @@ void prep_and_add_allocated_folios(struct hstate *h,
 	/* Add all new pool pages to free lists in one lock cycle */
 	spin_lock_irqsave(&hugetlb_lock, flags);
 	list_for_each_entry_safe(folio, tmp_f, folio_list, lru) {
-		prep_new_hugetlb_folio(folio);
-		account_new_hugetlb_folio(h, folio);
+		prep_account_new_hugetlb_folio(h, folio);
 		enqueue_hugetlb_folio(h, folio);
 	}
 	spin_unlock_irqrestore(&hugetlb_lock, flags);
@@ -2220,13 +2215,12 @@ static struct folio *alloc_surplus_hugetlb_folio(struct hstate *h,
 		return NULL;
 
 	spin_lock_irq(&hugetlb_lock);
-	prep_new_hugetlb_folio(folio);
 	/*
 	 * nr_huge_pages needs to be adjusted within the same lock cycle
 	 * as surplus_pages, otherwise it might confuse
 	 * persistent_huge_pages() momentarily.
 	 */
-	account_new_hugetlb_folio(h, folio);
+	prep_account_new_hugetlb_folio(h, folio);
 
 	/*
 	 * We could have raced with the pool size change.
@@ -2264,8 +2258,7 @@ static struct folio *alloc_migrate_hugetlb_folio(struct hstate *h, gfp_t gfp_mas
 		return NULL;
 
 	spin_lock_irq(&hugetlb_lock);
-	prep_new_hugetlb_folio(folio);
-	account_new_hugetlb_folio(h, folio);
+	prep_account_new_hugetlb_folio(h, folio);
 	spin_unlock_irq(&hugetlb_lock);
 
 	/* fresh huge pages are frozen */
@@ -2831,18 +2824,17 @@ static int alloc_and_dissolve_hugetlb_folio(struct folio *old_folio,
 		/*
 		 * Ok, old_folio is still a genuine free hugepage. Remove it from
 		 * the freelist and decrease the counters. These will be
-		 * incremented again when calling account_new_hugetlb_folio()
+		 * incremented again when calling prep_account_new_hugetlb_folio()
 		 * and enqueue_hugetlb_folio() for new_folio. The counters will
 		 * remain stable since this happens under the lock.
 		 */
 		remove_hugetlb_folio(h, old_folio, false);
 
-		prep_new_hugetlb_folio(new_folio);
 		/*
 		 * Ref count on new_folio is already zero as it was dropped
 		 * earlier.  It can be directly added to the pool free list.
 		 */
-		account_new_hugetlb_folio(h, new_folio);
+		prep_account_new_hugetlb_folio(h, new_folio);
 		enqueue_hugetlb_folio(h, new_folio);
 
 		/*
@@ -3318,8 +3310,7 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h,
 		hugetlb_bootmem_init_migratetype(folio, h);
 		/* Subdivide locks to achieve better parallel performance */
 		spin_lock_irqsave(&hugetlb_lock, flags);
-		prep_new_hugetlb_folio(folio);
-		account_new_hugetlb_folio(h, folio);
+		prep_account_new_hugetlb_folio(h, folio);
 		enqueue_hugetlb_folio(h, folio);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
 	}
-- 
2.20.1


Thread overview: 22+ messages
2025-12-25  8:20 [PATCH 0/8] Introduce a huge-page pre-zeroing mechanism 李喆
2025-12-25  8:20 ` [PATCH 1/8] mm/hugetlb: add pre-zeroed framework 李喆
2025-12-26  9:24   ` Raghavendra K T
2025-12-26  9:48     ` Li Zhe
2025-12-25  8:20 ` 李喆 [this message]
2025-12-25  8:20 ` [PATCH 3/8] mm/hugetlb: move the huge folio to the end of the list during enqueue 李喆
2025-12-25  8:20 ` [PATCH 4/8] mm/hugetlb: introduce per-node sysfs interface "zeroable_hugepages" 李喆
2025-12-26 18:51   ` Frank van der Linden
2025-12-29 12:25     ` Li Zhe
2025-12-29 18:57       ` Frank van der Linden
2025-12-30  2:41         ` Li Zhe
2025-12-25  8:20 ` [PATCH 5/8] mm/hugetlb: simplify function hugetlb_sysfs_add_hstate() 李喆
2025-12-25  8:20 ` [PATCH 6/8] mm/hugetlb: relocate the per-hstate struct kobject pointer 李喆
2025-12-25  8:20 ` [PATCH 7/8] mm/hugetlb: add epoll support for interface "zeroable_hugepages" 李喆
2025-12-25  8:20 ` [PATCH 8/8] mm/hugetlb: limit event generation frequency of function do_zero_free_notify() 李喆
2025-12-26 18:32 ` [PATCH 0/8] Introduce a huge-page pre-zeroing mechanism Frank van der Linden
2025-12-26 21:42   ` Frank van der Linden
2025-12-29 12:28     ` Li Zhe
2025-12-27  7:21 ` Mateusz Guzik
2025-12-29 12:31   ` Li Zhe
2025-12-28 21:44 ` Andrew Morton
2025-12-29 12:34   ` Li Zhe
