* [PATCH v3 0/5] mm: hugetlb: cleanup hugetlb folio allocation
@ 2025-09-10 13:39 Kefeng Wang
2025-09-10 13:39 ` [PATCH v3 1/5] mm: hugetlb: convert to use more alloc_fresh_hugetlb_folio() Kefeng Wang
` (4 more replies)
0 siblings, 5 replies; 6+ messages in thread
From: Kefeng Wang @ 2025-09-10 13:39 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Oscar Salvador, Muchun Song
Cc: sidhartha.kumar, jane.chu, Zi Yan, Vlastimil Babka,
Brendan Jackman, Johannes Weiner, linux-mm, Kefeng Wang
Some cleanup for hugetlb folio allocation.
v3:
- As Zi/Matthew pointed out, it's better not to set the page refcount
  for either compound or non-compound allocations by adding
  alloc_contig_range_frozen_noprof(); that needs more changes, so only
  the cleanup part is sent in this version so it can go in first
- Add RB/ACK and address comments (per Zi/Oscar)
v2:
- Add RB and address some comments (per Vishal / Jane)
- Naming is hard, so don't add "hvo" to alloc_fresh_hugetlb_folio()
  and only drop the __prep prefix for the account-new-hugetlb-folio helper
- Add ACR_FLAGS_FROZEN for allocating frozen compound pages
- Refactor the cma alloc/release to prepare for cma alloc/free of
  frozen folios
- https://lore.kernel.org/linux-mm/20250902124820.3081488-1-wangkefeng.wang@huawei.com/
v1:
- https://lore.kernel.org/linux-mm/20250802073107.2787975-1-wangkefeng.wang@huawei.com/
Kefeng Wang (5):
mm: hugetlb: convert to use more alloc_fresh_hugetlb_folio()
mm: hugetlb: convert to account_new_hugetlb_folio()
mm: hugetlb: directly pass order when allocating a hugetlb folio
mm: hugetlb: remove struct hstate from init_new_hugetlb_folio()
mm: hugetlb: check NUMA_NO_NODE in only_alloc_fresh_hugetlb_folio()
include/linux/hugetlb.h | 7 ++-
mm/hugetlb.c | 104 ++++++++++++++++------------------------
mm/hugetlb_cma.c | 3 +-
mm/hugetlb_cma.h | 6 +--
4 files changed, 52 insertions(+), 68 deletions(-)
--
2.27.0
* [PATCH v3 1/5] mm: hugetlb: convert to use more alloc_fresh_hugetlb_folio()
2025-09-10 13:39 [PATCH v3 0/5] mm: hugetlb: cleanup hugetlb folio allocation Kefeng Wang
@ 2025-09-10 13:39 ` Kefeng Wang
2025-09-10 13:39 ` [PATCH v3 2/5] mm: hugetlb: convert to account_new_hugetlb_folio() Kefeng Wang
` (3 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Kefeng Wang @ 2025-09-10 13:39 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Oscar Salvador, Muchun Song
Cc: sidhartha.kumar, jane.chu, Zi Yan, Vlastimil Babka,
Brendan Jackman, Johannes Weiner, linux-mm, Kefeng Wang
Simplify alloc_fresh_hugetlb_folio() and convert more functions to
use it, which helps us remove prep_new_hugetlb_folio() and
__prep_new_hugetlb_folio().
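For reference, after this patch the common helper reduces to roughly
the following shape (as in the hunk below):

	static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h,
			gfp_t gfp_mask, int nid, nodemask_t *nmask)
	{
		struct folio *folio;

		/* allocate and initialize; the returned folio is frozen (refcount zero) */
		folio = only_alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL);
		if (folio)
			hugetlb_vmemmap_optimize_folio(h, folio);

		return folio;
	}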
Acked-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/hugetlb.c | 46 ++++++++++++++--------------------------------
1 file changed, 14 insertions(+), 32 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 753f99b4c718..42f79c4b916b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1906,20 +1906,6 @@ static void init_new_hugetlb_folio(struct hstate *h, struct folio *folio)
set_hugetlb_cgroup_rsvd(folio, NULL);
}
-static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
-{
- init_new_hugetlb_folio(h, folio);
- hugetlb_vmemmap_optimize_folio(h, folio);
-}
-
-static void prep_new_hugetlb_folio(struct hstate *h, struct folio *folio, int nid)
-{
- __prep_new_hugetlb_folio(h, folio);
- spin_lock_irq(&hugetlb_lock);
- __prep_account_new_huge_page(h, nid);
- spin_unlock_irq(&hugetlb_lock);
-}
-
/*
* Find and lock address space (mapping) in write mode.
*
@@ -2005,25 +1991,20 @@ static struct folio *only_alloc_fresh_hugetlb_folio(struct hstate *h,
}
/*
- * Common helper to allocate a fresh hugetlb page. All specific allocators
- * should use this function to get new hugetlb pages
+ * Common helper to allocate a fresh hugetlb folio. All specific allocators
+ * should use this function to get new hugetlb folio
*
- * Note that returned page is 'frozen': ref count of head page and all tail
- * pages is zero.
+ * Note that returned folio is 'frozen': ref count of head page and all tail
+ * pages is zero, and the accounting must be done in the caller.
*/
static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h,
gfp_t gfp_mask, int nid, nodemask_t *nmask)
{
struct folio *folio;
- if (hstate_is_gigantic(h))
- folio = alloc_gigantic_folio(h, gfp_mask, nid, nmask);
- else
- folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, nmask, NULL);
- if (!folio)
- return NULL;
-
- prep_new_hugetlb_folio(h, folio, folio_nid(folio));
+ folio = only_alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL);
+ if (folio)
+ hugetlb_vmemmap_optimize_folio(h, folio);
return folio;
}
@@ -2241,12 +2222,10 @@ static struct folio *alloc_surplus_hugetlb_folio(struct hstate *h,
goto out_unlock;
spin_unlock_irq(&hugetlb_lock);
- folio = only_alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL);
+ folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask);
if (!folio)
return NULL;
- hugetlb_vmemmap_optimize_folio(h, folio);
-
spin_lock_irq(&hugetlb_lock);
/*
* nr_huge_pages needs to be adjusted within the same lock cycle
@@ -2290,6 +2269,10 @@ static struct folio *alloc_migrate_hugetlb_folio(struct hstate *h, gfp_t gfp_mas
if (!folio)
return NULL;
+ spin_lock_irq(&hugetlb_lock);
+ __prep_account_new_huge_page(h, folio_nid(folio));
+ spin_unlock_irq(&hugetlb_lock);
+
/* fresh huge pages are frozen */
folio_ref_unfreeze(folio, 1);
/*
@@ -2836,11 +2819,10 @@ static int alloc_and_dissolve_hugetlb_folio(struct folio *old_folio,
if (!new_folio) {
spin_unlock_irq(&hugetlb_lock);
gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
- new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid,
- NULL, NULL);
+ new_folio = alloc_fresh_hugetlb_folio(h, gfp_mask,
+ nid, NULL);
if (!new_folio)
return -ENOMEM;
- __prep_new_hugetlb_folio(h, new_folio);
goto retry;
}
--
2.27.0
* [PATCH v3 2/5] mm: hugetlb: convert to account_new_hugetlb_folio()
2025-09-10 13:39 [PATCH v3 0/5] mm: hugetlb: cleanup hugetlb folio allocation Kefeng Wang
2025-09-10 13:39 ` [PATCH v3 1/5] mm: hugetlb: convert to use more alloc_fresh_hugetlb_folio() Kefeng Wang
@ 2025-09-10 13:39 ` Kefeng Wang
2025-09-10 13:39 ` [PATCH v3 3/5] mm: hugetlb: directly pass order when allocating a hugetlb folio Kefeng Wang
` (2 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Kefeng Wang @ 2025-09-10 13:39 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Oscar Salvador, Muchun Song
Cc: sidhartha.kumar, jane.chu, Zi Yan, Vlastimil Babka,
Brendan Jackman, Johannes Weiner, linux-mm, Kefeng Wang
To avoid passing the wrong nid into the accounting helper, a mistake
we have made before, move the folio_nid() call into
account_new_hugetlb_folio().
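The call-site conversion is mechanical; for example (taken from the
hunks below):

	/* before: the caller had to pass the correct nid explicitly */
	__prep_account_new_huge_page(h, folio_nid(folio));

	/* after: the helper derives the nid from the folio itself */
	account_new_hugetlb_folio(h, folio);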
Acked-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/hugetlb.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 42f79c4b916b..6378f5f40f44 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1890,11 +1890,11 @@ void free_huge_folio(struct folio *folio)
/*
* Must be called with the hugetlb lock held
*/
-static void __prep_account_new_huge_page(struct hstate *h, int nid)
+static void account_new_hugetlb_folio(struct hstate *h, struct folio *folio)
{
lockdep_assert_held(&hugetlb_lock);
h->nr_huge_pages++;
- h->nr_huge_pages_node[nid]++;
+ h->nr_huge_pages_node[folio_nid(folio)]++;
}
static void init_new_hugetlb_folio(struct hstate *h, struct folio *folio)
@@ -2020,7 +2020,7 @@ static void prep_and_add_allocated_folios(struct hstate *h,
/* Add all new pool pages to free lists in one lock cycle */
spin_lock_irqsave(&hugetlb_lock, flags);
list_for_each_entry_safe(folio, tmp_f, folio_list, lru) {
- __prep_account_new_huge_page(h, folio_nid(folio));
+ account_new_hugetlb_folio(h, folio);
enqueue_hugetlb_folio(h, folio);
}
spin_unlock_irqrestore(&hugetlb_lock, flags);
@@ -2232,7 +2232,7 @@ static struct folio *alloc_surplus_hugetlb_folio(struct hstate *h,
* as surplus_pages, otherwise it might confuse
* persistent_huge_pages() momentarily.
*/
- __prep_account_new_huge_page(h, folio_nid(folio));
+ account_new_hugetlb_folio(h, folio);
/*
* We could have raced with the pool size change.
@@ -2270,7 +2270,7 @@ static struct folio *alloc_migrate_hugetlb_folio(struct hstate *h, gfp_t gfp_mas
return NULL;
spin_lock_irq(&hugetlb_lock);
- __prep_account_new_huge_page(h, folio_nid(folio));
+ account_new_hugetlb_folio(h, folio);
spin_unlock_irq(&hugetlb_lock);
/* fresh huge pages are frozen */
@@ -2829,7 +2829,7 @@ static int alloc_and_dissolve_hugetlb_folio(struct folio *old_folio,
/*
* Ok, old_folio is still a genuine free hugepage. Remove it from
* the freelist and decrease the counters. These will be
- * incremented again when calling __prep_account_new_huge_page()
+ * incremented again when calling account_new_hugetlb_folio()
* and enqueue_hugetlb_folio() for new_folio. The counters will
* remain stable since this happens under the lock.
*/
@@ -2839,7 +2839,7 @@ static int alloc_and_dissolve_hugetlb_folio(struct folio *old_folio,
* Ref count on new_folio is already zero as it was dropped
* earlier. It can be directly added to the pool free list.
*/
- __prep_account_new_huge_page(h, nid);
+ account_new_hugetlb_folio(h, new_folio);
enqueue_hugetlb_folio(h, new_folio);
/*
@@ -3309,7 +3309,7 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h,
hugetlb_bootmem_init_migratetype(folio, h);
/* Subdivide locks to achieve better parallel performance */
spin_lock_irqsave(&hugetlb_lock, flags);
- __prep_account_new_huge_page(h, folio_nid(folio));
+ account_new_hugetlb_folio(h, folio);
enqueue_hugetlb_folio(h, folio);
spin_unlock_irqrestore(&hugetlb_lock, flags);
}
--
2.27.0
* [PATCH v3 3/5] mm: hugetlb: directly pass order when allocating a hugetlb folio
2025-09-10 13:39 [PATCH v3 0/5] mm: hugetlb: cleanup hugetlb folio allocation Kefeng Wang
2025-09-10 13:39 ` [PATCH v3 1/5] mm: hugetlb: convert to use more alloc_fresh_hugetlb_folio() Kefeng Wang
2025-09-10 13:39 ` [PATCH v3 2/5] mm: hugetlb: convert to account_new_hugetlb_folio() Kefeng Wang
@ 2025-09-10 13:39 ` Kefeng Wang
2025-09-10 13:39 ` [PATCH v3 4/5] mm: hugetlb: remove struct hstate from init_new_hugetlb_folio() Kefeng Wang
2025-09-10 13:39 ` [PATCH v3 5/5] mm: hugetlb: check NUMA_NO_NODE in only_alloc_fresh_hugetlb_folio() Kefeng Wang
4 siblings, 0 replies; 6+ messages in thread
From: Kefeng Wang @ 2025-09-10 13:39 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Oscar Salvador, Muchun Song
Cc: sidhartha.kumar, jane.chu, Zi Yan, Vlastimil Babka,
Brendan Jackman, Johannes Weiner, linux-mm, Kefeng Wang
Use the order instead of struct hstate to remove the huge_page_order()
call from all hugetlb folio allocation paths; also add
order_is_gigantic() to check whether an order is gigantic.
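The new helper just factors out the existing MAX_PAGE_ORDER check (see
the include/linux/hugetlb.h hunk below):

	static inline bool order_is_gigantic(unsigned int order)
	{
		return order > MAX_PAGE_ORDER;
	}

	static inline bool hstate_is_gigantic(struct hstate *h)
	{
		return order_is_gigantic(huge_page_order(h));
	}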
Acked-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
include/linux/hugetlb.h | 7 ++++++-
mm/hugetlb.c | 29 ++++++++++++++---------------
mm/hugetlb_cma.c | 3 +--
mm/hugetlb_cma.h | 6 +++---
4 files changed, 24 insertions(+), 21 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 526d27e88b3b..8e63e46b8e1f 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -788,9 +788,14 @@ static inline unsigned huge_page_shift(struct hstate *h)
return h->order + PAGE_SHIFT;
}
+static inline bool order_is_gigantic(unsigned int order)
+{
+ return order > MAX_PAGE_ORDER;
+}
+
static inline bool hstate_is_gigantic(struct hstate *h)
{
- return huge_page_order(h) > MAX_PAGE_ORDER;
+ return order_is_gigantic(huge_page_order(h));
}
static inline unsigned int pages_per_huge_page(const struct hstate *h)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6378f5f40f44..b98736ad60d3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1473,17 +1473,16 @@ static int hstate_next_node_to_free(struct hstate *h, nodemask_t *nodes_allowed)
#ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
#ifdef CONFIG_CONTIG_ALLOC
-static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask,
+static struct folio *alloc_gigantic_folio(int order, gfp_t gfp_mask,
int nid, nodemask_t *nodemask)
{
struct folio *folio;
- int order = huge_page_order(h);
bool retried = false;
if (nid == NUMA_NO_NODE)
nid = numa_mem_id();
retry:
- folio = hugetlb_cma_alloc_folio(h, gfp_mask, nid, nodemask);
+ folio = hugetlb_cma_alloc_folio(order, gfp_mask, nid, nodemask);
if (!folio) {
if (hugetlb_cma_exclusive_alloc())
return NULL;
@@ -1506,16 +1505,16 @@ static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask,
}
#else /* !CONFIG_CONTIG_ALLOC */
-static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask,
- int nid, nodemask_t *nodemask)
+static struct folio *alloc_gigantic_folio(int order, gfp_t gfp_mask, int nid,
+ nodemask_t *nodemask)
{
return NULL;
}
#endif /* CONFIG_CONTIG_ALLOC */
#else /* !CONFIG_ARCH_HAS_GIGANTIC_PAGE */
-static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask,
- int nid, nodemask_t *nodemask)
+static struct folio *alloc_gigantic_folio(int order, gfp_t gfp_mask, int nid,
+ nodemask_t *nodemask)
{
return NULL;
}
@@ -1926,11 +1925,9 @@ struct address_space *hugetlb_folio_mapping_lock_write(struct folio *folio)
return NULL;
}
-static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h,
- gfp_t gfp_mask, int nid, nodemask_t *nmask,
- nodemask_t *node_alloc_noretry)
+static struct folio *alloc_buddy_hugetlb_folio(int order, gfp_t gfp_mask,
+ int nid, nodemask_t *nmask, nodemask_t *node_alloc_noretry)
{
- int order = huge_page_order(h);
struct folio *folio;
bool alloc_try_hard = true;
@@ -1980,11 +1977,13 @@ static struct folio *only_alloc_fresh_hugetlb_folio(struct hstate *h,
nodemask_t *node_alloc_noretry)
{
struct folio *folio;
+ int order = huge_page_order(h);
- if (hstate_is_gigantic(h))
- folio = alloc_gigantic_folio(h, gfp_mask, nid, nmask);
+ if (order_is_gigantic(order))
+ folio = alloc_gigantic_folio(order, gfp_mask, nid, nmask);
else
- folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, nmask, node_alloc_noretry);
+ folio = alloc_buddy_hugetlb_folio(order, gfp_mask, nid, nmask,
+ node_alloc_noretry);
if (folio)
init_new_hugetlb_folio(h, folio);
return folio;
@@ -2872,7 +2871,7 @@ int isolate_or_dissolve_huge_folio(struct folio *folio, struct list_head *list)
* alloc_contig_range and them. Return -ENOMEM as this has the effect
* of bailing out right away without further retrying.
*/
- if (folio_order(folio) > MAX_PAGE_ORDER)
+ if (order_is_gigantic(folio_order(folio)))
return -ENOMEM;
if (folio_ref_count(folio) && folio_isolate_hugetlb(folio, list))
diff --git a/mm/hugetlb_cma.c b/mm/hugetlb_cma.c
index f58ef4969e7a..e8e4dc7182d5 100644
--- a/mm/hugetlb_cma.c
+++ b/mm/hugetlb_cma.c
@@ -26,11 +26,10 @@ void hugetlb_cma_free_folio(struct folio *folio)
}
-struct folio *hugetlb_cma_alloc_folio(struct hstate *h, gfp_t gfp_mask,
+struct folio *hugetlb_cma_alloc_folio(int order, gfp_t gfp_mask,
int nid, nodemask_t *nodemask)
{
int node;
- int order = huge_page_order(h);
struct folio *folio = NULL;
if (hugetlb_cma[nid])
diff --git a/mm/hugetlb_cma.h b/mm/hugetlb_cma.h
index f7d7fb9880a2..2c2ec8a7e134 100644
--- a/mm/hugetlb_cma.h
+++ b/mm/hugetlb_cma.h
@@ -4,7 +4,7 @@
#ifdef CONFIG_CMA
void hugetlb_cma_free_folio(struct folio *folio);
-struct folio *hugetlb_cma_alloc_folio(struct hstate *h, gfp_t gfp_mask,
+struct folio *hugetlb_cma_alloc_folio(int order, gfp_t gfp_mask,
int nid, nodemask_t *nodemask);
struct huge_bootmem_page *hugetlb_cma_alloc_bootmem(struct hstate *h, int *nid,
bool node_exact);
@@ -18,8 +18,8 @@ static inline void hugetlb_cma_free_folio(struct folio *folio)
{
}
-static inline struct folio *hugetlb_cma_alloc_folio(struct hstate *h,
- gfp_t gfp_mask, int nid, nodemask_t *nodemask)
+static inline struct folio *hugetlb_cma_alloc_folio(int order, gfp_t gfp_mask,
+ int nid, nodemask_t *nodemask)
{
return NULL;
}
--
2.27.0
* [PATCH v3 4/5] mm: hugetlb: remove struct hstate from init_new_hugetlb_folio()
2025-09-10 13:39 [PATCH v3 0/5] mm: hugetlb: cleanup hugetlb folio allocation Kefeng Wang
` (2 preceding siblings ...)
2025-09-10 13:39 ` [PATCH v3 3/5] mm: hugetlb: directly pass order when allocating a hugetlb folio Kefeng Wang
@ 2025-09-10 13:39 ` Kefeng Wang
2025-09-10 13:39 ` [PATCH v3 5/5] mm: hugetlb: check NUMA_NO_NODE in only_alloc_fresh_hugetlb_folio() Kefeng Wang
4 siblings, 0 replies; 6+ messages in thread
From: Kefeng Wang @ 2025-09-10 13:39 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Oscar Salvador, Muchun Song
Cc: sidhartha.kumar, jane.chu, Zi Yan, Vlastimil Babka,
Brendan Jackman, Johannes Weiner, linux-mm, Kefeng Wang
The struct hstate argument has been unused since commit d67e32f26713
("hugetlb: restructure pool allocations"), so remove it.
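The change is a pure signature cleanup, e.g.:

	/* before */
	static void init_new_hugetlb_folio(struct hstate *h, struct folio *folio);

	/* after: the hstate argument was unused */
	static void init_new_hugetlb_folio(struct folio *folio);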
Acked-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/hugetlb.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b98736ad60d3..28519a73eab7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1896,7 +1896,7 @@ static void account_new_hugetlb_folio(struct hstate *h, struct folio *folio)
h->nr_huge_pages_node[folio_nid(folio)]++;
}
-static void init_new_hugetlb_folio(struct hstate *h, struct folio *folio)
+static void init_new_hugetlb_folio(struct folio *folio)
{
__folio_set_hugetlb(folio);
INIT_LIST_HEAD(&folio->lru);
@@ -1985,7 +1985,7 @@ static struct folio *only_alloc_fresh_hugetlb_folio(struct hstate *h,
folio = alloc_buddy_hugetlb_folio(order, gfp_mask, nid, nmask,
node_alloc_noretry);
if (folio)
- init_new_hugetlb_folio(h, folio);
+ init_new_hugetlb_folio(folio);
return folio;
}
@@ -3404,7 +3404,7 @@ static void __init gather_bootmem_prealloc_node(unsigned long nid)
hugetlb_folio_init_vmemmap(folio, h,
HUGETLB_VMEMMAP_RESERVE_PAGES);
- init_new_hugetlb_folio(h, folio);
+ init_new_hugetlb_folio(folio);
if (hugetlb_bootmem_page_prehvo(m))
/*
@@ -4016,7 +4016,7 @@ static long demote_free_hugetlb_folios(struct hstate *src, struct hstate *dst,
prep_compound_page(page, dst->order);
new_folio->mapping = NULL;
- init_new_hugetlb_folio(dst, new_folio);
+ init_new_hugetlb_folio(new_folio);
/* Copy the CMA flag so that it is freed correctly */
if (cma)
folio_set_hugetlb_cma(new_folio);
--
2.27.0
* [PATCH v3 5/5] mm: hugetlb: check NUMA_NO_NODE in only_alloc_fresh_hugetlb_folio()
2025-09-10 13:39 [PATCH v3 0/5] mm: hugetlb: cleanup hugetlb folio allocation Kefeng Wang
` (3 preceding siblings ...)
2025-09-10 13:39 ` [PATCH v3 4/5] mm: hugetlb: remove struct hstate from init_new_hugetlb_folio() Kefeng Wang
@ 2025-09-10 13:39 ` Kefeng Wang
4 siblings, 0 replies; 6+ messages in thread
From: Kefeng Wang @ 2025-09-10 13:39 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Oscar Salvador, Muchun Song
Cc: sidhartha.kumar, jane.chu, Zi Yan, Vlastimil Babka,
Brendan Jackman, Johannes Weiner, linux-mm, Kefeng Wang
Move the NUMA_NO_NODE check out of the buddy and gigantic folio
allocation paths to clean up the code a bit; this also avoids
NUMA_NO_NODE being passed as 'nid' to node_isset() in
alloc_buddy_hugetlb_folio().
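With this change the fallback to the local node happens once in the
common path, roughly (as in the last hunk below):

	/* in only_alloc_fresh_hugetlb_folio(), before picking an allocator */
	if (nid == NUMA_NO_NODE)
		nid = numa_mem_id();

	if (order_is_gigantic(order))
		folio = alloc_gigantic_folio(order, gfp_mask, nid, nmask);
	else
		folio = alloc_buddy_hugetlb_folio(order, gfp_mask, nid, nmask,
						  node_alloc_noretry);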
Acked-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/hugetlb.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 28519a73eab7..856f6ec3a41a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1479,8 +1479,6 @@ static struct folio *alloc_gigantic_folio(int order, gfp_t gfp_mask,
struct folio *folio;
bool retried = false;
- if (nid == NUMA_NO_NODE)
- nid = numa_mem_id();
retry:
folio = hugetlb_cma_alloc_folio(order, gfp_mask, nid, nodemask);
if (!folio) {
@@ -1942,8 +1940,6 @@ static struct folio *alloc_buddy_hugetlb_folio(int order, gfp_t gfp_mask,
alloc_try_hard = false;
if (alloc_try_hard)
gfp_mask |= __GFP_RETRY_MAYFAIL;
- if (nid == NUMA_NO_NODE)
- nid = numa_mem_id();
folio = (struct folio *)__alloc_frozen_pages(gfp_mask, order, nid, nmask);
@@ -1979,6 +1975,9 @@ static struct folio *only_alloc_fresh_hugetlb_folio(struct hstate *h,
struct folio *folio;
int order = huge_page_order(h);
+ if (nid == NUMA_NO_NODE)
+ nid = numa_mem_id();
+
if (order_is_gigantic(order))
folio = alloc_gigantic_folio(order, gfp_mask, nid, nmask);
else
--
2.27.0