linux-mm.kvack.org archive mirror
* [RFC Patch 00/11] Convert huge page split API to folio-style
@ 2025-12-08 14:36 Wei Yang
  2025-12-08 14:36 ` [RFC Patch 01/11] mm/huge_memory: relocate fundamental folio split comment to __folio_split() Wei Yang
                   ` (10 more replies)
  0 siblings, 11 replies; 12+ messages in thread
From: Wei Yang @ 2025-12-08 14:36 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linmiaohe,
	nao.horiguchi
  Cc: linux-mm, Wei Yang

This patch series continues the effort to modernize the huge page
management layer by converting internal users to the folio-style API
and eliminating redundant, older page-style interfaces.

Currently, the code maintains two parallel sets of APIs for huge page
splitting, which adds unnecessary complexity and maintenance burden:

page-style APIs:

    __split_huge_page_to_list_to_order
    split_huge_page_to_list_to_order
    split_huge_page_to_order
    split_huge_page

folio-style APIs:

    try_folio_split_to_order
    folio_split_unmapped
    folio_split
    split_folio
    split_folio_to_list
    split_folio_to_order
    split_folio_to_list_to_order

After cleanup, we would have:

page-style:

    split_huge_page

folio-style:

    try_folio_split_to_order
    folio_split_unmapped
    folio_split
    folio_split_uniform
    split_folio
    split_folio_to_list
    split_folio_to_order

The only page-style API left is split_huge_page(), which requires the
specified page to be locked after the split. It is possible to rename
and convert it to folio-style as well; for now it is left as-is to
gather comments.

Wei Yang (11):
  mm/huge_memory: relocate fundamental folio split comment to
    __folio_split()
  mm/huge_memory: remove split_folio_to_list_to_order() helper
  mm/huge_memory: convert try_folio_split_to_order() to use
    split_folio_to_order()
  mm/memory-failure: convert try_to_split_thp_page() to use
    split_folio_to_order()
  mm/huge_memory: remove unused function split_huge_page_to_order()
  mm/huge_memory: introduce __split_folio_and_update_stats() to
    consolidate split task
  mm/huge_memory: separate uniform/non uniform split logic in
    __split_unmapped_folio()
  mm/huge_memory: restrict @split_at check to non-uniform splits
  mm/huge_memory: introduce folio_split_uniform() helper for uniform
    splitting
  mm/huge_memory: convert folio split helpers to use
    folio_split_uniform()
  mm/huge_memory: simplify split_huge_page() by calling __folio_split()
    directly

 include/linux/huge_mm.h |  48 +++--------
 mm/huge_memory.c        | 181 ++++++++++++++++++++++------------------
 mm/memory-failure.c     |   7 +-
 3 files changed, 116 insertions(+), 120 deletions(-)

-- 
2.34.1



^ permalink raw reply	[flat|nested] 12+ messages in thread

* [RFC Patch  01/11] mm/huge_memory: relocate fundamental folio split comment to __folio_split()
  2025-12-08 14:36 [RFC Patch 00/11] Convert huge page split API to folio-style Wei Yang
@ 2025-12-08 14:36 ` Wei Yang
  2025-12-08 14:36 ` [RFC Patch 02/11] mm/huge_memory: remove split_folio_to_list_to_order() helper Wei Yang
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Wei Yang @ 2025-12-08 14:36 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linmiaohe,
	nao.horiguchi
  Cc: linux-mm, Wei Yang

The core function responsible for folio splitting has shifted over time.
Historically, the mechanism was documented primarily within
__split_huge_page_to_list_to_order().

However, the current central function for folio splitting is now
__folio_split().

To ensure documentation matches the current code structure, this commit
moves the fundamental mechanism comment to __folio_split().

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
---
 mm/huge_memory.c | 96 +++++++++++++++++++++++-------------------------
 1 file changed, 46 insertions(+), 50 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8db0d81fca40..37a73bfb96ff 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3911,15 +3911,61 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
  * @list: after-split folios will be put on it if non NULL
  * @split_type: perform uniform split or not (non-uniform split)
  *
+ * This function splits a large folio into smaller folios of order @new_order.
+ * @page can point to any page of the large folio to split. The split operation
+ * does not change the position of @page.
+ *
+ * Prerequisites:
+ *
+ * 1) The caller must hold a reference on the @page's owning folio, also known
+ *    as the large folio.
+ *
+ * 2) The large folio must be locked.
+ *
+ * 3) The folio must not be pinned. Any unexpected folio references, including
+ *    GUP pins, will result in the folio not getting split; instead, the caller
+ *    will receive an -EAGAIN.
+ *
+ * 4) @new_order > 1, usually. Splitting to order-1 anonymous folios is not
+ *    supported for non-file-backed folios, because folio->_deferred_list, which
+ *    is used by partially mapped folios, is stored in subpage 2, but an order-1
+ *    folio only has subpages 0 and 1. File-backed order-1 folios are supported,
+ *    since they do not use _deferred_list.
+ *
  * It calls __split_unmapped_folio() to perform uniform and non-uniform split.
  * It is in charge of checking whether the split is supported or not and
  * preparing @folio for __split_unmapped_folio().
  *
+ * After splitting, the caller's folio reference will be transferred to @page,
+ * resulting in a raised refcount of @page after this call. The other pages may
+ * be freed if they are not mapped.
+ *
  * After splitting, the after-split folio containing @lock_at remains locked
  * and others are unlocked:
  * 1. for uniform split, @lock_at points to one of @folio's subpages;
  * 2. for buddy allocator like (non-uniform) split, @lock_at points to @folio.
  *
+ * If @list is null, tail pages will be added to LRU list, otherwise, to @list.
+ *
+ * Pages in @new_order will inherit the mapping, flags, and so on from the
+ * huge page.
+ *
+ * Returns 0 if the huge page was split successfully.
+ *
+ * Returns -EAGAIN if the folio has unexpected reference (e.g., GUP) or if
+ * the folio was concurrently removed from the page cache.
+ *
+ * Returns -EBUSY when trying to split the huge zeropage, if the folio is
+ * under writeback, if fs-specific folio metadata cannot currently be
+ * released, or if some unexpected race happened (e.g., anon VMA disappeared,
+ * truncation).
+ *
+ * Callers should ensure that the order respects the address space mapping
+ * min-order if one is set for non-anonymous folios.
+ *
+ * Returns -EINVAL when trying to split to an order that is incompatible
+ * with the folio. Splitting to order 0 is compatible with all folios.
+ *
  * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
  * split but not to @new_order, the caller needs to check)
  */
@@ -4136,53 +4182,6 @@ int folio_split_unmapped(struct folio *folio, unsigned int new_order)
 	return ret;
 }
 
-/*
- * This function splits a large folio into smaller folios of order @new_order.
- * @page can point to any page of the large folio to split. The split operation
- * does not change the position of @page.
- *
- * Prerequisites:
- *
- * 1) The caller must hold a reference on the @page's owning folio, also known
- *    as the large folio.
- *
- * 2) The large folio must be locked.
- *
- * 3) The folio must not be pinned. Any unexpected folio references, including
- *    GUP pins, will result in the folio not getting split; instead, the caller
- *    will receive an -EAGAIN.
- *
- * 4) @new_order > 1, usually. Splitting to order-1 anonymous folios is not
- *    supported for non-file-backed folios, because folio->_deferred_list, which
- *    is used by partially mapped folios, is stored in subpage 2, but an order-1
- *    folio only has subpages 0 and 1. File-backed order-1 folios are supported,
- *    since they do not use _deferred_list.
- *
- * After splitting, the caller's folio reference will be transferred to @page,
- * resulting in a raised refcount of @page after this call. The other pages may
- * be freed if they are not mapped.
- *
- * If @list is null, tail pages will be added to LRU list, otherwise, to @list.
- *
- * Pages in @new_order will inherit the mapping, flags, and so on from the
- * huge page.
- *
- * Returns 0 if the huge page was split successfully.
- *
- * Returns -EAGAIN if the folio has unexpected reference (e.g., GUP) or if
- * the folio was concurrently removed from the page cache.
- *
- * Returns -EBUSY when trying to split the huge zeropage, if the folio is
- * under writeback, if fs-specific folio metadata cannot currently be
- * released, or if some unexpected race happened (e.g., anon VMA disappeared,
- * truncation).
- *
- * Callers should ensure that the order respects the address space mapping
- * min-order if one is set for non-anonymous folios.
- *
- * Returns -EINVAL when trying to split to an order that is incompatible
- * with the folio. Splitting to order 0 is compatible with all folios.
- */
 int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 				     unsigned int new_order)
 {
@@ -4200,9 +4199,6 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
  * @list: after-split folios are added to @list if not null, otherwise to LRU
  *        list
  *
- * It has the same prerequisites and returns as
- * split_huge_page_to_list_to_order().
- *
  * Split a folio at @split_at to a new_order folio, leave the
  * remaining subpages of the original folio as large as possible. For example,
  * in the case of splitting an order-9 folio at its third order-3 subpages to
-- 
2.34.1




* [RFC Patch  02/11] mm/huge_memory: remove split_folio_to_list_to_order() helper
  2025-12-08 14:36 [RFC Patch 00/11] Convert huge page split API to folio-style Wei Yang
  2025-12-08 14:36 ` [RFC Patch 01/11] mm/huge_memory: relocate fundamental folio split comment to __folio_split() Wei Yang
@ 2025-12-08 14:36 ` Wei Yang
  2025-12-08 14:36 ` [RFC Patch 03/11] mm/huge_memory: convert try_folio_split_to_order() to use split_folio_to_order() Wei Yang
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Wei Yang @ 2025-12-08 14:36 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linmiaohe,
	nao.horiguchi
  Cc: linux-mm, Wei Yang

The function split_folio_to_list_to_order() serves only as a simple
wrapper around split_huge_page_to_list_to_order(), and is exclusively
called from split_folio_to_order().

To reduce API clutter and streamline the code, this commit removes the
redundant split_folio_to_list_to_order() wrapper. The caller
(split_folio_to_order()) is updated to directly invoke
split_huge_page_to_list_to_order().

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
---
 include/linux/huge_mm.h | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 21162493a0a0..977e513feed7 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -771,15 +771,9 @@ static inline bool pmd_is_huge(pmd_t pmd)
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-static inline int split_folio_to_list_to_order(struct folio *folio,
-		struct list_head *list, int new_order)
-{
-	return split_huge_page_to_list_to_order(&folio->page, list, new_order);
-}
-
 static inline int split_folio_to_order(struct folio *folio, int new_order)
 {
-	return split_folio_to_list_to_order(folio, NULL, new_order);
+	return split_huge_page_to_list_to_order(&folio->page, NULL, new_order);
 }
 
 /**
-- 
2.34.1




* [RFC Patch  03/11] mm/huge_memory: convert try_folio_split_to_order() to use split_folio_to_order()
  2025-12-08 14:36 [RFC Patch 00/11] Convert huge page split API to folio-style Wei Yang
  2025-12-08 14:36 ` [RFC Patch 01/11] mm/huge_memory: relocate fundamental folio split comment to __folio_split() Wei Yang
  2025-12-08 14:36 ` [RFC Patch 02/11] mm/huge_memory: remove split_folio_to_list_to_order() helper Wei Yang
@ 2025-12-08 14:36 ` Wei Yang
  2025-12-08 14:36 ` [RFC Patch 04/11] mm/memory-failure: convert try_to_split_thp_page() " Wei Yang
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Wei Yang @ 2025-12-08 14:36 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linmiaohe,
	nao.horiguchi
  Cc: linux-mm, Wei Yang

The function try_folio_split_to_order() currently calls
split_huge_page_to_order() to split a large folio.

The behavior of split_huge_page_to_order() in this
context -- splitting a large folio and returning the first resulting
page locked -- is functionally identical to the folio-style helper
split_folio_to_order().

This commit converts the call to use the split_folio_to_order() helper
instead. This adopts the modern folio-style API, improving consistency
and reducing reliance on the older page-centric interface.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
---
 include/linux/huge_mm.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 977e513feed7..3e01184cf274 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -389,6 +389,11 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
 	return split_huge_page_to_list_to_order(page, NULL, new_order);
 }
 
+static inline int split_folio_to_order(struct folio *folio, int new_order)
+{
+	return split_huge_page_to_list_to_order(&folio->page, NULL, new_order);
+}
+
 /**
  * try_folio_split_to_order() - try to split a @folio at @page to @new_order
  * using non uniform split.
@@ -407,7 +412,7 @@ static inline int try_folio_split_to_order(struct folio *folio,
 		struct page *page, unsigned int new_order)
 {
 	if (folio_check_splittable(folio, new_order, SPLIT_TYPE_NON_UNIFORM))
-		return split_huge_page_to_order(&folio->page, new_order);
+		return split_folio_to_order(folio, new_order);
 	return folio_split(folio, new_order, page, NULL);
 }
 static inline int split_huge_page(struct page *page)
@@ -771,11 +776,6 @@ static inline bool pmd_is_huge(pmd_t pmd)
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-static inline int split_folio_to_order(struct folio *folio, int new_order)
-{
-	return split_huge_page_to_list_to_order(&folio->page, NULL, new_order);
-}
-
 /**
  * largest_zero_folio - Get the largest zero size folio available
  *
-- 
2.34.1




* [RFC Patch  04/11] mm/memory-failure: convert try_to_split_thp_page() to use split_folio_to_order()
  2025-12-08 14:36 [RFC Patch 00/11] Convert huge page split API to folio-style Wei Yang
                   ` (2 preceding siblings ...)
  2025-12-08 14:36 ` [RFC Patch 03/11] mm/huge_memory: convert try_folio_split_to_order() to use split_folio_to_order() Wei Yang
@ 2025-12-08 14:36 ` Wei Yang
  2025-12-08 14:36 ` [RFC Patch 05/11] mm/huge_memory: remove unused function split_huge_page_to_order() Wei Yang
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Wei Yang @ 2025-12-08 14:36 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linmiaohe,
	nao.horiguchi
  Cc: linux-mm, Wei Yang

The function try_to_split_thp_page() currently uses the page-style
split_huge_page_to_order() API to split a huge page.

Since the returned page is immediately unlocked after the split
completes, the specific locking behavior of split_huge_page_to_order()
(returning the page locked) is irrelevant here.

This allows us to replace the call with the folio-style equivalent,
split_folio_to_order(). This conversion improves code consistency
by adopting the modern folio API throughout the THP splitting
logic.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
---
 include/linux/huge_mm.h | 6 ++++++
 mm/memory-failure.c     | 7 ++++---
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 3e01184cf274..872b4ed2a477 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -647,6 +647,12 @@ static inline int split_folio_to_list(struct folio *folio, struct list_head *lis
 	return -EINVAL;
 }
 
+static inline int split_folio_to_order(struct folio *folio, int new_order)
+{
+	VM_WARN_ON_ONCE_FOLIO(1, folio);
+	return -EINVAL;
+}
+
 static inline int try_folio_split_to_order(struct folio *folio,
 		struct page *page, unsigned int new_order)
 {
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index fbc5a01260c8..600666491f52 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1665,11 +1665,12 @@ static int identify_page_state(unsigned long pfn, struct page *p,
 static int try_to_split_thp_page(struct page *page, unsigned int new_order,
 		bool release)
 {
+	struct folio *folio = page_folio(page);
 	int ret;
 
-	lock_page(page);
-	ret = split_huge_page_to_order(page, new_order);
-	unlock_page(page);
+	folio_lock(folio);
+	ret = split_folio_to_order(folio, new_order);
+	folio_unlock(folio);
 
 	if (ret && release)
 		put_page(page);
-- 
2.34.1




* [RFC Patch  05/11] mm/huge_memory: remove unused function split_huge_page_to_order()
  2025-12-08 14:36 [RFC Patch 00/11] Convert huge page split API to folio-style Wei Yang
                   ` (3 preceding siblings ...)
  2025-12-08 14:36 ` [RFC Patch 04/11] mm/memory-failure: convert try_to_split_thp_page() " Wei Yang
@ 2025-12-08 14:36 ` Wei Yang
  2025-12-08 14:36 ` [RFC Patch 06/11] mm/huge_memory: introduce __split_folio_and_update_stats() to consolidate split task Wei Yang
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Wei Yang @ 2025-12-08 14:36 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linmiaohe,
	nao.horiguchi
  Cc: linux-mm, Wei Yang

The function split_huge_page_to_order() is no longer called by any part
of the codebase.

This commit removes the function entirely, cleaning up the code and
eliminating a dead API.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
---
 include/linux/huge_mm.h | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 872b4ed2a477..cf38ed6b9835 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -384,10 +384,6 @@ static inline int split_huge_page_to_list_to_order(struct page *page, struct lis
 {
 	return __split_huge_page_to_list_to_order(page, list, new_order);
 }
-static inline int split_huge_page_to_order(struct page *page, unsigned int new_order)
-{
-	return split_huge_page_to_list_to_order(page, NULL, new_order);
-}
 
 static inline int split_folio_to_order(struct folio *folio, int new_order)
 {
@@ -624,11 +620,6 @@ split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	VM_WARN_ON_ONCE_PAGE(1, page);
 	return -EINVAL;
 }
-static inline int split_huge_page_to_order(struct page *page, unsigned int new_order)
-{
-	VM_WARN_ON_ONCE_PAGE(1, page);
-	return -EINVAL;
-}
 static inline int split_huge_page(struct page *page)
 {
 	VM_WARN_ON_ONCE_PAGE(1, page);
-- 
2.34.1




* [RFC Patch  06/11] mm/huge_memory: introduce __split_folio_and_update_stats() to consolidate split task
  2025-12-08 14:36 [RFC Patch 00/11] Convert huge page split API to folio-style Wei Yang
                   ` (4 preceding siblings ...)
  2025-12-08 14:36 ` [RFC Patch 05/11] mm/huge_memory: remove unused function split_huge_page_to_order() Wei Yang
@ 2025-12-08 14:36 ` Wei Yang
  2025-12-08 14:36 ` [RFC Patch 07/11] mm/huge_memory: separate uniform/non uniform split logic in __split_unmapped_folio() Wei Yang
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Wei Yang @ 2025-12-08 14:36 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linmiaohe,
	nao.horiguchi
  Cc: linux-mm, Wei Yang

The folio splitting process involves several related tasks that are
executed together:

    Adjusting memcg (memory control group) accounting.
    Updating page owner tracking.
    Splitting the folio to the target size (new_order).
    Updating necessary folio statistics.

This commit introduces a new helper function,
__split_folio_and_update_stats(), to consolidate all of these tasks.
This consolidation improves modularity and is a necessary preparation
step for further cleanup and simplification of the surrounding folio
splitting logic.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>

---
v2:
  * add const to nr_new_folios
  * remove is_anon parameter
---
 mm/huge_memory.c | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 37a73bfb96ff..7160254ce76c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3571,6 +3571,22 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
 		ClearPageCompound(&folio->page);
 }
 
+static void __split_folio_and_update_stats(struct folio *folio, int old_order,
+					   int new_order)
+{
+	const int nr_new_folios = 1UL << (old_order - new_order);
+
+	folio_split_memcg_refs(folio, old_order, new_order);
+	split_page_owner(&folio->page, old_order, new_order);
+	pgalloc_tag_split(folio, old_order, new_order);
+	__split_folio_to_order(folio, old_order, new_order);
+
+	if (folio_test_anon(folio)) {
+		mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
+		mod_mthp_stat(new_order, MTHP_STAT_NR_ANON, nr_new_folios);
+	}
+}
+
 /**
  * __split_unmapped_folio() - splits an unmapped @folio to lower order folios in
  * two ways: uniform split or non-uniform split.
@@ -3628,8 +3644,6 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 	for (split_order = start_order;
 	     split_order >= new_order;
 	     split_order--) {
-		int nr_new_folios = 1UL << (old_order - split_order);
-
 		/* order-1 anonymous folio is not supported */
 		if (is_anon && split_order == 1)
 			continue;
@@ -3650,15 +3664,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 			}
 		}
 
-		folio_split_memcg_refs(folio, old_order, split_order);
-		split_page_owner(&folio->page, old_order, split_order);
-		pgalloc_tag_split(folio, old_order, split_order);
-		__split_folio_to_order(folio, old_order, split_order);
-
-		if (is_anon) {
-			mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
-			mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);
-		}
+		__split_folio_and_update_stats(folio, old_order, split_order);
 		/*
 		 * If uniform split, the process is complete.
 		 * If non-uniform, continue splitting the folio at @split_at
-- 
2.34.1




* [RFC Patch  07/11] mm/huge_memory: separate uniform/non uniform split logic in __split_unmapped_folio()
  2025-12-08 14:36 [RFC Patch 00/11] Convert huge page split API to folio-style Wei Yang
                   ` (5 preceding siblings ...)
  2025-12-08 14:36 ` [RFC Patch 06/11] mm/huge_memory: introduce __split_folio_and_update_stats() to consolidate split task Wei Yang
@ 2025-12-08 14:36 ` Wei Yang
  2025-12-08 14:36 ` [RFC Patch 08/11] mm/huge_memory: restrict @split_at check to non-uniform splits Wei Yang
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Wei Yang @ 2025-12-08 14:36 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linmiaohe,
	nao.horiguchi
  Cc: linux-mm, Wei Yang

By utilizing the newly introduced __split_folio_and_update_stats()
helper function, we can now clearly separate the logic for uniform and
non-uniform folio splitting within __split_unmapped_folio().

This refactoring greatly simplifies the code by creating two distinct
execution paths:

    * Uniform Split: Directly calls __split_folio_and_update_stats()
      once to achieve the @new_order in a single operation.

    * Non-Uniform Split: Continues to use a loop to iteratively split
      the folio to a single lower order at a time, eventually reaching
      the @new_order.

This separation improves code clarity and maintainability.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
---
 mm/huge_memory.c | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7160254ce76c..dbb4b86e7d6d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3634,14 +3634,20 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 {
 	const bool is_anon = folio_test_anon(folio);
 	int old_order = folio_order(folio);
-	int start_order = split_type == SPLIT_TYPE_UNIFORM ? new_order : old_order - 1;
 	int split_order;
 
+	/* For uniform split, folio is split to new_order directly. */
+	if (split_type == SPLIT_TYPE_UNIFORM) {
+		if (mapping)
+			xas_split(xas, folio, old_order);
+		__split_folio_and_update_stats(folio, old_order, new_order);
+		return 0;
+	}
+
 	/*
-	 * split to new_order one order at a time. For uniform split,
-	 * folio is split to new_order directly.
+	 * For non-uniform, split to new_order one order at a time.
 	 */
-	for (split_order = start_order;
+	for (split_order = old_order - 1;
 	     split_order >= new_order;
 	     split_order--) {
 		/* order-1 anonymous folio is not supported */
@@ -3654,21 +3660,16 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 			 * irq is disabled to allocate enough memory, whereas
 			 * non-uniform split can handle ENOMEM.
 			 */
-			if (split_type == SPLIT_TYPE_UNIFORM)
-				xas_split(xas, folio, old_order);
-			else {
-				xas_set_order(xas, folio->index, split_order);
-				xas_try_split(xas, folio, old_order);
-				if (xas_error(xas))
-					return xas_error(xas);
-			}
+			xas_set_order(xas, folio->index, split_order);
+			xas_try_split(xas, folio, old_order);
+			if (xas_error(xas))
+				return xas_error(xas);
 		}
 
 		__split_folio_and_update_stats(folio, old_order, split_order);
 		/*
-		 * If uniform split, the process is complete.
-		 * If non-uniform, continue splitting the folio at @split_at
-		 * as long as the next @split_order is >= @new_order.
+		 * Continue splitting the folio at @split_at as long as the
+		 * next @split_order is >= @new_order.
 		 */
 		folio = page_folio(split_at);
 		old_order = split_order;
-- 
2.34.1




* [RFC Patch  08/11] mm/huge_memory: restrict @split_at check to non-uniform splits
  2025-12-08 14:36 [RFC Patch 00/11] Convert huge page split API to folio-style Wei Yang
                   ` (6 preceding siblings ...)
  2025-12-08 14:36 ` [RFC Patch 07/11] mm/huge_memory: separate uniform/non uniform split logic in __split_unmapped_folio() Wei Yang
@ 2025-12-08 14:36 ` Wei Yang
  2025-12-08 14:36 ` [RFC Patch 09/11] mm/huge_memory: introduce folio_split_uniform() helper for uniform splitting Wei Yang
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Wei Yang @ 2025-12-08 14:36 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linmiaohe,
	nao.horiguchi
  Cc: linux-mm, Wei Yang

The concept of a uniform split implies that the entire folio is treated
equally, making the split-point parameter, @split_at, irrelevant.

Following previous code cleanups, it is confirmed that @split_at is
indeed unused in the uniform split path.

This commit refactors the validation logic to check the @split_at
parameter only when a non-uniform split is being performed. This
simplifies the uniform split code path and removes a redundant check.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
---
 mm/huge_memory.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index dbb4b86e7d6d..a8ca7c5902f4 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3995,7 +3995,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio);
 	VM_WARN_ON_ONCE_FOLIO(!folio_test_large(folio), folio);
 
-	if (folio != page_folio(split_at) || folio != page_folio(lock_at)) {
+	if ((split_type == SPLIT_TYPE_NON_UNIFORM && folio != page_folio(split_at))
+	    || folio != page_folio(lock_at)) {
 		ret = -EINVAL;
 		goto out;
 	}
-- 
2.34.1




* [RFC Patch  09/11] mm/huge_memory: introduce folio_split_uniform() helper for uniform splitting
  2025-12-08 14:36 [RFC Patch 00/11] Convert huge page split API to folio-style Wei Yang
                   ` (7 preceding siblings ...)
  2025-12-08 14:36 ` [RFC Patch 08/11] mm/huge_memory: restrict @split_at check to non-uniform splits Wei Yang
@ 2025-12-08 14:36 ` Wei Yang
  2025-12-08 14:36 ` [RFC Patch 10/11] mm/huge_memory: convert folio split helpers to use folio_split_uniform() Wei Yang
  2025-12-08 14:36 ` [RFC Patch 11/11] mm/huge_memory: simplify split_huge_page() by calling __folio_split() directly Wei Yang
  10 siblings, 0 replies; 12+ messages in thread
From: Wei Yang @ 2025-12-08 14:36 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linmiaohe,
	nao.horiguchi
  Cc: linux-mm, Wei Yang

There are two fundamental types of folio split:

    * uniform
    * non-uniform

The existing implementation uses the page-centric function
__split_huge_page_to_list_to_order() to handle uniform splits.

This commit introduces a new, dedicated folio-style API,
folio_split_uniform(), which wraps the existing split logic. This
transition improves consistency by adopting the modern folio API and
paves the way for the eventual removal of the older, page-centric
interface.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
---
 include/linux/huge_mm.h |  1 +
 mm/huge_memory.c        | 18 ++++++++++++++++++
 2 files changed, 19 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index cf38ed6b9835..bc99ae7b0376 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -376,6 +376,7 @@ unsigned int min_order_for_split(struct folio *folio);
 int split_folio_to_list(struct folio *folio, struct list_head *list);
 int folio_check_splittable(struct folio *folio, unsigned int new_order,
 			   enum split_type split_type);
+int folio_split_uniform(struct folio *folio, unsigned int new_order, struct list_head *list);
 int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
 		struct list_head *list);
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a8ca7c5902f4..09742c021eee 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -4199,6 +4199,24 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
 			     SPLIT_TYPE_UNIFORM);
 }
 
+/**
+ * folio_split_uniform() - uniformly split a folio to @new_order folios
+ * @folio: folio to split
+ * @new_order: the order of the new folio
+ * @list: after-split folios are added to @list if not null, otherwise to LRU
+ *        list
+ *
+ * After split, folio is left locked for caller.
+ *
+ * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
+ * split but not to @new_order, the caller needs to check)
+ */
+int folio_split_uniform(struct folio *folio, unsigned int new_order, struct list_head *list)
+{
+	return __folio_split(folio, new_order, NULL, &folio->page, list,
+			     SPLIT_TYPE_UNIFORM);
+}
+
 /**
  * folio_split() - split a folio at @split_at to a @new_order folio
  * @folio: folio to split
-- 
2.34.1


* [RFC Patch 10/11] mm/huge_memory: convert folio split helpers to use folio_split_uniform()
  2025-12-08 14:36 [RFC Patch 00/11] Convert huge page split API to folio-style Wei Yang
                   ` (8 preceding siblings ...)
  2025-12-08 14:36 ` [RFC Patch 09/11] mm/huge_memory: introduce folio_split_uniform() helper for uniform splitting Wei Yang
@ 2025-12-08 14:36 ` Wei Yang
  2025-12-08 14:36 ` [RFC Patch 11/11] mm/huge_memory: simplify split_huge_page() by calling __folio_split() directly Wei Yang
  10 siblings, 0 replies; 12+ messages in thread
From: Wei Yang @ 2025-12-08 14:36 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linmiaohe,
	nao.horiguchi
  Cc: linux-mm, Wei Yang

The functions split_folio_to_order() and split_folio_to_list() both
perform a uniform split and leave the first after-split folio locked
on return.

This behavior is functionally identical to the recently introduced
folio-style helper, folio_split_uniform().

This commit converts both helpers to call folio_split_uniform()
internally, standardizing the implementation on the modern folio
splitting API.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
---
 include/linux/huge_mm.h | 2 +-
 mm/huge_memory.c        | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index bc99ae7b0376..95a68c19e177 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -388,7 +388,7 @@ static inline int split_huge_page_to_list_to_order(struct page *page, struct lis
 
 static inline int split_folio_to_order(struct folio *folio, int new_order)
 {
-	return split_huge_page_to_list_to_order(&folio->page, NULL, new_order);
+	return folio_split_uniform(folio, new_order, NULL);
 }
 
 /**
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 09742c021eee..cedecfc1a40c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -4274,7 +4274,7 @@ unsigned int min_order_for_split(struct folio *folio)
 
 int split_folio_to_list(struct folio *folio, struct list_head *list)
 {
-	return split_huge_page_to_list_to_order(&folio->page, list, 0);
+	return folio_split_uniform(folio, 0, list);
 }
 
 /*
-- 
2.34.1


* [RFC Patch 11/11] mm/huge_memory: simplify split_huge_page() by calling __folio_split() directly
  2025-12-08 14:36 [RFC Patch 00/11] Convert huge page split API to folio-style Wei Yang
                   ` (9 preceding siblings ...)
  2025-12-08 14:36 ` [RFC Patch 10/11] mm/huge_memory: convert folio split helpers to use folio_split_uniform() Wei Yang
@ 2025-12-08 14:36 ` Wei Yang
  10 siblings, 0 replies; 12+ messages in thread
From: Wei Yang @ 2025-12-08 14:36 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linmiaohe,
	nao.horiguchi
  Cc: linux-mm, Wei Yang

The current function call chain for splitting huge pages is overly
verbose:

split_huge_page() -> split_huge_page_to_list_to_order() ->
__split_huge_page_to_list_to_order() -> __folio_split()

Since __split_huge_page_to_list_to_order() is merely a wrapper for
__folio_split(), and it is only used by
split_huge_page_to_list_to_order() (which is, in turn, only used by
split_huge_page()), the intermediate functions are redundant.

This commit refactors split_huge_page() to call __folio_split()
directly. This removes the unnecessary middle layers
(split_huge_page_to_list_to_order() and
__split_huge_page_to_list_to_order()), simplifying the splitting
interface.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
---
 include/linux/huge_mm.h | 20 +-------------------
 mm/huge_memory.c        |  5 ++---
 2 files changed, 3 insertions(+), 22 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 95a68c19e177..a977e032385c 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -369,8 +369,6 @@ enum split_type {
 	SPLIT_TYPE_NON_UNIFORM,
 };
 
-int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
-		unsigned int new_order);
 int folio_split_unmapped(struct folio *folio, unsigned int new_order);
 unsigned int min_order_for_split(struct folio *folio);
 int split_folio_to_list(struct folio *folio, struct list_head *list);
@@ -380,12 +378,6 @@ int folio_split_uniform(struct folio *folio, unsigned int new_order, struct list
 int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
 		struct list_head *list);
 
-static inline int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
-		unsigned int new_order)
-{
-	return __split_huge_page_to_list_to_order(page, list, new_order);
-}
-
 static inline int split_folio_to_order(struct folio *folio, int new_order)
 {
 	return folio_split_uniform(folio, new_order, NULL);
@@ -412,10 +404,7 @@ static inline int try_folio_split_to_order(struct folio *folio,
 		return split_folio_to_order(folio, new_order);
 	return folio_split(folio, new_order, page, NULL);
 }
-static inline int split_huge_page(struct page *page)
-{
-	return split_huge_page_to_list_to_order(page, NULL, 0);
-}
+int split_huge_page(struct page *page);
 void deferred_split_folio(struct folio *folio, bool partially_mapped);
 #ifdef CONFIG_MEMCG
 void reparent_deferred_split_queue(struct mem_cgroup *memcg);
@@ -614,13 +603,6 @@ can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 {
 	return false;
 }
-static inline int
-split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
-		unsigned int new_order)
-{
-	VM_WARN_ON_ONCE_PAGE(1, page);
-	return -EINVAL;
-}
 static inline int split_huge_page(struct page *page)
 {
 	VM_WARN_ON_ONCE_PAGE(1, page);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index cedecfc1a40c..ded17d0fa695 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -4190,12 +4190,11 @@ int folio_split_unmapped(struct folio *folio, unsigned int new_order)
 	return ret;
 }
 
-int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
-				     unsigned int new_order)
+int split_huge_page(struct page *page)
 {
 	struct folio *folio = page_folio(page);
 
-	return __folio_split(folio, new_order, &folio->page, page, list,
+	return __folio_split(folio, 0, &folio->page, page, NULL,
 			     SPLIT_TYPE_UNIFORM);
 }
 
-- 
2.34.1


Thread overview: 12 messages
2025-12-08 14:36 [RFC Patch 00/11] Convert huge page split API to folio-style Wei Yang
2025-12-08 14:36 ` [RFC Patch 01/11] mm/huge_memory: relocate fundamental folio split comment to __folio_split() Wei Yang
2025-12-08 14:36 ` [RFC Patch 02/11] mm/huge_memory: remove split_folio_to_list_to_order() helper Wei Yang
2025-12-08 14:36 ` [RFC Patch 03/11] mm/huge_memory: convert try_folio_split_to_order() to use split_folio_to_order() Wei Yang
2025-12-08 14:36 ` [RFC Patch 04/11] mm/memory-failure: convert try_to_split_thp_page() " Wei Yang
2025-12-08 14:36 ` [RFC Patch 05/11] mm/huge_memory: remove unused function split_huge_page_to_order() Wei Yang
2025-12-08 14:36 ` [RFC Patch 06/11] mm/huge_memory: introduce __split_folio_and_update_stats() to consolidate split task Wei Yang
2025-12-08 14:36 ` [RFC Patch 07/11] mm/huge_memory: separate uniform/non uniform split logic in __split_unmapped_folio() Wei Yang
2025-12-08 14:36 ` [RFC Patch 08/11] mm/huge_memory: restrict @split_at check to non-uniform splits Wei Yang
2025-12-08 14:36 ` [RFC Patch 09/11] mm/huge_memory: introduce folio_split_uniform() helper for uniform splitting Wei Yang
2025-12-08 14:36 ` [RFC Patch 10/11] mm/huge_memory: convert folio split helpers to use folio_split_uniform() Wei Yang
2025-12-08 14:36 ` [RFC Patch 11/11] mm/huge_memory: simplify split_huge_page() by calling __folio_split() directly Wei Yang
