From: Wei Yang <richard.weiyang@gmail.com>
To: akpm@linux-foundation.org, david@kernel.org,
lorenzo.stoakes@oracle.com, ziy@nvidia.com,
baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com,
npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com,
baohua@kernel.org, lance.yang@linux.dev
Cc: linux-mm@kvack.org, Wei Yang <richard.weiyang@gmail.com>
Subject: [Patch v3 1/2] mm/huge_memory: introduce enum split_type for clarity
Date: Thu, 6 Nov 2025 03:41:54 +0000
Message-ID: <20251106034155.21398-2-richard.weiyang@gmail.com>
In-Reply-To: <20251106034155.21398-1-richard.weiyang@gmail.com>
We currently handle two distinct types of large folio split:

* uniform split
* non-uniform split

Distinguishing between them with a bare boolean is not self-explanatory
and hurts code readability.

Introduce enum split_type to name these two types explicitly. Replacing
the existing boolean with the enumeration makes the folio splitting
logic clearer and more expressive.

No functional change is expected.
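
For illustration, here is a minimal, self-contained sketch (hypothetical,
not part of this patch) of what the two strategies produce when splitting
an order-4 folio down to order 0:

#include <stdio.h>

enum split_type {
	SPLIT_TYPE_UNIFORM,
	SPLIT_TYPE_NON_UNIFORM,
};

/* Print the orders of the folios each strategy produces. */
static void show_split(enum split_type split_type, int old_order, int new_order)
{
	if (split_type == SPLIT_TYPE_UNIFORM) {
		/* One step: 2^(old_order - new_order) folios, all of new_order. */
		printf("uniform: %d folios of order %d\n",
		       1 << (old_order - new_order), new_order);
		return;
	}
	/* Buddy allocator style: halve repeatedly, setting one half aside. */
	for (int order = old_order - 1; order >= new_order; order--)
		printf("non-uniform: one folio of order %d\n", order);
	printf("non-uniform: plus the remaining folio of order %d\n", new_order);
}

int main(void)
{
	show_split(SPLIT_TYPE_UNIFORM, 4, 0);     /* 16 folios of order 0 */
	show_split(SPLIT_TYPE_NON_UNIFORM, 4, 0); /* orders 3, 2, 1, 0 and 0 */
	return 0;
}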
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
---
 include/linux/huge_mm.h |  5 +++++
 mm/huge_memory.c        | 30 +++++++++++++++---------------
 2 files changed, 20 insertions(+), 15 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index f381339842fa..9e96dbe2f246 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -364,6 +364,11 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
unsigned long len, unsigned long pgoff, unsigned long flags,
vm_flags_t vm_flags);
+enum split_type {
+ SPLIT_TYPE_UNIFORM,
+ SPLIT_TYPE_NON_UNIFORM,
+};
+
bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins);
int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
unsigned int new_order, bool unmapped);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5795c0b4c39c..659532199233 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3598,16 +3598,16 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
* will be split until its order becomes @new_order.
* @xas: xa_state pointing to folio->mapping->i_pages and locked by caller
* @mapping: @folio->mapping
- * @uniform_split: if the split is uniform or not (buddy allocator like split)
+ * @split_type: whether to perform a uniform or a buddy allocator like (non-uniform) split
*
*
* 1. uniform split: the given @folio into multiple @new_order small folios,
* where all small folios have the same order. This is done when
- * uniform_split is true.
+ * split_type is SPLIT_TYPE_UNIFORM.
* 2. buddy allocator like (non-uniform) split: the given @folio is split into
* half and one of the half (containing the given page) is split into half
* until the given @folio's order becomes @new_order. This is done when
- * uniform_split is false.
+ * split_type is SPLIT_TYPE_NON_UNIFORM.
*
* The high level flow for these two methods are:
* 1. uniform split: @xas is split with no expectation of failure and a single
@@ -3629,11 +3629,11 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
*/
static int __split_unmapped_folio(struct folio *folio, int new_order,
struct page *split_at, struct xa_state *xas,
- struct address_space *mapping, bool uniform_split)
+ struct address_space *mapping, enum split_type split_type)
{
const bool is_anon = folio_test_anon(folio);
int old_order = folio_order(folio);
- int start_order = uniform_split ? new_order : old_order - 1;
+ int start_order = split_type == SPLIT_TYPE_UNIFORM ? new_order : old_order - 1;
int split_order;
/*
@@ -3655,7 +3655,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
* irq is disabled to allocate enough memory, whereas
* non-uniform split can handle ENOMEM.
*/
- if (uniform_split)
+ if (split_type == SPLIT_TYPE_UNIFORM)
xas_split(xas, folio, old_order);
else {
xas_set_order(xas, folio->index, split_order);
@@ -3752,7 +3752,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
* @split_at: a page within the new folio
* @lock_at: a page within @folio to be left locked to caller
* @list: after-split folios will be put on it if non NULL
- * @uniform_split: perform uniform split or not (non-uniform split)
+ * @split_type: whether to perform a uniform or a non-uniform split
* @unmapped: The pages are already unmapped, they are migration entries.
*
* It calls __split_unmapped_folio() to perform uniform and non-uniform split.
@@ -3769,7 +3769,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
*/
static int __folio_split(struct folio *folio, unsigned int new_order,
struct page *split_at, struct page *lock_at,
- struct list_head *list, bool uniform_split, bool unmapped)
+ struct list_head *list, enum split_type split_type, bool unmapped)
{
struct deferred_split *ds_queue;
XA_STATE(xas, &folio->mapping->i_pages, folio->index);
@@ -3794,10 +3794,10 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
if (new_order >= old_order)
return -EINVAL;
- if (uniform_split && !uniform_split_supported(folio, new_order, true))
+ if (split_type == SPLIT_TYPE_UNIFORM && !uniform_split_supported(folio, new_order, true))
return -EINVAL;
- if (!uniform_split &&
+ if (split_type == SPLIT_TYPE_NON_UNIFORM &&
!non_uniform_split_supported(folio, new_order, true))
return -EINVAL;
@@ -3859,7 +3859,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
goto out;
}
- if (uniform_split) {
+ if (split_type == SPLIT_TYPE_UNIFORM) {
xas_set_order(&xas, folio->index, new_order);
xas_split_alloc(&xas, folio, old_order, gfp);
if (xas_error(&xas)) {
@@ -3973,7 +3973,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
lruvec = folio_lruvec_lock(folio);
ret = __split_unmapped_folio(folio, new_order, split_at, &xas,
- mapping, uniform_split);
+ mapping, split_type);
/*
* Unfreeze after-split folios and put them back to the right
@@ -4149,8 +4149,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
{
struct folio *folio = page_folio(page);
- return __folio_split(folio, new_order, &folio->page, page, list, true,
- unmapped);
+ return __folio_split(folio, new_order, &folio->page, page, list,
+ SPLIT_TYPE_UNIFORM, unmapped);
}
/**
@@ -4181,7 +4181,7 @@ int folio_split(struct folio *folio, unsigned int new_order,
struct page *split_at, struct list_head *list)
{
return __folio_split(folio, new_order, split_at, &folio->page, list,
- false, false);
+ SPLIT_TYPE_NON_UNIFORM, false);
}
int min_order_for_split(struct folio *folio)
--
2.34.1