* [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
@ 2025-11-01 2:11 Wei Yang
2025-11-03 9:04 ` Dev Jain
` (2 more replies)
0 siblings, 3 replies; 13+ messages in thread
From: Wei Yang @ 2025-11-01 2:11 UTC
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, Wei Yang
The functions uniform_split_supported() and
non_uniform_split_supported() share significantly similar logic.
The only functional difference is that uniform_split_supported()
includes an additional check on the requested @new_order before
proceeding with further validation.
This commit unifies the logic by introducing a new variable,
@need_check, which is conditionally set based on whether a uniform
split is requested. This allows us to merge the two functions into
a single, combined helper, removing redundant code and simplifying
the split support checking mechanism.
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
---
include/linux/huge_mm.h | 8 +++---
mm/huge_memory.c | 55 +++++++++++------------------------------
2 files changed, 18 insertions(+), 45 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index cbb2243f8e56..79343809a7be 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
unsigned int new_order, bool unmapped);
int min_order_for_split(struct folio *folio);
int split_folio_to_list(struct folio *folio, struct list_head *list);
-bool uniform_split_supported(struct folio *folio, unsigned int new_order,
- bool warns);
-bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
- bool warns);
+bool folio_split_supported(struct folio *folio, unsigned int new_order,
+ bool uniform_split, bool warns);
int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
struct list_head *list);
@@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
static inline int try_folio_split_to_order(struct folio *folio,
struct page *page, unsigned int new_order)
{
- if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
+ if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
return split_huge_page_to_order(&folio->page, new_order);
return folio_split(folio, new_order, page, NULL);
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d1fa0d2d9b44..f6d2cb2a5ca0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3673,55 +3673,34 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
return 0;
}
-bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
- bool warns)
+bool folio_split_supported(struct folio *folio, unsigned int new_order,
+ bool uniform_split, bool warns)
{
- if (folio_test_anon(folio)) {
- /* order-1 is not supported for anonymous THP. */
- VM_WARN_ONCE(warns && new_order == 1,
- "Cannot split to order-1 folio");
- return new_order != 1;
- } else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
- !mapping_large_folio_support(folio->mapping)) {
- /*
- * No split if the file system does not support large folio.
- * Note that we might still have THPs in such mappings due to
- * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
- * does not actually support large folios properly.
- */
- VM_WARN_ONCE(warns,
- "Cannot split file folio to non-0 order");
- return false;
- }
-
- /* Only swapping a whole PMD-mapped folio is supported */
- if (folio_test_swapcache(folio)) {
- VM_WARN_ONCE(warns,
- "Cannot split swapcache folio to non-0 order");
- return false;
- }
+ bool need_check = uniform_split ? new_order : true;
- return true;
-}
-
-/* See comments in non_uniform_split_supported() */
-bool uniform_split_supported(struct folio *folio, unsigned int new_order,
- bool warns)
-{
if (folio_test_anon(folio)) {
+ /* order-1 is not supported for anonymous THP. */
VM_WARN_ONCE(warns && new_order == 1,
"Cannot split to order-1 folio");
return new_order != 1;
- } else if (new_order) {
+ } else if (need_check) {
if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
!mapping_large_folio_support(folio->mapping)) {
+ /*
+ * No split if the file system does not support large
+ * folio. Note that we might still have THPs in such
+ * mappings due to CONFIG_READ_ONLY_THP_FOR_FS. But in
+ * that case, the mapping does not actually support
+ * large folios properly.
+ */
VM_WARN_ONCE(warns,
"Cannot split file folio to non-0 order");
return false;
}
}
- if (new_order && folio_test_swapcache(folio)) {
+ /* Only swapping a whole PMD-mapped folio is supported */
+ if (need_check && folio_test_swapcache(folio)) {
VM_WARN_ONCE(warns,
"Cannot split swapcache folio to non-0 order");
return false;
@@ -3779,11 +3758,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
if (new_order >= old_order)
return -EINVAL;
- if (uniform_split && !uniform_split_supported(folio, new_order, true))
- return -EINVAL;
-
- if (!uniform_split &&
- !non_uniform_split_supported(folio, new_order, true))
+ if (!folio_split_supported(folio, new_order, uniform_split, /* warn = */ true))
return -EINVAL;
is_hzp = is_huge_zero_folio(folio);
--
2.34.1
* Re: [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-01 2:11 [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() Wei Yang
@ 2025-11-03 9:04 ` Dev Jain
2025-11-03 16:19 ` Zi Yan
2025-11-03 11:50 ` David Hildenbrand (Red Hat)
2025-11-03 16:34 ` Zi Yan
2 siblings, 1 reply; 13+ messages in thread
From: Dev Jain @ 2025-11-03 9:04 UTC
To: Wei Yang, akpm, david, lorenzo.stoakes, ziy, baolin.wang,
Liam.Howlett, npache, ryan.roberts, baohua, lance.yang
Cc: linux-mm
On 01/11/25 7:41 am, Wei Yang wrote:
> The functions uniform_split_supported() and
> non_uniform_split_supported() share significantly similar logic.
>
> The only functional difference is that uniform_split_supported()
> includes an additional check on the requested @new_order before
> proceeding with further validation.
>
> This commit unifies the logic by introducing a new variable,
> @need_check, which is conditionally set based on whether a uniform
> split is requested. This allows us to merge the two functions into
> a single, combined helper, removing redundant code and simplifying
> the split support checking mechanism.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> ---
> include/linux/huge_mm.h | 8 +++---
> mm/huge_memory.c | 55 +++++++++++------------------------------
> 2 files changed, 18 insertions(+), 45 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index cbb2243f8e56..79343809a7be 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
> unsigned int new_order, bool unmapped);
> int min_order_for_split(struct folio *folio);
> int split_folio_to_list(struct folio *folio, struct list_head *list);
> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns);
> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns);
> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
> + bool uniform_split, bool warns);
> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
> struct list_head *list);
>
> @@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
> static inline int try_folio_split_to_order(struct folio *folio,
> struct page *page, unsigned int new_order)
> {
> - if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
> + if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
> return split_huge_page_to_order(&folio->page, new_order);
> return folio_split(folio, new_order, page, NULL);
> }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index d1fa0d2d9b44..f6d2cb2a5ca0 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3673,55 +3673,34 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> return 0;
> }
>
> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns)
> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
> + bool uniform_split, bool warns)
> {
> - if (folio_test_anon(folio)) {
> - /* order-1 is not supported for anonymous THP. */
> - VM_WARN_ONCE(warns && new_order == 1,
> - "Cannot split to order-1 folio");
> - return new_order != 1;
> - } else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
> - !mapping_large_folio_support(folio->mapping)) {
> - /*
> - * No split if the file system does not support large folio.
> - * Note that we might still have THPs in such mappings due to
> - * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
> - * does not actually support large folios properly.
> - */
> - VM_WARN_ONCE(warns,
> - "Cannot split file folio to non-0 order");
> - return false;
> - }
> -
> - /* Only swapping a whole PMD-mapped folio is supported */
> - if (folio_test_swapcache(folio)) {
> - VM_WARN_ONCE(warns,
> - "Cannot split swapcache folio to non-0 order");
> - return false;
> - }
> + bool need_check = uniform_split ? new_order : true;
>
> - return true;
> -}
> -
> -/* See comments in non_uniform_split_supported() */
> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns)
> -{
> if (folio_test_anon(folio)) {
> + /* order-1 is not supported for anonymous THP. */
> VM_WARN_ONCE(warns && new_order == 1,
> "Cannot split to order-1 folio");
> return new_order != 1;
> - } else if (new_order) {
> + } else if (need_check) {
> if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
> !mapping_large_folio_support(folio->mapping)) {
> + /*
> + * No split if the file system does not support large
> + * folio. Note that we might still have THPs in such
> + * mappings due to CONFIG_READ_ONLY_THP_FOR_FS. But in
> + * that case, the mapping does not actually support
> + * large folios properly.
> + */
> VM_WARN_ONCE(warns,
> "Cannot split file folio to non-0 order");
> return false;
> }
> }
>
> - if (new_order && folio_test_swapcache(folio)) {
> + /* Only swapping a whole PMD-mapped folio is supported */
> + if (need_check && folio_test_swapcache(folio)) {
> VM_WARN_ONCE(warns,
> "Cannot split swapcache folio to non-0 order");
> return false;
> @@ -3779,11 +3758,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> if (new_order >= old_order)
> return -EINVAL;
>
> - if (uniform_split && !uniform_split_supported(folio, new_order, true))
> - return -EINVAL;
> -
> - if (!uniform_split &&
> - !non_uniform_split_supported(folio, new_order, true))
> + if (!folio_split_supported(folio, new_order, uniform_split, /* warn = */ true))
> return -EINVAL;
>
> is_hzp = is_huge_zero_folio(folio);
@Zi, I am confused by the code. Is there something stopping us from passing a zero new_order to
non_uniform_split_supported()?
* Re: [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-01 2:11 [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() Wei Yang
2025-11-03 9:04 ` Dev Jain
@ 2025-11-03 11:50 ` David Hildenbrand (Red Hat)
2025-11-04 0:41 ` Wei Yang
2025-11-03 16:34 ` Zi Yan
2 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-11-03 11:50 UTC
To: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm
On 01.11.25 03:11, Wei Yang wrote:
> The functions uniform_split_supported() and
> non_uniform_split_supported() share significantly similar logic.
>
> The only functional difference is that uniform_split_supported()
> includes an additional check on the requested @new_order before
> proceeding with further validation.
>
> This commit unifies the logic by introducing a new variable,
> @need_check, which is conditionally set based on whether a uniform
> split is requested. This allows us to merge the two functions into
> a single, combined helper, removing redundant code and simplifying
> the split support checking mechanism.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> ---
> include/linux/huge_mm.h | 8 +++---
> mm/huge_memory.c | 55 +++++++++++------------------------------
> 2 files changed, 18 insertions(+), 45 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index cbb2243f8e56..79343809a7be 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
> unsigned int new_order, bool unmapped);
> int min_order_for_split(struct folio *folio);
> int split_folio_to_list(struct folio *folio, struct list_head *list);
> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns);
> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns);
> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
> + bool uniform_split, bool warns);
> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
> struct list_head *list);
>
> @@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
> static inline int try_folio_split_to_order(struct folio *folio,
> struct page *page, unsigned int new_order)
> {
> - if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
> + if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
> return split_huge_page_to_order(&folio->page, new_order);
> return folio_split(folio, new_order, page, NULL);
> }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index d1fa0d2d9b44..f6d2cb2a5ca0 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3673,55 +3673,34 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> return 0;
> }
>
> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns)
> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
> + bool uniform_split, bool warns)
Likely a simple
enum split_type {
SPLIT_TYPE_UNIFORM,
SPLIT_TYPE_NON_UNIFORM, // or SPLIT_TYPE_BUDDY?
};
Could make invocations easier to read.
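For illustration, the call sites could then read something like this (an
untested sketch; the enum is the suggestion above, the rest mirrors the
patch):
	bool folio_split_supported(struct folio *folio, unsigned int new_order,
			enum split_type split_type, bool warns);
	/* in try_folio_split_to_order(): */
	if (!folio_split_supported(folio, new_order, SPLIT_TYPE_NON_UNIFORM,
				   /* warns= */ false))
		return split_huge_page_to_order(&folio->page, new_order);
	/* in __folio_split(): */
	if (!folio_split_supported(folio, new_order,
				   uniform_split ? SPLIT_TYPE_UNIFORM :
						   SPLIT_TYPE_NON_UNIFORM,
				   /* warns= */ true))
		return -EINVAL;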
> {
> - if (folio_test_anon(folio)) {
> - /* order-1 is not supported for anonymous THP. */
> - VM_WARN_ONCE(warns && new_order == 1,
> - "Cannot split to order-1 folio");
> - return new_order != 1;
> - } else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
> - !mapping_large_folio_support(folio->mapping)) {
> - /*
> - * No split if the file system does not support large folio.
> - * Note that we might still have THPs in such mappings due to
> - * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
> - * does not actually support large folios properly.
> - */
> - VM_WARN_ONCE(warns,
> - "Cannot split file folio to non-0 order");
> - return false;
> - }
> -
> - /* Only swapping a whole PMD-mapped folio is supported */
> - if (folio_test_swapcache(folio)) {
> - VM_WARN_ONCE(warns,
> - "Cannot split swapcache folio to non-0 order");
> - return false;
> - }
> + bool need_check = uniform_split ? new_order : true;
(could be const)
Not really a great name for that variable (what needs a check?). Maybe
simply ...
>
> - return true;
> -}
> -
> -/* See comments in non_uniform_split_supported() */
> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns)
> -{
> if (folio_test_anon(folio)) {
> + /* order-1 is not supported for anonymous THP. */
> VM_WARN_ONCE(warns && new_order == 1,
> "Cannot split to order-1 folio");
> return new_order != 1;
> - } else if (new_order) {
> + } else if (need_check) {
... change this to ("!uniform_split || new_order")
} else if (!uniform_split || new_order) {
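Spelled out, that condition covers (just restating the logic of the patch):
	/*
	 * uniform_split  new_order  !uniform_split || new_order
	 * true           0          false -> uniform split to order-0;
	 *                                    no fs/swapcache restriction
	 * true           > 0        true  -> must check
	 * false          any        true  -> must always check
	 */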
* Re: [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-03 9:04 ` Dev Jain
@ 2025-11-03 16:19 ` Zi Yan
0 siblings, 0 replies; 13+ messages in thread
From: Zi Yan @ 2025-11-03 16:19 UTC
To: Dev Jain
Cc: Wei Yang, akpm, david, lorenzo.stoakes, baolin.wang,
Liam.Howlett, npache, ryan.roberts, baohua, lance.yang, linux-mm
On 3 Nov 2025, at 4:04, Dev Jain wrote:
> On 01/11/25 7:41 am, Wei Yang wrote:
>> The functions uniform_split_supported() and
>> non_uniform_split_supported() share significantly similar logic.
>>
>> The only functional difference is that uniform_split_supported()
>> includes an additional check on the requested @new_order before
>> proceeding with further validation.
>>
>> This commit unifies the logic by introducing a new variable,
>> @need_check, which is conditionally set based on whether a uniform
>> split is requested. This allows us to merge the two functions into
>> a single, combined helper, removing redundant code and simplifying
>> the split support checking mechanism.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> ---
>> include/linux/huge_mm.h | 8 +++---
>> mm/huge_memory.c | 55 +++++++++++------------------------------
>> 2 files changed, 18 insertions(+), 45 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index cbb2243f8e56..79343809a7be 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>> unsigned int new_order, bool unmapped);
>> int min_order_for_split(struct folio *folio);
>> int split_folio_to_list(struct folio *folio, struct list_head *list);
>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns);
>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns);
>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>> + bool uniform_split, bool warns);
>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>> struct list_head *list);
>> @@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
>> static inline int try_folio_split_to_order(struct folio *folio,
>> struct page *page, unsigned int new_order)
>> {
>> - if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
>> + if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
>> return split_huge_page_to_order(&folio->page, new_order);
>> return folio_split(folio, new_order, page, NULL);
>> }
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index d1fa0d2d9b44..f6d2cb2a5ca0 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3673,55 +3673,34 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> return 0;
>> }
>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns)
>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>> + bool uniform_split, bool warns)
>> {
>> - if (folio_test_anon(folio)) {
>> - /* order-1 is not supported for anonymous THP. */
>> - VM_WARN_ONCE(warns && new_order == 1,
>> - "Cannot split to order-1 folio");
>> - return new_order != 1;
>> - } else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> - !mapping_large_folio_support(folio->mapping)) {
>> - /*
>> - * No split if the file system does not support large folio.
>> - * Note that we might still have THPs in such mappings due to
>> - * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
>> - * does not actually support large folios properly.
>> - */
>> - VM_WARN_ONCE(warns,
>> - "Cannot split file folio to non-0 order");
>> - return false;
>> - }
>> -
>> - /* Only swapping a whole PMD-mapped folio is supported */
>> - if (folio_test_swapcache(folio)) {
>> - VM_WARN_ONCE(warns,
>> - "Cannot split swapcache folio to non-0 order");
>> - return false;
>> - }
>> + bool need_check = uniform_split ? new_order : true;
>> - return true;
>> -}
>> -
>> -/* See comments in non_uniform_split_supported() */
>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns)
>> -{
>> if (folio_test_anon(folio)) {
>> + /* order-1 is not supported for anonymous THP. */
>> VM_WARN_ONCE(warns && new_order == 1,
>> "Cannot split to order-1 folio");
>> return new_order != 1;
>> - } else if (new_order) {
>> + } else if (need_check) {
>> if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> !mapping_large_folio_support(folio->mapping)) {
>> + /*
>> + * No split if the file system does not support large
>> + * folio. Note that we might still have THPs in such
>> + * mappings due to CONFIG_READ_ONLY_THP_FOR_FS. But in
>> + * that case, the mapping does not actually support
>> + * large folios properly.
>> + */
>> VM_WARN_ONCE(warns,
>> "Cannot split file folio to non-0 order");
>> return false;
>> }
>> }
>> - if (new_order && folio_test_swapcache(folio)) {
>> + /* Only swapping a whole PMD-mapped folio is supported */
>> + if (need_check && folio_test_swapcache(folio)) {
>> VM_WARN_ONCE(warns,
>> "Cannot split swapcache folio to non-0 order");
>> return false;
>> @@ -3779,11 +3758,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>> if (new_order >= old_order)
>> return -EINVAL;
>> - if (uniform_split && !uniform_split_supported(folio, new_order, true))
>> - return -EINVAL;
>> -
>> - if (!uniform_split &&
>> - !non_uniform_split_supported(folio, new_order, true))
>> + if (!folio_split_supported(folio, new_order, uniform_split, /* warn = */ true))
>> return -EINVAL;
>> is_hzp = is_huge_zero_folio(folio);
>
> @Zi, I am confused by the code. Is there something stopping us from passing a zero new_order to
> non_uniform_split_supported()?
Yes. For a filesystem without large folio support, where a PMD-sized
file-backed THP was created via khugepaged (with CONFIG_READ_ONLY_THP_FOR_FS),
the THP can only be split to order-0. A non-uniform split would create folios
of other orders and cause issues.
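To illustrate with an order-9 PMD THP (a sketch of the after-split orders,
assuming 4K base pages, per the non-uniform behavior described below):
	uniform split to order-0:      512 folios of order 0 (OK)
	non-uniform split to order-0:  one folio each of order 8, 7, 6, 5,
	                               4, 3, 2, 1, plus two folios of
	                               order 0; large folios would remain
	                               in a mapping that cannot handle them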
Best Regards,
Yan, Zi
* Re: [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-01 2:11 [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() Wei Yang
2025-11-03 9:04 ` Dev Jain
2025-11-03 11:50 ` David Hildenbrand (Red Hat)
@ 2025-11-03 16:34 ` Zi Yan
2025-11-04 0:36 ` Wei Yang
2 siblings, 1 reply; 13+ messages in thread
From: Zi Yan @ 2025-11-03 16:34 UTC
To: Wei Yang
Cc: akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On 31 Oct 2025, at 22:11, Wei Yang wrote:
> The functions uniform_split_supported() and
> non_uniform_split_supported() share significantly similar logic.
>
> The only functional difference is that uniform_split_supported()
> includes an additional check on the requested @new_order before
Please elaborate on what the check is for.
> proceeding with further validation.
>
> This commit unifies the logic by introducing a new variable,
> @need_check, which is conditionally set based on whether a uniform
> split is requested. This allows us to merge the two functions into
> a single, combined helper, removing redundant code and simplifying
> the split support checking mechanism.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> ---
> include/linux/huge_mm.h | 8 +++---
> mm/huge_memory.c | 55 +++++++++++------------------------------
> 2 files changed, 18 insertions(+), 45 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index cbb2243f8e56..79343809a7be 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
> unsigned int new_order, bool unmapped);
> int min_order_for_split(struct folio *folio);
> int split_folio_to_list(struct folio *folio, struct list_head *list);
> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns);
> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns);
> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
> + bool uniform_split, bool warns);
> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
> struct list_head *list);
>
> @@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
> static inline int try_folio_split_to_order(struct folio *folio,
> struct page *page, unsigned int new_order)
> {
> - if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
> + if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
> return split_huge_page_to_order(&folio->page, new_order);
> return folio_split(folio, new_order, page, NULL);
> }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index d1fa0d2d9b44..f6d2cb2a5ca0 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3673,55 +3673,34 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> return 0;
> }
>
> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns)
> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
> + bool uniform_split, bool warns)
> {
> - if (folio_test_anon(folio)) {
> - /* order-1 is not supported for anonymous THP. */
> - VM_WARN_ONCE(warns && new_order == 1,
> - "Cannot split to order-1 folio");
> - return new_order != 1;
> - } else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
> - !mapping_large_folio_support(folio->mapping)) {
> - /*
> - * No split if the file system does not support large folio.
> - * Note that we might still have THPs in such mappings due to
> - * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
> - * does not actually support large folios properly.
> - */
> - VM_WARN_ONCE(warns,
> - "Cannot split file folio to non-0 order");
> - return false;
> - }
> -
> - /* Only swapping a whole PMD-mapped folio is supported */
> - if (folio_test_swapcache(folio)) {
> - VM_WARN_ONCE(warns,
> - "Cannot split swapcache folio to non-0 order");
> - return false;
> - }
> + bool need_check = uniform_split ? new_order : true;
>
> - return true;
> -}
> -
> -/* See comments in non_uniform_split_supported() */
> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns)
> -{
> if (folio_test_anon(folio)) {
> + /* order-1 is not supported for anonymous THP. */
> VM_WARN_ONCE(warns && new_order == 1,
> "Cannot split to order-1 folio");
> return new_order != 1;
> - } else if (new_order) {
> + } else if (need_check) {
> if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
> !mapping_large_folio_support(folio->mapping)) {
> + /*
> + * No split if the file system does not support large
> + * folio. Note that we might still have THPs in such
> + * mappings due to CONFIG_READ_ONLY_THP_FOR_FS. But in
> + * that case, the mapping does not actually support
> + * large folios properly.
> + */
Blindly copying the comment here causes confusion. The checks for
uniform and non uniform look similar but this comment is specific
for non uniform split. The “No split” only applies to non uniform
split, but for uniform split as long as order is 0, the folio
can be split.
Please rewrite this comment to clarify both uniform and non uniform
cases.
> VM_WARN_ONCE(warns,
> "Cannot split file folio to non-0 order");
> return false;
> }
> }
>
> - if (new_order && folio_test_swapcache(folio)) {
> + /* Only swapping a whole PMD-mapped folio is supported */
The same issue like the above one. Please rewrite this comment as well.
> + if (need_check && folio_test_swapcache(folio)) {
> VM_WARN_ONCE(warns,
> "Cannot split swapcache folio to non-0 order");
> return false;
> @@ -3779,11 +3758,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> if (new_order >= old_order)
> return -EINVAL;
>
> - if (uniform_split && !uniform_split_supported(folio, new_order, true))
> - return -EINVAL;
> -
> - if (!uniform_split &&
> - !non_uniform_split_supported(folio, new_order, true))
> + if (!folio_split_supported(folio, new_order, uniform_split, /* warn = */ true))
> return -EINVAL;
>
> is_hzp = is_huge_zero_folio(folio);
> --
> 2.34.1
Best Regards,
Yan, Zi
* Re: [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-03 16:34 ` Zi Yan
@ 2025-11-04 0:36 ` Wei Yang
2025-11-04 2:30 ` Zi Yan
0 siblings, 1 reply; 13+ messages in thread
From: Wei Yang @ 2025-11-04 0:36 UTC
To: Zi Yan
Cc: Wei Yang, akpm, david, lorenzo.stoakes, baolin.wang,
Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang,
linux-mm
On Mon, Nov 03, 2025 at 11:34:47AM -0500, Zi Yan wrote:
>On 31 Oct 2025, at 22:11, Wei Yang wrote:
>
>> The functions uniform_split_supported() and
>> non_uniform_split_supported() share significantly similar logic.
>>
>> The only functional difference is that uniform_split_supported()
>> includes an additional check on the requested @new_order before
>
>Please elaborate on what the check is for.
>
>> proceeding with further validation.
How about this:
The only functional difference is that uniform_split_supported() includes an
additional check on the requested @new_order and split type to confirm support
from file system or swap cache.
>>
>> This commit unifies the logic by introducing a new variable,
>> @need_check, which is conditionally set based on whether a uniform
>> split is requested. This allows us to merge the two functions into
>> a single, combined helper, removing redundant code and simplifying
>> the split support checking mechanism.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> ---
>> include/linux/huge_mm.h | 8 +++---
>> mm/huge_memory.c | 55 +++++++++++------------------------------
>> 2 files changed, 18 insertions(+), 45 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index cbb2243f8e56..79343809a7be 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>> unsigned int new_order, bool unmapped);
>> int min_order_for_split(struct folio *folio);
>> int split_folio_to_list(struct folio *folio, struct list_head *list);
>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns);
>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns);
>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>> + bool uniform_split, bool warns);
>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>> struct list_head *list);
>>
>> @@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
>> static inline int try_folio_split_to_order(struct folio *folio,
>> struct page *page, unsigned int new_order)
>> {
>> - if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
>> + if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
>> return split_huge_page_to_order(&folio->page, new_order);
>> return folio_split(folio, new_order, page, NULL);
>> }
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index d1fa0d2d9b44..f6d2cb2a5ca0 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3673,55 +3673,34 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> return 0;
>> }
>>
>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns)
>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>> + bool uniform_split, bool warns)
>> {
>> - if (folio_test_anon(folio)) {
>> - /* order-1 is not supported for anonymous THP. */
>> - VM_WARN_ONCE(warns && new_order == 1,
>> - "Cannot split to order-1 folio");
>> - return new_order != 1;
>> - } else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> - !mapping_large_folio_support(folio->mapping)) {
>> - /*
>> - * No split if the file system does not support large folio.
>> - * Note that we might still have THPs in such mappings due to
>> - * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
>> - * does not actually support large folios properly.
>> - */
>> - VM_WARN_ONCE(warns,
>> - "Cannot split file folio to non-0 order");
>> - return false;
>> - }
>> -
>> - /* Only swapping a whole PMD-mapped folio is supported */
>> - if (folio_test_swapcache(folio)) {
>> - VM_WARN_ONCE(warns,
>> - "Cannot split swapcache folio to non-0 order");
>> - return false;
>> - }
>> + bool need_check = uniform_split ? new_order : true;
>>
>> - return true;
>> -}
>> -
>> -/* See comments in non_uniform_split_supported() */
>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns)
>> -{
>> if (folio_test_anon(folio)) {
>> + /* order-1 is not supported for anonymous THP. */
>> VM_WARN_ONCE(warns && new_order == 1,
>> "Cannot split to order-1 folio");
>> return new_order != 1;
>> - } else if (new_order) {
>> + } else if (need_check) {
>> if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> !mapping_large_folio_support(folio->mapping)) {
>> + /*
>> + * No split if the file system does not support large
>> + * folio. Note that we might still have THPs in such
>> + * mappings due to CONFIG_READ_ONLY_THP_FOR_FS. But in
>> + * that case, the mapping does not actually support
>> + * large folios properly.
>> + */
>
>Blindly copying the comment here causes confusion. The checks for
>uniform and non uniform look similar but this comment is specific
>for non uniform split. The “No split” only applies to non uniform
>split, but for uniform split as long as order is 0, the folio
>can be split.
>
Per my understanding, "no split" applies to both uniform/non uniform split
when new_order is not 0.
So the logic here is:
* uniform split && !new_order: no more check
* non uniform split: do the check regardless of the new_order
But I lack some background knowledge, so if this is wrong, please correct me.
>Please rewrite this comment to clarify both uniform and non uniform
>cases.
Not sure this one would be better?
We can always split a folio down to a single page (new_order == 0) directly.
For any other scenario
* uniform split targeting a large folio (new_order > 0)
* any non-uniform split
we must confirm that the file system supports large folios.
Note that we might still have THPs in such mappings due to
CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping does not actually
support large folios properly.
>> VM_WARN_ONCE(warns,
>> "Cannot split file folio to non-0 order");
>> return false;
>> }
>> }
>>
>> - if (new_order && folio_test_swapcache(folio)) {
>> + /* Only swapping a whole PMD-mapped folio is supported */
>
>The same issue like the above one. Please rewrite this comment as well.
>
How about this one:
swapcache folio could only be split to order 0
For non-uniform split or uniform split targeting a large folio, return
false.
>> + if (need_check && folio_test_swapcache(folio)) {
>> VM_WARN_ONCE(warns,
>> "Cannot split swapcache folio to non-0 order");
>> return false;
>> @@ -3779,11 +3758,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>> if (new_order >= old_order)
>> return -EINVAL;
>>
>> - if (uniform_split && !uniform_split_supported(folio, new_order, true))
>> - return -EINVAL;
>> -
>> - if (!uniform_split &&
>> - !non_uniform_split_supported(folio, new_order, true))
>> + if (!folio_split_supported(folio, new_order, uniform_split, /* warn = */ true))
>> return -EINVAL;
>>
>> is_hzp = is_huge_zero_folio(folio);
>> --
>> 2.34.1
>
>
>Best Regards,
>Yan, Zi
--
Wei Yang
Help you, Help me
* Re: [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-03 11:50 ` David Hildenbrand (Red Hat)
@ 2025-11-04 0:41 ` Wei Yang
2025-11-04 9:05 ` David Hildenbrand (Red Hat)
0 siblings, 1 reply; 13+ messages in thread
From: Wei Yang @ 2025-11-04 0:41 UTC (permalink / raw)
To: David Hildenbrand (Red Hat)
Cc: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On Mon, Nov 03, 2025 at 12:50:32PM +0100, David Hildenbrand (Red Hat) wrote:
>On 01.11.25 03:11, Wei Yang wrote:
>> The functions uniform_split_supported() and
>> non_uniform_split_supported() share significantly similar logic.
>>
>> The only functional difference is that uniform_split_supported()
>> includes an additional check on the requested @new_order before
>> proceeding with further validation.
>>
>> This commit unifies the logic by introducing a new variable,
>> @need_check, which is conditionally set based on whether a uniform
>> split is requested. This allows us to merge the two functions into
>> a single, combined helper, removing redundant code and simplifying
>> the split support checking mechanism.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> ---
>> include/linux/huge_mm.h | 8 +++---
>> mm/huge_memory.c | 55 +++++++++++------------------------------
>> 2 files changed, 18 insertions(+), 45 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index cbb2243f8e56..79343809a7be 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>> unsigned int new_order, bool unmapped);
>> int min_order_for_split(struct folio *folio);
>> int split_folio_to_list(struct folio *folio, struct list_head *list);
>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns);
>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns);
>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>> + bool uniform_split, bool warns);
>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>> struct list_head *list);
>> @@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
>> static inline int try_folio_split_to_order(struct folio *folio,
>> struct page *page, unsigned int new_order)
>> {
>> - if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
>> + if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
>> return split_huge_page_to_order(&folio->page, new_order);
>> return folio_split(folio, new_order, page, NULL);
>> }
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index d1fa0d2d9b44..f6d2cb2a5ca0 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3673,55 +3673,34 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> return 0;
>> }
>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns)
>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>> + bool uniform_split, bool warns)
>
>Likely a simple
>
>enum split_type {
> SPLIT_TYPE_UNIFORM,
> SPLIT_TYPE_NON_UNIFORM, // or SPLIT_TYPE_BUDDY?
>};
>
>Could make invocations easier to read.
>
Hmm... if so we may need to change the caller too.
>> {
>> - if (folio_test_anon(folio)) {
>> - /* order-1 is not supported for anonymous THP. */
>> - VM_WARN_ONCE(warns && new_order == 1,
>> - "Cannot split to order-1 folio");
>> - return new_order != 1;
>> - } else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> - !mapping_large_folio_support(folio->mapping)) {
>> - /*
>> - * No split if the file system does not support large folio.
>> - * Note that we might still have THPs in such mappings due to
>> - * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
>> - * does not actually support large folios properly.
>> - */
>> - VM_WARN_ONCE(warns,
>> - "Cannot split file folio to non-0 order");
>> - return false;
>> - }
>> -
>> - /* Only swapping a whole PMD-mapped folio is supported */
>> - if (folio_test_swapcache(folio)) {
>> - VM_WARN_ONCE(warns,
>> - "Cannot split swapcache folio to non-0 order");
>> - return false;
>> - }
>> + bool need_check = uniform_split ? new_order : true;
>
>(could be const)
>
>Not really a great name for that variable (what needs a check?). Maybe
>simply ...
>
>
>
>> - return true;
>> -}
>> -
>> -/* See comments in non_uniform_split_supported() */
>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns)
>> -{
>> if (folio_test_anon(folio)) {
>> + /* order-1 is not supported for anonymous THP. */
>> VM_WARN_ONCE(warns && new_order == 1,
>> "Cannot split to order-1 folio");
>> return new_order != 1;
>> - } else if (new_order) {
>> + } else if (need_check) {
>
>... change this to ("!uniform_split || new_order")
>
Looks simple, thanks.
>} else if (!uniform_split || new_order) {
--
Wei Yang
Help you, Help me
* Re: [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-04 0:36 ` Wei Yang
@ 2025-11-04 2:30 ` Zi Yan
2025-11-04 7:53 ` Wei Yang
0 siblings, 1 reply; 13+ messages in thread
From: Zi Yan @ 2025-11-04 2:30 UTC
To: Wei Yang
Cc: akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On 3 Nov 2025, at 19:36, Wei Yang wrote:
> On Mon, Nov 03, 2025 at 11:34:47AM -0500, Zi Yan wrote:
>> On 31 Oct 2025, at 22:11, Wei Yang wrote:
>>
>>> The functions uniform_split_supported() and
>>> non_uniform_split_supported() share significantly similar logic.
>>>
>>> The only functional difference is that uniform_split_supported()
>>> includes an additional check on the requested @new_order before
>>
>> Please elaborate on what the check is for.
>>
>>> proceeding with further validation.
>
> How about this:
>
> The only functional difference is that uniform_split_supported() includes an
> additional check on the requested @new_order and split type to confirm support
> from file system or swap cache.
You are describing what the code does instead of its actual meaning.
You need to describe:
1. what is the difference between uniform split and non-uniform split?
2. what order does what file systems support? Only order-0.
3. what order does swap cache support? Only order-0.
4. why can uniform split be used to split the large folios from cases 2 or
 3 to order-0?
5. why can non-uniform split not be used to split the large folios from
 cases 2 or 3 to order-0?
6. The logic similarity between uniform_split_supported() and
 non_uniform_split_supported(), and how they can be combined, with a
 detailed comment.
>
>>>
>>> This commit unifies the logic by introducing a new variable,
>>> @need_check, which is conditionally set based on whether a uniform
>>> split is requested. This allows us to merge the two functions into
>>> a single, combined helper, removing redundant code and simplifying
>>> the split support checking mechanism.
>>>
>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>>> Cc: Zi Yan <ziy@nvidia.com>
>>> ---
>>> include/linux/huge_mm.h | 8 +++---
>>> mm/huge_memory.c | 55 +++++++++++------------------------------
>>> 2 files changed, 18 insertions(+), 45 deletions(-)
>>>
>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>> index cbb2243f8e56..79343809a7be 100644
>>> --- a/include/linux/huge_mm.h
>>> +++ b/include/linux/huge_mm.h
>>> @@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>>> unsigned int new_order, bool unmapped);
>>> int min_order_for_split(struct folio *folio);
>>> int split_folio_to_list(struct folio *folio, struct list_head *list);
>>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>>> - bool warns);
>>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>>> - bool warns);
>>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>>> + bool uniform_split, bool warns);
>>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>>> struct list_head *list);
>>>
>>> @@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
>>> static inline int try_folio_split_to_order(struct folio *folio,
>>> struct page *page, unsigned int new_order)
>>> {
>>> - if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
>>> + if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
>>> return split_huge_page_to_order(&folio->page, new_order);
>>> return folio_split(folio, new_order, page, NULL);
>>> }
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index d1fa0d2d9b44..f6d2cb2a5ca0 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -3673,55 +3673,34 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>>> return 0;
>>> }
>>>
>>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>>> - bool warns)
>>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>>> + bool uniform_split, bool warns)
>>> {
>>> - if (folio_test_anon(folio)) {
>>> - /* order-1 is not supported for anonymous THP. */
>>> - VM_WARN_ONCE(warns && new_order == 1,
>>> - "Cannot split to order-1 folio");
>>> - return new_order != 1;
>>> - } else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>>> - !mapping_large_folio_support(folio->mapping)) {
>>> - /*
>>> - * No split if the file system does not support large folio.
>>> - * Note that we might still have THPs in such mappings due to
>>> - * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
>>> - * does not actually support large folios properly.
>>> - */
>>> - VM_WARN_ONCE(warns,
>>> - "Cannot split file folio to non-0 order");
>>> - return false;
>>> - }
>>> -
>>> - /* Only swapping a whole PMD-mapped folio is supported */
>>> - if (folio_test_swapcache(folio)) {
>>> - VM_WARN_ONCE(warns,
>>> - "Cannot split swapcache folio to non-0 order");
>>> - return false;
>>> - }
>>> + bool need_check = uniform_split ? new_order : true;
>>>
>>> - return true;
>>> -}
>>> -
>>> -/* See comments in non_uniform_split_supported() */
>>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>>> - bool warns)
>>> -{
>>> if (folio_test_anon(folio)) {
>>> + /* order-1 is not supported for anonymous THP. */
>>> VM_WARN_ONCE(warns && new_order == 1,
>>> "Cannot split to order-1 folio");
>>> return new_order != 1;
>>> - } else if (new_order) {
>>> + } else if (need_check) {
>>> if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>>> !mapping_large_folio_support(folio->mapping)) {
>>> + /*
>>> + * No split if the file system does not support large
>>> + * folio. Note that we might still have THPs in such
>>> + * mappings due to CONFIG_READ_ONLY_THP_FOR_FS. But in
>>> + * that case, the mapping does not actually support
>>> + * large folios properly.
>>> + */
>>
>> Blindly copying the comment here causes confusion. The checks for
>> uniform and non uniform look similar but this comment is specific
>> for non uniform split. The “No split” only applies to non uniform
>> split, but for uniform split as long as order is 0, the folio
>> can be split.
>>
>
> Per my understanding, "no split" applies to both uniform/non uniform split
> when new_order is not 0.
Not exactly. For a non-uniform split, it is disallowed for any new_order value.
>
> So the logic here is:
>
> * uniform split && !new_order: no more check
> * non uniform split: do the check regardless of the new_order
>
> But I lack some background knowledge, so if this is wrong, please correct me.
You are changing the code, so please do your homework first. Or you can
ask. After going through the above 6 bullet points, you should have the
background knowledge.
>
>> Please rewrite this comment to clarify both uniform and non uniform
>> cases.
>
> Not sure this one would be better?
>
> We can always split a folio down to a single page (new_order == 0) directly.
Not always, the exceptions are listed below.
>
> For any other scenario
> * uniform split targeting a large folio (new_order > 0)
> * any non-uniform split
> we must confirm that the file system supports large folios.
>
> Note that we might still have THPs in such mappings due to
> CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping does not actually
> support large folios properly.
These filesystems do not support large folios except THPs created by khugepaged when CONFIG_READ_ONLY_THP_FOR_FS is enabled.
>
>>> VM_WARN_ONCE(warns,
>>> "Cannot split file folio to non-0 order");
>>> return false;
>>> }
>>> }
>>>
>>> - if (new_order && folio_test_swapcache(folio)) {
>>> + /* Only swapping a whole PMD-mapped folio is supported */
>>
>> The same issue like the above one. Please rewrite this comment as well.
>>
>
> How about this one:
>
> swapcache folio could only be split to order 0
This looks good.
>
> For non-uniform split or uniform split targeting a large folio, return
> false.
You are just describing the code.
non-uniform split creates after-split folios with orders from
folio_order(folio) - 1 to new_order, making it not suitable for any swapcache
folio split. Only uniform split to order-0 can be used here.
>
>>> + if (need_check && folio_test_swapcache(folio)) {
>>> VM_WARN_ONCE(warns,
>>> "Cannot split swapcache folio to non-0 order");
>>> return false;
>>> @@ -3779,11 +3758,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>> if (new_order >= old_order)
>>> return -EINVAL;
>>>
>>> - if (uniform_split && !uniform_split_supported(folio, new_order, true))
>>> - return -EINVAL;
>>> -
>>> - if (!uniform_split &&
>>> - !non_uniform_split_supported(folio, new_order, true))
>>> + if (!folio_split_supported(folio, new_order, uniform_split, /* warn = */ true))
>>> return -EINVAL;
>>>
>>> is_hzp = is_huge_zero_folio(folio);
>>> --
>>> 2.34.1
>>
>>
>> Best Regards,
>> Yan, Zi
>
> --
> Wei Yang
> Help you, Help me
Best Regards,
Yan, Zi
* Re: [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-04 2:30 ` Zi Yan
@ 2025-11-04 7:53 ` Wei Yang
2025-11-05 2:14 ` Zi Yan
0 siblings, 1 reply; 13+ messages in thread
From: Wei Yang @ 2025-11-04 7:53 UTC
To: Zi Yan
Cc: Wei Yang, akpm, david, lorenzo.stoakes, baolin.wang,
Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang,
linux-mm
On Mon, Nov 03, 2025 at 09:30:03PM -0500, Zi Yan wrote:
>On 3 Nov 2025, at 19:36, Wei Yang wrote:
>
>> On Mon, Nov 03, 2025 at 11:34:47AM -0500, Zi Yan wrote:
>>> On 31 Oct 2025, at 22:11, Wei Yang wrote:
>>>
>>>> The functions uniform_split_supported() and
>>>> non_uniform_split_supported() share significantly similar logic.
>>>>
>>>> The only functional difference is that uniform_split_supported()
>>>> includes an additional check on the requested @new_order before
>>>
>>> Please elaborate on what the check is for.
>>>
>>>> proceeding with further validation.
>>
>> How about this:
>>
>> The only functional difference is that uniform_split_supported() includes an
>> additional check on the requested @new_order and split type to confirm support
>> from file system or swap cache.
>
>You are describing what the code does instead of its actual meaning.
>You need to describe:
>1. what is the difference between uniform split and non-uniform split?
>2. what order does what file systems support? Only order-0.
>3. what order does swap cache support? Only order-0.
>4. why can uniform split be used to split the large folios from cases 2 or
> 3 to order-0?
>5. why can non-uniform split not be used to split the large folios from
> cases 2 or 3 to order-0?
>6. The logic similarity between uniform_split_supported() and
> non_uniform_split_supported(), and how they can be combined, with a
> detailed comment.
>
Here is the updated version:
The only functional difference is that uniform_split_supported() includes an
additional check on the requested @new_order.
The reason for this check comes from the following two aspects:
* some file systems and the swap cache only support order-0 folios
* the behavioral difference between uniform and non-uniform split
The behavioral difference between uniform and non-uniform split:
* uniform split splits the folio directly to @new_order
* non-uniform split creates after-split folios with orders from
folio_order(folio) - 1 down to @new_order.
This means that for a non-uniform split, or a uniform split with a non-zero
@new_order, we should check the file system and swap cache support
respectively.
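For example, for an order-9 folio and new_order = 3 (a sketch derived from
the orders above; the non-uniform split repeatedly halves around @page):
	uniform split:      64 folios of order 3
	non-uniform split:  one folio each of order 8, 7, 6, 5, 4, plus
	                    two folios of order 3 (one containing @page)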
>>
>>>>
>>>> This commit unifies the logic by introducing a new variable,
>>>> @need_check, which is conditionally set based on whether a uniform
>>>> split is requested. This allows us to merge the two functions into
>>>> a single, combined helper, removing redundant code and simplifying
>>>> the split support checking mechanism.
>>>>
>>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>> ---
>>>> include/linux/huge_mm.h | 8 +++---
>>>> mm/huge_memory.c | 55 +++++++++++------------------------------
>>>> 2 files changed, 18 insertions(+), 45 deletions(-)
>>>>
>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>>> index cbb2243f8e56..79343809a7be 100644
>>>> --- a/include/linux/huge_mm.h
>>>> +++ b/include/linux/huge_mm.h
>>>> @@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>>>> unsigned int new_order, bool unmapped);
>>>> int min_order_for_split(struct folio *folio);
>>>> int split_folio_to_list(struct folio *folio, struct list_head *list);
>>>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>>>> - bool warns);
>>>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>>>> - bool warns);
>>>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>>>> + bool uniform_split, bool warns);
>>>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>>>> struct list_head *list);
>>>>
>>>> @@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
>>>> static inline int try_folio_split_to_order(struct folio *folio,
>>>> struct page *page, unsigned int new_order)
>>>> {
>>>> - if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
>>>> + if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
>>>> return split_huge_page_to_order(&folio->page, new_order);
>>>> return folio_split(folio, new_order, page, NULL);
>>>> }
>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>> index d1fa0d2d9b44..f6d2cb2a5ca0 100644
>>>> --- a/mm/huge_memory.c
>>>> +++ b/mm/huge_memory.c
>>>> @@ -3673,55 +3673,34 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>>>> return 0;
>>>> }
>>>>
>>>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>>>> - bool warns)
>>>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>>>> + bool uniform_split, bool warns)
>>>> {
>>>> - if (folio_test_anon(folio)) {
>>>> - /* order-1 is not supported for anonymous THP. */
>>>> - VM_WARN_ONCE(warns && new_order == 1,
>>>> - "Cannot split to order-1 folio");
>>>> - return new_order != 1;
>>>> - } else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>>>> - !mapping_large_folio_support(folio->mapping)) {
>>>> - /*
>>>> - * No split if the file system does not support large folio.
>>>> - * Note that we might still have THPs in such mappings due to
>>>> - * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
>>>> - * does not actually support large folios properly.
>>>> - */
>>>> - VM_WARN_ONCE(warns,
>>>> - "Cannot split file folio to non-0 order");
>>>> - return false;
>>>> - }
>>>> -
>>>> - /* Only swapping a whole PMD-mapped folio is supported */
>>>> - if (folio_test_swapcache(folio)) {
>>>> - VM_WARN_ONCE(warns,
>>>> - "Cannot split swapcache folio to non-0 order");
>>>> - return false;
>>>> - }
>>>> + bool need_check = uniform_split ? new_order : true;
>>>>
>>>> - return true;
>>>> -}
>>>> -
>>>> -/* See comments in non_uniform_split_supported() */
>>>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>>>> - bool warns)
>>>> -{
>>>> if (folio_test_anon(folio)) {
>>>> + /* order-1 is not supported for anonymous THP. */
>>>> VM_WARN_ONCE(warns && new_order == 1,
>>>> "Cannot split to order-1 folio");
>>>> return new_order != 1;
>>>> - } else if (new_order) {
>>>> + } else if (need_check) {
>>>> if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>>>> !mapping_large_folio_support(folio->mapping)) {
>>>> + /*
>>>> + * No split if the file system does not support large
>>>> + * folio. Note that we might still have THPs in such
>>>> + * mappings due to CONFIG_READ_ONLY_THP_FOR_FS. But in
>>>> + * that case, the mapping does not actually support
>>>> + * large folios properly.
>>>> + */
>>>
>>> Blindly copying the comment here causes confusion. The checks for
>>> uniform and non-uniform look similar, but this comment is specific
>>> to non-uniform split. The “No split” only applies to non-uniform
>>> split; for uniform split, as long as the order is 0, the folio
>>> can be split.
>>>
>>
>> Per my understanding, "no split" applies to both uniform and non-uniform
>> split when new_order is not 0.
>
>Not exactly. For non-uniform split, no new_order value is allowed.
>
>>
>> So the logic here is:
>>
>> * uniform split && !new_order: no more check
>> * non-uniform split: do the check regardless of new_order
>>
>> But I lack some background knowledge, so if this is wrong, please correct me.
>
>You are changing the code, please do your homework first. Or you can
>ask. After going through the above 6 bullet points, you should have the
>background knowledge.
>
>>
>>> Please rewrite this comment to clarify both uniform and non uniform
>>> cases.
>>
>> Not sure this one would be better?
>>
>> We can always split a folio down to a single page (new_order == 0) directly.
>
>Not always, the exceptions are listed below.
>
I mean uniform split to order-0; maybe the above line is misleading and
suggests non-uniform split?
>>
>> For any other scenario
>> * uniform split targeting a large folio (new_order > 0)
>> * any non-uniform split
>> we must confirm that the file system supports large folios.
>>
>> Note that we might still have THPs in such mappings due to
>> CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping does not actually
>> support large folios properly.
>
>These filesystems do not support large folios, except for THPs created by khugepaged when CONFIG_READ_ONLY_THP_FOR_FS is enabled.
>
I want to confirm whether I understand correctly.
We have two kinds of file systems:
a) those that support large folios
b) those that do not support large folios
For a), we can split a large folio down to min_order_for_split(), uniformly or
non-uniformly.
For b), normally there is no large folio. A large folio only appears when it is
collapsed by khugepaged with CONFIG_READ_ONLY_THP_FOR_FS enabled. So we can
only split it to order-0 folios in this case.
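
If that understanding is right, the two cases can be modeled by a small
user-space sketch (illustrative only; file_split_allowed() and the
large_folio_fs flag are hypothetical stand-ins, the latter for
mapping_large_folio_support(), and the min_order_for_split() constraint
is ignored for brevity):

#include <stdbool.h>

static bool file_split_allowed(bool large_folio_fs, bool uniform_split,
                               unsigned int new_order)
{
        if (large_folio_fs)             /* case a) */
                return true;            /* any split type or target order */
        /*
         * case b): only THPs from khugepaged exist here; a split must
         * produce order-0 folios directly, i.e. uniform split to 0.
         */
        return uniform_split && new_order == 0;
}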
Not sure this one would be better?
We can always split a folio down to a single page (new_order == 0)
uniformly.
For any other scenario
* uniform split targeting a large folio (new_order > 0)
* any non-uniform split
we must confirm that the file system supports large folios.
Note that we might still have THPs in such mappings, which are created by
khugepaged when CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that case,
the mapping does not actually support large folios properly.
>>>> VM_WARN_ONCE(warns,
>>>> "Cannot split file folio to non-0 order");
>>>> return false;
>>>> }
>>>> }
>>>>
>>>> - if (new_order && folio_test_swapcache(folio)) {
>>>> + /* Only swapping a whole PMD-mapped folio is supported */
>>>
>>> The same issue like the above one. Please rewrite this comment as well.
>>>
>>
>> How about this one:
>>
>> swapcache folio can only be split to order 0
>
>This looks good.
>
>>
>> For non-uniform split or uniform split targeting a large folio, return
>> false.
>
>You are just describing the code.
>
>non-uniform split creates after-split folios with orders from
>folio_order(folio) - 1 to new_order, making it not suitable for any swapcache
>folio split. Only uniform split to order-0 can be used here.
>
Below is the updated version:
swapcache folio can only be split to order 0
non-uniform split creates after-split folios with orders from
folio_order(folio) - 1 to new_order, making it not suitable for any
swapcache folio split. Only uniform split to order-0 can be used here.
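
As a concrete illustration of why only uniform split to order-0 works for
swapcache folios, here is a small sketch that lists the after-split orders
of a non-uniform split (print_after_split_orders() is a hypothetical
helper, not kernel code):

#include <stdio.h>

/*
 * A non-uniform (buddy-style) split from @old_order down to @new_order
 * leaves one folio at each intermediate order plus two at @new_order,
 * e.g. 9 -> 0 gives 8 7 6 5 4 3 2 1 0 0. Everything above @new_order is
 * still a large folio, which the swap cache cannot handle.
 */
static void print_after_split_orders(unsigned int old_order,
                                     unsigned int new_order)
{
        unsigned int order;

        for (order = old_order; order > new_order; order--)
                printf("%u ", order - 1);
        printf("%u\n", new_order);      /* the buddy kept at @new_order */
}

int main(void)
{
        print_after_split_orders(9, 0); /* prints: 8 7 6 5 4 3 2 1 0 0 */
        return 0;
}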
>>
>>>> + if (need_check && folio_test_swapcache(folio)) {
>>>> VM_WARN_ONCE(warns,
>>>> "Cannot split swapcache folio to non-0 order");
>>>> return false;
>>>> @@ -3779,11 +3758,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>> if (new_order >= old_order)
>>>> return -EINVAL;
>>>>
>>>> - if (uniform_split && !uniform_split_supported(folio, new_order, true))
>>>> - return -EINVAL;
>>>> -
>>>> - if (!uniform_split &&
>>>> - !non_uniform_split_supported(folio, new_order, true))
>>>> + if (!folio_split_supported(folio, new_order, uniform_split, /* warn = */ true))
>>>> return -EINVAL;
>>>>
>>>> is_hzp = is_huge_zero_folio(folio);
>>>> --
>>>> 2.34.1
>>>
>>>
>>> Best Regards,
>>> Yan, Zi
>>
>> --
>> Wei Yang
>> Help you, Help me
>
>
>Best Regards,
>Yan, Zi
--
Wei Yang
Help you, Help me
* Re: [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-04 0:41 ` Wei Yang
@ 2025-11-04 9:05 ` David Hildenbrand (Red Hat)
2025-11-04 13:31 ` Wei Yang
0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-11-04 9:05 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
>> Likely a simple
>>
>> enum split_type {
>> SPLIT_TYPE_UNIFORM,
>> SPLIT_TYPE_NON_UNIFORM, // or SPLIT_TYPE_BUDDY?
>> };
>>
>> Could make invocations easier to read.
>>
>
> Hmm... if so we may need to change the caller too.
Probably worth a separate cleanup patch instead of doing it in this patch.
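
For reference, a sketch of what such a follow-up cleanup might look like
(assuming the enum spelling suggested above; illustrative only, not part
of this patch):

enum split_type {
        SPLIT_TYPE_UNIFORM,
        SPLIT_TYPE_NON_UNIFORM,
};

bool folio_split_supported(struct folio *folio, unsigned int new_order,
                           enum split_type split_type, bool warns);

/* a call site then reads without boolean-argument guessing: */
if (!folio_split_supported(folio, new_order, SPLIT_TYPE_NON_UNIFORM,
                           /* warns= */ false))
        return split_huge_page_to_order(&folio->page, new_order);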
--
Cheers
David
* Re: [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-04 9:05 ` David Hildenbrand (Red Hat)
@ 2025-11-04 13:31 ` Wei Yang
0 siblings, 0 replies; 13+ messages in thread
From: Wei Yang @ 2025-11-04 13:31 UTC (permalink / raw)
To: David Hildenbrand (Red Hat)
Cc: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On Tue, Nov 04, 2025 at 10:05:49AM +0100, David Hildenbrand (Red Hat) wrote:
>> > Likely a simple
>> >
>> > enum split_type {
>> > SPLIT_TYPE_UNIFORM,
>> > SPLIT_TYPE_NON_UNIFORM, // or SPLIT_TYPE_BUDDY?
>> > };
>> >
>> > Could make invocations easier to read.
>> >
>>
>> Hmm... if so we may need to change the caller too.
>
>Probably worth a separate cleanup patch instead of doing it in this patch.
>
Got it.
>--
>Cheers
>
>David
--
Wei Yang
Help you, Help me
* Re: [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-04 7:53 ` Wei Yang
@ 2025-11-05 2:14 ` Zi Yan
2025-11-05 2:44 ` Wei Yang
0 siblings, 1 reply; 13+ messages in thread
From: Zi Yan @ 2025-11-05 2:14 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On 4 Nov 2025, at 2:53, Wei Yang wrote:
> On Mon, Nov 03, 2025 at 09:30:03PM -0500, Zi Yan wrote:
>> On 3 Nov 2025, at 19:36, Wei Yang wrote:
>>
>>> On Mon, Nov 03, 2025 at 11:34:47AM -0500, Zi Yan wrote:
>>>> On 31 Oct 2025, at 22:11, Wei Yang wrote:
>>>>
>>>>> The functions uniform_split_supported() and
>>>>> non_uniform_split_supported() share significantly similar logic.
>>>>>
>>>>> The only functional difference is that uniform_split_supported()
>>>>> includes an additional check on the requested @new_order before
>>>>
>>>> Please elaborate on what the check is for.
>>>>
>>>>> proceeding with further validation.
>>>
>>> How about this:
>>>
>>> The only functional difference is that uniform_split_supported() includes an
>>> additional check on the requested @new_order and split type to confirm support
>>> from the file system or swap cache.
>>
>> You are describing what the code does instead of its actual meaning.
>> You need to describe:
>> 1. what is the difference between uniform split and non-uniform split?
>> 2. what orders do such file systems support? Only order-0.
>> 3. what order does the swap cache support? Only order-0.
>> 4. why can uniform split be used to split the large folios in 2 or 3
>> down to order-0?
>> 5. why can non-uniform split not be used to split the large folios in 2
>> or 3 down to order-0?
>> 6. the logic similarity between uniform_split_supported() and
>> non_uniform_split_supported(), and how they can be combined with a
>> detailed comment.
>>
>
> Here is the updated version:
>
> The only functional difference is that uniform_split_supported() includes an
> additional check on the requested @new_order.
>
> The reason for this check comes from the following two aspects:
>
> * some file systems and the swap cache only support order-0 folios
> * the behavioral difference between uniform/non-uniform split
>
> The behavioral difference between uniform and non-uniform split:
>
> * uniform split splits folio directly to @new_order
> * non-uniform split creates after-split folios with orders from
> folio_order(folio) - 1 to new_order.
>
> This means that for a non-uniform split, or a uniform split to a non-0
> @new_order, we should check file system and swap cache support.
>
LGTM.
>>>
>>>>>
>>>>> This commit unifies the logic by introducing a new variable,
>>>>> @need_check, which is conditionally set based on whether a uniform
>>>>> split is requested. This allows us to merge the two functions into
>>>>> a single, combined helper, removing redundant code and simplifying
>>>>> the split support checking mechanism.
>>>>>
>>>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>>> ---
>>>>> include/linux/huge_mm.h | 8 +++---
>>>>> mm/huge_memory.c | 55 +++++++++++------------------------------
>>>>> 2 files changed, 18 insertions(+), 45 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>>>> index cbb2243f8e56..79343809a7be 100644
>>>>> --- a/include/linux/huge_mm.h
>>>>> +++ b/include/linux/huge_mm.h
>>>>> @@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>>>>> unsigned int new_order, bool unmapped);
>>>>> int min_order_for_split(struct folio *folio);
>>>>> int split_folio_to_list(struct folio *folio, struct list_head *list);
>>>>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>>>>> - bool warns);
>>>>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>>>>> - bool warns);
>>>>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>>>>> + bool uniform_split, bool warns);
>>>>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>>>>> struct list_head *list);
>>>>>
>>>>> @@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
>>>>> static inline int try_folio_split_to_order(struct folio *folio,
>>>>> struct page *page, unsigned int new_order)
>>>>> {
>>>>> - if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
>>>>> + if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
>>>>> return split_huge_page_to_order(&folio->page, new_order);
>>>>> return folio_split(folio, new_order, page, NULL);
>>>>> }
>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>>> index d1fa0d2d9b44..f6d2cb2a5ca0 100644
>>>>> --- a/mm/huge_memory.c
>>>>> +++ b/mm/huge_memory.c
>>>>> @@ -3673,55 +3673,34 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>>>>> return 0;
>>>>> }
>>>>>
>>>>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>>>>> - bool warns)
>>>>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>>>>> + bool uniform_split, bool warns)
>>>>> {
>>>>> - if (folio_test_anon(folio)) {
>>>>> - /* order-1 is not supported for anonymous THP. */
>>>>> - VM_WARN_ONCE(warns && new_order == 1,
>>>>> - "Cannot split to order-1 folio");
>>>>> - return new_order != 1;
>>>>> - } else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>>>>> - !mapping_large_folio_support(folio->mapping)) {
>>>>> - /*
>>>>> - * No split if the file system does not support large folio.
>>>>> - * Note that we might still have THPs in such mappings due to
>>>>> - * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
>>>>> - * does not actually support large folios properly.
>>>>> - */
>>>>> - VM_WARN_ONCE(warns,
>>>>> - "Cannot split file folio to non-0 order");
>>>>> - return false;
>>>>> - }
>>>>> -
>>>>> - /* Only swapping a whole PMD-mapped folio is supported */
>>>>> - if (folio_test_swapcache(folio)) {
>>>>> - VM_WARN_ONCE(warns,
>>>>> - "Cannot split swapcache folio to non-0 order");
>>>>> - return false;
>>>>> - }
>>>>> + bool need_check = uniform_split ? new_order : true;
>>>>>
>>>>> - return true;
>>>>> -}
>>>>> -
>>>>> -/* See comments in non_uniform_split_supported() */
>>>>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>>>>> - bool warns)
>>>>> -{
>>>>> if (folio_test_anon(folio)) {
>>>>> + /* order-1 is not supported for anonymous THP. */
>>>>> VM_WARN_ONCE(warns && new_order == 1,
>>>>> "Cannot split to order-1 folio");
>>>>> return new_order != 1;
>>>>> - } else if (new_order) {
>>>>> + } else if (need_check) {
>>>>> if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>>>>> !mapping_large_folio_support(folio->mapping)) {
>>>>> + /*
>>>>> + * No split if the file system does not support large
>>>>> + * folio. Note that we might still have THPs in such
>>>>> + * mappings due to CONFIG_READ_ONLY_THP_FOR_FS. But in
>>>>> + * that case, the mapping does not actually support
>>>>> + * large folios properly.
>>>>> + */
>>>>
>>>> Blindly copying the comment here causes confusion. The checks for
>>>> uniform and non-uniform look similar, but this comment is specific
>>>> to non-uniform split. The “No split” only applies to non-uniform
>>>> split; for uniform split, as long as the order is 0, the folio
>>>> can be split.
>>>>
>>>
>>> Per my understanding, "no split" applies to both uniform and non-uniform
>>> split when new_order is not 0.
>>
>> Not exactly. For non-uniform split, no new_order value is allowed.
>>
>>>
>>> So the logic here is:
>>>
>>> * uniform split && !new_order: no more check
>>> * non-uniform split: do the check regardless of new_order
>>>
>>> But I lack some background knowledge, so if this is wrong, please correct me.
>>
>> You are changing the code, please do your homework first. Or you can
>> ask. After going through the above 6 bullet points, you should have the
>> background knowledge.
>>
>>>
>>>> Please rewrite this comment to clarify both uniform and non uniform
>>>> cases.
>>>
>>> Not sure this one would be better?
>>>
>>> We can always split a folio down to a single page (new_order == 0) directly.
>>
>> Not always, the exceptions are listed below.
>>
>
> I mean uniform split to order-0; maybe the above line is misleading and
> suggests non-uniform split?
Right. If you mean uniform split, then write uniform split instead.
>
>>>
>>> For any other scenario
>>> * uniform split targeting a large folio (new_order > 0)
>>> * any non-uniform split
>>> we must confirm that the file system supports large folios.
>>>
>>> Note that we might still have THPs in such mappings due to
>>> CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping does not actually
>>> support large folios properly.
>>
>> These filesystems do not support large folios, except for THPs created by khugepaged when CONFIG_READ_ONLY_THP_FOR_FS is enabled.
>>
>
> I want to confirm whether I understand correctly.
>
> We have two kinds of file systems:
>
> a) those that support large folios
> b) those that do not support large folios
>
> For a), we can split a large folio down to min_order_for_split(), uniformly
> or non-uniformly.
>
> For b), normally there is no large folio. A large folio only appears when it
> is collapsed by khugepaged with CONFIG_READ_ONLY_THP_FOR_FS enabled. So we
> can only split it to order-0 folios in this case.
Exactly.
>
> Not sure this one would be better?
>
> We can always split a folio down to a single page (new_order == 0)
> uniformly.
>
> For any other scenario
> * uniform split targeting a large folio (new_order > 0)
> * any non-uniform split
> we must confirm that the file system supports large folios.
>
> Note that we might still have THPs in such mappings, which are created by
> khugepaged when CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that case,
> the mapping does not actually support large folios properly.
LGTM.
>
>
>>>>> VM_WARN_ONCE(warns,
>>>>> "Cannot split file folio to non-0 order");
>>>>> return false;
>>>>> }
>>>>> }
>>>>>
>>>>> - if (new_order && folio_test_swapcache(folio)) {
>>>>> + /* Only swapping a whole PMD-mapped folio is supported */
>>>>
>>>> The same issue like the above one. Please rewrite this comment as well.
>>>>
>>>
>>> How about this one:
>>>
>>> swapcache folio can only be split to order 0
>>
>> This looks good.
>>
>>>
>>> For non-uniform split or uniform split targeting a large folio, return
>>> false.
>>
>> You are just describing the code.
>>
>> non-uniform split creates after-split folios with orders from
>> folio_order(folio) - 1 to new_order, making it not suitable for any swapcache
>> folio split. Only uniform split to order-0 can be used here.
>>
>
> Below is the updated version:
>
> swapcache folio can only be split to order 0
>
> non-uniform split creates after-split folios with orders from
> folio_order(folio) - 1 to new_order, making it not suitable for any
> swapcache folio split. Only uniform split to order-0 can be used here.
LGTM.
Thank you for updating the comments. Looking forward to your updated patch.
>
>>>
>>>>> + if (need_check && folio_test_swapcache(folio)) {
>>>>> VM_WARN_ONCE(warns,
>>>>> "Cannot split swapcache folio to non-0 order");
>>>>> return false;
>>>>> @@ -3779,11 +3758,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>>> if (new_order >= old_order)
>>>>> return -EINVAL;
>>>>>
>>>>> - if (uniform_split && !uniform_split_supported(folio, new_order, true))
>>>>> - return -EINVAL;
>>>>> -
>>>>> - if (!uniform_split &&
>>>>> - !non_uniform_split_supported(folio, new_order, true))
>>>>> + if (!folio_split_supported(folio, new_order, uniform_split, /* warn = */ true))
>>>>> return -EINVAL;
>>>>>
>>>>> is_hzp = is_huge_zero_folio(folio);
>>>>> --
>>>>> 2.34.1
>>>>
>>>>
>>>> Best Regards,
>>>> Yan, Zi
>>>
>>> --
>>> Wei Yang
>>> Help you, Help me
>>
>>
>> Best Regards,
>> Yan, Zi
>
> --
> Wei Yang
> Help you, Help me
Best Regards,
Yan, Zi
* Re: [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-05 2:14 ` Zi Yan
@ 2025-11-05 2:44 ` Wei Yang
0 siblings, 0 replies; 13+ messages in thread
From: Wei Yang @ 2025-11-05 2:44 UTC (permalink / raw)
To: Zi Yan
Cc: Wei Yang, akpm, david, lorenzo.stoakes, baolin.wang,
Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang,
linux-mm
On Tue, Nov 04, 2025 at 09:14:31PM -0500, Zi Yan wrote:
>On 4 Nov 2025, at 2:53, Wei Yang wrote:
[...]
>>>>>
>>>>
>>>> How about this one:
>>>>
>>>> swapcache folio can only be split to order 0
>>>
>>> This looks good.
>>>
>>>>
>>>> For non-uniform split or uniform split targeting a large folio, return
>>>> false.
>>>
>>> You are just describing the code.
>>>
>>> non-uniform split creates after-split folios with orders from
>>> folio_order(folio) - 1 to new_order, making it not suitable for any swapcache
>>> folio split. Only uniform split to order-0 can be used here.
>>>
>>
>> Below is the updated version:
>>
>> swapcache folio can only be split to order 0
>>
>> non-uniform split creates after-split folios with orders from
>> folio_order(folio) - 1 to new_order, making it not suitable for any
>> swapcache folio split. Only uniform split to order-0 can be used here.
>
>LGTM.
>
>Thank you for updating the comments. Looking forward to your updated patch.
Thanks for your time and patience, I will prepare a new version.
--
Wei Yang
Help you, Help me
end of thread
Thread overview: 13+ messages
2025-11-01 2:11 [PATCH] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() Wei Yang
2025-11-03 9:04 ` Dev Jain
2025-11-03 16:19 ` Zi Yan
2025-11-03 11:50 ` David Hildenbrand (Red Hat)
2025-11-04 0:41 ` Wei Yang
2025-11-04 9:05 ` David Hildenbrand (Red Hat)
2025-11-04 13:31 ` Wei Yang
2025-11-03 16:34 ` Zi Yan
2025-11-04 0:36 ` Wei Yang
2025-11-04 2:30 ` Zi Yan
2025-11-04 7:53 ` Wei Yang
2025-11-05 2:14 ` Zi Yan
2025-11-05 2:44 ` Wei Yang