linux-mm.kvack.org archive mirror
* [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported()
@ 2025-11-14  7:57 Wei Yang
  2025-11-14  8:49 ` David Hildenbrand (Red Hat)
  2025-11-19 12:37 ` Dan Carpenter
  0 siblings, 2 replies; 14+ messages in thread
From: Wei Yang @ 2025-11-14  7:57 UTC (permalink / raw)
  To: willy, akpm, david, lorenzo.stoakes, Liam.Howlett, vbabka, rppt,
	surenb, mhocko, ziy, baolin.wang, npache, ryan.roberts, dev.jain,
	baohua, lance.yang
  Cc: linux-fsdevel, linux-mm, Wei Yang

The primary goal of the folio_split_supported() function is to validate
whether a folio is suitable for splitting and to bail out early if it is
not.

Currently, some order-related checks are scattered throughout the
calling code rather than being centralized in folio_split_supported().

This commit moves all remaining order-related validation logic into
folio_split_supported(). This consolidation ensures that the function
serves its intended purpose as the single place where unsupported splits
are rejected, and it improves the clarity and maintainability of the
surrounding code.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 include/linux/pagemap.h |  6 +++
 mm/huge_memory.c        | 88 +++++++++++++++++++++--------------------
 2 files changed, 51 insertions(+), 43 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 09b581c1d878..d8c8df629b90 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -516,6 +516,12 @@ static inline bool mapping_large_folio_support(const struct address_space *mappi
 	return mapping_max_folio_order(mapping) > 0;
 }
 
+static inline bool
+mapping_folio_order_supported(const struct address_space *mapping, unsigned int order)
+{
+	return (order >= mapping_min_folio_order(mapping) && order <= mapping_max_folio_order(mapping));
+}
+
 /* Return the maximum folio size for this pagecache mapping, in bytes. */
 static inline size_t mapping_max_folio_size(const struct address_space *mapping)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0184cd915f44..68faac843527 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3690,34 +3690,58 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 bool folio_split_supported(struct folio *folio, unsigned int new_order,
 		enum split_type split_type, bool warns)
 {
+	const int old_order = folio_order(folio);
+
+	if (new_order >= old_order)
+		return -EINVAL;
+
 	if (folio_test_anon(folio)) {
 		/* order-1 is not supported for anonymous THP. */
 		VM_WARN_ONCE(warns && new_order == 1,
 				"Cannot split to order-1 folio");
 		if (new_order == 1)
 			return false;
-	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
-		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
-		    !mapping_large_folio_support(folio->mapping)) {
-			/*
-			 * We can always split a folio down to a single page
-			 * (new_order == 0) uniformly.
-			 *
-			 * For any other scenario
-			 *   a) uniform split targeting a large folio
-			 *      (new_order > 0)
-			 *   b) any non-uniform split
-			 * we must confirm that the file system supports large
-			 * folios.
-			 *
-			 * Note that we might still have THPs in such
-			 * mappings, which is created from khugepaged when
-			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
-			 * case, the mapping does not actually support large
-			 * folios properly.
-			 */
+	} else {
+		const struct address_space *mapping = NULL;
+
+		mapping = folio->mapping;
+
+		/* Truncated ? */
+		/*
+		 * TODO: add support for large shmem folio in swap cache.
+		 * When shmem is in swap cache, mapping is NULL and
+		 * folio_test_swapcache() is true.
+		 */
+		if (!mapping)
+			return false;
+
+		/*
+		 * We have two types of split:
+		 *
+		 *   a) uniform split: split folio directly to new_order.
+		 *   b) non-uniform split: create after-split folios with
+		 *      orders from (old_order - 1) to new_order.
+		 *
+		 * For file systems, the supported folio order range is
+		 * encoded in mapping->flags and can be checked with
+		 * mapping_folio_order_supported().
+		 *
+		 * With this knowledge, we can tell whether the folio can
+		 * be split to new_order by:
+		 *
+		 *   1. check new_order is supported first
+		 *   2. check (old_order - 1) is supported if
+		 *      SPLIT_TYPE_NON_UNIFORM
+		 */
+		if (!mapping_folio_order_supported(mapping, new_order)) {
+			VM_WARN_ONCE(warns,
+				"Cannot split file folio to unsupported order: %d", new_order);
+			return false;
+		}
+		if (split_type == SPLIT_TYPE_NON_UNIFORM
+		    && !mapping_folio_order_supported(mapping, old_order - 1)) {
 			VM_WARN_ONCE(warns,
-				"Cannot split file folio to non-0 order");
+				"Cannot split file folio to unsupported order: %d", old_order - 1);
 			return false;
 		}
 	}
@@ -3785,9 +3809,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	if (folio != page_folio(split_at) || folio != page_folio(lock_at))
 		return -EINVAL;
 
-	if (new_order >= old_order)
-		return -EINVAL;
-
 	if (!folio_split_supported(folio, new_order, split_type, /* warn = */ true))
 		return -EINVAL;
 
@@ -3819,28 +3840,9 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		}
 		mapping = NULL;
 	} else {
-		unsigned int min_order;
 		gfp_t gfp;
 
 		mapping = folio->mapping;
-
-		/* Truncated ? */
-		/*
-		 * TODO: add support for large shmem folio in swap cache.
-		 * When shmem is in swap cache, mapping is NULL and
-		 * folio_test_swapcache() is true.
-		 */
-		if (!mapping) {
-			ret = -EBUSY;
-			goto out;
-		}
-
-		min_order = mapping_min_folio_order(folio->mapping);
-		if (new_order < min_order) {
-			ret = -EINVAL;
-			goto out;
-		}
-
 		gfp = current_gfp_context(mapping_gfp_mask(mapping) &
 							GFP_RECLAIM_MASK);
 
-- 
2.34.1




* Re: [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported()
  2025-11-14  7:57 [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported() Wei Yang
@ 2025-11-14  8:49 ` David Hildenbrand (Red Hat)
  2025-11-14 12:43   ` Zi Yan
  2025-11-14 15:03   ` Wei Yang
  2025-11-19 12:37 ` Dan Carpenter
  1 sibling, 2 replies; 14+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-11-14  8:49 UTC (permalink / raw)
  To: Wei Yang, willy, akpm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, mhocko, ziy, baolin.wang, npache, ryan.roberts,
	dev.jain, baohua, lance.yang
  Cc: linux-fsdevel, linux-mm

On 14.11.25 08:57, Wei Yang wrote:
> The primary goal of the folio_split_supported() function is to validate
> whether a folio is suitable for splitting and to bail out early if it is
> not.
> 
> Currently, some order-related checks are scattered throughout the
> calling code rather than being centralized in folio_split_supported().
> 
> This commit moves all remaining order-related validation logic into
> folio_split_supported(). This consolidation ensures that the function
> serves its intended purpose as a single point of failure and improves
> the clarity and maintainability of the surrounding code.

Combining the EINVAL handling sounds reasonable.

> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>   include/linux/pagemap.h |  6 +++
>   mm/huge_memory.c        | 88 +++++++++++++++++++++--------------------
>   2 files changed, 51 insertions(+), 43 deletions(-)
> 
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index 09b581c1d878..d8c8df629b90 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -516,6 +516,12 @@ static inline bool mapping_large_folio_support(const struct address_space *mappi
>   	return mapping_max_folio_order(mapping) > 0;
>   }
>   
> +static inline bool
> +mapping_folio_order_supported(const struct address_space *mapping, unsigned int order)
> +{
> +	return (order >= mapping_min_folio_order(mapping) && order <= mapping_max_folio_order(mapping));
> +}

(unnecessary () and unnecessary long line)

Style in the file seems to want:

static inline bool mapping_folio_order_supported(const struct address_space *mapping,
						 unsigned int order)
{
	return order >= mapping_min_folio_order(mapping) &&
	       order <= mapping_max_folio_order(mapping);
}


The mapping_max_folio_order() check is new now. What is the default value of that? Is it always initialized properly?

> +
>   /* Return the maximum folio size for this pagecache mapping, in bytes. */
>   static inline size_t mapping_max_folio_size(const struct address_space *mapping)
>   {
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 0184cd915f44..68faac843527 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3690,34 +3690,58 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>   bool folio_split_supported(struct folio *folio, unsigned int new_order,
>   		enum split_type split_type, bool warns)
>   {
> +	const int old_order = folio_order(folio);

While at it, make it "unsigned int" like new_order.

> +
> +	if (new_order >= old_order)
> +		return -EINVAL;
> +
>   	if (folio_test_anon(folio)) {
>   		/* order-1 is not supported for anonymous THP. */
>   		VM_WARN_ONCE(warns && new_order == 1,
>   				"Cannot split to order-1 folio");
>   		if (new_order == 1)
>   			return false;
> -	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
> -		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
> -		    !mapping_large_folio_support(folio->mapping)) {
> -			/*
> -			 * We can always split a folio down to a single page
> -			 * (new_order == 0) uniformly.
> -			 *
> -			 * For any other scenario
> -			 *   a) uniform split targeting a large folio
> -			 *      (new_order > 0)
> -			 *   b) any non-uniform split
> -			 * we must confirm that the file system supports large
> -			 * folios.
> -			 *
> -			 * Note that we might still have THPs in such
> -			 * mappings, which is created from khugepaged when
> -			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
> -			 * case, the mapping does not actually support large
> -			 * folios properly.
> -			 */
> +	} else {
> +		const struct address_space *mapping = NULL;
> +
> +		mapping = folio->mapping;

const struct address_space *mapping = folio->mapping;

> +
> +		/* Truncated ? */
> +		/*
> +		 * TODO: add support for large shmem folio in swap cache.
> +		 * When shmem is in swap cache, mapping is NULL and
> +		 * folio_test_swapcache() is true.
> +		 */
> +		if (!mapping)
> +			return false;
> +
> +		/*
> +		 * We have two types of split:
> +		 *
> +		 *   a) uniform split: split folio directly to new_order.
> +		 *   b) non-uniform split: create after-split folios with
> +		 *      orders from (old_order - 1) to new_order.
> +		 *
> +		 * For file system, we encodes it supported folio order in
> +		 * mapping->flags, which could be checked by
> +		 * mapping_folio_order_supported().
> +		 *
> +		 * With these knowledge, we can know whether folio support
> +		 * split to new_order by:
> +		 *
> +		 *   1. check new_order is supported first
> +		 *   2. check (old_order - 1) is supported if
> +		 *      SPLIT_TYPE_NON_UNIFORM
> +		 */
> +		if (!mapping_folio_order_supported(mapping, new_order)) {
> +			VM_WARN_ONCE(warns,
> +				"Cannot split file folio to unsupported order: %d", new_order);

Is that really worth a VM_WARN_ONCE? We didn't have that previously IIUC, we would only return
-EINVAL.




-- 
Cheers

David



* Re: [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported()
  2025-11-14  8:49 ` David Hildenbrand (Red Hat)
@ 2025-11-14 12:43   ` Zi Yan
  2025-11-14 14:30     ` Wei Yang
  2025-11-14 15:03   ` Wei Yang
  1 sibling, 1 reply; 14+ messages in thread
From: Zi Yan @ 2025-11-14 12:43 UTC (permalink / raw)
  To: David Hildenbrand (Red Hat), Wei Yang
  Cc: willy, akpm, lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb,
	mhocko, baolin.wang, npache, ryan.roberts, dev.jain, baohua,
	lance.yang, linux-fsdevel, linux-mm

On 14 Nov 2025, at 3:49, David Hildenbrand (Red Hat) wrote:

> On 14.11.25 08:57, Wei Yang wrote:
>> The primary goal of the folio_split_supported() function is to validate
>> whether a folio is suitable for splitting and to bail out early if it is
>> not.
>>
>> Currently, some order-related checks are scattered throughout the
>> calling code rather than being centralized in folio_split_supported().
>>
>> This commit moves all remaining order-related validation logic into
>> folio_split_supported(). This consolidation ensures that the function
>> serves its intended purpose as a single point of failure and improves
>> the clarity and maintainability of the surrounding code.
>
> Combining the EINVAL handling sounds reasonable.
>
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>   include/linux/pagemap.h |  6 +++
>>   mm/huge_memory.c        | 88 +++++++++++++++++++++--------------------
>>   2 files changed, 51 insertions(+), 43 deletions(-)
>>
>> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
>> index 09b581c1d878..d8c8df629b90 100644
>> --- a/include/linux/pagemap.h
>> +++ b/include/linux/pagemap.h
>> @@ -516,6 +516,12 @@ static inline bool mapping_large_folio_support(const struct address_space *mappi
>>   	return mapping_max_folio_order(mapping) > 0;
>>   }
>>  +static inline bool
>> +mapping_folio_order_supported(const struct address_space *mapping, unsigned int order)
>> +{
>> +	return (order >= mapping_min_folio_order(mapping) && order <= mapping_max_folio_order(mapping));
>> +}
>
> (unnecessary () and unnecessary long line)
>
> Style in the file seems to want:
>
> static inline bool mapping_folio_order_supported(const struct address_space *mapping,
> 						 unsigned int order)
> {
> 	return order >= mapping_min_folio_order(mapping) &&
> 	       order <= mapping_max_folio_order(mapping);
> }
>
>
> The mapping_max_folio_order() check is new now. What is the default value of that? Is it always initialized properly?
>
>> +
>>   /* Return the maximum folio size for this pagecache mapping, in bytes. */
>>   static inline size_t mapping_max_folio_size(const struct address_space *mapping)
>>   {
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 0184cd915f44..68faac843527 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3690,34 +3690,58 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>>   bool folio_split_supported(struct folio *folio, unsigned int new_order,
>>   		enum split_type split_type, bool warns)
>>   {
>> +	const int old_order = folio_order(folio);
>
> While at it, make it "unsigned int" like new_order.
>
>> +
>> +	if (new_order >= old_order)
>> +		return -EINVAL;
>> +
>>   	if (folio_test_anon(folio)) {
>>   		/* order-1 is not supported for anonymous THP. */
>>   		VM_WARN_ONCE(warns && new_order == 1,
>>   				"Cannot split to order-1 folio");
>>   		if (new_order == 1)
>>   			return false;
>> -	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>> -		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> -		    !mapping_large_folio_support(folio->mapping)) {
>> -			/*
>> -			 * We can always split a folio down to a single page
>> -			 * (new_order == 0) uniformly.
>> -			 *
>> -			 * For any other scenario
>> -			 *   a) uniform split targeting a large folio
>> -			 *      (new_order > 0)
>> -			 *   b) any non-uniform split
>> -			 * we must confirm that the file system supports large
>> -			 * folios.
>> -			 *
>> -			 * Note that we might still have THPs in such
>> -			 * mappings, which is created from khugepaged when
>> -			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
>> -			 * case, the mapping does not actually support large
>> -			 * folios properly.
>> -			 */
>> +	} else {
>> +		const struct address_space *mapping = NULL;
>> +
>> +		mapping = folio->mapping;
>
> const struct address_space *mapping = folio->mapping;
>
>> +
>> +		/* Truncated ? */
>> +		/*
>> +		 * TODO: add support for large shmem folio in swap cache.
>> +		 * When shmem is in swap cache, mapping is NULL and
>> +		 * folio_test_swapcache() is true.
>> +		 */
>> +		if (!mapping)
>> +			return false;
>> +
>> +		/*
>> +		 * We have two types of split:
>> +		 *
>> +		 *   a) uniform split: split folio directly to new_order.
>> +		 *   b) non-uniform split: create after-split folios with
>> +		 *      orders from (old_order - 1) to new_order.
>> +		 *
>> +		 * For file system, we encodes it supported folio order in
>> +		 * mapping->flags, which could be checked by
>> +		 * mapping_folio_order_supported().
>> +		 *
>> +		 * With these knowledge, we can know whether folio support
>> +		 * split to new_order by:
>> +		 *
>> +		 *   1. check new_order is supported first
>> +		 *   2. check (old_order - 1) is supported if
>> +		 *      SPLIT_TYPE_NON_UNIFORM
>> +		 */
>> +		if (!mapping_folio_order_supported(mapping, new_order)) {
>> +			VM_WARN_ONCE(warns,
>> +				"Cannot split file folio to unsupported order: %d", new_order);
>
> Is that really worth a VM_WARN_ONCE? We didn't have that previously IIUC, we would only return
> -EINVAL.

No, and it causes an undesired warning when LBS folios are enabled. I explicitly
removed this warning one month ago in the LBS-related patch [1].

It is so frustrating to see this part of the patch. Wei has an RB on the
aforementioned patch and still adds this warning blindly. I am not sure if Wei
understands what he is doing, since he brought the idea to me and I told him to
just move the code without changing the logic, but he insisted on doing it in
his own way and failed [2]. This retry is still wrong.

Wei, please make sure you understand the code before sending any patch.

[1] https://lore.kernel.org/linux-mm/20251017013630.139907-1-ziy@nvidia.com/
[2] https://lore.kernel.org/linux-mm/20251114030301.hkestzrk534ik7q4@master/

Best Regards,
Yan, Zi



* Re: [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported()
  2025-11-14 12:43   ` Zi Yan
@ 2025-11-14 14:30     ` Wei Yang
  2025-11-14 20:53       ` Zi Yan
  0 siblings, 1 reply; 14+ messages in thread
From: Wei Yang @ 2025-11-14 14:30 UTC (permalink / raw)
  To: Zi Yan
  Cc: David Hildenbrand (Red Hat),
	Wei Yang, willy, akpm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, mhocko, baolin.wang, npache, ryan.roberts,
	dev.jain, baohua, lance.yang, linux-fsdevel, linux-mm

On Fri, Nov 14, 2025 at 07:43:38AM -0500, Zi Yan wrote:
>On 14 Nov 2025, at 3:49, David Hildenbrand (Red Hat) wrote:
>
[...]
>>> +
>>> +	if (new_order >= old_order)
>>> +		return -EINVAL;
>>> +
>>>   	if (folio_test_anon(folio)) {
>>>   		/* order-1 is not supported for anonymous THP. */
>>>   		VM_WARN_ONCE(warns && new_order == 1,
>>>   				"Cannot split to order-1 folio");
>>>   		if (new_order == 1)
>>>   			return false;
>>> -	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>>> -		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>>> -		    !mapping_large_folio_support(folio->mapping)) {
>>> -			/*
>>> -			 * We can always split a folio down to a single page
>>> -			 * (new_order == 0) uniformly.
>>> -			 *
>>> -			 * For any other scenario
>>> -			 *   a) uniform split targeting a large folio
>>> -			 *      (new_order > 0)
>>> -			 *   b) any non-uniform split
>>> -			 * we must confirm that the file system supports large
>>> -			 * folios.
>>> -			 *
>>> -			 * Note that we might still have THPs in such
>>> -			 * mappings, which is created from khugepaged when
>>> -			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
>>> -			 * case, the mapping does not actually support large
>>> -			 * folios properly.
>>> -			 */
>>> +	} else {
>>> +		const struct address_space *mapping = NULL;
>>> +
>>> +		mapping = folio->mapping;
>>
>> const struct address_space *mapping = folio->mapping;
>>
>>> +
>>> +		/* Truncated ? */
>>> +		/*
>>> +		 * TODO: add support for large shmem folio in swap cache.
>>> +		 * When shmem is in swap cache, mapping is NULL and
>>> +		 * folio_test_swapcache() is true.
>>> +		 */
>>> +		if (!mapping)
>>> +			return false;
>>> +
>>> +		/*
>>> +		 * We have two types of split:
>>> +		 *
>>> +		 *   a) uniform split: split folio directly to new_order.
>>> +		 *   b) non-uniform split: create after-split folios with
>>> +		 *      orders from (old_order - 1) to new_order.
>>> +		 *
>>> +		 * For file system, we encodes it supported folio order in
>>> +		 * mapping->flags, which could be checked by
>>> +		 * mapping_folio_order_supported().
>>> +		 *
>>> +		 * With these knowledge, we can know whether folio support
>>> +		 * split to new_order by:
>>> +		 *
>>> +		 *   1. check new_order is supported first
>>> +		 *   2. check (old_order - 1) is supported if
>>> +		 *      SPLIT_TYPE_NON_UNIFORM
>>> +		 */
>>> +		if (!mapping_folio_order_supported(mapping, new_order)) {
>>> +			VM_WARN_ONCE(warns,
>>> +				"Cannot split file folio to unsupported order: %d", new_order);
>>
>> Is that really worth a VM_WARN_ONCE? We didn't have that previously IIUC, we would only return
>> -EINVAL.
>

Sorry for introducing this unpleasant affair.

Hope I can explain what I have done.

>No, and it causes undesired warning when LBS folio is enabled. I explicitly
>removed this warning one month ago in the LBS related patch[1].
>

Yes, I saw your removal of the warning in [1].

However, in the discussion in [2], you mentioned:

  Then, you might want to add a helper function mapping_folio_order_supported()
  instead and change the warning message below to "Cannot split file folio to
  unsupported order [%d, %d]", min_order, max_order (showing min/max order
  is optional since it kinda defeat the purpose of having the helper function).
  Of course, the comment needs to be changed.

I thought you agreed to print a warning message here, so I am confused.

>It is so frustrating to see this part of patch. Wei has RB in the aforementioned
>patch and still add this warning blindly. I am not sure if Wei understands
>what he is doing, since he threw the idea to me and I told him to just
>move the code without changing the logic, but he insisted doing it in his
>own way and failed[2]. This retry is still wrong.
>

I think we are still discussing the problem, and a patch may be a more
convenient way to proceed. I didn't insist on anything; actually, I am looking
forward to your opinion and always respect your insight. I never meant to
offend you.

In discussion [2], you pointed out two concerns:

  1) new_order < min_order is meaningless if min_order is 0
  2) how to do the check if new_order is 0 for non-uniform split

For 1), you suggested adding mapping_folio_order_supported().
For 2), I came up with the idea of checking (old_order - 1) <= max_order.
Originally, we just checked !max_order. I think this covers it.

So I gathered them together here to see whether this is suitable.

If I missed something, I hope you can let me know.

>Wei, please make sure you understand the code before sending any patch.
>
>[1] https://lore.kernel.org/linux-mm/20251017013630.139907-1-ziy@nvidia.com/
>[2] https://lore.kernel.org/linux-mm/20251114030301.hkestzrk534ik7q4@master/
>
>Best Regards,
>Yan, Zi

-- 
Wei Yang
Help you, Help me



* Re: [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported()
  2025-11-14  8:49 ` David Hildenbrand (Red Hat)
  2025-11-14 12:43   ` Zi Yan
@ 2025-11-14 15:03   ` Wei Yang
  2025-11-14 19:36     ` David Hildenbrand (Red Hat)
  1 sibling, 1 reply; 14+ messages in thread
From: Wei Yang @ 2025-11-14 15:03 UTC (permalink / raw)
  To: David Hildenbrand (Red Hat)
  Cc: Wei Yang, willy, akpm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, mhocko, ziy, baolin.wang, npache, ryan.roberts,
	dev.jain, baohua, lance.yang, linux-fsdevel, linux-mm

On Fri, Nov 14, 2025 at 09:49:34AM +0100, David Hildenbrand (Red Hat) wrote:
>On 14.11.25 08:57, Wei Yang wrote:
>> The primary goal of the folio_split_supported() function is to validate
>> whether a folio is suitable for splitting and to bail out early if it is
>> not.
>> 
>> Currently, some order-related checks are scattered throughout the
>> calling code rather than being centralized in folio_split_supported().
>> 
>> This commit moves all remaining order-related validation logic into
>> folio_split_supported(). This consolidation ensures that the function
>> serves its intended purpose as a single point of failure and improves
>> the clarity and maintainability of the surrounding code.
>
>Combining the EINVAL handling sounds reasonable.
>

You mean:

This commit combines the EINVAL handling logic into folio_split_supported().
This consolidation ... ?

>> 
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>   include/linux/pagemap.h |  6 +++
>>   mm/huge_memory.c        | 88 +++++++++++++++++++++--------------------
>>   2 files changed, 51 insertions(+), 43 deletions(-)
>> 
>> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
>> index 09b581c1d878..d8c8df629b90 100644
>> --- a/include/linux/pagemap.h
>> +++ b/include/linux/pagemap.h
>> @@ -516,6 +516,12 @@ static inline bool mapping_large_folio_support(const struct address_space *mappi
>>   	return mapping_max_folio_order(mapping) > 0;
>>   }
>> +static inline bool
>> +mapping_folio_order_supported(const struct address_space *mapping, unsigned int order)
>> +{
>> +	return (order >= mapping_min_folio_order(mapping) && order <= mapping_max_folio_order(mapping));
>> +}
>
>(unnecessary () and unnecessary long line)
>
>Style in the file seems to want:
>
>static inline bool mapping_folio_order_supported(const struct address_space *mapping,
>						 unsigned int order)
>{
>	return order >= mapping_min_folio_order(mapping) &&
>	       order <= mapping_max_folio_order(mapping);
>}
>

Adjusted.

>
>The mapping_max_folio_order() check is new now. What is the default value of that? Is it always initialized properly?
>

Not sure "is new now" means what?

Original check use mapping_large_folio_support() which calls
mapping_max_folio_order(). It looks not new to me.

If my understanding is correct, struct address_mapping is part of struct
inode, which is initialized by inode_init_once() which clears flags. So the
default value is 0.
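
To make that concrete, here is a minimal sketch of the encoding I am assuming
(the field width and bit positions below are my guesses for illustration, not
copied from pagemap.h): the min/max folio orders are read out of small bit
fields in mapping->flags, so a flags word that was simply cleared reports
min = max = 0.

/*
 * Hedged sketch, not the real pagemap.h definitions: assume the supported
 * folio order range lives in two small bit fields of mapping->flags. With
 * the flags cleared at inode initialization, both fields read back as 0,
 * i.e. only order-0 folios are advertised by default.
 */
#define SKETCH_ORDER_BITS	5	/* assumed field width */
#define SKETCH_MIN_SHIFT	16	/* assumed bit position of the min field */
#define SKETCH_MAX_SHIFT	(SKETCH_MIN_SHIFT + SKETCH_ORDER_BITS)
#define SKETCH_ORDER_MASK	((1UL << SKETCH_ORDER_BITS) - 1)

static inline unsigned int sketch_min_folio_order(unsigned long flags)
{
	return (flags >> SKETCH_MIN_SHIFT) & SKETCH_ORDER_MASK;
}

static inline unsigned int sketch_max_folio_order(unsigned long flags)
{
	return (flags >> SKETCH_MAX_SHIFT) & SKETCH_ORDER_MASK;
}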

>> +
>>   /* Return the maximum folio size for this pagecache mapping, in bytes. */
>>   static inline size_t mapping_max_folio_size(const struct address_space *mapping)
>>   {
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 0184cd915f44..68faac843527 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3690,34 +3690,58 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>>   bool folio_split_supported(struct folio *folio, unsigned int new_order,
>>   		enum split_type split_type, bool warns)
>>   {
>> +	const int old_order = folio_order(folio);
>
>While at it, make it "unsigned int" like new_order.
>

Adjusted.

>> +
>> +	if (new_order >= old_order)
>> +		return -EINVAL;
>> +
>>   	if (folio_test_anon(folio)) {
>>   		/* order-1 is not supported for anonymous THP. */
>>   		VM_WARN_ONCE(warns && new_order == 1,
>>   				"Cannot split to order-1 folio");
>>   		if (new_order == 1)
>>   			return false;
>> -	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>> -		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> -		    !mapping_large_folio_support(folio->mapping)) {
>> -			/*
>> -			 * We can always split a folio down to a single page
>> -			 * (new_order == 0) uniformly.
>> -			 *
>> -			 * For any other scenario
>> -			 *   a) uniform split targeting a large folio
>> -			 *      (new_order > 0)
>> -			 *   b) any non-uniform split
>> -			 * we must confirm that the file system supports large
>> -			 * folios.
>> -			 *
>> -			 * Note that we might still have THPs in such
>> -			 * mappings, which is created from khugepaged when
>> -			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
>> -			 * case, the mapping does not actually support large
>> -			 * folios properly.
>> -			 */
>> +	} else {
>> +		const struct address_space *mapping = NULL;
>> +
>> +		mapping = folio->mapping;
>
>const struct address_space *mapping = folio->mapping;
>

Adjusted.

>> +
>> +		/* Truncated ? */
>> +		/*
>> +		 * TODO: add support for large shmem folio in swap cache.
>> +		 * When shmem is in swap cache, mapping is NULL and
>> +		 * folio_test_swapcache() is true.
>> +		 */
>> +		if (!mapping)
>> +			return false;
>> +
>> +		/*
>> +		 * We have two types of split:
>> +		 *
>> +		 *   a) uniform split: split folio directly to new_order.
>> +		 *   b) non-uniform split: create after-split folios with
>> +		 *      orders from (old_order - 1) to new_order.
>> +		 *
>> +		 * For file system, we encodes it supported folio order in
>> +		 * mapping->flags, which could be checked by
>> +		 * mapping_folio_order_supported().
>> +		 *
>> +		 * With these knowledge, we can know whether folio support
>> +		 * split to new_order by:
>> +		 *
>> +		 *   1. check new_order is supported first
>> +		 *   2. check (old_order - 1) is supported if
>> +		 *      SPLIT_TYPE_NON_UNIFORM
>> +		 */
>> +		if (!mapping_folio_order_supported(mapping, new_order)) {
>> +			VM_WARN_ONCE(warns,
>> +				"Cannot split file folio to unsupported order: %d", new_order);
>
>Is that really worth a VM_WARN_ONCE? We didn't have that previously IIUC, we would only return
>-EINVAL.

Hmm.. it seems not necessary. Thanks

>
>-- 
>Cheers
>
>David

-- 
Wei Yang
Help you, Help me



* Re: [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported()
  2025-11-14 15:03   ` Wei Yang
@ 2025-11-14 19:36     ` David Hildenbrand (Red Hat)
  2025-11-15  2:51       ` Wei Yang
  2025-12-04 15:13       ` Wei Yang
  0 siblings, 2 replies; 14+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-11-14 19:36 UTC (permalink / raw)
  To: Wei Yang
  Cc: willy, akpm, lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb,
	mhocko, ziy, baolin.wang, npache, ryan.roberts, dev.jain, baohua,
	lance.yang, linux-fsdevel, linux-mm

On 14.11.25 16:03, Wei Yang wrote:
> On Fri, Nov 14, 2025 at 09:49:34AM +0100, David Hildenbrand (Red Hat) wrote:
>> On 14.11.25 08:57, Wei Yang wrote:
>>> The primary goal of the folio_split_supported() function is to validate
>>> whether a folio is suitable for splitting and to bail out early if it is
>>> not.
>>>
>>> Currently, some order-related checks are scattered throughout the
>>> calling code rather than being centralized in folio_split_supported().
>>>
>>> This commit moves all remaining order-related validation logic into
>>> folio_split_supported(). This consolidation ensures that the function
>>> serves its intended purpose as a single point of failure and improves
>>> the clarity and maintainability of the surrounding code.
>>
>> Combining the EINVAL handling sounds reasonable.
>>
> 
> You mean:
> 
> This commit combines the EINVAL handling logic into folio_split_supported().
> This consolidation ... ?

It was not a suggestion to change, it was rather only a comment from my 
side :)

[...]

>>
>> The mapping_max_folio_order() check is new now. What is the default value of that? Is it always initialized properly?
>>
> 
> Not sure "is new now" means what?
> 
> Original check use mapping_large_folio_support() which calls
> mapping_max_folio_order(). It looks not new to me.

Right, but we did not actually care about the exact value.

IOW, we didn't check for order <= mapping_max_folio_order() before.

So I'm just curious if that is universally fine.

-- 
Cheers

David



* Re: [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported()
  2025-11-14 14:30     ` Wei Yang
@ 2025-11-14 20:53       ` Zi Yan
  2025-11-15  2:42         ` Wei Yang
  0 siblings, 1 reply; 14+ messages in thread
From: Zi Yan @ 2025-11-14 20:53 UTC (permalink / raw)
  To: Wei Yang
  Cc: David Hildenbrand (Red Hat),
	willy, akpm, lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb,
	mhocko, baolin.wang, npache, ryan.roberts, dev.jain, baohua,
	lance.yang, linux-fsdevel, linux-mm

On 14 Nov 2025, at 9:30, Wei Yang wrote:

> On Fri, Nov 14, 2025 at 07:43:38AM -0500, Zi Yan wrote:
>> On 14 Nov 2025, at 3:49, David Hildenbrand (Red Hat) wrote:
>>
> [...]
>>>> +
>>>> +	if (new_order >= old_order)
>>>> +		return -EINVAL;
>>>> +
>>>>   	if (folio_test_anon(folio)) {
>>>>   		/* order-1 is not supported for anonymous THP. */
>>>>   		VM_WARN_ONCE(warns && new_order == 1,
>>>>   				"Cannot split to order-1 folio");
>>>>   		if (new_order == 1)
>>>>   			return false;
>>>> -	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>>>> -		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>>>> -		    !mapping_large_folio_support(folio->mapping)) {
>>>> -			/*
>>>> -			 * We can always split a folio down to a single page
>>>> -			 * (new_order == 0) uniformly.
>>>> -			 *
>>>> -			 * For any other scenario
>>>> -			 *   a) uniform split targeting a large folio
>>>> -			 *      (new_order > 0)
>>>> -			 *   b) any non-uniform split
>>>> -			 * we must confirm that the file system supports large
>>>> -			 * folios.
>>>> -			 *
>>>> -			 * Note that we might still have THPs in such
>>>> -			 * mappings, which is created from khugepaged when
>>>> -			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
>>>> -			 * case, the mapping does not actually support large
>>>> -			 * folios properly.
>>>> -			 */
>>>> +	} else {
>>>> +		const struct address_space *mapping = NULL;
>>>> +
>>>> +		mapping = folio->mapping;
>>>
>>> const struct address_space *mapping = folio->mapping;
>>>
>>>> +
>>>> +		/* Truncated ? */
>>>> +		/*
>>>> +		 * TODO: add support for large shmem folio in swap cache.
>>>> +		 * When shmem is in swap cache, mapping is NULL and
>>>> +		 * folio_test_swapcache() is true.
>>>> +		 */
>>>> +		if (!mapping)
>>>> +			return false;
>>>> +
>>>> +		/*
>>>> +		 * We have two types of split:
>>>> +		 *
>>>> +		 *   a) uniform split: split folio directly to new_order.
>>>> +		 *   b) non-uniform split: create after-split folios with
>>>> +		 *      orders from (old_order - 1) to new_order.
>>>> +		 *
>>>> +		 * For file system, we encodes it supported folio order in
>>>> +		 * mapping->flags, which could be checked by
>>>> +		 * mapping_folio_order_supported().
>>>> +		 *
>>>> +		 * With these knowledge, we can know whether folio support
>>>> +		 * split to new_order by:
>>>> +		 *
>>>> +		 *   1. check new_order is supported first
>>>> +		 *   2. check (old_order - 1) is supported if
>>>> +		 *      SPLIT_TYPE_NON_UNIFORM
>>>> +		 */
>>>> +		if (!mapping_folio_order_supported(mapping, new_order)) {
>>>> +			VM_WARN_ONCE(warns,
>>>> +				"Cannot split file folio to unsupported order: %d", new_order);
>>>
>>> Is that really worth a VM_WARN_ONCE? We didn't have that previously IIUC, we would only return
>>> -EINVAL.
>>
>
> Sorry for introducing this unpleasant affair.
>
> Hope I can explain what I have done.
>
>> No, and it causes undesired warning when LBS folio is enabled. I explicitly
>> removed this warning one month ago in the LBS related patch[1].
>>
>
> Yes, I see you removal of a warning in [1].
>
> While in the discussion in [2], you mentioned:
>
>   Then, you might want to add a helper function mapping_folio_order_supported()
>   instead and change the warning message below to "Cannot split file folio to
>   unsupported order [%d, %d]", min_order, max_order (showing min/max order
>   is optional since it kinda defeat the purpose of having the helper function).
>   Of course, the comment needs to be changed.
>
> I thought you agree to print a warning message here. So I am confused.

This is exactly my point. You need to know what you are doing. You should not
write a patch just because of what I said. And my above comment was about the
CONFIG_READ_ONLY_THP_FOR_FS part of the code. It has nothing
to do with the check pulled into folio_split_supported().

>
>> It is so frustrating to see this part of patch. Wei has RB in the aforementioned
>> patch and still add this warning blindly. I am not sure if Wei understands
>> what he is doing, since he threw the idea to me and I told him to just
>> move the code without changing the logic, but he insisted doing it in his
>> own way and failed[2]. This retry is still wrong.
>>
>
> I think we are still discussing the problem and a patch maybe more convenient
> to proceed. I didn't insist anything and actually I am looking forward your
> option and always respect your insight. Never thought to offend you.

Not offended.
>
> In discussion [2], you pointed out two concerns:
>
>   1) new_order < min_order is meaning less if min_order is 0
>   2) how to do the check if new_order is 0 for non-uniform split
>
> For 1), you suggested to add mapping_folio_order_supported().
> For 2), I come up an idea to check (old_order - 1) <= max_order. Originally,
> we just check !max_order. I think this could cover it.
>
> So I gather them together here to see whether it is suitable.
>
> If I missed some part, hope you could let me know.

Based on the discussion in [2], your patch mixes the check for an FS that does
not support large folios with the check for an FS that supports large folios
but has a min_order requirement. I told you that this does not work well and
suggested that you just move the “if (new_order < min_order) {“ part into
folio_split_supported() as an easy approach. Why not do that?

>
>> Wei, please make sure you understand the code before sending any patch.
>>
>> [1] https://lore.kernel.org/linux-mm/20251017013630.139907-1-ziy@nvidia.com/
>> [2] https://lore.kernel.org/linux-mm/20251114030301.hkestzrk534ik7q4@master/
>>
>> Best Regards,
>> Yan, Zi
>
> -- 
> Wei Yang
> Help you, Help me


Best Regards,
Yan, Zi



* Re: [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported()
  2025-11-14 20:53       ` Zi Yan
@ 2025-11-15  2:42         ` Wei Yang
  0 siblings, 0 replies; 14+ messages in thread
From: Wei Yang @ 2025-11-15  2:42 UTC (permalink / raw)
  To: Zi Yan
  Cc: Wei Yang, David Hildenbrand (Red Hat),
	willy, akpm, lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb,
	mhocko, baolin.wang, npache, ryan.roberts, dev.jain, baohua,
	lance.yang, linux-fsdevel, linux-mm

On Fri, Nov 14, 2025 at 03:53:42PM -0500, Zi Yan wrote:
>On 14 Nov 2025, at 9:30, Wei Yang wrote:
>
>> On Fri, Nov 14, 2025 at 07:43:38AM -0500, Zi Yan wrote:
>>> On 14 Nov 2025, at 3:49, David Hildenbrand (Red Hat) wrote:
>>>
>> [...]
>>>>> +
>>>>> +	if (new_order >= old_order)
>>>>> +		return -EINVAL;
>>>>> +
>>>>>   	if (folio_test_anon(folio)) {
>>>>>   		/* order-1 is not supported for anonymous THP. */
>>>>>   		VM_WARN_ONCE(warns && new_order == 1,
>>>>>   				"Cannot split to order-1 folio");
>>>>>   		if (new_order == 1)
>>>>>   			return false;
>>>>> -	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>>>>> -		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>>>>> -		    !mapping_large_folio_support(folio->mapping)) {
>>>>> -			/*
>>>>> -			 * We can always split a folio down to a single page
>>>>> -			 * (new_order == 0) uniformly.
>>>>> -			 *
>>>>> -			 * For any other scenario
>>>>> -			 *   a) uniform split targeting a large folio
>>>>> -			 *      (new_order > 0)
>>>>> -			 *   b) any non-uniform split
>>>>> -			 * we must confirm that the file system supports large
>>>>> -			 * folios.
>>>>> -			 *
>>>>> -			 * Note that we might still have THPs in such
>>>>> -			 * mappings, which is created from khugepaged when
>>>>> -			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
>>>>> -			 * case, the mapping does not actually support large
>>>>> -			 * folios properly.
>>>>> -			 */
>>>>> +	} else {
>>>>> +		const struct address_space *mapping = NULL;
>>>>> +
>>>>> +		mapping = folio->mapping;
>>>>
>>>> const struct address_space *mapping = folio->mapping;
>>>>
>>>>> +
>>>>> +		/* Truncated ? */
>>>>> +		/*
>>>>> +		 * TODO: add support for large shmem folio in swap cache.
>>>>> +		 * When shmem is in swap cache, mapping is NULL and
>>>>> +		 * folio_test_swapcache() is true.
>>>>> +		 */
>>>>> +		if (!mapping)
>>>>> +			return false;
>>>>> +
>>>>> +		/*
>>>>> +		 * We have two types of split:
>>>>> +		 *
>>>>> +		 *   a) uniform split: split folio directly to new_order.
>>>>> +		 *   b) non-uniform split: create after-split folios with
>>>>> +		 *      orders from (old_order - 1) to new_order.
>>>>> +		 *
>>>>> +		 * For file system, we encodes it supported folio order in
>>>>> +		 * mapping->flags, which could be checked by
>>>>> +		 * mapping_folio_order_supported().
>>>>> +		 *
>>>>> +		 * With these knowledge, we can know whether folio support
>>>>> +		 * split to new_order by:
>>>>> +		 *
>>>>> +		 *   1. check new_order is supported first
>>>>> +		 *   2. check (old_order - 1) is supported if
>>>>> +		 *      SPLIT_TYPE_NON_UNIFORM
>>>>> +		 */
>>>>> +		if (!mapping_folio_order_supported(mapping, new_order)) {
>>>>> +			VM_WARN_ONCE(warns,
>>>>> +				"Cannot split file folio to unsupported order: %d", new_order);
>>>>
>>>> Is that really worth a VM_WARN_ONCE? We didn't have that previously IIUC, we would only return
>>>> -EINVAL.
>>>
>>
>> Sorry for introducing this unpleasant affair.
>>
>> Hope I can explain what I have done.
>>
>>> No, and it causes undesired warning when LBS folio is enabled. I explicitly
>>> removed this warning one month ago in the LBS related patch[1].
>>>
>>
>> Yes, I see you removal of a warning in [1].
>>
>> While in the discussion in [2], you mentioned:
>>
>>   Then, you might want to add a helper function mapping_folio_order_supported()
>>   instead and change the warning message below to "Cannot split file folio to
>>   unsupported order [%d, %d]", min_order, max_order (showing min/max order
>>   is optional since it kinda defeat the purpose of having the helper function).
>>   Of course, the comment needs to be changed.
>>
>> I thought you agree to print a warning message here. So I am confused.
>
>This is exactly my point. You need to know what you are doing. You should not
>write a patch because of what I said. And my above comment is to
>CONFIG_READ_ONLY_THP_FOR_FS part of code. It has nothing
>to do with the check pulled into folio_split_supported().
>

Yes, I thought they serve the same purpose (checking the supported folio
order), so I printed the warning in both places.

If it should not be there, I will remove it.

>>
>>> It is so frustrating to see this part of patch. Wei has RB in the aforementioned
>>> patch and still add this warning blindly. I am not sure if Wei understands
>>> what he is doing, since he threw the idea to me and I told him to just
>>> move the code without changing the logic, but he insisted doing it in his
>>> own way and failed[2]. This retry is still wrong.
>>>
>>
>> I think we are still discussing the problem and a patch maybe more convenient
>> to proceed. I didn't insist anything and actually I am looking forward your
>> option and always respect your insight. Never thought to offend you.
>
>Not offended.
>>
>> In discussion [2], you pointed out two concerns:
>>
>>   1) new_order < min_order is meaning less if min_order is 0
>>   2) how to do the check if new_order is 0 for non-uniform split
>>
>> For 1), you suggested to add mapping_folio_order_supported().
>> For 2), I come up an idea to check (old_order - 1) <= max_order. Originally,
>> we just check !max_order. I think this could cover it.
>>
>> So I gather them together here to see whether it is suitable.
>>
>> If I missed some part, hope you could let me know.
>
>Based on the discussion in [2], your patch mixes the checks for FS does not
>support large folio and FS supporting large folio has min_order requirement
>and I told you that it does not work well and suggested you to just move
>“if (new_order < min_order) {“ part into folio_split_supported() as an
>easy approach. Why not do that?
>

I may not be following you. Let me try to clarify my thinking.

My first approach was:

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index dee416b3f6ed..ef05f246df73 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3704,8 +3704,8 @@ bool folio_split_supported(struct folio *folio, unsigned int new_order,
 		if (new_order == 1)
 			return false;
 	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
-		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
-		    !mapping_large_folio_support(folio->mapping)) {
+		unsigned int min_order = mapping_min_folio_order(folio->mapping);
+		if (new_order < min_order) {
 			/*
 			 * We can always split a folio down to a single page
 			 * (new_order == 0) uniformly.


The current patch makes 3 major changes.

1)

Your comment on the above change:

```
   This check is good for !CONFIG_READ_ONLY_THP_FOR_FS, but for
   CONFIG_READ_ONLY_THP_FOR_FS and !mapping_large_folio_support(), min_order is
   always 0.

   ...
   
   OK, basically the check should be:
   
     if (new_order < mapping_min_folio_order() || new_order > mapping_max_folio_order()).
   
   Then, you might want to add a helper function mapping_folio_order_supported().
```

So in this patch, I introduce mapping_folio_order_supported() and replace the
(new_order < min_order) check. Still using (new_order < min_order) does not
look correct to me. Or did I misunderstand your suggestion?

2)

And then you mentioned this check is not enough:

```
    Hmm, but still how could the above check to trigger the warning when
    split_type == SPLIT_TYPE_NON_UNIFORM and new_order is 0? It will not
    trigger, since new_order (as 0) is supported by the mapping.
```

What came to my mind is to check that (old_order - 1) is also supported. So for
SPLIT_TYPE_NON_UNIFORM, the whole after-split order range
[new_order, old_order - 1] is covered (see the sketch after point 3 below).

3)

Also, I removed the first "if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order)"
check, because in the original logic, when split_type == SPLIT_TYPE_UNIFORM &&
new_order == 0, we should still check (new_order < min_order). If not, we would
miss that case.
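
To make 2) concrete, here is a standalone sketch (illustrative only, not the
patch code; the helper name order_in_window() and the function below are made
up for this example): a non-uniform split produces folios of every order from
(old_order - 1) down to new_order, and since the mapping's supported window
[min_order, max_order] is contiguous, checking the two endpoints is enough to
cover the whole range.

#include <stdbool.h>

enum split_type { SPLIT_TYPE_UNIFORM, SPLIT_TYPE_NON_UNIFORM };

static bool order_in_window(unsigned int order,
			    unsigned int min_order, unsigned int max_order)
{
	return order >= min_order && order <= max_order;
}

/* Assumes the caller already rejected new_order >= old_order. */
static bool split_orders_supported(enum split_type type,
				   unsigned int old_order, unsigned int new_order,
				   unsigned int min_order, unsigned int max_order)
{
	/* A uniform split only creates new_order folios. */
	if (type == SPLIT_TYPE_UNIFORM)
		return order_in_window(new_order, min_order, max_order);

	/*
	 * A non-uniform split creates folios of every order from
	 * (old_order - 1) down to new_order; checking both endpoints of
	 * that range against the contiguous window covers all of them.
	 */
	return order_in_window(new_order, min_order, max_order) &&
	       order_in_window(old_order - 1, min_order, max_order);
}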

But it sounds like I misunderstood your insight; I am sorry for that. If it is
not too much bother, would you mind sharing it again?

>>
>>> Wei, please make sure you understand the code before sending any patch.
>>>
>>> [1] https://lore.kernel.org/linux-mm/20251017013630.139907-1-ziy@nvidia.com/
>>> [2] https://lore.kernel.org/linux-mm/20251114030301.hkestzrk534ik7q4@master/
>>>
>>> Best Regards,
>>> Yan, Zi
>>
>> -- 
>> Wei Yang
>> Help you, Help me
>
>
>Best Regards,
>Yan, Zi

-- 
Wei Yang
Help you, Help me



* Re: [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported()
  2025-11-14 19:36     ` David Hildenbrand (Red Hat)
@ 2025-11-15  2:51       ` Wei Yang
  2025-11-15  5:07         ` Matthew Wilcox
  2025-12-04 15:13       ` Wei Yang
  1 sibling, 1 reply; 14+ messages in thread
From: Wei Yang @ 2025-11-15  2:51 UTC (permalink / raw)
  To: David Hildenbrand (Red Hat)
  Cc: Wei Yang, willy, akpm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, mhocko, ziy, baolin.wang, npache, ryan.roberts,
	dev.jain, baohua, lance.yang, linux-fsdevel, linux-mm

On Fri, Nov 14, 2025 at 08:36:20PM +0100, David Hildenbrand (Red Hat) wrote:
>On 14.11.25 16:03, Wei Yang wrote:
>> On Fri, Nov 14, 2025 at 09:49:34AM +0100, David Hildenbrand (Red Hat) wrote:
>> > On 14.11.25 08:57, Wei Yang wrote:
>> > > The primary goal of the folio_split_supported() function is to validate
>> > > whether a folio is suitable for splitting and to bail out early if it is
>> > > not.
>> > > 
>> > > Currently, some order-related checks are scattered throughout the
>> > > calling code rather than being centralized in folio_split_supported().
>> > > 
>> > > This commit moves all remaining order-related validation logic into
>> > > folio_split_supported(). This consolidation ensures that the function
>> > > serves its intended purpose as a single point of failure and improves
>> > > the clarity and maintainability of the surrounding code.
>> > 
>> > Combining the EINVAL handling sounds reasonable.
>> > 
>> 
>> You mean:
>> 
>> This commit combines the EINVAL handling logic into folio_split_supported().
>> This consolidation ... ?
>
>It was not a suggestion to change, it was rather only a comment from my side
>:)
>
>[...]
>
>> > 
>> > The mapping_max_folio_order() check is new now. What is the default value of that? Is it always initialized properly?
>> > 
>> 
>> Not sure "is new now" means what?
>> 
>> Original check use mapping_large_folio_support() which calls
>> mapping_max_folio_order(). It looks not new to me.
>
>Right, but we did not actually care about the exact value.
>
>IOW, we didn't check for order <= mapping_max_folio_order() before.
>
>SO I'm just curious if that is universally fine.
>

Thanks, I got your point.

I am not 100% sure. But if we trust it to return 0 when the mapping doesn't
support large folios, I think we can also trust it to return the correct value
when it does. Otherwise, that would be an issue in the related file system.

To be honest, I lack the ability to investigate every file system to
make sure this value is properly set up.

>-- 
>Cheers
>
>David

-- 
Wei Yang
Help you, Help me



* Re: [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported()
  2025-11-15  2:51       ` Wei Yang
@ 2025-11-15  5:07         ` Matthew Wilcox
  2025-11-15  9:43           ` Wei Yang
  0 siblings, 1 reply; 14+ messages in thread
From: Matthew Wilcox @ 2025-11-15  5:07 UTC (permalink / raw)
  To: Wei Yang
  Cc: David Hildenbrand (Red Hat),
	akpm, lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb,
	mhocko, ziy, baolin.wang, npache, ryan.roberts, dev.jain, baohua,
	lance.yang, linux-fsdevel, linux-mm

On Sat, Nov 15, 2025 at 02:51:09AM +0000, Wei Yang wrote:
> I am not 100% for sure. While if we trust it returns 0 if folio doesn't
> support large folio, I am afraid we can trust it returns correct value it
> supports. Or that would be the issue to related file system.
> 
> To be honest, I am lack of the capability to investigate all file system to
> make sure this value is properly setup.

Maybe you should just give up on this instead of asking so many
questions, misunderstanding the answers, and sending more patches full
of mistakes?



* Re: [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported()
  2025-11-15  5:07         ` Matthew Wilcox
@ 2025-11-15  9:43           ` Wei Yang
  0 siblings, 0 replies; 14+ messages in thread
From: Wei Yang @ 2025-11-15  9:43 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Wei Yang, David Hildenbrand (Red Hat),
	akpm, lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb,
	mhocko, ziy, baolin.wang, npache, ryan.roberts, dev.jain, baohua,
	lance.yang, linux-fsdevel, linux-mm

On Sat, Nov 15, 2025 at 05:07:45AM +0000, Matthew Wilcox wrote:
>On Sat, Nov 15, 2025 at 02:51:09AM +0000, Wei Yang wrote:
>> I am not 100% for sure. While if we trust it returns 0 if folio doesn't
>> support large folio, I am afraid we can trust it returns correct value it
>> supports. Or that would be the issue to related file system.
>> 
>> To be honest, I am lack of the capability to investigate all file system to
>> make sure this value is properly setup.
>
>Maybe you should just give up on this instead of asking so many
>questions, misunderstanding the answers, and sending more patches full
>of mistakes?

Hi, Matthew

I am glad to see your reply, but your comment makes me feel frustrated.

Well, I think we should still stick to the facts. You pointed out three flaws:

  * too many questions
  * too many misunderstandings
  * full of mistakes

Per my understanding, there is only one question: whether
mapping_max_folio_order() works universally. I think it does, and Zi also
suggested using it. But since I am not sure about this, we can revert it to
keep the logic the same as it is now.

As for misunderstandings, I used the word to be polite and avoid conflict. I
went through the discussion with Zi and thought I did what he suggested. In
case I did miss something, Zi can correct me.

As for "full of mistakes", I am confused. I have made some contributions to the
kernel, small ones of course, mostly correct and merged. I know I sometimes make
mistakes, but "full of mistakes" does not seem to be true.

Well, I still think this is a normal review process; I am open to comments and
questions, and glad to take suggestions from the community. I still think it
is reasonable to combine the EINVAL handling, while the details need to be
settled.

Have a nice weekend all.

-- 
Wei Yang
Help you, Help me



* Re: [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported()
  2025-11-14  7:57 [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported() Wei Yang
  2025-11-14  8:49 ` David Hildenbrand (Red Hat)
@ 2025-11-19 12:37 ` Dan Carpenter
  2025-11-19 12:39   ` Wei Yang
  1 sibling, 1 reply; 14+ messages in thread
From: Dan Carpenter @ 2025-11-19 12:37 UTC (permalink / raw)
  To: oe-kbuild, Wei Yang, willy, akpm, david, lorenzo.stoakes,
	Liam.Howlett, vbabka, rppt, surenb, mhocko, ziy, baolin.wang,
	npache, ryan.roberts, dev.jain, baohua, lance.yang
  Cc: lkp, oe-kbuild-all, linux-fsdevel, linux-mm, Wei Yang

Hi Wei,

kernel test robot noticed the following build warnings:

url:    https://github.com/intel-lab-lkp/linux/commits/Wei-Yang/mm-huge_memory-consolidate-order-related-checks-into-folio_split_supported/20251114-155833
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20251114075703.10434-1-richard.weiyang%40gmail.com
patch subject: [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported()
config: i386-randconfig-141-20251115 (https://download.01.org/0day-ci/archive/20251115/202511151652.GKPwEctt-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.4.0-5) 12.4.0

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
| Closes: https://lore.kernel.org/r/202511151652.GKPwEctt-lkp@intel.com/

smatch warnings:
mm/huge_memory.c:3696 folio_split_supported() warn: signedness bug returning '(-22)'

vim +3696 mm/huge_memory.c

aa27253af32c74 Wei Yang 2025-11-06  3690  bool folio_split_supported(struct folio *folio, unsigned int new_order,
                                          ^^^^
This is a bool function.

aa27253af32c74 Wei Yang 2025-11-06  3691  		enum split_type split_type, bool warns)
58729c04cf1092 Zi Yan   2025-03-07  3692  {
ab62f1bb0caaa5 Wei Yang 2025-11-14  3693  	const int old_order = folio_order(folio);
ab62f1bb0caaa5 Wei Yang 2025-11-14  3694  
ab62f1bb0caaa5 Wei Yang 2025-11-14  3695  	if (new_order >= old_order)
ab62f1bb0caaa5 Wei Yang 2025-11-14 @3696  		return -EINVAL;

s/-EINVAL/false/

ab62f1bb0caaa5 Wei Yang 2025-11-14  3697  
58729c04cf1092 Zi Yan   2025-03-07  3698  	if (folio_test_anon(folio)) {
58729c04cf1092 Zi Yan   2025-03-07  3699  		/* order-1 is not supported for anonymous THP. */
58729c04cf1092 Zi Yan   2025-03-07  3700  		VM_WARN_ONCE(warns && new_order == 1,
58729c04cf1092 Zi Yan   2025-03-07  3701  				"Cannot split to order-1 folio");
28753037121116 Zi Yan   2025-11-05  3702  		if (new_order == 1)
28753037121116 Zi Yan   2025-11-05  3703  			return false;
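
For completeness, a minimal standalone illustration (plain C, not kernel code)
of why this matters: a negative errno value returned from a bool function is
implicitly converted to true, so a caller that tests for false never sees the
failure.

#include <stdbool.h>
#include <stdio.h>

#define EINVAL 22	/* stand-in for the <errno.h> value */

/* bool-returning check, mirroring the shape of folio_split_supported() */
static bool order_ok(unsigned int new_order, unsigned int old_order)
{
	if (new_order >= old_order)
		return -EINVAL;	/* bug: -22 is converted to true */
	return true;
}

int main(void)
{
	/* Prints "split allowed" even though the order check failed. */
	printf("%s\n", order_ok(4, 2) ? "split allowed" : "split rejected");
	return 0;
}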

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki




* Re: [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported()
  2025-11-19 12:37 ` Dan Carpenter
@ 2025-11-19 12:39   ` Wei Yang
  0 siblings, 0 replies; 14+ messages in thread
From: Wei Yang @ 2025-11-19 12:39 UTC (permalink / raw)
  To: Dan Carpenter
  Cc: oe-kbuild, Wei Yang, willy, akpm, david, lorenzo.stoakes,
	Liam.Howlett, vbabka, rppt, surenb, mhocko, ziy, baolin.wang,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, lkp,
	oe-kbuild-all, linux-fsdevel, linux-mm

On Wed, Nov 19, 2025 at 03:37:09PM +0300, Dan Carpenter wrote:
>Hi Wei,
>
>kernel test robot noticed the following build warnings:
>
>url:    https://github.com/intel-lab-lkp/linux/commits/Wei-Yang/mm-huge_memory-consolidate-order-related-checks-into-folio_split_supported/20251114-155833
>base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
>patch link:    https://lore.kernel.org/r/20251114075703.10434-1-richard.weiyang%40gmail.com
>patch subject: [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported()
>config: i386-randconfig-141-20251115 (https://download.01.org/0day-ci/archive/20251115/202511151652.GKPwEctt-lkp@intel.com/config)
>compiler: gcc-12 (Debian 12.4.0-5) 12.4.0
>
>If you fix the issue in a separate patch/commit (i.e. not just a new version of
>the same patch/commit), kindly add following tags
>| Reported-by: kernel test robot <lkp@intel.com>
>| Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
>| Closes: https://lore.kernel.org/r/202511151652.GKPwEctt-lkp@intel.com/
>
>smatch warnings:
>mm/huge_memory.c:3696 folio_split_supported() warn: signedness bug returning '(-22)'
>
>vim +3696 mm/huge_memory.c
>
>aa27253af32c74 Wei Yang 2025-11-06  3690  bool folio_split_supported(struct folio *folio, unsigned int new_order,
>                                          ^^^^
>This is a bool function.
>
>aa27253af32c74 Wei Yang 2025-11-06  3691  		enum split_type split_type, bool warns)
>58729c04cf1092 Zi Yan   2025-03-07  3692  {
>ab62f1bb0caaa5 Wei Yang 2025-11-14  3693  	const int old_order = folio_order(folio);
>ab62f1bb0caaa5 Wei Yang 2025-11-14  3694  
>ab62f1bb0caaa5 Wei Yang 2025-11-14  3695  	if (new_order >= old_order)
>ab62f1bb0caaa5 Wei Yang 2025-11-14 @3696  		return -EINVAL;

Thanks, noted.

>
>s/-EINVAL/false/
>
>ab62f1bb0caaa5 Wei Yang 2025-11-14  3697  
>58729c04cf1092 Zi Yan   2025-03-07  3698  	if (folio_test_anon(folio)) {
>58729c04cf1092 Zi Yan   2025-03-07  3699  		/* order-1 is not supported for anonymous THP. */
>58729c04cf1092 Zi Yan   2025-03-07  3700  		VM_WARN_ONCE(warns && new_order == 1,
>58729c04cf1092 Zi Yan   2025-03-07  3701  				"Cannot split to order-1 folio");
>28753037121116 Zi Yan   2025-11-05  3702  		if (new_order == 1)
>28753037121116 Zi Yan   2025-11-05  3703  			return false;
>
>-- 
>0-DAY CI Kernel Test Service
>https://github.com/intel/lkp-tests/wiki

-- 
Wei Yang
Help you, Help me



* Re: [PATCH] mm/huge_memory: consolidate order-related checks into folio_split_supported()
  2025-11-14 19:36     ` David Hildenbrand (Red Hat)
  2025-11-15  2:51       ` Wei Yang
@ 2025-12-04 15:13       ` Wei Yang
  1 sibling, 0 replies; 14+ messages in thread
From: Wei Yang @ 2025-12-04 15:13 UTC (permalink / raw)
  To: David Hildenbrand (Red Hat)
  Cc: Wei Yang, willy, akpm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, mhocko, ziy, baolin.wang, npache, ryan.roberts,
	dev.jain, baohua, lance.yang, linux-fsdevel, linux-mm

On Fri, Nov 14, 2025 at 08:36:20PM +0100, David Hildenbrand (Red Hat) wrote:
>On 14.11.25 16:03, Wei Yang wrote:
>> On Fri, Nov 14, 2025 at 09:49:34AM +0100, David Hildenbrand (Red Hat) wrote:
>> > On 14.11.25 08:57, Wei Yang wrote:
>> > > The primary goal of the folio_split_supported() function is to validate
>> > > whether a folio is suitable for splitting and to bail out early if it is
>> > > not.
>> > > 
>> > > Currently, some order-related checks are scattered throughout the
>> > > calling code rather than being centralized in folio_split_supported().
>> > > 
>> > > This commit moves all remaining order-related validation logic into
>> > > folio_split_supported(). This consolidation ensures that the function
>> > > serves its intended purpose as a single point of failure and improves
>> > > the clarity and maintainability of the surrounding code.
>> > 
>> > Combining the EINVAL handling sounds reasonable.
>> > 
>> 
>> You mean:
>> 
>> This commit combines the EINVAL handling logic into folio_split_supported().
>> This consolidation ... ?
>
>It was not a suggestion to change, it was rather only a comment from my side
>:)
>
>[...]
>
>> > 
>> > The mapping_max_folio_order() check is new now. What is the default value of that? Is it always initialized properly?
>> > 
>> 
>> Not sure "is new now" means what?
>> 
>> Original check use mapping_large_folio_support() which calls
>> mapping_max_folio_order(). It looks not new to me.
>
>Right, but we did not actually care about the exact value.
>
>IOW, we didn't check for order <= mapping_max_folio_order() before.
>
>SO I'm just curious if that is universally fine.
>

Hi, David

I just happened to come across some code that might address your question
here.

Take a look at the function __filemap_get_folio(). It uses
mapping_max_folio_order() and mapping_min_folio_order() to determine the
appropriate folio order for page cache allocation.
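
Roughly, the idea is the one sketched below (illustrative only; the function
name clamp_folio_order() is made up here and this is not the actual
__filemap_get_folio() code): the order used for a page cache allocation is
kept within the [min, max] window the mapping advertises.

/* Clamp a requested folio order into the mapping's advertised range. */
static unsigned int clamp_folio_order(unsigned int requested,
				      unsigned int min_order,
				      unsigned int max_order)
{
	if (requested > max_order)
		requested = max_order;
	if (requested < min_order)
		requested = min_order;
	return requested;
}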

I just wanted to share what I found.

>-- 
>Cheers
>
>David

-- 
Wei Yang
Help you, Help me



