linux-mm.kvack.org archive mirror
* [Patch v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable()
@ 2025-12-23 12:25 Wei Yang
  2025-12-23 17:50 ` [syzbot ci] " syzbot ci
  2026-01-04  2:37 ` [Patch v2] " Wei Yang
  0 siblings, 2 replies; 9+ messages in thread
From: Wei Yang @ 2025-12-23 12:25 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang
  Cc: linux-mm, Wei Yang

The primary goal of the folio_check_splittable() function is to validate
whether a folio is suitable for splitting and to bail out early if it is
not.

Currently, some order-related checks are scattered throughout the
calling code rather than being centralized in folio_check_splittable().

This commit moves all remaining order-related validation logic into
folio_check_splittable(). This consolidation ensures that the function
serves its intended purpose as a single point of failure and improves
the clarity and maintainability of the surrounding code.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>

---
v2: just move current logic here
---
 mm/huge_memory.c | 63 +++++++++++++++++++++++-------------------------
 1 file changed, 30 insertions(+), 33 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b8ee33318a60..59d72522399f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3705,6 +3705,10 @@ int folio_check_splittable(struct folio *folio, unsigned int new_order,
 			   enum split_type split_type)
 {
 	VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+
+	if (new_order >= folio_order(folio))
+		return -EINVAL;
+
 	/*
 	 * Folios that just got truncated cannot get split. Signal to the
 	 * caller that there was a race.
@@ -3719,28 +3723,33 @@ int folio_check_splittable(struct folio *folio, unsigned int new_order,
 		/* order-1 is not supported for anonymous THP. */
 		if (new_order == 1)
 			return -EINVAL;
-	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
-		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
-		    !mapping_large_folio_support(folio->mapping)) {
-			/*
-			 * We can always split a folio down to a single page
-			 * (new_order == 0) uniformly.
-			 *
-			 * For any other scenario
-			 *   a) uniform split targeting a large folio
-			 *      (new_order > 0)
-			 *   b) any non-uniform split
-			 * we must confirm that the file system supports large
-			 * folios.
-			 *
-			 * Note that we might still have THPs in such
-			 * mappings, which is created from khugepaged when
-			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
-			 * case, the mapping does not actually support large
-			 * folios properly.
-			 */
-			return -EINVAL;
+	} else {
+		if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
+			if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
+			    !mapping_large_folio_support(folio->mapping)) {
+				/*
+				 * We can always split a folio down to a
+				 * single page (new_order == 0) uniformly.
+				 *
+				 * For any other scenario
+				 *   a) uniform split targeting a large folio
+				 *      (new_order > 0)
+				 *   b) any non-uniform split
+				 * we must confirm that the file system
+				 * supports large folios.
+				 *
+				 * Note that we might still have THPs in such
+				 * mappings, which is created from khugepaged
+				 * when CONFIG_READ_ONLY_THP_FOR_FS is
+				 * enabled. But in that case, the mapping does
+				 * not actually support large folios properly.
+				 */
+				return -EINVAL;
+			}
 		}
+
+		if (new_order < mapping_min_folio_order(folio->mapping))
+			return -EINVAL;
 	}
 
 	/*
@@ -4008,11 +4017,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		goto out;
 	}
 
-	if (new_order >= old_order) {
-		ret = -EINVAL;
-		goto out;
-	}
-
 	ret = folio_check_splittable(folio, new_order, split_type);
 	if (ret) {
 		VM_WARN_ONCE(ret == -EINVAL, "Tried to split an unsplittable folio");
@@ -4036,16 +4040,9 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		anon_vma_lock_write(anon_vma);
 		mapping = NULL;
 	} else {
-		unsigned int min_order;
 		gfp_t gfp;
 
 		mapping = folio->mapping;
-		min_order = mapping_min_folio_order(folio->mapping);
-		if (new_order < min_order) {
-			ret = -EINVAL;
-			goto out;
-		}
-
 		gfp = current_gfp_context(mapping_gfp_mask(mapping) &
 							GFP_RECLAIM_MASK);
 
-- 
2.34.1



^ permalink raw reply	[flat|nested] 9+ messages in thread

* [syzbot ci] Re: mm/huge_memory: consolidate order-related checks into folio_check_splittable()
  2025-12-23 12:25 [Patch v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable() Wei Yang
@ 2025-12-23 17:50 ` syzbot ci
  2026-01-04  2:37 ` [Patch v2] " Wei Yang
  1 sibling, 0 replies; 9+ messages in thread
From: syzbot ci @ 2025-12-23 17:50 UTC (permalink / raw)
  To: akpm, baohua, baolin.wang, david, dev.jain, lance.yang,
	liam.howlett, linux-mm, lorenzo.stoakes, npache, richard.weiyang,
	ryan.roberts, ziy
  Cc: syzbot, syzkaller-bugs

syzbot ci has tested the following series

[v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable()
https://lore.kernel.org/all/20251223122539.10726-1-richard.weiyang@gmail.com
* [Patch v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable()

and found the following issue:
WARNING in __folio_split

Full report is available here:
https://ci.syzbot.org/series/7e34013d-ed08-40e1-99b7-8fd118dce84f

***

WARNING in __folio_split

tree:      mm-new
URL:       https://kernel.googlesource.com/pub/scm/linux/kernel/git/akpm/mm.git
base:      c642ecda5b136882e518d8303863473c0d21ab2f
arch:      amd64
compiler:  Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
config:    https://ci.syzbot.org/builds/2edb2dc5-42c6-4557-a194-921c57fd9eb1/config
C repro:   https://ci.syzbot.org/findings/8349b674-5790-4507-97cd-03697ec93cb0/c_repro
syz repro: https://ci.syzbot.org/findings/8349b674-5790-4507-97cd-03697ec93cb0/syz_repro

------------[ cut here ]------------
Tried to split an unsplittable folio
WARNING: mm/huge_memory.c:3970 at __folio_split+0xfe7/0x1370 mm/huge_memory.c:3970, CPU#1: syz.0.17/5997
Modules linked in:
CPU: 1 UID: 0 PID: 5997 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:__folio_split+0xfe7/0x1370 mm/huge_memory.c:3970
Code: fe c6 05 a5 f8 3e 0d 01 90 0f 0b 90 e9 0d f4 ff ff e8 4d 78 94 ff 49 ff cd e9 d1 f4 ff ff e8 40 78 94 ff 48 8d 3d 69 47 5a 0d <67> 48 0f b9 3a 41 bd ea ff ff ff e9 7c fe ff ff 44 89 7c 24 34 44
RSP: 0018:ffffc900036d6d60 EFLAGS: 00010293
RAX: ffffffff822d4360 RBX: ffffea0005a4b008 RCX: ffff8881047fd7c0
RDX: 0000000000000000 RSI: ffffffff8e06e540 RDI: ffffffff8f878ad0
RBP: ffffc900036d6ef0 R08: ffff8881047fd7c0 R09: 0000000000000002
R10: 00000000ffffffea R11: 0000000000000000 R12: 0000000000000004
R13: 00000000ffffffea R14: ffffea0005a4b000 R15: 1ffffd4000b49603
FS:  0000555591a60500(0000) GS:ffff8882a9e32000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b30d63fff CR3: 0000000100f30000 CR4: 00000000000006f0
Call Trace:
 <TASK>
 madvise_cold_or_pageout_pte_range+0xbf3/0x1ce0 mm/madvise.c:503
 walk_pmd_range mm/pagewalk.c:130 [inline]
 walk_pud_range mm/pagewalk.c:224 [inline]
 walk_p4d_range mm/pagewalk.c:262 [inline]
 walk_pgd_range+0x1037/0x1d30 mm/pagewalk.c:303
 __walk_page_range+0x14c/0x710 mm/pagewalk.c:410
 walk_page_range_vma_unsafe+0x34c/0x400 mm/pagewalk.c:714
 madvise_pageout_page_range mm/madvise.c:622 [inline]
 madvise_pageout mm/madvise.c:647 [inline]
 madvise_vma_behavior+0x30c7/0x4420 mm/madvise.c:1366
 madvise_walk_vmas+0x575/0xaf0 mm/madvise.c:1721
 madvise_do_behavior+0x38e/0x550 mm/madvise.c:1937
 do_madvise+0x1bc/0x270 mm/madvise.c:2030
 __do_sys_madvise mm/madvise.c:2039 [inline]
 __se_sys_madvise mm/madvise.c:2037 [inline]
 __x64_sys_madvise+0xa7/0xc0 mm/madvise.c:2037
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff69838f7c9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fff82001dd8 EFLAGS: 00000246 ORIG_RAX: 000000000000001c
RAX: ffffffffffffffda RBX: 00007ff6985e5fa0 RCX: 00007ff69838f7c9
RDX: 0000000000000015 RSI: 0000000000600000 RDI: 0000200000000000
RBP: 00007ff6983f297f R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ff6985e5fa0 R14: 00007ff6985e5fa0 R15: 0000000000000003
 </TASK>
----------------
Code disassembly (best guess), 1 bytes skipped:
   0:	c6 05 a5 f8 3e 0d 01 	movb   $0x1,0xd3ef8a5(%rip)        # 0xd3ef8ac
   7:	90                   	nop
   8:	0f 0b                	ud2
   a:	90                   	nop
   b:	e9 0d f4 ff ff       	jmp    0xfffff41d
  10:	e8 4d 78 94 ff       	call   0xff947862
  15:	49 ff cd             	dec    %r13
  18:	e9 d1 f4 ff ff       	jmp    0xfffff4ee
  1d:	e8 40 78 94 ff       	call   0xff947862
  22:	48 8d 3d 69 47 5a 0d 	lea    0xd5a4769(%rip),%rdi        # 0xd5a4792
* 29:	67 48 0f b9 3a       	ud1    (%edx),%rdi <-- trapping instruction
  2e:	41 bd ea ff ff ff    	mov    $0xffffffea,%r13d
  34:	e9 7c fe ff ff       	jmp    0xfffffeb5
  39:	44 89 7c 24 34       	mov    %r15d,0x34(%rsp)
  3e:	44                   	rex.R


***

If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
  Tested-by: syzbot@syzkaller.appspotmail.com

---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Patch v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable()
  2025-12-23 12:25 [Patch v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable() Wei Yang
  2025-12-23 17:50 ` [syzbot ci] " syzbot ci
@ 2026-01-04  2:37 ` Wei Yang
  2026-01-05 16:16   ` David Hildenbrand (Red Hat)
  1 sibling, 1 reply; 9+ messages in thread
From: Wei Yang @ 2026-01-04  2:37 UTC (permalink / raw)
  To: Wei Yang
  Cc: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm

On Tue, Dec 23, 2025 at 12:25:39PM +0000, Wei Yang wrote:
>The primary goal of the folio_check_splittable() function is to validate
>whether a folio is suitable for splitting and to bail out early if it is
>not.
>
>Currently, some order-related checks are scattered throughout the
>calling code rather than being centralized in folio_check_splittable().
>
>This commit moves all remaining order-related validation logic into
>folio_check_splittable(). This consolidation ensures that the function
>serves its intended purpose as a single point of failure and improves
>the clarity and maintainability of the surrounding code.
>
>Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>Cc: Zi Yan <ziy@nvidia.com>
>
>---
[...]
>@@ -3719,28 +3723,33 @@ int folio_check_splittable(struct folio *folio, unsigned int new_order,
> 		/* order-1 is not supported for anonymous THP. */
> 		if (new_order == 1)
> 			return -EINVAL;
>-	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>-		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>-		    !mapping_large_folio_support(folio->mapping)) {
>-			/*
>-			 * We can always split a folio down to a single page
>-			 * (new_order == 0) uniformly.
>-			 *
>-			 * For any other scenario
>-			 *   a) uniform split targeting a large folio
>-			 *      (new_order > 0)
>-			 *   b) any non-uniform split
>-			 * we must confirm that the file system supports large
>-			 * folios.
>-			 *
>-			 * Note that we might still have THPs in such
>-			 * mappings, which is created from khugepaged when
>-			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
>-			 * case, the mapping does not actually support large
>-			 * folios properly.
>-			 */
>-			return -EINVAL;
>+	} else {
>+		if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>+			if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>+			    !mapping_large_folio_support(folio->mapping)) {
>+				/*
>+				 * We can always split a folio down to a
>+				 * single page (new_order == 0) uniformly.
>+				 *
>+				 * For any other scenario
>+				 *   a) uniform split targeting a large folio
>+				 *      (new_order > 0)
>+				 *   b) any non-uniform split
>+				 * we must confirm that the file system
>+				 * supports large folios.
>+				 *
>+				 * Note that we might still have THPs in such
>+				 * mappings, which is created from khugepaged
>+				 * when CONFIG_READ_ONLY_THP_FOR_FS is
>+				 * enabled. But in that case, the mapping does
>+				 * not actually support large folios properly.
>+				 */
>+				return -EINVAL;
>+			}
> 		}

Hi, Happy New Year to all :-)


Following the application of this patch, a warning was reported [5]. The root
cause is an attempt to uniformly split a page cache folio down to order-0 when
the mapping has mapping_min_folio_order() > 0, i.e. when the filesystem does
not allow folios as small as a single page.

It is worth noting that the current upstream code also denies this split, but
it does so silently. This patch simply makes the violation visible.

Upon reviewing the code history, I believe the logic introduced here is
correct. The existing comment--"We can always split a folio down to a single
page"--appears to be misleading, as it does not account for modern
constraints where a minimum folio order is required by the mapping.

Below is my analysis and suggestion:
----


The whole story started with [1], which introduced splitting a huge page to
any lower order. The order-related check was then fixed by [2], which is the
basis of the current form.

When we split a large pagecache folio, there are two possibilities:

    1) a folio collapsed by khugepaged
    2) a folio from a filesystem with native large folio support

For case 1), the folio can only be split to order-0, so the check
(IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !mapping_large_folio_support(folio->mapping))
was added.

For case 2), mapping_large_folio_support() is implied to be true, and the
assumption seems to have been that the folio could be split to any lower
order. mapping_min_folio_order() was only introduced later in [3], and the
min_order check in [4], both after [1] and [2]. So when [2] was applied,
there was no notion of mapping_min_folio_order() yet. (I may be missing some
background here; if so, please correct me.)

The introduction of [3] and [4] changed the assumption made in [2]: besides
checking mapping_large_folio_support(), mapping_min_folio_order() should also
be checked. So the comment in the current code is misleading:

	/* 
	 * We can always split a folio down to a
	 * single page (new_order == 0) uniformly.
	 */

For case 1) it still stands, but for case 2) we should also check new_order
against mapping_min_folio_order(), which is what [4] does.

If my understanding is correct, below is my suggested change to the comment.
Feel free to correct me if I missed something.

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 40cf59301c21..b0ba27b0f763 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3714,28 +3718,28 @@ int folio_check_splittable(struct folio *folio, unsigned int new_order,
                /* order-1 is not supported for anonymous THP. */
                if (new_order == 1)
                        return -EINVAL;
-       } else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
-               if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
-                   !mapping_large_folio_support(folio->mapping)) {
-                       /*
-                        * We can always split a folio down to a single page
-                        * (new_order == 0) uniformly.
-                        *
-                        * For any other scenario
-                        *   a) uniform split targeting a large folio
-                        *      (new_order > 0)
-                        *   b) any non-uniform split
-                        * we must confirm that the file system supports large
-                        * folios.
-                        *
-                        * Note that we might still have THPs in such
-                        * mappings, which is created from khugepaged when
-                        * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
-                        * case, the mapping does not actually support large
-                        * folios properly.
-                        */
-                       return -EINVAL;
+       } else {
+               /*
+                * When splitting a large pagecache folio, it has two
+                * possibilities:
+                *
+                *   1) khugepaged collapsed folio when
+                *      CONFIG_READ_ONLY_THP_FOR_FS is enabled
+                *   2) filesystem supported folio
+                *
+                * For case 1), we only support uniform split to order-0.
+                * For case 2), we need to make sure new_order is not less
+                * than mapping_min_folio_order().
+                */
+               if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
+                       if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
+                           !mapping_large_folio_support(folio->mapping)) {
+                               return -EINVAL;
+                       }
                }
+
+               if (new_order < mapping_min_folio_order(folio->mapping))
+                       return -EINVAL;
        }


The reported warning comes from madvise_cold_or_pageout_pte_range(). I don't
see a way to eliminate it, since the user may pass any range to madvise(), and
uniformly splitting to order-0 is the generic approach there.
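
To make the constraint concrete, here is a minimal illustrative sketch.
clamp_split_order() is a hypothetical helper, not in-tree code;
folio_test_anon() and mapping_min_folio_order() are existing kernel helpers:

/*
 * Illustrative sketch only: a pagecache folio must not be split below
 * the order its mapping requires, while anonymous folios have no
 * mapping-imposed minimum.
 */
static unsigned int clamp_split_order(struct folio *folio,
				      unsigned int new_order)
{
	/* Anonymous memory: no minimum order from the mapping. */
	if (folio_test_anon(folio))
		return new_order;

	/* Pagecache: never go below what the mapping supports. */
	return max(new_order, mapping_min_folio_order(folio->mapping));
}

The madvise() pageout path discussed above asks for an order-0 split, which
is exactly the request such a mapping rejects.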

[1]: 2024-02-26 c010d47f107f mm: thp: split huge page to any lower order pages
[2]: 2024-06-07 6a50c9b512f7 mm: huge_memory: fix misused mapping_large_folio_support() for anon folios
[3]: 2024-08-22 84429b675bcf fs: Allow fine-grained control of folio sizes
[4]: 2024-08-22 e220917fa507 mm: split a folio in minimum folio order chunks
[5]: https://lore.kernel.org/all/694ac438.050a0220.35954c.0000.GAE@google.com/

-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Patch v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable()
  2026-01-04  2:37 ` [Patch v2] " Wei Yang
@ 2026-01-05 16:16   ` David Hildenbrand (Red Hat)
  2026-01-05 16:29     ` Lorenzo Stoakes
  2026-01-06  9:54     ` Wei Yang
  0 siblings, 2 replies; 9+ messages in thread
From: David Hildenbrand (Red Hat) @ 2026-01-05 16:16 UTC (permalink / raw)
  To: Wei Yang
  Cc: akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, dev.jain, baohua, lance.yang, linux-mm

On 1/4/26 03:37, Wei Yang wrote:
> On Tue, Dec 23, 2025 at 12:25:39PM +0000, Wei Yang wrote:
>> The primary goal of the folio_check_splittable() function is to validate
>> whether a folio is suitable for splitting and to bail out early if it is
>> not.
>>
>> Currently, some order-related checks are scattered throughout the
>> calling code rather than being centralized in folio_check_splittable().
>>
>> This commit moves all remaining order-related validation logic into
>> folio_check_splittable(). This consolidation ensures that the function
>> serves its intended purpose as a single point of failure and improves
>> the clarity and maintainability of the surrounding code.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>>
>> ---
> [...]
>> @@ -3719,28 +3723,33 @@ int folio_check_splittable(struct folio *folio, unsigned int new_order,
>> 		/* order-1 is not supported for anonymous THP. */
>> 		if (new_order == 1)
>> 			return -EINVAL;
>> -	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>> -		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> -		    !mapping_large_folio_support(folio->mapping)) {
>> -			/*
>> -			 * We can always split a folio down to a single page
>> -			 * (new_order == 0) uniformly.
>> -			 *
>> -			 * For any other scenario
>> -			 *   a) uniform split targeting a large folio
>> -			 *      (new_order > 0)
>> -			 *   b) any non-uniform split
>> -			 * we must confirm that the file system supports large
>> -			 * folios.
>> -			 *
>> -			 * Note that we might still have THPs in such
>> -			 * mappings, which is created from khugepaged when
>> -			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
>> -			 * case, the mapping does not actually support large
>> -			 * folios properly.
>> -			 */
>> -			return -EINVAL;
>> +	} else {
>> +		if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>> +			if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> +			    !mapping_large_folio_support(folio->mapping)) {
>> +				/*
>> +				 * We can always split a folio down to a
>> +				 * single page (new_order == 0) uniformly.
>> +				 *
>> +				 * For any other scenario
>> +				 *   a) uniform split targeting a large folio
>> +				 *      (new_order > 0)
>> +				 *   b) any non-uniform split
>> +				 * we must confirm that the file system
>> +				 * supports large folios.
>> +				 *
>> +				 * Note that we might still have THPs in such
>> +				 * mappings, which is created from khugepaged
>> +				 * when CONFIG_READ_ONLY_THP_FOR_FS is
>> +				 * enabled. But in that case, the mapping does
>> +				 * not actually support large folios properly.
>> +				 */
>> +				return -EINVAL;
>> +			}
>> 		}
> 
> Hi, Happy New Year to all :-)

Happy new year to you, too!

There was an offlist discussion about some of the text below, because a 
couple of people wondered whether it was an LLM-generated reply, and 
whether it is even worth the time to read.

So I am curious, did you end up using an LLM to compose this reply, and 
if so, to which degree? Only to improve your writing or also to come up 
with an analysis, code etc?

Feel free to use an LLM to improve your writing, analysis etc. Just a 
note that nobody here is interested in getting LLM-slopped, so don't 
send unfiltered/unchecked LLM output to the list.

In general, I think it was raised already in the past, please don't send 
patches for code you don't fully understand. It consumes quite some 
bandwidth for us reviewers/maintainers here and it just gets very likely 
to break things by accident.

The comment change suggestion below does not make any sense to fix a 
warning we trigger. If an LLM wrote it, you should never have sent it. 
If you wrote it, you should have invested more time to understand the 
problem and come up with a reasonable solution ... or not worked on it 
in the first place if you don't understand the details.


To the issue at hand: Zi Yan pointed this very thing out in v1 [1], no?

The patch as is cannot work: we cannot return -EINVAL for something that 
is not supposed to trigger a warning.

[1] 
https://lore.kernel.org/linux-mm/01FABE3A-AD4E-4A09-B971-C89503A848DF@nvidia.com/

-- 
Cheers

David


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Patch v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable()
  2026-01-05 16:16   ` David Hildenbrand (Red Hat)
@ 2026-01-05 16:29     ` Lorenzo Stoakes
  2026-01-05 16:52       ` Matthew Wilcox
  2026-01-06  9:54     ` Wei Yang
  1 sibling, 1 reply; 9+ messages in thread
From: Lorenzo Stoakes @ 2026-01-05 16:29 UTC (permalink / raw)
  To: David Hildenbrand (Red Hat)
  Cc: Wei Yang, akpm, ziy, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, dev.jain, baohua, lance.yang, linux-mm

On Mon, Jan 05, 2026 at 05:16:45PM +0100, David Hildenbrand (Red Hat) wrote:
> On 1/4/26 03:37, Wei Yang wrote:
> > On Tue, Dec 23, 2025 at 12:25:39PM +0000, Wei Yang wrote:
> > > The primary goal of the folio_check_splittable() function is to validate
> > > whether a folio is suitable for splitting and to bail out early if it is
> > > not.
> > >
> > > Currently, some order-related checks are scattered throughout the
> > > calling code rather than being centralized in folio_check_splittable().
> > >
> > > This commit moves all remaining order-related validation logic into
> > > folio_check_splittable(). This consolidation ensures that the function
> > > serves its intended purpose as a single point of failure and improves
> > > the clarity and maintainability of the surrounding code.
> > >
> > > Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> > > Cc: Zi Yan <ziy@nvidia.com>
> > >
> > > ---
> > [...]
> > > @@ -3719,28 +3723,33 @@ int folio_check_splittable(struct folio *folio, unsigned int new_order,
> > > 		/* order-1 is not supported for anonymous THP. */
> > > 		if (new_order == 1)
> > > 			return -EINVAL;
> > > -	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
> > > -		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
> > > -		    !mapping_large_folio_support(folio->mapping)) {
> > > -			/*
> > > -			 * We can always split a folio down to a single page
> > > -			 * (new_order == 0) uniformly.
> > > -			 *
> > > -			 * For any other scenario
> > > -			 *   a) uniform split targeting a large folio
> > > -			 *      (new_order > 0)
> > > -			 *   b) any non-uniform split
> > > -			 * we must confirm that the file system supports large
> > > -			 * folios.
> > > -			 *
> > > -			 * Note that we might still have THPs in such
> > > -			 * mappings, which is created from khugepaged when
> > > -			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
> > > -			 * case, the mapping does not actually support large
> > > -			 * folios properly.
> > > -			 */
> > > -			return -EINVAL;
> > > +	} else {
> > > +		if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
> > > +			if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
> > > +			    !mapping_large_folio_support(folio->mapping)) {
> > > +				/*
> > > +				 * We can always split a folio down to a
> > > +				 * single page (new_order == 0) uniformly.
> > > +				 *
> > > +				 * For any other scenario
> > > +				 *   a) uniform split targeting a large folio
> > > +				 *      (new_order > 0)
> > > +				 *   b) any non-uniform split
> > > +				 * we must confirm that the file system
> > > +				 * supports large folios.
> > > +				 *
> > > +				 * Note that we might still have THPs in such
> > > +				 * mappings, which is created from khugepaged
> > > +				 * when CONFIG_READ_ONLY_THP_FOR_FS is
> > > +				 * enabled. But in that case, the mapping does
> > > +				 * not actually support large folios properly.
> > > +				 */
> > > +				return -EINVAL;
> > > +			}
> > > 		}
> >
> > Hi, Happy New Year to all :-)
>
> Happy new year to you, too!
>
> There was an offlist discussion about some of the text below, because a
> couple of people wondered whether it was an LLM-generated reply, and whether
> it is even worth the time to read.
>
> So I am curious, did you end up using an LLM to compose this reply, and if
> so, to which degree? Only to improve your writing or also to come up with an
> analysis, code etc?
>
> Feel free to use an LLM to improve your writing, analysis etc. Just a note
> that nobody here is interested in getting LLM-slopped, so don't send
> unfiltered/unchecked LLM output to the list.
>
> In general, I think it was raised already in the past, please don't send
> patches for code you don't fully understand. It consumes quite some
> bandwidth for us reviewers/maintainers here and it just gets very likely to
> break things by accident.
>
> The comment change suggestion below does not make any sense to fix a warning
> we trigger. If an LLM wrote it, you should never have sent it. If you wrote
> it, you should have invested more time to understand the problem and come up
> with a reasonable solution ... or not worked on it in the first place if you
> don't understand the details.

Honestly I have repeatedly told Wei to not send series like these and have
been ignored, and therefore I now ignore his series in general.

I think that's a reasonable approach - good will can only go so far.

But it doesn't make any difference, mm's 'merge by default' approach and
our continuing to tolerate this kind of thing means that nothing will
change, and unless you propose to NAK series that obviously contain LLM
slop, nothing will change.

>
>
> To the issue at hand: Zi Yan pointed this very thing out in v1 [1], no?
>
> The patch as is cannot work: we cannot return -EINVAL for something that is
> not supposed to trigger a warning.
>
> [1] https://lore.kernel.org/linux-mm/01FABE3A-AD4E-4A09-B971-C89503A848DF@nvidia.com/
>
> --
> Cheers
>
> David
>

I for one however, have lost patience!

Happy new year, Lorenzo


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Patch v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable()
  2026-01-05 16:29     ` Lorenzo Stoakes
@ 2026-01-05 16:52       ` Matthew Wilcox
  0 siblings, 0 replies; 9+ messages in thread
From: Matthew Wilcox @ 2026-01-05 16:52 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: David Hildenbrand (Red Hat),
	Wei Yang, akpm, ziy, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, dev.jain, baohua, lance.yang, linux-mm

On Mon, Jan 05, 2026 at 04:29:30PM +0000, Lorenzo Stoakes wrote:
> Honestly I have repeatedly told Wei to not send series like these and have
> been ignored, and therefore I now ignore his series in general.

I also have lost patience with Wei.  He's been contributing patches
since 2012 and has not progressed beyond cleanup work.  I'm all for
helping new people along, but the balance of "time I spend helping"
versus "time I don't have to spend just fixing the bug myself" has long
since passed a tipping point.

I no longer look at Wei's patches, and would prefer them to no longer
be merged.


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Patch v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable()
  2026-01-05 16:16   ` David Hildenbrand (Red Hat)
  2026-01-05 16:29     ` Lorenzo Stoakes
@ 2026-01-06  9:54     ` Wei Yang
  2026-01-06 12:28       ` Zi Yan
  1 sibling, 1 reply; 9+ messages in thread
From: Wei Yang @ 2026-01-06  9:54 UTC (permalink / raw)
  To: David Hildenbrand (Red Hat)
  Cc: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm

On Mon, Jan 05, 2026 at 05:16:45PM +0100, David Hildenbrand (Red Hat) wrote:
>On 1/4/26 03:37, Wei Yang wrote:
>> On Tue, Dec 23, 2025 at 12:25:39PM +0000, Wei Yang wrote:
>> > The primary goal of the folio_check_splittable() function is to validate
>> > whether a folio is suitable for splitting and to bail out early if it is
>> > not.
>> > 
>> > Currently, some order-related checks are scattered throughout the
>> > calling code rather than being centralized in folio_check_splittable().
>> > 
>> > This commit moves all remaining order-related validation logic into
>> > folio_check_splittable(). This consolidation ensures that the function
>> > serves its intended purpose as a single point of failure and improves
>> > the clarity and maintainability of the surrounding code.
>> > 
>> > Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> > Cc: Zi Yan <ziy@nvidia.com>
>> > 
>> > ---
>> [...]
>> > @@ -3719,28 +3723,33 @@ int folio_check_splittable(struct folio *folio, unsigned int new_order,
>> > 		/* order-1 is not supported for anonymous THP. */
>> > 		if (new_order == 1)
>> > 			return -EINVAL;
>> > -	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>> > -		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> > -		    !mapping_large_folio_support(folio->mapping)) {
>> > -			/*
>> > -			 * We can always split a folio down to a single page
>> > -			 * (new_order == 0) uniformly.
>> > -			 *
>> > -			 * For any other scenario
>> > -			 *   a) uniform split targeting a large folio
>> > -			 *      (new_order > 0)
>> > -			 *   b) any non-uniform split
>> > -			 * we must confirm that the file system supports large
>> > -			 * folios.
>> > -			 *
>> > -			 * Note that we might still have THPs in such
>> > -			 * mappings, which is created from khugepaged when
>> > -			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
>> > -			 * case, the mapping does not actually support large
>> > -			 * folios properly.
>> > -			 */
>> > -			return -EINVAL;
>> > +	} else {
>> > +		if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>> > +			if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> > +			    !mapping_large_folio_support(folio->mapping)) {
>> > +				/*
>> > +				 * We can always split a folio down to a
>> > +				 * single page (new_order == 0) uniformly.
>> > +				 *
>> > +				 * For any other scenario
>> > +				 *   a) uniform split targeting a large folio
>> > +				 *      (new_order > 0)
>> > +				 *   b) any non-uniform split
>> > +				 * we must confirm that the file system
>> > +				 * supports large folios.
>> > +				 *
>> > +				 * Note that we might still have THPs in such
>> > +				 * mappings, which is created from khugepaged
>> > +				 * when CONFIG_READ_ONLY_THP_FOR_FS is
>> > +				 * enabled. But in that case, the mapping does
>> > +				 * not actually support large folios properly.
>> > +				 */
>> > +				return -EINVAL;
>> > +			}
>> > 		}
>> 
>> Hi, Happy New Year to all :-)
>
>Happy new year to you, too!
>
>There was an offlist discussion about some of the text below, because a
>couple of people wondered whether it was an LLM-generated reply, and whether
>it is even worth the time to read.
>
>So I am curious, did you end up using an LLM to compose this reply, and if
>so, to which degree? Only to improve your writing or also to come up with an
>analysis, code etc?
>

The first three paragraphs of the mail were polished by an LLM, since Andrew
once suggested that I use an LLM to refine my text.

The rest, including the code change, is not LLM-generated.

>Feel free to use an LLM to improve your writing, analysis etc. Just a note
>that nobody here is interested in getting LLM-slopped, so don't send
>unfiltered/unchecked LLM output to the list.
>
>In general, I think it was raised already in the past, please don't send
>patches for code you don't fully understand. It consumes quite some bandwidth
>for us reviewers/maintainers here and it just gets very likely to break
>things by accident.
>
>The comment change suggestion below does not make any sense to fix a warning
>we trigger. If an LLM wrote it, you should never have sent it. If you wrote
>it, you should have invested more time to understand the problem and come up
>with a reasonable solution ... or not worked on it in the first place if you
>don't understand the details.
>
>
>To the issue at hand: Zi Yan pointed this very thing out in v1 [1], no?
>

Hmm.. this is not the same thing.

Actually, before sending v2 I talked with Zi Yan off-list and he said it
looked good.

But after the warning was triggered, I re-read the history and found that the
logic is correct but the comment is misleading. Also, current upstream does
not report any warning when we try to split a folio uniformly to order-0 and
the mapping does not support it.

Or worse, we should have split to order-0, but we didn't.

Thanks for your patience in taking a look.

>The patch as is cannot work: we cannot return -EINVAL for something that is
>not supposed to trigger a warning.
>
>[1] https://lore.kernel.org/linux-mm/01FABE3A-AD4E-4A09-B971-C89503A848DF@nvidia.com/
>
>-- 
>Cheers
>
>David

-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Patch v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable()
  2026-01-06  9:54     ` Wei Yang
@ 2026-01-06 12:28       ` Zi Yan
  2026-01-06 12:51         ` Wei Yang
  0 siblings, 1 reply; 9+ messages in thread
From: Zi Yan @ 2026-01-06 12:28 UTC (permalink / raw)
  To: Wei Yang
  Cc: David Hildenbrand (Red Hat),
	akpm, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, dev.jain, baohua, lance.yang, linux-mm

On 6 Jan 2026, at 4:54, Wei Yang wrote:

> On Mon, Jan 05, 2026 at 05:16:45PM +0100, David Hildenbrand (Red Hat) wrote:
>> On 1/4/26 03:37, Wei Yang wrote:
>>> On Tue, Dec 23, 2025 at 12:25:39PM +0000, Wei Yang wrote:
>>>> The primary goal of the folio_check_splittable() function is to validate
>>>> whether a folio is suitable for splitting and to bail out early if it is
>>>> not.
>>>>
>>>> Currently, some order-related checks are scattered throughout the
>>>> calling code rather than being centralized in folio_check_splittable().
>>>>
>>>> This commit moves all remaining order-related validation logic into
>>>> folio_check_splittable(). This consolidation ensures that the function
>>>> serves its intended purpose as a single point of failure and improves
>>>> the clarity and maintainability of the surrounding code.
>>>>
>>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>>
>>>> ---
>>> [...]
>>>> @@ -3719,28 +3723,33 @@ int folio_check_splittable(struct folio *folio, unsigned int new_order,
>>>> 		/* order-1 is not supported for anonymous THP. */
>>>> 		if (new_order == 1)
>>>> 			return -EINVAL;
>>>> -	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>>>> -		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>>>> -		    !mapping_large_folio_support(folio->mapping)) {
>>>> -			/*
>>>> -			 * We can always split a folio down to a single page
>>>> -			 * (new_order == 0) uniformly.
>>>> -			 *
>>>> -			 * For any other scenario
>>>> -			 *   a) uniform split targeting a large folio
>>>> -			 *      (new_order > 0)
>>>> -			 *   b) any non-uniform split
>>>> -			 * we must confirm that the file system supports large
>>>> -			 * folios.
>>>> -			 *
>>>> -			 * Note that we might still have THPs in such
>>>> -			 * mappings, which is created from khugepaged when
>>>> -			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
>>>> -			 * case, the mapping does not actually support large
>>>> -			 * folios properly.
>>>> -			 */
>>>> -			return -EINVAL;
>>>> +	} else {
>>>> +		if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>>>> +			if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>>>> +			    !mapping_large_folio_support(folio->mapping)) {
>>>> +				/*
>>>> +				 * We can always split a folio down to a
>>>> +				 * single page (new_order == 0) uniformly.
>>>> +				 *
>>>> +				 * For any other scenario
>>>> +				 *   a) uniform split targeting a large folio
>>>> +				 *      (new_order > 0)
>>>> +				 *   b) any non-uniform split
>>>> +				 * we must confirm that the file system
>>>> +				 * supports large folios.
>>>> +				 *
>>>> +				 * Note that we might still have THPs in such
>>>> +				 * mappings, which is created from khugepaged
>>>> +				 * when CONFIG_READ_ONLY_THP_FOR_FS is
>>>> +				 * enabled. But in that case, the mapping does
>>>> +				 * not actually support large folios properly.
>>>> +				 */
>>>> +				return -EINVAL;
>>>> +			}
>>>> 		}
>>>
>>> Hi, Happy New Year to all :-)
>>
>> Happy new year to you, too!
>>
>> There was an offlist discussion about some of the text below, because a
>> couple of people wondered whether it was an LLM-generated reply, and whether
>> it is even worth the time to read.
>>
>> So I am curious, did you end up using an LLM to compose this reply, and if
>> so, to which degree? Only to improve your writing or also to come up with an
>> analysis, code etc?
>>
>
> The first three paragraphs of the mail were polished by an LLM, since Andrew
> once suggested that I use an LLM to refine my text.
>
> The rest, including the code change, is not LLM-generated.
>
>> Feel free to use an LLM to improve your writing, analysis etc. Just a note
>> that nobody here is interested in getting LLM-slopped, so don't send
>> unfiltered/unchecked LLM output to the list.
>>
>> In general, I think it was raised already in the past, please don't send
>> patches for code you don't fully understand. It consumes quite some bandwidth
>> for us reviewers/maintainers here and it just gets very likely to break
>> things by accident.
>>
>> The comment change suggestion below does not make any sense to fix a warning
>> we trigger. If an LLM wrote it, you should never have sent it. If you wrote
>> it, you should have invested more time to understand the problem and come up
>> with a reasonable solution ... or not worked on it in the first place if you
>> don't understand the details.
>>
>>
>> To the issue at hand: Zi Yan pointed this very thing out in v1 [1], no?
>>
>
> Hmm.. this is not the same thing.
>
> Actually, before sending v2 I talked with Zi Yan off-list and he said it
> looked good.

The off-list discussion was purely on V1 and you never sent me V2.
The last off-list email exchange was:

you: The related cleanup looks merged. Do you think it is proper to send v2 now?
me: Sure, feel free to do so.

No one would interpret it as “V2 looks good”.

In addition, if your patches are solely relying on other’s “it looks good”,
please do not send them. You are responsible for the correctness of your patches.

I am done with wasting time on you.

>
> But after the warning was triggered, I re-read the history and found that the
> logic is correct but the comment is misleading. Also, current upstream does
> not report any warning when we try to split a folio uniformly to order-0 and
> the mapping does not support it.
>
> Or worse, we should have split to order-0, but we didn't.
>
> Thanks for your patience in taking a look.
>
>> The patch as is cannot work: we cannot return -EINVAL for something that is
>> not supposed to trigger a warning.
>>
>> [1] https://lore.kernel.org/linux-mm/01FABE3A-AD4E-4A09-B971-C89503A848DF@nvidia.com/
>>
>> -- 
>> Cheers
>>
>> David
>
> -- 
> Wei Yang
> Help you, Help me


Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Patch v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable()
  2026-01-06 12:28       ` Zi Yan
@ 2026-01-06 12:51         ` Wei Yang
  0 siblings, 0 replies; 9+ messages in thread
From: Wei Yang @ 2026-01-06 12:51 UTC (permalink / raw)
  To: Zi Yan
  Cc: Wei Yang, David Hildenbrand (Red Hat),
	akpm, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, dev.jain, baohua, lance.yang, linux-mm

On Tue, Jan 06, 2026 at 07:28:34AM -0500, Zi Yan wrote:
>On 6 Jan 2026, at 4:54, Wei Yang wrote:
>
>> On Mon, Jan 05, 2026 at 05:16:45PM +0100, David Hildenbrand (Red Hat) wrote:
>>> On 1/4/26 03:37, Wei Yang wrote:
>>>> On Tue, Dec 23, 2025 at 12:25:39PM +0000, Wei Yang wrote:
>>>>> The primary goal of the folio_check_splittable() function is to validate
>>>>> whether a folio is suitable for splitting and to bail out early if it is
>>>>> not.
>>>>>
>>>>> Currently, some order-related checks are scattered throughout the
>>>>> calling code rather than being centralized in folio_check_splittable().
>>>>>
>>>>> This commit moves all remaining order-related validation logic into
>>>>> folio_check_splittable(). This consolidation ensures that the function
>>>>> serves its intended purpose as a single point of failure and improves
>>>>> the clarity and maintainability of the surrounding code.
>>>>>
>>>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>>>
>>>>> ---
>>>> [...]
>>>>> @@ -3719,28 +3723,33 @@ int folio_check_splittable(struct folio *folio, unsigned int new_order,
>>>>> 		/* order-1 is not supported for anonymous THP. */
>>>>> 		if (new_order == 1)
>>>>> 			return -EINVAL;
>>>>> -	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>>>>> -		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>>>>> -		    !mapping_large_folio_support(folio->mapping)) {
>>>>> -			/*
>>>>> -			 * We can always split a folio down to a single page
>>>>> -			 * (new_order == 0) uniformly.
>>>>> -			 *
>>>>> -			 * For any other scenario
>>>>> -			 *   a) uniform split targeting a large folio
>>>>> -			 *      (new_order > 0)
>>>>> -			 *   b) any non-uniform split
>>>>> -			 * we must confirm that the file system supports large
>>>>> -			 * folios.
>>>>> -			 *
>>>>> -			 * Note that we might still have THPs in such
>>>>> -			 * mappings, which is created from khugepaged when
>>>>> -			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
>>>>> -			 * case, the mapping does not actually support large
>>>>> -			 * folios properly.
>>>>> -			 */
>>>>> -			return -EINVAL;
>>>>> +	} else {
>>>>> +		if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
>>>>> +			if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>>>>> +			    !mapping_large_folio_support(folio->mapping)) {
>>>>> +				/*
>>>>> +				 * We can always split a folio down to a
>>>>> +				 * single page (new_order == 0) uniformly.
>>>>> +				 *
>>>>> +				 * For any other scenario
>>>>> +				 *   a) uniform split targeting a large folio
>>>>> +				 *      (new_order > 0)
>>>>> +				 *   b) any non-uniform split
>>>>> +				 * we must confirm that the file system
>>>>> +				 * supports large folios.
>>>>> +				 *
>>>>> +				 * Note that we might still have THPs in such
>>>>> +				 * mappings, which is created from khugepaged
>>>>> +				 * when CONFIG_READ_ONLY_THP_FOR_FS is
>>>>> +				 * enabled. But in that case, the mapping does
>>>>> +				 * not actually support large folios properly.
>>>>> +				 */
>>>>> +				return -EINVAL;
>>>>> +			}
>>>>> 		}
>>>>
>>>> Hi, Happy New Year to all :-)
>>>
>>> Happy new year to you, too!
>>>
>>> There was an offlist discussion about some of the text below, because a
>>> couple of people wondered whether it was an LLM-generated reply, and whether
>>> it is even worth the time to read.
>>>
>>> So I am curious, did you end up using an LLM to compose this reply, and if
>>> so, to which degree? Only to improve your writing or also to come up with an
>>> analysis, code etc?
>>>
>>
>> The first three paragraphs of the mail were polished by an LLM, since Andrew
>> once suggested that I use an LLM to refine my text.
>>
>> The rest, including the code change, is not LLM-generated.
>>
>>> Feel free to use an LLM to improve your writing, analysis etc. Just a note
>>> that nobody here is interested in getting LLM-slopped, so don't send
>>> unfiltered/unchecked LLM output to the list.
>>>
>>> In general, I think it was raised already in the past, please don't send
>>> patches for code you don't fully understand. It consumes quite some bandwidth
>>> for us reviewers/maintainers here and it just gets very likely to break
>>> things by accident.
>>>
>>> The comment change suggestion below does not make any sense to fix a warning
>>> we trigger. If an LLM wrote it, you should never have sent it. If you wrote
>>> it, you should have invested more time to understand the problem and come up
>>> with a reasonable solution ... or not worked on it in the first place if you
>>> don't understand the details.
>>>
>>>
>>> To the issue at hand: Zi Yan pointed this very thing out in v1 [1], no?
>>>
>>
>> Hmm.. this is not the same thing.
>>
>> Actually, before sending v2 I talked with Zi Yan off-list and he said it
>> looked good.
>
>The off-list discussion was purely on V1 and you never sent me V2.
>The last off-list email exchange was:
>
>you: The related cleanup looks merged. Do you think it is proper to send v2 now?
>me: Sure, feel free to do so.
>
>No one would interpret it as “V2 looks good”.
>
>In addition, if your patches are solely relying on other’s “it looks good”,
>please do not send them. You are responsible for the correctness of your patches.
>

Hi, Zi

Thanks for the suggestion; I should be responsible for the patch.

>I am done with wasting time on you.
>

Sorry for wasting everyone's time here.


-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread

Thread overview: 9+ messages
2025-12-23 12:25 [Patch v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable() Wei Yang
2025-12-23 17:50 ` [syzbot ci] " syzbot ci
2026-01-04  2:37 ` [Patch v2] " Wei Yang
2026-01-05 16:16   ` David Hildenbrand (Red Hat)
2026-01-05 16:29     ` Lorenzo Stoakes
2026-01-05 16:52       ` Matthew Wilcox
2026-01-06  9:54     ` Wei Yang
2026-01-06 12:28       ` Zi Yan
2026-01-06 12:51         ` Wei Yang
