* [Patch v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable()
@ 2025-12-23 12:25 Wei Yang
From: Wei Yang @ 2025-12-23 12:25 UTC
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, Wei Yang
The primary goal of the folio_check_splittable() function is to validate
whether a folio is suitable for splitting and to bail out early if it is
not.
Currently, some order-related checks are scattered across the calling
code rather than centralized in folio_check_splittable().

Move the remaining order-related validation into
folio_check_splittable(), so that the function serves its intended
purpose as the single place where an unsplittable folio is rejected.
This also improves the clarity and maintainability of the surrounding
code.
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
---
v2: just move current logic here
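
For quick reference, here is a condensed sketch (simplified, not the
exact kernel code; the truncation and mapping sanity checks are elided)
of the order-related validation folio_check_splittable() performs once
the diff below is applied:

	/* Splitting must strictly reduce the folio order. */
	if (new_order >= folio_order(folio))
		return -EINVAL;

	if (folio_test_anon(folio)) {
		/* order-1 is not supported for anonymous THP. */
		if (new_order == 1)
			return -EINVAL;
	} else {
		/* File-backed: the mapping must support the target order. */
		if ((split_type == SPLIT_TYPE_NON_UNIFORM || new_order) &&
		    IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
		    !mapping_large_folio_support(folio->mapping))
			return -EINVAL;

		if (new_order < mapping_min_folio_order(folio->mapping))
			return -EINVAL;
	}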
---
mm/huge_memory.c | 63 +++++++++++++++++++++++-------------------------
1 file changed, 30 insertions(+), 33 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b8ee33318a60..59d72522399f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3705,6 +3705,10 @@ int folio_check_splittable(struct folio *folio, unsigned int new_order,
enum split_type split_type)
{
VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+
+ if (new_order >= folio_order(folio))
+ return -EINVAL;
+
/*
* Folios that just got truncated cannot get split. Signal to the
* caller that there was a race.
@@ -3719,28 +3723,33 @@ int folio_check_splittable(struct folio *folio, unsigned int new_order,
/* order-1 is not supported for anonymous THP. */
if (new_order == 1)
return -EINVAL;
- } else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
- if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
- !mapping_large_folio_support(folio->mapping)) {
- /*
- * We can always split a folio down to a single page
- * (new_order == 0) uniformly.
- *
- * For any other scenario
- * a) uniform split targeting a large folio
- * (new_order > 0)
- * b) any non-uniform split
- * we must confirm that the file system supports large
- * folios.
- *
- * Note that we might still have THPs in such
- * mappings, which is created from khugepaged when
- * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
- * case, the mapping does not actually support large
- * folios properly.
- */
- return -EINVAL;
+ } else {
+ if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
+ if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
+ !mapping_large_folio_support(folio->mapping)) {
+ /*
+ * We can always split a folio down to a
+ * single page (new_order == 0) uniformly.
+ *
+ * For any other scenario
+ * a) uniform split targeting a large folio
+ * (new_order > 0)
+ * b) any non-uniform split
+ * we must confirm that the file system
+ * supports large folios.
+ *
+ * Note that we might still have THPs in such
+ * mappings, which is created from khugepaged
+ * when CONFIG_READ_ONLY_THP_FOR_FS is
+ * enabled. But in that case, the mapping does
+ * not actually support large folios properly.
+ */
+ return -EINVAL;
+ }
}
+
+ if (new_order < mapping_min_folio_order(folio->mapping))
+ return -EINVAL;
}
/*
@@ -4008,11 +4017,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
goto out;
}
- if (new_order >= old_order) {
- ret = -EINVAL;
- goto out;
- }
-
ret = folio_check_splittable(folio, new_order, split_type);
if (ret) {
VM_WARN_ONCE(ret == -EINVAL, "Tried to split an unsplittable folio");
@@ -4036,16 +4040,9 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
anon_vma_lock_write(anon_vma);
mapping = NULL;
} else {
- unsigned int min_order;
gfp_t gfp;
mapping = folio->mapping;
- min_order = mapping_min_folio_order(folio->mapping);
- if (new_order < min_order) {
- ret = -EINVAL;
- goto out;
- }
-
gfp = current_gfp_context(mapping_gfp_mask(mapping) &
GFP_RECLAIM_MASK);
--
2.34.1
* [syzbot ci] Re: mm/huge_memory: consolidate order-related checks into folio_check_splittable()
@ 2025-12-23 17:50 ` syzbot ci
From: syzbot ci @ 2025-12-23 17:50 UTC
To: akpm, baohua, baolin.wang, david, dev.jain, lance.yang,
liam.howlett, linux-mm, lorenzo.stoakes, npache, richard.weiyang,
ryan.roberts, ziy
Cc: syzbot, syzkaller-bugs
syzbot ci has tested the following series
[v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable()
https://lore.kernel.org/all/20251223122539.10726-1-richard.weiyang@gmail.com
* [Patch v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable()
and found the following issue:
WARNING in __folio_split
Full report is available here:
https://ci.syzbot.org/series/7e34013d-ed08-40e1-99b7-8fd118dce84f
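
For orientation: the userspace register dump in the report below decodes
to madvise(0x200000000000, 0x600000, 0x15), i.e. MADV_PAGEOUT. A
hypothetical minimal trigger along those lines (a sketch only; the
authoritative reproducer is the C repro linked below) would be:

#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 0x600000;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	madvise(p, len, MADV_HUGEPAGE);	/* ask for THP-backed memory */
	memset(p, 1, len);		/* fault the range in */
	madvise(p, len, MADV_PAGEOUT);	/* can reach __folio_split() */
	return 0;
}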
***
WARNING in __folio_split
tree: mm-new
URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/akpm/mm.git
base: c642ecda5b136882e518d8303863473c0d21ab2f
arch: amd64
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
config: https://ci.syzbot.org/builds/2edb2dc5-42c6-4557-a194-921c57fd9eb1/config
C repro: https://ci.syzbot.org/findings/8349b674-5790-4507-97cd-03697ec93cb0/c_repro
syz repro: https://ci.syzbot.org/findings/8349b674-5790-4507-97cd-03697ec93cb0/syz_repro
------------[ cut here ]------------
Tried to split an unsplittable folio
WARNING: mm/huge_memory.c:3970 at __folio_split+0xfe7/0x1370 mm/huge_memory.c:3970, CPU#1: syz.0.17/5997
Modules linked in:
CPU: 1 UID: 0 PID: 5997 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:__folio_split+0xfe7/0x1370 mm/huge_memory.c:3970
Code: fe c6 05 a5 f8 3e 0d 01 90 0f 0b 90 e9 0d f4 ff ff e8 4d 78 94 ff 49 ff cd e9 d1 f4 ff ff e8 40 78 94 ff 48 8d 3d 69 47 5a 0d <67> 48 0f b9 3a 41 bd ea ff ff ff e9 7c fe ff ff 44 89 7c 24 34 44
RSP: 0018:ffffc900036d6d60 EFLAGS: 00010293
RAX: ffffffff822d4360 RBX: ffffea0005a4b008 RCX: ffff8881047fd7c0
RDX: 0000000000000000 RSI: ffffffff8e06e540 RDI: ffffffff8f878ad0
RBP: ffffc900036d6ef0 R08: ffff8881047fd7c0 R09: 0000000000000002
R10: 00000000ffffffea R11: 0000000000000000 R12: 0000000000000004
R13: 00000000ffffffea R14: ffffea0005a4b000 R15: 1ffffd4000b49603
FS: 0000555591a60500(0000) GS:ffff8882a9e32000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b30d63fff CR3: 0000000100f30000 CR4: 00000000000006f0
Call Trace:
<TASK>
madvise_cold_or_pageout_pte_range+0xbf3/0x1ce0 mm/madvise.c:503
walk_pmd_range mm/pagewalk.c:130 [inline]
walk_pud_range mm/pagewalk.c:224 [inline]
walk_p4d_range mm/pagewalk.c:262 [inline]
walk_pgd_range+0x1037/0x1d30 mm/pagewalk.c:303
__walk_page_range+0x14c/0x710 mm/pagewalk.c:410
walk_page_range_vma_unsafe+0x34c/0x400 mm/pagewalk.c:714
madvise_pageout_page_range mm/madvise.c:622 [inline]
madvise_pageout mm/madvise.c:647 [inline]
madvise_vma_behavior+0x30c7/0x4420 mm/madvise.c:1366
madvise_walk_vmas+0x575/0xaf0 mm/madvise.c:1721
madvise_do_behavior+0x38e/0x550 mm/madvise.c:1937
do_madvise+0x1bc/0x270 mm/madvise.c:2030
__do_sys_madvise mm/madvise.c:2039 [inline]
__se_sys_madvise mm/madvise.c:2037 [inline]
__x64_sys_madvise+0xa7/0xc0 mm/madvise.c:2037
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff69838f7c9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fff82001dd8 EFLAGS: 00000246 ORIG_RAX: 000000000000001c
RAX: ffffffffffffffda RBX: 00007ff6985e5fa0 RCX: 00007ff69838f7c9
RDX: 0000000000000015 RSI: 0000000000600000 RDI: 0000200000000000
RBP: 00007ff6983f297f R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ff6985e5fa0 R14: 00007ff6985e5fa0 R15: 0000000000000003
</TASK>
----------------
Code disassembly (best guess), 1 bytes skipped:
0: c6 05 a5 f8 3e 0d 01 movb $0x1,0xd3ef8a5(%rip) # 0xd3ef8ac
7: 90 nop
8: 0f 0b ud2
a: 90 nop
b: e9 0d f4 ff ff jmp 0xfffff41d
10: e8 4d 78 94 ff call 0xff947862
15: 49 ff cd dec %r13
18: e9 d1 f4 ff ff jmp 0xfffff4ee
1d: e8 40 78 94 ff call 0xff947862
22: 48 8d 3d 69 47 5a 0d lea 0xd5a4769(%rip),%rdi # 0xd5a4792
* 29: 67 48 0f b9 3a ud1 (%edx),%rdi <-- trapping instruction
2e: 41 bd ea ff ff ff mov $0xffffffea,%r13d
34: e9 7c fe ff ff jmp 0xfffffeb5
39: 44 89 7c 24 34 mov %r15d,0x34(%rsp)
3e: 44 rex.R
***
If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com
---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.