* [PATCH V5 0/2]  JFS: Implement migrate_folio for jfs_metapage_aops
@ 2025-04-30 10:01 Shivank Garg
  2025-04-30 10:01 ` [PATCH V5 1/2] mm: Add folio_expected_ref_count() for reference count calculation Shivank Garg
                   ` (3 more replies)
  0 siblings, 4 replies; 14+ messages in thread
From: Shivank Garg @ 2025-04-30 10:01 UTC (permalink / raw)
  To: shaggy, akpm
  Cc: willy, shivankg, david, wangkefeng.wang, jane.chu, ziy, donettom,
	apopple, jfs-discussion, linux-kernel, linux-mm,
	syzbot+8bb6fd945af4e0ad9299

This series addresses a warning that occurs during memory compaction
because JFS does not implement the migrate_folio address_space operation.
The warning was introduced by commit 7ee3647243e5 ("migrate: Remove call
to ->writepage"), which added an explicit warning when a filesystem does
not implement migrate_folio.

syzbot reported the following [1]:
  jfs_metapage_aops does not implement migrate_folio
  WARNING: CPU: 1 PID: 5861 at mm/migrate.c:955 fallback_migrate_folio mm/migrate.c:953 [inline]
  WARNING: CPU: 1 PID: 5861 at mm/migrate.c:955 move_to_new_folio+0x70e/0x840 mm/migrate.c:1007
  Modules linked in:
  CPU: 1 UID: 0 PID: 5861 Comm: syz-executor280 Not tainted 6.15.0-rc1-next-20250411-syzkaller #0 PREEMPT(full) 
  Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
  RIP: 0010:fallback_migrate_folio mm/migrate.c:953 [inline]
  RIP: 0010:move_to_new_folio+0x70e/0x840 mm/migrate.c:1007

To fix this, this series implements metapage_migrate_folio() for JFS,
which handles both single and multiple metapages per page configurations.
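
For readers skimming the cover letter, the rough shape of a
->migrate_folio callback for a filesystem that carries private per-folio
data is sketched below. This is an illustration only (the hypothetical
example_migrate_folio(), not the code in patch 2/2, which additionally
handles the JFS metapage layout); it just shows how the helper from
patch 1/2 would typically be used:

#include <linux/errno.h>
#include <linux/migrate.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

static int example_migrate_folio(struct address_space *mapping,
				 struct folio *dst, struct folio *src,
				 enum migrate_mode mode)
{
	/* External references (cache, private data, mappings) + our own. */
	int expected = folio_expected_ref_count(src) + 1;
	int rc;

	/* Bail out if someone else still holds an unexpected reference. */
	if (folio_ref_count(src) != expected)
		return -EAGAIN;

	/* Replace src with dst in the page cache. */
	rc = folio_migrate_mapping(mapping, dst, src, 0);
	if (rc != MIGRATEPAGE_SUCCESS)
		return rc;

	/* Move the filesystem-private data over to the new folio. */
	if (folio_get_private(src))
		folio_attach_private(dst, folio_detach_private(src));

	/* Copy folio contents and flags. */
	folio_migrate_copy(dst, src);
	return MIGRATEPAGE_SUCCESS;
}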

While most filesystems can reuse an existing migration helper such as
filemap_migrate_folio(), buffer_migrate_folio_norefs() or
buffer_migrate_folio() (which internally use folio_expected_refs()),
JFS's metapage architecture requires special handling of its private data
during migration. To support this, the series introduces
folio_expected_ref_count(), which calculates the external references to a
folio from the page/swap cache, private data, and page table mappings.
This common helper replaces the previous ad hoc folio_expected_refs()
and enables JFS to accurately determine whether a folio has unexpected
references before attempting migration.
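
Conceptually, the accounting the new helper performs can be sketched as
below. This is only an illustration of the description above (hence the
_sketch suffix); the exact implementation is the one in patch 1/2:

#include <linux/mm.h>
#include <linux/page-flags.h>

/* Sketch of the external-reference accounting described above. */
static inline int folio_expected_ref_count_sketch(struct folio *folio)
{
	const unsigned int order = folio_order(folio);
	int ref_count = 0;

	if (folio_test_anon(folio)) {
		/* One reference per page from the swap cache, if any. */
		if (folio_test_swapcache(folio))
			ref_count += 1 << order;
	} else if (folio->mapping) {
		/* One reference per page from the page cache. */
		ref_count += 1 << order;
		/* One reference from attached private data (PG_private). */
		if (folio_test_private(folio))
			ref_count++;
	}

	/* Plus one reference per page table mapping. */
	return ref_count + folio_mapcount(folio);
}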

[1]: https://syzkaller.appspot.com/bug?extid=8bb6fd945af4e0ad9299

Changelogs:
V5 (current):
- Add folio_expected_ref_count() for reference count calculation
- Add details about why JFS needs folio_expected_ref_count()

V4:
- https://lore.kernel.org/all/20250422114000.15003-1-shivankg@amd.com
- Make folio_expected_refs() inline and rename to folio_migration_expected_refs()

V3:
- https://lore.kernel.org/all/20250417060630.197278-1-shivankg@amd.com
- Fix typos

V1/V2:
- https://lore.kernel.org/all/20250413172356.561544-1-shivankg@amd.com
- Implement metapage_migrate_folio() similar to buffer_migrate_folio() but
  specialized to move JFS metapage data

#syz test: https://github.com/shivankgarg98/linux.git 69a58d5260

Shivank Garg (2):
  mm: Add folio_expected_ref_count() for reference count calculation
  jfs: implement migrate_folio for jfs_metapage_aops

 fs/jfs/jfs_metapage.c | 94 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/mm.h    | 55 +++++++++++++++++++++++++
 mm/migrate.c          | 22 ++--------
 3 files changed, 153 insertions(+), 18 deletions(-)

-- 
2.34.1



* [PATCH V4 0/2] JFS: Implement migrate_folio for jfs_metapage_aops
@ 2025-04-22 11:40 Shivank Garg
  2025-04-22 13:59 ` [syzbot] [mm?] WARNING in move_to_new_folio syzbot
  0 siblings, 1 reply; 14+ messages in thread
From: Shivank Garg @ 2025-04-22 11:40 UTC (permalink / raw)
  To: shaggy, akpm
  Cc: willy, shivankg, david, wangkefeng.wang, jane.chu, ziy, donettom,
	apopple, jfs-discussion, linux-kernel, linux-mm,
	syzbot+8bb6fd945af4e0ad9299

This series addresses a warning that occurs during memory compaction
because JFS does not implement the migrate_folio address_space operation.
The warning was introduced by commit 7ee3647243e5 ("migrate: Remove call
to ->writepage"), which added an explicit warning when a filesystem does
not implement migrate_folio.

syzbot reported the following [1]:
  jfs_metapage_aops does not implement migrate_folio
  WARNING: CPU: 1 PID: 5861 at mm/migrate.c:955 fallback_migrate_folio mm/migrate.c:953 [inline]
  WARNING: CPU: 1 PID: 5861 at mm/migrate.c:955 move_to_new_folio+0x70e/0x840 mm/migrate.c:1007
  Modules linked in:
  CPU: 1 UID: 0 PID: 5861 Comm: syz-executor280 Not tainted 6.15.0-rc1-next-20250411-syzkaller #0 PREEMPT(full) 
  Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
  RIP: 0010:fallback_migrate_folio mm/migrate.c:953 [inline]
  RIP: 0010:move_to_new_folio+0x70e/0x840 mm/migrate.c:1007

This series implements metapage_migrate_folio(), which handles both
single and multiple metapages per page configurations.

[1]: https://syzkaller.appspot.com/bug?extid=8bb6fd945af4e0ad9299

Previous Versions:
V1/V2:
https://lore.kernel.org/all/20250413172356.561544-1-shivankg@amd.com
V3:
https://lore.kernel.org/all/20250417060630.197278-1-shivankg@amd.com

#syz test: https://github.com/AMDESE/linux-mm.git f17a3b8bc


Shivank Garg (2):
  mm: export folio_expected_refs for JFS migration handler
  jfs: implement migrate_folio for jfs_metapage_aops

 fs/jfs/jfs_metapage.c   | 94 +++++++++++++++++++++++++++++++++++++++++
 include/linux/migrate.h |  1 +
 mm/migrate.c            |  3 +-
 3 files changed, 97 insertions(+), 1 deletion(-)

-- 
2.34.1



* [PATCH V3 0/2] JFS: Implement migrate_folio for jfs_metapage_aops
@ 2025-04-17  6:06 Shivank Garg
  2025-04-17  8:47 ` [syzbot] [mm?] WARNING in move_to_new_folio syzbot
  0 siblings, 1 reply; 14+ messages in thread
From: Shivank Garg @ 2025-04-17  6:06 UTC (permalink / raw)
  To: shaggy, akpm
  Cc: willy, shivankg, david, wangkefeng.wang, jane.chu, ziy, donettom,
	apopple, jfs-discussion, linux-kernel, linux-mm,
	syzbot+8bb6fd945af4e0ad9299

This series addresses a warning that occurs during memory compaction
because JFS does not implement the migrate_folio address_space operation.
The warning was introduced by commit 7ee3647243e5 ("migrate: Remove call
to ->writepage"), which added an explicit warning when a filesystem does
not implement migrate_folio.

syzbot reported the following [1]:
  jfs_metapage_aops does not implement migrate_folio
  WARNING: CPU: 1 PID: 5861 at mm/migrate.c:955 fallback_migrate_folio mm/migrate.c:953 [inline]
  WARNING: CPU: 1 PID: 5861 at mm/migrate.c:955 move_to_new_folio+0x70e/0x840 mm/migrate.c:1007
  Modules linked in:
  CPU: 1 UID: 0 PID: 5861 Comm: syz-executor280 Not tainted 6.15.0-rc1-next-20250411-syzkaller #0 PREEMPT(full) 
  Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
  RIP: 0010:fallback_migrate_folio mm/migrate.c:953 [inline]
  RIP: 0010:move_to_new_folio+0x70e/0x840 mm/migrate.c:1007

This series implements metapage_migrate_folio(), which handles both
single and multiple metapages per page configurations.

[1]: https://syzkaller.appspot.com/bug?extid=8bb6fd945af4e0ad9299

V1/V2:
https://lore.kernel.org/all/20250413172356.561544-1-shivankg@amd.com

#syz test: https://github.com/AMDESE/linux-mm.git 07246f14ea

Best Regards,
Shivank

Shivank Garg (2):
  mm: export folio_expected_refs for JFS migration handler
  jfs: implement migrate_folio for jfs_metapage_aops

 fs/jfs/jfs_metapage.c   | 94 +++++++++++++++++++++++++++++++++++++++++
 include/linux/migrate.h |  1 +
 mm/migrate.c            |  3 +-
 3 files changed, 97 insertions(+), 1 deletion(-)

-- 
2.34.1



* [syzbot] [mm?] WARNING in move_to_new_folio
@ 2025-04-13  0:03 syzbot
  2025-04-13  6:24 ` syzbot
  2025-04-13 21:44 ` syzbot
  0 siblings, 2 replies; 14+ messages in thread
From: syzbot @ 2025-04-13  0:03 UTC (permalink / raw)
  To: akpm, linux-kernel, linux-mm, syzkaller-bugs

Hello,

syzbot found the following issue on:

HEAD commit:    01c6df60d5d4 Add linux-next specific files for 20250411
git tree:       linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=107b77e4580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=db03cefa26ecf825
dashboard link: https://syzkaller.appspot.com/bug?extid=8bb6fd945af4e0ad9299
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/928246b3f3d5/disk-01c6df60.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/79a68c4e1134/vmlinux-01c6df60.xz
kernel image: https://storage.googleapis.com/syzbot-assets/9caf8c293819/bzImage-01c6df60.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+8bb6fd945af4e0ad9299@syzkaller.appspotmail.com

------------[ cut here ]------------
jfs_metapage_aops does not implement migrate_folio
WARNING: CPU: 0 PID: 6870 at mm/migrate.c:955 fallback_migrate_folio mm/migrate.c:953 [inline]
WARNING: CPU: 0 PID: 6870 at mm/migrate.c:955 move_to_new_folio+0x70e/0x840 mm/migrate.c:1007
Modules linked in:
CPU: 0 UID: 0 PID: 6870 Comm: syz.3.196 Not tainted 6.15.0-rc1-next-20250411-syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
RIP: 0010:fallback_migrate_folio mm/migrate.c:953 [inline]
RIP: 0010:move_to_new_folio+0x70e/0x840 mm/migrate.c:1007
Code: b8 00 00 00 00 00 fc ff df 41 80 7c 05 00 00 74 08 4c 89 e7 e8 73 2d fe ff 49 8b 34 24 48 c7 c7 00 8b 57 8c e8 d3 0b 53 ff 90 <0f> 0b 90 90 e9 0a fd ff ff e8 74 f6 93 ff 90 0f 0b 90 eb a7 e8 69
RSP: 0018:ffffc900045aeb30 EFLAGS: 00010246
RAX: d74fd979f2566d00 RBX: 0000000000000000 RCX: 0000000000080000
RDX: ffffc9000e489000 RSI: 000000000007ffff RDI: 0000000000080000
RBP: ffffea0000c98800 R08: ffffffff81829232 R09: 1ffff110170c4852
R10: dffffc0000000000 R11: ffffed10170c4853 R12: ffff8880612710a8
R13: 1ffff1100c24e215 R14: ffffffff8c839248 R15: ffff888061270f88
FS:  00007f855ea5e6c0(0000) GS:ffff888124f80000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000008 CR3: 000000003477c000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 migrate_folio_move mm/migrate.c:1312 [inline]
 migrate_folios_move mm/migrate.c:1664 [inline]
 migrate_pages_batch+0x1e86/0x30b0 mm/migrate.c:1911
 migrate_pages_sync mm/migrate.c:1968 [inline]
 migrate_pages+0x24f8/0x3470 mm/migrate.c:2050
 compact_zone+0x365d/0x4dd0 mm/compaction.c:2689
 compact_node mm/compaction.c:2958 [inline]
 compact_nodes mm/compaction.c:2980 [inline]
 sysctl_compaction_handler+0x498/0x9a0 mm/compaction.c:3031
 proc_sys_call_handler+0x54b/0x820 fs/proc/proc_sysctl.c:601
 iter_file_splice_write+0xbdf/0x1530 fs/splice.c:738
 do_splice_from fs/splice.c:935 [inline]
 direct_splice_actor+0x11b/0x220 fs/splice.c:1158
 splice_direct_to_actor+0x595/0xc90 fs/splice.c:1102
 do_splice_direct_actor fs/splice.c:1201 [inline]
 do_splice_direct+0x281/0x3d0 fs/splice.c:1227
 do_sendfile+0x582/0x8d0 fs/read_write.c:1368
 __do_sys_sendfile64 fs/read_write.c:1423 [inline]
 __se_sys_sendfile64+0x102/0x1e0 fs/read_write.c:1415
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f855db8d169
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f855ea5e038 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007f855dda6080 RCX: 00007f855db8d169
RDX: 00002000000000c0 RSI: 0000000000000006 RDI: 0000000000000007
RBP: 00007f855dc0e990 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000009 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f855dda6080 R15: 00007ffe70b954f8
 </TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup



Thread overview: 14+ messages
2025-04-30 10:01 [PATCH V5 0/2] JFS: Implement migrate_folio for jfs_metapage_aops Shivank Garg
2025-04-30 10:01 ` [PATCH V5 1/2] mm: Add folio_expected_ref_count() for reference count calculation Shivank Garg
2025-04-30 10:01 ` [PATCH V5 2/2] jfs: implement migrate_folio for jfs_metapage_aops Shivank Garg
2025-04-30 10:38 ` [syzbot] [mm?] WARNING in move_to_new_folio syzbot
2025-04-30 21:21 ` [PATCH V5 0/2] JFS: Implement migrate_folio for jfs_metapage_aops Andrew Morton
2025-05-01 12:37   ` Matthew Wilcox
  -- strict thread matches above, loose matches on Subject: below --
2025-04-22 11:40 [PATCH V4 " Shivank Garg
2025-04-22 13:59 ` [syzbot] [mm?] WARNING in move_to_new_folio syzbot
2025-04-17  6:06 [PATCH V3 0/2] JFS: Implement migrate_folio for jfs_metapage_aops Shivank Garg
2025-04-17  8:47 ` [syzbot] [mm?] WARNING in move_to_new_folio syzbot
2025-04-17  9:06   ` Shivank Garg
2025-04-13  0:03 syzbot
2025-04-13  6:24 ` syzbot
2025-04-13 21:44 ` syzbot
2025-04-15  4:57   ` Shivank Garg
2025-04-15  5:24     ` syzbot
