* [PATCH 0/2] migrate: Fix up hugetlb file folio handling
From: Matthew Wilcox (Oracle) @ 2026-01-09  4:13 UTC
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), Zi Yan, David Hildenbrand, Lorenzo Stoakes,
    Rik van Riel, Liam R. Howlett, Vlastimil Babka, Harry Yoo, Jann Horn,
    linux-mm

The first patch is a fix for a syzbot-induced bug, but it can really
happen and is worth backporting.

The second patch I'm less sure about.  I don't like the current RMP
flags, but I'm not sure that reusing TTU flags for this is the right
approach.  So I'm fine with this patch being dropped and just applying
the first one.

Matthew Wilcox (Oracle) (2):
  migrate: Correct lock ordering for hugetlb file folios
  migrate: Replace RMP_ flags with TTU_ flags

 include/linux/rmap.h |  9 +++------
 mm/huge_memory.c     |  8 ++++----
 mm/migrate.c         | 20 ++++++++++----------
 3 files changed, 17 insertions(+), 20 deletions(-)

-- 
2.47.3
* [PATCH 1/2] migrate: Correct lock ordering for hugetlb file folios
From: Matthew Wilcox (Oracle) @ 2026-01-09  4:13 UTC
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), Zi Yan, David Hildenbrand, Lorenzo Stoakes,
    Rik van Riel, Liam R. Howlett, Vlastimil Babka, Harry Yoo, Jann Horn,
    linux-mm, syzbot+2d9c96466c978346b55f, Lance Yang, stable

Syzbot has found a deadlock (analyzed by Lance Yang):

1) Task (5749): Holds folio_lock, then tries to acquire i_mmap_rwsem
   (read lock).
2) Task (5754): Holds i_mmap_rwsem (write lock), then tries to acquire
   folio_lock.

migrate_pages()
 -> migrate_hugetlbs()
  -> unmap_and_move_huge_page()    <- Takes folio_lock!
   -> remove_migration_ptes()
    -> __rmap_walk_file()
     -> i_mmap_lock_read()         <- Waits for i_mmap_rwsem (read lock)!

hugetlbfs_fallocate()
 -> hugetlbfs_punch_hole()         <- Takes i_mmap_rwsem (write lock)!
  -> hugetlbfs_zero_partial_page()
   -> filemap_lock_hugetlb_folio()
    -> filemap_lock_folio()
     -> __filemap_get_folio()      <- Waits for folio_lock!

The migration path is the one taking locks in the wrong order according
to the documentation at the top of mm/rmap.c.  So expand the scope of
the existing i_mmap_lock to cover the calls to remove_migration_ptes()
too.

This is (mostly) how it used to be after commit c0d0381ade79.  That was
removed by 336bf30eb765 for both file & anon hugetlb pages when it
should only have been removed for anon hugetlb pages.

Fixes: 336bf30eb765 ("hugetlbfs: fix anon huge page migration race")
Reported-by: syzbot+2d9c96466c978346b55f@syzkaller.appspotmail.com
Link: https://lore.kernel.org/all/68e9715a.050a0220.1186a4.000d.GAE@google.com
Debugged-by: Lance Yang <lance.yang@linux.dev>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: stable@vger.kernel.org
---
 mm/migrate.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 5169f9717f60..4688b9e38cd2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1458,6 +1458,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 	int page_was_mapped = 0;
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
+	enum ttu_flags ttu = 0;
 
 	if (folio_ref_count(src) == 1) {
 		/* page was freed from under us. So we are done. */
@@ -1498,8 +1499,6 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 		goto put_anon;
 
 	if (folio_mapped(src)) {
-		enum ttu_flags ttu = 0;
-
 		if (!folio_test_anon(src)) {
 			/*
 			 * In shared mappings, try_to_unmap could potentially
@@ -1516,16 +1515,17 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 
 		try_to_migrate(src, ttu);
 		page_was_mapped = 1;
-
-		if (ttu & TTU_RMAP_LOCKED)
-			i_mmap_unlock_write(mapping);
 	}
 
 	if (!folio_mapped(src))
 		rc = move_to_new_folio(dst, src, mode);
 
 	if (page_was_mapped)
-		remove_migration_ptes(src, !rc ? dst : src, 0);
+		remove_migration_ptes(src, !rc ? dst : src,
+					ttu ? RMP_LOCKED : 0);
+
+	if (ttu & TTU_RMAP_LOCKED)
+		i_mmap_unlock_write(mapping);
 
 unlock_put_anon:
 	folio_unlock(dst);
-- 
2.47.3
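The trace above is a plain ABBA inversion, and its shape can be
reproduced in miniature outside the kernel.  Below is a small,
self-contained userspace illustration (not kernel code: two pthread
mutexes stand in for folio_lock and i_mmap_rwsem, and the sleeps just
widen the race window).  Build with "cc -pthread abba.c"; it reliably
hangs the same way the two syzbot tasks do.

/* abba.c: userspace sketch of the reported deadlock pattern. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t folio_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t i_mmap_rwsem = PTHREAD_MUTEX_INITIALIZER;

/* The "migration" task: folio lock first, then the rmap lock. */
static void *migrate(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&folio_lock);
	sleep(1);				/* widen the race window */
	pthread_mutex_lock(&i_mmap_rwsem);	/* blocks forever */
	pthread_mutex_unlock(&i_mmap_rwsem);
	pthread_mutex_unlock(&folio_lock);
	return NULL;
}

/* The "hole punch" task: rmap lock first, then the folio lock. */
static void *punch_hole(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&i_mmap_rwsem);
	sleep(1);
	pthread_mutex_lock(&folio_lock);	/* blocks forever */
	pthread_mutex_unlock(&folio_lock);
	pthread_mutex_unlock(&i_mmap_rwsem);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, migrate, NULL);
	pthread_create(&b, NULL, punch_hole, NULL);
	pthread_join(a, NULL);			/* never returns */
	pthread_join(b, NULL);
	puts("not reached");
	return 0;
}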
* Re: [PATCH 1/2] migrate: Correct lock ordering for hugetlb file folios
From: Lance Yang @ 2026-01-09  4:32 UTC
To: Matthew Wilcox (Oracle)
Cc: Zi Yan, David Hildenbrand, Lorenzo Stoakes, Rik van Riel,
    Liam R. Howlett, Vlastimil Babka, Harry Yoo, Jann Horn, linux-mm,
    syzbot+2d9c96466c978346b55f, stable, Andrew Morton

On 2026/1/9 12:13, Matthew Wilcox (Oracle) wrote:
> Syzbot has found a deadlock (analyzed by Lance Yang):
>
> 1) Task (5749): Holds folio_lock, then tries to acquire i_mmap_rwsem
>    (read lock).
> 2) Task (5754): Holds i_mmap_rwsem (write lock), then tries to acquire
>    folio_lock.
>
> migrate_pages()
>  -> migrate_hugetlbs()
>   -> unmap_and_move_huge_page()    <- Takes folio_lock!
>    -> remove_migration_ptes()
>     -> __rmap_walk_file()
>      -> i_mmap_lock_read()         <- Waits for i_mmap_rwsem (read lock)!
>
> hugetlbfs_fallocate()
>  -> hugetlbfs_punch_hole()         <- Takes i_mmap_rwsem (write lock)!
>   -> hugetlbfs_zero_partial_page()
>    -> filemap_lock_hugetlb_folio()
>     -> filemap_lock_folio()
>      -> __filemap_get_folio()      <- Waits for folio_lock!
>
> The migration path is the one taking locks in the wrong order according
> to the documentation at the top of mm/rmap.c.  So expand the scope of
> the existing i_mmap_lock to cover the calls to remove_migration_ptes()
> too.
>
> This is (mostly) how it used to be after commit c0d0381ade79.  That was
> removed by 336bf30eb765 for both file & anon hugetlb pages when it
> should only have been removed for anon hugetlb pages.

Cool. Thanks for the fix! As someone new to hugetlb, I learned
something about the lock ordering here.

Cheers,
Lance

> Fixes: 336bf30eb765 ("hugetlbfs: fix anon huge page migration race")
> Reported-by: syzbot+2d9c96466c978346b55f@syzkaller.appspotmail.com
> Link: https://lore.kernel.org/all/68e9715a.050a0220.1186a4.000d.GAE@google.com
> Debugged-by: Lance Yang <lance.yang@linux.dev>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: stable@vger.kernel.org
[...]
* Re: [PATCH 1/2] migrate: Correct lock ordering for hugetlb file folios
From: David Hildenbrand (Red Hat) @ 2026-01-09 13:50 UTC
To: Matthew Wilcox (Oracle), Andrew Morton
Cc: Zi Yan, Lorenzo Stoakes, Rik van Riel, Liam R. Howlett,
    Vlastimil Babka, Harry Yoo, Jann Horn, linux-mm,
    syzbot+2d9c96466c978346b55f, Lance Yang, stable

On 1/9/26 05:13, Matthew Wilcox (Oracle) wrote:
> Syzbot has found a deadlock (analyzed by Lance Yang):
>
> 1) Task (5749): Holds folio_lock, then tries to acquire i_mmap_rwsem
>    (read lock).
> 2) Task (5754): Holds i_mmap_rwsem (write lock), then tries to acquire
>    folio_lock.
>
> migrate_pages()
>  -> migrate_hugetlbs()
>   -> unmap_and_move_huge_page()    <- Takes folio_lock!
>    -> remove_migration_ptes()
>     -> __rmap_walk_file()
>      -> i_mmap_lock_read()         <- Waits for i_mmap_rwsem (read lock)!
>
> hugetlbfs_fallocate()
>  -> hugetlbfs_punch_hole()         <- Takes i_mmap_rwsem (write lock)!
>   -> hugetlbfs_zero_partial_page()
>    -> filemap_lock_hugetlb_folio()
>     -> filemap_lock_folio()
>      -> __filemap_get_folio()      <- Waits for folio_lock!

As raised in the other patch I stumbled over first:

We now handle file-backed folios correctly, I think.  Could we somehow
also be in trouble for anon folios?  Because there, we'd still take the
rmap lock after grabbing the folio lock.

[...]

>  	if (page_was_mapped)
> -		remove_migration_ptes(src, !rc ? dst : src, 0);
> +		remove_migration_ptes(src, !rc ? dst : src,
> +					ttu ? RMP_LOCKED : 0);

(ttu & TTU_RMAP_LOCKED) ? RMP_LOCKED : 0)

Would be cleaner, but I see how you clean that up in #2. :)

Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>

-- 
Cheers

David
* Re: [PATCH 1/2] migrate: Correct lock ordering for hugetlb file folios
From: Matthew Wilcox @ 2026-01-09 14:44 UTC
To: David Hildenbrand (Red Hat)
Cc: Andrew Morton, Zi Yan, Lorenzo Stoakes, Rik van Riel,
    Liam R. Howlett, Vlastimil Babka, Harry Yoo, Jann Horn, linux-mm,
    syzbot+2d9c96466c978346b55f, Lance Yang, stable

On Fri, Jan 09, 2026 at 02:50:26PM +0100, David Hildenbrand (Red Hat) wrote:
> We now handle file-backed folios correctly, I think.  Could we somehow
> also be in trouble for anon folios?  Because there, we'd still take the
> rmap lock after grabbing the folio lock.

We're now pretty far afield from my area of MM expertise, but since
using AI is now encouraged, I will confidently state that only
file-backed hugetlb folios have this inversion of the rmap lock and
folio lock.  anon hugetlb folios follow the normal rules.

And it's all because of PMD sharing, which isn't needed in the anon
case but is needed for file-backed.  So once mshare is in, we can
remove this wart.

> >  	if (page_was_mapped)
> > -		remove_migration_ptes(src, !rc ? dst : src, 0);
> > +		remove_migration_ptes(src, !rc ? dst : src,
> > +					ttu ? RMP_LOCKED : 0);
>
> (ttu & TTU_RMAP_LOCKED) ? RMP_LOCKED : 0)
>
> Would be cleaner, but I see how you clean that up in #2. :)

Yes, that would be more future-proof, but this code has no future ;-)

> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>

Thanks!
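For readers without the tree open: the hugetlb-specific ordering that
the commit message and this exchange appeal to lives in the comment
block at the top of mm/rmap.c.  Paraphrased from memory (check the
source for the authoritative wording), it reads roughly:

/*
 * hugetlb folios take locks in this order (paraphrase of mm/rmap.c):
 *
 *   hugetlb_fault_mutex (hugetlbfs-specific page fault mutex)
 *     vma_lock (hugetlb-specific lock for PMD sharing)
 *       mapping->i_mmap_rwsem (also used for hugetlb PMD sharing)
 *         folio_lock
 *
 * For ordinary folios the nesting is the other way around: folio_lock
 * is taken before i_mmap_rwsem.  PMD sharing is why file-backed
 * hugetlb is special, matching the explanation above.
 */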
* Re: [PATCH 1/2] migrate: Correct lock ordering for hugetlb file folios
From: Zi Yan @ 2026-01-09 14:57 UTC
To: Matthew Wilcox (Oracle)
Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Rik van Riel,
    Liam R. Howlett, Vlastimil Babka, Harry Yoo, Jann Horn, linux-mm,
    syzbot+2d9c96466c978346b55f, Lance Yang, stable

On 8 Jan 2026, at 23:13, Matthew Wilcox (Oracle) wrote:
> Syzbot has found a deadlock (analyzed by Lance Yang):
[...]
> ---
>  mm/migrate.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)

LGTM. Acked-by: Zi Yan <ziy@nvidia.com>

Best Regards,
Yan, Zi
* [PATCH 2/2] migrate: Replace RMP_ flags with TTU_ flags
From: Matthew Wilcox (Oracle) @ 2026-01-09  4:13 UTC
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), Zi Yan, David Hildenbrand, Lorenzo Stoakes,
    Rik van Riel, Liam R. Howlett, Vlastimil Babka, Harry Yoo, Jann Horn,
    linux-mm

Instead of translating between RMP_ and TTU_ flags, remove the RMP_
flags and just use the TTU_ flag space; there's plenty available.

Possibly we should rename these to RMAP_ flags, and maybe even pass
them in through rmap_walk_arg, but that can be done later.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/rmap.h |  9 +++------
 mm/huge_memory.c     |  8 ++++----
 mm/migrate.c         | 12 ++++++------
 3 files changed, 13 insertions(+), 16 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index daa92a58585d..7afc6abe1c23 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -92,6 +92,7 @@ struct anon_vma_chain {
 };
 
 enum ttu_flags {
+	TTU_USE_SHARED_ZEROPAGE	= 0x2,	/* for unused pages of large folios */
 	TTU_SPLIT_HUGE_PMD	= 0x4,	/* split huge PMD if any */
 	TTU_IGNORE_MLOCK	= 0x8,	/* ignore mlock */
 	TTU_SYNC		= 0x10,	/* avoid racy checks with PVMW_SYNC */
@@ -1000,12 +1001,8 @@ int mapping_wrprotect_range(struct address_space *mapping, pgoff_t pgoff,
 int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
 		      struct vm_area_struct *vma);
 
-enum rmp_flags {
-	RMP_LOCKED		= 1 << 0,
-	RMP_USE_SHARED_ZEROPAGE	= 1 << 1,
-};
-
-void remove_migration_ptes(struct folio *src, struct folio *dst, int flags);
+void remove_migration_ptes(struct folio *src, struct folio *dst,
+		enum ttu_flags flags);
 
 /*
  * rmap_walk_control: To control rmap traversing for specific needs
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 40cf59301c21..44ff8a648afd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3431,7 +3431,7 @@ static void remap_page(struct folio *folio, unsigned long nr, int flags)
 	if (!folio_test_anon(folio))
 		return;
 	for (;;) {
-		remove_migration_ptes(folio, folio, RMP_LOCKED | flags);
+		remove_migration_ptes(folio, folio, TTU_RMAP_LOCKED | flags);
 		i += folio_nr_pages(folio);
 		if (i >= nr)
 			break;
@@ -3944,7 +3944,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	int old_order = folio_order(folio);
 	struct folio *new_folio, *next;
 	int nr_shmem_dropped = 0;
-	int remap_flags = 0;
+	enum ttu_flags ttu_flags = 0;
 	int ret;
 	pgoff_t end = 0;
 
@@ -4064,9 +4064,9 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		shmem_uncharge(mapping->host, nr_shmem_dropped);
 
 	if (!ret && is_anon && !folio_is_device_private(folio))
-		remap_flags = RMP_USE_SHARED_ZEROPAGE;
+		ttu_flags = TTU_USE_SHARED_ZEROPAGE;
 
-	remap_page(folio, 1 << old_order, remap_flags);
+	remap_page(folio, 1 << old_order, ttu_flags);
 
 	/*
 	 * Unlock all after-split folios except the one containing
diff --git a/mm/migrate.c b/mm/migrate.c
index 4688b9e38cd2..4750a2ba15fe 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -452,11 +452,12 @@ static bool remove_migration_pte(struct folio *folio,
  * Get rid of all migration entries and replace them by
  * references to the indicated page.
  */
-void remove_migration_ptes(struct folio *src, struct folio *dst, int flags)
+void remove_migration_ptes(struct folio *src, struct folio *dst,
+		enum ttu_flags flags)
 {
 	struct rmap_walk_arg rmap_walk_arg = {
 		.folio = src,
-		.map_unused_to_zeropage = flags & RMP_USE_SHARED_ZEROPAGE,
+		.map_unused_to_zeropage = flags & TTU_USE_SHARED_ZEROPAGE,
 	};
 
 	struct rmap_walk_control rwc = {
@@ -464,9 +465,9 @@ void remove_migration_ptes(struct folio *src, struct folio *dst, int flags)
 		.arg = &rmap_walk_arg,
 	};
 
-	VM_BUG_ON_FOLIO((flags & RMP_USE_SHARED_ZEROPAGE) && (src != dst), src);
+	VM_BUG_ON_FOLIO((flags & TTU_USE_SHARED_ZEROPAGE) && (src != dst), src);
 
-	if (flags & RMP_LOCKED)
+	if (flags & TTU_RMAP_LOCKED)
 		rmap_walk_locked(dst, &rwc);
 	else
 		rmap_walk(dst, &rwc);
@@ -1521,8 +1522,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 		rc = move_to_new_folio(dst, src, mode);
 
 	if (page_was_mapped)
-		remove_migration_ptes(src, !rc ? dst : src,
-					ttu ? RMP_LOCKED : 0);
+		remove_migration_ptes(src, !rc ? dst : src, ttu);
 
 	if (ttu & TTU_RMAP_LOCKED)
 		i_mmap_unlock_write(mapping);
-- 
2.47.3
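To make the flag unification concrete, here is a minimal standalone
sketch of the pattern the patch adopts: one bit namespace serves both
the unmap side (try_to_migrate()) and the remap side
(remove_migration_ptes()), so no translation step is needed.  The
0x2/0x4/0x8/0x10 values are taken from the hunk above; the 0x80 for
TTU_RMAP_LOCKED is an assumed value so the example compiles, and should
be checked against include/linux/rmap.h.

#include <stdio.h>

/* One shared flag namespace, per the patch.  Only the 0x2..0x10 values
 * are from the diff above; TTU_RMAP_LOCKED's value is assumed. */
enum ttu_flags {
	TTU_USE_SHARED_ZEROPAGE	= 0x2,
	TTU_SPLIT_HUGE_PMD	= 0x4,
	TTU_IGNORE_MLOCK	= 0x8,
	TTU_SYNC		= 0x10,
	TTU_RMAP_LOCKED		= 0x80,	/* assumption: check rmap.h */
};

/* Remap side: the same flags word tells us whether the caller already
 * holds the rmap lock, with no RMP_LOCKED translation in between. */
static void remove_migration_ptes_sketch(enum ttu_flags flags)
{
	if (flags & TTU_RMAP_LOCKED)
		puts("rmap_walk_locked(): caller holds i_mmap_rwsem");
	else
		puts("rmap_walk(): take and drop the rmap lock here");
}

int main(void)
{
	/* As in unmap_and_move_huge_page(): whatever 'ttu' was set to
	 * for try_to_migrate() is now passed straight through. */
	enum ttu_flags ttu = TTU_RMAP_LOCKED;

	remove_migration_ptes_sketch(ttu);
	remove_migration_ptes_sketch(0);
	return 0;
}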
* Re: [PATCH 2/2] migrate: Replace RMP_ flags with TTU_ flags
From: David Hildenbrand (Red Hat) @ 2026-01-09 13:52 UTC
To: Matthew Wilcox (Oracle), Andrew Morton
Cc: Zi Yan, Lorenzo Stoakes, Rik van Riel, Liam R. Howlett,
    Vlastimil Babka, Harry Yoo, Jann Horn, linux-mm

On 1/9/26 05:13, Matthew Wilcox (Oracle) wrote:
> Instead of translating between RMP_ and TTU_ flags, remove the RMP_
> flags and just use the TTU_ flag space; there's plenty available.
>
> Possibly we should rename these to RMAP_ flags, and maybe even pass
> them in through rmap_walk_arg, but that can be done later.

Yes, the TTU prefix is a bit misleading.

Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>

-- 
Cheers

David
* Re: [PATCH 2/2] migrate: Replace RMP_ flags with TTU_ flags
From: Lorenzo Stoakes @ 2026-01-09 14:44 UTC
To: Matthew Wilcox (Oracle)
Cc: Andrew Morton, Zi Yan, David Hildenbrand, Rik van Riel,
    Liam R. Howlett, Vlastimil Babka, Harry Yoo, Jann Horn, linux-mm

On Fri, Jan 09, 2026 at 04:13:43AM +0000, Matthew Wilcox (Oracle) wrote:
> Instead of translating between RMP_ and TTU_ flags, remove the RMP_
> flags and just use the TTU_ flag space; there's plenty available.
>
> Possibly we should rename these to RMAP_ flags, and maybe even pass
> them in through rmap_walk_arg, but that can be done later.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

LGTM, so:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

> ---
>  include/linux/rmap.h |  9 +++------
>  mm/huge_memory.c     |  8 ++++----
>  mm/migrate.c         | 12 ++++++------
>  3 files changed, 13 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index daa92a58585d..7afc6abe1c23 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -92,6 +92,7 @@ struct anon_vma_chain {
>  };
>
>  enum ttu_flags {
> +	TTU_USE_SHARED_ZEROPAGE	= 0x2,	/* for unused pages of large folios */

Kinda weird we had 0x2 free :) I wonder why? Did we have flags here
that got removed, I guess?

>  	TTU_SPLIT_HUGE_PMD	= 0x4,	/* split huge PMD if any */
>  	TTU_IGNORE_MLOCK	= 0x8,	/* ignore mlock */
>  	TTU_SYNC		= 0x10,	/* avoid racy checks with PVMW_SYNC */
[...]
* Re: [PATCH 2/2] migrate: Replace RMP_ flags with TTU_ flags
From: Matthew Wilcox @ 2026-01-09 14:48 UTC
To: Lorenzo Stoakes
Cc: Andrew Morton, Zi Yan, David Hildenbrand, Rik van Riel,
    Liam R. Howlett, Vlastimil Babka, Harry Yoo, Jann Horn, linux-mm

On Fri, Jan 09, 2026 at 02:44:20PM +0000, Lorenzo Stoakes wrote:
> >  enum ttu_flags {
> > +	TTU_USE_SHARED_ZEROPAGE	= 0x2,	/* for unused pages of large folios */
>
> Kinda weird we had 0x2 free :) I wonder why? Did we have flags here
> that got removed, I guess?

a98a2f0c8ce1 -	TTU_MIGRATION		= 0x1,	/* migration mode */
cd62734ca60d -	TTU_MUNLOCK		= 0x2,	/* munlock mode */
732ed55823fc +	TTU_SYNC		= 0x10,	/* avoid racy checks with PVMW_SYNC */
013339df116c -	TTU_IGNORE_ACCESS	= 0x10,	/* don't age */

a128ca71fb29 did many changes all at once and I shan't attempt to
summarise it.
* Re: [PATCH 2/2] migrate: Replace RMP_ flags with TTU_ flags
From: Zi Yan @ 2026-01-09 17:20 UTC
To: Matthew Wilcox (Oracle)
Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Rik van Riel,
    Liam R. Howlett, Vlastimil Babka, Harry Yoo, Jann Horn, linux-mm

On 8 Jan 2026, at 23:13, Matthew Wilcox (Oracle) wrote:
> Instead of translating between RMP_ and TTU_ flags, remove the RMP_
> flags and just use the TTU_ flag space; there's plenty available.
>
> Possibly we should rename these to RMAP_ flags, and maybe even pass
> them in through rmap_walk_arg, but that can be done later.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/linux/rmap.h |  9 +++------
>  mm/huge_memory.c     |  8 ++++----
>  mm/migrate.c         | 12 ++++++------
>  3 files changed, 13 insertions(+), 16 deletions(-)

LGTM. Reviewed-by: Zi Yan <ziy@nvidia.com>

Best Regards,
Yan, Zi