* [PATCH] mm/huge_memory: fix early try_to_migrate() failure when splitting huge pmd for shared THP
@ 2026-01-30 23:00 Wei Yang
From: Wei Yang @ 2026-01-30 23:00 UTC
  To: akpm, david, lorenzo.stoakes, riel, Liam.Howlett, vbabka,
	harry.yoo, jannh, gavinguo, baolin.wang, ziy
  Cc: linux-mm, Wei Yang, stable

Commit 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and
split_huge_pmd_locked()") returns false unconditionally after
split_huge_pmd_locked(), which can make try_to_migrate() fail early for
a shared THP. This leads to an unexpected folio split failure.

One way to reproduce:

    Create an anonymous THP range and fork 512 children, so the THP is
    shared-mapped in 513 processes. Then trigger a folio split via the
    /sys/kernel/debug/split_huge_pages debugfs interface to split the
    THP folio to order 0.

Without the above commit, the folio is successfully split to order 0.
With the above commit, the folio remains a large folio.
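
For reference, a minimal reproducer along the lines of the steps above
could look as follows. This is only an illustrative sketch, not the
exact program used for the report; it assumes 2MB PMD-sized THPs, THP
enabled in "always" or "madvise" mode, debugfs mounted, root
privileges, and the "<pid>,<vaddr_start>,<vaddr_end>" input format
described in Documentation/admin-guide/mm/transhuge.rst:

/*
 * Illustrative reproducer sketch: map an anonymous THP, share it with
 * 512 children via fork(), then ask the split_huge_pages debugfs
 * interface to split it back to order 0.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#define THP_SIZE	(2UL << 20)	/* assumes 2MB PMD-sized THP */
#define NR_CHILDREN	512

int main(void)
{
	unsigned long addr;
	char cmd[128];
	char *buf;
	FILE *f;
	int i;

	/* Over-map so a PMD-aligned 2MB range can be picked inside. */
	buf = mmap(NULL, 2 * THP_SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	addr = ((unsigned long)buf + THP_SIZE - 1) & ~(THP_SIZE - 1);

	madvise((void *)addr, THP_SIZE, MADV_HUGEPAGE);
	memset((void *)addr, 1, THP_SIZE);	/* fault in the THP */

	/* Share the THP: parent + 512 children = 513 processes. */
	for (i = 0; i < NR_CHILDREN; i++) {
		if (fork() == 0) {
			pause();
			_exit(0);
		}
	}

	/* One split attempt, as we would usually do in the real world. */
	snprintf(cmd, sizeof(cmd), "%d,0x%lx,0x%lx",
		 (int)getpid(), addr, addr + THP_SIZE);
	f = fopen("/sys/kernel/debug/split_huge_pages", "w");
	if (f) {
		fputs(cmd, f);
		fclose(f);
	}

	/* Check AnonHugePages in /proc/<pid>/smaps to see the result. */
	pause();
	return 0;
}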

The reason is that the above commit unconditionally returns false after
splitting the pmd in the first process, which aborts try_to_migrate().

The tricky part of the above reproduction method is that the current
debugfs interface goes through split_huge_pages_pid(), which iterates
over the whole pmd range and attempts a folio split at each base page
address. This means it tries 512 times, and each attempt converts one
process's pmd-mapped THP into a pte-mapped one. If fewer than 512
processes share the mapping, the folio is therefore still split
successfully in the end. But in the real world, we usually try only
once.

Fix this by removing the unconditional false return after
split_huge_pmd_locked(). Later, we may introduce a genuine early
failure for the case where split_huge_pmd_locked() itself fails.
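
As a purely hypothetical sketch of that possible follow-up (not part of
this patch, and assuming split_huge_pmd_locked() were changed to return
whether the split succeeded), the hunk below could become roughly:

			if (flags & TTU_SPLIT_HUGE_PMD) {
				/*
				 * Hypothetical: report failure only when
				 * the split itself fails.
				 */
				if (!split_huge_pmd_locked(vma, pvmw.address,
							   pvmw.pmd, true))
					ret = false;
				page_vma_mapped_walk_done(&pvmw);
				break;
			}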

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Fixes: 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and split_huge_pmd_locked()")
Cc: Gavin Guo <gavinguo@igalia.com>
Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: <stable@vger.kernel.org>
---
 mm/rmap.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 618df3385c8b..eed971568d65 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2448,7 +2448,6 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			if (flags & TTU_SPLIT_HUGE_PMD) {
 				split_huge_pmd_locked(vma, pvmw.address,
 						      pvmw.pmd, true);
-				ret = false;
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
-- 
2.34.1


