* [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp
@ 2026-02-04 0:42 Wei Yang
2026-02-04 2:03 ` Baolin Wang
` (5 more replies)
0 siblings, 6 replies; 12+ messages in thread
From: Wei Yang @ 2026-02-04 0:42 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, riel, Liam.Howlett, vbabka,
harry.yoo, jannh, ziy, gavinguo, baolin.wang
Cc: linux-mm, Wei Yang, Lance Yang, stable
Commit 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and
split_huge_pmd_locked()") makes try_to_migrate_one() return false
unconditionally after split_huge_pmd_locked(), which may fail
try_to_migrate() early for a shared THP. This leads to unexpected folio
split failures.
One way to reproduce:
Create an anonymous THP range and fork 512 children, so the THP is
shared-mapped in 513 processes. Then trigger a folio split via the
/sys/kernel/debug/split_huge_pages debugfs interface to split the THP
folio to order 0.
Without the above commit, the split to order 0 succeeds.
With the above commit, the folio is still a large folio.
The reason is that the above commit returns false unconditionally after
splitting the PMD in the first process, which breaks out of
try_to_migrate().
The tricky part of the above reproducer is that the current debugfs
interface uses split_huge_pages_pid(), which iterates over the whole PMD
range and attempts a folio split at each base page address. This means
it tries 512 times, each time splitting one PMD from a PMD-mapped to a
PTE-mapped THP. If fewer than 512 processes share the mapping, the folio
is eventually split successfully. But in the real world, we usually try
only once.
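A rough reproducer sketch along those lines (untested and illustrative
only; it assumes the documented "<pid>,<vaddr_start>,<vaddr_end>[,<new_order>]"
write format of split_huge_pages, and that the madvised region really
ends up backed by a PMD-mapped 2M THP on the test system):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#define THP_SIZE	(2UL << 20)	/* assumes 2M PMD-sized THP */
#define NR_CHILD	512

int main(void)
{
	char cmd[128];
	char *buf;
	FILE *f;
	int i;

	/* 2M-aligned anonymous region, populated so it can be THP-backed */
	buf = aligned_alloc(THP_SIZE, THP_SIZE);
	madvise(buf, THP_SIZE, MADV_HUGEPAGE);
	memset(buf, 1, THP_SIZE);

	/* share the mapping with 512 children -> 513 processes in total */
	for (i = 0; i < NR_CHILD; i++) {
		if (fork() == 0) {
			pause();
			_exit(0);
		}
	}

	/* ask debugfs to split the parent's range down to order 0 */
	snprintf(cmd, sizeof(cmd), "%d,0x%lx,0x%lx,0", getpid(),
		 (unsigned long)buf, (unsigned long)buf + THP_SIZE);
	f = fopen("/sys/kernel/debug/split_huge_pages", "w");
	fputs(cmd, f);
	fclose(f);

	/* check /proc/<pid>/smaps or page_owner to see if the split worked */
	return 0;
}

With 513 processes sharing the THP, even the per-base-page attempts that
split_huge_pages_pid() makes over this range are not enough with the
broken commit applied, so the folio stays large.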
Fix this by restarting page_vma_mapped_walk() after
split_huge_pmd_locked(). split_huge_pmd_locked() may fall back to
(freeze = false) when folio_try_share_anon_rmap_pmd() fails, in which
case the PMD is merely split instead of being split into migration
entries. Restarting page_vma_mapped_walk() lets try_to_migrate_one()
retry on each PTE and fail try_to_migrate() early if that fails.
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Fixes: 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and split_huge_pmd_locked()")
Cc: Gavin Guo <gavinguo@igalia.com>
Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Lance Yang <lance.yang@linux.dev>
Cc: <stable@vger.kernel.org>
---
v2:
* restart page_vma_mapped_walk() after split_huge_pmd_locked()
---
mm/rmap.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index 618df3385c8b..5b853ec8901d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2446,11 +2446,16 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
__maybe_unused pmd_t pmdval;
if (flags & TTU_SPLIT_HUGE_PMD) {
+ /*
+ * After split_huge_pmd_locked(), restart the
+ * walk to detect PageAnonExclusive handling
+ * failure in __split_huge_pmd_locked().
+ */
split_huge_pmd_locked(vma, pvmw.address,
pvmw.pmd, true);
- ret = false;
- page_vma_mapped_walk_done(&pvmw);
- break;
+ flags &= ~TTU_SPLIT_HUGE_PMD;
+ page_vma_mapped_walk_restart(&pvmw);
+ continue;
}
#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
pmdval = pmdp_get(pvmw.pmd);
--
2.34.1
* Re: [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp
2026-02-04 0:42 [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp Wei Yang
@ 2026-02-04 2:03 ` Baolin Wang
2026-02-04 2:22 ` Zi Yan
` (4 subsequent siblings)
5 siblings, 0 replies; 12+ messages in thread
From: Baolin Wang @ 2026-02-04 2:03 UTC (permalink / raw)
To: Wei Yang, akpm, david, lorenzo.stoakes, riel, Liam.Howlett,
vbabka, harry.yoo, jannh, ziy, gavinguo
Cc: linux-mm, Lance Yang, stable
On 2/4/26 8:42 AM, Wei Yang wrote:
> Commit 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and
> split_huge_pmd_locked()") return false unconditionally after
> split_huge_pmd_locked() which may fail early during try_to_migrate() for
> shared thp. This will lead to unexpected folio split failure.
>
> One way to reproduce:
>
> Create an anonymous thp range and fork 512 children, so we have a
> thp shared mapped in 513 processes. Then trigger folio split with
> /sys/kernel/debug/split_huge_pages debugfs to split the thp folio to
> order 0.
>
> Without the above commit, we can successfully split to order 0.
> With the above commit, the folio is still a large folio.
>
> The reason is the above commit return false after split pmd
> unconditionally in the first process and break try_to_migrate().
>
> The tricky thing in above reproduce method is current debugfs interface
> leverage function split_huge_pages_pid(), which will iterate the whole
> pmd range and do folio split on each base page address. This means it
> will try 512 times, and each time split one pmd from pmd mapped to pte
> mapped thp. If there are less than 512 shared mapped process,
> the folio is still split successfully at last. But in real world, we
> usually try it for once.
>
> This patch fixes this by restart page_vma_mapped_walk() after
> split_huge_pmd_locked(). Because split_huge_pmd_locked() may fall back to
> (freeze = false) if folio_try_share_anon_rmap_pmd() fails and the PMD is
> just split instead of split to migration entry. Restart
> page_vma_mapped_walk() and let try_to_migrate_one() try on each PTE
> again and fail try_to_migrate() early if it fails.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Fixes: 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and split_huge_pmd_locked()")
> Cc: Gavin Guo <gavinguo@igalia.com>
> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: <stable@vger.kernel.org>
>
> ---
> v2:
> * restart page_vma_mapped_walk() after split_huge_pmd_locked()
> ---
The fix looks reasonable to me. Thanks.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> mm/rmap.c | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 618df3385c8b..5b853ec8901d 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -2446,11 +2446,16 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
> __maybe_unused pmd_t pmdval;
>
> if (flags & TTU_SPLIT_HUGE_PMD) {
> + /*
> + * After split_huge_pmd_locked(), restart the
> + * walk to detect PageAnonExclusive handling
> + * failure in __split_huge_pmd_locked().
> + */
> split_huge_pmd_locked(vma, pvmw.address,
> pvmw.pmd, true);
> - ret = false;
> - page_vma_mapped_walk_done(&pvmw);
> - break;
> + flags &= ~TTU_SPLIT_HUGE_PMD;
> + page_vma_mapped_walk_restart(&pvmw);
> + continue;
> }
> #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> pmdval = pmdp_get(pvmw.pmd);
* Re: [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp
2026-02-04 0:42 [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp Wei Yang
2026-02-04 2:03 ` Baolin Wang
@ 2026-02-04 2:22 ` Zi Yan
2026-02-04 3:12 ` Lance Yang
` (3 subsequent siblings)
5 siblings, 0 replies; 12+ messages in thread
From: Zi Yan @ 2026-02-04 2:22 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, lorenzo.stoakes, riel, Liam.Howlett, vbabka,
harry.yoo, jannh, gavinguo, baolin.wang, linux-mm, Lance Yang,
stable
On 3 Feb 2026, at 19:42, Wei Yang wrote:
> Commit 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and
> split_huge_pmd_locked()") return false unconditionally after
> split_huge_pmd_locked() which may fail early during try_to_migrate() for
> shared thp. This will lead to unexpected folio split failure.
>
> One way to reproduce:
>
> Create an anonymous thp range and fork 512 children, so we have a
> thp shared mapped in 513 processes. Then trigger folio split with
> /sys/kernel/debug/split_huge_pages debugfs to split the thp folio to
> order 0.
>
> Without the above commit, we can successfully split to order 0.
> With the above commit, the folio is still a large folio.
>
> The reason is the above commit return false after split pmd
> unconditionally in the first process and break try_to_migrate().
>
> The tricky thing in above reproduce method is current debugfs interface
> leverage function split_huge_pages_pid(), which will iterate the whole
> pmd range and do folio split on each base page address. This means it
> will try 512 times, and each time split one pmd from pmd mapped to pte
> mapped thp. If there are less than 512 shared mapped process,
> the folio is still split successfully at last. But in real world, we
> usually try it for once.
>
> This patch fixes this by restart page_vma_mapped_walk() after
> split_huge_pmd_locked(). Because split_huge_pmd_locked() may fall back to
> (freeze = false) if folio_try_share_anon_rmap_pmd() fails and the PMD is
> just split instead of split to migration entry. Restart
> page_vma_mapped_walk() and let try_to_migrate_one() try on each PTE
> again and fail try_to_migrate() early if it fails.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Fixes: 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and split_huge_pmd_locked()")
> Cc: Gavin Guo <gavinguo@igalia.com>
> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: <stable@vger.kernel.org>
>
> ---
> v2:
> * restart page_vma_mapped_walk() after split_huge_pmd_locked()
> ---
> mm/rmap.c | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Best Regards,
Yan, Zi
* Re: [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp
2026-02-04 0:42 [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp Wei Yang
2026-02-04 2:03 ` Baolin Wang
2026-02-04 2:22 ` Zi Yan
@ 2026-02-04 3:12 ` Lance Yang
2026-02-04 9:41 ` Gavin Guo
` (2 subsequent siblings)
5 siblings, 0 replies; 12+ messages in thread
From: Lance Yang @ 2026-02-04 3:12 UTC (permalink / raw)
To: Wei Yang
Cc: linux-mm, ziy, riel, lorenzo.stoakes, david, akpm, baolin.wang,
gavinguo, vbabka, jannh, stable, harry.yoo, Liam.Howlett
On 2026/2/4 08:42, Wei Yang wrote:
> Commit 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and
> split_huge_pmd_locked()") return false unconditionally after
> split_huge_pmd_locked() which may fail early during try_to_migrate() for
> shared thp. This will lead to unexpected folio split failure.
>
> One way to reproduce:
>
> Create an anonymous thp range and fork 512 children, so we have a
> thp shared mapped in 513 processes. Then trigger folio split with
> /sys/kernel/debug/split_huge_pages debugfs to split the thp folio to
> order 0.
>
> Without the above commit, we can successfully split to order 0.
> With the above commit, the folio is still a large folio.
>
> The reason is the above commit return false after split pmd
> unconditionally in the first process and break try_to_migrate().
>
> The tricky thing in above reproduce method is current debugfs interface
> leverage function split_huge_pages_pid(), which will iterate the whole
> pmd range and do folio split on each base page address. This means it
> will try 512 times, and each time split one pmd from pmd mapped to pte
> mapped thp. If there are less than 512 shared mapped process,
> the folio is still split successfully at last. But in real world, we
> usually try it for once.
>
> This patch fixes this by restart page_vma_mapped_walk() after
> split_huge_pmd_locked(). Because split_huge_pmd_locked() may fall back to
> (freeze = false) if folio_try_share_anon_rmap_pmd() fails and the PMD is
> just split instead of split to migration entry. Restart
> page_vma_mapped_walk() and let try_to_migrate_one() try on each PTE
> again and fail try_to_migrate() early if it fails.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Fixes: 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and split_huge_pmd_locked()")
> Cc: Gavin Guo <gavinguo@igalia.com>
> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: <stable@vger.kernel.org>
>
> ---
Confirmed that the splitting is working now as expected with the
reproducer above.
Tested-by: Lance Yang <lance.yang@linux.dev>
Also, looks good to me:
Reviewed-by: Lance Yang <lance.yang@linux.dev>
* Re: [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp
2026-02-04 0:42 [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp Wei Yang
` (2 preceding siblings ...)
2026-02-04 3:12 ` Lance Yang
@ 2026-02-04 9:41 ` Gavin Guo
2026-02-04 19:36 ` David Hildenbrand (arm)
2026-02-04 19:42 ` Andrew Morton
5 siblings, 0 replies; 12+ messages in thread
From: Gavin Guo @ 2026-02-04 9:41 UTC (permalink / raw)
To: Wei Yang, akpm, david, lorenzo.stoakes, riel, Liam.Howlett,
vbabka, harry.yoo, jannh, ziy, baolin.wang
Cc: linux-mm, Lance Yang, stable
On 2/4/26 08:42, Wei Yang wrote:
> Commit 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and
> split_huge_pmd_locked()") return false unconditionally after
> split_huge_pmd_locked() which may fail early during try_to_migrate() for
> shared thp. This will lead to unexpected folio split failure.
>
> One way to reproduce:
>
> Create an anonymous thp range and fork 512 children, so we have a
> thp shared mapped in 513 processes. Then trigger folio split with
> /sys/kernel/debug/split_huge_pages debugfs to split the thp folio to
> order 0.
>
> Without the above commit, we can successfully split to order 0.
> With the above commit, the folio is still a large folio.
>
> The reason is the above commit return false after split pmd
> unconditionally in the first process and break try_to_migrate().
>
> The tricky thing in above reproduce method is current debugfs interface
> leverage function split_huge_pages_pid(), which will iterate the whole
> pmd range and do folio split on each base page address. This means it
> will try 512 times, and each time split one pmd from pmd mapped to pte
> mapped thp. If there are less than 512 shared mapped process,
> the folio is still split successfully at last. But in real world, we
> usually try it for once.
>
> This patch fixes this by restart page_vma_mapped_walk() after
> split_huge_pmd_locked(). Because split_huge_pmd_locked() may fall back to
> (freeze = false) if folio_try_share_anon_rmap_pmd() fails and the PMD is
> just split instead of split to migration entry. Restart
> page_vma_mapped_walk() and let try_to_migrate_one() try on each PTE
> again and fail try_to_migrate() early if it fails.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Fixes: 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and split_huge_pmd_locked()")
> Cc: Gavin Guo <gavinguo@igalia.com>
> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: <stable@vger.kernel.org>
>
> ---
> v2:
> * restart page_vma_mapped_walk() after split_huge_pmd_locked()
> ---
> mm/rmap.c | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 618df3385c8b..5b853ec8901d 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -2446,11 +2446,16 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
> __maybe_unused pmd_t pmdval;
>
> if (flags & TTU_SPLIT_HUGE_PMD) {
> + /*
> + * After split_huge_pmd_locked(), restart the
> + * walk to detect PageAnonExclusive handling
> + * failure in __split_huge_pmd_locked().
> + */
> split_huge_pmd_locked(vma, pvmw.address,
> pvmw.pmd, true);
> - ret = false;
> - page_vma_mapped_walk_done(&pvmw);
> - break;
> + flags &= ~TTU_SPLIT_HUGE_PMD;
> + page_vma_mapped_walk_restart(&pvmw);
> + continue;
> }
> #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> pmdval = pmdp_get(pvmw.pmd);
It looks good to me. Thanks!
Reviewed-by: Gavin Guo <gavinguo@igalia.com>
* Re: [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp
2026-02-04 0:42 [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp Wei Yang
` (3 preceding siblings ...)
2026-02-04 9:41 ` Gavin Guo
@ 2026-02-04 19:36 ` David Hildenbrand (arm)
2026-02-04 20:02 ` Zi Yan
2026-02-04 19:42 ` Andrew Morton
5 siblings, 1 reply; 12+ messages in thread
From: David Hildenbrand (arm) @ 2026-02-04 19:36 UTC (permalink / raw)
To: Wei Yang, akpm, lorenzo.stoakes, riel, Liam.Howlett, vbabka,
harry.yoo, jannh, ziy, gavinguo, baolin.wang
Cc: linux-mm, Lance Yang, stable
Sorry for the late reply. I saw that I was CCed in v1 but I am only now
catching up with mails ... slowly but steadily.
> Without the above commit, we can successfully split to order 0.
> With the above commit, the folio is still a large folio.
>
> The reason is the above commit return false after split pmd
> unconditionally in the first process and break try_to_migrate().
>
> The tricky thing in above reproduce method is current debugfs interface
> leverage function split_huge_pages_pid(), which will iterate the whole
> pmd range and do folio split on each base page address. This means it
> will try 512 times, and each time split one pmd from pmd mapped to pte
> mapped thp. If there are less than 512 shared mapped process,
> the folio is still split successfully at last. But in real world, we
> usually try it for once.
Ah, that explains magic number 513.
>
> This patch fixes this by restart page_vma_mapped_walk() after
> split_huge_pmd_locked(). Because split_huge_pmd_locked() may fall back to
> (freeze = false) if folio_try_share_anon_rmap_pmd() fails and the PMD is
> just split instead of split to migration entry.
Right, but folio_try_share_anon_rmap_pmd() should never fail on the
folios that have already been shared? (above you write that it is shared
with 512 children)
The only case where folio_try_share_anon_rmap_pmd() could fail would be
if the folio were not shared, and then there would only be a single PMD
mapping, so there is nothing you can do -> abort.
Returning "false" from try_to_migrate_one() is the real issue, as it
makes rmap_walk_anon() just stop -> abort the walk.
So I suspect v1 was actually sufficient, or what am I missing where the
restart would actually be required?
(maybe we should get rid of the usage of booleans here at some point, an
enum like abort/continue would have been much clearer)
> Restart
> page_vma_mapped_walk() and let try_to_migrate_one() try on each PTE
> again and fail try_to_migrate() early if it fails.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Fixes: 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and split_huge_pmd_locked()")
> Cc: Gavin Guo <gavinguo@igalia.com>
> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: <stable@vger.kernel.org>
>
> ---
> v2:
> * restart page_vma_mapped_walk() after split_huge_pmd_locked()
> ---
> mm/rmap.c | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 618df3385c8b..5b853ec8901d 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -2446,11 +2446,16 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
> __maybe_unused pmd_t pmdval;
>
> if (flags & TTU_SPLIT_HUGE_PMD) {
> + /*
> + * After split_huge_pmd_locked(), restart the
> + * walk to detect PageAnonExclusive handling
> + * failure in __split_huge_pmd_locked().
> + */
> split_huge_pmd_locked(vma, pvmw.address,
> pvmw.pmd, true);
> - ret = false;
> - page_vma_mapped_walk_done(&pvmw);
> - break;
> + flags &= ~TTU_SPLIT_HUGE_PMD;
> + page_vma_mapped_walk_restart(&pvmw);
> + continue;
> }
The change looks more consistent with what we have in try_to_unmap().
But I think the explanation above is not quite right, and consequently
neither is the comment above.
PAE being set implies "single PMD" -> unshared.
--
Cheers,
David
* Re: [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp
2026-02-04 0:42 [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp Wei Yang
` (4 preceding siblings ...)
2026-02-04 19:36 ` David Hildenbrand (arm)
@ 2026-02-04 19:42 ` Andrew Morton
2026-02-05 3:04 ` Wei Yang
5 siblings, 1 reply; 12+ messages in thread
From: Andrew Morton @ 2026-02-04 19:42 UTC (permalink / raw)
To: Wei Yang
Cc: david, lorenzo.stoakes, riel, Liam.Howlett, vbabka, harry.yoo,
jannh, ziy, gavinguo, baolin.wang, linux-mm, Lance Yang, stable
On Wed, 4 Feb 2026 00:42:19 +0000 Wei Yang <richard.weiyang@gmail.com> wrote:
> Commit 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and
> split_huge_pmd_locked()") return false unconditionally after
> split_huge_pmd_locked() which may fail early during try_to_migrate() for
> shared thp. This will lead to unexpected folio split failure.
>
> One way to reproduce:
>
> Create an anonymous thp range and fork 512 children, so we have a
> thp shared mapped in 513 processes. Then trigger folio split with
> /sys/kernel/debug/split_huge_pages debugfs to split the thp folio to
> order 0.
>
> Without the above commit, we can successfully split to order 0.
> With the above commit, the folio is still a large folio.
>
> The reason is the above commit return false after split pmd
> unconditionally in the first process and break try_to_migrate().
>
> The tricky thing in above reproduce method is current debugfs interface
> leverage function split_huge_pages_pid(), which will iterate the whole
> pmd range and do folio split on each base page address. This means it
> will try 512 times, and each time split one pmd from pmd mapped to pte
> mapped thp. If there are less than 512 shared mapped process,
> the folio is still split successfully at last. But in real world, we
> usually try it for once.
>
> This patch fixes this by restart page_vma_mapped_walk() after
> split_huge_pmd_locked(). Because split_huge_pmd_locked() may fall back to
> (freeze = false) if folio_try_share_anon_rmap_pmd() fails and the PMD is
> just split instead of split to migration entry. Restart
> page_vma_mapped_walk() and let try_to_migrate_one() try on each PTE
> again and fail try_to_migrate() early if it fails.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Fixes: 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and split_huge_pmd_locked()")
Cool, thanks.
> Cc: Gavin Guo <gavinguo@igalia.com>
> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: <stable@vger.kernel.org>
Why cc:stable? In other words, what is the userspace-visible runtime
effect of this bug?
* Re: [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp
2026-02-04 19:36 ` David Hildenbrand (arm)
@ 2026-02-04 20:02 ` Zi Yan
2026-02-04 20:43 ` David Hildenbrand (arm)
0 siblings, 1 reply; 12+ messages in thread
From: Zi Yan @ 2026-02-04 20:02 UTC (permalink / raw)
To: David Hildenbrand (arm)
Cc: Wei Yang, akpm, lorenzo.stoakes, riel, Liam.Howlett, vbabka,
harry.yoo, jannh, gavinguo, baolin.wang, linux-mm, Lance Yang,
stable
On 4 Feb 2026, at 14:36, David Hildenbrand (arm) wrote:
> Sorry for the late reply. I saw that I was CCed in v1 but I am only now catching up with mails ... slowly but steadily.
>
>> Without the above commit, we can successfully split to order 0.
>> With the above commit, the folio is still a large folio.
>>
>> The reason is the above commit return false after split pmd
>> unconditionally in the first process and break try_to_migrate().
>>
>> The tricky thing in above reproduce method is current debugfs interface
>> leverage function split_huge_pages_pid(), which will iterate the whole
>> pmd range and do folio split on each base page address. This means it
>> will try 512 times, and each time split one pmd from pmd mapped to pte
>> mapped thp. If there are less than 512 shared mapped process,
>> the folio is still split successfully at last. But in real world, we
>> usually try it for once.
>
> Ah, that explains magic number 513.
>
>>
>> This patch fixes this by restart page_vma_mapped_walk() after
>> split_huge_pmd_locked(). Because split_huge_pmd_locked() may fall back to
>> (freeze = false) if folio_try_share_anon_rmap_pmd() fails and the PMD is
>> just split instead of split to migration entry.
>
> Right, but folio_try_share_anon_rmap_pmd() should never fail on the folios that have already been shared? (above you write that it is shared with 512 children)
>
> The only case where folio_try_share_anon_rmap_pmd() could fail would be if the folio would not be shared, and there would only be a single PMD then, so there is nothing you can do -> abort.
>
> Returning "false" from try_to_migrate_one() is the real issue, as it makes rmap_walk_anon() to just stop -> abort the walk.
>
>
> So I suspect v1 was actually sufficient, or what am I missing where the restart would actually be required?
The explanation is not for the shared case mentioned above. It is for an
unshared folio. If an unshared folio’s PAE cannot be cleared,
try_to_migrate_one() returns true, indicating success. Yeah, since it is
an unshared folio, the return value of try_to_migrate_one() does not
matter. This fix makes try_to_migrate_one() return false.
>
>
> (maybe we should get rid of the usage of booleans here at some point, an enum like abort/continue would have been much clearer)
>
>> Restart
>> page_vma_mapped_walk() and let try_to_migrate_one() try on each PTE
>> again and fail try_to_migrate() early if it fails.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Fixes: 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and split_huge_pmd_locked()")
>> Cc: Gavin Guo <gavinguo@igalia.com>
>> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: Lance Yang <lance.yang@linux.dev>
>> Cc: <stable@vger.kernel.org>
>>
>> ---
>> v2:
>> * restart page_vma_mapped_walk() after split_huge_pmd_locked()
>> ---
>> mm/rmap.c | 11 ++++++++---
>> 1 file changed, 8 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 618df3385c8b..5b853ec8901d 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -2446,11 +2446,16 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>> __maybe_unused pmd_t pmdval;
>> if (flags & TTU_SPLIT_HUGE_PMD) {
>> + /*
>> + * After split_huge_pmd_locked(), restart the
>> + * walk to detect PageAnonExclusive handling
>> + * failure in __split_huge_pmd_locked().
>> + */
>> split_huge_pmd_locked(vma, pvmw.address,
>> pvmw.pmd, true);
>> - ret = false;
>> - page_vma_mapped_walk_done(&pvmw);
>> - break;
>> + flags &= ~TTU_SPLIT_HUGE_PMD;
>> + page_vma_mapped_walk_restart(&pvmw);
>> + continue;
>> }
>
> The change looks more consistent to what we have in try_to_unmap().
>
> But the explanation above is not quite right I think. And consequently the comment above as well.
>
> PAE being set implies "single PMD" -> unshared.
The commit message might be improved with some additional context. The comment
above pairs with the comment in __split_huge_pmd_locked()
“In case we cannot clear PageAnonExclusive(), split the PMD
only and let try_to_migrate_one() fail later”. What is the problem with it?
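For reference, the fallback that comment pairs with looks roughly like
this (paraphrased from __split_huge_pmd_locked(), not the exact code):

	/*
	 * If PageAnonExclusive cannot be cleared (e.g. the page may be
	 * pinned), do not freeze: the PMD is split to present PTEs
	 * instead of migration entries, and the caller is expected to
	 * notice that and fail the migration attempt.
	 */
	if (freeze && anon_exclusive &&
	    folio_try_share_anon_rmap_pmd(folio, page))
		freeze = false;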
Best Regards,
Yan, Zi
* Re: [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp
2026-02-04 20:02 ` Zi Yan
@ 2026-02-04 20:43 ` David Hildenbrand (arm)
2026-02-05 2:59 ` Wei Yang
0 siblings, 1 reply; 12+ messages in thread
From: David Hildenbrand (arm) @ 2026-02-04 20:43 UTC (permalink / raw)
To: Zi Yan
Cc: Wei Yang, akpm, lorenzo.stoakes, riel, Liam.Howlett, vbabka,
harry.yoo, jannh, gavinguo, baolin.wang, linux-mm, Lance Yang,
stable
On 2/4/26 21:02, Zi Yan wrote:
> On 4 Feb 2026, at 14:36, David Hildenbrand (arm) wrote:
>
>> Sorry for the late reply. I saw that I was CCed in v1 but I am only now catching up with mails ... slowly but steadily.
>>
>>> Without the above commit, we can successfully split to order 0.
>>> With the above commit, the folio is still a large folio.
>>>
>>> The reason is the above commit return false after split pmd
>>> unconditionally in the first process and break try_to_migrate().
>>>
>>> The tricky thing in above reproduce method is current debugfs interface
>>> leverage function split_huge_pages_pid(), which will iterate the whole
>>> pmd range and do folio split on each base page address. This means it
>>> will try 512 times, and each time split one pmd from pmd mapped to pte
>>> mapped thp. If there are less than 512 shared mapped process,
>>> the folio is still split successfully at last. But in real world, we
>>> usually try it for once.
>>
>> Ah, that explains magic number 513.
>>
>>>
>>> This patch fixes this by restart page_vma_mapped_walk() after
>>> split_huge_pmd_locked(). Because split_huge_pmd_locked() may fall back to
>>> (freeze = false) if folio_try_share_anon_rmap_pmd() fails and the PMD is
>>> just split instead of split to migration entry.
>>
>> Right, but folio_try_share_anon_rmap_pmd() should never fail on the folios that have already been shared? (above you write that it is shared with 512 children)
>>
>> The only case where folio_try_share_anon_rmap_pmd() could fail would be if the folio would not be shared, and there would only be a single PMD then, so there is nothing you can do -> abort.
>>
>> Returning "false" from try_to_migrate_one() is the real issue, as it makes rmap_walk_anon() to just stop -> abort the walk.
>>
>>
>> So I suspect v1 was actually sufficient, or what am I missing where the restart would actually be required?
>
> The explanation is not for the shared case mentioned above. It is for unshared
> folio. If an unshared folio’s PAE cannot be cleared, try_to_migrate_one() return
> true, indicating a success.
Oh. You mean it should be something like
"This patch fixes this by restarting page_vma_mapped_walk() after
split_huge_pmd_locked(). We cannot simply return "true" to fix the
problem, as that would affect another case:
split_huge_pmd_locked()->folio_try_share_anon_rmap_pmd() can fail and
leave the folio mapped through PTEs; we would return "true" from
try_to_migrate_one() in that case as well. While that is mostly
harmless, we could end up walking the rmap, wasting some cycles.".
> Yeah, since it is an unshared folio, the return
> value of try_to_migrate_one() does not matter. This fix makes try_to_migrate_one()
> return false.
Right, it's not really problematic. We could end up walking the rmap and
burn some cycles.
>
>>
>>
>> (maybe we should get rid of the usage of booleans here at some point, an enum like abort/continue would have been much clearer)
>>
>>> Restart
>>> page_vma_mapped_walk() and let try_to_migrate_one() try on each PTE
>>> again and fail try_to_migrate() early if it fails.
>>>
>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>>> Fixes: 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and split_huge_pmd_locked()")
>>> Cc: Gavin Guo <gavinguo@igalia.com>
>>> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>>> Cc: Zi Yan <ziy@nvidia.com>
>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>> Cc: Lance Yang <lance.yang@linux.dev>
>>> Cc: <stable@vger.kernel.org>
>>>
>>> ---
>>> v2:
>>> * restart page_vma_mapped_walk() after split_huge_pmd_locked()
>>> ---
>>> mm/rmap.c | 11 ++++++++---
>>> 1 file changed, 8 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>> index 618df3385c8b..5b853ec8901d 100644
>>> --- a/mm/rmap.c
>>> +++ b/mm/rmap.c
>>> @@ -2446,11 +2446,16 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>>> __maybe_unused pmd_t pmdval;
>>> if (flags & TTU_SPLIT_HUGE_PMD) {
>>> + /*
>>> + * After split_huge_pmd_locked(), restart the
>>> + * walk to detect PageAnonExclusive handling
>>> + * failure in __split_huge_pmd_locked().
>>> + */
>>> split_huge_pmd_locked(vma, pvmw.address,
>>> pvmw.pmd, true);
>>> - ret = false;
>>> - page_vma_mapped_walk_done(&pvmw);
>>> - break;
>>> + flags &= ~TTU_SPLIT_HUGE_PMD;
>>> + page_vma_mapped_walk_restart(&pvmw);
>>> + continue;
>>> }
>>
>> The change looks more consistent to what we have in try_to_unmap().
>>
>> But the explanation above is not quite right I think. And consequently the comment above as well.
>>
>> PAE being set implies "single PMD" -> unshared.
>
> The commit message might be improved with some additional context. The comment
> above pairs with the comment in __split_huge_pmd_locked()
> “In case we cannot clear PageAnonExclusive(), split the PMD
> only and let try_to_migrate_one() fail later”. What is problem with it?
With your explanation it's much clearer, thanks.
I'd remove some details from the comments about PAE like:
"split_huge_pmd_locked() might leave the folio mapped through PTEs.
Retry the walk so we can detect this scenario and properly abort the walk."
With some clarifications along those lines
Acked-by: David Hildenbrand (arm) <david@kernel.org>
--
Cheers,
David
* Re: [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp
2026-02-04 20:43 ` David Hildenbrand (arm)
@ 2026-02-05 2:59 ` Wei Yang
0 siblings, 0 replies; 12+ messages in thread
From: Wei Yang @ 2026-02-05 2:59 UTC (permalink / raw)
To: David Hildenbrand (arm)
Cc: Zi Yan, Wei Yang, akpm, lorenzo.stoakes, riel, Liam.Howlett,
vbabka, harry.yoo, jannh, gavinguo, baolin.wang, linux-mm,
Lance Yang, stable
On Wed, Feb 04, 2026 at 09:43:42PM +0100, David Hildenbrand (arm) wrote:
>On 2/4/26 21:02, Zi Yan wrote:
>> On 4 Feb 2026, at 14:36, David Hildenbrand (arm) wrote:
>>
>> > Sorry for the late reply. I saw that I was CCed in v1 but I am only now catching up with mails ... slowly but steadily.
>> >
>> > > Without the above commit, we can successfully split to order 0.
>> > > With the above commit, the folio is still a large folio.
>> > >
>> > > The reason is the above commit return false after split pmd
>> > > unconditionally in the first process and break try_to_migrate().
>> > >
>> > > The tricky thing in above reproduce method is current debugfs interface
>> > > leverage function split_huge_pages_pid(), which will iterate the whole
>> > > pmd range and do folio split on each base page address. This means it
>> > > will try 512 times, and each time split one pmd from pmd mapped to pte
>> > > mapped thp. If there are less than 512 shared mapped process,
>> > > the folio is still split successfully at last. But in real world, we
>> > > usually try it for once.
>> >
>> > Ah, that explains magic number 513.
>> >
>> > >
>> > > This patch fixes this by restart page_vma_mapped_walk() after
>> > > split_huge_pmd_locked(). Because split_huge_pmd_locked() may fall back to
>> > > (freeze = false) if folio_try_share_anon_rmap_pmd() fails and the PMD is
>> > > just split instead of split to migration entry.
>> >
>> > Right, but folio_try_share_anon_rmap_pmd() should never fail on the folios that have already been shared? (above you write that it is shared with 512 children)
>> >
>> > The only case where folio_try_share_anon_rmap_pmd() could fail would be if the folio would not be shared, and there would only be a single PMD then, so there is nothing you can do -> abort.
>> >
>> > Returning "false" from try_to_migrate_one() is the real issue, as it makes rmap_walk_anon() to just stop -> abort the walk.
>> >
>> >
>> > So I suspect v1 was actually sufficient, or what am I missing where the restart would actually be required?
>>
>> The explanation is not for the shared case mentioned above. It is for unshared
>> folio. If an unshared folio’s PAE cannot be cleared, try_to_migrate_one() return
>> true, indicating a success.
Thanks Zi Yan for the explanation.
>
>Oh. You mean that should be something like
>
>"This patch fixes this by restart page_vma_mapped_walk() after
>split_huge_pmd_locked(). We cannot simply return "true" to fix the problem,
>as that would affect another case:
>split_huge_pmd_locked()->folio_try_share_anon_rmap_pmd() can failed and leave
>the folio mapped through PTEs; we would return "true" from
>try_to_migrate_one() in that case as well. While that is mostly harmless, we
>could end up walking the rmap, wasting some cycles.".
>
The changelog is updated accordingly.
>
>> Yeah, since it is an unshared folio, the return
>> value of try_to_migrate_one() does not matter. This fix makes try_to_migrate_one()
>> return false.
>
>Right, it's not really problematic. We could end up walking the rmap and burn
>some cycles.
>
>>
>> >
>> >
>> > (maybe we should get rid of the usage of booleans here at some point, an enum like abort/continue would have been much clearer)
>> >
>> > > Restart
>> > > page_vma_mapped_walk() and let try_to_migrate_one() try on each PTE
>> > > again and fail try_to_migrate() early if it fails.
>> > >
>> > > Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> > > Fixes: 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and split_huge_pmd_locked()")
>> > > Cc: Gavin Guo <gavinguo@igalia.com>
>> > > Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>> > > Cc: Zi Yan <ziy@nvidia.com>
>> > > Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> > > Cc: Lance Yang <lance.yang@linux.dev>
>> > > Cc: <stable@vger.kernel.org>
>> > >
>> > > ---
>> > > v2:
>> > > * restart page_vma_mapped_walk() after split_huge_pmd_locked()
>> > > ---
>> > > mm/rmap.c | 11 ++++++++---
>> > > 1 file changed, 8 insertions(+), 3 deletions(-)
>> > >
>> > > diff --git a/mm/rmap.c b/mm/rmap.c
>> > > index 618df3385c8b..5b853ec8901d 100644
>> > > --- a/mm/rmap.c
>> > > +++ b/mm/rmap.c
>> > > @@ -2446,11 +2446,16 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>> > > __maybe_unused pmd_t pmdval;
>> > > if (flags & TTU_SPLIT_HUGE_PMD) {
>> > > + /*
>> > > + * After split_huge_pmd_locked(), restart the
>> > > + * walk to detect PageAnonExclusive handling
>> > > + * failure in __split_huge_pmd_locked().
>> > > + */
>> > > split_huge_pmd_locked(vma, pvmw.address,
>> > > pvmw.pmd, true);
>> > > - ret = false;
>> > > - page_vma_mapped_walk_done(&pvmw);
>> > > - break;
>> > > + flags &= ~TTU_SPLIT_HUGE_PMD;
>> > > + page_vma_mapped_walk_restart(&pvmw);
>> > > + continue;
>> > > }
>> >
>> > The change looks more consistent to what we have in try_to_unmap().
>> >
>> > But the explanation above is not quite right I think. And consequently the comment above as well.
>> >
>> > PAE being set implies "single PMD" -> unshared.
>>
>> The commit message might be improved with some additional context. The comment
>> above pairs with the comment in __split_huge_pmd_locked()
>> “In case we cannot clear PageAnonExclusive(), split the PMD
>> only and let try_to_migrate_one() fail later”. What is problem with it?
>
>With your explanation it's much clearer, thanks.
>
>I'd remove some details from the comments about PAE like:
>
>"split_huge_pmd_locked() might leave the folio mapped through PTEs. Retry the
>walk so we can detect this scenario and properly abort the walk."
>
The comment is updated accordingly.
>
>With some clarifications along those lines
>
>Acked-by: David Hildenbrand (arm) <david@kernel.org>
>
>--
>Cheers,
>
>David
--
Wei Yang
Help you, Help me
* Re: [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp
2026-02-04 19:42 ` Andrew Morton
@ 2026-02-05 3:04 ` Wei Yang
2026-02-05 3:13 ` Andrew Morton
0 siblings, 1 reply; 12+ messages in thread
From: Wei Yang @ 2026-02-05 3:04 UTC (permalink / raw)
To: Andrew Morton
Cc: Wei Yang, david, lorenzo.stoakes, riel, Liam.Howlett, vbabka,
harry.yoo, jannh, ziy, gavinguo, baolin.wang, linux-mm,
Lance Yang, stable
On Wed, Feb 04, 2026 at 11:42:17AM -0800, Andrew Morton wrote:
>On Wed, 4 Feb 2026 00:42:19 +0000 Wei Yang <richard.weiyang@gmail.com> wrote:
>
>> Commit 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and
>> split_huge_pmd_locked()") return false unconditionally after
>> split_huge_pmd_locked() which may fail early during try_to_migrate() for
>> shared thp. This will lead to unexpected folio split failure.
>>
>> One way to reproduce:
>>
>> Create an anonymous thp range and fork 512 children, so we have a
>> thp shared mapped in 513 processes. Then trigger folio split with
>> /sys/kernel/debug/split_huge_pages debugfs to split the thp folio to
>> order 0.
>>
>> Without the above commit, we can successfully split to order 0.
>> With the above commit, the folio is still a large folio.
>>
>> The reason is the above commit return false after split pmd
>> unconditionally in the first process and break try_to_migrate().
>>
>> The tricky thing in above reproduce method is current debugfs interface
>> leverage function split_huge_pages_pid(), which will iterate the whole
>> pmd range and do folio split on each base page address. This means it
>> will try 512 times, and each time split one pmd from pmd mapped to pte
>> mapped thp. If there are less than 512 shared mapped process,
>> the folio is still split successfully at last. But in real world, we
>> usually try it for once.
>>
>> This patch fixes this by restart page_vma_mapped_walk() after
>> split_huge_pmd_locked(). Because split_huge_pmd_locked() may fall back to
>> (freeze = false) if folio_try_share_anon_rmap_pmd() fails and the PMD is
>> just split instead of split to migration entry. Restart
>> page_vma_mapped_walk() and let try_to_migrate_one() try on each PTE
>> again and fail try_to_migrate() early if it fails.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Fixes: 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and split_huge_pmd_locked()")
>
>Cool, thanks.
>
>> Cc: Gavin Guo <gavinguo@igalia.com>
>> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: Lance Yang <lance.yang@linux.dev>
>> Cc: <stable@vger.kernel.org>
>
>Why cc:stable? In other words, what is the userspace-visible runtime
>effect of this bug?
Under memory pressure or on memory failure, we try to split the folio to
reclaim memory or to isolate the bad memory. If the split fails, we
leave some memory unusable.
I would put this in the changelog, if it looks good to you.
Since David suggested some changes to the comment and changelog, do you
prefer a v3?
--
Wei Yang
Help you, Help me
* Re: [Patch v2] mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp
2026-02-05 3:04 ` Wei Yang
@ 2026-02-05 3:13 ` Andrew Morton
0 siblings, 0 replies; 12+ messages in thread
From: Andrew Morton @ 2026-02-05 3:13 UTC (permalink / raw)
To: Wei Yang
Cc: david, lorenzo.stoakes, riel, Liam.Howlett, vbabka, harry.yoo,
jannh, ziy, gavinguo, baolin.wang, linux-mm, Lance Yang, stable
On Thu, 5 Feb 2026 03:04:21 +0000 Wei Yang <richard.weiyang@gmail.com> wrote:
> >> Cc: Gavin Guo <gavinguo@igalia.com>
> >> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
> >> Cc: Zi Yan <ziy@nvidia.com>
> >> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> >> Cc: Lance Yang <lance.yang@linux.dev>
> >> Cc: <stable@vger.kernel.org>
> >
> >Why cc:stable? In other words, what is the userspace-visible runtime
> >effect of this bug?
>
> On memory pressure or failure, we would try to split folio to reclaim or limit
> bad memory. If failed to split it, we will leave some memory unusable.
>
> I would put this in change log, if it looks good to you.
>
> As David mentioned some change in comment and change log, do you prefer a v3?
v3 would be good please. If the patch(set) was large and had been
under test for significant time then I think a delta is preferable.
But that isn't the case here.