* [PATCH v2 2/3] mm/huge_memory: Prevent huge zeropage refcount corruption in PMD move
@ 2026-02-26 14:16 Chris Down
2026-02-26 15:17 ` David Hildenbrand (Arm)
0 siblings, 1 reply; 2+ messages in thread
From: Chris Down @ 2026-02-26 14:16 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Matthew Wilcox, kernel-team, linux-mm,
linux-kernel, stable
After commit d82d09e48219 ("mm/huge_memory: mark PMD mappings of the
huge zero folio special"), moved huge zero PMDs must remain special so
vm_normal_page_pmd() continues to treat them as special mappings.
move_pages_huge_pmd() currently reconstructs the destination PMD in the
huge zero page branch, which drops PMD state such as pmd_special() on
architectures with CONFIG_ARCH_HAS_PTE_SPECIAL. As a result,
vm_normal_page_pmd() can treat the moved huge zero PMD as a normal page
and corrupt its refcount.
Instead of reconstructing the PMD from the folio, derive the destination
entry from src_pmdval after pmdp_huge_clear_flush(), then handle the PMD
metadata the same way move_huge_pmd() does for moved entries by marking
it soft-dirty and clearing uffd-wp.
Fixes: d82d09e48219 ("mm/huge_memory: mark PMD mappings of the huge zero folio special")
Cc: stable@vger.kernel.org
Signed-off-by: Chris Down <chris@chrisdown.name>
---
mm/huge_memory.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fed57951a7cd..8166b5e871ad 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2794,7 +2794,8 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
 		_dst_pmd = pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
 	} else {
 		src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
-		_dst_pmd = folio_mk_pmd(page_folio(src_page), dst_vma->vm_page_prot);
+		_dst_pmd = move_soft_dirty_pmd(src_pmdval);
+		_dst_pmd = clear_uffd_wp_pmd(_dst_pmd);
 	}
 	set_pmd_at(mm, dst_addr, dst_pmd, _dst_pmd);
--
2.51.2
* Re: [PATCH v2 2/3] mm/huge_memory: Prevent huge zeropage refcount corruption in PMD move
2026-02-26 14:16 [PATCH v2 2/3] mm/huge_memory: Prevent huge zeropage refcount corruption in PMD move Chris Down
@ 2026-02-26 15:17 ` David Hildenbrand (Arm)
0 siblings, 0 replies; 2+ messages in thread
From: David Hildenbrand (Arm) @ 2026-02-26 15:17 UTC (permalink / raw)
To: Chris Down, Andrew Morton
Cc: Matthew Wilcox, kernel-team, linux-mm, linux-kernel, stable
On 2/26/26 15:16, Chris Down wrote:
> After commit d82d09e48219 ("mm/huge_memory: mark PMD mappings of the
> huge zero folio special"), moved huge zero PMDs must remain special so
> vm_normal_page_pmd() continues to treat them as special mappings.
>
> move_pages_huge_pmd() currently reconstructs the destination PMD in the
> huge zero page branch, which drops PMD state such as pmd_special() on
> architectures with CONFIG_ARCH_HAS_PTE_SPECIAL. As a result,
> vm_normal_page_pmd() can treat the moved huge zero PMD as a normal page
> and corrupt its refcount.
>
> Instead of reconstructing the PMD from the folio, derive the destination
> entry from src_pmdval after pmdp_huge_clear_flush(), then handle the PMD
> metadata the same way move_huge_pmd() does for moved entries by marking
> it soft-dirty and clearing uffd-wp.
>
> Fixes: d82d09e48219 ("mm/huge_memory: mark PMD mappings of the huge zero folio special")
> Cc: stable@vger.kernel.org
> Signed-off-by: Chris Down <chris@chrisdown.name>
> ---
> mm/huge_memory.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index fed57951a7cd..8166b5e871ad 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2794,7 +2794,8 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
> 		_dst_pmd = pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
> 	} else {
> 		src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> -		_dst_pmd = folio_mk_pmd(page_folio(src_page), dst_vma->vm_page_prot);
> +		_dst_pmd = move_soft_dirty_pmd(src_pmdval);
> +		_dst_pmd = clear_uffd_wp_pmd(_dst_pmd);
Please squash that patch directly in #1.
It doesn't make sense to leave something partially fixed in #1. It's
been completely broken from the start. folio_mk_pmd() should never have
been used.
Apart from that, the end result LGTM, thanks.
--
Cheers,
David