* [Patch v3 0/2] mm/huge_memory: cleanup for pmd folio installation
@ 2025-10-08 9:54 Wei Yang
2025-10-08 9:54 ` [Patch v3 1/2] mm/huge_memory: add pmd folio to ds_queue in do_huge_zero_wp_pmd() Wei Yang
2025-10-08 9:54 ` [Patch v3 2/2] mm/khugepaged: unify pmd folio installation with map_anon_folio_pmd() Wei Yang
0 siblings, 2 replies; 6+ messages in thread
From: Wei Yang @ 2025-10-08 9:54 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang,
wangkefeng.wang
Cc: linux-mm, usamaarif642, willy, Wei Yang
This is mostly a resend of two previously separate patches [1][2].
Since they both modify the same file, applying only one of them would
cause conflicts. To make review smoother, they are resent together as
suggested by Zi Yan.
Patch [1] adds the pmd folio to ds_queue during do_huge_zero_wp_pmd(),
which was previously missed.
Patch [2] unifies pmd folio installation in collapse_huge_page() with
the other call sites.
There is no code change since the previous version, so I have preserved
the Reviewed-by and Acked-by tags.
They are rebased on current akpm/mm-new branch with base commit:
1de81dd7733c 2025-10-07 mm/page_alloc: batch page freeing in free_frozen_page_commit
[1]: https://lkml.kernel.org/r/20251002013825.20448-1-richard.weiyang@gmail.com
[2]: https://lkml.kernel.org/r/20251007005022.24413-1-richard.weiyang@gmail.com
Wei Yang (2):
mm/huge_memory: add pmd folio to ds_queue in do_huge_zero_wp_pmd()
mm/khugepaged: unify pmd folio installation with map_anon_folio_pmd()
include/linux/huge_mm.h | 7 +++++++
mm/huge_memory.c | 14 ++++++++++----
mm/khugepaged.c | 9 +--------
3 files changed, 18 insertions(+), 12 deletions(-)
--
2.34.1
* [Patch v3 1/2] mm/huge_memory: add pmd folio to ds_queue in do_huge_zero_wp_pmd()
  2025-10-08  9:54 [Patch v3 0/2] mm/huge_memory: cleanup for pmd folio installation Wei Yang
@ 2025-10-08  9:54 ` Wei Yang
  2025-10-08  9:54 ` [Patch v3 2/2] mm/khugepaged: unify pmd folio installation with map_anon_folio_pmd() Wei Yang
  1 sibling, 0 replies; 6+ messages in thread
From: Wei Yang @ 2025-10-08 9:54 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang,
	wangkefeng.wang
Cc: linux-mm, usamaarif642, willy, Wei Yang, stable

We add the pmd folio to ds_queue on the first page fault in
__do_huge_pmd_anonymous_page(), so that it can be split in case of
memory pressure. The same should hold for a pmd folio installed during a
wp page fault.

Commit 1ced09e0331f ("mm: allocate THP on hugezeropage wp-fault") missed
adding it to ds_queue, which means the system may not reclaim enough
memory under memory pressure even when the pmd folio is underused.

Move deferred_split_folio() into map_anon_folio_pmd() to make pmd folio
installation consistent.

Fixes: 1ced09e0331f ("mm: allocate THP on hugezeropage wp-fault")
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Lance Yang <lance.yang@linux.dev>
Cc: Dev Jain <dev.jain@arm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Acked-by: Usama Arif <usamaarif642@gmail.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Cc: <stable@vger.kernel.org>

---
v3:
* rebase on latest mm-new
* gather rb and acked-by
v2:
* add fix, cc stable and put description about the flow of current code
* move deferred_split_folio() into map_anon_folio_pmd()
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 002922bb6e42..e86699306c5e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1317,6 +1317,7 @@ static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
 	count_vm_event(THP_FAULT_ALLOC);
 	count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
 	count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
+	deferred_split_folio(folio, false);
 }
 
 static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
@@ -1357,7 +1358,6 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 	pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
 	map_anon_folio_pmd(folio, vmf->pmd, vma, haddr);
 	mm_inc_nr_ptes(vma->vm_mm);
-	deferred_split_folio(folio, false);
 	spin_unlock(vmf->ptl);
 }
-- 
2.34.1
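[Editor's note] The shape of this fix — moving a bookkeeping step into the one helper every caller shares, so that no path can forget it — can be sketched in plain C. This is a userspace analogue under hypothetical names, not the kernel code itself:

```c
#include <assert.h>

/*
 * Userspace analogue (hypothetical names, not kernel code): before the
 * patch, the "register for deferred split" step lived only in the
 * first-fault path, so the wp-fault path silently skipped it.  Moving the
 * step into the shared install helper means no caller can miss it.
 */
static int ds_queue_len;	/* stands in for the deferred-split queue */

static void deferred_split_register(void)
{
	ds_queue_len++;
}

/* Shared install helper: every path that maps a pmd folio goes through it. */
static void map_anon_folio(void)
{
	/* ...set up the mapping, bump counters... */
	deferred_split_register();	/* the step the wp path used to miss */
}

static void first_fault(void) { map_anon_folio(); }
static void wp_fault(void)    { map_anon_folio(); }
```

With this structure, adding a third caller later would also get the registration for free, which is the consistency argument the commit message makes.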
* [Patch v3 2/2] mm/khugepaged: unify pmd folio installation with map_anon_folio_pmd()
  2025-10-08  9:54 [Patch v3 0/2] mm/huge_memory: cleanup for pmd folio installation Wei Yang
  2025-10-08  9:54 ` [Patch v3 1/2] mm/huge_memory: add pmd folio to ds_queue in do_huge_zero_wp_pmd() Wei Yang
@ 2025-10-08  9:54 ` Wei Yang
  2025-10-08 14:24   ` Dev Jain
  2025-10-08 14:45   ` David Hildenbrand
  1 sibling, 2 replies; 6+ messages in thread
From: Wei Yang @ 2025-10-08 9:54 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang,
	wangkefeng.wang
Cc: linux-mm, usamaarif642, willy, Wei Yang

Currently we install a pmd folio with map_anon_folio_pmd() in
__do_huge_pmd_anonymous_page() and do_huge_zero_wp_pmd(), while in
collapse_huge_page() it is done with identical code except for the
statistics adjustment.

Unify the process by using map_anon_folio_pmd() to install the pmd
folio. Split it into map_anon_folio_pmd_pf() and
map_anon_folio_pmd_nopf(), to be used in the page-fault and
non-page-fault paths respectively.

No functional change is intended.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Lance Yang <lance.yang@linux.dev>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Usama Arif <usamaarif642@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Acked-by: Lance Yang <lance.yang@linux.dev>

---
v3:
* add static inline and put bracket into separate lines
* rebase on latest mm-new
v2:
* split map_anon_folio_pmd_[no]pf() suggested by Matthew
---
 include/linux/huge_mm.h |  7 +++++++
 mm/huge_memory.c        | 14 ++++++++++----
 mm/khugepaged.c         |  9 +--------
 3 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index bb48cb50c0ec..588e3522a1d0 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -542,6 +542,8 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
 			   pmd_t *pmd, bool freeze);
 bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
 			   pmd_t *pmdp, struct folio *folio);
+void map_anon_folio_pmd_nopf(struct folio *folio, pmd_t *pmd,
+		struct vm_area_struct *vma, unsigned long haddr);
 
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
 
@@ -633,6 +635,11 @@ static inline bool unmap_huge_pmd_locked(struct vm_area_struct *vma,
 	return false;
 }
 
+static inline void map_anon_folio_pmd_nopf(struct folio *folio, pmd_t *pmd,
+		struct vm_area_struct *vma, unsigned long haddr)
+{
+}
+
 #define split_huge_pud(__vma, __pmd, __address)	\
 	do { } while (0)
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e86699306c5e..09198906667b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1302,7 +1302,7 @@ static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
 	return folio;
 }
 
-static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
+void map_anon_folio_pmd_nopf(struct folio *folio, pmd_t *pmd,
 		struct vm_area_struct *vma, unsigned long haddr)
 {
 	pmd_t entry;
@@ -1313,11 +1313,17 @@ static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
 	folio_add_lru_vma(folio, vma);
 	set_pmd_at(vma->vm_mm, haddr, pmd, entry);
 	update_mmu_cache_pmd(vma, haddr, pmd);
+	deferred_split_folio(folio, false);
+}
+
+static void map_anon_folio_pmd_pf(struct folio *folio, pmd_t *pmd,
+		struct vm_area_struct *vma, unsigned long haddr)
+{
+	map_anon_folio_pmd_nopf(folio, pmd, vma, haddr);
 	add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
 	count_vm_event(THP_FAULT_ALLOC);
 	count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
 	count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
-	deferred_split_folio(folio, false);
 }
 
 static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
@@ -1356,7 +1362,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 		return ret;
 	}
 	pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
-	map_anon_folio_pmd(folio, vmf->pmd, vma, haddr);
+	map_anon_folio_pmd_pf(folio, vmf->pmd, vma, haddr);
 	mm_inc_nr_ptes(vma->vm_mm);
 	spin_unlock(vmf->ptl);
 }
@@ -1962,7 +1968,7 @@ static vm_fault_t do_huge_zero_wp_pmd(struct vm_fault *vmf)
 	if (ret)
 		goto release;
 	(void)pmdp_huge_clear_flush(vma, haddr, vmf->pmd);
-	map_anon_folio_pmd(folio, vmf->pmd, vma, haddr);
+	map_anon_folio_pmd_pf(folio, vmf->pmd, vma, haddr);
 	goto unlock;
 release:
 	folio_put(folio);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index abe54f0043c7..e947b96e1443 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1224,17 +1224,10 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	__folio_mark_uptodate(folio);
 	pgtable = pmd_pgtable(_pmd);
 
-	_pmd = folio_mk_pmd(folio, vma->vm_page_prot);
-	_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
-
 	spin_lock(pmd_ptl);
 	BUG_ON(!pmd_none(*pmd));
-	folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
-	folio_add_lru_vma(folio, vma);
 	pgtable_trans_huge_deposit(mm, pmd, pgtable);
-	set_pmd_at(mm, address, pmd, _pmd);
-	update_mmu_cache_pmd(vma, address, pmd);
-	deferred_split_folio(folio, false);
+	map_anon_folio_pmd_nopf(folio, pmd, vma, address);
 	spin_unlock(pmd_ptl);
 	folio = NULL;
-- 
2.34.1
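[Editor's note] The _pf/_nopf split above follows a common refactoring pattern: a core helper carries the work common to all callers, and a thin wrapper layers path-specific accounting on top. A minimal userspace sketch of that pattern (hypothetical names and counters, not the kernel implementation):

```c
#include <assert.h>

/*
 * Userspace analogue of the _pf/_nopf split (hypothetical names): the core
 * helper does the install work shared by every caller, while the wrapper
 * adds the page-fault-only statistics.  A khugepaged-style collapse caller
 * uses the core helper directly and so skips the fault accounting.
 */
static int installs, fault_stats;

static void map_anon_folio_nopf(void)
{
	installs++;		/* core install work shared by every path */
}

static void map_anon_folio_pf(void)
{
	map_anon_folio_nopf();	/* reuse the core helper... */
	fault_stats++;		/* ...then add fault-path-only accounting */
}
```

This is why the patch can claim "no functional change": the collapse path runs exactly the shared steps it ran before, just through the new helper, and the fault-only counters stay confined to the wrapper.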
* Re: [Patch v3 2/2] mm/khugepaged: unify pmd folio installation with map_anon_folio_pmd()
  2025-10-08  9:54 ` [Patch v3 2/2] mm/khugepaged: unify pmd folio installation with map_anon_folio_pmd() Wei Yang
@ 2025-10-08 14:24   ` Dev Jain
  2025-10-08 14:45   ` David Hildenbrand
  1 sibling, 0 replies; 6+ messages in thread
From: Dev Jain @ 2025-10-08 14:24 UTC (permalink / raw)
To: Wei Yang, akpm, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, baohua, lance.yang,
	wangkefeng.wang
Cc: linux-mm, usamaarif642, willy

On 08/10/25 3:24 pm, Wei Yang wrote:
> Currently we install a pmd folio with map_anon_folio_pmd() in
> __do_huge_pmd_anonymous_page() and do_huge_zero_wp_pmd(), while in
> collapse_huge_page() it is done with identical code except for the
> statistics adjustment.
>
> Unify the process by using map_anon_folio_pmd() to install the pmd
> folio. Split it into map_anon_folio_pmd_pf() and
> map_anon_folio_pmd_nopf(), to be used in the page-fault and
> non-page-fault paths respectively.
>
> No functional change is intended.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Usama Arif <usamaarif642@gmail.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Acked-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Acked-by: Lance Yang <lance.yang@linux.dev>
>
> ---

LGTM

Reviewed-by: Dev Jain <dev.jain@arm.com>
* Re: [Patch v3 2/2] mm/khugepaged: unify pmd folio installation with map_anon_folio_pmd()
  2025-10-08  9:54 ` [Patch v3 2/2] mm/khugepaged: unify pmd folio installation with map_anon_folio_pmd() Wei Yang
  2025-10-08 14:24   ` Dev Jain
@ 2025-10-08 14:45   ` David Hildenbrand
  2025-10-09  1:46     ` Wei Yang
  1 sibling, 1 reply; 6+ messages in thread
From: David Hildenbrand @ 2025-10-08 14:45 UTC (permalink / raw)
To: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang,
	wangkefeng.wang
Cc: linux-mm, usamaarif642, willy

On 08.10.25 11:54, Wei Yang wrote:
> Currently we install a pmd folio with map_anon_folio_pmd() in
> __do_huge_pmd_anonymous_page() and do_huge_zero_wp_pmd(), while in
> collapse_huge_page() it is done with identical code except for the
> statistics adjustment.
>
> Unify the process by using map_anon_folio_pmd() to install the pmd
> folio. Split it into map_anon_folio_pmd_pf() and
> map_anon_folio_pmd_nopf(), to be used in the page-fault and
> non-page-fault paths respectively.
>
> No functional change is intended.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Usama Arif <usamaarif642@gmail.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Acked-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Acked-by: Lance Yang <lance.yang@linux.dev>
>
> ---
> v3:
> * add static inline and put bracket into separate lines
> * rebase on latest mm-new
> v2:
> * split map_anon_folio_pmd_[no]pf() suggested by Matthew
> ---
>  include/linux/huge_mm.h |  7 +++++++
>  mm/huge_memory.c        | 14 ++++++++++----
>  mm/khugepaged.c         |  9 +--------
>  3 files changed, 18 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index bb48cb50c0ec..588e3522a1d0 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -542,6 +542,8 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
>  			   pmd_t *pmd, bool freeze);
>  bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
>  			   pmd_t *pmdp, struct folio *folio);
> +void map_anon_folio_pmd_nopf(struct folio *folio, pmd_t *pmd,
> +		struct vm_area_struct *vma, unsigned long haddr);
>
>  #else /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> @@ -633,6 +635,11 @@ static inline bool unmap_huge_pmd_locked(struct vm_area_struct *vma,
>  	return false;
>  }
>
> +static inline void map_anon_folio_pmd_nopf(struct folio *folio, pmd_t *pmd,
> +		struct vm_area_struct *vma, unsigned long haddr)
> +{
> +}
> +

Thinking again, maybe you don't even need this stub? Any calling code
should be guarded by CONFIG_TRANSPARENT_HUGEPAGE.

-- 
Cheers

David / dhildenb
* Re: [Patch v3 2/2] mm/khugepaged: unify pmd folio installation with map_anon_folio_pmd()
  2025-10-08 14:45   ` David Hildenbrand
@ 2025-10-09  1:46     ` Wei Yang
  0 siblings, 0 replies; 6+ messages in thread
From: Wei Yang @ 2025-10-09 1:46 UTC (permalink / raw)
To: David Hildenbrand
Cc: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang,
	wangkefeng.wang, linux-mm, usamaarif642, willy

On Wed, Oct 08, 2025 at 04:45:52PM +0200, David Hildenbrand wrote:
>On 08.10.25 11:54, Wei Yang wrote:
>> Currently we install a pmd folio with map_anon_folio_pmd() in
>> __do_huge_pmd_anonymous_page() and do_huge_zero_wp_pmd(), while in
>> collapse_huge_page() it is done with identical code except for the
>> statistics adjustment.
>>
>> Unify the process by using map_anon_folio_pmd() to install the pmd
>> folio. Split it into map_anon_folio_pmd_pf() and
>> map_anon_folio_pmd_nopf(), to be used in the page-fault and
>> non-page-fault paths respectively.
>>
>> No functional change is intended.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Acked-by: David Hildenbrand <david@redhat.com>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>> Acked-by: Lance Yang <lance.yang@linux.dev>
>>
>> ---
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index bb48cb50c0ec..588e3522a1d0 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -542,6 +542,8 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
>> 			   pmd_t *pmd, bool freeze);
>>  bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
>> 			   pmd_t *pmdp, struct folio *folio);
>> +void map_anon_folio_pmd_nopf(struct folio *folio, pmd_t *pmd,
>> +		struct vm_area_struct *vma, unsigned long haddr);
>>
>>  #else /* CONFIG_TRANSPARENT_HUGEPAGE */
>>
>> @@ -633,6 +635,11 @@ static inline bool unmap_huge_pmd_locked(struct vm_area_struct *vma,
>>  	return false;
>>  }
>>
>> +static inline void map_anon_folio_pmd_nopf(struct folio *folio, pmd_t *pmd,
>> +		struct vm_area_struct *vma, unsigned long haddr)
>> +{
>> +}
>> +
>
>Thinking again, maybe you don't even need this stub? Any calling code should
>be guarded by CONFIG_TRANSPARENT_HUGEPAGE.
>

I think you are right.

I have removed the stub and built both with and without
CONFIG_TRANSPARENT_HUGEPAGE; both are fine.

@Andrew, would you mind removing the above static inline stub?

Thanks.

>-- 
>Cheers
>
>David / dhildenb

-- 
Wei Yang
Help you, Help me