From: Wei Yang <richard.weiyang@gmail.com>
To: Lance Yang <lance.yang@linux.dev>
Cc: Wei Yang <richard.weiyang@gmail.com>,
akpm@linux-foundation.org, david@redhat.com,
lorenzo.stoakes@oracle.com, ziy@nvidia.com,
baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com,
npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com,
baohua@kernel.org, wangkefeng.wang@huawei.com,
linux-mm@kvack.org, stable@vger.kernel.org
Subject: Re: [Patch v2] mm/huge_memory: add pmd folio to ds_queue in do_huge_zero_wp_pmd()
Date: Thu, 2 Oct 2025 03:17:43 +0000 [thread overview]
Message-ID: <20251002031743.4anbofbyym5tlwrt@master> (raw)
In-Reply-To: <fa3f9e82-c6c8-43f2-803f-b8bb0fe56f37@linux.dev>
On Thu, Oct 02, 2025 at 10:31:53AM +0800, Lance Yang wrote:
>
>
>On 2025/10/2 09:46, Wei Yang wrote:
>> On Thu, Oct 02, 2025 at 01:38:25AM +0000, Wei Yang wrote:
>> > We add the pmd folio to the ds_queue on the first page fault in
>> > __do_huge_pmd_anonymous_page(), so that we can split it under memory
>> > pressure. The same should apply to a pmd folio mapped during a wp
>> > page fault.
>> >
>> > Commit 1ced09e0331f ("mm: allocate THP on hugezeropage wp-fault") missed
>> > adding it to the ds_queue, which means the system may not reclaim enough
>> > memory under memory pressure even if the pmd folio is underused.
>> >
>> > Move deferred_split_folio() into map_anon_folio_pmd() to make the pmd
>> > folio installation consistent.
>> >
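(For reference, the change described above is roughly the following, with the
direct deferred_split_folio() call in __do_huge_pmd_anonymous_page() dropped
accordingly. This is a rough sketch based on the description, not the exact
hunk from the v2 patch:)

```
 static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
 		struct vm_area_struct *vma, unsigned long haddr)
 {
 	...
 	count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
+	/* every caller, including do_huge_zero_wp_pmd(), queues the folio */
+	deferred_split_folio(folio, false);
 }
```
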
>>
>> Since we moved deferred_split_folio() into map_anon_folio_pmd(), I am
>> wondering whether we can consolidate the process in collapse_huge_page()
>> as well.
>>
>> That is, use map_anon_folio_pmd() in collapse_huge_page(), but skip the
>> statistics adjustment.
>
>Yeah, that's a good idea :)
>
>We could add a simple bool is_fault parameter to map_anon_folio_pmd()
>to control the statistics.
>
>The fault paths would call it with true, and the collapse paths could
>then call it with false.
>
>Something like this:
>
>```
>diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>index 1b81680b4225..9924180a4a56 100644
>--- a/mm/huge_memory.c
>+++ b/mm/huge_memory.c
>@@ -1218,7 +1218,7 @@ static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
> }
>
> static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
>- struct vm_area_struct *vma, unsigned long haddr)
>+ struct vm_area_struct *vma, unsigned long haddr, bool is_fault)
> {
> pmd_t entry;
>
>@@ -1228,10 +1228,15 @@ static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
> folio_add_lru_vma(folio, vma);
> set_pmd_at(vma->vm_mm, haddr, pmd, entry);
> update_mmu_cache_pmd(vma, haddr, pmd);
>- add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
>- count_vm_event(THP_FAULT_ALLOC);
>- count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
>- count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
>+
>+ if (is_fault) {
>+ add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
>+ count_vm_event(THP_FAULT_ALLOC);
>+ count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
>+ count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
>+ }
>+
>+ deferred_split_folio(folio, false);
> }
>
> static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>index d0957648db19..2eddd5a60e48 100644
>--- a/mm/khugepaged.c
>+++ b/mm/khugepaged.c
>@@ -1227,17 +1227,10 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> __folio_mark_uptodate(folio);
> pgtable = pmd_pgtable(_pmd);
>
>- _pmd = folio_mk_pmd(folio, vma->vm_page_prot);
>- _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
>-
> spin_lock(pmd_ptl);
> BUG_ON(!pmd_none(*pmd));
>- folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
>- folio_add_lru_vma(folio, vma);
> pgtable_trans_huge_deposit(mm, pmd, pgtable);
>- set_pmd_at(mm, address, pmd, _pmd);
>- update_mmu_cache_pmd(vma, address, pmd);
>- deferred_split_folio(folio, false);
>+ map_anon_folio_pmd(folio, pmd, vma, address, false);
> spin_unlock(pmd_ptl);
>
> folio = NULL;
>```
>
>Untested, though.
>
This is the same as what I had in mind.
I will prepare a patch for it.
--
Wei Yang
Help you, Help me
Thread overview: 18+ messages
2025-10-02 1:38 Wei Yang
2025-10-02 1:46 ` Wei Yang
2025-10-02 2:31 ` Lance Yang
2025-10-02 3:17 ` Wei Yang [this message]
2025-10-02 7:16 ` David Hildenbrand
2025-10-02 7:27 ` Lance Yang
2025-10-02 7:14 ` David Hildenbrand
2025-10-02 7:26 ` Lance Yang
2025-10-03 7:54 ` Dev Jain
2025-10-03 13:49 ` Lance Yang
2025-10-03 14:08 ` Zi Yan
2025-10-03 15:30 ` Usama Arif
2025-10-03 17:11 ` Zi Yan
2025-10-04 2:13 ` Wei Yang
2025-10-04 2:04 ` Wei Yang
2025-10-04 2:37 ` Lance Yang
2025-10-03 13:53 ` Zi Yan
2025-10-14 3:49 ` Baolin Wang