From: Usama Arif <usamaarif642@gmail.com>
To: Zi Yan <ziy@nvidia.com>, Lance Yang <lance.yang@linux.dev>
Cc: Wei Yang <richard.weiyang@gmail.com>,
	linux-mm@kvack.org, baolin.wang@linux.alibaba.com,
	lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
	wangkefeng.wang@huawei.com, stable@vger.kernel.org,
	ryan.roberts@arm.com, dev.jain@arm.com, npache@redhat.com,
	baohua@kernel.org, akpm@linux-foundation.org, david@redhat.com
Subject: Re: [Patch v2] mm/huge_memory: add pmd folio to ds_queue in do_huge_zero_wp_pmd()
Date: Fri, 3 Oct 2025 16:30:23 +0100	[thread overview]
Message-ID: <29ac3e02-fb60-47ed-9834-033604744624@gmail.com> (raw)
In-Reply-To: <1286D3DE-8F53-4B64-840F-A598B130DF13@nvidia.com>



On 03/10/2025 15:08, Zi Yan wrote:
> On 3 Oct 2025, at 9:49, Lance Yang wrote:
> 
>> Hey Wei,
>>
>> On 2025/10/2 09:38, Wei Yang wrote:
>>> We add the pmd folio to ds_queue on the first page fault in
>>> __do_huge_pmd_anonymous_page(), so that it can be split under memory
>>> pressure. The same should apply to a pmd folio installed during a wp
>>> page fault.
>>>
>>> Commit 1ced09e0331f ("mm: allocate THP on hugezeropage wp-fault") missed
>>> adding it to ds_queue, which means the system may not reclaim enough memory
>>
>> IIRC, it was commit dafff3f4c850 ("mm: split underused THPs") that
>> started unconditionally adding all new anon THPs to _deferred_list :)
>>
>>> under memory pressure even when the pmd folio is underused.
>>>
>>> Move deferred_split_folio() into map_anon_folio_pmd() so that pmd
>>> folio installation is consistent across both fault paths.
>>>
>>> Fixes: 1ced09e0331f ("mm: allocate THP on hugezeropage wp-fault")
>>
>> Shouldn't this rather be the following?
>>
>> Fixes: dafff3f4c850 ("mm: split underused THPs")
> 
> Yes, I agree. In this case, the patch looks more like an optimization
> for the split-underused-THPs mechanism.
> 
> One observation on this change: right after the zero pmd wp fault, the
> deferred split queue could be scanned and the newly added pmd folio
> would be split, since it is all zeroes except one subpage. This means
> we should probably allocate a base folio for the zero pmd wp fault and
> map the rest to the zero page from the beginning, when splitting of
> underused THPs is enabled, to avoid this long round trip. The downside
> is that a user app cannot get a pmd folio even if it intends to write
> data into the entire folio.
> 
> Usama might be able to give some insight here.
> 

Thanks for CCing me, Zi!

Hmm, I think the downside of not having a PMD folio probably outweighs
the cost of splitting a zero-filled page? Of course I don't have any
numbers to back that up, but that would be my initial guess.
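
For reference, a rough sketch of the fallback Zi describes might look
like this (hypothetical and untested; split_underused_thp_enabled() and
wp_fault_with_base_folio() are made-up helpers, not real kernel API):

	/*
	 * Sketch only: on a wp fault against the huge zero page, when
	 * underused-THP splitting is enabled, allocate one base folio
	 * for the faulting address and keep the remaining PTEs on the
	 * shared zero page, instead of allocating a PMD folio that the
	 * deferred split shrinker would tear down almost immediately.
	 */
	static vm_fault_t huge_zero_wp_fault(struct vm_fault *vmf)
	{
		if (split_underused_thp_enabled())	/* hypothetical */
			return wp_fault_with_base_folio(vmf);	/* hypothetical */

		/* Existing path: allocate and map a full PMD folio. */
		return do_huge_zero_wp_pmd(vmf);
	}

That sketch trades the deferred-split round trip for losing the PMD
mapping up front, which is the cost I would worry about.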

Also:

Acked-by: Usama Arif <usamaarif642@gmail.com>


> 
>>
>> Thanks,
>> Lance
>>
>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>>> Cc: David Hildenbrand <david@redhat.com>
>>> Cc: Lance Yang <lance.yang@linux.dev>
>>> Cc: Dev Jain <dev.jain@arm.com>
>>> Cc: <stable@vger.kernel.org>
>>>
>>> ---
>>> v2:
>>>    * add fix, cc stable and put description about the flow of current
>>>      code
>>>    * move deferred_split_folio() into map_anon_folio_pmd()
>>> ---
>>>   mm/huge_memory.c | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 1b81680b4225..f13de93637bf 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -1232,6 +1232,7 @@ static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
>>>   	count_vm_event(THP_FAULT_ALLOC);
>>>   	count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
>>>   	count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
>>> +	deferred_split_folio(folio, false);
>>>   }
>>>    static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>>> @@ -1272,7 +1273,6 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>>>   		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
>>>   		map_anon_folio_pmd(folio, vmf->pmd, vma, haddr);
>>>   		mm_inc_nr_ptes(vma->vm_mm);
>>> -		deferred_split_folio(folio, false);
>>>   		spin_unlock(vmf->ptl);
>>>   	}
>>>
> 
> 
> Best Regards,
> Yan, Zi



