Subject: Re: [Patch v2] mm/huge_memory: add pmd folio to ds_queue in do_huge_zero_wp_pmd()
Date: Thu, 2 Oct 2025 10:31:53 +0800
From: Lance Yang <lance.yang@linux.dev>
To: Wei Yang
Cc: akpm@linux-foundation.org, david@redhat.com, lorenzo.stoakes@oracle.com,
    ziy@nvidia.com, baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com,
    npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com, baohua@kernel.org,
    wangkefeng.wang@huawei.com, linux-mm@kvack.org, stable@vger.kernel.org
References: <20251002013825.20448-1-richard.weiyang@gmail.com> <20251002014604.d2ryohvtrdfn7mvf@master>
In-Reply-To: <20251002014604.d2ryohvtrdfn7mvf@master>

On 2025/10/2 09:46, Wei Yang wrote:
> On Thu, Oct 02, 2025 at 01:38:25AM +0000, Wei Yang wrote:
>> We add the pmd folio to ds_queue on the first page fault in
>> __do_huge_pmd_anonymous_page(), so that we can split it under
>> memory pressure. The same should apply to a pmd folio installed
>> during a wp page fault.
>>
>> Commit 1ced09e0331f ("mm: allocate THP on hugezeropage wp-fault") missed
>> adding it to ds_queue, which means the system may not reclaim enough
>> memory under memory pressure even if the pmd folio is underused.
>>
>> Move deferred_split_folio() into map_anon_folio_pmd() to make the pmd
>> folio installation consistent.
>>
>
> Since we are moving deferred_split_folio() into map_anon_folio_pmd(), I am
> thinking about whether we can consolidate the process in collapse_huge_page().
>
> Use map_anon_folio_pmd() in collapse_huge_page(), but skip the statistics
> adjustment.
Yeah, that's a good idea :) We could add a simple bool is_fault parameter
to map_anon_folio_pmd() to control the statistics. The fault paths would
call it with true, and the collapse path could then call it with false.

Something like this:

```
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1b81680b4225..9924180a4a56 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1218,7 +1218,7 @@ static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
 }
 
 static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
-		struct vm_area_struct *vma, unsigned long haddr)
+		struct vm_area_struct *vma, unsigned long haddr, bool is_fault)
 {
 	pmd_t entry;
 
@@ -1228,10 +1228,15 @@ static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
 	folio_add_lru_vma(folio, vma);
 	set_pmd_at(vma->vm_mm, haddr, pmd, entry);
 	update_mmu_cache_pmd(vma, haddr, pmd);
-	add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
-	count_vm_event(THP_FAULT_ALLOC);
-	count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
-	count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
+
+	if (is_fault) {
+		add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+		count_vm_event(THP_FAULT_ALLOC);
+		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
+		count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
+	}
+
+	deferred_split_folio(folio, false);
 }
 
 static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index d0957648db19..2eddd5a60e48 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1227,17 +1227,10 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	__folio_mark_uptodate(folio);
 	pgtable = pmd_pgtable(_pmd);
 
-	_pmd = folio_mk_pmd(folio, vma->vm_page_prot);
-	_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
-
 	spin_lock(pmd_ptl);
 	BUG_ON(!pmd_none(*pmd));
-	folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
-	folio_add_lru_vma(folio, vma);
 	pgtable_trans_huge_deposit(mm, pmd, pgtable);
-	set_pmd_at(mm, address, pmd, _pmd);
-	update_mmu_cache_pmd(vma, address, pmd);
-	deferred_split_folio(folio, false);
+	map_anon_folio_pmd(folio, pmd, vma, address, false);
 	spin_unlock(pmd_ptl);
 	folio = NULL;
 
```

Untested, though.

>
>> Fixes: 1ced09e0331f ("mm: allocate THP on hugezeropage wp-fault")
>> Signed-off-by: Wei Yang
>> Cc: David Hildenbrand
>> Cc: Lance Yang
>> Cc: Dev Jain
>> Cc:
>>
>
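One more thought: the sketch above only changes the map_anon_folio_pmd()
signature, so the existing fault-path callers would need the new argument as
well. Roughly like below — untested, and it assumes the call sites in
__do_huge_pmd_anonymous_page() and do_huge_zero_wp_pmd() still pass
(folio, vmf->pmd, vma, haddr) as in the refactored helper:

```
/*
 * Caller sketch (not from the patch; assumes the current call sites
 * look like this after the map_anon_folio_pmd() refactor):
 */

/* __do_huge_pmd_anonymous_page(): first anon fault, keep the THP fault stats */
map_anon_folio_pmd(folio, vmf->pmd, vma, haddr, true);

/* do_huge_zero_wp_pmd(): wp fault on the huge zero page is still a fault */
map_anon_folio_pmd(folio, vmf->pmd, vma, haddr, true);

/* collapse_huge_page(): khugepaged collapse skips the fault statistics */
map_anon_folio_pmd(folio, pmd, vma, address, false);
```

That would keep all three pmd-install paths going through the same helper,
with only the statistics guarded by is_fault.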