Message-ID: <8c1d6e06-d84b-4be7-81c8-76e2d8fb9883@linux.alibaba.com>
Date: Tue, 7 May 2024 17:33:11 +0800
Subject: Re: [PATCH v4 3/3] mm/vmscan: avoid split lazyfree THP during shrink_folio_list()
To: Lance Yang
Cc: akpm@linux-foundation.org, willy@infradead.org, sj@kernel.org,
    maskray@google.com, ziy@nvidia.com, ryan.roberts@arm.com,
    david@redhat.com, 21cnbao@gmail.com, mhocko@suse.com,
    fengwei.yin@intel.com, zokeefe@google.com, shy828301@gmail.com,
    xiehuan09@gmail.com, libang.li@antgroup.com, wangkefeng.wang@huawei.com,
    songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240501042700.83974-1-ioworker0@gmail.com> <20240501042700.83974-4-ioworker0@gmail.com>
From: Baolin Wang

On 2024/5/7 16:26, Lance Yang wrote:
> On Tue, May 7, 2024 at 2:32 PM Lance Yang wrote:
>>
>> Hey Baolin,
>>
>> Thanks a lot for taking time to review!
>>
>> On Tue, May 7, 2024 at 12:01 PM Baolin Wang
>> wrote:
>>>
>>> On 2024/5/1 12:27, Lance Yang wrote:
>>>> When the user no longer requires the pages, they would use
>>>> madvise(MADV_FREE) to mark the pages as lazy free. Subsequently, they
>>>> typically would not re-write to that memory again.
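
As context for readers following along: the userspace side of this is just
an madvise(MADV_FREE) call on the range. A minimal sketch (my own example,
not part of this series), assuming a 2MiB anonymous mapping that can be
backed by a PMD-sized THP (alignment and THP settings permitting):

	#include <string.h>
	#include <sys/mman.h>

	#define LEN	(2UL << 20)	/* one PMD-sized (2MiB) region */

	int main(void)
	{
		char *buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (buf == MAP_FAILED)
			return 1;

		madvise(buf, LEN, MADV_HUGEPAGE);	/* hint: prefer THP */
		memset(buf, 0x5a, LEN);			/* fault in and dirty */

		/*
		 * Mark the range as lazy free: the contents may be discarded
		 * under memory pressure unless they are written to again.
		 */
		madvise(buf, LEN, MADV_FREE);
		return 0;
	}
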
>>>>
>>>> During memory reclaim, if we detect that the large folio and its PMD
>>>> are both still marked as clean and there are no unexpected references
>>>> (such as GUP), we can just discard the memory lazily, improving the
>>>> efficiency of memory reclamation in this case.
>>>>
>>>> On an Intel i5 CPU, reclaiming 1GiB of lazyfree THPs using
>>>> mem_cgroup_force_empty() results in the following runtimes in seconds
>>>> (shorter is better):
>>>>
>>>> --------------------------------------------
>>>> |     Old       |      New       |  Change  |
>>>> --------------------------------------------
>>>> |   0.683426    |    0.049197    |  -92.80% |
>>>> --------------------------------------------
>>>>
>>>> Suggested-by: Zi Yan
>>>> Suggested-by: David Hildenbrand
>>>> Signed-off-by: Lance Yang
>>>> ---
>>>>  include/linux/huge_mm.h |  9 +++++
>>>>  mm/huge_memory.c        | 73 +++++++++++++++++++++++++++++++++++++++++
>>>>  mm/rmap.c               |  3 ++
>>>>  3 files changed, 85 insertions(+)
>>>>
>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>>> index 38c4b5537715..017cee864080 100644
>>>> --- a/include/linux/huge_mm.h
>>>> +++ b/include/linux/huge_mm.h
>>>> @@ -411,6 +411,8 @@ static inline bool thp_migration_supported(void)
>>>>
>>>>  void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
>>>>                             pmd_t *pmd, bool freeze, struct folio *folio);
>>>> +bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
>>>> +                           pmd_t *pmdp, struct folio *folio);
>>>>
>>>>  static inline void align_huge_pmd_range(struct vm_area_struct *vma,
>>>>                                          unsigned long *start,
>>>> @@ -492,6 +494,13 @@ static inline void align_huge_pmd_range(struct vm_area_struct *vma,
>>>>                                                 unsigned long *start,
>>>>                                                 unsigned long *end) {}
>>>>
>>>> +static inline bool unmap_huge_pmd_locked(struct vm_area_struct *vma,
>>>> +                                         unsigned long addr, pmd_t *pmdp,
>>>> +                                         struct folio *folio)
>>>> +{
>>>> +	return false;
>>>> +}
>>>> +
>>>>  #define split_huge_pud(__vma, __pmd, __address)	\
>>>>  	do { } while (0)
>>>>
>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>> index 145505a1dd05..90fdef847a88 100644
>>>> --- a/mm/huge_memory.c
>>>> +++ b/mm/huge_memory.c
>>>> @@ -2690,6 +2690,79 @@ static void unmap_folio(struct folio *folio)
>>>>  	try_to_unmap_flush();
>>>>  }
>>>>
>>>> +static bool __discard_trans_pmd_locked(struct vm_area_struct *vma,
>>>> +				       unsigned long addr, pmd_t *pmdp,
>>>> +				       struct folio *folio)
>>>> +{
>>>> +	struct mm_struct *mm = vma->vm_mm;
>>>> +	int ref_count, map_count;
>>>> +	pmd_t orig_pmd = *pmdp;
>>>> +	struct mmu_gather tlb;
>>>> +	struct page *page;
>>>> +
>>>> +	if (pmd_dirty(orig_pmd) || folio_test_dirty(folio))
>>>> +		return false;
>>>> +	if (unlikely(!pmd_present(orig_pmd) || !pmd_trans_huge(orig_pmd)))
>>>> +		return false;
>>>> +
>>>> +	page = pmd_page(orig_pmd);
>>>> +	if (unlikely(page_folio(page) != folio))
>>>> +		return false;
>>>> +
>>>> +	tlb_gather_mmu(&tlb, mm);
>>>> +	orig_pmd = pmdp_huge_get_and_clear(mm, addr, pmdp);
>>>> +	tlb_remove_pmd_tlb_entry(&tlb, pmdp, addr);
>>>> +
>>>> +	/*
>>>> +	 * Syncing against concurrent GUP-fast:
>>>> +	 * - clear PMD; barrier; read refcount
>>>> +	 * - inc refcount; barrier; read PMD
>>>> +	 */
>>>> +	smp_mb();
>>>> +
>>>> +	ref_count = folio_ref_count(folio);
>>>> +	map_count = folio_mapcount(folio);
>>>> +
>>>> +	/*
>>>> +	 * Order reads for folio refcount and dirty flag
>>>> +	 * (see comments in __remove_mapping()).
>>>> +	 */
>>>> +	smp_rmb();
>>>> +
>>>> +	/*
>>>> +	 * If the PMD or folio is redirtied at this point, or if there are
>>>> +	 * unexpected references, we will give up to discard this folio
>>>> +	 * and remap it.
>>>> +	 *
>>>> +	 * The only folio refs must be one from isolation plus the rmap(s).
>>>> +	 */
>>>> +	if (ref_count != map_count + 1 || folio_test_dirty(folio) ||
>>>> +	    pmd_dirty(orig_pmd)) {
>>>> +		set_pmd_at(mm, addr, pmdp, orig_pmd);
>>>> +		return false;
>>>> +	}
>>>> +
>>>> +	folio_remove_rmap_pmd(folio, page, vma);
>>>> +	zap_deposited_table(mm, pmdp);
>>>> +	add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
>>>> +	folio_put(folio);
>>>
>>> IIUC, you missed handling the mlock vma, see mlock_drain_local() in
>>> try_to_unmap_one().
>>
>> Good spot!
>>
>> I suddenly realized that I overlooked another thing: if we detect that a
>> PMD-mapped THP is within the range of a VM_LOCKED VMA, we should check
>> whether the TTU_IGNORE_MLOCK flag is set in try_to_unmap_one(). If the
>> flag is set, we will remove the PMD mapping from the folio. Otherwise,
>> the folio should be mlocked, which avoids splitting the folio and then
>> mlocking each page again.
>
> My previous response above is flawed - sorry :(
>
> If we detect that a PMD-mapped THP is within the range of a VM_LOCKED
> VMA:
>
> 1) If the TTU_IGNORE_MLOCK flag is set, we will try to remove the
> PMD mapping from the folio, as this series has done.

Right.

> 2) If the flag is not set, the large folio should be mlocked to prevent
> it from being picked during memory reclaim? Currently, we just leave it
> as is and do not mlock it, IIUC.

Yes. Since commit 1acbc3f93614 ("mm: handle large folio when large folio
in VM_LOCKED VMA range"), large folios of an mlocked VMA are handled
during the page reclaim phase.

The original code already handles the mlock case after the PMD-mapped THP
is split in try_to_unmap_one():

		/*
		 * If the folio is in an mlock()d vma, we must not swap it out.
		 */
		if (!(flags & TTU_IGNORE_MLOCK) &&
		    (vma->vm_flags & VM_LOCKED)) {
			/* Restore the mlock which got missed */
			if (!folio_test_large(folio))
				mlock_vma_folio(folio, vma);
			page_vma_mapped_walk_done(&pvmw);
			ret = false;
			break;
		}
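
To illustrate the idea, an untested sketch of my own (not code from this
series; 'addr' stands for the PMD-mapped address the caller already has):
the PMD-level discard could be gated on the same mlock conditions, so an
mlock()d large folio keeps going through the existing handling above:

	/*
	 * Untested sketch: mirror the existing mlock check before the new
	 * PMD-level discard, so an mlock()d large folio is never discarded
	 * behind the back of the mlock accounting.
	 */
	if (!(flags & TTU_IGNORE_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
		/* Keep the PMD mapping; the existing mlock handling applies. */
	} else if (unmap_huge_pmd_locked(vma, addr, pvmw.pmd, folio)) {
		/* The lazyfree THP was discarded at the PMD level. */
	}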