From: Lance Yang <ioworker0@gmail.com>
Date: Tue, 30 Apr 2024 17:07:58 +0800
Subject: Re: [PATCH v3 3/3] mm/vmscan: avoid split lazyfree THP during shrink_folio_list()
To: Barry Song <21cnbao@gmail.com>
Cc: akpm@linux-foundation.org, willy@infradead.org, maskray@google.com,
 ziy@nvidia.com, ryan.roberts@arm.com, david@redhat.com, mhocko@suse.com,
 fengwei.yin@intel.com, zokeefe@google.com, shy828301@gmail.com,
 xiehuan09@gmail.com, libang.li@antgroup.com, wangkefeng.wang@huawei.com,
 songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240429132308.38794-1-ioworker0@gmail.com> <20240429132308.38794-4-ioworker0@gmail.com>

Hey Barry,

Thanks for taking the time to review!

On Tue, Apr 30, 2024 at 4:35 PM Barry Song <21cnbao@gmail.com> wrote:
>
> On Tue, Apr 30, 2024 at 1:23 AM Lance Yang <ioworker0@gmail.com> wrote:
> >
> > When the user no longer requires the pages, they would use
> > madvise(MADV_FREE) to mark the pages as lazy-free. Subsequently, they
> > typically would not re-write to that memory again.
> >
> > During memory reclaim, if we detect that the large folio and its PMD
> > are both still marked as clean and there are no unexpected references
> > (such as GUP), we can just discard the memory lazily, improving the
> > efficiency of memory reclamation in this case.
> >
> > On an Intel i5 CPU, reclaiming 1GiB of lazyfree THPs using
> > mem_cgroup_force_empty() results in the following runtimes in seconds
> > (shorter is better):
> >
> > --------------------------------------------
> > |     Old      |     New      |   Change   |
> > --------------------------------------------
> > |   0.683426   |   0.049197   |  -92.80%   |
> > --------------------------------------------
> >
> > Suggested-by: Zi Yan <ziy@nvidia.com>
> > Suggested-by: David Hildenbrand <david@redhat.com>
> > Signed-off-by: Lance Yang <ioworker0@gmail.com>
> > ---
> >  include/linux/huge_mm.h |  2 ++
> >  mm/huge_memory.c        | 75 +++++++++++++++++++++++++++++++++++++++++
> >  mm/rmap.c               |  3 ++
> >  3 files changed, 80 insertions(+)
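
(Aside, for context: the lazy-free state the commit message describes is
created from userspace with madvise(MADV_FREE). Below is a minimal sketch
assuming a 2 MiB PMD-sized THP and the glibc madvise() wrapper; the
mapping size and flags are illustrative only, not part of this patch:)

#include <string.h>
#include <sys/mman.h>

#define THP_SIZE (2UL << 20)	/* assumes 2 MiB PMD-sized THPs */

int main(void)
{
	/* Anonymous private mapping that THP can back on fault. */
	char *buf = mmap(NULL, THP_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	/* Hint that we'd like a huge page, then touch the range. */
	madvise(buf, THP_SIZE, MADV_HUGEPAGE);
	memset(buf, 1, THP_SIZE);

	/*
	 * Done with the data: mark it lazy-free. Reclaim may later
	 * discard the clean pages without swap I/O; the range reads
	 * back as zeroes unless it is written to again first.
	 */
	madvise(buf, THP_SIZE, MADV_FREE);
	return 0;
}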
> >
> > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > index 2daadfcc6776..fd330f72b4f3 100644
> > --- a/include/linux/huge_mm.h
> > +++ b/include/linux/huge_mm.h
> > @@ -38,6 +38,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> >                     unsigned long cp_flags);
> >  void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
> >                             pmd_t *pmd, bool freeze, struct folio *folio);
> > +bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
> > +                          pmd_t *pmdp, struct folio *folio);
> >
> >  vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write);
> >  vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write);
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 145505a1dd05..d35d526ed48f 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -2690,6 +2690,81 @@ static void unmap_folio(struct folio *folio)
> >         try_to_unmap_flush();
> >  }
> >
> > +static bool __discard_trans_pmd_locked(struct vm_area_struct *vma,
> > +                                      unsigned long addr, pmd_t *pmdp,
> > +                                      struct folio *folio)
> > +{
> > +       struct mm_struct *mm = vma->vm_mm;
> > +       int ref_count, map_count;
> > +       pmd_t orig_pmd = *pmdp;
> > +       struct mmu_gather tlb;
> > +       struct page *page;
> > +
> > +       if (pmd_dirty(orig_pmd) || folio_test_dirty(folio))
> > +               return false;
> > +       if (unlikely(!pmd_present(orig_pmd) || !pmd_trans_huge(orig_pmd)))
> > +               return false;
> > +
> > +       page = pmd_page(orig_pmd);
> > +       if (unlikely(page_folio(page) != folio))
> > +               return false;
> > +
> > +       tlb_gather_mmu(&tlb, mm);
> > +       orig_pmd = pmdp_huge_get_and_clear(mm, addr, pmdp);
> > +       tlb_remove_pmd_tlb_entry(&tlb, pmdp, addr);
> > +
> > +       /*
> > +        * Syncing against concurrent GUP-fast:
> > +        * - clear PMD; barrier; read refcount
> > +        * - inc refcount; barrier; read PMD
> > +        */
> > +       smp_mb();
> > +
> > +       ref_count = folio_ref_count(folio);
> > +       map_count = folio_mapcount(folio);
> > +
> > +       /*
> > +        * Order reads for folio refcount and dirty flag
> > +        * (see comments in __remove_mapping()).
> > +        */
> > +       smp_rmb();
> > +
> > +       /*
> > +        * If the PMD or folio is redirtied at this point, or if there
> > +        * are unexpected references, we will give up discarding this
> > +        * folio and remap it.
> > +        *
> > +        * The only folio refs must be one from isolation plus the rmap(s).
> > +        */
> > +       if (ref_count != map_count + 1 || folio_test_dirty(folio) ||
> > +           pmd_dirty(orig_pmd)) {
> > +               set_pmd_at(mm, addr, pmdp, orig_pmd);
> > +               return false;
> > +       }
> > +
> > +       folio_remove_rmap_pmd(folio, page, vma);
> > +       zap_deposited_table(mm, pmdp);
> > +       add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
> > +       folio_put(folio);
> > +
> > +       return true;
> > +}
> > +
> > +bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
> > +                          pmd_t *pmdp, struct folio *folio)
> > +{
> > +       VM_WARN_ON_FOLIO(!folio_test_pmd_mappable(folio), folio);
> > +       VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
> > +       VM_WARN_ON_ONCE(!IS_ALIGNED(addr, HPAGE_PMD_SIZE));
> > +
> > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > +       if (folio_test_anon(folio) && !folio_test_swapbacked(folio))
> > +               return __discard_trans_pmd_locked(vma, addr, pmdp, folio);
> > +#endif
>
> this is weird, as huge_memory.c is only built with
> CONFIG_TRANSPARENT_HUGEPAGE=y:
>
> mm/Makefile:
> obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += huge_memory.o khugepaged.o

Thanks for pointing that out!
I'll drop the conditional compilation directives :)

>
> > +
> > +       return false;
> > +}
> > +
> >  static void remap_page(struct folio *folio, unsigned long nr)
> >  {
> >         int i = 0;
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index e42f436c7ff3..ab37af4f47aa 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1677,6 +1677,9 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >                 }
> >
> >                 if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
> > +                       if (unmap_huge_pmd_locked(vma, range.start, pvmw.pmd,
> > +                                                 folio))
> > +                               goto walk_done;
>
> this is causing:
> mm/rmap.c:1680: undefined reference to `unmap_huge_pmd_locked'
> mm/rmap.c:1687: undefined reference to `split_huge_pmd_locked'

You're right! It's my oversight, and I'll make sure to address it in the
next version.

Thanks again for the review!
Lance

> >                         /*
> >                          * We temporarily have to drop the PTL and start once
> >                          * again from that now-PTE-mapped page table.
> >                          */
> > --
> > 2.33.1
> >
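
(A minimal sketch of the kind of fallback that Barry's build report points
at, assuming the usual huge_mm.h convention of stubbing THP helpers when
CONFIG_TRANSPARENT_HUGEPAGE is off; this is illustrative, not the actual
follow-up patch:)

/* include/linux/huge_mm.h -- hypothetical placement */
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
			   pmd_t *pmdp, struct folio *folio);
#else
/* No-op stub so callers in mm/rmap.c link when huge_memory.o isn't built. */
static inline bool unmap_huge_pmd_locked(struct vm_area_struct *vma,
					 unsigned long addr, pmd_t *pmdp,
					 struct folio *folio)
{
	return false;
}
#endif

With the stub returning false unconditionally, the TTU_SPLIT_HUGE_PMD path
in try_to_unmap_one() falls through to the split path as before;
split_huge_pmd_locked() would need the same treatment to resolve the
second undefined reference.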