From: Barry Song <21cnbao@gmail.com>
Date: Wed, 25 Jun 2025 23:15:29 +1200
Subject: Re: [PATCH v4 3/4] mm: Support batched unmap for lazyfree large folios during reclamation
To: David Hildenbrand
Cc: Lance Yang, akpm@linux-foundation.org, baolin.wang@linux.alibaba.com, chrisl@kernel.org, kasong@tencent.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, lorenzo.stoakes@oracle.com, ryan.roberts@arm.com, v-songbaohua@oppo.com, x86@kernel.org, ying.huang@intel.com, zhengtangquan@oppo.com, Lance Yang
In-Reply-To: <5ba95609-302b-456a-a863-2bd5df51baf2@redhat.com>

On Wed, Jun 25, 2025 at 11:01 PM David Hildenbrand wrote:
>
> On 25.06.25 12:57, Barry Song wrote:
> >>>
> >>> Note that I don't quite understand why we have to batch the whole thing
> >>> or fall back to individual pages. Why can't we perform other batches
> >>> that span only some PTEs? What's special about 1 PTE vs. 2 PTEs vs.
> >>> all PTEs?
> >>
> >> That's a good point about the "all-or-nothing" batching logic ;)
> >>
> >> It seems the "all-or-nothing" approach is specific to the lazyfree use
> >> case, which needs to unmap the entire folio for reclamation. If that's
> >> not possible, it falls back to the single-page slow path.
> >
> > Other cases advance the PTE themselves, while try_to_unmap_one() relies
> > on page_vma_mapped_walk() to advance the PTE. Unless we want to manually
> > modify pvmw.pte and pvmw.address outside of page_vma_mapped_walk(), which
> > to me seems like a violation of layers. :-)
>
> Please explain to me why the following is not clearer and better:

This part is much clearer, but that doesn't necessarily improve the
overall picture. The main challenge is how to exit the iteration of
while (page_vma_mapped_walk(&pvmw)).

Right now, we have it laid out quite straightforwardly:

	/* We have already batched the entire folio */
	if (nr_pages > 1)
		goto walk_done;

With any nr between 1 and folio_nr_pages(), we have to consider two
issues:

1. How to skip PTE checks inside page_vma_mapped_walk() for entries
   that were already handled in the previous batch;
2. How to break the iteration once this batch reaches the end.

Of course, we could avoid both, but that would mean performing
unnecessary checks inside page_vma_mapped_walk().

We'll still need to introduce some "complicated" code to address the
issues mentioned above, won't we?
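
To make that concrete, here is a rough, untested sketch of the kind of
cursor fiddling I mean -- not a proposal, just an illustration. The
pvmw.pte/pvmw.address advance is exactly the layering violation
mentioned above, and using folio_nr_pages() as the stop condition
assumes the folio is fully mapped within this walk, which is precisely
where it gets hairy:

	/*
	 * Hypothetical sketch: runs right after a batch of nr_pages
	 * PTEs has been unmapped in try_to_unmap_one(); walk_done is
	 * the existing exit label.
	 */
	if (nr_pages > 1) {
		/*
		 * (2) The batch covered every PTE of the folio, so
		 * nothing is left for this walk to visit.
		 */
		if (nr_pages == folio_nr_pages(folio))
			goto walk_done;
		/*
		 * (1) Nudge the cursor past the PTEs we already
		 * handled, behind page_vma_mapped_walk()'s back, so
		 * it only re-checks the remainder.
		 */
		pvmw.pte += nr_pages - 1;
		pvmw.address += (nr_pages - 1) * PAGE_SIZE;
	}

And even that wouldn't cover a folio that is only partially mapped in
this VMA, which is more or less my point.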
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 8200d705fe4ac..09e2c2f28aa58 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1845,23 +1845,31 @@ void folio_remove_rmap_pud(struct folio *folio, struct page *page,
>  #endif
>  }
>
> -/* We support batch unmapping of PTEs for lazyfree large folios */
> -static inline bool can_batch_unmap_folio_ptes(unsigned long addr,
> -			struct folio *folio, pte_t *ptep)
> +static inline unsigned int folio_unmap_pte_batch(struct folio *folio,
> +			struct page_vma_mapped_walk *pvmw, enum ttu_flags flags,
> +			pte_t pte)
>  {
>  	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> -	int max_nr = folio_nr_pages(folio);
> -	pte_t pte = ptep_get(ptep);
> +	struct vm_area_struct *vma = pvmw->vma;
> +	unsigned long end_addr, addr = pvmw->address;
> +	unsigned int max_nr;
> +
> +	if (flags & TTU_HWPOISON)
> +		return 1;
> +	if (!folio_test_large(folio))
> +		return 1;
> +
> +	/* We may only batch within a single VMA and a single page table. */
> +	end_addr = min_t(unsigned long, ALIGN(addr + 1, PMD_SIZE), vma->vm_end);
> +	max_nr = (end_addr - addr) >> PAGE_SHIFT;
>
> +	/* We only support lazyfree batching for now ... */
>  	if (!folio_test_anon(folio) || folio_test_swapbacked(folio))
> -		return false;
> +		return 1;
>  	if (pte_unused(pte))
> -		return false;
> -	if (pte_pfn(pte) != folio_pfn(folio))
> -		return false;
> -
> -	return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
> -			       NULL, NULL) == max_nr;
> +		return 1;
> +	return folio_pte_batch(folio, addr, pvmw->pte, pte, max_nr, fpb_flags,
> +			       NULL, NULL, NULL);
>  }
>
>  /*
> @@ -2024,9 +2032,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  			if (pte_dirty(pteval))
>  				folio_mark_dirty(folio);
>  		} else if (likely(pte_present(pteval))) {
> -			if (folio_test_large(folio) && !(flags & TTU_HWPOISON) &&
> -			    can_batch_unmap_folio_ptes(address, folio, pvmw.pte))
> -				nr_pages = folio_nr_pages(folio);
> +			nr_pages = folio_unmap_pte_batch(folio, &pvmw, flags, pteval);
>  			end_addr = address + nr_pages * PAGE_SIZE;
>  			flush_cache_range(vma, address, end_addr);
>
> --
> Cheers,
>
> David / dhildenb
>

Thanks
Barry