From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 19 Mar 2026 15:47:18 +0000
From: "Lorenzo Stoakes (Oracle)" <ljs@kernel.org>
To: Dev Jain
Cc: akpm@linux-foundation.org, axelrasmussen@google.com, yuanchu@google.com,
	david@kernel.org, hughd@google.com, chrisl@kernel.org, kasong@tencent.com,
	weixugc@google.com, Liam.Howlett@oracle.com, vbabka@kernel.org,
	rppt@kernel.org, surenb@google.com, mhocko@suse.com, riel@surriel.com,
	harry.yoo@oracle.com, jannh@google.com, pfalcato@suse.de,
	baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com,
	nphamcs@gmail.com, bhe@redhat.com, baohua@kernel.org,
	youngjun.park@lge.com, ziy@nvidia.com, kas@kernel.org,
	willy@infradead.org, yuzhao@google.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, ryan.roberts@arm.com,
	anshuman.khandual@arm.com
Subject: Re: [PATCH 8/9] mm/rmap: introduce folio_try_share_anon_rmap_ptes
References: <20260310073013.4069309-1-dev.jain@arm.com>
 <20260310073013.4069309-9-dev.jain@arm.com>
 <3ff22f08-5617-4c05-8fa0-1d808806e322@lucifer.local>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Wed, Mar 11, 2026 at 01:39:25PM +0530, Dev Jain wrote:
>
>
> On 10/03/26 3:08 pm, Lorenzo Stoakes (Oracle) wrote:
> > On Tue, Mar 10, 2026 at 01:00:12PM +0530, Dev Jain wrote:
> >> In the quest of enabling batched unmapping of anonymous folios, we need to
> >> handle the sharing of exclusive pages. Hence, a batched version of
> >> folio_try_share_anon_rmap_pte is required.
> >>
> >> Currently, the sole purpose of nr_pages in __folio_try_share_anon_rmap is
> >> to do some rmap sanity checks. Add helpers to set and clear the
> >> PageAnonExclusive bit on a batch of nr_pages. Note that
> >> __folio_try_share_anon_rmap can receive nr_pages == HPAGE_PMD_NR from the
> >> PMD path, but currently we only clear the bit on the head page. Retain this
> >> behaviour by setting nr_pages = 1 in case the caller is
> >> folio_try_share_anon_rmap_pmd.
> >>
> >> Signed-off-by: Dev Jain
> >> ---
> >>  include/linux/page-flags.h | 11 +++++++++++
> >>  include/linux/rmap.h       | 28 ++++++++++++++++++++++++++--
> >>  mm/rmap.c                  |  2 +-
> >>  3 files changed, 38 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> >> index 0e03d816e8b9d..1d74ed9a28c41 100644
> >> --- a/include/linux/page-flags.h
> >> +++ b/include/linux/page-flags.h
> >> @@ -1178,6 +1178,17 @@ static __always_inline void __ClearPageAnonExclusive(struct page *page)
> >>  	__clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags.f);
> >>  }
> >>
> >> +static __always_inline void ClearPagesAnonExclusive(struct page *page,
> >> +						    unsigned int nr)
> >
> > You're randomly moving between nr and nr_pages, can we just consistently use
> > nr_pages please.
>
> Okay.
>
> >
> >> +{
> >> +	for (;;) {
> >> +		ClearPageAnonExclusive(page);
> >> +		if (--nr == 0)
> >
> > You really require nr != 0 here or otherwise you're going to be clearing 4
> > billion pages :)
>
> I'm following the pattern in pgtable.h, see set_ptes,
> clear_young_dirty_ptes, etc.
>
> >
> >> +			break;
> >> +		++page;
> >> +	}
> >> +}
> >
> > Can we put this in mm.h or somewhere else please, and can we do away with this
>
> What is the problem with page-flags.h? I am not very aware on which
> function to put in which header file semantically, so please educate me on
> this.

It's a mess in there and this doesn't really belong.

> > HorribleNamingConvention, this is new, we can 'get away' with making it
> > something sensible :)
>
> I'll name it folio_clear_pages_anon_exclusive.

OK

> >
> > I wonder if we shouldn't also add a folio pointer here, and some
> > VM_WARN_ON_ONCE()'s. Like:
> >
> > static inline void folio_clear_page_batch(struct folio *folio,
> > 		struct page *first_subpage,
> > 		unsigned int nr_pages)
> > {
> > 	struct page *subpage = first_subpage;
> >
> > 	VM_WARN_ON_ONCE(!nr_pages);
> > 	VM_WARN_ON_ONCE(... check first_subpage in folio ...);
> > 	VM_WARN_ON_ONCE(... check first_subpage -> first_subpage + nr_pages in folio ...);
>
> I like what you are saying, but __folio_rmap_sanity_checks in the caller
> checks exactly this :)

This is a shared function that can't be assumed to always be called from a
context where that has run. VM_WARN_ON_ONCE()'s are free in release kernels
so I don't think it should be such an issue.

> >
> > 	while (nr_pages--)
> > 		ClearPageAnonExclusive(subpage++);
> > }
> >
> >> +
> >>  #ifdef CONFIG_MMU
> >>  #define __PG_MLOCKED		(1UL << PG_mlocked)
> >>  #else
> >> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> >> index 1b7720c66ac87..7a67776dca3fe 100644
> >> --- a/include/linux/rmap.h
> >> +++ b/include/linux/rmap.h
> >> @@ -712,9 +712,13 @@ static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
> >>  	VM_WARN_ON_FOLIO(!PageAnonExclusive(page), folio);
> >>  	__folio_rmap_sanity_checks(folio, page, nr_pages, level);
> >>
> >> +	/* We only clear anon-exclusive from head page of PMD folio */
> >
> > Is this accurate? David? I thought anon exclusive was per-subpage for any large
> > folio...?
>
> The current behaviour is to do this only. I was also surprised with this,
> so I had dug in and found out:
>
> https://lore.kernel.org/all/20220428083441.37290-13-david@redhat.com/
>
> where David says:
>
> "Long story short: once
> PTE-mapped, we have to track information about exclusivity per sub-page,
> but until then, we can just track it for the compound page in the head
> page and not having to update a whole bunch of subpages all of the time
> for a simple PMD mapping of a THP."

OK

> >
> > If we're explicitly doing this for some reason here, then why introduce it?
> >
> >> +	if (level == PGTABLE_LEVEL_PMD)
> >> +		nr_pages = 1;
> >> +
> >>  	/* device private folios cannot get pinned via GUP. */
> >>  	if (unlikely(folio_is_device_private(folio))) {
> >> -		ClearPageAnonExclusive(page);
> >> +		ClearPagesAnonExclusive(page, nr_pages);
> >
> > I really kind of hate this 'we are looking at subpage X with variable page in
> > folio Y, but we don't mention Y' thing. It's super confusing that we have a
> > pointer to a thing which sometimes we deref and treat as a value we care about
> > and sometimes treat as an array.
> >
> > This pattern exists throughout all the batched stuff and I kind of hate it
> > everywhere.
> >
> > I guess the batching means that we are looking at a sub-folio range.
> >
> > If C had a better type system we could somehow have a type that encoded this,
> > but it doesn't :>)
> >
> > But I wonder if we shouldn't just go ahead and rename page -> pages and be
> > consistent about this?
>
> Agreed. You are correct in saying that this function should receive struct
> folio to assert that we are essentially in a page array, and some sanity
> checking should happen. But the callers are already doing the checking
> in folio_rmap_sanity_checks. Let me think on this.

You treat that like a license to never put an assert anywhere else in mm...

In any case I'm not sure I mentioned an assert here, more so the pattern
around folio vs. subpages.
>
> >
> >>  		return 0;
> >>  	}
> >>
> >> @@ -766,7 +770,7 @@ static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
> >>
> >>  	if (unlikely(folio_maybe_dma_pinned(folio)))
> >>  		return -EBUSY;
> >> -	ClearPageAnonExclusive(page);
> >> +	ClearPagesAnonExclusive(page, nr_pages);
> >>
> >>  	/*
> >>  	 * This is conceptually a smp_wmb() paired with the smp_rmb() in
> >> @@ -804,6 +808,26 @@ static inline int folio_try_share_anon_rmap_pte(struct folio *folio,
> >>  	return __folio_try_share_anon_rmap(folio, page, 1, PGTABLE_LEVEL_PTE);
> >>  }
> >>
> >> +/**
> >> + * folio_try_share_anon_rmap_ptes - try marking exclusive anonymous pages
> >> + *				     mapped by PTEs possibly shared to prepare
> >> + *				     for KSM or temporary unmapping
> >
> > This description is very confusing. 'Try marking exclusive anonymous pages
> > [... marking them as what?] mapped by PTEs[, or (]possibly shared[, or )] to
> > prepare for KSM [under what circumstances? Why mention KSM here?] or temporary
> > unmapping [why temporary?]
> >
> > OK I think you mean to say 'marking' them as 'possibly' shared.
> >
> > But really by 'shared' you mean clearing anon exclusive right? So maybe the
> > function name and description should reference that instead.
> >
> > But this needs clarifying. This isn't an exercise in minimum number of words to
> > describe the function.
> >
> > Ohhh now I see this is what the comment is in folio_try_share_anon_rmap_pte() :P
> >
> > Well, I wish we could update the original too ;) but OK this is fine as-is to
> > match that then.
> >
> >> + * @folio: The folio to share a mapping of
> >> + * @page: The first mapped exclusive page of the batch
> >> + * @nr_pages: The number of pages to share (batch size)
> >> + *
> >> + * See folio_try_share_anon_rmap_pte for full description.
> >> + *
> >> + * Context: The caller needs to hold the page table lock and has to have the
> >> + * page table entries cleared/invalidated. Those PTEs used to map consecutive
> >> + * pages of the folio passed here. The PTEs are all in the same PMD and VMA.
> >
> > Can we VM_WARN_ON_ONCE() any of this? Not completely a necessity.
>
> Again, we have WARN checks in folio_rmap_sanity_checks, and even in
> folio_pte_batch. I am afraid of duplication.

Why on earth are you bothering with 'context' then? If having run
folio_rmap_sanity_checks() means we never have to assert or think about state
or context again then why bother?

I looked in __folio_rmap_sanity_checks() and I see none of this checked.

Breaking it down:

	Context: The caller needs to hold the page table lock and has to have the
	page table entries cleared/invalidated. Those PTEs used to map consecutive
	pages of the folio passed here. The PTEs are all in the same PMD and VMA.

- Page table lock <-- NOT checked
- Page table entries cleared/invalidated <-- NOT checked
- PTEs passed in must map all pages in the folio? <-- you aren't passing in PTEs?
- PTEs are all in the same PMD <-- you aren't passing in PTEs?
- PTEs are all in the same VMA <-- you aren't passing in PTEs?

So that's hardly a coherent picture no?

>
> >
> >> + */
> >> +static inline int folio_try_share_anon_rmap_ptes(struct folio *folio,
> >> +		struct page *page, unsigned int nr)
> >> +{
> >> +	return __folio_try_share_anon_rmap(folio, page, nr, PGTABLE_LEVEL_PTE);
> >> +}
> >> +
> >>  /**
> >>  * folio_try_share_anon_rmap_pmd - try marking an exclusive anonymous page
> >>  *				    range mapped by a PMD possibly shared to
> >> diff --git a/mm/rmap.c b/mm/rmap.c
> >> index 42f6b00cced01..bba5b571946d8 100644
> >> --- a/mm/rmap.c
> >> +++ b/mm/rmap.c
> >> @@ -2300,7 +2300,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>
> >>  	/* See folio_try_share_anon_rmap(): clear PTE first. */
> >>  	if (anon_exclusive &&
> >> -	    folio_try_share_anon_rmap_pte(folio, subpage)) {
> >> +	    folio_try_share_anon_rmap_ptes(folio, subpage, 1)) {
> >
> > I guess this is because you intend to make this batched later with >1, but I
> > don't see the point of adding this since folio_try_share_anon_rmap_pte() already
> > does what you're doing here.
> >
> > So why not just change this when you actually batch?
>
> It is generally the consensus to introduce a new function along with its
> caller. Although one may argue that the caller introduced here is not

That's not always the case, sometimes it makes more sense and is cleaner to
introduce it first. All rules of thumb are open to sensible interpretation.

I'm not sure arbitrarily using the function in a way that makes no sense in
the code is a good approach but this isn't the end of the world.

> a functional change (still passing nr_pages = 1). So I am fine doing
> what you suggest.
>
> >
> > Buuuut.... haven't you already changed this whole function to now 'jump'
> > ahead if batched, so why are we only specifying nr_pages = 1 here?
>
> Because...please bear with the insanity :) currently we are in a ridiculous
> situation where
>
> nr_pages can be > 1 for file folios, and lazyfree folios, *and* it is
> required that the VMA is non-uffd.
>
> So, the "jump" thingy I was doing in the previous patch was adding support
> for file folios, belonging to uffd VMAs (see pte_install_uffd_wp_if_needed,
> we need to handle uffd-wp marker for file folios only, and also I should
> have mentioned "file folio" in the subject line of that patch, of course
> I missed that because reasoning through this code is very difficult)
>
> To answer your question, currently for anon folios nr_pages == 1, so
> the jump is a no-op.
>
> When I had discovered the uffd-wp bug some weeks back, I was pushing
> back against the idea of hacking around it by disabling batching
> for uffd-VMAs in folio_unmap_pte_batch, but solve it then and there
> properly. Now we have too many cases - first we added lazyfree support,
> then file-non-uffd support, my patch 5 adds file-uffd support, and
> last patch finally completes this with anon support.

I am not a fan of any of that, this just speaks to the need to clean this up
before endlessly adding more functionality and piles of complexity and
confusion.

Let's just add some patches to do things sanely please.

>
> >
> > Honestly I think this function needs to be fully refactored away from the
> > appalling giant-ball-of-string mess it is now before we try to add in batching
> > to be honest.
> >
> > Let's not accumulate more technical debt.
>
> I agree, I am happy to help in cleaning up this function.

Right, well then this series needs to have some clean up patches in it first.

>
> >
> >
> >>  		folio_put_swap(folio, subpage, 1);
> >>  		set_pte_at(mm, address, pvmw.pte, pteval);
> >>  		goto walk_abort;
> >> --
> >> 2.34.1
> >>
> >
> > Thanks, Lorenzo
>

Thanks, Lorenzo