Subject: Re: [PATCH v4 3/4] mm: Optimize mprotect() by PTE-batching
Message-ID: <41386e41-c1c4-4898-8958-2f4daa92dc7c@arm.com>
Date: Mon, 30 Jun 2025 11:31:23 +0100
To: Dev Jain, akpm@linux-foundation.org
Cc: david@redhat.com, willy@infradead.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, catalin.marinas@arm.com, will@kernel.org,
    Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com, vbabka@suse.cz,
    jannh@google.com, anshuman.khandual@arm.com, peterx@redhat.com,
    joey.gouly@arm.com, ioworker0@gmail.com, baohua@kernel.org,
    kevin.brodsky@arm.com, quic_zhenhuah@quicinc.com,
    christophe.leroy@csgroup.eu, yangyicong@hisilicon.com,
    linux-arm-kernel@lists.infradead.org, hughd@google.com,
    yang@os.amperecomputing.com, ziy@nvidia.com
References: <20250628113435.46678-1-dev.jain@arm.com>
 <20250628113435.46678-4-dev.jain@arm.com>
From: Ryan Roberts
In-Reply-To: <20250628113435.46678-4-dev.jain@arm.com>

On 28/06/2025 12:34, Dev Jain wrote:
> Use folio_pte_batch to batch process a large folio. Reuse the folio from
> prot_numa case if possible.
> 
> For all cases other than the PageAnonExclusive case, if the case holds true
> for one pte in the batch, one can confirm that that case will hold true for
> other ptes in the batch too; for pte_needs_soft_dirty_wp(), we do not pass
> FPB_IGNORE_SOFT_DIRTY. modify_prot_start_ptes() collects the dirty and
> access bits across the batch, therefore batching across pte_dirty(): this
> is correct since the dirty bit on the PTE really is just an indication that
> the folio got written to, so even if the PTE is not actually dirty (but one
> of the PTEs in the batch is), the wp-fault optimization can be made.
> 
> The crux now is how to batch around the PageAnonExclusive case; we must
> check the corresponding condition for every single page. Therefore, from
> the large folio batch, we process sub batches of ptes mapping pages with
> the same PageAnonExclusive condition, and process that sub batch, then
> determine and process the next sub batch, and so on. Note that this does
> not cause any extra overhead; if suppose the size of the folio batch
> is 512, then the sub batch processing in total will take 512 iterations,
> which is the same as what we would have done before.
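Restating the sub-batching argument above in code form, just to check my
understanding (an illustrative sketch only, not the patch code; commit_run()
is a hypothetical stand-in for the per-run commit logic): each iteration
consumes sub_nr pages, so across all sub batches every page is still visited
exactly once, and only the commit calls are amortised.

	/* Illustrative sketch of the sub-batch walk; commit_run() is hypothetical. */
	while (nr_ptes) {
		bool exclusive;
		/* Length of the next run of pages agreeing on PageAnonExclusive(). */
		int sub_nr = anon_exclusive_batch(folio, pgidx, nr_ptes, &exclusive);

		/* One commit per run instead of one per PTE. */
		commit_run(vma, addr, pte, sub_nr, exclusive);

		addr += sub_nr * PAGE_SIZE;
		pte += sub_nr;
		pgidx += sub_nr;
		nr_ptes -= sub_nr;
	}
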
> 
> Signed-off-by: Dev Jain
> ---
>  mm/mprotect.c | 143 +++++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 117 insertions(+), 26 deletions(-)
> 
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 627b0d67cc4a..28c7ce7728ff 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -40,35 +40,47 @@
>  
>  #include "internal.h"
>  
> -bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
> -			     pte_t pte)
> -{
> -	struct page *page;
> +enum tristate {
> +	TRI_FALSE = 0,
> +	TRI_TRUE = 1,
> +	TRI_MAYBE = -1,
> +};
>  
> +/*
> + * Returns enum tristate indicating whether the pte can be changed to writable.
> + * If TRI_MAYBE is returned, then the folio is anonymous and the user must
> + * additionally check PageAnonExclusive() for every page in the desired range.
> + */
> +static int maybe_change_pte_writable(struct vm_area_struct *vma,
> +				     unsigned long addr, pte_t pte,
> +				     struct folio *folio)
> +{
>  	if (WARN_ON_ONCE(!(vma->vm_flags & VM_WRITE)))
> -		return false;
> +		return TRI_FALSE;
>  
>  	/* Don't touch entries that are not even readable. */
>  	if (pte_protnone(pte))
> -		return false;
> +		return TRI_FALSE;
>  
>  	/* Do we need write faults for softdirty tracking? */
>  	if (pte_needs_soft_dirty_wp(vma, pte))
> -		return false;
> +		return TRI_FALSE;
>  
>  	/* Do we need write faults for uffd-wp tracking? */
>  	if (userfaultfd_pte_wp(vma, pte))
> -		return false;
> +		return TRI_FALSE;
>  
>  	if (!(vma->vm_flags & VM_SHARED)) {
>  		/*
>  		 * Writable MAP_PRIVATE mapping: We can only special-case on
>  		 * exclusive anonymous pages, because we know that our
>  		 * write-fault handler similarly would map them writable without
> -		 * any additional checks while holding the PT lock.
> +		 * any additional checks while holding the PT lock. So if the
> +		 * folio is not anonymous, we know we cannot change pte to
> +		 * writable. If it is anonymous then the caller must further
> +		 * check that the page is AnonExclusive().
>  		 */
> -		page = vm_normal_page(vma, addr, pte);
> -		return page && PageAnon(page) && PageAnonExclusive(page);
> +		return (!folio || folio_test_anon(folio)) ? TRI_MAYBE : TRI_FALSE;
>  	}
>  
>  	VM_WARN_ON_ONCE(is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte));
> @@ -80,15 +92,61 @@ bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
>  	 * FS was already notified and we can simply mark the PTE writable
>  	 * just like the write-fault handler would do.
>  	 */
> -	return pte_dirty(pte);
> +	return pte_dirty(pte) ? TRI_TRUE : TRI_FALSE;
> +}
> +
> +/*
> + * Returns the number of pages within the folio, starting from the page
> + * indicated by pgidx and up to pgidx + max_nr, that have the same value of
> + * PageAnonExclusive(). Must only be called for anonymous folios. Value of
> + * PageAnonExclusive() is returned in *exclusive.
> + */
> +static int anon_exclusive_batch(struct folio *folio, int pgidx, int max_nr,
> +				bool *exclusive)
> +{
> +	struct page *page;
> +	int nr = 1;
> +
> +	if (!folio) {
> +		*exclusive = false;
> +		return nr;
> +	}
> +
> +	page = folio_page(folio, pgidx++);
> +	*exclusive = PageAnonExclusive(page);
> +	while (nr < max_nr) {
> +		page = folio_page(folio, pgidx++);
> +		if ((*exclusive) != PageAnonExclusive(page))

nit: brackets not required around *exclusive.

> +			break;
> +		nr++;
> +	}
> +
> +	return nr;
> +}
> +
> +bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
> +			     pte_t pte)
> +{
> +	struct page *page;
> +	int ret;
> +
> +	ret = maybe_change_pte_writable(vma, addr, pte, NULL);
> +	if (ret == TRI_MAYBE) {
> +		page = vm_normal_page(vma, addr, pte);
> +		ret = page && PageAnon(page) && PageAnonExclusive(page);
> +	}
> +
> +	return ret;
>  }
>  
>  static int mprotect_folio_pte_batch(struct folio *folio, unsigned long addr,
> -		pte_t *ptep, pte_t pte, int max_nr_ptes)
> +		pte_t *ptep, pte_t pte, int max_nr_ptes, fpb_t switch_off_flags)
>  {
> -	const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> +	fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> +
> +	flags &= ~switch_off_flags;

This is mega confusing when reading the caller, because the caller passes
FPB_IGNORE_SOFT_DIRTY and that actually means DON'T ignore soft dirty. Can't we
just pass in the flags we want? (See the sketch at the end of this mail for
what I mean.)

> 
> -	if (!folio || !folio_test_large(folio) || (max_nr_ptes == 1))
> +	if (!folio || !folio_test_large(folio))

What's the rationale for dropping the max_nr_ptes == 1 condition? If you don't
need it, why did you add it in the earlier patch?

>  		return 1;
>  
>  	return folio_pte_batch(folio, addr, ptep, pte, max_nr_ptes, flags,
> @@ -154,7 +212,8 @@ static int prot_numa_skip_ptes(struct folio **foliop, struct vm_area_struct *vma
>  	}
>  
>  skip_batch:
> -	nr_ptes = mprotect_folio_pte_batch(folio, addr, pte, oldpte, max_nr_ptes);
> +	nr_ptes = mprotect_folio_pte_batch(folio, addr, pte, oldpte,
> +					   max_nr_ptes, 0);
>  out:
>  	*foliop = folio;
>  	return nr_ptes;
> @@ -191,7 +250,10 @@ static long change_pte_range(struct mmu_gather *tlb,
>  		if (pte_present(oldpte)) {
>  			int max_nr_ptes = (end - addr) >> PAGE_SHIFT;
>  			struct folio *folio = NULL;
> -			pte_t ptent;
> +			int sub_nr_ptes, pgidx = 0;
> +			pte_t ptent, newpte;
> +			bool sub_set_write;
> +			int set_write;
>  
>  			/*
>  			 * Avoid trapping faults against the zero or KSM
> @@ -206,6 +268,11 @@ static long change_pte_range(struct mmu_gather *tlb,
>  				continue;
>  			}
>  
> +			if (!folio)
> +				folio = vm_normal_folio(vma, addr, oldpte);
> +
> +			nr_ptes = mprotect_folio_pte_batch(folio, addr, pte, oldpte,
> +						max_nr_ptes, FPB_IGNORE_SOFT_DIRTY);

From the other thread, my memory is jogged that this function ignores the write
permission bit. So I think that's opening up a bug when applied here? If the
first pte is writable but the rest are not (COW), doesn't this now make them
all writable? I don't *think* that's a problem for the prot_numa use, but I
could be wrong.

>  			oldpte = modify_prot_start_ptes(vma, addr, pte, nr_ptes);

Even if I'm wrong about ignoring the write bit being a bug, I don't think the
docs for this function permit the write bit to be different across the batch?

>  			ptent = pte_modify(oldpte, newprot);
>  
> @@ -227,15 +294,39 @@
>  			 * example, if a PTE is already dirty and no other
>  			 * COW or special handling is required.
>  			 */
> -			if ((cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
> -			    !pte_write(ptent) &&
> -			    can_change_pte_writable(vma, addr, ptent))
> -				ptent = pte_mkwrite(ptent, vma);
> -
> -			modify_prot_commit_ptes(vma, addr, pte, oldpte, ptent, nr_ptes);
> -			if (pte_needs_flush(oldpte, ptent))
> -				tlb_flush_pte_range(tlb, addr, PAGE_SIZE);
> -			pages++;
> +			set_write = (cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
> +				    !pte_write(ptent);
> +			if (set_write)
> +				set_write = maybe_change_pte_writable(vma, addr, ptent, folio);

Why not just:

	set_write = (cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
		    !pte_write(ptent) &&
		    maybe_change_pte_writable(...);

?

> +
> +			while (nr_ptes) {
> +				if (set_write == TRI_MAYBE) {
> +					sub_nr_ptes = anon_exclusive_batch(folio,
> +						pgidx, nr_ptes, &sub_set_write);
> +				} else {
> +					sub_nr_ptes = nr_ptes;
> +					sub_set_write = (set_write == TRI_TRUE);
> +				}
> +
> +				if (sub_set_write)
> +					newpte = pte_mkwrite(ptent, vma);
> +				else
> +					newpte = ptent;
> +
> +				modify_prot_commit_ptes(vma, addr, pte, oldpte,
> +							newpte, sub_nr_ptes);
> +				if (pte_needs_flush(oldpte, newpte))

What did we conclude with pte_needs_flush()? I thought there was an arch where
it looked dodgy calling this for just the pte at the head of the batch?

Thanks,
Ryan

> +					tlb_flush_pte_range(tlb, addr,
> +							    sub_nr_ptes * PAGE_SIZE);
> +
> +				addr += sub_nr_ptes * PAGE_SIZE;
> +				pte += sub_nr_ptes;
> +				oldpte = pte_advance_pfn(oldpte, sub_nr_ptes);
> +				ptent = pte_advance_pfn(ptent, sub_nr_ptes);
> +				nr_ptes -= sub_nr_ptes;
> +				pages += sub_nr_ptes;
> +				pgidx += sub_nr_ptes;
> +			}
>  		} else if (is_swap_pte(oldpte)) {
>  			swp_entry_t entry = pte_to_swp_entry(oldpte);
>  			pte_t newpte;
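
For the switch_off_flags comment above, something like the below is what I had
in mind; a rough sketch only, assuming the helper simply forwards whatever fpb
flags the caller passes (prototype shown, body otherwise unchanged apart from
dropping the mask manipulation):

	/* Sketch only: the helper takes the flags to use, not the flags to mask off. */
	static int mprotect_folio_pte_batch(struct folio *folio, unsigned long addr,
			pte_t *ptep, pte_t pte, int max_nr_ptes, fpb_t flags);

	/* prot_numa_skip_ptes(): neither dirty nor soft-dirty state matters. */
	nr_ptes = mprotect_folio_pte_batch(folio, addr, pte, oldpte, max_nr_ptes,
					   FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY);

	/* change_pte_range(): soft-dirty must be respected, so only ignore dirty. */
	nr_ptes = mprotect_folio_pte_batch(folio, addr, pte, oldpte, max_nr_ptes,
					   FPB_IGNORE_DIRTY);

That way each caller states exactly which bits it is happy to ignore.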