Message-ID: <083865e8-572b-41b2-9221-3cee01349fab@arm.com>
Date: Mon, 28 Apr 2025 18:29:21 +0530
Subject: Re: [PATCH 6/7] mm: Batch around can_change_pte_writable()
To: Lance Yang, akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, david@redhat.com, willy@infradead.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, catalin.marinas@arm.com,
 will@kernel.org, Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com,
 vbabka@suse.cz, jannh@google.com, anshuman.khandual@arm.com,
 peterx@redhat.com, joey.gouly@arm.com, ioworker0@gmail.com,
 baohua@kernel.org, kevin.brodsky@arm.com, quic_zhenhuah@quicinc.com,
 christophe.leroy@csgroup.eu, yangyicong@hisilicon.com,
 linux-arm-kernel@lists.infradead.org, namit@vmware.com, hughd@google.com,
 yang@os.amperecomputing.com, ziy@nvidia.com
References: <20250428120414.12101-1-dev.jain@arm.com>
 <20250428120414.12101-7-dev.jain@arm.com>
From: Dev Jain

On 28/04/25 6:20 pm, Lance Yang wrote:
> Hey Dev,
>
> On 2025/4/28 20:04, Dev Jain wrote:
>> In preparation for patch 7, we need to properly batch around
>> can_change_pte_writable(). We batch around pte_needs_soft_dirty_wp()
>> via the corresponding fpb flag, and we batch around the page-anon
>> exclusive check using folio_maybe_mapped_shared().
>> modify_prot_start_ptes() collects the dirty and access bits across
>> the batch, therefore batching across pte_dirty(): this is correct
>> since the dirty bit on the PTE really is just an indication that the
>> folio got written to, so even if the PTE is not actually dirty (but
>> one of the PTEs in the batch is), the wp-fault optimization can be
>> made.
>>
>> Signed-off-by: Dev Jain
>> ---
>>  include/linux/mm.h | 4 ++--
>>  mm/gup.c           | 2 +-
>>  mm/huge_memory.c   | 4 ++--
>>  mm/memory.c        | 6 +++---
>>  mm/mprotect.c      | 9 ++++++---
>>  5 files changed, 14 insertions(+), 11 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 5eb0d77c4438..ffa02e15863f 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -2710,8 +2710,8 @@ int get_cmdline(struct task_struct *task, char *buffer, int buflen);
>>  #define  MM_CP_UFFD_WP_ALL                 (MM_CP_UFFD_WP | \
>>                          MM_CP_UFFD_WP_RESOLVE)
>> -bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
>> -                 pte_t pte);
>> +bool can_change_ptes_writable(struct vm_area_struct *vma, unsigned long addr,
>> +                 pte_t pte, struct folio *folio, unsigned int nr);
>>  extern long change_protection(struct mmu_gather *tlb,
>>                    struct vm_area_struct *vma, unsigned long start,
>>                    unsigned long end, unsigned long cp_flags);
>> diff --git a/mm/gup.c b/mm/gup.c
>> index 84461d384ae2..6a605fc5f2cb 100644
>> --- a/mm/gup.c
>> +++ b/mm/gup.c
>> @@ -614,7 +614,7 @@ static inline bool can_follow_write_common(struct page *page,
>>          return false;
>>      /*
>> -     * See can_change_pte_writable(): we broke COW and could map the page
>> +     * See can_change_ptes_writable(): we broke COW and could map the page
>>       * writable if we have an exclusive anonymous page ...
>>       */
>>      return page && PageAnon(page) && PageAnonExclusive(page);
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 28c87e0e036f..e5496c0d9e7e 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -2032,12 +2032,12 @@ static inline bool can_change_pmd_writable(struct vm_area_struct *vma,
>>          return false;
>>      if (!(vma->vm_flags & VM_SHARED)) {
>> -        /* See can_change_pte_writable(). */
>> +        /* See can_change_ptes_writable(). */
>>          page = vm_normal_page_pmd(vma, addr, pmd);
>>          return page && PageAnon(page) && PageAnonExclusive(page);
>>      }
>> -    /* See can_change_pte_writable(). */
>> +    /* See can_change_ptes_writable(). */
>>      return pmd_dirty(pmd);
>>  }
>> diff --git a/mm/memory.c b/mm/memory.c
>> index b9e8443aaa86..b1fda3de8d27 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -750,7 +750,7 @@ static void restore_exclusive_pte(struct vm_area_struct *vma,
>>          pte = pte_mkuffd_wp(pte);
>>      if ((vma->vm_flags & VM_WRITE) &&
>> -        can_change_pte_writable(vma, address, pte)) {
>> +        can_change_ptes_writable(vma, address, pte, NULL, 1)) {
>>          if (folio_test_dirty(folio))
>>              pte = pte_mkdirty(pte);
>>          pte = pte_mkwrite(pte, vma);
>> @@ -5767,7 +5767,7 @@ static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_stru
>>              ptent = pte_modify(ptent, vma->vm_page_prot);
>>              writable = pte_write(ptent);
>>              if (!writable && pte_write_upgrade &&
>> -                can_change_pte_writable(vma, addr, ptent))
>> +                can_change_ptes_writable(vma, addr, ptent, NULL, 1))
>>                  writable = true;
>>          }
>> @@ -5808,7 +5808,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>>       */
>>      writable = pte_write(pte);
>>      if (!writable && pte_write_upgrade &&
>> -        can_change_pte_writable(vma, vmf->address, pte))
>> +        can_change_ptes_writable(vma, vmf->address, pte, NULL, 1))
>>          writable = true;
>>      folio = vm_normal_folio(vma, vmf->address, pte);
>> diff --git a/mm/mprotect.c b/mm/mprotect.c
>> index 33eabc995584..362fd7e5457d 100644
>> --- a/mm/mprotect.c
>> +++ b/mm/mprotect.c
>> @@ -40,8 +40,8 @@
>>  #include "internal.h"
>> -bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
>> -                 pte_t pte)
>> +bool can_change_ptes_writable(struct vm_area_struct *vma, unsigned long addr,
>> +                  pte_t pte, struct folio *folio, unsigned int nr)
>>  {
>>      struct page *page;
>> @@ -67,6 +67,9 @@ bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
>>           * write-fault handler similarly would map them writable without
>>           * any additional checks while holding the PT lock.
>>           */
>> +        if (unlikely(nr != 1))
>> +            return !folio_maybe_mapped_shared(folio);
>> +
>>          page = vm_normal_page(vma, addr, pte);
>>          return page && PageAnon(page) && PageAnonExclusive(page);
>>      }
>
> IIUC, as mentioned in the comment above, we should do the same anonymous
> check for large folios. And folio_maybe_mapped_shared() already handles
> both order-0 and large folios nicely, so we could simplify the logic as
> follows:

Thanks. Although we will have to call vm_normal_folio() in case of !folio,
since we may not have the folio already for the nr == 1 case; see the rough
sketch at the bottom of this mail.

> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 1605e89349d2..df56a30bb241 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -43,8 +43,6 @@
>  bool can_change_ptes_writable(struct vm_area_struct *vma, unsigned long addr,
>                               pte_t pte, struct folio *folio, unsigned int nr)
>  {
> -       struct page *page;
> -
>         if (WARN_ON_ONCE(!(vma->vm_flags & VM_WRITE)))
>                 return false;
>
> @@ -67,11 +65,7 @@ bool can_change_ptes_writable(struct vm_area_struct *vma, unsigned long addr,
>                  * write-fault handler similarly would map them writable without
>                  * any additional checks while holding the PT lock.
>                  */
> -               if (unlikely(nr != 1))
> -                       return !folio_maybe_mapped_shared(folio);
> -
> -               page = vm_normal_page(vma, addr, pte);
> -               return page && PageAnon(page) && PageAnonExclusive(page);
> +               return folio_test_anon(folio) && !folio_maybe_mapped_shared(folio);
>         }
>
>         VM_WARN_ON_ONCE(is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte));
> --
>
> Thanks,
> Lance
>
>> @@ -222,7 +225,7 @@ static long change_pte_range(struct mmu_gather *tlb,
>>               */
>>              if ((cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
>>                  !pte_write(ptent) &&
>> -                can_change_pte_writable(vma, addr, ptent))
>> +                can_change_ptes_writable(vma, addr, ptent, folio, 1))
>>                  ptent = pte_mkwrite(ptent, vma);
>>              ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
>
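To make that concrete, here is a rough, untested sketch of how the
!VM_SHARED branch of can_change_ptes_writable() in mm/mprotect.c could look
with both points folded in: your folio_test_anon() plus
folio_maybe_mapped_shared() check, and a vm_normal_folio() lookup for the
nr == 1 callers that pass folio == NULL (illustrative only, not the final
patch):

	if (!(vma->vm_flags & VM_SHARED)) {
		/*
		 * Writable MAP_PRIVATE mapping: only allow the write
		 * upgrade for anonymous folios that are not possibly
		 * mapped shared. nr == 1 callers (e.g. the NUMA hinting
		 * fault paths) pass folio == NULL, so look the folio up
		 * here in that case.
		 */
		if (!folio)
			folio = vm_normal_folio(vma, addr, pte);

		return folio && folio_test_anon(folio) &&
		       !folio_maybe_mapped_shared(folio);
	}

With that, the unlikely(nr != 1) special case can go away entirely, as you
suggest.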