From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, david@redhat.com, willy@infradead.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, catalin.marinas@arm.com, will@kernel.org,
    Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com, vbabka@suse.cz,
    jannh@google.com, anshuman.khandual@arm.com, peterx@redhat.com,
    joey.gouly@arm.com, ioworker0@gmail.com, baohua@kernel.org,
    kevin.brodsky@arm.com, quic_zhenhuah@quicinc.com, christophe.leroy@csgroup.eu,
    yangyicong@hisilicon.com, linux-arm-kernel@lists.infradead.org,
    namit@vmware.com, hughd@google.com,
    yang@os.amperecomputing.com, ziy@nvidia.com, Dev Jain <dev.jain@arm.com>
Subject: [PATCH v2 7/7] mm: Optimize mprotect() through PTE-batching
Date: Tue, 29 Apr 2025 10:53:36 +0530
Message-Id: <20250429052336.18912-8-dev.jain@arm.com>
In-Reply-To: <20250429052336.18912-1-dev.jain@arm.com>
References: <20250429052336.18912-1-dev.jain@arm.com>

In the common pte_present case the folio is not needed. Elide the overhead of
vm_normal_folio() for the small-folio case by making an approximation: on
arm64, pte_batch_hint() is conclusive; on other arches, if the pfns mapped by
the current and the next PTE are contiguous, check whether a large folio is
actually mapped, and only then apply the batch optimization. Reuse the folio
from the prot_numa case if possible. Since modify_prot_start_ptes() gathers
the access/dirty bits across the batch, it also lets us batch around
pte_needs_flush() (on parisc, the definition includes the access bit).
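Not part of the patch: a minimal, self-contained userspace sketch of the
batching idea described above. The pfns[] array and the batch_len() helper are
hypothetical stand-ins for reading pfns out of consecutive PTEs with
ptep_get()/pte_pfn(); for brevity the sketch folds the cheap "is the next pfn
adjacent?" probe and the subsequent folio_pte_batch() walk into a single loop.

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Userspace model of the contiguity probe: two adjacent entries may belong
 * to one large folio only if their pfns differ by exactly 1.
 */
static bool maybe_contiguous(const unsigned long *pfns, size_t i)
{
        return pfns[i + 1] - pfns[i] == 1;
}

/* Count how many entries starting at 'start' could be handled as one batch. */
static size_t batch_len(const unsigned long *pfns, size_t start, size_t max)
{
        size_t n = 1;

        while (n < max && maybe_contiguous(pfns, start + n - 1))
                n++;
        return n;
}

int main(void)
{
        /* Four contiguous pfns, a discontiguity, then two more. */
        unsigned long pfns[] = { 100, 101, 102, 103, 500, 501 };
        size_t total = sizeof(pfns) / sizeof(pfns[0]);

        for (size_t i = 0; i < total; ) {
                size_t n = batch_len(pfns, i, total - i);

                printf("entries %zu..%zu handled as one batch of %zu\n",
                       i, i + n - 1, n);
                i += n;
        }
        return 0;
}

In the real code the probe only inspects the next PTE, so vm_normal_folio()
and folio_pte_batch() are paid for only when the first two pfns already look
contiguous; a single mismatching neighbour keeps the small-folio path free of
that overhead.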
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 mm/mprotect.c | 49 +++++++++++++++++++++++++++++++++++--------------
 1 file changed, 35 insertions(+), 14 deletions(-)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index baff009fc981..f8382806611f 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -129,7 +129,7 @@ static bool prot_numa_skip(struct vm_area_struct *vma, struct folio *folio,
 	return false;
 }
 
-static bool prot_numa_avoid_fault(struct vm_area_struct *vma,
+static struct folio *prot_numa_avoid_fault(struct vm_area_struct *vma,
 		unsigned long addr, pte_t *pte, pte_t oldpte,
 		int target_node, int max_nr, int *nr)
 {
@@ -139,25 +139,37 @@ static bool prot_numa_avoid_fault(struct vm_area_struct *vma,
 	/* Avoid TLB flush if possible */
 	if (pte_protnone(oldpte))
-		return true;
+		return NULL;
 
 	folio = vm_normal_folio(vma, addr, oldpte);
 	if (!folio)
-		return true;
+		return NULL;
 
 	ret = prot_numa_skip(vma, folio, target_node);
 	if (ret) {
 		if (folio_test_large(folio) && max_nr != 1)
 			*nr = folio_pte_batch(folio, addr, pte, oldpte,
 					      max_nr, flags, NULL, NULL, NULL);
-		return ret;
+		return NULL;
 	}
 
 	if (folio_use_access_time(folio))
 		folio_xchg_access_time(folio,
				       jiffies_to_msecs(jiffies));
-	return false;
+	return folio;
 }
 
+static bool maybe_contiguous_pte_pfns(pte_t *ptep, pte_t pte)
+{
+	pte_t *next_ptep, next_pte;
+
+	if (pte_batch_hint(ptep, pte) != 1)
+		return true;
+
+	next_ptep = ptep + 1;
+	next_pte = ptep_get(next_ptep);
+
+	return unlikely(pte_pfn(next_pte) - pte_pfn(pte) == 1);
+}
+
 static long change_pte_range(struct mmu_gather *tlb,
 		struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
 		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
@@ -188,19 +200,28 @@ static long change_pte_range(struct mmu_gather *tlb,
 		oldpte = ptep_get(pte);
 		if (pte_present(oldpte)) {
 			int max_nr = (end - addr) >> PAGE_SHIFT;
+			const fpb_t flags = FPB_IGNORE_DIRTY;
+			struct folio *folio = NULL;
 			pte_t ptent;
 
 			/*
 			 * Avoid trapping faults against the zero or KSM
 			 * pages. See similar comment in change_huge_pmd.
 			 */
-			if (prot_numa &&
-			    prot_numa_avoid_fault(vma, addr, pte,
-						  oldpte, target_node,
-						  max_nr, &nr))
+			if (prot_numa) {
+				folio = prot_numa_avoid_fault(vma, addr, pte,
+						oldpte, target_node, max_nr, &nr);
+				if (!folio)
 					continue;
+			}
 
-			oldpte = ptep_modify_prot_start(vma, addr, pte);
+			if (!folio && (max_nr != 1) && maybe_contiguous_pte_pfns(pte, oldpte)) {
+				folio = vm_normal_folio(vma, addr, oldpte);
+				if (folio && folio_test_large(folio))
+					nr = folio_pte_batch(folio, addr, pte,
+						oldpte, max_nr, flags, NULL, NULL, NULL);
+			}
+			oldpte = modify_prot_start_ptes(vma, addr, pte, nr);
 			ptent = pte_modify(oldpte, newprot);
 
 			if (uffd_wp)
@@ -223,13 +244,13 @@ static long change_pte_range(struct mmu_gather *tlb,
 			 */
 			if ((cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
 			    !pte_write(ptent) &&
-			    can_change_ptes_writable(vma, addr, ptent, folio, 1))
+			    can_change_ptes_writable(vma, addr, ptent, folio, nr))
 				ptent = pte_mkwrite(ptent, vma);
 
-			ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
+			modify_prot_commit_ptes(vma, addr, pte, oldpte, ptent, nr);
 			if (pte_needs_flush(oldpte, ptent))
-				tlb_flush_pte_range(tlb, addr, PAGE_SIZE);
-			pages++;
+				tlb_flush_pte_range(tlb, addr, nr * PAGE_SIZE);
+			pages += nr;
 		} else if (is_swap_pte(oldpte)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
 			pte_t newpte;
-- 
2.30.2