From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org
Cc: Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com, vbabka@suse.cz,
	jannh@google.com, pfalcato@suse.de, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, david@redhat.com, peterx@redhat.com,
	ryan.roberts@arm.com, mingo@kernel.org, libang.li@antgroup.com,
	maobibo@loongson.cn, zhengqi.arch@bytedance.com, baohua@kernel.org,
	anshuman.khandual@arm.com, willy@infradead.org, ioworker0@gmail.com,
	yang@os.amperecomputing.com, baolin.wang@linux.alibaba.com,
	ziy@nvidia.com, hughd@google.com, Dev Jain <dev.jain@arm.com>
Subject: [PATCH v3 2/2] mm: Optimize mremap() by PTE batching
Date: Tue, 27 May 2025 13:20:49 +0530
Message-Id: <20250527075049.60215-3-dev.jain@arm.com>
In-Reply-To: <20250527075049.60215-1-dev.jain@arm.com>
References: <20250527075049.60215-1-dev.jain@arm.com>

Use folio_pte_batch() to optimize move_ptes(). On arm64, if the PTEs
are painted with the contig bit, then ptep_get() will iterate through
all 16 entries to collect access/dirty bits. Hence this optimization
results in a 16x reduction in the number of ptep_get() calls. Next,
ptep_get_and_clear() will eventually call contpte_try_unfold() on
every contig block, thus flushing the TLB for the complete large folio
range. Instead, use get_and_clear_full_ptes() so as to elide TLBIs on
the intermediate contig blocks, and only issue them on the starting
and ending contig block.
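
For illustration only (not part of the diff below), the post-patch loop
in move_ptes() condenses to the following sketch; local declarations,
locking, TLB-flush bookkeeping, uffd-wp and soft-dirty handling are
elided, and mremap_folio_pte_batch() is the helper introduced by this
patch:

	for (; old_addr < old_end; old_ptep += nr_ptes, old_addr += nr_ptes * PAGE_SIZE,
				   new_ptep += nr_ptes, new_addr += nr_ptes * PAGE_SIZE) {
		nr_ptes = 1;
		old_pte = ptep_get(old_ptep);	/* one read per batch, not per PTE */
		if (pte_none(old_pte))
			continue;

		if (pte_present(old_pte))
			/* number of consecutive PTEs mapping the same large folio */
			nr_ptes = mremap_folio_pte_batch(vma, old_addr, old_ptep, old_pte,
							 (old_end - old_addr) >> PAGE_SHIFT);

		/* clear the whole batch; contpte unfold/TLBI only at the block edges */
		pte = get_and_clear_full_ptes(mm, old_addr, old_ptep, nr_ptes, 0);
		pte = move_soft_dirty_pte(move_pte(pte, old_addr, new_addr));
		set_ptes(mm, new_addr, new_ptep, pte, nr_ptes);
	}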
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 mm/mremap.c | 40 +++++++++++++++++++++++++++++++++-------
 1 file changed, 33 insertions(+), 7 deletions(-)

diff --git a/mm/mremap.c b/mm/mremap.c
index 0163e02e5aa8..580b41f8d169 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -170,6 +170,24 @@ static pte_t move_soft_dirty_pte(pte_t pte)
 	return pte;
 }
 
+/* mremap a batch of PTEs mapping the same large folio */
+static int mremap_folio_pte_batch(struct vm_area_struct *vma, unsigned long addr,
+		pte_t *ptep, pte_t pte, int max_nr)
+{
+	const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+	struct folio *folio;
+
+	if (max_nr == 1)
+		return 1;
+
+	folio = vm_normal_folio(vma, addr, pte);
+	if (!folio || !folio_test_large(folio))
+		return 1;
+
+	return folio_pte_batch(folio, addr, ptep, pte, max_nr, flags, NULL,
+			       NULL, NULL);
+}
+
 static int move_ptes(struct pagetable_move_control *pmc,
 		unsigned long extent, pmd_t *old_pmd, pmd_t *new_pmd)
 {
@@ -177,7 +195,7 @@ static int move_ptes(struct pagetable_move_control *pmc,
 	bool need_clear_uffd_wp = vma_has_uffd_without_event_remap(vma);
 	struct mm_struct *mm = vma->vm_mm;
 	pte_t *old_ptep, *new_ptep;
-	pte_t pte;
+	pte_t old_pte, pte;
 	pmd_t dummy_pmdval;
 	spinlock_t *old_ptl, *new_ptl;
 	bool force_flush = false;
@@ -185,6 +203,8 @@ static int move_ptes(struct pagetable_move_control *pmc,
 	unsigned long new_addr = pmc->new_addr;
 	unsigned long old_end = old_addr + extent;
 	unsigned long len = old_end - old_addr;
+	int max_nr_ptes;
+	int nr_ptes;
 	int err = 0;
 
 	/*
@@ -236,12 +256,14 @@ static int move_ptes(struct pagetable_move_control *pmc,
 	flush_tlb_batched_pending(vma->vm_mm);
 	arch_enter_lazy_mmu_mode();
 
-	for (; old_addr < old_end; old_ptep++, old_addr += PAGE_SIZE,
-				   new_ptep++, new_addr += PAGE_SIZE) {
-		if (pte_none(ptep_get(old_ptep)))
+	for (; old_addr < old_end; old_ptep += nr_ptes, old_addr += nr_ptes * PAGE_SIZE,
+		new_ptep += nr_ptes, new_addr += nr_ptes * PAGE_SIZE) {
+		nr_ptes = 1;
+		max_nr_ptes = (old_end - old_addr) >> PAGE_SHIFT;
+		old_pte = ptep_get(old_ptep);
+		if (pte_none(old_pte))
 			continue;
 
-		pte = ptep_get_and_clear(mm, old_addr, old_ptep);
 		/*
 		 * If we are remapping a valid PTE, make sure
 		 * to flush TLB before we drop the PTL for the
@@ -253,8 +275,12 @@ static int move_ptes(struct pagetable_move_control *pmc,
 		 * the TLB entry for the old mapping has been
 		 * flushed.
 		 */
-		if (pte_present(pte))
+		if (pte_present(old_pte)) {
+			nr_ptes = mremap_folio_pte_batch(vma, old_addr, old_ptep,
+							 old_pte, max_nr_ptes);
 			force_flush = true;
+		}
+		pte = get_and_clear_full_ptes(mm, old_addr, old_ptep, nr_ptes, 0);
 		pte = move_pte(pte, old_addr, new_addr);
 		pte = move_soft_dirty_pte(pte);
 
@@ -267,7 +293,7 @@ static int move_ptes(struct pagetable_move_control *pmc,
 				else if (is_swap_pte(pte))
 					pte = pte_swp_clear_uffd_wp(pte);
 			}
-			set_pte_at(mm, new_addr, new_ptep, pte);
+			set_ptes(mm, new_addr, new_ptep, pte, nr_ptes);
 		}
 	}
 
-- 
2.30.2