From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org
Cc: Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com, vbabka@suse.cz,
	jannh@google.com, pfalcato@suse.de, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, david@redhat.com, peterx@redhat.com,
	ryan.roberts@arm.com, mingo@kernel.org, libang.li@antgroup.com,
	maobibo@loongson.cn, zhengqi.arch@bytedance.com, baohua@kernel.org,
	anshuman.khandual@arm.com, willy@infradead.org, ioworker0@gmail.com,
	yang@os.amperecomputing.com, baolin.wang@linux.alibaba.com,
	ziy@nvidia.com, hughd@google.com, Dev Jain <dev.jain@arm.com>
Subject: [PATCH v2 2/2] mm: Optimize mremap() by PTE batching
Date: Wed, 7 May 2025 11:32:56 +0530
Message-Id: <20250507060256.78278-3-dev.jain@arm.com>
X-Mailer: git-send-email 2.39.3 (Apple Git-146)
In-Reply-To: <20250507060256.78278-1-dev.jain@arm.com>
References: <20250507060256.78278-1-dev.jain@arm.com>

To use PTE batching, we want to determine whether the folio mapped by
the PTE is large; that requires calling vm_normal_folio(). We want to
avoid the cost of vm_normal_folio() if the code path doesn't already
require the folio. For arm64, pte_batch_hint() does the job. To
generalize this hint, add a helper which determines whether two
consecutive PTEs map consecutive PFNs, in which case there is a high
probability that the underlying folio is large.

Next, use folio_pte_batch() to optimize move_ptes(). On arm64, if the
PTEs are painted with the contig bit, then ptep_get() will iterate
through all 16 entries to collect the access/dirty bits. Hence this
optimization results in a 16x reduction in the number of ptep_get()
calls.

Further, ptep_get_and_clear() will eventually call contpte_try_unfold()
on every contig block, thus flushing the TLB for the complete large
folio range. Instead, use get_and_clear_full_ptes() so as to elide
TLBIs on each contig block, and only do them on the starting and ending
contig block.

Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 include/linux/pgtable.h | 29 +++++++++++++++++++++++++++++
 mm/mremap.c             | 37 ++++++++++++++++++++++++++++++-------
 2 files changed, 59 insertions(+), 7 deletions(-)
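Not for git history: the sketch below is one way to exercise the path
this patch touches. The region size, the destination hint, and the fill
pattern are illustrative assumptions, and whether the range is really
backed by large folios depends on the system's THP/mTHP configuration.
The idea is to fault in a THP-eligible region and then mremap() it to a
deliberately non-PMD-aligned destination, so a PMD-mapped THP must be
split and its PTEs moved individually through move_ptes(), which is
where the new batching runs.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define SZ	(2UL * 1024 * 1024)	/* one PMD-sized region (assumption) */

int main(void)
{
	/* Map and fault in a region that THP/mTHP can back with large folios. */
	char *src = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (src == MAP_FAILED)
		return 1;
	madvise(src, SZ, MADV_HUGEPAGE);
	memset(src, 0xab, SZ);

	/*
	 * A destination that is not PMD-aligned cannot be moved by simply
	 * relinking a PMD, so the kernel falls back to moving the PTEs,
	 * i.e. the move_ptes() loop optimized by this patch.
	 */
	char *dst = mremap(src, SZ, SZ, MREMAP_MAYMOVE | MREMAP_FIXED,
			   src + 2 * SZ + 4096);
	if (dst == MAP_FAILED)
		return 1;
	printf("moved to %p, first byte 0x%x\n", (void *)dst,
	       (unsigned char)dst[0]);
	return 0;
}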
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index b50447ef1c92..38dab1f562ed 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -369,6 +369,35 @@ static inline pgd_t pgdp_get(pgd_t *pgdp)
 }
 #endif
 
+/**
+ * maybe_contiguous_pte_pfns - Hint whether the page mapped by the pte belongs
+ *			       to a large folio.
+ * @ptep: Pointer to the page table entry.
+ * @pte: The page table entry.
+ *
+ * This helper is invoked when the caller wants to batch over a set of ptes
+ * mapping a large folio, but the concerned code path does not already have
+ * the folio. We want to avoid the cost of vm_normal_folio() only to find that
+ * the underlying folio was small; i.e. keep the small folio case as fast as
+ * possible.
+ *
+ * The caller must ensure that ptep + 1 exists.
+ */
+static inline bool maybe_contiguous_pte_pfns(pte_t *ptep, pte_t pte)
+{
+	pte_t *next_ptep, next_pte;
+
+	if (pte_batch_hint(ptep, pte) != 1)
+		return true;
+
+	next_ptep = ptep + 1;
+	next_pte = ptep_get(next_ptep);
+	if (!pte_present(next_pte))
+		return false;
+
+	return unlikely(pte_pfn(next_pte) - pte_pfn(pte) == 1);
+}
+
 #ifndef __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
 static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 					    unsigned long address,
diff --git a/mm/mremap.c b/mm/mremap.c
index 0163e02e5aa8..9c88a276bec4 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -170,6 +170,23 @@ static pte_t move_soft_dirty_pte(pte_t pte)
 	return pte;
 }
 
+/* mremap a batch of PTEs mapping the same large folio */
+static int mremap_folio_pte_batch(struct vm_area_struct *vma, unsigned long addr,
+		pte_t *ptep, pte_t pte, int max_nr)
+{
+	const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+	struct folio *folio;
+	int nr = 1;
+
+	if ((max_nr != 1) && maybe_contiguous_pte_pfns(ptep, pte)) {
+		folio = vm_normal_folio(vma, addr, pte);
+		if (folio && folio_test_large(folio))
+			nr = folio_pte_batch(folio, addr, ptep, pte, max_nr,
+					     flags, NULL, NULL, NULL);
+	}
+	return nr;
+}
+
 static int move_ptes(struct pagetable_move_control *pmc,
 		unsigned long extent, pmd_t *old_pmd, pmd_t *new_pmd)
 {
@@ -177,7 +194,7 @@ static int move_ptes(struct pagetable_move_control *pmc,
 	bool need_clear_uffd_wp = vma_has_uffd_without_event_remap(vma);
 	struct mm_struct *mm = vma->vm_mm;
 	pte_t *old_ptep, *new_ptep;
-	pte_t pte;
+	pte_t old_pte, pte;
 	pmd_t dummy_pmdval;
 	spinlock_t *old_ptl, *new_ptl;
 	bool force_flush = false;
@@ -186,6 +203,7 @@ static int move_ptes(struct pagetable_move_control *pmc,
 	unsigned long old_end = old_addr + extent;
 	unsigned long len = old_end - old_addr;
 	int err = 0;
+	int max_nr;
 
 	/*
 	 * When need_rmap_locks is true, we take the i_mmap_rwsem and anon_vma
@@ -236,12 +254,13 @@ static int move_ptes(struct pagetable_move_control *pmc,
 	flush_tlb_batched_pending(vma->vm_mm);
 	arch_enter_lazy_mmu_mode();
 
-	for (; old_addr < old_end; old_ptep++, old_addr += PAGE_SIZE,
-	     new_ptep++, new_addr += PAGE_SIZE) {
-		if (pte_none(ptep_get(old_ptep)))
+	for (int nr = 1; old_addr < old_end; old_ptep += nr, old_addr += nr * PAGE_SIZE,
+	     new_ptep += nr, new_addr += nr * PAGE_SIZE) {
+		max_nr = (old_end - old_addr) >> PAGE_SHIFT;
+		old_pte = ptep_get(old_ptep);
+		if (pte_none(old_pte))
 			continue;
 
-		pte = ptep_get_and_clear(mm, old_addr, old_ptep);
 		/*
 		 * If we are remapping a valid PTE, make sure
 		 * to flush TLB before we drop the PTL for the
@@ -253,8 +272,12 @@ static int move_ptes(struct pagetable_move_control *pmc,
 		 * the TLB entry for the old mapping has been
 		 * flushed.
 		 */
-		if (pte_present(pte))
+		if (pte_present(old_pte)) {
+			nr = mremap_folio_pte_batch(vma, old_addr, old_ptep,
+						    old_pte, max_nr);
 			force_flush = true;
+		}
+		pte = get_and_clear_full_ptes(mm, old_addr, old_ptep, nr, 0);
 		pte = move_pte(pte, old_addr, new_addr);
 		pte = move_soft_dirty_pte(pte);
@@ -267,7 +290,7 @@ static int move_ptes(struct pagetable_move_control *pmc,
 			else if (is_swap_pte(pte))
 				pte = pte_swp_clear_uffd_wp(pte);
 		}
-		set_pte_at(mm, new_addr, new_ptep, pte);
+		set_ptes(mm, new_addr, new_ptep, pte, nr);
 	}
 }
-- 
2.30.2
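Not part of the patch, for context on get_and_clear_full_ptes(): the
generic fallback, paraphrased below from include/linux/pgtable.h at the
time of writing, loops over ptep_get_and_clear_full() and folds the
accessed/dirty bits of the whole batch into the returned PTE. arm64
overrides it so that contig blocks fully covered by the batch are
cleared without being unfolded, eliding the per-block TLB invalidations
that the old per-PTE ptep_get_and_clear() loop incurred.

#ifndef get_and_clear_full_ptes
static inline pte_t get_and_clear_full_ptes(struct mm_struct *mm,
		unsigned long addr, pte_t *ptep, unsigned int nr, int full)
{
	pte_t pte, tmp_pte;

	/* Clear the first entry; its bits seed the returned PTE. */
	pte = ptep_get_and_clear_full(mm, addr, ptep, full);
	while (--nr) {
		ptep++;
		addr += PAGE_SIZE;
		/* Clear the next entry and accumulate its a/d bits. */
		tmp_pte = ptep_get_and_clear_full(mm, addr, ptep, full);
		if (pte_dirty(tmp_pte))
			pte = pte_mkdirty(pte);
		if (pte_young(tmp_pte))
			pte = pte_mkyoung(pte);
	}
	return pte;
}
#endif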