From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	"Matthew Wilcox (Oracle)", Mark Rutland, Anshuman Khandual,
	Alexandre Ghiti, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v3 09/11] arm64/mm: Support huge pte-mapped pages in vmap
Date: Tue, 4 Mar 2025 15:04:39 +0000
Message-ID: <20250304150444.3788920-10-ryan.roberts@arm.com>
In-Reply-To: <20250304150444.3788920-1-ryan.roberts@arm.com>
References: <20250304150444.3788920-1-ryan.roberts@arm.com>

Implement the required arch functions to enable use of contpte mappings
in vmap when VM_ALLOW_HUGE_VMAP is specified. This speeds up vmap
operations because a DSB and ISB only need to be issued per contpte
block instead of per pte, and it also reduces TLB pressure because a
single TLB entry covers the whole contpte block.

Since vmap uses set_huge_pte_at() to set the contpte, that API is now
used for kernel mappings for the first time. Although in the vmap case
we never expect it to be called to modify a valid mapping, so
clear_flush() should never run, it is still wise to make it robust for
the kernel case, so amend the TLB flush to use flush_tlb_kernel_range()
when the mm is init_mm.

Tested with vmalloc performance selftests:

  # kself/mm/test_vmalloc.sh \
	run_test_mask=1 test_repeat_count=5 nr_pages=256 test_loop_count=100000 use_huge=1

Duration reduced from 1274243 usec to 1083553 usec on Apple M2, a 15%
reduction in time taken.
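[Editor's aside, not part of the patch: a minimal sketch of a caller that
could exercise this path. vmalloc_huge() is the vmalloc variant that
requests VM_ALLOW_HUGE_VMAP; the module name and the 1MiB size are
illustrative assumptions, chosen so that on a 4K granule the region spans
several CONT_PTE_SIZE blocks and may therefore be pte-mapped with PTE_CONT.]

/*
 * Illustrative sketch only: allocate a huge-mappable vmalloc region so the
 * new contpte mapping path can be taken on arm64.
 */
#include <linux/module.h>
#include <linux/sizes.h>
#include <linux/vmalloc.h>

static void *demo_buf;

static int __init contpte_vmap_demo_init(void)
{
	/* Large enough to span several 64KiB contpte blocks on a 4K granule. */
	demo_buf = vmalloc_huge(SZ_1M, GFP_KERNEL);
	if (!demo_buf)
		return -ENOMEM;

	pr_info("contpte_vmap_demo: allocated 1MiB at %px\n", demo_buf);
	return 0;
}

static void __exit contpte_vmap_demo_exit(void)
{
	vfree(demo_buf);
}

module_init(contpte_vmap_demo_init);
module_exit(contpte_vmap_demo_exit);
MODULE_LICENSE("GPL");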
Reviewed-by: Anshuman Khandual
Reviewed-by: Catalin Marinas
Signed-off-by: Ryan Roberts
---
 arch/arm64/include/asm/vmalloc.h | 45 ++++++++++++++++++++++++++++++++
 arch/arm64/mm/hugetlbpage.c      |  5 +++-
 2 files changed, 49 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
index 38fafffe699f..12f534e8f3ed 100644
--- a/arch/arm64/include/asm/vmalloc.h
+++ b/arch/arm64/include/asm/vmalloc.h
@@ -23,6 +23,51 @@ static inline bool arch_vmap_pmd_supported(pgprot_t prot)
 	return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
 }
 
+#define arch_vmap_pte_range_map_size arch_vmap_pte_range_map_size
+static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr,
+						unsigned long end, u64 pfn,
+						unsigned int max_page_shift)
+{
+	/*
+	 * If the block is at least CONT_PTE_SIZE in size, and is naturally
+	 * aligned in both virtual and physical space, then we can pte-map the
+	 * block using the PTE_CONT bit for more efficient use of the TLB.
+	 */
+	if (max_page_shift < CONT_PTE_SHIFT)
+		return PAGE_SIZE;
+
+	if (end - addr < CONT_PTE_SIZE)
+		return PAGE_SIZE;
+
+	if (!IS_ALIGNED(addr, CONT_PTE_SIZE))
+		return PAGE_SIZE;
+
+	if (!IS_ALIGNED(PFN_PHYS(pfn), CONT_PTE_SIZE))
+		return PAGE_SIZE;
+
+	return CONT_PTE_SIZE;
+}
+
+#define arch_vmap_pte_range_unmap_size arch_vmap_pte_range_unmap_size
+static inline unsigned long arch_vmap_pte_range_unmap_size(unsigned long addr,
+							    pte_t *ptep)
+{
+	/*
+	 * The caller handles alignment so it's sufficient just to check
+	 * PTE_CONT.
+	 */
+	return pte_valid_cont(__ptep_get(ptep)) ? CONT_PTE_SIZE : PAGE_SIZE;
+}
+
+#define arch_vmap_pte_supported_shift arch_vmap_pte_supported_shift
+static inline int arch_vmap_pte_supported_shift(unsigned long size)
+{
+	if (size >= CONT_PTE_SIZE)
+		return CONT_PTE_SHIFT;
+
+	return PAGE_SHIFT;
+}
+
 #endif
 
 #define arch_vmap_pgprot_tagged arch_vmap_pgprot_tagged
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index efd18bd1eae3..c1cb13dd5e84 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -210,7 +210,10 @@ static void clear_flush(struct mm_struct *mm,
 	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
 		ptep_get_and_clear_anysz(mm, ptep, pgsize);
 
-	__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
+	if (mm == &init_mm)
+		flush_tlb_kernel_range(saddr, addr);
+	else
+		__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
 }
 
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
-- 
2.43.0
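
[Editor's aside, not part of the patch: the mapping-size policy added above
can be sketched in isolation — a contpte step is only taken when the caller
allows it, when at least CONT_PTE_SIZE of the range remains, and when both
the virtual and physical addresses are naturally aligned. The constants
below are assumptions for a 4K granule with 16 contiguous PTEs; the real
definitions live in the arm64 headers, and the real hook takes a pfn rather
than a physical address.]

/* Standalone userspace sketch mirroring arch_vmap_pte_range_map_size(). */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT	12UL
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define CONT_PTE_SHIFT	(PAGE_SHIFT + 4)		/* 16 PTEs per block */
#define CONT_PTE_SIZE	(1UL << CONT_PTE_SHIFT)		/* 64 KiB */

static bool is_aligned(unsigned long x, unsigned long a)
{
	return (x & (a - 1)) == 0;
}

/* Step size the pte-mapping loop would take at this position. */
static unsigned long map_step(unsigned long addr, unsigned long end,
			      unsigned long phys, unsigned int max_page_shift)
{
	if (max_page_shift < CONT_PTE_SHIFT)
		return PAGE_SIZE;	/* caller capped the mapping size */
	if (end - addr < CONT_PTE_SIZE)
		return PAGE_SIZE;	/* remaining range too short */
	if (!is_aligned(addr, CONT_PTE_SIZE) || !is_aligned(phys, CONT_PTE_SIZE))
		return PAGE_SIZE;	/* VA or PA not naturally aligned */
	return CONT_PTE_SIZE;
}

int main(void)
{
	unsigned long va = 0xffff000012340000UL;	/* 64 KiB aligned */
	unsigned long pa = 0x80000000UL;		/* 64 KiB aligned */

	/* Aligned VA/PA with 128 KiB remaining: full contpte step. */
	printf("step=%#lx\n", map_step(va, va + 2 * CONT_PTE_SIZE, pa, 30));

	/* PA misaligned by one page: falls back to a single pte. */
	printf("step=%#lx\n",
	       map_step(va, va + 2 * CONT_PTE_SIZE, pa + PAGE_SIZE, 30));
	return 0;
}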