From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, Muchun Song, Pasha Tatashin,
	Andrew Morton, Uladzislau Rezki, Christoph Hellwig, Mark Rutland,
	Ard Biesheuvel, Anshuman Khandual, Dev Jain, Alexandre Ghiti,
	Steve Capper, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v1 12/16] arm64/mm: Support huge pte-mapped pages in vmap
Date: Wed, 5 Feb 2025 15:09:52 +0000
Message-ID: <20250205151003.88959-13-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250205151003.88959-1-ryan.roberts@arm.com>
References: <20250205151003.88959-1-ryan.roberts@arm.com>
MIME-Version: 1.0

Implement the required arch functions to enable use of contpte mappings
in vmap when VM_ALLOW_HUGE_VMAP is specified. This speeds up vmap
operations, since a DSB and ISB need only be issued per contpte block
rather than per pte. It also reduces TLB pressure, since a single TLB
entry covers the whole contpte block.

Since vmap uses set_huge_pte_at() to install the contpte, that API is
now used for kernel mappings for the first time. In the vmap case we
never expect set_huge_pte_at() to be called to modify a valid mapping,
so clear_flush() should never run; it is still wise to make it robust
for the kernel case, so amend the TLB flush in clear_flush() to use
flush_tlb_kernel_range() when the mm is init_mm.

Tested with vmalloc performance selftests:

  # kself/mm/test_vmalloc.sh \
	run_test_mask=1 \
	test_repeat_count=5 \
	nr_pages=256 \
	test_loop_count=100000 \
	use_huge=1

Duration reduced from 1274243 usec to 1083553 usec on Apple M2, a 15%
reduction in time taken.
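For context, the hooks below are consumed by the generic pte-level vmap
path. A minimal sketch of that consumer, modelled on vmap_pte_range() in
mm/vmalloc.c, is shown here; the pgtbl_mod_mask tracking is elided for
brevity and the exact shape of the generic code belongs to a companion
patch in this series, so treat this as illustrative rather than what
this patch adds:

static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
			  unsigned long end, phys_addr_t phys_addr,
			  pgprot_t prot, unsigned int max_page_shift)
{
	unsigned long size = PAGE_SIZE;
	u64 pfn = phys_addr >> PAGE_SHIFT;
	pte_t *pte = pte_alloc_kernel(pmd, addr);

	if (!pte)
		return -ENOMEM;

	do {
		BUG_ON(!pte_none(ptep_get(pte)));

		/*
		 * Ask the arch how big a step it can map here. On arm64
		 * this returns CONT_PTE_SIZE when addr, end and pfn are
		 * all suitably aligned, else PAGE_SIZE.
		 */
		size = arch_vmap_pte_range_map_size(addr, end, pfn,
						    max_page_shift);
		if (size != PAGE_SIZE) {
			pte_t entry = pfn_pte(pfn, prot);

			/* Install the whole contpte block in one call. */
			entry = arch_make_huge_pte(entry, ilog2(size), 0);
			set_huge_pte_at(&init_mm, addr, pte, entry, size);
			pfn += PFN_DOWN(size);
			continue;
		}

		set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
		pfn++;
	} while (pte += PFN_DOWN(size), addr += size, addr != end);

	return 0;
}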
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/vmalloc.h | 40 ++++++++++++++++++++++++++++++++
 arch/arm64/mm/hugetlbpage.c      |  5 +++-
 2 files changed, 44 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
index 38fafffe699f..fbdeb40f3857 100644
--- a/arch/arm64/include/asm/vmalloc.h
+++ b/arch/arm64/include/asm/vmalloc.h
@@ -23,6 +23,46 @@ static inline bool arch_vmap_pmd_supported(pgprot_t prot)
 	return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
 }
 
+#define arch_vmap_pte_range_map_size arch_vmap_pte_range_map_size
+static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr,
+						unsigned long end, u64 pfn,
+						unsigned int max_page_shift)
+{
+	if (max_page_shift < CONT_PTE_SHIFT)
+		return PAGE_SIZE;
+
+	if (end - addr < CONT_PTE_SIZE)
+		return PAGE_SIZE;
+
+	if (!IS_ALIGNED(addr, CONT_PTE_SIZE))
+		return PAGE_SIZE;
+
+	if (!IS_ALIGNED(PFN_PHYS(pfn), CONT_PTE_SIZE))
+		return PAGE_SIZE;
+
+	return CONT_PTE_SIZE;
+}
+
+#define arch_vmap_pte_range_unmap_size arch_vmap_pte_range_unmap_size
+static inline unsigned long arch_vmap_pte_range_unmap_size(unsigned long addr,
+							    pte_t *ptep)
+{
+	/*
+	 * The caller handles alignment so it's sufficient just to check
+	 * PTE_CONT.
+	 */
+	return pte_valid_cont(__ptep_get(ptep)) ? CONT_PTE_SIZE : PAGE_SIZE;
+}
+
+#define arch_vmap_pte_supported_shift arch_vmap_pte_supported_shift
+static inline int arch_vmap_pte_supported_shift(unsigned long size)
+{
+	if (size >= CONT_PTE_SIZE)
+		return CONT_PTE_SHIFT;
+
+	return PAGE_SHIFT;
+}
+
 #endif
 
 #define arch_vmap_pgprot_tagged arch_vmap_pgprot_tagged
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 02afee31444e..a74e43101dad 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -217,7 +217,10 @@ static void clear_flush(struct mm_struct *mm,
 	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
 		___ptep_get_and_clear(mm, ptep, pgsize);
 
-	__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
+	if (mm == &init_mm)
+		flush_tlb_kernel_range(saddr, addr);
+	else
+		__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
 }
 
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
-- 
2.43.0
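
For completeness, the unmap side is where arch_vmap_pte_range_unmap_size()
comes in: the generic teardown probes each pte and, when the arch reports
a contpte block, clears the whole block in one call. A hedged sketch,
again modelled loosely on mm/vmalloc.c; the size argument to
huge_ptep_get_and_clear() is introduced by a companion patch earlier in
this series, so the exact signature here is an assumption:

static void vunmap_pte_range(pmd_t *pmd, unsigned long addr,
			     unsigned long end)
{
	unsigned long size = PAGE_SIZE;
	pte_t *pte = pte_offset_kernel(pmd, addr);
	pte_t ptent;

	do {
		/*
		 * On arm64 this returns CONT_PTE_SIZE when the pte has
		 * PTE_CONT set, so the block is torn down in one go.
		 */
		size = arch_vmap_pte_range_unmap_size(addr, pte);
		if (size != PAGE_SIZE)
			ptent = huge_ptep_get_and_clear(&init_mm, addr,
							pte, size);
		else
			ptent = ptep_get_and_clear(&init_mm, addr, pte);

		/* A none or present pte is expected here; warn otherwise. */
		WARN_ON(!pte_none(ptent) && !pte_present(ptent));
	} while (pte += PFN_DOWN(size), addr += size, addr != end);
}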