From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 24 Feb 2025 12:03:54 +0000
From: Catalin Marinas
To: Ryan Roberts
Cc: Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki,
	Christoph Hellwig, David Hildenbrand, "Matthew Wilcox (Oracle)",
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
	linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 10/14] arm64/mm: Support huge pte-mapped pages in vmap
Message-ID: 
References: <20250217140809.1702789-1-ryan.roberts@arm.com>
 <20250217140809.1702789-11-ryan.roberts@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20250217140809.1702789-11-ryan.roberts@arm.com>

On Mon, Feb 17, 2025 at 02:08:02PM +0000, Ryan Roberts wrote:
> Implement the required arch functions to enable use of contpte in the
> vmap when VM_ALLOW_HUGE_VMAP is specified. This speeds up vmap
> operations due to only having to issue a DSB and ISB per contpte block
> instead of per pte.

For non-cont PTEs, do you happen to know how often vmap_pte_range() is
called for multiple entries? It might be worth changing that to use
set_ptes() directly so we get some benefit there as well.
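
Roughly something like the sketch below -- entirely hypothetical and
untested, it skips error handling, the pte_none() sanity check and the
huge-mapping path this series adds, and it relies on vmap_pte_range()
mapping a physically contiguous range:

/*
 * Hypothetical sketch, not the actual mm/vmalloc.c implementation:
 * batch the non-cont PTE case through set_ptes().
 */
static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
			  phys_addr_t phys_addr, pgprot_t prot,
			  unsigned int max_page_shift, pgtbl_mod_mask *mask)
{
	unsigned int nr = PFN_DOWN(end - addr);
	pte_t *pte;

	pte = pte_alloc_kernel_track(pmd, addr, mask);
	if (!pte)
		return -ENOMEM;

	/*
	 * The physical range backing [addr, end) is contiguous, so a
	 * single set_ptes() call covers all 'nr' entries and gives the
	 * arch a chance to batch its barriers (one DSB/ISB on arm64)
	 * instead of paying them per pte.
	 */
	set_ptes(&init_mm, addr, pte, pfn_pte(PFN_DOWN(phys_addr), prot), nr);

	*mask |= PGTBL_PTE_MODIFIED;
	return 0;
}

That would amortise the barrier cost across the whole range even when
the PTE_CONT alignment requirements aren't met.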

> But it also means that the TLB pressure reduces due
> to only needing a single TLB entry for the whole contpte block.
>
> Since vmap uses set_huge_pte_at() to set the contpte, that API is now
> used for kernel mappings for the first time. Although in the vmap case
> we never expect it to be called to modify a valid mapping so
> clear_flush() should never be called, it's still wise to make it robust
> for the kernel case, so amend the tlb flush function if the mm is for
> kernel space.
>
> Tested with vmalloc performance selftests:
>
>   # kself/mm/test_vmalloc.sh \
>	run_test_mask=1
>	test_repeat_count=5
>	nr_pages=256
>	test_loop_count=100000
>	use_huge=1
>
> Duration reduced from 1274243 usec to 1083553 usec on Apple M2 for 15%
> reduction in time taken.
>
> Reviewed-by: Anshuman Khandual
> Signed-off-by: Ryan Roberts
> ---
>  arch/arm64/include/asm/vmalloc.h | 46 ++++++++++++++++++++++++++++++++
>  arch/arm64/mm/hugetlbpage.c      |  5 +++-
>  2 files changed, 50 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
> index 38fafffe699f..40ebc664190b 100644
> --- a/arch/arm64/include/asm/vmalloc.h
> +++ b/arch/arm64/include/asm/vmalloc.h
> @@ -23,6 +23,52 @@ static inline bool arch_vmap_pmd_supported(pgprot_t prot)
>  	return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
>  }
>  
> +#define arch_vmap_pte_range_map_size arch_vmap_pte_range_map_size
> +static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr,
> +						unsigned long end, u64 pfn,
> +						unsigned int max_page_shift)
> +{
> +	/*
> +	 * If the block is at least CONT_PTE_SIZE in size, and is naturally
> +	 * aligned in both virtual and physical space, then we can pte-map the
> +	 * block using the PTE_CONT bit for more efficient use of the TLB.
> +	 */
> +

Nit: unnecessary empty line.

> +	if (max_page_shift < CONT_PTE_SHIFT)
> +		return PAGE_SIZE;
> +
> +	if (end - addr < CONT_PTE_SIZE)
> +		return PAGE_SIZE;
> +
> +	if (!IS_ALIGNED(addr, CONT_PTE_SIZE))
> +		return PAGE_SIZE;
> +
> +	if (!IS_ALIGNED(PFN_PHYS(pfn), CONT_PTE_SIZE))
> +		return PAGE_SIZE;
> +
> +	return CONT_PTE_SIZE;
> +}
> +
> +#define arch_vmap_pte_range_unmap_size arch_vmap_pte_range_unmap_size
> +static inline unsigned long arch_vmap_pte_range_unmap_size(unsigned long addr,
> +							    pte_t *ptep)
> +{
> +	/*
> +	 * The caller handles alignment so it's sufficient just to check
> +	 * PTE_CONT.
> +	 */
> +	return pte_valid_cont(__ptep_get(ptep)) ? CONT_PTE_SIZE : PAGE_SIZE;
> +}
> +
> +#define arch_vmap_pte_supported_shift arch_vmap_pte_supported_shift
> +static inline int arch_vmap_pte_supported_shift(unsigned long size)
> +{
> +	if (size >= CONT_PTE_SIZE)
> +		return CONT_PTE_SHIFT;
> +
> +	return PAGE_SHIFT;
> +}
> +
>  #endif
>  
>  #define arch_vmap_pgprot_tagged arch_vmap_pgprot_tagged
> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> index 8ac86cd180b3..a29f347fea54 100644
> --- a/arch/arm64/mm/hugetlbpage.c
> +++ b/arch/arm64/mm/hugetlbpage.c
> @@ -217,7 +217,10 @@ static void clear_flush(struct mm_struct *mm,
>  	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
>  		ptep_get_and_clear_anysz(mm, ptep, pgsize);
>  
> -	__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
> +	if (mm == &init_mm)
> +		flush_tlb_kernel_range(saddr, addr);
> +	else
> +		__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
>  }
>  
>  void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,

Reviewed-by: Catalin Marinas