From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki, Christoph Hellwig, David Hildenbrand, "Matthew Wilcox (Oracle)", Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 02/11] arm64: hugetlb: Refine tlb maintenance scope
Date: Tue, 22 Apr 2025 09:18:10 +0100
Message-ID: <20250422081822.1836315-3-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250422081822.1836315-1-ryan.roberts@arm.com>
References: <20250422081822.1836315-1-ryan.roberts@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
When operating on contiguous blocks of ptes (or pmds) for some hugetlb
sizes, we must honour break-before-make requirements and clear down the
block to invalid state in the pgtable, then invalidate the relevant tlb
entries before making the pgtable entries valid again.

However, the tlb maintenance is currently always done assuming the worst
case stride (PAGE_SIZE), last_level (false) and tlb_level
(TLBI_TTL_UNKNOWN). We can do much better with the hinting: in reality,
we know the stride from the huge_pte pgsize, we are always operating
only on the last level, and we always know the tlb_level, again based on
pgsize.
So let's start providing these hints.

Additionally, avoid tlb maintenance in set_huge_pte_at().
Break-before-make is only required if we are transitioning the
contiguous pte block from valid -> valid. So let's elide the
clear-and-flush ("break") if the pte range was previously invalid.

Reviewed-by: Catalin Marinas
Reviewed-by: Anshuman Khandual
Signed-off-by: Ryan Roberts
---
 arch/arm64/include/asm/hugetlb.h | 29 +++++++++++++++++++----------
 arch/arm64/mm/hugetlbpage.c      |  9 ++++++---
 2 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 07fbf5bf85a7..2a8155c4a882 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -69,29 +69,38 @@ extern void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 
 #include <asm-generic/hugetlb.h>
 
-#define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
-static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
-					   unsigned long start,
-					   unsigned long end)
+static inline void __flush_hugetlb_tlb_range(struct vm_area_struct *vma,
+					     unsigned long start,
+					     unsigned long end,
+					     unsigned long stride,
+					     bool last_level)
 {
-	unsigned long stride = huge_page_size(hstate_vma(vma));
-
 	switch (stride) {
 #ifndef __PAGETABLE_PMD_FOLDED
 	case PUD_SIZE:
-		__flush_tlb_range(vma, start, end, PUD_SIZE, false, 1);
+		__flush_tlb_range(vma, start, end, PUD_SIZE, last_level, 1);
 		break;
 #endif
 	case CONT_PMD_SIZE:
 	case PMD_SIZE:
-		__flush_tlb_range(vma, start, end, PMD_SIZE, false, 2);
+		__flush_tlb_range(vma, start, end, PMD_SIZE, last_level, 2);
 		break;
 	case CONT_PTE_SIZE:
-		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 3);
+		__flush_tlb_range(vma, start, end, PAGE_SIZE, last_level, 3);
 		break;
 	default:
-		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
+		__flush_tlb_range(vma, start, end, PAGE_SIZE, last_level, TLBI_TTL_UNKNOWN);
 	}
 }
 
+#define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
+static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
+					   unsigned long start,
+					   unsigned long end)
+{
+	unsigned long stride = huge_page_size(hstate_vma(vma));
+
+	__flush_hugetlb_tlb_range(vma, start, end, stride, false);
+}
+
 #endif /* __ASM_HUGETLB_H */
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 701394aa7734..087fc43381c6 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -183,8 +183,9 @@ static pte_t get_clear_contig_flush(struct mm_struct *mm,
 {
 	pte_t orig_pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
 	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
+	unsigned long end = addr + (pgsize * ncontig);
 
-	flush_tlb_range(&vma, addr, addr + (pgsize * ncontig));
+	__flush_hugetlb_tlb_range(&vma, addr, end, pgsize, true);
 	return orig_pte;
 }
 
@@ -209,7 +210,7 @@ static void clear_flush(struct mm_struct *mm,
 	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
 		__ptep_get_and_clear(mm, addr, ptep);
 
-	flush_tlb_range(&vma, saddr, addr);
+	__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
 }
 
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
@@ -238,7 +239,9 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 	dpfn = pgsize >> PAGE_SHIFT;
 	hugeprot = pte_pgprot(pte);
 
-	clear_flush(mm, addr, ptep, pgsize, ncontig);
+	/* Only need to "break" if transitioning valid -> valid. */
+	if (pte_valid(__ptep_get(ptep)))
+		clear_flush(mm, addr, ptep, pgsize, ncontig);
 
 	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
 		__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
-- 
2.43.0