From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	"Matthew Wilcox (Oracle)", Mark Rutland, Anshuman Khandual,
	Alexandre Ghiti, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 10/11] mm/vmalloc: Enter lazy mmu mode while manipulating vmalloc ptes
Date: Tue, 22 Apr 2025 09:18:18 +0100
Message-ID: <20250422081822.1836315-11-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250422081822.1836315-1-ryan.roberts@arm.com>
References: <20250422081822.1836315-1-ryan.roberts@arm.com>

Wrap vmalloc's pte table manipulation loops with
arch_enter_lazy_mmu_mode() / arch_leave_lazy_mmu_mode(). This gives the
arch code the opportunity to batch and optimize the pte manipulations.

Note that vmap_pfn() already uses lazy mmu mode, since it delegates to
apply_to_page_range(), which enters lazy mmu mode for both user and
kernel mappings.

These hooks will shortly be used by arm64 to improve vmalloc
performance.
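For context: when an architecture does not opt in, both hooks default
to no-ops (include/linux/pgtable.h), so generic code can call them
unconditionally. Below is a minimal sketch of the batching pattern an
arch could implement inside the window this patch opens; all demo_*
names are hypothetical and this is not the arm64 implementation:

  /*
   * Hypothetical arch hooks, for illustration only. Only the
   * arch_{enter,leave}_lazy_mmu_mode() names and set_pte_at() are
   * real kernel interfaces; everything prefixed demo_ is invented.
   */
  static bool demo_lazy_mmu_active;	/* per-CPU in real code */

  static void demo_flush_deferred_pte_work(void)
  {
  	/* e.g. a single barrier/flush covering the whole batch */
  }

  static inline void arch_enter_lazy_mmu_mode(void)
  {
  	/* Per-PTE synchronization may be deferred from here on. */
  	demo_lazy_mmu_active = true;
  }

  static inline void arch_leave_lazy_mmu_mode(void)
  {
  	demo_lazy_mmu_active = false;
  	/*
  	 * Pay the synchronization cost once for the whole run of
  	 * set_pte_at() calls, rather than once per pte.
  	 */
  	demo_flush_deferred_pte_work();
  }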
Reviewed-by: Uladzislau Rezki (Sony)
Reviewed-by: Catalin Marinas
Reviewed-by: Anshuman Khandual
Signed-off-by: Ryan Roberts
---
 mm/vmalloc.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index fe2e2cc8da94..24430160b37f 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -104,6 +104,9 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	pte = pte_alloc_kernel_track(pmd, addr, mask);
 	if (!pte)
 		return -ENOMEM;
+
+	arch_enter_lazy_mmu_mode();
+
 	do {
 		if (unlikely(!pte_none(ptep_get(pte)))) {
 			if (pfn_valid(pfn)) {
@@ -127,6 +130,8 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 		set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
 		pfn++;
 	} while (pte += PFN_DOWN(size), addr += size, addr != end);
+
+	arch_leave_lazy_mmu_mode();
 	*mask |= PGTBL_PTE_MODIFIED;
 	return 0;
 }
@@ -354,6 +359,8 @@ static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	unsigned long size = PAGE_SIZE;
 
 	pte = pte_offset_kernel(pmd, addr);
+	arch_enter_lazy_mmu_mode();
+
 	do {
 #ifdef CONFIG_HUGETLB_PAGE
 		size = arch_vmap_pte_range_unmap_size(addr, pte);
@@ -370,6 +377,8 @@ static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 		ptent = ptep_get_and_clear(&init_mm, addr, pte);
 		WARN_ON(!pte_none(ptent) && !pte_present(ptent));
 	} while (pte += (size >> PAGE_SHIFT), addr += size, addr != end);
+
+	arch_leave_lazy_mmu_mode();
 	*mask |= PGTBL_PTE_MODIFIED;
 }
 
@@ -515,6 +524,9 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	pte = pte_alloc_kernel_track(pmd, addr, mask);
 	if (!pte)
 		return -ENOMEM;
+
+	arch_enter_lazy_mmu_mode();
+
 	do {
 		struct page *page = pages[*nr];
 
@@ -528,6 +540,8 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		set_pte_at(&init_mm, addr, pte, mk_pte(page, prot));
 		(*nr)++;
 	} while (pte++, addr += PAGE_SIZE, addr != end);
+
+	arch_leave_lazy_mmu_mode();
 	*mask |= PGTBL_PTE_MODIFIED;
 	return 0;
 }
-- 
2.43.0