From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki, Christoph Hellwig, David Hildenbrand, "Matthew Wilcox (Oracle)", Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 00/11] Perf improvements for hugetlb and vmalloc on arm64
Date: Tue, 22 Apr 2025 09:18:08 +0100
Message-ID: <20250422081822.1836315-1-ryan.roberts@arm.com>
Hi All,

This is v4 of a series to improve performance for hugetlb and vmalloc on arm64. Although some of these patches are core-mm, the advice from Andrew was to go via the arm64 tree. All patches are now acked/reviewed by the relevant maintainers, so I believe this should be good to go.

The two key performance improvements are: 1) enabling the use of contpte-mapped blocks in the vmalloc space when appropriate, which reduces TLB pressure (there were already hooks for this, used by powerpc, but they required some tidying and extending for arm64); and 2) batching up barriers when modifying the vmalloc address space, for up to a 30% reduction in time taken in vmalloc(). A sketch of the batching pattern follows below.

vmalloc() performance was measured using the test_vmalloc.ko module. Tested on Apple M2 and Ampere Altra. Each test had its loop count set to 500000 and the whole test was repeated 10 times.
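For context on improvement 2, the rough shape of the batching is shown below. This is a minimal illustrative sketch, not the actual patch: TIF_LAZY_MMU_PENDING is named in the v4 changelog further down, while TIF_LAZY_MMU and queue_pte_barriers() are assumed stand-in names.

#include <linux/thread_info.h>
#include <asm/barrier.h>

/*
 * Illustrative sketch only. Called instead of emitting dsb/isb
 * directly after each kernel PTE write.
 */
static void queue_pte_barriers(void)
{
	if (test_thread_flag(TIF_LAZY_MMU)) {
		/*
		 * Batching: record that barriers are owed, avoiding the
		 * atomic op if the flag is already set.
		 */
		if (!test_thread_flag(TIF_LAZY_MMU_PENDING))
			set_thread_flag(TIF_LAZY_MMU_PENDING);
	} else {
		/* Not batching: make the update visible immediately. */
		dsb(ishst);
		isb();
	}
}

void arch_leave_lazy_mmu_mode(void)
{
	/* Emit the deferred barriers once for the whole batch. */
	if (test_and_clear_thread_flag(TIF_LAZY_MMU_PENDING)) {
		dsb(ishst);
		isb();
	}
	clear_thread_flag(TIF_LAZY_MMU);
}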
legend:
- p: nr_pages (pages to allocate)
- h: use_huge (vmalloc() vs vmalloc_huge())
- (I): statistically significant improvement (95% CI does not overlap)
- (R): statistically significant regression (95% CI does not overlap)
- measurements are times; smaller is better

+--------------------------------------------------+-------------+--------------+
| Benchmark                                        |             |              |
| Result Class                                     | Apple M2    | Ampere Altra |
+==================================================+=============+==============+
| micromm/vmalloc                                  |             |              |
| fix_align_alloc_test: p:1, h:0 (usec)            | (I) -11.53% |       -2.57% |
| fix_size_alloc_test: p:1, h:0 (usec)             |       2.14% |        1.79% |
| fix_size_alloc_test: p:4, h:0 (usec)             |  (I) -9.93% |   (I) -4.80% |
| fix_size_alloc_test: p:16, h:0 (usec)            | (I) -25.07% |  (I) -14.24% |
| fix_size_alloc_test: p:16, h:1 (usec)            | (I) -14.07% |    (R) 7.93% |
| fix_size_alloc_test: p:64, h:0 (usec)            | (I) -29.43% |  (I) -19.30% |
| fix_size_alloc_test: p:64, h:1 (usec)            | (I) -16.39% |    (R) 6.71% |
| fix_size_alloc_test: p:256, h:0 (usec)           | (I) -31.46% |  (I) -20.60% |
| fix_size_alloc_test: p:256, h:1 (usec)           | (I) -16.58% |    (R) 6.70% |
| fix_size_alloc_test: p:512, h:0 (usec)           | (I) -31.96% |  (I) -20.04% |
| fix_size_alloc_test: p:512, h:1 (usec)           |       2.30% |        0.71% |
| full_fit_alloc_test: p:1, h:0 (usec)             |      -2.94% |        1.77% |
| kvfree_rcu_1_arg_vmalloc_test: p:1, h:0 (usec)   |      -7.75% |        1.71% |
| kvfree_rcu_2_arg_vmalloc_test: p:1, h:0 (usec)   |      -9.07% |    (R) 2.34% |
| long_busy_list_alloc_test: p:1, h:0 (usec)       | (I) -29.18% |  (I) -17.91% |
| pcpu_alloc_test: p:1, h:0 (usec)                 |     -14.71% |       -3.14% |
| random_size_align_alloc_test: p:1, h:0 (usec)    | (I) -11.08% |   (I) -4.62% |
| random_size_alloc_test: p:1, h:0 (usec)          | (I) -30.25% |  (I) -17.95% |
| vm_map_ram_test: p:1, h:0 (usec)                 |       5.06% |    (R) 6.63% |
+--------------------------------------------------+-------------+--------------+

So there are some nice improvements, but also some regressions to explain: fix_size_alloc_test with h:1 and p:16,64,256 regresses by ~6% on Altra. The regression is actually introduced by enabling contpte-mapped 64K blocks in these tests, and it is reduced (from about 8%, if memory serves) by the barrier batching. I don't have a definite conclusion on the root cause, but I've ruled out differences in the mapping paths in vmalloc. I strongly suspect the difference in the allocation path: 64K blocks are not cached per-cpu, so we have to go all the way to the buddy allocator. I'm not sure why this doesn't show up on M2, though. Regardless, I'll assert that a 16x reduction in TLB pressure is worth a ~6% increase in vmalloc() allocation call duration.

Changes since v3 [3]
====================

- Applied R-bs (thanks all!)
- Renamed set_ptes_anysz() -> __set_ptes_anysz() (Catalin)
- Renamed ptep_get_and_clear_anysz() -> __ptep_get_and_clear_anysz() (Catalin)
- Only set TIF_LAZY_MMU_PENDING if not already set, to avoid atomic ops (Catalin)
- Fixed comment typos (Anshuman)
- Fixed build warnings when PMD is folded (buildbot)
- Reversed xmas tree for variables in __page_table_check_p[mu]ds_set() (Pasha)

Changes since v2 [2]
====================

- Removed the new arch_update_kernel_mappings_[begin|end]() API
- Switched to arch_[enter|leave]_lazy_mmu_mode() instead for barrier batching
- Removed the clean-up to avoid barriers for invalid or user mappings

Changes since v1 [1]
====================

- Split out the fixes into their own series
- Added R-bs from Anshuman - Thanks!
- Added patch to clean up the methods by which huge_pte size is determined
- Added "#ifndef __PAGETABLE_PMD_FOLDED" around PUD_SIZE in flush_hugetlb_tlb_range()
- Renamed ___set_ptes() -> set_ptes_anysz()
- Renamed ___ptep_get_and_clear() -> ptep_get_and_clear_anysz()
- Fixed typos in commit logs
- Refactored pXd_valid_not_user() for better reuse
- Removed TIF_KMAP_UPDATE_PENDING after concluding that a single flag is sufficient
- Concluded the extra isb() in __switch_to() is not required
- Only call arch_update_kernel_mappings_[begin|end]() for kernel mappings

Applies on top of v6.15-rc3. All mm selftests run with no regressions observed.

[1] https://lore.kernel.org/all/20250205151003.88959-1-ryan.roberts@arm.com/
[2] https://lore.kernel.org/all/20250217140809.1702789-1-ryan.roberts@arm.com/
[3] https://lore.kernel.org/all/20250304150444.3788920-1-ryan.roberts@arm.com/

Thanks,
Ryan

Ryan Roberts (11):
  arm64: hugetlb: Cleanup huge_pte size discovery mechanisms
  arm64: hugetlb: Refine tlb maintenance scope
  mm/page_table_check: Batch-check pmds/puds just like ptes
  arm64/mm: Refactor __set_ptes() and __ptep_get_and_clear()
  arm64: hugetlb: Use __set_ptes_anysz() and __ptep_get_and_clear_anysz()
  arm64/mm: Hoist barriers out of set_ptes_anysz() loop
  mm/vmalloc: Warn on improper use of vunmap_range()
  mm/vmalloc: Gracefully unmap huge ptes
  arm64/mm: Support huge pte-mapped pages in vmap
  mm/vmalloc: Enter lazy mmu mode while manipulating vmalloc ptes
  arm64/mm: Batch barriers when updating kernel mappings

 arch/arm64/include/asm/hugetlb.h     |  29 ++--
 arch/arm64/include/asm/pgtable.h     | 209 +++++++++++++++++++--------
 arch/arm64/include/asm/thread_info.h |   2 +
 arch/arm64/include/asm/vmalloc.h     |  45 ++++++
 arch/arm64/kernel/process.c          |   9 +-
 arch/arm64/mm/hugetlbpage.c          |  73 ++++------
 include/linux/page_table_check.h     |  30 ++--
 include/linux/vmalloc.h              |   8 +
 mm/page_table_check.c                |  34 +++--
 mm/vmalloc.c                         |  40 ++++-
 10 files changed, 329 insertions(+), 150 deletions(-)

-- 
2.43.0