From: Ryan Roberts
Date: Tue, 13 May 2025 13:46:06 +0100
Subject: Re: [RFC PATCH v4 5/5] mm/filemap: Allow arch to request folio size for exec memory
To: Will Deacon
Cc: Andrew Morton, "Matthew Wilcox (Oracle)", Alexander Viro, Christian Brauner, Jan Kara, David Hildenbrand, Dave Chinner, Catalin Marinas, Kalesh Singh, Zi Yan, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
References: <20250430145920.3748738-1-ryan.roberts@arm.com> <20250430145920.3748738-6-ryan.roberts@arm.com> <20250509135223.GB5707@willie-the-truck>
In-Reply-To: <20250509135223.GB5707@willie-the-truck>
Content-Type: text/plain; charset=UTF-8
On 09/05/2025 14:52, Will Deacon wrote:
> On Wed, Apr 30, 2025 at 03:59:18PM +0100, Ryan Roberts wrote:
>> Change the readahead config so that if it is being requested for an
>> executable mapping, do a synchronous read into a set of folios with an
>> arch-specified order and in a naturally aligned manner. We no longer
>> center the read on the faulting page but simply align it down to the
>> previous natural boundary. Additionally, we don't bother with an
>> asynchronous part.
>>
>> On arm64 if memory is physically contiguous and naturally aligned to
>> the "contpte" size, we can use contpte mappings, which improves
>> utilization of the TLB. When paired with the "multi-size THP" feature,
>> this works well to reduce dTLB pressure. However iTLB pressure is
>> still high due to executable mappings having a low likelihood of being
>> in the required folio size and mapping alignment, even when the
>> filesystem supports readahead into large folios (e.g. XFS).
>>
>> The reason for the low likelihood is that the current readahead
>> algorithm starts with an order-0 folio and increases the folio order
>> by 2 every time the readahead mark is hit. But most executable memory
>> tends to be accessed randomly and so the readahead mark is rarely hit
>> and most executable folios remain order-0.
>>
>> So let's special-case the read(ahead) logic for executable mappings.
>> The trade-off is performance improvement (due to more efficient
>> storage of the translations in iTLB) vs potential for making reclaim
>> more difficult (due to the folios being larger so if a part of the
>> folio is hot the whole thing is considered hot). But executable memory
>> is a small portion of the overall system memory so I doubt this will
>> even register from a reclaim perspective.
>>
>> I've chosen 64K folio size for arm64 which benefits both the 4K and
>> 16K base page size configs. Crucially the same amount of data is still
>> read (usually 128K) so I'm not expecting any read amplification
>> issues. I don't anticipate any write amplification because text is
>> always RO.
>>
>> Note that the text region of an ELF file could be populated into the
>> page cache for other reasons than taking a fault in a mmapped area.
>> The most common case is due to the loader read()ing the header which
>> can be shared with the beginning of text. So some text will still
>> remain in small folios, but this simple, best effort change provides
>> good performance improvements as is.
>>
>> Confine this special-case approach to the bounds of the VMA. This
>> prevents wasting memory for any padding that might exist in the file
>> between sections. Previously the padding would have been contained in
>> order-0 folios and would be easy to reclaim. But now it would be part
>> of a larger folio so more difficult to reclaim. Solve this by simply
>> not reading it into memory in the first place.
>>
>> Benchmarking
>> ============
>> TODO: NUMBERS ARE FOR V3 OF SERIES. NEED TO RERUN FOR THIS VERSION.
>>
>> The below shows nginx and redis benchmarks on Ampere Altra arm64 system.
>>
>> First, confirmation that this patch causes more text to be contained in
>> 64K folios:
>>
>> | File-backed folios     |   system boot   |      nginx      |      redis      |
>> | by size as percentage  |-----------------|-----------------|-----------------|
>> | of all mapped text mem | before | after  | before | after  | before | after  |
>> |========================|========|========|========|========|========|========|
>> | base-page-4kB          |    26% |     9% |    27% |     6% |    21% |     5% |
>> | thp-aligned-8kB        |     4% |     2% |     3% |     0% |     4% |     1% |
>> | thp-aligned-16kB       |    57% |    21% |    57% |     6% |    54% |    10% |
>> | thp-aligned-32kB       |     4% |     1% |     4% |     1% |     3% |     1% |
>> | thp-aligned-64kB       |     7% |    65% |     8% |    85% |     9% |    72% |
>> | thp-aligned-2048kB     |     0% |     0% |     0% |     0% |     7% |     8% |
>> | thp-unaligned-16kB     |     1% |     1% |     1% |     1% |     1% |     1% |
>> | thp-unaligned-32kB     |     0% |     0% |     0% |     0% |     0% |     0% |
>> | thp-unaligned-64kB     |     0% |     0% |     0% |     1% |     0% |     1% |
>> | thp-partial            |     1% |     1% |     0% |     0% |     1% |     1% |
>> |------------------------|--------|--------|--------|--------|--------|--------|
>> | cont-aligned-64kB      |     7% |    65% |     8% |    85% |    16% |    80% |
>>
>> The above shows that for both workloads (each isolated with cgroups) as
>> well as the general system state after boot, the amount of text backed
>> by 4K and 16K folios reduces and the amount backed by 64K folios
>> increases significantly.
>> And the amount of text that is contpte-mapped
>> significantly increases (see last row).
>>
>> And this is reflected in performance improvement:
>>
>> | Benchmark                                     | Improvement          |
>> +===============================================+======================+
>> | pts/nginx (200 connections)                   | 8.96%                |
>> | pts/nginx (1000 connections)                  | 6.80%                |
>> +-----------------------------------------------+----------------------+
>> | pts/redis (LPOP, 50 connections)              | 5.07%                |
>> | pts/redis (LPUSH, 50 connections)             | 3.68%                |
>>
>> Signed-off-by: Ryan Roberts
>> ---
>>  arch/arm64/include/asm/pgtable.h |  8 +++++++
>>  include/linux/pgtable.h          | 11 +++++++++
>>  mm/filemap.c                     | 40 ++++++++++++++++++++++++++------
>>  3 files changed, 52 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index 2a77f11b78d5..9eb35af0d3cf 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -1537,6 +1537,14 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
>>   */
>>  #define arch_wants_old_prefaulted_pte	cpu_has_hw_af
>>
>> +/*
>> + * Request exec memory is read into pagecache in at least 64K folios. This size
>> + * can be contpte-mapped when 4K base pages are in use (16 pages into 1 iTLB
>> + * entry), and HPA can coalesce it (4 pages into 1 TLB entry) when 16K base
>> + * pages are in use.
>> + */
>> +#define exec_folio_order()	ilog2(SZ_64K >> PAGE_SHIFT)
>> +
>>  static inline bool pud_sect_supported(void)
>>  {
>>  	return PAGE_SIZE == SZ_4K;
>>
>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>> index b50447ef1c92..1dd539c49f90 100644
>> --- a/include/linux/pgtable.h
>> +++ b/include/linux/pgtable.h
>> @@ -456,6 +456,17 @@ static inline bool arch_has_hw_pte_young(void)
>>  }
>>  #endif
>>
>> +#ifndef exec_folio_order
>> +/*
>> + * Returns preferred minimum folio order for executable file-backed memory. Must
>> + * be in range [0, PMD_ORDER). Default to order-0.
>> + */
>> +static inline unsigned int exec_folio_order(void)
>> +{
>> +	return 0;
>> +}
>> +#endif
>> +
>>  #ifndef arch_check_zapped_pte
>>  static inline void arch_check_zapped_pte(struct vm_area_struct *vma,
>>  					 pte_t pte)
>>
>> diff --git a/mm/filemap.c b/mm/filemap.c
>> index e61f374068d4..37fe4a55c00d 100644
>> --- a/mm/filemap.c
>> +++ b/mm/filemap.c
>> @@ -3252,14 +3252,40 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>>  	if (mmap_miss > MMAP_LOTSAMISS)
>>  		return fpin;
>>
>> -	/*
>> -	 * mmap read-around
>> -	 */
>>  	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
>> -	ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
>> -	ra->size = ra->ra_pages;
>> -	ra->async_size = ra->ra_pages / 4;
>> -	ra->order = 0;
>> +	if (vm_flags & VM_EXEC) {
>> +		/*
>> +		 * Allow arch to request a preferred minimum folio order for
>> +		 * executable memory. This can often be beneficial to
>> +		 * performance if (e.g.) arm64 can contpte-map the folio.
>> +		 * Executable memory rarely benefits from readahead, due to its
>> +		 * random access nature, so set async_size to 0.
>
> In light of this observation (about randomness of instruction fetch), do
> you think it's worth ignoring VM_RAND_READ for VM_EXEC?

Hmm, yeah that makes sense. Something like:

---8<---
diff --git a/mm/filemap.c b/mm/filemap.c
index 7b90cbeb4a1a..6c8bf5116c54 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3233,7 +3233,8 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 	if (!ra->ra_pages)
 		return fpin;

-	if (vm_flags & VM_SEQ_READ) {
+	/* VM_EXEC case below is already intended for random access */
+	if ((vm_flags & (VM_SEQ_READ | VM_EXEC)) == VM_SEQ_READ) {
 		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
 		page_cache_sync_ra(&ractl, ra->ra_pages);
 		return fpin;
---8<---

> Either way, I was looking at this because it touches arm64 and it looks
> fine to me:
>
> Acked-by: Will Deacon

Thanks!

> Will