Date: Fri, 9 May 2025 14:52:24 +0100
From: Will Deacon
To: Ryan Roberts
Cc: Andrew Morton, "Matthew Wilcox (Oracle)", Alexander Viro,
	Christian Brauner, Jan Kara, David Hildenbrand, Dave Chinner,
	Catalin Marinas, Kalesh Singh, Zi Yan,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH v4 5/5] mm/filemap: Allow arch to request folio size for exec memory
Message-ID: <20250509135223.GB5707@willie-the-truck>
References: <20250430145920.3748738-1-ryan.roberts@arm.com>
 <20250430145920.3748738-6-ryan.roberts@arm.com>
In-Reply-To: <20250430145920.3748738-6-ryan.roberts@arm.com>

On Wed, Apr 30, 2025 at 03:59:18PM +0100, Ryan Roberts wrote:
> Change the readahead config so that if it is being requested for an
> executable mapping, do a synchronous read into a set of folios with an
> arch-specified order and in a naturally aligned manner. We no longer
> center the read on the faulting page but simply align it down to the
> previous natural boundary. Additionally, we don't bother with an
> asynchronous part.
>
> On arm64 if memory is physically contiguous and naturally aligned to the
> "contpte" size, we can use contpte mappings, which improves utilization
> of the TLB. When paired with the "multi-size THP" feature, this works
> well to reduce dTLB pressure. However iTLB pressure is still high due to
> executable mappings having a low likelihood of being in the required
> folio size and mapping alignment, even when the filesystem supports
> readahead into large folios (e.g. XFS).
>
> The reason for the low likelihood is that the current readahead
> algorithm starts with an order-0 folio and increases the folio order by
> 2 every time the readahead mark is hit. But most executable memory tends
> to be accessed randomly and so the readahead mark is rarely hit and most
> executable folios remain order-0.
>
> So let's special-case the read(ahead) logic for executable mappings. The
> trade-off is performance improvement (due to more efficient storage of
> the translations in iTLB) vs potential for making reclaim more difficult
> (due to the folios being larger so if a part of the folio is hot the
> whole thing is considered hot). But executable memory is a small portion
> of the overall system memory so I doubt this will even register from a
> reclaim perspective.
>
> I've chosen 64K folio size for arm64 which benefits both the 4K and 16K
> base page size configs. Crucially the same amount of data is still read
> (usually 128K) so I'm not expecting any read amplification issues. I
> don't anticipate any write amplification because text is always RO.
>
> Note that the text region of an ELF file could be populated into the
> page cache for other reasons than taking a fault in a mmapped area. The
> most common case is due to the loader read()ing the header which can be
> shared with the beginning of text. So some text will still remain in
> small folios, but this simple, best effort change provides good
> performance improvements as is.
>
> Confine this special-case approach to the bounds of the VMA. This
> prevents wasting memory for any padding that might exist in the file
> between sections. Previously the padding would have been contained in
> order-0 folios and would be easy to reclaim. But now it would be part of
> a larger folio so more difficult to reclaim. Solve this by simply not
> reading it into memory in the first place.
>
> Benchmarking
> ============
> TODO: NUMBERS ARE FOR V3 OF SERIES. NEED TO RERUN FOR THIS VERSION.
>
> The below shows nginx and redis benchmarks on Ampere Altra arm64 system.
>
> First, confirmation that this patch causes more text to be contained in
> 64K folios:
>
> | File-backed folios     | system boot     | nginx           | redis           |
> | by size as percentage  |-----------------|-----------------|-----------------|
> | of all mapped text mem | before | after  | before | after  | before | after  |
> |========================|========|========|========|========|========|========|
> | base-page-4kB          |    26% |     9% |    27% |     6% |    21% |     5% |
> | thp-aligned-8kB        |     4% |     2% |     3% |     0% |     4% |     1% |
> | thp-aligned-16kB       |    57% |    21% |    57% |     6% |    54% |    10% |
> | thp-aligned-32kB       |     4% |     1% |     4% |     1% |     3% |     1% |
> | thp-aligned-64kB       |     7% |    65% |     8% |    85% |     9% |    72% |
> | thp-aligned-2048kB     |     0% |     0% |     0% |     0% |     7% |     8% |
> | thp-unaligned-16kB     |     1% |     1% |     1% |     1% |     1% |     1% |
> | thp-unaligned-32kB     |     0% |     0% |     0% |     0% |     0% |     0% |
> | thp-unaligned-64kB     |     0% |     0% |     0% |     1% |     0% |     1% |
> | thp-partial            |     1% |     1% |     0% |     0% |     1% |     1% |
> |------------------------|--------|--------|--------|--------|--------|--------|
> | cont-aligned-64kB      |     7% |    65% |     8% |    85% |    16% |    80% |
>
> The above shows that for both workloads (each isolated with cgroups) as
> well as the general system state after boot, the amount of text backed
> by 4K and 16K folios reduces and the amount backed by 64K folios
> increases significantly. And the amount of text that is contpte-mapped
> significantly increases (see last row).
>
> And this is reflected in performance improvement:
>
> | Benchmark                                     | Improvement          |
> +===============================================+======================+
> | pts/nginx (200 connections)                   | 8.96%                |
> | pts/nginx (1000 connections)                  | 6.80%                |
> +-----------------------------------------------+----------------------+
> | pts/redis (LPOP, 50 connections)              | 5.07%                |
> | pts/redis (LPUSH, 50 connections)             | 3.68%                |
>
> Signed-off-by: Ryan Roberts
> ---
>  arch/arm64/include/asm/pgtable.h |  8 +++++++
>  include/linux/pgtable.h          | 11 +++++++++
>  mm/filemap.c                     | 40 ++++++++++++++++++++++++++------
>  3 files changed, 52 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 2a77f11b78d5..9eb35af0d3cf 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -1537,6 +1537,14 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
>   */
>  #define arch_wants_old_prefaulted_pte	cpu_has_hw_af
>
> +/*
> + * Request exec memory is read into pagecache in at least 64K folios. This size
> + * can be contpte-mapped when 4K base pages are in use (16 pages into 1 iTLB
> + * entry), and HPA can coalesce it (4 pages into 1 TLB entry) when 16K base
> + * pages are in use.
> + */
> +#define exec_folio_order()	ilog2(SZ_64K >> PAGE_SHIFT)
> +
>  static inline bool pud_sect_supported(void)
>  {
>  	return PAGE_SIZE == SZ_4K;
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index b50447ef1c92..1dd539c49f90 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -456,6 +456,17 @@ static inline bool arch_has_hw_pte_young(void)
>  }
>  #endif
>
> +#ifndef exec_folio_order
> +/*
> + * Returns preferred minimum folio order for executable file-backed memory. Must
> + * be in range [0, PMD_ORDER). Default to order-0.
> + */
> +static inline unsigned int exec_folio_order(void)
> +{
> +	return 0;
> +}
> +#endif
> +
>  #ifndef arch_check_zapped_pte
>  static inline void arch_check_zapped_pte(struct vm_area_struct *vma,
>  					 pte_t pte)
> diff --git a/mm/filemap.c b/mm/filemap.c
> index e61f374068d4..37fe4a55c00d 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -3252,14 +3252,40 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>  	if (mmap_miss > MMAP_LOTSAMISS)
>  		return fpin;
>
> -	/*
> -	 * mmap read-around
> -	 */
>  	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
> -	ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
> -	ra->size = ra->ra_pages;
> -	ra->async_size = ra->ra_pages / 4;
> -	ra->order = 0;
> +	if (vm_flags & VM_EXEC) {
> +		/*
> +		 * Allow arch to request a preferred minimum folio order for
> +		 * executable memory. This can often be beneficial to
> +		 * performance if (e.g.) arm64 can contpte-map the folio.
> +		 * Executable memory rarely benefits from readahead, due to its
> +		 * random access nature, so set async_size to 0.

In light of this observation (about randomness of instruction fetch), do
you think it's worth ignoring VM_RAND_READ for VM_EXEC?

Either way, I was looking at this because it touches arm64 and it looks
fine to me:

Acked-by: Will Deacon

Will
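[Editor's illustration, not part of the original thread: a small stand-alone C
sketch of the order arithmetic the quoted commit message relies on. The
constants and names below are assumptions for the example only; with 4K base
pages a 64K folio is order ilog2(64K / 4K) = 4, and the readahead start is
aligned down to that boundary instead of being centred on the faulting page.]

/*
 * Stand-alone illustration (not the kernel implementation) of the folio
 * order arithmetic described in the quoted commit message.
 */
#include <stdio.h>

#define PAGE_SHIFT	12			/* 4K base pages assumed */
#define SZ_64K		(64UL * 1024)

static unsigned int exec_folio_order_example(void)
{
	unsigned long pages = SZ_64K >> PAGE_SHIFT;	/* 16 pages per folio */
	unsigned int order = 0;

	while (pages > 1) {				/* ilog2(16) = 4 */
		pages >>= 1;
		order++;
	}
	return order;
}

int main(void)
{
	unsigned int order = exec_folio_order_example();
	unsigned long nr_pages = 1UL << order;
	unsigned long fault_pgoff = 37;			/* example faulting page index */
	unsigned long start = fault_pgoff & ~(nr_pages - 1);	/* align down, not centre */

	printf("order=%u, folio=%lu pages, fault at pgoff %lu reads from pgoff %lu\n",
	       order, nr_pages, fault_pgoff, start);
	return 0;
}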