From: Barry Song <21cnbao@gmail.com>
Date: Fri, 12 Jan 2024 18:23:42 +1300
Subject: Re: [RFC PATCH v1] mm/filemap: Allow arch to request folio size for exec memory
To: Ryan Roberts
Cc: Catalin Marinas, Will Deacon, Mark Rutland,
	"Matthew Wilcox (Oracle)", Andrew Morton, David Hildenbrand,
	John Hubbard, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
In-Reply-To: <20240111154106.3692206-1-ryan.roberts@arm.com>
References: <20240111154106.3692206-1-ryan.roberts@arm.com>

On Fri, Jan 12, 2024 at 4:41 AM Ryan Roberts wrote:
>
> Change the readahead config so that if it is being requested for an
> executable mapping, do a synchronous read of an arch-specified size in a
> naturally aligned manner.
>
> On arm64, if memory is physically contiguous and naturally aligned to the
> "contpte" size, we can use contpte mappings, which improves utilization
> of the TLB. When paired with the "multi-size THP" changes, this works
> well to reduce dTLB pressure. However, iTLB pressure is still high due to
> executable mappings having a low likelihood of being in the required
> folio size and mapping alignment, even when the filesystem supports
> readahead into large folios (e.g. XFS).
>
> The reason for the low likelihood is that the current readahead algorithm
> starts with an order-2 folio and increases the folio order by 2 every
> time the readahead mark is hit. But most executable memory is faulted in
> fairly randomly, so the readahead mark is rarely hit and most
> executable folios remain order-2. This is observed empirically and
> confirmed in discussion with a GNU linker expert; in general, the
> linker does nothing to group temporally accessed text together
> spatially. Additionally, with the current read-around approach there are
> no alignment guarantees between the file and folio. This is
> insufficient for arm64's contpte mapping requirement (order-4 for 4K
> base pages).
>
> So it seems reasonable to special-case the read(ahead) logic for
> executable mappings. The trade-off is performance improvement (due to
> more efficient storage of the translations in the iTLB) vs potential read
> amplification (due to reading too much data around the fault which won't
> be used), and the latter is independent of base page size. I've chosen a
> 64K folio size for arm64, which benefits both the 4K and 16K base page
> size configs and shouldn't lead to any further read amplification, since
> the old read-around path was (usually) reading blocks of 128K (with the
> last 32K being async).
>
> Performance Benchmarking
> ------------------------
>
> The below shows kernel compilation and Speedometer JavaScript benchmarks
> on an Ampere Altra arm64 system. (The contpte patch series is applied in
> the baseline.)
>
> First, confirmation that this patch causes more memory to be contained
> in 64K folios (this is for all file-backed memory, so it includes
> non-executable memory too):
>
> | File-backed folios      | Speedometer     | Kernel Compile  |
> | by size as percentage   |-----------------|-----------------|
> | of all mapped file mem  | before | after  | before | after  |
> |=========================|========|========|========|========|
> |file-thp-aligned-16kB    |    45% |     9% |    46% |     7% |
> |file-thp-aligned-32kB    |     2% |     0% |     3% |     1% |
> |file-thp-aligned-64kB    |     3% |    63% |     5% |    80% |
> |file-thp-aligned-128kB   |    11% |    11% |     0% |     0% |
> |file-thp-unaligned-16kB  |     1% |     0% |     3% |     1% |
> |file-thp-unaligned-128kB |     1% |     0% |     0% |     0% |
> |file-thp-partial         |     0% |     0% |     0% |     0% |
> |-------------------------|--------|--------|--------|--------|
> |file-cont-aligned-64kB   |    16% |    75% |     5% |    80% |
>
> The above shows that for both use cases, the amount of file memory
> backed by 16K folios reduces and the amount backed by 64K folios
> increases significantly. And the amount of memory that is contpte-mapped
> significantly increases (last line).
>
> And this is reflected in performance improvement:
>
> Kernel Compilation (smaller is faster):
> | kernel   | real-time   | kern-time   | user-time   | peak memory   |
> |----------|-------------|-------------|-------------|---------------|
> | before   |        0.0% |        0.0% |        0.0% |          0.0% |
> | after    |       -1.6% |       -2.1% |       -1.7% |          0.0% |
>
> Speedometer (bigger is faster):
> | kernel   | runs_per_min   | peak memory   |
> |----------|----------------|---------------|
> | before   |           0.0% |          0.0% |
> | after    |           1.3% |          1.0% |
>
> Both benchmarks show a ~1.5% improvement once the patch is applied.

Hi Ryan,

You already had data for exec-cont-pte in the contpte series [1], and it
showed a 1-2% improvement.

Kernel Compilation with -j8 (negative is faster):

| kernel                    | real-time | kern-time | user-time |
|---------------------------|-----------|-----------|-----------|
| baseline                  |      0.0% |      0.0% |      0.0% |
| mTHP                      |     -4.6% |    -38.0% |     -0.4% |
| mTHP + contpte            |     -5.4% |    -37.7% |     -1.3% |
| mTHP + contpte + exefolio |     -7.4% |    -39.5% |     -3.5% |

Kernel Compilation with -j80 (negative is faster):

| kernel                    | real-time | kern-time | user-time |
|---------------------------|-----------|-----------|-----------|
| baseline                  |      0.0% |      0.0% |      0.0% |
| mTHP                      |     -4.9% |    -36.1% |     -0.2% |
| mTHP + contpte            |     -5.8% |    -36.0% |     -1.2% |
| mTHP + contpte + exefolio |     -6.8% |    -37.0% |     -3.1% |

Speedometer (positive is faster):

| kernel                    | runs_per_min |
|:--------------------------|--------------|
| baseline                  |         0.0% |
| mTHP                      |         1.5% |
| mTHP + contpte            |         3.7% |
| mTHP + contpte + exefolio |         4.9% |

Is the ~1.5% you are reporting now an extra improvement on top of the
mTHP + contpte + exefolio results in [1]?

[1] https://lore.kernel.org/linux-mm/20231218105100.172635-1-ryan.roberts@arm.com/

> Alternatives
> ------------
>
> I considered (and rejected for now - but I anticipate this patch will
> stimulate discussion around what the best approach is) alternative
> approaches:
>
>  - Expose a global user-controlled knob to set the preferred folio
>    size; this would move policy to user space and allow (e.g.) setting
>    it to PMD-size for even better iTLB utilization. But this would add
>    ABI, and I prefer to start with the simplest approach first. It also
>    has the downside that a change wouldn't apply to memory already in
>    the page cache that is in active use (e.g. libc), so we don't get the
>    same level of utilization as for something that is fixed from boot.
>
>  - Add a per-vma attribute to allow user space to specify the preferred
>    folio size for memory faulted from the range (we've talked about
>    such a control in the context of mTHP). The dynamic loader would
>    then be responsible for adding the annotations. Again, this feels
>    like something that could be added later if value was demonstrated.
>
>  - Enhance MADV_COLLAPSE to collapse to THP sizes less than PMD-size.
>    This would still require dynamic linker involvement, but would
>    additionally necessitate a copy, and all memory in the range would
>    be synchronously faulted in, adding to application load time. It
>    would work for filesystems that don't support large folios, though.
>
> Signed-off-by: Ryan Roberts
> ---
>
> Hi all,
>
> I originally concocted something similar to this, with Matthew's help, as a
> quick proof-of-concept hack. Since then I've tried a few different approaches
> but always came back to this as the simplest solution. I expect this will
> raise a few eyebrows, but given that it provides a real performance win, I
> hope we can converge on something that can be upstreamed.
>
> This depends on my contpte series to actually set the contiguous bit in the
> page table.
>
> Thanks,
> Ryan
>
>
>  arch/arm64/include/asm/pgtable.h | 12 ++++++++++++
>  include/linux/pgtable.h          | 12 ++++++++++++
>  mm/filemap.c                     | 19 +++++++++++++++++++
>  3 files changed, 43 insertions(+)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index f5bf059291c3..8f8f3f7eb8d8 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -1143,6 +1143,18 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
>   */
>  #define arch_wants_old_prefaulted_pte	cpu_has_hw_af
>
> +/*
> + * Request exec memory is read into pagecache in at least 64K folios. The
> + * trade-off here is performance improvement due to storing translations more
> + * efficiently in the iTLB vs the potential for read amplification due to reading
> + * data from disk that won't be used. The latter is independent of base page
> + * size, so we set a page-size independent block size of 64K. This size can be
> + * contpte-mapped when 4K base pages are in use (16 pages into 1 iTLB entry),
> + * and HPA can coalesce it (4 pages into 1 TLB entry) when 16K base pages are in
> + * use.
> + */
> +#define arch_wants_exec_folio_order(void) ilog2(SZ_64K >> PAGE_SHIFT)
> +
>  static inline bool pud_sect_supported(void)
>  {
>  	return PAGE_SIZE == SZ_4K;
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 170925379534..57090616d09c 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -428,6 +428,18 @@ static inline bool arch_has_hw_pte_young(void)
>  }
>  #endif
>
> +#ifndef arch_wants_exec_folio_order
> +/*
> + * Returns preferred minimum folio order for executable file-backed memory. Must
> + * be in range [0, PMD_ORDER]. Negative value implies that the HW has no
> + * preference and mm will not special-case executable memory in the pagecache.
> + */
> +static inline int arch_wants_exec_folio_order(void)
> +{
> +	return -1;
> +}
> +#endif
> +
>  #ifndef arch_check_zapped_pte
>  static inline void arch_check_zapped_pte(struct vm_area_struct *vma,
>  					 pte_t pte)
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 67ba56ecdd32..80a76d755534 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -3115,6 +3115,25 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>  	}
>  #endif
>
> +	/*
> +	 * Allow arch to request a preferred minimum folio order for executable
> +	 * memory. This can often be beneficial to performance if (e.g.) arm64
> +	 * can contpte-map the folio. Executable memory rarely benefits from
> +	 * read-ahead anyway, due to its random access nature.
> +	 */
> +	if (vm_flags & VM_EXEC) {
> +		int order = arch_wants_exec_folio_order();
> +
> +		if (order >= 0) {
> +			fpin = maybe_unlock_mmap_for_io(vmf, fpin);
> +			ra->size = 1UL << order;
> +			ra->async_size = 0;
> +			ractl._index &= ~((unsigned long)ra->size - 1);
> +			page_cache_ra_order(&ractl, ra, order);
> +			return fpin;
> +		}
> +	}
> +
>  	/* If we don't want any read-ahead, don't bother */
>  	if (vm_flags & VM_RAND_READ)
>  		return fpin;
> --
> 2.25.1
>

Thanks
barry
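
P.S. For anyone wanting to sanity-check the sizing/alignment arithmetic in
the new VM_EXEC branch, here is a minimal stand-alone user-space sketch.
It is not part of the patch; PAGE_SHIFT, the local ilog2() helper and the
example fault index are illustrative assumptions. With 4K base pages and
arm64's 64K preference, the order works out to 4, so each synchronous read
covers 16 pages and the start index is rounded down to a 16-page boundary,
which is the natural alignment contpte mapping needs:

#include <stdio.h>

#define PAGE_SHIFT	12		/* assumed: 4K base pages */
#define SZ_64K		(64UL * 1024)	/* arm64's preferred exec folio size */

/* local stand-in for the kernel's ilog2() on power-of-two inputs */
static unsigned int ilog2(unsigned long x)
{
	unsigned int r = 0;

	while (x >>= 1)
		r++;
	return r;
}

int main(void)
{
	/* mirrors arch_wants_exec_folio_order() on arm64: ilog2(64K / 4K) = 4 */
	unsigned int order = ilog2(SZ_64K >> PAGE_SHIFT);
	unsigned long size = 1UL << order;	/* ra->size: 16 pages */
	unsigned long index = 0x1234;		/* hypothetical faulting page index */

	/* mirrors "ractl._index &= ~((unsigned long)ra->size - 1)" */
	unsigned long start = index & ~(size - 1);

	printf("order=%u size=%lu pages read=[0x%lx..0x%lx]\n",
	       order, size, start, start + size - 1);
	/* prints: order=4 size=16 pages read=[0x1230..0x123f] */
	return 0;
}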