From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 19 Jun 2025 12:07:05 +0100
Subject: Re: [PATCH v5 5/5] mm/filemap: Allow arch to request folio size for
 exec memory
To: Andrew Morton, "Matthew Wilcox (Oracle)", Alexander Viro,
 Christian Brauner, Jan Kara, David Hildenbrand, Dave Chinner,
 Catalin Marinas, Will Deacon, Kalesh Singh, Zi Yan
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
References: <20250609092729.274960-1-ryan.roberts@arm.com>
 <20250609092729.274960-6-ryan.roberts@arm.com>
From: Ryan Roberts <ryan.roberts@arm.com>
In-Reply-To: <20250609092729.274960-6-ryan.roberts@arm.com>

Hi Andrew,

On 09/06/2025 10:27, Ryan Roberts wrote:
> Change the readahead code so that if it is being requested for an
> executable mapping, do a synchronous read into a set of folios with an
> arch-specified order and in a naturally aligned manner. We no longer
> center the read on the faulting page but simply align it down to the
> previous natural boundary. Additionally, we don't bother with an
> asynchronous part.
>
> On arm64, if memory is physically contiguous and naturally aligned to
> the "contpte" size, we can use contpte mappings, which improves
> utilization of the TLB. When paired with the "multi-size THP" feature,
> this works well to reduce dTLB pressure. However, iTLB pressure is
> still high due to executable mappings having a low likelihood of being
> in the required folio size and mapping alignment, even when the
> filesystem supports readahead into large folios (e.g. XFS).
>
> The reason for the low likelihood is that the current readahead
> algorithm starts with an order-0 folio and increases the folio order by
> 2 every time the readahead mark is hit. But most executable memory
> tends to be accessed randomly, so the readahead mark is rarely hit and
> most executable folios remain order-0.
>
> So let's special-case the read(ahead) logic for executable mappings.
> The trade-off is performance improvement (due to more efficient storage
> of the translations in the iTLB) vs the potential for making reclaim
> more difficult (the folios are larger, so if part of a folio is hot the
> whole thing is considered hot). But executable memory is a small
> portion of overall system memory, so I doubt this will even register
> from a reclaim perspective.
>
> I've chosen a 64K folio size for arm64, which benefits both the 4K and
> 16K base page size configs. Crucially, the same amount of data is still
> read (usually 128K) so I'm not expecting any read amplification issues.
> I don't anticipate any write amplification because text is always RO.
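
To make the mechanism concrete: the series plumbs this through an arch
hook that the readahead path queries for executable mappings. Below is
a minimal sketch of the shape of that hook, assuming the
arch_exec_folio_order() naming used by this series and arm64's 64K
contpte granule; treat it as illustrative rather than the literal
hunks:

	/* Generic fallback (e.g. include/linux/pgtable.h). Assumed
	 * convention: a negative value means "no preference", so
	 * readahead keeps its default behaviour. */
	#ifndef arch_exec_folio_order
	static inline int arch_exec_folio_order(void)
	{
		return -1;
	}
	#endif

	/* arm64 override: prefer 64K folios so text can be contpte-mapped. */
	#define arch_exec_folio_order arch_exec_folio_order
	static inline int arch_exec_folio_order(void)
	{
		/* 64K is order 4 with 4K base pages, order 2 with 16K. */
		return ilog2(SZ_64K >> PAGE_SHIFT);
	}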
>
> Note that the text region of an ELF file could be populated into the
> page cache for reasons other than taking a fault in an mmapped area.
> The most common case is the loader read()ing the header, which can be
> shared with the beginning of text. So some text will still remain in
> small folios, but this simple, best-effort change provides good
> performance improvements as is.
>
> Confine this special-case approach to the bounds of the VMA. This
> prevents wasting memory for any padding that might exist in the file
> between sections. Previously the padding would have been contained in
> order-0 folios and would be easy to reclaim. But now it would be part
> of a larger folio, so more difficult to reclaim. Solve this by simply
> not reading it into memory in the first place.
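
Aside on "confine to the bounds of the VMA": this amounts to clamping
the readahead window before the read is issued. A rough sketch of that
clamping, reusing the ra->start/ra->size fields that also appear in the
fix further down; the local variable names here are assumptions, not
the exact code:

	/* Naturally align the window around the faulting index... */
	unsigned long nr = 1UL << order;	/* arch-preferred folio order */
	unsigned long start = round_down(vmf->pgoff, nr);
	unsigned long vma_lo = vma->vm_pgoff;
	unsigned long vma_hi = vma->vm_pgoff + vma_pages(vma);

	/* ...then clamp to the VMA so file padding is never read in. */
	ra->start = max(start, vma_lo);
	ra->size = min(start + ra->ra_pages, vma_hi) - ra->start;
	ra->async_size = 0;	/* synchronous read; no async part */
	ra->order = order;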
> Benchmarking
> ============
>
> The below shows pgbench and redis benchmarks on a Graviton3 arm64
> system.
>
> First, confirmation that this patch causes more text to be contained in
> 64K folios:
>
> +----------------------+---------------+---------------+---------------+
> | File-backed folios by|  system boot  |    pgbench    |     redis     |
> | size as percentage of+-------+-------+-------+-------+-------+-------+
> | all mapped text mem  |before | after |before | after |before | after |
> +======================+=======+=======+=======+=======+=======+=======+
> | base-page-4kB        |  78%  |  30%  |  78%  |  11%  |  73%  |  14%  |
> | thp-aligned-8kB      |   1%  |   0%  |   0%  |   0%  |   1%  |   0%  |
> | thp-aligned-16kB     |  17%  |   4%  |  17%  |   3%  |  20%  |   4%  |
> | thp-aligned-32kB     |   1%  |   1%  |   1%  |   2%  |   1%  |   1%  |
> | thp-aligned-64kB     |   3%  |  63%  |   3%  |  81%  |   4%  |  77%  |
> | thp-aligned-128kB    |   0%  |   1%  |   1%  |   1%  |   1%  |   2%  |
> | thp-unaligned-64kB   |   0%  |   0%  |   0%  |   1%  |   0%  |   1%  |
> | thp-unaligned-128kB  |   0%  |   1%  |   0%  |   0%  |   0%  |   0%  |
> | thp-partial          |   0%  |   0%  |   0%  |   1%  |   0%  |   1%  |
> +----------------------+-------+-------+-------+-------+-------+-------+
> | cont-aligned-64kB    |   4%  |  65%  |   4%  |  83%  |   6%  |  79%  |
> +----------------------+-------+-------+-------+-------+-------+-------+
>
> The above shows that for both workloads (each isolated with cgroups),
> as well as the general system state after boot, the amount of text
> backed by 4K and 16K folios reduces and the amount backed by 64K folios
> increases significantly. And the amount of text that is contpte-mapped
> significantly increases (see last row).
>
> And this is reflected in performance improvement. "(I)" indicates a
> statistically significant improvement. Note that TPS and Reqs/sec are
> rates, so bigger is better; ms is a time, so smaller is better:
>
> +-------------+-------------------------------------------+-------------+
> | Benchmark   | Result Class                              | Improvement |
> +=============+===========================================+=============+
> | pts/pgbench | Scale: 1 Clients: 1 RO (TPS)              | (I) 3.47%   |
> |             | Scale: 1 Clients: 1 RO - Latency (ms)     | -2.88%      |
> |             | Scale: 1 Clients: 250 RO (TPS)            | (I) 5.02%   |
> |             | Scale: 1 Clients: 250 RO - Latency (ms)   | (I) -4.79%  |
> |             | Scale: 1 Clients: 1000 RO (TPS)           | (I) 6.16%   |
> |             | Scale: 1 Clients: 1000 RO - Latency (ms)  | (I) -5.82%  |
> |             | Scale: 100 Clients: 1 RO (TPS)            | 2.51%       |
> |             | Scale: 100 Clients: 1 RO - Latency (ms)   | -3.51%      |
> |             | Scale: 100 Clients: 250 RO (TPS)          | (I) 4.75%   |
> |             | Scale: 100 Clients: 250 RO - Latency (ms) | (I) -4.44%  |
> |             | Scale: 100 Clients: 1000 RO (TPS)         | (I) 6.34%   |
> |             | Scale: 100 Clients: 1000 RO - Latency (ms)| (I) -5.95%  |
> +-------------+-------------------------------------------+-------------+
> | pts/redis   | Test: GET Connections: 50 (Reqs/sec)      | (I) 3.20%   |
> |             | Test: GET Connections: 1000 (Reqs/sec)    | (I) 2.55%   |
> |             | Test: LPOP Connections: 50 (Reqs/sec)     | (I) 4.59%   |
> |             | Test: LPOP Connections: 1000 (Reqs/sec)   | (I) 4.81%   |
> |             | Test: LPUSH Connections: 50 (Reqs/sec)    | (I) 5.31%   |
> |             | Test: LPUSH Connections: 1000 (Reqs/sec)  | (I) 4.36%   |
> |             | Test: SADD Connections: 50 (Reqs/sec)     | (I) 2.64%   |
> |             | Test: SADD Connections: 1000 (Reqs/sec)   | (I) 4.15%   |
> |             | Test: SET Connections: 50 (Reqs/sec)      | (I) 3.11%   |
> |             | Test: SET Connections: 1000 (Reqs/sec)    | (I) 3.36%   |
> +-------------+-------------------------------------------+-------------+
>
> Reviewed-by: Jan Kara
> Acked-by: Will Deacon
> Signed-off-by: Ryan Roberts

A use-after-free issue was reported against this patch, which I believe
is still in mm-unstable? The problem is that I'm accessing the vma after
unlocking it, so the fix is to move the unlock to after the if/else.
Would you mind squashing this into the patch?

The report is here:
https://lore.kernel.org/linux-mm/hi6tsbuplmf6jcr44tqu6mdhtyebyqgsfif7okhnrzkcowpo4d@agoyrl4ozyth/

---8<---
diff --git a/mm/filemap.c b/mm/filemap.c
index 93fbc2ef232a..eaf853d6b719 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3265,7 +3265,6 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 	if (mmap_miss > MMAP_LOTSAMISS)
 		return fpin;
 
-	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
 	if (vm_flags & VM_EXEC) {
 		/*
 		 * Allow arch to request a preferred minimum folio order for
@@ -3299,6 +3298,8 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 		ra->async_size = ra->ra_pages / 4;
 		ra->order = 0;
 	}
+
+	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
 	ractl._index = ra->start;
 	page_cache_ra_order(&ractl, ra);
 	return fpin;
---8<---

Thanks,
Ryan
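P.S. For anyone eyeballing the squash, the tail of
do_sync_mmap_readahead() ends up reading roughly as follows
(reconstructed purely from the context lines in the diff above; the
interior of the if/else is elided):

	...
		ra->async_size = ra->ra_pages / 4;
		ra->order = 0;
	}

	/* Last access to the vma is above, so unlocking here is safe. */
	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
	ractl._index = ra->start;
	page_cache_ra_order(&ractl, ra);
	return fpin;

i.e. the vma is only dereferenced while the mmap lock is still held,
and the unlock now happens immediately before the I/O is kicked off.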