From: Ryan Roberts <ryan.roberts@arm.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Kalesh Singh <kaleshsingh@google.com>,
Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@redhat.com>,
Dave Chinner <david@fromorbit.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Subject: Re: [PATCH v3] mm/filemap: Allow arch to request folio size for exec memory
Date: Sat, 29 Mar 2025 10:07:59 +0000 [thread overview]
Message-ID: <ee11907a-5bd7-44ec-844c-8f10ff406b46@arm.com> (raw)
In-Reply-To: <Z-b1FmZ5nHzh5huL@casper.infradead.org>
On 28/03/2025 15:14, Matthew Wilcox wrote:
> On Thu, Mar 27, 2025 at 04:23:14PM -0400, Ryan Roberts wrote:
>> + Kalesh
>>
>> On 27/03/2025 12:44, Matthew Wilcox wrote:
>>> On Thu, Mar 27, 2025 at 04:06:58PM +0000, Ryan Roberts wrote:
>>>> So let's special-case the read(ahead) logic for executable mappings. The
>>>> trade-off is performance improvement (due to more efficient storage of
>>>> the translations in iTLB) vs potential read amplification (due to
>>>> reading too much data around the fault which won't be used), and the
>>>> latter is independent of base page size. I've chosen 64K folio size for
>>>> arm64 which benefits both the 4K and 16K base page size configs and
>>>> shouldn't lead to any read amplification in practice since the old
>>>> read-around path was (usually) reading blocks of 128K. I don't
>>>> anticipate any write amplification because text is always RO.
>>>
>>> Is there not also the potential for wasted memory due to ELF alignment?
>>
>> I think this is an orthogonal issue? My change isn't making that any worse.
>
> To a certain extent, it is. If readahead was doing order-2 allocations
> before and is now doing order-4, you're tying up 0-12 extra pages which
> happen to be filled with zeroes due to being used to cache the contents
> of a hole.
Well, we would still have read those pages in before; nothing has changed
there. But I guess your point is more about reclaim? Because those pages are
now contained in a larger folio, if any part of the folio is in use then all of
it remains active. Whereas before, if a folio was fully contained in the pad
area and never accessed, it would fall down the LRU quickly and get reclaimed.
>
>>> Kalesh talked about it in the MM BOF at the same time that Ted and I
>>> were discussing it in the FS BOF. Some coordination required (like
>>> maybe Kalesh could have mentioned it to me rather than assuming I'd be
>>> there?)
>>
>> I was at Kalesh's talk. David H suggested that a potential solution might be for
>> readahead to ask the fs where the next hole is and then truncate readahead to
>> avoid reading the hole. Given it's padding, nothing should directly fault it in
>> so it never ends up in the page cache. Not sure if you discussed anything like
>> that when you were talking in parallel?
>
> Ted said that he and Kalesh had talked about that solution. I have a
> bolder solution in mind which lifts the ext4 extent cache to the
> VFS inode so that the readahead code can interrogate it.
>
>> Anyway, I'm not sure if you're suggesting these changes need to be considered as
>> one somehow or if you're just mentioning it given it is loosely related? My view
>> is that this change is an improvement independently and could go in much sooner.
>
> This is not a reason to delay this patch. It's just a downside which
> should be mentioned in the commit message.
Fair point; I'll add a paragraph about the potential reclaim issue.
>
>>>> +static inline int arch_exec_folio_order(void)
>>>> +{
>>>> + return -1;
>>>> +}
>>>
>>> This feels a bit fragile. I often expect to be able to store an order
>>> in an unsigned int. Why not return 0 instead?
>>
>> Well 0 is a valid order, no? I think we have had the "is order signed or
>> unsigned" argument before. get_order() returns a signed int :)
>
> But why not always return a valid order? I don't think we need a
> sentinel. The default value can be 0 to do what we do today.
>
But a single order-0 folio is not what we do today. Note that my change as
currently implemented requests to read a *single* folio of the specified order.
And note that we only get the order we request from page_cache_ra_order()
because the read size is limited to a single folio. If the size were bigger,
that function would actually expand the requested order by 2 (although the
parameter is called "new_order", it's actually interpreted as "old_order").
The current behaviour is effectively to read 128K in order-2 folios (with
smaller folios for boundary alignment).
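(For reference, the shape of the exec special-case as implemented is roughly
the below. This is a sketch from memory rather than the literal v3 hunk, so
treat the exact ra/ractl manipulation as an assumption:)

        /* In do_sync_mmap_readahead(), ahead of the normal read-around path */
        if (vm_flags & VM_EXEC) {
                int order = arch_exec_folio_order();

                if (order >= 0) {
                        fpin = maybe_unlock_mmap_for_io(vmf, fpin);
                        ra->size = 1UL << order;        /* one folio's worth of pages */
                        ra->async_size = 0;
                        /* align the start so a single large folio can be used */
                        ractl._index &= ~((unsigned long)ra->size - 1);
                        page_cache_ra_order(&ractl, ra, order);
                        return fpin;
                }
        }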
So I see a few options:
- Continue to allow non-opted-in arches to use the existing behaviour; in this
case we need a sentinel. This could be -1, UINT_MAX or 0. But in the latter case
you are preventing an opted-in arch from specifying that it wants order-0 -
its meaning is overridden. (There is a sketch of what an opted-in arch would
provide further down.)
- Force all arches to use the new approach with a default folio order (and
readahead size) of order-0. (The default can be overridden per-arch). Personally
I'd be nervous about making this change.
- Decouple the read size from the folio order; continue to use the 128K
read size and only allow opting in to a specific folio order. The default order
would be 2 (or 0). We would need to fix page_cache_async_ra() to call
page_cache_ra_order() with "order + 2" (the new order) and fix
page_cache_ra_order() to treat its order parameter as the *new* order.
Perhaps we should do those fixes anyway (and then actually start with a folio
order of 0 - which I think you said in the past was your original intention?).
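To make the first option concrete, an opted-in arch override would look
something like the below (a hypothetical sketch: the header location and the
define-to-override pattern are my assumptions; the 64K target is what the
commit message picks for arm64):

/* e.g. in an arm64 header such as arch/arm64/include/asm/pgtable.h (sketch) */
#define arch_exec_folio_order arch_exec_folio_order
static inline int arch_exec_folio_order(void)
{
        /* 64K exec folios: order 4 with 4K base pages, order 2 with 16K */
        return ilog2(SZ_64K >> PAGE_SHIFT);
}

Note that with 64K base pages this evaluates to 0, which illustrates why 0
doesn't work well as the "no preference" sentinel.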
Thanks,
Ryan