From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
To: Kalesh Singh <kaleshsingh@google.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>,
	lsf-pc@lists.linux-foundation.org,
	"open list:MEMORY MANAGEMENT" <linux-mm@kvack.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	Suren Baghdasaryan <surenb@google.com>,
	David Hildenbrand <david@redhat.com>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Juan Yescas <jyescas@google.com>,
	android-mm <android-mm@google.com>,
	Matthew Wilcox <willy@infradead.org>,
	Vlastimil Babka <vbabka@suse.cz>, Michal Hocko <mhocko@suse.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Nhat Pham <nphamcs@gmail.com>
Subject: Re: [LSF/MM/BPF TOPIC] Optimizing Page Cache Readahead Behavior
Date: Sun, 23 Feb 2025 09:30:57 +0000
Message-ID: <31e946de-b8d5-4681-b2ac-006360895a87@lucifer.local>
In-Reply-To: <CAC_TJvepQjR03qa-9C+kL4Or4COUFjZevv+-0gTUFYgNdquq-Q@mail.gmail.com>

On Sat, Feb 22, 2025 at 09:36:48PM -0800, Kalesh Singh wrote:
> On Sat, Feb 22, 2025 at 10:03 AM Kent Overstreet
> <kent.overstreet@linux.dev> wrote:
> >
> > On Fri, Feb 21, 2025 at 01:13:15PM -0800, Kalesh Singh wrote:
> > > Hi organizers of LSF/MM,
> > >
> > > I realize this is a late submission, but I was hoping there might
> > > still be a chance to have this topic considered for discussion.
> > >
> > > Problem Statement
> > > ===============
> > >
> > > Readahead can result in unnecessary page cache pollution for mapped
> > > regions that are never accessed. Current mechanisms to disable
> > > readahead lack granularity and rather operate at the file or VMA
> > > level. This proposal seeks to initiate discussion at LSFMM to explore
> > > potential solutions for optimizing page cache/readahead behavior.
> > >
> > >
> > > Background
> > > =========
> > >
> > > The read-ahead heuristics on file-backed memory mappings can
> > > inadvertently populate the page cache with pages corresponding to
> > > regions that user-space processes are known never to access, e.g. ELF
> > > LOAD segment padding regions. While these pages are ultimately
> > > reclaimable, their presence precipitates unnecessary I/O operations,
> > > particularly when a substantial quantity of such regions exists.
> > >
> > > Although the underlying file can be made sparse in these regions to
> > > mitigate I/O, readahead will still allocate discrete zero pages when
> > > populating the page cache within these ranges. These pages, while
> > > subject to reclaim, introduce additional churn to the LRU. This
> > > reclaim overhead is further exacerbated in filesystems that support
> > > "fault-around" semantics, that can populate the surrounding pages’
> > > PTEs if found present in the page cache.

One note - if you use guard regions, fault-around won't be performed on
them ;)
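
For completeness, installing guard regions over e.g. the ELF padding ranges
is just an madvise() call - a minimal userspace sketch, assuming a kernel
new enough to expose MADV_GUARD_INSTALL (the fallback define mirrors the
value used when the flag was introduced - check your uapi headers):

  #include <sys/mman.h>

  #ifndef MADV_GUARD_INSTALL
  #define MADV_GUARD_INSTALL 102	/* assumed value at introduction */
  #endif

  /*
   * Mark a padding range as a guard region: faults there now SIGSEGV rather
   * than populating pages, and fault-around skips it.
   */
  static int guard_padding(void *pad_start, size_t pad_len)
  {
          return madvise(pad_start, pad_len, MADV_GUARD_INSTALL);
  }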

It seems strange to me that sparse regions would place duplicate zeroed
pages in the page cache...

> > >
> > > While the memory impact may be negligible for large files containing a
> > > limited number of sparse regions, it becomes appreciable for many
> > > small mappings characterized by numerous holes. This scenario can
> > > arise from efforts to minimize vm_area_struct slab memory footprint.

Presumably we're most concerned with _synchronous_ readahead here? Because
once you establish PG_readahead markers to trigger subsequent asynchronous
readahead, I don't think you can retain control. I go into that more below.

> > >
> > > Limitations of Existing Mechanisms
> > > ===========================
> > >
> > > fadvise(..., POSIX_FADV_RANDOM, ...): disables read-ahead for the
> > > entire file, rather than specific sub-regions. The offset and length
> > > parameters primarily serve the POSIX_FADV_WILLNEED [1] and
> > > POSIX_FADV_DONTNEED [2] cases.
> > >
> > > madvise(..., MADV_RANDOM, ...): Similarly, this applies to the entire
> > > VMA, rather than specific sub-regions. [3]
> > >
> > > Guard Regions: While guard regions for file-backed VMAs circumvent
> > > fault-around concerns, the fundamental issue of unnecessary page cache
> > > population persists. [4]

Note, not for fault-around. But yes for readahead, unavoidably, as there is no
metadata at the VMA level (intentionally).
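
For reference, the existing knobs both operate at whole-file/whole-VMA
granularity - a minimal sketch (fd/addr/len assumed set up, error handling
omitted):

  #include <fcntl.h>	/* posix_fadvise() */
  #include <sys/mman.h>	/* madvise() */

  /* Hints random access for the *entire* open file - readahead is
   * effectively disabled for all of it. */
  posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);

  /* Likewise, but for the *entire* VMA backing [addr, addr + len). */
  madvise(addr, len, MADV_RANDOM);

i.e. as you say, the offset/length arguments don't buy you sub-file
granularity for the _RANDOM hint.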

> >
> Hi Kent. Thanks for taking a look at this.
>
> > What if we introduced something like
> >
> > madvise(..., MADV_READAHEAD_BOUNDARY, offset)
> >
> > Would that be sufficient? And would a single readahead boundary offset
> > suffice?
>
> I like the idea of having boundaries. In this particular example the
> single boundary suffices, though I think we’ll need to support
> multiple (see below).
>
> One requirement that we’d like to meet is that the solution doesn’t
> cause VMA splits, to avoid additional slab usage, so perhaps fadvise()
> is better suited to this?

+1 to not causing VMA splits, but presumably you'd madvise() the whole VMA
anyway to adopt this boundary mode?

But if you're trying to do something sub-VMA, I'm not sure there's any way
for you to do this without splitting the VMA?

You end up in the same situation as with guard regions, which is - how do
we encode this information in such a way as to _not_ require VMA splitting?
For guard regions the answer is 'we encode it in the page tables, and
modify _fault_ behaviour'.

Obviously that won't work here, so you really have nowhere else to put it.

While readahead state is stored in struct file (->f_ra) [which is somewhat
iffy on a few levels but still], fundamentally that state is per-file, not
per-VMA, so for asynchronous readahead there is nowhere VMA-scoped to hang
a restriction like this.
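
For context, that per-file state is roughly the following (paraphrased from
include/linux/fs.h - exact fields vary by kernel version); note that nothing
in it knows about any particular VMA:

  struct file_ra_state {
          pgoff_t start;			/* where readahead started */
          unsigned int size;		/* # of readahead pages */
          unsigned int async_size;	/* async readahead trigger distance */
          unsigned int ra_pages;		/* maximum readahead window */
          unsigned int mmap_miss;		/* cache miss stat for mmap accesses */
          loff_t prev_pos;		/* last read() position */
  };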

>
> Another behavior of “mmap readahead” is that it doesn’t really respect
> VMA (start, end) boundaries:

Right, but doesn't readahead strictly belong to the file/folios rather than
any specific mapping?

Potentially fine for synchronous readahead, as you could say - OK, we're
major faulting, so only bring in pages up to the VMA boundary. But once you
plant PG_readahead markers to trigger asynchronous readahead on minor
faults and you're into filemap_readahead(), you lose all of this context.

And is it really fair if you have multiple mappings as well as potentially
read() operations on a file?

I'm not sure how feasible it is to restrict anything beyond the _initial
synchronous_ readahead, and I think you could only do that with VMA
metadata - so you'd split the VMA, and wouldn't that defeat the purpose
somewhat?
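
To make that concrete, restricting the initial synchronous path would
amount to something like the below - purely an illustrative sketch with
made-up names, not the actual mm/filemap.c code (the real logic lives
around do_sync_mmap_readahead() and does not clamp like this today):

  /*
   * Illustrative only: clamp the synchronous mmap readahead window so it
   * never extends past the faulting VMA.
   */
  static unsigned long ra_pages_clamped(struct vm_area_struct *vma,
  					struct file_ra_state *ra,
  					pgoff_t fault_pgoff)
  {
  	pgoff_t vma_last_pgoff = vma->vm_pgoff +
  		((vma->vm_end - vma->vm_start) >> PAGE_SHIFT);
  	unsigned long nr = ra->ra_pages;

  	if (fault_pgoff + nr > vma_last_pgoff)
  		nr = vma_last_pgoff - fault_pgoff;

  	return nr;
  }

And even that only helps the initial major fault - once the async markers
are planted you're back to per-file state.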

>
> The below demonstrates readahead past the end of the mapped region of the file:
>
> sudo sync && sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches' &&
> ./pollute_page_cache.sh
>
> Creating sparse file of size 25 pages
> Apparent Size: 100K
> Real Size: 0
> Number cached pages: 0
> Reading first 5 pages via mmap...
> Mapping and reading pages: [0, 6) of file 'myfile.txt'
> Number cached pages: 25
>
> Similarly the readahead can bring in pages before the start of the
> mapped region. I believe this is due to mmap “read-around” [6]:
>
> sudo sync && sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches' &&
> ./pollute_page_cache.sh
>
> Creating sparse file of size 25 pages
> Apparent Size: 100K
> Real Size: 0
> Number cached pages: 0
> Reading last 5 pages via mmap...
> Mapping and reading pages: [20, 25) of file 'myfile.txt'
> Number cached pages: 25
>
> I’m not sure what the historical use cases for readahead past the VMA
> boundaries are; but at least in some scenarios this behavior is not
> desirable. For instance, many apps mmap uncompressed ELF files
> directly from a page-aligned offset within a zipped APK as a
> space-saving and security feature. The readahead and read-around
> behaviors lead to unrelated resources from the zipped APK being
> populated in the page cache. I think in this case we’ll need to have
> more than a single
> boundary per file.
>
> A somewhat related but separate issue is that currently distinct pages
> are allocated in the page cache when reading sparse file holes. I
> think at least in the case of reading this should be avoidable.

This does seem like something that could be improved; it seems very strange
that we do this, though.
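
For anyone wanting to reproduce this without the script - counting cached
pages boils down to mmap() + mincore(). A rough sketch of what I assume
pollute_page_cache.sh is doing under the hood, so treat the details as
illustrative:

  #include <stdlib.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /* Count how many pages of the first 'len' bytes of 'fd' are resident in
   * the page cache. */
  static long count_cached_pages(int fd, size_t len)
  {
          long page_size = sysconf(_SC_PAGESIZE);
          size_t nr_pages = (len + page_size - 1) / page_size;
          unsigned char *vec = calloc(nr_pages, 1);
          void *map;
          long cached = 0;

          if (!vec)
                  return -1;

          map = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
          if (map == MAP_FAILED) {
                  free(vec);
                  return -1;
          }

          if (mincore(map, len, vec) == 0)
                  for (size_t i = 0; i < nr_pages; i++)
                          cached += vec[i] & 1;

          munmap(map, len);
          free(vec);
          return cached;
  }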

>
> sudo sync && sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches' &&
> ./pollute_page_cache.sh
>
> Creating sparse file of size 1GB
> Apparent Size: 977M
> Real Size: 0
> Number cached pages: 0
> Meminfo Cached:          9078768 kB
> Reading 1GB of holes...
> Number cached pages: 250000
> Meminfo Cached:         10117324 kB
>
> (10117324-9078768)/4 = 259639 = ~250000 pages # (global counter = some noise)
>
> --Kalesh

