From: Kalesh Singh <kaleshsingh@google.com>
To: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>,
	Dave Chinner <david@fromorbit.com>,
	 Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	Jan Kara <jack@suse.cz>,
	 lsf-pc@lists.linux-foundation.org,
	 "open list:MEMORY MANAGEMENT" <linux-mm@kvack.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	 Suren Baghdasaryan <surenb@google.com>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	 Juan Yescas <jyescas@google.com>,
	android-mm <android-mm@google.com>,
	 Vlastimil Babka <vbabka@suse.cz>, Michal Hocko <mhocko@suse.com>,
	 "Cc: Android Kernel" <kernel-team@android.com>
Subject: Re: [Lsf-pc] [LSF/MM/BPF TOPIC] Optimizing Page Cache Readahead Behavior
Date: Tue, 1 Apr 2025 17:13:56 -0700
Message-ID: <CAC_TJveXmMFBuMh_S3RQc3p9Z6eqK+r=1yYfcquD7soDsgkGXg@mail.gmail.com>
In-Reply-To: <2dcaa0a6-c20d-4e57-80df-b288d2faa58d@redhat.com>

On Fri, Feb 28, 2025 at 1:07 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 27.02.25 23:12, Matthew Wilcox wrote:
> > On Tue, Feb 25, 2025 at 10:56:21AM +1100, Dave Chinner wrote:
> >>>  From the previous discussions that Matthew shared [7], it seems like
> >>> Dave proposed an alternative to moving the extents to the VFS layer to
> >>> invert the IO read path operations [8]. Maybe this is a more
> >>> approachable solution, since there is precedent for the same in the
> >>> write path?
> >>>
> >>> [7] https://lore.kernel.org/linux-fsdevel/Zs97qHI-wA1a53Mm@casper.infradead.org/
> >>> [8] https://lore.kernel.org/linux-fsdevel/ZtAPsMcc3IC1VaAF@dread.disaster.area/
> >>
> >> Yes, if we are going to optimise away redundant zeros being stored
> >> in the page cache over holes, we need to know where the holes in the
> >> file are before the page cache is populated.
> >
> > Well, you shot that down when I started trying to flesh it out:
> > https://lore.kernel.org/linux-fsdevel/Zs+2u3%2FUsoaUHuid@dread.disaster.area/
> >
> >> As for efficient hole tracking in the mapping tree, I suspect that
> >> we should be looking at using exceptional entries in the mapping
> >> tree for holes, not inserting multiple references to the zero folio.
> >> i.e. the important information for data storage optimisation is that
> >> the region covers a hole, not that it contains zeros.
> >
> > The xarray is very much optimised for storing power-of-two sized &
> > aligned objects.  It makes no sense to try to track extents using the
> > mapping tree.  Now, if we abandon the radix tree for the maple tree, we
> > could talk about storing zero extents in the same data structure.
> > But that's a big change with potentially significant downsides.
> > It's something I want to play with, but I'm a little busy right now.
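
For concreteness, a rough sketch of what such an exceptional entry
could look like with the existing xarray API (HOLE_ENTRY and both
helpers are invented for illustration; shadow entries already use
xa_mk_value() in a similar way):

/* assumes <linux/xarray.h> and <linux/pagemap.h> */
#define HOLE_ENTRY	xa_mk_value(0x1)

static void mapping_mark_hole(struct address_space *mapping, pgoff_t index)
{
	/* store a value entry instead of a (zero) folio */
	xa_store(&mapping->i_pages, index, HOLE_ENTRY, GFP_KERNEL);
}

static bool mapping_is_hole(struct address_space *mapping, pgoff_t index)
{
	void *entry = xa_load(&mapping->i_pages, index);

	/* value entries are never folio pointers */
	return xa_is_value(entry) && entry == HOLE_ENTRY;
}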
> >
> >> For buffered reads, all that is required when such an exceptional
> >> entry is returned is a memset of the user buffer. For buffered
> >> writes, we simply treat it like a normal folio allocating write and
> >> replace the exceptional entry with the allocated (and zeroed) folio.
> >
> > ... and unmap the zero page from any mappings.
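
To illustrate the buffered read side: on hitting such an entry there is
no folio to copy from, only the user buffer to zero (sketch only; the
fallback helper below is hypothetical):

static ssize_t read_hole_or_folio(struct address_space *mapping,
				  pgoff_t index, struct iov_iter *iter,
				  size_t bytes)
{
	void *entry = xa_load(&mapping->i_pages, index);

	/* a hole has no backing folio: just zero the user buffer */
	if (xa_is_value(entry))
		return iov_iter_zero(bytes, iter);

	/* normal path: look up the folio and copy it out (hypothetical) */
	return filemap_copy_folio_to_iter(mapping, index, iter, bytes);
}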
> >
> >> For read page faults, the zero page gets mapped (and maybe
> >> accounted) via the vma rather than the mapping tree entry. For write
> >> faults, a folio gets allocated and the exception entry replaced
> >> before we call into ->page_mkwrite().
> >>
> >> Invalidation simply removes the exceptional entries.
> >
> > ... and unmap the zero page from any mappings.
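
Roughly, with that unmap folded in (sketch; assumes the HOLE_ENTRY
marker from above and glosses over races with concurrent faults):

static void invalidate_hole_range(struct address_space *mapping,
				  pgoff_t start, pgoff_t end)
{
	pgoff_t index;

	/* drop only the exceptional hole entries, leave folios alone */
	for (index = start; index <= end; index++)
		xa_cmpxchg(&mapping->i_pages, index, HOLE_ENTRY, NULL,
			   GFP_KERNEL);

	/* ... and unmap any zeropage PTEs handed out for read faults */
	unmap_mapping_pages(mapping, start, end - start + 1, false);
}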
> >
>
> I'll add one detail for future reference; not sure about the priority
> this should have, but it's one of those nasty corner cases that are not
> obvious to spot when using the shared zeropage in MAP_SHARED mappings:
>
> Currently, only FS-DAX makes use of the shared zeropage in "ordinary
> MAP_SHARED" mappings. It doesn't use it for "holes" but for "logically
> zero" pages, to avoid allocating disk blocks (-> translating to actual
> DAX memory) on read-only access.
>
> There is one issue between gup(FOLL_LONGTERM | FOLL_PIN) and the shared
> zeropage in MAP_SHARED mappings. It so far does not apply to fsdax,
> because ... we don't support FOLL_LONGTERM for fsdax at all.
>
> I spelled out part of the issue in fce831c92092 ("mm/memory: cleanly
> support zeropage in vm_insert_page*(), vm_map_pages*() and
> vmf_insert_mixed()").
>
> In general, the problem is that gup(FOLL_LONGTERM | FOLL_PIN) will have
> to decide if it is okay to longterm-pin the shared zeropage in a
> MAP_SHARED mapping (which might just be fine with a R/O file in some
> cases?), and if not, it would have to trigger FAULT_FLAG_UNSHARE similar
> to how we break COW in MAP_PRIVATE mappings (shared zeropage ->
> anonymous folio).
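
In other words, something along these lines in the GUP path
(hypothetical helper; FOLL_LONGTERM, FOLL_PIN, is_zero_page() and
VM_SHARED are real, the policy itself is the open question):

static bool gup_must_unshare_shared_zeropage(struct vm_area_struct *vma,
					     unsigned int flags,
					     struct page *page)
{
	if ((flags & (FOLL_LONGTERM | FOLL_PIN)) !=
	    (FOLL_LONGTERM | FOLL_PIN))
		return false;
	if (!is_zero_page(page))
		return false;
	/*
	 * MAP_PRIVATE is already handled by the COW/unshare logic;
	 * MAP_SHARED is the open case (maybe fine for R/O files?).
	 */
	return vma->vm_flags & VM_SHARED;
}

The caller would then retry the fault with FAULT_FLAG_UNSHARE set
instead of taking the pin.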
>
> If gup(FOLL_LONGTERM | FOLL_PIN) would just always longterm-pin the
> shared zeropage, and somebody else would end up triggering replacement
> of the shared zeropage in the pagecache (e.g., write() to the file
> offset, write access to the VMA that triggers a write fault etc.), you'd
> get a disconnect between what the GUP user sees and what the pagecache
> actually contains.
>
> The file system fault logic will have to be taught about
> FAULT_FLAG_UNSHARE and handle it accordingly (e.g., fill the file
> hole, allocate disk space, allocate an actual folio ...).
>
> Things like memfd_pin_folios() might require similar care -- that one in
> particular should likely never return the shared zeropage.
>
> Likely gup(FOLL_LONGTERM | FOLL_PIN) users like RDMA or VFIO will be
> able to trigger it.
>
>
> Not using the shared zeropage but instead some "hole" PTE marker could
> avoid this problem. Of course, that would not allow reading the shared
> zeropage there, but maybe that's not strictly required?
>
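
A sketch of that alternative, following the existing pte_marker pattern
(PTE_MARKER_FILE_HOLE and its bit value are invented here; the uffd-wp
and poisoned markers occupy the low bits today):

#define PTE_MARKER_FILE_HOLE	BIT(2)	/* illustration only */

static inline pte_t make_file_hole_marker(void)
{
	/*
	 * A marker PTE is not readable, so a read fault on it has to
	 * allocate a folio (or transiently map the zeropage), which is
	 * the trade-off mentioned above.
	 */
	return make_pte_marker(PTE_MARKER_FILE_HOLE);
}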

Link to slides for the talk:
https://drive.google.com/file/d/1MOJu5FZurV4XaCLrQhM9S5ubN7H_jEA8/view?usp=drive_link

Thanks,
Kalesh

> --
> Cheers,
>
> David / dhildenb
>

