From: Dave Chinner <david@fromorbit.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Jan Kara <jack@suse.cz>,
	linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Jeff Layton <jlayton@kernel.org>,
	Christoph Hellwig <hch@infradead.org>, Chris Mason <clm@fb.com>
Subject: Re: [RFC v2 0/9] Replacing the readpages a_op
Date: Thu, 23 Jan 2020 10:47:40 +1100
Message-ID: <20200122234740.GI9407@dread.disaster.area>
In-Reply-To: <20200121214845.GA14467@bombadil.infradead.org>

On Tue, Jan 21, 2020 at 01:48:45PM -0800, Matthew Wilcox wrote:
> On Tue, Jan 21, 2020 at 12:36:27PM +0100, Jan Kara wrote:
> > > v2: Chris asked me to show what this would look like if we just have
> > > the implementation look up the pages in the page cache, and I managed
> > > to figure out some things I'd done wrong last time.  It's even simpler
> > > than v1 (net 104 lines deleted).
> > 
> > I have an unfinished patch series laying around that pulls the ->readpage
> > / ->readpages API in somewhat different direction so I'd like to discuss
> > whether it's possible to solve my problem using your API. The problem I
> > have is that currently some operations such as hole punching can race with
> > ->readpage / ->readpages like:
> > 
> > CPU0						CPU1
> > fallocate(fd, FALLOC_FL_PUNCH_HOLE, off, len)
> >   filemap_write_and_wait_range()
> >   down_write(inode->i_rwsem);
> >   truncate_pagecache_range();

Shouldn't fallocate be holding EXT4_I(inode)->i_mmap_sem before it
truncates the page cache? Otherwise it's not serialised against
page faults. Looks at code ... oh, it does hold the i_mmap_sem in
write mode, so....

> > 						readahead(fd, off, len)
> > 						  creates pages in page cache
> > 						  looks up block mapping
> >   removes blocks from inode and frees them
> > 						  issues bio
> > 						    - reads stale data -
> > 						      potential security
> > 						      issue

.... I'm not sure that this race condition should exist anymore
as readahead should not run until the filesystem drops its inode
and mmap locks after the entire extent freeing operation is
complete...
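
i.e. following the ordering in the diagram above plus the i_mmap_sem,
the hole punch side should look roughly like this (hand-waving
sketch, function name invented, not the literal ext4 code):

/*
 * Hand-waving sketch of the hole punch locking order described
 * above; function name invented, error handling and details elided.
 * Assumes fs/ext4/ context for EXT4_I().
 */
static int punch_hole_sketch(struct inode *inode, loff_t off, loff_t len)
{
        filemap_write_and_wait_range(inode->i_mapping, off, off + len - 1);

        inode_lock(inode);                      /* i_rwsem, exclusive */
        down_write(&EXT4_I(inode)->i_mmap_sem); /* excludes page faults */
        truncate_pagecache_range(inode, off, off + len - 1);
        /* ... remove the blocks from the inode and free them ... */
        up_write(&EXT4_I(inode)->i_mmap_sem);
        inode_unlock(inode);

        return 0;
}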

> > Now how I wanted to address this is that I'd change the API convention for
> > ->readpage() so that we call it with the page unlocked and the function
> > would lock the page, check it's still OK, and do what it needs. And this
> > will allow ->readpage() and also ->readpages() to grab lock
> > (EXT4_I(inode)->i_mmap_sem in case of ext4) to synchronize with hole punching
> > while we are adding pages to page cache and mapping underlying blocks.
> > 
> > Now your API makes even ->readpages() (actually ->readahead) called with
> > pages locked so that makes this approach problematic because of lock
> > inversions. So I'd prefer if we could keep the situation that ->readpages /
> > ->readahead gets called without any pages in page cache locked...
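
If I'm reading that right, the proposed convention would look
something like this (sketch only; sketch_readpage_unlocked() and
ext4_readpage_locked() are made-up names to illustrate the idea):

/*
 * Sketch of the proposed convention: ->readpage() is handed an
 * unlocked page that is already in the page cache, takes the
 * filesystem lock first, then locks and revalidates the page.
 * Assumes fs/ext4/ context; ext4_readpage_locked() is a made-up
 * helper standing in for the existing readpage guts.
 */
static int sketch_readpage_unlocked(struct file *file, struct page *page)
{
        struct inode *inode = file_inode(file);
        int ret;

        down_read(&EXT4_I(inode)->i_mmap_sem);  /* vs hole punch */
        lock_page(page);
        if (page->mapping != inode->i_mapping) {
                /* Lost a race with truncate/hole punch. */
                unlock_page(page);
                ret = AOP_TRUNCATED_PAGE;
                goto out_unlock;
        }
        ret = ext4_readpage_locked(file, page); /* made-up helper */
out_unlock:
        up_read(&EXT4_I(inode)->i_mmap_sem);
        return ret;
}
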
> 
> I'm not a huge fan of that approach because it increases the number of
> atomic ops (right now, we __SetPageLocked on the page before adding it
> to i_pages).  Holepunch is a rather rare operation while readpage and
> readpages/readahead are extremely common, so can we make holepunch take
> a lock that will prevent new readpage(s) succeeding?
> 
> I have an idea to move the lock entries from DAX to being a generic page
> cache concept.  That way, holepunch could insert lock entries into the
> pagecache to cover the range being punched, and readpage(s) would either
> skip lock entries or block on them.
> 
> Maybe there's a better approach though.

Can we step back for a moment and look at how we already serialise
readahead against truncate/hole punch? While the readahead code
itself doesn't serialise against truncate, in all cases we should be
entering it through the filesystem at a higher layer, which provides
the truncate/holepunch serialisation before we get to the readahead
code.

The read() syscall IO path:

  read()
    ->read_iter()
      filesystem takes truncate serialisation lock
      generic_file_read_iter()
        generic_file_buffered_read()
          page_cache_sync_readahead()
            ....
          page_cache_async_readahead()
            ....
      .....
      filesystem drops truncate serialisation lock
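
For example, the XFS buffered read side is essentially this
(simplified sketch, my function name, NOWAIT and error handling
elided):

/*
 * Simplified sketch of the XFS buffered read path; assumes the
 * usual fs/xfs/ headers.
 */
static ssize_t buffered_read_sketch(struct kiocb *iocb, struct iov_iter *to)
{
        struct xfs_inode *ip = XFS_I(file_inode(iocb->ki_filp));
        ssize_t ret;

        xfs_ilock(ip, XFS_IOLOCK_SHARED);       /* truncate serialisation */
        ret = generic_file_read_iter(iocb, to); /* may kick off readahead */
        xfs_iunlock(ip, XFS_IOLOCK_SHARED);

        return ret;
}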

The page fault IO path:

page fault
  ->fault
    xfs_vm_filemap_fault
      filesystem takes mmap truncate serialisation lock
        filemap_fault
          do_async_mmap_readahead
            page_cache_async_readahead
          ....
          do_sync_mmap_readahead
            page_cache_sync_readahead
        .....
      filesystem drops mmap truncate serialisation lock
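
And the XFS read fault side is roughly this (again a simplified
sketch, DAX and write fault handling elided):

/*
 * Simplified sketch of the XFS read fault path; assumes the usual
 * fs/xfs/ headers.
 */
static vm_fault_t filemap_fault_sketch(struct vm_fault *vmf)
{
        struct xfs_inode *ip = XFS_I(file_inode(vmf->vma->vm_file));
        vm_fault_t ret;

        xfs_ilock(ip, XFS_MMAPLOCK_SHARED);     /* mmap truncate serialisation */
        ret = filemap_fault(vmf);               /* may do sync/async readahead */
        xfs_iunlock(ip, XFS_MMAPLOCK_SHARED);

        return ret;
}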

Then there is fadvise(WILLNEED), which calls
force_page_cache_readahead() directly. We now have a ->fadvise
method that filesystems can implement to add locking around this:

fadvise64
  vfs_fadvise
    ->fadvise
      xfs_file_fadvise
        filesystem takes truncate serialisation lock
        generic_fadvise()
        filesystem drops truncate serialisation lock
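
Which, simplified, amounts to something like this sketch of what
xfs_file_fadvise() does:

/*
 * Simplified sketch of the XFS ->fadvise method; assumes the usual
 * fs/xfs/ headers.
 */
static int fadvise_sketch(struct file *file, loff_t offset, loff_t len,
                          int advice)
{
        struct xfs_inode *ip = XFS_I(file_inode(file));
        int ret;

        /* WILLNEED instantiates page cache, so serialise vs hole punch. */
        if (advice == POSIX_FADV_WILLNEED)
                xfs_ilock(ip, XFS_IOLOCK_SHARED);

        ret = generic_fadvise(file, offset, len, advice);

        if (advice == POSIX_FADV_WILLNEED)
                xfs_iunlock(ip, XFS_IOLOCK_SHARED);

        return ret;
}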

XFS and overlay both have ->fadvise methods to provide this
serialisation of readahead, but ext4 does not, so users could still
trigger the stated race condition through the fadvise(WILLNEED) or
readahead() syscalls.

Hence, AFAICT, ext4 only needs to implement ->fadvise and the stated
readahead vs truncate_pagecache_range() race condition goes away.
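
Something along these lines, perhaps (untested, hypothetical sketch;
ext4 has no such code today):

/*
 * Hypothetical sketch of an ext4 ->fadvise method; this does not
 * exist in ext4 today. Assumes fs/ext4/ context for EXT4_I().
 */
static int ext4_fadvise_sketch(struct file *file, loff_t offset, loff_t len,
                               int advice)
{
        struct inode *inode = file_inode(file);
        int ret;

        if (advice != POSIX_FADV_WILLNEED)
                return generic_fadvise(file, offset, len, advice);

        /* WILLNEED instantiates page cache, so serialise vs hole punch. */
        down_read(&EXT4_I(inode)->i_mmap_sem);
        ret = generic_fadvise(file, offset, len, advice);
        up_read(&EXT4_I(inode)->i_mmap_sem);

        return ret;
}

That, plus wiring it up in ext4_file_operations, would cover both the
fadvise(WILLNEED) and readahead() syscall entry points.
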
Unless, of course, I've missed some other entry point into the
readahead code that does not vector through the filesystem first.
What have I missed?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

