From: Alistair Popple <apopple@nvidia.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>,
akpm@linux-foundation.org, Matthew Wilcox <willy@infradead.org>,
Jan Kara <jack@suse.cz>, "Darrick J. Wong" <djwong@kernel.org>,
Christoph Hellwig <hch@lst.de>,
John Hubbard <jhubbard@nvidia.com>,
linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
linux-xfs@vger.kernel.org, linux-mm@kvack.org,
linux-ext4@vger.kernel.org
Subject: Re: [PATCH v2 10/18] fsdax: Manage pgmap references at entry insertion and deletion
Date: Tue, 27 Sep 2022 16:07:05 +1000 [thread overview]
Message-ID: <8735cdl4pk.fsf@nvdebian.thelocal> (raw)
In-Reply-To: <632e031958740_33d629428@dwillia2-xfh.jf.intel.com.notmuch>
Dan Williams <dan.j.williams@intel.com> writes:
> Jason Gunthorpe wrote:
>> On Fri, Sep 23, 2022 at 09:29:51AM -0700, Dan Williams wrote:
>> > > > /**
>> > > > * pgmap_get_folio() - reference a folio in a live @pgmap by @pfn
>> > > > * @pgmap: live pgmap instance, caller ensures this does not race @pgmap death
>> > > > * @pfn: page frame number covered by @pgmap
>> > > > */
>> > > > struct folio *pgmap_get_folio(struct dev_pagemap *pgmap,
>> > > > unsigned long pfn)
>>
>> Maybe this should not be a pfn but an 'offset from the first page of
>> the pgmap'? Then we don't need the xa_load stuff, since it can't be
>> wrong by definition.
>>
>> > > > {
>> > > > struct page *page;
>> > > >
>> > > > 	VM_WARN_ONCE(pgmap != xa_load(&pgmap_array, pfn),
>> > > > 		     "pgmap lookup mismatch\n");
>> > > >
>> > > > 	if (WARN_ONCE(percpu_ref_is_dying(&pgmap->ref),
>> > > > 		      "pgmap_get_folio() on dying pgmap\n"))
>> > > > 		return NULL;
>> > >
>> > > This shouldn't be a WARN?
>> >
>> > It's a bug if someone calls this after killing the pgmap. I.e. the
>> > expectation is that the caller is synchronizing this. The only reason
>> > this isn't a VM_WARN_ONCE is because the sanity check is cheap, but I do
>> > not expect it to fire on anything but a development kernel.
>>
>> OK, that makes sense
>>
>> But shouldn't this get the pgmap refcount here? The reason we started
>> talking about this was to make all the pgmap logic self contained so
>> that the pgmap doesn't pass its own destroy until all the
>> page_free()'s have been done.
That sounds good to me at least. I just noticed we introduced this exact
bug for device private/coherent pages when making their refcounts zero
based. Nothing currently takes pgmap->ref when a private/coherent page
is mapped, so memunmap_pages() can complete and the pgmap be destroyed
while its pages are still mapped.
So I think we need to call put_dev_pagemap() as part of
free_zone_device_page().
- Alistair
>> > > > This does not create compound folios, that needs to be coordinated with
>> > > > the caller and likely needs an explicit
>> > >
>> > > Does it? What situations do you think the caller needs to coordinate
>> > > the folio size? The caller should call the function for each logical
>> > > unit of storage it wants to allocate from the pgmap.
>> >
>> > The problem for fsdax is that it needs to gather all the PTEs, hold a
>> > lock to synchronize against events that would shatter a huge page, and
>> > then build up the compound folio metadata before inserting the PMD.
>>
>> Er, at this point we are just talking about acquiring virgin pages
>> nobody else is using, not inserting things. There is no possibility of
>> concurrent shattering because, by definition, nothing else can
>> reference these struct pages at this instant.
>>
>> Also, the caller must already be serializing pgmap_get_folio()
>> against concurrent calls on the same pfn (since it is an error to call
>> pgmap_get_folio() on a non-free pfn)
>>
>> So, I would expect the caller must already have all the necessary
>> locking to accept maximally sized folios.
>>
>> eg if it has some reason to punch a hole in the contiguous range
>> (shatter the folio) it must *already* serialize against
>> pgmap_get_folio(), since something like punching a hole must know with
>> certainty if any struct pages are refcount != 0 or not, and must not
>> race with something trying to set their refcount to 1.
>
> Perhaps, I'll take a look. The scenario I am more concerned about is
> processA sets up a VMA of PAGE_SIZE and races processB to fault in the
> same filesystem block with a VMA of PMD_SIZE. Right now processA gets a
> PTE mapping and processB gets a PMD mapping, but the refcounting is all
> handled in small pages. I need to investigate more what is needed for
> fsdax to support folio_size() > mapping entry size.