From: Alistair Popple <apopple@nvidia.com>
To: "Darrick J. Wong" <djwong@kernel.org>
Cc: akpm@linux-foundation.org, dan.j.williams@intel.com,
	 linux-mm@kvack.org, alison.schofield@intel.com,
	lina@asahilina.net,  zhang.lyra@gmail.com,
	gerald.schaefer@linux.ibm.com, vishal.l.verma@intel.com,
	 dave.jiang@intel.com, logang@deltatee.com, bhelgaas@google.com,
	jack@suse.cz,  jgg@ziepe.ca, catalin.marinas@arm.com,
	will@kernel.org, mpe@ellerman.id.au,  npiggin@gmail.com,
	dave.hansen@linux.intel.com, ira.weiny@intel.com,
	 willy@infradead.org, tytso@mit.edu, linmiaohe@huawei.com,
	david@redhat.com,  peterx@redhat.com, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	 linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, nvdimm@lists.linux.dev,
	 linux-cxl@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-ext4@vger.kernel.org,  linux-xfs@vger.kernel.org,
	jhubbard@nvidia.com, hch@lst.de, david@fromorbit.com,
	 chenhuacai@kernel.org, kernel@xen0n.name,
	loongarch@lists.linux.dev
Subject: Re: [PATCH v6 05/26] fs/dax: Create a common implementation to break DAX layouts
Date: Mon, 13 Jan 2025 11:47:41 +1100
Message-ID: <lui7hffmc35dfzwxu3xyybf5pion74fbfxszfopsp6tgyt2ajq@bmpeieroavro>
In-Reply-To: <20250110164438.GJ6156@frogsfrogsfrogs>

On Fri, Jan 10, 2025 at 08:44:38AM -0800, Darrick J. Wong wrote:
> On Fri, Jan 10, 2025 at 05:00:33PM +1100, Alistair Popple wrote:
> > Prior to freeing a block, file systems supporting FS DAX must check
> > that the associated pages are both unmapped from user-space and not
> > undergoing DMA or other access from e.g. get_user_pages(). This is
> > achieved by unmapping the file range and scanning the FS DAX
> > page-cache to see if any pages within the mapping have an elevated
> > refcount.
> > 
> > This is done using two functions - dax_layout_busy_page_range(), which
> > returns a page to wait on, and dax_wait_page_idle(), which waits for
> > that page's refcount to become idle. Rather than open-code this,
> > introduce a common implementation to both unmap and wait for the page
> > to become idle.
> > 
> > Signed-off-by: Alistair Popple <apopple@nvidia.com>
> 
> So now that Dan Carpenter has complained, I guess I should look at
> this...
> 
> > ---
> > 
> > Changes for v5:
> > 
> >  - Don't wait for idle pages on non-DAX mappings
> > 
> > Changes for v4:
> > 
> >  - Fixed some build breakage due to missing symbol exports reported by
> >    John Hubbard (thanks!).
> > ---
> >  fs/dax.c            | 33 +++++++++++++++++++++++++++++++++
> >  fs/ext4/inode.c     | 10 +---------
> >  fs/fuse/dax.c       | 27 +++------------------------
> >  fs/xfs/xfs_inode.c  | 23 +++++------------------
> >  fs/xfs/xfs_inode.h  |  2 +-
> >  include/linux/dax.h | 21 +++++++++++++++++++++
> >  mm/madvise.c        |  8 ++++----
> >  7 files changed, 68 insertions(+), 56 deletions(-)
> > 
> > diff --git a/fs/dax.c b/fs/dax.c
> > index d010c10..9c3bd07 100644
> > --- a/fs/dax.c
> > +++ b/fs/dax.c
> > @@ -845,6 +845,39 @@ int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index)
> >  	return ret;
> >  }
> >  
> > +static int wait_page_idle(struct page *page,
> > +			void (cb)(struct inode *),
> > +			struct inode *inode)
> > +{
> > +	return ___wait_var_event(page, page_ref_count(page) == 1,
> > +				TASK_INTERRUPTIBLE, 0, 0, cb(inode));
> > +}
> > +
> > +/*
> > + * Unmaps the inode and waits for any DMA to complete prior to deleting the
> > + * DAX mapping entries for the range.
> > + */
> > +int dax_break_mapping(struct inode *inode, loff_t start, loff_t end,
> > +		void (cb)(struct inode *))
> > +{
> > +	struct page *page;
> > +	int error;
> > +
> > +	if (!dax_mapping(inode->i_mapping))
> > +		return 0;
> > +
> > +	do {
> > +		page = dax_layout_busy_page_range(inode->i_mapping, start, end);
> > +		if (!page)
> > +			break;
> > +
> > +		error = wait_page_idle(page, cb, inode);
> > +	} while (error == 0);
> 
> You didn't initialize error to 0, so it could be any value.  What if
> dax_layout_busy_page_range returns null the first time through the loop?

Yes. I went down the rabbit hole of figuring out why this didn't produce a
compiler warning and forgot to go back and fix it. Thanks.
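
For the next version I'm planning something like the below (untested
sketch), which just initialises error so the no-busy-page case returns 0:

int dax_break_mapping(struct inode *inode, loff_t start, loff_t end,
		void (cb)(struct inode *))
{
	struct page *page;
	/* Start at 0 so finding no busy page on the first pass returns success */
	int error = 0;

	if (!dax_mapping(inode->i_mapping))
		return 0;

	do {
		page = dax_layout_busy_page_range(inode->i_mapping, start, end);
		if (!page)
			break;

		error = wait_page_idle(page, cb, inode);
	} while (error == 0);

	return error;
}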
 
> > +
> > +	return error;
> > +}
> > +EXPORT_SYMBOL_GPL(dax_break_mapping);
> > +
> >  /*
> >   * Invalidate DAX entry if it is clean.
> >   */
> 
> <I'm no expert, skipping to xfs>
> 
> > diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
> > index 42ea203..295730a 100644
> > --- a/fs/xfs/xfs_inode.c
> > +++ b/fs/xfs/xfs_inode.c
> > @@ -2715,21 +2715,17 @@ xfs_mmaplock_two_inodes_and_break_dax_layout(
> >  	struct xfs_inode	*ip2)
> >  {
> >  	int			error;
> > -	bool			retry;
> >  	struct page		*page;
> >  
> >  	if (ip1->i_ino > ip2->i_ino)
> >  		swap(ip1, ip2);
> >  
> >  again:
> > -	retry = false;
> >  	/* Lock the first inode */
> >  	xfs_ilock(ip1, XFS_MMAPLOCK_EXCL);
> > -	error = xfs_break_dax_layouts(VFS_I(ip1), &retry);
> > -	if (error || retry) {
> > +	error = xfs_break_dax_layouts(VFS_I(ip1));
> > +	if (error) {
> >  		xfs_iunlock(ip1, XFS_MMAPLOCK_EXCL);
> > -		if (error == 0 && retry)
> > -			goto again;
> 
> Hmm, so the retry loop has moved into xfs_break_dax_layouts, which means
> that we no longer cycle the MMAPLOCK.  Why was the lock cycling
> unnecessary?

Because the lock cycling already happens in the xfs_wait_dax_page()
callback, which is called as part of the retry loop in dax_break_mapping().
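
For reference the callback looks roughly like this (from memory) - it
drops and retakes the MMAPLOCK around the sleep:

static void
xfs_wait_dax_page(
	struct inode		*inode)
{
	struct xfs_inode	*ip = XFS_I(inode);

	/* Drop the MMAPLOCK so the holder of the page reference can proceed */
	xfs_iunlock(ip, XFS_MMAPLOCK_EXCL);
	schedule();
	/* Retake it before the caller rescans for busy pages */
	xfs_ilock(ip, XFS_MMAPLOCK_EXCL);
}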

> >  		return error;
> >  	}
> >  
> > @@ -2988,19 +2984,11 @@ xfs_wait_dax_page(
> >  
> >  int
> >  xfs_break_dax_layouts(
> > -	struct inode		*inode,
> > -	bool			*retry)
> > +	struct inode		*inode)
> >  {
> > -	struct page		*page;
> > -
> >  	xfs_assert_ilocked(XFS_I(inode), XFS_MMAPLOCK_EXCL);
> >  
> > -	page = dax_layout_busy_page(inode->i_mapping);
> > -	if (!page)
> > -		return 0;
> > -
> > -	*retry = true;
> > -	return dax_wait_page_idle(page, xfs_wait_dax_page, inode);
> > +	return dax_break_mapping_inode(inode, xfs_wait_dax_page);
> >  }
> >  
> >  int
> > @@ -3018,8 +3006,7 @@ xfs_break_layouts(
> >  		retry = false;
> >  		switch (reason) {
> >  		case BREAK_UNMAP:
> > -			error = xfs_break_dax_layouts(inode, &retry);
> > -			if (error || retry)
> > +			if (xfs_break_dax_layouts(inode))
> 
> dax_break_mapping can return -ERESTARTSYS, right?  So doesn't this need
> to be:
> 			error = xfs_break_dax_layouts(inode);
> 			if (error)
> 				break;
> 
> Hm?

Right. Thanks for the review - I've fixed this for the next respin.
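
Specifically, the BREAK_UNMAP case should end up looking something like
this (untested) in the next version:

		case BREAK_UNMAP:
			error = xfs_break_dax_layouts(inode);
			if (error)
				break;
			fallthrough;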

 - Alistair

> --D
> 
> >  				break;
> >  			fallthrough;
> >  		case BREAK_WRITE:
> > diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h
> > index 1648dc5..c4f03f6 100644
> > --- a/fs/xfs/xfs_inode.h
> > +++ b/fs/xfs/xfs_inode.h
> > @@ -593,7 +593,7 @@ xfs_itruncate_extents(
> >  	return xfs_itruncate_extents_flags(tpp, ip, whichfork, new_size, 0);
> >  }
> >  
> > -int	xfs_break_dax_layouts(struct inode *inode, bool *retry);
> > +int	xfs_break_dax_layouts(struct inode *inode);
> >  int	xfs_break_layouts(struct inode *inode, uint *iolock,
> >  		enum layout_break_reason reason);
> >  
> > diff --git a/include/linux/dax.h b/include/linux/dax.h
> > index 9b1ce98..f6583d3 100644
> > --- a/include/linux/dax.h
> > +++ b/include/linux/dax.h
> > @@ -228,6 +228,20 @@ static inline void dax_read_unlock(int id)
> >  {
> >  }
> >  #endif /* CONFIG_DAX */
> > +
> > +#if !IS_ENABLED(CONFIG_FS_DAX)
> > +static inline int __must_check dax_break_mapping(struct inode *inode,
> > +			    loff_t start, loff_t end, void (cb)(struct inode *))
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline void dax_break_mapping_uninterruptible(struct inode *inode,
> > +						void (cb)(struct inode *))
> > +{
> > +}
> > +#endif
> > +
> >  bool dax_alive(struct dax_device *dax_dev);
> >  void *dax_get_private(struct dax_device *dax_dev);
> >  long dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages,
> > @@ -251,6 +265,13 @@ vm_fault_t dax_finish_sync_fault(struct vm_fault *vmf,
> >  int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index);
> >  int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
> >  				      pgoff_t index);
> > +int __must_check dax_break_mapping(struct inode *inode, loff_t start,
> > +				loff_t end, void (cb)(struct inode *));
> > +static inline int __must_check dax_break_mapping_inode(struct inode *inode,
> > +						void (cb)(struct inode *))
> > +{
> > +	return dax_break_mapping(inode, 0, LLONG_MAX, cb);
> > +}
> >  int dax_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
> >  				  struct inode *dest, loff_t destoff,
> >  				  loff_t len, bool *is_same,
> > diff --git a/mm/madvise.c b/mm/madvise.c
> > index 49f3a75..1f4c99e 100644
> > --- a/mm/madvise.c
> > +++ b/mm/madvise.c
> > @@ -1063,7 +1063,7 @@ static int guard_install_pud_entry(pud_t *pud, unsigned long addr,
> >  	pud_t pudval = pudp_get(pud);
> >  
> >  	/* If huge return >0 so we abort the operation + zap. */
> > -	return pud_trans_huge(pudval) || pud_devmap(pudval);
> > +	return pud_trans_huge(pudval);
> >  }
> >  
> >  static int guard_install_pmd_entry(pmd_t *pmd, unsigned long addr,
> > @@ -1072,7 +1072,7 @@ static int guard_install_pmd_entry(pmd_t *pmd, unsigned long addr,
> >  	pmd_t pmdval = pmdp_get(pmd);
> >  
> >  	/* If huge return >0 so we abort the operation + zap. */
> > -	return pmd_trans_huge(pmdval) || pmd_devmap(pmdval);
> > +	return pmd_trans_huge(pmdval);
> >  }
> >  
> >  static int guard_install_pte_entry(pte_t *pte, unsigned long addr,
> > @@ -1183,7 +1183,7 @@ static int guard_remove_pud_entry(pud_t *pud, unsigned long addr,
> >  	pud_t pudval = pudp_get(pud);
> >  
> >  	/* If huge, cannot have guard pages present, so no-op - skip. */
> > -	if (pud_trans_huge(pudval) || pud_devmap(pudval))
> > +	if (pud_trans_huge(pudval))
> >  		walk->action = ACTION_CONTINUE;
> >  
> >  	return 0;
> > @@ -1195,7 +1195,7 @@ static int guard_remove_pmd_entry(pmd_t *pmd, unsigned long addr,
> >  	pmd_t pmdval = pmdp_get(pmd);
> >  
> >  	/* If huge, cannot have guard pages present, so no-op - skip. */
> > -	if (pmd_trans_huge(pmdval) || pmd_devmap(pmdval))
> > +	if (pmd_trans_huge(pmdval))
> >  		walk->action = ACTION_CONTINUE;
> >  
> >  	return 0;
> > -- 
> > git-series 0.9.1
> > 

