* Re: [PATCH] xfs: compute buffer address correctly in xmbuf_map_backing_mem
       [not found] <20250408003030.GD6283@frogsfrogsfrogs>
@ 2025-04-08  5:11 ` Christoph Hellwig
  2025-04-08  6:03   ` Darrick J. Wong
  0 siblings, 1 reply; 2+ messages in thread
From: Christoph Hellwig @ 2025-04-08  5:11 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: Carlos Maiolino, xfs, willy, linux-mm

On Mon, Apr 07, 2025 at 05:30:30PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@kernel.org>
> 
> Prior to commit e614a00117bc2d, xmbuf_map_backing_mem relied on
> folio_file_page to return the base page for the xmbuf's loff_t in the
> xfile, and set b_addr to the page_address of that base page.
> 
> Now that folio_file_page has been removed from xmbuf_map_backing_mem, we
> always set b_addr to the folio_address of the folio.  This is correct
> for the situation where the folio size matches the buffer size, but it's
> totally wrong if tmpfs uses large folios.  We need to use
> offset_in_folio here.
> 
> Found via xfs/801, which demonstrated evidence of corruption of an
> in-memory rmap btree block right after initializing an adjacent block.

Hmm, I thought we'd never get large folios for our non-standard tmpfs
use.  I guess I was wrong about that...
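
To spell out how it goes wrong with a large folio (the numbers below
are purely illustrative, not taken from the test): say tmpfs hands the
xfile a 2MB folio whose first byte backs xfile offset 0x200000, and
the xmbuf sits at pos 0x202000.

	/* Sketch only; the names match the patch, the numbers are made up. */

	/* Old code: always points at the start of the folio, i.e. the
	 * bytes backing offset 0x200000, not the buffer's own bytes.
	 */
	bp->b_addr = folio_address(folio);

	/* Patched code: offset_in_folio(folio, 0x202000) == 0x2000, so
	 * b_addr lands on the bytes that actually back the buffer.
	 */
	bp->b_addr = folio_address(folio) + offset_in_folio(folio, pos);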

The fix looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

But a little note below:

> +	bp->b_addr = folio_address(folio) + offset_in_folio(folio, pos);

Given that this is or at least will become a common pattern, do we
want an mm layer helper for it?




* Re: [PATCH] xfs: compute buffer address correctly in xmbuf_map_backing_mem
  2025-04-08  5:11 ` [PATCH] xfs: compute buffer address correctly in xmbuf_map_backing_mem Christoph Hellwig
@ 2025-04-08  6:03   ` Darrick J. Wong
  0 siblings, 0 replies; 2+ messages in thread
From: Darrick J. Wong @ 2025-04-08  6:03 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Carlos Maiolino, xfs, willy, linux-mm

On Mon, Apr 07, 2025 at 10:11:08PM -0700, Christoph Hellwig wrote:
> On Mon, Apr 07, 2025 at 05:30:30PM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <djwong@kernel.org>
> > 
> > Prior to commit e614a00117bc2d, xmbuf_map_backing_mem relied on
> > folio_file_page to return the base page for the xmbuf's loff_t in the
> > xfile, and set b_addr to the page_address of that base page.
> > 
> > Now that folio_file_page has been removed from xmbuf_map_backing_mem, we
> > always set b_addr to the folio_address of the folio.  This is correct
> > for the situation where the folio size matches the buffer size, but it's
> > totally wrong if tmpfs uses large folios.  We need to use
> > offset_in_folio here.
> > 
> > Found via xfs/801, which demonstrated evidence of corruption of an
> > in-memory rmap btree block right after initializing an adjacent block.
> 
> Hmm, I thought we'd never get large folios for our non-standard tmpfs
> use.  I guess I was wrong about that...

Yeah, you can force THPs for tmpfs.  I don't know why you would,
though; the memory usage is god-awful for most of the files that end
up in there.
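
(In case anyone wants to reproduce: mounting tmpfs with -o huge=always,
or setting /sys/kernel/mm/transparent_hugepage/shmem_enabled, is how
you get large folios there; the "force" value turns them on everywhere,
which is mostly useful for testing.)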

> The fix looks good:
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> 
> But a little note below:
> 
> > +	bp->b_addr = folio_address(folio) + offset_in_folio(folio, pos);
> 
> Given that this is or at least will become a common pattern, do we
> want an mm layer helper for it?

Yeah, we should; this is the third one in XFS.  What to name it, though?

void *folio_addr(const struct folio *folio, loff_t pos) ?

I'm surprised there wasn't an equivalent for struct page.
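
Something like this, perhaps (just a sketch; folio_addr() is only the
name proposed above, not an existing mm helper):

	/*
	 * Return the kernel address of the byte backing file position
	 * @pos within @folio.  Same mapping caveats as folio_address()
	 * itself.
	 */
	static inline void *folio_addr(const struct folio *folio, loff_t pos)
	{
		return folio_address(folio) + offset_in_folio(folio, pos);
	}

and then the three XFS callers collapse to one-liners.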

--D

