From: "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com>
To: John Garry <john.g.garry@oracle.com>
Cc: david@fromorbit.com, djwong@kernel.org, chandan.babu@oracle.com,
	brauner@kernel.org, akpm@linux-foundation.org,
	willy@infradead.org, mcgrof@kernel.org, linux-mm@kvack.org,
	hare@suse.de, linux-kernel@vger.kernel.org,
	yang@os.amperecomputing.com, Zi Yan <zi.yan@sent.com>,
	linux-xfs@vger.kernel.org, p.raghav@samsung.com,
	linux-fsdevel@vger.kernel.org, hch@lst.de, gost.dev@samsung.com,
	cl@os.amperecomputing.com
Subject: Re: [PATCH v7 07/11] iomap: fix iomap_dio_zero() for fs bs > system page size
Date: Tue, 11 Jun 2024 09:41:37 +0000	[thread overview]
Message-ID: <20240611094137.vxuhldj4b3qslsdj@quentin> (raw)
In-Reply-To: <4c6e092d-5580-42c8-9932-b42995e914be@oracle.com>

> > index 49938419fcc7..9f791db473e4 100644
> > --- a/fs/iomap/buffered-io.c
> > +++ b/fs/iomap/buffered-io.c
> > @@ -1990,6 +1990,12 @@ EXPORT_SYMBOL_GPL(iomap_writepages);
> >   static int __init iomap_init(void)
> >   {
> > +	int ret;
> > +
> > +	ret = iomap_dio_init();
> > +	if (ret)
> > +		return ret;
> > +
> >   	return bioset_init(&iomap_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
> >   			   offsetof(struct iomap_ioend, io_bio),
> >   			   BIOSET_NEED_BVECS);
> 
> I suppose that it does not matter that zero_fs_block is leaked if this fails
> (or is it even leaked?), as I don't think that failing that bioset_init()
> call is handled at all.

If bioset_init() fails, then we have bigger problems than a leaked
64k of memory ;)

Do you have something like this in mind?

diff --git a/fs/internal.h b/fs/internal.h
index 30217f0ff4c6..def96c7ed9ea 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -39,6 +39,7 @@ int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
  * iomap/direct-io.c
  */
 int iomap_dio_init(void);
+void iomap_dio_exit(void);
 
 /*
  * char_dev.c
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 9f791db473e4..8d8b9e62201f 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1994,10 +1994,16 @@ static int __init iomap_init(void)
 
        ret = iomap_dio_init();
        if (ret)
-               return ret;
+               goto out;
 
-       return bioset_init(&iomap_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
+       ret = bioset_init(&iomap_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
                           offsetof(struct iomap_ioend, io_bio),
                           BIOSET_NEED_BVECS);
+       if (!ret)
+               goto out;
+
+       iomap_dio_exit();
+out:
+       return ret;
 }
 fs_initcall(iomap_init);
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index b95600b254a3..f4c9445ca50d 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -69,6 +69,12 @@ int iomap_dio_init(void)
        return 0;
 }
 
+void iomap_dio_exit(void)
+{
+       __free_pages(zero_fs_block, ZERO_FSB_ORDER);
+
+}
+
 static struct bio *iomap_dio_alloc_bio(const struct iomap_iter *iter,
                struct iomap_dio *dio, unsigned short nr_vecs, blk_opf_t opf)
 {

> 
> > +
> >   static struct bio *iomap_dio_alloc_bio(const struct iomap_iter *iter,
> >   		struct iomap_dio *dio, unsigned short nr_vecs, blk_opf_t opf)
> >   {
> > @@ -236,17 +253,22 @@ static void iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio,
> >   		loff_t pos, unsigned len)
> >   {
> >   	struct inode *inode = file_inode(dio->iocb->ki_filp);
> > -	struct page *page = ZERO_PAGE(0);
> >   	struct bio *bio;
> > +	/*
> > +	 * Max block size supported is 64k
> > +	 */
> > +	WARN_ON_ONCE(len > ZERO_FSB_SIZE);
> 
> JFYI, As mentioned in https://lore.kernel.org/linux-xfs/20240429174746.2132161-1-john.g.garry@oracle.com/T/#m5354e2b2531a5552a8b8acd4a95342ed4d7500f2,
> we would like to support an arbitrary size. Maybe I will need to loop for
> zeroing sizes > 64K.

The initial patches looped with ZERO_PAGE(0), but the feedback was to
use a huge zero page instead. When I discussed that at LSF, though,
the concern was that it would use up a lot of memory just for
sub-block zeroing, especially on architectures with a 64k base page
size.

So for now, a good tradeoff between memory usage and efficiency is to
use a 64k buffer, as that is the maximum FSB size we support. [1]
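
To make that concrete, the idea is to preallocate one zeroed 64k
buffer at init time and reuse it for every sub-block zeroing bio.
Roughly like this (a simplified sketch, not the exact code from the
patch; see [1] for that):

#define ZERO_FSB_SIZE	SZ_64K
#define ZERO_FSB_ORDER	get_order(ZERO_FSB_SIZE)

static struct page *zero_fs_block;

int iomap_dio_init(void)
{
	/* One zeroed 64k buffer, shared by all sub-block zeroing bios. */
	zero_fs_block = alloc_pages(GFP_KERNEL | __GFP_ZERO, ZERO_FSB_ORDER);
	if (!zero_fs_block)
		return -ENOMEM;

	return 0;
}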

IIUC, you will also be using this function to zero out an extent, and
not just a single FSB?

I think we could resort to looping until we have a way to request
arbitrary zero folios without having to allocate one in
iomap_dio_alloc_bio() for every IO.
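
Something like the following is what I mean by looping, i.e. capping
each bio at ZERO_FSB_SIZE and submitting as many bios as needed
(completely untested sketch built on the current iomap_dio_zero()
body, with len widened to u64; just to illustrate the idea):

static void iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio,
		loff_t pos, u64 len)
{
	struct inode *inode = file_inode(dio->iocb->ki_filp);

	while (len) {
		/* Never zero more than the preallocated 64k buffer per bio. */
		unsigned int io_len = min_t(u64, len, ZERO_FSB_SIZE);
		struct bio *bio;

		bio = iomap_dio_alloc_bio(iter, dio, 1,
				REQ_OP_WRITE | REQ_SYNC | REQ_IDLE);
		fscrypt_set_bio_crypt_ctx(bio, inode, pos >> inode->i_blkbits,
					  GFP_KERNEL);
		bio->bi_iter.bi_sector = iomap_sector(&iter->iomap, pos);
		bio->bi_private = dio;
		bio->bi_end_io = iomap_dio_bio_end_io;

		__bio_add_page(bio, zero_fs_block, io_len, 0);
		iomap_dio_submit_bio(iter, dio, bio, pos);

		pos += io_len;
		len -= io_len;
	}
}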

[1] https://lore.kernel.org/linux-xfs/20240529134509.120826-8-kernel@pankajraghav.com/

--
Pankaj

