From: "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com>
To: david@fromorbit.com, chandan.babu@oracle.com,
akpm@linux-foundation.org, brauner@kernel.org,
willy@infradead.org, djwong@kernel.org
Cc: linux-kernel@vger.kernel.org, hare@suse.de,
john.g.garry@oracle.com, gost.dev@samsung.com,
yang@os.amperecomputing.com, p.raghav@samsung.com,
cl@os.amperecomputing.com, linux-xfs@vger.kernel.org, hch@lst.de,
mcgrof@kernel.org, linux-mm@kvack.org,
linux-fsdevel@vger.kernel.org
Subject: [PATCH v6 04/11] readahead: allocate folios with mapping_min_order in readahead
Date: Wed, 29 May 2024 15:45:02 +0200 [thread overview]
Message-ID: <20240529134509.120826-5-kernel@pankajraghav.com> (raw)
In-Reply-To: <20240529134509.120826-1-kernel@pankajraghav.com>
From: Pankaj Raghav <p.raghav@samsung.com>
page_cache_ra_unbounded() was allocating single pages (order-0
folios) if no folio was found at an index. Allocate
mapping_min_order folios instead, as we need to guarantee the
minimum order if it is set.
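
In essence, the preallocation loop now requests min-order folios
instead of order-0 pages; a simplified sketch of the allocation step
(error handling elided, mirroring the hunk below):

        folio = filemap_alloc_folio(gfp_mask,
                        mapping_min_folio_order(mapping));
        if (!folio)
                break;  /* stop preallocating on failure */
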
When read_pages() is triggered and a folio is already present, check
for truncation and advance ractl->_index by mapping_min_nrpages if
that folio was truncated. This keeps the alignment requirement
intact when adding a folio to the page cache.
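
The already-present path then becomes (sketch, mirroring the hunk
below):

        if (folio && !xa_is_value(folio)) {
                long nr_pages = folio_nr_pages(folio);

                read_pages(ractl);
                /* a truncated folio may have had any order */
                if (mapping != folio->mapping)
                        nr_pages = min_nrpages;
                ractl->_index += nr_pages;
                i = ractl->_index + ractl->_nr_pages - index;
                continue;
        }
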
page_cache_ra_order() tries to allocate folios with a higher order
if the index aligns with that order. Modify it so that the order
does not go below the mapping_min_order requirement of the page
cache. The function now does the right thing even if the new_order
passed in is less than mapping_min_order.
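
In short, the order is clamped as follows (sketch, mirroring the
hunk below):

        new_order = min(mapping_max_folio_order(mapping), new_order);
        new_order = min_t(unsigned int, new_order, ilog2(ra->size));
        new_order = max(new_order, min_order);

and the loop that trims the order near EOF never drops below
min_order either:

        while (order > min_order && index + (1UL << order) - 1 > limit)
                order--;
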
When adding new folios to the page cache, we must also ensure that
the index used is aligned to mapping_min_order, as the page cache
requires the index to be aligned to the order of the folio.
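
For example, with 4K base pages and mapping_min_order = 2 (16K
folios), folio indices must be multiples of 4 (illustrative values,
not taken from the patch):

        index = mapping_align_start_index(mapping, 5);  /* -> 4 */
        nr_to_read += 5 - index;        /* window grows by 1 page */
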
readahead_expand() is called from readahead aops to extend the range
of the readahead, so it can assume ractl->_index is aligned to
min_order.
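
That assumption is asserted up front with
VM_BUG_ON(!IS_ALIGNED(ractl->_index, min_nrpages)), and every folio
added while expanding either edge is itself allocated at min_order
with an index aligned via mapping_align_start_index().
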
Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
mm/readahead.c | 85 +++++++++++++++++++++++++++++++++++++++++---------
1 file changed, 71 insertions(+), 14 deletions(-)
diff --git a/mm/readahead.c b/mm/readahead.c
index da34b28da02c..389cd802da63 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -206,9 +206,10 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
unsigned long nr_to_read, unsigned long lookahead_size)
{
struct address_space *mapping = ractl->mapping;
- unsigned long index = readahead_index(ractl);
+ unsigned long ra_folio_index, index = readahead_index(ractl);
gfp_t gfp_mask = readahead_gfp_mask(mapping);
- unsigned long i = 0;
+ unsigned long mark, i = 0;
+ unsigned int min_nrpages = mapping_min_folio_nrpages(mapping);
/*
* Partway through the readahead operation, we will have added
@@ -223,6 +224,22 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
unsigned int nofs = memalloc_nofs_save();
filemap_invalidate_lock_shared(mapping);
+ index = mapping_align_start_index(mapping, index);
+
+ /*
+ * As the iterator `i` is aligned to min_nrpages, round up the
+ * start of the lookahead-only region (the "async region",
+ * nr_to_read - lookahead_size pages in) so that the readahead
+ * flag is set at a folio-aligned mark.
+ */
+ ra_folio_index = round_up(readahead_index(ractl) + nr_to_read - lookahead_size,
+ min_nrpages);
+ mark = ra_folio_index - index;
+ if (index != readahead_index(ractl)) {
+ nr_to_read += readahead_index(ractl) - index;
+ ractl->_index = index;
+ }
+
/*
* Preallocate as many pages as we will need.
*/
@@ -230,7 +247,9 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
struct folio *folio = xa_load(&mapping->i_pages, index + i);
int ret;
+
if (folio && !xa_is_value(folio)) {
+ long nr_pages = folio_nr_pages(folio);
/*
* Page already present? Kick off the current batch
* of contiguous pages before continuing with the
@@ -240,12 +259,24 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
* not worth getting one just for that.
*/
read_pages(ractl);
- ractl->_index += folio_nr_pages(folio);
+
+ /*
+ * Move the ractl->_index by at least min_nrpages
+ * if the folio got truncated to respect the
+ * alignment constraint in the page cache.
+ */
+ if (mapping != folio->mapping)
+ nr_pages = min_nrpages;
+
+ VM_BUG_ON_FOLIO(nr_pages < min_nrpages, folio);
+ ractl->_index += nr_pages;
i = ractl->_index + ractl->_nr_pages - index;
continue;
}
- folio = filemap_alloc_folio(gfp_mask, 0);
+ folio = filemap_alloc_folio(gfp_mask,
+ mapping_min_folio_order(mapping));
if (!folio)
break;
@@ -255,11 +286,11 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
if (ret == -ENOMEM)
break;
read_pages(ractl);
- ractl->_index++;
+ ractl->_index += min_nrpages;
i = ractl->_index + ractl->_nr_pages - index;
continue;
}
- if (i == nr_to_read - lookahead_size)
+ if (i == mark)
folio_set_readahead(folio);
ractl->_workingset |= folio_test_workingset(folio);
ractl->_nr_pages += folio_nr_pages(folio);
@@ -493,13 +524,19 @@ void page_cache_ra_order(struct readahead_control *ractl,
{
struct address_space *mapping = ractl->mapping;
pgoff_t index = readahead_index(ractl);
+ unsigned int min_order = mapping_min_folio_order(mapping);
pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
pgoff_t mark = index + ra->size - ra->async_size;
unsigned int nofs;
int err = 0;
gfp_t gfp = readahead_gfp_mask(mapping);
+ unsigned int min_ra_size = max(4, mapping_min_folio_nrpages(mapping));
- if (!mapping_large_folio_support(mapping) || ra->size < 4)
+ /*
+ * Fall back when size < min_ra_size, as each folio should be
+ * at least min_nrpages anyway.
+ */
+ if (!mapping_large_folio_support(mapping) || ra->size < min_ra_size)
goto fallback;
limit = min(limit, index + ra->size - 1);
@@ -508,11 +545,20 @@ void page_cache_ra_order(struct readahead_control *ractl,
new_order += 2;
new_order = min(mapping_max_folio_order(mapping), new_order);
new_order = min_t(unsigned int, new_order, ilog2(ra->size));
+ new_order = max(new_order, min_order);
}
/* See comment in page_cache_ra_unbounded() */
nofs = memalloc_nofs_save();
filemap_invalidate_lock_shared(mapping);
+ /*
+ * If the new_order is greater than min_order and index is
+ * already aligned to new_order, then this will be a no-op, as an
+ * index aligned to new_order is also aligned to min_order.
+ */
+ ractl->_index = mapping_align_start_index(mapping, index);
+ index = readahead_index(ractl);
+
while (index <= limit) {
unsigned int order = new_order;
@@ -520,7 +566,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
if (index & ((1UL << order) - 1))
order = __ffs(index);
/* Don't allocate pages past EOF */
- while (index + (1UL << order) - 1 > limit)
+ while (order > min_order && index + (1UL << order) - 1 > limit)
order--;
err = ra_alloc_folio(ractl, index, mark, order, gfp);
if (err)
@@ -784,8 +830,15 @@ void readahead_expand(struct readahead_control *ractl,
struct file_ra_state *ra = ractl->ra;
pgoff_t new_index, new_nr_pages;
gfp_t gfp_mask = readahead_gfp_mask(mapping);
+ unsigned long min_nrpages = mapping_min_folio_nrpages(mapping);
+ unsigned int min_order = mapping_min_folio_order(mapping);
new_index = new_start / PAGE_SIZE;
+ /*
+ * Readahead code should have aligned the ractl->_index to
+ * min_nrpages before calling readahead aops.
+ */
+ VM_BUG_ON(!IS_ALIGNED(ractl->_index, min_nrpages));
/* Expand the leading edge downwards */
while (ractl->_index > new_index) {
@@ -795,9 +848,11 @@ void readahead_expand(struct readahead_control *ractl,
if (folio && !xa_is_value(folio))
return; /* Folio apparently present */
- folio = filemap_alloc_folio(gfp_mask, 0);
+ folio = filemap_alloc_folio(gfp_mask, min_order);
if (!folio)
return;
+
+ index = mapping_align_start_index(mapping, index);
if (filemap_add_folio(mapping, folio, index, gfp_mask) < 0) {
folio_put(folio);
return;
@@ -807,7 +862,7 @@ void readahead_expand(struct readahead_control *ractl,
ractl->_workingset = true;
psi_memstall_enter(&ractl->_pflags);
}
- ractl->_nr_pages++;
+ ractl->_nr_pages += min_nrpages;
ractl->_index = folio->index;
}
@@ -822,9 +877,11 @@ void readahead_expand(struct readahead_control *ractl,
if (folio && !xa_is_value(folio))
return; /* Folio apparently present */
- folio = filemap_alloc_folio(gfp_mask, 0);
+ folio = filemap_alloc_folio(gfp_mask, min_order);
if (!folio)
return;
+
+ index = mapping_align_start_index(mapping, index);
if (filemap_add_folio(mapping, folio, index, gfp_mask) < 0) {
folio_put(folio);
return;
@@ -834,10 +891,10 @@ void readahead_expand(struct readahead_control *ractl,
ractl->_workingset = true;
psi_memstall_enter(&ractl->_pflags);
}
- ractl->_nr_pages++;
+ ractl->_nr_pages += min_nrpages;
if (ra) {
- ra->size++;
- ra->async_size++;
+ ra->size += min_nrpages;
+ ra->async_size += min_nrpages;
}
}
}
--
2.34.1