From: Pankaj Raghav <kernel@pankajraghav.com>
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: p.raghav@samsung.com, david@fromorbit.com, da.gomez@samsung.com,
akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
willy@infradead.org, djwong@kernel.org, linux-mm@kvack.org,
chandan.babu@oracle.com, mcgrof@kernel.org, gost.dev@samsung.com
Subject: [RFC 19/23] truncate: align index to mapping_min_order
Date: Fri, 15 Sep 2023 20:38:44 +0200
Message-ID: <20230915183848.1018717-20-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>
From: Luis Chamberlain <mcgrof@kernel.org>

Align the start and end indices to mapping_min_order in
invalidate_inode_pages2_range(), mapping_try_invalidate() and
truncate_inode_pages_range(). This is necessary to keep the folios
added to the page cache aligned with mapping_min_order: the lookups in
these paths must cover whole min-order folios, so the start index is
rounded up and the end index rounded down to the min-order boundary,
leaving any partially covered folio at either end to the partial-folio
handling.
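As an aside for reviewers (not part of the patch), a minimal userspace
sketch of the rounding this relies on. round_down()/round_up() model
the kernel macros for the power-of-two case only, and min_order and
the indices below are hypothetical:

    #include <stdio.h>

    #define round_down(x, y) ((x) & ~((unsigned long)(y) - 1))
    #define round_up(x, y)   round_down((x) + (y) - 1, (y))

    int main(void)
    {
            unsigned int min_order = 2;               /* hypothetical */
            unsigned long nrpages = 1UL << min_order; /* pages per min-order folio */
            unsigned long start = 5, end = 14;        /* hypothetical page indices */

            /* As in mapping_try_invalidate() below: round start up and
             * end down so no min-order folio is partially covered. */
            printf("start: %lu -> %lu\n", start, round_up(start, nrpages));
            printf("end:   %lu -> %lu\n", end, round_down(end, nrpages));
            return 0;
    }

This prints "start: 5 -> 8" and "end: 14 -> 12", i.e. the range shrinks
to whole min-order folios.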
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
---
mm/truncate.c | 34 ++++++++++++++++++++++++----------
1 file changed, 24 insertions(+), 10 deletions(-)
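For reference, a worked trace of the new start/end derivation in
truncate_inode_pages_range(), using hypothetical values (PAGE_SHIFT ==
12 and a min order of 2, so nrpages == 4):

    /* Hypothetical: lstart = 0x3000 (first byte to truncate, page 3),
     * lend = 0x13fff (last byte, i.e. the end of page 19). */
    start = (0x3000 + 4 * 4096 - 1) >> 12;      /* 28671 >> 12 = 6 */
    start = round_down(6, 4);                   /* = 4: first whole min-order folio */
    end = round_down((0x13fff + 1) >> 12, 4);   /* = round_down(20, 4) = 20 */
    /* Whole-folio truncation covers indices [4, 20); the min-order
     * folio containing lstart (indices 0-3) is truncated partially. */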
diff --git a/mm/truncate.c b/mm/truncate.c
index 8e3aa9e8618e..d5ce8e30df70 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -337,6 +337,8 @@ void truncate_inode_pages_range(struct address_space *mapping,
int i;
struct folio *folio;
bool same_folio;
+ unsigned int order = mapping_min_folio_order(mapping);
+ unsigned int nrpages = 1U << order;
if (mapping_empty(mapping))
return;
@@ -347,7 +349,9 @@ void truncate_inode_pages_range(struct address_space *mapping,
* start of the range and 'partial_end' at the end of the range.
* Note that 'end' is exclusive while 'lend' is inclusive.
*/
- start = (lstart + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ start = (lstart + (nrpages * PAGE_SIZE) - 1) >> PAGE_SHIFT;
+ start = round_down(start, nrpages);
+
if (lend == -1)
/*
* lend == -1 indicates end-of-file so we have to set 'end'
@@ -356,7 +360,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
*/
end = -1;
else
- end = (lend + 1) >> PAGE_SHIFT;
+ end = round_down((lend + 1) >> PAGE_SHIFT, nrpages);
folio_batch_init(&fbatch);
index = start;
@@ -372,8 +376,9 @@ void truncate_inode_pages_range(struct address_space *mapping,
cond_resched();
}
- same_folio = (lstart >> PAGE_SHIFT) == (lend >> PAGE_SHIFT);
- folio = __filemap_get_folio(mapping, lstart >> PAGE_SHIFT, FGP_LOCK, 0);
+ same_folio = round_down(lstart >> PAGE_SHIFT, nrpages) ==
+ round_down(lend >> PAGE_SHIFT, nrpages);
+ folio = __filemap_get_folio(mapping, round_down(lstart >> PAGE_SHIFT, nrpages), FGP_LOCK, 0);
if (!IS_ERR(folio)) {
same_folio = lend < folio_pos(folio) + folio_size(folio);
if (!truncate_inode_partial_folio(folio, lstart, lend)) {
@@ -387,7 +392,8 @@ void truncate_inode_pages_range(struct address_space *mapping,
}
if (!same_folio) {
- folio = __filemap_get_folio(mapping, lend >> PAGE_SHIFT,
+ folio = __filemap_get_folio(mapping,
+ round_down(lend >> PAGE_SHIFT, nrpages),
FGP_LOCK, 0);
if (!IS_ERR(folio)) {
if (!truncate_inode_partial_folio(folio, lstart, lend))
@@ -497,15 +503,18 @@ EXPORT_SYMBOL(truncate_inode_pages_final);
unsigned long mapping_try_invalidate(struct address_space *mapping,
pgoff_t start, pgoff_t end, unsigned long *nr_failed)
{
+ unsigned int min_order = mapping_min_folio_order(mapping);
+ unsigned int nrpages = 1U << min_order;
pgoff_t indices[PAGEVEC_SIZE];
struct folio_batch fbatch;
- pgoff_t index = start;
+ pgoff_t index = round_up(start, nrpages);
+ pgoff_t end_idx = round_down(end, nrpages);
unsigned long ret;
unsigned long count = 0;
int i;
folio_batch_init(&fbatch);
- while (find_lock_entries(mapping, &index, end, &fbatch, indices)) {
+ while (find_lock_entries(mapping, &index, end_idx, &fbatch, indices)) {
for (i = 0; i < folio_batch_count(&fbatch); i++) {
struct folio *folio = fbatch.folios[i];
@@ -618,9 +627,11 @@ static int folio_launder(struct address_space *mapping, struct folio *folio)
int invalidate_inode_pages2_range(struct address_space *mapping,
pgoff_t start, pgoff_t end)
{
+ unsigned int min_order = mapping_min_folio_order(mapping);
+ unsigned int nrpages = 1U << min_order;
pgoff_t indices[PAGEVEC_SIZE];
struct folio_batch fbatch;
- pgoff_t index;
+ pgoff_t index, end_idx;
int i;
int ret = 0;
int ret2 = 0;
@@ -630,8 +641,9 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
return 0;
folio_batch_init(&fbatch);
- index = start;
- while (find_get_entries(mapping, &index, end, &fbatch, indices)) {
+ index = round_up(start, nrpages);
+ end_idx = round_down(end, nrpages);
+ while (find_get_entries(mapping, &index, end_idx, &fbatch, indices)) {
for (i = 0; i < folio_batch_count(&fbatch); i++) {
struct folio *folio = fbatch.folios[i];
@@ -660,6 +672,8 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
continue;
}
VM_BUG_ON_FOLIO(!folio_contains(folio, indices[i]), folio);
+ VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
+ VM_BUG_ON_FOLIO(folio->index & (nrpages - 1), folio);
folio_wait_writeback(folio);
if (folio_mapped(folio))
--
2.40.1