From: Jan Kara <jack@suse.cz>
To: <linux-mm@kvack.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
<linux-fsdevel@vger.kernel.org>, Jan Kara <jack@suse.cz>
Subject: [PATCH 03/10] readahead: Properly shorten readahead when falling back to do_page_cache_ra()
Date: Tue, 25 Jun 2024 12:18:53 +0200
Message-ID: <20240625101909.12234-3-jack@suse.cz>
In-Reply-To: <20240625100859.15507-1-jack@suse.cz>
When we succeed in creating some folios in page_cache_ra_order() but
then need to fall back to single page folios, we don't shorten the
amount to read passed to do_page_cache_ra() by the number of pages
we've already read. This results in reading more pages than requested
and in placing another readahead mark in the middle of the readahead
window, which confuses the readahead code. Fix the problem by properly
reducing the number of pages to read.
Signed-off-by: Jan Kara <jack@suse.cz>
---
mm/readahead.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/readahead.c b/mm/readahead.c
index af0fbd302a38..1c58e0463be1 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -491,7 +491,8 @@ void page_cache_ra_order(struct readahead_control *ractl,
 		struct file_ra_state *ra, unsigned int new_order)
 {
 	struct address_space *mapping = ractl->mapping;
-	pgoff_t index = readahead_index(ractl);
+	pgoff_t start = readahead_index(ractl);
+	pgoff_t index = start;
 	pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
 	pgoff_t mark = index + ra->size - ra->async_size;
 	unsigned int nofs;
@@ -544,7 +545,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 	if (!err)
 		return;
 fallback:
-	do_page_cache_ra(ractl, ra->size, ra->async_size);
+	do_page_cache_ra(ractl, ra->size - (index - start), ra->async_size);
 }
 
 /*
--
2.35.3
2024-06-25 10:18 [PATCH 0/10] mm: Fix various readahead quirks Jan Kara
2024-06-25 10:18 ` [PATCH 01/10] readahead: Make sure sync readahead reads needed page Jan Kara
2024-06-25 10:18 ` [PATCH 02/10] filemap: Fix page_cache_next_miss() when no hole found Jan Kara
2024-06-25 10:18 ` [PATCH 03/10] readahead: Properly shorten readahead when falling back to do_page_cache_ra() Jan Kara [this message]
2024-06-25 10:18 ` [PATCH 04/10] readahead: Drop pointless index from force_page_cache_ra() Jan Kara
2024-06-25 10:18 ` [PATCH 05/10] readahead: Drop index argument of page_cache_async_readahead() Jan Kara
2024-06-25 10:18 ` [PATCH 06/10] readahead: Drop dead code in page_cache_ra_order() Jan Kara
2024-06-25 10:18 ` [PATCH 07/10] readahead: Drop dead code in ondemand_readahead() Jan Kara
2024-06-25 10:18 ` [PATCH 08/10] readahead: Disentangle async and sync readahead Jan Kara
2024-06-25 10:18 ` [PATCH 09/10] readahead: Fold try_context_readahead() into its single caller Jan Kara
2024-06-25 10:19 ` [PATCH 10/10] readahead: Simplify gotos in page_cache_sync_ra() Jan Kara
2024-06-25 17:12 ` [PATCH 0/10] mm: Fix various readahead quirks Josef Bacik
2024-06-27 3:04 ` Zhang Peng
2024-06-27 6:10 ` zippermonkey
2024-06-27 21:13 ` Andrew Morton