linux-mm.kvack.org archive mirror
* [PATCH] Revert "readahead: properly shorten readahead when falling back to do_page_cache_ra()"
@ 2024-11-26 14:52 Jan Kara
  2024-11-26 15:01 ` Philippe Troin
  0 siblings, 1 reply; 2+ messages in thread
From: Jan Kara @ 2024-11-26 14:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Matthew Wilcox, linux-mm, linux-fsdevel, Anders Blomdell,
	Philippe Troin, Jan Kara, stable

This reverts commit 7c877586da3178974a8a94577b6045a48377ff25.

Anders and Philippe have reported that recent kernels occasionally hang
in the readahead code when used with NFS. The problem has been bisected
to 7c877586da3 ("readahead: properly shorten readahead when falling back
to do_page_cache_ra()"). The cause is that ra->size can be shrunk by the
read_pages() call, and we subsequently end up calling do_page_cache_ra()
with a negative (read: huge positive, after unsigned wraparound) number
of pages. Let's revert 7c877586da3 for now, until we find a proper way
for the logic in read_pages() and page_cache_ra_order() to coexist. This
can reduce readahead throughput due to readahead window confusion, but
that is better than outright hangs.

Reported-by: Anders Blomdell <anders.blomdell@gmail.com>
Reported-by: Philippe Troin <phil@fifi.org>
Cc: stable@vger.kernel.org
Signed-off-by: Jan Kara <jack@suse.cz>
---
 mm/readahead.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 8f1cf599b572..ea650b8b02fb 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -458,8 +458,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 		struct file_ra_state *ra, unsigned int new_order)
 {
 	struct address_space *mapping = ractl->mapping;
-	pgoff_t start = readahead_index(ractl);
-	pgoff_t index = start;
+	pgoff_t index = readahead_index(ractl);
 	unsigned int min_order = mapping_min_folio_order(mapping);
 	pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
 	pgoff_t mark = index + ra->size - ra->async_size;
@@ -522,7 +521,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 	if (!err)
 		return;
 fallback:
-	do_page_cache_ra(ractl, ra->size - (index - start), ra->async_size);
+	do_page_cache_ra(ractl, ra->size, ra->async_size);
 }
 
 static unsigned long ractl_max_pages(struct readahead_control *ractl,
-- 
2.35.3




* Re: [PATCH] Revert "readahead: properly shorten readahead when falling back to do_page_cache_ra()"
  2024-11-26 14:52 [PATCH] Revert "readahead: properly shorten readahead when falling back to do_page_cache_ra()" Jan Kara
@ 2024-11-26 15:01 ` Philippe Troin
  0 siblings, 0 replies; 2+ messages in thread
From: Philippe Troin @ 2024-11-26 15:01 UTC (permalink / raw)
  To: Jan Kara, Andrew Morton
  Cc: Matthew Wilcox, linux-mm, linux-fsdevel, Anders Blomdell, stable

On Tue, 2024-11-26 at 15:52 +0100, Jan Kara wrote:
> This reverts commit 7c877586da3178974a8a94577b6045a48377ff25.
> 
> Anders and Philippe have reported that recent kernels occasionally hang
> in the readahead code when used with NFS. The problem has been bisected
> to 7c877586da3 ("readahead: properly shorten readahead when falling back
> to do_page_cache_ra()"). The cause is that ra->size can be shrunk by the
> read_pages() call, and we subsequently end up calling do_page_cache_ra()
> with a negative (read: huge positive, after unsigned wraparound) number
> of pages. Let's revert 7c877586da3 for now, until we find a proper way
> for the logic in read_pages() and page_cache_ra_order() to coexist. This
> can reduce readahead throughput due to readahead window confusion, but
> that is better than outright hangs.
> 
> Reported-by: Anders Blomdell <anders.blomdell@gmail.com>
> Reported-by: Philippe Troin <phil@fifi.org>
> Cc: stable@vger.kernel.org
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
>  mm/readahead.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/readahead.c b/mm/readahead.c
> index 8f1cf599b572..ea650b8b02fb 100644
> --- a/mm/readahead.c
> +++ b/mm/readahead.c
> @@ -458,8 +458,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
>  		struct file_ra_state *ra, unsigned int new_order)
>  {
>  	struct address_space *mapping = ractl->mapping;
> -	pgoff_t start = readahead_index(ractl);
> -	pgoff_t index = start;
> +	pgoff_t index = readahead_index(ractl);
>  	unsigned int min_order = mapping_min_folio_order(mapping);
>  	pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
>  	pgoff_t mark = index + ra->size - ra->async_size;
> @@ -522,7 +521,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
>  	if (!err)
>  		return;
>  fallback:
> -	do_page_cache_ra(ractl, ra->size - (index - start), ra->async_size);
> +	do_page_cache_ra(ractl, ra->size, ra->async_size);
>  }
>  
>  static unsigned long ractl_max_pages(struct readahead_control *ractl,

You can add a
   Tested-by: Philippe Troin <phil@fifi.org>
tag, as I experimented with and validated this revert on top of 6.11.10.

Phil.


