[RFC] mm: readahead: change ra size for random read
From: Hillf Danton @ 2019-12-20 1:58 UTC
To: linux-kernel; +Cc: linux-mm, Hillf Danton
Set a smaller ra size for random reads than for contiguous ones. A
read is deemed random if the leap from the previous readahead window
is too large for the lower-level device to cover even with its IO
capability doubled.
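To illustrate the heuristic with round numbers (the helper function
and the values below are illustrative only, not part of the patch):
with 4KB pages, bdi->ra_pages = 256 and bdi->io_pages = 512, the
cutoff is max(512, 256) * 2 = 1024 pages, i.e. 4MB.

	#include <stdbool.h>

	/* Standalone sketch of the "deemed random" test used below. */
	static bool is_random_read(unsigned long offset, unsigned long ra_start,
				   unsigned long ra_pages, unsigned long io_pages)
	{
		unsigned long max_io = io_pages > ra_pages ? io_pages : ra_pages;
		/* distance from the previous readahead window start */
		unsigned long leap = offset > ra_start ? offset - ra_start
						       : ra_start - offset;

		/* random if even doubled device IO cannot span the leap */
		return leap > max_io * 2;
	}

A read landing more than 1024 pages from ra->start then gets the
smaller window, min(512, 256) = 256 pages, instead of the
max(512, 256) = 512 pages used for a contiguous read.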
Signed-off-by: Hillf Danton <hdanton@sina.com>
---
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -388,6 +388,7 @@ ondemand_readahead(struct address_space
 	unsigned long max_pages = ra->ra_pages;
 	unsigned long add_pages;
 	pgoff_t prev_offset;
+	bool random = true;
 
 	/*
 	 * If the request exceeds the readahead window, allow the read to
@@ -399,8 +400,34 @@ ondemand_readahead(struct address_space
 	/*
 	 * start of file
 	 */
-	if (!offset)
-		goto initial_readahead;
+	if (!offset) {
+fill_ra:
+		ra->start = offset;
+		ra->size = random ?
+			min(bdi->io_pages, bdi->ra_pages) :
+			max(bdi->io_pages, bdi->ra_pages);
+
+		ra->async_size = ra->size > req_size ?
+			ra->size - req_size : ra->size;
+
+		return ra_submit(ra, mapping, filp);
+	} else {
+		unsigned long leap;
+
+		if (offset > ra->start)
+			leap = offset - ra->start;
+		else
+			leap = ra->start - offset;
+
+		/*
+		 * Nothing but the page cache can help if the leap is
+		 * too great for the lower-level device to cover, so
+		 * feel free to treat the read as random.
+		 */
+		random = leap > max(bdi->io_pages, bdi->ra_pages) * 2;
+
+		goto fill_ra;
+	}
 
 	/*
 	 * It's the expected callback offset, assume sequential access.
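As a worked example of the sizing above (numbers illustrative): a
random 16-page request with min(bdi->io_pages, bdi->ra_pages) = 256
gets ra->start = offset, ra->size = 256 and, since 256 > 16,
ra->async_size = 256 - 16 = 240 pages of background readahead; a
request larger than the window instead gets async_size equal to the
whole window.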