linux-mm.kvack.org archive mirror
From: Anatoly Stepanov <stepanov.anatoly@huawei.com>
To: <willy@infradead.org>, <akpm@linux-foundation.org>,
	<david@kernel.org>, <ljs@kernel.org>, <Liam.Howlett@oracle.com>,
	<vbabka@kernel.org>, <rppt@kernel.org>, <surenb@google.com>,
	<mhocko@suse.com>, <wangkefeng.wang@huawei.com>,
	<yanquanmin1@huawei.com>, <zuoze1@huawei.com>,
	<artem.kuzin@huawei.com>, <gutierrez.asier@huawei-partners.com>
Cc: <linux-fsdevel@vger.kernel.org>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>,
	Anatoly Stepanov <stepanov.anatoly@huawei.com>
Subject: [RFC PATCH 2/2] filemap: use high-order folios in filemap sync RA
Date: Thu, 16 Apr 2026 03:28:53 +0800	[thread overview]
Message-ID: <20260415192853.3470423-3-stepanov.anatoly@huawei.com> (raw)
In-Reply-To: <20260415192853.3470423-1-stepanov.anatoly@huawei.com>

[Idea]

If an mmap'ed file is accessed such that async RA never kicks in,
we might end up with only order-0 folios in the page cache.

If fault_around_bytes is larger than a single page, it's beneficial
to use high-order folios, which brings a significant
filemap_map_pages() speedup.
So, let's use fault_around_bytes as a starting point here.

If an arch supports PTE coalescing, we can get more of those for free
(see the arm64 example below).

We don't save the new order to "ra->order", so if async RA happens
it will still start from order 0 as usual.

[Things to be discussed]

At the same time, I can see a drawback for 16K and 64K base pages:
fault_around_bytes will still default to 64K. In that case it seems to
make sense to express fault_around_bytes as order-N of PAGE_SIZE rather
than a fixed byte count.

Another issue: when fault_around_bytes=0 we may still want high-order
folios for sync RA, e.g. to get cont-PTE coverage. For this we could
use something like "max(fault_around_order, cont_pte_order)".

Or introduce a dedicated tunable like "sync_mmap_order".

[Benchmark]

The simple benchmark below reads a 100M file in 4M (RA size) strides,
so that async RA doesn't kick in and the page cache ends up filled
with order-0 folios.

The patched kernel gives a ~3x increase in throughput once the page
cache is filled.

The main speedup comes from filemap_map_pages() due to the use of
high-order folios.

As a bonus, we get better cont-PTE bit coverage on arm64.

Example:
// Open a 100M file and touch one byte in every 4M chunk, given max_ra=4M
// Perform 10 runs, measure the throughput.
...
    char *map = mmap(NULL, filesize, PROT_READ, MAP_PRIVATE, fd, 0);
    if (map == MAP_FAILED) {
        perror("Error mapping file");
        close(fd);
        return 1;
    }

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    unsigned int size_4M = 4 * 1024 * 1024;
    unsigned int num_reads = filesize / size_4M;
    volatile char val;
    for (unsigned int i = 0; i < num_reads; i++) {
        off_t offset = (off_t)i * size_4M;
        val = map[offset];
    }

    clock_gettime(CLOCK_MONOTONIC, &end);

    clock_gettime(CLOCK_MONOTONIC, &end);
...

Before patch (last 3 runs):
...
Throughput: 127942.68 operations per second
Throughput: 133646.96 operations per second
Throughput: 134321.94 operations per second

// filemap_map_pages(), fault_around_bytes = 64K
Time per 10 runs: ~2000 usec

// "smaps" numbers for the test file:
Rss:                1600 kB
Private_Clean:      1600 kB
Referenced:         1540 kB
ContPTE:	    0 kB

Patched kernel (last 3 runs):
...
Throughput: 366515.17 operations per second
Throughput: 404465.30 operations per second
Throughput: 370535.05 operations per second

// filemap_map_pages(), fault_around_bytes = 64K
Time per 10 runs: ~730 usec

// "smaps" numbers for the test file:
Rss:                1600 kB
Private_Clean:      1600 kB
Referenced:         1540 kB
ContPTE(Rss):       1536 kB

Signed-off-by: Anatoly Stepanov <stepanov.anatoly@huawei.com>
---
 include/linux/pagemap.h | 1 +
 mm/filemap.c            | 1 +
 mm/internal.h           | 1 +
 mm/memory.c             | 2 +-
 mm/readahead.c          | 5 +++--
 5 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index ec442af3f..e133a3a6b 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -1359,6 +1359,7 @@ struct readahead_control {
 	struct file *file;
 	struct address_space *mapping;
 	struct file_ra_state *ra;
+	unsigned int sync_mmap_order;
 /* private: use the readahead_* accessors instead */
 	pgoff_t _index;
 	unsigned int _nr_pages;
diff --git a/mm/filemap.c b/mm/filemap.c
index 406cef06b..1ed5a0688 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3398,6 +3398,7 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 		ra->size = ra->ra_pages;
 		ra->async_size = ra->ra_pages / 4;
 		ra->order = 0;
+		ractl.sync_mmap_order = __ffs(fault_around_pages);
 	}
 
 	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
diff --git a/mm/internal.h b/mm/internal.h
index cb0af847d..96157c82b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1770,4 +1770,5 @@ static inline int io_remap_pfn_range_complete(struct vm_area_struct *vma,
 	return remap_pfn_range_complete(vma, addr, pfn, size, prot);
 }
 
+extern unsigned long fault_around_pages;
 #endif	/* __MM_INTERNAL_H */
diff --git a/mm/memory.c b/mm/memory.c
index 2f815a34d..57ae027dd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5670,7 +5670,7 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 	return ret;
 }
 
-static unsigned long fault_around_pages __read_mostly =
+unsigned long fault_around_pages __read_mostly =
 	65536 >> PAGE_SHIFT;
 
 #ifdef CONFIG_DEBUG_FS
diff --git a/mm/readahead.c b/mm/readahead.c
index 7b05082c8..322bc115b 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -476,7 +476,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 	unsigned int nofs;
 	int err = 0;
 	gfp_t gfp = readahead_gfp_mask(mapping);
-	unsigned int new_order = ra->order;
+	unsigned int new_order = max(ra->order, ractl->sync_mmap_order);
 
 	trace_page_cache_ra_order(mapping->host, start, ra);
 	if (!mapping_large_folio_support(mapping)) {
@@ -490,7 +490,8 @@ void page_cache_ra_order(struct readahead_control *ractl,
 	new_order = min_t(unsigned int, new_order, ilog2(ra->size));
 	new_order = max(new_order, min_order);
 
-	ra->order = new_order;
+	if (ra->order >= ractl->sync_mmap_order)
+		ra->order = new_order;
 
 	/* See comment in page_cache_ra_unbounded() */
 	nofs = memalloc_nofs_save();
-- 
2.34.1



Thread overview: 9+ messages
2026-04-15 19:28 [RFC PATCH 0/2] Use high-order folios in mmap " Anatoly Stepanov
2026-04-15 13:18 ` Matthew Wilcox
2026-04-15 13:33   ` Stepanov Anatoly
2026-04-15 19:28 ` [RFC PATCH 1/2] procfs: add contpte info into smaps Anatoly Stepanov
2026-04-15 12:52   ` David Hildenbrand (Arm)
2026-04-15 19:28 ` Anatoly Stepanov [this message]
2026-04-15 12:06   ` [RFC PATCH 2/2] filemap: use high-order folios in filemap sync RA Pedro Falcato
2026-04-15 12:31     ` Stepanov Anatoly
2026-04-15 12:46     ` Stepanov Anatoly
