From: Mingyu He <mingyu.he@shopee.com>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Cc: hannes@cmpxchg.org, clm@meta.com, linux-kernel@vger.kernel.org,
willy@infradead.org, kirill@shutemov.name, bfoster@redhat.com,
Jens Axboe <axboe@kernel.dk>,
littleswimmingwhale@gmail.com
Subject: [ISSUE] Read performance regression when using RWF_DONTCACHE from 8026e49 "mm/filemap: add read support for RWF_DONTCACHE"
Date: Wed, 15 Apr 2026 15:28:27 +0800
Message-ID: <CAAoBcuSK38z5Nh7rGuHDFwpNvtc0W49OZKc1_d1QkCbK_nL7Ew@mail.gmail.com>
Hi,
Introduction:
I find this feature quite useful: in many scenarios a file only needs
to be read once and then discarded, and keeping it in the page cache
can hurt read performance once cache reclaim kicks in. I therefore ran
functional tests after the official release of v7.0.0 and found that
in a plain preadv2 read scenario (writes not yet tested), read
performance actually regresses significantly. I would like to discuss
whether this is expected, or whether my usage or workload is wrong.
Test Setup:
I tested reading a 5GB file on an NVMe drive with preadv2. The file
was generated with:
dd if=/dev/random of=file_top.bin oflag=direct bs=1M count=5120
The test was run in two scenarios:
1. Outside of any cgroup.
2. Inside a cgroup with memory.max = 3G and memory.high = 1G.
In each scenario the test program ran once. It performs a controlled
experiment: round 1 uses preadv2 with RWF_DONTCACHE, round 2 does
not.
Test Results:
With RWF_DONTCACHE the performance dropped drastically, and the
result was consistent both inside and outside the cgroup. During the
tests I monitored memory.stat inside the cgroup and confirmed that
RWF_DONTCACHE was indeed working (the file cache stayed very small).
The smaller the buffer_size in the test program, the larger the drop.
With a 4k buffer_size performance decreased significantly; with the
buffer_size increased to 128K, the RWF_DONTCACHE read actually beat
the unflagged one by about 10%.
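Besides memory.stat, residency can be cross-checked from userspace
with mincore(2): map the file and count which of its pages are
currently in the page cache. A minimal sketch (the helper name
resident_pages is mine, not part of the test program):

```c
#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Count how many pages of 'path' are currently resident in the page
 * cache.  Returns the resident page count, or -1 on error. */
static long resident_pages(const char *path)
{
	struct stat st;
	long page = sysconf(_SC_PAGESIZE), n = -1;
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return -1;
	if (fstat(fd, &st) < 0 || st.st_size == 0) {
		close(fd);
		return -1;
	}

	size_t pages = (st.st_size + page - 1) / page;
	unsigned char *vec = malloc(pages);
	void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);

	if (vec && map != MAP_FAILED && mincore(map, st.st_size, vec) == 0) {
		n = 0;
		for (size_t i = 0; i < pages; i++)
			n += vec[i] & 1;	/* low bit = resident */
	}
	if (map != MAP_FAILED)
		munmap(map, st.st_size);
	free(vec);
	close(fd);
	return n;
}
```

After a RWF_DONTCACHE round this count should stay near zero for the
test file, while after a normal buffered round it should approach the
file size in pages.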
Hypothesis:
I suspect readahead is involved. In most cases, files that are
"accessed once" are still read sequentially. RWF_DONTCACHE may be
dropping the readahead pages, so sequential locality is never
exploited, whereas reads without the flag keep the prefetched pages
and are much faster.
Discussion:
Is there an issue with my program, or is the test flawed? If both are
fine, is it worth optimizing RWF_DONTCACHE for this case? The concept
of RWF_DONTCACHE itself is very attractive, but the practical effect
in this scenario is not ideal.
Below is my test program and hardware information:
================================
CPU: Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz (96 cores)
OS: Ubuntu 24.04
Kernel: v7.0.0 (Tested after official release)
Disk:
nvme0n1 ext4 1.7T 5G 0% 0 Dell Ent NVMe v2 AGN RI U.2 1.92TB /data
=================================
Test Results (with 4k buffer_size):
File: /data/file_top.bin (5.00 GiB)
=== Round 1: preadv2 + RWF_DONTCACHE ===
file: /data/file_top.bin
flags: RWF_DONTCACHE
page cache dropped
bytes read: 5368709120 (5.00 GiB)
time: 35068.1 ms
throughput: 146.0 MiB/s
=== Round 2: preadv2 (normal) ===
file: /data/file_top.bin
flags: (none)
page cache dropped
bytes read: 5368709120 (5.00 GiB)
time: 3428.6 ms
throughput: 1493.3 MiB/s
==============================
Test Program:
/*
 * test_preadv2_dontcache.c - Compare preadv2 with/without RWF_DONTCACHE
 *
 * Must run as root: the page cache is dropped via
 * /proc/sys/vm/drop_caches before each round so both rounds start cold.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <sys/uio.h>
#include <sys/stat.h>
#include <time.h>

#ifndef RWF_DONTCACHE
#define RWF_DONTCACHE 0x00000080
#endif

#define BUF_SIZE (4 * 1024)
#define DEFAULT_PATH "/data/file_top.bin"

static void drop_caches(void)
{
	int fd;

	sync();
	fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
	if (fd < 0) {
		perror("open /proc/sys/vm/drop_caches (need root)");
		exit(1);
	}
	if (write(fd, "3\n", 2) != 2)
		perror("write drop_caches");
	close(fd);
}

static double time_diff_ms(struct timespec *start, struct timespec *end)
{
	return (end->tv_sec - start->tv_sec) * 1000.0 +
	       (end->tv_nsec - start->tv_nsec) / 1e6;
}

static void read_file(const char *path, int flags, const char *label)
{
	char *buf;
	struct iovec iov;
	struct timespec t_start, t_end;
	ssize_t ret;
	off_t offset = 0;
	size_t total = 0;
	double ms;
	int fd;

	buf = aligned_alloc(4096, BUF_SIZE);
	if (!buf) {
		perror("aligned_alloc");
		exit(1);
	}
	fd = open(path, O_RDONLY);
	if (fd < 0) {
		perror("open");
		exit(1);
	}
	iov.iov_base = buf;
	iov.iov_len = BUF_SIZE;

	printf("\n=== %s ===\n", label);
	printf("  file:  %s\n", path);
	printf("  flags: %s\n",
	       (flags & RWF_DONTCACHE) ? "RWF_DONTCACHE" : "(none)");

	drop_caches();
	printf("  page cache dropped\n");

	clock_gettime(CLOCK_MONOTONIC, &t_start);
	while (1) {
		ret = preadv2(fd, &iov, 1, offset, flags);
		if (ret < 0) {
			perror("preadv2");
			break;
		}
		if (ret == 0)		/* EOF */
			break;
		offset += ret;
		total += ret;
	}
	clock_gettime(CLOCK_MONOTONIC, &t_end);
	ms = time_diff_ms(&t_start, &t_end);

	printf("  bytes read: %zu (%.2f GiB)\n",
	       total, total / (1024.0 * 1024.0 * 1024.0));
	printf("  time:       %.1f ms\n", ms);
	printf("  throughput: %.1f MiB/s\n",
	       total / (1024.0 * 1024.0) / (ms / 1000.0));

	close(fd);
	free(buf);
}

int main(int argc, char *argv[])
{
	const char *path = DEFAULT_PATH;
	struct stat st;

	if (argc > 1)
		path = argv[1];
	if (stat(path, &st) == 0)
		printf("File: %s (%.2f GiB)\n", path,
		       st.st_size / (1024.0 * 1024.0 * 1024.0));

	read_file(path, RWF_DONTCACHE, "Round 1: preadv2 + RWF_DONTCACHE");
	read_file(path, 0, "Round 2: preadv2 (normal)");
	return 0;
}
________________________________
Thread overview: 4+ messages
2026-04-15 7:28 Mingyu He [this message]
2026-04-15 10:05 ` Kiryl Shutsemau
2026-04-15 11:04 ` Mingyu He
2026-04-15 13:49 ` Matthew Wilcox