linux-mm.kvack.org archive mirror
* [ISSUE] Read performance regression when using RWF_DONTCACHE from 8026e49 "mm/filemap: add read support for RWF_DONTCACHE"
@ 2026-04-15  7:28 Mingyu He
  2026-04-15 10:05 ` Kiryl Shutsemau
  2026-04-15 13:49 ` Matthew Wilcox
  0 siblings, 2 replies; 4+ messages in thread
From: Mingyu He @ 2026-04-15  7:28 UTC (permalink / raw)
  To: linux-mm, linux-fsdevel
  Cc: hannes, clm, linux-kernel, willy, kirill, bfoster, Jens Axboe,
	littleswimmingwhale

Hi,

Introduction:

I found this feature quite useful because in many scenarios files only
need to be read once and then discarded; keeping them in the page
cache can hurt read performance during cache reclaim. I therefore ran
functional tests after the official release of v7.0.0 and found that
in normal preadv2 read scenarios (writes have not been tested yet),
read performance actually regresses significantly. I would like to
discuss whether this is expected, or whether my usage or application
scenario is incorrect.

Test Setup:

I tested reading a 5GB file on my NVMe drive using preadv2. The file
was generated using:

dd if=/dev/random of=file_top.bin oflag=direct bs=1M count=5120

The test was conducted in two scenarios:

1. Running outside of any cgroup.
2. Running within a cgroup with limits: memory.max = 3G, memory.high = 1G.

Each scenario ran the test program once. The test program performs a
controlled experiment: the first round uses preadv2 with
RWF_DONTCACHE, and the second round does not.

Test Results:

I found that with RWF_DONTCACHE applied, performance dropped
drastically. This result was consistent both inside and outside the
cgroup. During the tests I monitored memory.stat within the cgroup
and confirmed that RWF_DONTCACHE was indeed working (the file cache
remained very small).

The smaller the buffer_size in the test program, the more the
performance dropped. Initially, I used a 4k buffer_size, and the
performance decreased significantly. When the buffer_size was
increased to 128K, the read performance with RWF_DONTCACHE actually
surpassed the non-flagged version by about 10%.

Hypothesis:

I suspect this is due to readahead. In most cases, files that are
"accessed once" are read sequentially. RWF_DONTCACHE may be dropping
the readahead pages, so the locality of the sequential access pattern
is never exploited; reads without the flag keep those prefetched
pages, making them much faster.
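
One way to probe this (a sketch I have not run yet; the sysfs path
hard-codes my nvme0n1 test device) would be to size the read buffer
to the device's readahead window, so a single preadv2() consumes an
entire readahead batch before the pages are dropped. Notably 128K,
where RWF_DONTCACHE started to win, is the kernel's default
read_ahead_kb:

#include <stdio.h>
#include <stdlib.h>

/* Sketch: use the device readahead window as the read buffer size.
 * The sysfs path below is specific to my nvme0n1 test disk. */
static size_t readahead_buf_size(void)
{
	FILE *f = fopen("/sys/block/nvme0n1/queue/read_ahead_kb", "r");
	long kb = 128;	/* kernel default, used if the file is unreadable */

	if (f) {
		if (fscanf(f, "%ld", &kb) != 1 || kb <= 0)
			kb = 128;
		fclose(f);
	}
	return (size_t)kb * 1024;
}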

Discussion:

Is there an issue with my program? Or is the test flawed? If both are
fine, is it worth further optimizing RWF_DONTCACHE in this regard? The
concept of RWF_DONTCACHE itself is very attractive, but the practical
effect in this scenario is not ideal.

Below is my test program and hardware information:

================================

CPU: Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz (96 cores)

OS: Ubuntu 24.04

Kernel: v7.0.0 (Tested after official release)

Disk:

nvme0n1 ext4 1.7T 5G 0% 0 Dell Ent NVMe v2 AGN RI U.2 1.92TB /data


=================================

Test Results (with 4k buffer_size):

File: /data/file_top.bin (5.00 GiB)

=== Round 1: preadv2 + RWF_DONTCACHE ===
  file:       /data/file_top.bin
  flags:      RWF_DONTCACHE
  page cache dropped
  bytes read: 5368709120 (5.00 GiB)
  time:       35068.1 ms
  throughput: 146.0 MiB/s

=== Round 2: preadv2 (normal) ===
  file:       /data/file_top.bin
  flags:      (none)
  page cache dropped
  bytes read: 5368709120 (5.00 GiB)
  time:       3428.6 ms
  throughput: 1493.3 MiB/s


==============================

Test Program:

/*
 * test_preadv2_dontcache.c - Compare preadv2 with/without RWF_DONTCACHE
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <sys/uio.h>
#include <sys/stat.h>
#include <time.h>

#ifndef RWF_DONTCACHE
#define RWF_DONTCACHE 0x00000080
#endif

#define BUF_SIZE (4 * 1024)
#define DEFAULT_PATH "/data/file_top.bin"

/* Drop clean page cache, dentries and inodes; requires root. */
static void drop_caches(void)
{
	int fd;

	sync();
	fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
	if (fd < 0)
		exit(1);
	if (write(fd, "3\n", 2) != 2)
		exit(1);
	close(fd);
}

static double time_diff_ms(struct timespec *start, struct timespec *end)
{
	return (end->tv_sec - start->tv_sec) * 1000.0 +
	       (end->tv_nsec - start->tv_nsec) / 1e6;
}

static void read_file(const char *path, int flags, const char *label)
{
	char *buf;
	struct iovec iov;
	struct timespec t_start, t_end;
	ssize_t ret;
	off_t offset = 0;
	size_t total = 0;
	int fd;

	buf = aligned_alloc(4096, BUF_SIZE);
	fd = open(path, O_RDONLY);
	if (!buf || fd < 0) {
		perror("setup");
		exit(1);
	}
	iov.iov_base = buf;
	iov.iov_len = BUF_SIZE;

	drop_caches();
	clock_gettime(CLOCK_MONOTONIC, &t_start);

	while (1) {
		ret = preadv2(fd, &iov, 1, offset, flags);
		if (ret <= 0)
			break;
		offset += ret;
		total += ret;
	}

	clock_gettime(CLOCK_MONOTONIC, &t_end);
	printf("\n=== %s ===\n", label);
	printf("  throughput: %.1f MiB/s\n",
	       total / (1024.0 * 1024.0) /
	       (time_diff_ms(&t_start, &t_end) / 1000.0));

	close(fd);
	free(buf);
}

int main(int argc, char *argv[])
{
	const char *path = DEFAULT_PATH;

	if (argc > 1)
		path = argv[1];

	read_file(path, RWF_DONTCACHE, "Round 1: preadv2 + RWF_DONTCACHE");
	read_file(path, 0, "Round 2: preadv2 (normal)");
	return 0;
}




* Re: [ISSUE] Read performance regression when using RWF_DONTCACHE from 8026e49 "mm/filemap: add read support for RWF_DONTCACHE"
  2026-04-15  7:28 [ISSUE] Read performance regression when using RWF_DONTCACHE from 8026e49 "mm/filemap: add read support for RWF_DONTCACHE" Mingyu He
@ 2026-04-15 10:05 ` Kiryl Shutsemau
  2026-04-15 11:04   ` Mingyu He
  2026-04-15 13:49 ` Matthew Wilcox
  1 sibling, 1 reply; 4+ messages in thread
From: Kiryl Shutsemau @ 2026-04-15 10:05 UTC (permalink / raw)
  To: Mingyu He
  Cc: linux-mm, linux-fsdevel, hannes, clm, linux-kernel, willy,
	bfoster, Jens Axboe, littleswimmingwhale

On Wed, Apr 15, 2026 at 03:28:27PM +0800, Mingyu He wrote:
> The smaller the buffer_size in the test program, the more the
> performance dropped. Initially, I used a 4k buffer_size, and the
> performance decreased significantly. When the buffer_size was
> increased to 128K, the read performance with RWF_DONTCACHE actually
> surpassed the non-flagged version by about 10%.

Maybe you have a block size larger than 4k? Core-mm will allocate
larger folios for the page cache if the filesystem asks it to. And if
you try to access the file with a 4k buffer, you get multiple
read-discard cycles for the same block with RWF_DONTCACHE. Without
RWF_DONTCACHE only the first access to the block leads to I/O; the
following accesses are served from the page cache.
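
A quick way to check from userspace (just a sketch; st_blksize is the
preferred I/O block size the kernel reports, which for ext4 matches
the filesystem block size):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	struct stat st;

	/* st_blksize: preferred I/O granularity for this file */
	if (stat("/data/file_top.bin", &st) == 0)
		printf("st_blksize: %ld\n", (long)st.st_blksize);
	return 0;
}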

-- 
  Kiryl Shutsemau / Kirill A. Shutemov



* Re: [ISSUE] Read performance regression when using RWF_DONTCACHE from 8026e49 "mm/filemap: add read support for RWF_DONTCACHE"
  2026-04-15 10:05 ` Kiryl Shutsemau
@ 2026-04-15 11:04   ` Mingyu He
  0 siblings, 0 replies; 4+ messages in thread
From: Mingyu He @ 2026-04-15 11:04 UTC (permalink / raw)
  To: Kiryl Shutsemau
  Cc: linux-mm, linux-fsdevel, hannes, clm, linux-kernel, willy,
	bfoster, Jens Axboe, littleswimmingwhale

Hi Kiryl,

I have listed my physical sector and filesystem block sizes at the
tail of this email.

I have two types of disk on this machine: an NVMe SSD and an HDD. I
tested buffer sizes of 1k, 4k, 16k, 64k and 128k, both with and
without the cgroup.

On both types of disk I got the same result: RWF_DONTCACHE performs
much worse.

I still strongly suspect readahead: pages are dropped after reading,
so the system needs another I/O to fetch the next part of the data.
However, I have not tested cases where kswapd is under heavy load
(though that is not the core of the question).

I think this case may be worth optimizing, but I am not sure whether
it really needs optimization or whether I am simply using the flag
incorrectly, as I am not a proficient kernel developer, so I would
appreciate advice from experts like you. If it is worth optimizing, I
would like to work on it myself (though many people have probably
noticed this already, so I am not sure I could finish before more
experienced developers do).


RWF_DONTCACHE Performance Comparison (MiB/s)

+--------------+-------------+------------------------+------------------+
| Device Type  | Buffer Size | RWF_DONTCACHE (MiB/s)  | Normal (MiB/s)   |
+--------------+-------------+------------------------+------------------+
| HDD          | 4K          | 119.6                  | 2268.1           |
| HDD          | 16K         | 1568.6                 | 3814.7           |
| HDD          | 64K         | 2351.0                 | 4161.8           |
| HDD          | 128K        | 2951.4                 | 4061.0           |
+--------------+-------------+------------------------+------------------+
| NVMe         | 4K          | 148.7                  | 1556.1           |
| NVMe         | 16K         | 619.0                  | 1601.5           |
| NVMe         | 64K         | 1139.6                 | 1618.6           |
| NVMe         | 128K        | 1725.4                 | 1579.2           |
+--------------+-------------+------------------------+------------------+

NVMe @ 128K is the only case where RWF_DONTCACHE beats normal reads.

# lsblk -o NAME,FSTYPE,SIZE,FSUSED,FSUSE%,ROTA,MODEL,MOUNTPOINT

NAME    FSTYPE  SIZE FSUSED FSUSE% ROTA MODEL                              MOUNTPOINT
sda             1.1T                  1 PERC H750 Adp
├─sda1            4M                  1
├─sda2  vfat    110M   6.1M     6%    1                                    /boot/efi
├─sda3  ext4      2G 517.1M    27%    1                                    /boot
└─sda4  xfs     1.1T  70.4G     6%    1                                    /
nvme0n1 ext4    1.7T     5G     0%    0 Dell Ent NVMe v2 AGN RI U.2 1.92TB /data

# lsblk -o NAME,PHY-SEC,LOG-SEC
NAME    PHY-SEC LOG-SEC
sda         512     512
├─sda1      512     512
├─sda2      512     512
├─sda3      512     512
└─sda4      512     512
nvme0n1     512     512

# dumpe2fs /dev/nvme0n1 | grep "Block size"
dumpe2fs 1.47.0 (5-Feb-2023)
Block size:               4096

# xfs_info /
meta-data=/dev/sda4              isize=512    agcount=566, agsize=516864 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=292326651, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


On Wed, Apr 15, 2026 at 6:05 PM Kiryl Shutsemau <kirill@shutemov.name> wrote:
>
> On Wed, Apr 15, 2026 at 03:28:27PM +0800, Mingyu He wrote:
> > The smaller the buffer_size in the test program, the more the
> > performance dropped. Initially, I used a 4k buffer_size, and the
> > performance decreased significantly. When the buffer_size was
> > increased to 128K, the read performance with RWF_DONTCACHE actually
> > surpassed the non-flagged version by about 10%.
>
> Maybe you have a block size larger than 4k? Core-mm will allocate
> larger folios for the page cache if the filesystem asks it to. And if
> you try to access the file with a 4k buffer, you get multiple
> read-discard cycles for the same block with RWF_DONTCACHE. Without
> RWF_DONTCACHE only the first access to the block leads to I/O; the
> following accesses are served from the page cache.
>
> --
>   Kiryl Shutsemau / Kirill A. Shutemov



* Re: [ISSUE] Read performance regression when using RWF_DONTCACHE from 8026e49 "mm/filemap: add read support for RWF_DONTCACHE"
  2026-04-15  7:28 [ISSUE] Read performance regression when using RWF_DONTCACHE from 8026e49 "mm/filemap: add read support for RWF_DONTCACHE" Mingyu He
  2026-04-15 10:05 ` Kiryl Shutsemau
@ 2026-04-15 13:49 ` Matthew Wilcox
  1 sibling, 0 replies; 4+ messages in thread
From: Matthew Wilcox @ 2026-04-15 13:49 UTC (permalink / raw)
  To: Mingyu He
  Cc: linux-mm, linux-fsdevel, hannes, clm, linux-kernel, kirill,
	bfoster, Jens Axboe, littleswimmingwhale

On Wed, Apr 15, 2026 at 03:28:27PM +0800, Mingyu He wrote:
> I found this feature quite useful because in many scenarios files only
> need to be read once and then discarded; keeping them in the page
> cache can hurt read performance during cache reclaim. I therefore ran
> functional tests after the official release of v7.0.0 and found that
> in normal preadv2 read scenarios (writes have not been tested yet),
> read performance actually regresses significantly. I would like to
> discuss whether this is expected, or whether my usage or application
> scenario is incorrect.

Your entire premise is wrong.  This is not a magic "make I/O go faster" 
flag.  Comparing it to cached I/O is entirely wrong; your workload
clearly benefits from readahead and you've asked to not do readahead
by specifying RWF_DONTCACHE.

Rather, you should compare against O_DIRECT reads.
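
Something along these lines, reusing the shape of your test program
(a sketch; it assumes 4096-byte alignment satisfies your device's
O_DIRECT requirements and that the buffer size stays a multiple of it):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

static void read_file_odirect(const char *path, size_t buf_size)
{
	/* O_DIRECT needs an aligned buffer, offset and length */
	char *buf = aligned_alloc(4096, buf_size);
	int fd = open(path, O_RDONLY | O_DIRECT);
	struct iovec iov = { .iov_base = buf, .iov_len = buf_size };
	off_t offset = 0;
	ssize_t ret;

	if (!buf || fd < 0)
		exit(1);
	/* same sequential loop, but the page cache is bypassed entirely */
	while ((ret = preadv2(fd, &iov, 1, offset, 0)) > 0)
		offset += ret;
	close(fd);
	free(buf);
}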



end of thread, other threads:[~2026-04-15 13:49 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-15  7:28 [ISSUE] Read performance regression when using RWF_DONTCACHE from 8026e49 "mm/filemap: add read support for RWF_DONTCACHE" Mingyu He
2026-04-15 10:05 ` Kiryl Shutsemau
2026-04-15 11:04   ` Mingyu He
2026-04-15 13:49 ` Matthew Wilcox
