From: Andreas Dilger <adilger@dilger.ca>
To: Mingyu He <mingyu.he@shopee.com>
Cc: Kiryl Shutsemau <kirill@shutemov.name>,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	hannes@cmpxchg.org, clm@meta.com, linux-kernel@vger.kernel.org,
	willy@infradead.org, bfoster@redhat.com,
	Jens Axboe <axboe@kernel.dk>,
	littleswimmingwhale@gmail.com
Subject: Re: [ISSUE] Read performance regression when using RWF_DONTCACHE from 8026e49 "mm/filemap: add read support for RWF_DONTCACHE"
Date: Wed, 15 Apr 2026 12:00:26 -0600
Message-ID: <B0F17C4D-C1E3-4778-A352-AE9CA3434E8F@dilger.ca>
In-Reply-To: <CAAoBcuQ9s68HKiFwhqXPc8-bfXYsFXc2SeD01H3d62pyxi4uvQ@mail.gmail.com>

On Apr 15, 2026, at 05:04, Mingyu He <mingyu.he@shopee.com> wrote:
> 
> Hi Kiryl,
> 
> I will list my physical sector size and filesystem block size at the
> end of this email.
> 
> I have two types of disk on my Linux machine: an NVMe SSD and an HDD.
> I tested buffer sizes ranging over 1K, 4K, 16K, 64K, and 128K, both
> with and without a cgroup.
> 
> On both types of disk I got the same result: RWF_DONTCACHE has very
> low read performance.
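
(For anyone wanting to reproduce this: below is a minimal sketch of such
an uncached read loop, not the reporter's actual test program. The file
path and buffer size are placeholders, and RWF_DONTCACHE is defined as a
fallback for older userspace headers, using the value from
include/uapi/linux/fs.h. On kernels without support, preadv2() should
fail with EOPNOTSUPP.)

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef RWF_DONTCACHE			/* fallback for older headers */
#define RWF_DONTCACHE 0x00000080	/* from include/uapi/linux/fs.h */
#endif

int main(void)
{
	size_t bufsz = 4096;	/* placeholder: vary 1K..128K as in the test */
	char *buf = malloc(bufsz);
	int fd = open("/data/testfile", O_RDONLY);	/* placeholder path */
	off_t off = 0;
	ssize_t n;

	if (!buf || fd < 0)
		return 1;

	do {
		struct iovec iov = { .iov_base = buf, .iov_len = bufsz };

		/* Buffered read that drops the pages from cache after use. */
		n = preadv2(fd, &iov, 1, off, RWF_DONTCACHE);
		if (n > 0)
			off += n;
	} while (n > 0);

	close(fd);
	free(buf);
	return n < 0 ? 1 : 0;
}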
> 
> My strong guess is that this is due to readahead: pages are dropped
> after reading, so the system needs another I/O to fetch the next part
> of the data. However, I did not test cases where kswapd is working
> hard (but that is not the core of the question).
> 
> 
> I suspect this case needs optimization, but I am not sure whether it
> really does or whether I am just using the flag incorrectly, as I am
> not a proficient kernel developer, so I would like advice from experts
> like you to make sure. If this is a case worth optimizing, I'd like to
> do that optimization (but I think many people have already noticed
> this problem, so I am not sure I could finish the optimization before
> the more proficient developers do).
> 
> 
> RWF_DONTCACHE Performance Comparison (MiB/s)
> 
> +--------------+-------------+------------------------+------------------+
> | Device Type  | Buffer Size | RWF_DONTCACHE (MiB/s)  | Normal (MiB/s)   |
> +--------------+-------------+------------------------+------------------+
> | HDD          | 4K          | 119.6                  | 2268.1           |
> | HDD          | 16K         | 1568.6                 | 3814.7           |
> | HDD          | 64K         | 2351.0                 | 4161.8           |
> | HDD          | 128K        | 2951.4                 | 4061.0           |
> +--------------+-------------+------------------------+------------------+
> | NVMe         | 4K          | 148.7                  | 1556.1           |
> | NVMe         | 16K         | 619.0                  | 1601.5           |
> | NVMe         | 64K         | 1139.6                 | 1618.6           |
> | NVMe         | 128K        | 1725.4                 | 1579.2           |
> +--------------+-------------+------------------------+------------------+
> 
> (NVMe @ 128K is the only case where RWF_DONTCACHE beats Normal.)

If the HDD performance is 4GB/s, then it is almost certainly a RAID
system with multiple individual spindles, and reading at 4KB or even
128KB is likely only reading data from 1-2 spindles at a time. The
4KiB result suggests a single spindle is doing about 120 IOPS, while
modern HDDs have about 250MB/s of bandwidth, so you need to read about
2MB per I/O *per spindle* to get peak performance. For an 8+2 RAID
that means 16MB reads would be best.
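
Spelled out with round numbers (and assuming the 8+2 RAID has 8 data
spindles):

    250 MB/s per spindle / 120 IOPS per spindle ~= 2 MB per I/O
    8 data spindles * 2 MB per I/O = 16 MB per full-stripe read

so a 16MB read keeps every spindle busy with one large sequential I/O
instead of many small ones.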

Cheers, Andreas

> # lsblk -o NAME,FSTYPE,SIZE,FSUSED,FSUSE%,ROTA,MODEL,MOUNTPOINT
> 
> NAME    FSTYPE  SIZE FSUSED FSUSE% ROTA MODEL                              MOUNTPOINT
> sda             1.1T                  1 PERC H750 Adp
> ├─sda1            4M                  1
> ├─sda2  vfat    110M   6.1M     6%    1                                    /boot/efi
> ├─sda3  ext4      2G 517.1M    27%    1                                    /boot
> └─sda4  xfs     1.1T  70.4G     6%    1                                    /
> nvme0n1 ext4    1.7T     5G     0%    0 Dell Ent NVMe v2 AGN RI U.2 1.92TB /data
> 
> 
> # lsblk -o NAME,PHY-SEC,LOG-SEC
> NAME    PHY-SEC LOG-SEC
> sda         512     512
> ├─sda1      512     512
> ├─sda2      512     512
> ├─sda3      512     512
> └─sda4      512     512
> nvme0n1     512     512
> 
> # dumpe2fs /dev/nvme0n1 | grep "Block size"
> dumpe2fs 1.47.0 (5-Feb-2023)
> Block size:               4096
> 
> # xfs_info /
> meta-data=/dev/sda4              isize=512    agcount=566, agsize=516864 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=1        finobt=1, sparse=1, rmapbt=0
>          =                       reflink=1    bigtime=1 inobtcount=1
> data     =                       bsize=4096   blocks=292326651, imaxpct=25
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
> log      =internal log           bsize=4096   blocks=16384, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> 
> 
> On Wed, Apr 15, 2026 at 6:05 PM Kiryl Shutsemau <kirill@shutemov.name> wrote:
>> 
>> On Wed, Apr 15, 2026 at 03:28:27PM +0800, Mingyu He wrote:
>>> The smaller the buffer_size in the test program, the more the
>>> performance dropped. Initially, I used a 4k buffer_size, and the
>>> performance decreased significantly. When the buffer_size was
>>> increased to 128K, the read performance with RWF_DONTCACHE actually
>>> surpassed the non-flagged version by about 10%.
>> 
>> Maybe you have a block size larger than 4k? The core mm will allocate
>> larger folios for the page cache if the filesystem asks it to, and if
>> you then access the file with a 4k buffer, you get multiple
>> read-discard cycles for the same block with RWF_DONTCACHE. Without
>> RWF_DONTCACHE, only the first access to the block leads to I/O; the
>> following accesses are served from the page cache.
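
(A quick way to check this from userspace is st_blksize from stat(2),
which reports the filesystem's preferred I/O size for a file, though the
kernel can still allocate larger folios than this. A minimal sketch,
with the path as a placeholder:)

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	struct stat st;

	/* st_blksize is the preferred I/O block size for this file. */
	if (stat("/data/testfile", &st) != 0)	/* placeholder path */
		return 1;
	printf("preferred I/O size: %ld bytes\n", (long)st.st_blksize);
	return 0;
}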
>> 
>> --
>>  Kiryl Shutsemau / Kirill A. Shutemov
> 

