From: Jens Axboe <axboe@kernel.dk>
To: Stefan Metzmacher <metze@samba.org>,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Cc: hannes@cmpxchg.org, clm@meta.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 08/13] fs: add read support for RWF_UNCACHED
Date: Mon, 11 Nov 2024 08:44:58 -0700	[thread overview]
Message-ID: <17b9b5e7-fdcd-4769-b429-a67ebd466c97@kernel.dk> (raw)
In-Reply-To: <42d612bc-cd3e-46cf-b8d3-50b7c01a9b93@kernel.dk>

On 11/11/24 7:10 AM, Jens Axboe wrote:
> On 11/11/24 6:04 AM, Stefan Metzmacher wrote:
>> Hi Jens,
>>
>>> If the same test case is run with RWF_UNCACHED set for the buffered read,
>>> the output looks as follows:
>>>
>>> Reading bs 65536, uncached 1
>>>    1s: 153144MB/sec
>>>    2s: 156760MB/sec
>>>    3s: 158110MB/sec
>>>    4s: 158009MB/sec
>>>    5s: 158043MB/sec
>>>    6s: 157638MB/sec
>>>    7s: 157999MB/sec
>>>    8s: 158024MB/sec
>>>    9s: 157764MB/sec
>>>   10s: 157477MB/sec
>>>   11s: 157417MB/sec
>>>   12s: 157455MB/sec
>>>   13s: 157233MB/sec
>>>   14s: 156692MB/sec
>>>
>>> which is just chugging along at ~155GB/sec of read performance. Looking
>>> at top, we see:
>>>
>>>   PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
>>> 7961 root      20   0  267004      0      0 S  3180   0.0   5:37.95 uncached
>>> 8024 axboe     20   0   14292   4096      0 R   1.0   0.0   0:00.13 top
>>>
>>> where just the test app is using CPU, and no reclaim is taking place
>>> outside of the main thread. Not only is performance 65% better, it's
>>> also using half the CPU to do it.
>>
>> Do you have numbers for similar code using O_DIRECT, just to
>> see the impact of the memcpy from the page cache to the userspace
>> buffer...
> 
> I don't, but I can certainly generate those. I didn't consider them that
> interesting for this comparison, which is why I didn't run them: O_DIRECT
> reads for bigger block sizes (or even smaller block sizes, if using
> io_uring + registered buffers) will definitely have lower overhead than
> uncached and buffered IO. Copying 160GB/sec isn't free :-)
> 
> For writes it's a bit more complicated to do an apples-to-apples
> comparison, as uncached IO isn't synchronous like O_DIRECT is. It only
> kicks off the IO; it doesn't wait for it to complete.
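
(For reference, here's roughly what the flag looks like from the
application side - a minimal sketch, assuming RWF_UNCACHED is plumbed
through preadv2()/pwritev2() like the other per-IO RWF_* flags, with the
flag value hardcoded as a placeholder in case your uapi headers don't
carry it yet:)

#define _GNU_SOURCE
#include <sys/uio.h>

#ifndef RWF_UNCACHED
#define RWF_UNCACHED	0x00000080	/* placeholder - use the value from the series' uapi header */
#endif

/* buffered read, but drop the pages from the page cache once consumed */
static ssize_t uncached_read(int fd, void *buf, size_t len, off_t off)
{
        struct iovec iov = { .iov_base = buf, .iov_len = len };

        return preadv2(fd, &iov, 1, off, RWF_UNCACHED);
}

/* buffered write that kicks off writeback and drops the pages when it
 * completes, without waiting for the IO like O_DIRECT would */
static ssize_t uncached_write(int fd, const void *buf, size_t len, off_t off)
{
        struct iovec iov = { .iov_base = (void *) buf, .iov_len = len };

        return pwritev2(fd, &iov, 1, off, RWF_UNCACHED);
}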

Here's the O_DIRECT read side - same test as above, using 64K reads:

  1s: 24947MB/sec
  2s: 24840MB/sec
  3s: 24666MB/sec
  4s: 24549MB/sec
  5s: 24575MB/sec
  6s: 24669MB/sec
  7s: 24611MB/sec
  8s: 24369MB/sec
  9s: 24261MB/sec
 10s: 24125MB/sec

which is in fact pretty depressing. As before, this is 32 threads, each
reading a file from a separate XFS mount point, so 32 file systems in
total. If I bump the read size to 128K, it's about 42GB/sec; 256K gets
you to 71-72GB/sec.
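
For reference, each of those 32 reader threads boils down to something
like the below - just a sketch, with path, block size, and error handling
simplified, not the actual test app:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* one reader thread: synchronous O_DIRECT reads of 'bs' bytes at a time */
static void read_file_dio(const char *path, size_t bs)
{
        void *buf;
        off_t off = 0;
        ssize_t ret;
        int fd;

        fd = open(path, O_RDONLY | O_DIRECT);
        if (fd < 0 || posix_memalign(&buf, 4096, bs))
                return;

        /* each pread() waits for the device - with only one IO in flight
         * per thread, small block sizes leave the devices mostly idle */
        while ((ret = pread(fd, buf, bs, off)) > 0)
                off += ret;

        free(buf);
        close(fd);
}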

Just goes to show that you need parallelism to get the best performance
out of the devices with O_DIRECT. If I run io_uring + dio + registered
buffers, I can get ~172GB/sec reading the same 32 files from 32
threads.
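
The io_uring + registered buffers variant looks roughly like the below -
a sketch against liburing with an arbitrary queue depth, not the actual
test app:

#define _GNU_SOURCE
#include <liburing.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define QD	32

/* one reader thread: keep QD O_DIRECT reads in flight via registered buffers */
static void read_file_uring(const char *path, size_t bs)
{
        struct io_uring ring;
        struct iovec iovs[QD];
        off_t off = 0;
        int fd, i;

        fd = open(path, O_RDONLY | O_DIRECT);
        if (fd < 0 || io_uring_queue_init(QD, &ring, 0) < 0)
                return;

        for (i = 0; i < QD; i++) {
                if (posix_memalign(&iovs[i].iov_base, 4096, bs))
                        return;
                iovs[i].iov_len = bs;
        }
        /* register (pin) the buffers once, so per-IO page mapping is skipped */
        io_uring_register_buffers(&ring, iovs, QD);

        for (;;) {
                struct io_uring_cqe *cqe;
                int done = 0;

                /* queue a batch of QD reads, one per registered buffer */
                for (i = 0; i < QD; i++, off += bs) {
                        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

                        io_uring_prep_read_fixed(sqe, fd, iovs[i].iov_base,
                                                 bs, off, i);
                }
                io_uring_submit(&ring);

                /* reap the batch; zero or negative results mean EOF or error */
                for (i = 0; i < QD; i++) {
                        if (io_uring_wait_cqe(&ring, &cqe) < 0)
                                return;
                        if (cqe->res <= 0)
                                done = 1;
                        io_uring_cqe_seen(&ring, cqe);
                }
                if (done)
                        break;
        }

        io_uring_queue_exit(&ring);
        close(fd);
}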

-- 
Jens Axboe



Thread overview: 36+ messages
2024-11-08 17:43 [PATCHSET v4] Uncached buffered IO Jens Axboe
2024-11-08 17:43 ` [PATCH 01/13] mm/filemap: change filemap_create_folio() to take a struct kiocb Jens Axboe
2024-11-08 18:18   ` Matthew Wilcox
2024-11-08 19:22     ` Jens Axboe
2024-11-08 17:43 ` [PATCH 02/13] mm/readahead: add folio allocation helper Jens Axboe
2024-11-08 17:43 ` [PATCH 03/13] mm: add PG_uncached page flag Jens Axboe
2024-11-08 19:25   ` Kirill A. Shutemov
2024-11-08 19:39     ` Jens Axboe
2024-11-08 17:43 ` [PATCH 04/13] mm/readahead: add readahead_control->uncached member Jens Axboe
2024-11-08 18:21   ` Matthew Wilcox
2024-11-08 19:22     ` Jens Axboe
2024-11-08 17:43 ` [PATCH 05/13] mm/filemap: use page_cache_sync_ra() to kick off read-ahead Jens Axboe
2024-11-08 17:43 ` [PATCH 06/13] mm/truncate: make invalidate_complete_folio2() public Jens Axboe
2024-11-08 17:43 ` [PATCH 07/13] fs: add FOP_UNCACHED flag Jens Axboe
2024-11-08 18:27   ` Matthew Wilcox
2024-11-08 19:23     ` Jens Axboe
2024-11-08 17:43 ` [PATCH 08/13] fs: add read support for RWF_UNCACHED Jens Axboe
2024-11-08 18:33   ` Matthew Wilcox
2024-11-08 19:25     ` Jens Axboe
2024-11-11 13:04   ` Stefan Metzmacher
2024-11-11 14:10     ` Jens Axboe
2024-11-11 15:44       ` Jens Axboe [this message]
2024-11-08 17:43 ` [PATCH 09/13] mm: drop uncached pages when writeback completes Jens Axboe
2024-11-08 17:43 ` [PATCH 10/13] mm/filemap: make buffered writes work with RWF_UNCACHED Jens Axboe
2024-11-08 17:43 ` [PATCH 11/13] iomap: " Jens Axboe
2024-11-08 18:46   ` Matthew Wilcox
2024-11-08 19:26     ` Jens Axboe
2024-11-08 19:49       ` Jens Axboe
2024-11-08 20:07         ` Matthew Wilcox
2024-11-08 20:18           ` Jens Axboe
2024-11-08 17:43 ` [PATCH 12/13] ext4: flag as supporting FOP_UNCACHED Jens Axboe
2024-11-08 17:43 ` [PATCH 13/13] xfs: " Jens Axboe
2024-11-11 12:55 ` [PATCHSET v4] Uncached buffered IO Stefan Metzmacher
2024-11-11 14:08   ` Jens Axboe
2024-11-11 15:05     ` Jens Axboe
2024-11-11 23:54       ` Jens Axboe
