From: Kent Overstreet <kent.overstreet@linux.dev>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>,
Luis Chamberlain <mcgrof@kernel.org>,
lsf-pc@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org,
linux-mm <linux-mm@kvack.org>,
Daniel Gomez <da.gomez@samsung.com>,
Pankaj Raghav <p.raghav@samsung.com>,
Jens Axboe <axboe@kernel.dk>, Dave Chinner <david@fromorbit.com>,
Christoph Hellwig <hch@lst.de>, Chris Mason <clm@fb.com>,
Johannes Weiner <hannes@cmpxchg.org>
Subject: Re: [LSF/MM/BPF TOPIC] Measuring limits and enhancing buffered IO
Date: Sun, 25 Feb 2024 00:18:23 -0500
Message-ID: <o4a6577t2z5xytjwmixqkl33h23vfnjypwbx7jaaldtldpvjf5@dzbzkhrzyobb>
In-Reply-To: <CAHk-=wjUkYLv23KtF=EyCrQcmf9NGwE8Yo1cuxdaLF8gqx5zWw@mail.gmail.com>

On Sat, Feb 24, 2024 at 09:31:44AM -0800, Linus Torvalds wrote:
> On Fri, 23 Feb 2024 at 20:12, Matthew Wilcox <willy@infradead.org> wrote:
> >
> > On Fri, Feb 23, 2024 at 03:59:58PM -0800, Luis Chamberlain wrote:
> > > What are the limits to buffered IO
> > > and how do we test that? Who keeps track of it?
> >
> > TLDR: Why does the pagecache suck?
>
> What? No.
>
> Our page cache is so good that the question is literally "what are the
> limits of it", and "how we would measure them".
>
> That's not a sign of suckage.
>
> When you have to have completely unrealistic loads that nobody would
> actually care about in reality just to get a number for the limit,
> it's not a sign of problems.
>
> Or rather, the "problem" is the person looking at a stupid load, and
> going "we need to improve this because I can write a benchmark for
> this".
>
> Here's a clue: a hardware discussion forum I visit was arguing about
> memory latencies, and talking about how their measured overhead of
> DRAM latency was literally 85% on the CPU side, not the DRAM side.
>
> Guess what? It's because the CPU in question had quite a bit of L3,
> and it was spread out, and the CPU doesn't even start the memory
> access before it has checked caches.
>
> And here's a big honking clue: only a complete nincompoop and mentally
> deficient rodent would look at that and say "caches suck".
>
> > > ~86 GiB/s on pmem DIO on xfs with 64k block size, 1024 XFS agcount on x86_64
> > > Vs
> > > ~ 7,000 MiB/s with buffered IO
> >
> > Profile? My guess is that you're bottlenecked on the xa_lock between
> > memory reclaim removing folios from the page cache and the various
> > threads adding folios to the page cache.
>
> I doubt it's the locking.
>
> In fact, for writeout in particular it's probably not even the page
> cache at all.
>
> For writeout, we have a very traditional problem: we care a million
> times more about latency than about throughput, because nobody ever
> actually cares all that much about the performance of huge writes.
Before large folios, we had people very much bottlenecked by 4k page
overhead on sequential IO; my customer/sponsor was one of them.
Factor of 2 or 3, IIRC; it was _bad_. And when you looked at the
profiles and at the filemap.c code it wasn't hard to see why: for
every 4k page we'd walk the radix tree, do an atomic op (get the
page), then do a 4k usercopy... hence the work I did to break up
generic_file_buffered_read() and vectorize it, which was a huge
improvement.
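
To make that concrete, here's a rough userspace model of the two loop
shapes. This is only a sketch: cache_lookup() and the ref counter are
stand-ins for the xarray walk and get_page()/put_page(), not real
kernel API, and a real read path also has to find, lock and fill
missing pages:

	#include <stdatomic.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#define PAGE_SIZE 4096
	#define BATCH 16

	struct page {
		atomic_int ref;
		char data[PAGE_SIZE];
	};

	/* Stand-in for the page cache radix tree / xarray walk. */
	static struct page *cache_lookup(struct page *cache, size_t idx)
	{
		return &cache[idx];
	}

	/* Old shape: tree walk + atomic + 4k copy, once per page. */
	static void read_per_page(struct page *cache, char *dst, size_t npages)
	{
		for (size_t i = 0; i < npages; i++) {
			struct page *p = cache_lookup(cache, i); /* tree walk */
			atomic_fetch_add(&p->ref, 1);		 /* "get_page()" */
			memcpy(dst + i * PAGE_SIZE, p->data, PAGE_SIZE);
			atomic_fetch_sub(&p->ref, 1);		 /* "put_page()" */
		}
	}

	/* Vectorized shape: batch the lookups and refs, then copy. */
	static void read_batched(struct page *cache, char *dst, size_t npages)
	{
		struct page *vec[BATCH];

		for (size_t i = 0; i < npages; i += BATCH) {
			size_t n = npages - i < BATCH ? npages - i : BATCH;
			size_t j;

			for (j = 0; j < n; j++) {	/* one pass of lookups */
				vec[j] = cache_lookup(cache, i + j);
				atomic_fetch_add(&vec[j]->ref, 1);
			}
			for (j = 0; j < n; j++)		/* copies, no tree walks */
				memcpy(dst + (i + j) * PAGE_SIZE,
				       vec[j]->data, PAGE_SIZE);
			for (j = 0; j < n; j++)
				atomic_fetch_sub(&vec[j]->ref, 1);
		}
	}

	int main(void)
	{
		size_t npages = 256;
		struct page *cache = calloc(npages, sizeof(*cache));
		char *dst = malloc(npages * PAGE_SIZE);

		if (!cache || !dst)
			return 1;
		for (size_t i = 0; i < npages; i++)
			atomic_init(&cache[i].ref, 0);
		read_per_page(cache, dst, npages);
		read_batched(cache, dst, npages);
		printf("copied %zu pages both ways\n", npages);
		free(cache);
		free(dst);
		return 0;
	}

The point is just the batching: one pass of tree walks and atomic ops
per BATCH pages, instead of interleaving them with every 4k usercopy.
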
It's definitely less of a factor now, post large folios and when we're
talking about workloads that don't fit in cache, but I always wanted to
do a generic version of the vectorized write path that btrfs and
bcachefs have.
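
The write side would be the same trick applied to the
->write_begin/->write_end loop. Again only a userspace sketch under
assumed names - the pthread mutex stands in for the page/folio lock,
and the real btrfs/bcachefs paths also handle space reservation, short
copies and errors:

	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#define PAGE_SIZE 4096
	#define BATCH 16

	struct page {
		pthread_mutex_t lock;	/* stand-in for the folio lock */
		int dirty;
		char data[PAGE_SIZE];
	};

	/* Generic shape: prepare, copy 4k, commit - once per page. */
	static void write_per_page(struct page *pages, const char *src,
				   size_t npages)
	{
		for (size_t i = 0; i < npages; i++) {
			pthread_mutex_lock(&pages[i].lock);   /* "write_begin" */
			memcpy(pages[i].data, src + i * PAGE_SIZE, PAGE_SIZE);
			pages[i].dirty = 1;		      /* "write_end" */
			pthread_mutex_unlock(&pages[i].lock);
		}
	}

	/* Vectorized shape: stage a batch, copy across it, commit it all. */
	static void write_batched(struct page *pages, const char *src,
				  size_t npages)
	{
		for (size_t i = 0; i < npages; i += BATCH) {
			size_t n = npages - i < BATCH ? npages - i : BATCH;
			size_t j;

			for (j = 0; j < n; j++)		/* stage the batch */
				pthread_mutex_lock(&pages[i + j].lock);
			for (j = 0; j < n; j++)		/* one pass of copies */
				memcpy(pages[i + j].data,
				       src + (i + j) * PAGE_SIZE, PAGE_SIZE);
			for (j = 0; j < n; j++) {	/* commit + unlock */
				pages[i + j].dirty = 1;
				pthread_mutex_unlock(&pages[i + j].lock);
			}
		}
	}

	int main(void)
	{
		size_t npages = 64;
		struct page *pages = calloc(npages, sizeof(*pages));
		char *src = calloc(npages, PAGE_SIZE);

		if (!pages || !src)
			return 1;
		for (size_t i = 0; i < npages; i++)
			pthread_mutex_init(&pages[i].lock, NULL);
		write_per_page(pages, src, npages);
		write_batched(pages, src, npages);
		printf("wrote %zu pages both ways\n", npages);
		return 0;
	}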