From: Matthew Wilcox <willy@infradead.org>
To: Luis Chamberlain <mcgrof@kernel.org>
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
linux-block@vger.kernel.org, lsf-pc@lists.linux-foundation.org,
david@fromorbit.com, leon@kernel.org, hch@lst.de,
kbusch@kernel.org, sagi@grimberg.me, axboe@kernel.dk,
joro@8bytes.org, brauner@kernel.org, hare@suse.de,
djwong@kernel.org, john.g.garry@oracle.com,
ritesh.list@gmail.com, p.raghav@samsung.com,
gost.dev@samsung.com, da.gomez@samsung.com
Subject: Re: [LSF/MM/BPF TOPIC] breaking the 512 KiB IO boundary on x86_64
Date: Thu, 20 Mar 2025 12:11:47 +0000
Message-ID: <Z9wGA9z_cVn6Mfa1@casper.infradead.org>
In-Reply-To: <Z9v-1xjl7dD7Tr-H@bombadil.infradead.org>
On Thu, Mar 20, 2025 at 04:41:11AM -0700, Luis Chamberlain wrote:
> We've been constrained to a max single 512 KiB IO for a while now on x86_64.
...
> It does beg a few questions:
>
> - How are we computing the new max single IO anyway? Are we really
> bounded only by what devices support?
> - Do we believe this is the step in the right direction?
> - Is 2 MiB a sensible max block sector size limit for the next few years?
> - What other considerations should we have?
> - Do we want something more deterministic for large folios for direct IO?
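
(On the first question above, a rough back-of-envelope sketch of where a
512 KiB per-IO ceiling tends to come from on x86_64 with 4 KiB pages:
when user memory is physically scattered, each page costs one DMA
segment, so the device's segment limit caps a single IO. The numbers
below are illustrative assumptions, not the actual block layer code.)

#include <stdio.h>

int main(void)
{
	unsigned long page_size  = 4096;   /* 4 KiB pages */
	unsigned long max_segs   = 128;    /* hypothetical device segment limit */
	unsigned long max_hw_kib = 2048;   /* hypothetical device transfer limit, KiB */

	/* worst case: every page is its own segment */
	unsigned long seg_cap = max_segs * page_size;   /* 512 KiB */
	unsigned long hw_cap  = max_hw_kib * 1024;      /* 2 MiB */
	unsigned long max_io  = seg_cap < hw_cap ? seg_cap : hw_cap;

	printf("max single IO: %lu KiB\n", max_io / 1024);
	return 0;
}

With those assumed numbers the segment cap, not the device transfer
limit, is what pins the result at 512 KiB; larger physically contiguous
folios (fewer segments per IO) or a higher segment limit is what would
move that number.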
Is the 512 KiB limit one that real programs actually hit? Would we
see any benefit from increasing it? A high-end NVMe device has a
bandwidth limit of around 10 GB/s, so at 512 KiB per IO that limit is
reached at around 20k IOPS, which is almost laughably low.
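
(The arithmetic behind the ~20k IOPS figure, as a small sketch; the
10 GB/s bandwidth and the IO sizes are assumptions for illustration,
not measurements of any particular device.)

#include <stdio.h>

int main(void)
{
	double bandwidth_bytes = 10e9;   /* assumed ~10 GB/s high-end NVMe */
	unsigned long io_sizes[] = { 512 * 1024, 2 * 1024 * 1024 };

	/* IOPS needed to saturate the assumed bandwidth at each IO size */
	for (int i = 0; i < 2; i++)
		printf("%4lu KiB IOs: ~%.0f IOPS to saturate\n",
		       io_sizes[i] / 1024, bandwidth_bytes / io_sizes[i]);
	return 0;
}

This prints roughly 19k IOPS at 512 KiB and roughly 5k IOPS at 2 MiB,
which is the trade-off the proposed larger max IO size is aimed at.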
Thread overview: 25+ messages
2025-03-20 11:41 Luis Chamberlain
2025-03-20 12:11 ` Matthew Wilcox [this message]
2025-03-20 13:29 ` Daniel Gomez
2025-03-20 14:31 ` Matthew Wilcox
2025-03-20 13:47 ` Daniel Gomez
2025-03-20 14:54 ` Christoph Hellwig
2025-03-21 9:14 ` Daniel Gomez
2025-03-20 14:18 ` Christoph Hellwig
2025-03-20 15:37 ` Bart Van Assche
2025-03-20 15:58 ` Keith Busch
2025-03-20 16:13 ` Kanchan Joshi
2025-03-20 16:38 ` Christoph Hellwig
2025-03-20 21:50 ` Luis Chamberlain
2025-03-20 21:46 ` Luis Chamberlain
2025-03-20 21:40 ` Luis Chamberlain
2025-03-20 18:46 ` Ritesh Harjani
2025-03-20 21:30 ` Darrick J. Wong
2025-03-21 2:13 ` Ritesh Harjani
2025-03-21 3:05 ` Darrick J. Wong
2025-03-21 4:56 ` Theodore Ts'o
2025-03-21 5:00 ` Christoph Hellwig
2025-03-21 18:39 ` Ritesh Harjani
2025-03-21 16:38 ` Keith Busch
2025-03-21 17:21 ` Ritesh Harjani
2025-03-21 18:55 ` Keith Busch