From: Ojaswin Mujoo <ojaswin@linux.ibm.com>
To: Andres Freund <andres@anarazel.de>
Cc: Pankaj Raghav <pankaj.raghav@linux.dev>,
linux-xfs@vger.kernel.org, linux-mm@kvack.org,
linux-fsdevel@vger.kernel.org, lsf-pc@lists.linux-foundation.org,
djwong@kernel.org, john.g.garry@oracle.com, willy@infradead.org,
hch@lst.de, ritesh.list@gmail.com, jack@suse.cz,
Luis Chamberlain <mcgrof@kernel.org>,
dchinner@redhat.com, Javier Gonzalez <javier.gonz@samsung.com>,
gost.dev@samsung.com, tytso@mit.edu, p.raghav@samsung.com,
vi.shah@samsung.com
Subject: Re: [LSF/MM/BPF TOPIC] Buffered atomic writes
Date: Wed, 18 Feb 2026 00:03:18 +0530
Message-ID: <aZS0biD6YKKEMSel@li-dc0c254c-257c-11b2-a85c-98b6c1322444.ibm.com>
In-Reply-To: <zzvybbfy6bcxnkt4cfzruhdyy6jsvnuvtjkebdeqwkm6nfpgij@dlps7ucza22s>
On Mon, Feb 16, 2026 at 10:45:40AM -0500, Andres Freund wrote:
> Hi,
>
> On 2026-02-16 10:52:35 +0100, Pankaj Raghav wrote:
> > On 2/13/26 14:32, Ojaswin Mujoo wrote:
> > > On Fri, Feb 13, 2026 at 11:20:36AM +0100, Pankaj Raghav wrote:
> > >> We currently have RFCs posted by John Garry and Ojaswin Mujoo, and there
> > >> was a previous LSFMM proposal about untorn buffered writes from Ted Tso.
> > >> Based on the conversation/blockers we had before, the discussion at LSFMM
> > >> should focus on the following blocking issues:
> > >>
> > >> - Handling Short Writes under Memory Pressure[6]: A buffered atomic
> > >> write might span page boundaries. If memory pressure causes a page
> > >> fault or reclaim mid-copy, the write could be torn inside the page
> > >> cache before it even reaches the filesystem.
> > >> - The current RFC uses a "pinning" approach: pinning user pages and
> > >> creating a BVEC to ensure the full copy can proceed atomically.
> > >> This adds complexity to the write path.
> > >> - Discussion: Is this acceptable? Should we consider alternatives,
> > >> such as requiring userspace to mlock the I/O buffers before
> > >> issuing the write to guarantee atomic copy in the page cache?
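(Just to make that mlock alternative concrete, roughly what it would mean
for an application, assuming the buffered RWF_ATOMIC semantics proposed in
the RFC; purely a sketch, the function name, sizes and error handling are
illustrative only:)

#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/uio.h>

/*
 * Keep the source buffer resident so the copy into the page cache cannot
 * be interrupted by a page fault or reclaim mid-way. Note that buffered
 * RWF_ATOMIC is what the RFC proposes; today the flag is only honoured
 * for direct I/O.
 */
static ssize_t atomic_write_pinned(int fd, void *buf, size_t len, off_t off)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	ssize_t ret;

	if (mlock(buf, len) < 0)
		return -1;

	ret = pwritev2(fd, &iov, 1, off, RWF_ATOMIC);

	munlock(buf, len);
	return ret;
}
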
> > >
> > > Right, I chose this approach because we only get to know about the short
> > > copy after it has actually happened in copy_folio_from_iter_atomic(),
> > > and it seemed simpler to just not let the short copy happen. This is
> > > inspired by how dio pins the pages for DMA, except that we do it
> > > for a shorter time.
> > >
> > > It does add slight complexity to the path but I'm not sure if it's complex
> > > enough to justify adding a hard requirement of having pages mlock'd.
> > >
> >
> > As databases like postgres have a buffer cache that they manage in userspace,
> > which is eventually used to do IO, I am wondering if they already do a mlock
> > or some other way to guarantee the buffer cache does not get reclaimed. That is
> > why I was thinking if we could make it a requirement. Of course, that also requires
> > checking if the range is mlocked in the iomap_write_iter path.
>
> We don't generally mlock our buffer pool - but we strongly recommend to use
> explicit huge pages (due to TLB pressure, faster fork() and less memory wasted
> on page tables), which afaict has basically the same effect. However, that
> doesn't make the page cache pages locked...
>
>
> > >> - Page Cache Model vs. Filesystem CoW: The current RFC introduces a
> > >> PG_atomic page flag to track dirty pages requiring atomic writeback.
> > >> This faced pushback due to page flags being a scarce resource[7].
> > >> Furthermore, it was argued that the atomic model does not fit the buffered
> > >> I/O model because data sitting in the page cache is vulnerable to
> > >> modification before writeback occurs, and writeback does not preserve
> > >> application ordering[8].
> > >> - Dave Chinner has proposed leveraging the filesystem's CoW path
> > >> where we always allocate new blocks for the atomic write (forced
> > >> CoW). If the hardware supports it (e.g., NVMe atomic limits), the
> > >> filesystem can optimize the writeback to use REQ_ATOMIC in place,
> > >> avoiding the CoW overhead while maintaining the architectural
> > >> separation.
> > >
> > > Right, this is what I'm doing in the new RFC where we maintain the
> > > mappings for atomic write in COW fork. This way we are able to utilize a
> > > lot of existing infrastructure, however it does add some complexity to
> > > ->iomap_begin() and ->writeback_range() callbacks of the FS. I believe
> > > it is a tradeoff since the general consensus was mostly to avoid adding
> > > too much complexity to the iomap layer.
> > >
> > > Another thing that came up is to consider using write through semantics
> > > for buffered atomic writes, where we are able to transition page to
> > > writeback state immediately after the write and prevent any other users
> > > from modifying the data till writeback completes. This might affect performance
> > > since we won't be able to batch similar atomic IOs but maybe
> > > applications like postgres would not mind this too much. If we go with
> > > this approach, we will be able to avoid worrying too much about other
> > > users changing atomic data underneath us.
> > >
> >
> > Hmm, IIUC, postgres will write their dirty buffer cache by combining
> > multiple DB pages based on `io_combine_limit` (typically 128kb).
>
> We will try to do that, but it's obviously far from always possible, in some
> workloads [parts of] the data in the buffer pool will rarely be dirtied in
> consecutive blocks.
>
> FWIW, postgres already tries to force some just-written pages into
> writeback. For sources of writes that can be plentiful and are done in the
> background, we default to issuing sync_file_range(SYNC_FILE_RANGE_WRITE),
> after 256kB-512kB of writes, as otherwise foreground latency can be
> significantly impacted by the kernel deciding to suddenly write back (due to
> dirty_writeback_centisecs, dirty_background_bytes, ...) and because otherwise
> the fsyncs at the end of a checkpoint can be unpredictably slow. For
> foreground writes we do not default to that, as there are users that won't
> (because they don't know, because they overcommit hardware, ...) size
> postgres' buffer pool to be big enough and thus will often re-dirty pages that
> have already recently been written out to the operating system. But for many
> workloads it's recommended that users turn on
> sync_file_range(SYNC_FILE_RANGE_WRITE) for foreground writes as well (*).
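
(For reference, the call pattern described above is essentially just the
following, issued over whatever range was recently written; the 256kB-512kB
batching threshold is a postgres-side policy, not anything the kernel
requires, and the helper name is made up:)

#define _GNU_SOURCE
#include <fcntl.h>

/* Kick off writeback for a recently written range without waiting for it
 * to complete, so dirty-limit heuristics no longer decide the timing. */
static int start_writeback(int fd, off_t start, off_t nbytes)
{
	return sync_file_range(fd, start, nbytes, SYNC_FILE_RANGE_WRITE);
}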
>
> So for many workloads it'd be fine to just always start writeback for atomic
> writes immediately. It's possible, but I am not at all sure, that for most of
> the other workloads, the gains from atomic writes will outstrip the cost of
> more frequently writing data back.
>
>
> (*) As it turns out, it often seems to improve write throughput as well: if
> writeback is triggered by memory pressure instead of SYNC_FILE_RANGE_WRITE,
> linux seems to often trigger a lot more small random IO.
>
>
> > So immediately writing them might be ok as long as we don't remove those
> > pages from the page cache like we do in RWF_UNCACHED.
>
> Yes, it might. I actually often have wished for something like a
> RWF_WRITEBACK flag...
>
>
> > > An argument against this, however, is that it is the user's responsibility
> > > not to do non-atomic IO over an atomic range, and doing so shall be
> > > considered a userspace usage error. This is similar to how there are ways
> > > users can tear a dio if they perform overlapping writes. [1]
>
> Hm, the scope of the prohibition here is not clear to me. Would it just
> be forbidden to do:
>
> P1: start pwritev(fd, [blocks 1-10], RWF_ATOMIC)
> P2: pwrite(fd, [any block in 1-10]), non-atomically
> P1: complete pwritev(fd, ...)
>
> or is it also forbidden to do:
>
> P1: pwritev(fd, [blocks 1-10], RWF_ATOMIC) start & completes
> Kernel: starts writeback but doesn't complete it
> P1: pwrite(fd, [any block in 1-10]), non-atomically
> Kernel: completes writeback
>
> The former is not at all an issue for postgres' use case, the pages in our
> buffer pool that are undergoing IO are locked, preventing additional IO (be it
> reads or writes) to those blocks.
>
> The latter would be a problem, since userspace wouldn't even know that there is
> still "atomic writeback" going on; afaict the only way we could avoid it would
> be to issue an f[data]sync(), which likely would be prohibitively expensive.
>
>
>
> > > That being said, I think these points are worth discussing and it would
> > > be helpful to have people from postgres around while discussing these
> > > semantics with the FS community members.
> > >
> > > As for ordering of writes, I'm not sure if that is something that
> > > we should guarantee via the RWF_ATOMIC api. Ensuring ordering has mostly
> > > been the task of userspace via fsync() and friends.
> > >
> >
> > Agreed.
>
> From postgres' side that's fine. In the cases we care about ordering we use
> fsync() already.
>
>
> > > [1] https://lore.kernel.org/fstests/0af205d9-6093-4931-abe9-f236acae8d44@oracle.com/
> > >
> > >> - Discussion: While the CoW approach fits XFS and other CoW
> > >> filesystems well, it presents challenges for filesystems like ext4
> > >> which lack CoW capabilities for data. Should this be a filesystem
> > >> specific feature?
> > >
> > > I believe your question is whether we should have a hard dependency on COW
> > > mappings for atomic writes. Currently, COW in the atomic write context in
> > > XFS is used for these 2 things:
> > >
> > > 1. COW fork holds atomic write ranges.
> > >
> > > This is not strictly a COW feature, just that we are repurposing the COW
> > > fork to hold our atomic ranges. Basically a way for the writeback path to
> > > know that an atomic write was done here.
>
> Does that mean buffered atomic writes would cause fragmentation? Some common
> database workloads, e.g. anything running on cheaper cloud storage, are pretty
> sensitive to that due to the increase in use of the metered IOPS.
>
Hi Andres,

So we have tricks like allocating more blocks than needed, which helps
with fragmentation even when using the COW fork, and I think we are able
to tune how aggressively we preallocate. Further, if we have, say,
fallocated a range in the file that satisfies our requirements, then we
can also upgrade to HW (non-COW) atomic writes and use the falloc'd
extents, which will also help with fragmentation.
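
(To make that upgrade path concrete, roughly what it would look like from
userspace; this is only a sketch on top of the semantics proposed in the
RFC, and the helper name, offsets and lengths are made up:)

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/uio.h>

/*
 * Preallocate a (hopefully contiguous, suitably aligned) range so the
 * filesystem has the option of doing an in-place HW atomic write at
 * writeback time instead of falling back to COW. Illustrative only.
 */
static ssize_t atomic_buffered_write(int fd, void *buf, size_t len, off_t off)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };

	if (fallocate(fd, 0, off, len) < 0)
		return -1;

	/*
	 * RWF_ATOMIC on a buffered fd is the behaviour the RFC proposes;
	 * today the flag is only honoured for direct I/O.
	 */
	return pwritev2(fd, &iov, 1, off, RWF_ATOMIC);
}
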
My point being, I don't think COW usage will strictly mean more
fragmentation; however, we will eventually need to run benchmarks and see.
Hopefully once I have the implementation, we can work on these things.
Regards,
ojaswin
> Greetings,
>
> Andres Freund