* [PATCH RFC v5 0/3] block: enable RWF_DONTCACHE for block devices
@ 2026-04-08 23:08 Tal Zussman
2026-04-08 23:08 ` [PATCH RFC v5 1/3] block: add BIO_COMPLETE_IN_TASK for task-context completion Tal Zussman
` (3 more replies)
0 siblings, 4 replies; 8+ messages in thread
From: Tal Zussman @ 2026-04-08 23:08 UTC (permalink / raw)
To: Jens Axboe, Matthew Wilcox (Oracle),
Christian Brauner, Darrick J. Wong, Carlos Maiolino,
Alexander Viro, Jan Kara
Cc: Christoph Hellwig, Dave Chinner, Bart Van Assche, linux-block,
linux-kernel, linux-xfs, linux-fsdevel, linux-mm, Tal Zussman
Add support for using RWF_DONTCACHE with block devices.
Dropbehind pruning needs to be done in non-IRQ context, but block
devices complete writeback in IRQ context.
To fix this, we can defer dropbehind invalidation to task context. We
introduce a new BIO_COMPLETE_IN_TASK flag that allows the bio submitter
to request task-context completion of bi_end_io. When bio_endio() sees
this flag in non-task context, it queues the bio to a per-CPU lockless
list and schedules a delayed work item to do bio completion.
Patch 1 adds the BIO_COMPLETE_IN_TASK infrastructure in the block
layer.
Patch 2 wires BIO_COMPLETE_IN_TASK into iomap writeback for dropbehind
folios, removes IOMAP_IOEND_DONTCACHE, and removes the DONTCACHE
workqueue deferral from XFS.
Patch 3 enables RWF_DONTCACHE for block devices, setting
BIO_COMPLETE_IN_TASK in submit_bh_wbc() for the CONFIG_BUFFER_HEAD
path.
This support is useful for databases that operate on raw block devices,
among other userspace applications.
I tested this (with CONFIG_BUFFER_HEAD=y) for reads and writes on a
single block device on a VM, so results may be noisy.
Reads were tested on the root partition with a 45GB range (~2x RAM).
Writes were tested on a disabled swap partition (~1GB) in a memcg of size
244MB to force reclaim pressure.
Results:
===== READS (/dev/nvme0n1p2) =====
sec normal MB/s dontcache MB/s
---- ------------ --------------
1 1098.6 1609.0
2 1270.3 1506.6
3 1093.3 1576.5
4 1141.8 2393.9
5 1365.3 2793.8
6 1324.6 2065.9
7 879.6 1920.7
8 1434.1 1662.4
9 1184.9 1857.9
10 1166.4 1702.8
11 1161.4 1653.4
12 1086.9 1555.4
13 1198.5 1718.9
14 1111.9 1752.2
---- ------------ --------------
avg 1173.7 1828.8 (+56%)
==== WRITES (/dev/nvme0n1p3) =====
sec normal MB/s dontcache MB/s
---- ------------ --------------
1 692.4 9297.7
2 4810.8 9342.8
3 5221.7 2955.2
4 396.7 8488.3
5 7249.2 9249.3
6 6695.4 1376.2
7 122.9 9125.8
8 5486.5 9414.7
9 6921.5 8743.5
10 27.9 8997.8
---- ------------ --------------
avg 3762.5 7699.1 (+105%)
---
Changes in v5:
- 1/3: Replace local_lock + bio_list with struct llist, per Dave.
- 1/3: Use delayed_work with 1-jiffie delay, per Dave.
- 1/3: Add dedicated workqueue to avoid deadlocks, per Christoph.
- 1/3: Restructure the work function as a do/while loop and only schedule
the initial work when the list was previously empty, per Jens.
- 2/3: Delete IOMAP_IOEND_DONTCACHE and its NOMERGE entry, per Matthew
and Christoph.
- Link to v4: https://lore.kernel.org/r/20260325-blk-dontcache-v4-0-c4b56db43f64@columbia.edu
Changes in v4:
- 1/3: Move dropbehind deferral from folio-level to bio-level using
BIO_COMPLETE_IN_TASK, per Matthew and Jan.
- 1/3: Work function yields on need_resched() to avoid hogging the CPU,
per Jan.
- 2/3: New patch. Set BIO_COMPLETE_IN_TASK on iomap writeback bios for
DONTCACHE folios, removing the need for XFS-specific workqueue
deferral.
- 3/3: Set BIO_COMPLETE_IN_TASK in submit_bh_wbc() for buffer_head
path.
- 3/3: Update commit message to mention CONFIG_BUFFER_HEAD=n path.
- Link to v3: https://lore.kernel.org/r/20260227-blk-dontcache-v3-0-cd309ccd5868@columbia.edu
Changes in v3:
- 1/2: Convert dropbehind deferral to per-CPU folio_batches protected by
local_lock using per-CPU work items, to reduce contention, per Jens.
- 1/2: Call folio_end_dropbehind_irq() directly from
folio_end_writeback(), per Jens.
- 1/2: Add CPU hotplug dead callback to drain the departing CPU's folio
batch.
- 2/2: Introduce block_write_begin_iocb(), per Christoph.
- 2/2: Dropped R-b due to changes.
- Link to v2: https://lore.kernel.org/r/20260225-blk-dontcache-v2-0-70e7ac4f7108@columbia.edu
Changes in v2:
- Add R-b from Jan Kara for 2/2.
- Add patch to defer dropbehind completion from IRQ context via a work
item (1/2).
- Add initial performance numbers to cover letter.
- Link to v1: https://lore.kernel.org/r/20260218-blk-dontcache-v1-1-fad6675ef71f@columbia.edu
---
Tal Zussman (3):
block: add BIO_COMPLETE_IN_TASK for task-context completion
iomap: use BIO_COMPLETE_IN_TASK for dropbehind writeback
block: enable RWF_DONTCACHE for block devices
block/bio.c | 83 ++++++++++++++++++++++++++++++++++++++++++++-
block/fops.c | 5 +--
fs/buffer.c | 22 ++++++++++--
fs/iomap/ioend.c | 5 +--
fs/xfs/xfs_aops.c | 4 ---
include/linux/blk_types.h | 7 +++-
include/linux/buffer_head.h | 3 ++
include/linux/iomap.h | 6 +---
8 files changed, 117 insertions(+), 18 deletions(-)
---
base-commit: f384d0b7710d3edaab718c02bbae46a4d3fd09de
change-id: 20260218-blk-dontcache-338133dd045e
Best regards,
--
Tal Zussman <tz2294@columbia.edu>
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH RFC v5 1/3] block: add BIO_COMPLETE_IN_TASK for task-context completion
2026-04-08 23:08 [PATCH RFC v5 0/3] block: enable RWF_DONTCACHE for block devices Tal Zussman
@ 2026-04-08 23:08 ` Tal Zussman
2026-04-08 23:08 ` [PATCH RFC v5 2/3] iomap: use BIO_COMPLETE_IN_TASK for dropbehind writeback Tal Zussman
` (2 subsequent siblings)
3 siblings, 0 replies; 8+ messages in thread
From: Tal Zussman @ 2026-04-08 23:08 UTC (permalink / raw)
To: Jens Axboe, Matthew Wilcox (Oracle),
Christian Brauner, Darrick J. Wong, Carlos Maiolino,
Alexander Viro, Jan Kara
Cc: Christoph Hellwig, Dave Chinner, Bart Van Assche, linux-block,
linux-kernel, linux-xfs, linux-fsdevel, linux-mm, Tal Zussman
Some bio completion handlers need to run in task context, but bio_endio()
can be called from IRQ context (e.g. buffer_head writeback). Add a
BIO_COMPLETE_IN_TASK flag that bio submitters can set to request
task-context completion of their bi_end_io callback.
When bio_endio() sees this flag and is running in non-task context, it
queues the bio to a per-cpu lockless list and schedules a delayed work
item to call bi_end_io() from task context. The delayed work uses a
1-jiffie delay to allow batches of completions to accumulate before
processing. A CPU hotplug dead callback drains any remaining bios from
the departing CPU's batch.
This will be used to enable RWF_DONTCACHE for block devices, and could
be used by other subsystems, such as fscrypt, that need task-context bio
completion.
Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Tal Zussman <tz2294@columbia.edu>
---
block/bio.c | 83 ++++++++++++++++++++++++++++++++++++++++++++++-
include/linux/blk_types.h | 7 +++-
2 files changed, 88 insertions(+), 2 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index 8203bb7455a9..21b403eb1c04 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -18,6 +18,7 @@
#include <linux/highmem.h>
#include <linux/blk-crypto.h>
#include <linux/xarray.h>
+#include <linux/llist.h>
#include <trace/events/block.h>
#include "blk.h"
@@ -1714,6 +1715,51 @@ void bio_check_pages_dirty(struct bio *bio)
}
EXPORT_SYMBOL_GPL(bio_check_pages_dirty);
+struct bio_complete_batch {
+ struct llist_head list;
+ struct delayed_work work;
+ int cpu;
+};
+
+static DEFINE_PER_CPU(struct bio_complete_batch, bio_complete_batch);
+static struct workqueue_struct *bio_complete_wq;
+
+static void bio_complete_work_fn(struct work_struct *w)
+{
+ struct delayed_work *dw = to_delayed_work(w);
+ struct bio_complete_batch *batch =
+ container_of(dw, struct bio_complete_batch, work);
+ struct llist_node *node;
+ struct bio *bio, *next;
+
+ do {
+ node = llist_del_all(&batch->list);
+ if (!node)
+ break;
+
+ node = llist_reverse_order(node);
+ llist_for_each_entry_safe(bio, next, node, bi_llist)
+ bio->bi_end_io(bio);
+
+ if (need_resched()) {
+ if (!llist_empty(&batch->list))
+ mod_delayed_work_on(batch->cpu,
+ bio_complete_wq,
+ &batch->work, 0);
+ break;
+ }
+ } while (1);
+}
+
+static void bio_queue_completion(struct bio *bio)
+{
+ struct bio_complete_batch *batch = this_cpu_ptr(&bio_complete_batch);
+
+ if (llist_add(&bio->bi_llist, &batch->list))
+ mod_delayed_work_on(batch->cpu, bio_complete_wq,
+ &batch->work, 1);
+}
+
static inline bool bio_remaining_done(struct bio *bio)
{
/*
@@ -1788,7 +1834,9 @@ void bio_endio(struct bio *bio)
}
#endif
- if (bio->bi_end_io)
+ if (!in_task() && bio_flagged(bio, BIO_COMPLETE_IN_TASK))
+ bio_queue_completion(bio);
+ else if (bio->bi_end_io)
bio->bi_end_io(bio);
}
EXPORT_SYMBOL(bio_endio);
@@ -1974,6 +2022,24 @@ int bioset_init(struct bio_set *bs,
}
EXPORT_SYMBOL(bioset_init);
+/*
+ * Drain a dead CPU's deferred bio completions.
+ */
+static int bio_complete_batch_cpu_dead(unsigned int cpu)
+{
+ struct bio_complete_batch *batch =
+ per_cpu_ptr(&bio_complete_batch, cpu);
+ struct llist_node *node;
+ struct bio *bio, *next;
+
+ node = llist_del_all(&batch->list);
+ node = llist_reverse_order(node);
+ llist_for_each_entry_safe(bio, next, node, bi_llist)
+ bio->bi_end_io(bio);
+
+ return 0;
+}
+
static int __init init_bio(void)
{
int i;
@@ -1988,6 +2054,21 @@ static int __init init_bio(void)
SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
}
+ for_each_possible_cpu(i) {
+ struct bio_complete_batch *batch =
+ per_cpu_ptr(&bio_complete_batch, i);
+
+ init_llist_head(&batch->list);
+ INIT_DELAYED_WORK(&batch->work, bio_complete_work_fn);
+ batch->cpu = i;
+ }
+
+ bio_complete_wq = alloc_workqueue("bio_complete", WQ_MEM_RECLAIM, 0);
+ if (!bio_complete_wq)
+ panic("bio: can't allocate bio_complete workqueue\n");
+
+ cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "block/bio:complete:dead",
+ NULL, bio_complete_batch_cpu_dead);
cpuhp_setup_state_multi(CPUHP_BIO_DEAD, "block/bio:dead", NULL,
bio_cpu_dead);
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 8808ee76e73c..0b55159d110d 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -11,6 +11,7 @@
#include <linux/device.h>
#include <linux/ktime.h>
#include <linux/rw_hint.h>
+#include <linux/llist.h>
struct bio_set;
struct bio;
@@ -208,7 +209,10 @@ typedef unsigned int blk_qc_t;
* stacking drivers)
*/
struct bio {
- struct bio *bi_next; /* request queue link */
+ union {
+ struct bio *bi_next; /* request queue link */
+ struct llist_node bi_llist; /* deferred completion */
+ };
struct block_device *bi_bdev;
blk_opf_t bi_opf; /* bottom bits REQ_OP, top bits
* req_flags.
@@ -322,6 +326,7 @@ enum {
BIO_REMAPPED,
BIO_ZONE_WRITE_PLUGGING, /* bio handled through zone write plugging */
BIO_EMULATES_ZONE_APPEND, /* bio emulates a zone append operation */
+ BIO_COMPLETE_IN_TASK, /* complete bi_end_io() in task context */
BIO_FLAG_LAST
};
--
2.39.5
* [PATCH RFC v5 2/3] iomap: use BIO_COMPLETE_IN_TASK for dropbehind writeback
2026-04-08 23:08 [PATCH RFC v5 0/3] block: enable RWF_DONTCACHE for block devices Tal Zussman
2026-04-08 23:08 ` [PATCH RFC v5 1/3] block: add BIO_COMPLETE_IN_TASK for task-context completion Tal Zussman
@ 2026-04-08 23:08 ` Tal Zussman
2026-04-08 23:08 ` [PATCH RFC v5 3/3] block: enable RWF_DONTCACHE for block devices Tal Zussman
2026-04-09 6:53 ` [PATCH RFC v5 0/3] " Christoph Hellwig
3 siblings, 0 replies; 8+ messages in thread
From: Tal Zussman @ 2026-04-08 23:08 UTC (permalink / raw)
To: Jens Axboe, Matthew Wilcox (Oracle),
Christian Brauner, Darrick J. Wong, Carlos Maiolino,
Alexander Viro, Jan Kara
Cc: Christoph Hellwig, Dave Chinner, Bart Van Assche, linux-block,
linux-kernel, linux-xfs, linux-fsdevel, linux-mm, Tal Zussman
Set BIO_COMPLETE_IN_TASK on iomap writeback bios when a dropbehind folio
is added. This ensures that bi_end_io runs in task context, where
folio_end_dropbehind() can safely invalidate folios.
With the bio layer now handling task-context deferral generically,
IOMAP_IOEND_DONTCACHE is no longer needed, as XFS no longer needs to
route DONTCACHE ioends through its completion workqueue. Remove the flag
and its NOMERGE entry.
Without the NOMERGE entry, regular I/Os that get merged with a dropbehind
folio will also have their completions deferred to task context.
Signed-off-by: Tal Zussman <tz2294@columbia.edu>
---
fs/iomap/ioend.c | 5 +++--
fs/xfs/xfs_aops.c | 4 ----
include/linux/iomap.h | 6 +-----
3 files changed, 4 insertions(+), 11 deletions(-)
diff --git a/fs/iomap/ioend.c b/fs/iomap/ioend.c
index e4d57cb969f1..fe2a4c3dae42 100644
--- a/fs/iomap/ioend.c
+++ b/fs/iomap/ioend.c
@@ -182,8 +182,6 @@ ssize_t iomap_add_to_ioend(struct iomap_writepage_ctx *wpc, struct folio *folio,
ioend_flags |= IOMAP_IOEND_UNWRITTEN;
if (wpc->iomap.flags & IOMAP_F_SHARED)
ioend_flags |= IOMAP_IOEND_SHARED;
- if (folio_test_dropbehind(folio))
- ioend_flags |= IOMAP_IOEND_DONTCACHE;
if (pos == wpc->iomap.offset && (wpc->iomap.flags & IOMAP_F_BOUNDARY))
ioend_flags |= IOMAP_IOEND_BOUNDARY;
@@ -200,6 +198,9 @@ ssize_t iomap_add_to_ioend(struct iomap_writepage_ctx *wpc, struct folio *folio,
if (!bio_add_folio(&ioend->io_bio, folio, map_len, poff))
goto new_ioend;
+ if (folio_test_dropbehind(folio))
+ bio_set_flag(&ioend->io_bio, BIO_COMPLETE_IN_TASK);
+
/*
* Clamp io_offset and io_size to the incore EOF so that ondisk
* file size updates in the ioend completion are byte-accurate.
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 76678814f46f..0d469b91377d 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -510,10 +510,6 @@ xfs_ioend_needs_wq_completion(
if (ioend->io_flags & (IOMAP_IOEND_UNWRITTEN | IOMAP_IOEND_SHARED))
return true;
- /* Page cache invalidation cannot be done in irq context. */
- if (ioend->io_flags & IOMAP_IOEND_DONTCACHE)
- return true;
-
return false;
}
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 99b7209dabd7..a5d6401ebd80 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -392,16 +392,12 @@ sector_t iomap_bmap(struct address_space *mapping, sector_t bno,
#define IOMAP_IOEND_BOUNDARY (1U << 2)
/* is direct I/O */
#define IOMAP_IOEND_DIRECT (1U << 3)
-/* is DONTCACHE I/O */
-#define IOMAP_IOEND_DONTCACHE (1U << 4)
-
/*
* Flags that if set on either ioend prevent the merge of two ioends.
* (IOMAP_IOEND_BOUNDARY also prevents merges, but only one-way)
*/
#define IOMAP_IOEND_NOMERGE_FLAGS \
- (IOMAP_IOEND_SHARED | IOMAP_IOEND_UNWRITTEN | IOMAP_IOEND_DIRECT | \
- IOMAP_IOEND_DONTCACHE)
+ (IOMAP_IOEND_SHARED | IOMAP_IOEND_UNWRITTEN | IOMAP_IOEND_DIRECT)
/*
* Structure for writeback I/O completions.
--
2.39.5
* [PATCH RFC v5 3/3] block: enable RWF_DONTCACHE for block devices
2026-04-08 23:08 [PATCH RFC v5 0/3] block: enable RWF_DONTCACHE for block devices Tal Zussman
2026-04-08 23:08 ` [PATCH RFC v5 1/3] block: add BIO_COMPLETE_IN_TASK for task-context completion Tal Zussman
2026-04-08 23:08 ` [PATCH RFC v5 2/3] iomap: use BIO_COMPLETE_IN_TASK for dropbehind writeback Tal Zussman
@ 2026-04-08 23:08 ` Tal Zussman
2026-04-09 16:08 ` Christoph Hellwig
2026-04-09 6:53 ` [PATCH RFC v5 0/3] " Christoph Hellwig
3 siblings, 1 reply; 8+ messages in thread
From: Tal Zussman @ 2026-04-08 23:08 UTC (permalink / raw)
To: Jens Axboe, Matthew Wilcox (Oracle),
Christian Brauner, Darrick J. Wong, Carlos Maiolino,
Alexander Viro, Jan Kara
Cc: Christoph Hellwig, Dave Chinner, Bart Van Assche, linux-block,
linux-kernel, linux-xfs, linux-fsdevel, linux-mm, Tal Zussman
Block device buffered reads and writes already pass through
filemap_read() and iomap_file_buffered_write() respectively, both of
which handle IOCB_DONTCACHE. Enable RWF_DONTCACHE for block device files
by setting FOP_DONTCACHE in def_blk_fops.
For CONFIG_BUFFER_HEAD=y paths, add block_write_begin_iocb() which
threads the kiocb through so that buffer_head-based I/O can use
DONTCACHE behavior. The existing block_write_begin() is preserved as a
wrapper that passes a NULL iocb. Set BIO_COMPLETE_IN_TASK in
submit_bh_wbc() when the folio has dropbehind so that buffer_head
writeback completions get deferred to task context.
CONFIG_BUFFER_HEAD=n paths are handled by the previously added iomap
BIO_COMPLETE_IN_TASK support.
This support is useful for databases that operate on raw block devices,
among other userspace applications.
Signed-off-by: Tal Zussman <tz2294@columbia.edu>
---
block/fops.c | 5 +++--
fs/buffer.c | 22 +++++++++++++++++++---
include/linux/buffer_head.h | 3 +++
3 files changed, 25 insertions(+), 5 deletions(-)
diff --git a/block/fops.c b/block/fops.c
index 4d32785b31d9..d8165f6ba71c 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -505,7 +505,8 @@ static int blkdev_write_begin(const struct kiocb *iocb,
unsigned len, struct folio **foliop,
void **fsdata)
{
- return block_write_begin(mapping, pos, len, foliop, blkdev_get_block);
+ return block_write_begin_iocb(iocb, mapping, pos, len, foliop,
+ blkdev_get_block);
}
static int blkdev_write_end(const struct kiocb *iocb,
@@ -967,7 +968,7 @@ const struct file_operations def_blk_fops = {
.splice_write = iter_file_splice_write,
.fallocate = blkdev_fallocate,
.uring_cmd = blkdev_uring_cmd,
- .fop_flags = FOP_BUFFER_RASYNC,
+ .fop_flags = FOP_BUFFER_RASYNC | FOP_DONTCACHE,
};
static __init int blkdev_init(void)
diff --git a/fs/buffer.c b/fs/buffer.c
index ed724a902657..c60c0ad6cc35 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2239,14 +2239,19 @@ EXPORT_SYMBOL(block_commit_write);
*
* The filesystem needs to handle block truncation upon failure.
*/
-int block_write_begin(struct address_space *mapping, loff_t pos, unsigned len,
+int block_write_begin_iocb(const struct kiocb *iocb,
+ struct address_space *mapping, loff_t pos, unsigned len,
struct folio **foliop, get_block_t *get_block)
{
pgoff_t index = pos >> PAGE_SHIFT;
+ fgf_t fgp_flags = FGP_WRITEBEGIN;
struct folio *folio;
int status;
- folio = __filemap_get_folio(mapping, index, FGP_WRITEBEGIN,
+ if (iocb && iocb->ki_flags & IOCB_DONTCACHE)
+ fgp_flags |= FGP_DONTCACHE;
+
+ folio = __filemap_get_folio(mapping, index, fgp_flags,
mapping_gfp_mask(mapping));
if (IS_ERR(folio))
return PTR_ERR(folio);
@@ -2261,6 +2266,13 @@ int block_write_begin(struct address_space *mapping, loff_t pos, unsigned len,
*foliop = folio;
return status;
}
+
+int block_write_begin(struct address_space *mapping, loff_t pos, unsigned len,
+ struct folio **foliop, get_block_t *get_block)
+{
+ return block_write_begin_iocb(NULL, mapping, pos, len, foliop,
+ get_block);
+}
EXPORT_SYMBOL(block_write_begin);
int block_write_end(loff_t pos, unsigned len, unsigned copied,
@@ -2589,7 +2601,8 @@ int cont_write_begin(const struct kiocb *iocb, struct address_space *mapping,
(*bytes)++;
}
- return block_write_begin(mapping, pos, len, foliop, get_block);
+ return block_write_begin_iocb(iocb, mapping, pos, len, foliop,
+ get_block);
}
EXPORT_SYMBOL(cont_write_begin);
@@ -2801,6 +2814,9 @@ static void submit_bh_wbc(blk_opf_t opf, struct buffer_head *bh,
bio = bio_alloc(bh->b_bdev, 1, opf, GFP_NOIO);
+ if (folio_test_dropbehind(bh->b_folio))
+ bio_set_flag(bio, BIO_COMPLETE_IN_TASK);
+
fscrypt_set_bio_crypt_ctx_bh(bio, bh, GFP_NOIO);
bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index b16b88bfbc3e..ddf88ce290f2 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -260,6 +260,9 @@ int block_read_full_folio(struct folio *, get_block_t *);
bool block_is_partially_uptodate(struct folio *, size_t from, size_t count);
int block_write_begin(struct address_space *mapping, loff_t pos, unsigned len,
struct folio **foliop, get_block_t *get_block);
+int block_write_begin_iocb(const struct kiocb *iocb,
+ struct address_space *mapping, loff_t pos, unsigned len,
+ struct folio **foliop, get_block_t *get_block);
int __block_write_begin(struct folio *folio, loff_t pos, unsigned len,
get_block_t *get_block);
int block_write_end(loff_t pos, unsigned len, unsigned copied, struct folio *);
--
2.39.5
* Re: [PATCH RFC v5 0/3] block: enable RWF_DONTCACHE for block devices
2026-04-08 23:08 [PATCH RFC v5 0/3] block: enable RWF_DONTCACHE for block devices Tal Zussman
` (2 preceding siblings ...)
2026-04-08 23:08 ` [PATCH RFC v5 3/3] block: enable RWF_DONTCACHE for block devices Tal Zussman
@ 2026-04-09 6:53 ` Christoph Hellwig
2026-04-09 18:56 ` Tal Zussman
3 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2026-04-09 6:53 UTC (permalink / raw)
To: Tal Zussman
Cc: Jens Axboe, Matthew Wilcox (Oracle),
Christian Brauner, Darrick J. Wong, Carlos Maiolino,
Alexander Viro, Jan Kara, Christoph Hellwig, Dave Chinner,
Bart Van Assche, linux-block, linux-kernel, linux-xfs,
linux-fsdevel, linux-mm
What tree is this against? I tried Jens' for-next and for-7.1/block
trees, linux-next, and current mainline, and it does not apply to any
of them.
* Re: [PATCH RFC v5 3/3] block: enable RWF_DONTCACHE for block devices
2026-04-08 23:08 ` [PATCH RFC v5 3/3] block: enable RWF_DONTCACHE for block devices Tal Zussman
@ 2026-04-09 16:08 ` Christoph Hellwig
2026-04-09 18:59 ` Tal Zussman
0 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2026-04-09 16:08 UTC (permalink / raw)
To: Tal Zussman
Cc: Jens Axboe, Matthew Wilcox (Oracle),
Christian Brauner, Darrick J. Wong, Carlos Maiolino,
Alexander Viro, Jan Kara, Christoph Hellwig, Dave Chinner,
Bart Van Assche, linux-block, linux-kernel, linux-xfs,
linux-fsdevel, linux-mm
For the next round please split this into the buffer.c parts for
the infrastructure and the use in the block device file operations.
* Re: [PATCH RFC v5 0/3] block: enable RWF_DONTCACHE for block devices
2026-04-09 6:53 ` [PATCH RFC v5 0/3] " Christoph Hellwig
@ 2026-04-09 18:56 ` Tal Zussman
0 siblings, 0 replies; 8+ messages in thread
From: Tal Zussman @ 2026-04-09 18:56 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Jens Axboe, Matthew Wilcox (Oracle),
Christian Brauner, Darrick J. Wong, Carlos Maiolino,
Alexander Viro, Jan Kara, Dave Chinner, Bart Van Assche,
linux-block, linux-kernel, linux-xfs, linux-fsdevel, linux-mm
On 4/9/26 2:53 AM, Christoph Hellwig wrote:
> What tree is this against? I tried Jens' for-next and for-7.1/block
> trees, linux-next and current mainline and it does not apply to any
> of them.
Sorry about that -- I had previously based this on an earlier rc and forgot
to rebase. For the next iteration, is there a preferred tree to rebase
onto? I see you rebased this on linux-next.
* Re: [PATCH RFC v5 3/3] block: enable RWF_DONTCACHE for block devices
2026-04-09 16:08 ` Christoph Hellwig
@ 2026-04-09 18:59 ` Tal Zussman
0 siblings, 0 replies; 8+ messages in thread
From: Tal Zussman @ 2026-04-09 18:59 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Jens Axboe, Matthew Wilcox (Oracle),
Christian Brauner, Darrick J. Wong, Carlos Maiolino,
Alexander Viro, Jan Kara, Dave Chinner, Bart Van Assche,
linux-block, linux-kernel, linux-xfs, linux-fsdevel, linux-mm
On 4/9/26 12:08 PM, Christoph Hellwig wrote:
> For the next round please split this into the buffer.c parts for
> the infrastructure and the use in the block device file operations.
Will do.