From: Kundan Kumar <kundan.kumar@samsung.com>
To: viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
willy@infradead.org, mcgrof@kernel.org, clm@meta.com,
david@fromorbit.com, amir73il@gmail.com, axboe@kernel.dk,
hch@lst.de, ritesh.list@gmail.com, djwong@kernel.org,
dave@stgolabs.net, cem@kernel.org, wangyufei@vivo.com
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
linux-xfs@vger.kernel.org, gost.dev@samsung.com,
kundan.kumar@samsung.com, anuj20.g@samsung.com,
vishak.g@samsung.com, joshi.k@samsung.com
Subject: [PATCH v3 0/6] AG aware parallel writeback for XFS
Date: Fri, 16 Jan 2026 15:38:12 +0530
Message-ID: <20260116100818.7576-1-kundan.kumar@samsung.com>
In-Reply-To: <CGME20260116101236epcas5p12ba3de776976f4ea6666e16a33ab6ec4@epcas5p1.samsung.com>

This series explores AG-aware parallel writeback for XFS. The goal is
to reduce writeback contention and improve scalability by distributing
writeback across allocation groups (AGs).

Problem statement
=================
Today, XFS writeback walks the page cache serially per inode and funnels
all writeback through a single writeback context. For aging filesystems,
especially under highly parallel buffered I/O, this limits concurrency
across independent AGs.

The filesystem already has strong AG-level parallelism for allocation and
metadata operations, but writeback remains largely AG-agnostic.

High-level approach
===================
This series introduces AG-aware writeback with the following model (a
sketch follows the list):

1) Predict the target AG for buffered writes (mapped or delalloc) at
   write time.
2) Tag AG hints per folio (via lightweight metadata / xarray).
3) Track dirty AGs per inode using a bitmap.
4) Offload writeback to per-AG worker threads, each performing a
   one-pass scan.
5) Each worker filters folios and submits only those tagged for its AG.
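
A minimal userspace sketch of steps 1-4 of this model. All names here
(predict_ag, tag_write, struct sketch_inode) are hypothetical and the
constants are illustrative; the actual patches use XFS folio and xarray
infrastructure:

#include <stdint.h>
#include <stdio.h>

#define AG_COUNT 12		/* assumed number of AGs */
#define AG_BLOCKS (1ull << 20)	/* assumed blocks per AG, illustrative */

struct sketch_inode {
	uint64_t dirty_ag_bitmap;	/* step 3: one bit per dirty AG */
};

/* Step 1: predict the target AG for a buffered write at write time. */
static unsigned int predict_ag(uint64_t fsblock)
{
	return (unsigned int)((fsblock / AG_BLOCKS) % AG_COUNT);
}

/* Steps 2-3: derive the folio's AG hint and mark that AG dirty. */
static unsigned int tag_write(struct sketch_inode *ip, uint64_t fsblock)
{
	unsigned int ag = predict_ag(fsblock);

	ip->dirty_ag_bitmap |= 1ull << ag;
	return ag;			/* stored as the per-folio hint */
}

int main(void)
{
	struct sketch_inode ip = { 0 };
	uint64_t blocks[] = { 100, AG_BLOCKS + 5, 3 * AG_BLOCKS };

	for (unsigned int i = 0; i < 3; i++)
		printf("block %llu -> AG %u\n",
		       (unsigned long long)blocks[i],
		       tag_write(&ip, blocks[i]));

	/* Step 4 would queue one worker per set bit in this bitmap. */
	printf("dirty AG bitmap: 0x%llx\n",
	       (unsigned long long)ip.dirty_ag_bitmap);
	return 0;
}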

Unlike our earlier approach, which parallelized writeback by introducing
multiple writeback contexts per BDI, this series keeps all changes within
XFS and is orthogonal to that work. The AG-aware mechanism uses per-folio
AG hints to route writeback to AG-specific workers (see the filtering
sketch below), and therefore applies even when a single inode's data
spans multiple AGs. This avoids the earlier limitation of relying on
inode-based AG locality, which can break down on aged/fragmented
filesystems.
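
A similarly hypothetical sketch of the worker-side filtering (step 5):
each per-AG worker does a one-pass scan and submits only folios whose
attached hint matches its AG. This is an illustrative userspace model;
the real workers walk the inode's page cache and submit folios for I/O:

#include <stdio.h>

struct tagged_folio {
	unsigned long index;	/* page cache index */
	unsigned int ag;	/* AG hint attached at write time */
};

/* Step 5: a worker for 'ag' submits only folios tagged for its AG. */
static void ag_writeback_worker(unsigned int ag,
				const struct tagged_folio *folios, int n)
{
	for (int i = 0; i < n; i++) {
		if (folios[i].ag != ag)
			continue;	/* another AG's worker owns this */
		printf("AG %u worker: writing folio %lu\n",
		       ag, folios[i].index);
	}
}

int main(void)
{
	const struct tagged_folio folios[] = {
		{ 0, 0 }, { 1, 2 }, { 2, 0 }, { 3, 1 },
	};

	/* One worker per dirty AG; run serially here for clarity. */
	for (unsigned int ag = 0; ag < 3; ag++)
		ag_writeback_worker(ag, folios, 4);
	return 0;
}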

IOPS and throughput
===================
We see a significant improvement in IOPS when files span multiple AGs.

Workload: 12 files of 500M each in 12 directories (AGs), numjobs = 12,
on an Intel Optane NVMe device:
  Base XFS               :  308 MiB/s
  Parallel Writeback XFS : 1534 MiB/s (+398%)

Workload: 6 files of 6G each in 6 directories (AGs), numjobs = 12,
on an Intel Optane NVMe device:
  Base XFS               :  409 MiB/s
  Parallel Writeback XFS : 1245 MiB/s (+204%)
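
The cover letter does not give the exact fio job; an approximation of
the first workload (options beyond file count, size, and numjobs are
assumptions, and the directory paths are placeholders) could be:

[global]
; buffered sequential writes; engine and block size are assumed
rw=write
ioengine=psync
bs=64k
size=500M
numjobs=12

[ag-parallel-wb]
; fio spreads the 12 job clones across these directories, one per AG
directory=/mnt/xfs/d0:/mnt/xfs/d1:/mnt/xfs/d2:/mnt/xfs/d3:/mnt/xfs/d4:/mnt/xfs/d5:/mnt/xfs/d6:/mnt/xfs/d7:/mnt/xfs/d8:/mnt/xfs/d9:/mnt/xfs/d10:/mnt/xfs/d11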

These changes are on top of the v6.18 kernel release.

Future work involves tightening writeback control (wbc) handling to
integrate with global writeback accounting and range semantics, and
evaluating interaction with higher-level writeback parallelism.

Kundan Kumar (6):
iomap: add write ops hook to attach metadata to folios
xfs: add helpers to pack AG prediction info for per-folio tracking
xfs: add per-inode AG prediction map and dirty-AG bitmap
xfs: tag folios with AG number during buffered write via iomap attach
hook
xfs: add per-AG writeback workqueue infrastructure
xfs: offload writeback by AG using per-inode dirty bitmap and per-AG
workers

 fs/iomap/buffered-io.c |   3 +
fs/xfs/xfs_aops.c | 257 +++++++++++++++++++++++++++++++++++++++++
fs/xfs/xfs_aops.h | 3 +
fs/xfs/xfs_icache.c | 27 +++++
fs/xfs/xfs_inode.h | 5 +
fs/xfs/xfs_iomap.c | 114 ++++++++++++++++++
fs/xfs/xfs_iomap.h | 31 +++++
fs/xfs/xfs_mount.c | 2 +
fs/xfs/xfs_mount.h | 10 ++
fs/xfs/xfs_super.c | 2 +
include/linux/iomap.h | 3 +
11 files changed, 457 insertions(+)
--
2.25.1