From: Kundan Kumar <kundan.kumar@samsung.com>
To: Brian Foster <bfoster@redhat.com>
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
willy@infradead.org, mcgrof@kernel.org, clm@meta.com,
david@fromorbit.com, amir73il@gmail.com, axboe@kernel.dk,
hch@lst.de, ritesh.list@gmail.com, djwong@kernel.org,
dave@stgolabs.net, cem@kernel.org, wangyufei@vivo.com,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
linux-xfs@vger.kernel.org, gost.dev@samsung.com,
anuj20.g@samsung.com, vishak.g@samsung.com, joshi.k@samsung.com
Subject: Re: [PATCH v3 0/6] AG aware parallel writeback for XFS
Date: Wed, 28 Jan 2026 23:58:25 +0530
Message-ID: <e7413e3b-3fae-4aab-90a1-4a6695156b2e@samsung.com>
In-Reply-To: <aXN3EtxKFXX8DEbl@bfoster>
On 1/23/2026 6:56 PM, Brian Foster wrote:
> On Thu, Jan 22, 2026 at 09:45:05PM +0530, Kundan Kumar wrote:
>>>
>>> Could you provide more detail on how you're testing here? I threw this
>>> at some beefier storage I have around out of curiosity and I'm not
>>> seeing much of a difference. It could be I'm missing some details or
>>> maybe the storage outweighs the processing benefit. But for example, is
>>> this a fio test command being used? Is there preallocation? What type of
>>> storage? Is a particular fs geometry being targeted for this
>>> optimization (i.e. smaller AGs), etc.?
>>
>> Thanks Brian for the detailed review and for taking the time to
>> look through the code.
>>
>> The numbers quoted were from fio buffered write workloads on NVMe
>> (Optane) devices, using multiple files placed in different
>> directories that map to different AGs. The workload is buffered
>> randwrite across multiple fio jobs, and it can be tested with or
>> without preallocation (fallocate=none or otherwise).
>>
>> Sample script for 12 jobs writing to 12 directories (AGs):
>>
>> mkfs.xfs -f -d agcount=12 /dev/nvme0n1
>> mount /dev/nvme0n1 /mnt
>> sync
>> echo 3 > /proc/sys/vm/drop_caches
>>
>> for i in {1..12}; do
>> mkdir -p /mnt/dir$i
>> done
>>
>> fio job.fio
>>
>> umount /mnt
>> echo 3 > /proc/sys/vm/drop_caches
>>
>> The job file:
>>
>> [global]
>> bs=4k
>> iodepth=32
>> rw=randwrite
>> ioengine=io_uring
>> fallocate=none
>> nrfiles=12
>> numjobs=1
>> size=6G
>> direct=0
>> group_reporting=1
>> create_on_open=1
>> name=test
>>
>> [job1]
>> directory=/mnt/dir1
>>
>> [job2]
>> directory=/mnt/dir2
>> ...
>> ...
>> [job12]
>> directory=/mnt/dir12
>>
>
> Thanks..
>
>>>
>>> FWIW, I skimmed through the code a bit and the main thing that kind of
>>> stands out to me is the write time per-folio hinting. Writeback handling
>>> for the overwrite (i.e. non-delalloc) case is basically a single lookup
>>> per mapping under shared inode lock. The question that comes to mind
>>> there is what is the value of per-ag batching as opposed to just adding
>>> generic concurrency? It seems unnecessary to me to take care to shuffle
>>> overwrites into per-ag based workers when the underlying locking is
>>> already shared.
>>>
>>
>> That’s a fair point. For the overwrite (non-delalloc) case, the
>> per-folio AG hinting is not meant to change allocation behavior, and
>> I agree the underlying inode locking remains shared. The primary value
>> I’m seeing there is the ability to partition writeback iteration and
>> submission when dirty data spans multiple AGs.
>> I will try routing overwrite writeback to workers irrespective of AG
>> (e.g. hash- or inode-based) to compare generic concurrency against
>> AG batching.
>>
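
To make that comparison concrete, the generic-concurrency variant I
have in mind boils down to something like the following (hypothetical
helper, not code from this series):

/*
 * Hypothetical sketch only: pick a writeback worker from the inode
 * number, ignoring any AG information, to measure what plain
 * concurrency buys on its own.
 */
static inline unsigned int
wb_worker_by_ino(xfs_ino_t ino, unsigned int nr_workers)
{
	/* simple modulo spread; any stable hash would do */
	return (unsigned int)(ino % nr_workers);
}
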
>>> WRT delalloc, it looks like we're basically taking the inode AG as the
>>> starting point and guessing based on the on-disk AGF free blocks counter
>>> at the time of the write. The delalloc accounting doesn't count against
>>> the AGF, however, so ISTM that in many cases this would just effectively
>>> land on the inode AG for larger delalloc writes. Is that not the case?
>>>
>>> Once we get to delalloc writeback, we're under exclusive inode lock and
>>> fall into the block allocator. The latter trylock iterates the AGs
>>> looking for a good candidate. So what's the advantage of per-ag
>>> splitting delalloc at writeback time if we're sending the same inode to
>>> per-ag workers that all 1. require exclusive inode lock and 2. call into
>>> an allocator that is designed to be scalable (i.e. if one AG is locked
>>> it will just move to the next)?
>>>
>>
>> The intent of per-AG splitting is not to parallelize allocation
>> within a single inode or override allocator behavior, but to
>> partition writeback scheduling so that inodes associated with
>> different AGs are routed to different workers. This implicitly
>> distributes inodes across AG workers, even though each inode’s
>> delalloc conversion remains serialized.
>>
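
For contrast, the AG-aware routing described above reduces, at its
core, to indexing the worker by allocation group instead (again a
hypothetical sketch, not the series code):

/*
 * Hypothetical sketch only: route an inode's writeback to the worker
 * that owns the AG it is associated with, so inodes in different AGs
 * land on different workers while each inode's delalloc conversion
 * stays serialized.
 */
static inline unsigned int
wb_worker_by_agno(xfs_agnumber_t agno, unsigned int nr_workers)
{
	/* one worker per AG, wrapping if AGs outnumber workers */
	return agno % nr_workers;
}
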
>>> Yet another consideration is how delalloc conversion works at the
>>> xfs_bmapi_convert_delalloc() -> xfs_bmapi_convert_one_delalloc() level.
>>> If you take a look at the latter, we look up the entire delalloc extent
>>> backing the folio under writeback and attempt to allocate it all at once
>>> (not just the blocks backing the folio). So in theory if we were to end
>>> up tagging a sequence of contiguous delalloc backed folios at buffered
>>> write time with different AGs, we're still going to try to allocate all
>>> of that in one AG at writeback time. So the per-ag hinting also sort of
>>> competes with this by shuffling writeback of the same potential extent
>>> into different workers, making it a little hard to try and reason about.
>>>
>>
>> Agreed — delalloc conversion happens at extent granularity, so
>> per-folio AG hints are not meant to steer final allocation. In this
>> series the hints are used purely as writeback scheduling tokens;
>> allocation still occurs once per extent under XFS_ILOCK_EXCL using
>> existing allocator logic. The goal is to partition writeback work and
>> avoid funneling multiple inodes through a single writeback path, not
>> to influence extent placement.
>>
>
> Yeah.. I realize none of this is really intended to drive allocation
> behavior. The observation that all this per-folio tracking ultimately
> boils down to either sharding based on information we have at writeback
> time (i.e. overwrites) or effectively batching based on on-disk AG state
> at the time of the write is kind of what suggests that the folio
> granular hinting is potentially overkill.
>
>>> So stepping back it kind of feels to me like the write time hinting has
>>> so much potential for inaccuracy and unpredictability of writeback time
>>> behavior (for the delalloc case), that it makes me wonder if we're
>>> effectively just enabling arbitrary concurrency at writeback time and
>>> perhaps seeing benefit from that. If so, that makes me wonder if the
>>> associated value can be gained by somehow simplifying this to not
>>> require write time hinting at all.
>>>
>>> Have you run any experiments that perhaps rotors inodes to the
>>> individual wb workers based on the inode AG (i.e. basically ignoring all
>>> the write time stuff) by chance? Or anything that otherwise helps
>>> quantify the value of per-ag batching over just basic concurrency? I'd
>>> be interested to see if/how behavior changes with something like that.
>>
>> Yes, inode-AG based routing has been explored as part of earlier
>> higher-level writeback work (link below), where inodes are affined to
>> writeback contexts based on inode AG. That effectively provides
>> generic concurrency and serves as a useful baseline.
>> https://lore.kernel.org/all/20251014120845.2361-1-kundan.kumar@samsung.com/
>>
>
> Ah, I recall seeing that. A couple questions..
>
> That link states the following:
>
> "For XFS, affining inodes to writeback threads resulted in a decline
> in IOPS for certain devices. The issue was caused by AG lock contention
> in xfs_end_io, where multiple writeback threads competed for the same
> AG lock."
>
> Can you quantify that? It seems like xfs_end_io() mostly cares about
> things like unwritten conversion, COW remapping, etc., so block
> allocation shouldn't be prominent. Is this producing something where
> frequent unwritten conversion results in a lot of bmapbt splits or
> something?
>
I captured stacks from the contending completion workers, and the
hotspot is in the unwritten conversion path
(xfs_end_io() -> xfs_iomap_write_unwritten()). We were repeatedly
contending on the AGF buffer lock via xfs_alloc_fix_freelist() /
xfs_alloc_read_agf() when writeback threads were affined per-inode.
The contention went away once writeback was distributed across
AG-based workers, which points to AGF hotspotting during unwritten
conversion (rmap/btree updates and freelist fixes) rather than block
allocation in the write path itself.
> Also, how safe is it to break off writeback tasks at the XFS layer
> like this? For example, is it safe to spread around the wbc to a bunch
> of tasks like this? What about serialization for things like bandwidth
> accounting and whatnot in the core/calling code? Should the code that
> splits off wq tasks in XFS be responsible to wait for parallel
> submission completion before returning (I didn't see anything doing that
> on a scan, but could have missed it)..?
>
You are right that core writeback accounting assumes serialized
updates. The current series copies wbc per worker to avoid concurrent
mutation, but that is not sufficient for strict global accounting
semantics.

For this series we only offload the async path
(wbc->sync_mode != WB_SYNC_ALL), so we do not wait for worker
completion before returning from ->writepages(). Sync writeback
continues down the existing iomap_writepages path.
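
For reference, the dispatch policy described above has roughly the
following shape (illustrative only; xfs_writepages_serial() and
xfs_ag_writeback_offload() are hypothetical stand-ins, and the actual
patches may structure this differently):

/*
 * Illustrative sketch only; the two helpers called below are
 * hypothetical stand-ins for the existing iomap path and the per-AG
 * offload respectively.
 */
static int
xfs_vm_writepages_sketch(
	struct address_space		*mapping,
	struct writeback_control	*wbc)
{
	/* Sync writeback stays on the existing serial iomap path. */
	if (wbc->sync_mode == WB_SYNC_ALL)
		return xfs_writepages_serial(mapping, wbc);

	/*
	 * Async writeback: each per-AG worker gets a private copy of
	 * *wbc so concurrent workers never mutate the caller's
	 * writeback_control, and we return without waiting for them.
	 */
	return xfs_ag_writeback_offload(mapping, wbc);
}
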
>> The motivation for this series is the complementary case where a
>> single inode’s dirty data spans multiple AGs on aged/fragmented
>> filesystems, where inode-AG affinity breaks down. The folio-level AG
>> hinting here is intended to explore whether finer-grained partitioning
>> provides additional benefit beyond inode-based routing.
>>
>
> But also I'm not sure I follow the high level goal here. I have the same
> question as Pankaj in that regard.. is this series intended to replace
> the previous bdi level approach, or go along with it somehow? Doing
> something at the bdi level seems like a more natural approach in
> general, so I'm curious why the change in direction.
>
> Brian
>
This series is intended to replace the earlier BDI-level approach for
XFS, not to go alongside it. While BDI-level sharding is the more
natural generic mechanism, we saw XFS regressions on some setups when
inodes were affined to wb threads, due to completion-side AG
contention.

The goal here is to make concurrency an XFS policy decision by routing
writeback using AG-aware folio tags, so we avoid inode-affinity
hotspots and handle cases where a single inode spans multiple AGs on
aged or fragmented filesystems.

If this approach does not hold up across workloads and devices, we can
fall back to the generic BDI sharding model.
- Kundan