From: Jan Kara <jack@suse.cz>
To: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Jan Kara <jack@suse.cz>,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
Matthew Wilcox <willy@infradead.org>,
lsf-pc@lists.linux-foundation.org
Subject: Re: [LSF/MM/BPF TOPIC] Filesystem inode reclaim
Date: Thu, 16 Apr 2026 12:06:59 +0200 [thread overview]
Message-ID: <i67kqas5wx2a25tzoleygwpeyunx6ovvf3t2e25yab3myxbazh@h7hctwxddhq4> (raw)
In-Reply-To: <ad_Eo7iQVT-HUkx1@linux.dev>
On Wed 15-04-26 10:45:11, Shakeel Butt wrote:
> On Tue, Apr 14, 2026 at 11:15:48AM +0200, Jan Kara wrote:
> > > > I have been mulling over possible solutions since I don't think each
> > > > filesystem should be inventing a complex inode lifetime management scheme
> > > > as XFS has invented to solve these issues. Here's what I think we could do:
> > > >
> > > > 1) Filesystems will be required to mark inodes that have non-trivial
> > > > cleanup work to do on reclaim with an inode flag I_RECLAIM_HARD (or
> > > > whatever :)). Usually I expect this to happen on first inode modification
> > > > or so. This will require some per-fs work but it shouldn't be that
> > > > difficult and filesystems can be adapted one-by-one as they decide to
> > > > address these warnings from reclaim.
> > > >
> > > > 2) Inodes without I_RECLAIM_HARD will be reclaimed as usual directly from
> > > > kswapd / direct reclaim. I'm keeping this variant of inode reclaim for
> > > > performance reasons. I expect this to be a significant portion of inodes
> > > > on average and in particular for some workloads which scan a lot of inodes
> > > > (find through the whole fs or similar) the efficiency of inode reclaim is
> > > > one of the determining factors for their performance.
> > > >
> > > > 3) Inodes with I_RECLAIM_HARD will be moved by the shrinker to a separate
> > > > per-sb list s_hard_reclaim_inodes and we'll queue work (per-sb work struct)
> > > > to process them.
> > >
> > > This async worker is an interesting idea. I have been brain-storming for similar
> > > problems and I was going towards more kswapds or async/background reclaimers and
> > > such reclaimers can do more intensive cleanup work. Basically aim to avoid
> > > direct reclaimers as much as possible.
> >
> > So, similarly to how we eventually moved page writeback out of direct
> > reclaim, I think it makes sense to remove difficult inode reclaim from
> > kswapd as well. In particular because I think such separation makes it
> > clearer that while you do complex inode reclaim and allocate memory from
> > there, there's still kswapd that can free some memory for you to make
> > forward progress. And you'd better be sure that there's enough "easy
> > to free" memory to allow forward progress of the difficult reclaim.
>
> Another important point is that we need a memory guarantee for forward
> progress of the difficult reclaim.
Yes, although I don't expect we can get such a guarantee directly (we have
only a very vague idea of how much memory is needed to reclaim such inodes);
rather, we get it by making sure the number of hard-to-reclaim inodes cannot
grow too much.
> > > > 4) The work will walk s_hard_reclaim_inodes list and call evict() for each
> > > > inode, doing the hard work.
> > > >
> > > > This way, kswapd / direct reclaim doesn't wait for hard to reclaim inodes
> > > > and they can work on freeing memory needed for freeing of hard to reclaim
> > > > inodes. So warnings about GFP_NOFAIL allocations aren't only papered over,
> > > > they should really be addressed.
> > > >
> > > > One possible concern is that s_hard_reclaim_inodes list could grow out of
> > > > control for some workloads (in particular because there could be multiple
> > > > CPUs generating hard to reclaim inodes while the cleanup would be
> > > > single-threaded).
> > >
> > > Why single-threaded? What will be the issue to have multiple such workers
> > > doing independent cleanups? Also these workers will need memory
> > > guarantees as well (something like PF_MEMALLOC) to not cause their
> > > allocations stuck in reclaim.
> >
> > Well, single-threaded isn't a requirement but in the beginning I plan to
> > do it like that for simplicity, similarly to how there's currently only
> > one flush work doing writeback (although we are just discussing moving to
> > more than one for that). Also, inode cleanup will contend on fs-wide
> > resources such as the journal, so although some scaling can bring
> > benefits, it will be difficult to scale beyond certain limits (again
> > heavily fs-dependent).
>
> Difficult reclaim uses fs-wide resources (and locks) and thus we cannot
> depend on it to be effective under extreme memory pressure, right?
Correct.
> Or do we want it to be reliable under extreme memory pressure where we
> will need to provide memory and cpu guarantees to it?
At least I don't have that expectation :)
> One more question: I assume it is fs-dependent, but is it possible to avoid
> allocations (and thus reclaim) under fs-wide locks? One challenge we at
> Meta are seeing is (btrfs) lock holders getting stuck in reclaim, causing
> isolation issues.
I don't think it is practically feasible. Often before you acquire locks
and start working, you don't know how much memory you'll need. For simple
operations you can go with worst case estimates and preallocation before
acquiring locks (like we do e.g. with radix tree manipulations) but for
complex mutations of data structures involving journalling etc. it isn't
really practical anymore - too much code to execute, too many possibilities
to consider, too many interactions with other parts of the system.
I understand the priority inversion issues that are arising from this for
memcg reclaim. But I think the "measure now and punish later" model that is
used e.g. for dirty page throttling or blk-iocost throttling of metadata IO
is an approach which has much higher chances of success than trying to move
the allocations out of locks.
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR