From: Jan Kara <jack@suse.cz>
To: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Jan Kara <jack@suse.cz>,
	linux-fsdevel@vger.kernel.org,  linux-mm@kvack.org,
	Matthew Wilcox <willy@infradead.org>,
	 lsf-pc@lists.linux-foundation.org
Subject: Re: [LSF/MM/BPF TOPIC] Filesystem inode reclaim
Date: Tue, 14 Apr 2026 11:15:48 +0200
Message-ID: <z6m4h2yamlckbyzh2ey2ixa7t7s6hlzg5jcle4qyebkopkvujw@pcrzrkuycg4g>
In-Reply-To: <ad1QfUWEPSCDUDYv@linux.dev>

Hi Shakeel!

On Mon 13-04-26 14:23:13, Shakeel Butt wrote:
> On Thu, Apr 09, 2026 at 11:16:44AM +0200, Jan Kara wrote:
> > Hello!
> > 
> > This is a recurring topic Matthew has been kicking forward for the last
> > year, so let me offer a fs-person point of view on the problem and
> > possible solutions. The problem is very simple: when a filesystem (ext4,
> > btrfs, vfat) is about to reclaim an inode, it sometimes needs to perform
> > a complex cleanup - like trimming preallocated blocks beyond the end of
> > file, making sure the journalling machinery is done with the inode, etc.
> > This may require reading metadata into memory, which requires memory
> > allocations and
> 
> Some of these allocations may have the __GFP_ACCOUNT flag as well, right?
> Also, are these just slab allocations or can they be page allocations as
> well? And does the caller hold shared locks while performing these
> allocations?

Yes, some of these allocations may be __GFP_ACCOUNT - e.g. if we end up in
fs/buffer.c: grow_dev_folio(), which needs to allocate a folio to load
metadata into and allocate the buffer_heads underlying that folio.
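
To illustrate the kind of allocation I mean, a simplified sketch (not the
actual fs/buffer.c code, just the shape of the path - __getblk() may need
to grow the block device mapping, which is where grow_dev_folio() and the
buffer_head allocations come in):

	/*
	 * Loading one metadata block during evict(): __getblk() may have
	 * to allocate a folio for the bdev mapping plus the buffer_heads
	 * attached to it, and then we block on the read from reclaim
	 * context. Since eviction cannot fail, a failure here just means
	 * retrying, i.e. effectively GFP_NOFAIL behaviour.
	 */
	struct buffer_head *bh = __getblk(bdev, block, blocksize);

	if (bh && !buffer_uptodate(bh))
		bh_read(bh, 0);		/* waits for the metadata IO */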

Regarding shared locks - it is fs dependent. I cannot currently remember a
place where a __GFP_ACCOUNT allocation would be done under some fs-wide
lock, but I also cannot completely rule that out. There definitely are
allocations without __GFP_ACCOUNT under fs-wide locks.

> > as inode eviction cannot fail, these are effectively GFP_NOFAIL
> > allocations (and there are other reasons why it would be very difficult to
> > make some of these required allocations in the filesystems failable).
> > 
> > GFP_NOFAIL allocations from reclaim context (be it kswapd or direct reclaim)
> > trigger warnings 
> 
> I assume these are the PF_MEMALLOC + GFP_NOFAIL warnings, right?

Yes.
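
For reference, the check I mean is roughly this bit of the page allocator's
__GFP_NOFAIL slowpath (paraphrased from mm/page_alloc.c; the exact form
differs between kernel versions):

	if (gfp_mask & __GFP_NOFAIL) {
		/*
		 * A PF_MEMALLOC task looping on a __GFP_NOFAIL allocation
		 * cannot reclaim anything itself and can only wait for
		 * somebody else to make progress - hence the warning.
		 */
		WARN_ON_ONCE_GFP(current->flags & PF_MEMALLOC, gfp_mask);
		...
	}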

> > - and for a good reason, as forward progress isn't
> > guaranteed. Also it leaves a bad taste that we are sometimes performing
> > rather long-running operations blocking on IO from reclaim context, thus
> > stalling reclaim for a substantial amount of time to free 1k worth of
> > slab cache.
> 
> Agreed, particularly in multi-tenant and overcommitted environments where
> unrelated direct reclaimers have to spend their CPU time to clean up /
> free up memory from others.
> 
> BTW I think kswapd doing such hard work is fine.
> 
> > 
> > I have been mulling over possible solutions since I don't think each
> > filesystem should be inventing a complex inode lifetime management scheme
> > like the one XFS has invented to solve these issues. Here's what I think
> > we could do:
> > 
> > 1) Filesystems will be required to mark inodes that have non-trivial
> > cleanup work to do on reclaim with an inode flag I_RECLAIM_HARD (or
> > whatever :)). Usually I expect this to happen on first inode modification
> > or so. This will require some per-fs work but it shouldn't be that
> > difficult and filesystems can be adapted one-by-one as they decide to
> > address these warnings from reclaim.
> > 
> > 2) Inodes without I_RECLAIM_HARD will be reclaimed as usual directly from
> > kswapd / direct reclaim. I'm keeping this variant of inode reclaim for
> > performance reasons. I expect this to be a significant portion of inodes
> > on average, and in particular for some workloads that scan a lot of
> > inodes (find through the whole fs or similar), the efficiency of inode
> > reclaim is
> > one of the determining factors for their performance.
> > 
> > 3) Inodes with I_RECLAIM_HARD will be moved by the shrinker to a separate
> > per-sb list s_hard_reclaim_inodes and we'll queue work (per-sb work struct)
> > to process them.
> 
> This async worker is an interesting idea. I have been brainstorming about
> similar problems and was leaning towards more kswapds or async/background
> reclaimers, where such reclaimers can do more intensive cleanup work.
> Basically the aim is to avoid direct reclaimers as much as possible.

So similarly to how we eventually moved direct page writeback out of kswapd
reclaim, I think it makes sense to remove difficult inode reclaim from
kswapd as well. In particular because I think such separation makes it
clearer that while you are doing complex inode reclaim and allocating memory
from there, there's still kswapd that can free some memory for you to make
forward progress. And you'd better be sure that there's enough "easy to
free" memory to allow forward progress of the difficult reclaim.

> > 4) The work will walk s_hard_reclaim_inodes list and call evict() for each
> > inode, doing the hard work.
> > 
> > This way, kswapd / direct reclaim doesn't wait for hard-to-reclaim inodes
> > and can instead work on freeing the memory needed to free them. So the
> > warnings about GFP_NOFAIL allocations aren't only papered over, they
> > should really be addressed.
> > 
> > One possible concern is that the s_hard_reclaim_inodes list could grow out
> > of control for some workloads (in particular because there could be
> > multiple CPUs generating hard-to-reclaim inodes while the cleanup would be
> > single-threaded).
> 
> Why single-threaded? What would be the issue with having multiple such
> workers doing independent cleanups? Also, these workers will need memory
> guarantees as well (something like PF_MEMALLOC) so that their own
> allocations don't get stuck in reclaim.

Well, single-threaded isn't a requirement, but in the beginning I plan to do
it like that for simplicity, similarly to how there's currently only one
flush work doing writeback (although we are just discussing moving to more
workers for that). Also the inode cleanup will contend on fs-wide resources
such as the journal, so although some scaling can bring benefits, it will be
difficult to scale beyond certain limits (again heavily fs dependent).
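
Just to make the shape of it concrete, something along these lines is what
I have in mind (a very rough sketch; apart from I_RECLAIM_HARD and
s_hard_reclaim_inodes from the proposal, all names below are made up and
locking / lifetime details are glossed over):

	/* Shrinker side: don't evict hard inodes inline, hand them off. */
	if (inode->i_state & I_RECLAIM_HARD) {
		spin_lock(&sb->s_hard_reclaim_lock);
		list_move(&inode->i_lru, &sb->s_hard_reclaim_inodes);
		sb->s_nr_hard_reclaim++;
		spin_unlock(&sb->s_hard_reclaim_lock);
		queue_work(system_unbound_wq, &sb->s_hard_reclaim_work);
		continue;
	}

	/* Per-sb work item: do the expensive evictions out of reclaim. */
	static void hard_reclaim_work_fn(struct work_struct *work)
	{
		struct super_block *sb = container_of(work,
				struct super_block, s_hard_reclaim_work);
		struct inode *inode;

		/*
		 * pop_hard_reclaim_inode() would take an inode off the list
		 * under s_hard_reclaim_lock and decrement the counter.
		 */
		while ((inode = pop_hard_reclaim_inode(sb)) != NULL)
			evict(inode);	/* may read metadata, block on IO */
	}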

> > This could be addressed by tracking the number of inodes on
> > that list and, if it grows over some limit, starting to throttle
> > processes when they set the I_RECLAIM_HARD inode flag.
> 
> I assume you are thinking of this specific limit as similar to the dirty
> memory limits we already have, right?

Yes.
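
Roughly like this (again just a sketch with made-up names - the real thing
would probably mirror the balance_dirty_pages() style of throttling more
closely):

	/* Called when a filesystem marks an inode as hard to reclaim. */
	static void mark_inode_reclaim_hard(struct inode *inode)
	{
		struct super_block *sb = inode->i_sb;

		spin_lock(&inode->i_lock);
		inode->i_state |= I_RECLAIM_HARD;
		spin_unlock(&inode->i_lock);

		/*
		 * Throttle producers if the cleanup worker falls behind;
		 * the worker would wake s_hard_reclaim_wq as it makes
		 * progress on the list.
		 */
		wait_event(sb->s_hard_reclaim_wq,
			   READ_ONCE(sb->s_nr_hard_reclaim) < HARD_RECLAIM_LIMIT);
	}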

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

