From: Jan Kara <jack@suse.cz>
To: Matthew Wilcox <willy@infradead.org>
Cc: lsf-pc@lists.linux-foundation.org, linux-scsi@vger.kernel.org,
linux-ide@vger.kernel.org, linux-nvme@lists.infradead.org,
linux-block@vger.kernel.org, linux-mm@kvack.org,
linux-fsdevel@vger.kernel.org
Subject: Re: [Lsf-pc] [LSF/MM/BPF TOPIC] Removing GFP_NOFS
Date: Fri, 5 Jan 2024 11:57:36 +0100 [thread overview]
Message-ID: <20240105105736.24jep6q6cd7vsnmz@quack3> (raw)
In-Reply-To: <ZZcgXI46AinlcBDP@casper.infradead.org>
Hello,
On Thu 04-01-24 21:17:16, Matthew Wilcox wrote:
> This is primarily a _FILESYSTEM_ track topic. All the work has already
> been done on the MM side; the FS people need to do their part. It could
> be a joint session, but I'm not sure there's much for the MM people
> to say.
>
> There are situations where we need to allocate memory, but cannot call
> into the filesystem to free memory. Generally this is because we're
> holding a lock or we've started a transaction, and attempting to write
> out dirty folios to reclaim memory would result in a deadlock.
>
> The old way to solve this problem is to specify GFP_NOFS when allocating
> memory. This conveys little information about what is being protected
> against, and so it is hard to know when it might be safe to remove.
> It's also a reflex -- many filesystem authors use GFP_NOFS by default
> even when they could use GFP_KERNEL because there's no risk of deadlock.
>
> The new way is to use the scoped APIs -- memalloc_nofs_save() and
> memalloc_nofs_restore(). These should be called when we start a
> transaction or take a lock that would cause a GFP_KERNEL allocation to
> deadlock. Then just use GFP_KERNEL as normal. The memory allocators
> can see the nofs situation is in effect and will not call back into
> the filesystem.
>
> This results in better code within your filesystem as you don't need to
> pass around gfp flags as much, and can lead to better performance from
> the memory allocators as GFP_NOFS will not be used unnecessarily.
>
> The memalloc_nofs APIs were introduced in May 2017, but we still have
> over 1000 uses of GFP_NOFS in fs/ today (and 200 outside fs/, which is
> really sad). This session is for filesystem developers to talk about
> what they need to do to fix up their own filesystem, or share stories
> about how they made their filesystem better by adopting the new APIs.

I agree this is a worthy goal and the scoped API has helped us a lot in
ext4/jbd2 land. Still, we have some legacy to deal with:
~> git grep "NOFS" fs/jbd2/ | wc -l
15
~> git grep "NOFS" fs/ext4/ | wc -l
71
Since you ask what would help filesystems with the conversion, I actually
have one wish. The most common case is that you need to annotate some lock
that can be grabbed in the reclaim path, and thus you must avoid __GFP_FS
allocations while holding it. For example, to deal with reclaim deadlocks
in the writeback path we had to introduce wrappers like:
static inline int ext4_writepages_down_read(struct super_block *sb)
{
	percpu_down_read(&EXT4_SB(sb)->s_writepages_rwsem);
	return memalloc_nofs_save();
}

static inline void ext4_writepages_up_read(struct super_block *sb, int ctx)
{
	memalloc_nofs_restore(ctx);
	percpu_up_read(&EXT4_SB(sb)->s_writepages_rwsem);
}
When you have to do this for five locks in your filesystem it gets a bit
ugly, and it would be nice to have some generic way to deal with it. We
already have the spin_lock_irqsave() precedent we might follow (though not
necessarily its calling convention, which is a bit odd by today's
standards).
Even lovelier would be if we could avoid passing around the returned
reclaim state altogether, because sometimes the locks get acquired and
released in different functions, and threading the state through requires
quite a few changes and gets ugly. That would mean having a
fs-reclaim-forbidden counter in task_struct instead of just a flag. OTOH
then we could simply mark the lock (mutex / rwsem / whatever) as
fs-reclaim-unsafe at init time and the rest would just magically happen.
That would be super-easy to use.
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR