From: Jan Kara <jack@suse.cz>
To: Zhihao Cheng <chengzhihao1@huawei.com>
Cc: linux-ext4@vger.kernel.org, akpm@linux-foundation.org,
jack@suse.cz, mgorman@suse.de, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, yukuai3@huawei.com,
yi.zhang@huawei.com
Subject: Re: [PATCH] mm: migrate: buffer_migrate_folio_norefs() fallback migrate not uptodate pages
Date: Thu, 25 Aug 2022 12:57:04 +0200 [thread overview]
Message-ID: <20220825105704.e46hz6dp6opawsjk@quack3> (raw)
In-Reply-To: <20220825080146.2021641-1-chengzhihao1@huawei.com>
On Thu 25-08-22 16:01:46, Zhihao Cheng wrote:
> From: Zhang Yi <yi.zhang@huawei.com>
>
> Recently we noticed that the ext4 filesystem occasionally fails to read
> metadata from disk and reports an error message, even though the disk and
> block layer look fine. After analysis, we tracked it down to commit
> 88dbcbb3a484 ("blkdev: avoid migration stalls for blkdev pages"). It
> provides a migration method for the bdev, so we can now move a page that
> has buffers without extra users, but it locks the buffers on the page,
> which breaks many filesystems' fragile metadata read operations, such as
> ll_rw_block() for common usage and ext4_read_bh_lock() for ext4. These
> helpers just trylock the buffer and skip submitting IO if the lock
> fails; many callers then wait_on_buffer() and conclude an IO error if the
> buffer is not uptodate after it is unlocked.
>
> This issue can easily be reproduced by adding some delay just after
> buffer_migrate_lock_buffers() in __buffer_migrate_folio() and running
> fsstress on an ext4 filesystem.
>
> EXT4-fs error (device pmem1): __ext4_find_entry:1658: inode #73193:
> comm fsstress: reading directory lblock 0
> EXT4-fs error (device pmem1): __ext4_find_entry:1658: inode #75334:
> comm fsstress: reading directory lblock 0
>
> Something like ll_rw_block() should be used carefully, and it seems it
> can only be used safely for the readahead case. So the best long-term
> fix is to correct the read operations in the filesystems, but for now
> let us avoid this issue first. This patch does so by falling back to
> migrating pages that are not uptodate, as in fallback_migrate_folio();
> those pages that have buffers will probably see a read operation soon.
>
> Fixes: 88dbcbb3a484 ("blkdev: avoid migration stalls for blkdev pages")
> Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
> Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Thanks for the analysis and the fix! As you noted above, this is actually a
bug in the filesystems: they assume that a locked buffer means it is under
IO. Usually that is the case, but there are other places that lock the
buffer without doing IO. Page migration is one of them, the jbd2 machinery
is another, and there may be more.
So I think this really ought to be fixed in the filesystems instead of
papering over the bug in the migration code. I agree this is more work but
we will reduce technical debt, not increase it :). Honestly, ll_rw_block()
should just die; it is actively dangerous to use. Instead we should have
one call for readahead of bhs and the rest should be converted to
submit_bh() or similar calls. There are only about 25 callers remaining, so
it won't even be that hard.
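The safe pattern would look roughly like this (a sketch only - the helper
name is illustrative and the exact submit_bh() signature and flags differ
between kernel versions):

```c
/* Take the buffer lock unconditionally instead of trylocking, so a lock
 * held by migration or jbd2 just delays us rather than making us skip
 * the IO and report a false EIO. */
static int read_bh_sync(struct buffer_head *bh)
{
	lock_buffer(bh);
	if (buffer_uptodate(bh)) {
		/* someone else did the IO while we waited for the lock */
		unlock_buffer(bh);
		return 0;
	}
	get_bh(bh);
	bh->b_end_io = end_buffer_read_sync; /* unlocks bh on completion */
	submit_bh(REQ_OP_READ, bh);
	wait_on_buffer(bh);
	if (buffer_uptodate(bh))
		return 0;
	return -EIO;
}
```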
And then we have the same buggy code as in ll_rw_block() copied into
ext4_bread_batch() (ext4_read_bh_lock() in particular), so that needs to be
fixed as well...
Honza
> ---
> mm/migrate.c | 32 ++++++++++++++++++++++++++++++++
> 1 file changed, 32 insertions(+)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 6a1597c92261..bded69867619 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -691,6 +691,38 @@ static int __buffer_migrate_folio(struct address_space *mapping,
>  	if (!head)
>  		return migrate_folio(mapping, dst, src, mode);
> 
> +	/*
> +	 * If the mapped buffers on the page are not uptodate and have a
> +	 * refcount, someone else will probably try to lock the buffer and
> +	 * submit read IO through ll_rw_block(), but it will not submit IO
> +	 * once it fails to lock the buffer, so fall back to migrate_folio()
> +	 * to prevent a false-positive EIO.
> +	 */
> +	if (check_refs) {
> +		bool uptodate = true;
> +		bool invalidate = false;
> +
> +		bh = head;
> +		do {
> +			if (buffer_mapped(bh) && !buffer_uptodate(bh)) {
> +				uptodate = false;
> +				if (atomic_read(&bh->b_count)) {
> +					invalidate = true;
> +					break;
> +				}
> +			}
> +			bh = bh->b_this_page;
> +		} while (bh != head);
> +
> +		if (!uptodate) {
> +			if (invalidate)
> +				invalidate_bh_lrus();
> +			if (filemap_release_folio(src, GFP_KERNEL))
> +				return migrate_folio(mapping, dst, src, mode);
> +			return -EAGAIN;
> +		}
> +	}
> +
>  	/* Check whether page does not have extra refs before we do more work */
>  	expected_count = folio_expected_refs(mapping, src);
>  	if (folio_ref_count(src) != expected_count)
> --
> 2.31.1
>
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
Thread overview: 4+ messages
2022-08-25 8:01 Zhihao Cheng
2022-08-25 10:57 ` Jan Kara [this message]
2022-08-25 11:32 ` Zhihao Cheng
2022-08-25 14:11 ` Jan Kara