linux-mm.kvack.org archive mirror
From: Ming Lei <ming.lei@redhat.com>
To: Baokun Li <libaokun1@huawei.com>
Cc: Matthew Wilcox <willy@infradead.org>,
	Theodore Ts'o <tytso@mit.edu>,
	linux-ext4@vger.kernel.org,
	Andreas Dilger <adilger.kernel@dilger.ca>,
	linux-block@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	Dave Chinner <dchinner@redhat.com>,
	Eric Sandeen <sandeen@redhat.com>, Christoph Hellwig <hch@lst.de>,
	Zhang Yi <yi.zhang@redhat.com>, yangerkun <yangerkun@huawei.com>,
	ming.lei@redhat.com
Subject: Re: [ext4 io hang] buffered write io hang in balance_dirty_pages
Date: Thu, 27 Apr 2023 19:27:04 +0800	[thread overview]
Message-ID: <ZEpcCOCNDhdMHQyY@ovpn-8-26.pek2.redhat.com> (raw)
In-Reply-To: <663b10eb-4b61-c445-c07c-90c99f629c74@huawei.com>

On Thu, Apr 27, 2023 at 07:19:35PM +0800, Baokun Li wrote:
> On 2023/4/27 18:01, Ming Lei wrote:
> > On Thu, Apr 27, 2023 at 02:36:51PM +0800, Baokun Li wrote:
> > > On 2023/4/27 12:50, Ming Lei wrote:
> > > > Hello Matthew,
> > > > 
> > > > On Thu, Apr 27, 2023 at 04:58:36AM +0100, Matthew Wilcox wrote:
> > > > > On Thu, Apr 27, 2023 at 10:20:28AM +0800, Ming Lei wrote:
> > > > > > Hello Guys,
> > > > > > 
> > > > > > I got one report in which buffered write IO hangs in balance_dirty_pages
> > > > > > after an NVMe block device is physically unplugged; after that, umount
> > > > > > can't succeed.
> > > > > That's a feature, not a bug ... the dd should continue indefinitely?
> > > > Can you explain what the feature is? I don't see such an 'issue' or
> > > > 'feature' on xfs.
> > > > 
> > > > The device is gone, so IMO it is reasonable for FS buffered write IO to
> > > > fail. Actually dmesg has already shown 'EXT4-fs (nvme0n1): Remounting
> > > > filesystem read-only'. These things may confuse users.
> > > 
> > > The reason for this difference is that ext4 and xfs handle errors
> > > differently.
> > > 
> > > ext4 remounts the filesystem read-only, or even just continues, and
> > > vfs_write does not check for either of these states.
> > vfs_write may not find anything wrong, but an ext4 remount could see that
> > the disk is gone; that might happen during or after the remount, however.
> > 
> > > xfs shuts down the filesystem, so xfs_file_write_iter returns a failure
> > > as soon as it finds the error.
> > > 
> > > 
> > > ``` ext4
> > > ksys_write
> > >   vfs_write
> > >    ext4_file_write_iter
> > >     ext4_buffered_write_iter
> > >      ext4_write_checks
> > >       file_modified
> > >        file_modified_flags
> > >         __file_update_time
> > >          inode_update_time
> > >           generic_update_time
> > >            __mark_inode_dirty
> > >             ext4_dirty_inode ---> 2. void func, no error propagated out
> > >              __ext4_journal_start_sb
> > >               ext4_journal_check_start ---> 1. Error found, remount-ro
> > >      generic_perform_write ---> 3. No error sensed, continue
> > >       balance_dirty_pages_ratelimited
> > >        balance_dirty_pages_ratelimited_flags
> > >         balance_dirty_pages
> > >          // 4. Sleeping waiting for dirty pages to be freed
> > >          __set_current_state(TASK_KILLABLE)
> > >          io_schedule_timeout(pause);
> > > ```
> > > 
> > > ``` xfs
> > > ksys_write
> > >   vfs_write
> > >    xfs_file_write_iter
> > >     if (xfs_is_shutdown(ip->i_mount))
> > >       return -EIO;    ---> dd fail
> > > ```
> > Thanks for the info which is really helpful for me to understand the
> > problem.
> > 
> > > > > balance_dirty_pages() is sleeping in KILLABLE state, so kill -9 of
> > > > > the dd process should succeed.
> > > > Yeah, dd can be killed; however, it may be any application(s), :-)
> > > > 
> > > > Fortunately it won't cause trouble during reboot/power off, given
> > > > userspace will be killed at that time.
> > > > 
> > > > 
> > > > 
> > > > Thanks,
> > > > Ming
> > > > 
> > > Don't worry about that, we always set the current thread to TASK_KILLABLE
> > > while waiting in balance_dirty_pages().
> > I have another concern: if 'dd' isn't killed, the dirty pages won't be
> > cleaned, and that (large amount of) memory becomes unusable; a typical
> > scenario could be a USB HDD being unplugged.
> > 
> > 
> > thanks,
> > Ming
> Yes, it is unreasonable to continue writing data with the previously opened
> fd after the file system becomes read-only, as that results in dirty page
> accumulation.
> 
> I provided a patch in another reply.
> Could you help test whether it solves your problem?
> If it does, I will formally send it to the mailing list.

OK, I will test it tomorrow.

But I am afraid it may not avoid the issue completely, because the old write
task hanging in balance_dirty_pages() may still write/dirty pages if it is
one very large write IO.

Thanks,
Ming




Thread overview: 29+ messages
2023-04-27  2:20 Ming Lei
2023-04-27  3:58 ` Matthew Wilcox
2023-04-27  4:50   ` Ming Lei
2023-04-27  6:36     ` Baokun Li
2023-04-27  7:33       ` Baokun Li
2023-04-27 10:01       ` Ming Lei
2023-04-27 11:19         ` Baokun Li
2023-04-27 11:27           ` Ming Lei [this message]
2023-04-28  1:41             ` Ming Lei
2023-04-28  3:47               ` Baokun Li
2023-04-28  5:47                 ` Theodore Ts'o
2023-04-29  3:16                   ` Ming Lei
2023-04-29  4:40                     ` Christoph Hellwig
2023-04-29  5:10                       ` Ming Lei
2023-05-01  4:47                         ` Christoph Hellwig
2023-05-02  0:57                           ` Ming Lei
2023-05-02  1:35                             ` Dave Chinner
2023-05-02 15:35                               ` Darrick J. Wong
2023-05-02 22:33                                 ` Dave Chinner
2023-05-02 23:27                                   ` Darrick J. Wong
2023-04-29  4:56                     ` Theodore Ts'o
2023-05-01  2:06                       ` Dave Chinner
2023-05-04  3:09                   ` Baokun Li
2023-04-27 23:33 ` Dave Chinner
2023-04-28  2:56   ` Matthew Wilcox
2023-04-28  5:24     ` Dave Chinner
2023-05-04 15:59 ` Keith Busch
2023-05-04 16:21   ` Matthew Wilcox
2023-05-05  2:06   ` Ming Lei
