Date: Thu, 27 Apr 2023 19:27:04 +0800
From: Ming Lei <ming.lei@redhat.com>
To: Baokun Li
Cc: Matthew Wilcox, Theodore Ts'o, linux-ext4@vger.kernel.org,
	Andreas Dilger, linux-block@vger.kernel.org, Andrew Morton,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, Dave Chinner,
	Eric Sandeen, Christoph Hellwig, Zhang Yi, yangerkun,
	ming.lei@redhat.com
Subject: Re: [ext4 io hang] buffered write io hang in balance_dirty_pages
References: <663b10eb-4b61-c445-c07c-90c99f629c74@huawei.com>
In-Reply-To: <663b10eb-4b61-c445-c07c-90c99f629c74@huawei.com>

On Thu, Apr 27, 2023 at 07:19:35PM +0800, Baokun Li wrote:
> On 2023/4/27 18:01, Ming Lei wrote:
> > On Thu, Apr 27, 2023 at 02:36:51PM +0800, Baokun Li wrote:
> > > On 2023/4/27 12:50, Ming Lei wrote:
> > > > Hello Matthew,
> > > >
> > > > On Thu, Apr 27, 2023 at 04:58:36AM +0100, Matthew Wilcox wrote:
> > > > > On Thu, Apr 27, 2023 at 10:20:28AM +0800, Ming Lei wrote:
> > > > > > Hello Guys,
> > > > > >
> > > > > > I got one report in which buffered write IO hangs in
> > > > > > balance_dirty_pages after one nvme block device is unplugged
> > > > > > physically; then umount can't succeed.
> > > > >
> > > > > That's a feature, not a bug ... the dd should continue indefinitely?
> > > >
> > > > Can you explain what the feature is? We don't see such an 'issue' or
> > > > 'feature' on xfs.
> > > >
> > > > The device has been gone, so IMO it is reasonable to see the FS
> > > > buffered write IO fail. Actually dmesg has shown 'EXT4-fs (nvme0n1):
> > > > Remounting filesystem read-only'. These things may confuse the user.
> > >
> > > The reason for this difference is that ext4 and xfs handle errors
> > > differently.
> > >
> > > ext4 remounts the filesystem read-only, or even just continues;
> > > vfs_write does not check for these.
> >
> > vfs_write may not find anything wrong, but the ext4 remount could see
> > that the disk is gone, which might happen during or after the remount,
> > however.
> >
> > > xfs shuts down the filesystem, so it returns a failure at
> > > xfs_file_write_iter when it finds an error.
> > >
> > > ``` ext4
> > > ksys_write
> > >  vfs_write
> > >   ext4_file_write_iter
> > >    ext4_buffered_write_iter
> > >     ext4_write_checks
> > >      file_modified
> > >       file_modified_flags
> > >        __file_update_time
> > >         inode_update_time
> > >          generic_update_time
> > >           __mark_inode_dirty
> > >            ext4_dirty_inode ---> 2. void func, does not propagate errors
> > >             __ext4_journal_start_sb
> > >              ext4_journal_check_start ---> 1. Error found, remount-ro
> > >     generic_perform_write ---> 3. No error sensed, continue
> > >      balance_dirty_pages_ratelimited
> > >       balance_dirty_pages_ratelimited_flags
> > >        balance_dirty_pages
> > >         // 4. Sleeping, waiting for dirty pages to be freed
> > >         __set_current_state(TASK_KILLABLE)
> > >         io_schedule_timeout(pause);
> > > ```
> > >
> > > ``` xfs
> > > ksys_write
> > >  vfs_write
> > >   xfs_file_write_iter
> > >    if (xfs_is_shutdown(ip->i_mount))
> > >      return -EIO;    ---> dd fails
> > > ```
> >
> > Thanks for the info, which is really helpful for me to understand the
> > problem.
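
To make the contrast concrete, the control flow above can be modeled in a
tiny userspace program. This is an illustrative sketch, not kernel code: the
*_like_* functions and the fs_shutdown/fs_readonly flags are invented names
standing in for the call chains quoted above.

```c
/*
 * Userspace model (not kernel code) of the two error-propagation
 * strategies quoted above. xfs_like_write() fails fast once the fs
 * is shut down; ext4_like_write() cannot see the journal error
 * because the ->dirty_inode hook is void, so it keeps dirtying pages.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

static bool fs_shutdown;        /* xfs: device gone -> fs shutdown    */
static bool fs_readonly;        /* ext4: device gone -> remount ro    */
static long nr_dirty_pages;

static int xfs_like_write(void)
{
	if (fs_shutdown)
		return -EIO;    /* write(2) fails, dd stops */
	nr_dirty_pages++;
	return 0;
}

static void ext4_like_dirty_inode(void)
{
	/* the journal-start check fails and triggers remount-ro,
	 * but this hook returns void, so the error dies here */
	if (fs_readonly)
		fprintf(stderr, "EXT4-fs: Remounting filesystem read-only\n");
}

static int ext4_like_write(void)
{
	ext4_like_dirty_inode();        /* error swallowed */
	nr_dirty_pages++;               /* write path continues */
	return 0;                       /* caller never sees a failure */
}

int main(void)
{
	fs_shutdown = fs_readonly = true;       /* nvme unplugged */

	printf("xfs-like write: %d\n", xfs_like_write());
	printf("ext4-like write: %d, dirty pages: %ld\n",
	       ext4_like_write(), nr_dirty_pages);
	return 0;
}
```

The only point of the model is the error flow: the xfs-style path hands -EIO
back to the caller, while the ext4-style path loses the error inside a void
hook, so generic_perform_write keeps producing dirty pages that can never
reach the disk.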

> > > > > balance_dirty_pages() is sleeping in KILLABLE state, so kill -9 of
> > > > > the dd process should succeed.
> > > >
> > > > Yeah, dd can be killed, however it may be any application(s), :-)
> > > >
> > > > Fortunately it won't cause trouble during reboot/power off, given
> > > > userspace will be killed at that time.
> > > >
> > > > Thanks,
> > > > Ming
> > >
> > > Don't worry about that, we always set the current thread to
> > > TASK_KILLABLE while waiting in balance_dirty_pages().
> >
> > I have another concern: if 'dd' isn't killed, dirty pages won't be
> > cleaned, and this (potentially large amount of) memory becomes unusable;
> > a typical scenario could be a USB HDD being unplugged.
> >
> > thanks,
> > Ming
>
> Yes, it is unreasonable to continue writing data through the previously
> opened fd after the file system becomes read-only, resulting in dirty
> page accumulation.
>
> I provided a patch in another reply.
> Could you help test if it can solve your problem?
> If it can indeed solve your problem, I will officially send it to the
> email list.

OK, I will test it tomorrow. But I am afraid it may not avoid the issue
completely, because the old write task hanging in balance_dirty_pages() may
still write/dirty pages if it is one very big write IO.

Thanks,
Ming
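
The TASK_KILLABLE wait at the center of the hang can be modeled in userspace
as well. Again an illustrative sketch, not kernel code: over_dirty_limit()
stands in for the dirty-threshold check, and a caught SIGTERM/SIGALRM stands
in for fatal_signal_pending(), since SIGKILL itself cannot be caught in
userspace.

```c
/*
 * Userspace model (not kernel code) of the killable wait in
 * balance_dirty_pages(). The device is gone, so the dirty limit is
 * never satisfied; only a fatal signal gets the writer out of the
 * loop. alarm(2) delivers the signal so the demo is self-contained.
 */
#include <signal.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t fatal_signal;

static void on_fatal(int sig)
{
	(void)sig;
	fatal_signal = 1;       /* models fatal_signal_pending() */
}

static bool over_dirty_limit(void)
{
	return true;            /* pages can never be written back */
}

static void balance_dirty_pages_model(void)
{
	while (over_dirty_limit()) {
		if (fatal_signal) {     /* re-checked between sleeps */
			fprintf(stderr, "writer killed, leaving throttle loop\n");
			return;
		}
		usleep(200 * 1000);     /* io_schedule_timeout(pause) */
	}
}

int main(void)
{
	signal(SIGTERM, on_fatal);      /* stand-in for SIGKILL */
	signal(SIGALRM, on_fatal);
	alarm(2);                       /* "kill" ourselves after 2s */
	balance_dirty_pages_model();
	return 0;
}
```

Because the loop re-checks for a fatal signal between bounded sleeps, kill -9
always gets the writer out; nothing else does once the backing device is
gone, which is exactly the dirty-page accumulation concern raised above.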