Subject: Re: [PATCH v3] xfs: avoid deadlock when trigger memory reclaim in ->writepages
To: Yafang Shao, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
From: Holger Hoffstätte
Organization: Applied Asynchrony, Inc.
Date: Mon, 15 Jun 2020 16:25:52 +0200
In-Reply-To: <1592222181-9832-1-git-send-email-laoar.shao@gmail.com>

On 2020-06-15 13:56, Yafang Shao wrote:
> Recently there was an XFS deadlock on our server with an old kernel.
> This deadlock was caused by allocating memory in xfs_map_blocks() while
> doing writeback on behalf of memory reclaim. Although this deadlock happened
> on an old kernel, I think it could happen on upstream as well. The
> issue only happened once and can't be reproduced, so I haven't tried to
> reproduce it on an upstream kernel.
>
> Below is the call trace of this deadlock.
> [480594.790087] INFO: task redis-server:16212 blocked for more than 120 seconds.
> [480594.790087] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [480594.790088] redis-server    D ffffffff8168bd60     0 16212  14347 0x00000004
> [480594.790090]  ffff880da128f070 0000000000000082 ffff880f94a2eeb0 ffff880da128ffd8
> [480594.790092]  ffff880da128ffd8 ffff880da128ffd8 ffff880f94a2eeb0 ffff88103f9d6c40
> [480594.790094]  0000000000000000 7fffffffffffffff ffff88207ffc0ee8 ffffffff8168bd60
> [480594.790096] Call Trace:
> [480594.790101]  [] schedule+0x29/0x70
> [480594.790103]  [] schedule_timeout+0x239/0x2c0
> [480594.790111]  [] io_schedule_timeout+0xae/0x130
> [480594.790114]  [] io_schedule+0x18/0x20
> [480594.790116]  [] bit_wait_io+0x11/0x50
> [480594.790118]  [] __wait_on_bit+0x65/0x90
> [480594.790121]  [] wait_on_page_bit+0x81/0xa0
> [480594.790125]  [] shrink_page_list+0x6d2/0xaf0
> [480594.790130]  [] shrink_inactive_list+0x223/0x710
> [480594.790135]  [] shrink_lruvec+0x3b5/0x810
> [480594.790139]  [] shrink_zone+0xba/0x1e0
> [480594.790141]  [] do_try_to_free_pages+0x100/0x510
> [480594.790143]  [] try_to_free_mem_cgroup_pages+0xdd/0x170
> [480594.790145]  [] mem_cgroup_reclaim+0x4e/0x120
> [480594.790147]  [] __mem_cgroup_try_charge+0x41c/0x670
> [480594.790153]  [] __memcg_kmem_newpage_charge+0xf6/0x180
> [480594.790157]  [] __alloc_pages_nodemask+0x22d/0x420
> [480594.790162]  [] alloc_pages_current+0xaa/0x170
> [480594.790165]  [] new_slab+0x30c/0x320
> [480594.790168]  [] ___slab_alloc+0x3ac/0x4f0
> [480594.790204]  [] __slab_alloc+0x40/0x5c
> [480594.790206]  [] kmem_cache_alloc+0x193/0x1e0
> [480594.790233]  [] kmem_zone_alloc+0x97/0x130 [xfs]
> [480594.790247]  [] _xfs_trans_alloc+0x3a/0xa0 [xfs]
> [480594.790261]  [] xfs_trans_alloc+0x3c/0x50 [xfs]
> [480594.790276]  [] xfs_iomap_write_allocate+0x1cb/0x390 [xfs]
> [480594.790299]  [] xfs_map_blocks+0x1a6/0x210 [xfs]
> [480594.790312]  [] xfs_do_writepage+0x17b/0x550 [xfs]
> [480594.790314]  [] write_cache_pages+0x251/0x4d0 [xfs]
> [480594.790338]  [] xfs_vm_writepages+0xc5/0xe0 [xfs]
>
> [480594.790341]  [] do_writepages+0x1e/0x40
> [480594.790343]  [] __filemap_fdatawrite_range+0x65/0x80
> [480594.790346]  [] filemap_write_and_wait_range+0x41/0x90
> [480594.790360]  [] xfs_file_fsync+0x66/0x1e0 [xfs]
> [480594.790363]  [] do_fsync+0x65/0xa0
> [480594.790365]  [] SyS_fdatasync+0x13/0x20
> [480594.790367]  [] system_call_fastpath+0x16/0x1b
>
> Note that xfs_iomap_write_allocate() was replaced by xfs_convert_blocks() in
> commit 4ad765edb02a ("xfs: move xfs_iomap_write_allocate to xfs_aops.c")
> and write_cache_pages() was replaced by iomap_writepages() in
> commit 598ecfbaa742 ("iomap: lift the xfs writeback code to iomap").
> So on upstream the call trace would be:
> xfs_vm_writepages
> -> iomap_writepages
> -> write_cache_pages
> -> iomap_do_writepage
> -> xfs_map_blocks
> -> xfs_convert_blocks
> -> xfs_bmapi_convert_delalloc
> -> xfs_trans_alloc // should allocate pages with GFP_NOFS
>
> I'm not sure whether it is proper to add GFP_NOFS to all the
> ->writepages implementations, so I only add it for XFS.
>
> Stefan also reported that he saw this issue twice while under memory
> pressure, so I added his Reported-by.
>
> Reported-by: Stefan Priebe - Profihost AG
> Signed-off-by: Yafang Shao
>
> ---
> v2 -> v3:
> - retitle the subject from "iomap: avoid deadlock if memory reclaim is
>   triggered in writepage path"
> - set GFP_NOFS only for XFS ->writepages
>
> v1 -> v2:
> - retitle the subject from "xfs: avoid deadlock when tigger memory reclam
>   in xfs_map_blocks()"
> - set GFP_NOFS in iomap_do_writepage(), per Dave
> ---
>  fs/xfs/xfs_aops.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> index b356118..1ccfbf2 100644
> --- a/fs/xfs/xfs_aops.c
> +++ b/fs/xfs/xfs_aops.c
> @@ -573,9 +573,21 @@ static inline bool xfs_ioend_needs_workqueue(struct iomap_ioend *ioend)
>  	struct writeback_control *wbc)
>  {
>  	struct xfs_writepage_ctx wpc = { };
> +	unsigned int		nofs_flag;
> +	int			ret;
>
>  	xfs_iflags_clear(XFS_I(mapping->host), XFS_ITRUNCATED);
> -	return iomap_writepages(mapping, wbc, &wpc.ctx, &xfs_writeback_ops);
> +
> +	/*
> +	 * We can allocate memory here while doing writeback on behalf of
> +	 * memory reclaim. To avoid memory allocation deadlocks set the
> +	 * task-wide nofs context for the following operations.
> +	 */
> +	nofs_flag = memalloc_nofs_save();
> +	ret = iomap_writepages(mapping, wbc, &wpc.ctx, &xfs_writeback_ops);
> +	memalloc_nofs_restore(nofs_flag);
> +
> +	return ret;
>  }
>
>  STATIC int

Not sure if I did something wrong, but while the previous version of this
patch worked fine, this one gave me (with v2 removed, obviously):

[ +0.000004] WARNING: CPU: 0 PID: 2811 at fs/iomap/buffered-io.c:1544 iomap_do_writepage+0x6b4/0x780
[ +0.000001] Modules linked in: tcp_bbr2 sch_fq_codel nct6775 hwmon_vid btrfs blake2b_generic xor zstd_compress x86_pkg_temp_thermal drivetemp coretemp crc32_pclmul raid6_pq zstd_decompress aesni_intel i915 crypto_simd cryptd glue_helper intel_gtt bfq i2c_algo_bit iosf_mbi drm_kms_helper i2c_i801 mq_deadline syscopyarea sysfillrect usbhid sysimgblt fb_sys_fops drm drm_panel_orientation_quirks i2c_core atlantic video backlight
[ +0.000013] CPU: 0 PID: 2811 Comm: dmesg Not tainted 5.7.2 #1
[ +0.000000] Hardware name: System manufacturer System Product Name/P8Z68-V LX, BIOS 4105 07/01/2013
[ +0.000002] RIP: 0010:iomap_do_writepage+0x6b4/0x780
[ +0.000001] Code: ff e9 bf fc ff ff 48 8b 44 24 48 48 39 44 24 28 0f 84 f7 fb ff ff 0f 0b e9 f0 fb ff ff 4c 8b 64 24 10 41 89 c7 e9 d8 fb ff ff <0f> 0b eb 88 8b 54 24 24 85 d2 75 5f 49 8b 45 50 48 8b 40 10 48 85
[ +0.000001] RSP: 0018:ffffc90000277ae8 EFLAGS: 00010206
[ +0.000001] RAX: 0000000000444004 RBX: ffffc90000277bc0 RCX: 0000000000000018
[ +0.000000] RDX: 0000000000000000 RSI: ffffc90000277d58 RDI: ffffea001fd25e00
[ +0.000001] RBP: ffff8887f7edfd40 R08: ffffffffffffffff R09: 0000000000000006
[ +0.000001] R10: ffff88881f5dd000 R11: 000000000000020d R12: ffffea001fd25e00
[ +0.000000] R13: ffffc90000277c80 R14: ffff8887fb95e800 R15: ffffea001fd25e00
[ +0.000001] FS:  0000000000000000(0000) GS:ffff8887ff600000(0000) knlGS:0000000000000000
[ +0.000001] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ +0.000000] CR2: 00007f836c0915e8 CR3: 0000000002009005 CR4: 00000000000606f0
[ +0.000001] Call Trace:
[ +0.000006]  ? page_mkclean+0x5e/0x90
[ +0.000003]  ? clear_page_dirty_for_io+0x18a/0x1d0
[ +0.000002]  write_cache_pages+0x196/0x400
[ +0.000001]  ? iomap_readpages+0x180/0x180
[ +0.000003]  iomap_writepages+0x1c/0x40
[ +0.000003]  xfs_vm_writepages+0x68/0x90
[ +0.000002]  do_writepages+0x25/0xa0
[ +0.000002]  __filemap_fdatawrite_range+0xa7/0xe0
[ +0.000002]  xfs_release+0x11b/0x160
[ +0.000002]  __fput+0xd7/0x240
[ +0.000003]  task_work_run+0x5c/0x80
[ +0.000001]  do_exit+0x33b/0x9c0
[ +0.000001]  do_group_exit+0x33/0x90
[ +0.000002]  __x64_sys_exit_group+0x14/0x20
[ +0.000001]  do_syscall_64+0x4e/0x310
[ +0.000003]  ? do_user_addr_fault+0x1df/0x460
[ +0.000004]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ +0.000001] RIP: 0033:0x7f836bf73489
[ +0.000003] Code: Bad RIP value.
[ +0.000000] RSP: 002b:00007fff90ec3b98 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
[ +0.000001] RAX: ffffffffffffffda RBX: 00007f836c0626e0 RCX: 00007f836bf73489
[ +0.000001] RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000000
[ +0.000001] RBP: 00007f836c0626e0 R08: ffffffffffffff80 R09: 00007fff90ec3a60
[ +0.000000] R10: 0000000000000003 R11: 0000000000000246 R12: 0000000000000000
[ +0.000001] R13: 0000000000000000 R14: 0000000000000002 R15: 0000000000000000
[ +0.000001] ---[ end trace aed8763335accf60 ]---

...and hosed the fs by eating any writeback, zeroing out $things.
0/10 cats, do not want.

-h