Date: Wed, 19 Oct 2022 12:16:36 +1100
From: Dave Chinner
To: Matthew Wilcox
Cc: Zhaoyang Huang, "zhaoyang.huang", Andrew Morton, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, ke.wang@unisoc.com, steve.kang@unisoc.com,
    baocong.liu@unisoc.com, linux-fsdevel@vger.kernel.org
Subject: Re: [RFC PATCH] mm: move xa forward when run across zombie page
Message-ID: <20221019011636.GM2703033@dread.disaster.area>
References: <1665725448-31439-1-git-send-email-zhaoyang.huang@unisoc.com>
 <20221018223042.GJ2703033@dread.disaster.area>
In-Reply-To: <20221018223042.GJ2703033@dread.disaster.area>

On Wed, Oct 19, 2022 at 09:30:42AM +1100, Dave Chinner wrote:
> On Tue, Oct 18, 2022 at 04:09:17AM +0100, Matthew Wilcox wrote:
> > On Tue, Oct 18, 2022 at 10:52:19AM +0800, Zhaoyang Huang wrote:
> > > On Mon, Oct 17, 2022 at 11:55 PM Matthew Wilcox wrote:
> > > >
> > > > On Mon, Oct 17, 2022 at 01:34:13PM +0800, Zhaoyang Huang wrote:
> > > > > On Fri, Oct 14, 2022 at 8:12 PM Matthew Wilcox wrote:
> > > > > >
> > > > > > On Fri, Oct 14, 2022 at 01:30:48PM +0800, zhaoyang.huang wrote:
> > > > > > > From: Zhaoyang Huang
> > > > > > >
> > > > > > > The RCU stall below is reported when kswapd gets trapped in a
> > > > > > > live lock while shrinking a superblock's inode list. The
> > > > > > > direct cause is a zombie page that stays in the xarray's slot
> > > > > > > and keeps the check-and-retry loop spinning forever. The root
> > > > > > > cause is not known yet; it is suspected to be an xa update
> > > > > > > without synchronize_rcu etc. I would like to suggest skipping
> > > > > > > this page to break the live lock as a workaround.
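(For context, the "check-and-retry loop" referred to above is the RCU page
cache lookup. The fragment below is a stripped-down editorial sketch of that
pattern - loosely modelled on find_get_entry() in mm/filemap.c, with mark
handling, shadow-entry details and locking elided, and with a made-up
function name - it is neither the mainline code nor the actual RFC diff:)

	static struct folio *lookup_sketch(struct xa_state *xas, pgoff_t max)
	{
		struct folio *folio;
	retry:
		folio = xas_find(xas, max);
		if (xas_retry(xas, folio))
			goto retry;
		/* Shadow, swap and DAX value entries are returned as-is. */
		if (!folio || xa_is_value(folio))
			return folio;

		if (!folio_try_get_rcu(folio))
			goto reset;
		if (unlikely(folio != xas_reload(xas))) {
			/* Raced with removal or replacement: drop and retry. */
			folio_put(folio);
			goto reset;
		}
		return folio;

	reset:
		/*
		 * A "zombie" folio - still present in the slot but with a
		 * zero refcount, so folio_try_get_rcu() fails - sends the
		 * lookup back to the same index every time.  If that entry
		 * is never removed or replaced, the loop never terminates:
		 * that is the live lock described above.  The RFC's
		 * suggestion is, roughly, to advance the xa_state past the
		 * offending index here instead of retrying it.
		 */
		xas_reset(xas);
		goto retry;
	}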
> > > > > >
> > > > > > No, the underlying bug should be fixed.
> > > >
> > > >     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > Understood. IMHO, find_get_entry actually works as an open API dealing
> > > with many different kinds of address_space page caches, which requires
> > > it to be robust against any corner case. Take the current problem as an
> > > example: the inode with the faulted page (refcount=0) could remain on
> > > the sb's list without the live lock problem.
> >
> > But it's a corner case that shouldn't happen!  What else is going on
> > at the time?  Can you reproduce this problem easily?  If so, how?
>
> I've been seeing this livelock, too. The reproducer is,
> unfortunately, something I can't share - it's a massive program that
> triggers a data corruption I'm working on solving.
>
> Now that I've mostly fixed the data corruption, long duration test
> runs end up livelocking in page cache lookup after several hours.
>
> The test is effectively writing a 100MB file with multiple threads
> doing reverse adjacent racing 1MB unaligned writes. Once the file is
> written, it is then mmap()d and read back from the filesystem for
> verification.
>
> This is then run with tens of processes concurrently, and then under
> a massively confined memcg (e.g. 32 processes/files are run in a
> memcg with only 200MB of memory allowed). This causes writeback,
> readahead and memory reclaim to race with incoming mmap read faults
> and writes. The livelock occurs on file verification and it appears
> to be an interaction with readahead thrashing.
>
> On my test rig, the physical read to write ratio is at least 20:1 -
> with 32 processes running, the 5s IO rates are:
>
> Device       tps   MB_read/s  MB_wrtn/s  MB_dscd/s  MB_read  MB_wrtn  MB_dscd
> dm-0    52187.20     3677.42    1345.92       0.00    18387     6729        0
> dm-0    62865.60     5947.29       0.08       0.00    29736        0        0
> dm-0    62972.80     5911.20       0.00       0.00    29556        0        0
> dm-0    59803.00     5516.72     133.47       0.00    27583      667        0
> dm-0    63068.20     5292.34     511.52       0.00    26461     2557        0
> dm-0    56775.60     4184.52    1248.38       0.00    20922     6241        0
> dm-0    63087.40     5901.26      43.77       0.00    29506      218        0
> dm-0    62769.00     5833.97      60.54       0.00    29169      302        0
> dm-0    64810.20     5636.13     305.63       0.00    28180     1528        0
> dm-0    65222.60     5598.99     349.48       0.00    27994     1747        0
> dm-0    62444.00     4887.05     926.67       0.00    24435     4633        0
> dm-0    63812.00     5622.68     294.66       0.00    28113     1473        0
> dm-0    63482.00     5728.43     195.74       0.00    28642      978        0
>
> This is reading and writing the same amount of file data at the
> application level, but once the data has been written and kicked out
> of the page cache it seems to require an awful lot more read IO to
> get it back to the application. i.e. this looks like mmap() is
> readahead thrashing severely, and eventually it livelocks with this
> sort of report:
>
> [175901.982484] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> [175901.985095] rcu:    Tasks blocked on level-1 rcu_node (CPUs 0-15): P25728
> [175901.987996]         (detected by 0, t=97399871 jiffies, g=15891025, q=1972622 ncpus=32)
> [175901.991698] task:test_write      state:R  running task   stack:12784 pid:25728 ppid: 25696 flags:0x00004002
> [175901.995614] Call Trace:
> [175901.996090]  <TASK>
> [175901.996594]  ? __schedule+0x301/0xa30
> [175901.997411]  ? sysvec_apic_timer_interrupt+0xb/0x90
> [175901.998513]  ? sysvec_apic_timer_interrupt+0xb/0x90
> [175901.999578]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
> [175902.000714]  ? xas_start+0x53/0xc0
> [175902.001484]  ? xas_load+0x24/0xa0
> [175902.002208]  ? xas_load+0x5/0xa0
> [175902.002878]  ? __filemap_get_folio+0x87/0x340
> [175902.003823]  ? filemap_fault+0x139/0x8d0
> [175902.004693]  ? __do_fault+0x31/0x1d0
> [175902.005372]  ? __handle_mm_fault+0xda9/0x17d0
> [175902.006213]  ? handle_mm_fault+0xd0/0x2a0
> [175902.006998]  ? exc_page_fault+0x1d9/0x810
> [175902.007789]  ? asm_exc_page_fault+0x22/0x30
> [175902.008613]  </TASK>
>
> Given that filemap_fault on XFS is probably trying to map large
> folios, I do wonder if this is a result of some kind of race with
> teardown of a large folio...
>
> There is a very simple corruption reproducer script that has been
> written, but I haven't been using it. I don't know if long term
> running of the script here:
>
> https://lore.kernel.org/linux-xfs/d00aff43-2bdc-0724-1996-4e58e061ecfd@redhat.com/
>
> will trigger the livelock as the verification step is
> significantly different, but it will give you insight into the
> setup of the environment that leads to the livelock. Maybe you could
> replace the md5sum verification with a mmap read with xfs_io to
> simulate the fault load that seems to lead to this issue...
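(Editorial illustration only: the same kind of mmap read fault load can be
generated without xfs_io by a tiny userspace program that maps the file and
touches every page. The sketch below assumes nothing beyond a file name on
the command line; it is not part of the reproducer script:)

	/* mmap-read a file to generate the page fault / readahead load. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <sys/stat.h>
	#include <unistd.h>

	int main(int argc, char **argv)
	{
		struct stat st;
		unsigned long sum = 0;
		long pagesize = sysconf(_SC_PAGESIZE);
		unsigned char *p;
		off_t off;
		int fd;

		if (argc != 2) {
			fprintf(stderr, "usage: %s <file>\n", argv[0]);
			return 1;
		}
		fd = open(argv[1], O_RDONLY);
		if (fd < 0 || fstat(fd, &st) < 0) {
			perror(argv[1]);
			return 1;
		}
		p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		/* Touch one byte per page; each touch may fault and drive readahead. */
		for (off = 0; off < st.st_size; off += pagesize)
			sum += p[off];
		printf("read %lld bytes, byte sum %lu\n", (long long)st.st_size, sum);
		munmap(p, st.st_size);
		close(fd);
		return 0;
	}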
FWIW, just tested this on a current Linus kernel. While there is
massive read-ahead thrashing on v6.0, the thrashing is largely gone
in v6.1-rc1+ and the iteration rate of the test is much, much better.
The livelock remains, however.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com