Date: Wed, 1 Nov 2023 19:23:05 +0800
From: Ming Lei <ming.lei@redhat.com>
To: Hannes Reinecke
Cc: Ming Lei, Marek Marczykowski-Górecki, Jan Kara, Mikulas Patocka,
	Vlastimil Babka, Andrew Morton, Matthew Wilcox, Michal Hocko,
	stable@vger.kernel.org, regressions@lists.linux.dev,
	Alasdair Kergon, Mike Snitzer, dm-devel@lists.linux.dev,
	linux-mm@kvack.org, linux-block@vger.kernel.org,
	linux-nvme@lists.infradead.org
Subject: Re: Intermittent storage (dm-crypt?) freeze - regression 6.4->6.5
References: <20231030155603.k3kejytq2e4vnp7z@quack3>
	<98aefaa9-1ac-a0e4-fb9a-89ded456750@redhat.com>
	<20231031140136.25bio5wajc5pmdtl@quack3>

On Wed, Nov 01, 2023 at 11:15:02AM +0100, Hannes Reinecke wrote:
> On 11/1/23 04:24, Ming Lei wrote:
> > On Wed, Nov 01, 2023 at 03:14:22AM +0100, Marek Marczykowski-Górecki wrote:
> > > On Wed, Nov 01, 2023 at 09:27:24AM +0800, Ming Lei wrote:
> > > > On Tue, Oct 31, 2023 at 11:42 PM Marek Marczykowski-Górecki wrote:
> > > > > On Tue, Oct 31, 2023 at 03:01:36PM +0100, Jan Kara wrote:
> > > > > > On Tue 31-10-23 04:48:44, Marek Marczykowski-Górecki wrote:
> > > > > > > Then tried:
> > > > > > > - PAGE_ALLOC_COSTLY_ORDER=4, order=4 - cannot reproduce,
> > > > > > > - PAGE_ALLOC_COSTLY_ORDER=4, order=5 - cannot reproduce,
> > > > > > > - PAGE_ALLOC_COSTLY_ORDER=4, order=6 - freeze rather quickly
> > > > > > >
> > > > > > > I've retried the PAGE_ALLOC_COSTLY_ORDER=4, order=5 case several times
> > > > > > > and I can't reproduce the issue there. I'm confused...
> > > > > >
> > > > > > And this kind of confirms that allocations > PAGE_ALLOC_COSTLY_ORDER
> > > > > > causing hangs is most likely just a coincidence. Rather, something
> > > > > > either in the block layer or in the storage driver has problems with
> > > > > > handling bios with sufficiently high order pages attached. This is
> > > > > > going to be a bit painful to debug, I'm afraid. How long does it take
> > > > > > for you to trigger the hang? I'm asking to get a rough estimate of
> > > > > > how heavy tracing we can afford, so that we don't overwhelm the
> > > > > > system...
> > > > >
> > > > > Sometimes it freezes just after logging in, but in the worst case it
> > > > > takes me about 10 min of more or less `tar xz` + `dd`.
> > > >
> > > > blk-mq debugfs is usually helpful for hang issues in the block layer or
> > > > underlying drivers:
> > > >
> > > > (cd /sys/kernel/debug/block && find . -type f -exec grep -aH . {} \;)
> > > >
> > > > BTW, you can collect logs for just the disks behind dm-crypt, which can
> > > > be figured out with `lsblk`; the logs have to be collected after the
> > > > hang is triggered.
> > >
> > > dm-crypt lives on the nvme disk; this is what I collected when it hung:
> > >
> > ...
> > > nvme0n1/hctx4/cpu4/default_rq_list:000000000d41998f {.op=READ, .cmd_flags=, .rq_flags=IO_STAT, .state=idle, .tag=65, .internal_tag=-1}
> > > nvme0n1/hctx4/cpu4/default_rq_list:00000000d0d04ed2 {.op=READ, .cmd_flags=, .rq_flags=IO_STAT, .state=idle, .tag=70, .internal_tag=-1}
> >
> > Two requests stay in the sw queue, but they are not related to this issue.
> >
> > > nvme0n1/hctx4/type:default
> > > nvme0n1/hctx4/dispatch_busy:9
> >
> > Non-zero dispatch_busy means that BLK_STS_RESOURCE has been returned from
> > nvme_queue_rq() recently, and often.
> >
> > > nvme0n1/hctx4/active:0
> > > nvme0n1/hctx4/run:20290468
> > ...
> > > nvme0n1/hctx4/tags:nr_tags=1023
> > > nvme0n1/hctx4/tags:nr_reserved_tags=0
> > > nvme0n1/hctx4/tags:active_queues=0
> > > nvme0n1/hctx4/tags:bitmap_tags:
> > > nvme0n1/hctx4/tags:depth=1023
> > > nvme0n1/hctx4/tags:busy=3
> >
> > Just three requests are in flight: two are in the sw queue, and the other
> > one is in hctx->dispatch.
> >
> > ...
> >
> > > nvme0n1/hctx4/dispatch:00000000b335fa89 {.op=WRITE, .cmd_flags=NOMERGE, .rq_flags=DONTPREP|IO_STAT, .state=idle, .tag=78, .internal_tag=-1}
> > > nvme0n1/hctx4/flags:alloc_policy=FIFO SHOULD_MERGE
> > > nvme0n1/hctx4/state:SCHED_RESTART
> >
> > The request staying in hctx->dispatch can't move on, and nvme_queue_rq()
> > returns BLK_STS_RESOURCE constantly; you can verify this with the
> > following bpftrace one-liner once the hang is triggered:
> >
> > bpftrace -e 'kretfunc:nvme_queue_rq { @[retval, kstack]=count() }'
> >
> > It is very likely that a memory allocation inside nvme_queue_rq() can't
> > complete successfully, so blk-mq just has to retry by calling
> > nvme_queue_rq() on the above request again.
>
> And that is something I've been wondering (for quite some time now):
> what _is_ the appropriate error handling for -ENOMEM?

It is just my guess.
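One way to check that guess from user space is to narrow the one-liner above
to just the BLK_STS_RESOURCE returns. A minimal sketch, assuming
BLK_STS_RESOURCE is still 9 in include/linux/blk_types.h on your kernel and
that BTF is available so kfunc probes work:

bpftrace -e 'kretfunc:nvme_queue_rq /retval == 9/ { @resource_stacks[kstack] = count(); }'

If the map keeps growing for the same stack while tags:busy stays at 3, the
dispatch path is resubmitting the same request without making any progress.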
Actually it shouldn't fail, since the sgl allocation is backed by a memory
pool, but there are also the dma pool allocation and the dma mapping.

> At this time, we assume it to be a retryable error and re-run the queue
> in the hope that things will sort themselves out.

It should not be hard to figure out why nvme_queue_rq() can't move on.

> But if they don't, we're stuck.
> Can we somehow figure out if we make progress during submission, and (at
> least) issue a warning once we detect a stall?

That needs counting of request retries, and people often hate to add
something to the request or bio in the fast path. Also, this kind of issue
is easy to show in blk-mq debugfs or with bpftrace; one example follows
after my signature.

Thanks,
Ming
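P.S. Progress (or the lack of it) during submission can be watched without
adding any counter to the request or bio. A rough sketch, again assuming
BTF-based kfunc probes are available; it prints submission outcomes (keyed by
blk_status_t, so BLK_STS_RESOURCE shows up as 9) alongside completion counts
every 10 seconds:

bpftrace -e '
kretfunc:nvme_queue_rq { @submit[retval] = count(); }
tracepoint:block:block_rq_complete { @complete = count(); }
interval:s:10 { print(@submit); print(@complete); clear(@submit); clear(@complete); }'

If @submit keeps accumulating status 9 while @complete stays flat, submission
is spinning without forward progress, which is exactly the stall signature
discussed above.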