From: Barry Song <21cnbao@gmail.com>
Date: Mon, 9 Jun 2025 20:55:22 +1200
Subject: Re: [PATCH] mm/shmem, swap: fix softlockup with mTHP swapin
To: Baolin Wang
Cc: Kairui Song, linux-mm@kvack.org, Andrew Morton, Hugh Dickins,
 Kemeng Shi, Chris Li, Nhat Pham, Baoquan He, Usama Arif,
 linux-kernel@vger.kernel.org
In-Reply-To: <1452d0c6-50ab-4680-9aa9-13290d51177d@linux.alibaba.com>
References: <20250608192713.95875-1-ryncsn@gmail.com>
 <36f52466-071a-4efb-adc2-8514b11f120c@linux.alibaba.com>
 <1452d0c6-50ab-4680-9aa9-13290d51177d@linux.alibaba.com>

On Mon, Jun 9, 2025 at 8:49 PM Baolin Wang wrote:
>
> On 2025/6/9 16:36, Kairui Song wrote:
> > On Mon, Jun 9, 2025 at 4:27 PM Baolin Wang wrote:
> >> On 2025/6/9 03:27, Kairui Song wrote:
> >>> From: Kairui Song
> >>>
> >>> The following softlockup can be easily reproduced on my test machine with:
> >>>
> >>> echo always > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled
> >>> swapon /dev/zram0 # zram0 is a 48G swap device
> >>> mkdir -p /sys/fs/cgroup/memory/test
> >>> echo 1G > /sys/fs/cgroup/test/memory.max
> >>> echo $BASHPID > /sys/fs/cgroup/test/cgroup.procs
> >>> while true; do
> >>>         dd if=/dev/zero of=/tmp/test.img bs=1M count=5120
> >>>         cat /tmp/test.img > /dev/null
> >>>         rm /tmp/test.img
> >>> done
> >>>
> >>> Then after a while:
> >>> watchdog: BUG: soft lockup - CPU#0 stuck for 763s! [cat:5787]
> >>> Modules linked in: zram virtiofs
> >>> CPU: 0 UID: 0 PID: 5787 Comm: cat Kdump: loaded Tainted: G    L    6.15.0.orig-gf3021d9246bc-dirty #118 PREEMPT(voluntary)
> >>> Tainted: [L]=SOFTLOCKUP
> >>> Hardware name: Red Hat KVM/RHEL-AV, BIOS 0.0.0 02/06/2015
> >>> RIP: 0010:mpol_shared_policy_lookup+0xd/0x70
> >>> Code: e9 b8 b4 ff ff 31 c0 c3 cc cc cc cc 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 66 0f 1f 00 0f 1f 44 00 00 41 54 55 53 <48> 8b 1f 48 85 db 74 41 4c 8d 67 08 48 89 fb 48 89 f5 4c 89 e7 e8
> >>> RSP: 0018:ffffc90002b1fc28 EFLAGS: 00000202
> >>> RAX: 00000000001c20ca RBX: 0000000000724e1e RCX: 0000000000000001
> >>> RDX: ffff888118e214c8 RSI: 0000000000057d42 RDI: ffff888118e21518
> >>> RBP: 000000000002bec8 R08: 0000000000000001 R09: 0000000000000000
> >>> R10: 0000000000000bf4 R11: 0000000000000000 R12: 0000000000000001
> >>> R13: 00000000001c20ca R14: 00000000001c20ca R15: 0000000000000000
> >>> FS:  00007f03f995c740(0000) GS:ffff88a07ad9a000(0000) knlGS:0000000000000000
> >>> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> >>> CR2: 00007f03f98f1000 CR3: 0000000144626004 CR4: 0000000000770eb0
> >>> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> >>> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> >>> PKRU: 55555554
> >>> Call Trace:
> >>>
> >>>  shmem_alloc_folio+0x31/0xc0
> >>>  shmem_swapin_folio+0x309/0xcf0
> >>>  ? filemap_get_entry+0x117/0x1e0
> >>>  ? xas_load+0xd/0xb0
> >>>  ? filemap_get_entry+0x101/0x1e0
> >>>  shmem_get_folio_gfp+0x2ed/0x5b0
> >>>  shmem_file_read_iter+0x7f/0x2e0
> >>>  vfs_read+0x252/0x330
> >>>  ksys_read+0x68/0xf0
> >>>  do_syscall_64+0x4c/0x1c0
> >>>  entry_SYSCALL_64_after_hwframe+0x76/0x7e
> >>> RIP: 0033:0x7f03f9a46991
> >>> Code: 00 48 8b 15 81 14 10 00 f7 d8 64 89 02 b8 ff ff ff ff eb bd e8 20 ad 01 00 f3 0f 1e fa 80 3d 35 97 10 00 00 74 13 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 4f c3 66 0f 1f 44 00 00 55 48 89 e5 48 83 ec
> >>> RSP: 002b:00007fff3c52bd28 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
> >>> RAX: ffffffffffffffda RBX: 0000000000040000 RCX: 00007f03f9a46991
> >>> RDX: 0000000000040000 RSI: 00007f03f98ba000 RDI: 0000000000000003
> >>> RBP: 00007fff3c52bd50 R08: 0000000000000000 R09: 00007f03f9b9a380
> >>> R10: 0000000000000022 R11: 0000000000000246 R12: 0000000000040000
> >>> R13: 00007f03f98ba000 R14: 0000000000000003 R15: 0000000000000000
> >>>
> >>> The reason is simple: readahead brought some order 0 folios into the swap
> >>> cache, and the swapin mTHP folio being allocated conflicts with them,
> >>> so swapcache_prepare() fails and causes shmem_swap_alloc_folio() to
> >>> return -EEXIST, and shmem simply retries again and again, causing this loop.
> >>
> >> If swapcache_prepare() fails and retries, the folio's order (order 0)
> >> obtained from the swapcache will differ from the order stored in the
> >> shmem mapping, so we will split the large swap entry via the following
> >> logic in shmem_swapin_folio(). So I am not sure why this causes a softlockup?
> >>
> >>                 } else if (order != folio_order(folio)) {
> >>                         /*
> >>                          * Swap readahead may swap in order 0 folios into the swapcache
> >>                          * asynchronously, while the shmem mapping can still store
> >>                          * large swap entries. In such cases, we should split the
> >>                          * large swap entry to prevent possible data corruption.
> >>                          */
> >>                         split_order = shmem_split_large_entry(inode, index, swap, gfp);
> >>                         if (split_order < 0) {
> >>                                 error = split_order;
> >>                                 goto failed;
> >>                         }
> >>
> >>                         /*
> >>                          * If the large swap entry has already been split, it is
> >>                          * necessary to recalculate the new swap entry based on
> >>                          * the old order alignment.
> >>                          */
> >>                         if (split_order > 0) {
> >>                                 pgoff_t offset = index - round_down(index, 1 << split_order);
> >>
> >>                                 swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
> >>                         }
> >>                 }
> >
> > For example, if the swap entry is 0x0 in shmem with order 4 (so it
> > corresponds to swap entries 0x0 - 0x10), and an order 0 folio is
> > currently cached with swap entry 0xa, then shmem swapin will try to
> > use a folio with order 4. That will always fail swapcache_prepare(),
> > but a filemap/swapcache lookup using entry 0x0 will return NULL,
> > causing a loop.
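To make the retry loop concrete, here is a minimal userspace sketch of
the livelock under the scenario above (illustrative only - the names,
offsets and fallback are made up for the example; this is not the
actual kernel code path):

#include <stdbool.h>
#include <stdio.h>

#define ORDER    4
#define NR_PAGES (1 << ORDER)

static bool swapcache[NR_PAGES]; /* stands in for SWAP_HAS_CACHE per offset */

/* Mimics swapcache_prepare(): claim all slots, fail if any is cached. */
static bool swapcache_prepare_range(int start, int nr)
{
        for (int i = start; i < start + nr; i++)
                if (swapcache[i])
                        return false; /* the kernel returns -EEXIST here */
        return true;
}

int main(void)
{
        swapcache[0xa] = true; /* order-0 readahead folio at offset 0xa */

        for (int tries = 0; tries < 3; tries++) {
                /* cache lookup at the large entry's head (0x0) misses */
                if (swapcache[0x0])
                        break; /* would reuse the cached folio, progress */

                /* allocate an order-4 folio, try to claim 0x0 - 0xf */
                if (swapcache_prepare_range(0x0, NR_PAGES)) {
                        printf("mTHP swapin succeeded\n");
                        return 0;
                }
                printf("attempt %d: -EEXIST, retry at the same order\n",
                       tries);
                /* without the fix this repeats forever -> soft lockup */
        }
        printf("fixed behaviour: fall back to order 0 instead of looping\n");
        return 0;
}

With the fix, the -EEXIST case falls back to order 0, so the lookup
eventually hits the cached folio at 0xa and makes progress.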
> OK. Thanks for the explanation.
>
> >>> Fix it by applying a fix similar to the one for anon mTHP swapin.
> >>>
> >>> The performance change is very slight: time to swap in 10G of zero
> >>> folios (tested 12 times):
> >>> Before: 2.49s
> >>> After:  2.52s
> >>>
> >>> Fixes: 1dd44c0af4fa1 ("mm: shmem: skip swapcache for swapin of synchronous swap device")
> >>> Signed-off-by: Kairui Song
> >>>
> >>> ---
> >>>
> >>> I found this issue while doing a performance comparison of mm-new with
> >>> the swap table series [1] on top of mm-new. This issue no longer exists
> >>> if the swap table series is applied, because it eliminates both
> >>> SWAP_HAS_CACHE and SWP_SYNCHRONOUS_IO swapin completely while improving
> >>> performance and simplifying the code, and the swapin race is solved
> >>> differently by then.
> >>>
> >>> (The zeromap fix might still need to stay for a while, but it could be
> >>> optimized later with the swap table too.)
> >>
> >> I don't understand why the zeromap changes are added; this should be
> >> explained explicitly.
> >
> > To stay consistent with anon mTHP swapin: swap_zeromap_batch() has its
> > own comment saying that a hybrid folio with zero and non-zero pages
> > can't be brought back as a whole. I can mention that in the commit
> > message.

For mTHP swapin, we need the zeromap check because we have no way to
record whether there was a prior mTHP swap-out. So we rely on checking
the continuity of swap offsets. It's entirely possible that, in the
past, several small folios were swapped out to consecutive locations,
and one of them happened to be a zero folio, while the others were not.
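Roughly, the check the anon path has to perform looks like this (a
simplified sketch of the idea only, using a plain bool array for the
bitmap; not the actual swap_zeromap_batch() implementation):

#include <stdbool.h>
#include <stdio.h>

/* true if every zeromap bit in [start, start + nr) agrees */
bool zeromap_range_uniform(const bool *zeromap, int start, int nr,
                           bool *is_zero)
{
        *is_zero = zeromap[start];
        for (int i = start + 1; i < start + nr; i++)
                if (zeromap[i] != *is_zero)
                        return false; /* mixed: fall back to order 0 */
        return true;
}

int main(void)
{
        /* four consecutive slots from separate small-folio swap-outs,
           one of which happened to be a zero page */
        bool zeromap[4] = { false, false, true, false };
        bool is_zero;

        if (!zeromap_range_uniform(zeromap, 0, 4, &is_zero))
                printf("mixed zeromap bits: cannot swap in as one large folio\n");
        return 0;
}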
But for shmem, we have a place to record that information - we swapped
out an mTHP, right?

Regarding zeromap: for an mTHP swap-out, we currently can't mark
subpages individually as zeromap; it's either all-zero for every
subpage or none are. So maybe we don't need swap_zeromap_batch() for
shmem?

> Yes. Thanks.

Thanks
Barry