From: Kairui Song <ryncsn@gmail.com>
Date: Mon, 9 Jun 2025 10:31:52 +0800
Subject: Re: [PATCH] mm/shmem, swap: fix softlockup with mTHP swapin
To: Barry Song <21cnbao@gmail.com>
Cc: linux-mm@kvack.org, Andrew Morton, Hugh Dickins, Baolin Wang,
 Kemeng Shi, Chris Li, Nhat Pham, Baoquan He, Usama Arif,
 linux-kernel@vger.kernel.org
References: <20250608192713.95875-1-ryncsn@gmail.com>

On Mon, Jun 9, 2025 at 7:58 AM Barry Song <21cnbao@gmail.com> wrote:
>
> On Mon, Jun 9, 2025 at 7:27 AM Kairui Song wrote:
> >
> > From: Kairui Song
> >
> > Following softlockup can be easily reproduced on my test machine with:
> >
> > echo always > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled
> > swapon /dev/zram0 # zram0 is a 48G swap device
> > mkdir -p /sys/fs/cgroup/memory/test
> > echo 1G > /sys/fs/cgroup/test/memory.max
> > echo $BASHPID > /sys/fs/cgroup/test/cgroup.procs
> > while true; do
> >   dd if=/dev/zero of=/tmp/test.img bs=1M count=5120
> >   cat /tmp/test.img > /dev/null
> >   rm /tmp/test.img
> > done
> >
> > Then after a while:
> > watchdog: BUG: soft lockup - CPU#0 stuck for 763s! [cat:5787]
> > Modules linked in: zram virtiofs
> > CPU: 0 UID: 0 PID: 5787 Comm: cat Kdump: loaded Tainted: G    L    6.15.0.orig-gf3021d9246bc-dirty #118 PREEMPT(voluntary)
> > Tainted: [L]=SOFTLOCKUP
> > Hardware name: Red Hat KVM/RHEL-AV, BIOS 0.0.0 02/06/2015
> > RIP: 0010:mpol_shared_policy_lookup+0xd/0x70
> > Code: e9 b8 b4 ff ff 31 c0 c3 cc cc cc cc 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 66 0f 1f 00 0f 1f 44 00 00 41 54 55 53 <48> 8b 1f 48 85 db 74 41 4c 8d 67 08 48 89 fb 48 89 f5 4c 89 e7 e8
> > RSP: 0018:ffffc90002b1fc28 EFLAGS: 00000202
> > RAX: 00000000001c20ca RBX: 0000000000724e1e RCX: 0000000000000001
> > RDX: ffff888118e214c8 RSI: 0000000000057d42 RDI: ffff888118e21518
> > RBP: 000000000002bec8 R08: 0000000000000001 R09: 0000000000000000
> > R10: 0000000000000bf4 R11: 0000000000000000 R12: 0000000000000001
> > R13: 00000000001c20ca R14: 00000000001c20ca R15: 0000000000000000
> > FS:  00007f03f995c740(0000) GS:ffff88a07ad9a000(0000) knlGS:0000000000000000
> > CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > CR2: 00007f03f98f1000 CR3: 0000000144626004 CR4: 0000000000770eb0
> > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> > PKRU: 55555554
> > Call Trace:
> >
> >  shmem_alloc_folio+0x31/0xc0
> >  shmem_swapin_folio+0x309/0xcf0
> >  ? filemap_get_entry+0x117/0x1e0
> >  ? xas_load+0xd/0xb0
> >  ? filemap_get_entry+0x101/0x1e0
> >  shmem_get_folio_gfp+0x2ed/0x5b0
> >  shmem_file_read_iter+0x7f/0x2e0
> >  vfs_read+0x252/0x330
> >  ksys_read+0x68/0xf0
> >  do_syscall_64+0x4c/0x1c0
> >  entry_SYSCALL_64_after_hwframe+0x76/0x7e
> > RIP: 0033:0x7f03f9a46991
> > Code: 00 48 8b 15 81 14 10 00 f7 d8 64 89 02 b8 ff ff ff ff eb bd e8 20 ad 01 00 f3 0f 1e fa 80 3d 35 97 10 00 00 74 13 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 4f c3 66 0f 1f 44 00 00 55 48 89 e5 48 83 ec
> > RSP: 002b:00007fff3c52bd28 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
> > RAX: ffffffffffffffda RBX: 0000000000040000 RCX: 00007f03f9a46991
> > RDX: 0000000000040000 RSI: 00007f03f98ba000 RDI: 0000000000000003
> > RBP: 00007fff3c52bd50 R08: 0000000000000000 R09: 00007f03f9b9a380
> > R10: 0000000000000022 R11: 0000000000000246 R12: 0000000000040000
> > R13: 00007f03f98ba000 R14: 0000000000000003 R15: 0000000000000000
> >
> >
> > The reason is simple: readahead brought some order 0 folios into the swap
> > cache, and the mTHP folio being allocated for swapin conflicts with them,
> > so swapcache_prepare fails and causes shmem_swap_alloc_folio to return
> > -EEXIST, and shmem simply retries again and again, causing this loop.
> >
> > Fix it by applying a fix similar to the one for anon mTHP swapin.
> >
> > The performance change is very slight; time to swap in 10G of zero folios
> > (test repeated 12 times):
> > Before: 2.49s
> > After:  2.52s
> >
> > Fixes: 1dd44c0af4fa1 ("mm: shmem: skip swapcache for swapin of synchronous swap device")
> > Signed-off-by: Kairui Song
> >
> > ---
> >
> > I found this issue while doing a performance comparison of mm-new with
> > the swap table series [1] on top of mm-new. This issue no longer exists
> > if the swap table series is applied, because it eliminates both
> > SWAP_HAS_CACHE and SWP_SYNCHRONOUS_IO swapin completely while improving
> > performance and simplifying the code, and the racy swapin is solved
> > differently there.
> >
> > (The zero map fix might still need to stay for a while, but it could be
> > optimized later with the swap table too.)
> >
> > It would be good if the swap table series could get reviewed and merged
> > to avoid more fixes like this. SWAP_HAS_CACHE and SWP_SYNCHRONOUS_IO have
> > a history of causing many issues. I'll rebase the swap table series on top
> > of this fix, if this one is accepted.
> >
> > And for comparison, swapping 10G into shmem:
> >
> > Before this patch: 2.49s
> > After this patch:  2.52s
> > After swap table:  2.37s (Removing SWAP_HAS_CACHE and SWP_SYNCHRONOUS_IO,
> >                           still not in the best shape but looking good)
> >
> > Link: https://lore.kernel.org/linux-mm/20250514201729.48420-1-ryncsn@gmail.com/ [1]
> >
> >  mm/memory.c | 20 --------------------
> >  mm/shmem.c  | 12 +++++++++++-
> >  mm/swap.h   | 19 +++++++++++++++++++
> >  3 files changed, 30 insertions(+), 21 deletions(-)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 9ead7ab07e8e..3845ed068d74 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -4313,26 +4313,6 @@ static struct folio *__alloc_swap_folio(struct vm_fault *vmf)
> >  }
> >
> >  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > -static inline int non_swapcache_batch(swp_entry_t entry, int max_nr)
> > -{
> > -	struct swap_info_struct *si = swp_swap_info(entry);
> > -	pgoff_t offset = swp_offset(entry);
> > -	int i;
> > -
> > -	/*
> > -	 * While allocating a large folio and doing swap_read_folio, which is
> > -	 * the case the being faulted pte doesn't have swapcache. We need to
> > -	 * ensure all PTEs have no cache as well, otherwise, we might go to
> > -	 * swap devices while the content is in swapcache.
> > -	 */
> > -	for (i = 0; i < max_nr; i++) {
> > -		if ((si->swap_map[offset + i] & SWAP_HAS_CACHE))
> > -			return i;
> > -	}
> > -
> > -	return i;
> > -}
> > -
> >  /*
> >   * Check if the PTEs within a range are contiguous swap entries
> >   * and have consistent swapcache, zeromap.
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index 73182e904f9c..484cd3043a78 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -1995,6 +1995,14 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
> >  	 */
> >  	if (swapcache_prepare(entry, nr_pages)) {
> >  		folio_put(new);
> > +
> > +		/*
> > +		 * A smaller folio is in the swap cache, mTHP swapin will always fail
> > +		 * until it's gone. Return -EINVAL to fallback to order 0.
> > +		 */
> > +		if (non_swapcache_batch(entry, nr_pages) != nr_pages)
> > +			return ERR_PTR(-EINVAL);
> > +

Hi Barry,

> We're doing this before swapcache_prepare() for mTHP swapin. Why does it
> happen after swapcache_prepare() in the shmem case?

`non_swapcache_batch(entry, nr_pages) != nr_pages` is unlikely, which is
why no one has noticed this issue so far, so moving the check after
swapcache_prepare() avoids its overhead in the common case.

swapcache_prepare() already implies this check, but swapcache_prepare()
can fail for multiple reasons, and shmem should fall back to order 0
swapin only when the failure is caused by an existing swap cache entry
(currently shmem retries unconditionally).

non_swapcache_batch() might not be the best solution here either; it can
also produce false positives. We could add a full filemap lookup, but that
might be overkill for a corner case like this. I still think merging the
swap cache into swap_map using the swap table is the long-term solution.

Maybe I'm prematurely optimizing it. I can use the easier-to-review
implementation (the same approach as anon mTHP) and do a quick benchmark;
if there is no obvious performance change, I'll use that style in V2.
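For reference, here is a minimal, hypothetical sketch of the control flow
being discussed, condensed from the hunks quoted above. Names follow the
quoted diff; the actual mm/swap.h placement of non_swapcache_batch() and
the rest of shmem_swap_alloc_folio() are not shown in this mail, so this
is an illustration, not the verbatim kernel code:

```c
/*
 * Hypothetical, simplified sketch of how the quoted hunks handle a
 * swap cache conflict during shmem mTHP swapin.
 */
static struct folio *shmem_swap_alloc_folio_sketch(struct folio *new,
						   swp_entry_t entry,
						   int nr_pages)
{
	/*
	 * Claim all nr_pages swap slots at once; this fails if any slot
	 * already has SWAP_HAS_CACHE set, e.g. after readahead put an
	 * order 0 folio of this range into the swap cache.
	 */
	if (swapcache_prepare(entry, nr_pages)) {
		folio_put(new);

		/*
		 * A smaller folio already sits in the swap cache, so an
		 * mTHP swapin of this range will keep failing until it is
		 * gone: fall back to order 0 instead of retrying forever
		 * (the endless retry is what caused the softlockup).
		 */
		if (non_swapcache_batch(entry, nr_pages) != nr_pages)
			return ERR_PTR(-EINVAL);

		/* Transient race: let the caller retry the large swapin. */
		return ERR_PTR(-EEXIST);
	}

	/* ... read the large folio in from the swap device ... */
	return new;
}
```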