From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kairui Song <ryncsn@gmail.com>
Date: Mon, 23 Jun 2025 11:40:23 +0800
Subject: Re: [PATCH v2 1/4] mm/shmem, swap: improve cached mTHP handling and fix potential hung
To: Baolin Wang
Cc: linux-mm@kvack.org, Andrew Morton, Hugh Dickins, Matthew Wilcox, Kemeng Shi, Chris Li, Nhat Pham, Baoquan He, Barry Song, linux-kernel@vger.kernel.org, stable@vger.kernel.org
In-Reply-To: <9e31bbb8-73e7-4e67-973d-491f93ba938f@linux.alibaba.com>
References: <20250619175538.15799-1-ryncsn@gmail.com> <20250619175538.15799-2-ryncsn@gmail.com> <9e31bbb8-73e7-4e67-973d-491f93ba938f@linux.alibaba.com>

On Mon, Jun 23, 2025 at 11:39 AM Baolin Wang wrote:
>
> On 2025/6/23 11:35, Kairui Song wrote:
> > On Mon, Jun 23, 2025 at 11:26 AM Baolin Wang wrote:
> >>
> >> Hi Kairui,
> >>
> >> On 2025/6/20 01:55, Kairui Song wrote:
> >>> From: Kairui Song
> >>>
> >>> The current swap-in code assumes that, when a swap entry in the shmem
> >>> mapping is order 0, its cached folios (if present) must be order 0
> >>> too, which turns out not to always be correct.
> >>>
> >>> The problem is that shmem_split_large_entry is called before verifying
> >>> that the folio will eventually be swapped in. One possible race is:
> >>>
> >>> CPU1                                  CPU2
> >>> shmem_swapin_folio
> >>> /* swap in of order > 0 swap entry S1 */
> >>>   folio = swap_cache_get_folio
> >>>   /* folio = NULL */
> >>>   order = xa_get_order
> >>>   /* order > 0 */
> >>>   folio = shmem_swap_alloc_folio
> >>>   /* mTHP alloc failure, folio = NULL */
> >>>   <... Interrupted ...>
> >>>                                       shmem_swapin_folio
> >>>                                       /* S1 is swapped in */
> >>>                                       shmem_writeout
> >>>                                       /* S1 is swapped out, folio cached */
> >>> shmem_split_large_entry(..., S1)
> >>> /* S1 is split, but the folio covering it has order > 0 now */
> >>>
> >>> Now any following swapin of S1 will hang: `xa_get_order` returns 0,
> >>> while folio lookup returns a folio with order > 0. The
> >>> `xa_get_order(&mapping->i_pages, index) != folio_order(folio)` check
> >>> will always trip, causing swap-in to return -EEXIST.
> >>>
> >>> And this looks fragile. So fix this up by allowing a larger folio to
> >>> be seen in the swap cache, and by checking that the whole shmem
> >>> mapping range covered by the swapin has the right swap value upon
> >>> inserting the folio. Also drop the redundant tree walks before the
> >>> insertion.
> >>>
> >>> This will actually improve performance, as it avoids two redundant
> >>> Xarray tree walks in the hot path. The only side effect is that in
> >>> the failure path, shmem may redundantly reallocate a few folios,
> >>> causing temporary slight memory pressure.
> >>>
> >>> Worth noting: it may seem that the order and value check before
> >>> inserting would help reduce the lock contention, but that is not
> >>> true. The swap cache layer ensures a raced swapin will either see a
> >>> swap cache folio or fail to do a swapin (we have the SWAP_HAS_CACHE
> >>> bit even if the swap cache is bypassed), so holding the folio lock
> >>> and checking the folio flag is already good enough for avoiding the
> >>> lock contention. The chance that a folio passes the swap entry value
> >>> check but the shmem mapping slot has changed should be very low.
> >>
> >> Thanks for fixing the issue. Sadly, I haven't reproduced this issue
> >> from my previous test cases :(
> >>
> >> And I have a question below.
> >>
> >>> Cc: stable@vger.kernel.org
> >>> Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
> >>> Signed-off-by: Kairui Song
> >>> Reviewed-by: Kemeng Shi
> >>> ---
> >>>  mm/shmem.c | 30 +++++++++++++++++++++---------
> >>>  1 file changed, 21 insertions(+), 9 deletions(-)
> >>>
> >>> diff --git a/mm/shmem.c b/mm/shmem.c
> >>> index eda35be2a8d9..4e7ef343a29b 100644
> >>> --- a/mm/shmem.c
> >>> +++ b/mm/shmem.c
> >>> @@ -884,7 +884,9 @@ static int shmem_add_to_page_cache(struct folio *folio,
> >>>                                     pgoff_t index, void *expected, gfp_t gfp)
> >>>  {
> >>>         XA_STATE_ORDER(xas, &mapping->i_pages, index, folio_order(folio));
> >>> -       long nr = folio_nr_pages(folio);
> >>> +       unsigned long nr = folio_nr_pages(folio);
> >>> +       swp_entry_t iter, swap;
> >>> +       void *entry;
> >>>
> >>>         VM_BUG_ON_FOLIO(index != round_down(index, nr), folio);
> >>>         VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> >>> @@ -896,14 +898,24 @@ static int shmem_add_to_page_cache(struct folio *folio,
> >>>
> >>>         gfp &= GFP_RECLAIM_MASK;
> >>>         folio_throttle_swaprate(folio, gfp);
> >>> +       swap = iter = radix_to_swp_entry(expected);
> >>>
> >>>         do {
> >>>                 xas_lock_irq(&xas);
> >>> -               if (expected != xas_find_conflict(&xas)) {
> >>> -                       xas_set_err(&xas, -EEXIST);
> >>> -                       goto unlock;
> >>> +               xas_for_each_conflict(&xas, entry) {
> >>> +                       /*
> >>> +                        * The range must either be empty, or filled with
> >>> +                        * expected swap entries. Shmem swap entries are never
> >>> +                        * partially freed without split of both entry and
> >>> +                        * folio, so there shouldn't be any holes.
> >>> +                        */
> >>> +                       if (!expected || entry != swp_to_radix_entry(iter)) {
> >>> +                               xas_set_err(&xas, -EEXIST);
> >>> +                               goto unlock;
> >>> +                       }
> >>> +                       iter.val += 1 << xas_get_order(&xas);
> >>>                 }
> >>> -               if (expected && xas_find_conflict(&xas)) {
> >>> +               if (expected && iter.val - nr != swap.val) {
> >>>                         xas_set_err(&xas, -EEXIST);
> >>>                         goto unlock;
> >>>                 }
> >>> @@ -2323,7 +2335,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >>>                         error = -ENOMEM;
> >>>                         goto failed;
> >>>                 }
> >>> -       } else if (order != folio_order(folio)) {
> >>> +       } else if (order > folio_order(folio)) {
> >>>                 /*
> >>>                  * Swap readahead may swap in order 0 folios into swapcache
> >>>                  * asynchronously, while the shmem mapping can still stores
> >>> @@ -2348,15 +2360,15 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >>>
> >>>                         swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
> >>>                 }
> >>> +       } else if (order < folio_order(folio)) {
> >>> +               swap.val = round_down(swp_type(swap), folio_order(folio));
> >>
> >> Why rounding down the swap type? Do you mean rounding down the swap
> >> offset?
> >
> > Ouch, right, it should be the value:
> >
> > swap.val = round_down(swap.val, folio_order(folio));
> >
> > I messed up the code here during a rebase, let me send a V3 then.
>
> Should be
>
> swap.val = round_down(swap.val, 1 << folio_order(folio));
>
> ?

Yes, exactly.
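
To make the arithmetic concrete: rounding down needs a power-of-two
alignment, and since folio_order() is the log2 of the folio's page count,
the alignment must be 1 << folio_order(folio), not the raw order. Below is
a minimal userspace sketch of just that arithmetic, not the kernel code
itself; round_down_ul is a local stand-in for the kernel's round_down()
macro, and the entry value and order are made-up numbers:

#include <assert.h>
#include <stdio.h>

/*
 * Local stand-in for the kernel's round_down(): align x down to a
 * multiple of align, where align must be a power of two.
 */
static unsigned long round_down_ul(unsigned long x, unsigned long align)
{
        return x & ~(align - 1);
}

int main(void)
{
        unsigned long swap_val = 0x1234567; /* made-up swap entry value */
        unsigned int order = 4;             /* folio_order(): 2^4 = 16 pages */

        /*
         * Using the raw order as the alignment only rounds down to a
         * multiple of 4, not to the 16-entry folio boundary.
         */
        unsigned long wrong = round_down_ul(swap_val, order);

        /* 1 << order is the folio's entry count, giving the first entry. */
        unsigned long right = round_down_ul(swap_val, 1UL << order);

        printf("wrong: %#lx  right: %#lx\n", wrong, right);
        assert(right % (1UL << order) == 0);
        return 0;
}

With these made-up values the two forms differ (0x1234564 vs 0x1234560),
and only the latter lands on the folio's first swap entry, which is what
the swapin path needs before computing the sub-folio offset.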