From: Barry Song <21cnbao@gmail.com>
Date: Sun, 13 Jul 2025 18:53:02 +0800
Subject: Re: [PATCH v5 5/8] mm/shmem, swap: never use swap cache and readahead for SWP_SYNCHRONOUS_IO
To: Kairui Song
Cc: linux-mm@kvack.org, Andrew Morton, Hugh Dickins, Baolin Wang, Matthew Wilcox, Kemeng Shi, Chris Li, Nhat Pham, Baoquan He, linux-kernel@vger.kernel.org
In-Reply-To: <20250710033706.71042-6-ryncsn@gmail.com>
References: <20250710033706.71042-1-ryncsn@gmail.com> <20250710033706.71042-6-ryncsn@gmail.com>

On Thu, Jul 10, 2025 at 11:37 AM Kairui Song wrote:
>
> From: Kairui Song
>
> For SWP_SYNCHRONOUS_IO devices, if a cache bypassing THP swapin failed
> due to reasons like memory pressure, partially conflicting swap cache
> or ZSWAP enabled, shmem will fallback to cached order 0 swapin.
>
> Right now the swap cache still has a non-trivial overhead, and readahead
> is not helpful for SWP_SYNCHRONOUS_IO devices, so we should always skip
> the readahead and swap cache even if the swapin falls back to order 0.
>
> So handle the fallback logic without falling back to the cached read.
>
> Signed-off-by: Kairui Song
> ---
>  mm/shmem.c | 41 ++++++++++++++++++++++++++++-------------
>  1 file changed, 28 insertions(+), 13 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 97db1097f7de..847e6f128485 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1982,6 +1982,7 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
>         struct shmem_inode_info *info = SHMEM_I(inode);
>         int nr_pages = 1 << order;
>         struct folio *new;
> +       gfp_t alloc_gfp;
>         void *shadow;
>
>         /*
> @@ -1989,6 +1990,7 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
>          * limit chance of success with further cpuset and node constraints.
>          */
>         gfp &= ~GFP_CONSTRAINT_MASK;
> +       alloc_gfp = gfp;
>         if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
>                 if (WARN_ON_ONCE(order))
>                         return ERR_PTR(-EINVAL);
> @@ -2003,19 +2005,22 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
>                 if ((vma && unlikely(userfaultfd_armed(vma))) ||
>                     !zswap_never_enabled() ||
>                     non_swapcache_batch(entry, nr_pages) != nr_pages)
> -                       return ERR_PTR(-EINVAL);
> +                       goto fallback;
>
> -               gfp = limit_gfp_mask(vma_thp_gfp_mask(vma), gfp);
> +               alloc_gfp = limit_gfp_mask(vma_thp_gfp_mask(vma), gfp);
> +       }
> +retry:
> +       new = shmem_alloc_folio(alloc_gfp, order, info, index);
> +       if (!new) {
> +               new = ERR_PTR(-ENOMEM);
> +               goto fallback;
>         }
> -
> -       new = shmem_alloc_folio(gfp, order, info, index);
> -       if (!new)
> -               return ERR_PTR(-ENOMEM);
>
>         if (mem_cgroup_swapin_charge_folio(new, vma ? vma->vm_mm : NULL,
> -                                          gfp, entry)) {
> +                                          alloc_gfp, entry)) {
>                 folio_put(new);
> -               return ERR_PTR(-ENOMEM);
> +               new = ERR_PTR(-ENOMEM);
> +               goto fallback;
>         }
>
>         /*
> @@ -2030,7 +2035,9 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
>          */
>         if (swapcache_prepare(entry, nr_pages)) {
>                 folio_put(new);
> -               return ERR_PTR(-EEXIST);
> +               new = ERR_PTR(-EEXIST);
> +               /* Try smaller folio to avoid cache conflict */
> +               goto fallback;
>         }
>
>         __folio_set_locked(new);
> @@ -2044,6 +2051,15 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
>         folio_add_lru(new);
>         swap_read_folio(new, NULL);
>         return new;
> +fallback:
> +       /* Order 0 swapin failed, nothing to fallback to, abort */
> +       if (!order)
> +               return new;

Feels a bit odd to me.
Would it be possible to handle this earlier, like:

        if (!order)
                return ERR_PTR(-ENOMEM);
        goto fallback;

or:

        if (order)
                goto fallback;
        return ERR_PTR(-ENOMEM);

Not strongly opinionated here -- totally up to you.

Thanks
Barry
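
P.S. For concreteness, a rough, untested sketch of the first variant
applied at the shmem_alloc_folio() failure site, reusing the names from
the patch above (illustrative only, not a proposed diff):

        new = shmem_alloc_folio(alloc_gfp, order, info, index);
        if (!new) {
                /* Order 0 swapin failed, nothing smaller to fall back to */
                if (!order)
                        return ERR_PTR(-ENOMEM);
                /* Larger folio failed, take the order 0 fallback path */
                new = ERR_PTR(-ENOMEM);
                goto fallback;
        }

Whether "new" still needs to carry the error value at that point depends
on what the rest of the fallback path does, which isn't quoted above.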