From: Kairui Song <ryncsn@gmail.com>
Date: Fri, 20 Jun 2025 01:37:24 +0800
Subject: Re: [PATCH 4/4] mm/shmem, swap: avoid false positive swap cache lookup
To: Kemeng Shi
Cc: linux-mm@kvack.org, Andrew Morton, Hugh Dickins, Baolin Wang,
	Matthew Wilcox, Chris Li, Nhat Pham, Baoquan He, Barry Song,
	linux-kernel@vger.kernel.org
References: <20250617183503.10527-1-ryncsn@gmail.com> <20250617183503.10527-5-ryncsn@gmail.com>

On Thu, Jun 19, 2025 at 9:28 AM Kemeng Shi wrote:
>
>
>
> on 6/18/2025 2:35 AM, Kairui Song wrote:
> > From: Kairui Song
> >
> > If the shmem read request's index points to the middle of a large swap
> > entry, shmem swapin does the swap cache lookup using the large swap
> > entry's starting value (the first sub swap entry of this large entry).
> > This will lead to a false positive lookup result if only the first few
> > swap entries are cached, but the requested swap entry pointed to by
> > index is uncached.
> >
> > Currently shmem will do a large entry split then retry the swapin from
> > the beginning, which is a waste of CPU and fragile. Handle this correctly.
> >
> > Also add some sanity checks to help understand the code and ensure
> > things won't go wrong.
> >
> > Signed-off-by: Kairui Song
> > ---
> >  mm/shmem.c | 61 ++++++++++++++++++++++++++----------------------------
> >  1 file changed, 29 insertions(+), 32 deletions(-)
> >
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index 46dea2fa1b43..0bc30dafad90 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -1977,12 +1977,12 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
> >
> >  static struct folio *shmem_swapin_direct(struct inode *inode,
> >  		struct vm_area_struct *vma, pgoff_t index,
> > -		swp_entry_t entry, int *order, gfp_t gfp)
> > +		swp_entry_t swap_entry, swp_entry_t swap,
> > +		int *order, gfp_t gfp)
> >  {
> >  	struct shmem_inode_info *info = SHMEM_I(inode);
> >  	int nr_pages = 1 << *order;
> >  	struct folio *new;
> > -	pgoff_t offset;
> >  	void *shadow;
> >
> >  	/*
> > @@ -2003,13 +2003,11 @@ static struct folio *shmem_swapin_direct(struct inode *inode,
> >  	 */
> >  	if ((vma && userfaultfd_armed(vma)) ||
> >  	    !zswap_never_enabled() ||
> > -	    non_swapcache_batch(entry, nr_pages) != nr_pages) {
> > -		offset = index - round_down(index, nr_pages);
> > -		entry = swp_entry(swp_type(entry),
> > -				  swp_offset(entry) + offset);
> > +	    non_swapcache_batch(swap_entry, nr_pages) != nr_pages) {
> >  		*order = 0;
> >  		nr_pages = 1;
> >  	} else {
> > +		swap.val = swap_entry.val;
> >  		gfp_t huge_gfp = vma_thp_gfp_mask(vma);
> >
> >  		gfp = limit_gfp_mask(huge_gfp, gfp);
> > @@ -2021,7 +2019,7 @@ static struct folio *shmem_swapin_direct(struct inode *inode,
> >  		return ERR_PTR(-ENOMEM);
> >
> >  	if (mem_cgroup_swapin_charge_folio(new, vma ? vma->vm_mm : NULL,
> > -					   gfp, entry)) {
> > +					   gfp, swap)) {
> >  		folio_put(new);
> >  		return ERR_PTR(-ENOMEM);
> >  	}
> > @@ -2036,17 +2034,17 @@ static struct folio *shmem_swapin_direct(struct inode *inode,
> >  	 * In this case, shmem_add_to_page_cache() will help identify the
> >  	 * concurrent swapin and return -EEXIST.
> >  	 */
> > -	if (swapcache_prepare(entry, nr_pages)) {
> > +	if (swapcache_prepare(swap, nr_pages)) {
> >  		folio_put(new);
> >  		return ERR_PTR(-EEXIST);
> >  	}
> >
> >  	__folio_set_locked(new);
> >  	__folio_set_swapbacked(new);
> > -	new->swap = entry;
> > +	new->swap = swap;
> >
> > -	memcg1_swapin(entry, nr_pages);
> > -	shadow = get_shadow_from_swap_cache(entry);
> > +	memcg1_swapin(swap, nr_pages);
> > +	shadow = get_shadow_from_swap_cache(swap);
> >  	if (shadow)
> >  		workingset_refault(new, shadow);
> >  	folio_add_lru(new);
> > @@ -2278,20 +2276,21 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >  	struct mm_struct *fault_mm = vma ? vma->vm_mm : NULL;
> >  	struct shmem_inode_info *info = SHMEM_I(inode);
> >  	int error, nr_pages, order, swap_order;
> > +	swp_entry_t swap, swap_entry;
> >  	struct swap_info_struct *si;
> >  	struct folio *folio = NULL;
> >  	bool skip_swapcache = false;
> > -	swp_entry_t swap;
> > +	pgoff_t offset;
> >
> >  	VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
> > -	swap = radix_to_swp_entry(*foliop);
> > +	swap_entry = radix_to_swp_entry(*foliop);
> >  	*foliop = NULL;
> >
> > -	if (is_poisoned_swp_entry(swap))
> > +	if (is_poisoned_swp_entry(swap_entry))
> >  		return -EIO;
> >
> > -	si = get_swap_device(swap);
> > -	order = shmem_swap_check_entry(mapping, index, swap);
> > +	si = get_swap_device(swap_entry);
> > +	order = shmem_swap_check_entry(mapping, index, swap_entry);
> >  	if (unlikely(!si)) {
> >  		if (order < 0)
> >  			return -EEXIST;
> > @@ -2303,7 +2302,9 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >  		return -EEXIST;
> >  	}
> >
> > -	/* Look it up and read it in.. */
> > +	/* @index may points to the middle of a large entry, get the real swap value first */
> > +	offset = index - round_down(index, 1 << order);
> > +	swap.val = swap_entry.val + offset;
> >  	folio = swap_cache_get_folio(swap, NULL, 0);
> >  	if (!folio) {
> >  		/* Or update major stats only when swapin succeeds?? */
> > @@ -2315,7 +2316,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >  		/* Try direct mTHP swapin bypassing swap cache and readahead */
> >  		if (data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
> >  			swap_order = order;
> > -			folio = shmem_swapin_direct(inode, vma, index,
> > +			folio = shmem_swapin_direct(inode, vma, index, swap_entry,
> >  						    swap, &swap_order, gfp);
> >  			if (!IS_ERR(folio)) {
> >  				skip_swapcache = true;
> > @@ -2338,28 +2339,25 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >  		}
> >  	}
> >  alloced:
> > +	swap_order = folio_order(folio);
> > +	nr_pages = folio_nr_pages(folio);
> > +
> > +	/* The swap-in should cover both @swap and @index */
> > +	swap.val = round_down(swap.val, nr_pages);
> > +	VM_WARN_ON_ONCE(swap.val > swap_entry.val + offset);
> > +	VM_WARN_ON_ONCE(swap.val + nr_pages <= swap_entry.val + offset);
> > +
> >  	/*
> >  	 * We need to split an existing large entry if swapin brought in a
> >  	 * smaller folio due to various of reasons.
> > -	 *
> > -	 * And worth noting there is a special case: if there is a smaller
> > -	 * cached folio that covers @swap, but not @index (it only covers
> > -	 * first few sub entries of the large entry, but @index points to
> > -	 * later parts), the swap cache lookup will still see this folio,
> > -	 * And we need to split the large entry here. Later checks will fail,
> > -	 * as it can't satisfy the swap requirement, and we will retry
> > -	 * the swapin from beginning.
> >  	 */
> > -	swap_order = folio_order(folio);
> > +	index = round_down(index, nr_pages);
> >  	if (order > swap_order) {
> > -		error = shmem_split_swap_entry(inode, index, swap, gfp);
> > +		error = shmem_split_swap_entry(inode, index, swap_entry, gfp);
> >  		if (error)
> >  			goto failed_nolock;
> >  	}
> >
> > -	index = round_down(index, 1 << swap_order);
> > -	swap.val = round_down(swap.val, 1 << swap_order);
> > -
> >  	/* We have to do this with folio locked to prevent races */
> >  	folio_lock(folio);
> >  	if ((!skip_swapcache && !folio_test_swapcache(folio)) ||
> > @@ -2372,7 +2370,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >  		goto failed;
> >  	}
> >  	folio_wait_writeback(folio);
> > -	nr_pages = folio_nr_pages(folio);
> >
> >  	/*
> >  	 * Some architectures may have to restore extra metadata to the
> >
>
> The patch looks good to me, just some small suggestions.
> I think the names "swap" and "swap_entry" are not good enough. Maybe something
> like "index_entry" and "align_entry" would be cleaner.

Thanks, very good suggestion. I prefer index_entry then.

> Besides, we pass "swap" and "order" already, so we can calculate swap_entry
> easily and the code will be easier to understand.

True. I'm not sure if the compiler is smart enough to avoid a round_down
here; the inlined function can be optimized better with parameters
(rough sketch at the end of this mail).

> Not a big deal anyway, so:
> Reviewed-by: Kemeng Shi
>

Thanks again!
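
For reference, a rough and untested sketch of the variant discussed above: a
hypothetical helper (the name swap_head_entry is only illustrative and not part
of the patch) that would let shmem_swapin_direct() derive the aligned head
entry from @swap and @order instead of receiving it as a parameter, assuming
large swap entries stay naturally aligned to 1 << order:

/*
 * Hypothetical sketch, kernel context assumed (swp_entry()/swp_type()/
 * swp_offset() from <linux/swapops.h>, round_down() from <linux/math.h>).
 * Recover the first sub-entry of the large swap entry containing @swap.
 */
static inline swp_entry_t swap_head_entry(swp_entry_t swap, int order)
{
	return swp_entry(swp_type(swap),
			 round_down(swp_offset(swap), 1UL << order));
}

Whether this extra round_down gets optimized away when the caller already
holds the aligned value is exactly the doubt raised above, which is why the
patch keeps passing both entries.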