Date: Mon, 23 Jun 2025 11:26:32 +0800
Subject: Re: [PATCH v2 1/4]
 mm/shmem, swap: improve cached mTHP handling and fix potential hung
To: Kairui Song, linux-mm@kvack.org
Cc: Andrew Morton, Hugh Dickins, Matthew Wilcox, Kemeng Shi, Chris Li,
 Nhat Pham, Baoquan He, Barry Song, linux-kernel@vger.kernel.org,
 stable@vger.kernel.org
References: <20250619175538.15799-1-ryncsn@gmail.com>
 <20250619175538.15799-2-ryncsn@gmail.com>
From: Baolin Wang <baolin.wang@linux.alibaba.com>
In-Reply-To: <20250619175538.15799-2-ryncsn@gmail.com>

Hi Kairui,

On 2025/6/20 01:55, Kairui Song wrote:
> From: Kairui Song
>
> The current swap-in code assumes that, when a swap entry in the shmem
> mapping is order 0, its cached folios (if present) must be order 0
> too, which turns out to not always be correct.
>
> The problem is that shmem_split_large_entry is called before verifying
> that the folio will eventually be swapped in. One possible race is:
>
> CPU1                                 CPU2
> shmem_swapin_folio
> /* swap in of order > 0 swap entry S1 */
>  folio = swap_cache_get_folio
>  /* folio = NULL */
>  order = xa_get_order
>  /* order > 0 */
>  folio = shmem_swap_alloc_folio
>  /* mTHP alloc failure, folio = NULL */
>  <... Interrupted ...>
>                                      shmem_swapin_folio
>                                      /* S1 is swapped in */
>                                      shmem_writeout
>                                      /* S1 is swapped out, folio cached */
> shmem_split_large_entry(..., S1)
> /* S1 is split, but the folio covering it has order > 0 now */
>
> Now any following swapin of S1 will hang: `xa_get_order` returns 0,
> while the folio lookup returns a folio with order > 0, so the
> `xa_get_order(&mapping->i_pages, index) != folio_order(folio)` check
> always trips, causing the swap-in to keep returning -EEXIST.
>
> And this looks fragile. So fix this up by allowing a larger folio to
> be seen in the swap cache, and by checking that the whole shmem mapping
> range covered by the swap-in has the right swap value when inserting
> the folio. Also drop the redundant tree walks before the insertion.
>
> This actually improves performance, as it avoids two redundant Xarray
> tree walks in the hot path. The only side effect is that, in the
> failure path, shmem may redundantly reallocate a few folios, causing
> temporary, slight memory pressure.
>
> Worth noting, it may seem that the order and value check before
> inserting would help reduce lock contention, but that is not true. The
> swap cache layer ensures that a raced swap-in will either see a swap
> cache folio or fail to do the swap-in (we have the SWAP_HAS_CACHE bit
> even if the swap cache is bypassed), so holding the folio lock and
> checking the folio flag is already good enough to avoid lock
> contention. The chance that a folio passes the swap entry value check
> while the shmem mapping slot has changed should be very low.

Thanks for fixing the issue. Sadly, I haven't been able to reproduce
this issue with my previous test cases :( And I have a question below.

> Cc: stable@vger.kernel.org
> Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
> Signed-off-by: Kairui Song
> Reviewed-by: Kemeng Shi
> ---
>  mm/shmem.c | 30 +++++++++++++++++++++---------
>  1 file changed, 21 insertions(+), 9 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index eda35be2a8d9..4e7ef343a29b 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -884,7 +884,9 @@ static int shmem_add_to_page_cache(struct folio *folio,
>  				   pgoff_t index, void *expected, gfp_t gfp)
>  {
>  	XA_STATE_ORDER(xas, &mapping->i_pages, index, folio_order(folio));
> -	long nr = folio_nr_pages(folio);
> +	unsigned long nr = folio_nr_pages(folio);
> +	swp_entry_t iter, swap;
> +	void *entry;
>
>  	VM_BUG_ON_FOLIO(index != round_down(index, nr), folio);
>  	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> @@ -896,14 +898,24 @@ static int shmem_add_to_page_cache(struct folio *folio,
>
>  	gfp &= GFP_RECLAIM_MASK;
>  	folio_throttle_swaprate(folio, gfp);
> +	swap = iter = radix_to_swp_entry(expected);
>
>  	do {
>  		xas_lock_irq(&xas);
> -		if (expected != xas_find_conflict(&xas)) {
> -			xas_set_err(&xas, -EEXIST);
> -			goto unlock;
> +		xas_for_each_conflict(&xas, entry) {
> +			/*
> +			 * The range must either be empty, or filled with
> +			 * expected swap entries. Shmem swap entries are never
> +			 * partially freed without split of both entry and
> +			 * folio, so there shouldn't be any holes.
> +			 */
> +			if (!expected || entry != swp_to_radix_entry(iter)) {
> +				xas_set_err(&xas, -EEXIST);
> +				goto unlock;
> +			}
> +			iter.val += 1 << xas_get_order(&xas);
>  		}
> -		if (expected && xas_find_conflict(&xas)) {
> +		if (expected && iter.val - nr != swap.val) {
>  			xas_set_err(&xas, -EEXIST);
>  			goto unlock;
>  		}
> @@ -2323,7 +2335,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  			error = -ENOMEM;
>  			goto failed;
>  		}
> -	} else if (order != folio_order(folio)) {
> +	} else if (order > folio_order(folio)) {
>  		/*
>  		 * Swap readahead may swap in order 0 folios into swapcache
>  		 * asynchronously, while the shmem mapping can still stores
> @@ -2348,15 +2360,15 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>
>  		swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
>  	}
> +	} else if (order < folio_order(folio)) {
> +		swap.val = round_down(swp_type(swap), folio_order(folio));

Why rounding down the swap type?
Do you mean rounding down the swap offset? (See my guess below the
quoted hunk.)

>  	}
>
>  alloced:
>  	/* We have to do this with folio locked to prevent races */
>  	folio_lock(folio);
>  	if ((!skip_swapcache && !folio_test_swapcache(folio)) ||
> -	    folio->swap.val != swap.val ||
> -	    !shmem_confirm_swap(mapping, index, swap) ||
> -	    xa_get_order(&mapping->i_pages, index) != folio_order(folio)) {
> +	    folio->swap.val != swap.val) {
>  		error = -EEXIST;
>  		goto unlock;
>  	}
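
If the offset is what you meant, I guess the intent is something like
the line below. This is only my reading of the intent, untested; it
reuses the existing swp_entry()/swp_type()/swp_offset()/folio_nr_pages()
helpers, and the alignment width (folio_nr_pages) is my assumption:

	/*
	 * Hypothetical sketch, not the posted patch: align the swap entry
	 * to the start of the large folio by rounding down the offset,
	 * keeping the swap type unchanged.
	 */
	swap = swp_entry(swp_type(swap),
			 round_down(swp_offset(swap), folio_nr_pages(folio)));

Otherwise, rounding down the type value does not look right to me.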