Date: Mon, 23 Jun 2025 11:38:53 +0800
Message-ID: <9e31bbb8-73e7-4e67-973d-491f93ba938f@linux.alibaba.com>
From: Baolin Wang <baolin.wang@linux.alibaba.com>
Subject: Re: [PATCH v2 1/4] mm/shmem, swap: improve cached mTHP handling and fix potential hung
To: Kairui Song
Cc: linux-mm@kvack.org, Andrew Morton, Hugh Dickins, Matthew Wilcox, Kemeng Shi, Chris Li, Nhat Pham, Baoquan He, Barry Song, linux-kernel@vger.kernel.org, stable@vger.kernel.org
References: <20250619175538.15799-1-ryncsn@gmail.com> <20250619175538.15799-2-ryncsn@gmail.com>

On 2025/6/23 11:35, Kairui Song wrote:
> On Mon, Jun 23, 2025 at 11:26 AM Baolin Wang
> <baolin.wang@linux.alibaba.com> wrote:
>>
>> Hi Kairui,
>>
>> On 2025/6/20 01:55, Kairui Song wrote:
>>> From: Kairui Song
>>>
>>> The current swap-in code assumes that, when a swap entry in the shmem
>>> mapping is order 0, its cached folios (if present) must be order 0
>>> too, which turns out not to always be correct.
>>>
>>> The problem is that shmem_split_large_entry is called before verifying
>>> the folio will eventually be swapped in; one possible race is:
>>>
>>> CPU1                               CPU2
>>> shmem_swapin_folio
>>> /* swap in of order > 0 swap entry S1 */
>>> folio = swap_cache_get_folio
>>> /* folio = NULL */
>>> order = xa_get_order
>>> /* order > 0 */
>>> folio = shmem_swap_alloc_folio
>>> /* mTHP alloc failure, folio = NULL */
>>> <... Interrupted ...>
>>>                                    shmem_swapin_folio
>>>                                    /* S1 is swapped in */
>>>                                    shmem_writeout
>>>                                    /* S1 is swapped out, folio cached */
>>> shmem_split_large_entry(..., S1)
>>> /* S1 is split, but the folio covering it has order > 0 now */
>>>
>>> Now any following swapin of S1 will hang: `xa_get_order` returns 0,
>>> but folio lookup will return a folio with order > 0.
>>> The `xa_get_order(&mapping->i_pages, index) != folio_order(folio)`
>>> check will always be true, causing swap-in to return -EEXIST.
>>>
>>> And this looks fragile. So fix this up by allowing a larger folio to
>>> be seen in the swap cache, and check that the whole shmem mapping
>>> range covered by the swapin has the right swap value upon inserting
>>> the folio. And drop the redundant tree walks before the insertion.
>>>
>>> This will actually improve performance, as it avoids two redundant
>>> XArray tree walks in the hot path, and the only side effect is that in
>>> the failure path, shmem may redundantly reallocate a few folios,
>>> causing temporary slight memory pressure.
>>>
>>> And worth noting, it may seem that the order and value check before
>>> inserting might help reduce the lock contention, which is not true.
>>> The swap cache layer ensures a raced swapin will either see a swap
>>> cache folio or fail to do a swapin (we have the SWAP_HAS_CACHE bit
>>> even if the swap cache is bypassed), so holding the folio lock and
>>> checking the folio flag is already good enough for avoiding the lock
>>> contention. The chance that a folio passes the swap entry value check
>>> but the shmem mapping slot has changed should be very low.
>>
>> Thanks for fixing the issue. Sadly, I haven't reproduced this issue
>> with my previous test cases :(
>>
>> And I have a question below.
>>
>>> Cc: stable@vger.kernel.org
>>> Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
>>> Signed-off-by: Kairui Song
>>> Reviewed-by: Kemeng Shi
>>> ---
>>>  mm/shmem.c | 30 +++++++++++++++++++++---------
>>>  1 file changed, 21 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/mm/shmem.c b/mm/shmem.c
>>> index eda35be2a8d9..4e7ef343a29b 100644
>>> --- a/mm/shmem.c
>>> +++ b/mm/shmem.c
>>> @@ -884,7 +884,9 @@ static int shmem_add_to_page_cache(struct folio *folio,
>>>  				   pgoff_t index, void *expected, gfp_t gfp)
>>>  {
>>>  	XA_STATE_ORDER(xas, &mapping->i_pages, index, folio_order(folio));
>>> -	long nr = folio_nr_pages(folio);
>>> +	unsigned long nr = folio_nr_pages(folio);
>>> +	swp_entry_t iter, swap;
>>> +	void *entry;
>>>
>>>  	VM_BUG_ON_FOLIO(index != round_down(index, nr), folio);
>>>  	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
>>> @@ -896,14 +898,24 @@ static int shmem_add_to_page_cache(struct folio *folio,
>>>
>>>  	gfp &= GFP_RECLAIM_MASK;
>>>  	folio_throttle_swaprate(folio, gfp);
>>> +	swap = iter = radix_to_swp_entry(expected);
>>>
>>>  	do {
>>>  		xas_lock_irq(&xas);
>>> -		if (expected != xas_find_conflict(&xas)) {
>>> -			xas_set_err(&xas, -EEXIST);
>>> -			goto unlock;
>>> +		xas_for_each_conflict(&xas, entry) {
>>> +			/*
>>> +			 * The range must either be empty, or filled with
>>> +			 * expected swap entries. Shmem swap entries are never
>>> +			 * partially freed without split of both entry and
>>> +			 * folio, so there shouldn't be any holes.
>>> +			 */
>>> +			if (!expected || entry != swp_to_radix_entry(iter)) {
>>> +				xas_set_err(&xas, -EEXIST);
>>> +				goto unlock;
>>> +			}
>>> +			iter.val += 1 << xas_get_order(&xas);
>>>  		}
>>> -		if (expected && xas_find_conflict(&xas)) {
>>> +		if (expected && iter.val - nr != swap.val) {
>>>  			xas_set_err(&xas, -EEXIST);
>>>  			goto unlock;
>>>  		}
>>> @@ -2323,7 +2335,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>>>  			error = -ENOMEM;
>>>  			goto failed;
>>>  		}
>>> -	} else if (order != folio_order(folio)) {
>>> +	} else if (order > folio_order(folio)) {
>>>  		/*
>>>  		 * Swap readahead may swap in order 0 folios into swapcache
>>>  		 * asynchronously, while the shmem mapping can still stores
>>> @@ -2348,15 +2360,15 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>>>
>>>  		swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
>>>  	}
>>> +	} else if (order < folio_order(folio)) {
>>> +		swap.val = round_down(swp_type(swap), folio_order(folio));
>>
>> Why rounding down the swap type? do you mean rounding down the swap offset?
>
> Ouch, right, it should be the value:
>
> swap.val = round_down(swap.val, folio_order(folio));
>
> I messed up the code here during a rebase, let me send a V3 then.

Should be swap.val = round_down(swap.val, 1 << folio_order(folio)); ?
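
For what it's worth, here is a tiny stand-alone sketch of the arithmetic under discussion, using a local stand-in for the kernel's round_down() macro and made-up example values (the swap value 0x1037 and order 2 are hypothetical, not from the patch). It shows why the alignment has to use 1 << folio_order() (the folio's page count) rather than folio_order() itself, so the swap entry value is rounded back to the folio's head page:

#include <stdio.h>

/* Local stand-in mirroring the kernel's round_down(): align x down to
 * a power-of-two boundary y. */
#define round_down(x, y) ((x) & ~((unsigned long)(y) - 1))

int main(void)
{
	unsigned long swap_val = 0x1037; /* hypothetical swap entry value */
	unsigned int order = 2;          /* folio_order(): a 4-page folio */

	/* Rounding by the order itself only aligns to 2 here. */
	unsigned long by_order = round_down(swap_val, order);

	/* Rounding by 1 << order aligns to the folio's page count (4),
	 * i.e. back to the swap entry of the folio's head page. */
	unsigned long by_nr_pages = round_down(swap_val, 1UL << order);

	printf("by_order:    0x%lx\n", by_order);    /* 0x1036 */
	printf("by_nr_pages: 0x%lx\n", by_nr_pages); /* 0x1034 */
	return 0;
}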