From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kairui Song
Date: Wed, 29 Oct 2025 23:58:42 +0800
Subject: [PATCH 16/19] mm, swap: check swap table directly for checking cache
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20251029-swap-table-p2-v1-16-3d43f3b6ec32@tencent.com>
References: <20251029-swap-table-p2-v1-0-3d43f3b6ec32@tencent.com>
In-Reply-To: <20251029-swap-table-p2-v1-0-3d43f3b6ec32@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Baoquan He, Barry Song, Chris Li, Nhat Pham,
 Johannes Weiner, Yosry Ahmed, David Hildenbrand, Youngjun Park,
 Hugh Dickins, Baolin Wang, "Huang, Ying", Kemeng Shi, Lorenzo Stoakes,
 "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Kairui Song
X-Mailer: b4 0.14.3
From: Kairui Song

Instead of looking at the swap map, check the swap table directly to
tell if a swap slot is cached. This prepares for the removal of
SWAP_HAS_CACHE.

Signed-off-by: Kairui Song
---
 mm/swap.h        | 11 ++++++++---
 mm/swap_state.c  | 16 ++++++++++++++++
 mm/swapfile.c    | 55 +++++++++++++++++++++++++++++--------------------------
 mm/userfaultfd.c | 10 +++-------
 4 files changed, 56 insertions(+), 36 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h
index 03694ffa662f..73f07bcea5f0 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -275,6 +275,7 @@ void __swapcache_clear_cached(struct swap_info_struct *si,
  * swap entries in the page table, similar to locking swap cache folio.
  * - See the comment of get_swap_device() for more complex usage.
  */
+bool swap_cache_check_folio(swp_entry_t entry);
 struct folio *swap_cache_get_folio(swp_entry_t entry);
 void *swap_cache_get_shadow(swp_entry_t entry);
 void swap_cache_del_folio(struct folio *folio);
@@ -335,8 +336,6 @@ static inline int swap_zeromap_batch(swp_entry_t entry, int max_nr,
 
 static inline int non_swapcache_batch(swp_entry_t entry, int max_nr)
 {
-	struct swap_info_struct *si = __swap_entry_to_info(entry);
-	pgoff_t offset = swp_offset(entry);
 	int i;
 
 	/*
@@ -345,8 +344,9 @@ static inline int non_swapcache_batch(swp_entry_t entry, int max_nr)
 	 * be in conflict with the folio in swap cache.
 	 */
 	for (i = 0; i < max_nr; i++) {
-		if ((si->swap_map[offset + i] & SWAP_HAS_CACHE))
+		if (swap_cache_check_folio(entry))
 			return i;
+		entry.val++;
 	}
 
 	return i;
@@ -449,6 +449,11 @@ static inline int swap_writeout(struct folio *folio,
 	return 0;
 }
 
+static inline bool swap_cache_check_folio(swp_entry_t entry)
+{
+	return false;
+}
+
 static inline struct folio *swap_cache_get_folio(swp_entry_t entry)
 {
 	return NULL;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 85d9f99c384f..41d4fa056203 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -103,6 +103,22 @@ struct folio *swap_cache_get_folio(swp_entry_t entry)
 	return NULL;
 }
 
+/**
+ * swap_cache_check_folio - Check if a swap slot has cache.
+ * @entry: swap entry indicating the slot.
+ *
+ * Context: Caller must ensure @entry is valid and protect the swap
+ * device with reference count or locks.
+ */
+bool swap_cache_check_folio(swp_entry_t entry)
+{
+	unsigned long swp_tb;
+
+	swp_tb = swap_table_get(__swap_entry_to_cluster(entry),
+				swp_cluster_offset(entry));
+	return swp_tb_is_folio(swp_tb);
+}
+
 /**
  * swap_cache_get_shadow - Looks up a shadow in the swap cache.
  * @entry: swap entry used for the lookup.
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 8d98f28907bc..3b7df5768d7f 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -788,23 +788,18 @@ static unsigned int cluster_reclaim_range(struct swap_info_struct *si,
 	unsigned int nr_pages = 1 << order;
 	unsigned long offset = start, end = start + nr_pages;
 	unsigned char *map = si->swap_map;
-	int nr_reclaim;
+	unsigned long swp_tb;
 
 	spin_unlock(&ci->lock);
 	do {
-		switch (READ_ONCE(map[offset])) {
-		case 0:
+		if (swap_count(READ_ONCE(map[offset])))
 			break;
-		case SWAP_HAS_CACHE:
-			nr_reclaim = __try_to_reclaim_swap(si, offset, TTRS_ANYWAY);
-			if (nr_reclaim < 0)
-				goto out;
-			break;
-		default:
-			goto out;
+		swp_tb = swap_table_get(ci, offset % SWAPFILE_CLUSTER);
+		if (swp_tb_is_folio(swp_tb)) {
+			if (__try_to_reclaim_swap(si, offset, TTRS_ANYWAY) < 0)
+				break;
 		}
 	} while (++offset < end);
-out:
 	spin_lock(&ci->lock);
 	/*
@@ -820,37 +815,41 @@ static unsigned int cluster_reclaim_range(struct swap_info_struct *si,
 	 * Recheck the range no matter reclaim succeeded or not, the slot
 	 * could have been be freed while we are not holding the lock.
 	 */
-	for (offset = start; offset < end; offset++)
-		if (READ_ONCE(map[offset]))
+	for (offset = start; offset < end; offset++) {
+		swp_tb = __swap_table_get(ci, offset % SWAPFILE_CLUSTER);
+		if (swap_count(map[offset]) || !swp_tb_is_null(swp_tb))
 			return SWAP_ENTRY_INVALID;
+	}
 	return start;
 }
 
 static bool cluster_scan_range(struct swap_info_struct *si,
 			       struct swap_cluster_info *ci,
-			       unsigned long start, unsigned int nr_pages,
+			       unsigned long offset, unsigned int nr_pages,
 			       bool *need_reclaim)
 {
-	unsigned long offset, end = start + nr_pages;
+	unsigned long end = offset + nr_pages;
 	unsigned char *map = si->swap_map;
+	unsigned long swp_tb;
 
 	if (cluster_is_empty(ci))
 		return true;
 
-	for (offset = start; offset < end; offset++) {
-		switch (READ_ONCE(map[offset])) {
-		case 0:
-			continue;
-		case SWAP_HAS_CACHE:
+	do {
+		if (swap_count(map[offset]))
+			return false;
+		swp_tb = __swap_table_get(ci, offset % SWAPFILE_CLUSTER);
+		if (swp_tb_is_folio(swp_tb)) {
+			WARN_ON_ONCE(!(map[offset] & SWAP_HAS_CACHE));
 			if (!vm_swap_full())
 				return false;
 			*need_reclaim = true;
-			continue;
-		default:
-			return false;
+		} else {
+			/* A entry with no count and no cache must be null */
+			VM_WARN_ON_ONCE(!swp_tb_is_null(swp_tb));
 		}
-	}
+	} while (++offset < end);
 
 	return true;
 }
@@ -1013,7 +1012,8 @@ static void swap_reclaim_full_clusters(struct swap_info_struct *si, bool force)
 		to_scan--;
 
 		while (offset < end) {
-			if (READ_ONCE(map[offset]) == SWAP_HAS_CACHE) {
+			if (!swap_count(READ_ONCE(map[offset])) &&
+			    swp_tb_is_folio(__swap_table_get(ci, offset % SWAPFILE_CLUSTER))) {
 				spin_unlock(&ci->lock);
 				nr_reclaim = __try_to_reclaim_swap(si, offset,
 								   TTRS_ANYWAY);
@@ -1957,6 +1957,7 @@ void swap_put_entries_direct(swp_entry_t entry, int nr)
 	struct swap_info_struct *si;
 	bool any_only_cache = false;
 	unsigned long offset;
+	unsigned long swp_tb;
 
 	si = get_swap_device(entry);
 	if (WARN_ON_ONCE(!si))
@@ -1981,7 +1982,9 @@ void swap_put_entries_direct(swp_entry_t entry, int nr)
 	 */
 	for (offset = start_offset; offset < end_offset; offset += nr) {
 		nr = 1;
-		if (READ_ONCE(si->swap_map[offset]) == SWAP_HAS_CACHE) {
+		swp_tb = swap_table_get(__swap_offset_to_cluster(si, offset),
+					offset % SWAPFILE_CLUSTER);
+		if (!swap_count(READ_ONCE(si->swap_map[offset])) && swp_tb_is_folio(swp_tb)) {
 			/*
 			 * Folios are always naturally aligned in swap so
 			 * advance forward to the next boundary. Zero means no
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 00122f42718c..5411fd340ac3 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -1184,17 +1184,13 @@ static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
 	 * Check if the swap entry is cached after acquiring the src_pte
 	 * lock. Otherwise, we might miss a newly loaded swap cache folio.
 	 *
-	 * Check swap_map directly to minimize overhead, READ_ONCE is sufficient.
 	 * We are trying to catch newly added swap cache, the only possible case is
 	 * when a folio is swapped in and out again staying in swap cache, using the
 	 * same entry before the PTE check above. The PTL is acquired and released
-	 * twice, each time after updating the swap_map's flag. So holding
-	 * the PTL here ensures we see the updated value. False positive is possible,
-	 * e.g. SWP_SYNCHRONOUS_IO swapin may set the flag without touching the
-	 * cache, or during the tiny synchronization window between swap cache and
-	 * swap_map, but it will be gone very quickly, worst result is retry jitters.
+	 * twice, each time after updating the swap table. So holding
+	 * the PTL here ensures we see the updated value.
 	 */
-	if (READ_ONCE(si->swap_map[swp_offset(entry)]) & SWAP_HAS_CACHE) {
+	if (swap_cache_check_folio(entry)) {
 		double_pt_unlock(dst_ptl, src_ptl);
 		return -EAGAIN;
 	}

-- 
2.51.1
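
For readers skimming the series, a minimal caller-side sketch of how the new
helper is meant to be used. example_slot_is_cached() is a hypothetical
function written only for illustration and is not part of this patch;
get_swap_device()/put_swap_device() are the existing APIs that provide the
device pinning swap_cache_check_folio() requires.

/*
 * Hypothetical example (not part of this patch): test whether a swap
 * slot currently holds a swap cache folio by consulting the swap table
 * instead of the SWAP_HAS_CACHE bit in swap_map.
 */
static bool example_slot_is_cached(swp_entry_t entry)
{
	struct swap_info_struct *si;
	bool cached;

	/* Pin the swap device so the cluster and its swap table stay valid. */
	si = get_swap_device(entry);
	if (!si)
		return false;

	cached = swap_cache_check_folio(entry);
	put_swap_device(si);

	return cached;
}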