From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Matthew Wilcox, Hugh Dickins, Chris Li, David Hildenbrand,
	Yosry Ahmed, "Huang, Ying", Nhat Pham, Johannes Weiner, Baolin Wang,
	Baoquan He, Barry Song, Kalesh Singh, Kemeng Shi, Tim Chen,
	Ryan Roberts, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 14/28] mm/shmem: never bypass the swap cache for SWP_SYNCHRONOUS_IO
Date: Thu, 15 May 2025 04:17:14 +0800
Message-ID: <20250514201729.48420-15-ryncsn@gmail.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250514201729.48420-1-ryncsn@gmail.com>
References: <20250514201729.48420-1-ryncsn@gmail.com>
Reply-To: Kairui Song
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Kairui Song

Now that the overhead of the swap cache is trivial to none, bypassing the
swap cache is no longer a valid optimization. So remove the cache bypass
swapin path for simplification. Many helpers and functions can be dropped
now.

Signed-off-by: Kairui Song
---
 mm/shmem.c    | 109 ++++++++++++++++++--------------------------------
 mm/swap.h     |   4 --
 mm/swapfile.c |  35 +++++-----------
 3 files changed, 48 insertions(+), 100 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index da80a8faa39e..e87eff03c08b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -899,7 +899,9 @@ static int shmem_add_to_page_cache(struct folio *folio,
 				   pgoff_t index, void *expected, gfp_t gfp)
 {
 	XA_STATE_ORDER(xas, &mapping->i_pages, index, folio_order(folio));
-	long nr = folio_nr_pages(folio);
+	unsigned long nr = folio_nr_pages(folio);
+	swp_entry_t iter, swap;
+	void *entry;
 
 	VM_BUG_ON_FOLIO(index != round_down(index, nr), folio);
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
@@ -912,13 +914,19 @@ static int shmem_add_to_page_cache(struct folio *folio,
 	gfp &= GFP_RECLAIM_MASK;
 	folio_throttle_swaprate(folio, gfp);
 
+	if (expected)
+		swap = iter = radix_to_swp_entry(expected);
+
 	do {
 		xas_lock_irq(&xas);
-		if (expected != xas_find_conflict(&xas)) {
-			xas_set_err(&xas, -EEXIST);
-			goto unlock;
+		xas_for_each_conflict(&xas, entry) {
+			if (!expected || entry != swp_to_radix_entry(iter)) {
+				xas_set_err(&xas, -EEXIST);
+				goto unlock;
+			}
+			iter.val += 1 << xas_get_order(&xas);
 		}
-		if (expected && xas_find_conflict(&xas)) {
+		if (expected && iter.val - nr != swap.val) {
 			xas_set_err(&xas, -EEXIST);
 			goto unlock;
 		}
@@ -1973,14 +1981,12 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 	return ERR_PTR(error);
 }
 
-static struct folio *shmem_swap_alloc_folio(struct inode *inode,
+static struct folio *shmem_swapin_folio_order(struct inode *inode,
 		struct vm_area_struct *vma, pgoff_t index,
 		swp_entry_t entry, int order, gfp_t gfp)
 {
 	struct shmem_inode_info *info = SHMEM_I(inode);
-	struct folio *new;
-	void *shadow;
-	int nr_pages;
+	struct folio *new, *swapcache;
 
 	/*
 	 * We have arrived here because our zones are constrained, so don't
@@ -1995,41 +2001,19 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
 
 	new = shmem_alloc_folio(gfp, order, info, index);
 	if (!new)
-		return ERR_PTR(-ENOMEM);
+		return NULL;
 
-	nr_pages = folio_nr_pages(new);
 	if (mem_cgroup_swapin_charge_folio(new, vma ? vma->vm_mm : NULL,
-					gfp, entry)) {
+					   gfp, entry)) {
 		folio_put(new);
-		return ERR_PTR(-ENOMEM);
+		return NULL;
 	}
 
-	/*
-	 * Prevent parallel swapin from proceeding with the swap cache flag.
-	 *
-	 * Of course there is another possible concurrent scenario as well,
-	 * that is to say, the swap cache flag of a large folio has already
-	 * been set by swapcache_prepare(), while another thread may have
-	 * already split the large swap entry stored in the shmem mapping.
-	 * In this case, shmem_add_to_page_cache() will help identify the
-	 * concurrent swapin and return -EEXIST.
-	 */
-	if (swapcache_prepare(entry, nr_pages)) {
+	swapcache = swapin_entry(entry, new);
+	if (swapcache != new)
 		folio_put(new);
-		return ERR_PTR(-EEXIST);
-	}
-	__folio_set_locked(new);
-	__folio_set_swapbacked(new);
-	new->swap = entry;
-
-	memcg1_swapin(entry, nr_pages);
-	shadow = swap_cache_get_shadow(entry);
-	if (shadow)
-		workingset_refault(new, shadow);
-	folio_add_lru(new);
-	swap_read_folio(new, NULL);
-
-	return new;
+
+	return swapcache;
 }
 
 /*
@@ -2122,8 +2106,7 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
 }
 
 static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
-					 struct folio *folio, swp_entry_t swap,
-					 bool skip_swapcache)
+					 struct folio *folio, swp_entry_t swap)
 {
 	struct address_space *mapping = inode->i_mapping;
 	swp_entry_t swapin_error;
@@ -2139,8 +2122,7 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
 
 	nr_pages = folio_nr_pages(folio);
 	folio_wait_writeback(folio);
-	if (!skip_swapcache)
-		delete_from_swap_cache(folio);
+	delete_from_swap_cache(folio);
 	/*
 	 * Don't treat swapin error folio as alloced. Otherwise inode->i_blocks
 	 * won't be 0 when inode is released and thus trigger WARN_ON(i_blocks)
@@ -2241,7 +2223,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct swap_info_struct *si;
 	struct folio *folio = NULL;
-	bool skip_swapcache = false;
 	swp_entry_t swap;
 	int error, nr_pages, order, split_order;
 
@@ -2283,25 +2264,16 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 			     !zswap_never_enabled()))
 		fallback_order0 = true;
 
-	/* Skip swapcache for synchronous device. */
+	/* Try mTHP swapin for synchronous device. */
 	if (!fallback_order0 && data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
-		folio = shmem_swap_alloc_folio(inode, vma, index, swap, order, gfp);
-		if (!IS_ERR(folio)) {
-			skip_swapcache = true;
+		folio = shmem_swapin_folio_order(inode, vma, index, swap, order, gfp);
+		if (folio)
 			goto alloced;
-		}
-
-		/*
-		 * Fallback to swapin order-0 folio unless the swap entry
-		 * already exists.
-		 */
-		error = PTR_ERR(folio);
-		folio = NULL;
-		if (error == -EEXIST)
-			goto failed;
 	}
 
 	/*
+	 * Fallback to swapin order-0 folio.
+	 *
 	 * Now swap device can only swap in order 0 folio, then we
 	 * should split the large swap entry stored in the pagecache
 	 * if necessary.
@@ -2338,13 +2310,15 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 		split_order = shmem_split_large_entry(inode, index, swap, gfp);
 		if (split_order < 0) {
 			error = split_order;
+			folio_put(folio);
+			folio = NULL;
 			goto failed;
 		}
 	}
 alloced:
 	/* We have to do this with folio locked to prevent races */
 	folio_lock(folio);
-	if (!skip_swapcache && !folio_swap_contains(folio, swap)) {
+	if (!folio_swap_contains(folio, swap)) {
 		error = -EEXIST;
 		goto unlock;
 	}
@@ -2353,12 +2327,15 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	index = round_down(index, nr_pages);
 	swap = swp_entry(swp_type(swap), round_down(swp_offset(swap), nr_pages));
-	if (folio_order(folio) != shmem_check_swap_entry(mapping, index, swap)) {
+	/*
+	 * Swapin must go through the swap cache layer; only a split of the
+	 * large swap entry may happen without locking the swap cache.
+	 */
+	if (folio_order(folio) < shmem_check_swap_entry(mapping, index, swap)) {
 		error = -EEXIST;
 		goto unlock;
 	}
-	if (!skip_swapcache)
-		swap_update_readahead(folio, NULL, 0);
+	swap_update_readahead(folio, NULL, 0);
 	if (!folio_test_uptodate(folio)) {
 		error = -EIO;
 		goto failed;
 	}
@@ -2387,12 +2364,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	if (sgp == SGP_WRITE)
 		folio_mark_accessed(folio);
 
-	if (skip_swapcache) {
-		folio->swap.val = 0;
-		swapcache_clear(si, swap, nr_pages);
-	} else {
-		delete_from_swap_cache(folio);
-	}
+	delete_from_swap_cache(folio);
 	folio_mark_dirty(folio);
 	swap_free_nr(swap, nr_pages);
 	put_swap_device(si);
@@ -2403,11 +2375,8 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	if (shmem_check_swap_entry(mapping, index, swap) < 0)
 		error = -EEXIST;
 	if (error == -EIO)
-		shmem_set_folio_swapin_error(inode, index, folio, swap,
-					     skip_swapcache);
+		shmem_set_folio_swapin_error(inode, index, folio, swap);
 unlock:
-	if (skip_swapcache)
-		swapcache_clear(si, swap, folio_nr_pages(folio));
 	if (folio) {
 		folio_unlock(folio);
 		folio_put(folio);
diff --git a/mm/swap.h b/mm/swap.h
index aab6bf9c3a8a..cad24a3abda8 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -319,10 +319,6 @@ static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
 	return 0;
 }
 
-static inline void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr)
-{
-}
-
 static inline struct folio *swap_cache_get_folio(swp_entry_t entry)
 {
 	return NULL;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 62af67b6f7c2..d3abd2149f8e 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1430,22 +1430,6 @@ struct swap_info_struct *get_swap_device(swp_entry_t entry)
 	return NULL;
 }
 
-static void swap_entries_put_cache(struct swap_info_struct *si,
-				   swp_entry_t entry, int nr)
-{
-	unsigned long offset = swp_offset(entry);
-	struct swap_cluster_info *ci;
-
-	ci = swap_lock_cluster(si, offset);
-	if (swap_only_has_cache(si, offset, nr)) {
-		swap_entries_free(si, ci, entry, nr);
-	} else {
-		for (int i = 0; i < nr; i++, entry.val++)
-			swap_entry_put_locked(si, ci, entry, SWAP_HAS_CACHE);
-	}
-	swap_unlock_cluster(ci);
-}
-
 static bool swap_entries_put_map(struct swap_info_struct *si,
 				 swp_entry_t entry, int nr)
 {
@@ -1578,13 +1562,21 @@ void swap_free_nr(swp_entry_t entry, int nr_pages)
 void put_swap_folio(struct folio *folio, swp_entry_t entry)
 {
 	struct swap_info_struct *si;
+	struct swap_cluster_info *ci;
+	unsigned long offset = swp_offset(entry);
 	int size = 1 << swap_entry_order(folio_order(folio));
 
 	si = _swap_info_get(entry);
 	if (!si)
 		return;
 
-	swap_entries_put_cache(si, entry, size);
+	ci = swap_lock_cluster(si, offset);
+	if (swap_only_has_cache(si, offset, size))
+		swap_entries_free(si, ci, entry, size);
+	else
+		for (int i = 0; i < size; i++, entry.val++)
+			swap_entry_put_locked(si, ci, entry, SWAP_HAS_CACHE);
+	swap_unlock_cluster(ci);
 }
 
 int __swap_count(swp_entry_t entry)
@@ -3615,15 +3607,6 @@ int swapcache_prepare(swp_entry_t entry, int nr)
 	return __swap_duplicate(entry, SWAP_HAS_CACHE, nr);
 }
 
-/*
- * Caller should ensure entries belong to the same folio so
- * the entries won't span cross cluster boundary.
- */
-void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr)
-{
-	swap_entries_put_cache(si, entry, nr);
-}
-
 /*
  * add_swap_count_continuation - called when a swap count is duplicated
  * beyond SWAP_MAP_MAX, it allocates a new page and links that to the entry's
-- 
2.49.0
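
Reviewer note, not part of the patch: the mm/shmem.c hunks are easier to
follow as the resulting helper. Below is a minimal sketch of
shmem_swapin_folio_order() after this change, condensed from the diff
above; it assumes swapin_entry(), introduced earlier in this series,
either installs the freshly allocated folio in the swap cache and reads
it, or returns the folio that already owns the entry. The gfp-constraint
handling at the top of the function is elided.

/* Sketch only -- condensed from the hunks above, not a drop-in copy. */
static struct folio *shmem_swapin_folio_order(struct inode *inode,
		struct vm_area_struct *vma, pgoff_t index,
		swp_entry_t entry, int order, gfp_t gfp)
{
	struct shmem_inode_info *info = SHMEM_I(inode);
	struct folio *new, *swapcache;

	/* Allocate a folio of the requested order for this shmem index. */
	new = shmem_alloc_folio(gfp, order, info, index);
	if (!new)
		return NULL;

	/* Charge the swapin to the memcg before touching the swap cache. */
	if (mem_cgroup_swapin_charge_folio(new, vma ? vma->vm_mm : NULL,
					   gfp, entry)) {
		folio_put(new);
		return NULL;
	}

	/*
	 * Assumed behaviour of swapin_entry(): add @new to the swap cache
	 * and read it in, or hand back whichever folio already owns @entry.
	 */
	swapcache = swapin_entry(entry, new);
	if (swapcache != new)
		folio_put(new);

	return swapcache;
}

With every swapin going through the swap cache, the swapcache_prepare()/
swapcache_clear() bookkeeping and the skip_swapcache special cases in
shmem_swapin_folio() all disappear, which is where most of the 100 deleted
lines come from.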