From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nhat Pham <nphamcs@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, hughd@google.com,
	yosry.ahmed@linux.dev, mhocko@kernel.org, roman.gushchin@linux.dev,
	shakeel.butt@linux.dev, muchun.song@linux.dev, len.brown@intel.com,
	chengming.zhou@linux.dev, kasong@tencent.com, chrisl@kernel.org,
	huang.ying.caritas@gmail.com, ryan.roberts@arm.com,
	viro@zeniv.linux.org.uk, baohua@kernel.org, osalvador@suse.de,
	lorenzo.stoakes@oracle.com, christophe.leroy@csgroup.eu,
	pavel@kernel.org, kernel-team@meta.com, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, linux-pm@vger.kernel.org, peterx@redhat.com
Subject: [RFC PATCH v2 02/18] swapfile: rearrange functions
Date: Tue, 29 Apr 2025 16:38:30 -0700
Message-ID: <20250429233848.3093350-3-nphamcs@gmail.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250429233848.3093350-1-nphamcs@gmail.com>
References: <20250429233848.3093350-1-nphamcs@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Rearrange some functions in preparation for the rest of the series. No
functional change intended.

Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 mm/swapfile.c | 332 +++++++++++++++++++++++++-------------------------
 1 file changed, 166 insertions(+), 166 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index df7c4e8b089c..426674d35983 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -124,11 +124,6 @@ static struct swap_info_struct *swap_type_to_swap_info(int type)
 	return READ_ONCE(swap_info[type]); /* rcu_dereference() */
 }
 
-static inline unsigned char swap_count(unsigned char ent)
-{
-	return ent & ~SWAP_HAS_CACHE;	/* may include COUNT_CONTINUED flag */
-}
-
 /*
  * Use the second highest bit of inuse_pages counter as the indicator
  * if one swap device is on the available plist, so the atomic can
@@ -161,6 +156,11 @@ static long swap_usage_in_pages(struct swap_info_struct *si)
 /* Reclaim directly, bypass the slot cache and don't touch device lock */
 #define TTRS_DIRECT		0x8
 
+static inline unsigned char swap_count(unsigned char ent)
+{
+	return ent & ~SWAP_HAS_CACHE;	/* may include COUNT_CONTINUED flag */
+}
+
 static bool swap_is_has_cache(struct swap_info_struct *si,
 			      unsigned long offset, int nr_pages)
 {
@@ -1326,46 +1326,6 @@ static struct swap_info_struct *_swap_info_get(swp_entry_t entry)
 	return NULL;
 }
 
-static unsigned char __swap_entry_free_locked(struct swap_info_struct *si,
-					      unsigned long offset,
-					      unsigned char usage)
-{
-	unsigned char count;
-	unsigned char has_cache;
-
-	count = si->swap_map[offset];
-
-	has_cache = count & SWAP_HAS_CACHE;
-	count &= ~SWAP_HAS_CACHE;
-
-	if (usage == SWAP_HAS_CACHE) {
-		VM_BUG_ON(!has_cache);
-		has_cache = 0;
-	} else if (count == SWAP_MAP_SHMEM) {
-		/*
-		 * Or we could insist on shmem.c using a special
-		 * swap_shmem_free() and free_shmem_swap_and_cache()...
-		 */
-		count = 0;
-	} else if ((count & ~COUNT_CONTINUED) <= SWAP_MAP_MAX) {
-		if (count == COUNT_CONTINUED) {
-			if (swap_count_continued(si, offset, count))
-				count = SWAP_MAP_MAX | COUNT_CONTINUED;
-			else
-				count = SWAP_MAP_MAX;
-		} else
-			count--;
-	}
-
-	usage = count | has_cache;
-	if (usage)
-		WRITE_ONCE(si->swap_map[offset], usage);
-	else
-		WRITE_ONCE(si->swap_map[offset], SWAP_HAS_CACHE);
-
-	return usage;
-}
-
 /*
  * When we get a swap entry, if there aren't some other ways to
  * prevent swapoff, such as the folio in swap cache is locked, RCU
@@ -1432,6 +1392,46 @@ struct swap_info_struct *get_swap_device(swp_entry_t entry)
 	return NULL;
 }
 
+static unsigned char __swap_entry_free_locked(struct swap_info_struct *si,
+					      unsigned long offset,
+					      unsigned char usage)
+{
+	unsigned char count;
+	unsigned char has_cache;
+
+	count = si->swap_map[offset];
+
+	has_cache = count & SWAP_HAS_CACHE;
+	count &= ~SWAP_HAS_CACHE;
+
+	if (usage == SWAP_HAS_CACHE) {
+		VM_BUG_ON(!has_cache);
+		has_cache = 0;
+	} else if (count == SWAP_MAP_SHMEM) {
+		/*
+		 * Or we could insist on shmem.c using a special
+		 * swap_shmem_free() and free_shmem_swap_and_cache()...
+		 */
+		count = 0;
+	} else if ((count & ~COUNT_CONTINUED) <= SWAP_MAP_MAX) {
+		if (count == COUNT_CONTINUED) {
+			if (swap_count_continued(si, offset, count))
+				count = SWAP_MAP_MAX | COUNT_CONTINUED;
+			else
+				count = SWAP_MAP_MAX;
+		} else
+			count--;
+	}
+
+	usage = count | has_cache;
+	if (usage)
+		WRITE_ONCE(si->swap_map[offset], usage);
+	else
+		WRITE_ONCE(si->swap_map[offset], SWAP_HAS_CACHE);
+
+	return usage;
+}
+
 static unsigned char __swap_entry_free(struct swap_info_struct *si,
 				       swp_entry_t entry)
 {
@@ -1585,25 +1585,6 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
 	unlock_cluster(ci);
 }
 
-void swapcache_free_entries(swp_entry_t *entries, int n)
-{
-	int i;
-	struct swap_cluster_info *ci;
-	struct swap_info_struct *si = NULL;
-
-	if (n <= 0)
-		return;
-
-	for (i = 0; i < n; ++i) {
-		si = _swap_info_get(entries[i]);
-		if (si) {
-			ci = lock_cluster(si, swp_offset(entries[i]));
-			swap_entry_range_free(si, ci, entries[i], 1);
-			unlock_cluster(ci);
-		}
-	}
-}
-
 int __swap_count(swp_entry_t entry)
 {
 	struct swap_info_struct *si = swp_swap_info(entry);
@@ -1717,57 +1698,6 @@ static bool folio_swapped(struct folio *folio)
 	return swap_page_trans_huge_swapped(si, entry, folio_order(folio));
 }
 
-static bool folio_swapcache_freeable(struct folio *folio)
-{
-	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
-
-	if (!folio_test_swapcache(folio))
-		return false;
-	if (folio_test_writeback(folio))
-		return false;
-
-	/*
-	 * Once hibernation has begun to create its image of memory,
-	 * there's a danger that one of the calls to folio_free_swap()
-	 * - most probably a call from __try_to_reclaim_swap() while
-	 * hibernation is allocating its own swap pages for the image,
-	 * but conceivably even a call from memory reclaim - will free
-	 * the swap from a folio which has already been recorded in the
-	 * image as a clean swapcache folio, and then reuse its swap for
-	 * another page of the image.  On waking from hibernation, the
-	 * original folio might be freed under memory pressure, then
-	 * later read back in from swap, now with the wrong data.
-	 *
-	 * Hibernation suspends storage while it is writing the image
-	 * to disk so check that here.
-	 */
-	if (pm_suspended_storage())
-		return false;
-
-	return true;
-}
-
-/**
- * folio_free_swap() - Free the swap space used for this folio.
- * @folio: The folio to remove.
- *
- * If swap is getting full, or if there are no more mappings of this folio,
- * then call folio_free_swap to free its swap space.
- *
- * Return: true if we were able to release the swap space.
- */
-bool folio_free_swap(struct folio *folio)
-{
-	if (!folio_swapcache_freeable(folio))
-		return false;
-	if (folio_swapped(folio))
-		return false;
-
-	delete_from_swap_cache(folio);
-	folio_set_dirty(folio);
-	return true;
-}
-
 /**
  * free_swap_and_cache_nr() - Release reference on range of swap entries and
  *                            reclaim their cache if no more references remain.
@@ -1842,6 +1772,76 @@ void free_swap_and_cache_nr(swp_entry_t entry, int nr)
 	put_swap_device(si);
 }
 
+void swapcache_free_entries(swp_entry_t *entries, int n)
+{
+	int i;
+	struct swap_cluster_info *ci;
+	struct swap_info_struct *si = NULL;
+
+	if (n <= 0)
+		return;
+
+	for (i = 0; i < n; ++i) {
+		si = _swap_info_get(entries[i]);
+		if (si) {
+			ci = lock_cluster(si, swp_offset(entries[i]));
+			swap_entry_range_free(si, ci, entries[i], 1);
+			unlock_cluster(ci);
+		}
+	}
+}
+
+static bool folio_swapcache_freeable(struct folio *folio)
+{
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+
+	if (!folio_test_swapcache(folio))
+		return false;
+	if (folio_test_writeback(folio))
+		return false;
+
+	/*
+	 * Once hibernation has begun to create its image of memory,
+	 * there's a danger that one of the calls to folio_free_swap()
+	 * - most probably a call from __try_to_reclaim_swap() while
+	 * hibernation is allocating its own swap pages for the image,
+	 * but conceivably even a call from memory reclaim - will free
+	 * the swap from a folio which has already been recorded in the
+	 * image as a clean swapcache folio, and then reuse its swap for
+	 * another page of the image.  On waking from hibernation, the
+	 * original folio might be freed under memory pressure, then
+	 * later read back in from swap, now with the wrong data.
+	 *
+	 * Hibernation suspends storage while it is writing the image
+	 * to disk so check that here.
+	 */
+	if (pm_suspended_storage())
+		return false;
+
+	return true;
+}
+
+/**
+ * folio_free_swap() - Free the swap space used for this folio.
+ * @folio: The folio to remove.
+ *
+ * If swap is getting full, or if there are no more mappings of this folio,
+ * then call folio_free_swap to free its swap space.
+ *
+ * Return: true if we were able to release the swap space.
+ */
+bool folio_free_swap(struct folio *folio)
+{
+	if (!folio_swapcache_freeable(folio))
+		return false;
+	if (folio_swapped(folio))
+		return false;
+
+	delete_from_swap_cache(folio);
+	folio_set_dirty(folio);
+	return true;
+}
+
 #ifdef CONFIG_HIBERNATION
 
 swp_entry_t get_swap_page_of_type(int type)
@@ -1957,6 +1957,37 @@ unsigned int count_swap_pages(int type, int free)
 }
 #endif /* CONFIG_HIBERNATION */
 
+/*
+ * Scan swap_map from current position to next entry still in use.
+ * Return 0 if there are no inuse entries after prev till end of
+ * the map.
+ */
+static unsigned int find_next_to_unuse(struct swap_info_struct *si,
+				       unsigned int prev)
+{
+	unsigned int i;
+	unsigned char count;
+
+	/*
+	 * No need for swap_lock here: we're just looking
+	 * for whether an entry is in use, not modifying it; false
+	 * hits are okay, and sys_swapoff() has already prevented new
+	 * allocations from this area (while holding swap_lock).
+	 */
+	for (i = prev + 1; i < si->max; i++) {
+		count = READ_ONCE(si->swap_map[i]);
+		if (count && swap_count(count) != SWAP_MAP_BAD)
+			break;
+		if ((i % LATENCY_LIMIT) == 0)
+			cond_resched();
+	}
+
+	if (i == si->max)
+		i = 0;
+
+	return i;
+}
+
 static inline int pte_same_as_swp(pte_t pte, pte_t swp_pte)
 {
 	return pte_same(pte_swp_clear_flags(pte), swp_pte);
@@ -2241,37 +2272,6 @@ static int unuse_mm(struct mm_struct *mm, unsigned int type)
 	return ret;
 }
 
-/*
- * Scan swap_map from current position to next entry still in use.
- * Return 0 if there are no inuse entries after prev till end of
- * the map.
- */
-static unsigned int find_next_to_unuse(struct swap_info_struct *si,
-				       unsigned int prev)
-{
-	unsigned int i;
-	unsigned char count;
-
-	/*
-	 * No need for swap_lock here: we're just looking
-	 * for whether an entry is in use, not modifying it; false
-	 * hits are okay, and sys_swapoff() has already prevented new
-	 * allocations from this area (while holding swap_lock).
-	 */
-	for (i = prev + 1; i < si->max; i++) {
-		count = READ_ONCE(si->swap_map[i]);
-		if (count && swap_count(count) != SWAP_MAP_BAD)
-			break;
-		if ((i % LATENCY_LIMIT) == 0)
-			cond_resched();
-	}
-
-	if (i == si->max)
-		i = 0;
-
-	return i;
-}
-
 static int try_to_unuse(unsigned int type)
 {
 	struct mm_struct *prev_mm;
@@ -3525,6 +3525,26 @@ void si_swapinfo(struct sysinfo *val)
 	spin_unlock(&swap_lock);
 }
 
+struct swap_info_struct *swp_swap_info(swp_entry_t entry)
+{
+	return swap_type_to_swap_info(swp_type(entry));
+}
+
+/*
+ * out-of-line methods to avoid include hell.
+ */
+struct address_space *swapcache_mapping(struct folio *folio)
+{
+	return swp_swap_info(folio->swap)->swap_file->f_mapping;
+}
+EXPORT_SYMBOL_GPL(swapcache_mapping);
+
+pgoff_t __folio_swap_cache_index(struct folio *folio)
+{
+	return swap_cache_index(folio->swap);
+}
+EXPORT_SYMBOL_GPL(__folio_swap_cache_index);
+
 /*
  * Verify that nr swap entries are valid and increment their swap map counts.
  *
@@ -3658,26 +3678,6 @@ void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr)
 	cluster_swap_free_nr(si, offset, nr, SWAP_HAS_CACHE);
 }
 
-struct swap_info_struct *swp_swap_info(swp_entry_t entry)
-{
-	return swap_type_to_swap_info(swp_type(entry));
-}
-
-/*
- * out-of-line methods to avoid include hell.
- */
-struct address_space *swapcache_mapping(struct folio *folio)
-{
-	return swp_swap_info(folio->swap)->swap_file->f_mapping;
-}
-EXPORT_SYMBOL_GPL(swapcache_mapping);
-
-pgoff_t __folio_swap_cache_index(struct folio *folio)
-{
-	return swap_cache_index(folio->swap);
-}
-EXPORT_SYMBOL_GPL(__folio_swap_cache_index);
-
 /*
  * add_swap_count_continuation - called when a swap count is duplicated
  * beyond SWAP_MAP_MAX, it allocates a new page and links that to the entry's
-- 
2.47.1