From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
Baoquan He <bhe@redhat.com>, Barry Song <baohua@kernel.org>,
Chris Li <chrisl@kernel.org>, Nhat Pham <nphamcs@gmail.com>,
Yosry Ahmed <yosry.ahmed@linux.dev>,
David Hildenbrand <david@kernel.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Youngjun Park <youngjun.park@lge.com>,
Hugh Dickins <hughd@google.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Ying Huang <ying.huang@linux.alibaba.com>,
Kemeng Shi <shikemeng@huaweicloud.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
linux-kernel@vger.kernel.org, Kairui Song <kasong@tencent.com>
Subject: [PATCH v2 03/19] mm, swap: never bypass the swap cache even for SWP_SYNCHRONOUS_IO
Date: Mon, 17 Nov 2025 02:11:44 +0800
Message-ID: <20251117-swap-table-p2-v2-3-37730e6ea6d5@tencent.com>
In-Reply-To: <20251117-swap-table-p2-v2-0-37730e6ea6d5@tencent.com>
From: Kairui Song <kasong@tencent.com>

Now that the overhead of the swap cache is trivial, bypassing it is no
longer a valid optimization, so unify the swapin path to always go
through the swap cache. This changes swapin behavior in two observable
ways.

First, readahead is now always disabled for SWP_SYNCHRONOUS_IO devices,
which is a big win for most workloads: we used to rely on
`SWP_SYNCHRONOUS_IO && __swap_count(entry) == 1` as the indicator for
bypassing both the swap cache and readahead. The swap count check made
the bypass ineffective in many cases, and it is not a good indicator
anyway. The limitation existed because the current swap design made it
hard to decouple readahead bypassing from swap cache bypassing [1]: we
do want to always bypass readahead for SWP_SYNCHRONOUS_IO devices, but
bypassing the swap cache causes redundant IO and memory overhead. Now
that swap cache bypassing is gone, the swap count check can be dropped,
and the newly introduced swapin path always skips readahead.
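
For clarity, the change to the fault-path condition looks roughly like
this (a simplified sketch based on the diff below, not a verbatim
excerpt):

	/* Before: bypass swap cache and readahead only for single-mapped entries */
	if (data_race(si->flags & SWP_SYNCHRONOUS_IO) && __swap_count(entry) == 1)
		/* ... skip both the swap cache and readahead ... */

	/* After: always skip readahead on synchronous devices, keep the swap cache */
	if (data_race(si->flags & SWP_SYNCHRONOUS_IO))
		/* ... swap in through the swap cache, no readahead ... */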

Second, this enables large folio swapin for all swap entries on
SWP_SYNCHRONOUS_IO devices. Previously, large folio swapin was also
coupled with swap cache bypassing, so the swap count check made it less
effective as well. Now large folio swapin works for all
SWP_SYNCHRONOUS_IO cases. To catch potential issues with large folio
swapin, especially around page exclusiveness and the swap cache, more
debug sanity checks and comments are added. Overall, the code is
simpler, and the new helper and routines will be reused by other
components in later commits. It also becomes possible to rely on the
swap cache layer for resolving synchronization issues, which a later
commit will do.
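
The new SWP_SYNCHRONOUS_IO path in do_swap_page() then becomes, roughly
(a simplified sketch with error handling omitted; names as in the diff
below):

	folio = alloc_swap_folio(vmf);		/* may be a large folio */
	if (folio) {
		/* Adds the folio to the swap cache and reads it in */
		swapcache = swapin_folio(entry, folio);
		if (swapcache != folio)		/* raced with another swapin */
			folio_put(folio);
		folio = swapcache;
	}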

Worth mentioning: for a large folio workload, this may cause more
serious thrashing. That is not a problem introduced by this commit, but
a generic large folio issue. For a 4K workload, this commit improves
performance.

Signed-off-by: Kairui Song <kasong@tencent.com>
---
mm/memory.c | 136 +++++++++++++++++++++-----------------------------------
mm/swap.h | 6 +++
mm/swap_state.c | 27 +++++++++++
3 files changed, 84 insertions(+), 85 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 50b93b45b174..c95c36efc26a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4606,7 +4606,15 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-static DECLARE_WAIT_QUEUE_HEAD(swapcache_wq);
+/* Sanity check that a folio is fully exclusive */
+static void check_swap_exclusive(struct folio *folio, swp_entry_t entry,
+ unsigned int nr_pages)
+{
+ do {
+ VM_WARN_ON_ONCE_FOLIO(__swap_count(entry) != 1, folio);
+ entry.val++;
+ } while (--nr_pages);
+}
/*
* We enter with non-exclusive mmap_lock (to exclude vma changes,
@@ -4619,17 +4627,14 @@ static DECLARE_WAIT_QUEUE_HEAD(swapcache_wq);
vm_fault_t do_swap_page(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
- struct folio *swapcache, *folio = NULL;
- DECLARE_WAITQUEUE(wait, current);
+ struct folio *swapcache = NULL, *folio;
struct page *page;
struct swap_info_struct *si = NULL;
rmap_t rmap_flags = RMAP_NONE;
- bool need_clear_cache = false;
bool exclusive = false;
softleaf_t entry;
pte_t pte;
vm_fault_t ret = 0;
- void *shadow = NULL;
int nr_pages;
unsigned long page_idx;
unsigned long address;
@@ -4700,57 +4705,21 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
folio = swap_cache_get_folio(entry);
if (folio)
swap_update_readahead(folio, vma, vmf->address);
- swapcache = folio;
-
if (!folio) {
- if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
- __swap_count(entry) == 1) {
- /* skip swapcache */
+ if (data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
folio = alloc_swap_folio(vmf);
if (folio) {
- __folio_set_locked(folio);
- __folio_set_swapbacked(folio);
-
- nr_pages = folio_nr_pages(folio);
- if (folio_test_large(folio))
- entry.val = ALIGN_DOWN(entry.val, nr_pages);
/*
- * Prevent parallel swapin from proceeding with
- * the cache flag. Otherwise, another thread
- * may finish swapin first, free the entry, and
- * swapout reusing the same entry. It's
- * undetectable as pte_same() returns true due
- * to entry reuse.
+ * The folio is charged, so swapin can only fail
+ * due to a raced swapin, in which case NULL is returned.
*/
- if (swapcache_prepare(entry, nr_pages)) {
- /*
- * Relax a bit to prevent rapid
- * repeated page faults.
- */
- add_wait_queue(&swapcache_wq, &wait);
- schedule_timeout_uninterruptible(1);
- remove_wait_queue(&swapcache_wq, &wait);
- goto out_page;
- }
- need_clear_cache = true;
-
- memcg1_swapin(entry, nr_pages);
-
- shadow = swap_cache_get_shadow(entry);
- if (shadow)
- workingset_refault(folio, shadow);
-
- folio_add_lru(folio);
-
- /* To provide entry to swap_read_folio() */
- folio->swap = entry;
- swap_read_folio(folio, NULL);
- folio->private = NULL;
+ swapcache = swapin_folio(entry, folio);
+ if (swapcache != folio)
+ folio_put(folio);
+ folio = swapcache;
}
} else {
- folio = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
- vmf);
- swapcache = folio;
+ folio = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, vmf);
}
if (!folio) {
@@ -4772,6 +4741,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
}
+ swapcache = folio;
ret |= folio_lock_or_retry(folio, vmf);
if (ret & VM_FAULT_RETRY)
goto out_release;
@@ -4841,24 +4811,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
goto out_nomap;
}
- /* allocated large folios for SWP_SYNCHRONOUS_IO */
- if (folio_test_large(folio) && !folio_test_swapcache(folio)) {
- unsigned long nr = folio_nr_pages(folio);
- unsigned long folio_start = ALIGN_DOWN(vmf->address, nr * PAGE_SIZE);
- unsigned long idx = (vmf->address - folio_start) / PAGE_SIZE;
- pte_t *folio_ptep = vmf->pte - idx;
- pte_t folio_pte = ptep_get(folio_ptep);
-
- if (!pte_same(folio_pte, pte_move_swp_offset(vmf->orig_pte, -idx)) ||
- swap_pte_batch(folio_ptep, nr, folio_pte) != nr)
- goto out_nomap;
-
- page_idx = idx;
- address = folio_start;
- ptep = folio_ptep;
- goto check_folio;
- }
-
nr_pages = 1;
page_idx = 0;
address = vmf->address;
@@ -4902,12 +4854,37 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
BUG_ON(!folio_test_anon(folio) && folio_test_mappedtodisk(folio));
BUG_ON(folio_test_anon(folio) && PageAnonExclusive(page));
+ /*
+ * If a large folio already belongs to an anon mapping, we can just
+ * go on and map it partially. If not, the large swapin check above
+ * has failed and the page table has changed, so sub pages might have
+ * been charged to the wrong cgroup, or the entries might even belong
+ * to shmem. So we have to free the folio and fall back. Nothing
+ * should have touched it yet; both anon and shmem check whether a
+ * large folio is fully applicable before use.
+ *
+ * This will be removed once folio allocation is unified in the swap
+ * cache layer, where allocating a folio stabilizes the swap entries.
+ */
+ if (!folio_test_anon(folio) && folio_test_large(folio) &&
+ nr_pages != folio_nr_pages(folio)) {
+ if (!WARN_ON_ONCE(folio_test_dirty(folio)))
+ swap_cache_del_folio(folio);
+ goto out_nomap;
+ }
+
/*
* Check under PT lock (to protect against concurrent fork() sharing
* the swap entry concurrently) for certainly exclusive pages.
*/
if (!folio_test_ksm(folio)) {
+ /*
+ * The can_swapin_thp check above ensures all PTEs have the
+ * same exclusiveness, so checking one PTE is fine.
+ */
exclusive = pte_swp_exclusive(vmf->orig_pte);
+ if (exclusive)
+ check_swap_exclusive(folio, entry, nr_pages);
if (folio != swapcache) {
/*
* We have a fresh page that is not exposed to the
@@ -4985,18 +4962,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
vmf->orig_pte = pte_advance_pfn(pte, page_idx);
/* ksm created a completely new copy */
- if (unlikely(folio != swapcache && swapcache)) {
+ if (unlikely(folio != swapcache)) {
folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
folio_add_lru_vma(folio, vma);
} else if (!folio_test_anon(folio)) {
/*
- * We currently only expect small !anon folios which are either
- * fully exclusive or fully shared, or new allocated large
- * folios which are fully exclusive. If we ever get large
- * folios within swapcache here, we have to be careful.
+ * We currently only expect !anon folios that are fully
+ * mappable. See the comment after can_swapin_thp above.
*/
- VM_WARN_ON_ONCE(folio_test_large(folio) && folio_test_swapcache(folio));
- VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+ VM_WARN_ON_ONCE_FOLIO(folio_nr_pages(folio) != nr_pages, folio);
+ VM_WARN_ON_ONCE_FOLIO(folio_mapped(folio), folio);
folio_add_new_anon_rmap(folio, vma, address, rmap_flags);
} else {
folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, address,
@@ -5036,12 +5011,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
if (vmf->pte)
pte_unmap_unlock(vmf->pte, vmf->ptl);
out:
- /* Clear the swap cache pin for direct swapin after PTL unlock */
- if (need_clear_cache) {
- swapcache_clear(si, entry, nr_pages);
- if (waitqueue_active(&swapcache_wq))
- wake_up(&swapcache_wq);
- }
if (si)
put_swap_device(si);
return ret;
@@ -5049,6 +5018,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
if (vmf->pte)
pte_unmap_unlock(vmf->pte, vmf->ptl);
out_page:
+ if (folio_test_swapcache(folio))
+ folio_free_swap(folio);
folio_unlock(folio);
out_release:
folio_put(folio);
@@ -5056,11 +5027,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
folio_unlock(swapcache);
folio_put(swapcache);
}
- if (need_clear_cache) {
- swapcache_clear(si, entry, nr_pages);
- if (waitqueue_active(&swapcache_wq))
- wake_up(&swapcache_wq);
- }
if (si)
put_swap_device(si);
return ret;
diff --git a/mm/swap.h b/mm/swap.h
index 0fff92e42cfe..214e7d041030 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -268,6 +268,7 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
struct mempolicy *mpol, pgoff_t ilx);
struct folio *swapin_readahead(swp_entry_t entry, gfp_t flag,
struct vm_fault *vmf);
+struct folio *swapin_folio(swp_entry_t entry, struct folio *folio);
void swap_update_readahead(struct folio *folio, struct vm_area_struct *vma,
unsigned long addr);
@@ -386,6 +387,11 @@ static inline struct folio *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
return NULL;
}
+static inline struct folio *swapin_folio(swp_entry_t entry, struct folio *folio)
+{
+ return NULL;
+}
+
static inline void swap_update_readahead(struct folio *folio,
struct vm_area_struct *vma, unsigned long addr)
{
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 7b93704fcbe7..826bec661305 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -545,6 +545,33 @@ struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_mask,
return result;
}
+/**
+ * swapin_folio - swap-in one or multiple entries skipping readahead.
+ * @entry: starting swap entry to swap in
+ * @folio: a newly allocated and charged folio
+ *
+ * Reads @entry into @folio and adds @folio to the swap cache.
+ * If @folio is a large folio, the @entry will be rounded down to align
+ * with the folio size.
+ *
+ * Return: pointer to @folio on success. If @folio is a large folio and
+ * this raced with another swapin, NULL is returned. Otherwise, if another
+ * folio was already added to the swap cache, that swap cache folio is
+ * returned instead.
+ */
+struct folio *swapin_folio(swp_entry_t entry, struct folio *folio)
+{
+ struct folio *swapcache;
+ pgoff_t offset = swp_offset(entry);
+ unsigned long nr_pages = folio_nr_pages(folio);
+
+ entry = swp_entry(swp_type(entry), round_down(offset, nr_pages));
+ swapcache = __swap_cache_prepare_and_add(entry, folio, 0, true, false);
+ if (swapcache == folio)
+ swap_read_folio(folio, NULL);
+ return swapcache;
+}
+
/*
* Locate a page of swap in physical memory, reserving swap cache space
* and reading the disk if it is not already cached.
--
2.51.2