From: Chris Li <chrisl@kernel.org>
To: Kairui Song <kasong@tencent.com>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Hugh Dickins <hughd@google.com>, Barry Song <baohua@kernel.org>,
Baoquan He <bhe@redhat.com>, Nhat Pham <nphamcs@gmail.com>,
Kemeng Shi <shikemeng@huaweicloud.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Ying Huang <ying.huang@linux.alibaba.com>,
Johannes Weiner <hannes@cmpxchg.org>,
David Hildenbrand <david@redhat.com>,
Yosry Ahmed <yosryahmed@google.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Zi Yan <ziy@nvidia.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 02/15] mm, swap: use unified helper for swap cache look up
Date: Fri, 5 Sep 2025 16:59:29 -0700 [thread overview]
Message-ID: <CAF8kJuP3TePwm-Yv9k=8caDN_OHsEXLjkQyqzX8Zi5rByAxRzw@mail.gmail.com> (raw)
In-Reply-To: <20250905191357.78298-3-ryncsn@gmail.com>
Acked-by: Chris Li <chrisl@kernel.org>
Chris
On Fri, Sep 5, 2025 at 12:14 PM Kairui Song <ryncsn@gmail.com> wrote:
>
> From: Kairui Song <kasong@tencent.com>
>
> The swap cache lookup helper swap_cache_get_folio currently updates
> readahead statistics as well, so callers that are not doing swapin
> from any VMA or mapping are forced to use the filemap helpers instead
> and have to access the swap cache space directly.
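>
> For reference, the pre-patch signature (visible in the removed
> mm/swap.h lines below) tied the lookup to a VMA:
>
>         struct folio *swap_cache_get_folio(swp_entry_t entry,
>                         struct vm_area_struct *vma, unsigned long addr);
>
> which is why VMA-less callers such as mincore and try_to_unuse had to
> use the filemap helpers (filemap_get_entry()/filemap_get_folio()) on
> the swap address space themselves.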
>
> So decouple the readahead update from the swap cache lookup: move the
> readahead update into a standalone helper, let callers invoke it only
> when they actually perform readahead, and convert all swap cache
> lookups to use swap_cache_get_folio.
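>
> As a minimal sketch of the resulting caller pattern (this mirrors the
> do_swap_page() hunk below; locking and error handling elided):
>
>         folio = swap_cache_get_folio(entry);
>         if (folio) {
>                 /* only swapin paths with a faulting VMA update readahead */
>                 swap_update_readahead(folio, vma, vmf->address);
>                 page = folio_file_page(folio, swp_offset(entry));
>         }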
>
> After this commit, only three special cases still access the swap
> cache space directly: huge memory splitting, migration, and shmem
> replacing, because they need to lock the XArray. The following commits
> will wrap their swap cache accesses with dedicated helpers too.
>
> Worth noting: dropbehind is currently not supported for anon folios,
> so we will never see a dropbehind folio in the swap cache. The unified
> helper can be updated later to handle that.
>
> While at it, add proper kerneldoc for the touched helpers.
>
> No functional change.
>
> Signed-off-by: Kairui Song <kasong@tencent.com>
> Acked-by: Chris Li <chrisl@kernel.org>
> Acked-by: Nhat Pham <nphamcs@gmail.com>
> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Reviewed-by: Barry Song <baohua@kernel.org>
> ---
> mm/memory.c | 6 ++-
> mm/mincore.c | 3 +-
> mm/shmem.c | 4 +-
> mm/swap.h | 13 ++++--
> mm/swap_state.c | 109 +++++++++++++++++++++++++----------------------
> mm/swapfile.c | 11 +++--
> mm/userfaultfd.c | 5 +--
> 7 files changed, 81 insertions(+), 70 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index d9de6c056179..10ef528a5f44 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4660,9 +4660,11 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> if (unlikely(!si))
> goto out;
>
> - folio = swap_cache_get_folio(entry, vma, vmf->address);
> - if (folio)
> + folio = swap_cache_get_folio(entry);
> + if (folio) {
> + swap_update_readahead(folio, vma, vmf->address);
> page = folio_file_page(folio, swp_offset(entry));
> + }
> swapcache = folio;
>
> if (!folio) {
> diff --git a/mm/mincore.c b/mm/mincore.c
> index 2f3e1816a30d..8ec4719370e1 100644
> --- a/mm/mincore.c
> +++ b/mm/mincore.c
> @@ -76,8 +76,7 @@ static unsigned char mincore_swap(swp_entry_t entry, bool shmem)
> if (!si)
> return 0;
> }
> - folio = filemap_get_entry(swap_address_space(entry),
> - swap_cache_index(entry));
> + folio = swap_cache_get_folio(entry);
> if (shmem)
> put_swap_device(si);
> /* The swap cache space contains either folio, shadow or NULL */
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 2df26f4d6e60..4e27e8e5da3b 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2354,7 +2354,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> }
>
> /* Look it up and read it in.. */
> - folio = swap_cache_get_folio(swap, NULL, 0);
> + folio = swap_cache_get_folio(swap);
> if (!folio) {
> if (data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
> /* Direct swapin skipping swap cache & readahead */
> @@ -2379,6 +2379,8 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> count_vm_event(PGMAJFAULT);
> count_memcg_event_mm(fault_mm, PGMAJFAULT);
> }
> + } else {
> + swap_update_readahead(folio, NULL, 0);
> }
>
> if (order > folio_order(folio)) {
> diff --git a/mm/swap.h b/mm/swap.h
> index 1ae44d4193b1..efb6d7ff9f30 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -62,8 +62,7 @@ void delete_from_swap_cache(struct folio *folio);
> void clear_shadow_from_swap_cache(int type, unsigned long begin,
> unsigned long end);
> void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr);
> -struct folio *swap_cache_get_folio(swp_entry_t entry,
> - struct vm_area_struct *vma, unsigned long addr);
> +struct folio *swap_cache_get_folio(swp_entry_t entry);
> struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> struct vm_area_struct *vma, unsigned long addr,
> struct swap_iocb **plug);
> @@ -74,6 +73,8 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> struct mempolicy *mpol, pgoff_t ilx);
> struct folio *swapin_readahead(swp_entry_t entry, gfp_t flag,
> struct vm_fault *vmf);
> +void swap_update_readahead(struct folio *folio, struct vm_area_struct *vma,
> + unsigned long addr);
>
> static inline unsigned int folio_swap_flags(struct folio *folio)
> {
> @@ -159,6 +160,11 @@ static inline struct folio *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
> return NULL;
> }
>
> +static inline void swap_update_readahead(struct folio *folio,
> + struct vm_area_struct *vma, unsigned long addr)
> +{
> +}
> +
> static inline int swap_writeout(struct folio *folio,
> struct swap_iocb **swap_plug)
> {
> @@ -169,8 +175,7 @@ static inline void swapcache_clear(struct swap_info_struct *si, swp_entry_t entr
> {
> }
>
> -static inline struct folio *swap_cache_get_folio(swp_entry_t entry,
> - struct vm_area_struct *vma, unsigned long addr)
> +static inline struct folio *swap_cache_get_folio(swp_entry_t entry)
> {
> return NULL;
> }
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 99513b74b5d8..68ec531d0f2b 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -69,6 +69,27 @@ void show_swap_cache_info(void)
> printk("Total swap = %lukB\n", K(total_swap_pages));
> }
>
> +/**
> + * swap_cache_get_folio - Looks up a folio in the swap cache.
> + * @entry: swap entry used for the lookup.
> + *
> + * A found folio will be returned unlocked and with its refcount increased.
> + *
> + * Context: Caller must ensure @entry is valid and protect the swap device
> + * with reference count or locks.
> + * Return: Returns the found folio on success, NULL otherwise. The caller
> + * must lock and check if the folio still matches the swap entry before
> + * use.
> + */
> +struct folio *swap_cache_get_folio(swp_entry_t entry)
> +{
> + struct folio *folio = filemap_get_folio(swap_address_space(entry),
> + swap_cache_index(entry));
> + if (IS_ERR(folio))
> + return NULL;
> + return folio;
> +}
> +
> void *get_shadow_from_swap_cache(swp_entry_t entry)
> {
> struct address_space *address_space = swap_address_space(entry);
> @@ -272,55 +293,43 @@ static inline bool swap_use_vma_readahead(void)
> return READ_ONCE(enable_vma_readahead) && !atomic_read(&nr_rotate_swap);
> }
>
> -/*
> - * Lookup a swap entry in the swap cache. A found folio will be returned
> - * unlocked and with its refcount incremented - we rely on the kernel
> - * lock getting page table operations atomic even if we drop the folio
> - * lock before returning.
> - *
> - * Caller must lock the swap device or hold a reference to keep it valid.
> +/**
> + * swap_update_readahead - Update the readahead statistics of VMA or globally.
> + * @folio: the swap cache folio that just got hit.
> + * @vma: the VMA that should be updated, could be NULL for global update.
> + * @addr: the addr that triggered the swapin, ignored if @vma is NULL.
> */
> -struct folio *swap_cache_get_folio(swp_entry_t entry,
> - struct vm_area_struct *vma, unsigned long addr)
> +void swap_update_readahead(struct folio *folio, struct vm_area_struct *vma,
> + unsigned long addr)
> {
> - struct folio *folio;
> -
> - folio = filemap_get_folio(swap_address_space(entry), swap_cache_index(entry));
> - if (!IS_ERR(folio)) {
> - bool vma_ra = swap_use_vma_readahead();
> - bool readahead;
> + bool readahead, vma_ra = swap_use_vma_readahead();
>
> - /*
> - * At the moment, we don't support PG_readahead for anon THP
> - * so let's bail out rather than confusing the readahead stat.
> - */
> - if (unlikely(folio_test_large(folio)))
> - return folio;
> -
> - readahead = folio_test_clear_readahead(folio);
> - if (vma && vma_ra) {
> - unsigned long ra_val;
> - int win, hits;
> -
> - ra_val = GET_SWAP_RA_VAL(vma);
> - win = SWAP_RA_WIN(ra_val);
> - hits = SWAP_RA_HITS(ra_val);
> - if (readahead)
> - hits = min_t(int, hits + 1, SWAP_RA_HITS_MAX);
> - atomic_long_set(&vma->swap_readahead_info,
> - SWAP_RA_VAL(addr, win, hits));
> - }
> -
> - if (readahead) {
> - count_vm_event(SWAP_RA_HIT);
> - if (!vma || !vma_ra)
> - atomic_inc(&swapin_readahead_hits);
> - }
> - } else {
> - folio = NULL;
> + /*
> + * At the moment, we don't support PG_readahead for anon THP
> + * so let's bail out rather than confusing the readahead stat.
> + */
> + if (unlikely(folio_test_large(folio)))
> + return;
> +
> + readahead = folio_test_clear_readahead(folio);
> + if (vma && vma_ra) {
> + unsigned long ra_val;
> + int win, hits;
> +
> + ra_val = GET_SWAP_RA_VAL(vma);
> + win = SWAP_RA_WIN(ra_val);
> + hits = SWAP_RA_HITS(ra_val);
> + if (readahead)
> + hits = min_t(int, hits + 1, SWAP_RA_HITS_MAX);
> + atomic_long_set(&vma->swap_readahead_info,
> + SWAP_RA_VAL(addr, win, hits));
> }
>
> - return folio;
> + if (readahead) {
> + count_vm_event(SWAP_RA_HIT);
> + if (!vma || !vma_ra)
> + atomic_inc(&swapin_readahead_hits);
> + }
> }
>
> struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> @@ -336,14 +345,10 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> *new_page_allocated = false;
> for (;;) {
> int err;
> - /*
> - * First check the swap cache. Since this is normally
> - * called after swap_cache_get_folio() failed, re-calling
> - * that would confuse statistics.
> - */
> - folio = filemap_get_folio(swap_address_space(entry),
> - swap_cache_index(entry));
> - if (!IS_ERR(folio))
> +
> + /* Check the swap cache in case the folio is already there */
> + folio = swap_cache_get_folio(entry);
> + if (folio)
> goto got_folio;
>
> /*
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index a7ffabbe65ef..4b8ab2cb49ca 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -213,15 +213,14 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
> unsigned long offset, unsigned long flags)
> {
> swp_entry_t entry = swp_entry(si->type, offset);
> - struct address_space *address_space = swap_address_space(entry);
> struct swap_cluster_info *ci;
> struct folio *folio;
> int ret, nr_pages;
> bool need_reclaim;
>
> again:
> - folio = filemap_get_folio(address_space, swap_cache_index(entry));
> - if (IS_ERR(folio))
> + folio = swap_cache_get_folio(entry);
> + if (!folio)
> return 0;
>
> nr_pages = folio_nr_pages(folio);
> @@ -2131,7 +2130,7 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
> pte_unmap(pte);
> pte = NULL;
>
> - folio = swap_cache_get_folio(entry, vma, addr);
> + folio = swap_cache_get_folio(entry);
> if (!folio) {
> struct vm_fault vmf = {
> .vma = vma,
> @@ -2357,8 +2356,8 @@ static int try_to_unuse(unsigned int type)
> (i = find_next_to_unuse(si, i)) != 0) {
>
> entry = swp_entry(type, i);
> - folio = filemap_get_folio(swap_address_space(entry), swap_cache_index(entry));
> - if (IS_ERR(folio))
> + folio = swap_cache_get_folio(entry);
> + if (!folio)
> continue;
>
> /*
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index 50aaa8dcd24c..af61b95c89e4 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -1489,9 +1489,8 @@ static long move_pages_ptes(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd
> * separately to allow proper handling.
> */
> if (!src_folio)
> - folio = filemap_get_folio(swap_address_space(entry),
> - swap_cache_index(entry));
> - if (!IS_ERR_OR_NULL(folio)) {
> + folio = swap_cache_get_folio(entry);
> + if (folio) {
> if (folio_test_large(folio)) {
> ret = -EBUSY;
> folio_put(folio);
> --
> 2.51.0
>
>
Thread overview: 80+ messages
2025-09-05 19:13 [PATCH v2 00/15] mm, swap: introduce swap table as swap cache (phase I) Kairui Song
2025-09-05 19:13 ` [PATCH v2 01/15] docs/mm: add document for swap table Kairui Song
2025-09-05 23:58 ` Chris Li
2025-09-06 13:31 ` Kairui Song
2025-09-08 12:35 ` Baoquan He
2025-09-08 14:27 ` Kairui Song
2025-09-08 15:06 ` Baoquan He
2025-09-08 15:01 ` Chris Li
2025-09-08 15:09 ` Baoquan He
2025-09-08 15:52 ` Chris Li
2025-09-05 19:13 ` [PATCH v2 02/15] mm, swap: use unified helper for swap cache look up Kairui Song
2025-09-05 23:59 ` Chris Li [this message]
2025-09-08 11:43 ` David Hildenbrand
2025-09-05 19:13 ` [PATCH v2 03/15] mm, swap: fix swap cache index error when retrying reclaim Kairui Song
2025-09-05 22:40 ` Nhat Pham
2025-09-06 6:30 ` Kairui Song
2025-09-06 1:51 ` Chris Li
2025-09-06 6:28 ` Kairui Song
2025-09-06 11:58 ` Chris Li
2025-09-08 3:08 ` Baolin Wang
2025-09-08 11:45 ` David Hildenbrand
2025-09-05 19:13 ` [PATCH v2 04/15] mm, swap: check page poison flag after locking it Kairui Song
2025-09-06 2:00 ` Chris Li
2025-09-08 12:11 ` David Hildenbrand
2025-09-09 14:54 ` Kairui Song
2025-09-09 15:18 ` David Hildenbrand
2025-09-05 19:13 ` [PATCH v2 05/15] mm, swap: always lock and check the swap cache folio before use Kairui Song
2025-09-06 2:12 ` Chris Li
2025-09-06 6:32 ` Kairui Song
2025-09-08 12:18 ` David Hildenbrand
2025-09-09 14:58 ` Kairui Song
2025-09-09 15:19 ` David Hildenbrand
2025-09-10 12:56 ` Kairui Song
2025-09-05 19:13 ` [PATCH v2 06/15] mm, swap: rename and move some swap cluster definition and helpers Kairui Song
2025-09-06 2:13 ` Chris Li
2025-09-08 3:03 ` Baolin Wang
2025-09-05 19:13 ` [PATCH v2 07/15] mm, swap: tidy up swap device and cluster info helpers Kairui Song
2025-09-06 2:14 ` Chris Li
2025-09-08 12:21 ` David Hildenbrand
2025-09-08 15:01 ` Kairui Song
2025-09-05 19:13 ` [PATCH v2 08/15] mm/shmem, swap: remove redundant error handling for replacing folio Kairui Song
2025-09-08 3:17 ` Baolin Wang
2025-09-08 9:28 ` Kairui Song
2025-09-05 19:13 ` [PATCH v2 09/15] mm, swap: cleanup swap cache API and add kerneldoc Kairui Song
2025-09-06 5:45 ` Chris Li
2025-09-08 0:11 ` Barry Song
2025-09-08 3:23 ` Baolin Wang
2025-09-08 12:23 ` David Hildenbrand
2025-09-05 19:13 ` [PATCH v2 10/15] mm, swap: wrap swap cache replacement with a helper Kairui Song
2025-09-06 7:09 ` Chris Li
2025-09-08 3:41 ` Baolin Wang
2025-09-08 10:44 ` Kairui Song
2025-09-09 1:18 ` Baolin Wang
2025-09-08 12:30 ` David Hildenbrand
2025-09-08 14:20 ` Kairui Song
2025-09-08 14:39 ` David Hildenbrand
2025-09-08 14:49 ` Kairui Song
2025-09-05 19:13 ` [PATCH v2 11/15] mm, swap: use the swap table for the swap cache and switch API Kairui Song
2025-09-06 15:28 ` Chris Li
2025-09-08 15:38 ` Kairui Song
2025-09-07 12:55 ` Klara Modin
2025-09-08 14:34 ` Kairui Song
2025-09-08 15:00 ` Klara Modin
2025-09-08 15:10 ` Kairui Song
2025-09-08 13:45 ` David Hildenbrand
2025-09-08 15:14 ` Kairui Song
2025-09-08 15:32 ` Kairui Song
2025-09-10 2:53 ` SeongJae Park
2025-09-10 2:56 ` Kairui Song
2025-09-05 19:13 ` [PATCH v2 12/15] mm, swap: mark swap address space ro and add context debug check Kairui Song
2025-09-06 15:35 ` Chris Li
2025-09-08 13:10 ` David Hildenbrand
2025-09-05 19:13 ` [PATCH v2 13/15] mm, swap: remove contention workaround for swap cache Kairui Song
2025-09-06 15:30 ` Chris Li
2025-09-08 13:12 ` David Hildenbrand
2025-09-05 19:13 ` [PATCH v2 14/15] mm, swap: implement dynamic allocation of swap table Kairui Song
2025-09-06 15:45 ` Chris Li
2025-09-08 14:58 ` Kairui Song
2025-09-05 19:13 ` [PATCH v2 15/15] mm, swap: use a single page for swap table when the size fits Kairui Song
2025-09-06 15:48 ` Chris Li