From: Kairui Song <ryncsn@gmail.com>
To: Barry Song <21cnbao@gmail.com>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Baoquan He <bhe@redhat.com>, Chris Li <chrisl@kernel.org>,
Nhat Pham <nphamcs@gmail.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Yosry Ahmed <yosry.ahmed@linux.dev>,
David Hildenbrand <david@redhat.com>,
Youngjun Park <youngjun.park@lge.com>,
Hugh Dickins <hughd@google.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
"Huang, Ying" <ying.huang@linux.alibaba.com>,
Kemeng Shi <shikemeng@huaweicloud.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH 13/19] mm, swap: remove workaround for unsynchronized swap map cache state
Date: Mon, 17 Nov 2025 00:01:29 +0800
Message-ID: <CAMgjq7BU5S3cPQSRA2+RriPRNEZzZZK-VeuRiMtAzOgva-ZUKw@mail.gmail.com>
In-Reply-To: <CAGsJ_4yjU0NmQe0cM2xDkMYVdAWRc2Q1FUMGxpo8cVkEt5ifVQ@mail.gmail.com>

On Mon, Nov 10, 2025 at 3:21 PM Barry Song <21cnbao@gmail.com> wrote:
>
> On Sun, Nov 9, 2025 at 10:18 PM Kairui Song <ryncsn@gmail.com> wrote:
> >
> > On Fri, Nov 7, 2025 at 11:07 AM Barry Song <21cnbao@gmail.com> wrote:
> > >
> > > >  struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_mask,
> > > >                  struct mempolicy *mpol, pgoff_t ilx,
> > > > -                bool *new_page_allocated,
> > > > -                bool skip_if_exists)
> > > > +                bool *new_page_allocated)
> > > >  {
> > > >          struct swap_info_struct *si = __swap_entry_to_info(entry);
> > > >          struct folio *folio;
> > > > @@ -548,8 +542,7 @@ struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_mask,
> > > >          if (!folio)
> > > >                  return NULL;
> > > >          /* Try add the new folio, returns existing folio or NULL on failure. */
> > > > -        result = __swap_cache_prepare_and_add(entry, folio, gfp_mask,
> > > > -                                              false, skip_if_exists);
> > > > +        result = __swap_cache_prepare_and_add(entry, folio, gfp_mask, false);
> > > >          if (result == folio)
> > > >                  *new_page_allocated = true;
> > > >          else
> > > > @@ -578,7 +571,7 @@ struct folio *swapin_folio(swp_entry_t entry, struct folio *folio)
> > > >          unsigned long nr_pages = folio_nr_pages(folio);
> > > >
> > > >          entry = swp_entry(swp_type(entry), round_down(offset, nr_pages));
> > > > -        swapcache = __swap_cache_prepare_and_add(entry, folio, 0, true, false);
> > > > +        swapcache = __swap_cache_prepare_and_add(entry, folio, 0, true);
> > > >          if (swapcache == folio)
> > > >                  swap_read_folio(folio, NULL);
> > > >          return swapcache;
> > >
> > > I wonder if we could also drop the "charged" — it doesn’t seem
> > > difficult to move the charging step before
> > > __swap_cache_prepare_and_add(), even for swap_cache_alloc_folio()?
> >
> > Hi Barry, thanks for the review and suggestion.
> >
> > It may cause much more serious cgroup thrashing. Charging may trigger
> > reclaim, so racing swapins would have a much larger race window and
> > cause a lot of repeated folio allocation / charging.
> >
> > This param exists because anon / shmem do their own charging for large
> > folio swapin and then insert the folio into the swap cache, which
> > already adds more memory pressure. Ideally, I think we want to unify
> > all allocation & charging for swapin folios, and have a
> > swap_cache_alloc_folio that supports `orders`. For racing swapins,
> > only one will successfully insert a folio into the swap cache and
> > charge it, which should make the race window very tiny, or with
> > further work maybe avoid redundant folio allocation completely. I did
> > some tests and they show this improves memory usage and avoids some
> > OOMs under pressure for (m)THP.
>
> This is quite interesting. I wonder if the change below could help reduce
> mTHP swap thrashing. The fallback order-0 path also charges after
> swap_cache_add_folio(), as order-0 pages are typically the ones triggering
> memcg reclamation.
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 27d91ae3648a..d97f1a8a5ca3 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4470,11 +4470,13 @@ static struct folio *__alloc_swap_folio(struct vm_fault *vmf)
>                  return NULL;
>
>          entry = pte_to_swp_entry(vmf->orig_pte);
> +#if 0
>          if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
>                                             GFP_KERNEL, entry)) {
>                  folio_put(folio);
>                  return NULL;
>          }
> +#endif
>
>          return folio;
>  }
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 2bf72d58f6ee..9d0b55deacc6 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -605,7 +605,7 @@ struct folio *swapin_folio(swp_entry_t entry, struct folio *folio)
>          unsigned long nr_pages = folio_nr_pages(folio);
>
>          entry = swp_entry(swp_type(entry), round_down(offset, nr_pages));
> -        swapcache = __swap_cache_prepare_and_add(entry, folio, 0, true);
> +        swapcache = __swap_cache_prepare_and_add(entry, folio, 0, folio_order(folio));
>          if (swapcache == folio)
>                  swap_read_folio(folio, NULL);
>          return swapcache;
Yeah, that will surely improve the thrashing issue. Passing
`folio_order()` as the `charged` parameter looks strange, though.
Ideally we would have swap_cache_alloc_folio do all the folio
allocation, so there wouldn't be so many different swapin folio
charging callsites (currently we have more than three: anon THP, anon
order 0, shmem THP, and the common order-0 path in
swap_cache_alloc_folio). That would also help remove a WARN_ON check
in Patch 3. Roughly, I'm thinking of something like the sketch below.