From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Kairui Song <ryncsn@gmail.com>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Baoquan He <bhe@redhat.com>, Barry Song <baohua@kernel.org>,
Chris Li <chrisl@kernel.org>, Nhat Pham <nphamcs@gmail.com>,
Yosry Ahmed <yosry.ahmed@linux.dev>,
David Hildenbrand <david@kernel.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Youngjun Park <youngjun.park@lge.com>,
Hugh Dickins <hughd@google.com>,
Ying Huang <ying.huang@linux.alibaba.com>,
Kemeng Shi <shikemeng@huaweicloud.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 07/19] mm/shmem: never bypass the swap cache for SWP_SYNCHRONOUS_IO
Date: Thu, 4 Dec 2025 20:30:11 +0800
Message-ID: <6d88bc71-bc5c-4f5f-8ca9-5bd0e2677fb6@linux.alibaba.com>
In-Reply-To: <CAMgjq7DcpMgLjX1m=+4SM=zMe5+H4qDLqdOUGnYGNBQ_HsKw-w@mail.gmail.com>
On 2025/12/3 13:33, Kairui Song wrote:
> On Tue, Dec 2, 2025 at 3:34 PM Baolin Wang
> <baolin.wang@linux.alibaba.com> wrote:
>>
>> Hi Kairui,
>>
>> On 2025/11/25 03:13, Kairui Song wrote:
>>> From: Kairui Song <kasong@tencent.com>
>>>
>>> Now that the overhead of the swap cache is trivial to none, bypassing
>>> the swap cache is no longer a valid optimization.
>>>
>>> We have already removed the cache-bypass swapin for anon memory; now do
>>> the same for shmem. Many helpers and functions can be dropped as a result.
>>>
>>> Signed-off-by: Kairui Song <kasong@tencent.com>
>>> ---
>>
>> I'm glad to see we can remove the skip-swapcache logic. I did a quick
>> test of 1G shmem sequential swap-in with 64K mTHP and 2M mTHP, and
>> observed a slight drop, which could also just be fluctuation. Can you
>> perform some measurements as well?
>>
>> 64K shmem mTHP:
>>   W/ patchset    W/o patchset
>>   154 ms         148 ms
>>
>> 2M shmem mTHP:
>>   W/ patchset    W/o patchset
>>   117 ms         115 ms
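
(For reference, the test above is conceptually just the sketch below. My
actual script differs in its details; the sizes, the 4K stride, and the use
of MADV_PAGEOUT to force swap-out here are only illustrative, and the
matching shmem mTHP sizes are assumed to be enabled beforehand via
/sys/kernel/mm/transparent_hugepage/hugepages-{64kB,2048kB}/shmem_enabled.)

/*
 * Illustrative sketch: time a 1G shmem sequential swap-in.
 * A MAP_SHARED|MAP_ANONYMOUS mapping is shmem-backed, and
 * MADV_PAGEOUT pushes its pages out to swap, so the read
 * loop below measures the swap-in path.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/time.h>

#define SIZE (1UL << 30)	/* 1G */

int main(void)
{
	struct timeval t0, t1;
	volatile char sum = 0;
	size_t i;
	char *buf = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;

	memset(buf, 1, SIZE);			/* populate shmem */
	madvise(buf, SIZE, MADV_PAGEOUT);	/* force swap-out */

	gettimeofday(&t0, NULL);
	for (i = 0; i < SIZE; i += 4096)	/* sequential swap-in */
		sum += buf[i];
	gettimeofday(&t1, NULL);

	printf("swap-in: %ld ms\n",
	       (t1.tv_sec - t0.tv_sec) * 1000 +
	       (t1.tv_usec - t0.tv_usec) / 1000);
	return 0;
}
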
>
> Hi Baolin,
>
> Thanks for testing! This patch (7/19) is still an intermediate step,
> so we are still updating both the swap_map and the swap table, which
> adds overhead. Even with that, the performance change looks small
> (~1-4% in the results you posted), close to noise level.
>
> After the whole series, the double update is *partially* dropped, so
> the performance is almost identical to before:
>
> tmpfs with transparent_hugepage_tmpfs=within_size, 3 test runs on my machine:
>
>   Before    [PATCH 7/19]    [PATCH 19/19]
>   5.99s     6.29s           6.08s (~1%)
>
> Note that we are still using swap_map, so there are double lookups
> everywhere in this series, and I added more WARN_ON checks. Swap is
> complex, so I think being cautious is better. I also mentioned in the
> cover letter another slight performance drop with valkey due to this;
> it is also tiny and will improve a lot in phase 3, which removes
> swap_map and the double lookup, as demonstrated before:
> https://lore.kernel.org/linux-mm/20250514201729.48420-1-ryncsn@gmail.com/
>
> Last time I tested that branch, it was a clear optimization for
> shmem. Some of the optimizations in that series were split out or
> merged separately, so the performance may appear to go up or down in
> some intermediate steps, but the final result is good.
Sounds good. It would be better to mention this (including your data) in
the commit message. With that, please feel free to add:
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> swap_cgroup_ctrl will be gone too, though perhaps even later.
>
>>
>> Anyway, I still hope we can remove the skip-swapcache logic. The
>> changes look good to me, with one nit below. Thanks for your work.
>>
>>> mm/shmem.c | 65 +++++++++++++++++------------------------------------------
>>> mm/swap.h | 4 ----
>>> mm/swapfile.c | 35 +++++++++-----------------------
>>> 3 files changed, 27 insertions(+), 77 deletions(-)
>>>
>>> diff --git a/mm/shmem.c b/mm/shmem.c
>>> index ad18172ff831..d08248fd67ff 100644
>>> --- a/mm/shmem.c
>>> +++ b/mm/shmem.c
>>> @@ -2001,10 +2001,9 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
>>> swp_entry_t entry, int order, gfp_t gfp)
>>> {
>>> struct shmem_inode_info *info = SHMEM_I(inode);
>>> + struct folio *new, *swapcache;
>>> int nr_pages = 1 << order;
>>> - struct folio *new;
>>> gfp_t alloc_gfp;
>>> - void *shadow;
>>>
>>> /*
>>> * We have arrived here because our zones are constrained, so don't
>>> @@ -2044,34 +2043,19 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
>>> goto fallback;
>>> }
>>>
>>> - /*
>>> - * Prevent parallel swapin from proceeding with the swap cache flag.
>>> - *
>>> - * Of course there is another possible concurrent scenario as well,
>>> - * that is to say, the swap cache flag of a large folio has already
>>> - * been set by swapcache_prepare(), while another thread may have
>>> - * already split the large swap entry stored in the shmem mapping.
>>> - * In this case, shmem_add_to_page_cache() will help identify the
>>> - * concurrent swapin and return -EEXIST.
>>> - */
>>> - if (swapcache_prepare(entry, nr_pages)) {
>>> + swapcache = swapin_folio(entry, new);
>>> + if (swapcache != new) {
>>> folio_put(new);
>>> - new = ERR_PTR(-EEXIST);
>>> - /* Try smaller folio to avoid cache conflict */
>>> - goto fallback;
>>> + if (!swapcache) {
>>> + /*
>>> + * The new folio is charged already, swapin can
>>> + * only fail due to another raced swapin.
>>> + */
>>> + new = ERR_PTR(-EEXIST);
>>> + goto fallback;
>>> + }
>>> }
>>> -
>>> - __folio_set_locked(new);
>>> - __folio_set_swapbacked(new);
>>> - new->swap = entry;
>>> -
>>> - memcg1_swapin(entry, nr_pages);
>>> - shadow = swap_cache_get_shadow(entry);
>>> - if (shadow)
>>> - workingset_refault(new, shadow);
>>> - folio_add_lru(new);
>>> - swap_read_folio(new, NULL);
>>> - return new;
>>> + return swapcache;
>>> fallback:
>>> /* Order 0 swapin failed, nothing to fallback to, abort */
>>> if (!order)
>>> @@ -2161,8 +2145,7 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
>>> }
>>>
>>> static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
>>> - struct folio *folio, swp_entry_t swap,
>>> - bool skip_swapcache)
>>> + struct folio *folio, swp_entry_t swap)
>>> {
>>> struct address_space *mapping = inode->i_mapping;
>>> swp_entry_t swapin_error;
>>> @@ -2178,8 +2161,7 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
>>>
>>> nr_pages = folio_nr_pages(folio);
>>> folio_wait_writeback(folio);
>>> - if (!skip_swapcache)
>>> - swap_cache_del_folio(folio);
>>> + swap_cache_del_folio(folio);
>>> /*
>>> * Don't treat swapin error folio as alloced. Otherwise inode->i_blocks
>>> * won't be 0 when inode is released and thus trigger WARN_ON(i_blocks)
>>> @@ -2279,7 +2261,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>>> softleaf_t index_entry;
>>> struct swap_info_struct *si;
>>> struct folio *folio = NULL;
>>> - bool skip_swapcache = false;
>>> int error, nr_pages, order;
>>> pgoff_t offset;
>>>
>>> @@ -2322,7 +2303,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>>> folio = NULL;
>>> goto failed;
>>> }
>>> - skip_swapcache = true;
>>> } else {
>>> /* Cached swapin only supports order 0 folio */
>>> folio = shmem_swapin_cluster(swap, gfp, info, index);
>>> @@ -2378,9 +2358,8 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>>> * and swap cache folios are never partially freed.
>>> */
>>> folio_lock(folio);
>>> - if ((!skip_swapcache && !folio_test_swapcache(folio)) ||
>>> - shmem_confirm_swap(mapping, index, swap) < 0 ||
>>> - folio->swap.val != swap.val) {
>>> + if (!folio_matches_swap_entry(folio, swap) ||
>>> + shmem_confirm_swap(mapping, index, swap) < 0) {
>>
>> Shouldn't we still keep the '!folio_test_swapcache(folio)' check here?
>
> Thanks for the review. This one is OK because the folio_test_swapcache()
> check is already included in folio_matches_swap_entry().
OK.
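
For anyone else following along: folio_matches_swap_entry() looks roughly
like the sketch below (simplified and from memory; see mm/swap.h in this
series for the exact code). A folio that is not in the swap cache can never
match an entry, which covers the old '!folio_test_swapcache(folio)' check:

static inline bool folio_matches_swap_entry(const struct folio *folio,
					    swp_entry_t entry)
{
	swp_entry_t folio_entry = folio->swap;
	long nr_pages = folio_nr_pages(folio);

	/* Not in the swap cache: cannot match any entry */
	if (!folio_test_swapcache(folio))
		return false;
	/* Otherwise, check that @entry falls inside this folio's range */
	return folio_entry.val == round_down(entry.val, nr_pages);
}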