From: Andrew Morton <akpm@linux-foundation.org>
To: Kairui Song <ryncsn@gmail.com>
Cc: linux-mm@kvack.org, Hugh Dickins <hughd@google.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Kemeng Shi <shikemeng@huaweicloud.com>,
Nhat Pham <nphamcs@gmail.com>, Chris Li <chrisl@kernel.org>,
Baoquan He <bhe@redhat.com>, Barry Song <baohua@kernel.org>,
linux-kernel@vger.kernel.org, Kairui Song <kasong@tencent.com>,
stable@vger.kernel.org
Subject: Re: [PATCH v2] mm/shmem, swap: fix race of truncate and swap entry split
Date: Sun, 18 Jan 2026 11:33:15 -0800
Message-ID: <20260118113315.b102a7728769f05c5aeec57c@linux-foundation.org>
In-Reply-To: <20260119-shmem-swap-fix-v2-1-034c946fd393@tencent.com>
On Mon, 19 Jan 2026 00:55:59 +0800 Kairui Song <ryncsn@gmail.com> wrote:
> From: Kairui Song <kasong@tencent.com>
>
> The shmem swap freeing helper does not handle the order of swap
> entries correctly. It uses xa_cmpxchg_irq to erase the swap entry, but
> it reads the entry's order beforehand with xa_get_order, without lock
> protection, so it may see an outdated order if the entry is split or
> otherwise changed between the xa_get_order and the xa_cmpxchg_irq.
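>
> For reference, this is the racy pattern in the current helper, with
> the window annotated (illustration only; these are the lines the diff
> below removes):
>
>	int order = xa_get_order(&mapping->i_pages, index);
>	void *old;
>
>	/*
>	 * <-- unlocked window: the entry can be split here, or replaced
>	 *     by a larger entry that reuses the same swap entry -->
>	 */
>	old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
>	if (old != radswap)
>		return 0;
>	/* "order" may no longer match what was just erased */
>	swap_put_entries_direct(radix_to_swp_entry(radswap), 1 << order);
>
>	return 1 << order;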
>
> Besides, the order could also grow larger than expected and cause
> truncation to erase data beyond the end border. For example, if the
> target entry and the entries following it are swapped in or freed, and
> a large folio is then added in their place and swapped out reusing the
> same swap entry, the xa_cmpxchg_irq will still succeed. This is very
> unlikely to happen, but possible.
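>
> Roughly, that worst-case interleaving looks like this (illustrative):
>
>	truncation                        concurrent task
>	----------                        ---------------
>	xa_get_order() returns a
>	small order
>	                                  the entry and its neighbours
>	                                  are swapped in or freed; a
>	                                  large folio is stored there
>	                                  and swapped out, reusing the
>	                                  same swap entry
>	xa_cmpxchg_irq() still matches,
>	erasing an entry that now
>	extends past the truncation end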
>
> To fix that, open code the XArray cmpxchg so the order retrieval and
> the value check happen in the same critical section. Also ensure the
> entry will not extend beyond the end border, and skip it if it crosses
> the border.
>
> Skipping large swap entries that cross the end border is safe here.
> Shmem truncation iterates the range twice: in the first pass,
> find_lock_entries already filtered out such entries, and shmem swaps
> in every entry that crosses the end border and partially truncates the
> folio (splitting it, or at least zeroing part of it). So if the second
> pass here sees a swap entry that crosses the end border, its content
> must have been erased already.
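>
> As a worked example of the new bounds check (illustrative numbers):
> truncating up to end = 5, a lookup at index = 4 that finds an order-2
> entry computes
>
>	nr_pages = 1 << 2 = 4;
>	base = round_down(4, 4) = 4;       /* base == index */
>	base + nr_pages - 1 = 7 > end;     /* crosses the border */
>
> so nr_pages is reset to 0 and the entry is skipped, leaving it to the
> swapin-and-split path described above.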
>
> I observed random swapoff hangs and kernel panics when stress-testing
> ZSWAP with shmem. After applying this patch, all of these problems are
> gone.
>
> Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
September 2024.
Seems about right. A researcher recently found that kernel bugs take two years
to fix. https://pebblebed.com/blog/kernel-bugs?ref=itsfoss.com
>
> ...
>
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -962,17 +962,29 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
>   * being freed).
>   */
>  static long shmem_free_swap(struct address_space *mapping,
> -			    pgoff_t index, void *radswap)
> +			    pgoff_t index, pgoff_t end, void *radswap)
>  {
> -	int order = xa_get_order(&mapping->i_pages, index);
> -	void *old;
> +	XA_STATE(xas, &mapping->i_pages, index);
> +	unsigned int nr_pages = 0;
> +	pgoff_t base;
> +	void *entry;
>  
> -	old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
> -	if (old != radswap)
> -		return 0;
> -	swap_put_entries_direct(radix_to_swp_entry(radswap), 1 << order);
> +	xas_lock_irq(&xas);
> +	entry = xas_load(&xas);
> +	if (entry == radswap) {
> +		nr_pages = 1 << xas_get_order(&xas);
> +		base = round_down(xas.xa_index, nr_pages);
> +		if (base < index || base + nr_pages - 1 > end)
> +			nr_pages = 0;
> +		else
> +			xas_store(&xas, NULL);
> +	}
> +	xas_unlock_irq(&xas);
> +
> +	if (nr_pages)
> +		swap_put_entries_direct(radix_to_swp_entry(radswap), nr_pages);
>  
> -	return 1 << order;
> +	return nr_pages;
>  }
>
What tree was this prepared against?
Both Linus mainline and mm.git have
: static long shmem_free_swap(struct address_space *mapping,
: 			    pgoff_t index, void *radswap)
: {
: 	int order = xa_get_order(&mapping->i_pages, index);
: 	void *old;
: 
: 	old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
: 	if (old != radswap)
: 		return 0;
: 	free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order);
: 
: 	return 1 << order;
: }
but that free_swap_and_cache_nr() call is absent from your tree.