From: Kemeng Shi <shikemeng@huaweicloud.com>
To: hughd@google.com, baolin.wang@linux.alibaba.com,
	willy@infradead.org, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 2/7] mm: shmem: avoid setting error on split entries in shmem_set_folio_swapin_error()
Date: Fri,  6 Jun 2025 06:10:32 +0800
Message-ID: <20250605221037.7872-3-shikemeng@huaweicloud.com>
In-Reply-To: <20250605221037.7872-1-shikemeng@huaweicloud.com>

When a large entry is split, the first entry produced by the split
keeps the same entry value and index as the original large entry, but
its order is reduced. In shmem_set_folio_swapin_error(), if the large
entry is split before xa_cmpxchg_irq(), we may replace that first split
entry with an error entry while still using the size of the original
large entry for the release operations. This can lead to a
WARN_ON(i_blocks) due to an incorrect nr_pages being used by
shmem_recalc_inode(), and to a use-after-free due to an incorrect
nr_pages being used by swap_free_nr().

Fix the issue by skipping the error entry replacement when a split is
detected. The bad entry will be replaced with an error entry anyway,
since we will still get an IO error the next time it is swapped in.
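
For illustration, a rough sketch of the race (the interleaving below is
an assumed example rather than a reproduced trace, with
shmem_split_large_entry() standing for the concurrent path that splits
the large entry):

CPU0: swapin error path                 CPU1: concurrent swapin
shmem_set_folio_swapin_error()
                                        shmem_split_large_entry()
                                        (first sub-entry keeps the old
                                         value and index, smaller order)
xa_cmpxchg_irq() still matches the
first sub-entry and installs the
error entry
nr_pages = folio_nr_pages(folio)
(folio matches the original large entry)
shmem_recalc_inode(inode, -nr_pages, -nr_pages)
swap_free_nr(swap, nr_pages)
(both release more than the single
 sub-entry that was actually replaced)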

Fixes: 12885cbe88ddf ("mm: shmem: split large entry if the swapin folio is not large")
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/shmem.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index e27d19867e03..f1062910a4de 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2127,16 +2127,25 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
 	struct address_space *mapping = inode->i_mapping;
 	swp_entry_t swapin_error;
 	void *old;
-	int nr_pages;
+	int nr_pages = folio_nr_pages(folio);
+	int order;
 
 	swapin_error = make_poisoned_swp_entry();
-	old = xa_cmpxchg_irq(&mapping->i_pages, index,
-			     swp_to_radix_entry(swap),
-			     swp_to_radix_entry(swapin_error), 0);
-	if (old != swp_to_radix_entry(swap))
+	xa_lock_irq(&mapping->i_pages);
+	order = xa_get_order(&mapping->i_pages, index);
+	if (nr_pages != (1 << order)) {
+		xa_unlock_irq(&mapping->i_pages);
 		return;
+	}
+	old = __xa_cmpxchg(&mapping->i_pages, index,
+			   swp_to_radix_entry(swap),
+			   swp_to_radix_entry(swapin_error), 0);
+	if (old != swp_to_radix_entry(swap)) {
+		xa_unlock_irq(&mapping->i_pages);
+		return;
+	}
+	xa_unlock_irq(&mapping->i_pages);
 
-	nr_pages = folio_nr_pages(folio);
 	folio_wait_writeback(folio);
 	if (!skip_swapcache)
 		delete_from_swap_cache(folio);
-- 
2.30.0


