linux-mm.kvack.org archive mirror
From: Kairui Song via B4 Relay <devnull+kasong.tencent.com@kernel.org>
To: linux-mm@kvack.org
Cc: Michal Hocko <mhocko@kernel.org>,
	 Roman Gushchin <roman.gushchin@linux.dev>,
	 Shakeel Butt <shakeel.butt@linux.dev>,
	Muchun Song <muchun.song@linux.dev>,
	 Andrew Morton <akpm@linux-foundation.org>,
	Chris Li <chrisl@kernel.org>,
	 Kemeng Shi <shikemeng@huaweicloud.com>,
	Nhat Pham <nphamcs@gmail.com>,  Baoquan He <bhe@redhat.com>,
	Barry Song <baohua@kernel.org>,
	 Youngjun Park <youngjun.park@lge.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	 Alexandre Ghiti <alex@ghiti.fr>,
	David Hildenbrand <david@kernel.org>,
	 Lorenzo Stoakes <ljs@kernel.org>,
	 "Liam R. Howlett" <Liam.Howlett@oracle.com>,
	 Vlastimil Babka <vbabka@kernel.org>,
	Mike Rapoport <rppt@kernel.org>,
	 Suren Baghdasaryan <surenb@google.com>,
	Michal Hocko <mhocko@suse.com>,  Hugh Dickins <hughd@google.com>,
	 Baolin Wang <baolin.wang@linux.alibaba.com>,
	 Chuanhua Han <hanchuanhua@oppo.com>,
	linux-kernel@vger.kernel.org,  cgroups@vger.kernel.org,
	Kairui Song <kasong@tencent.com>
Subject: [PATCH RFC 2/2] mm, swap: fix race of charging into the wrong memcg for THP
Date: Tue, 07 Apr 2026 22:55:43 +0800	[thread overview]
Message-ID: <20260407-swap-memcg-fix-v1-2-a473ce2e5bb8@tencent.com> (raw)
In-Reply-To: <20260407-swap-memcg-fix-v1-0-a473ce2e5bb8@tencent.com>

From: Kairui Song <kasong@tencent.com>

During THP swapin via the SYNCHRONOUS_IO path, the folio is allocated
and charged to a memcg before being inserted into the swap cache.
Between allocation and swap cache insertion, the page table can change
under us (we don't hold the PTE lock), so the swap entry may be freed
and reused by a different cgroup. This causes the folio to be charged to
the wrong memcg. Shmem also has a similar issue.

Usually, the double check of the page table catches this, but the same
page table entry may end up reusing the same swap entry on behalf of a
different cgroup. The chance is extremely low, since it requires a
series of rare time windows to line up, but it is entirely possible.

Fix this by charging the folio only after it is inserted and stabilized
in the swap cache. This also improves performance and simplifies the
code.

Also remove the now-stale comment about memcg charging of swapin: we
now always charge the folio after adding it to the swap cache.
Previously the charge had to happen before the swap cache insertion to
maintain the per-memcg swapcache stat. A previous commit decoupled that
stat update from the insertion, so this ordering is fine now.

Fixes: 242d12c981745 ("mm: support large folios swap-in for sync io devices")
Signed-off-by: Kairui Song <kasong@tencent.com>
---
 mm/memcontrol.c |  3 +--
 mm/memory.c     | 53 +++++++++++++++++++++++------------------------------
 mm/shmem.c      | 15 ++++-----------
 mm/swap.h       |  5 +++--
 mm/swap_state.c | 17 +++++++++--------
 5 files changed, 40 insertions(+), 53 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c3d98ab41f1f..21caed15c9f5 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5067,8 +5067,7 @@ int mem_cgroup_charge_hugetlb(struct folio *folio, gfp_t gfp)
  * @gfp: reclaim mode
  * @entry: swap entry for which the folio is allocated
  *
- * This function charges a folio allocated for swapin. Please call this before
- * adding the folio to the swapcache.
+ * This function charges a folio allocated for swapin.
  *
  * Returns 0 on success. Otherwise, an error code is returned.
  */
diff --git a/mm/memory.c b/mm/memory.c
index ea6568571131..6d5b0c10ac8e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4595,22 +4595,8 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
 
 static struct folio *__alloc_swap_folio(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
-	struct folio *folio;
-	softleaf_t entry;
-
-	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vmf->address);
-	if (!folio)
-		return NULL;
-
-	entry = softleaf_from_pte(vmf->orig_pte);
-	if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
-					   GFP_KERNEL, entry)) {
-		folio_put(folio);
-		return NULL;
-	}
-
-	return folio;
+	return vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
+			       vmf->vma, vmf->address);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -4736,13 +4722,8 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
 	while (orders) {
 		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
 		folio = vma_alloc_folio(gfp, order, vma, addr);
-		if (folio) {
-			if (!mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
-							    gfp, entry))
-				return folio;
-			count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
-			folio_put(folio);
-		}
+		if (folio)
+			return folio;
 		count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK);
 		order = next_order(&orders, order);
 	}
@@ -4858,18 +4839,30 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	folio = swap_cache_get_folio(entry);
 	if (folio)
 		swap_update_readahead(folio, vma, vmf->address);
+
 	if (!folio) {
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
+			gfp_t gfp = GFP_HIGHUSER_MOVABLE;
+
 			folio = alloc_swap_folio(vmf);
 			if (folio) {
-				/*
-				 * folio is charged, so swapin can only fail due
-				 * to raced swapin and return NULL.
-				 */
-				swapcache = swapin_folio(entry, folio);
-				if (swapcache != folio)
+				if (folio_test_large(folio))
+					gfp = vma_thp_gfp_mask(vma);
+				swapcache = swapin_folio(entry, folio, gfp);
+				if (swapcache) {
+					/* We might hit with another cached swapin */
+					if (swapcache != folio)
+						folio_put(folio);
+					folio = swapcache;
+				} else if (folio_test_large(folio)) {
+					/* THP swapin failed, try order 0 */
+					folio_put(folio);
+					folio = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, vmf);
+				} else {
+					/* order 0 swapin failure, abort */
 					folio_put(folio);
-				folio = swapcache;
+					folio = NULL;
+				}
 			}
 		} else {
 			folio = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, vmf);
diff --git a/mm/shmem.c b/mm/shmem.c
index 5aa43657886c..bc67b04b9de4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2071,22 +2071,15 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
 		goto fallback;
 	}
 
-	if (mem_cgroup_swapin_charge_folio(new, vma ? vma->vm_mm : NULL,
-					   alloc_gfp, entry)) {
-		folio_put(new);
-		new = ERR_PTR(-ENOMEM);
-		goto fallback;
-	}
-
-	swapcache = swapin_folio(entry, new);
+	swapcache = swapin_folio(entry, new, alloc_gfp);
 	if (swapcache != new) {
 		folio_put(new);
 		if (!swapcache) {
 			/*
-			 * The new folio is charged already, swapin can
-			 * only fail due to another raced swapin.
+			 * Fail with -ENOMEM by default, caller will
+			 * correct it to -EEXIST if mapping changed.
 			 */
-			new = ERR_PTR(-EEXIST);
+			new = ERR_PTR(-ENOMEM);
 			goto fallback;
 		}
 	}
diff --git a/mm/swap.h b/mm/swap.h
index a77016f2423b..90f1edabb73a 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -300,7 +300,7 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
 		struct mempolicy *mpol, pgoff_t ilx);
 struct folio *swapin_readahead(swp_entry_t entry, gfp_t flag,
 		struct vm_fault *vmf);
-struct folio *swapin_folio(swp_entry_t entry, struct folio *folio);
+struct folio *swapin_folio(swp_entry_t entry, struct folio *folio, gfp_t flag);
 void swap_update_readahead(struct folio *folio, struct vm_area_struct *vma,
 			   unsigned long addr);
 
@@ -433,7 +433,8 @@ static inline struct folio *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
 	return NULL;
 }
 
-static inline struct folio *swapin_folio(swp_entry_t entry, struct folio *folio)
+static inline struct folio *swapin_folio(swp_entry_t entry,
+		struct folio *folio, gfp_t flag)
 {
 	return NULL;
 }
diff --git a/mm/swap_state.c b/mm/swap_state.c
index c53d16b87a98..d24a7a3482ec 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -468,8 +468,7 @@ void swap_update_readahead(struct folio *folio, struct vm_area_struct *vma,
  * __swap_cache_prepare_and_add - Prepare the folio and add it to swap cache.
  * @entry: swap entry to be bound to the folio.
  * @folio: folio to be added.
- * @gfp: memory allocation flags for charge, can be 0 if @charged is true.
- * @charged: if the folio is already charged.
+ * @gfp: memory allocation flags for charge.
  *
  * Update the swap_map and add folio as swap cache, typically before swapin.
  * All swap slots covered by the folio must have a non-zero swap count.
@@ -480,7 +479,7 @@ void swap_update_readahead(struct folio *folio, struct vm_area_struct *vma,
  */
 static struct folio *__swap_cache_prepare_and_add(swp_entry_t entry,
 						  struct folio *folio,
-						  gfp_t gfp, bool charged)
+						  gfp_t gfp)
 {
 	unsigned long nr_pages = folio_nr_pages(folio);
 	struct folio *swapcache = NULL;
@@ -511,12 +510,14 @@ static struct folio *__swap_cache_prepare_and_add(swp_entry_t entry,
 			goto failed;
 	}
 
-	if (!charged && mem_cgroup_swapin_charge_folio(folio, NULL, gfp, entry)) {
+	if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp, entry)) {
 		/* We might lose the shadow here, but that's fine */
 		ci = swap_cluster_get_and_lock(folio);
 		__swap_cache_do_del_folio(ci, folio, entry, NULL);
 		swap_cluster_unlock(ci);
 
+		count_mthp_stat(folio_order(folio), MTHP_STAT_SWPIN_FALLBACK_CHARGE);
+
 		/* __swap_cache_do_del_folio doesn't put the refs */
 		folio_ref_sub(folio, nr_pages);
 		goto failed;
@@ -578,7 +579,7 @@ struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_mask,
 	if (!folio)
 		return NULL;
 	/* Try add the new folio, returns existing folio or NULL on failure. */
-	result = __swap_cache_prepare_and_add(entry, folio, gfp_mask, false);
+	result = __swap_cache_prepare_and_add(entry, folio, gfp_mask);
 	if (result == folio)
 		*new_page_allocated = true;
 	else
@@ -589,7 +590,7 @@ struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_mask,
 /**
  * swapin_folio - swap-in one or multiple entries skipping readahead.
  * @entry: starting swap entry to swap in
- * @folio: a new allocated and charged folio
+ * @folio: a new allocated folio
  *
  * Reads @entry into @folio, @folio will be added to the swap cache.
  * If @folio is a large folio, the @entry will be rounded down to align
@@ -600,14 +601,14 @@ struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_mask,
  * to order 0. Else, if another folio was already added to the swap cache,
  * return that swap cache folio instead.
  */
-struct folio *swapin_folio(swp_entry_t entry, struct folio *folio)
+struct folio *swapin_folio(swp_entry_t entry, struct folio *folio, gfp_t gfp)
 {
 	struct folio *swapcache;
 	pgoff_t offset = swp_offset(entry);
 	unsigned long nr_pages = folio_nr_pages(folio);
 
 	entry = swp_entry(swp_type(entry), round_down(offset, nr_pages));
-	swapcache = __swap_cache_prepare_and_add(entry, folio, 0, true);
+	swapcache = __swap_cache_prepare_and_add(entry, folio, gfp);
 	if (swapcache == folio)
 		swap_read_folio(folio, NULL);
 	return swapcache;

-- 
2.53.0





Thread overview: 3+ messages
2026-04-07 14:55 [PATCH RFC 0/2] mm, swap: fix swapin race that causes inaccurate memcg accounting Kairui Song via B4 Relay
2026-04-07 14:55 ` [PATCH RFC 1/2] mm, swap: fix potential race of charging into the wrong memcg Kairui Song via B4 Relay
2026-04-07 14:55 ` Kairui Song via B4 Relay [this message]
