linux-mm.kvack.org archive mirror
* [Patch v2] mm/huge_memory: fix NULL pointer dereference when splitting folio
@ 2025-11-19 23:53 Wei Yang
  2025-11-20  0:03 ` Wei Yang
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Wei Yang @ 2025-11-19 23:53 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, pjw, palmer,
	aou, alex
  Cc: linux-mm, Wei Yang, stable

Commit c010d47f107f ("mm: thp: split huge page to any lower order
pages") introduced an early check of the folio's order that reads
mapping->flags before the actual split work proceeds.

This check introduced a bug: for shmem folios in the swap cache and
truncated folios, the mapping pointer can be NULL. Accessing
mapping->flags in this state leads directly to a NULL pointer
dereference.

Fix the issue by moving the mapping != NULL check ahead of any access
to mapping->flags; a minimal stand-alone sketch of the failure mode is
included in the notes below.

Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
Cc: <stable@vger.kernel.org>

---
This patch is based on current mm-new, latest commit:

    febb34c02328 dt-bindings: riscv: Add Svrsw60t59b extension description

v2:
  * just move the folio->mapping check ahead
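
For reviewers unfamiliar with the failure mode, below is a minimal,
stand-alone sketch (user-space C, not kernel code) of the crash path the
changelog describes. The struct layouts and the min_folio_order() helper
are simplified stand-ins for the real mapping_min_folio_order() in
include/linux/pagemap.h, and the flag mask is made up; only the ordering
of the NULL check versus the mapping->flags read mirrors this patch.

	/*
	 * Illustration only: a helper that reads mapping->flags faults when
	 * mapping is NULL, which is exactly the state of a truncated folio
	 * or a shmem folio sitting in the swap cache.
	 */
	#include <stdio.h>

	struct address_space { unsigned long flags; };
	struct folio { struct address_space *mapping; };

	/* Stand-in for mapping_min_folio_order(): dereferences 'mapping'. */
	static unsigned int min_folio_order(const struct address_space *mapping)
	{
		return (unsigned int)(mapping->flags & 0x1f); /* faults if mapping == NULL */
	}

	int main(void)
	{
		/* Truncated folio or shmem folio in the swap cache: mapping is NULL. */
		struct folio folio = { .mapping = NULL };

		/* With this patch, the NULL check runs first and we bail out... */
		if (!folio.mapping) {
			puts("refuse split: -EBUSY");
			return 0;
		}

		/* ...so the flags read, which would dereference NULL, is never reached. */
		printf("min folio order: %u\n", min_folio_order(folio.mapping));
		return 0;
	}

Reordering the checks this way is what the hunks below do in
__folio_split(): test folio->mapping before anything can call
mapping_min_folio_order() on it.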
---
 mm/huge_memory.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index efea42d68157..4e9e920f306d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3929,6 +3929,16 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	if (folio != page_folio(split_at) || folio != page_folio(lock_at))
 		return -EINVAL;
 
+	/*
+	 * Folios that just got truncated cannot get split. Signal to the
+	 * caller that there was a race.
+	 *
+	 * TODO: this will also currently refuse shmem folios that are in the
+	 * swapcache.
+	 */
+	if (!is_anon && !folio->mapping)
+		return -EBUSY;
+
 	if (new_order >= old_order)
 		return -EINVAL;
 
@@ -3965,18 +3975,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		gfp_t gfp;
 
 		mapping = folio->mapping;
-
-		/* Truncated ? */
-		/*
-		 * TODO: add support for large shmem folio in swap cache.
-		 * When shmem is in swap cache, mapping is NULL and
-		 * folio_test_swapcache() is true.
-		 */
-		if (!mapping) {
-			ret = -EBUSY;
-			goto out;
-		}
-
 		min_order = mapping_min_folio_order(folio->mapping);
 		if (new_order < min_order) {
 			ret = -EINVAL;
-- 
2.34.1



Thread overview: 7+ messages
2025-11-19 23:53 [Patch v2] mm/huge_memory: fix NULL pointer dereference when splitting folio Wei Yang
2025-11-20  0:03 ` Wei Yang
2025-11-20  0:46   ` Andrew Morton
2025-11-20  0:49     ` Wei Yang
2025-11-20  0:03 ` Zi Yan
2025-11-20  6:07 ` Baolin Wang
2025-11-20  9:24 ` David Hildenbrand (Red Hat)
