linux-mm.kvack.org archive mirror
* [PATCH 1/1] mm: prevent poison consumption when splitting THP
@ 2025-09-28  3:28 Qiuxu Zhuo
From: Qiuxu Zhuo @ 2025-09-28  3:28 UTC
  To: akpm, david, lorenzo.stoakes, linmiaohe, tony.luck
  Cc: qiuxu.zhuo, ziy, baolin.wang, Liam.Howlett, npache, ryan.roberts,
	dev.jain, baohua, nao.horiguchi, farrah.chen, linux-mm,
	linux-kernel, Andrew Zaborowski

From: Andrew Zaborowski <andrew.zaborowski@intel.com>

When performing memory error injection on a THP (Transparent Huge Page)
mapped to userspace on an x86 server, the kernel panics with the following
trace. The expected behavior is to terminate the affected process instead
of panicking the kernel, as the x86 Machine Check code can recover from an
in-userspace #MC.

  mce: [Hardware Error]: CPU 0: Machine Check Exception: f Bank 3: bd80000000070134
  mce: [Hardware Error]: RIP 10:<ffffffff8372f8bc> {memchr_inv+0x4c/0xf0}
  mce: [Hardware Error]: TSC afff7bbff88a ADDR 1d301b000 MISC 80 PPIN 1e741e77539027db
  mce: [Hardware Error]: PROCESSOR 0:d06d0 TIME 1758093249 SOCKET 0 APIC 0 microcode 80000320
  mce: [Hardware Error]: Run the above through 'mcelog --ascii'
  mce: [Hardware Error]: Machine check: Data load in unrecoverable area of kernel
  Kernel panic - not syncing: Fatal local machine check
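
For context, software error injection can exercise the same
memory_failure()/THP-split path from userspace. Below is a hedged
reproducer sketch, not the original test: the report used hardware
injection, which raises a real #MC on access, whereas MADV_HWPOISON
calls memory_failure() directly and requires CAP_SYS_ADMIN plus
CONFIG_MEMORY_FAILURE; whether the region is actually THP-backed also
depends on the system's THP settings.

  /* Hypothetical reproducer sketch -- not the original Intel test. */
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>

  #define THP_SIZE (2UL << 20)	/* assumes 2 MiB PMD-sized THPs (x86-64) */

  int main(void)
  {
  	void *buf;

  	/* 2 MiB-aligned anonymous region, eligible for THP backing. */
  	if (posix_memalign(&buf, THP_SIZE, THP_SIZE))
  		return 1;
  	madvise(buf, THP_SIZE, MADV_HUGEPAGE);
  	memset(buf, 0, THP_SIZE);	/* fault the huge page in */

  	/*
  	 * Software-inject poison into one 4 KiB sub-page;
  	 * memory_failure() then has to split the THP.
  	 */
  	if (madvise((char *)buf + 4096, 4096, MADV_HWPOISON))
  		return 1;
  	return 0;
  }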

The root cause of this panic is that handling a memory failure triggered
by an in-userspace #MC requires splitting the THP. The splitting process
uses try_to_map_unused_to_zeropage(), which reads the sub-pages of the
THP to identify zero-filled pages. Reading the sub-pages, however,
triggers a second, in-kernel #MC before the initial memory_failure()
completes, ultimately panicking the kernel. The call trace below shows
both #MCs.

  First Machine Check occurs // [1]
    memory_failure()         // [2]
      try_to_split_thp_page()
        split_huge_page()
          split_huge_page_to_list_to_order()
            __folio_split()  // [3]
              remap_page()
                remove_migration_ptes()
                  remove_migration_pte()
                    try_to_map_unused_to_zeropage()
                      memchr_inv()                   // [4]
                        Second Machine Check occurs  // [5]
                          Kernel panic

[1] Triggered by accessing a hardware-poisoned THP in userspace, which is
    typically recoverable by terminating the affected process.

[2] Calls folio_set_has_hwpoisoned() before try_to_split_thp_page().

[3] Passes the RMP_USE_SHARED_ZEROPAGE remap flag to remap_page().

[4] Re-accesses sub-pages of the hw-poisoned THP in the kernel (see the
    sketch below).

[5] Triggered in-kernel, leading to a kernel panic.
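
For reference, the poison-consuming read in Step [4] boils down to a
zero-detection pass over each sub-page. The sketch below is a
simplification of try_to_map_unused_to_zeropage() in mm/migrate.c
(subpage_is_zero_filled() is a hypothetical helper name; the real
function also manages the page-walk state and installs a zero-page PTE
on success):

  /* Simplified sketch; kernel context (linux/highmem.h, linux/string.h). */
  static bool subpage_is_zero_filled(struct folio *folio, unsigned long idx)
  {
  	struct page *page = folio_page(folio, idx);
  	void *addr;
  	bool zero;

  	addr = kmap_local_page(page);
  	/*
  	 * memchr_inv() reads the whole sub-page. If the sub-page carries
  	 * hardware poison, this kernel-mode load raises the second,
  	 * unrecoverable #MC described above.
  	 */
  	zero = !memchr_inv(addr, 0, PAGE_SIZE);
  	kunmap_local(addr);

  	return zero;
  }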

In Step [2], memory_failure() sets the has_hwpoisoned flag on the THP
right before calling try_to_split_thp_page(). Fix this panic by not
passing the RMP_USE_SHARED_ZEROPAGE flag to remap_page() in Step [3]
when the THP has the has_hwpoisoned flag set. This prevents the
zero-page identification pass from reading sub-pages of the poisoned
THP, avoiding the second in-kernel #MC that would otherwise panic the
kernel.

[ Qiuxu: Re-wrote the commit message. ]

Reported-by: Farrah Chen <farrah.chen@intel.com>
Signed-off-by: Andrew Zaborowski <andrew.zaborowski@intel.com>
Tested-by: Farrah Chen <farrah.chen@intel.com>
Tested-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
Signed-off-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
---
 mm/huge_memory.c    | 3 ++-
 mm/memory-failure.c | 6 ++++--
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9c38a95e9f09..1568f0308b90 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3588,6 +3588,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		struct list_head *list, bool uniform_split)
 {
 	struct deferred_split *ds_queue = get_deferred_split_queue(folio);
+	bool has_hwpoisoned = folio_test_has_hwpoisoned(folio);
 	XA_STATE(xas, &folio->mapping->i_pages, folio->index);
 	struct folio *end_folio = folio_next(folio);
 	bool is_anon = folio_test_anon(folio);
@@ -3858,7 +3859,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	if (nr_shmem_dropped)
 		shmem_uncharge(mapping->host, nr_shmem_dropped);
 
-	if (!ret && is_anon)
+	if (!ret && is_anon && !has_hwpoisoned)
 		remap_flags = RMP_USE_SHARED_ZEROPAGE;
 	remap_page(folio, 1 << order, remap_flags);
 
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index df6ee59527dd..3ba6fd4079ab 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2351,8 +2351,10 @@ int memory_failure(unsigned long pfn, int flags)
 		 * otherwise it may race with THP split.
 		 * And the flag can't be set in get_hwpoison_page() since
 		 * it is called by soft offline too and it is just called
-		 * for !MF_COUNT_INCREASED.  So here seems to be the best
-		 * place.
+		 * for !MF_COUNT_INCREASED.
+		 * It also tells split_huge_page() to not bother using
+		 * the shared zeropage -- the all-zeros check would
+		 * consume the poison.  So here seems to be the best place.
 		 *
 		 * Don't need care about the above error handling paths for
 		 * get_hwpoison_page() since they handle either free page
-- 
2.43.0


