From: Andrew Morton <akpm@linux-foundation.org>
To: David Hildenbrand <david@redhat.com>
Cc: Zi Yan <ziy@nvidia.com>,
	"Pankaj Raghav (Samsung)" <kernel@pankajraghav.com>,
	Matthew Wilcox <willy@infradead.org>,
	Luis Chamberlain <mcgrof@kernel.org>,
	Jinjiang Tu <tujinjiang@huawei.com>,
	Oscar Salvador <osalvador@suse.de>,
	linmiaohe@huawei.com, mhocko@kernel.org, linux-mm@kvack.org,
	wangkefeng.wang@huawei.com
Subject: Re: [PATCH v2 2/2] mm/memory_hotplug: fix hwpoisoned large folio handling in do_migrate_range
Date: Sat, 19 Jul 2025 19:23:00 -0700
Message-ID: <20250719192300.9e32c35ddc49f11c7954306b@linux-foundation.org>
In-Reply-To: <66bc7274-ec2a-423a-8094-b8d4cc9438fe@redhat.com>


I continue to retain the original patch in mm-hotfixes as part of
akpm's lame bug-tracking system.  It has now spent three weeks in -next.

And I just added a cc:stable to it, because the Fixes: commit dates
back to December 2018.

I don't expect many real-world users to be putting fake delays in
memory_failure(), but the race window is there.

So what do we do here?  Add a TODO, merge it under the
better-than-it-was-before theory and move on?


From: Jinjiang Tu <tujinjiang@huawei.com>
Subject: mm/memory_hotplug: fix hwpoisoned large folio handling in do_migrate_range
Date: Fri, 27 Jun 2025 20:57:47 +0800

In do_migrate_range(), the hwpoisoned folio may be a large folio, which
can't be handled by unmap_poisoned_folio().

I can reproduce this issue in qemu after adding a delay in memory_failure():

BUG: kernel NULL pointer dereference, address: 0000000000000000
Workqueue: kacpi_hotplug acpi_hotplug_work_fn
RIP: 0010:try_to_unmap_one+0x16a/0xfc0
 <TASK>
 rmap_walk_anon+0xda/0x1f0
 try_to_unmap+0x78/0x80
 ? __pfx_try_to_unmap_one+0x10/0x10
 ? __pfx_folio_not_mapped+0x10/0x10
 ? __pfx_folio_lock_anon_vma_read+0x10/0x10
 unmap_poisoned_folio+0x60/0x140
 do_migrate_range+0x4d1/0x600
 ? slab_memory_callback+0x6a/0x190
 ? notifier_call_chain+0x56/0xb0
 offline_pages+0x3e6/0x460
 memory_subsys_offline+0x130/0x1f0
 device_offline+0xba/0x110
 acpi_bus_offline+0xb7/0x130
 acpi_scan_hot_remove+0x77/0x290
 acpi_device_hotplug+0x1e0/0x240
 acpi_hotplug_work_fn+0x1a/0x30
 process_one_work+0x186/0x340

In this case, just make offline_pages() fail.

Also, do_migrate_range() may be called between memory_failure() setting
the hwpoison flag and isolating the folio from the LRU, so remove the
WARN_ON().
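
Spelled out as a timeline (a schematic only, not literal kernel code):

  CPU A: memory_failure()            CPU B: offline_pages()

  SetPageHWPoison(page);
                                     do_migrate_range()
                                       folio_contain_hwpoisoned_page() -> true
                                       folio_test_lru() -> still true, since
                                       CPU A has not isolated the folio yet
  <isolates folio from the LRU>

The pre-patch WARN_ON(folio_test_lru(folio)) fires in exactly this
window, even though nothing has actually gone wrong.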

Also, the other callers of unmap_poisoned_folio() only invoke it after
the folio has been isolated, so follow the same convention in
do_migrate_range().
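
Condensed, the hwpoison handling in do_migrate_range() then becomes
(assembled from the hunks below, not a verbatim excerpt):

	if (folio_contain_hwpoisoned_page(folio)) {
		/* unmap_poisoned_folio() can't handle large non-hugetlb folios */
		if (folio_test_large(folio) && !folio_test_hugetlb(folio))
			goto err_out;
		/* only unmap once the folio has been isolated */
		if (folio_test_lru(folio) && !folio_isolate_lru(folio))
			goto err_out;
		if (folio_mapped(folio)) {
			folio_lock(folio);
			unmap_poisoned_folio(folio, pfn, false);
			folio_unlock(folio);
		}
		...
	}

where err_out drops the folio reference, puts back any pages already
isolated for migration, and returns -EBUSY so that offline_pages()
fails instead of retrying indefinitely.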

Link: https://lkml.kernel.org/r/20250627125747.3094074-3-tujinjiang@huawei.com
Fixes: b15c87263a69 ("hwpoison, memory_hotplug: allow hwpoisoned pages to be offlined")
Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/memory_hotplug.c |   21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

--- a/mm/memory_hotplug.c~mm-memory_hotplug-fix-hwpoisoned-large-folio-handling-in-do_migrate_range
+++ a/mm/memory_hotplug.c
@@ -1795,7 +1795,7 @@ found:
 	return 0;
 }
 
-static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
+static int do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 {
 	struct folio *folio;
 	unsigned long pfn;
@@ -1819,8 +1819,10 @@ static void do_migrate_range(unsigned lo
 			pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
 
 		if (folio_contain_hwpoisoned_page(folio)) {
-			if (WARN_ON(folio_test_lru(folio)))
-				folio_isolate_lru(folio);
+			if (folio_test_large(folio) && !folio_test_hugetlb(folio))
+				goto err_out;
+			if (folio_test_lru(folio) && !folio_isolate_lru(folio))
+				goto err_out;
 			if (folio_mapped(folio)) {
 				folio_lock(folio);
 				unmap_poisoned_folio(folio, pfn, false);
@@ -1877,6 +1879,11 @@ put_folio:
 			putback_movable_pages(&source);
 		}
 	}
+	return 0;
+err_out:
+	folio_put(folio);
+	putback_movable_pages(&source);
+	return -EBUSY;
 }
 
 static int __init cmdline_parse_movable_node(char *p)
@@ -2041,11 +2048,9 @@ int offline_pages(unsigned long start_pf
 
 			ret = scan_movable_pages(pfn, end_pfn, &pfn);
 			if (!ret) {
-				/*
-				 * TODO: fatal migration failures should bail
-				 * out
-				 */
-				do_migrate_range(pfn, end_pfn);
+				ret = do_migrate_range(pfn, end_pfn);
+				if (ret)
+					break;
 			}
 		} while (!ret);
 
_


