From: Zi Yan <ziy@nvidia.com>
To: David Hildenbrand <david@redhat.com>,
	Matthew Wilcox <willy@infradead.org>,
	Pankaj Raghav <kernel@pankajraghav.com>,
	Luis Chamberlain <mcgrof@kernel.org>
Cc: Jinjiang Tu <tujinjiang@huawei.com>,
	Oscar Salvador <osalvador@suse.de>,
	akpm@linux-foundation.org, linmiaohe@huawei.com,
	mhocko@kernel.org, linux-mm@kvack.org,
	wangkefeng.wang@huawei.com
Subject: Re: [PATCH v2 2/2] mm/memory_hotplug: fix hwpoisoned large folio handling in do_migrate_range
Date: Wed, 09 Jul 2025 12:27:05 -0400
Message-ID: <1D589FE5-3515-4ED5-B12E-D5CE23BA5D13@nvidia.com>
In-Reply-To: <8c9719f0-c072-40bb-b7f6-6f2cc41a31dc@redhat.com>

On 8 Jul 2025, at 5:54, David Hildenbrand wrote:

> On 08.07.25 03:15, Jinjiang Tu wrote:
>>
>> On 2025/7/7 20:37, David Hildenbrand wrote:
>>> On 07.07.25 13:51, Jinjiang Tu wrote:
>>>>
>>>> On 2025/7/3 17:06, David Hildenbrand wrote:
>>>>> On 03.07.25 10:24, Jinjiang Tu wrote:
>>>>>>
>>>>>> On 2025/7/3 15:57, David Hildenbrand wrote:
>>>>>>> On 03.07.25 09:46, Jinjiang Tu wrote:
>>>>>>>>
>>>>>>>> On 2025/7/1 22:21, Oscar Salvador wrote:
>>>>>>>>> On Fri, Jun 27, 2025 at 08:57:47PM +0800, Jinjiang Tu wrote:
>>>>>>>>>> In do_migrate_range(), the hwpoisoned folio may be a large folio,
>>>>>>>>>> which can't be handled by unmap_poisoned_folio().
>>>>>>>>>>
>>>>>>>>>> I can reproduce this issue in qemu after adding a delay in
>>>>>>>>>> memory_failure():
>>>>>>>>>>
>>>>>>>>>> BUG: kernel NULL pointer dereference, address: 0000000000000000
>>>>>>>>>> Workqueue: kacpi_hotplug acpi_hotplug_work_fn
>>>>>>>>>> RIP: 0010:try_to_unmap_one+0x16a/0xfc0
>>>>>>>>>>       <TASK>
>>>>>>>>>>       rmap_walk_anon+0xda/0x1f0
>>>>>>>>>>       try_to_unmap+0x78/0x80
>>>>>>>>>>       ? __pfx_try_to_unmap_one+0x10/0x10
>>>>>>>>>>       ? __pfx_folio_not_mapped+0x10/0x10
>>>>>>>>>>       ? __pfx_folio_lock_anon_vma_read+0x10/0x10
>>>>>>>>>>       unmap_poisoned_folio+0x60/0x140
>>>>>>>>>>       do_migrate_range+0x4d1/0x600
>>>>>>>>>>       ? slab_memory_callback+0x6a/0x190
>>>>>>>>>>       ? notifier_call_chain+0x56/0xb0
>>>>>>>>>>       offline_pages+0x3e6/0x460
>>>>>>>>>>       memory_subsys_offline+0x130/0x1f0
>>>>>>>>>>       device_offline+0xba/0x110
>>>>>>>>>>       acpi_bus_offline+0xb7/0x130
>>>>>>>>>>       acpi_scan_hot_remove+0x77/0x290
>>>>>>>>>>       acpi_device_hotplug+0x1e0/0x240
>>>>>>>>>>       acpi_hotplug_work_fn+0x1a/0x30
>>>>>>>>>>       process_one_work+0x186/0x340
>>>>>>>>>>
>>>>>>>>>> In this case, just make offline_pages() fail.
>>>>>>>>>>
>>>>>>>>>> Besides, do_migrate_range() may be called between memory_failure()
>>>>>>>>>> setting the hwpoison flag and isolating the folio from the LRU, so
>>>>>>>>>> remove the WARN_ON(). In other places, unmap_poisoned_folio() is
>>>>>>>>>> called when the folio is isolated; obey that in do_migrate_range()
>>>>>>>>>> too.
>>>>>>>>>>
>>>>>>>>>> Fixes: b15c87263a69 ("hwpoison, memory_hotplug: allow hwpoisoned
>>>>>>>>>> pages to be offlined")
>>>>>>>>>> Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
>>>>>>>>> ...
>>>>>>>>>> @@ -2041,11 +2048,9 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
>>>>>>>>>>                      ret = scan_movable_pages(pfn, end_pfn, &pfn);
>>>>>>>>>>                   if (!ret) {
>>>>>>>>>> -                /*
>>>>>>>>>> -                 * TODO: fatal migration failures should bail
>>>>>>>>>> -                 * out
>>>>>>>>>> -                 */
>>>>>>>>>> -                do_migrate_range(pfn, end_pfn);
>>>>>>>>>> +                ret = do_migrate_range(pfn, end_pfn);
>>>>>>>>>> +                if (ret)
>>>>>>>>>> +                    break;
>>>>>>>>> I am not really sure about this one.
>>>>>>>>> I get the reason you're adding it, but note that migrate_pages() can
>>>>>>>>> also return "fatal" errors and we don't propagate those.
>>>>>>>>>
>>>>>>>>> The motto has always been to migrate as much as possible, and this
>>>>>>>>> changes that behaviour.
>>>>>>>> If we just skip to the next pfn, offline_pages() will loop forever,
>>>>>>>> pointlessly, until it receives a signal.
>>>>>>>
>>>>>>> Yeah, that's also not good.
>>>>>>>
>>>>>>>> It seems there is no documentation guaranteeing that memory offline
>>>>>>>> has to migrate as much as possible.
>>>>>>>
>>>>>>> We should try offlining as well as possible. But if there is something
>>>>>>> we just cannot possibly migrate, there is no sense in retrying.
>>>>>>>
>>>>>>> Now, could we run into this case here because we are racing with other
>>>>>>> code, so that retrying could actually make it work?
>>>>>>>
>>>>>>> Remind me again: how exactly do we arrive at this point of having a
>>>>>>> large folio that is hwpoisoned but still mapped?
>>>>>>>
>>>>>>> In memory_failure(), we do the following on a large folio:
>>>>>>>
>>>>>>> 1) folio_set_has_hwpoisoned
>>>>>>> 2) try_to_split_thp_page
>>>>>>> 3) if splitting fails, kill_procs_now
>>>>>> If 2) is executed while do_migrate_range() has incremented the refcount
>>>>>> of the folio, the split fails, and retrying is meaningless.
>>>>>
>>>>> kill_procs_now will kill all processes, effectively unmapping the
>>>>> folio in that case?
>>>>>
>>>>> So retrying would later just ... get us an unmapped folio and we can
>>>>> make progress?
>>>>>
>>>> kill_procs_now()->collect_procs() collects the tasks to kill. But not
>>>> all tasks that map the folio will be collected:
>>>> collect_procs_anon()->task_early_kill()->find_early_kill_thread() will
>>>> not select a task (other than current) if PF_MCE_PROCESS isn't set and
>>>> sysctl_memory_failure_early_kill isn't enabled (which is the default
>>>> behaviour).
>>>
>>> I think you're right, that's rather nasty.
>>>
>>> We fail to split, but keep the folio mapped into some processes.
>>>
>>> And we can't unmap it because unmap_poisoned_folio() does not properly
>>> support large folios yet.
>>>
>>> We really should unmap the folio when splitting fails. :(
>> unmap_poisoned_folio() doesn't guarantee the folio is unmapped successfully,
>> judging by its return value, although I don't know in which cases unmapping
>> would fail.
>
> I think the only such cases are for anon folios: when the folio is in the
> swapcache, or it is lazyfree (!swapbacked) and we run into the walk_abort
> cases. Retrying will likely make it work in many cases, I assume.
>
> This is all really rather suboptimal and, I'm afraid, requires a much bigger rework to improve it.
>
> Failing memory offlining is also not nice.
>
> I was wondering whether we can just keep splitting the folio until it works:
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index b1caedbade5b1..991b095ac7e78 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1819,6 +1819,19 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>                         pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
>                  if (folio_contain_hwpoisoned_page(folio)) {
> +                       /*
> +                        * unmap_poisoned_folio() cannot handle THPs, so
> +                        * keep trying to split first.
> +                        */
> +                       if (folio_test_large(folio) && !folio_test_hugetlb(folio)) {
> +                               folio_lock(folio);
> +                               split_huge_page_to_list_to_order(&folio->page,
> +                                                                NULL,
> +                                                                min_order_for_split(folio));
> +                               folio_unlock(folio);
> +                               if (folio_test_large(folio))
> +                                       goto put_folio;
> +                       }
>                         if (WARN_ON(folio_test_lru(folio)))
>                                 folio_isolate_lru(folio);
>                         if (folio_mapped(folio)) {
>
>
> Probably the WARN_ON can indeed trigger now.
>
>
> @Zi Yan, on a related note ...
>
> in memory_failure(), we call try_to_split_thp_page(). If it works,
> we assume that we have a small folio.
>
> But that is not the case if split_huge_page() cannot split it to
> order-0 ... min_order_for_split().

Right, memory_failure() needs to learn about this: either poison every
subpage, or write back if necessary and drop the page cache folio.
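
For illustration only, here is a rough sketch of the "poison every subpage"
option (hwpoison_whole_folio() is a made-up name, not existing kernel code;
folio_nr_pages(), folio_page() and SetPageHWPoison() are existing helpers,
and real memory_failure() handling would also need to update the poisoned
page accounting and notify the affected processes):

/*
 * Hypothetical sketch: mark every constituent page of a folio that could
 * not be split below its minimum order as hwpoisoned.
 */
static void hwpoison_whole_folio(struct folio *folio)
{
	long i;

	for (i = 0; i < folio_nr_pages(folio); i++)
		SetPageHWPoison(folio_page(folio, i));
}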

>
> I'm afraid we have more such code that does not expect that, even if
> split_huge_page() succeeds, we might still have a large folio ...

I did some searching; here are the users of split_huge_page*():

1. ksm: it is anonymous only, so the split always goes to order-0;
2. userfaultfd: it is also anonymous;
3. madvise cold or pageout: a large pagecache folio will be split if it is
   partially mapped, and the caller will retry. That might cause a deadlock
   if the folio has a min order;
4. shmem: split always goes to order-0;
5. memory-failure: see above.

So we will need to take care of the madvise cold or pageout case?

Hi Matthew, Pankaj, and Luis,

Is it possible to partially map a min-order folio in a fs with LBS? Based on my
understanding of madvise_cold_or_pageout_pte_range(), it seems that it will try
to split the folio and expects an order-0 folio after a successful split.
But splitting a min-order folio is a nop, which could lead to a deadlock in the
code. Or did I just get it wrong?
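
For concreteness, a hypothetical guard using min_order_for_split() (the helper
David's diff above also uses); this is a sketch only, not a proposed patch, and
can_split_further() is a made-up name, while folio_test_large(), folio_order()
and min_order_for_split() exist. A caller that retries after a split could
check whether splitting can still make progress instead of assuming the result
is order-0:

/*
 * Sketch: a split can only make progress if the folio is large and is
 * not already at the minimum order its mapping allows.
 */
static bool can_split_further(struct folio *folio)
{
	return folio_test_large(folio) &&
	       folio_order(folio) > min_order_for_split(folio);
}

A retry loop would then give up (or fall back to killing/poisoning) once
can_split_further() returns false, rather than spinning on a min-order folio.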


Best Regards,
Yan, Zi


