From: David Hildenbrand <david@redhat.com>
To: Jinjiang Tu <tujinjiang@huawei.com>, Miaohe Lin <linmiaohe@huawei.com>
Cc: wangkefeng.wang@huawei.com, nao.horiguchi@gmail.com,
akpm@linux-foundation.org, xueshuai@linux.alibaba.com,
ziy@nvidia.com, osalvador@suse.de, linux-mm@kvack.org
Subject: Re: [PATCH] mm/memory-failure: fix infinite UCE for VM_PFNMAP pfn
Date: Fri, 8 Aug 2025 10:21:57 +0200 [thread overview]
Message-ID: <864f2ef6-51bd-42f3-9988-16b5e94f05d9@redhat.com> (raw)
In-Reply-To: <156267ef-f834-4bea-9dc0-c8ad32d066b0@huawei.com>
On 07.08.25 13:13, Jinjiang Tu wrote:
>
> On 2025/8/6 20:41, David Hildenbrand wrote:
>> On 06.08.25 05:24, Jinjiang Tu wrote:
>>>
>>> On 2025/8/6 11:05, Miaohe Lin wrote:
>>>> On 2025/8/6 10:05, Jinjiang Tu wrote:
>>>>> When memory_failure() is called for an already hwpoisoned pfn,
>>>>> kill_accessing_process() will be called to kill the current task.
>>>>> However, if
>>>> Thanks for your patch.
>>>>
>>>>> the vma of the accessing vaddr is VM_PFNMAP, walk_page_range() will
>>>>> skip
>>>>> the vma in walk_page_test() and return 0.
>>>>>
>>>>> Before commit aaf99ac2ceb7 ("mm/hwpoison: do not send SIGBUS to
>>>>> processes with recovered clean pages"), kill_accessing_process()
>>>>> would return -EFAULT.
>>>> I'm not sure, but shouldn't pfn_to_online_page() return NULL for
>>>> VM_PFNMAP pages? So memory_failure_dev_pagemap() should handle
>>>> these pages?
>>>
>>> We could call remap_pfn_range() for pfns that do have a struct page.
>>> IIUC, VM_PFNMAP means we should assume the pfn doesn't have a struct
>>> page, but it may still have one.
>>>
>>>>> For x86, the current task will be killed in kill_me_maybe().
>>>>>
>>>>> However, after this commit, kill_accessing_process() simply
>>>>> returns 0, which means the UCE is handled properly, but it
>>>>> actually isn't. In such a case, the user task will trigger the
>>>>> UCE infinitely.
>>>> Did you ever trigger this loop?
>>>
>>> Yes. Our test follows these steps:
>>> 1) create a user task that allocates a clean anonymous page, without
>>> accessing it.
>>> 2) use einj to inject a UCE for the page.
>>> 3) create a devmem task that uses /dev/mem to map the pfn and keeps
>>> accessing it.
>>
>> What is the use case for that? It sounds extremely questionable.
>>
> This case is only for testing, and it is indeed strange.
>
> But consider another case: a driver may map the same RAM pfn into several processes with remap_pfn_range().
> If the first task triggers a UCE when accessing the pfn, that task will be killed. But the other tasks cannot
> be killed and will trigger the UCE infinitely.
Yes, the "anon page" example is confusing though. We really just want to
test here if the PFN is mapped. And I would agree that your patch is
correct in that case.
For memory poisoning handling you really need a "struct page".
struct-less memory is only handled in special ways for DAX (see
pfn_to_online_page() logic in memory_failure()).
So what you describe here really only works when a process uses
remap_pfn_range() to VM_PFNMAP a struct-page-backed PFN.
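Just to illustrate that scenario (a rough, untested sketch; the names
dummy_buf and dummy_mmap are made up, and I'm assuming the buffer was
allocated with __get_free_page(), i.e., it is ordinary RAM with a
struct page), a driver ->mmap handler like

static unsigned long dummy_buf;	/* from __get_free_page(GFP_KERNEL) */

static int dummy_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long pfn = virt_to_phys((void *)dummy_buf) >> PAGE_SHIFT;
	unsigned long size = vma->vm_end - vma->vm_start;

	if (size > PAGE_SIZE)
		return -EINVAL;

	/*
	 * remap_pfn_range() marks the VMA VM_PFNMAP (and VM_IO), even
	 * though the PFN here is ordinary, struct-page-backed RAM.
	 */
	return remap_pfn_range(vma, vma->vm_start, pfn, size,
			       vma->vm_page_prot);
}

would give every process that mmap()s the device a VM_PFNMAP mapping of
the same struct-page-backed PFN.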
Likely your patch description should be:
"
mm/memory-failure: fix infinite UCE for VM_PFNMAP'ed page
When memory_failure() is called for an already hardware poisoned page,
kill_accessing_process() will conditionally send a SIGBUS to the current
(triggering) process if it still maps the page.
However, in case the page is not ordinarily mapped, but was mapped
through remap_pfn_range(), kill_accessing_process() would not identify
it as mapped even though hwpoison_pte_range() would be prepared to
handle it, because walk_page_range() skips VM_PFNMAP VMAs by default in
walk_page_test().
walk_page_range() will return 0, assuming "not mapped" and the SIGBUS
will be skipped. In this case, the user task will trigger UCE infinitely
because it will not receive a SIGBUS on access and simply retry.
Before commit aaf99ac2ceb7 ("mm/hwpoison: do not send SIGBUS to
processes with recovered clean pages"), kill_accessing_process() would
return EFAULT in that case, and on x86, the current task would be killed
in kill_me_maybe().
Let's fix it by adding a custom .test_walk callback that will also
process VM_PFNMAP VMAs.
"
--
Cheers,
David / dhildenb