From: Miaohe Lin <linmiaohe@huawei.com>
To: Jiaqi Yan <jiaqiyan@google.com>
Cc: <nao.horiguchi@gmail.com>, <tony.luck@intel.com>,
<wangkefeng.wang@huawei.com>, <willy@infradead.org>,
<akpm@linux-foundation.org>, <osalvador@suse.de>,
<rientjes@google.com>, <duenwen@google.com>,
<jthoughton@google.com>, <jgg@nvidia.com>, <ankita@nvidia.com>,
<peterx@redhat.com>, <sidhartha.kumar@oracle.com>,
<ziy@nvidia.com>, <david@redhat.com>,
<dave.hansen@linux.intel.com>, <muchun.song@linux.dev>,
<linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>,
<linux-fsdevel@vger.kernel.org>, <william.roche@oracle.com>,
<harry.yoo@oracle.com>, <jane.chu@oracle.com>
Subject: Re: [PATCH v3 1/3] mm: memfd/hugetlb: introduce memfd-based userspace MFR policy
Date: Tue, 10 Feb 2026 15:31:29 +0800 [thread overview]
Message-ID: <31cc7bed-c30f-489c-3ac3-4842aa00b869@huawei.com> (raw)
In-Reply-To: <CACw3F50PwJ+sSOX0wySQgBzrEW2XOctxuX5jM37OG0HS_kHdbQ@mail.gmail.com>
On 2026/2/10 12:47, Jiaqi Yan wrote:
> On Mon, Feb 9, 2026 at 3:54 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>
>> On 2026/2/4 3:23, Jiaqi Yan wrote:
>>> Sometimes immediately hard offlining a large chunk of contiguous memory
>>> having uncorrected memory errors (UE) may not be the best option.
>>> Cloud providers usually serve capacity- and performance-critical guest
>>> memory with 1G HugeTLB hugepages, as this significantly reduces the
>>> overhead associated with managing page tables and TLB misses. However,
>>> for today's HugeTLB system, once a byte of memory in a hugepage is
>>> hardware corrupted, the kernel discards the whole hugepage, including
>>> the healthy portion. Customer workloads running in the VM can hardly
>>> recover from such a large loss of memory.
>>
>> Thanks for your patch. Some questions below.
>>
>>>
>>> Therefore, keeping or discarding a large chunk of contiguous memory
>>> owned by userspace (particularly to serve guest memory) due to a
>>> recoverable UE may better be controlled by the userspace process
>>> that owns the memory, e.g. the VMM in a Cloud environment.
>>>
>>> Introduce a memfd-based userspace memory failure recovery (MFR)
>>> policy, MFD_MF_KEEP_UE_MAPPED. It is possible to support other
>>> memfd types, but the current implementation only covers HugeTLB.
>>>
>>> For a hugepage associated with MFD_MF_KEEP_UE_MAPPED enabled memfd,
>>> whenever it runs into a new UE,
>>>
>>> * MFR defers hard offline operations, i.e., unmapping and
>>
>> So the folio can't be unpoisoned until the hugetlb folio becomes free?
>
> Are you asking, from a testing perspective, whether we are still able
> to clean up injected test errors via unpoison_memory() with
> MFD_MF_KEEP_UE_MAPPED?
>
> If so, unpoison_memory() can't turn the HWPoison hugetlb page to a
> normal hugetlb page, as MFD_MF_KEEP_UE_MAPPED automatically dissolves
> it. unpoison_memory(pfn) can probably still turn the HWPoison raw page
> back to a normal one, but you already lost the hugetlb page.

We might lose some testability, but that should be an acceptable compromise.
>
>>
>>> dissolving. MFR still sets the HWPoison flag, holds a refcount
>>> for every raw HWPoison page, records them in a list, and sends
>>> SIGBUS to the consuming thread, but si_addr_lsb is reduced to
>>> PAGE_SHIFT. If userspace is able to handle the SIGBUS, the HWPoison
>>> hugepage remains accessible via the mapping created with that memfd.
>>>
>>> * If the memory was not faulted in yet, the fault handler also
>>> allows faulting in the HWPoison folio.
>>>
>>> For a MFD_MF_KEEP_UE_MAPPED enabled memfd, when it is closed, or
>>> when the userspace process truncates its hugepages:
>>>
>>> * When the HugeTLB in-memory file system removes the filemap's
>>> folios one by one, it asks MFR to deal with HWPoison folios
>>> on the fly, implemented by filemap_offline_hwpoison_folio().
>>>
>>> * MFR drops the refcounts being held for the raw HWPoison
>>> pages within the folio. Now that the HWPoison folio becomes
>>> free, MFR dissolves it into a set of raw pages. The healthy pages
>>> are recycled into the buddy allocator, while the HWPoison ones are
>>> prevented from re-allocation.
>>>
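For concreteness, a minimal userspace sketch of how a VMM might opt in
to this policy and field the per-page SIGBUS. MFD_MF_KEEP_UE_MAPPED is
only proposed by this series, so the value below is a placeholder
rather than uapi, and the handler body is a stub:

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MFD_MF_KEEP_UE_MAPPED
#define MFD_MF_KEEP_UE_MAPPED 0x0020U	/* placeholder, not a uapi value */
#endif

static void sigbus_handler(int sig, siginfo_t *info, void *uctx)
{
	/*
	 * With the policy enabled, si_addr_lsb is PAGE_SHIFT, so only
	 * one small page is lost rather than the whole 1G hugepage.
	 * A real handler must not simply return for BUS_MCEERR_AR
	 * faults, or the access retries and faults again; it should
	 * unmap or replace the affected page.
	 */
	psiginfo(info, "uncorrected memory error");
}

int main(void)
{
	struct sigaction sa = {
		.sa_sigaction = sigbus_handler,
		.sa_flags = SA_SIGINFO,
	};
	int fd;

	sigaction(SIGBUS, &sa, NULL);

	/* Add MFD_HUGE_1GB to pick 1G hugepages explicitly. */
	fd = memfd_create("guest-mem", MFD_HUGETLB | MFD_MF_KEEP_UE_MAPPED);
	if (fd < 0) {
		perror("memfd_create");
		return 1;
	}
	/* ftruncate() + mmap() the fd and hand the memory to the guest. */
	close(fd);
	return 0;
}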
>> ...
>>
>>>
>>> +static void filemap_offline_hwpoison_folio_hugetlb(struct folio *folio)
>>> +{
>>> +	int ret;
>>> +	struct llist_node *head;
>>> +	struct raw_hwp_page *curr, *next;
>>> +
>>> +	/*
>>> +	 * Since folio is still in the folio_batch, drop the refcount
>>> +	 * elevated by filemap_get_folios.
>>> +	 */
>>> +	folio_put_refs(folio, 1);
>>> +	head = llist_del_all(raw_hwp_list_head(folio));
>>
>> We might race with get_huge_page_for_hwpoison()? llist_add() might be called
>> by folio_set_hugetlb_hwpoison() just after llist_del_all()?
>
> Oh, when there is a new UE while we are releasing the folio here, right?
Right.
> In that case, would mutex_lock(&mf_mutex) eliminate the potential race?
IMO spin_lock_irq(&hugetlb_lock) might be better.
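As a rough sketch of that suggestion (assuming folio_set_hugetlb_hwpoison()
would likewise do its llist_add() under hugetlb_lock, otherwise the lock
buys nothing):

	spin_lock_irq(&hugetlb_lock);
	/*
	 * Empty the raw_hwp list while holding hugetlb_lock, so a
	 * concurrent folio_set_hugetlb_hwpoison() cannot llist_add()
	 * a new entry between llist_del_all() and the teardown below.
	 */
	head = llist_del_all(raw_hwp_list_head(folio));
	spin_unlock_irq(&hugetlb_lock);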
>
>>
>>> +
>>> +	/*
>>> +	 * Release refcounts held by try_memory_failure_hugetlb, one per
>>> +	 * HWPoison-ed page in the raw hwp list.
>>> +	 *
>>> +	 * Set HWPoison flag on each page so that free_has_hwpoisoned()
>>> +	 * can exclude them during dissolve_free_hugetlb_folio().
>>> +	 */
>>> +	llist_for_each_entry_safe(curr, next, head, node) {
>>> +		folio_put(folio);
>>
>> The hugetlb folio refcnt will only be increased once even if it contains multiple UE sub-pages.
>> See __get_huge_page_for_hwpoison() for details. So folio_put() might be called more times than
>> folio_try_get() in __get_huge_page_for_hwpoison().
>
> The changes in folio_set_hugetlb_hwpoison() should make
> __get_huge_page_for_hwpoison() not take the "out" path, which
> decreases the increased refcount for the folio. IOW, every time a new
> UE happens, we handle the hugetlb page as if it is an in-use hugetlb
> page.
See the code snippet below (comments [1] and [2]):
int __get_huge_page_for_hwpoison(unsigned long pfn, int flags,
				 bool *migratable_cleared)
{
	struct page *page = pfn_to_page(pfn);
	struct folio *folio = page_folio(page);
	int ret = 2;	/* fallback to normal page handling */
	bool count_increased = false;

	if (!folio_test_hugetlb(folio))
		goto out;

	if (flags & MF_COUNT_INCREASED) {
		ret = 1;
		count_increased = true;
	} else if (folio_test_hugetlb_freed(folio)) {
		ret = 0;
	} else if (folio_test_hugetlb_migratable(folio)) {
		^^^^ *hugetlb_migratable is checked before trying to get the folio refcnt* [1]
		ret = folio_try_get(folio);
		if (ret)
			count_increased = true;
	} else {
		ret = -EBUSY;
		if (!(flags & MF_NO_RETRY))
			goto out;
	}

	if (folio_set_hugetlb_hwpoison(folio, page)) {
		ret = -EHWPOISON;
		goto out;
	}

	/*
	 * Clearing hugetlb_migratable for hwpoisoned hugepages to prevent them
	 * from being migrated by memory hotremove.
	 */
	if (count_increased && folio_test_hugetlb_migratable(folio)) {
		folio_clear_hugetlb_migratable(folio);
		^^^^ *hugetlb_migratable is cleared the first time we see the folio* [2]
		*migratable_cleared = true;
	}
	...
Or am I missing something?
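
To spell the scenario out, two UEs landing in the same in-use hugetlb
folio would go through:

	/* UE #1: folio still migratable */
	folio_try_get(folio);			/* +1 ref, via [1] */
	folio_clear_hugetlb_migratable(folio);	/* [2] */

	/*
	 * UE #2: migratable was cleared by UE #1, so we take the else
	 * branch (-EBUSY), retry with MF_NO_RETRY, and fall through to
	 * folio_set_hugetlb_hwpoison() without taking a reference; a
	 * second raw_hwp_page entry is recorded anyway.
	 *
	 * Teardown then walks two raw_hwp_page entries and does two
	 * folio_put() calls for the single folio_try_get() above.
	 */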
>
>>
>>> +		SetPageHWPoison(curr->page);
>>
>> If the hugetlb folio's vmemmap is optimized, I think SetPageHWPoison might trigger a BUG.
>
> Ah, I see, vmemmap optimization doesn't allow us to move flags from
> the raw_hwp_list to the tail pages. I guess the best I can do is bail
> out if vmemmap optimization is enabled, like folio_clear_hugetlb_hwpoison() does.
I think you can do this after hugetlb_vmemmap_restore_folio() is called.
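I.e., something like the following in filemap_offline_hwpoison_folio_hugetlb()
(just a sketch; folio_hstate() and the error path are my guesses, not the
patch's code):

	/*
	 * Tail struct pages of an HVO-optimized folio are read-only
	 * aliases, so SetPageHWPoison() on them would fault.  Restore
	 * the vmemmap first, then mark the raw pages.
	 */
	if (hugetlb_vmemmap_restore_folio(folio_hstate(folio), folio))
		return;	/* restore failed; the folio cannot be dissolved */

	llist_for_each_entry_safe(curr, next, head, node) {
		folio_put(folio);
		SetPageHWPoison(curr->page);
	}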
Thanks.
Thread overview: 15+ messages
2026-02-03 19:23 [PATCH v3 0/3] memfd-based Userspace MFR Policy for HugeTLB Jiaqi Yan
2026-02-03 19:23 ` [PATCH v3 1/3] mm: memfd/hugetlb: introduce memfd-based userspace MFR policy Jiaqi Yan
2026-02-04 17:29 ` William Roche
2026-02-10 4:46 ` Jiaqi Yan
2026-02-09 11:54 ` Miaohe Lin
2026-02-10 4:47 ` Jiaqi Yan
2026-02-10 7:31 ` Miaohe Lin [this message]
2026-02-13 5:01 ` Jiaqi Yan
2026-02-03 19:23 ` [PATCH v3 2/3] selftests/mm: test userspace MFR for HugeTLB hugepage Jiaqi Yan
2026-02-04 17:53 ` William Roche
2026-02-12 3:11 ` Jiaqi Yan
2026-02-09 12:01 ` Miaohe Lin
2026-02-12 3:17 ` Jiaqi Yan
2026-02-03 19:23 ` [PATCH v3 3/3] Documentation: add documentation for MFD_MF_KEEP_UE_MAPPED Jiaqi Yan
2026-02-04 17:56 ` William Roche