From: Harry Yoo <harry.yoo@oracle.com>
To: Jiaqi Yan <jiaqiyan@google.com>
Cc: "“William Roche" <william.roche@oracle.com>,
"Ackerley Tng" <ackerleytng@google.com>,
jgg@nvidia.com, akpm@linux-foundation.org, ankita@nvidia.com,
dave.hansen@linux.intel.com, david@redhat.com,
duenwen@google.com, jane.chu@oracle.com, jthoughton@google.com,
linmiaohe@huawei.com, linux-fsdevel@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
muchun.song@linux.dev, nao.horiguchi@gmail.com,
osalvador@suse.de, peterx@redhat.com, rientjes@google.com,
sidhartha.kumar@oracle.com, tony.luck@intel.com,
wangkefeng.wang@huawei.com, willy@infradead.org
Subject: Re: [RFC PATCH v1 0/3] Userspace MFR Policy via memfd
Date: Wed, 22 Oct 2025 22:09:08 +0900
Message-ID: <aPjXdP63T1yYtvkq@hyeyoo>
In-Reply-To: <CACw3F521fi5HWhCKi_KrkNLXkw668HO4h8+DjkP2+vBuK-=org@mail.gmail.com>

On Mon, Oct 13, 2025 at 03:14:32PM -0700, Jiaqi Yan wrote:
> On Fri, Sep 19, 2025 at 8:58 AM "William Roche" <william.roche@oracle.com> wrote:
> >
> > From: William Roche <william.roche@oracle.com>
> >
> > Hello,
> >
> > The ability to keep a VM that uses large hugetlbfs pages running after a
> > memory error is very important, and the approach described here could be a
> > good candidate to address this issue.
>
> Thanks for expressing interest, William, and sorry for getting back to
> you so late.
>
> >
> > So I would like to provide my feedback after testing this code with the
> > introduction of persistent errors in the address space: my tests used a VM
> > running a kernel able to provide MFD_MF_KEEP_UE_MAPPED memfd segments to the
> > test program provided with this project. But instead of injecting the errors
> > with madvise calls from that program, I take the guest physical address of a
> > location and inject the error from the hypervisor into the VM, so that any
> > subsequent access to the location is blocked directly at the hypervisor
> > level.
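
For reference, the madvise-based injection that the hypervisor-level
injection replaces boils down to something like the following sketch (the
mapping base `buf` and the offset are illustrative; MADV_HWPOISON requires
CONFIG_MEMORY_FAILURE and CAP_SYS_ADMIN):

  #include <sys/mman.h>
  #include <stddef.h>

  /* Poison one 4K page of an existing mapping. The kernel then
   * handles it like a real uncorrected memory error. */
  static int inject_poison(void *buf, size_t offset)
  {
          return madvise((char *)buf + offset, 4096, MADV_HWPOISON);
  }
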
>
> This is exactly what a VMM should do: when it owns or manages the VM
> memory with MFD_MF_KEEP_UE_MAPPED, it is the VMM's responsibility to
> isolate the guest/vCPUs from poisoned memory pages, e.g. by intercepting
> such memory accesses.
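
To illustrate that interception: when a vCPU touches a poisoned page, the
kernel delivers SIGBUS with si_code BUS_MCEERR_AR (or BUS_MCEERR_AO for
asynchronously detected errors) to the VMM. A minimal sketch of a VMM-side
handler follows; the vmm_* helpers are hypothetical, only the siginfo
fields and si_code values are real kernel ABI:

  #include <signal.h>
  #include <stdint.h>
  #include <stdlib.h>

  /* Hypothetical VMM helpers, for illustration only. */
  extern uint64_t vmm_hva_to_gpa(void *hva);
  extern void vmm_mark_gpa_poisoned(uint64_t gpa);
  extern void vmm_inject_mce_into_guest(uint64_t gpa);

  static void sigbus_handler(int sig, siginfo_t *si, void *ctx)
  {
          if (si->si_code != BUS_MCEERR_AR && si->si_code != BUS_MCEERR_AO)
                  abort();        /* not a memory error, give up */

          /* si->si_addr is the poisoned host virtual address. */
          uint64_t gpa = vmm_hva_to_gpa(si->si_addr);

          /* Record the poisoned GPA so no vCPU access reaches it
           * again, then forward a virtual MCE to the guest. */
          vmm_mark_gpa_poisoned(gpa);
          vmm_inject_mce_into_guest(gpa);
  }

(Registered with sigaction() and SA_SIGINFO; error handling elided.)
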
>
> >
> > Using this framework, I realized that the code provided here has a problem:
> > when the error impacts a large folio, releasing this folio doesn't isolate
> > the sub-page(s) actually impacted by the poison, and __rmqueue_pcplist() can
> > return a known poisoned page to get_page_from_freelist().
>
> Just curious, how exactly can you repro this leaking of a known poisoned
> page? It may help me debug my patch.
>
> >
> > This revealed some mm limitations, as I would have expected the
> > check_new_pages() mechanism used by the __rmqueue functions to filter these
> > pages out, but I noticed that it was disabled by default in 2023 by:
> > [PATCH] mm, page_alloc: reduce page alloc/free sanity checks
> > https://lore.kernel.org/all/20230216095131.17336-1-vbabka@suse.cz
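
For context, the alloc-side check that patch put behind CONFIG_DEBUG_VM is
roughly of this shape (a simplified sketch, not the exact upstream code):

  /* Sketch of the sanity check applied to pages coming off the
   * free list when alloc-time checks are enabled. */
  static bool page_ok_for_alloc(struct page *page)
  {
          /* A page handed out by the allocator must not carry the
           * HWPoison flag, among other expected-state checks. */
          if (unlikely(PageHWPoison(page)))
                  return false;   /* reported as a "bad page" */
          return true;
  }

With these checks compiled out by default, a poisoned page left on the free
list is handed out silently instead of tripping a "bad page" report.
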
>
> Thanks for the reference. I did turn on CONFIG_DEBUG_VM=y during dev
> and testing but didn't notice any WARNING about a "bad page"; it is very
> likely I was just lucky.
>
> >
> >
> > This problem seems to be avoided if we call take_page_off_buddy(page) in the
> > filemap_offline_hwpoison_folio_hugetlb() function without testing if
> > PageBuddy(page) is true first.
>
> Oh, I think you are right: filemap_offline_hwpoison_folio_hugetlb
> shouldn't make the call to take_page_off_buddy(page) conditional on
> PageBuddy(page). take_page_off_buddy() itself checks PageBuddy() on the
> page_head at each page order. So maybe a known poisoned page is somehow
> not taken off the buddy allocator because of this?

Maybe it's the case where the poisoned page is merged into a larger page,
and the PGTY_buddy flag is set on the buddy of the poisoned page, so
PageBuddy() returns false on the poisoned page itself:

  [ free page A ][ free page B (poisoned) ]

When these two are merged, PGTY_buddy is set on page A but not on B.

But even after fixing that, we still need to fix the race condition.
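
That is also why calling it unconditionally should be safe:
take_page_off_buddy() walks the orders and checks PageBuddy() on the
candidate head at each order, roughly like this (a simplified sketch of
the upstream logic, details elided):

  /* Sketch: find the free block containing 'page', whatever order
   * it was merged into, and pull it off the free list. */
  static bool take_page_off_buddy_sketch(struct page *page)
  {
          struct zone *zone = page_zone(page);
          unsigned long pfn = page_to_pfn(page);
          unsigned int order;
          bool ret = false;

          spin_lock_irq(&zone->lock);
          for (order = 0; order < NR_PAGE_ORDERS; order++) {
                  /* Head of the order-aligned block containing 'page';
                   * only the head carries PGTY_buddy after a merge. */
                  struct page *head = page - (pfn & ((1UL << order) - 1));

                  if (PageBuddy(head) && buddy_order(head) >= order) {
                          /* Delete the block from the free list and
                           * give back every sub-page except the
                           * poisoned one (elided here). */
                          ret = true;
                          break;
                  }
          }
          spin_unlock_irq(&zone->lock);
          return ret;
  }

So even when PageBuddy(page) is false on the poisoned page, the walk still
finds the merged head.
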
> Let me try to fix it in v2, by the end of the week. If you could test
> with your way of repro as well, that would be very helpful!
>
> > But as far as I can tell, it leaves a (small) race window where a new page
> > allocation could grab a poisoned sub-page between the dissolve phase and
> > the attempt to remove it from the buddy allocator.
> >
> > I have the impression that correct behavior (isolating an impacted
> > sub-page and remapping the valid memory content) with large pages is
> > currently only achieved with Transparent Huge Pages.
> > If performance requires using HugeTLB pages, then maybe we could accept
> > losing a huge page after a memory-error-impacted MFD_MF_KEEP_UE_MAPPED
> > memfd segment is released, if that easily avoids some other corruption?
> >
> > I'm very interested in finding an appropriate way to deal with memory errors on
> > hugetlbfs pages, and willing to help to build a valid solution. This project
> > showed a real possibility to do so, even in cases where pinned memory is used -
> > with VFIO for example.
> >
> > I would really be interested in your feedback on this project, and if
> > another solution is considered better suited to handling errors on
> > hugetlbfs pages, please let us know.
>
> There is also another possible path if the VMM can switch to backing VM
> memory with *1G guest_memfd*, which wraps 1G hugetlbfs. In Ackerley's
> work [1], guest_memfd can split the 1G page for conversions. If we
> re-use that splitting for memory failure recovery, we can probably
> achieve something broadly similar to THP's memory failure recovery:
> split the 1G page into 2M and 4k chunks, then unmap only the poisoned
> 4k page. We still lose the 1G TLB reach, so the VM may be subject to
> some performance sacrifice.
> [1] https://lore.kernel.org/linux-mm/2ae41e0d80339da2b57011622ac2288fed65cd01.1747264138.git.ackerleytng@google.com
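
A rough sketch of what that recovery flow could look like, assuming
guest_memfd grows a split primitive (gmem_split_folio() is hypothetical;
memory_failure(), pfn_folio() and PMD_ORDER are existing kernel symbols):

  /* Hypothetical flow: split 1G -> 2M -> 4K around the poisoned
   * pfn, then let memory_failure() isolate just that 4K page. */
  static int gmem_recover_poisoned_page(struct folio *folio_1g,
                                        unsigned long poisoned_pfn)
  {
          int err;

          /* 1G folio -> 512 x 2M folios. */
          err = gmem_split_folio(folio_1g, PMD_ORDER);
          if (err)
                  return err;

          /* Affected 2M folio -> 512 x 4K pages. */
          err = gmem_split_folio(pfn_folio(poisoned_pfn), 0);
          if (err)
                  return err;

          /* Unmap and isolate only the poisoned 4K page; the rest of
           * the former 1G range stays mapped for the guest. */
          return memory_failure(poisoned_pfn, 0);
  }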

I want to take a closer look at the actual patches, but either way sounds
good to me.

By the way, please Cc me in future revisions :)

Thanks!
--
Cheers,
Harry / Hyeonggon