From: Jiaqi Yan <jiaqiyan@google.com>
To: Zi Yan <ziy@nvidia.com>
Cc: nao.horiguchi@gmail.com, linmiaohe@huawei.com, david@redhat.com,
lorenzo.stoakes@oracle.com, william.roche@oracle.com,
harry.yoo@oracle.com, tony.luck@intel.com,
wangkefeng.wang@huawei.com, willy@infradead.org,
jane.chu@oracle.com, akpm@linux-foundation.org,
osalvador@suse.de, muchun.song@linux.dev, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH v1 2/2] mm/memory-failure: avoid free HWPoison high-order folio
Date: Mon, 17 Nov 2025 21:12:53 -0800
Message-ID: <CACw3F50aVFoeaEnTptvq+qjVibupM1e8XJUeU2W_y-JMzJx1iw@mail.gmail.com>
In-Reply-To: <CD886E34-9126-4B34-93B2-3DFBDAC4C454@nvidia.com>
On Sat, Nov 15, 2025 at 6:10 PM Zi Yan <ziy@nvidia.com> wrote:
>
> On 15 Nov 2025, at 20:47, Jiaqi Yan wrote:
>
> > At the end of dissolve_free_hugetlb_folio, when a free HugeTLB
> > folio becomes non-HugeTLB, it is released to the buddy allocator
> > as a high-order folio, e.g. a folio that contains 262144 pages
> > if the folio was a 1G HugeTLB hugepage.
> >
> > This is problematic if the HugeTLB hugepage contained HWPoison
> > subpages. In that case, since the buddy allocator does not check
> > for HWPoison on non-zero-order folios, the raw HWPoison page can
> > be handed out along with its buddy pages and re-used by either
> > the kernel or userspace.
> >
> > Memory failure recovery (MFR) in the kernel does attempt to take
> > the raw HWPoison page off the buddy allocator after
> > dissolve_free_hugetlb_folio. However, there is always a time
> > window between the page being freed to the buddy allocator and
> > it being taken back off.
> >
> > One obvious way to avoid this problem is to add page sanity
> > checks to the page allocation or free path. However, that goes
> > against past efforts to reduce sanity check overhead [1,2,3].
> >
> > Introduce hugetlb_free_hwpoison_folio to solve this problem.
> > The idea is: when a HugeTLB folio is known to contain HWPoison
> > page(s), first split the non-HugeTLB high-order folio uniformly
> > into 0-order folios, then let the healthy pages join the buddy
> > allocator while rejecting the HWPoison ones.
> >
> > [1] https://lore.kernel.org/linux-mm/1460711275-1130-15-git-send-email-mgorman@techsingularity.net/
> > [2] https://lore.kernel.org/linux-mm/1460711275-1130-16-git-send-email-mgorman@techsingularity.net/
> > [3] https://lore.kernel.org/all/20230216095131.17336-1-vbabka@suse.cz
> >
> > Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
> > ---
> > include/linux/hugetlb.h | 4 ++++
> > mm/hugetlb.c | 8 ++++++--
> > mm/memory-failure.c | 43 +++++++++++++++++++++++++++++++++++++++++
> > 3 files changed, 53 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index 8e63e46b8e1f0..e1c334a7db2fe 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linux/hugetlb.h
> > @@ -870,8 +870,12 @@ int dissolve_free_hugetlb_folios(unsigned long start_pfn,
> > unsigned long end_pfn);
> >
> > #ifdef CONFIG_MEMORY_FAILURE
> > +extern void hugetlb_free_hwpoison_folio(struct folio *folio);
> > extern void folio_clear_hugetlb_hwpoison(struct folio *folio);
> > #else
> > +static inline void hugetlb_free_hwpoison_folio(struct folio *folio)
> > +{
> > +}
> > static inline void folio_clear_hugetlb_hwpoison(struct folio *folio)
> > {
> > }
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 0455119716ec0..801ca1a14c0f0 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1596,6 +1596,7 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
> > struct folio *folio)
> > {
> > bool clear_flag = folio_test_hugetlb_vmemmap_optimized(folio);
> > + bool has_hwpoison = folio_test_hwpoison(folio);
> >
> > if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> > return;
> > @@ -1638,12 +1639,15 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
> > * Move PageHWPoison flag from head page to the raw error pages,
> > * which makes any healthy subpages reusable.
> > */
> > - if (unlikely(folio_test_hwpoison(folio)))
> > + if (unlikely(has_hwpoison))
> > folio_clear_hugetlb_hwpoison(folio);
> >
> > folio_ref_unfreeze(folio, 1);
> >
> > - hugetlb_free_folio(folio);
> > + if (unlikely(has_hwpoison))
> > + hugetlb_free_hwpoison_folio(folio);
> > + else
> > + hugetlb_free_folio(folio);
> > }
> >
> > /*
> > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > index 3edebb0cda30b..e6a9deba6292a 100644
> > --- a/mm/memory-failure.c
> > +++ b/mm/memory-failure.c
> > @@ -2002,6 +2002,49 @@ int __get_huge_page_for_hwpoison(unsigned long pfn, int flags,
> > return ret;
> > }
> >
> > +void hugetlb_free_hwpoison_folio(struct folio *folio)
> > +{
> > + struct folio *curr, *next;
> > + struct folio *end_folio = folio_next(folio);
> > + int ret;
> > +
> > + VM_WARN_ON_FOLIO(folio_ref_count(folio) != 1, folio);
> > +
> > + ret = uniform_split_unmapped_folio_to_zero_order(folio);
>
> I realize that __split_unmapped_folio() is the wrong name and causes confusion.
> It should be __split_frozen_folio(), since, looking at its current
> call site, it is called after the folio is frozen. There probably
> should be a check in __split_unmapped_folio() to make sure the folio is frozen.
>
> Is it possible to change hugetlb_free_hwpoison_folio() so that it
> can be called before folio_ref_unfreeze(folio, 1)? That way,
> __split_unmapped_folio() is called on frozen folios.
>
> You can add a preparation patch to rename __split_unmapped_folio() to
> __split_frozen_folio() and add
> VM_WARN_ON_ONCE_FOLIO(folio_ref_count(folio) != 0, folio) to the function.
>
FWIW, I am still going to follow your suggestion, to improve code
health and readability :)
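
For reference, here is a rough sketch (not code from this series) of how
the call site in __update_and_free_hugetlb_folio() could look if
hugetlb_free_hwpoison_folio() is reworked to take the still-frozen
folio, which is my reading of your suggestion:

	/*
	 * Sketch only, assuming hugetlb_free_hwpoison_folio() is changed
	 * to split and free the subpages while the folio's refcount is
	 * still frozen at 0, so the split helper always operates on a
	 * frozen folio. Healthy 0-order subpages would then go to
	 * free_frozen_pages() with refcount 0, and HWPoison subpages
	 * would get a reference taken to keep them out of the buddy
	 * allocator.
	 */
	if (unlikely(has_hwpoison)) {
		folio_clear_hugetlb_hwpoison(folio);
		hugetlb_free_hwpoison_folio(folio);
	} else {
		folio_ref_unfreeze(folio, 1);
		hugetlb_free_folio(folio);
	}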
> Thanks.
Thanks, Zi!
>
> > + if (ret) {
> > + /*
> > + * In case of split failure, none of the pages in folio
> > + * will be freed to buddy allocator.
> > + */
> > + pr_err("%#lx: failed to split free %d-order folio with HWPoison page(s): %d\n",
> > + folio_pfn(folio), folio_order(folio), ret);
> > + return;
> > + }
> > +
> > + /* Expect 1st folio's refcount==1, and other's refcount==0. */
> > + for (curr = folio; curr != end_folio; curr = next) {
> > + next = folio_next(curr);
> > +
> > + VM_WARN_ON_FOLIO(folio_order(curr), curr);
> > +
> > + if (PageHWPoison(&curr->page)) {
> > + if (curr != folio)
> > + folio_ref_inc(curr);
> > +
> > + VM_WARN_ON_FOLIO(folio_ref_count(curr) != 1, curr);
> > + pr_warn("%#lx: prevented freeing HWPoison page\n",
> > + folio_pfn(curr));
> > + continue;
> > + }
> > +
> > + if (curr == folio)
> > + folio_ref_dec(curr);
> > +
> > + VM_WARN_ON_FOLIO(folio_ref_count(curr), curr);
> > + free_frozen_pages(&curr->page, folio_order(curr));
> > + }
> > +}
> > +
> > /*
> > * Taking refcount of hugetlb pages needs extra care about race conditions
> > * with basic operations like hugepage allocation/free/demotion.
> > --
> > 2.52.0.rc1.455.g30608eb744-goog
>
>
> --
> Best Regards,
> Yan, Zi