From: Harry Yoo <harry.yoo@oracle.com>
To: Zi Yan <ziy@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>,
Jiaqi Yan <jiaqiyan@google.com>,
nao.horiguchi@gmail.com, linmiaohe@huawei.com, david@redhat.com,
lorenzo.stoakes@oracle.com, william.roche@oracle.com,
tony.luck@intel.com, wangkefeng.wang@huawei.com,
jane.chu@oracle.com, akpm@linux-foundation.org,
osalvador@suse.de, muchun.song@linux.dev, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
Vlastimil Babka <vbabka@suse.cz>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
Brendan Jackman <jackmanb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>
Subject: Re: [PATCH v1 1/2] mm/huge_memory: introduce uniform_split_unmapped_folio_to_zero_order
Date: Mon, 17 Nov 2025 12:39:46 +0900
Message-ID: <aRqZApBuzxzo9rF9@hyeyoo>
In-Reply-To: <4C3B115F-5559-430A-A240-A6A291819818@nvidia.com>
On Sun, Nov 16, 2025 at 10:21:26PM -0500, Zi Yan wrote:
> On 16 Nov 2025, at 22:15, Harry Yoo wrote:
>
> > On Sun, Nov 16, 2025 at 11:51:14AM +0000, Matthew Wilcox wrote:
> >> On Sun, Nov 16, 2025 at 01:47:20AM +0000, Jiaqi Yan wrote:
> >>> Introduce uniform_split_unmapped_folio_to_zero_order, a wrapper
> >>> around the existing __split_unmapped_folio(). Callers can use it to
> >>> uniformly split an unmapped high-order folio into order-0 folios.
> >>
> >> Please don't make this function exist. I appreciate what you're trying
> >> to do, but let's try to do it differently?
> >>
> >> When we have struct folio separately allocated from struct page,
> >> splitting a folio will mean allocating new struct folios for every
> >> new folio created. I anticipate an order-0 folio will be about 80 or
> >> 96 bytes. So if we create 512 * 512 folios in a single go, that'll be
> >> an allocation of 20MB.
> >>
> >> This is why I asked Zi Yan to create the asymmetrical folio split, so we
> >> only end up creating log() of this. In the case of a single hwpoison page
> >> in an order-18 hugetlb, that'd be 19 allocations totalling 1520 bytes.
> >
> > Oh god, I completely overlooked this aspect when discussing this with Jiaqi.
> > Thanks for raising this concern.
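(To spell out the arithmetic: assuming ~80 bytes per struct folio, a
uniform split of a PUD-sized folio creates 512 * 512 = 262144 order-0
folios, i.e. 262144 * 80 bytes = ~20 MiB allocated in one go. A
non-uniform split of an order-18 folio around a single hwpoison page
instead creates one folio each of order 17 down to 0 plus the order-0
poisoned page itself: 19 folios * 80 bytes = 1520 bytes.)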
> >
> >> But since we're only doing this on free, we won't need to do folio
> >> allocations at all; we'll just be able to release the good pages to the
> >> page allocator and sequester the hwpoison pages.
> >
> > [+Cc PAGE ALLOCATOR folks]
> >
> > So we need an interface to free only the healthy portion of a hwpoison folio.
> >
> > I think a proper approach to this should be to "free a hwpoison folio
> > just like freeing a normal folio via folio_put() or free_frozen_pages(),
> > then the page allocator will add only healthy pages to the freelist and
> > isolate the hwpoison pages". Otherwise we'll end up open-coding a lot,
> > which is too fragile.
>
> Why not use __split_unmapped_folio(folio, /*new_order=*/0,
>                                    /*split_at=*/HWPoisoned_page,
>                                    ..., /*uniform_split=*/false)?
>
> If there are multiple HWPoisoned pages, just repeat.
Using __split_unmapped_folio() is totally fine. I was just thinking that
maybe we should hide the complexity inside the page allocator if we want
to avoid allocating struct folio at all when handling this.
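If we do keep using __split_unmapped_folio(), I guess the "just repeat"
part would look roughly like this (same "..." elision as in your snippet
above; for_each_hwpoison_page() is a made-up iterator, just for
illustration):

	struct page *p;

	/*
	 * One non-uniform split per poisoned page; each split only
	 * allocates struct folios along the path down to that page.
	 */
	for_each_hwpoison_page(p, folio)
		__split_unmapped_folio(page_folio(p), /*new_order=*/0,
				       /*split_at=*/p, ...,
				       /*uniform_split=*/false);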
> > In fact, that can be done by teaching free_pages_prepare() how to handle
> > the case where one or more subpages of a folio are hwpoison pages.
> >
> > How should this be implemented in the page allocator in a memdescs world?
> > Hmm, we'll want to do some kind of non-uniform split that doesn't
> > actually split the folio but allocates struct buddy instead?
> >
> > But... for now I think hiding this complexity inside the page allocator
> > is good enough; it would just mean splitting a frozen page inside the
> > page allocator (probably non-uniformly?). We can re-implement it later
> > to provide better support for memdescs.
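To make that last point concrete, a minimal sketch of what "splitting a
frozen page inside the page allocator" could look like (the function
name is made up, locking and accounting are omitted, and it assumes a
single poisoned page and a pfn-contiguous memmap within the block):

	/*
	 * Halve the frozen block buddy-style, freeing the healthy half
	 * at each step, so the hwpoison page never reaches a freelist.
	 */
	static void free_healthy_buddies(struct page *page, unsigned int order,
					 struct page *poisoned)
	{
		while (order--) {
			struct page *buddy = page + (1UL << order);

			if (poisoned < buddy) {
				/* poison in the low half: free the high half */
				free_frozen_pages(buddy, order);
			} else {
				/* poison in the high half: free the low half */
				free_frozen_pages(page, order);
				page = buddy;
			}
		}
		/* @page is now the order-0 hwpoison page; keep it sequestered */
	}

Like the non-uniform folio split, this is only O(log) frees, and it
never allocates a struct folio at all.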
--
Cheers,
Harry / Hyeonggon