linux-mm.kvack.org archive mirror
From: Yang Shi <shy828301@gmail.com>
To: Zi Yan <ziy@nvidia.com>
Cc: linmiaohe@huawei.com, jane.chu@oracle.com, david@redhat.com,
	 kernel@pankajraghav.com,
	 syzbot+e6367ea2fdab6ed46056@syzkaller.appspotmail.com,
	 syzkaller-bugs@googlegroups.com, akpm@linux-foundation.org,
	mcgrof@kernel.org,  nao.horiguchi@gmail.com,
	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	 Baolin Wang <baolin.wang@linux.alibaba.com>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	 Nico Pache <npache@redhat.com>,
	Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
	 Barry Song <baohua@kernel.org>,
	Lance Yang <lance.yang@linux.dev>,
	 "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Wei Yang <richard.weiyang@gmail.com>,
	 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	 linux-mm@kvack.org
Subject: Re: [PATCH v2 2/3] mm/memory-failure: improve large block size folio handling.
Date: Mon, 20 Oct 2025 16:41:02 -0700	[thread overview]
Message-ID: <CAHbLzkp8ob1_pxczeQnwinSL=DS=kByyL+yuTRFuQ0O=Eio0oA@mail.gmail.com> (raw)
In-Reply-To: <5EE26793-2CD4-4776-B13C-AA5984D53C04@nvidia.com>

On Mon, Oct 20, 2025 at 12:46 PM Zi Yan <ziy@nvidia.com> wrote:
>
> On 17 Oct 2025, at 15:11, Yang Shi wrote:
>
> > On Wed, Oct 15, 2025 at 8:38 PM Zi Yan <ziy@nvidia.com> wrote:
> >>
> >> Large block size (LBS) folios cannot be split to order-0 folios, only
> >> down to min_order_for_folio(). The current code fails the split outright,
> >> which is not optimal. Split the folio to min_order_for_folio() instead, so
> >> that after the split only the folio containing the poisoned page becomes
> >> unusable.
> >>
> >> For soft offline, do not split the large folio if it cannot be split to
> >> order-0, since the folio is still accessible from userspace and a
> >> premature split might cause a performance loss.
> >>
> >> Suggested-by: Jane Chu <jane.chu@oracle.com>
> >> Signed-off-by: Zi Yan <ziy@nvidia.com>
> >> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
> >> ---
> >>  mm/memory-failure.c | 25 +++++++++++++++++++++----
> >>  1 file changed, 21 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> >> index f698df156bf8..443df9581c24 100644
> >> --- a/mm/memory-failure.c
> >> +++ b/mm/memory-failure.c
> >> @@ -1656,12 +1656,13 @@ static int identify_page_state(unsigned long pfn, struct page *p,
> >>   * there is still more to do, hence the page refcount we took earlier
> >>   * is still needed.
> >>   */
> >> -static int try_to_split_thp_page(struct page *page, bool release)
> >> +static int try_to_split_thp_page(struct page *page, unsigned int new_order,
> >> +               bool release)
> >>  {
> >>         int ret;
> >>
> >>         lock_page(page);
> >> -       ret = split_huge_page(page);
> >> +       ret = split_huge_page_to_list_to_order(page, NULL, new_order);
> >>         unlock_page(page);
> >>
> >>         if (ret && release)
> >> @@ -2280,6 +2281,7 @@ int memory_failure(unsigned long pfn, int flags)
> >>         folio_unlock(folio);
> >>
> >>         if (folio_test_large(folio)) {
> >> +               int new_order = min_order_for_split(folio);
> >>                 /*
> >>                  * The flag must be set after the refcount is bumped
> >>                  * otherwise it may race with THP split.
> >> @@ -2294,7 +2296,14 @@ int memory_failure(unsigned long pfn, int flags)
> >>                  * page is a valid handlable page.
> >>                  */
> >>                 folio_set_has_hwpoisoned(folio);
> >> -               if (try_to_split_thp_page(p, false) < 0) {
> >> +               /*
> >> +                * If the folio cannot be split to order-0, kill the process,
> >> +                * but split the folio anyway to minimize the number of
> >> +                * unusable pages.
> >> +                */
> >> +               if (try_to_split_thp_page(p, new_order, false) || new_order) {
> >
> > folio split will clear PG_has_hwpoisoned flag. It is ok for splitting
> > to order-0 folios because the PG_hwpoisoned flag is set on the
> > poisoned page. But if you split the folio to some smaller order large
> > folios, it seems you need to keep PG_has_hwpoisoned flag on the
> > poisoned folio.
>
> OK, this means every page in a folio with folio_test_has_hwpoisoned() set
> needs to be scanned so that the after-split folios' flags can be set
> properly. The current folio split code does not do that. I am thinking
> about whether that causes any issue. Probably not, because:
>
> 1. before Patch 1 is applied, large after-split folios already trigger
> a warning in memory_failure(), which more or less masks this issue.
> 2. after Patch 1 is applied, no large after-split folios will appear,
> since the split will fail.

I'm a little bit confused. Doesn't this patch split a large folio into
new-order large folios (where the new order is the min order)? That is
why the patch has:

if (try_to_split_thp_page(p, new_order, false) || new_order) {
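
To spell out my confusion, a condensed paraphrase of the hunk quoted
above (assuming min_order_for_split() returns 0 when an order-0 split is
possible, and the filesystem's minimum folio order for an LBS folio):

	int new_order = min_order_for_split(folio);  /* > 0 for an LBS folio */

	folio_set_has_hwpoisoned(folio);
	if (try_to_split_thp_page(p, new_order, false) || new_order) {
		/*
		 * Either the split failed, or it succeeded but only down
		 * to new_order > 0. In the latter case p now sits in a
		 * still-large after-split folio whose has_hwpoisoned flag
		 * was cleared by the split.
		 */
		folio = page_folio(p);	/* re-lookup after a possible split */
		res = -EHWPOISON;
		kill_procs_now(p, pfn, flags, folio);
		put_page(p);
	}

So with new_order > 0, a large after-split folio does appear once this
patch is applied, which is why I think the flag needs to be carried over.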

Thanks,
Yang

>
> @Miaohe and @Jane, please let me know if my above reasoning makes sense or not.
>
> To make this patch right, the folio's has_hwpoisoned flag needs to be
> preserved as Yang described above. My current plan is to move
> folio_clear_has_hwpoisoned(folio) into __split_folio_to_order() and
> scan every page in the folio if the folio's has_hwpoisoned flag is set.
> There will be redundant scans in the non-uniform split case, since a
> has_hwpoisoned folio can be split multiple times (leading to multiple
> page scans), unless the scan result is stored.
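
For concreteness, a rough sketch of the scan described above, run after
the pages have been redistributed to the after-split folios
(transfer_has_hwpoisoned() is a hypothetical name, and where exactly the
split path would call it is part of the open question):

	static void transfer_has_hwpoisoned(struct page *head,
					    unsigned int old_order)
	{
		long i;

		/* walk every page of the original folio's range */
		for (i = 0; i < (1L << old_order); i++) {
			struct page *page = head + i;
			struct folio *new_folio = page_folio(page);

			/* order-0 pieces track poison via PG_hwpoisoned alone */
			if (PageHWPoison(page) && folio_test_large(new_folio))
				folio_set_has_hwpoisoned(new_folio);
		}
	}

As noted above, a non-uniform split would repeat this scan at every
level unless the result is cached somewhere.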
>
> @Miaohe and @Jane, is it possible to have multiple HW-poisoned pages in
> a folio? Is the memory failure process 1) a page access causes an MCE,
> 2) memory_failure() handles it and splits the large folio containing the
> page? Or can multiple MCEs be received, so that multiple pages in a
> folio are marked before a split happens?
>
> >
> > Yang
> >
> >
> >> +                       /* get folio again in case the original one is split */
> >> +                       folio = page_folio(p);
> >>                         res = -EHWPOISON;
> >>                         kill_procs_now(p, pfn, flags, folio);
> >>                         put_page(p);
> >> @@ -2621,7 +2630,15 @@ static int soft_offline_in_use_page(struct page *page)
> >>         };
> >>
> >>         if (!huge && folio_test_large(folio)) {
> >> -               if (try_to_split_thp_page(page, true)) {
> >> +               int new_order = min_order_for_split(folio);
> >> +
> >> +               /*
> >> +                * If the folio cannot be split to order-0, do not split it at
> >> +                * all to retain the still accessible large folio.
> >> +                * NOTE: if getting free memory is preferred, split it like it
> >> +                * is done in memory_failure().
> >> +                */
> >> +               if (new_order || try_to_split_thp_page(page, new_order, true)) {
> >>                         pr_info("%#lx: thp split failed\n", pfn);
> >>                         return -EBUSY;
> >>                 }
> >> --
> >> 2.51.0
> >>
> >>
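
A side note on the asymmetry between the two hunks above, as I read it
(paraphrasing the diff, not new code): memory_failure() attempts the
split first, so an LBS folio still gets split down to new_order to limit
the number of unusable pages even though the process is killed anyway,
while soft_offline_in_use_page() checks new_order first, so the
short-circuit means an LBS folio is never split and stays intact and
accessible:

	/* memory_failure(): always try the split, then check the order */
	if (try_to_split_thp_page(p, new_order, false) || new_order)
		/* still large, or split failed: kill the process */;

	/* soft offline: short-circuit, never split an LBS folio */
	if (new_order || try_to_split_thp_page(page, new_order, true))
		/* leave the folio intact and return -EBUSY */;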
>
>
> --
> Best Regards,
> Yan, Zi


Thread overview: 27+ messages
2025-10-16  3:34 [PATCH v2 0/3] Do not change split folio target order Zi Yan
2025-10-16  3:34 ` [PATCH v2 1/3] mm/huge_memory: do not change split_huge_page*() target order silently Zi Yan
2025-10-16  7:31   ` Wei Yang
2025-10-16 14:32     ` Zi Yan
2025-10-16 20:59       ` Andrew Morton
2025-10-17  1:03         ` Zi Yan
2025-10-17  9:06           ` Lorenzo Stoakes
2025-10-17  9:10             ` Lorenzo Stoakes
2025-10-17 14:16               ` Zi Yan
2025-10-17 14:32                 ` Lorenzo Stoakes
2025-10-18  0:05                   ` Andrew Morton
2025-10-17  1:01       ` Wei Yang
2025-10-16  3:34 ` [PATCH v2 2/3] mm/memory-failure: improve large block size folio handling Zi Yan
2025-10-17  9:33   ` Lorenzo Stoakes
2025-10-20 20:09     ` Zi Yan
2025-10-17 19:11   ` Yang Shi
2025-10-20 19:46     ` Zi Yan
2025-10-20 23:41       ` Yang Shi [this message]
2025-10-21  1:23         ` Zi Yan
2025-10-21 15:44           ` David Hildenbrand
2025-10-21 15:55             ` Zi Yan
2025-10-21 18:28               ` David Hildenbrand
2025-10-21 18:57                 ` Zi Yan
2025-10-21 19:07                   ` Yang Shi
2025-10-22  6:39       ` Miaohe Lin
2025-10-16  3:34 ` [PATCH v2 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related Zi Yan
2025-10-17  9:20   ` Lorenzo Stoakes
