From: David Hildenbrand <david@redhat.com>
To: Zi Yan <ziy@nvidia.com>, linmiaohe@huawei.com, jane.chu@oracle.com
Cc: kernel@pankajraghav.com, akpm@linux-foundation.org,
mcgrof@kernel.org, nao.horiguchi@gmail.com,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Nico Pache <npache@redhat.com>,
Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
Barry Song <baohua@kernel.org>, Lance Yang <lance.yang@linux.dev>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Wei Yang <richard.weiyang@gmail.com>,
Yang Shi <shy828301@gmail.com>,
linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org
Subject: Re: [PATCH v3 3/4] mm/memory-failure: improve large block size folio handling.
Date: Wed, 22 Oct 2025 22:17:21 +0200
Message-ID: <6279a5b3-cb00-49d0-8521-b7b9dfdee2a8@redhat.com>
In-Reply-To: <20251022033531.389351-4-ziy@nvidia.com>
On 22.10.25 05:35, Zi Yan wrote:
Subject: I'd drop the trailing "."
> Large block size (LBS) folios cannot be split to order-0 folios, only
> down to min_order_for_folio(). The current code fails the split outright,
> which is not optimal. Split the folio to min_order_for_folio() instead, so
> that after the split only the folio containing the poisoned page becomes
> unusable.
>
> For soft offline, do not split the large folio if its min_order_for_folio()
> is not 0, since the folio is still accessible from userspace and a premature
> split might cause a performance loss.
>
> Suggested-by: Jane Chu <jane.chu@oracle.com>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
This is not a fix, correct? The fix for the issue we saw was sent out
separately.
> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
> ---
> mm/memory-failure.c | 30 ++++++++++++++++++++++++++----
> 1 file changed, 26 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index f698df156bf8..40687b7aa8be 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1656,12 +1656,13 @@ static int identify_page_state(unsigned long pfn, struct page *p,
> * there is still more to do, hence the page refcount we took earlier
> * is still needed.
> */
> -static int try_to_split_thp_page(struct page *page, bool release)
> +static int try_to_split_thp_page(struct page *page, unsigned int new_order,
> + bool release)
> {
> int ret;
>
> lock_page(page);
> - ret = split_huge_page(page);
> + ret = split_huge_page_to_order(page, new_order);
> unlock_page(page);
>
> if (ret && release)
> @@ -2280,6 +2281,9 @@ int memory_failure(unsigned long pfn, int flags)
> folio_unlock(folio);
>
> if (folio_test_large(folio)) {
> + int new_order = min_order_for_split(folio);
could be const
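i.e. just (untested, purely cosmetic):

	const int new_order = min_order_for_split(folio);
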
> + int err;
> +
> /*
> * The flag must be set after the refcount is bumped
> * otherwise it may race with THP split.
> @@ -2294,7 +2298,15 @@ int memory_failure(unsigned long pfn, int flags)
> * page is a valid handlable page.
> */
> folio_set_has_hwpoisoned(folio);
> - if (try_to_split_thp_page(p, false) < 0) {
> + err = try_to_split_thp_page(p, new_order, /* release= */ false);
> + /*
> + * If the folio cannot be split to order-0, kill the process,
> + * but split the folio anyway to minimize the amount of unusable
> + * pages.
You could briefly explain here that the remainder of the memory-failure
handling code cannot deal with large folios, which is why we treat this
just like a failed split.
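Maybe something like (just a wording suggestion):

	/*
	 * If the folio cannot be split to order-0, the rest of the
	 * memory-failure handling cannot deal with the large folio, so
	 * treat this like a failed split: kill the process, but still
	 * split to the minimum order to limit the amount of unusable
	 * memory.
	 */
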
--
Cheers
David / dhildenb