* [PATCH v3 0/4] Optimize folio split in memory failure
@ 2025-10-22 3:35 Zi Yan
2025-10-22 3:35 ` [PATCH v3 1/4] mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order Zi Yan
` (4 more replies)
0 siblings, 5 replies; 19+ messages in thread
From: Zi Yan @ 2025-10-22 3:35 UTC (permalink / raw)
To: linmiaohe, david, jane.chu
Cc: kernel, ziy, akpm, mcgrof, nao.horiguchi, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
Hi all,
This patchset is a follow-up of "[PATCH v3] mm/huge_memory: do not change
split_huge_page*() target order silently."[1]. It improves how memory
failure code handles large block size (LBS) folios with
min_order_for_split() > 0. By splitting a large folio containing HW
poisoned pages to min_order_for_split(), the after-split folios without
HW poisoned pages can be freed for reuse. To achieve this, the folio split
code needs to set has_hwpoisoned on after-split folios containing HW
poisoned pages.
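A rough sketch of the intended flow, for illustration only (it is not actual
patch code; it assumes the split_huge_page_to_order() helper added later in
this series plus the existing min_order_for_split()):
static int example_handle_poisoned_large_folio(struct page *p)
{
	struct folio *folio = page_folio(p);
	const int new_order = min_order_for_split(folio);
	int err;

	/* mark the folio so the split code can propagate the flag */
	folio_set_has_hwpoisoned(folio);

	lock_page(p);
	err = split_huge_page_to_order(p, new_order);
	unlock_page(p);
	if (err)
		return err;	/* split failed, whole large folio stays unusable */

	/*
	 * Only the after-split folio containing @p keeps has_hwpoisoned;
	 * its siblings contain no poisoned pages and can be reused.
	 */
	return 0;
}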
This patchset includes:
1. A patch that sets has_hwpoisoned on the right after-split folios after
scanning all pages in the to-be-split folio,
2. A patch adds split_huge_page_to_order(),
3. Patch 2 and Patch 3 of "[PATCH v2 0/3] Do not change split folio target
order"[2],
This patchset is based on mm-new.
Changelog
===
From V2[2]:
1. Patch 1 is sent separately as a hotfix[1].
2. set has_hwpoisoned on after-split folios if any contains HW poisoned
pages.
3. added split_huge_page_to_order().
4. added a missing newline after variable declaration.
5. added /* release= */ to try_to_split_thp_page().
6. restructured try_to_split_thp_page() in memory_failure().
7. fixed a typo.
8. clarified the comment in soft_offline_in_use_page().
Link: https://lore.kernel.org/all/20251017013630.139907-1-ziy@nvidia.com/ [1]
Link: https://lore.kernel.org/all/20251016033452.125479-1-ziy@nvidia.com/ [2]
Zi Yan (4):
mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0
order
mm/huge_memory: add split_huge_page_to_order()
mm/memory-failure: improve large block size folio handling.
mm/huge_memory: fix kernel-doc comments for folio_split() and related.
include/linux/huge_mm.h | 22 ++++++++++++-----
mm/huge_memory.c | 55 ++++++++++++++++++++++++++++++-----------
mm/memory-failure.c | 30 +++++++++++++++++++---
3 files changed, 82 insertions(+), 25 deletions(-)
--
2.51.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v3 1/4] mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order
2025-10-22 3:35 [PATCH v3 0/4] Optimize folio split in memory failure Zi Yan
@ 2025-10-22 3:35 ` Zi Yan
2025-10-22 20:09 ` David Hildenbrand
2025-10-24 15:58 ` Lorenzo Stoakes
2025-10-22 3:35 ` [PATCH v3 2/4] mm/huge_memory: add split_huge_page_to_order() Zi Yan
` (3 subsequent siblings)
4 siblings, 2 replies; 19+ messages in thread
From: Zi Yan @ 2025-10-22 3:35 UTC (permalink / raw)
To: linmiaohe, david, jane.chu
Cc: kernel, ziy, akpm, mcgrof, nao.horiguchi, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
Folio split clears PG_has_hwpoisoned, but the flag should be preserved in
after-split folios containing pages with the PG_hwpoisoned flag if the folio
is split to >0 order folios. Scan all pages in a to-be-split folio to
determine which after-split folios need the flag.
An alternative is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to
avoid the scan and set it on all after-split folios, but the resulting false
positives would have an undesirable negative impact. To remove the false
positives, callers of folio_test_has_hwpoisoned() and
folio_contain_hwpoisoned_page() would need to do the scan themselves. That
would be a hassle for current and future callers and more costly than doing
the scan in the split code. More details are discussed in [1].
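For illustration only (not part of this patch), a caller-side rescan under
that rejected scheme would look roughly like the hypothetical helper below,
reusing the current flag name for readability:
static bool example_folio_really_has_hwpoisoned(struct folio *folio)
{
	long i;

	/* the flag alone could be a false positive left behind by a split */
	if (!folio_test_has_hwpoisoned(folio))
		return false;

	for (i = 0; i < folio_nr_pages(folio); i++)
		if (PageHWPoison(folio_page(folio, i)))
			return true;

	return false;
}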
It is OK that the current implementation does not do this, because the memory
failure code always tries to split to order-0 folios and, if a folio cannot
be split to order-0, the memory failure code either gives a warning or does
not perform the split.
Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985q4g@mail.gmail.com/ [1]
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
mm/huge_memory.c | 28 +++++++++++++++++++++++++---
1 file changed, 25 insertions(+), 3 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fc65ec3393d2..f3896c1f130f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3455,6 +3455,17 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
caller_pins;
}
+static bool page_range_has_hwpoisoned(struct page *first_page, long nr_pages)
+{
+ long i;
+
+ for (i = 0; i < nr_pages; i++)
+ if (PageHWPoison(first_page + i))
+ return true;
+
+ return false;
+}
+
/*
* It splits @folio into @new_order folios and copies the @folio metadata to
* all the resulting folios.
@@ -3462,22 +3473,32 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
static void __split_folio_to_order(struct folio *folio, int old_order,
int new_order)
{
+ /* Scan poisoned pages when split a poisoned folio to large folios */
+ bool check_poisoned_pages = folio_test_has_hwpoisoned(folio) &&
+ new_order != 0;
long new_nr_pages = 1 << new_order;
long nr_pages = 1 << old_order;
long i;
+ folio_clear_has_hwpoisoned(folio);
+
+ /* Check first new_nr_pages since the loop below skips them */
+ if (check_poisoned_pages &&
+ page_range_has_hwpoisoned(folio_page(folio, 0), new_nr_pages))
+ folio_set_has_hwpoisoned(folio);
/*
* Skip the first new_nr_pages, since the new folio from them have all
* the flags from the original folio.
*/
for (i = new_nr_pages; i < nr_pages; i += new_nr_pages) {
struct page *new_head = &folio->page + i;
-
/*
* Careful: new_folio is not a "real" folio before we cleared PageTail.
* Don't pass it around before clear_compound_head().
*/
struct folio *new_folio = (struct folio *)new_head;
+ bool poisoned_new_folio = check_poisoned_pages &&
+ page_range_has_hwpoisoned(new_head, new_nr_pages);
VM_BUG_ON_PAGE(atomic_read(&new_folio->_mapcount) != -1, new_head);
@@ -3514,6 +3535,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
(1L << PG_dirty) |
LRU_GEN_MASK | LRU_REFS_MASK));
+ if (poisoned_new_folio)
+ folio_set_has_hwpoisoned(new_folio);
+
new_folio->mapping = folio->mapping;
new_folio->index = folio->index + i;
@@ -3600,8 +3624,6 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
int start_order = uniform_split ? new_order : old_order - 1;
int split_order;
- folio_clear_has_hwpoisoned(folio);
-
/*
* split to new_order one order at a time. For uniform split,
* folio is split to new_order directly.
--
2.51.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v3 2/4] mm/huge_memory: add split_huge_page_to_order()
2025-10-22 3:35 [PATCH v3 0/4] Optimize folio split in memory failure Zi Yan
2025-10-22 3:35 ` [PATCH v3 1/4] mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order Zi Yan
@ 2025-10-22 3:35 ` Zi Yan
2025-10-22 20:13 ` David Hildenbrand
2025-10-24 16:11 ` Lorenzo Stoakes
2025-10-22 3:35 ` [PATCH v3 3/4] mm/memory-failure: improve large block size folio handling Zi Yan
` (2 subsequent siblings)
4 siblings, 2 replies; 19+ messages in thread
From: Zi Yan @ 2025-10-22 3:35 UTC (permalink / raw)
To: linmiaohe, david, jane.chu
Cc: kernel, ziy, akpm, mcgrof, nao.horiguchi, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
Add split_huge_page_to_order(), so that a caller that does not supply a list
to split_huge_page_to_list_to_order() can use it instead.
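For illustration, a caller without a deferred split list goes from the
three-argument form to the new wrapper (sketch only):
static int example_split_without_list(struct page *page, unsigned int new_order)
{
	/* was: split_huge_page_to_list_to_order(page, NULL, new_order); */
	return split_huge_page_to_order(page, new_order);
}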
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
include/linux/huge_mm.h | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 7698b3542c4f..34f8d8453bf3 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -381,6 +381,10 @@ static inline int split_huge_page_to_list_to_order(struct page *page, struct lis
{
return __split_huge_page_to_list_to_order(page, list, new_order, false);
}
+static inline int split_huge_page_to_order(struct page *page, unsigned int new_order)
+{
+ return split_huge_page_to_list_to_order(page, NULL, new_order);
+}
/*
* try_folio_split_to_order - try to split a @folio at @page to @new_order using
@@ -400,8 +404,7 @@ static inline int try_folio_split_to_order(struct folio *folio,
struct page *page, unsigned int new_order)
{
if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
- return split_huge_page_to_list_to_order(&folio->page, NULL,
- new_order);
+ return split_huge_page_to_order(&folio->page, new_order);
return folio_split(folio, new_order, page, NULL);
}
static inline int split_huge_page(struct page *page)
@@ -590,6 +593,11 @@ split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
VM_WARN_ON_ONCE_PAGE(1, page);
return -EINVAL;
}
+static inline int split_huge_page_to_order(struct page *page, unsigned int new_order)
+{
+ VM_WARN_ON_ONCE_PAGE(1, page);
+ return -EINVAL;
+}
static inline int split_huge_page(struct page *page)
{
VM_WARN_ON_ONCE_PAGE(1, page);
--
2.51.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v3 3/4] mm/memory-failure: improve large block size folio handling.
2025-10-22 3:35 [PATCH v3 0/4] Optimize folio split in memory failure Zi Yan
2025-10-22 3:35 ` [PATCH v3 1/4] mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order Zi Yan
2025-10-22 3:35 ` [PATCH v3 2/4] mm/huge_memory: add split_huge_page_to_order() Zi Yan
@ 2025-10-22 3:35 ` Zi Yan
2025-10-22 20:17 ` David Hildenbrand
2025-10-24 18:11 ` Lorenzo Stoakes
2025-10-22 3:35 ` [PATCH v3 4/4] mm/huge_memory: fix kernel-doc comments for folio_split() and related Zi Yan
2025-10-22 20:47 ` [PATCH v3 0/4] Optimize folio split in memory failure Zi Yan
4 siblings, 2 replies; 19+ messages in thread
From: Zi Yan @ 2025-10-22 3:35 UTC (permalink / raw)
To: linmiaohe, david, jane.chu
Cc: kernel, ziy, akpm, mcgrof, nao.horiguchi, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
Large block size (LBS) folios cannot be split to order-0 folios, only down to
min_order_for_split(). The current code fails the split outright, but that is
not optimal. Split the folio to min_order_for_split() instead, so that after
the split only the folio containing the poisoned page becomes unusable.
For soft offline, do not split the large folio at all if its
min_order_for_split() is not 0, since the folio is still accessible from
userspace and a premature split might lead to a performance loss.
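A rough sketch of the resulting policy, for illustration only (it assumes the
new_order argument added to try_to_split_thp_page() by this patch):
/* hard offline: split as far as the mapping allows */
static int example_hard_offline_split(struct page *p, struct folio *folio)
{
	const int new_order = min_order_for_split(folio);
	int err = try_to_split_thp_page(p, new_order, /* release= */ false);

	/* anything still left as a large folio is treated like a failed split */
	return (err || new_order) ? -EHWPOISON : 0;
}

/* soft offline: the data is intact, so only split if order-0 is possible */
static int example_soft_offline_split(struct page *p, struct folio *folio)
{
	if (min_order_for_split(folio))
		return -EBUSY;

	return try_to_split_thp_page(p, /* new_order= */ 0, /* release= */ true);
}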
Suggested-by: Jane Chu <jane.chu@oracle.com>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
---
mm/memory-failure.c | 30 ++++++++++++++++++++++++++----
1 file changed, 26 insertions(+), 4 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index f698df156bf8..40687b7aa8be 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1656,12 +1656,13 @@ static int identify_page_state(unsigned long pfn, struct page *p,
* there is still more to do, hence the page refcount we took earlier
* is still needed.
*/
-static int try_to_split_thp_page(struct page *page, bool release)
+static int try_to_split_thp_page(struct page *page, unsigned int new_order,
+ bool release)
{
int ret;
lock_page(page);
- ret = split_huge_page(page);
+ ret = split_huge_page_to_order(page, new_order);
unlock_page(page);
if (ret && release)
@@ -2280,6 +2281,9 @@ int memory_failure(unsigned long pfn, int flags)
folio_unlock(folio);
if (folio_test_large(folio)) {
+ int new_order = min_order_for_split(folio);
+ int err;
+
/*
* The flag must be set after the refcount is bumped
* otherwise it may race with THP split.
@@ -2294,7 +2298,15 @@ int memory_failure(unsigned long pfn, int flags)
* page is a valid handlable page.
*/
folio_set_has_hwpoisoned(folio);
- if (try_to_split_thp_page(p, false) < 0) {
+ err = try_to_split_thp_page(p, new_order, /* release= */ false);
+ /*
+ * If the folio cannot be split to order-0, kill the process,
+ * but split the folio anyway to minimize the amount of unusable
+ * pages.
+ */
+ if (err || new_order) {
+ /* get folio again in case the original one is split */
+ folio = page_folio(p);
res = -EHWPOISON;
kill_procs_now(p, pfn, flags, folio);
put_page(p);
@@ -2621,7 +2633,17 @@ static int soft_offline_in_use_page(struct page *page)
};
if (!huge && folio_test_large(folio)) {
- if (try_to_split_thp_page(page, true)) {
+ int new_order = min_order_for_split(folio);
+
+ /*
+ * If new_order (target split order) is not 0, do not split the
+ * folio at all to retain the still accessible large folio.
+ * NOTE: if minimizing the number of soft offline pages is
+ * preferred, split it to non-zero new_order like it is done in
+ * memory_failure().
+ */
+ if (new_order || try_to_split_thp_page(page, /* new_order= */ 0,
+ /* release= */ true)) {
pr_info("%#lx: thp split failed\n", pfn);
return -EBUSY;
}
--
2.51.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v3 4/4] mm/huge_memory: fix kernel-doc comments for folio_split() and related.
2025-10-22 3:35 [PATCH v3 0/4] Optimize folio split in memory failure Zi Yan
` (2 preceding siblings ...)
2025-10-22 3:35 ` [PATCH v3 3/4] mm/memory-failure: improve large block size folio handling Zi Yan
@ 2025-10-22 3:35 ` Zi Yan
2025-10-22 20:18 ` David Hildenbrand
2025-10-22 20:47 ` [PATCH v3 0/4] Optimize folio split in memory failure Zi Yan
4 siblings, 1 reply; 19+ messages in thread
From: Zi Yan @ 2025-10-22 3:35 UTC (permalink / raw)
To: linmiaohe, david, jane.chu
Cc: kernel, ziy, akpm, mcgrof, nao.horiguchi, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
try_folio_split_to_order(), folio_split(), __folio_split(), and
__split_unmapped_folio() do not have the correct kernel-doc comment format.
Fix them.
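For reference, the kernel-doc layout being applied looks roughly like this
(hypothetical function name):
/**
 * example_helper() - short one-line description of what it does to @folio
 * @folio: the folio to operate on
 * @new_order: the target split order
 *
 * Longer description, if needed, goes here.
 *
 * Return: 0 on success, a negative errno on failure.
 */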
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
include/linux/huge_mm.h | 10 ++++++----
mm/huge_memory.c | 27 +++++++++++++++------------
2 files changed, 21 insertions(+), 16 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 34f8d8453bf3..cbb2243f8e56 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -386,9 +386,9 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
return split_huge_page_to_list_to_order(page, NULL, new_order);
}
-/*
- * try_folio_split_to_order - try to split a @folio at @page to @new_order using
- * non uniform split.
+/**
+ * try_folio_split_to_order() - try to split a @folio at @page to @new_order
+ * using non uniform split.
* @folio: folio to be split
* @page: split to @new_order at the given page
* @new_order: the target split order
@@ -398,7 +398,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
* folios are put back to LRU list. Use min_order_for_split() to get the lower
* bound of @new_order.
*
- * Return: 0: split is successful, otherwise split failed.
+ * Return: 0 - split is successful, otherwise split failed.
*/
static inline int try_folio_split_to_order(struct folio *folio,
struct page *page, unsigned int new_order)
@@ -486,6 +486,8 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
/**
* folio_test_pmd_mappable - Can we map this folio with a PMD?
* @folio: The folio to test
+ *
+ * Return: true - @folio can be mapped, false - @folio cannot be mapped.
*/
static inline bool folio_test_pmd_mappable(struct folio *folio)
{
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f3896c1f130f..38094d24fb14 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3576,8 +3576,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
ClearPageCompound(&folio->page);
}
-/*
- * It splits an unmapped @folio to lower order smaller folios in two ways.
+/**
+ * __split_unmapped_folio() - splits an unmapped @folio to lower order folios in
+ * two ways: uniform split or non-uniform split.
* @folio: the to-be-split folio
* @new_order: the smallest order of the after split folios (since buddy
* allocator like split generates folios with orders from @folio's
@@ -3612,8 +3613,8 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
* folio containing @page. The caller needs to unlock and/or free after-split
* folios if necessary.
*
- * For !uniform_split, when -ENOMEM is returned, the original folio might be
- * split. The caller needs to check the input folio.
+ * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
+ * split but not to @new_order, the caller needs to check)
*/
static int __split_unmapped_folio(struct folio *folio, int new_order,
struct page *split_at, struct xa_state *xas,
@@ -3732,8 +3733,8 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
return true;
}
-/*
- * __folio_split: split a folio at @split_at to a @new_order folio
+/**
+ * __folio_split() - split a folio at @split_at to a @new_order folio
* @folio: folio to split
* @new_order: the order of the new folio
* @split_at: a page within the new folio
@@ -3751,7 +3752,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
* 1. for uniform split, @lock_at points to one of @folio's subpages;
* 2. for buddy allocator like (non-uniform) split, @lock_at points to @folio.
*
- * return: 0: successful, <0 failed (if -ENOMEM is returned, @folio might be
+ * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
* split but not to @new_order, the caller needs to check)
*/
static int __folio_split(struct folio *folio, unsigned int new_order,
@@ -4140,14 +4141,13 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
unmapped);
}
-/*
- * folio_split: split a folio at @split_at to a @new_order folio
+/**
+ * folio_split() - split a folio at @split_at to a @new_order folio
* @folio: folio to split
* @new_order: the order of the new folio
* @split_at: a page within the new folio
- *
- * return: 0: successful, <0 failed (if -ENOMEM is returned, @folio might be
- * split but not to @new_order, the caller needs to check)
+ * @list: after-split folios are added to @list if not null, otherwise to LRU
+ * list
*
* It has the same prerequisites and returns as
* split_huge_page_to_list_to_order().
@@ -4161,6 +4161,9 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
* [order-4, {order-3}, order-3, order-5, order-6, order-7, order-8].
*
* After split, folio is left locked for caller.
+ *
+ * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
+ * split but not to @new_order, the caller needs to check)
*/
int folio_split(struct folio *folio, unsigned int new_order,
struct page *split_at, struct list_head *list)
--
2.51.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v3 1/4] mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order
2025-10-22 3:35 ` [PATCH v3 1/4] mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order Zi Yan
@ 2025-10-22 20:09 ` David Hildenbrand
2025-10-22 20:27 ` Zi Yan
2025-10-24 15:58 ` Lorenzo Stoakes
1 sibling, 1 reply; 19+ messages in thread
From: David Hildenbrand @ 2025-10-22 20:09 UTC (permalink / raw)
To: Zi Yan, linmiaohe, jane.chu
Cc: kernel, akpm, mcgrof, nao.horiguchi, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
On 22.10.25 05:35, Zi Yan wrote:
> folio split clears PG_has_hwpoisoned, but the flag should be preserved in
> after-split folios containing pages with PG_hwpoisoned flag if the folio is
> split to >0 order folios. Scan all pages in a to-be-split folio to
> determine which after-split folios need the flag.
>
> An alternatives is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to
> avoid the scan and set it on all after-split folios, but resulting false
> positive has undesirable negative impact. To remove false positive, caller
> of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() needs to
> do the scan. That might be causing a hassle for current and future callers
> and more costly than doing the scan in the split code. More details are
> discussed in [1].
>
> It is OK that current implementation does not do this, because memory
> failure code always tries to split to order-0 folios and if a folio cannot
> be split to order-0, memory failure code either gives warnings or the split
> is not performed.
>
We're losing PG_has_hwpoisoned for large folios, so likely this should be
a stable fix for splitting anything to an order > 0 ?
> Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985q4g@mail.gmail.com/ [1]
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
> mm/huge_memory.c | 28 +++++++++++++++++++++++++---
> 1 file changed, 25 insertions(+), 3 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index fc65ec3393d2..f3896c1f130f 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3455,6 +3455,17 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
> caller_pins;
> }
>
> +static bool page_range_has_hwpoisoned(struct page *first_page, long nr_pages)
> +{
> + long i;
> +
> + for (i = 0; i < nr_pages; i++)
> + if (PageHWPoison(first_page + i))
> + return true;
> +
> + return false;
Nit: I'd just do
static bool page_range_has_hwpoisoned(struct page *page, unsigned long nr_pages)
{
	for (; nr_pages; page++, nr_pages--)
		if (PageHWPoison(page))
			return true;
	return false;
}
> +}
> +
> /*
> * It splits @folio into @new_order folios and copies the @folio metadata to
> * all the resulting folios.
> @@ -3462,22 +3473,32 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
> static void __split_folio_to_order(struct folio *folio, int old_order,
> int new_order)
> {
> + /* Scan poisoned pages when split a poisoned folio to large folios */
> + bool check_poisoned_pages = folio_test_has_hwpoisoned(folio) &&
> + new_order != 0;
I'd shorten this to "handle_hwpoison" or sth like that.
Maybe we can make it const and fit it into a single line.
Comparison with 0 is not required.
const bool handle_hwpoison = folio_test_has_hwpoisoned(folio) && new_order;
> long new_nr_pages = 1 << new_order;
> long nr_pages = 1 << old_order;
> long i;
>
> + folio_clear_has_hwpoisoned(folio);
> +
> + /* Check first new_nr_pages since the loop below skips them */
> + if (check_poisoned_pages &&
> + page_range_has_hwpoisoned(folio_page(folio, 0), new_nr_pages))
> + folio_set_has_hwpoisoned(folio);
> /*
> * Skip the first new_nr_pages, since the new folio from them have all
> * the flags from the original folio.
> */
> for (i = new_nr_pages; i < nr_pages; i += new_nr_pages) {
> struct page *new_head = &folio->page + i;
> -
> /*
> * Careful: new_folio is not a "real" folio before we cleared PageTail.
> * Don't pass it around before clear_compound_head().
> */
> struct folio *new_folio = (struct folio *)new_head;
> + bool poisoned_new_folio = check_poisoned_pages &&
> + page_range_has_hwpoisoned(new_head, new_nr_pages);
Is the temp variable really required? I'm afraid it is a bit ugly either way :)
I'd just move it into the if() below.
if (handle_hwpoison &&
    page_range_has_hwpoisoned(new_head, new_nr_pages))
	folio_set_has_hwpoisoned(new_folio);
--
Cheers
David / dhildenb
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v3 2/4] mm/huge_memory: add split_huge_page_to_order()
2025-10-22 3:35 ` [PATCH v3 2/4] mm/huge_memory: add split_huge_page_to_order() Zi Yan
@ 2025-10-22 20:13 ` David Hildenbrand
2025-10-24 16:11 ` Lorenzo Stoakes
1 sibling, 0 replies; 19+ messages in thread
From: David Hildenbrand @ 2025-10-22 20:13 UTC (permalink / raw)
To: Zi Yan, linmiaohe, jane.chu
Cc: kernel, akpm, mcgrof, nao.horiguchi, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
On 22.10.25 05:35, Zi Yan wrote:
> When caller does not supply a list to split_huge_page_to_list_to_order(),
> use split_huge_page_to_order() instead.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
> include/linux/huge_mm.h | 12 ++++++++++--
> 1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 7698b3542c4f..34f8d8453bf3 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -381,6 +381,10 @@ static inline int split_huge_page_to_list_to_order(struct page *page, struct lis
> {
> return __split_huge_page_to_list_to_order(page, list, new_order, false);
> }
> +static inline int split_huge_page_to_order(struct page *page, unsigned int new_order)
> +{
> + return split_huge_page_to_list_to_order(page, NULL, new_order);
> +}
>
> /*
> * try_folio_split_to_order - try to split a @folio at @page to @new_order using
> @@ -400,8 +404,7 @@ static inline int try_folio_split_to_order(struct folio *folio,
> struct page *page, unsigned int new_order)
> {
> if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
> - return split_huge_page_to_list_to_order(&folio->page, NULL,
> - new_order);
> + return split_huge_page_to_order(&folio->page, new_order);
> return folio_split(folio, new_order, page, NULL);
Much more readable
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers
David / dhildenb
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v3 3/4] mm/memory-failure: improve large block size folio handling.
2025-10-22 3:35 ` [PATCH v3 3/4] mm/memory-failure: improve large block size folio handling Zi Yan
@ 2025-10-22 20:17 ` David Hildenbrand
2025-10-22 20:29 ` Zi Yan
2025-10-24 18:11 ` Lorenzo Stoakes
1 sibling, 1 reply; 19+ messages in thread
From: David Hildenbrand @ 2025-10-22 20:17 UTC (permalink / raw)
To: Zi Yan, linmiaohe, jane.chu
Cc: kernel, akpm, mcgrof, nao.horiguchi, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
On 22.10.25 05:35, Zi Yan wrote:
Subject: I'd drop the trailing "."
> Large block size (LBS) folios cannot be split to order-0 folios but
> min_order_for_folio(). Current split fails directly, but that is not
> optimal. Split the folio to min_order_for_folio(), so that, after split,
> only the folio containing the poisoned page becomes unusable instead.
>
> For soft offline, do not split the large folio if its min_order_for_folio()
> is not 0. Since the folio is still accessible from userspace and premature
> split might lead to potential performance loss.
>
> Suggested-by: Jane Chu <jane.chu@oracle.com>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
This is not a fix, correct? Because the fix for the issue we saw was
sent out separately.
> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
> ---
> mm/memory-failure.c | 30 ++++++++++++++++++++++++++----
> 1 file changed, 26 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index f698df156bf8..40687b7aa8be 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1656,12 +1656,13 @@ static int identify_page_state(unsigned long pfn, struct page *p,
> * there is still more to do, hence the page refcount we took earlier
> * is still needed.
> */
> -static int try_to_split_thp_page(struct page *page, bool release)
> +static int try_to_split_thp_page(struct page *page, unsigned int new_order,
> + bool release)
> {
> int ret;
>
> lock_page(page);
> - ret = split_huge_page(page);
> + ret = split_huge_page_to_order(page, new_order);
> unlock_page(page);
>
> if (ret && release)
> @@ -2280,6 +2281,9 @@ int memory_failure(unsigned long pfn, int flags)
> folio_unlock(folio);
>
> if (folio_test_large(folio)) {
> + int new_order = min_order_for_split(folio);
could be const
> + int err;
> +
> /*
> * The flag must be set after the refcount is bumped
> * otherwise it may race with THP split.
> @@ -2294,7 +2298,15 @@ int memory_failure(unsigned long pfn, int flags)
> * page is a valid handlable page.
> */
> folio_set_has_hwpoisoned(folio);
> - if (try_to_split_thp_page(p, false) < 0) {
> + err = try_to_split_thp_page(p, new_order, /* release= */ false);
> + /*
> + * If the folio cannot be split to order-0, kill the process,
> + * but split the folio anyway to minimize the amount of unusable
> + * pages.
You could briefly explain here that the remainder of memory failure
handling code cannot deal with large folios, which is why we treat it
just like failed split.
--
Cheers
David / dhildenb
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v3 4/4] mm/huge_memory: fix kernel-doc comments for folio_split() and related.
2025-10-22 3:35 ` [PATCH v3 4/4] mm/huge_memory: fix kernel-doc comments for folio_split() and related Zi Yan
@ 2025-10-22 20:18 ` David Hildenbrand
0 siblings, 0 replies; 19+ messages in thread
From: David Hildenbrand @ 2025-10-22 20:18 UTC (permalink / raw)
To: Zi Yan, linmiaohe, jane.chu
Cc: kernel, akpm, mcgrof, nao.horiguchi, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
On 22.10.25 05:35, Zi Yan wrote:
> try_folio_split_to_order(), folio_split, __folio_split(), and
> __split_unmapped_folio() do not have correct kernel-doc comment format.
> Fix them.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers
David / dhildenb
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v3 1/4] mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order
2025-10-22 20:09 ` David Hildenbrand
@ 2025-10-22 20:27 ` Zi Yan
2025-10-22 20:34 ` David Hildenbrand
0 siblings, 1 reply; 19+ messages in thread
From: Zi Yan @ 2025-10-22 20:27 UTC (permalink / raw)
To: David Hildenbrand
Cc: linmiaohe, jane.chu, kernel, akpm, mcgrof, nao.horiguchi,
Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
Ryan Roberts, Dev Jain, Barry Song, Lance Yang,
Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
On 22 Oct 2025, at 16:09, David Hildenbrand wrote:
> On 22.10.25 05:35, Zi Yan wrote:
>> folio split clears PG_has_hwpoisoned, but the flag should be preserved in
>> after-split folios containing pages with PG_hwpoisoned flag if the folio is
>> split to >0 order folios. Scan all pages in a to-be-split folio to
>> determine which after-split folios need the flag.
>>
>> An alternatives is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to
>> avoid the scan and set it on all after-split folios, but resulting false
>> positive has undesirable negative impact. To remove false positive, caller
>> of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() needs to
>> do the scan. That might be causing a hassle for current and future callers
>> and more costly than doing the scan in the split code. More details are
>> discussed in [1].
>>
>> It is OK that current implementation does not do this, because memory
>> failure code always tries to split to order-0 folios and if a folio cannot
>> be split to order-0, memory failure code either gives warnings or the split
>> is not performed.
>>
>
> We're losing PG_has_hwpoisoned for large folios, so likely this should be
> a stable fix for splitting anything to an order > 0 ?
I was on the borderline on this, because:
1. before the hotfix, which prevents silently bumping the target split order,
memory failure would give a warning when a folio is split to >0 order
folios. The warning was masking this issue.
2. after the hotfix, folios with PG_has_hwpoisoned will not be split
to >0 order folios, since memory failure always wants to split a folio
to order 0 and an LBS folio will not be split, thus PG_has_hwpoisoned
is not lost.
But one can use the debugfs interface to split a has_hwpoisoned folio to >0
order folios.
I will add
Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
and cc stable in the next version.
>
>> Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985q4g@mail.gmail.com/ [1]
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> ---
>> mm/huge_memory.c | 28 +++++++++++++++++++++++++---
>> 1 file changed, 25 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index fc65ec3393d2..f3896c1f130f 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3455,6 +3455,17 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
>> caller_pins;
>> }
>> +static bool page_range_has_hwpoisoned(struct page *first_page, long nr_pages)
>> +{
>> + long i;
>> +
>> + for (i = 0; i < nr_pages; i++)
>> + if (PageHWPoison(first_page + i))
>> + return true;
>> +
>> + return false;
>
> Nit: I'd just do
>
> static bool page_range_has_hwpoisoned(struct page *page, unsigned long nr_pages)
> {
> 	for (; nr_pages; page++, nr_pages--)
> 		if (PageHWPoison(page))
> 			return true;
> 	return false;
> }
>
OK, will use this one.
>> +}
>> +
>> /*
>> * It splits @folio into @new_order folios and copies the @folio metadata to
>> * all the resulting folios.
>> @@ -3462,22 +3473,32 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
>> static void __split_folio_to_order(struct folio *folio, int old_order,
>> int new_order)
>> {
>> + /* Scan poisoned pages when split a poisoned folio to large folios */
>> + bool check_poisoned_pages = folio_test_has_hwpoisoned(folio) &&
>> + new_order != 0;
>
> I'd shorten this to "handle_hwpoison" or sth like that.
>
> Maybe we can make it const and fit it into a single line.
>
> Comparison with 0 is not required.
>
> const bool handle_hwpoison = folio_test_has_hwpoisoned(folio) && new_order;
Sure, will use this.
>
>> long new_nr_pages = 1 << new_order;
>> long nr_pages = 1 << old_order;
>> long i;
>> + folio_clear_has_hwpoisoned(folio);
>> +
>> + /* Check first new_nr_pages since the loop below skips them */
>> + if (check_poisoned_pages &&
>> + page_range_has_hwpoisoned(folio_page(folio, 0), new_nr_pages))
>> + folio_set_has_hwpoisoned(folio);
>> /*
>> * Skip the first new_nr_pages, since the new folio from them have all
>> * the flags from the original folio.
>> */
>> for (i = new_nr_pages; i < nr_pages; i += new_nr_pages) {
>> struct page *new_head = &folio->page + i;
>> -
>> /*
>> * Careful: new_folio is not a "real" folio before we cleared PageTail.
>> * Don't pass it around before clear_compound_head().
>> */
>> struct folio *new_folio = (struct folio *)new_head;
>> + bool poisoned_new_folio = check_poisoned_pages &&
>> + page_range_has_hwpoisoned(new_head, new_nr_pages);
>
> Is the temp variable really required? I'm afraid it is a bit ugly either way :)
>
> I'd just move it into the if() below.
>
> if (handle_hwpoison &&
>     page_range_has_hwpoisoned(new_head, new_nr_pages))
> 	folio_set_has_hwpoisoned(new_folio);
>
Sure. :)
--
Best Regards,
Yan, Zi
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v3 3/4] mm/memory-failure: improve large block size folio handling.
2025-10-22 20:17 ` David Hildenbrand
@ 2025-10-22 20:29 ` Zi Yan
0 siblings, 0 replies; 19+ messages in thread
From: Zi Yan @ 2025-10-22 20:29 UTC (permalink / raw)
To: David Hildenbrand
Cc: linmiaohe, jane.chu, kernel, akpm, mcgrof, nao.horiguchi,
Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
Ryan Roberts, Dev Jain, Barry Song, Lance Yang,
Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
On 22 Oct 2025, at 16:17, David Hildenbrand wrote:
> On 22.10.25 05:35, Zi Yan wrote:
>
> Subject: I'd drop the trailing "."
>
>> Large block size (LBS) folios cannot be split to order-0 folios but
>> min_order_for_folio(). Current split fails directly, but that is not
>> optimal. Split the folio to min_order_for_folio(), so that, after split,
>> only the folio containing the poisoned page becomes unusable instead.
>>
>> For soft offline, do not split the large folio if its min_order_for_folio()
>> is not 0. Since the folio is still accessible from userspace and premature
>> split might lead to potential performance loss.
>>
>> Suggested-by: Jane Chu <jane.chu@oracle.com>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>
> This is not a fix, correct? Because the fix for the issue we saw was sent out separately.
No. It is just an optimization.
>
>> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
>> ---
>> mm/memory-failure.c | 30 ++++++++++++++++++++++++++----
>> 1 file changed, 26 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>> index f698df156bf8..40687b7aa8be 100644
>> --- a/mm/memory-failure.c
>> +++ b/mm/memory-failure.c
>> @@ -1656,12 +1656,13 @@ static int identify_page_state(unsigned long pfn, struct page *p,
>> * there is still more to do, hence the page refcount we took earlier
>> * is still needed.
>> */
>> -static int try_to_split_thp_page(struct page *page, bool release)
>> +static int try_to_split_thp_page(struct page *page, unsigned int new_order,
>> + bool release)
>> {
>> int ret;
>> lock_page(page);
>> - ret = split_huge_page(page);
>> + ret = split_huge_page_to_order(page, new_order);
>> unlock_page(page);
>> if (ret && release)
>> @@ -2280,6 +2281,9 @@ int memory_failure(unsigned long pfn, int flags)
>> folio_unlock(folio);
>> if (folio_test_large(folio)) {
>> + int new_order = min_order_for_split(folio);
>
> could be const
Sure.
>
>> + int err;
>> +
>> /*
>> * The flag must be set after the refcount is bumped
>> * otherwise it may race with THP split.
>> @@ -2294,7 +2298,15 @@ int memory_failure(unsigned long pfn, int flags)
>> * page is a valid handlable page.
>> */
>> folio_set_has_hwpoisoned(folio);
>> - if (try_to_split_thp_page(p, false) < 0) {
>> + err = try_to_split_thp_page(p, new_order, /* release= */ false);
>> + /*
>> + * If the folio cannot be split to order-0, kill the process,
>> + * but split the folio anyway to minimize the amount of unusable
>> + * pages.
>
> You could briefly explain here that the remainder of memory failure handling code cannot deal with large folios, which is why we treat it just like failed split.
Sure. Will add.
--
Best Regards,
Yan, Zi
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v3 1/4] mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order
2025-10-22 20:27 ` Zi Yan
@ 2025-10-22 20:34 ` David Hildenbrand
2025-10-22 20:40 ` Zi Yan
0 siblings, 1 reply; 19+ messages in thread
From: David Hildenbrand @ 2025-10-22 20:34 UTC (permalink / raw)
To: Zi Yan
Cc: linmiaohe, jane.chu, kernel, akpm, mcgrof, nao.horiguchi,
Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
Ryan Roberts, Dev Jain, Barry Song, Lance Yang,
Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
On 22.10.25 22:27, Zi Yan wrote:
> On 22 Oct 2025, at 16:09, David Hildenbrand wrote:
>
>> On 22.10.25 05:35, Zi Yan wrote:
>>> folio split clears PG_has_hwpoisoned, but the flag should be preserved in
>>> after-split folios containing pages with PG_hwpoisoned flag if the folio is
>>> split to >0 order folios. Scan all pages in a to-be-split folio to
>>> determine which after-split folios need the flag.
>>>
>>> An alternatives is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to
>>> avoid the scan and set it on all after-split folios, but resulting false
>>> positive has undesirable negative impact. To remove false positive, caller
>>> of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() needs to
>>> do the scan. That might be causing a hassle for current and future callers
>>> and more costly than doing the scan in the split code. More details are
>>> discussed in [1].
>>>
>>> It is OK that current implementation does not do this, because memory
>>> failure code always tries to split to order-0 folios and if a folio cannot
>>> be split to order-0, memory failure code either gives warnings or the split
>>> is not performed.
>>>
>>
>> We're losing PG_has_hwpoisoned for large folios, so likely this should be
>> a stable fix for splitting anything to an order > 0 ?
>
> I was the borderline on this, because:
>
> 1. before the hotfix, which prevents silently bumping target split order,
> memory failure would give a warning when a folio is split to >0 order
> folios. The warning is masking this issue.
> 2. after the hotfix, folios with PG_has_hwpoisoned will not be split
> to >0 order folios since memory failure always wants to split a folio
> to order 0 and a folio containing LBS folios will not be split, thus
> without losing PG_has_hwpoisoned.
>
I was rather wondering about something like
a) memory failure wants to split to some order (order-0?) but fails the
split (e.g., raised reference). hwpoison is set.
b) Later, something else (truncation?) wants to split to order > 0 and
loses the hwpoison bit.
Would that be possible?
>
> I will add
> Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
> and cc stable in the next version.
That would be better I think. But then you have to pull this patch out
as well from this series, gah :)
--
Cheers
David / dhildenb
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v3 1/4] mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order
2025-10-22 20:34 ` David Hildenbrand
@ 2025-10-22 20:40 ` Zi Yan
0 siblings, 0 replies; 19+ messages in thread
From: Zi Yan @ 2025-10-22 20:40 UTC (permalink / raw)
To: David Hildenbrand
Cc: linmiaohe, jane.chu, kernel, akpm, mcgrof, nao.horiguchi,
Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
Ryan Roberts, Dev Jain, Barry Song, Lance Yang,
Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
On 22 Oct 2025, at 16:34, David Hildenbrand wrote:
> On 22.10.25 22:27, Zi Yan wrote:
>> On 22 Oct 2025, at 16:09, David Hildenbrand wrote:
>>
>>> On 22.10.25 05:35, Zi Yan wrote:
>>>> folio split clears PG_has_hwpoisoned, but the flag should be preserved in
>>>> after-split folios containing pages with PG_hwpoisoned flag if the folio is
>>>> split to >0 order folios. Scan all pages in a to-be-split folio to
>>>> determine which after-split folios need the flag.
>>>>
>>>> An alternatives is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to
>>>> avoid the scan and set it on all after-split folios, but resulting false
>>>> positive has undesirable negative impact. To remove false positive, caller
>>>> of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() needs to
>>>> do the scan. That might be causing a hassle for current and future callers
>>>> and more costly than doing the scan in the split code. More details are
>>>> discussed in [1].
>>>>
>>>> It is OK that current implementation does not do this, because memory
>>>> failure code always tries to split to order-0 folios and if a folio cannot
>>>> be split to order-0, memory failure code either gives warnings or the split
>>>> is not performed.
>>>>
>>>
>>> We're losing PG_has_hwpoisoned for large folios, so likely this should be
>>> a stable fix for splitting anything to an order > 0 ?
>>
>> I was the borderline on this, because:
>>
>> 1. before the hotfix, which prevents silently bumping target split order,
>> memory failure would give a warning when a folio is split to >0 order
>> folios. The warning is masking this issue.
>> 2. after the hotfix, folios with PG_has_hwpoisoned will not be split
>> to >0 order folios since memory failure always wants to split a folio
>> to order 0 and a folio containing LBS folios will not be split, thus
>> without losing PG_has_hwpoisoned.
>>
>
> I was rather wondering about something like
>
> a) memory failure wants to split to some order (order-0?) but fails the split (e.g., raised reference). hwpoison is set.
>
> b) Later, something else (truncation?) wants to split to order > 0 and loses the hwpoison bit.
>
> Would that be possible?
Yeah, that is possible after commit 7460b470a131 ("mm/truncate: use folio_split()
in truncate operation") when truncation splits a folio to >0 order folios.
>
>>
>> I will add
>> Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
>> and cc stable in the next version.
>
> That would be better I think. But then you have to pull this patch out as well from this series, gah :)
Yep, let me tell this horrible story in the cover letter.
--
Best Regards,
Yan, Zi
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v3 0/4] Optimize folio split in memory failure
2025-10-22 3:35 [PATCH v3 0/4] Optimize folio split in memory failure Zi Yan
` (3 preceding siblings ...)
2025-10-22 3:35 ` [PATCH v3 4/4] mm/huge_memory: fix kernel-doc comments for folio_split() and related Zi Yan
@ 2025-10-22 20:47 ` Zi Yan
2025-10-22 20:47 ` Zi Yan
4 siblings, 1 reply; 19+ messages in thread
From: Zi Yan @ 2025-10-22 20:47 UTC (permalink / raw)
To: linmiaohe, david, jane.chu
Cc: kernel, ziy, akpm, mcgrof, nao.horiguchi, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
On 21 Oct 2025, at 23:35, Zi Yan wrote:
> Hi all,
>
> This patchset is a follow-up of "[PATCH v3] mm/huge_memory: do not change
> split_huge_page*() target order silently."[1]. It improves how memory
> failure code handles large block size(LBS) folios with
> min_order_for_split() > 0. By splitting a large folio containing HW
> poisoned pages to min_order_for_split(), the after-split folios without
> HW poisoned pages could be freed for reuse. To achieve this, folio split
> code needs to set has_hwpoisoned on after-split folios containing HW
> poisoned pages.
>
> This patchset includes:
> 1. A patch sets has_hwpoisoned on the right after-split folios after
> scanning all pages in the folios,
Based on the discussion with David[1], this patch will be sent separately
as a hotfix. The remaining patches will be sent out after Patch 1 is picked
up. Please note that I will address David's feedback in the new version of
Patch 1. Sorry for the inconvenience.
[1] https://lore.kernel.org/all/d3d05898-5530-4990-9d61-8268bd483765@redhat.com/
> 2. A patch adds split_huge_page_to_order(),
> 3. Patch 2 and Patch 3 of "[PATCH v2 0/3] Do not change split folio target
> order"[2],
>
> This patchset is based on mm-new.
>
> Changelog
> ===
> From V2[2]:
> 1. Patch 1 is sent separately as a hotfix[1].
> 2. set has_hwpoisoned on after-split folios if any contains HW poisoned
> pages.
> 3. added split_huge_page_to_order().
> 4. added a missing newline after variable decalaration.
> 5. added /* release= */ to try_to_split_thp_page().
> 6. restructured try_to_split_thp_page() in memory_failure().
> 7. fixed a typo.
> 8. clarified the comment in soft_offline_in_use_page().
>
>
> Link: https://lore.kernel.org/all/20251017013630.139907-1-ziy@nvidia.com/ [1]
> Link: https://lore.kernel.org/all/20251016033452.125479-1-ziy@nvidia.com/ [2]
>
> Zi Yan (4):
> mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0
> order
> mm/huge_memory: add split_huge_page_to_order()
> mm/memory-failure: improve large block size folio handling.
> mm/huge_memory: fix kernel-doc comments for folio_split() and related.
>
> include/linux/huge_mm.h | 22 ++++++++++++-----
> mm/huge_memory.c | 55 ++++++++++++++++++++++++++++++-----------
> mm/memory-failure.c | 30 +++++++++++++++++++---
> 3 files changed, 82 insertions(+), 25 deletions(-)
>
> --
> 2.51.0
--
Best Regards,
Yan, Zi
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v3 0/4] Optimize folio split in memory failure
2025-10-22 20:47 ` [PATCH v3 0/4] Optimize folio split in memory failure Zi Yan
@ 2025-10-22 20:47 ` Zi Yan
0 siblings, 0 replies; 19+ messages in thread
From: Zi Yan @ 2025-10-22 20:47 UTC (permalink / raw)
To: linmiaohe, david, jane.chu
Cc: kernel, ziy, akpm, mcgrof, nao.horiguchi, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
On 22 Oct 2025, at 16:47, Zi Yan wrote:
> On 21 Oct 2025, at 23:35, Zi Yan wrote:
>
>> Hi all,
>>
>> This patchset is a follow-up of "[PATCH v3] mm/huge_memory: do not change
>> split_huge_page*() target order silently."[1]. It improves how memory
>> failure code handles large block size(LBS) folios with
>> min_order_for_split() > 0. By splitting a large folio containing HW
>> poisoned pages to min_order_for_split(), the after-split folios without
>> HW poisoned pages could be freed for reuse. To achieve this, folio split
>> code needs to set has_hwpoisoned on after-split folios containing HW
>> poisoned pages.
>>
>> This patchset includes:
>> 1. A patch sets has_hwpoisoned on the right after-split folios after
>> scanning all pages in the folios,
>
> Based on the discussion with David[1], this patch will be sent separately
this patch is Patch 1.
> as a hotfix. The remaining patches will be sent out after Patch 1 is picked
> up. Please note that I will address David's feedback in the new version of
> Patch 1. Sorry for the inconvenience.
>
> [1] https://lore.kernel.org/all/d3d05898-5530-4990-9d61-8268bd483765@redhat.com/
>
>> 2. A patch adds split_huge_page_to_order(),
>> 3. Patch 2 and Patch 3 of "[PATCH v2 0/3] Do not change split folio target
>> order"[2],
>>
>> This patchset is based on mm-new.
>>
>> Changelog
>> ===
>> From V2[2]:
>> 1. Patch 1 is sent separately as a hotfix[1].
>> 2. set has_hwpoisoned on after-split folios if any contains HW poisoned
>> pages.
>> 3. added split_huge_page_to_order().
>> 4. added a missing newline after variable decalaration.
>> 5. added /* release= */ to try_to_split_thp_page().
>> 6. restructured try_to_split_thp_page() in memory_failure().
>> 7. fixed a typo.
>> 8. clarified the comment in soft_offline_in_use_page().
>>
>>
>> Link: https://lore.kernel.org/all/20251017013630.139907-1-ziy@nvidia.com/ [1]
>> Link: https://lore.kernel.org/all/20251016033452.125479-1-ziy@nvidia.com/ [2]
>>
>> Zi Yan (4):
>> mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0
>> order
>> mm/huge_memory: add split_huge_page_to_order()
>> mm/memory-failure: improve large block size folio handling.
>> mm/huge_memory: fix kernel-doc comments for folio_split() and related.
>>
>> include/linux/huge_mm.h | 22 ++++++++++++-----
>> mm/huge_memory.c | 55 ++++++++++++++++++++++++++++++-----------
>> mm/memory-failure.c | 30 +++++++++++++++++++---
>> 3 files changed, 82 insertions(+), 25 deletions(-)
>>
>> --
>> 2.51.0
>
>
> --
> Best Regards,
> Yan, Zi
--
Best Regards,
Yan, Zi
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v3 1/4] mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order
2025-10-22 3:35 ` [PATCH v3 1/4] mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order Zi Yan
2025-10-22 20:09 ` David Hildenbrand
@ 2025-10-24 15:58 ` Lorenzo Stoakes
2025-10-25 15:21 ` Zi Yan
1 sibling, 1 reply; 19+ messages in thread
From: Lorenzo Stoakes @ 2025-10-24 15:58 UTC (permalink / raw)
To: Zi Yan
Cc: linmiaohe, david, jane.chu, kernel, akpm, mcgrof, nao.horiguchi,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
On Tue, Oct 21, 2025 at 11:35:27PM -0400, Zi Yan wrote:
> folio split clears PG_has_hwpoisoned, but the flag should be preserved in
> after-split folios containing pages with PG_hwpoisoned flag if the folio is
> split to >0 order folios. Scan all pages in a to-be-split folio to
> determine which after-split folios need the flag.
>
> An alternatives is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to
> avoid the scan and set it on all after-split folios, but resulting false
> positive has undesirable negative impact. To remove false positive, caller
> of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() needs to
> do the scan. That might be causing a hassle for current and future callers
> and more costly than doing the scan in the split code. More details are
> discussed in [1].
>
> It is OK that current implementation does not do this, because memory
> failure code always tries to split to order-0 folios and if a folio cannot
> be split to order-0, memory failure code either gives warnings or the split
> is not performed.
>
> Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985q4g@mail.gmail.com/ [1]
> Signed-off-by: Zi Yan <ziy@nvidia.com>
I guess this was split out to [0]? :)
[0]: https://lore.kernel.org/linux-mm/44310717-347c-4ede-ad31-c6d375a449b9@linux.dev/
> ---
> mm/huge_memory.c | 28 +++++++++++++++++++++++++---
> 1 file changed, 25 insertions(+), 3 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index fc65ec3393d2..f3896c1f130f 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3455,6 +3455,17 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
> caller_pins;
> }
>
> +static bool page_range_has_hwpoisoned(struct page *first_page, long nr_pages)
> +{
> + long i;
> +
> + for (i = 0; i < nr_pages; i++)
> + if (PageHWPoison(first_page + i))
> + return true;
> +
> + return false;
> +}
> +
> /*
> * It splits @folio into @new_order folios and copies the @folio metadata to
> * all the resulting folios.
> @@ -3462,22 +3473,32 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
> static void __split_folio_to_order(struct folio *folio, int old_order,
> int new_order)
> {
> + /* Scan poisoned pages when split a poisoned folio to large folios */
> + bool check_poisoned_pages = folio_test_has_hwpoisoned(folio) &&
> + new_order != 0;
> long new_nr_pages = 1 << new_order;
> long nr_pages = 1 << old_order;
> long i;
>
> + folio_clear_has_hwpoisoned(folio);
> +
> + /* Check first new_nr_pages since the loop below skips them */
> + if (check_poisoned_pages &&
> + page_range_has_hwpoisoned(folio_page(folio, 0), new_nr_pages))
> + folio_set_has_hwpoisoned(folio);
> /*
> * Skip the first new_nr_pages, since the new folio from them have all
> * the flags from the original folio.
> */
> for (i = new_nr_pages; i < nr_pages; i += new_nr_pages) {
> struct page *new_head = &folio->page + i;
> -
> /*
> * Careful: new_folio is not a "real" folio before we cleared PageTail.
> * Don't pass it around before clear_compound_head().
> */
> struct folio *new_folio = (struct folio *)new_head;
> + bool poisoned_new_folio = check_poisoned_pages &&
> + page_range_has_hwpoisoned(new_head, new_nr_pages);
>
> VM_BUG_ON_PAGE(atomic_read(&new_folio->_mapcount) != -1, new_head);
>
> @@ -3514,6 +3535,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
> (1L << PG_dirty) |
> LRU_GEN_MASK | LRU_REFS_MASK));
>
> + if (poisoned_new_folio)
> + folio_set_has_hwpoisoned(new_folio);
> +
> new_folio->mapping = folio->mapping;
> new_folio->index = folio->index + i;
>
> @@ -3600,8 +3624,6 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> int start_order = uniform_split ? new_order : old_order - 1;
> int split_order;
>
> - folio_clear_has_hwpoisoned(folio);
> -
> /*
> * split to new_order one order at a time. For uniform split,
> * folio is split to new_order directly.
> --
> 2.51.0
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v3 2/4] mm/huge_memory: add split_huge_page_to_order()
2025-10-22 3:35 ` [PATCH v3 2/4] mm/huge_memory: add split_huge_page_to_order() Zi Yan
2025-10-22 20:13 ` David Hildenbrand
@ 2025-10-24 16:11 ` Lorenzo Stoakes
1 sibling, 0 replies; 19+ messages in thread
From: Lorenzo Stoakes @ 2025-10-24 16:11 UTC (permalink / raw)
To: Zi Yan
Cc: linmiaohe, david, jane.chu, kernel, akpm, mcgrof, nao.horiguchi,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
On Tue, Oct 21, 2025 at 11:35:28PM -0400, Zi Yan wrote:
> When caller does not supply a list to split_huge_page_to_list_to_order(),
> use split_huge_page_to_order() instead.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/huge_mm.h | 12 ++++++++++--
> 1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 7698b3542c4f..34f8d8453bf3 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -381,6 +381,10 @@ static inline int split_huge_page_to_list_to_order(struct page *page, struct lis
> {
> return __split_huge_page_to_list_to_order(page, list, new_order, false);
> }
> +static inline int split_huge_page_to_order(struct page *page, unsigned int new_order)
> +{
> + return split_huge_page_to_list_to_order(page, NULL, new_order);
> +}
>
> /*
> * try_folio_split_to_order - try to split a @folio at @page to @new_order using
> @@ -400,8 +404,7 @@ static inline int try_folio_split_to_order(struct folio *folio,
> struct page *page, unsigned int new_order)
> {
> if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
> - return split_huge_page_to_list_to_order(&folio->page, NULL,
> - new_order);
> + return split_huge_page_to_order(&folio->page, new_order);
> return folio_split(folio, new_order, page, NULL);
> }
> static inline int split_huge_page(struct page *page)
> @@ -590,6 +593,11 @@ split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> VM_WARN_ON_ONCE_PAGE(1, page);
> return -EINVAL;
> }
> +static inline int split_huge_page_to_order(struct page *page, unsigned int new_order)
> +{
> + VM_WARN_ON_ONCE_PAGE(1, page);
> + return -EINVAL;
> +}
> static inline int split_huge_page(struct page *page)
> {
> VM_WARN_ON_ONCE_PAGE(1, page);
> --
> 2.51.0
>
* Re: [PATCH v3 3/4] mm/memory-failure: improve large block size folio handling.
2025-10-22 3:35 ` [PATCH v3 3/4] mm/memory-failure: improve large block size folio handling Zi Yan
2025-10-22 20:17 ` David Hildenbrand
@ 2025-10-24 18:11 ` Lorenzo Stoakes
1 sibling, 0 replies; 19+ messages in thread
From: Lorenzo Stoakes @ 2025-10-24 18:11 UTC (permalink / raw)
To: Zi Yan
Cc: linmiaohe, david, jane.chu, kernel, akpm, mcgrof, nao.horiguchi,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
On Tue, Oct 21, 2025 at 11:35:29PM -0400, Zi Yan wrote:
> Large block size (LBS) folios cannot be split to order-0 folios, only
> down to min_order_for_folio(). The current code fails the split outright,
> which is not optimal. Split the folio to min_order_for_folio() instead,
> so that after the split only the folio containing the poisoned page
> becomes unusable.
>
> For soft offline, do not split the large folio at all if its
> min_order_for_folio() is not 0, since the folio is still accessible from
> userspace and a premature split might lead to a performance loss.
>
> Suggested-by: Jane Chu <jane.chu@oracle.com>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
LGTM, with David's comments addressed, feel free to add:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
> ---
> mm/memory-failure.c | 30 ++++++++++++++++++++++++++----
> 1 file changed, 26 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index f698df156bf8..40687b7aa8be 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1656,12 +1656,13 @@ static int identify_page_state(unsigned long pfn, struct page *p,
> * there is still more to do, hence the page refcount we took earlier
> * is still needed.
> */
> -static int try_to_split_thp_page(struct page *page, bool release)
> +static int try_to_split_thp_page(struct page *page, unsigned int new_order,
> + bool release)
> {
> int ret;
>
> lock_page(page);
> - ret = split_huge_page(page);
> + ret = split_huge_page_to_order(page, new_order);
> unlock_page(page);
>
> if (ret && release)
> @@ -2280,6 +2281,9 @@ int memory_failure(unsigned long pfn, int flags)
> folio_unlock(folio);
>
> if (folio_test_large(folio)) {
> + int new_order = min_order_for_split(folio);
> + int err;
> +
> /*
> * The flag must be set after the refcount is bumped
> * otherwise it may race with THP split.
> @@ -2294,7 +2298,15 @@ int memory_failure(unsigned long pfn, int flags)
> * page is a valid handlable page.
> */
> folio_set_has_hwpoisoned(folio);
> - if (try_to_split_thp_page(p, false) < 0) {
> + err = try_to_split_thp_page(p, new_order, /* release= */ false);
> + /*
> + * If the folio cannot be split to order-0, kill the process,
> + * but split the folio anyway to minimize the amount of unusable
> + * pages.
> + */
> + if (err || new_order) {
> + /* get folio again in case the original one is split */
> + folio = page_folio(p);
> res = -EHWPOISON;
> kill_procs_now(p, pfn, flags, folio);
> put_page(p);
> @@ -2621,7 +2633,17 @@ static int soft_offline_in_use_page(struct page *page)
> };
>
> if (!huge && folio_test_large(folio)) {
> - if (try_to_split_thp_page(page, true)) {
> + int new_order = min_order_for_split(folio);
> +
> + /*
> + * If new_order (target split order) is not 0, do not split the
> + * folio at all to retain the still accessible large folio.
> + * NOTE: if minimizing the number of soft offline pages is
> + * preferred, split it to non-zero new_order like it is done in
> + * memory_failure().
> + */
> + if (new_order || try_to_split_thp_page(page, /* new_order= */ 0,
> + /* release= */ true)) {
> pr_info("%#lx: thp split failed\n", pfn);
> return -EBUSY;
> }
> --
> 2.51.0
>
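
A back-of-the-envelope model of what the split-to-min_order change buys in
memory_failure(): the numbers below (an order-9 folio, a filesystem whose
min_order_for_split() comes out as 3) are illustrative assumptions, not
values taken from the patch.

#include <stdio.h>

int main(void)
{
	const unsigned int folio_order = 9;	/* 512 pages in the folio */
	const unsigned int min_order = 3;	/* smallest order the FS allows */
	const unsigned long total = 1UL << folio_order;

	/* Old behaviour: the split to order-0 fails, the whole folio is lost. */
	printf("split refused:      %4lu of %lu pages unusable\n", total, total);

	/*
	 * New behaviour: split to min_order_for_split(); only the min_order
	 * chunk containing the poisoned page stays unusable.
	 */
	printf("split to min order: %4lu of %lu pages unusable\n",
	       1UL << min_order, total);
	return 0;
}

Under these assumptions the change shrinks the unusable region from 512
pages to 8, while the process is still killed because the folio could not
be split all the way down to order-0.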
* Re: [PATCH v3 1/4] mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order
2025-10-24 15:58 ` Lorenzo Stoakes
@ 2025-10-25 15:21 ` Zi Yan
0 siblings, 0 replies; 19+ messages in thread
From: Zi Yan @ 2025-10-25 15:21 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linmiaohe, david, jane.chu, kernel, akpm, mcgrof, nao.horiguchi,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox (Oracle),
Wei Yang, Yang Shi, linux-fsdevel, linux-kernel, linux-mm
On 24 Oct 2025, at 11:58, Lorenzo Stoakes wrote:
> On Tue, Oct 21, 2025 at 11:35:27PM -0400, Zi Yan wrote:
>> folio split clears PG_has_hwpoisoned, but the flag should be preserved
>> in after-split folios containing pages with the PG_hwpoisoned flag if
>> the folio is split to >0 order folios. Scan all pages in a to-be-split
>> folio to determine which after-split folios need the flag.
>>
>> An alternative is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to
>> avoid the scan and set it on all after-split folios, but the resulting
>> false positives have an undesirable negative impact. To remove the false
>> positives, callers of folio_test_has_hwpoisoned() and
>> folio_contain_hwpoisoned_page() would need to do the scan themselves.
>> That would be a hassle for current and future callers and more costly
>> than doing the scan in the split code. More details are discussed in [1].
>>
>> It is OK that the current implementation does not do this, because the
>> memory failure code always tries to split to order-0 folios; if a folio
>> cannot be split to order-0, the memory failure code either gives warnings
>> or does not perform the split.
>>
>> Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985q4g@mail.gmail.com/ [1]
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>
> I guess this was split out to [0]? :)
>
> [0]: https://lore.kernel.org/linux-mm/44310717-347c-4ede-ad31-c6d375a449b9@linux.dev/
Yes. The decision is based on the discussion with David [1] and was announced at [2].
[1] https://lore.kernel.org/all/d3d05898-5530-4990-9d61-8268bd483765@redhat.com/
[2] https://lore.kernel.org/all/1AE28DE5-1E0A-432B-B21B-61E0E3F54909@nvidia.com/
--
Best Regards,
Yan, Zi
Thread overview: 19+ messages
2025-10-22 3:35 [PATCH v3 0/4] Optimize folio split in memory failure Zi Yan
2025-10-22 3:35 ` [PATCH v3 1/4] mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order Zi Yan
2025-10-22 20:09 ` David Hildenbrand
2025-10-22 20:27 ` Zi Yan
2025-10-22 20:34 ` David Hildenbrand
2025-10-22 20:40 ` Zi Yan
2025-10-24 15:58 ` Lorenzo Stoakes
2025-10-25 15:21 ` Zi Yan
2025-10-22 3:35 ` [PATCH v3 2/4] mm/huge_memory: add split_huge_page_to_order() Zi Yan
2025-10-22 20:13 ` David Hildenbrand
2025-10-24 16:11 ` Lorenzo Stoakes
2025-10-22 3:35 ` [PATCH v3 3/4] mm/memory-failure: improve large block size folio handling Zi Yan
2025-10-22 20:17 ` David Hildenbrand
2025-10-22 20:29 ` Zi Yan
2025-10-24 18:11 ` Lorenzo Stoakes
2025-10-22 3:35 ` [PATCH v3 4/4] mm/huge_memory: fix kernel-doc comments for folio_split() and related Zi Yan
2025-10-22 20:18 ` David Hildenbrand
2025-10-22 20:47 ` [PATCH v3 0/4] Optimize folio split in memory failure Zi Yan
2025-10-22 20:47 ` Zi Yan