From: Zi Yan <ziy@nvidia.com>
To: "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com>
Cc: David Hildenbrand <david@redhat.com>,
Luis Chamberlain <mcgrof@kernel.org>,
syzbot <syzbot+e6367ea2fdab6ed46056@syzkaller.appspotmail.com>,
akpm@linux-foundation.org, linmiaohe@huawei.com,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
nao.horiguchi@gmail.com, syzkaller-bugs@googlegroups.com
Subject: Re: [syzbot] [mm?] WARNING in memory_failure
Date: Thu, 25 Sep 2025 10:24:20 -0400
Message-ID: <80D4F8CE-FCFF-44F9-8846-6098FAC76082@nvidia.com>
In-Reply-To: <fzfcprayhtwbyuauld5geudyzzrslcb3luaneejq4hyq2aqm3l@iwpn2n33gi3m>
On 25 Sep 2025, at 8:02, Pankaj Raghav (Samsung) wrote:
>>>>
>>>> We might just need (a), since there is no caller of (b) in the kernel,
>>>> except that split_folio_to_order() is used for testing. There might be
>>>> future uses when the kernel wants to convert from THP to mTHP, but it
>>>> seems that we are not there yet.
>>>>
>>>
>>> Even better: then maybe selected interfaces could just fail if the min-order contradicts the request to split to a non-larger (order-0) folio.
>>
>> Yep. Let’s hear what Luis and Pankaj will say about this.
>>
>>>
>>>>
>>>>
>>>> +Luis and Pankaj for their opinions on how LBS is going to use split folio
>>>> to any order.
>>>>
>>>> Hi Luis and Pankaj,
>>>>
>>>> It seems that bumping the split folio order from 0 to mapping_min_folio_order()
>>>> instead of simply failing the split call surprises some callers and causes
>>>> issues like the one reported by this email. I cannot think of any situation
>>>> where failing a folio split does not work. If LBS code wants to split, it
>>>> should supply mapping_min_folio_order(), right? Does such a caller exist?
>>>>
>
> I am not aware of any place in the LBS path where we supply the
> min_order. truncate_inode_partial_folio() calls try_folio_split(), which
> takes care of splitting in min_order chunks. So we embedded the
> min_order in the MM functions that perform the split instead of having
> the caller pass the min_order. That is probably why this problem is
> being exposed now: people are surprised to see a large folio even
> though they asked to split folios to order-0.
>
> As you concluded, we will not be breaking anything wrt LBS, as we
> just refuse to split if the requested order is below the min_order. The
> only issue I see is that we might be exacerbating ENOMEM errors, since
> we are not splitting as many folios with this change. But the solution
> for that is simple: add more RAM to the system ;)
>
> Just for clarity, are we talking about changing the behaviour of just the
> try_to_split_thp_page() function, or of all the split functions in huge_mm.h?
I want to change all the split functions in huge_mm.h and provide
mapping_min_folio_order() to try_folio_split() in truncate_inode_partial_folio().
Something like the patch below:

1. No split function will change the given order;
2. __folio_split() will no longer emit VM_WARN_ONCE() when the provided
   new_order is smaller than mapping_min_folio_order().

In this way, for an LBS folio that cannot be split to order 0, the split
functions will return -EINVAL to tell the caller that the folio cannot
be split. The caller is expected to handle the split failure.
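
For illustration, a caller would then look roughly like the sketch below
(hypothetical code, not part of the patch; the function name and the
keep-it-large policy are made up):

/*
 * Hypothetical caller sketch: with the change above, requesting an order
 * below mapping_min_folio_order() just fails with -EINVAL, and the caller
 * keeps working with the large folio instead of relying on an implicit
 * bump to min_order. @folio must be locked, as split_folio() requires.
 */
static int shrink_folio_if_possible(struct folio *folio)
{
	int ret = split_folio(folio);	/* asks for order 0 */

	/* An LBS folio cannot go below its min order: keep it, not an error. */
	if (ret == -EINVAL)
		return 0;

	return ret;
}
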
WDYT?
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index f327d62fc985..e15c3ca07e33 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -387,34 +387,16 @@ int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
  * Return: 0: split is successful, otherwise split failed.
  */
 static inline int try_folio_split(struct folio *folio, struct page *page,
-		struct list_head *list)
+		struct list_head *list, unsigned int order)
 {
-	int ret = min_order_for_split(folio);
-
-	if (ret < 0)
-		return ret;
-
-	if (!non_uniform_split_supported(folio, 0, false))
+	if (!non_uniform_split_supported(folio, order, false))
 		return split_huge_page_to_list_to_order(&folio->page, list,
-				ret);
-	return folio_split(folio, ret, page, list);
+				order);
+	return folio_split(folio, order, page, list);
 }
 
 static inline int split_huge_page(struct page *page)
 {
-	struct folio *folio = page_folio(page);
-	int ret = min_order_for_split(folio);
-
-	if (ret < 0)
-		return ret;
-
-	/*
-	 * split_huge_page() locks the page before splitting and
-	 * expects the same page that has been split to be locked when
-	 * returned. split_folio(page_folio(page)) cannot be used here
-	 * because it converts the page to folio and passes the head
-	 * page to be split.
-	 */
-	return split_huge_page_to_list_to_order(page, NULL, ret);
+	return split_huge_page_to_list_to_order(page, NULL, 0);
 }
 
 void deferred_split_folio(struct folio *folio, bool partially_mapped);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5acca24bbabb..faf5da459a4c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3653,8 +3653,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 
 		min_order = mapping_min_folio_order(folio->mapping);
 		if (new_order < min_order) {
-			VM_WARN_ONCE(1, "Cannot split mapped folio below min-order: %u",
-					min_order);
 			ret = -EINVAL;
 			goto out;
 		}
@@ -3986,11 +3984,6 @@ int min_order_for_split(struct folio *folio)
 int split_folio_to_list(struct folio *folio, struct list_head *list)
 {
-	int ret = min_order_for_split(folio);
-
-	if (ret < 0)
-		return ret;
-
-	return split_huge_page_to_list_to_order(&folio->page, list, ret);
+	return split_huge_page_to_list_to_order(&folio->page, list, 0);
 }
diff --git a/mm/truncate.c b/mm/truncate.c
index 91eb92a5ce4f..1c15149ae8e9 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -194,6 +194,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 	size_t size = folio_size(folio);
 	unsigned int offset, length;
 	struct page *split_at, *split_at2;
+	unsigned int min_order;
 
 	if (pos < start)
 		offset = start - pos;
@@ -223,8 +224,9 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 	if (!folio_test_large(folio))
 		return true;
 
+	min_order = mapping_min_folio_order(folio->mapping);
 	split_at = folio_page(folio, PAGE_ALIGN_DOWN(offset) / PAGE_SIZE);
-	if (!try_folio_split(folio, split_at, NULL)) {
+	if (!try_folio_split(folio, split_at, NULL, min_order)) {
 		/*
 		 * try to split at offset + length to make sure folios within
 		 * the range can be dropped, especially to avoid memory waste
@@ -254,7 +256,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 		 */
 		if (folio_test_large(folio2) &&
 		    folio2->mapping == folio->mapping)
-			try_folio_split(folio2, split_at2, NULL);
+			try_folio_split(folio2, split_at2, NULL, min_order);
 
 		folio_unlock(folio2);
 out:
Best Regards,
Yan, Zi