From: David Hildenbrand <david@redhat.com>
To: syzbot <syzbot+e6367ea2fdab6ed46056@syzkaller.appspotmail.com>,
	akpm@linux-foundation.org, linmiaohe@huawei.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	nao.horiguchi@gmail.com, syzkaller-bugs@googlegroups.com,
	Zi Yan <ziy@nvidia.com>
Subject: Re: [syzbot] [mm?] WARNING in memory_failure
Date: Wed, 24 Sep 2025 13:32:48 +0200	[thread overview]
Message-ID: <ce93b55c-75a7-4b4d-a68b-9d80baf1578b@redhat.com> (raw)
In-Reply-To: <68d2c943.a70a0220.1b52b.02b3.GAE@google.com>

On 23.09.25 18:22, syzbot wrote:
> Hello,
> 
> syzbot found the following issue on:
> 
> HEAD commit:    b5db4add5e77 Merge branch 'for-next/core' into for-kernelci
> git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
> console output: https://syzkaller.appspot.com/x/log.txt?x=10edb8e2580000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=d2ae34a0711ff2f1
> dashboard link: https://syzkaller.appspot.com/bug?extid=e6367ea2fdab6ed46056
> compiler:       Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
> userspace arch: arm64
> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=14160f12580000
> C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=1361627c580000
> 
> Downloadable assets:
> disk image: https://storage.googleapis.com/syzbot-assets/6eee2232d5c1/disk-b5db4add.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/a8b00f2f1234/vmlinux-b5db4add.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/fc0d466f156c/Image-b5db4add.gz.xz
> 
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+e6367ea2fdab6ed46056@syzkaller.appspotmail.com
> 
> Injecting memory failure for pfn 0x104000 at process virtual address 0x20000000
> ------------[ cut here ]------------
> WARNING: CPU: 1 PID: 6700 at mm/memory-failure.c:2391 memory_failure+0x18ec/0x1db4 mm/memory-failure.c:2391
> Modules linked in:
> CPU: 1 UID: 0 PID: 6700 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/30/2025
> pstate: 83400005 (Nzcv daif +PAN -UAO +TCO +DIT -SSBS BTYPE=--)
> pc : memory_failure+0x18ec/0x1db4 mm/memory-failure.c:2391
> lr : memory_failure+0x18ec/0x1db4 mm/memory-failure.c:2391
> sp : ffff8000a41478c0
> x29: ffff8000a41479a0 x28: 05ffc00000200868 x27: ffff700014828f20
> x26: 1fffffbff8620001 x25: 05ffc0000020086d x24: 1fffffbff8620000
> x23: fffffdffc3100008 x22: fffffdffc3100000 x21: fffffdffc3100000
> x20: 0000000000000023 x19: dfff800000000000 x18: 1fffe00033793888
> x17: ffff80008f7ee000 x16: ffff80008052aa64 x15: 0000000000000001
> x14: 1fffffbff8620000 x13: 0000000000000000 x12: 0000000000000000
> x11: ffff7fbff8620001 x10: 0000000000ff0100 x9 : 0000000000000000
> x8 : ffff0000d7eedb80 x7 : ffff800080428910 x6 : 0000000000000000
> x5 : 0000000000000001 x4 : 0000000000000001 x3 : ffff800080cf5438
> x2 : 0000000000000001 x1 : 0000000000000040 x0 : 0000000000000000
> Call trace:
>   memory_failure+0x18ec/0x1db4 mm/memory-failure.c:2391 (P)
>   madvise_inject_error mm/madvise.c:1475 [inline]
>   madvise_do_behavior+0x2c8/0x7c4 mm/madvise.c:1875
>   do_madvise+0x190/0x248 mm/madvise.c:1978
>   __do_sys_madvise mm/madvise.c:1987 [inline]
>   __se_sys_madvise mm/madvise.c:1985 [inline]
>   __arm64_sys_madvise+0xa4/0xc0 mm/madvise.c:1985
>   __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
>   invoke_syscall+0x98/0x254 arch/arm64/kernel/syscall.c:49
>   el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
>   do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
>   el0_svc+0x5c/0x254 arch/arm64/kernel/entry-common.c:744
>   el0t_64_sync_handler+0x84/0x12c arch/arm64/kernel/entry-common.c:763
>   el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:596

We're running into the

         WARN_ON(folio_test_large(folio));

in memory_failure().

Which is weird, because right before it we have:

         if (folio_test_large(folio)) {
                 /*
                  * The flag must be set after the refcount is bumped
                  * otherwise it may race with THP split.
                  * And the flag can't be set in get_hwpoison_page() since
                  * it is called by soft offline too and it is just called
                  * for !MF_COUNT_INCREASED.  So here seems to be the best
                  * place.
                  *
                  * Don't need care about the above error handling paths for
                  * get_hwpoison_page() since they handle either free page
                  * or unhandlable page.  The refcount is bumped iff the
                  * page is a valid handlable page.
                  */
                 folio_set_has_hwpoisoned(folio);
                 if (try_to_split_thp_page(p, false) < 0) {
                         res = -EHWPOISON;
                         kill_procs_now(p, pfn, flags, folio);
                         put_page(p);
                         action_result(pfn, MF_MSG_UNSPLIT_THP, MF_FAILED);
                         goto unlock_mutex;
                 }
                 VM_BUG_ON_PAGE(!page_count(p), p);
                 folio = page_folio(p);
         }


But that's likely what I raised with Zi Yan recently: if try_to_split_thp_page()->split_huge_page()
silently decides to split to something that is not a small folio (the min_order_for_split() case),
the semantics of the function change for existing callers.
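For reference, from memory, split_huge_page() now does roughly the following
since the min_order work (a sketch, not a verbatim copy of the tree):

         static inline int split_huge_page(struct page *page)
         {
                 struct folio *folio = page_folio(page);
                 int min_order = min_order_for_split(folio);

                 if (min_order < 0)
                         return min_order;

                 /*
                  * For a file-backed folio on a mapping with a non-zero
                  * minimum folio order, min_order can be > 0: the split
                  * "succeeds", but the resulting folios are still large,
                  * which is exactly what trips the WARN_ON above.
                  */
                 return split_huge_page_to_list_to_order(page, NULL, min_order);
         }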

Likely, split_huge_page() should have failed if the min_order keeps us from splitting to order-0,
or there would have to be some "parameter" that tells split_huge_page() which order the
caller expects.
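Purely to illustrate the second option, such a "parameter" could look something
like this (hypothetical, names made up):

         /* Split @page, refusing to "succeed" with folios larger than @order. */
         static inline int split_huge_page_to_expected_order(struct page *page,
                                                             unsigned int order)
         {
                 struct folio *folio = page_folio(page);
                 int min_order = min_order_for_split(folio);

                 if (min_order < 0)
                         return min_order;

                 /* The mapping does not allow splitting down to @order. */
                 if (min_order > order)
                         return -EINVAL;

                 return split_huge_page_to_list_to_order(page, NULL, order);
         }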

We can check folio_test_large() after the split, but really, we should just not be splitting at
all if it doesn't serve our purpose.
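Just to spell out the band-aid variant in memory_failure() (sketch only; as
said, I don't think this is the way to go):

         if (try_to_split_thp_page(p, false) < 0) {
                 res = -EHWPOISON;
                 kill_procs_now(p, pfn, flags, folio);
                 put_page(p);
                 action_result(pfn, MF_MSG_UNSPLIT_THP, MF_FAILED);
                 goto unlock_mutex;
         }
         folio = page_folio(p);
         /* The split "succeeded", but only down to min_order. */
         if (folio_test_large(folio)) {
                 res = -EHWPOISON;
                 kill_procs_now(p, pfn, flags, folio);
                 folio_put(folio);
                 action_result(pfn, MF_MSG_UNSPLIT_THP, MF_FAILED);
                 goto unlock_mutex;
         }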

-- 
Cheers

David / dhildenb


