* [syzbot] INFO: task can't die in reclaim_throttle
From: syzbot @ 2021-12-09 0:52 UTC
To: akpm, linux-kernel, linux-mm, syzkaller-bugs
Hello,
syzbot found the following issue on:
HEAD commit: 4eee8d0b64ec Add linux-next specific files for 20211208
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=113d8d75b00000
kernel config: https://syzkaller.appspot.com/x/.config?x=20b74d9da4ce1ef1
dashboard link: https://syzkaller.appspot.com/bug?extid=dcea9eda277e1090b35f
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
Unfortunately, I don't have any reproducer for this issue yet.
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+dcea9eda277e1090b35f@syzkaller.appspotmail.com
INFO: task syz-executor.5:925 can't die for more than 143 seconds.
task:syz-executor.5 state:D stack:23840 pid: 925 ppid: 565 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:4986 [inline]
__schedule+0xab2/0x4d90 kernel/sched/core.c:6296
schedule+0xd2/0x260 kernel/sched/core.c:6369
schedule_timeout+0x14a/0x2a0 kernel/time/timer.c:1881
reclaim_throttle+0x1ce/0x5e0 mm/vmscan.c:1072
consider_reclaim_throttle mm/vmscan.c:3399 [inline]
shrink_zones mm/vmscan.c:3486 [inline]
do_try_to_free_pages+0x7cd/0x1620 mm/vmscan.c:3541
try_to_free_pages+0x29f/0x750 mm/vmscan.c:3776
__perform_reclaim mm/page_alloc.c:4603 [inline]
__alloc_pages_direct_reclaim mm/page_alloc.c:4624 [inline]
__alloc_pages_slowpath.constprop.0+0xa9e/0x2080 mm/page_alloc.c:5014
__alloc_pages+0x412/0x500 mm/page_alloc.c:5389
alloc_pages+0x1aa/0x310 mm/mempolicy.c:2271
alloc_slab_page mm/slub.c:1799 [inline]
allocate_slab mm/slub.c:1952 [inline]
new_slab+0x2a9/0x3a0 mm/slub.c:2004
___slab_alloc+0x6be/0xd60 mm/slub.c:3019
__slab_alloc.constprop.0+0x4d/0xa0 mm/slub.c:3106
slab_alloc_node mm/slub.c:3197 [inline]
slab_alloc mm/slub.c:3239 [inline]
kmem_cache_alloc+0x35c/0x3a0 mm/slub.c:3244
mempool_alloc+0x146/0x350 mm/mempool.c:392
bvec_alloc+0x16b/0x200 block/bio.c:206
bio_alloc_bioset+0x376/0x4a0 block/bio.c:481
bio_alloc include/linux/bio.h:371 [inline]
mpage_alloc+0x2f/0x1b0 fs/mpage.c:79
do_mpage_readpage+0xfa9/0x2590 fs/mpage.c:306
mpage_readahead+0x3db/0x920 fs/mpage.c:389
read_pages+0x1db/0x790 mm/readahead.c:129
page_cache_ra_unbounded+0x585/0x780 mm/readahead.c:238
do_page_cache_ra+0xf9/0x140 mm/readahead.c:268
do_sync_mmap_readahead mm/filemap.c:3058 [inline]
filemap_fault+0x157f/0x21c0 mm/filemap.c:3151
__do_fault+0x10d/0x790 mm/memory.c:3846
do_read_fault mm/memory.c:4161 [inline]
do_fault mm/memory.c:4290 [inline]
handle_pte_fault mm/memory.c:4548 [inline]
__handle_mm_fault+0x2761/0x4160 mm/memory.c:4683
handle_mm_fault+0x1c8/0x790 mm/memory.c:4781
faultin_page mm/gup.c:939 [inline]
__get_user_pages+0x503/0xf80 mm/gup.c:1160
populate_vma_page_range+0x24d/0x330 mm/gup.c:1492
__mm_populate+0x1ea/0x3e0 mm/gup.c:1601
mm_populate include/linux/mm.h:2698 [inline]
vm_mmap_pgoff+0x20e/0x290 mm/util.c:524
ksys_mmap_pgoff+0x40d/0x5a0 mm/mmap.c:1630
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7fd1c83f6af9
RSP: 002b:00007fd1c736c188 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007fd1c8509f60 RCX: 00007fd1c83f6af9
RDX: 0000000001000002 RSI: 0000000000b36000 RDI: 0000000020000000
RBP: 00007fd1c8450ff7 R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fff309a431f R14: 00007fd1c736c300 R15: 0000000000022000
</TASK>
Showing all locks held in the system:
1 lock held by khungtaskd/26:
#0: ffffffff8bb828a0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6460
1 lock held by kswapd1/99:
1 lock held by in:imklog/6230:
#0: ffff888021d92370 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100 fs/file.c:994
1 lock held by syz-executor.5/925:
=============================================
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
* Re: [syzbot] INFO: task can't die in reclaim_throttle
From: Vlastimil Babka @ 2021-12-09 14:51 UTC
To: syzbot, akpm, linux-kernel, linux-mm, syzkaller-bugs; +Cc: Mel Gorman
On 12/9/21 01:52, syzbot wrote:
> Hello,
+ CC Mel
> [... syzbot report snipped; identical to the original message above ...]
* Re: [syzbot] INFO: task can't die in reclaim_throttle
From: Matthew Wilcox @ 2021-12-09 22:00 UTC
To: Vlastimil Babka
Cc: syzbot, akpm, linux-kernel, linux-mm, syzkaller-bugs, Mel Gorman
On Thu, Dec 09, 2021 at 03:51:00PM +0100, Vlastimil Babka wrote:
> On 12/9/21 01:52, syzbot wrote:
> > Hello,
>
> + CC Mel
I don't know that this is Mel's fault. We've been absolutely beastly
to the MM here.
page_cache_ra_unbounded() calls memalloc_nofs_save() so that page
reclaim won't try to reclaim pages that we've already allocated and
locked.
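
For reference, the scoped-NOFS pattern in page_cache_ra_unbounded() looks
roughly like this (a simplified sketch, not the exact code at this commit;
the elided body allocates and locks the readahead pages and submits the I/O):

	void page_cache_ra_unbounded(struct readahead_control *ractl, ...)
	{
		/*
		 * Everything allocated in this scope implicitly behaves as
		 * GFP_NOFS, so reclaim cannot recurse into the filesystem
		 * while we hold pages locked for readahead.
		 */
		unsigned int nofs = memalloc_nofs_save();

		/* ... allocate and lock pages, submit readahead I/O ... */

		memalloc_nofs_restore(nofs);
	}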
Then mempool_alloc() has set NOMEMALLOC, NORETRY and NOWARN, then called
kmem_cache_alloc() which has unfortunately fallen all the way into the
page allocator. And gone to sleep for 143 seconds by the looks of things.
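
The sleep itself is reclaim_throttle() in mm/vmscan.c; a rough sketch of its
shape (simplified, and the exact timeouts were still being tuned in
linux-next at this point):

	void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason)
	{
		wait_queue_head_t *wqh = &pgdat->reclaim_wait[reason];
		long timeout = HZ/10;	/* the real value depends on 'reason' */
		DEFINE_WAIT(wait);

		/*
		 * TASK_UNINTERRUPTIBLE: a pending fatal signal does not wake
		 * the sleeper, so a killed task that keeps getting throttled
		 * sits in D state and trips the "can't die" hung-task check.
		 */
		prepare_to_wait(wqh, &wait, TASK_UNINTERRUPTIBLE);
		schedule_timeout(timeout);
		finish_wait(wqh, &wait);
	}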
Hm, maybe that is Mel's fault, then. But still, we've asked the MM to
do reclaim with one hand tied behind its back (we're clearly on the
second round in mempool_alloc() here, so we might have GFP_IO set).
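
For context, the gfp juggling in mempool_alloc() goes roughly like this
(simplified from mm/mempool.c; the second round mentioned above is the
repeat_alloc pass with the full mask):

	void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
	{
		void *element;
		gfp_t gfp_temp;

		gfp_mask |= __GFP_NOMEMALLOC;	/* don't dip into emergency reserves */
		gfp_mask |= __GFP_NORETRY;	/* don't loop in the page allocator */
		gfp_mask |= __GFP_NOWARN;	/* failure is fine, we have the pool */

		/* First pass: no direct reclaim, no I/O. */
		gfp_temp = gfp_mask & ~(__GFP_DIRECT_RECLAIM | __GFP_IO);

	repeat_alloc:
		element = pool->alloc(gfp_temp, pool->pool_data);
		if (likely(element))
			return element;

		/* ... take a pre-allocated element from the pool, or sleep ... */

		if (gfp_temp != gfp_mask) {
			/* Second pass: allow direct reclaim and I/O. */
			gfp_temp = gfp_mask;
			goto repeat_alloc;
		}
		/* ... */
	}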
> > [... syzbot report snipped; identical to the original message above ...]
* Re: [syzbot] INFO: task can't die in reclaim_throttle
From: syzbot @ 2021-12-11 21:11 UTC
To: akpm, linux-kernel, linux-mm, mgorman, syzkaller-bugs, vbabka, willy
syzbot has found a reproducer for the following issue on:
HEAD commit: ea922272cbe5 Add linux-next specific files for 20211210
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=15175f75b00000
kernel config: https://syzkaller.appspot.com/x/.config?x=c1359a19d2230002
dashboard link: https://syzkaller.appspot.com/bug?extid=dcea9eda277e1090b35f
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=14d842b1b00000
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+dcea9eda277e1090b35f@syzkaller.appspotmail.com
INFO: task syz-executor.1:3674 can't die for more than 143 seconds.
task:syz-executor.1 state:D stack:23920 pid: 3674 ppid: 1 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:4986 [inline]
__schedule+0xab2/0x4d90 kernel/sched/core.c:6296
schedule+0xd2/0x260 kernel/sched/core.c:6369
schedule_timeout+0x14a/0x2a0 kernel/time/timer.c:1881
reclaim_throttle+0x1ce/0x5e0 mm/vmscan.c:1072
consider_reclaim_throttle mm/vmscan.c:3399 [inline]
shrink_zones mm/vmscan.c:3486 [inline]
do_try_to_free_pages+0x7cd/0x1620 mm/vmscan.c:3541
try_to_free_mem_cgroup_pages+0x2cd/0x840 mm/vmscan.c:3855
reclaim_high.constprop.0+0x190/0x250 mm/memcontrol.c:2299
mem_cgroup_handle_over_high+0x18c/0x540 mm/memcontrol.c:2483
tracehook_notify_resume include/linux/tracehook.h:198 [inline]
exit_to_user_mode_loop kernel/entry/common.c:175 [inline]
exit_to_user_mode_prepare+0x1ab/0x290 kernel/entry/common.c:207
__syscall_exit_to_user_mode_work kernel/entry/common.c:289 [inline]
syscall_exit_to_user_mode+0x19/0x60 kernel/entry/common.c:300
do_syscall_64+0x42/0xb0 arch/x86/entry/common.c:86
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f345ee5e9d8
RSP: 002b:00007fff5c713cb0 EFLAGS: 00000287 ORIG_RAX: 0000000000000101
RAX: 0000000000000003 RBX: 0000000000000003 RCX: 00007f345ee5e9d8
RDX: 0000000000090800 RSI: 00007f345eeb8256 RDI: 00000000ffffff9c
RBP: 00007fff5c713d7c R08: 0000000000090800 R09: 00007f345eeb8256
R10: 0000000000000000 R11: 0000000000000287 R12: 0000000000000000
R13: 00000000000b3eac R14: 000000000000000b R15: 00007fff5c713de0
</TASK>
Showing all locks held in the system:
1 lock held by khungtaskd/26:
#0: ffffffff8bb818a0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6460
1 lock held by klogd/2959:
2 locks held by getty/3289:
#0: ffff88823bcca898 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x22/0x80 drivers/tty/tty_ldisc.c:244
#1: ffffc90002b962e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xcf0/0x1230 drivers/tty/n_tty.c:2077
2 locks held by kworker/u4:1/3705:
#0: ffff8880b9c39c98 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2b/0x120 kernel/sched/core.c:489
#1: ffff8880b9c27988 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x3e7/0x4e0 kernel/sched/psi.c:891
=============================================
* Re: [syzbot] INFO: task can't die in reclaim_throttle
From: syzbot @ 2021-12-12 8:40 UTC
To: akpm, linux-kernel, linux-mm, mgorman, syzkaller-bugs, vbabka, willy
syzbot has found a reproducer for the following issue on:
HEAD commit: ea922272cbe5 Add linux-next specific files for 20211210
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=130f5f75b00000
kernel config: https://syzkaller.appspot.com/x/.config?x=c1359a19d2230002
dashboard link: https://syzkaller.appspot.com/bug?extid=dcea9eda277e1090b35f
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=14f4a551b00000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1613df3ab00000
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+dcea9eda277e1090b35f@syzkaller.appspotmail.com
INFO: task syz-executor786:3696 can't die for more than 143 seconds.
task:syz-executor786 state:D stack:28344 pid: 3696 ppid: 3669 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:4986 [inline]
__schedule+0xab2/0x4d90 kernel/sched/core.c:6296
schedule+0xd2/0x260 kernel/sched/core.c:6369
schedule_timeout+0x14a/0x2a0 kernel/time/timer.c:1881
reclaim_throttle+0x1ce/0x5e0 mm/vmscan.c:1072
consider_reclaim_throttle mm/vmscan.c:3399 [inline]
shrink_zones mm/vmscan.c:3486 [inline]
do_try_to_free_pages+0x7cd/0x1620 mm/vmscan.c:3541
try_to_free_mem_cgroup_pages+0x2cd/0x840 mm/vmscan.c:3855
reclaim_high.constprop.0+0x190/0x250 mm/memcontrol.c:2299
mem_cgroup_handle_over_high+0x18c/0x540 mm/memcontrol.c:2483
tracehook_notify_resume include/linux/tracehook.h:198 [inline]
exit_to_user_mode_loop kernel/entry/common.c:175 [inline]
exit_to_user_mode_prepare+0x1ab/0x290 kernel/entry/common.c:207
irqentry_exit_to_user_mode+0x5/0x40 kernel/entry/common.c:313
exc_page_fault+0xc6/0x180 arch/x86/mm/fault.c:1543
asm_exc_page_fault+0x1e/0x30 arch/x86/include/asm/idtentry.h:568
RIP: 0033:0x7fd40489c3ee
RSP: 002b:00007ffe50e3a520 EFLAGS: 00010202
RAX: 00007fd4049253d0 RBX: 00007fd40491e508 RCX: 00007fd40489c3cb
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000001200011
RBP: 0000000000000001 R08: 0000000000000000 R09: 00005555571d1300
R10: 00005555571d15d0 R11: 0000000000000246 R12: 0000000000000001
R13: 0000000000000000 R14: 0000000000000000 R15: 00007ffe50e3a5a0
</TASK>
Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffffffff8bb818a0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6460
6 locks held by kworker/u4:2/50:
#0: ffff8880b9d39c98 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2b/0x120 kernel/sched/core.c:489
#1: ffff8880b9d27988 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x176/0x4e0 kernel/sched/psi.c:882
#2: ffff8880b9d284d8 (&base->lock){-.-.}-{2:2}, at: lock_timer_base+0x5a/0x1f0 kernel/time/timer.c:946
#3: ffffffff90799400 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_object_activate+0x12e/0x3e0 lib/debugobjects.c:661
#4: ffffffff8ba4a548 (text_mutex){+.+.}-{3:3}, at: arch_jump_label_transform_apply+0xe/0x20 arch/x86/kernel/jump_label.c:145
#5: ffff888010dbb138 (ptlock_ptr(page)#2){+.+.}-{2:2}, at: spin_lock include/linux/spinlock.h:354 [inline]
#5: ffff888010dbb138 (ptlock_ptr(page)#2){+.+.}-{2:2}, at: __get_locked_pte+0x2b6/0x4d0 mm/memory.c:1722
1 lock held by syslogd/2955:
#0: ffff8880b9d39c98 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2b/0x120 kernel/sched/core.c:489
2 locks held by getty/3289:
#0: ffff88814a873098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x22/0x80 drivers/tty/tty_ldisc.c:244
#1: ffffc90002b962e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xcf0/0x1230 drivers/tty/n_tty.c:2077
3 locks held by syz-executor786/3925:
#0: ffff88807fb82460 (sb_writers#10){.+.+}-{0:0}, at: __do_sys_sendfile64 fs/read_write.c:1310 [inline]
#0: ffff88807fb82460 (sb_writers#10){.+.+}-{0:0}, at: __se_sys_sendfile64 fs/read_write.c:1296 [inline]
#0: ffff88807fb82460 (sb_writers#10){.+.+}-{0:0}, at: __x64_sys_sendfile64+0x1cc/0x210 fs/read_write.c:1296
#1: ffff88801dc3c888 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x287/0x500 fs/kernfs/file.c:287
#2: ffff888070e3c3a8 (kn->active#167){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ab/0x500 fs/kernfs/file.c:288
3 locks held by syz-executor786/3928:
#0: ffff88807fb82460 (sb_writers#10){.+.+}-{0:0}, at: __do_sys_sendfile64 fs/read_write.c:1310 [inline]
#0: ffff88807fb82460 (sb_writers#10){.+.+}-{0:0}, at: __se_sys_sendfile64 fs/read_write.c:1296 [inline]
#0: ffff88807fb82460 (sb_writers#10){.+.+}-{0:0}, at: __x64_sys_sendfile64+0x1cc/0x210 fs/read_write.c:1296
#1: ffff88801dc3cc88 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x287/0x500 fs/kernfs/file.c:287
#2: ffff888070e3c3a8 (kn->active#167){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ab/0x500 fs/kernfs/file.c:288
3 locks held by syz-executor786/3931:
#0: ffff88807fb82460 (sb_writers#10){.+.+}-{0:0}, at: __do_sys_sendfile64 fs/read_write.c:1310 [inline]
#0: ffff88807fb82460 (sb_writers#10){.+.+}-{0:0}, at: __se_sys_sendfile64 fs/read_write.c:1296 [inline]
#0: ffff88807fb82460 (sb_writers#10){.+.+}-{0:0}, at: __x64_sys_sendfile64+0x1cc/0x210 fs/read_write.c:1296
#1: ffff888077934488 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x287/0x500 fs/kernfs/file.c:287
#2: ffff888070e3c3a8 (kn->active#167){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ab/0x500 fs/kernfs/file.c:288
3 locks held by syz-executor786/3933:
#0: ffff88807fb82460 (sb_writers#10){.+.+}-{0:0}, at: __do_sys_sendfile64 fs/read_write.c:1310 [inline]
#0: ffff88807fb82460 (sb_writers#10){.+.+}-{0:0}, at: __se_sys_sendfile64 fs/read_write.c:1296 [inline]
#0: ffff88807fb82460 (sb_writers#10){.+.+}-{0:0}, at: __x64_sys_sendfile64+0x1cc/0x210 fs/read_write.c:1296
#1: ffff88801dc3d488 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x287/0x500 fs/kernfs/file.c:287
#2: ffff888070e3c3a8 (kn->active#167){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ab/0x500 fs/kernfs/file.c:288
3 locks held by syz-executor786/3935:
#0: ffff88807fb82460 (sb_writers#10){.+.+}-{0:0}, at: __do_sys_sendfile64 fs/read_write.c:1310 [inline]
#0: ffff88807fb82460 (sb_writers#10){.+.+}-{0:0}, at: __se_sys_sendfile64 fs/read_write.c:1296 [inline]
#0: ffff88807fb82460 (sb_writers#10){.+.+}-{0:0}, at: __x64_sys_sendfile64+0x1cc/0x210 fs/read_write.c:1296
#1: ffff88801dc3d888 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x287/0x500 fs/kernfs/file.c:287
#2: ffff888070e3c3a8 (kn->active#167){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ab/0x500 fs/kernfs/file.c:288
=============================================