Date: Wed, 5 Feb 2014 19:50:10 -0800 (PST)
From: Hugh Dickins
To: Johannes Weiner
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: mmotm 2014-02-05 list_lru_add lockdep splat

======================================================
[ INFO: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected ]
3.14.0-rc1-mm1 #1 Not tainted
------------------------------------------------------
kswapd0/48 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
 (&(&lru->node[i].lock)->rlock){+.+.-.}, at: [] list_lru_add+0x80/0xf4

and this task is already holding:
 (&(&mapping->tree_lock)->rlock){..-.-.}, at: [] __remove_mapping+0x3b/0x12d
which would create a new lock dependency:
 (&(&mapping->tree_lock)->rlock){..-.-.} -> (&(&lru->node[i].lock)->rlock){+.+.-.}

but this new dependency connects a SOFTIRQ-irq-safe lock:
 (&(&mapping->tree_lock)->rlock){..-.-.}
... which became SOFTIRQ-irq-safe at:
  [] __lock_acquire+0x589/0x954
  [] lock_acquire+0x61/0x78
  [] _raw_spin_lock_irqsave+0x3f/0x51
  [] test_clear_page_writeback+0x96/0x2a5
  [] end_page_writeback+0x17/0x41
  [] end_buffer_async_write+0x12a/0x1aa
  [] end_bio_bh_io_sync+0x31/0x3c
  [] bio_endio+0x50/0x6e
  [] blk_update_request+0x16c/0x2fe
  [] blk_update_bidi_request+0x17/0x65
  [] blk_end_bidi_request+0x1a/0x56
  [] blk_end_request+0xb/0xd
  [] scsi_io_completion+0x16f/0x474
  [] scsi_finish_command+0xb6/0xbf
  [] scsi_softirq_done+0xe9/0xf0
  [] blk_done_softirq+0x79/0x8b
  [] __do_softirq+0xf7/0x213
  [] irq_exit+0x3d/0x92
  [] do_IRQ+0xb3/0xcc
  [] ret_from_intr+0x0/0x13
  [] __might_sleep+0x71/0x198
  [] console_conditional_schedule+0x20/0x27
  [] fbcon_redraw.isra.20+0xee/0x15c
  [] fbcon_scroll+0x61c/0xba3
  [] scrup+0xc5/0xe0
  [] lf+0x29/0x61
  [] do_con_trol+0x162/0x129d
  [] do_con_write+0x767/0x7f4
  [] con_write+0xe/0x20
  [] do_output_char+0x8b/0x1a6
  [] n_tty_write+0x2ab/0x3c8
  [] tty_write+0x1a9/0x241
  [] redirected_tty_write+0x88/0x91
  [] do_loop_readv_writev+0x43/0x72
  [] do_readv_writev+0xf7/0x1be
  [] vfs_writev+0x32/0x46
  [] SyS_writev+0x44/0x78
  [] system_call_fastpath+0x16/0x1b

to a SOFTIRQ-irq-unsafe lock:
 (&(&lru->node[i].lock)->rlock){+.+.-.}
... which became SOFTIRQ-irq-unsafe at:
...
  [] __lock_acquire+0x600/0x954
  [] lock_acquire+0x61/0x78
  [] _raw_spin_lock+0x34/0x41
  [] list_lru_add+0x80/0xf4
  [] dput+0xb8/0x107
  [] path_put+0x11/0x1c
  [] path_openat+0x4eb/0x58c
  [] do_filp_open+0x35/0x7a
  [] open_exec+0x36/0xd7
  [] do_execve_common.isra.33+0x280/0x6b4
  [] do_execve+0x13/0x15
  [] try_to_run_init_process+0x24/0x49
  [] kernel_init+0xb9/0xff
  [] ret_from_fork+0x7c/0xb0

other info that might help us debug this:

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&(&lru->node[i].lock)->rlock);
                               local_irq_disable();
                               lock(&(&mapping->tree_lock)->rlock);
                               lock(&(&lru->node[i].lock)->rlock);
  <Interrupt>
    lock(&(&mapping->tree_lock)->rlock);

 *** DEADLOCK ***

1 lock held by kswapd0/48:
 #0:  (&(&mapping->tree_lock)->rlock){..-.-.}, at: [] __remove_mapping+0x3b/0x12d

the dependencies between SOFTIRQ-irq-safe lock and the holding lock:
-> (&(&mapping->tree_lock)->rlock){..-.-.} ops: 139535 {
   IN-SOFTIRQ-W at:
     [] __lock_acquire+0x589/0x954
     [] lock_acquire+0x61/0x78
     [] _raw_spin_lock_irqsave+0x3f/0x51
     [] test_clear_page_writeback+0x96/0x2a5
     [] end_page_writeback+0x17/0x41
     [] end_buffer_async_write+0x12a/0x1aa
     [] end_bio_bh_io_sync+0x31/0x3c
     [] bio_endio+0x50/0x6e
     [] blk_update_request+0x16c/0x2fe
     [] blk_update_bidi_request+0x17/0x65
     [] blk_end_bidi_request+0x1a/0x56
     [] blk_end_request+0xb/0xd
     [] scsi_io_completion+0x16f/0x474
     [] scsi_finish_command+0xb6/0xbf
     [] scsi_softirq_done+0xe9/0xf0
     [] blk_done_softirq+0x79/0x8b
     [] __do_softirq+0xf7/0x213
     [] irq_exit+0x3d/0x92
     [] do_IRQ+0xb3/0xcc
     [] ret_from_intr+0x0/0x13
     [] __might_sleep+0x71/0x198
     [] console_conditional_schedule+0x20/0x27
     [] fbcon_redraw.isra.20+0xee/0x15c
     [] fbcon_scroll+0x61c/0xba3
     [] scrup+0xc5/0xe0
     [] lf+0x29/0x61
     [] do_con_trol+0x162/0x129d
     [] do_con_write+0x767/0x7f4
     [] con_write+0xe/0x20
     [] do_output_char+0x8b/0x1a6
     [] n_tty_write+0x2ab/0x3c8
     [] tty_write+0x1a9/0x241
     [] redirected_tty_write+0x88/0x91
     [] do_loop_readv_writev+0x43/0x72
     [] do_readv_writev+0xf7/0x1be
     [] vfs_writev+0x32/0x46
     [] SyS_writev+0x44/0x78
     [] system_call_fastpath+0x16/0x1b
   IN-RECLAIM_FS-W at:
     [] __lock_acquire+0x62f/0x954
     [] lock_acquire+0x61/0x78
     [] _raw_spin_lock_irq+0x3a/0x47
     [] __remove_mapping+0x3b/0x12d
     [] shrink_page_list+0x6e7/0x8db
     [] shrink_inactive_list+0x24e/0x391
     [] shrink_lruvec+0x3e3/0x589
     [] shrink_zone+0x5f/0x159
     [] balance_pgdat+0x32c/0x4fd
     [] kswapd+0x304/0x331
     [] kthread+0xf1/0xf9
     [] ret_from_fork+0x7c/0xb0
   INITIAL USE at:
     [] __lock_acquire+0x647/0x954
     [] lock_acquire+0x61/0x78
     [] _raw_spin_lock_irq+0x3a/0x47
     [] shmem_add_to_page_cache.isra.25+0x7f/0x102
     [] shmem_getpage_gfp+0x354/0x658
     [] shmem_read_mapping_page_gfp+0x2e/0x49
     [] i915_gem_object_get_pages_gtt+0xe9/0x417
     [] i915_gem_object_get_pages+0x59/0x85
     [] i915_gem_object_pin+0x22f/0x4e0
     [] i915_gem_create_context+0x208/0x404
     [] i915_gem_context_init+0x12e/0x1f0
     [] i915_gem_init+0xdc/0x19a
     [] i915_driver_load+0xa28/0xd38
     [] drm_dev_register+0xd2/0x14a
     [] drm_get_pci_dev+0x104/0x1d4
     [] i915_pci_probe+0x40/0x49
     [] local_pci_probe+0x1f/0x51
     [] pci_device_probe+0xc6/0xec
     [] driver_probe_device+0x90/0x19b
     [] __driver_attach+0x5c/0x7e
     [] bus_for_each_dev+0x55/0x89
     [] driver_attach+0x19/0x1b
     [] bus_add_driver+0xec/0x1d3
     [] driver_register+0x89/0xc5
     [] __pci_register_driver+0x58/0x5b
     [] drm_pci_init+0x64/0xe8
     [] i915_init+0x6a/0x6c
     [] do_one_initcall+0x7f/0x10b
     [] kernel_init_freeable+0x104/0x196
     [] kernel_init+0x9/0xff
     [] ret_from_fork+0x7c/0xb0
 }
 ... key at: [] __key.30540+0x0/0x8
 ... acquired at:
   [] check_irq_usage+0x54/0xa8
   [] validate_chain.isra.22+0x87c/0xe96
   [] __lock_acquire+0x85e/0x954
   [] lock_acquire+0x61/0x78
   [] _raw_spin_lock+0x34/0x41
   [] list_lru_add+0x80/0xf4
   [] __delete_from_page_cache+0x122/0x1cc
   [] __remove_mapping+0xf4/0x12d
   [] shrink_page_list+0x6e7/0x8db
   [] shrink_inactive_list+0x24e/0x391
   [] shrink_lruvec+0x3e3/0x589
   [] shrink_zone+0x5f/0x159
   [] balance_pgdat+0x32c/0x4fd
   [] kswapd+0x304/0x331
   [] kthread+0xf1/0xf9
   [] ret_from_fork+0x7c/0xb0

the dependencies between the lock to be acquired and SOFTIRQ-irq-unsafe lock:
-> (&(&lru->node[i].lock)->rlock){+.+.-.} ops: 13037 {
   HARDIRQ-ON-W at:
     [] __lock_acquire+0x5df/0x954
     [] lock_acquire+0x61/0x78
     [] _raw_spin_lock+0x34/0x41
     [] list_lru_add+0x80/0xf4
     [] dput+0xb8/0x107
     [] path_put+0x11/0x1c
     [] path_openat+0x4eb/0x58c
     [] do_filp_open+0x35/0x7a
     [] open_exec+0x36/0xd7
     [] do_execve_common.isra.33+0x280/0x6b4
     [] do_execve+0x13/0x15
     [] try_to_run_init_process+0x24/0x49
     [] kernel_init+0xb9/0xff
     [] ret_from_fork+0x7c/0xb0
   SOFTIRQ-ON-W at:
     [] __lock_acquire+0x600/0x954
     [] lock_acquire+0x61/0x78
     [] _raw_spin_lock+0x34/0x41
     [] list_lru_add+0x80/0xf4
     [] dput+0xb8/0x107
     [] path_put+0x11/0x1c
     [] path_openat+0x4eb/0x58c
     [] do_filp_open+0x35/0x7a
     [] open_exec+0x36/0xd7
     [] do_execve_common.isra.33+0x280/0x6b4
     [] do_execve+0x13/0x15
     [] try_to_run_init_process+0x24/0x49
     [] kernel_init+0xb9/0xff
     [] ret_from_fork+0x7c/0xb0
   IN-RECLAIM_FS-W at:
     [] __lock_acquire+0x62f/0x954
     [] lock_acquire+0x61/0x78
     [] _raw_spin_lock+0x34/0x41
     [] list_lru_count_node+0x19/0x55
     [] super_cache_count+0x5f/0xb5
     [] shrink_slab_node+0x40/0x171
     [] shrink_slab+0x76/0x134
     [] balance_pgdat+0x363/0x4fd
     [] kswapd+0x304/0x331
     [] kthread+0xf1/0xf9
     [] ret_from_fork+0x7c/0xb0
   INITIAL USE at:
     [] __lock_acquire+0x647/0x954
     [] lock_acquire+0x61/0x78
     [] _raw_spin_lock+0x34/0x41
     [] list_lru_add+0x80/0xf4
     [] dput+0xb8/0x107
     [] path_put+0x11/0x1c
     [] path_openat+0x4eb/0x58c
     [] do_filp_open+0x35/0x7a
     [] open_exec+0x36/0xd7
     [] do_execve_common.isra.33+0x280/0x6b4
     [] do_execve+0x13/0x15
     [] try_to_run_init_process+0x24/0x49
     [] kernel_init+0xb9/0xff
     [] ret_from_fork+0x7c/0xb0
 }
 ... key at: [] __key.17506+0x0/0x10
 ... acquired at:
   [] check_irq_usage+0x54/0xa8
   [] validate_chain.isra.22+0x87c/0xe96
   [] __lock_acquire+0x85e/0x954
   [] lock_acquire+0x61/0x78
   [] _raw_spin_lock+0x34/0x41
   [] list_lru_add+0x80/0xf4
   [] __delete_from_page_cache+0x122/0x1cc
   [] __remove_mapping+0xf4/0x12d
   [] shrink_page_list+0x6e7/0x8db
   [] shrink_inactive_list+0x24e/0x391
   [] shrink_lruvec+0x3e3/0x589
   [] shrink_zone+0x5f/0x159
   [] balance_pgdat+0x32c/0x4fd
   [] kswapd+0x304/0x331
   [] kthread+0xf1/0xf9
   [] ret_from_fork+0x7c/0xb0

stack backtrace:
CPU: 3 PID: 48 Comm: kswapd0 Not tainted 3.14.0-rc1-mm1 #1
Hardware name: LENOVO 4174EH1/4174EH1, BIOS 8CET51WW (1.31 ) 11/29/2011
 0000000000000000 ffff880029e37688 ffffffff8158f143 ffff880029e39428
 ffff880029e37780 ffffffff810bb982 0000000000000000 0000000000000000
 ffff880000000001 0000000400000006 ffffffff817dac97 ffff880029e376d0
Call Trace:
 [] dump_stack+0x4e/0x7a
 [] check_usage+0x591/0x5a2
 [] check_irq_usage+0x54/0xa8
 [] validate_chain.isra.22+0x87c/0xe96
 [] __lock_acquire+0x85e/0x954
 [] lock_acquire+0x61/0x78
 [] ? list_lru_add+0x80/0xf4
 [] _raw_spin_lock+0x34/0x41
 [] ? list_lru_add+0x80/0xf4
 [] list_lru_add+0x80/0xf4
 [] __delete_from_page_cache+0x122/0x1cc
 [] __remove_mapping+0xf4/0x12d
 [] shrink_page_list+0x6e7/0x8db
 [] ? trace_hardirqs_on_caller+0x142/0x19e
 [] shrink_inactive_list+0x24e/0x391
 [] shrink_lruvec+0x3e3/0x589
 [] shrink_zone+0x5f/0x159
 [] balance_pgdat+0x32c/0x4fd
 [] kswapd+0x304/0x331
 [] ? abort_exclusive_wait+0x84/0x84
 [] ? balance_pgdat+0x4fd/0x4fd
 [] kthread+0xf1/0xf9
 [] ? _raw_spin_unlock_irq+0x27/0x46
 [] ? kthread_stop+0x5a/0x5a
 [] ret_from_fork+0x7c/0xb0
 [] ? kthread_stop+0x5a/0x5a
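
[Editor's note: for anyone skimming rather than decoding the whole splat, the
inversion reduces to the two acquisition orders sketched below.  This is a
condensed illustration only, assuming kernel context: reclaim_path() and
dcache_path() are hypothetical stand-ins for the call chains the trace names
(__remove_mapping() -> __delete_from_page_cache() -> list_lru_add() on one
side, dput() -> list_lru_add() racing the block-I/O completion softirq on the
other), not code from the report.]

/*
 * Illustrative sketch of the reported inversion; the lock names follow
 * the trace, the functions are hypothetical stand-ins.
 */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(tree_lock); /* mapping->tree_lock: also taken from
                                    * softirq (test_clear_page_writeback),
                                    * hence SOFTIRQ-safe, IRQs disabled */
static DEFINE_SPINLOCK(lru_lock);  /* lru->node[i].lock: only ever taken
                                    * with plain spin_lock(), hence
                                    * SOFTIRQ-unsafe */

/* The new dependency from reclaim: lru_lock is taken while the
 * SOFTIRQ-safe tree_lock is held with interrupts disabled. */
static void reclaim_path(void)
{
	spin_lock_irq(&tree_lock);
	spin_lock(&lru_lock);
	/* ... add page to the list_lru ... */
	spin_unlock(&lru_lock);
	spin_unlock_irq(&tree_lock);
}

/* The pre-existing usage, e.g. from dput(): lru_lock is held with
 * interrupts enabled.  If writeback completion runs in softirq here and
 * spins on tree_lock, while another CPU in reclaim_path() holds
 * tree_lock and spins on lru_lock, the two CPUs deadlock; that is the
 * CPU0/CPU1 scenario lockdep prints above. */
static void dcache_path(void)
{
	spin_lock(&lru_lock);
	/* <interrupt/softirq may run here and want tree_lock> */
	spin_unlock(&lru_lock);
}

[As far as the report itself goes, the implication is only that the two
properties cannot coexist: either the new tree_lock -> lru_lock nesting or
the IRQ-unsafe locking in list_lru has to give.]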