Return-Path:
Received: from mail137.messagelabs.com (mail137.messagelabs.com [216.82.249.19]) by kanga.kvack.org (Postfix) with SMTP id 64E166B004D for ; Tue, 10 Nov 2009 11:58:16 -0500 (EST)
Subject: [PATCH] prevent deadlock in __unmap_hugepage_range() when alloc_huge_page() fails.
From: Larry Woodman
Date: Tue, 10 Nov 2009 12:00:56 -0500
Message-Id: <1257872456.3227.2.camel@dhcp-100-19-198.bos.redhat.com>
Sender: owner-linux-mm@kvack.org
To: "linux-kernel@vger.kernel.org" , "linux-mm@kvack.org"

hugetlb_fault() takes the mm->page_table_lock spinlock and then calls hugetlb_cow(). If the alloc_huge_page() call in hugetlb_cow() fails because the huge page pool is exhausted, hugetlb_cow() calls unmap_ref_private() with the mm->page_table_lock still held. unmap_ref_private() then calls unmap_hugepage_range(), which tries to acquire the mm->page_table_lock a second time:

 [] print_circular_bug_tail+0x80/0x9f
 [] ? check_noncircular+0xb0/0xe8
 [] __lock_acquire+0x956/0xc0e
 [] lock_acquire+0xee/0x12e
 [] ? unmap_hugepage_range+0x3e/0x84
 [] ? unmap_hugepage_range+0x3e/0x84
 [] _spin_lock+0x40/0x89
 [] ? unmap_hugepage_range+0x3e/0x84
 [] ? alloc_huge_page+0x218/0x318
 [] unmap_hugepage_range+0x3e/0x84
 [] hugetlb_cow+0x1e2/0x3f4
 [] ? hugetlb_fault+0x453/0x4f6
 [] hugetlb_fault+0x480/0x4f6
 [] follow_hugetlb_page+0x116/0x2d9
 [] ? _spin_unlock_irq+0x3a/0x5c
 [] __get_user_pages+0x2a3/0x427
 [] get_user_pages+0x3e/0x54
 [] get_user_pages_fast+0x170/0x1b5
 [] dio_get_page+0x64/0x14a
 [] __blockdev_direct_IO+0x4b7/0xb31
 [] blkdev_direct_IO+0x58/0x6e
 [] ? blkdev_get_blocks+0x0/0xb8
 [] generic_file_aio_read+0xdd/0x528
 [] ? avc_has_perm+0x66/0x8c
 [] do_sync_read+0xf5/0x146
 [] ? autoremove_wake_function+0x0/0x5a
 [] ? security_file_permission+0x24/0x3a
 [] vfs_read+0xb5/0x126
 [] ? fget_light+0x5e/0xf8
 [] sys_read+0x54/0x8c
 [] system_call_fastpath+0x16/0x1b

This can be fixed by dropping the mm->page_table_lock around the call to unmap_ref_private() when alloc_huge_page() fails; it is dropped just below that point on the normal path anyway:

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5d7601b..f4daef4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1973,12 +1973,15 @@ retry_avoidcopy:
 	 */
 	if (outside_reserve) {
 		BUG_ON(huge_pte_none(pte));
+		spin_unlock(&mm->page_table_lock);
 		if (unmap_ref_private(mm, vma, old_page, address)) {
+			spin_lock(&mm->page_table_lock);
 			BUG_ON(page_count(old_page) != 1);
 			BUG_ON(huge_pte_none(pte));
 			goto retry_avoidcopy;
 		}
 		WARN_ON_ONCE(1);
+		spin_lock(&mm->page_table_lock);
 	}

 	return -PTR_ERR(new_page);

Signed-off-by: Larry Woodman

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org.
For more info on Linux MM, see: http://www.linux-mm.org/ .
Don't email: email@kvack.org