From: Sasha Levin
Subject: [PATCH AUTOSEL for 4.9 077/219] mm: Fix false-positive VM_BUG_ON() in page_cache_{get,add}_speculative()
Date: Sat, 3 Mar 2018 22:28:40 +0000
Message-ID: <20180303222716.26640-77-alexander.levin@microsoft.com>
In-Reply-To: <20180303222716.26640-1-alexander.levin@microsoft.com>
References: <20180303222716.26640-1-alexander.levin@microsoft.com>
Sender: owner-linux-mm@kvack.org
To: "linux-kernel@vger.kernel.org", "stable@vger.kernel.org"
Cc: "Kirill A. Shutemov", Andrew Morton, "Aneesh Kumar K. V",
    "Kirill A. Shutemov", LKP, Linus Torvalds, Peter Zijlstra,
    Thomas Gleixner, "linux-mm@kvack.org", Ingo Molnar, Sasha Levin

From: "Kirill A. Shutemov"

[ Upstream commit 591a3d7c09fa08baff48ad86c2347dbd28a52753 ]

0day testing by Fengguang Wu triggered this crash while running Trinity:

  kernel BUG at include/linux/pagemap.h:151!
  ...
  CPU: 0 PID: 458 Comm: trinity-c0 Not tainted 4.11.0-rc2-00251-g2947ba0 #1
  ...
  Call Trace:
   __get_user_pages_fast()
   get_user_pages_fast()
   get_futex_key()
   futex_requeue()
   do_futex()
   SyS_futex()
   do_syscall_64()
   entry_SYSCALL64_slow_path()

It's a VM_BUG_ON() caused by a false-negative in_atomic(): we call
page_cache_get_speculative() with local interrupts disabled, which
should be atomic enough.

So let's check for disabled interrupts in the VM_BUG_ON() condition
too, to resolve this.

( This got triggered by the conversion of the x86 GUP code to the
  generic GUP code. )

Reported-by: Fengguang Wu
Signed-off-by: Kirill A. Shutemov
Cc: Andrew Morton
Cc: Aneesh Kumar K.V
Cc: Kirill A. Shutemov
Cc: LKP
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20170324114709.pcytvyb3d6ajux33@black.fi.intel.com
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
---
 include/linux/pagemap.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 7dbe9148b2f8..35f4c4d9c405 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -148,7 +148,7 @@ static inline int page_cache_get_speculative(struct page *page)
 
 #ifdef CONFIG_TINY_RCU
 # ifdef CONFIG_PREEMPT_COUNT
-        VM_BUG_ON(!in_atomic());
+        VM_BUG_ON(!in_atomic() && !irqs_disabled());
 # endif
         /*
          * Preempt must be disabled here - we rely on rcu_read_lock doing
@@ -186,7 +186,7 @@ static inline int page_cache_add_speculative(struct page *page, int count)
 
 #if !defined(CONFIG_SMP) && defined(CONFIG_TREE_RCU)
 # ifdef CONFIG_PREEMPT_COUNT
-        VM_BUG_ON(!in_atomic());
+        VM_BUG_ON(!in_atomic() && !irqs_disabled());
 # endif
         VM_BUG_ON_PAGE(page_count(page) == 0, page);
         page_ref_add(page, count);
-- 
2.14.1
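
For readers skimming the diff: the trace above shows __get_user_pages_fast()
reaching page_cache_get_speculative() with local interrupts disabled but
without raising the preempt count, so in_atomic() alone reports a non-atomic
context and the old assertion misfires. The following is a minimal userspace
sketch (plain C, with in_atomic_flag/irqs_disabled_flag as stand-in booleans
rather than the real kernel helpers) of how the old and new VM_BUG_ON()
predicates behave in that situation:

  #include <stdbool.h>
  #include <stdio.h>

  /* Stand-ins for the kernel helpers, set to the values seen on the
   * gup-fast path: interrupts off, preempt count untouched. */
  static const bool in_atomic_flag = false;
  static const bool irqs_disabled_flag = true;

  int main(void)
  {
          /* Pre-patch condition: only the preempt count is consulted,
           * so this caller trips the assertion. */
          if (!in_atomic_flag)
                  printf("old condition fires (false positive)\n");

          /* Post-patch condition: a caller that has merely disabled
           * local interrupts is also accepted as atomic enough. */
          if (!in_atomic_flag && !irqs_disabled_flag)
                  printf("new condition fires\n");
          else
                  printf("new condition stays quiet\n");

          return 0;
  }

Built with any C compiler, the first message prints and the second does not,
which is exactly the false positive the backported hunks remove.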