Subject: Re: [PATCH v3 02/11] mm: Hardened usercopy
From: Kees Cook
Date: Tue, 19 Jul 2016 15:55:54 -0700
To: Laura Abbott
Cc: LKML, Balbir Singh, Daniel Micay, Josh Poimboeuf, Rik van Riel,
 Casey Schaufler, PaX Team, Brad Spengler, Russell King, Catalin Marinas,
 Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
 Tony Luck, Fenghua Yu, "David S. Miller", x86@kernel.org,
 Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause,
 Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
 linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, sparclinux, linux-arch, Linux-MM,
 kernel-hardening@lists.openwall.com

On Tue, Jul 19, 2016 at 12:12 PM, Kees Cook wrote:
> On Mon, Jul 18, 2016 at 6:52 PM, Laura Abbott wrote:
>> On 07/15/2016 02:44 PM, Kees Cook wrote:
>>> +static inline const char *check_heap_object(const void *ptr, unsigned long n,
>>> +                                            bool to_user)
>>> +{
>>> +       struct page *page, *endpage;
>>> +       const void *end = ptr + n - 1;
>>> +
>>> +       if (!virt_addr_valid(ptr))
>>> +               return NULL;
>>> +
>>
>> virt_addr_valid returns true on vmalloc addresses on arm64, which causes
>> some intermittent false positives (tab completion in a qemu buildroot
>> environment was showing it fairly reliably). I think this is an arm64 bug,
>> because virt_addr_valid should return true if and only if virt_to_page
>> returns the corresponding page. We can work around this for now by
>> explicitly checking against is_vmalloc_addr.
>
> Hrm, that's weird. Sounds like a bug too, but I'll add a check for
> is_vmalloc_addr() to catch it for now.

BTW, if you were testing against -next, KASAN moved things around in
copy_*_user() in a way I wasn't expecting (__copy* and copy* now both
call __arch_copy* instead of copy* calling __copy*). I'll have this
fixed in the next version.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security
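
For reference, a minimal sketch of the stop-gap discussed above: rejecting
vmalloc addresses explicitly before the heap-object checks. This is only an
illustration of the idea, assuming the new test sits right after the existing
virt_addr_valid() check in check_heap_object(); the actual hunk in the next
version of the series may differ.

#include <linux/mm.h>
#include <linux/vmalloc.h>

static inline const char *check_heap_object(const void *ptr, unsigned long n,
					    bool to_user)
{
	/* Not a valid linear-map address: nothing the heap check can do. */
	if (!virt_addr_valid(ptr))
		return NULL;

	/*
	 * Work around arm64's virt_addr_valid() also returning true for
	 * vmalloc addresses: skip them explicitly so the virt_to_page()-based
	 * checks below are never applied to non-linear mappings.
	 */
	if (is_vmalloc_addr(ptr))
		return NULL;

	/* ... remaining slab and page-span checks from the patch ... */

	return NULL;
}

Returning NULL here means "nothing to report": addresses that cannot be
resolved through virt_to_page() are simply left to the other usercopy checks
rather than being treated as a violation by the heap check.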