From: Kees Cook
Date: Thu, 14 Jul 2016 21:05:25 -0700
Subject: Re: [PATCH v2 02/11] mm: Hardened usercopy
To: bsingharora@gmail.com
Cc: Rik van Riel, LKML, Casey Schaufler, PaX Team, Brad Spengler,
    Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
    Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
    "David S. Miller", x86@kernel.org, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
    Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
    Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
    linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, sparclinux, linux-arch, Linux-MM,
    kernel-hardening@lists.openwall.com

On Thu, Jul 14, 2016 at 6:41 PM, Balbir Singh wrote:
> On Thu, Jul 14, 2016 at 09:04:18PM -0400, Rik van Riel wrote:
>> On Fri, 2016-07-15 at 09:20 +1000, Balbir Singh wrote:
>>
>> > > +	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
>> > > +		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
>> > > +		return NULL;
>> > > +
>> > > +	/* Allow if start and end are inside the same compound page. */
>> > > +	endpage = virt_to_head_page(end);
>> > > +	if (likely(endpage == page))
>> > > +		return NULL;
>> > > +
>> > > +	/* Allow special areas, device memory, and sometimes kernel data. */
>> > > +	if (PageReserved(page) && PageReserved(endpage))
>> > > +		return NULL;
>> >
>> > If we came here, it's likely that endpage > page. Should we be
>> > checking only that the first and last pages are reserved? What
>> > about the ones in the middle?
>>
>> I think this will be so rare, we can get away with just
>> checking the beginning and the end.
>>
> But do we want to leave a hole where an aware user space
> can try a longer copy_* to avoid this check? If it is unlikely,
> should we just bite the bullet and do the check for the entire
> range?

I'd be okay with expanding the test -- it should be an extremely rare
situation already, since the common Reserved areas (kernel data) will
have already been explicitly tested. What's the best way to do "next
page"? Should it just be:

	for ( ; page <= endpage;
	     ptr += PAGE_SIZE, page = virt_to_head_page(ptr)) {
		if (!PageReserved(page))
			return "<spans multiple pages>";
	}
	return NULL;

?

-- 
Kees Cook
Chrome OS & Brillo Security
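
For illustration, here is a minimal user-space model of the two checks
under discussion (PageReserved() is stubbed with a plain reserved[]
array, and all names and the page layout are made up for the example;
this is a sketch of the idea, not the kernel code). It demonstrates the
hole Balbir describes: a span whose first and last pages are Reserved
but whose middle page is not passes the endpoint-only test yet fails
the whole-range walk.

	#include <stdbool.h>
	#include <stdio.h>

	#define NPAGES 8

	/* Stand-in for PageReserved(): page 2 is the unreserved hole. */
	static const bool reserved[NPAGES] = {
		true, true, false, true, true, true, true, true,
	};

	/* Endpoint-only test, as in the v2 patch. */
	static bool span_ok_endpoints(unsigned int first, unsigned int last)
	{
		return reserved[first] && reserved[last];
	}

	/* Whole-range walk, as proposed in the thread. */
	static bool span_ok_full(unsigned int first, unsigned int last)
	{
		for (unsigned int i = first; i <= last; i++)
			if (!reserved[i])
				return false;
		return true;
	}

	int main(void)
	{
		unsigned int first = 0, last = 4; /* spans the hole at page 2 */

		printf("endpoint-only check: %s\n",
		       span_ok_endpoints(first, last) ? "allowed" : "rejected");
		printf("whole-range check:   %s\n",
		       span_ok_full(first, last) ? "allowed" : "rejected");
		return 0;
	}

Compiled with any C compiler (e.g. cc model.c && ./a.out), the
endpoint-only version allows the span while the full walk rejects it,
which is exactly the copy_* bypass the thread is worried about.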