Date: Wed, 23 Oct 2019 09:25:49 -0700
From: Kees Cook
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Matthew Wilcox,
    Michal Hocko, Andrew Morton, kvm-ppc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
    linux-hyperv@vger.kernel.org, devel@driverdev.osuosl.org,
    xen-devel@lists.xenproject.org, x86@kernel.org, Alexander Duyck,
    Alex Williamson, Allison Randal, Andy Lutomirski, "Aneesh Kumar K.V",
    Anshuman Khandual, Anthony Yznaga, Ben Chan, Benjamin Herrenschmidt,
    Borislav Petkov, Boris Ostrovsky, Christophe Leroy, Cornelia Huck,
    Dan Carpenter, Dan Williams, Dave Hansen, Fabio Estevam,
    Greg Kroah-Hartman, Haiyang Zhang, "H. Peter Anvin", Ingo Molnar,
    "Isaac J. Manjarres", Jeremy Sowden, Jim Mattson, Joerg Roedel,
    Johannes Weiner, Juergen Gross, KarimAllah Ahmed, Kate Stewart,
    "K. Y. Srinivasan", Madhumitha Prabakaran, Matt Sickler, Mel Gorman,
    Michael Ellerman, Mike Rapoport, Nicholas Piggin, Nishka Dasgupta,
    Oscar Salvador, Paolo Bonzini, Paul Mackerras, Pavel Tatashin,
    Peter Zijlstra, Qian Cai, Radim Krčmář, Rob Springer, Sasha Levin,
    Sean Christopherson, Simon Sandström, Stefano Stabellini,
    Stephen Hemminger, Thomas Gleixner, Todd Poynor, Vandana BN,
    Vitaly Kuznetsov, Vlastimil Babka, Wanpeng Li, YueHaibing
Subject: Re: [PATCH RFC v1 02/12] mm/usercopy.c: Prepare check_page_span() for PG_reserved changes
Message-ID: <201910230924.DE879ED80F@keescook>
References: <20191022171239.21487-1-david@redhat.com>
 <20191022171239.21487-3-david@redhat.com>

On Wed, Oct 23, 2019 at 10:20:14AM +0200, David Hildenbrand wrote:
> On 22.10.19 19:12, David Hildenbrand wrote:
> > Right now, ZONE_DEVICE memory is always set PG_reserved. We want to
> > change that.
> > 
> > Let's make sure that the logic in the function won't change. Once we no
> > longer set these pages to reserved, we can rework this function to
> > perform separate checks for ZONE_DEVICE (split from PG_reserved checks).
> > 
> > Cc: Kees Cook
> > Cc: Andrew Morton
> > Cc: Kate Stewart
> > Cc: Allison Randal
> > Cc: "Isaac J. Manjarres"
> > Cc: Qian Cai
> > Cc: Thomas Gleixner
> > Signed-off-by: David Hildenbrand
> > ---
> >  mm/usercopy.c | 5 +++--
> >  1 file changed, 3 insertions(+), 2 deletions(-)
> > 
> > diff --git a/mm/usercopy.c b/mm/usercopy.c
> > index 660717a1ea5c..a3ac4be35cde 100644
> > --- a/mm/usercopy.c
> > +++ b/mm/usercopy.c
> > @@ -203,14 +203,15 @@ static inline void check_page_span(const void *ptr, unsigned long n,
> >  	 * device memory), or CMA. Otherwise, reject since the object spans
> >  	 * several independently allocated pages.
> >  	 */
> > -	is_reserved = PageReserved(page);
> > +	is_reserved = PageReserved(page) || is_zone_device_page(page);
> >  	is_cma = is_migrate_cma_page(page);
> >  	if (!is_reserved && !is_cma)
> >  		usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
> > 
> >  	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
> >  		page = virt_to_head_page(ptr);
> > -		if (is_reserved && !PageReserved(page))
> > +		if (is_reserved && !(PageReserved(page) ||
> > +				     is_zone_device_page(page)))
> >  			usercopy_abort("spans Reserved and non-Reserved pages",
> >  				       NULL, to_user, 0, n);
> >  		if (is_cma && !is_migrate_cma_page(page))
> > 
> 
> @Kees, would it be okay to stop checking against ZONE_DEVICE pages here or
> is there a good rationale behind this?
> 
> (I would turn this patch into a simple update of the comment if we agree
> that we don't care)

There has been work to actually remove the page span checks entirely, but
there wasn't consensus on what the right way forward was. I continue to
lean toward just dropping it entirely, but Matthew Wilcox has some
alternative ideas that could use some further thought/testing.

-- 
Kees Cook
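A minimal sketch of the combined test the quoted diff introduces, assuming
PageReserved() and is_zone_device_page() as exposed through <linux/mm.h>;
the helper name is hypothetical, since the real check_page_span() in
mm/usercopy.c open-codes this test inline:

#include <linux/mm.h>	/* struct page, PageReserved(), is_zone_device_page() */

/*
 * Hypothetical helper, for illustration only: the "reserved-like" test
 * the diff applies both to the object's first page and to every
 * following page, so ZONE_DEVICE pages keep passing the span check once
 * they are no longer marked PG_reserved.
 */
static inline bool page_span_reserved_like(struct page *page)
{
	return PageReserved(page) || is_zone_device_page(page);
}

With such a helper, both hunks would collapse to a single condition, which
is roughly the rework the changelog defers until ZONE_DEVICE pages actually
stop being set PG_reserved.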