Date: Tue, 10 Dec 2019 14:39:32 +0100
From: Jan Kara
To: John Hubbard
Cc: Andrew Morton, Al Viro, Alex Williamson, Benjamin Herrenschmidt,
 Björn Töpel, Christoph Hellwig, Dan Williams, Daniel Vetter,
 Dave Chinner, David Airlie, "David S. Miller", Ira Weiny, Jan Kara,
 Jason Gunthorpe, Jens Axboe, Jonathan Corbet, Jérôme Glisse,
 Magnus Karlsson, Mauro Carvalho Chehab, Michael Ellerman,
 Michal Hocko, Mike Kravetz, Paul Mackerras, Shuah Khan,
 Vlastimil Babka, bpf@vger.kernel.org, dri-devel@lists.freedesktop.org,
 kvm@vger.kernel.org, linux-block@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
 linux-rdma@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 netdev@vger.kernel.org, linux-mm@kvack.org, LKML
Subject: Re: [PATCH v8 24/26] mm/gup: track FOLL_PIN pages
Message-ID: <20191210133932.GH1551@quack2.suse.cz>
In-Reply-To: <20191209225344.99740-25-jhubbard@nvidia.com>
References: <20191209225344.99740-1-jhubbard@nvidia.com>
 <20191209225344.99740-25-jhubbard@nvidia.com>

On Mon 09-12-19 14:53:42, John Hubbard wrote:
> Add tracking of pages that were pinned via FOLL_PIN.
>
> As mentioned in the FOLL_PIN documentation, callers who effectively set
> FOLL_PIN are required to ultimately free such pages via unpin_user_page().
> The effect is similar to FOLL_GET, and may be thought of as "FOLL_GET
> for DIO and/or RDMA use".
>
> Pages that have been pinned via FOLL_PIN are identifiable via a
> new function call:
>
>     bool page_dma_pinned(struct page *page);
>
> What to do in response to encountering such a page, is left to later
> patchsets. There is discussion about this in [1], [2], and [3].
>
> This also changes a BUG_ON(), to a WARN_ON(), in follow_page_mask().
>
> [1] Some slow progress on get_user_pages() (Apr 2, 2019):
>     https://lwn.net/Articles/784574/
> [2] DMA and get_user_pages() (LPC: Dec 12, 2018):
>     https://lwn.net/Articles/774411/
> [3] The trouble with get_user_pages() (Apr 30, 2018):
>     https://lwn.net/Articles/753027/
>
> Suggested-by: Jan Kara
> Suggested-by: Jérôme Glisse
> Signed-off-by: John Hubbard

Looks nice, some comments below...

> +/*
> + * try_grab_compound_head() - attempt to elevate a page's refcount, by a
> + * flags-dependent amount.
> + *
> + * This has a default assumption of "use FOLL_GET behavior, if FOLL_PIN is not
> + * set".
> + *
> + * "grab" names in this file mean, "look at flags to decide with to use FOLL_PIN
> + * or FOLL_GET behavior, when incrementing the page's refcount.
> + */
> +static struct page *try_grab_compound_head(struct page *page, int refs,
> +					   unsigned int flags)
> +{
> +	if (flags & FOLL_PIN)
> +		return try_pin_compound_head(page, refs);
> +
> +	return try_get_compound_head(page, refs);
> +}
> +
> +/**
> + * grab_page() - elevate a page's refcount by a flag-dependent amount
> + *
> + * This might not do anything at all, depending on the flags argument.
> + *
> + * "grab" names in this file mean, "look at flags to decide with to use FOLL_PIN
                                                              ^^^ whether
> + * or FOLL_GET behavior, when incrementing the page's refcount.
> + *
> + * @page:  pointer to page to be grabbed
> + * @flags: gup flags: these are the FOLL_* flag values.
> + *
> + * Either FOLL_PIN or FOLL_GET (or neither) may be set, but not both at the
> + * same time. (That's true throughout the get_user_pages*() and
> + * pin_user_pages*() APIs.) Cases:
> + *
> + *	FOLL_GET: page's refcount will be incremented by 1.
> + *	FOLL_PIN: page's refcount will be incremented by GUP_PIN_COUNTING_BIAS.
> + */
> +void grab_page(struct page *page, unsigned int flags)
> +{
> +	if (flags & FOLL_GET)
> +		get_page(page);
> +	else if (flags & FOLL_PIN) {
> +		get_page(page);
> +		WARN_ON_ONCE(flags & FOLL_GET);
> +		/*
> +		 * Use get_page(), above, to do the refcount error
> +		 * checking. Then just add in the remaining references:
> +		 */
> +		page_ref_add(page, GUP_PIN_COUNTING_BIAS - 1);

This is wrong for two reasons:

1) You miss the compound_head() indirection from get_page() for this
   page_ref_add().

2) page_ref_add() could overflow the counter without noticing.

Especially with GUP_PIN_COUNTING_BIAS being non-trivial, it is realistic
that an attacker might try to overflow the page refcount and we have to
protect the kernel against that. So I think that all the places that would
use grab_page() actually need to use try_grab_page() and then gracefully
deal with the failure.

> @@ -278,11 +425,23 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
>  		goto retry;
>  	}
>  
> -	if (flags & FOLL_GET) {
> +	if (flags & (FOLL_PIN | FOLL_GET)) {
> +		/*
> +		 * Allow try_get_page() to take care of error handling, for
> +		 * both cases: FOLL_GET or FOLL_PIN:
> +		 */
>  		if (unlikely(!try_get_page(page))) {
>  			page = ERR_PTR(-ENOMEM);
>  			goto out;
>  		}
> +
> +		if (flags & FOLL_PIN) {
> +			WARN_ON_ONCE(flags & FOLL_GET);
> +
> +			/* We got a +1 refcount from try_get_page(), above. */
> +			page_ref_add(page, GUP_PIN_COUNTING_BIAS - 1);
> +			__update_proc_vmstat(page, NR_FOLL_PIN_REQUESTED, 1);
> +		}
> +	}

The same problem here as above, plus this place should use the same
try_grab..() helper, shouldn't it?
> @@ -544,8 +703,8 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
>  		/* make this handle hugepd */
>  		page = follow_huge_addr(mm, address, flags & FOLL_WRITE);
>  		if (!IS_ERR(page)) {
> -			BUG_ON(flags & FOLL_GET);
> -			return page;
> +			WARN_ON_ONCE(flags & (FOLL_GET | FOLL_PIN));
> +			return NULL;

I agree with the change to WARN_ON_ONCE but why is the change of the
return value correct? Note that this is actually a "success branch".

								Honza
-- 
Jan Kara
SUSE Labs, CR