Date: Wed, 11 Dec 2019 12:28:07 +0100
From: Jan Kara
To: John Hubbard
Cc: Andrew Morton, Al Viro, Alex Williamson, Benjamin Herrenschmidt,
	Björn Töpel, Christoph Hellwig, Dan Williams, Daniel Vetter,
	Dave Chinner, David Airlie, "David S. Miller", Ira Weiny, Jan Kara,
	Jason Gunthorpe, Jens Axboe, Jonathan Corbet, Jérôme Glisse,
	Magnus Karlsson, Mauro Carvalho Chehab, Michael Ellerman,
	Michal Hocko, Mike Kravetz, Paul Mackerras, Shuah Khan,
	Vlastimil Babka, bpf@vger.kernel.org, dri-devel@lists.freedesktop.org,
	kvm@vger.kernel.org, linux-block@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
	linux-rdma@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	netdev@vger.kernel.org, linux-mm@kvack.org, LKML,
	"Kirill A. Shutemov"
Subject: Re: [PATCH v9 23/25] mm/gup: track FOLL_PIN pages
Message-ID: <20191211112807.GN1551@quack2.suse.cz>
References: <20191211025318.457113-1-jhubbard@nvidia.com>
	<20191211025318.457113-24-jhubbard@nvidia.com>
In-Reply-To: <20191211025318.457113-24-jhubbard@nvidia.com>

On Tue 10-12-19 18:53:16, John Hubbard wrote:
> Add tracking of pages that were pinned via FOLL_PIN.
>
> As mentioned in the FOLL_PIN documentation, callers who effectively set
> FOLL_PIN are required to ultimately free such pages via unpin_user_page().
> The effect is similar to FOLL_GET, and may be thought of as "FOLL_GET
> for DIO and/or RDMA use".
>
> Pages that have been pinned via FOLL_PIN are identifiable via a
> new function call:
>
>     bool page_dma_pinned(struct page *page);
>
> What to do in response to encountering such a page, is left to later
> patchsets. There is discussion about this in [1], [2], and [3].
>
> This also changes a BUG_ON(), to a WARN_ON(), in follow_page_mask().
>
> [1] Some slow progress on get_user_pages() (Apr 2, 2019):
>     https://lwn.net/Articles/784574/
> [2] DMA and get_user_pages() (LPC: Dec 12, 2018):
>     https://lwn.net/Articles/774411/
> [3] The trouble with get_user_pages() (Apr 30, 2018):
>     https://lwn.net/Articles/753027/

The patch looks mostly good to me now. Just a few smaller comments below.

> Suggested-by: Jan Kara
> Suggested-by: Jérôme Glisse
> Reviewed-by: Jan Kara
> Reviewed-by: Jérôme Glisse
> Reviewed-by: Ira Weiny

I think you inherited the Reviewed-by tags here from the "add flags" patch
you've merged into this one, but that's not really fair since this patch
does much more... In particular, I didn't give my Reviewed-by tag for this
patch yet.

> +/*
> + * try_grab_compound_head() - attempt to elevate a page's refcount, by a
> + * flags-dependent amount.
> + *
> + * This has a default assumption of "use FOLL_GET behavior, if FOLL_PIN is not
> + * set".
> + *
> + * "grab" names in this file mean, "look at flags to decide whether to use
> + * FOLL_PIN or FOLL_GET behavior, when incrementing the page's refcount.
> + */
> +static __maybe_unused struct page *try_grab_compound_head(struct page *page,
> +							   int refs,
> +							   unsigned int flags)
> +{
> +	if (flags & FOLL_PIN)
> +		return try_pin_compound_head(page, refs);
> +
> +	return try_get_compound_head(page, refs);
> +}

I somewhat wonder about the asymmetry of try_grab_compound_head() vs
try_grab_page() in the treatment of 'flags'. How costly would it be to make
them symmetric (i.e., either set FOLL_GET for try_grab_compound_head()
callers, or make sure one of FOLL_GET, FOLL_PIN is set for try_grab_page())?
Because this difference looks like a subtle catch in the long run...
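E.g., something like this on the try_grab_page() side would be one way to do
it (completely untested, just rearranging the code from your patch to show
what I mean; the other option of requiring try_grab_compound_head() callers
to always pass FOLL_GET or FOLL_PIN explicitly would be equally fine):

bool __must_check try_grab_page(struct page *page, unsigned int flags)
{
	/*
	 * Symmetric with try_grab_compound_head(): insist that the caller
	 * passes exactly one of FOLL_GET / FOLL_PIN instead of silently
	 * doing nothing when neither is set.
	 */
	if (WARN_ON_ONCE(!(flags & (FOLL_GET | FOLL_PIN))))
		return false;
	WARN_ON_ONCE((flags & FOLL_GET) && (flags & FOLL_PIN));

	if (flags & FOLL_PIN) {
		page = compound_head(page);
		if (WARN_ON_ONCE(page_ref_zero_or_close_to_bias_overflow(page)))
			return false;
		page_ref_add(page, GUP_PIN_COUNTING_BIAS);
		__update_proc_vmstat(page, NR_FOLL_PIN_REQUESTED, 1);
		return true;
	}

	return try_get_page(page);
}

The cost I'm asking about is, of course, that the callers which currently
pass neither flag would then have to either pass FOLL_GET or skip the call
altogether.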
> +
> +/**
> + * try_grab_page() - elevate a page's refcount by a flag-dependent amount
> + *
> + * This might not do anything at all, depending on the flags argument.
> + *
> + * "grab" names in this file mean, "look at flags to decide whether to use
> + * FOLL_PIN or FOLL_GET behavior, when incrementing the page's refcount.
> + *
> + * @page: pointer to page to be grabbed
> + * @flags: gup flags: these are the FOLL_* flag values.
> + *
> + * Either FOLL_PIN or FOLL_GET (or neither) may be set, but not both at the same
> + * time. (That's true throughout the get_user_pages*() and pin_user_pages*()
> + * APIs.) Cases:
> + *
> + *    FOLL_GET: page's refcount will be incremented by 1.
> + *    FOLL_PIN: page's refcount will be incremented by GUP_PIN_COUNTING_BIAS.
> + *
> + * Return: true for success, or if no action was required (if neither FOLL_PIN
> + * nor FOLL_GET was set, nothing is done). False for failure: FOLL_GET or
> + * FOLL_PIN was set, but the page could not be grabbed.
> + */
> +bool __must_check try_grab_page(struct page *page, unsigned int flags)
> +{
> +	if (flags & FOLL_GET)
> +		return try_get_page(page);
> +	else if (flags & FOLL_PIN) {
> +		page = compound_head(page);
> +		WARN_ON_ONCE(flags & FOLL_GET);
> +
> +		if (WARN_ON_ONCE(page_ref_zero_or_close_to_bias_overflow(page)))
> +			return false;
> +
> +		page_ref_add(page, GUP_PIN_COUNTING_BIAS);
> +		__update_proc_vmstat(page, NR_FOLL_PIN_REQUESTED, 1);
> +	}
> +
> +	return true;
> +}

...

> @@ -1522,8 +1536,8 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
>  skip_mlock:
>  	page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
>  	VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
> -	if (flags & FOLL_GET)
> -		get_page(page);
> +	if (!try_grab_page(page, flags))
> +		page = ERR_PTR(-EFAULT);

I think you need to also move the try_grab_page() earlier in the function.
At this point the page may be marked as mlocked and you'd need to undo that
in case try_grab_page() fails.
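Roughly like this (untested; pmd_page() and the PageHead check are just the
surrounding context of follow_trans_huge_pmd() as I remember it, not part of
your hunk):

	page = pmd_page(*pmd);
	VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);

	/*
	 * Take the FOLL_GET/FOLL_PIN reference up front, so that a failure
	 * can simply bail out before the page gets mlocked below.
	 */
	if (!try_grab_page(page, flags))
		return ERR_PTR(-EFAULT);

	/*
	 * ... FOLL_TOUCH / FOLL_MLOCK handling and the skip_mlock: tail stay
	 * as they are, minus the grabbing of the reference at the end ...
	 */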
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index ac65bb5e38ac..0aab6fe0072f 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4356,7 +4356,13 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  same_page:
>  		if (pages) {
>  			pages[i] = mem_map_offset(page, pfn_offset);
> -			get_page(pages[i]);
> +			if (!try_grab_page(pages[i], flags)) {
> +				spin_unlock(ptl);
> +				remainder = 0;
> +				err = -ENOMEM;
> +				WARN_ON_ONCE(1);
> +				break;
> +			}
>  		}

This function does a refcount overflow check early so that it doesn't have
to do try_get_page() here. So that check can now be removed when you do
try_grab_page() here anyway, since that early check seems to be just a tiny
optimization AFAICT.

								Honza
-- 
Jan Kara
SUSE Labs, CR