Subject: Re: [PATCH 1/3] mm/gup: track FOLL_PIN pages
From: John Hubbard
To: Jan Kara
CC: Andrew Morton, Al Viro, Christoph Hellwig, Dan Williams, Dave Chinner,
 Ira Weiny, Jason Gunthorpe, Jonathan Corbet, Jérôme Glisse,
 Kirill A. Shutemov, Michal Hocko, Mike Kravetz, Shuah Khan,
 Vlastimil Babka, linux-mm, LKML
Date: Mon, 27 Jan 2020 10:17:51 -0800
Message-ID: <132c05dc-ee18-029e-4f04-4a7cf532dd9d@nvidia.com>
In-Reply-To: <20200127110624.GD19414@quack2.suse.cz>
References: <20200125021115.731629-1-jhubbard@nvidia.com>
 <20200125021115.731629-2-jhubbard@nvidia.com>
 <20200127110624.GD19414@quack2.suse.cz>

On 1/27/20 3:06 AM, Jan Kara wrote:
> On Fri 24-01-20 18:11:13, John Hubbard wrote:
>> +static __maybe_unused struct page *try_grab_compound_head(struct page *page,
>> +							   int refs,
>> +							   unsigned int flags)
>> +{
>> +	if (flags & FOLL_GET)
>> +		return try_get_compound_head(page, refs);
>> +	else if (flags & FOLL_PIN) {
>> +		int orig_refs = refs;
>> +
>> +		/*
>> +		 * When pinning a compound page of order > 1 (which is what
>> +		 * hpage_pincount_available() checks for), use an exact count to
>> +		 * track it, via hpage_pincount_inc/_dec().
>> +		 *
>> +		 * However, be sure to *also* increment the normal page refcount
>> +		 * field at least once, so that the page really is pinned.
>> +		 */
>> +		if (!hpage_pincount_available(page))
>> +			refs *= GUP_PIN_COUNTING_BIAS;
>> +
>> +		page = try_get_compound_head(page, refs);
>> +		if (!page)
>> +			return NULL;
>> +
>> +		if (hpage_pincount_available(page))
>> +			hpage_pincount_inc(page);
>
> Umm, adding just 1 to pincount looks dangerous to me, as
> try_grab_compound_head() would not compose - you could not release
> references acquired by try_grab_compound_head() with refs==2 by two calls
> to put_compound_head() with refs==1. So I'd rather have here:
> hpage_pincount_add(page, refs), and similarly in put_compound_head().
> Otherwise the patch looks good to me from a quick look.
>
> 								Honza

Yes, you are right. The hpage_pincount really should track refs. I'll
fix it up.

thanks,
-- 
John Hubbard
NVIDIA