Subject: Re: [PATCH 1/2] mm: reorganize internal_get_user_pages_fast()
From: John Hubbard
To: Christoph Hellwig, Jan Kara
CC: Jason Gunthorpe, Andrea Arcangeli, Andrew Morton, Hugh Dickins, Jann Horn, Kirill Shutemov, Kirill Tkhai, Linux-MM, Michal Hocko, Oleg Nesterov, Peter Xu
Date: Tue, 27 Oct 2020 23:00:40 -0700
In-Reply-To: <20201027095545.GA30382@lst.de>
References: <1-v1-281e425c752f+2df-gup_fork_jgg@nvidia.com> <16c50bb0-431d-5bfb-7b80-a8af0b4da90f@nvidia.com> <20201027093301.GA16090@quack2.suse.cz> <20201027095545.GA30382@lst.de>
On 10/27/20 2:55 AM, Christoph Hellwig wrote:
> On Tue, Oct 27, 2020 at 10:33:01AM +0100, Jan Kara wrote:
>> Actually there are callers that care about partial success. See e.g.
>> iov_iter_get_pages() usage in fs/direct_io.c:dio_refill_pages() or
>> bio_iov_iter_get_pages(). These places handle partial success just fine and
>> not allowing partial success from GUP could regress things...

Good point. Those also happen to be the key call sites that I haven't yet
converted to pin_user_pages*(). Seeing as how I'm three versions into
attempting to convert the various *iov_iter*() routines, I should have
remembered that they are all about partial success. :)

> But most users do indeed not care. Maybe an explicit FOLL_PARTIAL to
> opt into partial handling could clean up a lot of the mess. Maybe just
> for pin_user_pages for now.

That does seem like the perfect mix. IIRC, none of the pin_user_pages()
call sites today accept partial success (and that's easy enough to audit
and confirm). So there is likely no need to add FOLL_PARTIAL there, and
no great danger of regressions. It would definitely reduce the line count
at multiple call sites, in return for adding some lines to gup.c. Maybe it
can go further at some point, but that's a good way to start.

I'm leaning toward just sending out a small series to do that, unless
there are objections and/or better ways to improve this area...

thanks,
--
John Hubbard
NVIDIA