Date: Tue, 27 Oct 2020 10:15:09 -0300
From: Jason Gunthorpe
To: Jan Kara
Cc: John Hubbard, linux-kernel@vger.kernel.org, Andrea Arcangeli,
        Andrew Morton, Christoph Hellwig, Hugh Dickins, Jann Horn,
        Kirill Shutemov, Kirill Tkhai, Linux-MM, Michal Hocko,
        Oleg Nesterov, Peter Xu
Subject: Re: [PATCH 1/2] mm: reorganize internal_get_user_pages_fast()
Message-ID: <20201027131509.GU36674@ziepe.ca>
References: <1-v1-281e425c752f+2df-gup_fork_jgg@nvidia.com>
        <16c50bb0-431d-5bfb-7b80-a8af0b4da90f@nvidia.com>
        <20201027093301.GA16090@quack2.suse.cz>
In-Reply-To: <20201027093301.GA16090@quack2.suse.cz>

On Tue, Oct 27, 2020 at 10:33:01AM +0100, Jan Kara wrote:
> On Fri 23-10-20 21:44:17, John Hubbard wrote:
> > On 10/23/20 5:19 PM, Jason Gunthorpe wrote:
> > > +        start += (unsigned long)nr_pinned << PAGE_SHIFT;
> > > +        pages += nr_pinned;
> > > +        ret = __gup_longterm_unlocked(start, nr_pages - nr_pinned, gup_flags,
> > > +                                      pages);
> > > +        if (ret < 0) {
> > >                  /* Have to be a bit careful with return values */
> >
> > ...and can we move that comment up one level, so that it reads:
> >
> >        /* Have to be a bit careful with return values */
> >        if (ret < 0) {
> >                if (nr_pinned)
> >                        return nr_pinned;
> >                return ret;
> >        }
> >        return ret + nr_pinned;
> >
> > Thinking about this longer term, it would be nice if the whole gup/pup API
> > set just stopped pretending that anyone cares about partial success, because
> > they *don't*. If we had return values of "0 or -ERRNO" throughout, and an
> > additional set of API wrappers that did some sort of limited retry just like
> > some of the callers do, that would be a happier story.
>
> Actually there are callers that care about partial success. See e.g.
> iov_iter_get_pages() usage in fs/direct_io.c:dio_refill_pages() or
> bio_iov_iter_get_pages(). These places handle partial success just fine and
> not allowing partial success from GUP could regress things...

I looked through a bunch of call sites, and there are a whack of them
that actually only want a complete return and are carrying a bunch of
code to fix it:

        pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
        if (!pvec)
                return -ENOMEM;

        do {
                unsigned num_pages = npages - pinned;
                uint64_t ptr = userptr->ptr + pinned * PAGE_SIZE;
                struct page **pages = pvec + pinned;

                ret = pin_user_pages_fast(ptr, num_pages, !userptr->ro ?
                                          FOLL_WRITE : 0, pages);
                if (ret < 0) {
                        unpin_user_pages(pvec, pinned);
                        kvfree(pvec);
                        return ret;
                }

                pinned += ret;
        } while (pinned < npages);

This is really a lot better if written as:

        pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
        if (!pvec)
                return -ENOMEM;

        ret = pin_user_pages_fast(userptr->ptr, npages,
                                  FOLL_COMPLETE |
                                  (!userptr->ro ? FOLL_WRITE : 0),
                                  pvec);
        if (ret) {
                kvfree(pvec);
                return ret;
        }

(eg FOLL_COMPLETE says to return exactly npages or fail)

Some code assumes things work that way already anyhow:

        /* Pin user pages for DMA Xfer */
        err = pin_user_pages_unlocked(user_dma.uaddr, user_dma.page_count,
                                      dma->map, FOLL_FORCE);
        if (user_dma.page_count != err) {
                IVTV_DEBUG_WARN("failed to map user pages, returned %d instead of %d\n",
                                err, user_dma.page_count);
                if (err >= 0) {
                        unpin_user_pages(dma->map, err);
                        return -EINVAL;
                }
                return err;
        }

Actually I'm quite surprised I didn't find more callers missing the
tricky unpin_user_pages() on the error path - eg
videobuf_dma_init_user_locked() is wrong.

Jason
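For illustration only, here is roughly what the all-or-nothing helper that
these callers keep open-coding could look like, built purely from the
existing pin_user_pages_fast()/unpin_user_pages() calls. The name
pin_user_pages_fast_exact() and the -EFAULT fallback are invented for this
sketch, and FOLL_COMPLETE itself is only a proposal, not a flag in the tree:

        #include <linux/mm.h>   /* pin_user_pages_fast(), unpin_user_pages() */

        /*
         * Hypothetical helper (name and error policy made up for this
         * sketch): pin exactly @npages pages starting at @start, or pin
         * nothing at all.  It open-codes the retry-plus-cleanup pattern
         * from the loop above using only the existing API; a FOLL_COMPLETE
         * flag would let GUP do this internally instead.
         */
        static int pin_user_pages_fast_exact(unsigned long start, int npages,
                                             unsigned int gup_flags,
                                             struct page **pages)
        {
                int pinned = 0;
                int ret;

                while (pinned < npages) {
                        ret = pin_user_pages_fast(start + pinned * PAGE_SIZE,
                                                  npages - pinned, gup_flags,
                                                  pages + pinned);
                        if (ret <= 0) {
                                /* the error path callers keep getting wrong */
                                unpin_user_pages(pages, pinned);
                                return ret ? ret : -EFAULT;
                        }
                        pinned += ret;
                }
                return 0;
        }

With a helper (or flag) along those lines, the do/while loop in the first
example collapses to a single call plus the kvfree() on error.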