Subject: Re: [RFC PATCH v2 10/19] mm/gup: Pass a NULL vaddr_pin through GUP fast
To: ira.weiny@intel.com, Andrew Morton
CC: Jason Gunthorpe, Dan Williams, Matthew Wilcox, Jan Kara, Theodore Ts'o,
 Michal Hocko, Dave Chinner
References: <20190809225833.6657-1-ira.weiny@intel.com>
 <20190809225833.6657-11-ira.weiny@intel.com>
From: John Hubbard <jhubbard@nvidia.com>
Message-ID: <8b3cdb1b-863c-b904-edb5-0f7b35038fdf@nvidia.com>
Date: Fri, 9 Aug 2019 17:06:57 -0700
In-Reply-To: <20190809225833.6657-11-ira.weiny@intel.com>

On 8/9/19 3:58 PM, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
>
> Internally GUP fast needs to know that fast users will not support file
> pins. Pass NULL for vaddr_pin through the fast call stack so that the
> pin code can return an error if it encounters file backed memory within
> the address range.
>

Reviewed-by: John Hubbard <jhubbard@nvidia.com>

thanks,
-- 
John Hubbard
NVIDIA

> Signed-off-by: Ira Weiny <ira.weiny@intel.com>
> ---
>  mm/gup.c | 65 ++++++++++++++++++++++++++++++++++----------------------
>  1 file changed, 40 insertions(+), 25 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 7a449500f0a6..504af3e9a942 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1813,7 +1813,8 @@ static inline struct page *try_get_compound_head(struct page *page, int refs)
>
>  #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
>  static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
> -			 unsigned int flags, struct page **pages, int *nr)
> +			 unsigned int flags, struct page **pages, int *nr,
> +			 struct vaddr_pin *vaddr_pin)
>  {
>  	struct dev_pagemap *pgmap = NULL;
>  	int nr_start = *nr, ret = 0;
> @@ -1894,7 +1895,8 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>   * useful to have gup_huge_pmd even if we can't operate on ptes.
>   */
>  static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
> -			 unsigned int flags, struct page **pages, int *nr)
> +			 unsigned int flags, struct page **pages, int *nr,
> +			 struct vaddr_pin *vaddr_pin)
>  {
>  	return 0;
>  }
> @@ -1903,7 +1905,7 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>  #if defined(CONFIG_ARCH_HAS_PTE_DEVMAP) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
>  static int __gup_device_huge(unsigned long pfn, unsigned long addr,
>  		unsigned long end, struct page **pages, int *nr,
> -		unsigned int flags)
> +		unsigned int flags, struct vaddr_pin *vaddr_pin)
>  {
>  	int nr_start = *nr;
>  	struct dev_pagemap *pgmap = NULL;
> @@ -1938,13 +1940,14 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr,
>
>  static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
>  		unsigned long end, struct page **pages, int *nr,
> -		unsigned int flags)
> +		unsigned int flags, struct vaddr_pin *vaddr_pin)
>  {
>  	unsigned long fault_pfn;
>  	int nr_start = *nr;
>
>  	fault_pfn = pmd_pfn(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
> -	if (!__gup_device_huge(fault_pfn, addr, end, pages, nr, flags))
> +	if (!__gup_device_huge(fault_pfn, addr, end, pages, nr, flags,
> +			       vaddr_pin))
>  		return 0;
>
>  	if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
> @@ -1957,13 +1960,14 @@ static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
>
>  static int __gup_device_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
>  		unsigned long end, struct page **pages, int *nr,
> -		unsigned int flags)
> +		unsigned int flags, struct vaddr_pin *vaddr_pin)
>  {
>  	unsigned long fault_pfn;
>  	int nr_start = *nr;
>
>  	fault_pfn = pud_pfn(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
> -	if (!__gup_device_huge(fault_pfn, addr, end, pages, nr, flags))
> +	if (!__gup_device_huge(fault_pfn, addr, end, pages, nr, flags,
> +			       vaddr_pin))
>  		return 0;
>
>  	if (unlikely(pud_val(orig) != pud_val(*pudp))) {
> @@ -1975,7 +1979,7 @@ static int __gup_device_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
>  #else
>  static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
>  		unsigned long end, struct page **pages, int *nr,
> -		unsigned int flags)
> +		unsigned int flags, struct vaddr_pin *vaddr_pin)
>  {
>  	BUILD_BUG();
>  	return 0;
> @@ -1983,7 +1987,7 @@ static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
>
>  static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
>  		unsigned long end, struct page **pages, int *nr,
> -		unsigned int flags)
> +		unsigned int flags, struct vaddr_pin *vaddr_pin)
>  {
>  	BUILD_BUG();
>  	return 0;
> @@ -2075,7 +2079,8 @@ static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
>  #endif /* CONFIG_ARCH_HAS_HUGEPD */
>
>  static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
> -		unsigned long end, unsigned int flags, struct page **pages, int *nr)
> +		unsigned long end, unsigned int flags, struct page **pages,
> +		int *nr, struct vaddr_pin *vaddr_pin)
>  {
>  	struct page *head, *page;
>  	int refs;
> @@ -2087,7 +2092,7 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
>  		if (unlikely(flags & FOLL_LONGTERM))
>  			return 0;
>  		return __gup_device_huge_pmd(orig, pmdp, addr, end, pages, nr,
> -					     flags);
> +					     flags, vaddr_pin);
>  	}
>
>  	refs = 0;
> @@ -2117,7 +2122,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
>  }
>
>  static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
> -		unsigned long end, unsigned int flags, struct page **pages, int *nr)
> +		unsigned long end, unsigned int flags, struct page **pages, int *nr,
> +		struct vaddr_pin *vaddr_pin)
>  {
>  	struct page *head, *page;
>  	int refs;
> @@ -2129,7 +2135,7 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
>  		if (unlikely(flags & FOLL_LONGTERM))
>  			return 0;
>  		return __gup_device_huge_pud(orig, pudp, addr, end, pages, nr,
> -					     flags);
> +					     flags, vaddr_pin);
>  	}
>
>  	refs = 0;
> @@ -2196,7 +2202,8 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
>  }
>
>  static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
> -		unsigned int flags, struct page **pages, int *nr)
> +		unsigned int flags, struct page **pages, int *nr,
> +		struct vaddr_pin *vaddr_pin)
>  {
>  	unsigned long next;
>  	pmd_t *pmdp;
> @@ -2220,7 +2227,7 @@ static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
>  				return 0;
>
>  			if (!gup_huge_pmd(pmd, pmdp, addr, next, flags,
> -				pages, nr))
> +				pages, nr, vaddr_pin))
>  				return 0;
>
>  		} else if (unlikely(is_hugepd(__hugepd(pmd_val(pmd))))) {
> @@ -2231,7 +2238,8 @@ static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
>  			if (!gup_huge_pd(__hugepd(pmd_val(pmd)), addr,
>  					 PMD_SHIFT, next, flags, pages, nr))
>  				return 0;
> -		} else if (!gup_pte_range(pmd, addr, next, flags, pages, nr))
> +		} else if (!gup_pte_range(pmd, addr, next, flags, pages, nr,
> +					  vaddr_pin))
>  			return 0;
>  	} while (pmdp++, addr = next, addr != end);
>
> @@ -2239,7 +2247,8 @@ static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
>  }
>
>  static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
> -		unsigned int flags, struct page **pages, int *nr)
> +		unsigned int flags, struct page **pages, int *nr,
> +		struct vaddr_pin *vaddr_pin)
>  {
>  	unsigned long next;
>  	pud_t *pudp;
> @@ -2253,13 +2262,14 @@ static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
>  			return 0;
>  		if (unlikely(pud_huge(pud))) {
>  			if (!gup_huge_pud(pud, pudp, addr, next, flags,
> -					  pages, nr))
> +					  pages, nr, vaddr_pin))
>  				return 0;
>  		} else if (unlikely(is_hugepd(__hugepd(pud_val(pud))))) {
>  			if (!gup_huge_pd(__hugepd(pud_val(pud)), addr,
>  					 PUD_SHIFT, next, flags, pages, nr))
>  				return 0;
> -		} else if (!gup_pmd_range(pud, addr, next, flags, pages, nr))
> +		} else if (!gup_pmd_range(pud, addr, next, flags, pages, nr,
> +					  vaddr_pin))
>  			return 0;
>  	} while (pudp++, addr = next, addr != end);
>
> @@ -2267,7 +2277,8 @@ static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
>  }
>
>  static int gup_p4d_range(pgd_t pgd, unsigned long addr, unsigned long end,
> -		unsigned int flags, struct page **pages, int *nr)
> +		unsigned int flags, struct page **pages, int *nr,
> +		struct vaddr_pin *vaddr_pin)
>  {
>  	unsigned long next;
>  	p4d_t *p4dp;
> @@ -2284,7 +2295,8 @@ static int gup_p4d_range(pgd_t pgd, unsigned long addr, unsigned long end,
>  			if (!gup_huge_pd(__hugepd(p4d_val(p4d)), addr,
>  					 P4D_SHIFT, next, flags, pages, nr))
>  				return 0;
> -		} else if (!gup_pud_range(p4d, addr, next, flags, pages, nr))
> +		} else if (!gup_pud_range(p4d, addr, next, flags, pages, nr,
> +					  vaddr_pin))
>  			return 0;
>  	} while (p4dp++, addr = next, addr != end);
>
> @@ -2292,7 +2304,8 @@ static int gup_p4d_range(pgd_t pgd, unsigned long addr, unsigned long end,
>  }
>
>  static void gup_pgd_range(unsigned long addr, unsigned long end,
> -		unsigned int flags, struct page **pages, int *nr)
> +		unsigned int flags, struct page **pages, int *nr,
> +		struct vaddr_pin *vaddr_pin)
>  {
>  	unsigned long next;
>  	pgd_t *pgdp;
> @@ -2312,7 +2325,8 @@ static void gup_pgd_range(unsigned long addr, unsigned long end,
>  			if (!gup_huge_pd(__hugepd(pgd_val(pgd)), addr,
>  					 PGDIR_SHIFT, next, flags, pages, nr))
>  				return;
> -		} else if (!gup_p4d_range(pgd, addr, next, flags, pages, nr))
> +		} else if (!gup_p4d_range(pgd, addr, next, flags, pages, nr,
> +					  vaddr_pin))
>  			return;
>  	} while (pgdp++, addr = next, addr != end);
>  }
> @@ -2374,7 +2388,8 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
>  	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
>  	    gup_fast_permitted(start, end)) {
>  		local_irq_save(flags);
> -		gup_pgd_range(start, end, write ? FOLL_WRITE : 0, pages, &nr);
> +		gup_pgd_range(start, end, write ? FOLL_WRITE : 0, pages, &nr,
> +			      NULL);
>  		local_irq_restore(flags);
>  	}
>
> @@ -2445,7 +2460,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
>  	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
>  	    gup_fast_permitted(start, end)) {
>  		local_irq_disable();
> -		gup_pgd_range(addr, end, gup_flags, pages, &nr);
> +		gup_pgd_range(addr, end, gup_flags, pages, &nr, NULL);
>  		local_irq_enable();
>  		ret = nr;
>  	}
>
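
For anyone reading along: the reason for threading vaddr_pin all the way
down is that the fast path walks page tables with no vma in hand, so it has
no file to pin against. A rough sketch of the kind of check this enables
follows; it is only an illustration, not code from this patch. The helper
names try_pin_gup_fast_page() and page_is_file_backed() are hypothetical
stand-ins (the real check lands later in the series), but the pattern is
the point: pin code treats a NULL vaddr_pin as "caller cannot support file
pins" and fails, rather than silently pinning file-backed memory.

	/* Sketch only -- helper names and error code are illustrative. */
	static int try_pin_gup_fast_page(struct page *page, unsigned int flags,
					 struct vaddr_pin *vaddr_pin)
	{
		/*
		 * GUP fast passes vaddr_pin == NULL, so a file-backed page
		 * seen here must be rejected; the caller can then fall back
		 * to the slow path, which has the vma and can register the
		 * pin against the file properly.
		 */
		if (page_is_file_backed(page) && !vaddr_pin)
			return -EOPNOTSUPP;

		get_page(page);	/* simplified; real code tracks pin counts */
		return 0;
	}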