Subject: Re: [PATCH] mm/gup: remove the vma allocation from gup_longterm_locked()
To: Jason Gunthorpe, Andrew Morton, linux-mm
CC: Dan Williams, Ira Weiny, Pavel Tatashin
From: John Hubbard
Message-ID: <99dc35be-62c1-56b6-ae37-024a2b2ab81d@nvidia.com>
Date: Fri, 4 Dec 2020 13:36:27 -0800
In-Reply-To: <0-v1-5551df3ed12e+b8-gup_dax_speedup_jgg@nvidia.com>
References: <0-v1-5551df3ed12e+b8-gup_dax_speedup_jgg@nvidia.com>
On 12/4/20 11:39 AM, Jason Gunthorpe wrote:
> Long ago there wasn't a FOLL_LONGTERM flag, so this DAX check was done by
> post-processing the VMA list.
>
> These days it is trivial to just check each VMA to see if it is DAX before
> processing it inside __get_user_pages(), and to return failure if a DAX VMA
> is encountered with FOLL_LONGTERM.
>
> Removing the allocation of the VMA list is a significant speedup for many
> call sites.

This all looks nice. And if you actually have quantifiable perf results, as
you imply above, then let's put them right here.

...still checking the rest of the diffs; I'll post separately with the full
review. So far it's clean...

thanks,
--
John Hubbard
NVIDIA

> Add an IS_ENABLED() check to vma_is_fsdax() so that code generation is
> unchanged when DAX is compiled out.
>
> Remove the dummy version of __gup_longterm_locked(), as !CONFIG_CMA already
> makes memalloc_nocma_save(), check_and_migrate_cma_pages(), and
> memalloc_nocma_restore() into NOPs.
>
> Cc: Dan Williams
> Cc: Ira Weiny
> Cc: John Hubbard
> Cc: Pavel Tatashin
> Signed-off-by: Jason Gunthorpe
> ---
>  include/linux/fs.h |  2 +-
>  mm/gup.c           | 83 +++++++++------------------------------
>  2 files changed, 16 insertions(+), 69 deletions(-)
>
> This was tested using the fake nvdimm stuff, and RDMA's FOLL_LONGTERM pin
> continues to correctly reject DAX VMAs and return EOPNOTSUPP.
>
> Pavel, this accomplishes the same #ifdef cleanup as your patch series for
> CMA, by just deleting all the code that justified the ifdefs.
>
> FWIW, this is probably going to be the start of a longer trickle of patches
> to make pin_user_pages()/unpin_user_pages() faster. This flow is offensively
> slow right now.
>
> Ira, I investigated streamlining the callers from here, and you are right:
> the distinction that FOLL_LONGTERM means locked == NULL is no longer
> required now that the vma list isn't used, and with some adjusting of the
> CMA path we can purge out a lot of other complexity too.
>
> I have some drafts, but I want to tackle this separately.
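(An aside on the IS_ENABLED() hunk below, for anyone following along: it is
effectively free when DAX is compiled out, because IS_ENABLED() expands to a
compile-time 0 or 1, so the compiler constant-folds the whole DAX check away.
Here is a stand-alone sketch of that pattern; CONFIG_FS_DAX_ENABLED is a
made-up stand-in for the kernel's real IS_ENABLED(CONFIG_FS_DAX), and
vma_is_fsdax_sketch() is not the kernel function.)

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for IS_ENABLED(CONFIG_FS_DAX): a compile-time 0/1 constant.
 * Flip it to 1 to "compile DAX in". */
#define CONFIG_FS_DAX_ENABLED 0

static bool vma_is_fsdax_sketch(const void *vm_file)
{
        /* With the constant at 0, the compiler folds this function down to
         * "return false" and drops everything below as dead code, so
         * callers pay nothing for the check. */
        if (!CONFIG_FS_DAX_ENABLED || !vm_file)
                return false;
        /* ...the real DAX inode checks would follow here... */
        return true;
}

int main(void)
{
        printf("%d\n", vma_is_fsdax_sketch((void *)1)); /* prints 0 */
        return 0;
}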
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 8667d0cdc71e76..1fcc2b00582b22 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -3230,7 +3230,7 @@ static inline bool vma_is_fsdax(struct vm_area_struct *vma)
>  {
>  	struct inode *inode;
>  
> -	if (!vma->vm_file)
> +	if (!IS_ENABLED(CONFIG_FS_DAX) || !vma->vm_file)
>  		return false;
>  	if (!vma_is_dax(vma))
>  		return false;
> diff --git a/mm/gup.c b/mm/gup.c
> index 9c6a2f5001c5c2..311a44ff41ff42 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -923,6 +923,9 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
>  	if (gup_flags & FOLL_ANON && !vma_is_anonymous(vma))
>  		return -EFAULT;
>  
> +	if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
> +		return -EOPNOTSUPP;
> +
>  	if (write) {
>  		if (!(vm_flags & VM_WRITE)) {
>  			if (!(gup_flags & FOLL_FORCE))
> @@ -1060,10 +1063,14 @@ static long __get_user_pages(struct mm_struct *mm,
>  			goto next_page;
>  		}
>  
> -		if (!vma || check_vma_flags(vma, gup_flags)) {
> +		if (!vma) {
>  			ret = -EFAULT;
>  			goto out;
>  		}
> +		ret = check_vma_flags(vma, gup_flags);
> +		if (ret)
> +			goto out;
> +
>  		if (is_vm_hugetlb_page(vma)) {
>  			i = follow_hugetlb_page(mm, vma, pages, vmas,
>  					&start, &nr_pages, i,
> @@ -1567,26 +1574,6 @@ struct page *get_dump_page(unsigned long addr)
>  }
>  #endif /* CONFIG_ELF_CORE */
>  
> -#if defined(CONFIG_FS_DAX) || defined (CONFIG_CMA)
> -static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
> -{
> -	long i;
> -	struct vm_area_struct *vma_prev = NULL;
> -
> -	for (i = 0; i < nr_pages; i++) {
> -		struct vm_area_struct *vma = vmas[i];
> -
> -		if (vma == vma_prev)
> -			continue;
> -
> -		vma_prev = vma;
> -
> -		if (vma_is_fsdax(vma))
> -			return true;
> -	}
> -	return false;
> -}
> -
>  #ifdef CONFIG_CMA
>  static long check_and_migrate_cma_pages(struct mm_struct *mm,
>  					unsigned long start,
> @@ -1705,63 +1692,23 @@ static long __gup_longterm_locked(struct mm_struct *mm,
>  				  struct vm_area_struct **vmas,
>  				  unsigned int gup_flags)
>  {
> -	struct vm_area_struct **vmas_tmp = vmas;
>  	unsigned long flags = 0;
> -	long rc, i;
> +	long rc;
>  
> -	if (gup_flags & FOLL_LONGTERM) {
> -		if (!pages)
> -			return -EINVAL;
> -
> -		if (!vmas_tmp) {
> -			vmas_tmp = kcalloc(nr_pages,
> -					   sizeof(struct vm_area_struct *),
> -					   GFP_KERNEL);
> -			if (!vmas_tmp)
> -				return -ENOMEM;
> -		}
> +	if (gup_flags & FOLL_LONGTERM)
>  		flags = memalloc_nocma_save();
> -	}
>  
> -	rc = __get_user_pages_locked(mm, start, nr_pages, pages,
> -				     vmas_tmp, NULL, gup_flags);
> +	rc = __get_user_pages_locked(mm, start, nr_pages, pages, vmas, NULL,
> +				     gup_flags);
>  
>  	if (gup_flags & FOLL_LONGTERM) {
> -		if (rc < 0)
> -			goto out;
> -
> -		if (check_dax_vmas(vmas_tmp, rc)) {
> -			if (gup_flags & FOLL_PIN)
> -				unpin_user_pages(pages, rc);
> -			else
> -				for (i = 0; i < rc; i++)
> -					put_page(pages[i]);
> -			rc = -EOPNOTSUPP;
> -			goto out;
> -		}
> -
> -		rc = check_and_migrate_cma_pages(mm, start, rc, pages,
> -						 vmas_tmp, gup_flags);
> -out:
> +		if (rc > 0)
> +			rc = check_and_migrate_cma_pages(mm, start, rc, pages,
> +							 vmas, gup_flags);
>  		memalloc_nocma_restore(flags);
>  	}
> -
> -	if (vmas_tmp != vmas)
> -		kfree(vmas_tmp);
>  	return rc;
>  }
> -#else /* !CONFIG_FS_DAX && !CONFIG_CMA */
> -static __always_inline long __gup_longterm_locked(struct mm_struct *mm,
> -						  unsigned long start,
> -						  unsigned long nr_pages,
> -						  struct page **pages,
> -						  struct vm_area_struct **vmas,
> -						  unsigned int flags)
> -{
> -	return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
> -				       NULL, flags);
> -}
> -#endif /* CONFIG_FS_DAX || CONFIG_CMA */
>  
>  static bool is_valid_gup_flags(unsigned int gup_flags)
>  {
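For reference, here is what the tested behavior above looks like from a
FOLL_LONGTERM caller's side (an RDMA-style driver, say). This is an
illustration only, not part of the patch, and pin_user_range_longterm() is a
made-up wrapper name:

static long pin_user_range_longterm(unsigned long addr, unsigned long npages,
                                    struct page **pages)
{
        long rc;

        /* With this patch, a DAX VMA now fails fast in check_vma_flags(),
         * inside __get_user_pages()... */
        rc = pin_user_pages(addr, npages, FOLL_WRITE | FOLL_LONGTERM,
                            pages, NULL);

        /* ...so on a DAX mapping the caller sees -EOPNOTSUPP with no pages
         * pinned, and no temporary VMA array was ever allocated to find
         * that out. */
        return rc;
}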