From: Christoph Hellwig
To: Andrew Morton
Cc: Dan Williams, Daniel Vetter, linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 2/2] mm: simplify follow_pte{,pmd}
Date: Thu, 29 Oct 2020 11:14:32 +0100
Message-Id: <20201029101432.47011-3-hch@lst.de>
In-Reply-To: <20201029101432.47011-1-hch@lst.de>
References: <20201029101432.47011-1-hch@lst.de>
Merge __follow_pte_pmd, follow_pte_pmd and follow_pte into a single
follow_pte function and just pass two additional NULL arguments for the
two previous follow_pte callers.

Signed-off-by: Christoph Hellwig
---
 fs/dax.c           |  9 ++++-----
 include/linux/mm.h |  6 +++---
 mm/memory.c        | 35 +++++------------------------------
 3 files changed, 12 insertions(+), 38 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 5b47834f2e1bb5..26d5dcd2d69e5c 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -810,12 +810,11 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 		address = pgoff_address(index, vma);
 
 		/*
-		 * Note because we provide range to follow_pte_pmd it will
-		 * call mmu_notifier_invalidate_range_start() on our behalf
-		 * before taking any lock.
+		 * Note because we provide range to follow_pte it will call
+		 * mmu_notifier_invalidate_range_start() on our behalf before
+		 * taking any lock.
 		 */
-		if (follow_pte_pmd(vma->vm_mm, address, &range,
-				   &ptep, &pmdp, &ptl))
+		if (follow_pte(vma->vm_mm, address, &range, &ptep, &pmdp, &ptl))
 			continue;
 
 		/*
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef360fe70aafcf..113b0b4fd90af5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1655,9 +1655,9 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 		unsigned long end, unsigned long floor, unsigned long ceiling);
 int copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
-int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
-		   struct mmu_notifier_range *range,
-		   pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
+int follow_pte(struct mm_struct *mm, unsigned long address,
+	       struct mmu_notifier_range *range, pte_t **ptepp, pmd_t **pmdpp,
+	       spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	unsigned long *pfn);
 int follow_phys(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index 00458e7b49fef8..fa00682f7a4312 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4696,9 +4696,9 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
-			    struct mmu_notifier_range *range,
-			    pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
+int follow_pte(struct mm_struct *mm, unsigned long address,
+	       struct mmu_notifier_range *range, pte_t **ptepp, pmd_t **pmdpp,
+	       spinlock_t **ptlp)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -4763,31 +4763,6 @@ static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
 	return -EINVAL;
 }
 
-static inline int follow_pte(struct mm_struct *mm, unsigned long address,
-			     pte_t **ptepp, spinlock_t **ptlp)
-{
-	int res;
-
-	/* (void) is needed to make gcc happy */
-	(void) __cond_lock(*ptlp,
-			   !(res = __follow_pte_pmd(mm, address, NULL,
-						    ptepp, NULL, ptlp)));
-	return res;
-}
-
-int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
-		   struct mmu_notifier_range *range,
-		   pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
-{
-	int res;
-
-	/* (void) is needed to make gcc happy */
-	(void) __cond_lock(*ptlp,
-			   !(res = __follow_pte_pmd(mm, address, range,
-						    ptepp, pmdpp, ptlp)));
-	return res;
-}
-
 /**
  * follow_pfn - look up PFN at a user virtual address
  * @vma: memory mapping
@@ -4808,7 +4783,7 @@ int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
 		return ret;
 
-	ret = follow_pte(vma->vm_mm, address, &ptep, &ptl);
+	ret = follow_pte(vma->vm_mm, address, NULL, &ptep, NULL, &ptl);
 	if (ret)
 		return ret;
 	*pfn = pte_pfn(*ptep);
@@ -4829,7 +4804,7 @@ int follow_phys(struct vm_area_struct *vma,
 	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
 		goto out;
 
-	if (follow_pte(vma->vm_mm, address, &ptep, &ptl))
+	if (follow_pte(vma->vm_mm, address, NULL, &ptep, NULL, &ptl))
 		goto out;
 	pte = *ptep;
 
-- 
2.28.0