From: Steven Price
To: Andrew Morton, linux-mm@kvack.org
Cc: Steven Price, Andy Lutomirski, Ard Biesheuvel, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Dave Hansen, Ingo Molnar,
	James Morse, Jérôme Glisse, Peter Zijlstra, Thomas Gleixner,
	Will Deacon, x86@kernel.org, "H. Peter Anvin",
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	Mark Rutland, "Liang, Kan"
Subject: [PATCH v16 12/25] mm: pagewalk: Allow walking without vma
Date: Fri, 6 Dec 2019 13:53:03 +0000
Message-Id: <20191206135316.47703-13-steven.price@arm.com>
In-Reply-To: <20191206135316.47703-1-steven.price@arm.com>
References: <20191206135316.47703-1-steven.price@arm.com>

Since 48684a65b4e3: "mm: pagewalk: fix misbehavior of walk_page_range
for vma(VM_PFNMAP)", page_table_walk() will report any kernel area as
a hole, because it lacks a vma. This means each arch has re-implemented
page table walking when needed, for example in the per-arch ptdump
walker.

Remove the requirement to have a vma in the generic code and add a new
function walk_page_range_novma() which ignores the VMAs and simply
walks the page tables.
Signed-off-by: Steven Price
---
 include/linux/pagewalk.h |  5 +++++
 mm/pagewalk.c            | 44 ++++++++++++++++++++++++++++++++--------
 2 files changed, 41 insertions(+), 8 deletions(-)

diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
index 06790f23957f..2c9725bdcf1f 100644
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -59,6 +59,7 @@ struct mm_walk_ops {
  * @ops:	operation to call during the walk
  * @mm:		mm_struct representing the target process of page table walk
  * @vma:	vma currently walked (NULL if walking outside vmas)
+ * @no_vma:	walk ignoring vmas (vma will always be NULL)
  * @private:	private data for callbacks' usage
  *
  * (see the comment on walk_page_range() for more details)
@@ -67,12 +68,16 @@ struct mm_walk {
 	const struct mm_walk_ops *ops;
 	struct mm_struct *mm;
 	struct vm_area_struct *vma;
+	bool no_vma;
 	void *private;
 };
 
 int walk_page_range(struct mm_struct *mm, unsigned long start,
 		unsigned long end, const struct mm_walk_ops *ops,
 		void *private);
+int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
+			  unsigned long end, const struct mm_walk_ops *ops,
+			  void *private);
 int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		void *private);
 int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index c089786e7a7f..efa464cf079b 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -39,7 +39,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 	do {
 again:
 		next = pmd_addr_end(addr, end);
-		if (pmd_none(*pmd) || !walk->vma) {
+		if (pmd_none(*pmd) || (!walk->vma && !walk->no_vma)) {
 			if (ops->pte_hole)
 				err = ops->pte_hole(addr, next, walk);
 			if (err)
@@ -62,9 +62,14 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 		if (!ops->pte_entry)
 			continue;
 
-		split_huge_pmd(walk->vma, pmd, addr);
-		if (pmd_trans_unstable(pmd))
-			goto again;
+		if (walk->vma) {
+			split_huge_pmd(walk->vma, pmd, addr);
+			if (pmd_trans_unstable(pmd))
+				goto again;
+		} else if (pmd_leaf(*pmd) || !pmd_present(*pmd)) {
+			continue;
+		}
+
 		err = walk_pte_range(pmd, addr, next, walk);
 		if (err)
 			break;
@@ -85,7 +90,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 	do {
 again:
 		next = pud_addr_end(addr, end);
-		if (pud_none(*pud) || !walk->vma) {
+		if (pud_none(*pud) || (!walk->vma && !walk->no_vma)) {
 			if (ops->pte_hole)
 				err = ops->pte_hole(addr, next, walk);
 			if (err)
@@ -99,9 +104,13 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 			break;
 		}
 
-		split_huge_pud(walk->vma, pud, addr);
-		if (pud_none(*pud))
-			goto again;
+		if (walk->vma) {
+			split_huge_pud(walk->vma, pud, addr);
+			if (pud_none(*pud))
+				goto again;
+		} else if (pud_leaf(*pud) || !pud_present(*pud)) {
+			continue;
+		}
 
 		if (ops->pmd_entry || ops->pte_entry)
 			err = walk_pmd_range(pud, addr, next, walk);
@@ -374,6 +383,25 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
 	return err;
 }
 
+int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
+			  unsigned long end, const struct mm_walk_ops *ops,
+			  void *private)
+{
+	struct mm_walk walk = {
+		.ops		= ops,
+		.mm		= mm,
+		.private	= private,
+		.no_vma		= true
+	};
+
+	if (start >= end || !walk.mm)
+		return -EINVAL;
+
+	lockdep_assert_held(&walk.mm->mmap_sem);
+
+	return __walk_page_range(start, end, &walk);
+}
+
 int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		void *private)
 {
-- 
2.20.1