* [patch 057/158] mm: pagewalk: allow walking without vma
From: akpm @ 2019-12-01 1:52 UTC
To: akpm, alex, aou, ard.biesheuvel, arnd, aryabinin, benh,
borntraeger, bp, cai, catalin.marinas, dave.hansen, dave.jiang,
davem, dvyukov, glider, gor, heiko.carstens, hpa, james.morse,
jhogan, kan.liang, linux-mm, linux, luto, mark.rutland, mawilcox,
mingo, mm-commits, mpe, n-horiguchi, palmer, paul.burton,
paul.walmsley, paulus, peterz, ralf, sfr, shashim, steven.price,
tglx, torvalds, vgupta, will, zong.li
From: Steven Price <steven.price@arm.com>
Subject: mm: pagewalk: allow walking without vma

Since commit 48684a65b4e3 ("mm: pagewalk: fix misbehavior of walk_page_range
for vma(VM_PFNMAP)"), walk_page_range() will report any kernel area as a
hole, because it lacks a vma.

This means each arch has re-implemented page table walking when needed, for
example in the per-arch ptdump walker.

Remove the requirement to have a vma in the generic code and add a new
function walk_page_range_novma() which ignores the VMAs and simply walks
the page tables.
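
For illustration only, a user of the new interface (for example a generic
ptdump walker, which later patches in this series are expected to build on
this) might drive it roughly as below.  All names here are invented for the
example, and a real caller that cares about leaf (huge) pmd/pud entries
would also have to supply pmd_entry/pud_entry callbacks, since the novma
walk skips leaf entries when only a pte_entry callback is given:

#include <linux/mm.h>
#include <linux/pagewalk.h>

/* Called for every pte in the range; filter on pte_present() here. */
static int example_pte_entry(pte_t *pte, unsigned long addr,
                unsigned long next, struct mm_walk *walk)
{
        if (pte_present(*pte))
                pr_info("pte @ %lx = %llx\n", addr,
                        (unsigned long long)pte_val(*pte));
        return 0;
}

static const struct mm_walk_ops example_ops = {
        .pte_entry = example_pte_entry,
};

static void example_dump_kernel_range(unsigned long start, unsigned long end)
{
        /* walk_page_range_novma() asserts mmap_sem is held, even for init_mm. */
        down_read(&init_mm.mmap_sem);
        walk_page_range_novma(&init_mm, start, end, &example_ops, NULL);
        up_read(&init_mm.mmap_sem);
}

The snippet is only meant to show the calling convention and the locking
expectation; it is not part of this patch.
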
[steven.price@arm.com: v15]
Link: http://lkml.kernel.org/r/20191101140942.51554-13-steven.price@arm.com
[steven.price@arm.com: fix boot crash]
Link: http://lkml.kernel.org/r/20191028135910.33253-13-steven.price@arm.com
Cc: Zong Li <zong.li@sifive.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Shiraz Hashim <shashim@codeaurora.org>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexander Potapenko <glider@google.com>
Cc: Alexandre Ghiti <alex@ghiti.fr>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: James Hogan <jhogan@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: "Liang, Kan" <kan.liang@linux.intel.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Cc: Qian Cai <cai@lca.pw>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 include/linux/pagewalk.h |    5 ++++
 mm/pagewalk.c            |   44 ++++++++++++++++++++++++++++++-------
 2 files changed, 41 insertions(+), 8 deletions(-)

--- a/include/linux/pagewalk.h~mm-pagewalk-allow-walking-without-vma
+++ a/include/linux/pagewalk.h
@@ -59,6 +59,7 @@ struct mm_walk_ops {
  * @ops:     operation to call during the walk
  * @mm:      mm_struct representing the target process of page table walk
  * @vma:     vma currently walked (NULL if walking outside vmas)
+ * @no_vma:  walk ignoring vmas (vma will always be NULL)
  * @private: private data for callbacks' usage
  *
  * (see the comment on walk_page_range() for more details)
@@ -67,12 +68,16 @@ struct mm_walk {
         const struct mm_walk_ops *ops;
         struct mm_struct *mm;
         struct vm_area_struct *vma;
+        bool no_vma;
         void *private;
 };
 
 int walk_page_range(struct mm_struct *mm, unsigned long start,
                 unsigned long end, const struct mm_walk_ops *ops,
                 void *private);
+int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
+                unsigned long end, const struct mm_walk_ops *ops,
+                void *private);
 int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
                 void *private);
 int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
--- a/mm/pagewalk.c~mm-pagewalk-allow-walking-without-vma
+++ a/mm/pagewalk.c
@@ -39,7 +39,7 @@ static int walk_pmd_range(pud_t *pud, un
         do {
 again:
                 next = pmd_addr_end(addr, end);
-                if (pmd_none(*pmd) || !walk->vma) {
+                if (pmd_none(*pmd) || (!walk->vma && !walk->no_vma)) {
                         if (ops->pte_hole)
                                 err = ops->pte_hole(addr, next, walk);
                         if (err)
@@ -62,9 +62,14 @@ again:
                 if (!ops->pte_entry)
                         continue;
 
-                split_huge_pmd(walk->vma, pmd, addr);
-                if (pmd_trans_unstable(pmd))
-                        goto again;
+                if (walk->vma) {
+                        split_huge_pmd(walk->vma, pmd, addr);
+                        if (pmd_trans_unstable(pmd))
+                                goto again;
+                } else if (pmd_leaf(*pmd) || !pmd_present(*pmd)) {
+                        continue;
+                }
+
                 err = walk_pte_range(pmd, addr, next, walk);
                 if (err)
                         break;
@@ -85,7 +90,7 @@ static int walk_pud_range(p4d_t *p4d, un
         do {
 again:
                 next = pud_addr_end(addr, end);
-                if (pud_none(*pud) || !walk->vma) {
+                if (pud_none(*pud) || (!walk->vma && !walk->no_vma)) {
                         if (ops->pte_hole)
                                 err = ops->pte_hole(addr, next, walk);
                         if (err)
@@ -99,9 +104,13 @@ static int walk_pud_range(p4d_t *p4d, un
                                 break;
                 }
 
-                split_huge_pud(walk->vma, pud, addr);
-                if (pud_none(*pud))
-                        goto again;
+                if (walk->vma) {
+                        split_huge_pud(walk->vma, pud, addr);
+                        if (pud_none(*pud))
+                                goto again;
+                } else if (pud_leaf(*pud) || !pud_present(*pud)) {
+                        continue;
+                }
 
                 if (ops->pmd_entry || ops->pte_entry)
                         err = walk_pmd_range(pud, addr, next, walk);
@@ -374,6 +383,25 @@ int walk_page_range(struct mm_struct *mm
         return err;
 }
 
+int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
+                unsigned long end, const struct mm_walk_ops *ops,
+                void *private)
+{
+        struct mm_walk walk = {
+                .ops = ops,
+                .mm = mm,
+                .private = private,
+                .no_vma = true
+        };
+
+        if (start >= end || !walk.mm)
+                return -EINVAL;
+
+        lockdep_assert_held(&walk.mm->mmap_sem);
+
+        return __walk_page_range(start, end, &walk);
+}
+
 int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
                 void *private)
 {
_