From: Steven Price <steven.price@arm.com>
To: linux-mm@kvack.org
Cc: "Steven Price" <steven.price@arm.com>,
"Andy Lutomirski" <luto@kernel.org>,
"Ard Biesheuvel" <ard.biesheuvel@linaro.org>,
"Arnd Bergmann" <arnd@arndb.de>, "Borislav Petkov" <bp@alien8.de>,
"Catalin Marinas" <catalin.marinas@arm.com>,
"Dave Hansen" <dave.hansen@linux.intel.com>,
"Ingo Molnar" <mingo@redhat.com>,
"James Morse" <james.morse@arm.com>,
"Jérôme Glisse" <jglisse@redhat.com>,
"Peter Zijlstra" <peterz@infradead.org>,
"Thomas Gleixner" <tglx@linutronix.de>,
"Will Deacon" <will.deacon@arm.com>,
x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org,
"Mark Rutland" <Mark.Rutland@arm.com>,
"Liang, Kan" <kan.liang@linux.intel.com>
Subject: [PATCH v2 04/13] mm: pagewalk: Add p4d_entry() and pgd_entry()
Date: Thu, 21 Feb 2019 11:34:53 +0000
Message-ID: <20190221113502.54153-5-steven.price@arm.com>
In-Reply-To: <20190221113502.54153-1-steven.price@arm.com>
pgd_entry() and pud_entry() were removed by commit 0b1fbfe50006c410
("mm/pagewalk: remove pgd_entry() and pud_entry()") because there were
no users. We're about to add users, so reintroduce them, along with
p4d_entry(), as we now have 5 levels of page tables.

Note that commit a00cc7d9dd93d66a ("mm, x86: add support for
PUD-sized transparent hugepages") already re-added pud_entry(), but with
different semantics from the other callbacks. Since there have never
been any upstream users of that behaviour, change the semantics back to
match the other callbacks: pud_entry() is now called for all non-empty
entries, not just transparent huge pages.
Signed-off-by: Steven Price <steven.price@arm.com>
---
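For illustration only (not part of the patch): a rough, untested sketch
of how a walker could hook the reintroduced callbacks. The note_*()
helpers and dump_upper_levels() are made-up names; the struct mm_walk
fields and the walk_page_range() signature are the existing pagewalk
API that this patch extends.

#include <linux/mm.h>
#include <linux/printk.h>
#include <linux/rwsem.h>

/* Each callback returns 0 to continue; a non-zero value aborts the walk. */
static int note_pgd(pgd_t *pgd, unsigned long addr, unsigned long next,
		    struct mm_walk *walk)
{
	pr_info("PGD entry for %#lx-%#lx\n", addr, next);
	return 0;
}

static int note_p4d(p4d_t *p4d, unsigned long addr, unsigned long next,
		    struct mm_walk *walk)
{
	pr_info("P4D entry for %#lx-%#lx\n", addr, next);
	return 0;
}

static int note_pud(pud_t *pud, unsigned long addr, unsigned long next,
		    struct mm_walk *walk)
{
	/*
	 * With this patch pud_entry() fires for every non-empty PUD, so a
	 * handler that only cares about huge mappings must test for them
	 * itself; the walker no longer takes the THP lock on its behalf.
	 */
	pr_info("PUD entry for %#lx-%#lx\n", addr, next);
	return 0;
}

static void dump_upper_levels(struct mm_struct *mm, unsigned long start,
			      unsigned long end)
{
	struct mm_walk walk = {
		.pgd_entry = note_pgd,
		.p4d_entry = note_p4d,
		.pud_entry = note_pud,
		.mm        = mm,
	};

	down_read(&mm->mmap_sem);
	walk_page_range(start, end, &walk);
	up_read(&mm->mmap_sem);
}

The comment in note_pud() above corresponds to the first mm/pagewalk.c
hunk below, which drops the pud_trans_huge_lock() handling from
walk_pud_range().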
include/linux/mm.h | 9 ++++++---
mm/pagewalk.c | 27 ++++++++++++++++-----------
2 files changed, 22 insertions(+), 14 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 80bb6408fe73..1a4b1615d012 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1412,10 +1412,9 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
/**
* mm_walk - callbacks for walk_page_range
+ * @pgd_entry: if set, called for each non-empty PGD (top-level) entry
+ * @p4d_entry: if set, called for each non-empty P4D (1st-level) entry
* @pud_entry: if set, called for each non-empty PUD (2nd-level) entry
- * this handler should only handle pud_trans_huge() puds.
- * the pmd_entry or pte_entry callbacks will be used for
- * regular PUDs.
* @pmd_entry: if set, called for each non-empty PMD (3rd-level) entry
* this handler is required to be able to handle
* pmd_trans_huge() pmds. They may simply choose to
@@ -1435,6 +1434,10 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
* (see the comment on walk_page_range() for more details)
*/
struct mm_walk {
+ int (*pgd_entry)(pgd_t *pgd, unsigned long addr,
+ unsigned long next, struct mm_walk *walk);
+ int (*p4d_entry)(p4d_t *p4d, unsigned long addr,
+ unsigned long next, struct mm_walk *walk);
int (*pud_entry)(pud_t *pud, unsigned long addr,
unsigned long next, struct mm_walk *walk);
int (*pmd_entry)(pmd_t *pmd, unsigned long addr,
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index c3084ff2569d..98373a9f88b8 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -90,15 +90,9 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
}
if (walk->pud_entry) {
- spinlock_t *ptl = pud_trans_huge_lock(pud, walk->vma);
-
- if (ptl) {
- err = walk->pud_entry(pud, addr, next, walk);
- spin_unlock(ptl);
- if (err)
- break;
- continue;
- }
+ err = walk->pud_entry(pud, addr, next, walk);
+ if (err)
+ break;
}
split_huge_pud(walk->vma, pud, addr);
@@ -131,7 +125,12 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
break;
continue;
}
- if (walk->pmd_entry || walk->pte_entry)
+ if (walk->p4d_entry) {
+ err = walk->p4d_entry(p4d, addr, next, walk);
+ if (err)
+ break;
+ }
+ if (walk->pud_entry || walk->pmd_entry || walk->pte_entry)
err = walk_pud_range(p4d, addr, next, walk);
if (err)
break;
@@ -157,7 +156,13 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
break;
continue;
}
- if (walk->pmd_entry || walk->pte_entry)
+ if (walk->pgd_entry) {
+ err = walk->pgd_entry(pgd, addr, next, walk);
+ if (err)
+ break;
+ }
+ if (walk->p4d_entry || walk->pud_entry || walk->pmd_entry ||
+ walk->pte_entry)
err = walk_p4d_range(pgd, addr, next, walk);
if (err)
break;
--
2.20.1