From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>,
Hugh Dickins <hughd@google.com>,
"Kirill A. Shutemov" <kirill@shutemov.name>,
Peter Feiner <pfeiner@google.com>,
Jerome Marchand <jmarchan@redhat.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Subject: [PATCH -mm v7 06/13] pagemap: use walk->vma instead of calling find_vma()
Date: Fri, 7 Nov 2014 07:01:58 +0000
Message-ID: <1415343692-6314-7-git-send-email-n-horiguchi@ah.jp.nec.com>
In-Reply-To: <1415343692-6314-1-git-send-email-n-horiguchi@ah.jp.nec.com>
The page table walker now keeps the current vma in mm_walk, so
pagemap_(pte|hugetlb)_range() no longer needs to call find_vma() on every
invocation. pagemap_pte_range() currently runs its own vma loop as well, so
this patch also removes many lines of code.
The NULL-vma check is omitted because we assume these callbacks are never
run on an address outside a vma. Even if that assumption were ever broken,
the resulting NULL pointer dereference would be detected, so we would still
get enough information for debugging.
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
ChangeLog v7:
- remove while-loop in pagemap_pte_range() (thanks to Peter Feiner)
- remove Kirill's Ack because this patch has non-minor change since v6
---
fs/proc/task_mmu.c | 68 +++++++++++++-----------------------------------------
1 file changed, 16 insertions(+), 52 deletions(-)
diff --git mmotm-2014-11-05-16-01.orig/fs/proc/task_mmu.c mmotm-2014-11-05-16-01/fs/proc/task_mmu.c
index 9aaab24677ae..f997734d2b4b 100644
--- mmotm-2014-11-05-16-01.orig/fs/proc/task_mmu.c
+++ mmotm-2014-11-05-16-01/fs/proc/task_mmu.c
@@ -1054,15 +1054,13 @@ static inline void thp_pmd_to_pagemap_entry(pagemap_entry_t *pme, struct pagemap
static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
struct mm_walk *walk)
{
- struct vm_area_struct *vma;
+ struct vm_area_struct *vma = walk->vma;
struct pagemapread *pm = walk->private;
spinlock_t *ptl;
pte_t *pte;
int err = 0;
- /* find the first VMA at or above 'addr' */
- vma = find_vma(walk->mm, addr);
- if (vma && pmd_trans_huge_lock(pmd, vma, &ptl) == 1) {
+ if (pmd_trans_huge_lock(pmd, vma, &ptl) == 1) {
int pmd_flags2;
if ((vma->vm_flags & VM_SOFTDIRTY) || pmd_soft_dirty(*pmd))
@@ -1088,50 +1086,19 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
if (pmd_trans_unstable(pmd))
return 0;
- while (1) {
- /* End of address space hole, which we mark as non-present. */
- unsigned long hole_end;
-
- if (vma)
- hole_end = min(end, vma->vm_start);
- else
- hole_end = end;
-
- for (; addr < hole_end; addr += PAGE_SIZE) {
- pagemap_entry_t pme = make_pme(PM_NOT_PRESENT(pm->v2));
-
- err = add_to_pagemap(addr, &pme, pm);
- if (err)
- return err;
- }
-
- if (!vma || vma->vm_start >= end)
- break;
- /*
- * We can't possibly be in a hugetlb VMA. In general,
- * for a mm_walk with a pmd_entry and a hugetlb_entry,
- * the pmd_entry can only be called on addresses in a
- * hugetlb if the walk starts in a non-hugetlb VMA and
- * spans a hugepage VMA. Since pagemap_read walks are
- * PMD-sized and PMD-aligned, this will never be true.
- */
- BUG_ON(is_vm_hugetlb_page(vma));
-
- /* Addresses in the VMA. */
- for (; addr < min(end, vma->vm_end); addr += PAGE_SIZE) {
- pagemap_entry_t pme;
- pte = pte_offset_map(pmd, addr);
- pte_to_pagemap_entry(&pme, pm, vma, addr, *pte);
- pte_unmap(pte);
- err = add_to_pagemap(addr, &pme, pm);
- if (err)
- return err;
- }
-
- if (addr == end)
- break;
+ /*
+ * We can assume that @vma always points to a valid one and @end never
+ * goes beyond vma->vm_end.
+ */
+ for (; addr < end; addr += PAGE_SIZE) {
+ pagemap_entry_t pme;
- vma = find_vma(walk->mm, addr);
+ pte = pte_offset_map(pmd, addr);
+ pte_to_pagemap_entry(&pme, pm, vma, addr, *pte);
+ pte_unmap(pte);
+ err = add_to_pagemap(addr, &pme, pm);
+ if (err)
+ return err;
}
cond_resched();
@@ -1158,15 +1125,12 @@ static int pagemap_hugetlb_range(pte_t *pte, unsigned long hmask,
struct mm_walk *walk)
{
struct pagemapread *pm = walk->private;
- struct vm_area_struct *vma;
+ struct vm_area_struct *vma = walk->vma;
int err = 0;
int flags2;
pagemap_entry_t pme;
- vma = find_vma(walk->mm, addr);
- WARN_ON_ONCE(!vma);
-
- if (vma && (vma->vm_flags & VM_SOFTDIRTY))
+ if (vma->vm_flags & VM_SOFTDIRTY)
flags2 = __PM_SOFT_DIRTY;
else
flags2 = 0;
--
2.2.0.rc0.2.gf745acb
2014-11-07 7:01 [PATCH -mm v7 00/13] pagewalk: improve vma handling, apply to new users Naoya Horiguchi
2014-11-07 7:01 ` [PATCH -mm v7 01/13] mm/pagewalk: remove pgd_entry() and pud_entry() Naoya Horiguchi
2014-11-07 7:01 ` [PATCH -mm v7 02/13] pagewalk: improve vma handling Naoya Horiguchi
[not found] ` <20150122152205.39b7f8f451824b556c1a3f70@linux-foundation.org>
2015-01-23 8:02 ` [PATCH -mm 14/13] mm: pagewalk: fix misbehavior of walk_page_range for vma(VM_PFNMAP) (Re: [PATCH -mm v7 02/13] pagewalk: improve vma handling) Naoya Horiguchi
2014-11-07 7:01 ` [PATCH -mm v7 04/13] smaps: remove mem_size_stats->vma and use walk_page_vma() Naoya Horiguchi
2014-11-07 7:01 ` [PATCH -mm v7 03/13] pagewalk: add walk_page_vma() Naoya Horiguchi
2014-11-07 7:01 ` [PATCH -mm v7 05/13] clear_refs: remove clear_refs_private->vma and introduce clear_refs_test_walk() Naoya Horiguchi
2014-11-07 7:01 ` Naoya Horiguchi [this message]
2014-11-07 7:01 ` [PATCH -mm v7 07/13] numa_maps: fix typo in gather_hugetbl_stats Naoya Horiguchi
2014-11-07 7:02 ` [PATCH -mm v7 08/13] numa_maps: remove numa_maps->vma Naoya Horiguchi
2014-11-07 7:02 ` [PATCH -mm v7 09/13] memcg: cleanup preparation for page table walk Naoya Horiguchi
2014-11-07 7:02 ` [PATCH -mm v7 10/13] arch/powerpc/mm/subpage-prot.c: use walk->vma and walk_page_vma() Naoya Horiguchi
2014-11-07 7:02 ` [PATCH -mm v7 11/13] mempolicy: apply page table walker on queue_pages_range() Naoya Horiguchi
2014-11-07 7:02 ` [PATCH -mm v7 12/13] mm: /proc/pid/clear_refs: avoid split_huge_page() Naoya Horiguchi
2014-11-07 7:02 ` [PATCH -mm v7 13/13] mincore: apply page table walker on do_mincore() Naoya Horiguchi
2015-01-16 16:32 ` [PATCH -mm v7 00/13] pagewalk: improve vma handling, apply to new users Kirill A. Shutemov
2015-01-16 21:38 ` Andrew Morton