From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
To: linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
Dave Hansen <dave.hansen@intel.com>,
Hugh Dickins <hughd@google.com>,
"Kirill A. Shutemov" <kirill@shutemov.name>,
linux-kernel@vger.kernel.org,
Naoya Horiguchi <nao.horiguchi@gmail.com>,
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
Pavel Emelyanov <xemul@parallels.com>,
Andrea Arcangeli <aarcange@redhat.com>,
Cyrill Gorcunov <gorcunov@gmail.com>
Subject: [PATCH v3 12/13] mm: /proc/pid/clear_refs: avoid split_huge_page()
Date: Fri, 20 Jun 2014 16:11:38 -0400
Message-ID: <1403295099-6407-13-git-send-email-n-horiguchi@ah.jp.nec.com>
In-Reply-To: <1403295099-6407-1-git-send-email-n-horiguchi@ah.jp.nec.com>
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Currently the pagewalker splits all THP pages on any clear_refs request. That
is not necessary: we can handle the request at the PMD level.

One side effect is that soft-dirty users will potentially see more dirty
memory, since we now mark the whole THP page dirty at once.

Sanity checked with the CRIU test suite; more testing is required.
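
For reference, a minimal userspace sketch of the interface this path serves
(not part of this patch; helper names and the lack of error handling are
illustrative only): writing "4" to /proc/<pid>/clear_refs clears the
soft-dirty bits, and bit 55 of the matching /proc/<pid>/pagemap entry then
reports which pages have been written since:

	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>

	/* Clear the soft-dirty bits of every pte/pmd in the task's mm. */
	static void task_clear_soft_dirty(pid_t pid)
	{
		char path[64];
		int fd;

		snprintf(path, sizeof(path), "/proc/%d/clear_refs", (int)pid);
		fd = open(path, O_WRONLY);
		write(fd, "4", 1);	/* "4" == CLEAR_REFS_SOFT_DIRTY */
		close(fd);
	}

	/* Return nonzero if the page at @vaddr was written since the clear. */
	static int page_soft_dirty(pid_t pid, unsigned long vaddr)
	{
		char path[64];
		uint64_t ent;
		int fd;

		snprintf(path, sizeof(path), "/proc/%d/pagemap", (int)pid);
		fd = open(path, O_RDONLY);
		pread(fd, &ent, sizeof(ent),
		      vaddr / sysconf(_SC_PAGESIZE) * sizeof(ent));
		close(fd);
		return (ent >> 55) & 1;	/* bit 55: pte is soft-dirty */
	}

With the PMD-level handling below, a clear over a THP-mapped range no longer
splits the huge page; the whole THP is write-protected once and, after the
next write fault, reported dirty as a unit.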
ChangeLog:
- move the THP code into clear_refs_pte_range()
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Pavel Emelyanov <xemul@parallels.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
fs/proc/task_mmu.c | 47 ++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 44 insertions(+), 3 deletions(-)
diff --git v3.16-rc1.orig/fs/proc/task_mmu.c v3.16-rc1/fs/proc/task_mmu.c
index ac50a829320a..2acbe144152c 100644
--- v3.16-rc1.orig/fs/proc/task_mmu.c
+++ v3.16-rc1/fs/proc/task_mmu.c
@@ -719,10 +719,10 @@ struct clear_refs_private {
 	enum clear_refs_types type;
 };
 
+#ifdef CONFIG_MEM_SOFT_DIRTY
 static inline void clear_soft_dirty(struct vm_area_struct *vma,
 		unsigned long addr, pte_t *pte)
 {
-#ifdef CONFIG_MEM_SOFT_DIRTY
 	/*
 	 * The soft-dirty tracker uses #PF-s to catch writes
 	 * to pages, so write-protect the pte as well. See the
@@ -741,9 +741,35 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma,
 	}
 
 	set_pte_at(vma->vm_mm, addr, pte, ptent);
-#endif
 }
 
+static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
+		unsigned long addr, pmd_t *pmdp)
+{
+	pmd_t pmd = *pmdp;
+
+	pmd = pmd_wrprotect(pmd);
+	pmd = pmd_clear_flags(pmd, _PAGE_SOFT_DIRTY);
+
+	if (vma->vm_flags & VM_SOFTDIRTY)
+		vma->vm_flags &= ~VM_SOFTDIRTY;
+
+	set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
+}
+
+#else
+
+static inline void clear_soft_dirty(struct vm_area_struct *vma,
+		unsigned long addr, pte_t *pte)
+{
+}
+
+static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
+		unsigned long addr, pmd_t *pmdp)
+{
+}
+#endif
+
 static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 				unsigned long end, struct mm_walk *walk)
 {
@@ -753,7 +779,22 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 	spinlock_t *ptl;
 	struct page *page;
 
-	split_huge_page_pmd(vma, addr, pmd);
+	if (pmd_trans_huge_lock(pmd, vma, &ptl) == 1) {
+		if (cp->type == CLEAR_REFS_SOFT_DIRTY) {
+			clear_soft_dirty_pmd(vma, addr, pmd);
+			goto out;
+		}
+
+		page = pmd_page(*pmd);
+
+		/* Clear accessed and referenced bits. */
+		pmdp_test_and_clear_young(vma, addr, pmd);
+		ClearPageReferenced(page);
+out:
+		spin_unlock(ptl);
+		return 0;
+	}
+
 	if (pmd_trans_unstable(pmd))
 		return 0;
 
--
1.9.3