* [PATCH -V2] /proc/PID/smaps: Add PMD migration entry parsing
@ 2020-04-02  2:00 Huang, Ying
From: Huang, Ying @ 2020-04-02  2:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Huang Ying, Zi Yan, Andrea Arcangeli,
	Kirill A . Shutemov, Vlastimil Babka, Alexey Dobriyan,
	Michal Hocko, Konstantin Khlebnikov, Jérôme Glisse,
	Yang Shi

From: Huang Ying <ying.huang@intel.com>

Currently, when /proc/PID/smaps is read, PMD migration entries in the page
table are simply ignored.  To improve the accuracy of /proc/PID/smaps, add
parsing and processing of PMD migration entries.
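
Purely for illustration (not part of the patch), the discrepancy can be
observed from user space by summing the Rss: lines of /proc/PID/smaps.  The
following minimal sketch is an assumption-laden example, not tooling used
here: it assumes the target PID is passed on the command line and that naive
"Rss: <n> kB" line matching is sufficient.

/* smaps_rss.c: sum the Rss: fields of /proc/<pid>/smaps (illustrative only). */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64], line[256];
	long rss_kb = 0, val;
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}
	snprintf(path, sizeof(path), "/proc/%s/smaps", argv[1]);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* Every mapping prints an "Rss: <n> kB" line; accumulate them. */
		if (sscanf(line, "Rss: %ld kB", &val) == 1)
			rss_kb += val;
	}
	fclose(f);
	printf("total Rss: %ld kB\n", rss_kb);
	return 0;
}

Before the patch, running this while THP pages of the target process are
under migration can report a total smaller than what is really resident;
after the patch the totals stay stable.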

Before the patch, for a fully populated 400 MB anonymous VMA, THP pages under
migration may sometimes be missing from the accounting, as in the output
below (Rss falls 2048 kB, i.e. one 2 MB THP, short of the VMA size):

7f3f6a7e5000-7f3f837e5000 rw-p 00000000 00:00 0
Size:             409600 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:              407552 kB
Pss:              407552 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:    407552 kB
Referenced:       301056 kB
Anonymous:        407552 kB
LazyFree:              0 kB
AnonHugePages:    405504 kB
ShmemPmdMapped:        0 kB
FilePmdMapped:        0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:		1
VmFlags: rd wr mr mw me ac

After the patch, the output is always:

7f3f6a7e5000-7f3f837e5000 rw-p 00000000 00:00 0
Size:             409600 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:              409600 kB
Pss:              409600 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:    409600 kB
Referenced:       294912 kB
Anonymous:        409600 kB
LazyFree:              0 kB
AnonHugePages:    407552 kB
ShmemPmdMapped:        0 kB
FilePmdMapped:        0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:		1
VmFlags: rd wr mr mw me ac

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
---

v2:

- Use thp_migration_supported() in the condition to reduce code size when THP
  migration isn't enabled (see the sketch after this list).

- Replace VM_BUG_ON() with VM_WARN_ON_ONCE(); it's not necessary to nuke the
  kernel for this.
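
As background for the first point: thp_migration_supported() boils down to a
compile-time Kconfig check, so when CONFIG_ARCH_ENABLE_THP_MIGRATION is not
set the compiler can drop the migration-entry branch entirely.  Roughly, the
definition in include/linux/huge_mm.h is (shown here only as a sketch):

	static inline bool thp_migration_supported(void)
	{
		return IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION);
	}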

---
 fs/proc/task_mmu.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 8d382d4ec067..9c72f9ce2dd8 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -546,10 +546,19 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
 	struct mem_size_stats *mss = walk->private;
 	struct vm_area_struct *vma = walk->vma;
 	bool locked = !!(vma->vm_flags & VM_LOCKED);
-	struct page *page;
+	struct page *page = NULL;
 
-	/* FOLL_DUMP will return -EFAULT on huge zero page */
-	page = follow_trans_huge_pmd(vma, addr, pmd, FOLL_DUMP);
+	if (pmd_present(*pmd)) {
+		/* FOLL_DUMP will return -EFAULT on huge zero page */
+		page = follow_trans_huge_pmd(vma, addr, pmd, FOLL_DUMP);
+	} else if (unlikely(thp_migration_supported() && is_swap_pmd(*pmd))) {
+		swp_entry_t entry = pmd_to_swp_entry(*pmd);
+
+		if (is_migration_entry(entry))
+			page = migration_entry_to_page(entry);
+		else
+			VM_WARN_ON_ONCE(1);
+	}
 	if (IS_ERR_OR_NULL(page))
 		return;
 	if (PageAnon(page))
@@ -578,8 +587,7 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
-		if (pmd_present(*pmd))
-			smaps_pmd_entry(pmd, addr, walk);
+		smaps_pmd_entry(pmd, addr, walk);
 		spin_unlock(ptl);
 		goto out;
 	}
-- 
2.25.0


