From: Zi Yan <zi.yan@sent.com>
To: linux-mm@kvack.org
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
Roman Gushchin <guro@fb.com>, Rik van Riel <riel@surriel.com>,
Matthew Wilcox <willy@infradead.org>,
Shakeel Butt <shakeelb@google.com>,
Yang Shi <shy828301@gmail.com>, Jason Gunthorpe <jgg@nvidia.com>,
Mike Kravetz <mike.kravetz@oracle.com>,
Michal Hocko <mhocko@suse.com>,
David Hildenbrand <david@redhat.com>,
William Kucharski <william.kucharski@oracle.com>,
Andrea Arcangeli <aarcange@redhat.com>,
John Hubbard <jhubbard@nvidia.com>,
David Nellans <dnellans@nvidia.com>,
linux-kernel@vger.kernel.org, Zi Yan <ziy@nvidia.com>
Subject: [RFC PATCH v2 19/30] mm: stats: make smap stats understand PUD THPs.
Date: Mon, 28 Sep 2020 13:54:17 -0400
Message-ID: <20200928175428.4110504-20-zi.yan@sent.com>
In-Reply-To: <20200928175428.4110504-1-zi.yan@sent.com>
From: Zi Yan <ziy@nvidia.com>

Make /proc/<pid>/smaps account for PUD-mapped THPs: smaps_account()
now takes an explicit mapping size instead of a "compound" flag, the
new smaps_pud_entry() accumulates statistics for PUD-mapped pages, and
smaps_pud_range() is wired into smaps_walk_ops as the .pud_entry
callback.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
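Not part of the commit: a minimal user-space sketch for eyeballing the
counters this patch touches. It maps and touches a 1GB (PUD-sized on
x86_64) anonymous region, then prints the /proc/self/smaps entry for
that VMA. The program and its line-matching heuristic are illustrative
only; AnonHugePages will cover the full 1GB only if PUD THPs are
actually enabled for the mapping (later patches in this series).

/*
 * Hypothetical test program, not part of this patch: map and fault in
 * a PUD-sized anonymous region, then dump the smaps entry for it so
 * the counters updated above (AnonHugePages etc.) can be inspected.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 1UL << 30;	/* PUD size on x86_64 */
	char start[32], line[256];
	int in_vma = 0;
	FILE *f;

	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 1, len);	/* fault the whole region in */

	snprintf(start, sizeof(start), "%lx-", (unsigned long)buf);
	f = fopen("/proc/self/smaps", "r");
	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* VMA header lines start with a lowercase hex address. */
		if (strchr("0123456789abcdef", line[0])) {
			if (in_vma)
				break;	/* reached the next VMA, done */
			in_vma = !strncmp(line, start, strlen(start));
		}
		if (in_vma)
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}

Without PUD THPs enabled, the same region typically shows up as
PMD-sized THPs or base pages instead.
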
fs/proc/task_mmu.c | 68 ++++++++++++++++++++++++++++++++++++++++++----
1 file changed, 63 insertions(+), 5 deletions(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index a21484b1414d..077196182288 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -430,10 +430,9 @@ static void smaps_page_accumulate(struct mem_size_stats *mss,
}
static void smaps_account(struct mem_size_stats *mss, struct page *page,
- bool compound, bool young, bool dirty, bool locked)
+ unsigned long size, bool young, bool dirty, bool locked)
{
- int i, nr = compound ? compound_nr(page) : 1;
- unsigned long size = nr * PAGE_SIZE;
+ int i, nr = size / PAGE_SIZE;
/*
* First accumulate quantities that depend only on |size| and the type
@@ -530,7 +529,7 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
if (!page)
return;
- smaps_account(mss, page, false, pte_young(*pte), pte_dirty(*pte), locked);
+ smaps_account(mss, page, PAGE_SIZE, pte_young(*pte), pte_dirty(*pte), locked);
}
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -561,8 +560,44 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
/* pass */;
else
mss->file_thp += HPAGE_PMD_SIZE;
- smaps_account(mss, page, true, pmd_young(*pmd), pmd_dirty(*pmd), locked);
+ smaps_account(mss, page, HPAGE_PMD_SIZE, pmd_young(*pmd),
+ pmd_dirty(*pmd), locked);
}
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+static void smaps_pud_entry(pud_t *pud, unsigned long addr,
+ struct mm_walk *walk)
+{
+ struct mem_size_stats *mss = walk->private;
+ struct vm_area_struct *vma = walk->vma;
+ bool locked = !!(vma->vm_flags & VM_LOCKED);
+ struct page *page = NULL;
+
+ if (pud_present(*pud)) {
+ /* FOLL_DUMP will return -EFAULT on huge zero page */
+ page = follow_trans_huge_pud(vma, addr, pud, FOLL_DUMP);
+ }
+ if (IS_ERR_OR_NULL(page))
+ return;
+ if (PageAnon(page))
+ mss->anonymous_thp += HPAGE_PUD_SIZE;
+ else if (PageSwapBacked(page))
+ mss->shmem_thp += HPAGE_PUD_SIZE;
+ else if (is_zone_device_page(page))
+ /* pass */;
+ else
+ mss->file_thp += HPAGE_PUD_SIZE;
+ smaps_account(mss, page, HPAGE_PUD_SIZE, pud_young(*pud),
+ pud_dirty(*pud), locked);
+}
+#else
+static void smaps_pud_entry(pud_t *pud, unsigned long addr,
+ struct mm_walk *walk)
+{
+}
+#endif
+
#else
static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
struct mm_walk *walk)
@@ -570,6 +605,28 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
}
#endif
+static int smaps_pud_range(pud_t pud, pud_t *pudp, unsigned long addr,
+ unsigned long end, struct mm_walk *walk)
+{
+ struct vm_area_struct *vma = walk->vma;
+ spinlock_t *ptl;
+
+ ptl = pud_trans_huge_lock(pudp, vma);
+ if (ptl) {
+ if (memcmp(pudp, &pud, sizeof(pud)) != 0) {
+ walk->action = ACTION_AGAIN;
+ spin_unlock(ptl);
+ return 0;
+ }
+ smaps_pud_entry(pudp, addr, walk);
+ spin_unlock(ptl);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ cond_resched();
+ return 0;
+}
+
static int smaps_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
unsigned long end, struct mm_walk *walk)
{
@@ -712,6 +769,7 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
#endif /* HUGETLB_PAGE */
static const struct mm_walk_ops smaps_walk_ops = {
+ .pud_entry = smaps_pud_range,
.pmd_entry = smaps_pte_range,
.hugetlb_entry = smaps_hugetlb_range,
};
--
2.28.0