linux-mm.kvack.org archive mirror
* [PATCH] mm: Reduce memory bloat with THP
@ 2017-12-15  1:28 Nitin Gupta
  2017-12-15  5:55 ` Anshuman Khandual
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Nitin Gupta @ 2017-12-15  1:28 UTC (permalink / raw)
  To: linux-mm
  Cc: steven.sistare, Nitin Gupta, Andrew Morton, Ingo Molnar,
	Mel Gorman, Nadav Amit, Minchan Kim, Kirill A. Shutemov,
	Peter Zijlstra, Vegard Nossum, Levin, Alexander (Sasha Levin),
	Michal Hocko, David Rientjes, Vlastimil Babka, SeongJae Park,
	Shaohua Li, Aneesh Kumar K.V, Andrea Arcangeli, Mike Rapoport,
	Anshuman Khandual, Rik van Riel, Ross Zwisler, Jan Kara,
	Dave Jiang, Jérôme Glisse, Matthew Wilcox,
	Hugh Dickins, Tobin C Harding, open list

Currently, if the THP enabled policy is "always", or the policy
is "madvise" and the region is marked MADV_HUGEPAGE, a hugepage
is allocated on a page fault if the pud or pmd is empty.  This
yields the best VA translation performance, but increases memory
consumption if some small-page ranges within the huge page are
never accessed.

An alternate behavior for such page faults is to install a
hugepage only when a region is actually found to be (almost)
fully mapped and active.  This is a compromise between
translation performance and memory consumption.  Currently there
is no way for an application to choose this compromise for the
page fault conditions above.

With this change, when an application issues MADV_DONTNEED on a
memory region, the region is marked as "space-efficient".  For
such regions, a hugepage is not allocated immediately on first
touch.  Instead, the khugepaged thread performs delayed hugepage
promotion, depending on whether the region is actually mapped
and active.  When the application issues MADV_HUGEPAGE, the
region is marked as non-space-efficient again, and a hugepage is
once more allocated on first touch.

Orabug: 26910556

Reviewed-by: Steve Sistare <steven.sistare@oracle.com>
Signed-off-by: Nitin Gupta <nitin.m.gupta@oracle.com>
---
 include/linux/mm_types.h | 1 +
 mm/khugepaged.c          | 1 +
 mm/madvise.c             | 1 +
 mm/memory.c              | 6 ++++--
 4 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index cfd0ac4..6d0783a 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -339,6 +339,7 @@ struct vm_area_struct {
 	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+	bool space_efficient;
 } __randomize_layout;
 
 struct core_thread {
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index ea4ff25..2f4037a 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -319,6 +319,7 @@ int hugepage_madvise(struct vm_area_struct *vma,
 #endif
 		*vm_flags &= ~VM_NOHUGEPAGE;
 		*vm_flags |= VM_HUGEPAGE;
+		vma->space_efficient = false;
 		/*
 		 * If the vma become good for khugepaged to scan,
 		 * register it here without waiting a page fault that
diff --git a/mm/madvise.c b/mm/madvise.c
index 751e97a..b2ec07b 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -508,6 +508,7 @@ static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
 					unsigned long start, unsigned long end)
 {
 	zap_page_range(vma, start, end - start);
+	vma->space_efficient = true;
 	return 0;
 }
 
diff --git a/mm/memory.c b/mm/memory.c
index 5eb3d25..6485014 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4001,7 +4001,8 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 	vmf.pud = pud_alloc(mm, p4d, address);
 	if (!vmf.pud)
 		return VM_FAULT_OOM;
-	if (pud_none(*vmf.pud) && transparent_hugepage_enabled(vma)) {
+	if (pud_none(*vmf.pud) && transparent_hugepage_enabled(vma)
+		&& !vma->space_efficient) {
 		ret = create_huge_pud(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
@@ -4027,7 +4028,8 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 	vmf.pmd = pmd_alloc(mm, vmf.pud, address);
 	if (!vmf.pmd)
 		return VM_FAULT_OOM;
-	if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
+	if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)
+		&& !vma->space_efficient) {
 		ret = create_huge_pmd(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
-- 
2.9.2



Thread overview:
2017-12-15  1:28 [PATCH] mm: Reduce memory bloat with THP Nitin Gupta
2017-12-15  5:55 ` Anshuman Khandual
2017-12-16  7:18   ` Nitin Gupta
2017-12-15 10:00 ` Kirill A. Shutemov
2017-12-16  7:04   ` Nitin Gupta
2017-12-18 13:53     ` Peter Zijlstra
2017-12-15 10:01 ` Kirill A. Shutemov
2017-12-16  7:21   ` Nitin Gupta