* [PATCHv3 0/6] mm: Improve mlock tracking for large folios
@ 2025-09-23 11:07 Kiryl Shutsemau
2025-09-23 11:07 ` [PATCHv3 1/6] mm/page_vma_mapped: Track if the page is mapped across page table boundary Kiryl Shutsemau
` (5 more replies)
0 siblings, 6 replies; 9+ messages in thread
From: Kiryl Shutsemau @ 2025-09-23 11:07 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Hugh Dickins, Matthew Wilcox
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Johannes Weiner, Shakeel Butt, Baolin Wang, linux-mm,
linux-kernel, Kiryl Shutsemau
From: Kiryl Shutsemau <kas@kernel.org>
The patchset includes several fixes and improvements related to mlock
tracking of large folios.
The main objective is to reduce the undercount of Mlocked memory in
/proc/meminfo and improve the accuracy of the statistics.
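For illustration, below is a minimal userspace sketch (not part of the
series) of the kind of workload where the undercount shows up: mlock() a
file-backed mapping that the page cache may serve with large folios, then
compare against the Mlocked counter in /proc/meminfo. The file path, the
sizes, and the assumption that the filesystem actually backs the mapping
with large folios are illustrative only; mlock() may also require raising
RLIMIT_MEMLOCK.
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>
int main(void)
{
	size_t len = 64UL << 20;	/* 64 MiB, illustrative */
	int fd = open("/tmp/mlock-test", O_RDWR | O_CREAT | O_TRUNC, 0600);
	char *p;
	if (fd < 0 || ftruncate(fd, len))
		return 1;
	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;
	memset(p, 0xaa, len);		/* populate the page cache */
	if (mlock(p, len))		/* folios go to the unevictable LRU */
		return 1;
	/* Before this series, large folios here were often not counted. */
	system("grep -e Mlocked -e Unevictable /proc/meminfo");
	return 0;
}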
Patches 1-2:
These patches address a minor race condition in folio_referenced_one()
related to mlock_vma_folio().
Currently, mlock_vma_folio() is called on a large folio without the page
table lock, which can race with unmap (e.g. MADV_DONTNEED). This can
leave partially mapped folios on the unevictable LRU list.
The issue is not significant, so I do not believe backporting is
necessary.
Patch 3:
This patch adds mlocking logic similar to that in folio_referenced_one()
to try_to_unmap_one(), allowing large folios to be mlocked where
possible.
Patches 4-5:
These patches modify finish_fault() and faultaround to map the entire
folio when possible, enabling efficient mlocking upon addition to the
rmap.
Patch 6:
This patch makes the rmap code mlock large folios if they are fully
mapped, addressing the primary source of the mlock undercount for large
folios.
v3:
- Map entire folios on faultaround where possible;
- Fix comments and commit messages;
- Apply tags;
Kiryl Shutsemau (6):
mm/page_vma_mapped: Track if the page is mapped across page table
boundary
mm/rmap: Fix a mlock race condition in folio_referenced_one()
mm/rmap: mlock large folios in try_to_unmap_one()
mm/fault: Try to map the entire file folio in finish_fault()
mm/filemap: Map entire large folio faultaround
mm/rmap: Improve mlock tracking for large folios
include/linux/rmap.h | 5 ++
mm/filemap.c | 15 ++++++
mm/memory.c | 9 +---
mm/page_vma_mapped.c | 1 +
mm/rmap.c | 109 ++++++++++++++++++++++++-------------------
5 files changed, 84 insertions(+), 55 deletions(-)
--
2.50.1
* [PATCHv3 1/6] mm/page_vma_mapped: Track if the page is mapped across page table boundary
2025-09-23 11:07 [PATCHv3 0/6] mm: Improve mlock tracking for large folios Kiryl Shutsemau
@ 2025-09-23 11:07 ` Kiryl Shutsemau
2025-09-23 15:15 ` Kiryl Shutsemau
2025-09-23 11:07 ` [PATCHv3 2/6] mm/rmap: Fix a mlock race condition in folio_referenced_one() Kiryl Shutsemau
` (4 subsequent siblings)
5 siblings, 1 reply; 9+ messages in thread
From: Kiryl Shutsemau @ 2025-09-23 11:07 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Hugh Dickins, Matthew Wilcox
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Johannes Weiner, Shakeel Butt, Baolin Wang, linux-mm,
linux-kernel, Kiryl Shutsemau
From: Kiryl Shutsemau <kas@kernel.org>
Add a PVMW_PGTABLE_CROSSSED flag that page_vma_mapped_walk() will set if
the page is mapped across a page table boundary. Unlike other PVMW_*
flags, this one is a result of page_vma_mapped_walk() rather than being
set by the caller.
folio_referenced_one() will use it to detect whether it is safe to mlock
the folio.
Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
---
include/linux/rmap.h | 5 +++++
mm/page_vma_mapped.c | 1 +
2 files changed, 6 insertions(+)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 6cd020eea37a..4f4659c0fc93 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -928,6 +928,11 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
/* Look for migration entries rather than present PTEs */
#define PVMW_MIGRATION (1 << 1)
+/* Result flags */
+
+/* The page is mapped across page table boundary */
+#define PVMW_PGTABLE_CROSSSED (1 << 16)
+
struct page_vma_mapped_walk {
unsigned long pfn;
unsigned long nr_pages;
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index e981a1a292d2..a184b88743c3 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -309,6 +309,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
}
pte_unmap(pvmw->pte);
pvmw->pte = NULL;
+ pvmw->flags |= PVMW_PGTABLE_CROSSSED;
goto restart;
}
pvmw->pte++;
--
2.50.1
* [PATCHv3 2/6] mm/rmap: Fix a mlock race condition in folio_referenced_one()
2025-09-23 11:07 [PATCHv3 0/6] mm: Improve mlock tracking for large folios Kiryl Shutsemau
2025-09-23 11:07 ` [PATCHv3 1/6] mm/page_vma_mapped: Track if the page is mapped across page table boundary Kiryl Shutsemau
@ 2025-09-23 11:07 ` Kiryl Shutsemau
2025-09-23 11:07 ` [PATCHv3 3/6] mm/rmap: mlock large folios in try_to_unmap_one() Kiryl Shutsemau
` (3 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Kiryl Shutsemau @ 2025-09-23 11:07 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Hugh Dickins, Matthew Wilcox
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Johannes Weiner, Shakeel Butt, Baolin Wang, linux-mm,
linux-kernel, Kiryl Shutsemau
From: Kiryl Shutsemau <kas@kernel.org>
The mlock_vma_folio() function requires the page table lock to be held
in order to safely mlock the folio. However, folio_referenced_one()
mlocks large folios outside of the page_vma_mapped_walk() loop, where
the page table lock has already been dropped.
Rework the mlock logic to use the same code path inside the loop for
both large and small folios.
Use PVMW_PGTABLE_CROSSED to detect when the folio is mapped across a
page table boundary.
Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
---
mm/rmap.c | 59 ++++++++++++++++++++-----------------------------------
1 file changed, 21 insertions(+), 38 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index 568198e9efc2..3d0235f332de 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -851,34 +851,34 @@ static bool folio_referenced_one(struct folio *folio,
{
struct folio_referenced_arg *pra = arg;
DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
- int referenced = 0;
- unsigned long start = address, ptes = 0;
+ int ptes = 0, referenced = 0;
while (page_vma_mapped_walk(&pvmw)) {
address = pvmw.address;
if (vma->vm_flags & VM_LOCKED) {
- if (!folio_test_large(folio) || !pvmw.pte) {
- /* Restore the mlock which got missed */
- mlock_vma_folio(folio, vma);
- page_vma_mapped_walk_done(&pvmw);
- pra->vm_flags |= VM_LOCKED;
- return false; /* To break the loop */
- }
- /*
- * For large folio fully mapped to VMA, will
- * be handled after the pvmw loop.
- *
- * For large folio cross VMA boundaries, it's
- * expected to be picked by page reclaim. But
- * should skip reference of pages which are in
- * the range of VM_LOCKED vma. As page reclaim
- * should just count the reference of pages out
- * the range of VM_LOCKED vma.
- */
ptes++;
pra->mapcount--;
- continue;
+
+ /* Only mlock fully mapped pages */
+ if (pvmw.pte && ptes != pvmw.nr_pages)
+ continue;
+
+ /*
+ * All PTEs must be protected by page table lock in
+ * order to mlock the page.
+ *
+ * If a page table boundary has been crossed, the current
+ * ptl only protects part of the PTEs.
+ */
+ if (pvmw.flags & PVMW_PGTABLE_CROSSSED)
+ continue;
+
+ /* Restore the mlock which got missed */
+ mlock_vma_folio(folio, vma);
+ page_vma_mapped_walk_done(&pvmw);
+ pra->vm_flags |= VM_LOCKED;
+ return false; /* To break the loop */
}
/*
@@ -914,23 +914,6 @@ static bool folio_referenced_one(struct folio *folio,
pra->mapcount--;
}
- if ((vma->vm_flags & VM_LOCKED) &&
- folio_test_large(folio) &&
- folio_within_vma(folio, vma)) {
- unsigned long s_align, e_align;
-
- s_align = ALIGN_DOWN(start, PMD_SIZE);
- e_align = ALIGN_DOWN(start + folio_size(folio) - 1, PMD_SIZE);
-
- /* folio doesn't cross page table boundary and fully mapped */
- if ((s_align == e_align) && (ptes == folio_nr_pages(folio))) {
- /* Restore the mlock which got missed */
- mlock_vma_folio(folio, vma);
- pra->vm_flags |= VM_LOCKED;
- return false; /* To break the loop */
- }
- }
-
if (referenced)
folio_clear_idle(folio);
if (folio_test_clear_young(folio))
--
2.50.1
* [PATCHv3 3/6] mm/rmap: mlock large folios in try_to_unmap_one()
2025-09-23 11:07 [PATCHv3 0/6] mm: Improve mlock tracking for large folios Kiryl Shutsemau
2025-09-23 11:07 ` [PATCHv3 1/6] mm/page_vma_mapped: Track if the page is mapped across page table boundary Kiryl Shutsemau
2025-09-23 11:07 ` [PATCHv3 2/6] mm/rmap: Fix a mlock race condition in folio_referenced_one() Kiryl Shutsemau
@ 2025-09-23 11:07 ` Kiryl Shutsemau
2025-09-23 11:07 ` [PATCHv3 4/6] mm/fault: Try to map the entire file folio in finish_fault() Kiryl Shutsemau
` (2 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Kiryl Shutsemau @ 2025-09-23 11:07 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Hugh Dickins, Matthew Wilcox
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Johannes Weiner, Shakeel Butt, Baolin Wang, linux-mm,
linux-kernel, Kiryl Shutsemau
From: Kiryl Shutsemau <kas@kernel.org>
Currently, try_to_unmap_one() only tries to mlock small folios.
Use logic similar to folio_referenced_one() to mlock large folios: only
do this for fully mapped folios and only under the page table lock that
protects all of the folio's page table entries.
Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
---
mm/rmap.c | 31 ++++++++++++++++++++++++++++---
1 file changed, 28 insertions(+), 3 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index 3d0235f332de..a55c3bf41287 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1870,6 +1870,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
unsigned long nr_pages = 1, end_addr;
unsigned long pfn;
unsigned long hsz = 0;
+ int ptes = 0;
/*
* When racing against e.g. zap_pte_range() on another cpu,
@@ -1910,10 +1911,34 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
*/
if (!(flags & TTU_IGNORE_MLOCK) &&
(vma->vm_flags & VM_LOCKED)) {
+ ptes++;
+
+ /*
+ * Set 'ret' to indicate the page cannot be unmapped.
+ *
+ * Do not jump to walk_abort immediately as additional
+ * iterations might be required to detect a fully mapped
+ * folio and mlock it.
+ */
+ ret = false;
+
+ /* Only mlock fully mapped pages */
+ if (pvmw.pte && ptes != pvmw.nr_pages)
+ continue;
+
+ /*
+ * All PTEs must be protected by page table lock in
+ * order to mlock the page.
+ *
+ * If a page table boundary has been crossed, the current
+ * ptl only protects part of the PTEs.
+ */
+ if (pvmw.flags & PVMW_PGTABLE_CROSSSED)
+ goto walk_done;
+
/* Restore the mlock which got missed */
- if (!folio_test_large(folio))
- mlock_vma_folio(folio, vma);
- goto walk_abort;
+ mlock_vma_folio(folio, vma);
+ goto walk_done;
}
if (!pvmw.pte) {
--
2.50.1
* [PATCHv3 4/6] mm/fault: Try to map the entire file folio in finish_fault()
2025-09-23 11:07 [PATCHv3 0/6] mm: Improve mlock tracking for large folios Kiryl Shutsemau
` (2 preceding siblings ...)
2025-09-23 11:07 ` [PATCHv3 3/6] mm/rmap: mlock large folios in try_to_unmap_one() Kiryl Shutsemau
@ 2025-09-23 11:07 ` Kiryl Shutsemau
2025-10-29 4:52 ` D, Suneeth
2025-09-23 11:07 ` [PATCHv3 5/6] mm/filemap: Map entire large folio faultaround Kiryl Shutsemau
2025-09-23 11:07 ` [PATCHv3 6/6] mm/rmap: Improve mlock tracking for large folios Kiryl Shutsemau
5 siblings, 1 reply; 9+ messages in thread
From: Kiryl Shutsemau @ 2025-09-23 11:07 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Hugh Dickins, Matthew Wilcox
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Johannes Weiner, Shakeel Butt, Baolin Wang, linux-mm,
linux-kernel, Kiryl Shutsemau
From: Kiryl Shutsemau <kas@kernel.org>
The finish_fault() function uses per-page faults for file folios. This
only occurs for file folios smaller than PMD_SIZE.
The comment suggests that this approach prevents RSS inflation. However,
it only avoids inflating the RSS counter: the folio is still mapped to
the process, and the fact that it is mapped by a single PTE does not
affect memory pressure. Additionally, the kernel already maps large
folios with a PMD when they are large enough, which undermines this
argument.
When possible, map large folios in one shot. This reduces the number of
minor page faults and allows for TLB coalescing.
Mapping large folios at once will allow the rmap code to mlock them on
add, as it will recognize that they are fully mapped and mlocking is
safe.
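As an illustration (not part of the patch), one way to observe the
effect is to count minor faults while touching every page of a
file-backed mapping; with whole-folio mapping the ru_minflt delta drops
accordingly. The exact counts depend on fault-around settings and on the
folio sizes the filesystem produces; the file name and sizes below are
assumptions for the sketch.
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <unistd.h>
static long minflt(void)
{
	struct rusage ru;
	getrusage(RUSAGE_SELF, &ru);
	return ru.ru_minflt;
}
int main(void)
{
	size_t i, len = 64UL << 20;	/* 64 MiB, illustrative */
	char buf[4096];
	int fd = open("testfile", O_RDWR | O_CREAT | O_TRUNC, 0600);
	volatile char sink;
	char *p;
	long before;
	if (fd < 0)
		return 1;
	memset(buf, 0xaa, sizeof(buf));
	for (i = 0; i < len; i += sizeof(buf))	/* populate the page cache */
		if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
			return 1;
	p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;
	before = minflt();
	for (i = 0; i < len; i += 4096)		/* touch every page */
		sink = p[i];
	printf("minor faults: %ld\n", minflt() - before);
	return 0;
}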
Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/memory.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 0ba4f6b71847..812a7d9f6531 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5386,13 +5386,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
nr_pages = folio_nr_pages(folio);
- /*
- * Using per-page fault to maintain the uffd semantics, and same
- * approach also applies to non shmem/tmpfs faults to avoid
- * inflating the RSS of the process.
- */
- if (!vma_is_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||
- unlikely(needs_fallback)) {
+ /* Using per-page fault to maintain the uffd semantics */
+ if (unlikely(userfaultfd_armed(vma)) || unlikely(needs_fallback)) {
nr_pages = 1;
} else if (nr_pages > 1) {
pgoff_t idx = folio_page_idx(folio, page);
--
2.50.1
* [PATCHv3 5/6] mm/filemap: Map entire large folio faultaround
2025-09-23 11:07 [PATCHv3 0/6] mm: Improve mlock tracking for large folios Kiryl Shutsemau
` (3 preceding siblings ...)
2025-09-23 11:07 ` [PATCHv3 4/6] mm/fault: Try to map the entire file folio in finish_fault() Kiryl Shutsemau
@ 2025-09-23 11:07 ` Kiryl Shutsemau
2025-09-23 11:07 ` [PATCHv3 6/6] mm/rmap: Improve mlock tracking for large folios Kiryl Shutsemau
5 siblings, 0 replies; 9+ messages in thread
From: Kiryl Shutsemau @ 2025-09-23 11:07 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Hugh Dickins, Matthew Wilcox
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Johannes Weiner, Shakeel Butt, Baolin Wang, linux-mm,
linux-kernel, Kiryl Shutsemau
From: Kiryl Shutsemau <kas@kernel.org>
Currently, the kernel only maps the part of a large folio that fits into
the start_pgoff/end_pgoff range.
Map the entire folio where possible. This matches the finish_fault()
behaviour that the user hits on a cold page cache.
Mapping large folios at once will allow the rmap code to mlock them on
add, as it will recognize that they are fully mapped and mlocking is
safe.
Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
---
mm/filemap.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/mm/filemap.c b/mm/filemap.c
index 751838ef05e5..26cae577ba23 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3643,6 +3643,21 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
struct page *page = folio_page(folio, start);
unsigned int count = 0;
pte_t *old_ptep = vmf->pte;
+ unsigned long addr0;
+
+ /*
+ * Map the large folio fully where possible.
+ *
+ * The folio must not cross VMA or page table boundary.
+ */
+ addr0 = addr - start * PAGE_SIZE;
+ if (folio_within_vma(folio, vmf->vma) &&
+ (addr0 & PMD_MASK) == ((addr0 + folio_size(folio) - 1) & PMD_MASK)) {
+ vmf->pte -= start;
+ page -= start;
+ addr = addr0;
+ nr_pages = folio_nr_pages(folio);
+ }
do {
if (PageHWPoison(page + count))
--
2.50.1
* [PATCHv3 6/6] mm/rmap: Improve mlock tracking for large folios
2025-09-23 11:07 [PATCHv3 0/6] mm: Improve mlock tracking for large folios Kiryl Shutsemau
` (4 preceding siblings ...)
2025-09-23 11:07 ` [PATCHv3 5/6] mm/filemap: Map entire large folio faultaround Kiryl Shutsemau
@ 2025-09-23 11:07 ` Kiryl Shutsemau
5 siblings, 0 replies; 9+ messages in thread
From: Kiryl Shutsemau @ 2025-09-23 11:07 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Hugh Dickins, Matthew Wilcox
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Johannes Weiner, Shakeel Butt, Baolin Wang, linux-mm,
linux-kernel, Kiryl Shutsemau
From: Kiryl Shutsemau <kas@kernel.org>
The kernel currently does not mlock large folios when adding them to
rmap, on the grounds that it is difficult to confirm that the folio is
fully mapped and therefore safe to mlock.
This leads to a significant undercount of Mlocked in /proc/meminfo,
causing problems in production where the stat was used to estimate
system utilization and to determine whether load shedding is required.
However, nowadays the caller passes the number of pages of the folio
that are getting mapped, making it easy to check whether the entire
folio is mapped to the VMA.
Mlock the folio on rmap add if it is fully mapped to the VMA.
Mlocked in /proc/meminfo can still undercount, but the value is closer
to the truth and is useful for userspace.
Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/rmap.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index a55c3bf41287..d5b40800198c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1462,12 +1462,12 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
}
/*
- * For large folio, only mlock it if it's fully mapped to VMA. It's
- * not easy to check whether the large folio is fully mapped to VMA
- * here. Only mlock normal 4K folio and leave page reclaim to handle
- * large folio.
+ * Only mlock it if the folio is fully mapped to the VMA.
+ *
+ * Partially mapped folios can be split on reclaim and part outside
+ * of mlocked VMA can be evicted or freed.
*/
- if (!folio_test_large(folio))
+ if (folio_nr_pages(folio) == nr_pages)
mlock_vma_folio(folio, vma);
}
@@ -1603,8 +1603,13 @@ static __always_inline void __folio_add_file_rmap(struct folio *folio,
nr = __folio_add_rmap(folio, page, nr_pages, vma, level, &nr_pmdmapped);
__folio_mod_stat(folio, nr, nr_pmdmapped);
- /* See comments in folio_add_anon_rmap_*() */
- if (!folio_test_large(folio))
+ /*
+ * Only mlock it if the folio is fully mapped to the VMA.
+ *
+ * Partially mapped folios can be split on reclaim and part outside
+ * of mlocked VMA can be evicted or freed.
+ */
+ if (folio_nr_pages(folio) == nr_pages)
mlock_vma_folio(folio, vma);
}
--
2.50.1
* Re: [PATCHv3 1/6] mm/page_vma_mapped: Track if the page is mapped across page table boundary
2025-09-23 11:07 ` [PATCHv3 1/6] mm/page_vma_mapped: Track if the page is mapped across page table boundary Kiryl Shutsemau
@ 2025-09-23 15:15 ` Kiryl Shutsemau
0 siblings, 0 replies; 9+ messages in thread
From: Kiryl Shutsemau @ 2025-09-23 15:15 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Hugh Dickins, Matthew Wilcox
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Johannes Weiner, Shakeel Butt, Baolin Wang, linux-mm,
linux-kernel
On Tue, Sep 23, 2025 at 12:07:06PM +0100, Kiryl Shutsemau wrote:
> +#define PVMW_PGTABLE_CROSSSED (1 << 16)
I managed to spell CROSSED with three 'S' all over the patchset :-]
I don't want to spam the list with a new revision.
The fixed version is here:
https://git.kernel.org/pub/scm/linux/kernel/git/kas/linux.git large_folio_mlock
--
Kiryl Shutsemau / Kirill A. Shutemov
* Re: [PATCHv3 4/6] mm/fault: Try to map the entire file folio in finish_fault()
2025-09-23 11:07 ` [PATCHv3 4/6] mm/fault: Try to map the entire file folio in finish_fault() Kiryl Shutsemau
@ 2025-10-29 4:52 ` D, Suneeth
0 siblings, 0 replies; 9+ messages in thread
From: D, Suneeth @ 2025-10-29 4:52 UTC (permalink / raw)
To: Kiryl Shutsemau, Andrew Morton, David Hildenbrand, Hugh Dickins,
Matthew Wilcox
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Johannes Weiner, Shakeel Butt, Baolin Wang, linux-mm,
linux-kernel, Kiryl Shutsemau
Hi Kiryl Shutsemau,
On 9/23/2025 4:37 PM, Kiryl Shutsemau wrote:
> From: Kiryl Shutsemau <kas@kernel.org>
>
> The finish_fault() function uses per-page faults for file folios. This
> only occurs for file folios smaller than PMD_SIZE.
>
> The comment suggests that this approach prevents RSS inflation. However,
> it only avoids inflating the RSS counter: the folio is still mapped to
> the process, and the fact that it is mapped by a single PTE does not
> affect memory pressure. Additionally, the kernel already maps large
> folios with a PMD when they are large enough, which undermines this
> argument.
>
> When possible, map large folios in one shot. This reduces the number of
> minor page faults and allows for TLB coalescing.
>
> Mapping large folios at once will allow the rmap code to mlock them on
> add, as it will recognize that they are fully mapped and mlocking is safe.
>
We run the will-it-scale micro-benchmark as part of our weekly CI for
kernel performance regression testing between a stable and an rc kernel.
We observed a drastic performance gain, in the range of 322-400%, on AMD
platforms (Turin and Bergamo) when running the
will-it-scale-process-page-fault3 variant between kernels v6.17 and
v6.18-rc1. Bisecting further landed me on this commit
(19773df031bcc67d5caa06bf0ddbbff40174be7a) as the first commit to
introduce the gain.
The following are the machine configurations and test parameters used:
Model name: AMD EPYC 128-Core Processor [Bergamo]
Thread(s) per core: 2
Core(s) per socket: 128
Socket(s): 1
Total online memory: 258G
Model name: AMD EPYC 64-Core Processor [Turin]
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
Total online memory: 258G
Test params:
nr_task: [1 8 64 128 192 256]
mode: process
test: page_fault3
kpi: per_process_ops
cpufreq_governor: performance
The following are the stats after bisection:
KPI               v6.17    %diff   v6.18-rc1   %diff   v6.17-with19773df031
---------------   ------   -----   ---------   -----   --------------------
per_process_ops   936152   +322    3954402     +339    4109353
I have also checked the numbers with the patch set[1] applied, which was
a fix for the regression reported in [2], to see if the gain holds, and
indeed it does.
                        per_process_ops   %diff (w.r.t. baseline v6.17)
                        ---------------   -----------------------------
v6.17.0-withfixpatch:   3968637           +324
[1]
http://lore.kernel.org/all/20251020163054.1063646-1-kirill@shutemov.name/
[2] https://lore.kernel.org/all/20251014175214.GW6188@frogsfrogsfrogs/
Recreation steps:
1) git clone https://github.com/antonblanchard/will-it-scale.git
2) git clone https://github.com/intel/lkp-tests.git
3) cd will-it-scale && git apply
lkp-tests/programs/will-it-scale/pkg/will-it-scale.patch
4) make
5) python3 runtest.py page_fault3 25 process 0 0 1 8 64 128 192 256
NOTE: Step 5 is specific to the machine's architecture. The numbers
starting from 1 are the array of task counts you wish to run the test
case with, which here are the number of cores per CCX, per NUMA node,
per socket, and nr_threads.
> Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
> Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> mm/memory.c | 9 ++-------
> 1 file changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 0ba4f6b71847..812a7d9f6531 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5386,13 +5386,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
>
> nr_pages = folio_nr_pages(folio);
>
> - /*
> - * Using per-page fault to maintain the uffd semantics, and same
> - * approach also applies to non shmem/tmpfs faults to avoid
> - * inflating the RSS of the process.
> - */
> - if (!vma_is_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||
> - unlikely(needs_fallback)) {
> + /* Using per-page fault to maintain the uffd semantics */
> + if (unlikely(userfaultfd_armed(vma)) || unlikely(needs_fallback)) {
> nr_pages = 1;
> } else if (nr_pages > 1) {
> pgoff_t idx = folio_page_idx(folio, page);
Thanks and Regards,
Suneeth D