* [PATCHv3 1/5] mm/rmap: Fix a mlock race condition in folio_referenced_one()
2025-09-23 11:03 [PATCHv3 0/5] mm: Improve mlock tracking for large folios Kiryl Shutsemau
@ 2025-09-23 11:03 ` Kiryl Shutsemau
2025-09-23 11:03 ` [PATCHv3 2/5] mm/rmap: mlock large folios in try_to_unmap_one() Kiryl Shutsemau
` (4 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Kiryl Shutsemau @ 2025-09-23 11:03 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Hugh Dickins, Matthew Wilcox
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Johannes Weiner, Shakeel Butt, Baolin Wang, linux-mm,
linux-kernel, Kiryl Shutsemau
From: Kiryl Shutsemau <kas@kernel.org>
The mlock_vma_folio() function requires the page table lock to be held
in order to safely mlock the folio. However, folio_referenced_one()
mlocks large folios outside of the page_vma_mapped_walk() loop, where
the page table lock has already been dropped.
Rework the mlock logic to use the same code path inside the loop for
both large and small folios.
Use PVMW_PGTABLE_CROSSED to detect when the folio is mapped across a
page table boundary.
Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
---
mm/rmap.c | 59 ++++++++++++++++++++-----------------------------------
1 file changed, 21 insertions(+), 38 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index 568198e9efc2..3d0235f332de 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -851,34 +851,34 @@ static bool folio_referenced_one(struct folio *folio,
{
struct folio_referenced_arg *pra = arg;
DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
- int referenced = 0;
- unsigned long start = address, ptes = 0;
+ int ptes = 0, referenced = 0;
while (page_vma_mapped_walk(&pvmw)) {
address = pvmw.address;
if (vma->vm_flags & VM_LOCKED) {
- if (!folio_test_large(folio) || !pvmw.pte) {
- /* Restore the mlock which got missed */
- mlock_vma_folio(folio, vma);
- page_vma_mapped_walk_done(&pvmw);
- pra->vm_flags |= VM_LOCKED;
- return false; /* To break the loop */
- }
- /*
- * For large folio fully mapped to VMA, will
- * be handled after the pvmw loop.
- *
- * For large folio cross VMA boundaries, it's
- * expected to be picked by page reclaim. But
- * should skip reference of pages which are in
- * the range of VM_LOCKED vma. As page reclaim
- * should just count the reference of pages out
- * the range of VM_LOCKED vma.
- */
ptes++;
pra->mapcount--;
- continue;
+
+ /* Only mlock fully mapped pages */
+ if (pvmw.pte && ptes != pvmw.nr_pages)
+ continue;
+
+ /*
+ * All PTEs must be protected by page table lock in
+ * order to mlock the page.
+ *
+ * If a page table boundary has been crossed, the current ptl
+ * only protects part of the PTEs.
+ */
+ if (pvmw.flags & PVMW_PGTABLE_CROSSED)
+ continue;
+
+ /* Restore the mlock which got missed */
+ mlock_vma_folio(folio, vma);
+ page_vma_mapped_walk_done(&pvmw);
+ pra->vm_flags |= VM_LOCKED;
+ return false; /* To break the loop */
}
/*
@@ -914,23 +914,6 @@ static bool folio_referenced_one(struct folio *folio,
pra->mapcount--;
}
- if ((vma->vm_flags & VM_LOCKED) &&
- folio_test_large(folio) &&
- folio_within_vma(folio, vma)) {
- unsigned long s_align, e_align;
-
- s_align = ALIGN_DOWN(start, PMD_SIZE);
- e_align = ALIGN_DOWN(start + folio_size(folio) - 1, PMD_SIZE);
-
- /* folio doesn't cross page table boundary and fully mapped */
- if ((s_align == e_align) && (ptes == folio_nr_pages(folio))) {
- /* Restore the mlock which got missed */
- mlock_vma_folio(folio, vma);
- pra->vm_flags |= VM_LOCKED;
- return false; /* To break the loop */
- }
- }
-
if (referenced)
folio_clear_idle(folio);
if (folio_test_clear_young(folio))
--
2.50.1
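For reference, the reworked control flow above can be sketched in isolation as
plain C. This is an illustration only: every name below is a hypothetical
stand-in for the kernel objects used in the patch, not kernel API.

#include <stdbool.h>

/* Hypothetical stand-ins for the objects used in folio_referenced_one(). */
struct sketch_walk {
        int nr_pages;           /* pvmw.nr_pages */
        int next;               /* iteration cursor */
        bool crossed;           /* PVMW_PGTABLE_CROSSED analogue */
        bool vma_locked;        /* vma->vm_flags & VM_LOCKED */
        bool mlocked;           /* stands in for mlock_vma_folio() */
};

static bool walk_next_pte(struct sketch_walk *w)
{
        return w->next++ < w->nr_pages; /* page_vma_mapped_walk() analogue */
}

/* Returns false to stop the rmap walk, mirroring folio_referenced_one(). */
static bool referenced_one_sketch(struct sketch_walk *w)
{
        int ptes = 0;

        while (walk_next_pte(w)) {
                if (!w->vma_locked)
                        continue;       /* ordinary referenced accounting */

                ptes++;
                if (ptes != w->nr_pages)
                        continue;       /* folio not fully mapped yet */
                if (w->crossed)
                        continue;       /* ptl protects only part of the PTEs */

                /* All PTEs sit under the ptl held right now: safe to mlock. */
                w->mlocked = true;
                return false;           /* To break the loop */
        }
        return true;
}

int main(void)
{
        struct sketch_walk w = { .nr_pages = 4, .vma_locked = true };

        return referenced_one_sketch(&w) || !w.mlocked; /* expect 0 */
}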
* [PATCHv3 2/5] mm/rmap: mlock large folios in try_to_unmap_one()
2025-09-23 11:03 [PATCHv3 0/5] mm: Improve mlock tracking for large folios Kiryl Shutsemau
2025-09-23 11:03 ` [PATCHv3 1/5] mm/rmap: Fix a mlock race condition in folio_referenced_one() Kiryl Shutsemau
@ 2025-09-23 11:03 ` Kiryl Shutsemau
2025-09-23 11:03 ` [PATCHv3 3/5] mm/fault: Try to map the entire file folio in finish_fault() Kiryl Shutsemau
` (3 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Kiryl Shutsemau @ 2025-09-23 11:03 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Hugh Dickins, Matthew Wilcox
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Johannes Weiner, Shakeel Butt, Baolin Wang, linux-mm,
linux-kernel, Kiryl Shutsemau
From: Kiryl Shutsemau <kas@kernel.org>
Currently, try_to_unmap_one() only tries to mlock small folios.
Use logic similar to folio_referenced_one() to mlock large folios:
only do this for fully mapped folios and under page table lock that
protects all page table entries.
Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
---
mm/rmap.c | 31 ++++++++++++++++++++++++++++---
1 file changed, 28 insertions(+), 3 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index 3d0235f332de..a55c3bf41287 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1870,6 +1870,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
unsigned long nr_pages = 1, end_addr;
unsigned long pfn;
unsigned long hsz = 0;
+ int ptes = 0;
/*
* When racing against e.g. zap_pte_range() on another cpu,
@@ -1910,10 +1911,34 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
*/
if (!(flags & TTU_IGNORE_MLOCK) &&
(vma->vm_flags & VM_LOCKED)) {
+ ptes++;
+
+ /*
+ * Set 'ret' to indicate the page cannot be unmapped.
+ *
+ * Do not jump to walk_abort immediately as additional
+ * iterations might be required to detect a fully mapped
+ * folio and mlock it.
+ */
+ ret = false;
+
+ /* Only mlock fully mapped pages */
+ if (pvmw.pte && ptes != pvmw.nr_pages)
+ continue;
+
+ /*
+ * All PTEs must be protected by page table lock in
+ * order to mlock the page.
+ *
+ * If a page table boundary has been crossed, the current ptl
+ * only protects part of the PTEs.
+ */
+ if (pvmw.flags & PVMW_PGTABLE_CROSSED)
+ goto walk_done;
+
/* Restore the mlock which got missed */
- if (!folio_test_large(folio))
- mlock_vma_folio(folio, vma);
- goto walk_abort;
+ mlock_vma_folio(folio, vma);
+ goto walk_done;
}
if (!pvmw.pte) {
--
2.50.1
* [PATCHv3 3/5] mm/fault: Try to map the entire file folio in finish_fault()
2025-09-23 11:03 [PATCHv3 0/5] mm: Improve mlock tracking for large folios Kiryl Shutsemau
2025-09-23 11:03 ` [PATCHv3 1/5] mm/rmap: Fix a mlock race condition in folio_referenced_one() Kiryl Shutsemau
2025-09-23 11:03 ` [PATCHv3 2/5] mm/rmap: mlock large folios in try_to_unmap_one() Kiryl Shutsemau
@ 2025-09-23 11:03 ` Kiryl Shutsemau
2025-09-23 11:03 ` [PATCHv3 4/5] mm/filemap: Map entire large folio faultaround Kiryl Shutsemau
` (2 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Kiryl Shutsemau @ 2025-09-23 11:03 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Hugh Dickins, Matthew Wilcox
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Johannes Weiner, Shakeel Butt, Baolin Wang, linux-mm,
linux-kernel, Kiryl Shutsemau
From: Kiryl Shutsemau <kas@kernel.org>
The finish_fault() function uses per-page fault handling for file folios.
This only occurs for file folios smaller than PMD_SIZE.
The comment suggests that this approach prevents RSS inflation. However,
it only prevents the RSS accounting: the folio is still mapped to the
process, and the fact that it is mapped by a single PTE does not affect
memory pressure. Additionally, the kernel's willingness to map large
enough folios with a PMD does not support this argument either.
When possible, map large folios in one shot. This reduces the number of
minor page faults and allows for TLB coalescing.
Mapping large folios at once will allow the rmap code to mlock them on
add, as it will recognize that they are fully mapped and that mlocking
is safe.
Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/memory.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 0ba4f6b71847..812a7d9f6531 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5386,13 +5386,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
nr_pages = folio_nr_pages(folio);
- /*
- * Using per-page fault to maintain the uffd semantics, and same
- * approach also applies to non shmem/tmpfs faults to avoid
- * inflating the RSS of the process.
- */
- if (!vma_is_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||
- unlikely(needs_fallback)) {
+ /* Using per-page fault to maintain the uffd semantics */
+ if (unlikely(userfaultfd_armed(vma)) || unlikely(needs_fallback)) {
nr_pages = 1;
} else if (nr_pages > 1) {
pgoff_t idx = folio_page_idx(folio, page);
--
2.50.1
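One way to observe the intended effect from userspace is to count minor faults
while touching a file-backed mapping. A rough sketch, assuming a pre-existing
file 'testfile' of at least 2MB on a filesystem that hands out large folios;
whether the fault count actually drops depends on the kernel and filesystem:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/resource.h>

int main(void)
{
        size_t len = 2UL << 20;                 /* 2MB */
        int fd = open("testfile", O_RDONLY);    /* assumed to exist, >= 2MB */
        char *p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
        struct rusage before, after;
        volatile char sum = 0;

        if (fd < 0 || p == MAP_FAILED)
                return 1;

        getrusage(RUSAGE_SELF, &before);
        for (size_t off = 0; off < len; off += 4096)
                sum += p[off];                  /* fault the mapping in, page by page */
        getrusage(RUSAGE_SELF, &after);

        printf("minor faults while touching 2MB: %ld\n",
               after.ru_minflt - before.ru_minflt);
        munmap(p, len);
        close(fd);
        return 0;
}

With per-page faults this is on the order of 512 for a 2MB range; if the whole
folio is mapped on the first fault, it should drop to a handful.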
* [PATCHv3 4/5] mm/filemap: Map entire large folio faultaround
2025-09-23 11:03 [PATCHv3 0/5] mm: Improve mlock tracking for large folios Kiryl Shutsemau
` (2 preceding siblings ...)
2025-09-23 11:03 ` [PATCHv3 3/5] mm/fault: Try to map the entire file folio in finish_fault() Kiryl Shutsemau
@ 2025-09-23 11:03 ` Kiryl Shutsemau
2025-09-23 11:03 ` [PATCHv3 5/5] mm/rmap: Improve mlock tracking for large folios Kiryl Shutsemau
2025-09-23 11:05 ` [PATCHv3 0/5] mm: " Kiryl Shutsemau
5 siblings, 0 replies; 7+ messages in thread
From: Kiryl Shutsemau @ 2025-09-23 11:03 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Hugh Dickins, Matthew Wilcox
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Johannes Weiner, Shakeel Butt, Baolin Wang, linux-mm,
linux-kernel, Kiryl Shutsemau
From: Kiryl Shutsemau <kas@kernel.org>
Currently, the kernel only maps the part of a large folio that fits into
the start_pgoff/end_pgoff range.
Map the entire folio where possible. This matches the finish_fault()
behaviour that the user hits on a cold page cache.
Mapping large folios at once will allow the rmap code to mlock them on
add, as it will recognize that they are fully mapped and that mlocking
is safe.
Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
---
mm/filemap.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/mm/filemap.c b/mm/filemap.c
index 751838ef05e5..26cae577ba23 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3643,6 +3643,21 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
struct page *page = folio_page(folio, start);
unsigned int count = 0;
pte_t *old_ptep = vmf->pte;
+ unsigned long addr0;
+
+ /*
+ * Map the large folio fully where possible.
+ *
+ * The folio must not cross a VMA or page table boundary.
+ */
+ addr0 = addr - start * PAGE_SIZE;
+ if (folio_within_vma(folio, vmf->vma) &&
+ (addr0 & PMD_MASK) == ((addr0 + folio_size(folio) - 1) & PMD_MASK)) {
+ vmf->pte -= start;
+ page -= start;
+ addr = addr0;
+ nr_pages = folio_nr_pages(folio);
+ }
do {
if (PageHWPoison(page + count))
--
2.50.1
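The alignment test added above amounts to checking that the folio's first and
last bytes fall under the same PMD-sized page table. A standalone sketch of the
arithmetic, assuming the typical x86-64 value of PMD_SIZE (2MB with 4K pages):

#include <stdbool.h>
#include <stdio.h>

#define SKETCH_PMD_SIZE (1UL << 21)             /* assumed: 2MB */
#define SKETCH_PMD_MASK (~(SKETCH_PMD_SIZE - 1))

/* True if [addr, addr + size) sits under a single PTE page table. */
static bool fits_one_page_table(unsigned long addr, unsigned long size)
{
        return (addr & SKETCH_PMD_MASK) ==
               ((addr + size - 1) & SKETCH_PMD_MASK);
}

int main(void)
{
        /* addr0 = addr - start * PAGE_SIZE rewinds the faulting address to
         * the folio start, as in the hunk above; the values are examples. */
        printf("%d %d\n",
               fits_one_page_table(0x7f00001f0000UL, 64 * 1024),  /* 1: stays below the 2MB boundary */
               fits_one_page_table(0x7f00001f8000UL, 64 * 1024)); /* 0: crosses it */
        return 0;
}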
* [PATCHv3 5/5] mm/rmap: Improve mlock tracking for large folios
2025-09-23 11:03 [PATCHv3 0/5] mm: Improve mlock tracking for large folios Kiryl Shutsemau
` (3 preceding siblings ...)
2025-09-23 11:03 ` [PATCHv3 4/5] mm/filemap: Map entire large folio faultaround Kiryl Shutsemau
@ 2025-09-23 11:03 ` Kiryl Shutsemau
2025-09-23 11:05 ` [PATCHv3 0/5] mm: " Kiryl Shutsemau
5 siblings, 0 replies; 7+ messages in thread
From: Kiryl Shutsemau @ 2025-09-23 11:03 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Hugh Dickins, Matthew Wilcox
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Johannes Weiner, Shakeel Butt, Baolin Wang, linux-mm,
linux-kernel, Kiryl Shutsemau
From: Kiryl Shutsemau <kas@kernel.org>
The kernel currently does not mlock large folios when adding them to
rmap, stating that it is difficult to confirm that the folio is fully
mapped and therefore safe to mlock.
This leads to a significant undercount of Mlocked in /proc/meminfo,
causing problems in production where the stat was used to estimate
system utilization and determine if load shedding is required.
However, nowadays the caller passes the number of pages of the folio that
are getting mapped, making it easy to check if the entire folio is
mapped to the VMA.
mlock the folio on rmap if it is fully mapped to the VMA.
Mlocked in /proc/meminfo can still undercount, but the value is closer to
the truth and is useful for userspace.
Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/rmap.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index a55c3bf41287..d5b40800198c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1462,12 +1462,12 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
}
/*
- * For large folio, only mlock it if it's fully mapped to VMA. It's
- * not easy to check whether the large folio is fully mapped to VMA
- * here. Only mlock normal 4K folio and leave page reclaim to handle
- * large folio.
+ * Only mlock it if the folio is fully mapped to the VMA.
+ *
+ * Partially mapped folios can be split on reclaim, and the part outside
+ * of the mlocked VMA can be evicted or freed.
*/
- if (!folio_test_large(folio))
+ if (folio_nr_pages(folio) == nr_pages)
mlock_vma_folio(folio, vma);
}
@@ -1603,8 +1603,13 @@ static __always_inline void __folio_add_file_rmap(struct folio *folio,
nr = __folio_add_rmap(folio, page, nr_pages, vma, level, &nr_pmdmapped);
__folio_mod_stat(folio, nr, nr_pmdmapped);
- /* See comments in folio_add_anon_rmap_*() */
- if (!folio_test_large(folio))
+ /*
+ * Only mlock it if the folio is fully mapped to the VMA.
+ *
+ * Partially mapped folios can be split on reclaim, and the part outside
+ * of the mlocked VMA can be evicted or freed.
+ */
+ if (folio_nr_pages(folio) == nr_pages)
mlock_vma_folio(folio, vma);
}
--
2.50.1
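The undercount described above can be observed by watching the Mlocked counter
in /proc/meminfo around an mlock() of a file-backed mapping. A rough sketch;
the counter is system-wide, so other activity adds noise, and 'testfile' and
its size are assumptions:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

static long mlocked_kb(void)
{
        char line[128];
        long kb = -1;
        FILE *f = fopen("/proc/meminfo", "r");

        while (f && fgets(line, sizeof(line), f))
                if (sscanf(line, "Mlocked: %ld kB", &kb) == 1)
                        break;
        if (f)
                fclose(f);
        return kb;
}

int main(void)
{
        size_t len = 8UL << 20;                 /* 8MB */
        int fd = open("testfile", O_RDONLY);    /* assumed to exist, >= 8MB */
        char *p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
        long before;

        if (fd < 0 || p == MAP_FAILED)
                return 1;

        before = mlocked_kb();
        if (mlock(p, len))
                return 1;
        printf("Mlocked grew by %ld kB after locking %zu kB\n",
               mlocked_kb() - before, len / 1024);
        munlock(p, len);
        return 0;
}

With large page-cache folios and a kernel without this series, the reported
growth can stay well below the locked size; with it, the two should be much
closer.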
* Re: [PATCHv3 0/5] mm: Improve mlock tracking for large folios
2025-09-23 11:03 [PATCHv3 0/5] mm: Improve mlock tracking for large folios Kiryl Shutsemau
` (4 preceding siblings ...)
2025-09-23 11:03 ` [PATCHv3 5/5] mm/rmap: Improve mlock tracking for large folios Kiryl Shutsemau
@ 2025-09-23 11:05 ` Kiryl Shutsemau
5 siblings, 0 replies; 7+ messages in thread
From: Kiryl Shutsemau @ 2025-09-23 11:05 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Hugh Dickins, Matthew Wilcox
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Johannes Weiner, Shakeel Butt, Baolin Wang, linux-mm,
linux-kernel
On Tue, Sep 23, 2025 at 12:03:05PM +0100, Kiryl Shutsemau wrote:
> From: Kiryl Shutsemau <kas@kernel.org>
>
> The patchset includes several fixes and improvements related to mlock
> tracking of large folios.
Please, disregard. I missed one patch on submission.
--
Kiryl Shutsemau / Kirill A. Shutemov