linux-mm.kvack.org archive mirror
* [PATCH RFC] mm/mglru: lazily activate folios while folios are really mapped
@ 2026-02-25 22:37 Barry Song
  0 siblings, 0 replies; 2+ messages in thread
From: Barry Song @ 2026-02-25 22:37 UTC (permalink / raw)
  To: akpm, linux-mm
  Cc: linux-kernel, Barry Song, wangzicheng, Suren Baghdasaryan,
	Lei Liu, Matthew Wilcox (Oracle),
	Axel Rasmussen, Yuanchu Xie, Wei Xu, Kairui Song, Tangquan Zheng

From: Barry Song <baohua@kernel.org>

MGLRU activates a folio when it is added to the LRU and
lru_gen_in_fault() returns true. The problem is that when a
page fault occurs at address N, readahead may bring in many
folios around N, and all of those folios are activated even
though many of them may never be accessed.

A previous attempt by Lei Liu proposed introducing a separate
LRU for readahead[1], but that approach is likely over-designed.

This patch instead activates folios lazily, only when they are
actually mapped, so that unused folios do not occupy higher-
priority positions in the LRU, where they would be harder to
reclaim.

A similar optimization could also be applied to swapin readahead,
but this RFC limits the change to file-backed folios for now.

Based on Tangquan's observations, this can significantly reduce
file refaults on Android devices when using MGLRU.

BTW, it seems somewhat odd that all LRU APIs are defined in
swap.c and swap.h.

[1] https://lore.kernel.org/linux-mm/20250916072226.220426-1-liulei.rjpt@vivo.com/

Cc: wangzicheng <wangzicheng@honor.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Lei Liu <liulei.rjpt@vivo.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Yuanchu Xie <yuanchu@google.com>
Cc: Wei Xu <weixugc@google.com>
Cc: Kairui Song <kasong@tencent.com>
Cc: Tangquan Zheng <zhengtangquan@oppo.com>
Signed-off-by: Barry Song <baohua@kernel.org>
---
 include/linux/swap.h |  1 +
 mm/filemap.c         |  2 ++
 mm/swap.c            | 16 +++++++++++++++-
 3 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 62fc7499b408..ce88ec560527 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -335,6 +335,7 @@ void folio_add_lru(struct folio *);
 void folio_add_lru_vma(struct folio *, struct vm_area_struct *);
 void mark_page_accessed(struct page *);
 void folio_mark_accessed(struct folio *);
+void folio_activate_on_mapped(struct folio *folio);
 
 static inline bool folio_may_be_lru_cached(struct folio *folio)
 {
diff --git a/mm/filemap.c b/mm/filemap.c
index 6cd7974d4ada..0b8f383facdb 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3567,6 +3567,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 		}
 	}
 
+	folio_activate_on_mapped(folio);
 	if (!lock_folio_maybe_drop_mmap(vmf, folio, &fpin))
 		goto out_retry;
 
@@ -3926,6 +3927,7 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 					nr_pages, &rss, &mmap_miss, file_end);
 
 		folio_unlock(folio);
+		folio_activate_on_mapped(folio);
 	} while ((folio = next_uptodate_folio(&xas, mapping, end_pgoff)) != NULL);
 	add_mm_counter(vma->vm_mm, folio_type, rss);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
diff --git a/mm/swap.c b/mm/swap.c
index bb19ccbece46..e50b1e794ef1 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -488,6 +488,19 @@ void folio_mark_accessed(struct folio *folio)
 }
 EXPORT_SYMBOL(folio_mark_accessed);
 
+void folio_activate_on_mapped(struct folio *folio)
+{
+	if (lru_gen_enabled() && lru_gen_in_fault() &&
+			!(current->flags & PF_MEMALLOC) &&
+			!folio_test_active(folio) &&
+			!folio_test_unevictable(folio)) {
+		if (folio_test_lru(folio))
+			folio_activate(folio);
+		else /* still in lru cache */
+			__lru_cache_activate_folio(folio);
+	}
+}
+
 /**
  * folio_add_lru - Add a folio to an LRU list.
  * @folio: The folio to be added to the LRU.
@@ -506,7 +519,8 @@ void folio_add_lru(struct folio *folio)
 	/* see the comment in lru_gen_folio_seq() */
 	if (lru_gen_enabled() && !folio_test_unevictable(folio) &&
 	    lru_gen_in_fault() && !(current->flags & PF_MEMALLOC))
-		folio_set_active(folio);
+		if (!folio_is_file_lru(folio))
+			folio_set_active(folio);
 
 	folio_batch_add_and_move(folio, lru_add);
 }
-- 
2.39.3 (Apple Git-146)



* [PATCH RFC] mm/mglru: lazily activate folios while folios are really mapped
@ 2026-02-25 21:26 Barry Song
  0 siblings, 0 replies; 2+ messages in thread
From: Barry Song @ 2026-02-25 21:26 UTC (permalink / raw)
  To: akpm, linux-mm
  Cc: linux-kernel, Barry Song, wangzicheng, Suren Baghdasaryan,
	Lei Liu, Matthew Wilcox, Axel Rasmussen, Yuanchu Xie, Wei Xu,
	Kairui Song, Tangquan Zheng

From: Barry Song <baohua@kernel.org>

MGLRU activates a folio when it is added to the LRU and
lru_gen_in_fault() returns true. The problem is that when a
page fault occurs at address N, readahead may bring in many
folios around N, and all of those folios are activated even
though many of them may never be accessed.

A previous attempt by Lei Liu proposed introducing a separate
LRU for readahead[1], but that approach is likely over-designed.

This patch instead activates folios lazily, only when they are
actually mapped, so that unused folios do not occupy higher-
priority positions in the LRU, where they would be harder to
reclaim.

A similar optimization could also be applied to swapin readahead,
but this RFC limits the change to file-backed folios for now.

Based on Tangquan's observations, this can significantly reduce
file refaults on Android devices when using MGLRU.

BTW, it seems somewhat odd that all LRU APIs are defined in
swap.c and swap.h.

[1] https://lore.kernel.org/linux-mm/20250916072226.220426-1-liulei.rjpt@vivo.com/

Cc: wangzicheng <wangzicheng@honor.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Lei Liu <liulei.rjpt@vivo.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Yuanchu Xie <yuanchu@google.com>
Cc: Wei Xu <weixugc@google.com>
Cc: Kairui Song <kasong@tencent.com>
Cc: Tangquan Zheng <zhengtangquan@oppo.com>
Signed-off-by: Barry Song <baohua@kernel.org>
---
 include/linux/swap.h |  1 +
 mm/filemap.c         |  2 ++
 mm/swap.c            | 17 ++++++++++++++++-
 3 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 62fc7499b408..ce88ec560527 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -335,6 +335,7 @@ void folio_add_lru(struct folio *);
 void folio_add_lru_vma(struct folio *, struct vm_area_struct *);
 void mark_page_accessed(struct page *);
 void folio_mark_accessed(struct folio *);
+void folio_activate_on_mapped(struct folio *folio);
 
 static inline bool folio_may_be_lru_cached(struct folio *folio)
 {
diff --git a/mm/filemap.c b/mm/filemap.c
index 6cd7974d4ada..0b8f383facdb 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3567,6 +3567,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 		}
 	}
 
+	folio_activate_on_mapped(folio);
 	if (!lock_folio_maybe_drop_mmap(vmf, folio, &fpin))
 		goto out_retry;
 
@@ -3926,6 +3927,7 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 					nr_pages, &rss, &mmap_miss, file_end);
 
 		folio_unlock(folio);
+		folio_activate_on_mapped(folio);
 	} while ((folio = next_uptodate_folio(&xas, mapping, end_pgoff)) != NULL);
 	add_mm_counter(vma->vm_mm, folio_type, rss);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
diff --git a/mm/swap.c b/mm/swap.c
index bb19ccbece46..1a991586c5af 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -488,6 +488,20 @@ void folio_mark_accessed(struct folio *folio)
 }
 EXPORT_SYMBOL(folio_mark_accessed);
 
+void folio_activate_on_mapped(struct folio *folio)
+{
+	if (lru_gen_enabled() && lru_gen_in_fault() &&
+			!(current->flags & PF_MEMALLOC) &&
+			!folio_test_active(folio) &&
+			!folio_test_unevictable(folio)) {
+		if (folio_test_lru(folio))
+			folio_activate(folio);
+		else /* still in lru cache */
+			__lru_cache_activate_folio(folio);
+	}
+}
+EXPORT_SYMBOL(folio_activate_on_mapped);
+
 /**
  * folio_add_lru - Add a folio to an LRU list.
  * @folio: The folio to be added to the LRU.
@@ -506,7 +520,8 @@ void folio_add_lru(struct folio *folio)
 	/* see the comment in lru_gen_folio_seq() */
 	if (lru_gen_enabled() && !folio_test_unevictable(folio) &&
 	    lru_gen_in_fault() && !(current->flags & PF_MEMALLOC))
-		folio_set_active(folio);
+		if (!folio_is_file_lru(folio))
+			folio_set_active(folio);
 
 	folio_batch_add_and_move(folio, lru_add);
 }
-- 
2.39.3 (Apple Git-146)


