* [PATCH] mm: vmscan: avoid redundantly unmapping dirty folios
@ 2023-10-18  1:30 Zhiguo Jiang
  2023-10-18 14:12 ` Matthew Wilcox
  2023-10-19 13:03 ` David Hildenbrand
  0 siblings, 2 replies; 5+ messages in thread
From: Zhiguo Jiang @ 2023-10-18  1:30 UTC (permalink / raw)
  To: Andrew Morton, linux-mm, linux-kernel; +Cc: opensource.kernel, Zhiguo Jiang

If a dirty folio is not going to be reclaimed in the shrink path,
there is no need to unmap it first. Performing the dirty-folio checks
before the unmap step saves shrinking time when traversing dirty
folios.

Signed-off-by: Zhiguo Jiang <justinjiang@vivo.com>
---
 mm/vmscan.c | 72 +++++++++++++++++++++++++++--------------------------
 1 file changed, 37 insertions(+), 35 deletions(-)
 mode change 100644 => 100755 mm/vmscan.c
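
As a reviewer aid, the intent of the reordering can be illustrated with
a small standalone userspace sketch (not kernel code; struct folio_sim,
expensive_unmap() and the flag names below are made up for this
illustration). It shows the same control-flow change as the diff: the
cheap dirty-folio checks run before the expensive unmap step, so a
dirty folio that is going to be kept is never unmapped at all.

/* reorder_sketch.c - illustrative only, not part of the patch */
#include <stdbool.h>
#include <stdio.h>

struct folio_sim {		/* made-up stand-in for struct folio */
	bool mapped;
	bool dirty;
	bool defer_writeback;	/* "leave it to the flusher" condition */
};

static int unmap_calls;

/* stands in for the costly try_to_unmap() + TLB flush work */
static void expensive_unmap(struct folio_sim *f)
{
	unmap_calls++;
	f->mapped = false;
}

/* old order: unmap first, then discover the folio must be kept */
static bool shrink_old(struct folio_sim *f)
{
	if (f->mapped)
		expensive_unmap(f);
	if (f->dirty && f->defer_writeback)
		return false;	/* kept: the unmap above was wasted */
	return true;		/* reclaimed */
}

/* new order: check the dirty conditions before paying for the unmap */
static bool shrink_new(struct folio_sim *f)
{
	if (f->dirty && f->defer_writeback)
		return false;	/* kept without touching the mappings */
	if (f->mapped)
		expensive_unmap(f);
	return true;
}

int main(void)
{
	struct folio_sim a = { .mapped = true, .dirty = true,
			       .defer_writeback = true };
	struct folio_sim b = a;

	shrink_old(&a);
	printf("old order: unmap calls = %d\n", unmap_calls);	/* 1 */

	unmap_calls = 0;
	shrink_new(&b);
	printf("new order: unmap calls = %d\n", unmap_calls);	/* 0 */
	return 0;
}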

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2cc0cb41fb32..cf555cdfcefc
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1261,6 +1261,43 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			enum ttu_flags flags = TTU_BATCH_FLUSH;
 			bool was_swapbacked = folio_test_swapbacked(folio);
 
+			if (folio_test_dirty(folio)) {
+				/*
+				 * Only kswapd can writeback filesystem folios
+				 * to avoid risk of stack overflow. But avoid
+				 * injecting inefficient single-folio I/O into
+				 * flusher writeback as much as possible: only
+				 * write folios when we've encountered many
+				 * dirty folios, and when we've already scanned
+				 * the rest of the LRU for clean folios and see
+				 * the same dirty folios again (with the reclaim
+				 * flag set).
+				 */
+				if (folio_is_file_lru(folio) &&
+				    (!current_is_kswapd() ||
+				     !folio_test_reclaim(folio) ||
+				     !test_bit(PGDAT_DIRTY, &pgdat->flags))) {
+					/*
+					 * Immediately reclaim when written back.
+					 * Similar in principle to folio_deactivate()
+					 * except we already have the folio isolated
+					 * and know it's dirty
+					 */
+					node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE,
+							nr_pages);
+					folio_set_reclaim(folio);
+
+					goto activate_locked;
+				}
+
+				if (references == FOLIOREF_RECLAIM_CLEAN)
+					goto keep_locked;
+				if (!may_enter_fs(folio, sc->gfp_mask))
+					goto keep_locked;
+				if (!sc->may_writepage)
+					goto keep_locked;
+			}
+
 			if (folio_test_pmd_mappable(folio))
 				flags |= TTU_SPLIT_HUGE_PMD;
 
@@ -1286,41 +1323,6 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 
 		mapping = folio_mapping(folio);
 		if (folio_test_dirty(folio)) {
-			/*
-			 * Only kswapd can writeback filesystem folios
-			 * to avoid risk of stack overflow. But avoid
-			 * injecting inefficient single-folio I/O into
-			 * flusher writeback as much as possible: only
-			 * write folios when we've encountered many
-			 * dirty folios, and when we've already scanned
-			 * the rest of the LRU for clean folios and see
-			 * the same dirty folios again (with the reclaim
-			 * flag set).
-			 */
-			if (folio_is_file_lru(folio) &&
-			    (!current_is_kswapd() ||
-			     !folio_test_reclaim(folio) ||
-			     !test_bit(PGDAT_DIRTY, &pgdat->flags))) {
-				/*
-				 * Immediately reclaim when written back.
-				 * Similar in principle to folio_deactivate()
-				 * except we already have the folio isolated
-				 * and know it's dirty
-				 */
-				node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE,
-						nr_pages);
-				folio_set_reclaim(folio);
-
-				goto activate_locked;
-			}
-
-			if (references == FOLIOREF_RECLAIM_CLEAN)
-				goto keep_locked;
-			if (!may_enter_fs(folio, sc->gfp_mask))
-				goto keep_locked;
-			if (!sc->may_writepage)
-				goto keep_locked;
-
 			/*
 			 * Folio is dirty. Flush the TLB if a writable entry
 			 * potentially exists to avoid CPU writes after I/O
-- 
2.39.0




Thread overview: 5+ messages
2023-10-18  1:30 [PATCH] mm: vmscan: avoid redundantly unmapping dirty folios Zhiguo Jiang
2023-10-18 14:12 ` Matthew Wilcox
2023-10-19  1:27   ` Reply: " 江志国
2023-10-19 13:03 ` David Hildenbrand
2023-10-19 13:23   ` zhiguojiang
