From: Zhiguo Jiang <justinjiang@vivo.com>
To: Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: opensource.kernel@vivo.com, Zhiguo Jiang <justinjiang@vivo.com>
Subject: [PATCH v2 1/2] mm:vmscan: the dirty folio in folio list skip unmap
Date: Thu, 19 Oct 2023 21:42:10 +0800
Message-ID: <20231019134211.329-2-justinjiang@vivo.com>
In-Reply-To: <20231019134211.329-1-justinjiang@vivo.com>

In shrink_folio_list(), a dirty file folio can come from two sources:
1. The incoming folio_list parameter, i.e. the inactive file lru.
2. The PTE dirty bit transferred to the folio by try_to_unmap().

For the first source, if the dirty folio does not support pageout, it
can skip unmap in advance to reduce reclaim time.
Signed-off-by: Zhiguo Jiang <justinjiang@vivo.com>
---
Changelog:
v1->v2:
1. Keep the original judgment flow.
2. Add the folio_check_pageout() interface.
3. A dirty folio in the inactive file lru which does not support
pageout now skips unmap in advance.
mm/vmscan.c | 103 +++++++++++++++++++++++++++++++++-------------------
1 file changed, 66 insertions(+), 37 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a68d01fcc307..e067269275a5 100755
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -925,6 +925,44 @@ static void folio_check_dirty_writeback(struct folio *folio,
mapping->a_ops->is_dirty_writeback(folio, dirty, writeback);
}
+/* Check if a dirty folio can be paged out in the reclaim process */
+static bool folio_check_pageout(struct folio *folio,
+ struct pglist_data *pgdat)
+{
+ bool ret = true;
+
+ /*
+ * Anonymous folios are not handled by flushers and must be written
+ * from reclaim context. Do not stall reclaim based on them.
+ * MADV_FREE anonymous folios are put into inactive file list too.
+ * They could be mistakenly treated as file lru. So further anon
+ * test is needed.
+ */
+ if (!folio_is_file_lru(folio) ||
+ (folio_test_anon(folio) && !folio_test_swapbacked(folio)))
+ goto out;
+
+ if (folio_test_dirty(folio) &&
+ (!current_is_kswapd() ||
+ !folio_test_reclaim(folio) ||
+ !test_bit(PGDAT_DIRTY, &pgdat->flags))) {
+ /*
+ * Immediately reclaim when written back.
+ * Similar in principle to folio_deactivate()
+ * except we already have the folio isolated
+ * and know it's dirty
+ */
+ node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE,
+ folio_nr_pages(folio));
+ folio_set_reclaim(folio);
+
+ ret = false;
+ }
+
+out:
+ return ret;
+}
+
static struct folio *alloc_demote_folio(struct folio *src,
unsigned long private)
{
@@ -1078,6 +1116,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
if (dirty && !writeback)
stat->nr_unqueued_dirty += nr_pages;
+ /*
+  * If the dirty folio does not support pageout,
+  * it can skip this reclaim cycle.
+  */
+ if (!folio_check_pageout(folio, pgdat))
+ goto activate_locked;
+
/*
* Treat this folio as congested if folios are cycling
* through the LRU so quickly that the folios marked
@@ -1261,43 +1305,6 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
enum ttu_flags flags = TTU_BATCH_FLUSH;
bool was_swapbacked = folio_test_swapbacked(folio);
- if (folio_test_dirty(folio)) {
- /*
- * Only kswapd can writeback filesystem folios
- * to avoid risk of stack overflow. But avoid
- * injecting inefficient single-folio I/O into
- * flusher writeback as much as possible: only
- * write folios when we've encountered many
- * dirty folios, and when we've already scanned
- * the rest of the LRU for clean folios and see
- * the same dirty folios again (with the reclaim
- * flag set).
- */
- if (folio_is_file_lru(folio) &&
- (!current_is_kswapd() ||
- !folio_test_reclaim(folio) ||
- !test_bit(PGDAT_DIRTY, &pgdat->flags))) {
- /*
- * Immediately reclaim when written back.
- * Similar in principle to folio_deactivate()
- * except we already have the folio isolated
- * and know it's dirty
- */
- node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE,
- nr_pages);
- folio_set_reclaim(folio);
-
- goto activate_locked;
- }
-
- if (references == FOLIOREF_RECLAIM_CLEAN)
- goto keep_locked;
- if (!may_enter_fs(folio, sc->gfp_mask))
- goto keep_locked;
- if (!sc->may_writepage)
- goto keep_locked;
- }
-
if (folio_test_pmd_mappable(folio))
flags |= TTU_SPLIT_HUGE_PMD;
@@ -1323,6 +1330,28 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
mapping = folio_mapping(folio);
if (folio_test_dirty(folio)) {
+ /*
+ * Only kswapd can writeback filesystem folios
+ * to avoid risk of stack overflow. But avoid
+ * injecting inefficient single-folio I/O into
+ * flusher writeback as much as possible: only
+ * write folios when we've encountered many
+ * dirty folios, and when we've already scanned
+ * the rest of the LRU for clean folios and see
+ * the same dirty folios again (with the reclaim
+ * flag set).
+ */
+ if (folio_is_file_lru(folio) &&
+ !folio_check_pageout(folio, pgdat))
+ goto activate_locked;
+
+ if (references == FOLIOREF_RECLAIM_CLEAN)
+ goto keep_locked;
+ if (!may_enter_fs(folio, sc->gfp_mask))
+ goto keep_locked;
+ if (!sc->may_writepage)
+ goto keep_locked;
+
/*
* Folio is dirty. Flush the TLB if a writable entry
* potentially exists to avoid CPU writes after I/O
--
2.39.0
2023-10-19 13:42 [PATCH v2 0/2] mm: the dirty folio unmap redundantly Zhiguo Jiang
2023-10-19 13:42 ` Zhiguo Jiang [this message]
2023-11-06 21:35 ` [PATCH v2 1/2] mm:vmscan: the dirty folio in folio list skip unmap Andrew Morton
2023-10-19 13:42 ` [PATCH v2 2/2] mm:vmscan: the ref clean dirty folio " Zhiguo Jiang
2023-10-20 3:29 ` Matthew Wilcox
2023-10-20 3:37 ` zhiguojiang