* [PATCH v2 0/2] mm: the dirty folio unmap redundantly
@ 2023-10-19 13:42 Zhiguo Jiang
  2023-10-19 13:42 ` [PATCH v2 1/2] mm:vmscan: the dirty folio in folio list skip unmap Zhiguo Jiang
  2023-10-19 13:42 ` [PATCH v2 2/2] mm:vmscan: the ref clean dirty folio " Zhiguo Jiang
  0 siblings, 2 replies; 7+ messages in thread
From: Zhiguo Jiang @ 2023-10-19 13:42 UTC (permalink / raw)
  To: Andrew Morton, linux-mm, linux-kernel; +Cc: opensource.kernel, Zhiguo Jiang

*** BLURB HERE ***

Zhiguo Jiang (2):
  mm:vmscan: the dirty folio in folio list skip unmap
  mm:vmscan: the ref clean dirty folio skip unmap

 mm/vmscan.c | 106 ++++++++++++++++++++++++++++++++++------------------
 1 file changed, 69 insertions(+), 37 deletions(-)

--
2.39.0

^ permalink raw reply	[flat|nested] 7+ messages in thread
* [PATCH v2 1/2] mm:vmscan: the dirty folio in folio list skip unmap
  2023-10-19 13:42 [PATCH v2 0/2] mm: the dirty folio unmap redundantly Zhiguo Jiang
@ 2023-10-19 13:42 ` Zhiguo Jiang
  2023-11-06 21:35   ` Andrew Morton
  2023-10-19 13:42 ` [PATCH v2 2/2] mm:vmscan: the ref clean dirty folio " Zhiguo Jiang
  1 sibling, 1 reply; 7+ messages in thread
From: Zhiguo Jiang @ 2023-10-19 13:42 UTC (permalink / raw)
  To: Andrew Morton, linux-mm, linux-kernel; +Cc: opensource.kernel, Zhiguo Jiang

In shrink_folio_list(), a dirty file folio can come from two sources:
1. It arrives dirty on the incoming folio_list parameter, i.e. from
   the inactive file lru.
2. It is dirtied via the PTE dirty bit transferred by try_to_unmap().

For the first source, if the dirty folio does not support pageout, it
can skip unmap in advance, which reduces reclaim time.

Signed-off-by: Zhiguo Jiang <justinjiang@vivo.com>
---
Changelog:
v1->v2:
1. Keep the original judgment flow.
2. Add the interface of folio_check_pageout().
3. A dirty folio in the inactive file lru which does not support
   pageout skips unmap in advance.

 mm/vmscan.c | 103 +++++++++++++++++++++++++++++++++-------------------
 1 file changed, 66 insertions(+), 37 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a68d01fcc307..e067269275a5 100755
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -925,6 +925,44 @@ static void folio_check_dirty_writeback(struct folio *folio,
 		mapping->a_ops->is_dirty_writeback(folio, dirty, writeback);
 }
 
+/* Check if a dirty folio can support pageout in the recycling process */
+static bool folio_check_pageout(struct folio *folio,
+		struct pglist_data *pgdat)
+{
+	int ret = true;
+
+	/*
+	 * Anonymous folios are not handled by flushers and must be written
+	 * from reclaim context. Do not stall reclaim based on them.
+	 * MADV_FREE anonymous folios are put into inactive file list too.
+	 * They could be mistakenly treated as file lru. So further anon
+	 * test is needed.
+	 */
+	if (!folio_is_file_lru(folio) ||
+	    (folio_test_anon(folio) && !folio_test_swapbacked(folio)))
+		goto out;
+
+	if (folio_test_dirty(folio) &&
+	    (!current_is_kswapd() ||
+	     !folio_test_reclaim(folio) ||
+	     !test_bit(PGDAT_DIRTY, &pgdat->flags))) {
+		/*
+		 * Immediately reclaim when written back.
+		 * Similar in principle to folio_deactivate()
+		 * except we already have the folio isolated
+		 * and know it's dirty
+		 */
+		node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE,
+				folio_nr_pages(folio));
+		folio_set_reclaim(folio);
+
+		ret = false;
+	}
+
+out:
+	return ret;
+}
+
 static struct folio *alloc_demote_folio(struct folio *src,
 		unsigned long private)
 {
@@ -1078,6 +1116,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		if (dirty && !writeback)
 			stat->nr_unqueued_dirty += nr_pages;
 
+		/* If the dirty folio does not support pageout,
+		 * the dirty folio can skip this recycling.
+		 */
+		if (!folio_check_pageout(folio, pgdat))
+			goto activate_locked;
+
 		/*
 		 * Treat this folio as congested if folios are cycling
 		 * through the LRU so quickly that the folios marked
@@ -1261,43 +1305,6 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			enum ttu_flags flags = TTU_BATCH_FLUSH;
 			bool was_swapbacked = folio_test_swapbacked(folio);
 
-			if (folio_test_dirty(folio)) {
-				/*
-				 * Only kswapd can writeback filesystem folios
-				 * to avoid risk of stack overflow. But avoid
-				 * injecting inefficient single-folio I/O into
-				 * flusher writeback as much as possible: only
-				 * write folios when we've encountered many
-				 * dirty folios, and when we've already scanned
-				 * the rest of the LRU for clean folios and see
-				 * the same dirty folios again (with the reclaim
-				 * flag set).
-				 */
-				if (folio_is_file_lru(folio) &&
-				    (!current_is_kswapd() ||
-				     !folio_test_reclaim(folio) ||
-				     !test_bit(PGDAT_DIRTY, &pgdat->flags))) {
-					/*
-					 * Immediately reclaim when written back.
-					 * Similar in principle to folio_deactivate()
-					 * except we already have the folio isolated
-					 * and know it's dirty
-					 */
-					node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE,
-							nr_pages);
-					folio_set_reclaim(folio);
-
-					goto activate_locked;
-				}
-
-				if (references == FOLIOREF_RECLAIM_CLEAN)
-					goto keep_locked;
-				if (!may_enter_fs(folio, sc->gfp_mask))
-					goto keep_locked;
-				if (!sc->may_writepage)
-					goto keep_locked;
-			}
-
 			if (folio_test_pmd_mappable(folio))
 				flags |= TTU_SPLIT_HUGE_PMD;
 
@@ -1323,6 +1330,28 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 
 		mapping = folio_mapping(folio);
 		if (folio_test_dirty(folio)) {
+			/*
+			 * Only kswapd can writeback filesystem folios
+			 * to avoid risk of stack overflow. But avoid
+			 * injecting inefficient single-folio I/O into
+			 * flusher writeback as much as possible: only
+			 * write folios when we've encountered many
+			 * dirty folios, and when we've already scanned
+			 * the rest of the LRU for clean folios and see
+			 * the same dirty folios again (with the reclaim
+			 * flag set).
+			 */
+			if (folio_is_file_lru(folio) &&
+			    !folio_check_pageout(folio, pgdat))
+				goto activate_locked;
+
+			if (references == FOLIOREF_RECLAIM_CLEAN)
+				goto keep_locked;
+			if (!may_enter_fs(folio, sc->gfp_mask))
+				goto keep_locked;
+			if (!sc->may_writepage)
+				goto keep_locked;
+
 			/*
 			 * Folio is dirty. Flush the TLB if a writable entry
 			 * potentially exists to avoid CPU writes after I/O

--
2.39.0

^ permalink raw reply	[flat|nested] 7+ messages in thread
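Context for the hunk at @@ -1078 above: the `dirty` and `writeback` flags tested there are filled in a few lines earlier in shrink_folio_list() by folio_check_dirty_writeback(), which starts from the folio flags and lets the filesystem refine them through its ->is_dirty_writeback() callback. A minimal sketch of that surrounding code, reconstructed from mm/vmscan.c of roughly this vintage (comments paraphrased; exact placement may differ):

	/*
	 * Start from the folio flags; a filesystem may refine the
	 * answer via its ->is_dirty_writeback() callback.
	 */
	folio_check_dirty_writeback(folio, &dirty, &writeback);
	if (dirty || writeback)
		stat->nr_dirty += nr_pages;

	if (dirty && !writeback)
		stat->nr_unqueued_dirty += nr_pages;

	/* patch 1/2 inserts its folio_check_pageout() test here */

Note that folio_check_pageout() re-tests folio_test_dirty() directly rather than reusing this `dirty` value, so the two can disagree for a filesystem that overrides the flag.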
* Re: [PATCH v2 1/2] mm:vmscan: the dirty folio in folio list skip unmap
  2023-10-19 13:42 ` [PATCH v2 1/2] mm:vmscan: the dirty folio in folio list skip unmap Zhiguo Jiang
@ 2023-11-06 21:35   ` Andrew Morton
  0 siblings, 0 replies; 7+ messages in thread
From: Andrew Morton @ 2023-11-06 21:35 UTC (permalink / raw)
  To: Zhiguo Jiang; +Cc: linux-mm, linux-kernel, opensource.kernel

On Thu, 19 Oct 2023 21:42:10 +0800 Zhiguo Jiang <justinjiang@vivo.com> wrote:

> In shrink_folio_list(), a dirty file folio can come from two sources:
> 1. It arrives dirty on the incoming folio_list parameter, i.e. from
>    the inactive file lru.
> 2. It is dirtied via the PTE dirty bit transferred by try_to_unmap().
>
> For the first source, if the dirty folio does not support pageout, it
> can skip unmap in advance, which reduces reclaim time.

This patch does a certain amount of code movement and it implements a
functional change.

Is it possible to split these?  The first patch moves code around but
has no runtime effect, the second patch implements the functional
change.

Also, the patch doesn't apply to current code, so please redo it
against Linus's latest tree.

Thanks.

^ permalink raw reply	[flat|nested] 7+ messages in thread
* [PATCH v2 2/2] mm:vmscan: the ref clean dirty folio skip unmap
  2023-10-19 13:42 [PATCH v2 0/2] mm: the dirty folio unmap redundantly Zhiguo Jiang
  2023-10-19 13:42 ` [PATCH v2 1/2] mm:vmscan: the dirty folio in folio list skip unmap Zhiguo Jiang
@ 2023-10-19 13:42 ` Zhiguo Jiang
  2023-10-20  3:29   ` Matthew Wilcox
  1 sibling, 1 reply; 7+ messages in thread
From: Zhiguo Jiang @ 2023-10-19 13:42 UTC (permalink / raw)
  To: Andrew Morton, linux-mm, linux-kernel; +Cc: opensource.kernel, Zhiguo Jiang

If a dirty folio on folio_list (the inactive file lru) is
FOLIOREF_RECLAIM_CLEAN, it can skip unmap in advance to reduce reclaim
time.

Signed-off-by: Zhiguo Jiang <justinjiang@vivo.com>
---
Changelog:
v1->v2:
1. A dirty folio in folio_list which is FOLIOREF_RECLAIM_CLEAN skips
   unmap in advance.

 mm/vmscan.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index e067269275a5..e587dafeef94 100755
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1225,7 +1225,10 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			stat->nr_ref_keep += nr_pages;
 			goto keep_locked;
 		case FOLIOREF_RECLAIM:
+			break;
 		case FOLIOREF_RECLAIM_CLEAN:
+			if (dirty)
+				goto activate_locked;
 			;  /* try to reclaim the folio below */
 		}
 
--
2.39.0

^ permalink raw reply	[flat|nested] 7+ messages in thread
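For orientation, the case labels in the hunk above belong to the folio_references enum returned by folio_check_references() earlier in shrink_folio_list(). Its values in mm/vmscan.c of this era are below; the trailing comments are added here for explanation and are not verbatim from the source:

	enum folio_references {
		FOLIOREF_RECLAIM,		/* try to reclaim the folio */
		FOLIOREF_RECLAIM_CLEAN,		/* reclaim only if no writeback is needed */
		FOLIOREF_KEEP,			/* leave on the inactive list */
		FOLIOREF_ACTIVATE,		/* promote to the active list */
	};

With patch 1/2 applied, FOLIOREF_RECLAIM and FOLIOREF_RECLAIM_CLEAN otherwise diverge only at the dirty-folio checks after try_to_unmap(); this patch makes the clean-only case bail out before unmap when the folio is already known to be dirty.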
* Re: [PATCH v2 2/2] mm:vmscan: the ref clean dirty folio skip unmap
  2023-10-19 13:42 ` [PATCH v2 2/2] mm:vmscan: the ref clean dirty folio " Zhiguo Jiang
@ 2023-10-20  3:29   ` Matthew Wilcox
  2023-10-20  3:37     ` zhiguojiang
  0 siblings, 1 reply; 7+ messages in thread
From: Matthew Wilcox @ 2023-10-20 3:29 UTC (permalink / raw)
  To: Zhiguo Jiang; +Cc: Andrew Morton, linux-mm, linux-kernel, opensource.kernel

On Thu, Oct 19, 2023 at 09:42:11PM +0800, Zhiguo Jiang wrote:
> +++ b/mm/vmscan.c
> @@ -1225,7 +1225,10 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  			stat->nr_ref_keep += nr_pages;
>  			goto keep_locked;
>  		case FOLIOREF_RECLAIM:
> +			break;
>  		case FOLIOREF_RECLAIM_CLEAN:
> +			if (dirty)
> +				goto activate_locked;

Why activate_locked and not keep_locked?

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH v2 2/2] mm:vmscan: the ref clean dirty folio skip unmap
  2023-10-20  3:29 ` Matthew Wilcox
@ 2023-10-20  3:37   ` zhiguojiang
  0 siblings, 0 replies; 7+ messages in thread
From: zhiguojiang @ 2023-10-20 3:37 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: Andrew Morton, linux-mm, linux-kernel, opensource.kernel

On 2023/10/20 11:29, Matthew Wilcox wrote:
> On Thu, Oct 19, 2023 at 09:42:11PM +0800, Zhiguo Jiang wrote:
>> +++ b/mm/vmscan.c
>> @@ -1225,7 +1225,10 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>>  			stat->nr_ref_keep += nr_pages;
>>  			goto keep_locked;
>>  		case FOLIOREF_RECLAIM:
>> +			break;
>>  		case FOLIOREF_RECLAIM_CLEAN:
>> +			if (dirty)
>> +				goto activate_locked;
> Why activate_locked and not keep_locked?
Hi,

This is a mistake; it should be keep_locked.

^ permalink raw reply	[flat|nested] 7+ messages in thread
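The two labels differ in where the folio ends up: activate_locked marks the folio active, so it moves back to the active lru and is not considered again until it ages down, while keep_locked returns it to the inactive lru, where it remains a near-term reclaim candidate once the flusher cleans it. A sketch of the corrected hunk as a v3 would presumably carry it (an assumption; no v3 appears in this thread):

 		case FOLIOREF_RECLAIM:
+			break;
 		case FOLIOREF_RECLAIM_CLEAN:
+			if (dirty)
+				goto keep_locked;	/* not activate_locked */
 			;  /* try to reclaim the folio below */

Keeping the dirty folio inactive preserves the existing behavior of reclaiming it on a later pass after writeback, rather than deferring it behind the whole active list.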
* [PATCH v2 0/2] mm: the dirty folio unmap redundantly
@ 2023-10-19 13:14 Zhiguo Jiang
  2023-10-19 13:14 ` [PATCH v2 2/2] mm:vmscan: the ref clean dirty folio skip unmap Zhiguo Jiang
  0 siblings, 1 reply; 7+ messages in thread
From: Zhiguo Jiang @ 2023-10-19 13:14 UTC (permalink / raw)
  To: Andrew Morton, linux-mm, linux-kernel; +Cc: opensource.kernel, Zhiguo Jiang

*** BLURB HERE ***

Zhiguo Jiang (2):
  mm:vmscan: the dirty folio in folio_list skip unmap
  mm:vmscan: the ref clean dirty folio skip unmap

 mm/vmscan.c | 106 ++++++++++++++++++++++++++++++++++------------------
 1 file changed, 69 insertions(+), 37 deletions(-)

--
2.39.0

^ permalink raw reply	[flat|nested] 7+ messages in thread
* [PATCH v2 2/2] mm:vmscan: the ref clean dirty folio skip unmap
  2023-10-19 13:14 [PATCH v2 0/2] mm: the dirty folio unmap redundantly Zhiguo Jiang
@ 2023-10-19 13:14 ` Zhiguo Jiang
  0 siblings, 0 replies; 7+ messages in thread
From: Zhiguo Jiang @ 2023-10-19 13:14 UTC (permalink / raw)
  To: Andrew Morton, linux-mm, linux-kernel; +Cc: opensource.kernel, Zhiguo Jiang

If a dirty folio on folio_list (the inactive file lru) is
FOLIOREF_RECLAIM_CLEAN, it can skip unmap in advance to reduce reclaim
time.

Signed-off-by: Zhiguo Jiang <justinjiang@vivo.com>
---
Changelog:
v1->v2:
1. A dirty folio in folio_list which is FOLIOREF_RECLAIM_CLEAN skips
   unmap in advance.

 mm/vmscan.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index e067269275a5..e587dafeef94 100755
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1225,7 +1225,10 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			stat->nr_ref_keep += nr_pages;
 			goto keep_locked;
 		case FOLIOREF_RECLAIM:
+			break;
 		case FOLIOREF_RECLAIM_CLEAN:
+			if (dirty)
+				goto activate_locked;
 			;  /* try to reclaim the folio below */
 		}
 
--
2.39.0

^ permalink raw reply	[flat|nested] 7+ messages in thread
end of thread, other threads:[~2023-11-06 21:35 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-10-19 13:42 [PATCH v2 0/2] mm: the dirty folio unmap redundantly Zhiguo Jiang
2023-10-19 13:42 ` [PATCH v2 1/2] mm:vmscan: the dirty folio in folio list skip unmap Zhiguo Jiang
2023-11-06 21:35   ` Andrew Morton
2023-10-19 13:42 ` [PATCH v2 2/2] mm:vmscan: the ref clean dirty folio " Zhiguo Jiang
2023-10-20  3:29   ` Matthew Wilcox
2023-10-20  3:37     ` zhiguojiang

-- strict thread matches above, loose matches on Subject: below --
2023-10-19 13:14 [PATCH v2 0/2] mm: the dirty folio unmap redundantly Zhiguo Jiang
2023-10-19 13:14 ` [PATCH v2 2/2] mm:vmscan: the ref clean dirty folio skip unmap Zhiguo Jiang