* [PATCH -next 0/2] Make memory reclamation measurable
@ 2023-12-12 3:26 Bixuan Cui
2023-12-12 3:26 ` [PATCH -next 1/2] mm: shrinker: add new event to trace shrink count Bixuan Cui
2023-12-12 3:26 ` [PATCH -next 2/2] mm: vmscan: add new event to trace shrink lru Bixuan Cui
0 siblings, 2 replies; 7+ messages in thread
From: Bixuan Cui @ 2023-12-12 3:26 UTC (permalink / raw)
To: rostedt, mhiramat, mathieu.desnoyers, akpm
Cc: linux-kernel, linux-trace-kernel, linux-mm, cuibixuan, opensource.kernel
From: cuibixuan <cuibixuan@vivo.com>
When system memory is low, kswapd reclaims memory. The key steps of
memory reclamation are:

1. shrink_lruvec
   * shrink_active_list: moves folios from the active LRU to the inactive LRU
   * shrink_inactive_list: reclaims folios from the inactive LRU list
2. shrink_slab
   * shrinker->count_objects(): calculates the amount of freeable memory
   * shrinker->scan_objects(): reclaims the slab memory
The existing tracepoints in the vmscan path are as follows:
--do_try_to_free_pages
--shrink_zones
--trace_mm_vmscan_node_reclaim_begin (tracer)
--shrink_node
--shrink_node_memcgs
--trace_mm_vmscan_memcg_shrink_begin (tracer)
--shrink_lruvec
--shrink_list
--shrink_active_list
--trace_mm_vmscan_lru_shrink_active (tracer)
--shrink_inactive_list
--trace_mm_vmscan_lru_shrink_inactive (tracer)
--shrink_active_list
--shrink_slab
--do_shrink_slab
--shrinker->count_objects()
--trace_mm_shrink_slab_start (tracer)
--shrinker->scan_objects()
--trace_mm_shrink_slab_end (tracer)
--trace_mm_vmscan_memcg_shrink_end (tracer)
--trace_mm_vmscan_node_reclaim_end (tracer)
If we capture the duration of the LRU and slab shrinking steps and the
amount of memory they reclaim, we can measure memory reclamation, as
follows.

Measuring memory reclamation with bpf:
LRU FILE:
CPU COMM ShrinkActive(us) ShrinkInactive(us) Reclaim(page)
7 kswapd0 26 51 32
7 kswapd0 52 47 13
SLAB:
CPU COMM OBJ_NAME Count_Dur(us) Freeable(page) Scan_Dur(us) Reclaim(page)
1 kswapd0 super_cache_scan.cfi_jt 2 341 3225 128
7 kswapd0 super_cache_scan.cfi_jt 0 2247 8524 1024
7 kswapd0 super_cache_scan.cfi_jt 2367 0 0 0
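Tables like the ones above can be produced by pairing each *_start event with its matching *_end event and subtracting timestamps. The following post-processor is only an illustrative sketch (the helper name and pairing-by-CPU heuristic are my assumptions, not part of this series); it parses standard ftrace-format lines and reports the elapsed time of each shrink phase:

```python
import re

# Matches the standard ftrace line layout:
#   <comm>-<pid> [<cpu>] <flags> <timestamp>: <event>: <fields>
TRACE_RE = re.compile(
    r"^\s*(?P<comm>\S+)-(?P<pid>\d+)\s+\[(?P<cpu>\d+)\]\s+\S+\s+"
    r"(?P<ts>\d+\.\d+):\s+(?P<event>\w+):"
)

def shrink_durations(lines):
    """Yield (cpu, comm, event_base, duration_us) for each start/end pair.

    Events are paired per CPU by stripping the _start/_end suffix; this
    assumes the phases do not nest on a single CPU, which holds for the
    shrink paths traced by this series.
    """
    open_events = {}  # (cpu, event_base) -> (comm, start_timestamp)
    for line in lines:
        m = TRACE_RE.match(line)
        if not m:
            continue
        event, ts, cpu = m.group("event"), float(m.group("ts")), m.group("cpu")
        if event.endswith("_start"):
            open_events[(cpu, event[:-len("_start")])] = (m.group("comm"), ts)
        elif event.endswith("_end"):
            base = event[:-len("_end")]
            started = open_events.pop((cpu, base), None)
            if started is not None:
                comm, t0 = started
                yield cpu, comm, base, (ts - t0) * 1e6  # seconds -> us
```

Feeding it the example trace lines from patch 2/2 yields the ShrinkActive/ShrinkInactive durations shown in the LRU table.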
For this, add new trace events to shrink_active_list()/shrink_inactive_list()
and around shrinker->count_objects().
cuibixuan (2):
mm: shrinker: add new event to trace shrink count
mm: vmscan: add new event to trace shrink lru
include/trace/events/vmscan.h | 87 ++++++++++++++++++++++++++++++++++-
mm/shrinker.c | 4 ++
mm/vmscan.c | 8 +++-
3 files changed, 95 insertions(+), 4 deletions(-)
--
2.39.0
^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH -next 1/2] mm: shrinker: add new event to trace shrink count
  2023-12-12  3:26 [PATCH -next 0/2] Make memory reclamation measurable Bixuan Cui
@ 2023-12-12  3:26 ` Bixuan Cui
  2023-12-12  3:26 ` [PATCH -next 2/2] mm: vmscan: add new event to trace shrink lru Bixuan Cui
  1 sibling, 0 replies; 7+ messages in thread
From: Bixuan Cui @ 2023-12-12  3:26 UTC (permalink / raw)
  To: rostedt, mhiramat, mathieu.desnoyers, akpm
  Cc: linux-kernel, linux-trace-kernel, linux-mm, cuibixuan, opensource.kernel

From: cuibixuan <cuibixuan@vivo.com>

do_shrink_slab() calculates the freeable memory through
shrinker->count_objects() and then reclaims that memory through
shrinker->scan_objects(). When reclaiming memory,
shrinker->count_objects() itself takes a measurable amount of time:

  Fun                spend(us)
  ext4_es_count      4302
  ext4_es_scan       12
  super_cache_count  4195
  super_cache_scan   2103

Therefore, adding trace events around count_objects() makes it possible
to measure the time spent in the slab reclaim path more accurately.

Example of output:
  kswapd0-103 [003] ..... 1098.317942: mm_shrink_count_start: kfree_rcu_shrink_count.cfi_jt+0x0/0x8 00000000c540ff51: nid: 0
  kswapd0-103 [003] ..... 1098.317951: mm_shrink_count_end: kfree_rcu_shrink_count.cfi_jt+0x0/0x8 00000000c540ff51: nid: 0 freeable:36

Signed-off-by: Bixuan Cui <cuibixuan@vivo.com>
---
 include/trace/events/vmscan.h | 49 +++++++++++++++++++++++++++++++++++
 mm/shrinker.c                 |  4 +++
 2 files changed, 53 insertions(+)

diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index 1a488c30afa5..406faa5591c1 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -196,6 +196,55 @@ DEFINE_EVENT(mm_vmscan_direct_reclaim_end_template, mm_vmscan_memcg_softlimit_re
 );
 #endif /* CONFIG_MEMCG */
 
+TRACE_EVENT(mm_shrink_count_start,
+	TP_PROTO(struct shrinker *shr, struct shrink_control *sc),
+
+	TP_ARGS(shr, sc),
+
+	TP_STRUCT__entry(
+		__field(struct shrinker *, shr)
+		__field(void *, shrink)
+		__field(int, nid)
+	),
+
+	TP_fast_assign(
+		__entry->shr = shr;
+		__entry->shrink = shr->count_objects;
+		__entry->nid = sc->nid;
+	),
+
+	TP_printk("%pS %p: nid: %d",
+		__entry->shrink,
+		__entry->shr,
+		__entry->nid)
+);
+
+TRACE_EVENT(mm_shrink_count_end,
+	TP_PROTO(struct shrinker *shr, struct shrink_control *sc, long freeable),
+
+	TP_ARGS(shr, sc, freeable),
+
+	TP_STRUCT__entry(
+		__field(struct shrinker *, shr)
+		__field(void *, shrink)
+		__field(int, nid)
+		__field(long, freeable)
+	),
+
+	TP_fast_assign(
+		__entry->shr = shr;
+		__entry->shrink = shr->count_objects;
+		__entry->nid = sc->nid;
+		__entry->freeable = freeable;
+	),
+
+	TP_printk("%pS %p: nid: %d freeable:%ld",
+		__entry->shrink,
+		__entry->shr,
+		__entry->nid,
+		__entry->freeable)
+);
+
 TRACE_EVENT(mm_shrink_slab_start,
 	TP_PROTO(struct shrinker *shr, struct shrink_control *sc,
 		long nr_objects_to_shrink, unsigned long cache_items,
diff --git a/mm/shrinker.c b/mm/shrinker.c
index dd91eab43ed3..d0c7bf61db61 100644
--- a/mm/shrinker.c
+++ b/mm/shrinker.c
@@ -379,7 +379,11 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 					  : SHRINK_BATCH;
 	long scanned = 0, next_deferred;
 
+	trace_mm_shrink_count_start(shrinker, shrinkctl);
+
 	freeable = shrinker->count_objects(shrinker, shrinkctl);
+
+	trace_mm_shrink_count_end(shrinker, shrinkctl, freeable);
 	if (freeable == 0 || freeable == SHRINK_EMPTY)
 		return freeable;
-- 
2.39.0

^ permalink raw reply	[flat|nested] 7+ messages in thread
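The printed format of the new count events can be consumed by a simple parser. This is only a sketch (the regex and helper name are my assumptions, derived from the example output in the commit message above) that extracts the shrinker symbol, node id, and freeable count from an mm_shrink_count_end line:

```python
import re

# Parses the mm_shrink_count_end payload as printed by the new event:
#   mm_shrink_count_end: <symbol> <shrinker-ptr>: nid: <n> freeable:<m>
END_RE = re.compile(
    r"mm_shrink_count_end:\s+(?P<sym>\S+)\s+(?P<addr>\S+):\s+"
    r"nid:\s*(?P<nid>\d+)\s+freeable:(?P<freeable>\d+)"
)

def parse_count_end(line):
    """Return (shrinker_symbol, nid, freeable) or None if no match."""
    m = END_RE.search(line)
    if not m:
        return None
    return m.group("sym"), int(m.group("nid")), int(m.group("freeable"))
```

Combined with the matching mm_shrink_count_start timestamp, this yields the Count_Dur and Freeable columns of the SLAB table in the cover letter.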
* [PATCH -next 2/2] mm: vmscan: add new event to trace shrink lru
  2023-12-12  3:26 [PATCH -next 0/2] Make memory reclamation measurable Bixuan Cui
  2023-12-12  3:26 ` [PATCH -next 1/2] mm: shrinker: add new event to trace shrink count Bixuan Cui
@ 2023-12-12  3:26 ` Bixuan Cui
  2023-12-13  3:03   ` Andrew Morton
  ` (2 more replies)
  1 sibling, 3 replies; 7+ messages in thread
From: Bixuan Cui @ 2023-12-12  3:26 UTC (permalink / raw)
  To: rostedt, mhiramat, mathieu.desnoyers, akpm
  Cc: linux-kernel, linux-trace-kernel, linux-mm, cuibixuan, opensource.kernel

From: cuibixuan <cuibixuan@vivo.com>

Add new events to measure the execution time of
shrink_inactive_list()/shrink_active_list().

Example of output:
  kswapd0-103 [007] ..... 1098.353020: mm_vmscan_lru_shrink_active_start: nid=0
  kswapd0-103 [007] ..... 1098.353040: mm_vmscan_lru_shrink_active_end: nid=0 nr_taken=32 nr_active=0 nr_deactivated=32 nr_referenced=0 priority=6 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
  kswapd0-103 [007] ..... 1098.353040: mm_vmscan_lru_shrink_inactive_start: nid=0
  kswapd0-103 [007] ..... 1098.353094: mm_vmscan_lru_shrink_inactive_end: nid=0 nr_scanned=32 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate_anon=0 nr_activate_file=0 nr_ref_keep=32 nr_unmap_fail=0 priority=6 flags=RECLAIM_WB_ANON|RECLAIM_WB_ASYNC
  kswapd0-103 [007] ..... 1098.353094: mm_vmscan_lru_shrink_inactive_start: nid=0
  kswapd0-103 [007] ..... 1098.353162: mm_vmscan_lru_shrink_inactive_end: nid=0 nr_scanned=32 nr_reclaimed=21 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate_anon=0 nr_activate_file=0 nr_ref_keep=11 nr_unmap_fail=0 priority=6 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC

Signed-off-by: Bixuan Cui <cuibixuan@vivo.com>
---
 include/trace/events/vmscan.h | 38 +++++++++++++++++++++++++++++++++--
 mm/vmscan.c                   |  8 ++++++--
 2 files changed, 42 insertions(+), 4 deletions(-)

diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index 406faa5591c1..9809d158f968 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -395,7 +395,24 @@ TRACE_EVENT(mm_vmscan_write_folio,
 		show_reclaim_flags(__entry->reclaim_flags))
 );
 
-TRACE_EVENT(mm_vmscan_lru_shrink_inactive,
+TRACE_EVENT(mm_vmscan_lru_shrink_inactive_start,
+
+	TP_PROTO(int nid),
+
+	TP_ARGS(nid),
+
+	TP_STRUCT__entry(
+		__field(int, nid)
+	),
+
+	TP_fast_assign(
+		__entry->nid = nid;
+	),
+
+	TP_printk("nid=%d", __entry->nid)
+);
+
+TRACE_EVENT(mm_vmscan_lru_shrink_inactive_end,
 
 	TP_PROTO(int nid,
 		unsigned long nr_scanned, unsigned long nr_reclaimed,
@@ -446,7 +463,24 @@ TRACE_EVENT(mm_vmscan_lru_shrink_inactive,
 		show_reclaim_flags(__entry->reclaim_flags))
 );
 
-TRACE_EVENT(mm_vmscan_lru_shrink_active,
+TRACE_EVENT(mm_vmscan_lru_shrink_active_start,
+
+	TP_PROTO(int nid),
+
+	TP_ARGS(nid),
+
+	TP_STRUCT__entry(
+		__field(int, nid)
+	),
+
+	TP_fast_assign(
+		__entry->nid = nid;
+	),
+
+	TP_printk("nid=%d", __entry->nid)
+);
+
+TRACE_EVENT(mm_vmscan_lru_shrink_active_end,
 
 	TP_PROTO(int nid, unsigned long nr_taken,
 		unsigned long nr_active, unsigned long nr_deactivated,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4e3b835c6b4a..73e690b3ce68 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1906,6 +1906,8 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 	bool stalled = false;
 
+	trace_mm_vmscan_lru_shrink_inactive_start(pgdat->node_id);
+
 	while (unlikely(too_many_isolated(pgdat, file, sc))) {
 		if (stalled)
 			return 0;
@@ -1990,7 +1992,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
 	if (file)
 		sc->nr.file_taken += nr_taken;
 
-	trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
+	trace_mm_vmscan_lru_shrink_inactive_end(pgdat->node_id,
 			nr_scanned, nr_reclaimed, &stat, sc->priority, file);
 	return nr_reclaimed;
 }
@@ -2028,6 +2030,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	int file = is_file_lru(lru);
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 
+	trace_mm_vmscan_lru_shrink_active_start(pgdat->node_id);
+
 	lru_add_drain();
 
 	spin_lock_irq(&lruvec->lru_lock);
@@ -2107,7 +2111,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	lru_note_cost(lruvec, file, 0, nr_rotated);
 	mem_cgroup_uncharge_list(&l_active);
 	free_unref_page_list(&l_active);
-	trace_mm_vmscan_lru_shrink_active(pgdat->node_id, nr_taken, nr_activate,
+	trace_mm_vmscan_lru_shrink_active_end(pgdat->node_id, nr_taken, nr_activate,
 			nr_deactivate, nr_rotated, sc->priority, file);
 }
-- 
2.39.0

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH -next 2/2] mm: vmscan: add new event to trace shrink lru
  2023-12-12  3:26 ` [PATCH -next 2/2] mm: vmscan: add new event to trace shrink lru Bixuan Cui
@ 2023-12-13  3:03   ` Andrew Morton
  2023-12-13  6:30     ` Bixuan Cui
  2023-12-14 14:29   ` kernel test robot
  2023-12-14 16:51   ` kernel test robot
  2 siblings, 1 reply; 7+ messages in thread
From: Andrew Morton @ 2023-12-13  3:03 UTC (permalink / raw)
  To: Bixuan Cui
  Cc: rostedt, mhiramat, mathieu.desnoyers, linux-kernel,
	linux-trace-kernel, linux-mm, opensource.kernel

On Mon, 11 Dec 2023 19:26:40 -0800 Bixuan Cui <cuibixuan@vivo.com> wrote:

> -TRACE_EVENT(mm_vmscan_lru_shrink_inactive,
> +TRACE_EVENT(mm_vmscan_lru_shrink_inactive_start,

Current kernels have a call to trace_mm_vmscan_lru_shrink_inactive() in
evict_folios(), so this renaming broke the build.

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH -next 2/2] mm: vmscan: add new event to trace shrink lru
  2023-12-13  3:03 ` Andrew Morton
@ 2023-12-13  6:30   ` Bixuan Cui
  0 siblings, 0 replies; 7+ messages in thread
From: Bixuan Cui @ 2023-12-13  6:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: rostedt, mhiramat, mathieu.desnoyers, linux-kernel,
	linux-trace-kernel, linux-mm, opensource.kernel

On 2023/12/13 11:03, Andrew Morton wrote:
>> -TRACE_EVENT(mm_vmscan_lru_shrink_inactive,
>> +TRACE_EVENT(mm_vmscan_lru_shrink_inactive_start,
> Current kernels have a call to trace_mm_vmscan_lru_shrink_inactive() in
> evict_folios(), so this renaming broke the build.

Sorry, I did not enable CONFIG_LRU_GEN when compiling and testing. I
will double-check my patches.

Thanks
Bixuan Cui

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH -next 2/2] mm: vmscan: add new event to trace shrink lru
  2023-12-12  3:26 ` [PATCH -next 2/2] mm: vmscan: add new event to trace shrink lru Bixuan Cui
  2023-12-13  3:03   ` Andrew Morton
@ 2023-12-14 14:29   ` kernel test robot
  2023-12-14 16:51   ` kernel test robot
  2 siblings, 0 replies; 7+ messages in thread
From: kernel test robot @ 2023-12-14 14:29 UTC (permalink / raw)
  To: Bixuan Cui, rostedt, mhiramat, mathieu.desnoyers, akpm
  Cc: oe-kbuild-all, linux-kernel, linux-trace-kernel, linux-mm,
	cuibixuan, opensource.kernel

Hi Bixuan,

kernel test robot noticed the following build errors:

[auto build test ERROR on next-20231211]

url:    https://github.com/intel-lab-lkp/linux/commits/Bixuan-Cui/mm-shrinker-add-new-event-to-trace-shrink-count/20231212-112824
base:   next-20231211
patch link:    https://lore.kernel.org/r/20231212032640.6968-3-cuibixuan%40vivo.com
patch subject: [PATCH -next 2/2] mm: vmscan: add new event to trace shrink lru
config: i386-buildonly-randconfig-003-20231214 (https://download.01.org/0day-ci/archive/20231214/202312142212.vbSe7CMs-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-12) 11.3.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231214/202312142212.vbSe7CMs-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new
version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202312142212.vbSe7CMs-lkp@intel.com/

All errors (new ones prefixed by >>):

   mm/vmscan.c: In function 'evict_folios':
>> mm/vmscan.c:4533:9: error: implicit declaration of function 'trace_mm_vmscan_lru_shrink_inactive'; did you mean 'trace_mm_vmscan_lru_shrink_inactive_end'? [-Werror=implicit-function-declaration]
    4533 |         trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
         |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
         |         trace_mm_vmscan_lru_shrink_inactive_end
   cc1: some warnings being treated as errors


vim +4533 mm/vmscan.c

359a5e1416caaf Yu Zhao                 2022-11-15  4530  retry:
49fd9b6df54e61 Matthew Wilcox (Oracle  2022-09-02  4531) 	reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false);
359a5e1416caaf Yu Zhao                 2022-11-15  4532  	sc->nr_reclaimed += reclaimed;
8c2214fc9a470a Jaewon Kim              2023-10-03 @4533  	trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
8c2214fc9a470a Jaewon Kim              2023-10-03  4534  			scanned, reclaimed, &stat, sc->priority,
8c2214fc9a470a Jaewon Kim              2023-10-03  4535  			type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH -next 2/2] mm: vmscan: add new event to trace shrink lru
  2023-12-12  3:26 ` [PATCH -next 2/2] mm: vmscan: add new event to trace shrink lru Bixuan Cui
  2023-12-13  3:03   ` Andrew Morton
  2023-12-14 14:29   ` kernel test robot
@ 2023-12-14 16:51   ` kernel test robot
  2 siblings, 0 replies; 7+ messages in thread
From: kernel test robot @ 2023-12-14 16:51 UTC (permalink / raw)
  To: Bixuan Cui, rostedt, mhiramat, mathieu.desnoyers, akpm
  Cc: llvm, oe-kbuild-all, linux-kernel, linux-trace-kernel, linux-mm,
	cuibixuan, opensource.kernel

Hi Bixuan,

kernel test robot noticed the following build errors:

[auto build test ERROR on next-20231211]

url:    https://github.com/intel-lab-lkp/linux/commits/Bixuan-Cui/mm-shrinker-add-new-event-to-trace-shrink-count/20231212-112824
base:   next-20231211
patch link:    https://lore.kernel.org/r/20231212032640.6968-3-cuibixuan%40vivo.com
patch subject: [PATCH -next 2/2] mm: vmscan: add new event to trace shrink lru
config: i386-randconfig-014-20231214 (https://download.01.org/0day-ci/archive/20231215/202312150018.EIE4fkeF-lkp@intel.com/config)
compiler: clang version 16.0.4 (https://github.com/llvm/llvm-project.git ae42196bc493ffe877a7e3dff8be32035dea4d07)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231215/202312150018.EIE4fkeF-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new
version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202312150018.EIE4fkeF-lkp@intel.com/

All errors (new ones prefixed by >>):

>> mm/vmscan.c:4533:2: error: call to undeclared function 'trace_mm_vmscan_lru_shrink_inactive'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
           trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
           ^
   mm/vmscan.c:4533:2: note: did you mean 'trace_mm_vmscan_lru_shrink_inactive_end'?
   include/trace/events/vmscan.h:415:1: note: 'trace_mm_vmscan_lru_shrink_inactive_end' declared here
   TRACE_EVENT(mm_vmscan_lru_shrink_inactive_end,
   ^
   include/linux/tracepoint.h:566:2: note: expanded from macro 'TRACE_EVENT'
           DECLARE_TRACE(name, PARAMS(proto), PARAMS(args))
           ^
   include/linux/tracepoint.h:432:2: note: expanded from macro 'DECLARE_TRACE'
           __DECLARE_TRACE(name, PARAMS(proto), PARAMS(args),              \
           ^
   include/linux/tracepoint.h:255:21: note: expanded from macro '__DECLARE_TRACE'
           static inline void trace_##name(proto)                          \
                              ^
   <scratch space>:60:1: note: expanded from here
   trace_mm_vmscan_lru_shrink_inactive_end
   ^
   1 error generated.


vim +/trace_mm_vmscan_lru_shrink_inactive +4533 mm/vmscan.c

359a5e1416caaf Yu Zhao                 2022-11-15  4530  retry:
49fd9b6df54e61 Matthew Wilcox (Oracle  2022-09-02  4531) 	reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false);
359a5e1416caaf Yu Zhao                 2022-11-15  4532  	sc->nr_reclaimed += reclaimed;
8c2214fc9a470a Jaewon Kim              2023-10-03 @4533  	trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
8c2214fc9a470a Jaewon Kim              2023-10-03  4534  			scanned, reclaimed, &stat, sc->priority,
8c2214fc9a470a Jaewon Kim              2023-10-03  4535  			type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 7+ messages in thread
end of thread, other threads:[~2023-12-14 16:59 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-12-12  3:26 [PATCH -next 0/2] Make memory reclamation measurable Bixuan Cui
2023-12-12  3:26 ` [PATCH -next 1/2] mm: shrinker: add new event to trace shrink count Bixuan Cui
2023-12-12  3:26 ` [PATCH -next 2/2] mm: vmscan: add new event to trace shrink lru Bixuan Cui
2023-12-13  3:03   ` Andrew Morton
2023-12-13  6:30     ` Bixuan Cui
2023-12-14 14:29   ` kernel test robot
2023-12-14 16:51   ` kernel test robot