I've broken the mm tracepoints up into functional, more useful groups. This group consists of the page reclaim tracepoints, all within vmscan.c. In addition, I moved to TRACE_EVENT definitions in include/trace/events/kmem.h, like the other mm tracepoints, rather than creating a new header file (a sketch of one such definition follows the list below).

1.) mm_kswapd_ran: This tracepoint records how many pages were reclaimed by kswapd, on a per-node basis.

# tracer: nop
#
#           TASK-PID    CPU#    TIMESTAMP  FUNCTION
#              | |       |          |         |
           <...>-626   [005]   142.943401: mm_kswapd_ran: node=1 reclaimed=1273
           <...>-626   [005]   143.159882: mm_kswapd_ran: node=1 reclaimed=2473
           <...>-626   [005]   143.226377: mm_kswapd_ran: node=1 reclaimed=4675
           <...>-626   [005]   143.473898: mm_kswapd_ran: node=1 reclaimed=24056
           <...>-625   [000]   151.428285: mm_kswapd_ran: node=0 reclaimed=55816

2.) mm_directreclaim_reclaimall: This tracepoint records how many pages were reclaimed by direct reclaim out of the memory allocator, on each node, and the priority achieved.

# tracer: nop
#
#           TASK-PID    CPU#    TIMESTAMP  FUNCTION
#              | |       |          |         |
          memory-9974  [000]   213.795877: mm_directreclaim_reclaimall: node=0 reclaimed=42 priority=11
          memory-9974  [004]   214.134956: mm_directreclaim_reclaimall: node=1 reclaimed=52 priority=10
          memory-9974  [007]   214.407464: mm_directreclaim_reclaimall: node=1 reclaimed=96 priority=10
          memory-9974  [001]   214.577797: mm_directreclaim_reclaimall: node=0 reclaimed=128 priority=8
          memory-9974  [004]   215.022107: mm_directreclaim_reclaimall: node=1 reclaimed=64 priority=10

3.) mm_directreclaim_reclaimzone: This tracepoint records how many pages were reclaimed when one NUMA node was exhausted but other nodes were not, because zone_reclaim_mode was set at boot.

4.) mm_pagereclaim_shrinkzone: This tracepoint records how many pages were reclaimed by a single invocation of shrink_zone, and the priority.

# tracer: nop
#
#           TASK-PID    CPU#    TIMESTAMP  FUNCTION
#              | |       |          |         |
         kswapd1-626   [005]   223.090040: mm_pagereclaim_shrinkzone: reclaimed=63 priority=10
         kswapd0-625   [002]   223.271915: mm_pagereclaim_shrinkzone: reclaimed=224 priority=10
          memory-9980  [004]   223.465403: mm_pagereclaim_shrinkzone: reclaimed=50 priority=10
     restorecond-9028  [001]   223.465776: mm_pagereclaim_shrinkzone: reclaimed=49 priority=10
          memory-9980  [004]   223.469705: mm_pagereclaim_shrinkzone: reclaimed=93 priority=10

5.) mm_pagereclaim_shrinkactive: This tracepoint records how many pages were deactivated or reactivated by a single invocation of shrink_active_list, whether they were anonymous or pagecache pages, and the priority.

# tracer: nop
#
#           TASK-PID    CPU#    TIMESTAMP  FUNCTION
#              | |       |          |         |
         kswapd0-625   [000]   141.697105: mm_pagereclaim_shrinkactive: scanned=32, anonymous, priority=12
         kswapd1-626   [004]   141.917568: mm_pagereclaim_shrinkactive: scanned=32, anonymous, priority=12
     restorecond-9031  [003]   144.262264: mm_pagereclaim_shrinkactive: scanned=32, anonymous, priority=12
          memory-9978  [004]   144.301358: mm_pagereclaim_shrinkactive: scanned=32, anonymous, priority=12
            Xorg-9711  [002]   147.927108: mm_pagereclaim_shrinkactive: scanned=32, pagecache, priority=7

6.) mm_pagereclaim_shrinkinactive: This tracepoint records how many pages were reclaimed and freed by a single invocation of shrink_inactive_list, the total number of inactive pages scanned, and the priority.

# tracer: nop
#
#           TASK-PID    CPU#    TIMESTAMP  FUNCTION
#              | |       |          |         |
         kswapd1-626   [005]   124.740577: mm_pagereclaim_shrinkinactive: scanned=32, reclaimed=31, priority=10
          memory-9975  [000]   126.175589: mm_pagereclaim_shrinkinactive: scanned=32, reclaimed=32, priority=11
         kswapd0-625   [002]   128.949227: mm_pagereclaim_shrinkinactive: scanned=32, reclaimed=32, priority=11
          memory-9975  [004]   129.547320: mm_pagereclaim_shrinkinactive: scanned=32, reclaimed=32, priority=11
     restorecond-9030  [001]   129.550783: mm_pagereclaim_shrinkinactive: scanned=32, reclaimed=32, priority=11
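As mentioned above, these are standard TRACE_EVENT definitions. For illustration only, here is a simplified sketch of roughly what the mm_kswapd_ran entry in include/trace/events/kmem.h looks like; the exact prototype and field types in the patch may differ:

/*
 * Simplified sketch of a TRACE_EVENT definition -- the exact arguments
 * and field types used by the patch may differ.  This lives in
 * include/trace/events/kmem.h, which already carries the TRACE_SYSTEM
 * boilerplate for the kmem group.
 */
TRACE_EVENT(mm_kswapd_ran,

	TP_PROTO(int node, unsigned long reclaimed),

	TP_ARGS(node, reclaimed),

	TP_STRUCT__entry(
		__field(int, node)
		__field(unsigned long, reclaimed)
	),

	TP_fast_assign(
		__entry->node = node;
		__entry->reclaimed = reclaimed;
	),

	/* produces the "node=N reclaimed=M" text seen in the output above */
	TP_printk("node=%d reclaimed=%lu", __entry->node, __entry->reclaimed)
);

The macro generates a trace_mm_kswapd_ran() call that vmscan.c invokes at the appropriate point; the other five tracepoints follow the same pattern with their own fields.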
When these tracepoints are enabled together, it is easy to determine exactly what the page reclaim logic is doing. You can clearly see how many pages are reclaimed by kswapd and by direct reclaim, how the priority decreases with each call to shrink_zone, and how effective shrink_active_list and shrink_inactive_list are at reclaiming memory:

# tracer: nop
#
#           TASK-PID    CPU#    TIMESTAMP  FUNCTION
#              | |       |          |         |
         kswapd0-625   [000]   221.037254: mm_pagereclaim_shrinkactive: scanned=32, anonymous, priority=10
         kswapd0-625   [000]   221.037270: mm_pagereclaim_shrinkactive: scanned=32, anonymous, priority=10
         kswapd0-625   [000]   221.037286: mm_pagereclaim_shrinkinactive: scanned=32, reclaimed=32, priority=10
         kswapd0-625   [000]   221.037302: mm_pagereclaim_shrinkinactive: scanned=32, reclaimed=32, priority=10
         kswapd0-625   [000]   221.037302: mm_pagereclaim_shrinkzone: reclaimed=64 priority=10
         kswapd0-625   [000]   221.037386: mm_kswapd_ran: node=0 reclaimed=408
          memory-9973  [006]   221.052384: mm_pagereclaim_shrinkactive: scanned=32, anonymous, priority=9
          memory-9973  [006]   221.052399: mm_pagereclaim_shrinkactive: scanned=32, anonymous, priority=9
          memory-9973  [006]   221.052456: mm_pagereclaim_shrinkinactive: scanned=32, reclaimed=28, priority=9
          memory-9973  [006]   221.052507: mm_pagereclaim_shrinkinactive: scanned=32, reclaimed=32, priority=9
          memory-9973  [006]   221.052507: mm_pagereclaim_shrinkzone: reclaimed=60 priority=9
          memory-9973  [006]   221.052553: mm_directreclaim_reclaimall: node=1 reclaimed=60 priority=9

Signed-off-by: Larry Woodman
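---

For anyone trying to line up the combined output above with the code paths, here is a greatly simplified, illustrative sketch (not a hunk from this patch; the real mm/vmscan.c code and the tracepoint argument lists differ) of where the hooks sit:

/*
 * Illustrative sketch only -- NOT the actual mm/vmscan.c code.  It shows
 * why the combined output interleaves the way it does: the per-list
 * events fire from inside shrink_zone()'s helpers, the per-zone summary
 * fires as shrink_zone() finishes, and kswapd logs a per-node total
 * (direct reclaim logs mm_directreclaim_reclaimall at the equivalent
 * point in its own path).
 */
static unsigned long sketch_shrink_zone(int priority, struct zone *zone)
{
	unsigned long nr_reclaimed = 0;

	/* shrink_active_list()   -> trace_mm_pagereclaim_shrinkactive()   */
	/* shrink_inactive_list() -> trace_mm_pagereclaim_shrinkinactive() */

	trace_mm_pagereclaim_shrinkzone(nr_reclaimed, priority);
	return nr_reclaimed;
}

static void sketch_kswapd_pass(pg_data_t *pgdat)
{
	unsigned long nr_reclaimed = 0;

	/* zone loop: nr_reclaimed += sketch_shrink_zone(priority, zone); */

	trace_mm_kswapd_ran(pgdat->node_id, nr_reclaimed);
}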