* [PATCH v3] mm: move pgscan, pgsteal, pgrefill to node stats
@ 2026-02-18 22:26 JP Kobryn (Meta)
2026-02-19 8:57 ` Vlastimil Babka (SUSE)
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: JP Kobryn (Meta) @ 2026-02-18 22:26 UTC (permalink / raw)
To: linux-mm, mst, mhocko, vbabka
Cc: apopple, akpm, axelrasmussen, byungchul, cgroups, david,
eperezma, gourry, jasowang, hannes, joshua.hahnjy, Liam.Howlett,
linux-kernel, lorenzo.stoakes, matthew.brost, rppt, muchun.song,
zhengqi.arch, rakie.kim, roman.gushchin, shakeel.butt, surenb,
virtualization, weixugc, xuanzhuo, ying.huang, yuanchu, ziy,
kernel-team
From: JP Kobryn <jp.kobryn@linux.dev>
There are situations where reclaim kicks in on a system with free memory.
One possible cause is a NUMA imbalance scenario where one or more nodes are
under pressure. It would help if we could easily identify such nodes.

Move the pgscan, pgsteal, and pgrefill counters from vm_event_item to
node_stat_item to provide per-node reclaim visibility. With these counters
as node stats, the values are now displayed in the per-node section of
/proc/zoneinfo, which allows for quick identification of the affected
nodes.

/proc/vmstat continues to report the same counters, aggregated across all
nodes. But the ordering of these items within the readout changes as they
move from the vm events section to the node stats section.

Memcg accounting of these counters is preserved. The relocated counters
remain visible in memory.stat alongside the existing aggregate pgscan and
pgsteal counters.

However, this change affects how the global counters are accumulated.
Previously, the global event count update was gated on !cgroup_reclaim(),
excluding memcg-based reclaim from /proc/vmstat. Now that
mod_lruvec_state() is being used to update the counters, the global
counters will include all reclaim. This is consistent with how pgdemote
counters are already tracked.

Finally, the virtio_balloon driver is updated to use
global_node_page_state() to fetch the counters, as they are no longer
accessible through the vm_events array.
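
[Editor's illustration, not part of the patch: once these counters appear in
the per-node section of /proc/zoneinfo, a short awk pipeline can rank nodes
by reclaim activity to spot the imbalance the commit message describes. The
zoneinfo excerpt below is fabricated sample data; on a patched kernel you
would read the real /proc/zoneinfo instead.]

```shell
# Fabricated /proc/zoneinfo excerpt. On a patched kernel, substitute:
#   zoneinfo_sample=$(cat /proc/zoneinfo)
zoneinfo_sample='Node 0, zone Normal
    pgscan_kswapd 1200
    pgsteal_kswapd 800
Node 1, zone Normal
    pgscan_kswapd 98000
    pgsteal_kswapd 64000'

# Sum pgscan_kswapd per node so a node under disproportionate reclaim
# pressure stands out.
result=$(printf '%s\n' "$zoneinfo_sample" | awk '
	/^Node/               { node = $2; sub(/,$/, "", node) }
	$1 == "pgscan_kswapd" { scan[node] += $2 }
	END { for (n in scan) printf "node %s pgscan_kswapd %d\n", n, scan[n] }
' | sort)
printf '%s\n' "$result"
# prints:
# node 0 pgscan_kswapd 1200
# node 1 pgscan_kswapd 98000
```

Here node 1 dominates the kswapd scan counts, which is exactly the kind of
per-node signal that was invisible when these were global vm events.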
Signed-off-by: JP Kobryn <jp.kobryn@linux.dev>
Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
v3:
- additionally move PGREFILL to node stats
v2: https://lore.kernel.org/linux-mm/20260218032941.225439-1-jp.kobryn@linux.dev/
- update commit message
- add entries to memory_stats array
- add switch cases in memcg_page_state_output_unit()
v1: https://lore.kernel.org/linux-mm/20260212045109.255391-3-inwardvessel@gmail.com/
drivers/virtio/virtio_balloon.c | 8 ++---
include/linux/mmzone.h | 13 ++++++++
include/linux/vm_event_item.h | 13 --------
mm/memcontrol.c | 56 +++++++++++++++++++++++----------
mm/vmscan.c | 38 ++++++++--------------
mm/vmstat.c | 26 +++++++--------
6 files changed, 82 insertions(+), 72 deletions(-)
diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 4e549abe59ff..ab945532ceef 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -369,13 +369,13 @@ static inline unsigned int update_balloon_vm_stats(struct virtio_balloon *vb)
update_stat(vb, idx++, VIRTIO_BALLOON_S_ALLOC_STALL, stall);
update_stat(vb, idx++, VIRTIO_BALLOON_S_ASYNC_SCAN,
- pages_to_bytes(events[PGSCAN_KSWAPD]));
+ pages_to_bytes(global_node_page_state(PGSCAN_KSWAPD)));
update_stat(vb, idx++, VIRTIO_BALLOON_S_DIRECT_SCAN,
- pages_to_bytes(events[PGSCAN_DIRECT]));
+ pages_to_bytes(global_node_page_state(PGSCAN_DIRECT)));
update_stat(vb, idx++, VIRTIO_BALLOON_S_ASYNC_RECLAIM,
- pages_to_bytes(events[PGSTEAL_KSWAPD]));
+ pages_to_bytes(global_node_page_state(PGSTEAL_KSWAPD)));
update_stat(vb, idx++, VIRTIO_BALLOON_S_DIRECT_RECLAIM,
- pages_to_bytes(events[PGSTEAL_DIRECT]));
+ pages_to_bytes(global_node_page_state(PGSTEAL_DIRECT)));
#ifdef CONFIG_HUGETLB_PAGE
update_stat(vb, idx++, VIRTIO_BALLOON_S_HTLB_PGALLOC,
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 3e51190a55e4..546bca95ca40 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -255,6 +255,19 @@ enum node_stat_item {
PGDEMOTE_DIRECT,
PGDEMOTE_KHUGEPAGED,
PGDEMOTE_PROACTIVE,
+ PGSTEAL_KSWAPD,
+ PGSTEAL_DIRECT,
+ PGSTEAL_KHUGEPAGED,
+ PGSTEAL_PROACTIVE,
+ PGSTEAL_ANON,
+ PGSTEAL_FILE,
+ PGSCAN_KSWAPD,
+ PGSCAN_DIRECT,
+ PGSCAN_KHUGEPAGED,
+ PGSCAN_PROACTIVE,
+ PGSCAN_ANON,
+ PGSCAN_FILE,
+ PGREFILL,
#ifdef CONFIG_HUGETLB_PAGE
NR_HUGETLB,
#endif
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 22a139f82d75..03fe95f5a020 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -38,21 +38,8 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
PGFREE, PGACTIVATE, PGDEACTIVATE, PGLAZYFREE,
PGFAULT, PGMAJFAULT,
PGLAZYFREED,
- PGREFILL,
PGREUSE,
- PGSTEAL_KSWAPD,
- PGSTEAL_DIRECT,
- PGSTEAL_KHUGEPAGED,
- PGSTEAL_PROACTIVE,
- PGSCAN_KSWAPD,
- PGSCAN_DIRECT,
- PGSCAN_KHUGEPAGED,
- PGSCAN_PROACTIVE,
PGSCAN_DIRECT_THROTTLE,
- PGSCAN_ANON,
- PGSCAN_FILE,
- PGSTEAL_ANON,
- PGSTEAL_FILE,
#ifdef CONFIG_NUMA
PGSCAN_ZONE_RECLAIM_SUCCESS,
PGSCAN_ZONE_RECLAIM_FAILED,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 007413a53b45..a2e6d6ada823 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -328,6 +328,19 @@ static const unsigned int memcg_node_stat_items[] = {
PGDEMOTE_DIRECT,
PGDEMOTE_KHUGEPAGED,
PGDEMOTE_PROACTIVE,
+ PGSTEAL_KSWAPD,
+ PGSTEAL_DIRECT,
+ PGSTEAL_KHUGEPAGED,
+ PGSTEAL_PROACTIVE,
+ PGSTEAL_ANON,
+ PGSTEAL_FILE,
+ PGSCAN_KSWAPD,
+ PGSCAN_DIRECT,
+ PGSCAN_KHUGEPAGED,
+ PGSCAN_PROACTIVE,
+ PGSCAN_ANON,
+ PGSCAN_FILE,
+ PGREFILL,
#ifdef CONFIG_HUGETLB_PAGE
NR_HUGETLB,
#endif
@@ -441,17 +454,8 @@ static const unsigned int memcg_vm_event_stat[] = {
#endif
PSWPIN,
PSWPOUT,
- PGSCAN_KSWAPD,
- PGSCAN_DIRECT,
- PGSCAN_KHUGEPAGED,
- PGSCAN_PROACTIVE,
- PGSTEAL_KSWAPD,
- PGSTEAL_DIRECT,
- PGSTEAL_KHUGEPAGED,
- PGSTEAL_PROACTIVE,
PGFAULT,
PGMAJFAULT,
- PGREFILL,
PGACTIVATE,
PGDEACTIVATE,
PGLAZYFREE,
@@ -1382,6 +1386,15 @@ static const struct memory_stat memory_stats[] = {
{ "pgdemote_direct", PGDEMOTE_DIRECT },
{ "pgdemote_khugepaged", PGDEMOTE_KHUGEPAGED },
{ "pgdemote_proactive", PGDEMOTE_PROACTIVE },
+ { "pgsteal_kswapd", PGSTEAL_KSWAPD },
+ { "pgsteal_direct", PGSTEAL_DIRECT },
+ { "pgsteal_khugepaged", PGSTEAL_KHUGEPAGED },
+ { "pgsteal_proactive", PGSTEAL_PROACTIVE },
+ { "pgscan_kswapd", PGSCAN_KSWAPD },
+ { "pgscan_direct", PGSCAN_DIRECT },
+ { "pgscan_khugepaged", PGSCAN_KHUGEPAGED },
+ { "pgscan_proactive", PGSCAN_PROACTIVE },
+ { "pgrefill", PGREFILL },
#ifdef CONFIG_NUMA_BALANCING
{ "pgpromote_success", PGPROMOTE_SUCCESS },
#endif
@@ -1425,6 +1438,15 @@ static int memcg_page_state_output_unit(int item)
case PGDEMOTE_DIRECT:
case PGDEMOTE_KHUGEPAGED:
case PGDEMOTE_PROACTIVE:
+ case PGSTEAL_KSWAPD:
+ case PGSTEAL_DIRECT:
+ case PGSTEAL_KHUGEPAGED:
+ case PGSTEAL_PROACTIVE:
+ case PGSCAN_KSWAPD:
+ case PGSCAN_DIRECT:
+ case PGSCAN_KHUGEPAGED:
+ case PGSCAN_PROACTIVE:
+ case PGREFILL:
#ifdef CONFIG_NUMA_BALANCING
case PGPROMOTE_SUCCESS:
#endif
@@ -1496,15 +1518,15 @@ static void memcg_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
/* Accumulated memory events */
seq_buf_printf(s, "pgscan %lu\n",
- memcg_events(memcg, PGSCAN_KSWAPD) +
- memcg_events(memcg, PGSCAN_DIRECT) +
- memcg_events(memcg, PGSCAN_PROACTIVE) +
- memcg_events(memcg, PGSCAN_KHUGEPAGED));
+ memcg_page_state(memcg, PGSCAN_KSWAPD) +
+ memcg_page_state(memcg, PGSCAN_DIRECT) +
+ memcg_page_state(memcg, PGSCAN_PROACTIVE) +
+ memcg_page_state(memcg, PGSCAN_KHUGEPAGED));
seq_buf_printf(s, "pgsteal %lu\n",
- memcg_events(memcg, PGSTEAL_KSWAPD) +
- memcg_events(memcg, PGSTEAL_DIRECT) +
- memcg_events(memcg, PGSTEAL_PROACTIVE) +
- memcg_events(memcg, PGSTEAL_KHUGEPAGED));
+ memcg_page_state(memcg, PGSTEAL_KSWAPD) +
+ memcg_page_state(memcg, PGSTEAL_DIRECT) +
+ memcg_page_state(memcg, PGSTEAL_PROACTIVE) +
+ memcg_page_state(memcg, PGSTEAL_KHUGEPAGED));
for (i = 0; i < ARRAY_SIZE(memcg_vm_event_stat); i++) {
#ifdef CONFIG_MEMCG_V1
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 44e4fcd6463c..11df7daac318 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1984,7 +1984,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
unsigned long nr_taken;
struct reclaim_stat stat;
bool file = is_file_lru(lru);
- enum vm_event_item item;
+ enum node_stat_item item;
struct pglist_data *pgdat = lruvec_pgdat(lruvec);
bool stalled = false;
@@ -2010,10 +2010,8 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, nr_taken);
item = PGSCAN_KSWAPD + reclaimer_offset(sc);
- if (!cgroup_reclaim(sc))
- __count_vm_events(item, nr_scanned);
- count_memcg_events(lruvec_memcg(lruvec), item, nr_scanned);
- __count_vm_events(PGSCAN_ANON + file, nr_scanned);
+ mod_lruvec_state(lruvec, item, nr_scanned);
+ mod_lruvec_state(lruvec, PGSCAN_ANON + file, nr_scanned);
spin_unlock_irq(&lruvec->lru_lock);
@@ -2030,10 +2028,8 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
stat.nr_demoted);
__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
item = PGSTEAL_KSWAPD + reclaimer_offset(sc);
- if (!cgroup_reclaim(sc))
- __count_vm_events(item, nr_reclaimed);
- count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
- __count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
+ mod_lruvec_state(lruvec, item, nr_reclaimed);
+ mod_lruvec_state(lruvec, PGSTEAL_ANON + file, nr_reclaimed);
lru_note_cost_unlock_irq(lruvec, file, stat.nr_pageout,
nr_scanned - nr_reclaimed);
@@ -2120,9 +2116,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, nr_taken);
- if (!cgroup_reclaim(sc))
- __count_vm_events(PGREFILL, nr_scanned);
- count_memcg_events(lruvec_memcg(lruvec), PGREFILL, nr_scanned);
+ mod_lruvec_state(lruvec, PGREFILL, nr_scanned);
spin_unlock_irq(&lruvec->lru_lock);
@@ -4542,7 +4536,7 @@ static int scan_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
{
int i;
int gen;
- enum vm_event_item item;
+ enum node_stat_item item;
int sorted = 0;
int scanned = 0;
int isolated = 0;
@@ -4601,13 +4595,9 @@ static int scan_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
}
item = PGSCAN_KSWAPD + reclaimer_offset(sc);
- if (!cgroup_reclaim(sc)) {
- __count_vm_events(item, isolated);
- __count_vm_events(PGREFILL, sorted);
- }
- count_memcg_events(memcg, item, isolated);
- count_memcg_events(memcg, PGREFILL, sorted);
- __count_vm_events(PGSCAN_ANON + type, isolated);
+ mod_lruvec_state(lruvec, item, isolated);
+ mod_lruvec_state(lruvec, PGREFILL, sorted);
+ mod_lruvec_state(lruvec, PGSCAN_ANON + type, isolated);
trace_mm_vmscan_lru_isolate(sc->reclaim_idx, sc->order, scan_batch,
scanned, skipped, isolated,
type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
@@ -4692,7 +4682,7 @@ static int evict_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
LIST_HEAD(clean);
struct folio *folio;
struct folio *next;
- enum vm_event_item item;
+ enum node_stat_item item;
struct reclaim_stat stat;
struct lru_gen_mm_walk *walk;
bool skip_retry = false;
@@ -4756,10 +4746,8 @@ static int evict_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
stat.nr_demoted);
item = PGSTEAL_KSWAPD + reclaimer_offset(sc);
- if (!cgroup_reclaim(sc))
- __count_vm_events(item, reclaimed);
- count_memcg_events(memcg, item, reclaimed);
- __count_vm_events(PGSTEAL_ANON + type, reclaimed);
+ mod_lruvec_state(lruvec, item, reclaimed);
+ mod_lruvec_state(lruvec, PGSTEAL_ANON + type, reclaimed);
spin_unlock_irq(&lruvec->lru_lock);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 99270713e0c1..d5e6ba683211 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1276,6 +1276,19 @@ const char * const vmstat_text[] = {
[I(PGDEMOTE_DIRECT)] = "pgdemote_direct",
[I(PGDEMOTE_KHUGEPAGED)] = "pgdemote_khugepaged",
[I(PGDEMOTE_PROACTIVE)] = "pgdemote_proactive",
+ [I(PGSTEAL_KSWAPD)] = "pgsteal_kswapd",
+ [I(PGSTEAL_DIRECT)] = "pgsteal_direct",
+ [I(PGSTEAL_KHUGEPAGED)] = "pgsteal_khugepaged",
+ [I(PGSTEAL_PROACTIVE)] = "pgsteal_proactive",
+ [I(PGSTEAL_ANON)] = "pgsteal_anon",
+ [I(PGSTEAL_FILE)] = "pgsteal_file",
+ [I(PGSCAN_KSWAPD)] = "pgscan_kswapd",
+ [I(PGSCAN_DIRECT)] = "pgscan_direct",
+ [I(PGSCAN_KHUGEPAGED)] = "pgscan_khugepaged",
+ [I(PGSCAN_PROACTIVE)] = "pgscan_proactive",
+ [I(PGSCAN_ANON)] = "pgscan_anon",
+ [I(PGSCAN_FILE)] = "pgscan_file",
+ [I(PGREFILL)] = "pgrefill",
#ifdef CONFIG_HUGETLB_PAGE
[I(NR_HUGETLB)] = "nr_hugetlb",
#endif
@@ -1318,21 +1331,8 @@ const char * const vmstat_text[] = {
[I(PGMAJFAULT)] = "pgmajfault",
[I(PGLAZYFREED)] = "pglazyfreed",
- [I(PGREFILL)] = "pgrefill",
[I(PGREUSE)] = "pgreuse",
- [I(PGSTEAL_KSWAPD)] = "pgsteal_kswapd",
- [I(PGSTEAL_DIRECT)] = "pgsteal_direct",
- [I(PGSTEAL_KHUGEPAGED)] = "pgsteal_khugepaged",
- [I(PGSTEAL_PROACTIVE)] = "pgsteal_proactive",
- [I(PGSCAN_KSWAPD)] = "pgscan_kswapd",
- [I(PGSCAN_DIRECT)] = "pgscan_direct",
- [I(PGSCAN_KHUGEPAGED)] = "pgscan_khugepaged",
- [I(PGSCAN_PROACTIVE)] = "pgscan_proactive",
[I(PGSCAN_DIRECT_THROTTLE)] = "pgscan_direct_throttle",
- [I(PGSCAN_ANON)] = "pgscan_anon",
- [I(PGSCAN_FILE)] = "pgscan_file",
- [I(PGSTEAL_ANON)] = "pgsteal_anon",
- [I(PGSTEAL_FILE)] = "pgsteal_file",
#ifdef CONFIG_NUMA
[I(PGSCAN_ZONE_RECLAIM_SUCCESS)] = "zone_reclaim_success",
--
2.47.3
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [PATCH v3] mm: move pgscan, pgsteal, pgrefill to node stats
2026-02-18 22:26 [PATCH v3] mm: move pgscan, pgsteal, pgrefill to node stats JP Kobryn (Meta)
@ 2026-02-19 8:57 ` Vlastimil Babka (SUSE)
2026-02-19 10:02 ` kernel test robot
2026-02-19 13:10 ` kernel test robot
2 siblings, 0 replies; 4+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-02-19 8:57 UTC (permalink / raw)
To: JP Kobryn (Meta), linux-mm, mst, mhocko, vbabka
Cc: apopple, akpm, axelrasmussen, byungchul, cgroups, david,
eperezma, gourry, jasowang, hannes, joshua.hahnjy, Liam.Howlett,
linux-kernel, lorenzo.stoakes, matthew.brost, rppt, muchun.song,
zhengqi.arch, rakie.kim, roman.gushchin, shakeel.butt, surenb,
virtualization, weixugc, xuanzhuo, ying.huang, yuanchu, ziy,
kernel-team
On 2/18/26 23:26, JP Kobryn (Meta) wrote:
> From: JP Kobryn <jp.kobryn@linux.dev>
>
> There are situations where reclaim kicks in on a system with free memory.
> One possible cause is a NUMA imbalance scenario where one or more nodes are
> under pressure. It would help if we could easily identify such nodes.
>
> Move the pgscan, pgsteal, and pgrefill counters from vm_event_item to
> node_stat_item to provide per-node reclaim visibility. With these counters
> as node stats, the values are now displayed in the per-node section of
> /proc/zoneinfo, which allows for quick identification of the affected
> nodes.
>
> /proc/vmstat continues to report the same counters, aggregated across all
> nodes. But the ordering of these items within the readout changes as they
> move from the vm events section to the node stats section.
>
> Memcg accounting of these counters is preserved. The relocated counters
> remain visible in memory.stat alongside the existing aggregate pgscan and
> pgsteal counters.
>
> However, this change affects how the global counters are accumulated.
> Previously, the global event count update was gated on !cgroup_reclaim(),
> excluding memcg-based reclaim from /proc/vmstat. Now that
> mod_lruvec_state() is being used to update the counters, the global
> counters will include all reclaim. This is consistent with how pgdemote
> counters are already tracked.
>
> Finally, the virtio_balloon driver is updated to use
> global_node_page_state() to fetch the counters, as they are no longer
> accessible through the vm_events array.
>
> Signed-off-by: JP Kobryn <jp.kobryn@linux.dev>
> Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
* Re: [PATCH v3] mm: move pgscan, pgsteal, pgrefill to node stats
2026-02-18 22:26 [PATCH v3] mm: move pgscan, pgsteal, pgrefill to node stats JP Kobryn (Meta)
2026-02-19 8:57 ` Vlastimil Babka (SUSE)
@ 2026-02-19 10:02 ` kernel test robot
2026-02-19 13:10 ` kernel test robot
2 siblings, 0 replies; 4+ messages in thread
From: kernel test robot @ 2026-02-19 10:02 UTC (permalink / raw)
To: JP Kobryn (Meta), linux-mm, mst, mhocko, vbabka
Cc: oe-kbuild-all, apopple, akpm, axelrasmussen, byungchul, cgroups,
david, eperezma, gourry, jasowang, hannes, joshua.hahnjy,
Liam.Howlett, linux-kernel, lorenzo.stoakes, matthew.brost, rppt,
muchun.song, zhengqi.arch, rakie.kim, roman.gushchin,
shakeel.butt, surenb, virtualization, weixugc, xuanzhuo,
ying.huang
Hi JP,
kernel test robot noticed the following build warnings:
[auto build test WARNING on akpm-mm/mm-everything]
url: https://github.com/intel-lab-lkp/linux/commits/JP-Kobryn-Meta/mm-move-pgscan-pgsteal-pgrefill-to-node-stats/20260219-063016
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20260218222652.108411-1-jp.kobryn%40linux.dev
patch subject: [PATCH v3] mm: move pgscan, pgsteal, pgrefill to node stats
config: m68k-allmodconfig (https://download.01.org/0day-ci/archive/20260219/202602191719.7nkLTJOP-lkp@intel.com/config)
compiler: m68k-linux-gcc (GCC) 15.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260219/202602191719.7nkLTJOP-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602191719.7nkLTJOP-lkp@intel.com/
All warnings (new ones prefixed by >>):
mm/vmscan.c: In function 'scan_folios':
>> mm/vmscan.c:4542:28: warning: unused variable 'memcg' [-Wunused-variable]
4542 | struct mem_cgroup *memcg = lruvec_memcg(lruvec);
| ^~~~~
vim +/memcg +4542 mm/vmscan.c
ac35a490237446 Yu Zhao 2022-09-18 4527
af827e0904899f Koichiro Den 2025-05-31 4528 static int scan_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
af827e0904899f Koichiro Den 2025-05-31 4529 struct scan_control *sc, int type, int tier,
af827e0904899f Koichiro Den 2025-05-31 4530 struct list_head *list)
ac35a490237446 Yu Zhao 2022-09-18 4531 {
669281ee7ef731 Kalesh Singh 2023-08-01 4532 int i;
669281ee7ef731 Kalesh Singh 2023-08-01 4533 int gen;
b0e9e710c6bb5a JP Kobryn 2026-02-18 4534 enum node_stat_item item;
ac35a490237446 Yu Zhao 2022-09-18 4535 int sorted = 0;
ac35a490237446 Yu Zhao 2022-09-18 4536 int scanned = 0;
ac35a490237446 Yu Zhao 2022-09-18 4537 int isolated = 0;
8c2214fc9a470a Jaewon Kim 2023-10-03 4538 int skipped = 0;
49d921b471c513 Chen Ridong 2025-12-04 4539 int scan_batch = min(nr_to_scan, MAX_LRU_BATCH);
49d921b471c513 Chen Ridong 2025-12-04 4540 int remaining = scan_batch;
391655fe08d1f9 Yu Zhao 2022-12-21 4541 struct lru_gen_folio *lrugen = &lruvec->lrugen;
ac35a490237446 Yu Zhao 2022-09-18 @4542 struct mem_cgroup *memcg = lruvec_memcg(lruvec);
ac35a490237446 Yu Zhao 2022-09-18 4543
ac35a490237446 Yu Zhao 2022-09-18 4544 VM_WARN_ON_ONCE(!list_empty(list));
ac35a490237446 Yu Zhao 2022-09-18 4545
ac35a490237446 Yu Zhao 2022-09-18 4546 if (get_nr_gens(lruvec, type) == MIN_NR_GENS)
ac35a490237446 Yu Zhao 2022-09-18 4547 return 0;
ac35a490237446 Yu Zhao 2022-09-18 4548
ac35a490237446 Yu Zhao 2022-09-18 4549 gen = lru_gen_from_seq(lrugen->min_seq[type]);
ac35a490237446 Yu Zhao 2022-09-18 4550
669281ee7ef731 Kalesh Singh 2023-08-01 4551 for (i = MAX_NR_ZONES; i > 0; i--) {
ac35a490237446 Yu Zhao 2022-09-18 4552 LIST_HEAD(moved);
8c2214fc9a470a Jaewon Kim 2023-10-03 4553 int skipped_zone = 0;
669281ee7ef731 Kalesh Singh 2023-08-01 4554 int zone = (sc->reclaim_idx + i) % MAX_NR_ZONES;
6df1b2212950aa Yu Zhao 2022-12-21 4555 struct list_head *head = &lrugen->folios[gen][type][zone];
ac35a490237446 Yu Zhao 2022-09-18 4556
ac35a490237446 Yu Zhao 2022-09-18 4557 while (!list_empty(head)) {
ac35a490237446 Yu Zhao 2022-09-18 4558 struct folio *folio = lru_to_folio(head);
ac35a490237446 Yu Zhao 2022-09-18 4559 int delta = folio_nr_pages(folio);
ac35a490237446 Yu Zhao 2022-09-18 4560
ac35a490237446 Yu Zhao 2022-09-18 4561 VM_WARN_ON_ONCE_FOLIO(folio_test_unevictable(folio), folio);
ac35a490237446 Yu Zhao 2022-09-18 4562 VM_WARN_ON_ONCE_FOLIO(folio_test_active(folio), folio);
ac35a490237446 Yu Zhao 2022-09-18 4563 VM_WARN_ON_ONCE_FOLIO(folio_is_file_lru(folio) != type, folio);
ac35a490237446 Yu Zhao 2022-09-18 4564 VM_WARN_ON_ONCE_FOLIO(folio_zonenum(folio) != zone, folio);
ac35a490237446 Yu Zhao 2022-09-18 4565
ac35a490237446 Yu Zhao 2022-09-18 4566 scanned += delta;
ac35a490237446 Yu Zhao 2022-09-18 4567
669281ee7ef731 Kalesh Singh 2023-08-01 4568 if (sort_folio(lruvec, folio, sc, tier))
ac35a490237446 Yu Zhao 2022-09-18 4569 sorted += delta;
ac35a490237446 Yu Zhao 2022-09-18 4570 else if (isolate_folio(lruvec, folio, sc)) {
ac35a490237446 Yu Zhao 2022-09-18 4571 list_add(&folio->lru, list);
ac35a490237446 Yu Zhao 2022-09-18 4572 isolated += delta;
ac35a490237446 Yu Zhao 2022-09-18 4573 } else {
ac35a490237446 Yu Zhao 2022-09-18 4574 list_move(&folio->lru, &moved);
8c2214fc9a470a Jaewon Kim 2023-10-03 4575 skipped_zone += delta;
ac35a490237446 Yu Zhao 2022-09-18 4576 }
ac35a490237446 Yu Zhao 2022-09-18 4577
8c2214fc9a470a Jaewon Kim 2023-10-03 4578 if (!--remaining || max(isolated, skipped_zone) >= MIN_LRU_BATCH)
ac35a490237446 Yu Zhao 2022-09-18 4579 break;
ac35a490237446 Yu Zhao 2022-09-18 4580 }
ac35a490237446 Yu Zhao 2022-09-18 4581
8c2214fc9a470a Jaewon Kim 2023-10-03 4582 if (skipped_zone) {
ac35a490237446 Yu Zhao 2022-09-18 4583 list_splice(&moved, head);
8c2214fc9a470a Jaewon Kim 2023-10-03 4584 __count_zid_vm_events(PGSCAN_SKIP, zone, skipped_zone);
8c2214fc9a470a Jaewon Kim 2023-10-03 4585 skipped += skipped_zone;
ac35a490237446 Yu Zhao 2022-09-18 4586 }
ac35a490237446 Yu Zhao 2022-09-18 4587
ac35a490237446 Yu Zhao 2022-09-18 4588 if (!remaining || isolated >= MIN_LRU_BATCH)
ac35a490237446 Yu Zhao 2022-09-18 4589 break;
ac35a490237446 Yu Zhao 2022-09-18 4590 }
ac35a490237446 Yu Zhao 2022-09-18 4591
e452872b40e3f1 Hao Jia 2025-03-18 4592 item = PGSCAN_KSWAPD + reclaimer_offset(sc);
b0e9e710c6bb5a JP Kobryn 2026-02-18 4593 mod_lruvec_state(lruvec, item, isolated);
b0e9e710c6bb5a JP Kobryn 2026-02-18 4594 mod_lruvec_state(lruvec, PGREFILL, sorted);
b0e9e710c6bb5a JP Kobryn 2026-02-18 4595 mod_lruvec_state(lruvec, PGSCAN_ANON + type, isolated);
49d921b471c513 Chen Ridong 2025-12-04 4596 trace_mm_vmscan_lru_isolate(sc->reclaim_idx, sc->order, scan_batch,
8c2214fc9a470a Jaewon Kim 2023-10-03 4597 scanned, skipped, isolated,
8c2214fc9a470a Jaewon Kim 2023-10-03 4598 type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
1bc542c6a0d144 Zeng Jingxiang 2024-10-26 4599 if (type == LRU_GEN_FILE)
1bc542c6a0d144 Zeng Jingxiang 2024-10-26 4600 sc->nr.file_taken += isolated;
ac35a490237446 Yu Zhao 2022-09-18 4601 /*
e9d4e1ee788097 Yu Zhao 2022-12-21 4602 * There might not be eligible folios due to reclaim_idx. Check the
e9d4e1ee788097 Yu Zhao 2022-12-21 4603 * remaining to prevent livelock if it's not making progress.
ac35a490237446 Yu Zhao 2022-09-18 4604 */
ac35a490237446 Yu Zhao 2022-09-18 4605 return isolated || !remaining ? scanned : 0;
ac35a490237446 Yu Zhao 2022-09-18 4606 }
ac35a490237446 Yu Zhao 2022-09-18 4607
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v3] mm: move pgscan, pgsteal, pgrefill to node stats
2026-02-18 22:26 [PATCH v3] mm: move pgscan, pgsteal, pgrefill to node stats JP Kobryn (Meta)
2026-02-19 8:57 ` Vlastimil Babka (SUSE)
2026-02-19 10:02 ` kernel test robot
@ 2026-02-19 13:10 ` kernel test robot
2 siblings, 0 replies; 4+ messages in thread
From: kernel test robot @ 2026-02-19 13:10 UTC (permalink / raw)
To: JP Kobryn (Meta), linux-mm, mst, mhocko, vbabka
Cc: oe-kbuild-all, apopple, akpm, axelrasmussen, byungchul, cgroups,
david, eperezma, gourry, jasowang, hannes, joshua.hahnjy,
Liam.Howlett, linux-kernel, lorenzo.stoakes, matthew.brost, rppt,
muchun.song, zhengqi.arch, rakie.kim, roman.gushchin,
shakeel.butt, surenb, virtualization, weixugc, xuanzhuo,
ying.huang
Hi JP,
kernel test robot noticed the following build warnings:
[auto build test WARNING on akpm-mm/mm-everything]
url: https://github.com/intel-lab-lkp/linux/commits/JP-Kobryn-Meta/mm-move-pgscan-pgsteal-pgrefill-to-node-stats/20260219-063016
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20260218222652.108411-1-jp.kobryn%40linux.dev
patch subject: [PATCH v3] mm: move pgscan, pgsteal, pgrefill to node stats
config: x86_64-rhel-9.4 (https://download.01.org/0day-ci/archive/20260219/202602191417.23zH3uja-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260219/202602191417.23zH3uja-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602191417.23zH3uja-lkp@intel.com/
All warnings (new ones prefixed by >>):
mm/vmscan.c: In function 'scan_folios':
>> mm/vmscan.c:4542:28: warning: unused variable 'memcg' [-Wunused-variable]
4542 | struct mem_cgroup *memcg = lruvec_memcg(lruvec);
| ^~~~~
vim +/memcg +4542 mm/vmscan.c
ac35a490237446 Yu Zhao 2022-09-18 4527
af827e0904899f Koichiro Den 2025-05-31 4528 static int scan_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
af827e0904899f Koichiro Den 2025-05-31 4529 struct scan_control *sc, int type, int tier,
af827e0904899f Koichiro Den 2025-05-31 4530 struct list_head *list)
ac35a490237446 Yu Zhao 2022-09-18 4531 {
669281ee7ef731 Kalesh Singh 2023-08-01 4532 int i;
669281ee7ef731 Kalesh Singh 2023-08-01 4533 int gen;
b0e9e710c6bb5a JP Kobryn 2026-02-18 4534 enum node_stat_item item;
ac35a490237446 Yu Zhao 2022-09-18 4535 int sorted = 0;
ac35a490237446 Yu Zhao 2022-09-18 4536 int scanned = 0;
ac35a490237446 Yu Zhao 2022-09-18 4537 int isolated = 0;
8c2214fc9a470a Jaewon Kim 2023-10-03 4538 int skipped = 0;
49d921b471c513 Chen Ridong 2025-12-04 4539 int scan_batch = min(nr_to_scan, MAX_LRU_BATCH);
49d921b471c513 Chen Ridong 2025-12-04 4540 int remaining = scan_batch;
391655fe08d1f9 Yu Zhao 2022-12-21 4541 struct lru_gen_folio *lrugen = &lruvec->lrugen;
ac35a490237446 Yu Zhao 2022-09-18 @4542 struct mem_cgroup *memcg = lruvec_memcg(lruvec);
ac35a490237446 Yu Zhao 2022-09-18 4543
ac35a490237446 Yu Zhao 2022-09-18 4544 VM_WARN_ON_ONCE(!list_empty(list));
ac35a490237446 Yu Zhao 2022-09-18 4545
ac35a490237446 Yu Zhao 2022-09-18 4546 if (get_nr_gens(lruvec, type) == MIN_NR_GENS)
ac35a490237446 Yu Zhao 2022-09-18 4547 return 0;
ac35a490237446 Yu Zhao 2022-09-18 4548
ac35a490237446 Yu Zhao 2022-09-18 4549 gen = lru_gen_from_seq(lrugen->min_seq[type]);
ac35a490237446 Yu Zhao 2022-09-18 4550
669281ee7ef731 Kalesh Singh 2023-08-01 4551 for (i = MAX_NR_ZONES; i > 0; i--) {
ac35a490237446 Yu Zhao 2022-09-18 4552 LIST_HEAD(moved);
8c2214fc9a470a Jaewon Kim 2023-10-03 4553 int skipped_zone = 0;
669281ee7ef731 Kalesh Singh 2023-08-01 4554 int zone = (sc->reclaim_idx + i) % MAX_NR_ZONES;
6df1b2212950aa Yu Zhao 2022-12-21 4555 struct list_head *head = &lrugen->folios[gen][type][zone];
ac35a490237446 Yu Zhao 2022-09-18 4556
ac35a490237446 Yu Zhao 2022-09-18 4557 while (!list_empty(head)) {
ac35a490237446 Yu Zhao 2022-09-18 4558 struct folio *folio = lru_to_folio(head);
ac35a490237446 Yu Zhao 2022-09-18 4559 int delta = folio_nr_pages(folio);
ac35a490237446 Yu Zhao 2022-09-18 4560
ac35a490237446 Yu Zhao 2022-09-18 4561 VM_WARN_ON_ONCE_FOLIO(folio_test_unevictable(folio), folio);
ac35a490237446 Yu Zhao 2022-09-18 4562 VM_WARN_ON_ONCE_FOLIO(folio_test_active(folio), folio);
ac35a490237446 Yu Zhao 2022-09-18 4563 VM_WARN_ON_ONCE_FOLIO(folio_is_file_lru(folio) != type, folio);
ac35a490237446 Yu Zhao 2022-09-18 4564 VM_WARN_ON_ONCE_FOLIO(folio_zonenum(folio) != zone, folio);
ac35a490237446 Yu Zhao 2022-09-18 4565
ac35a490237446 Yu Zhao 2022-09-18 4566 scanned += delta;
ac35a490237446 Yu Zhao 2022-09-18 4567
669281ee7ef731 Kalesh Singh 2023-08-01 4568 if (sort_folio(lruvec, folio, sc, tier))
ac35a490237446 Yu Zhao 2022-09-18 4569 sorted += delta;
ac35a490237446 Yu Zhao 2022-09-18 4570 else if (isolate_folio(lruvec, folio, sc)) {
ac35a490237446 Yu Zhao 2022-09-18 4571 list_add(&folio->lru, list);
ac35a490237446 Yu Zhao 2022-09-18 4572 isolated += delta;
ac35a490237446 Yu Zhao 2022-09-18 4573 } else {
ac35a490237446 Yu Zhao 2022-09-18 4574 list_move(&folio->lru, &moved);
8c2214fc9a470a Jaewon Kim 2023-10-03 4575 skipped_zone += delta;
ac35a490237446 Yu Zhao 2022-09-18 4576 }
ac35a490237446 Yu Zhao 2022-09-18 4577
8c2214fc9a470a Jaewon Kim 2023-10-03 4578 if (!--remaining || max(isolated, skipped_zone) >= MIN_LRU_BATCH)
ac35a490237446 Yu Zhao 2022-09-18 4579 break;
ac35a490237446 Yu Zhao 2022-09-18 4580 }
ac35a490237446 Yu Zhao 2022-09-18 4581
8c2214fc9a470a Jaewon Kim 2023-10-03 4582 if (skipped_zone) {
ac35a490237446 Yu Zhao 2022-09-18 4583 list_splice(&moved, head);
8c2214fc9a470a Jaewon Kim 2023-10-03 4584 __count_zid_vm_events(PGSCAN_SKIP, zone, skipped_zone);
8c2214fc9a470a Jaewon Kim 2023-10-03 4585 skipped += skipped_zone;
ac35a490237446 Yu Zhao 2022-09-18 4586 }
ac35a490237446 Yu Zhao 2022-09-18 4587
ac35a490237446 Yu Zhao 2022-09-18 4588 if (!remaining || isolated >= MIN_LRU_BATCH)
ac35a490237446 Yu Zhao 2022-09-18 4589 break;
ac35a490237446 Yu Zhao 2022-09-18 4590 }
ac35a490237446 Yu Zhao 2022-09-18 4591
e452872b40e3f1 Hao Jia 2025-03-18 4592 item = PGSCAN_KSWAPD + reclaimer_offset(sc);
b0e9e710c6bb5a JP Kobryn 2026-02-18 4593 mod_lruvec_state(lruvec, item, isolated);
b0e9e710c6bb5a JP Kobryn 2026-02-18 4594 mod_lruvec_state(lruvec, PGREFILL, sorted);
b0e9e710c6bb5a JP Kobryn 2026-02-18 4595 mod_lruvec_state(lruvec, PGSCAN_ANON + type, isolated);
49d921b471c513 Chen Ridong 2025-12-04 4596 trace_mm_vmscan_lru_isolate(sc->reclaim_idx, sc->order, scan_batch,
8c2214fc9a470a Jaewon Kim 2023-10-03 4597 scanned, skipped, isolated,
8c2214fc9a470a Jaewon Kim 2023-10-03 4598 type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
1bc542c6a0d144 Zeng Jingxiang 2024-10-26 4599 if (type == LRU_GEN_FILE)
1bc542c6a0d144 Zeng Jingxiang 2024-10-26 4600 sc->nr.file_taken += isolated;
ac35a490237446 Yu Zhao 2022-09-18 4601 /*
e9d4e1ee788097 Yu Zhao 2022-12-21 4602 * There might not be eligible folios due to reclaim_idx. Check the
e9d4e1ee788097 Yu Zhao 2022-12-21 4603 * remaining to prevent livelock if it's not making progress.
ac35a490237446 Yu Zhao 2022-09-18 4604 */
ac35a490237446 Yu Zhao 2022-09-18 4605 return isolated || !remaining ? scanned : 0;
ac35a490237446 Yu Zhao 2022-09-18 4606 }
ac35a490237446 Yu Zhao 2022-09-18 4607
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki