Hi.

On Tue, Dec 09, 2025 at 01:25:52AM +0000, Chen Ridong wrote:
> From: Chen Ridong
>
> The memcg LRU was introduced to improve scalability in global reclaim,
> but its implementation has grown complex and can cause performance
> regressions when creating many memory cgroups [1].
>
> This series implements mem_cgroup_iter with a reclaim cookie in
> shrink_many() for global reclaim, following the pattern already used in
> shrink_node_memcgs(), an approach suggested by Johannes [1]. The new
> design maintains good fairness across cgroups by preserving iteration
> state between reclaim passes.
>
> Testing was performed using the original stress test from Yu Zhao [2] on a
> 1 TB, 4-node NUMA system. The results show:

(I think the cover letter somehow lost the targets of [1],[2]. I assume
I could retrieve those from patch 1/5.)

>
> pgsteal:
>                                      memcg LRU    memcg iter
>   stddev(pgsteal) / mean(pgsteal)    106.03%      93.20%
>   sum(pgsteal) / sum(requested)      98.10%       99.28%
>
> workingset_refault_anon:
>                                      memcg LRU    memcg iter
>   stddev(refault) / mean(refault)    193.97%      134.67%
>   sum(refault)                       1,963,229    2,027,567
>
> The new implementation shows clear fairness improvements, reducing the
> standard deviation relative to the mean by 12.8 percentage points for
> pgsteal and bringing the pgsteal ratio closer to 100%. Refault counts
> increased by 3.2% (from 1,963,229 to 2,027,567).

Just as a quick clarification -- this isn't supposed to affect regular
(CONFIG_LRU_GEN_ENABLED=n) reclaim, correct?

Thanks,
Michal
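
P.S. For anyone reading along without mm/vmscan.c in front of them,
below is a minimal sketch of the mem_cgroup_iter() + reclaim cookie
pattern the cover letter refers to, roughly as shrink_node_memcgs()
uses it today. The function name shrink_memcgs_sketch() and the
surrounding context are made up for illustration; this is not the
actual patch, just the shape of the loop:

	/*
	 * Sketch only (hypothetical helper, simplified from the
	 * shrink_node_memcgs() pattern). The reclaim cookie carries
	 * per-node iteration state, so successive reclaim passes
	 * resume where the previous one stopped instead of always
	 * hammering the first cgroups in the tree -- that is what
	 * provides the fairness across cgroups.
	 */
	static void shrink_memcgs_sketch(pg_data_t *pgdat,
					 struct scan_control *sc)
	{
		struct mem_cgroup_reclaim_cookie reclaim = {
			.pgdat = pgdat,
		};
		struct mem_cgroup *target = sc->target_mem_cgroup;
		struct mem_cgroup *memcg;

		memcg = mem_cgroup_iter(target, NULL, &reclaim);
		do {
			struct lruvec *lruvec =
				mem_cgroup_lruvec(memcg, pgdat);

			shrink_lruvec(lruvec, sc);

			/*
			 * Stop early once enough was reclaimed; the
			 * shared iterator remembers the position for
			 * the next reclaim pass.
			 */
			if (sc->nr_reclaimed >= sc->nr_to_reclaim) {
				mem_cgroup_iter_break(target, memcg);
				break;
			}
		} while ((memcg = mem_cgroup_iter(target, memcg,
						  &reclaim)));
	}

As I read the series, shrink_many() would adopt this same loop for the
CONFIG_LRU_GEN_ENABLED=y global-reclaim path, replacing the memcg LRU
bookkeeping.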