> IOWs, shr->nr_in_batch can grow much larger than any single node
> LRU list, and the deferred count is only limited to (2 * max_pass).
> Hence if the same node is the one that keeps stealing the global
> shr->nr_in_batch calculation, it will always be a number related to
> the size of the cache on that node. All the other nodes will simply
> keep adding their delta counts to it.
>
> Hence if you've got a node with less cache in it than others, and
> kswapd comes along, it will see a gigantic amount of deferred work
> in nr_in_batch, and then we end up removing a large amount of the
> cache on that node, even though it hasn't had a significant amount
> of pressure. And the node that has pressure continues to wind up
> nr_in_batch until it's the one that gets hit by a kswapd run with
> that wound-up nr_in_batch....
>
> Cheers,
>
> Dave.

Ok Dave,

My system in general seems to behave quite differently from this. In particular, I hardly see peaks, and the caches fill up very slowly. They are later pruned, but always down to the same level, and then they grow slowly again, in a triangular fashion, always within a fairly reasonable range. This might be because my disks are slower than yours, or it may be some glitch in my setup. I spent a fair amount of time today trying to reproduce your behavior, but I can't. I will try more tomorrow.

In the meantime, what do you think about the following patch (which obviously needs a lot more work; it is just a PoC)? If we are indeed deferring work to unrelated nodes, keeping the deferred work per-node should help.

I don't want to make it a static array, because the shrinker structure tends to be embedded in other structures. In particular, the superblock already has two list_lrus with per-node static arrays, and making this a static array too would make the sb gigantic. But that is not the main thing.