Hi,

In chasing a performance problem on a 2.4.9-based VM (yes, that one!), we found a case where kswapd was consuming massive CPU time, 97% of it in prune_icache() (and, of that, about 7% in the inode_has_buffers() sub-function). slabinfo showed about 100k inodes in use.

The hypothesis is that we've got buffers in cache pinning the inodes. It's not pages doing the pinning: we only get as far as the inode_has_buffers() test once the inode's page count has already dropped to zero.

On buffer write, the bh goes onto BUF_LOCKED, but never gets removed from there. In other testing I've seen several GB of memory in BUF_LOCKED bh'es during extensive write loads. That's normally no problem, except that the lack of a refile_buffer() on those bh'es also keeps them on the inode's own buffer lists. If it's metadata that the buffers back (ie. it's all in low memory) and the demand on the system is for highmem pages, then we're not necessarily going to be aggressive about try_to_release_page() on the lowmem pages, which is what would allow the bh'es to be refiled.

Doing the refile really isn't hard, either. We expect IO completion to happen in approximately list order on the BUF_LOCKED list, so simply refiling any unlocked buffers at the head of that list keeps it under control in O(1) time per buffer.

With the patch below we've not seen this particular pathology recur.

Comments?

--Stephen
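
For illustration only, here is a rough sketch of the refile idea described above (this is not the actual patch, and the function name is made up). It assumes the 2.4 fs/buffer.c internals: lru_list[], lru_list_lock, BUF_LOCKED, buffer_locked() and __refile_buffer(), so it would have to live in fs/buffer.c where those are in scope.

/*
 * Sketch only: opportunistically refile buffers at the head of the
 * BUF_LOCKED LRU whose IO has already completed.  IO completes in
 * roughly list order, so we stop at the first buffer that is still
 * locked; the cost is O(1) per buffer actually refiled.
 */
static void refile_completed_locked_buffers(void)
{
	struct buffer_head *bh;

	spin_lock(&lru_list_lock);
	while ((bh = lru_list[BUF_LOCKED]) != NULL && !buffer_locked(bh)) {
		/*
		 * IO is done: move the bh to BUF_CLEAN (or BUF_DIRTY).
		 * Refiling to BUF_CLEAN also takes the bh off the
		 * inode's buffer list, so prune_icache() can free the
		 * inode instead of repeatedly hitting inode_has_buffers().
		 */
		__refile_buffer(bh);
	}
	spin_unlock(&lru_list_lock);
}

Stopping at the first still-locked buffer is what keeps the cost bounded: anything behind it is expected to still be under IO. Where to hook this in (the write path, balance_dirty(), or elsewhere) is a separate question; the sketch only shows the refile loop itself.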