This is something that I've been chasing for months, and I'm getting tired of it. :(

The issue has been observed on 4GB RAM x86_64 machines (one server, one desktop) without a swap subsystem (not even compiled in). The important thing to remember about a 4GB x86_64 machine is that the NORMAL zone is about 6 times smaller than the DMA32 zone (most of the RAM sits below the 4GB boundary in DMA32, and only the part remapped above 4GB ends up in NORMAL).

As a picture is worth 10000 words, I've attached two graphs that nicely show what I've observed. As memory usage slowly rises, the MM subsystem gradually evicts pagecache pages from the NORMAL zone, trying to eventually get rid of all of them! This process takes days, typically more than 5 on this particular server. Of course, this means that eventually the zone will be chock-full of anon pages, and without swap, the kernel can't do much about it. But as it tries to balance the zone, various bad things happen. On the server I've seen sudden freeing of hundreds of MB of pagecache; on the desktop there's a general slowdown, sound dropouts (HTTP streaming) and so on...

The first graph was probably a 3.8 kernel, the second one is 3.9.0-rc4+ patched with the kswapd series v2. Obviously not much has changed wrt this problem, although it seems to me that the kernel now hesitates to needlessly free large amounts of memory, or does it less often. But on the desktop there's no improvement: as soon as the pagecache gets really low in the NORMAL zone, there's severe slowdown, dropouts, etc...

One other thing: the lower graphs say "Normal zone file pages", but what is actually graphed is nr_active_file + nr_inactive_file from the NORMAL zone!

I've also attached two zoneinfo outputs. Notice how the DMA32 zones have hundreds of thousands of pagecache pages, but only a few dozen are in the NORMAL zone! Also nr_vmscan_write is telling: much higher values for zone NORMAL (especially when you take into account how little pagecache is there!). I guess those poor pagecache pages that survive there get written out a millisecond after they're dirtied, a probable cause of the slowdown I experience on the desktop. (A rough sketch of pulling these counters out of /proc/zoneinfo is in the P.S. below.)

There's a reasonable possibility that this imbalance between zones was introduced somewhere between 3.3 and 3.4, because the VM behaves slightly differently in 3.3 (it doesn't evict pagecache from the NORMAL zone so aggressively). Unfortunately, I have some userspace incompatibilities when running 3.3, so I'm not 100% sure (I didn't run it long enough to be absolutely sure).

I tried to find the problematic commit, and cc715d99e529 certainly looked like the culprit, but it's not! buffer_heads_over_limit is NEVER true on the machine, not even close, so that commit is basically a no-op here. Also, whether THP is on or off doesn't matter; the behaviour stays the same.

My apologies for the long email, I tried to provide as much information as possible.

--
Zlatko
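
P.S. For completeness, a minimal sketch of how the per-zone counters can be pulled from /proc/zoneinfo (not the exact script behind the graphs; it assumes the per-zone field names exported by 3.x-era kernels, where nr_active_file, nr_inactive_file and nr_vmscan_write are still listed per zone):

#!/usr/bin/env python3
# Sum nr_active_file + nr_inactive_file and show nr_vmscan_write per zone,
# reading /proc/zoneinfo. Field names are per-zone as in 3.x kernels.
import re

zones = {}
zone = None
with open("/proc/zoneinfo") as f:
    for line in f:
        m = re.match(r"Node\s+(\d+),\s+zone\s+(\S+)", line)
        if m:
            zone = "Node %s, zone %s" % (m.group(1), m.group(2))
            zones[zone] = {}
            continue
        m = re.match(r"\s+(nr_active_file|nr_inactive_file|nr_vmscan_write)\s+(\d+)", line)
        if m and zone is not None:
            zones[zone][m.group(1)] = int(m.group(2))

for zone, c in zones.items():
    file_pages = c.get("nr_active_file", 0) + c.get("nr_inactive_file", 0)
    print("%-22s file pages: %8d   nr_vmscan_write: %d"
          % (zone, file_pages, c.get("nr_vmscan_write", 0)))

Running it periodically and logging the "file pages" number for zone NORMAL gives the kind of curve shown in the lower graphs.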