I'm really evil, so I changed the loop in compact_capture_page() to
basically steal the highest-order page it can.  This shouldn't _break_
anything, but it does ensure that we'll be splitting the pages we find
more often, and reproducing the leak *MUCH* faster:

-	for (order = cc->order; order < MAX_ORDER; order++) {
+	for (order = MAX_ORDER - 1; order >= cc->order; order--) {

I also augmented the area in capture_free_page() that I expect to be
leaking:

	if (alloc_order != order) {
		static int leaked_pages = 0;
		leaked_pages += 1 << order;
		expand(zone, page, alloc_order, order,
			&zone->free_area[order], migratetype);
	}

I add up all the per-order fields in buddyinfo to figure out how many
pages _should_ be in the allocator, then compare that to MemFree to get
a guess at how much is leaked.  That number correlates _really_ well
with the "leaked_pages" variable above.  That pretty much seals it for
me.

I'll run a stress test overnight to see if it pops up again.  The
patch I'm running is attached.  I'll send a properly changelogged one
tomorrow if it works.
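
For reference, here is a minimal userspace sketch (not part of the
original mail or patch) of the buddyinfo-vs-MemFree comparison
described above.  It assumes 4kB pages and the usual /proc/buddyinfo
and /proc/meminfo layouts; the result is only a rough estimate, as the
mail says.

	/* Sum all per-order free-block counts in /proc/buddyinfo,
	 * weighted by 2^order, and compare against MemFree. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	int main(void)
	{
		FILE *f;
		char line[512];
		unsigned long long buddy_pages = 0, memfree_kb = 0;

		/* Each buddyinfo line: "Node N, zone NAME c0 c1 ... c10",
		 * where c_i counts free blocks of order i. */
		f = fopen("/proc/buddyinfo", "r");
		if (!f) { perror("buddyinfo"); return 1; }
		while (fgets(line, sizeof(line), f)) {
			char *p = strstr(line, "zone");
			int order = 0;
			if (!p)
				continue;
			p += strlen("zone");
			while (*p == ' ')	/* skip spaces before zone name */
				p++;
			while (*p && *p != ' ')	/* skip the zone name itself */
				p++;
			/* remaining tokens are the per-order block counts */
			for (char *tok = strtok(p, " \t\n"); tok;
			     tok = strtok(NULL, " \t\n"), order++)
				buddy_pages += strtoull(tok, NULL, 10) << order;
		}
		fclose(f);

		f = fopen("/proc/meminfo", "r");
		if (!f) { perror("meminfo"); return 1; }
		while (fgets(line, sizeof(line), f))
			if (sscanf(line, "MemFree: %llu kB", &memfree_kb) == 1)
				break;
		fclose(f);

		/* 4kB pages: kB / 4 == pages */
		printf("buddyinfo free pages:  %llu\n", buddy_pages);
		printf("MemFree pages:         %llu\n", memfree_kb / 4);
		printf("apparent leak (pages): %lld\n",
		       (long long)buddy_pages - (long long)(memfree_kb / 4));
		return 0;
	}

Running it periodically during the stress test and watching whether the
gap keeps growing is the kind of check the numbers above came from.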