On Fri, Apr 22, 2011 at 8:41 PM, Rik van Riel wrote:
> On 04/22/2011 11:33 PM, Ying Han wrote:
>
>> Now we would like to launch another job C, since we know there is
>> A (16G - 10G) + B (16G - 10G) = 12G of "cold" memory that can be
>> reclaimed (without impacting A's and B's performance). So what will
>> happen?
>>
>> 1. Start running C on the host, which triggers global memory pressure
>> right away. If the reclaim is fast, C starts growing with the free
>> pages from A and B.
>>
>> However, it is possible that reclaim cannot keep up with the job's
>> page allocation. We end up with either an OOM condition or a
>> performance spike on any of the running jobs.
>>
>> One way to improve this is to set a wmark on A or B so that pages are
>> proactively reclaimed before launching C. Global memory pressure
>> won't help much here, since we won't trigger it.
>>
>> min_free_kbytes more or less indirectly provides the same thing at
>> the global level, but I don't think anybody tunes it just for the
>> aggressiveness of background reclaim.
>
> This sounds like yet another reason to have a tunable that
> can increase the gap between min_free_kbytes and low_free_kbytes
> (automatically scaled to size in every zone).
>
> The realtime people want this to reduce allocation latencies.
>
> I want it for dynamic virtual machine resizing, without the
> memory fragmentation inherent in balloons (which would destroy
> the performance benefit of transparent hugepages).
>
> Now Google wants it for job placement.

To clarify a bit: we scale min_free_kbytes to reduce the likelihood of
page allocation failure. This is still the global per-zone page
allocation, and is different from the memcg discussion we have in this
thread. To be more specific, our case is more or less caused by the
128M fake node size. Anyway, this is different from what has been
discussed so far on this thread. :)

--Ying

> Is there any good reason we can't have a low watermark
> equivalent to min_free_kbytes? :)
>
> --
> All rights reversed
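
Since the thread revolves around min_free_kbytes and the per-zone
watermarks derived from it, here is a minimal sketch of how to inspect
them on a running Linux kernel (standard procfs paths; the low/high
derivation below is the setup_per_zone_wmarks() behavior of kernels of
this era, roughly low = min * 5/4 and high = min * 3/2):

```shell
# Global knob, in kilobytes; drives the per-zone "min" watermark.
cat /proc/sys/vm/min_free_kbytes

# Per-zone min/low/high watermarks, in pages. Background reclaim
# (kswapd) wakes when free pages fall below "low" and sleeps again
# above "high"; direct reclaim kicks in below "min".
awk '/^Node/ || /^ +(min|low|high) /' /proc/zoneinfo
```

For reference, the tunable gap between min and low that Rik describes
later landed in mainline as vm.watermark_scale_factor.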