Date: Tue, 5 Dec 2006 12:15:46 -0800 (PST)
From: Christoph Lameter
Subject: Re: la la la la ... swappiness
In-Reply-To: <20061205120256.b1db9887.akpm@osdl.org>
References: <200612050641.kB56f7wY018196@ms-smtp-06.texas.rr.com> <20061205085914.b8f7f48d.akpm@osdl.org> <20061205120256.b1db9887.akpm@osdl.org>
To: Andrew Morton
Cc: Linus Torvalds, Aucoin, 'Nick Piggin', 'Tim Schmielau', Linux Memory Management List

On Tue, 5 Dec 2006, Andrew Morton wrote:

> But otoh, it's a very common scenario, and nobody has observed it before.

This is the same scenario as mlocked memory. Kame-san recently posted an
occurrence of it in ZONE_DMA, and I have three customers where I have seen
similar VM behavior, with a special shared memory thingy locking down lots
of memory.

In fact, in the NUMA case with cpusets, the limits being off is a very
common problem. For example, the dirty balancing logic does not take into
account that an application may run on only a subset of the machine. So if
a cpuset covers just 1/10th of the whole machine, we can never reach the
global dirty limits, and all the nodes of that cpuset may fill up with
dirty pages. A simple cp of a large file will then drive the machine into
continual reclaim on all of the cpuset's nodes (see the first sketch
below).

I am working on a solution for the dirty throttling, but we have similar
issues with the other limits. I wonder whether we should account for
unreclaimable memory per zone and recalculate the limits whenever that
amount changes significantly. A series of huge page allocations would then
retune the limits.
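
To put rough numbers on the cpuset problem: the global dirty threshold is
computed against the whole machine's memory, while the tasks in a cpuset
can only ever dirty the pages of their own nodes. A userspace toy that
shows the arithmetic (the figures are made up; this is not the actual
get_dirty_limits() code, just its effect):

#include <stdio.h>

int main(void)
{
	/* Made-up example: 100GB machine, 4KB pages, split into ten
	 * equal cpusets, vm.dirty_ratio = 40. */
	unsigned long total_pages = 100UL << (30 - 12);
	unsigned long cpuset_pages = total_pages / 10;
	unsigned long dirty_thresh = total_pages * 40 / 100;

	printf("global dirty threshold:         %lu pages\n", dirty_thresh);
	printf("most a 1/10th cpuset can dirty: %lu pages\n", cpuset_pages);

	/* The cpuset tops out at 10 percent of memory while the
	 * threshold sits at 40 percent, so balance_dirty_pages() never
	 * throttles the writer; reclaim on the cpuset's nodes has to
	 * absorb everything instead. */
	printf("writer ever throttled:          %s\n",
	       cpuset_pages >= dirty_thresh ? "yes" : "no");
	return 0;
}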
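
And to make the retuning idea a bit more concrete, something along these
lines, as a userspace sketch (none of these names exist in the kernel; the
1/8th trigger and the /128 scaling are invented for illustration):

#include <stdio.h>

struct zone_sketch {
	unsigned long present_pages;
	unsigned long unreclaimable;	/* mlocked, hugetlb, bootmem ... */
	unsigned long pages_min;	/* allocation watermark */
	unsigned long last_retuned;	/* unreclaimable count at last retune */
};

/*
 * Rescale the watermark against the reclaimable part of the zone, but
 * only when the unreclaimable count has moved by more than 1/8th of the
 * zone since we last looked, so the common path stays cheap.
 */
static void maybe_retune(struct zone_sketch *z)
{
	unsigned long delta = z->unreclaimable > z->last_retuned ?
			z->unreclaimable - z->last_retuned :
			z->last_retuned - z->unreclaimable;

	if (delta < z->present_pages / 8)
		return;

	z->pages_min = (z->present_pages - z->unreclaimable) / 128;
	z->last_retuned = z->unreclaimable;
}

int main(void)
{
	/* A small ZONE_DMA-like zone of 4096 pages. */
	struct zone_sketch dma = { .present_pages = 4096, .pages_min = 32 };

	/* Huge page allocations pin three quarters of the zone ... */
	dma.unreclaimable = 3072;
	maybe_retune(&dma);

	/* ... and the watermark shrinks to match what is reclaimable. */
	printf("pages_min after pinning 3/4 of the zone: %lu\n",
	       dma.pages_min);
	return 0;
}

The same recalculation would naturally cover the mlock and shared memory
cases above, since those also just move pages into the unreclaimable
count.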