From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 16 May 2007 13:44:59 -0700 (PDT)
From: Christoph Lameter
Subject: Re: [PATCH 0/5] make slab gfp fair
In-Reply-To: <1179348039.2912.48.camel@lappy>
Message-ID:
References: <20070514131904.440041502@chello.nl> <20070514161224.GC11115@waste.org>
 <1179164453.2942.26.camel@lappy> <1179170912.2942.37.camel@lappy>
 <1179250036.7173.7.camel@twins> <1179298771.7173.16.camel@twins>
 <1179343521.2912.20.camel@lappy> <1179346738.2912.39.camel@lappy>
 <1179348039.2912.48.camel@lappy>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-linux-mm@kvack.org
Return-Path:
To: Peter Zijlstra
Cc: Matt Mackall, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 Thomas Graf, David Miller, Andrew Morton, Daniel Phillips, Pekka Enberg
List-ID:

On Wed, 16 May 2007, Peter Zijlstra wrote:

> > How does all of this interact with
> >
> > 1. cpusets
> >
> > 2. dma allocations and highmem?
> >
> > 3. Containers?
>
> Much like the normal kmem_cache would do; I'm not changing any of the
> page allocation semantics.

So if we run out of memory on a cpuset then network I/O will still fail?

I do not see any distinction between DMA and regular memory. If we need
DMA memory to complete the transaction then this won't work?

> But its wanted to try the normal cpu_slab path first to detect that the
> situation has subsided and we can resume normal operation.

Is there some indicator somewhere that indicates that we are in trouble?
I just see the ranks.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
majordomo@kvack.org. For more info on Linux MM, see:
http://www.linux-mm.org/ .
Don't email: email@kvack.org