From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 15 Dec 2005 17:26:01 +0100
From: Pavel Machek
Subject: Re: [RFC][PATCH 0/6] Critical Page Pool
Message-ID: <20051215162601.GJ2904@elf.ucw.cz>
References: <439FCECA.3060909@us.ibm.com> <20051214100841.GA18381@elf.ucw.cz> <43A0406C.8020108@us.ibm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <43A0406C.8020108@us.ibm.com>
Sender: owner-linux-mm@kvack.org
Return-Path:
To: Matthew Dobson
Cc: linux-kernel@vger.kernel.org, andrea@suse.de, Sridhar Samudrala,
	Andrew Morton, Linux Memory Management
List-ID:

Hi!

> > I don't see how this can ever work.
> >
> > How can _userspace_ know about what allocations are critical to the
> > kernel?!
>
> Well, it isn't userspace that is determining *which* allocations are
> critical to the kernel.  That is statically determined at compile time by
> using the flag __GFP_CRITICAL on specific *kernel* allocations.  Sridhar,
> cc'd on this mail, has a set of patches that sprinkle the __GFP_CRITICAL
> flag throughout the networking code to take advantage of this pool.
> Userspace is in charge of determining *when* we're in an emergency
> situation, and should thus use the critical pool, but not *which*

It still is not too reliable. If your userspace tool is swapped out
(etc.), it may not get a chance to wake up.

> > And as you noticed, it does not work for your original usage case,
> > because the reserved memory pool would have to be "sum of all network
> > interface bandwidths * amount of time expected to survive without
> > network", which is way too much.
>
> Well, I never suggested it didn't work for my original usage case.  The
> discussion we had is that it would be incredibly difficult to 100%
> iron-clad guarantee that the pool would NEVER run out of pages.  But we can
> size the pool, especially given a decent workload approximation, so as to
> make failure far less likely.

Perhaps you should add a file in Documentation/ explaining that it is
not reliable?

> > If you want a few emergency pages for some strange hack you are doing
> > (swapping over network?), just put swap into a ramdisk and swapon() it
> > when you are in an emergency, or use memory hotplug and plug a few more
> > gigabytes into your machine. But don't go introducing infrastructure
> > that _can't_ be used right.
>
> Well, that's basically the point of posting these patches as an RFC.  I'm
> not quite so delusional as to think they're going to get picked up right
> now.  I was, however, hoping for feedback to figure out how to design
> infrastructure that *can* be used right, as well as trying to find other
> potential users of such a feature.

Well, we don't usually take infrastructure that has no in-kernel
users; an example user would indeed be nice.

								Pavel
-- 
Thanks, Sharp!
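
For concreteness, a minimal userspace sketch of the "put swap into a
ramdisk and swapon() it" fallback quoted above. This is only a sketch
under assumptions not in the mail: it presumes a ram disk block device
such as /dev/ram0 exists and has already been prepared with mkswap, and
the device path and function names are purely illustrative.

/* Sketch: enable a pre-made ram disk swap area as an emergency reserve.
 * Assumes /dev/ram0 was already set up with mkswap; the path and the
 * function name enable_emergency_swap() are illustrative only.
 */
#include <stdio.h>
#include <sys/swap.h>

static int enable_emergency_swap(const char *ramdisk)
{
	/* swapon(2): add the device as swap space with default priority */
	if (swapon(ramdisk, 0) < 0) {
		perror("swapon");
		return -1;
	}
	return 0;
}

int main(void)
{
	/* In a real tool this would run only once the emergency condition
	 * (e.g. imminent memory exhaustion) has been detected. */
	return enable_emergency_swap("/dev/ram0") ? 1 : 0;
}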