From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail138.messagelabs.com (mail138.messagelabs.com [216.82.249.35])
	by kanga.kvack.org (Postfix) with ESMTP id 0CC3B6B0044
	for ; Tue, 27 Jan 2009 04:07:57 -0500 (EST)
Subject: Re: [patch] SLQB slab allocator (try 2)
From: Peter Zijlstra
In-Reply-To:
References: <20090123154653.GA14517@wotan.suse.de>
	<1232959706.21504.7.camel@penberg-laptop>
	<1232960840.4863.7.camel@laptop>
Content-Type: text/plain
Date: Tue, 27 Jan 2009 10:07:52 +0100
Message-Id: <1233047272.4984.12.camel@laptop>
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
Sender: owner-linux-mm@kvack.org
To: Christoph Lameter
Cc: Pekka Enberg, Nick Piggin, Linux Memory Management List,
	Linux Kernel Mailing List, Andrew Morton, Lin Ming,
	"Zhang, Yanmin"
List-ID:

On Mon, 2009-01-26 at 12:22 -0500, Christoph Lameter wrote:
> On Mon, 26 Jan 2009, Peter Zijlstra wrote:
>
> > Then again, anything that does allocation is by definition not bounded
> > and not something we can have on latency critical paths -- so in that
> > respect it's not interesting.
>
> Well there is the problem in SLAB and SLQB that they *continue* to do
> processing after an allocation. They defer queue cleaning. So your
> latency critical paths are interrupted by the deferred queue processing.

No, they're not -- well, only if you let them, that is, and then it's
your own fault.

Remember, -rt is about being able to preempt pretty much everything. If
the userspace task has a higher priority than the timer interrupt, the
timer interrupt just gets to wait. Yes, there is a very small hardirq
window where the actual interrupt triggers, but all that does is a
wakeup, and then it's gone again.

> SLAB has the awful habit of gradually pushing objects out of its
> queues (tries to approximate the loss of cpu cache hotness over time).
> So for a while you get hit every 2 seconds with some free operations
> to the page allocator on each cpu.
> If you have a lot of cpus then this may become an ongoing operation.
> The slab pages end up in the page allocator queues, which are then
> occasionally pushed back to the buddy lists. Another relatively high
> spike there.

Like Nick has been asking, can you give a solid test case that
demonstrates this issue?

I'm thinking getting rid of those cross-bar queues hugely reduces that
problem.