From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from psmtp.com (na3sys010amx135.postini.com [74.125.245.135]) by kanga.kvack.org (Postfix) with SMTP id 703856B002B for ; Mon, 15 Oct 2012 21:28:40 -0400 (EDT)
Received: by mail-ob0-f169.google.com with SMTP id va7so6850242obc.14 for ; Mon, 15 Oct 2012 18:28:39 -0700 (PDT)
MIME-Version: 1.0
In-Reply-To: <1350141021.21172.14949.camel@edumazet-glaptop>
References: <1350141021.21172.14949.camel@edumazet-glaptop>
Date: Tue, 16 Oct 2012 10:28:39 +0900
Message-ID:
Subject: Re: [Q] Default SLAB allocator
From: JoonSoo Kim
Content-Type: text/plain; charset=ISO-8859-1
Sender: owner-linux-mm@kvack.org
List-ID:
To: Eric Dumazet
Cc: David Rientjes , Andi Kleen , Ezequiel Garcia , Linux Kernel Mailing List , linux-mm@kvack.org, Tim Bird , celinux-dev@lists.celinuxforum.org

Hello, Eric.

2012/10/14 Eric Dumazet :
> SLUB was really bad in the common workload you describe (allocations
> done by one cpu, freeing done by other cpus), because all kfree() hit
> the slow path and cpus contend in __slab_free() in the loop guarded by
> cmpxchg_double_slab(). SLAB has a cache for this, while SLUB directly
> hit the main "struct page" to add the freed object to freelist.

Could you elaborate on how 'netperf RR' makes the kernel do
"allocations done by one cpu, freeing done by other cpus", please?
I don't have enough background in the network subsystem, so I'm just
curious.

> I played some months ago adding a percpu associative cache to SLUB, then
> just moved on other strategy.
>
> (Idea for this per cpu cache was to build a temporary free list of
> objects to batch accesses to struct page)

Is this implemented and submitted?
If it is, could you tell me the link to the patches?

Thanks!

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org