From: JoonSoo Kim <js1304@gmail.com>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David Rientjes <rientjes@google.com>,
Andi Kleen <andi@firstfloor.org>,
Ezequiel Garcia <elezegarcia@gmail.com>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
linux-mm@kvack.org, Tim Bird <tim.bird@am.sony.com>,
celinux-dev@lists.celinuxforum.org
Subject: Re: [Q] Default SLAB allocator
Date: Tue, 16 Oct 2012 10:28:39 +0900 [thread overview]
Message-ID: <CAAmzW4M8drwRPy_qWxnkG3-GKGPq+m24me+pGOWNtPzA15iVfg@mail.gmail.com> (raw)
In-Reply-To: <1350141021.21172.14949.camel@edumazet-glaptop>
Hello, Eric.
2012/10/14 Eric Dumazet <eric.dumazet@gmail.com>:
> SLUB was really bad in the common workload you describe (allocations
> done by one cpu, freeing done by other cpus), because all kfree() hit
> the slow path and cpus contend in __slab_free() in the loop guarded by
> cmpxchg_double_slab(). SLAB has a cache for this, while SLUB directly
> hit the main "struct page" to add the freed object to freelist.
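If I understand the __slab_free() contention you describe, every
remote kfree() ends up retrying a compare-and-swap on the shared
struct page. A userspace toy model of that pattern (just my own
sketch to check my understanding, not the actual mm/slub.c code,
and a single-pointer CAS instead of the real double-word
cmpxchg_double_slab(); toy_page and toy_remote_free are made-up
names):

#include <stdatomic.h>

struct toy_page {
	/* shared freelist head: every free touches this cache line */
	_Atomic(void *) freelist;
};

static void toy_remote_free(struct toy_page *page, void *object)
{
	void *old = atomic_load_explicit(&page->freelist,
					 memory_order_relaxed);
	do {
		/* store the current head in the object's free pointer */
		*(void **)object = old;
		/* all freeing cpus contend and retry here, like the
		 * loop guarded by cmpxchg_double_slab() */
	} while (!atomic_compare_exchange_weak(&page->freelist,
					       &old, object));
}

Is that the right picture?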
Could you elaborate on how 'netperf RR' makes the kernel do "allocations
done by one cpu, freeing done by other cpus", please?
I don't have enough background in the network subsystem, so I'm just curious.
> I played some months ago adding a percpu associative cache to SLUB, then
> just moved on other strategy.
>
> (Idea for this per cpu cache was to build a temporary free list of
> objects to batch accesses to struct page)
Is this implemented and submitted?
If it is, could you point me to a link for the patches?
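If the patches are not available anywhere, is the idea roughly like
the following? (Just my guess at the design, reusing the toy_page
above; BATCH, toy_cpu_cache and toy_free_batched are made-up names.)

#define BATCH 16

struct toy_cpu_cache {
	int nr;
	void *objs[BATCH];
};

/* one instance per cpu; __thread stands in for a kernel percpu var */
static __thread struct toy_cpu_cache cpu_cache;

static void toy_free_batched(struct toy_page *page, void *object)
{
	struct toy_cpu_cache *c = &cpu_cache;
	void *old;
	int i;

	c->objs[c->nr++] = object;	/* cpu-local, no contention */
	if (c->nr < BATCH)
		return;

	/* chain the batch together locally ... */
	for (i = 0; i < BATCH - 1; i++)
		*(void **)c->objs[i] = c->objs[i + 1];

	/* ... then splice the whole chain with a single CAS, so the
	 * shared struct page is touched once per BATCH frees */
	old = atomic_load_explicit(&page->freelist, memory_order_relaxed);
	do {
		*(void **)c->objs[BATCH - 1] = old;
	} while (!atomic_compare_exchange_weak(&page->freelist, &old,
					       c->objs[0]));
	c->nr = 0;
}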
Thanks!
Thread overview: 33+ messages
2012-10-11 14:19 Ezequiel Garcia
2012-10-11 22:42 ` Andi Kleen
2012-10-11 22:59 ` David Rientjes
2012-10-11 23:10 ` Andi Kleen
2012-10-12 12:07 ` Ezequiel Garcia
2012-10-13 9:54 ` David Rientjes
2012-10-13 12:44 ` Ezequiel Garcia
2012-10-16 0:46 ` David Rientjes
2012-10-16 12:35 ` Ezequiel Garcia
2012-10-16 12:56 ` Eric Dumazet
2012-10-16 18:07 ` Tim Bird
2012-10-16 18:27 ` Ezequiel Garcia
2012-10-16 18:44 ` Tim Bird
2012-10-16 18:49 ` Ezequiel Garcia
2012-10-16 19:16 ` Eric Dumazet
2012-10-17 18:45 ` Tim Bird
2012-10-17 19:13 ` Eric Dumazet
2012-10-17 19:20 ` Shentino
2012-10-17 20:33 ` Tim Bird
2012-10-18 0:46 ` Shentino
2012-10-17 20:58 ` Tim Bird
2012-10-17 21:05 ` Ezequiel Garcia
2012-10-16 18:36 ` Ezequiel Garcia
2012-10-16 18:54 ` Christoph Lameter
2012-10-13 9:51 ` David Rientjes
2012-10-13 15:10 ` Eric Dumazet
2012-10-16 1:28 ` JoonSoo Kim [this message]
2012-10-16 7:23 ` Eric Dumazet
2012-10-19 0:03 ` JoonSoo Kim
2012-10-19 7:01 ` Eric Dumazet
2012-10-16 0:45 ` David Rientjes
2012-10-16 18:53 ` Christoph Lameter
2012-10-16 19:02 ` Christoph Lameter