From: Christoph Lameter <cl@linux.com>
To: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
linux-mm@kvack.org
Subject: Re: slub: bulk allocation from per cpu partial pages
Date: Thu, 16 Apr 2015 10:54:07 -0500 (CDT)
Message-ID: <alpine.DEB.2.11.1504161049030.8605@gentwo.org>
In-Reply-To: <20150416140638.684838a2@redhat.com>
On Thu, 16 Apr 2015, Jesper Dangaard Brouer wrote:
> On a CPU E5-2630 @ 2.30GHz, the cost of kmem_cache_alloc +
> kmem_cache_free in a tight loop (the most optimal fast-path) is 22ns,
> with elem size 256 bytes, where slab chooses to make 32 obj-per-slab.
>
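Just so we are looking at the same thing, I assume the tight loop is
roughly the following (a sketch only -- the cache name, loop count and
ktime-based timing are my guesses, not your actual test module):

	#include <linux/slab.h>
	#include <linux/ktime.h>
	#include <linux/printk.h>

	#define LOOPS 1000000L

	static void bench_fastpath(void)
	{
		/* 256 byte objects, so SLUB ends up with 32 objects per slab */
		struct kmem_cache *s = kmem_cache_create("bench-256", 256, 0, 0, NULL);
		ktime_t start;
		long i;

		if (!s)
			return;
		start = ktime_get();
		for (i = 0; i < LOOPS; i++) {
			void *obj = kmem_cache_alloc(s, GFP_KERNEL);

			kmem_cache_free(s, obj);
		}
		pr_info("alloc+free: %lld ns per element\n",
			ktime_to_ns(ktime_sub(ktime_get(), start)) / LOOPS);
		kmem_cache_destroy(s);
	}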
> With this patch, testing different bulk sizes, the cost of alloc+free
> per element is improved for small bulk sizes (which I guess is the
> expected outcome).
>
> To have something to compare against, I also ran the bulk sizes through
> the fallback versions __kmem_cache_alloc_bulk() and
> __kmem_cache_free_bulk(), i.e. the non-optimized versions.
>
> size -- optimized -- fallback
> bulk 8 -- 15ns -- 22ns
> bulk 16 -- 15ns -- 22ns
Good.
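For reference, the fallback versions in the right column amount to a
plain loop over the existing fast paths -- roughly like this (a sketch,
not the exact code in the series):

	/* Fallback bulk free: a plain loop, one fast-path call per object. */
	void __kmem_cache_free_bulk(struct kmem_cache *s, size_t nr, void **p)
	{
		size_t i;

		for (i = 0; i < nr; i++)
			kmem_cache_free(s, p[i]);
	}

	/* Fallback bulk alloc: likewise, with cleanup on allocation failure. */
	int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
				    size_t nr, void **p)
	{
		size_t i;

		for (i = 0; i < nr; i++) {
			p[i] = kmem_cache_alloc(s, flags);
			if (!p[i]) {
				__kmem_cache_free_bulk(s, i, p);
				return 0;
			}
		}
		return nr;
	}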
> bulk 30 -- 44ns -- 48ns
> bulk 32 -- 47ns -- 50ns
> bulk 64 -- 52ns -- 54ns
Hmm.... We are hitting the atomics, I guess. What you have got so far is
only using the per cpu data. I wonder how many partial pages are available
there and how much is satisfied from which per cpu structure. There are a
couple of cmpxchg_doubles in the optimized patch to squeeze even the last
object out of a page before going to the next one (see the sketch below).
I could avoid those and simply rotate to another per cpu partial page
instead.
I have some more code here that deals with the per node partials, but at
that point we will be taking spinlocks.
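Conceptually the detach step is a single compare-and-swap that takes
everything off a partial page's freelist at once. A simplified,
userspace-style sketch only (the real code uses cmpxchg_double on the
freelist/counters pair and has to honor frozen pages and the list
locking, none of which is shown here):

	#include <stdatomic.h>
	#include <stddef.h>

	struct partial_page {
		_Atomic(void *) freelist;   /* singly linked list of free objects */
	};

	/*
	 * Atomically grab every object currently on the page's freelist,
	 * leaving the page with none; the caller walks the returned list
	 * to fill its object array, then moves on to the next page.
	 */
	static void *detach_freelist(struct partial_page *page)
	{
		void *old = atomic_load(&page->freelist);

		while (!atomic_compare_exchange_weak(&page->freelist, &old, NULL))
			;   /* lost a race with a concurrent alloc/free; retry */
		return old;
	}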
> For smaller bulk sizes 8 and 16, this is actually a significant
> improvement, especially considering the free side is not optimized.
I have some draft code here to do the same for the free side. But I
thought we had better get the allocation side into working shape first.