From: Christoph Lameter <cl@linux.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: brouer@redhat.com, Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
linux-mm@kvack.org
Subject: Re: slub bulk alloc: Extract objects from the per cpu slab
Date: Fri, 10 Apr 2015 21:19:06 -0500 (CDT)
Message-ID: <alpine.DEB.2.11.1504102115320.1179@gentwo.org>
In-Reply-To: <20150409131916.51a533219dbff7a6f2294034@linux-foundation.org>
On Thu, 9 Apr 2015, Andrew Morton wrote:
> > This is going to increase as we add more capabilities. I have a second
> > patch here that extends the fast allocation to the per cpu partial pages.
>
> Yes, but what is the expected success rate of the initial bulk
> allocation attempt? If it's 1% then perhaps there's no point in doing
> it.
After we have extracted objects from all the structures around, we can also go
directly to the page allocator if we want to and bypass much of the metadata
processing. So we will ultimately end up with a 100% success rate.
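
To make the intended fallback order concrete, here is a minimal user-space
sketch, not the mm/slub.c code; the helper names (cpu_freelist_pop,
partial_page_pop, page_allocator_alloc) are invented stand-ins for the per
cpu slab, the per cpu partial pages and the page allocator:

/*
 * Illustrative sketch only: take objects from the fastest source first and
 * fall back step by step, which is why the bulk path can end up with a 100%
 * success rate as long as memory is available at all.
 */
#include <stdio.h>
#include <stdlib.h>

static void *cpu_freelist_pop(void)     { return NULL; }         /* fast path exhausted */
static void *partial_page_pop(void)     { return NULL; }         /* partials exhausted  */
static void *page_allocator_alloc(void) { return malloc(64); }   /* last resort         */

/* Fill p[0..nr-1]; return the number of objects actually obtained. */
static size_t bulk_alloc_sketch(size_t nr, void **p)
{
	size_t i;

	for (i = 0; i < nr; i++) {
		void *obj = cpu_freelist_pop();        /* 1. per cpu slab          */

		if (!obj)
			obj = partial_page_pop();      /* 2. per cpu partial pages */
		if (!obj)
			obj = page_allocator_alloc();  /* 3. page allocator        */
		if (!obj)
			break;                         /* truly out of memory      */
		p[i] = obj;
	}
	return i;
}

int main(void)
{
	void *objs[8];
	size_t got = bulk_alloc_sketch(8, objs);

	printf("allocated %zu objects\n", got);
	for (size_t i = 0; i < got; i++)
		free(objs[i]);
	return 0;
}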
> > > This kmem_cache_cpu.tid logic is a bit opaque. The low-level
> > > operations seem reasonably well documented but I couldn't find anywhere
> > > which tells me how it all actually works - what is "disambiguation
> > > during cmpxchg" and how do we achieve it?
> >
> > This is used to force a retry in slab_alloc_node() if preemption occurs
> > there. We are modifying the per cpu state, thus a retry must be forced.
>
> No, I'm not referring to this patch. I'm referring to the overall
> design concept behind kmem_cache_cpu.tid. This patch made me go and
> look, and it's a bit of a head-scratcher. It's unobvious and doesn't
> appear to be documented in any central place. Perhaps it's in a
> changelog, but who has time for that?
The tid logic is documented somewhat in mm/slub.c, line 1749 and following.
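
In short: the tid is a per cpu transaction id that is bumped on every change
to the per cpu state, and the lockless fast path only commits if the tid it
sampled at the start is still current; otherwise it retries. Below is a toy
user-space sketch of just that retry rule, not the kernel code (which pairs
the freelist pointer and the tid in a single this_cpu_cmpxchg_double()); all
names here are invented:

#include <stdio.h>
#include <stdatomic.h>

static _Atomic unsigned long tid;     /* bumped on every per cpu state change */

/* Something else (an interrupt, a preempting task) touched the per cpu state. */
static void someone_else_modified_state(void)
{
	atomic_fetch_add(&tid, 1);
}

static int try_commit(unsigned long snapshot)
{
	/* Commit succeeds only if the tid is still the one we started with. */
	return atomic_compare_exchange_strong(&tid, &snapshot, snapshot + 1);
}

int main(void)
{
	unsigned long snap = atomic_load(&tid);

	someone_else_modified_state();        /* simulate preemption            */

	if (!try_commit(snap))
		printf("tid changed: retry the allocation\n");

	snap = atomic_load(&tid);             /* second attempt, undisturbed    */
	if (try_commit(snap))
		printf("tid unchanged: fast path commit succeeded\n");

	return 0;
}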
> Keeping them in -next is not a problem - I was wondering about when to
> start moving the code into mainline.
When Mr. Brouer has confirmed that the stuff actually does some good for
his issue.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>
Thread overview: 13+ messages
2015-04-08 18:13 Christoph Lameter
2015-04-08 22:53 ` Andrew Morton
2015-04-09 14:03 ` Christoph Lameter
2015-04-09 17:16 ` slub: bulk allocation from per cpu partial pages Christoph Lameter
2015-04-16 12:06 ` Jesper Dangaard Brouer
2015-04-16 15:54 ` Christoph Lameter
2015-04-17 5:44 ` Jesper Dangaard Brouer
2015-04-17 6:06 ` Jesper Dangaard Brouer
2015-04-30 18:40 ` Christoph Lameter
2015-04-30 19:20 ` Jesper Dangaard Brouer
2015-04-09 20:19 ` slub bulk alloc: Extract objects from the per cpu slab Andrew Morton
2015-04-11 2:19 ` Christoph Lameter [this message]
2015-04-11 7:25 ` Jesper Dangaard Brouer