From: Andrew Morton <akpm@linux-foundation.org>
To: Christoph Lameter <cl@linux.com>
Cc: brouer@redhat.com, Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	linux-mm@kvack.org
Subject: Re: slub bulk alloc: Extract objects from the per cpu slab
Date: Thu, 9 Apr 2015 13:19:16 -0700
Message-ID: <20150409131916.51a533219dbff7a6f2294034@linux-foundation.org>
In-Reply-To: <alpine.DEB.2.11.1504090859560.19278@gentwo.org>

On Thu, 9 Apr 2015 09:03:24 -0500 (CDT) Christoph Lameter <cl@linux.com> wrote:

> On Wed, 8 Apr 2015, Andrew Morton wrote:
> 
> > On Wed, 8 Apr 2015 13:13:29 -0500 (CDT) Christoph Lameter <cl@linux.com> wrote:
> >
> > > First piece: acceleration of retrieval of per cpu objects
> > >
> > >
> > > If we are allocating lots of objects then it is advantageous to
> > > disable interrupts and avoid the this_cpu_cmpxchg() operation to
> > > get these objects faster. Note that we cannot do the fast operation
> > > if debugging is enabled.
> >
> > Why can't we do it if debugging is enabled?
> 
> We would have to add extra code to do all the debugging checks. And it
> would not be fast anyway.

I updated the changelog to reflect this.

> > > Allocate as many objects as possible in the fast way and then fall
> > > back to the generic implementation for the rest of the objects.
> >
> > Seems sane.  What's the expected success rate of the initial bulk
> > allocation attempt?
> 
> This is going to increase as we add more capabilities. I have a second
> patch here that extends the fast allocation to the per cpu partial pages.

Yes, but what is the expected success rate of the initial bulk
allocation attempt?  If it's 1% then perhaps there's no point in doing
it.
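
For readers who don't have the patch in front of them, my reading of
the fast path boils down to roughly this sketch (a paraphrase, not the
literal diff; kmem_cache_debug(), get_freepointer() and next_tid() are
existing slub.c helpers, and __kmem_cache_alloc_bulk() is the generic
fallback this series adds):

    bool kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
                               size_t size, void **p)
    {
            struct kmem_cache_cpu *c;

            /* Debug checks are only implemented on the generic path */
            if (kmem_cache_debug(s))
                    return __kmem_cache_alloc_bulk(s, flags, size, p);

            local_irq_disable();
            c = this_cpu_ptr(s->cpu_slab);

            /* Pop objects straight off the per cpu freelist */
            while (size) {
                    void *object = c->freelist;

                    if (!object)
                            break;
                    c->freelist = get_freepointer(s, object);
                    *p++ = object;
                    size--;
            }

            /* Invalidate any concurrent lockless fast path */
            c->tid = next_tid(c->tid);
            local_irq_enable();

            /* Hand whatever is left over to the generic implementation */
            return __kmem_cache_alloc_bulk(s, flags, size, p);
    }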

> > > +		c->tid = next_tid(c->tid);
> > > +
> > > +		local_irq_enable();
> > > +	}
> > > +
> > > +	return __kmem_cache_alloc_bulk(s, flags, size, p);
> >
> > This kmem_cache_cpu.tid logic is a bit opaque.  The low-level
> > operations seem reasonably well documented but I couldn't find anywhere
> > which tells me how it all actually works - what is "disambiguation
> > during cmpxchg" and how do we achieve it?
> 
> This is used to force a retry in slab_alloc_node() if preemption occurs
> there. We are modifying the per cpu state, thus a retry must be forced.

No, I'm not referring to this patch.  I'm referring to the overall
design concept behind kmem_cache_cpu.tid.  This patch made me go and
look, and it's a bit of a head-scratcher.  It's unobvious and doesn't
appear to be documented in any central place.  Perhaps it's in a
changelog, but who has time for that?

A comment somewhere which describes the concept is needed.
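
To spell out what I think such a comment needs to capture, here is my
rough reading of the slab_alloc_node() fast path (paraphrased rather
than quoted):

    /*
     * tid is a per cpu transaction counter.  The lockless fast path
     * samples tid together with the freelist head, then commits both
     * in a single this_cpu_cmpxchg_double().  Anything that mutates
     * the per cpu state - a slow path refill, a migration to another
     * cpu, or a bulk drain like the one in this patch - bumps tid via
     * next_tid(), so the cmpxchg fails and the fast path retries.
     * That is the "disambiguation": a matching tid proves that nothing
     * touched the per cpu slab in the meantime.
     */
    do {
            tid = this_cpu_read(s->cpu_slab->tid);
            c = raw_cpu_ptr(s->cpu_slab);
    } while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid));

    object = c->freelist;
    if (!this_cpu_cmpxchg_double(s->cpu_slab->freelist, s->cpu_slab->tid,
                                 object, tid,
                                 get_freepointer_safe(s, object),
                                 next_tid(tid)))
            goto redo;      /* per cpu state changed under us */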

> > I'm in two minds about putting
> > slab-infrastructure-for-bulk-object-allocation-and-freeing-v3.patch and
> > slub-bulk-alloc-extract-objects-from-the-per-cpu-slab.patch into 4.1.
> > They're standalone (ie: no in-kernel callers!) hence harmless, and
> > merging them will make Jesper's life a bit easier.  But otoh they are
> > unproven and have no in-kernel callers, so formally they shouldn't be
> > merged yet.  I suppose we can throw them away again if things don't
> > work out.
> 
> Can we keep them in -next and I will add patches as we go forward? There
> was already a lot of discussion before and I would like to go
> incrementally adding methods to do bulk extraction from the various
> control structures that we have holding objects.

Keeping them in -next is not a problem - I was wondering about when to
start moving the code into mainline.  

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>

Thread overview: 13+ messages
2015-04-08 18:13 Christoph Lameter
2015-04-08 22:53 ` Andrew Morton
2015-04-09 14:03   ` Christoph Lameter
2015-04-09 17:16     ` slub: bulk allocation from per cpu partial pages Christoph Lameter
2015-04-16 12:06       ` Jesper Dangaard Brouer
2015-04-16 15:54         ` Christoph Lameter
2015-04-17  5:44           ` Jesper Dangaard Brouer
2015-04-17  6:06             ` Jesper Dangaard Brouer
2015-04-30 18:40               ` Christoph Lameter
2015-04-30 19:20                 ` Jesper Dangaard Brouer
2015-04-09 20:19     ` Andrew Morton [this message]
2015-04-11  2:19       ` slub bulk alloc: Extract objects from the per cpu slab Christoph Lameter
2015-04-11  7:25         ` Jesper Dangaard Brouer
