From: Pekka Enberg <penberg@kernel.org>
To: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	hughd@google.com, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [GIT PULL] Lockless SLUB slowpaths for v3.1-rc1
Date: Mon, 1 Aug 2011 15:45:04 +0300	[thread overview]
Message-ID: <CAOJsxLGyC4=WwGu7kUTwVKF3AxhfWjBg2sZu=W08RtVMHKk8eQ@mail.gmail.com> (raw)
In-Reply-To: <alpine.DEB.2.00.1108010229150.1062@chino.kir.corp.google.com>

Hi David,

On Mon, Aug 1, 2011 at 1:02 PM, David Rientjes <rientjes@google.com> wrote:
> Here's the same testing environment with CONFIG_SLUB_STATS for 16 threads
> instead of 160:

[snip]

Looking at the data (in slightly reorganized form):

  alloc
  =====

    16 threads:

      cache           alloc_fastpath          alloc_slowpath
      kmalloc-256     4263275 (91.1%)         417445   (8.9%)
      kmalloc-1024    4636360 (99.1%)         42091    (0.9%)
      kmalloc-4096    2570312 (54.4%)         2155946  (45.6%)

    160 threads:

      cache           alloc_fastpath          alloc_slowpath
      kmalloc-256     10937512 (62.8%)        6490753  (37.2%)
      kmalloc-1024    17121172 (98.3%)        303547   (1.7%)
      kmalloc-4096    5526281  (31.7%)        11910454 (68.3%)

  free
  ====

    16 threads:

      cache           free_fastpath           free_slowpath
      kmalloc-256     210115   (4.5%)         4470604  (95.5%)
      kmalloc-1024    3579699  (76.5%)        1098764  (23.5%)
      kmalloc-4096    67616    (1.4%)         4658678  (98.6%)

    160 threads:
      cache           free_fastpath           free_slowpath
      kmalloc-256     15469    (0.1%)         17412798 (99.9%)
      kmalloc-1024    11604742 (66.6%)        5819973  (33.4%)
      kmalloc-4096    14848    (0.1%)         17421902 (99.9%)
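
(For clarity, the percentages above are just fastpath / (fastpath + slowpath)
computed from the CONFIG_SLUB_STATS counters. A trivial userspace sketch, not
kernel code, using the kmalloc-4096 numbers for 160 threads:

  #include <stdio.h>

  /* Fraction of requests that took the fast path, as a percentage. */
  static double fastpath_pct(unsigned long fast, unsigned long slow)
  {
          return 100.0 * fast / (double)(fast + slow);
  }

  int main(void)
  {
          /* kmalloc-4096, 160 threads, numbers from the table above */
          printf("alloc fastpath: %.1f%%\n", fastpath_pct(5526281, 11910454));
          printf("free  fastpath: %.1f%%\n", fastpath_pct(14848, 17421902));
          return 0;
  }
)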

it's pretty sad to see how dramatically SLUB alloc fastpath utilization
drops. Free fastpath utilization isn't all that great with 160 threads
either, but it seems to me that most of the performance regression
compared to SLAB still comes from the alloc paths.

I guess the problem here is that __slab_free() happens on a remote CPU,
which puts the object on the 'struct page' freelist, so we're effectively
unable to recycle freed objects. As the number of concurrent threads
increases, we simply drain the fastpath freelists more quickly. Did I
understand the problem correctly?
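
For the sake of discussion, here's the free path decision I have in mind,
as a simplified sketch rather than the actual mm/slub.c code (argument
lists are trimmed and helper names are only illustrative):

  /*
   * Simplified sketch, not the actual mm/slub.c code. The point is only
   * that a free on a CPU that does not own the slab bypasses the per-cpu
   * freelist entirely.
   */
  static void slab_free_sketch(struct kmem_cache *s, struct page *page,
                               void *object)
  {
          struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab);

          if (page == c->page) {
                  /* Fast path: push the object back on the per-cpu freelist. */
                  set_freepointer(s, object, c->freelist);
                  c->freelist = object;
          } else {
                  /*
                   * Slow path: this CPU does not own the slab, so the object
                   * goes on the 'struct page' freelist and stays invisible to
                   * the owning CPU's fast path until the slab is reloaded.
                   */
                  __slab_free(s, page, object);
          }
  }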

If that's really happening, I'm still a bit puzzled why we're hitting the
slowpath so much. I'd assume that __slab_alloc() would simply reload the
'struct page' freelist once the per-cpu freelist is empty. Why is that
not happening? I see __slab_alloc() does deactivate_slab() upon
node_match() failure. What kind of ALLOC_NODE_MISMATCH stats are you
seeing?
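
The allocation slowpath behaviour I'd expect is roughly the following;
again just a sketch with locking and cmpxchg details omitted, and
new_slab_objects() standing in for whatever acquires a fresh slab:

  /*
   * Rough sketch of the alloc slowpath behaviour I'd expect; not the
   * actual mm/slub.c code.
   */
  static void *slab_alloc_slow_sketch(struct kmem_cache *s, int node,
                                      struct kmem_cache_cpu *c)
  {
          void *object;

          if (!c->page)
                  goto new_slab;

          if (!node_match(c, node)) {
                  /* Counted as ALLOC_NODE_MISMATCH: give the slab back. */
                  deactivate_slab(s, c);
                  goto new_slab;
          }

          /* Reload the per-cpu freelist from the 'struct page' freelist. */
          object = c->page->freelist;
          if (object) {
                  c->freelist = get_freepointer(s, object);
                  c->page->freelist = NULL;
                  return object;
          }

  new_slab:
          /* Nothing to recycle: grab a partial slab or a brand new one. */
          return new_slab_objects(s, node, c);
  }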

                        Pekka

