From: Christoph Lameter <clameter@sgi.com>
To: Tim Chen <tim.c.chen@linux.intel.com>
Cc: "Chen, Tim C" <tim.c.chen@intel.com>,
"Siddha, Suresh B" <suresh.b.siddha@intel.com>,
"Zhang, Yanmin" <yanmin.zhang@intel.com>,
"Wang, Peter Xihong" <peter.xihong.wang@intel.com>,
Arjan van de Ven <arjan@infradead.org>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: RE: Regression with SLUB on Netperf and Volanomark
Date: Fri, 4 May 2007 16:59:15 -0700 (PDT)
Message-ID: <Pine.LNX.4.64.0705041658350.28260@schroedinger.engr.sgi.com>
In-Reply-To: <1178318609.23795.214.camel@localhost.localdomain>

On Fri, 4 May 2007, Tim Chen wrote:
> On Fri, 2007-05-04 at 11:27 -0700, Christoph Lameter wrote:
>
> >
> > Not sure where to go here. Increasing the per cpu slab size may hold off
> > the issue up to a certain cpu cache size. For that we would need to
> > identify which slabs create the performance issue.
> >
> > One easy way to check that this is indeed the case: Enable fake NUMA. You
> > will then have separate queues for each processor since they are on
> > different "nodes". Create two fake nodes. Run one thread in each node and
> > see if this fixes it.
>
> I tried with fake NUMA (boot with numa=fake=2) and use
>
> numactl --physcpubind=1 --membind=0 ./netserver
> numactl --physcpubind=2 --membind=1 ./netperf -t TCP_STREAM -l 60 \
>     -H 127.0.0.1 -i 5,5 -I 99,5 -- -s 57344 -S 57344 -m 4096
>
> to run the tests. The results are about the same as the non-NUMA case,
> with slab about 5% better than slub.

Hmmmm... were both tests run in the same context? NUMA has additional
overhead in other areas.
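
[Editor's note: for readers reproducing the experiment, the fake-NUMA setup
discussed above amounts to the following sketch. The boot parameter and
binding commands are taken from the thread; the `numactl --hardware` check is
an added suggestion, and the exact CPU numbers depend on your topology.]

```shell
# Boot the kernel with two emulated NUMA nodes (kernel command line):
#     numa=fake=2

# Confirm that two fake nodes are visible before running the test:
numactl --hardware

# Bind server and client to CPUs on different fake nodes, so each gets
# separate per-node slab queues:
numactl --physcpubind=1 --membind=0 ./netserver
numactl --physcpubind=2 --membind=1 ./netperf -t TCP_STREAM -l 60 \
    -H 127.0.0.1 -i 5,5 -I 99,5 -- -s 57344 -S 57344 -m 4096
```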