linux-mm.kvack.org archive mirror
From: Eric Dumazet <eric.dumazet@gmail.com>
To: "Alex,Shi" <alex.shi@intel.com>
Cc: David Rientjes <rientjes@google.com>,
	Christoph Lameter <cl@linux.com>,
	"penberg@kernel.org" <penberg@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: RE: [PATCH 1/3] slub: set a criteria for slub node partial adding
Date: Wed, 14 Dec 2011 07:44:14 +0100	[thread overview]
Message-ID: <1323845054.2846.18.camel@edumazet-laptop> (raw)
In-Reply-To: <1323842761.16790.8295.camel@debian>

On Wednesday, 14 December 2011 at 14:06 +0800, Alex,Shi wrote:
> On Wed, 2011-12-14 at 10:36 +0800, David Rientjes wrote:
> > On Tue, 13 Dec 2011, David Rientjes wrote:
> > 
> > > > > 	{
> > > > > 	        n->nr_partial++;
> > > > > 	-       if (tail == DEACTIVATE_TO_TAIL)
> > > > > 	-               list_add_tail(&page->lru, &n->partial);
> > > > > 	-       else
> > > > > 	-               list_add(&page->lru, &n->partial);
> > > > > 	+       list_add_tail(&page->lru, &n->partial);
> > > > > 	}
> > > > > 
> > 
> > 2 machines (one netserver, one netperf) both with 16 cores, 64GB memory 
> > with netperf-2.4.5 comparing Linus' -git with and without this patch:
> > 
> > 	threads		SLUB		SLUB+patch
> > 	 16		116614		117213 (+0.5%)
> > 	 32		216436		215065 (-0.6%)
> > 	 48		299991		299399 (-0.2%)
> > 	 64		373753		374617 (+0.2%)
> > 	 80		435688		435765 (UNCH)
> > 	 96		494630		496590 (+0.4%)
> > 	112		546766		546259 (-0.1%)
> > 
> > This suggests the difference is within the noise, so this patch neither 
> > helps nor hurts netperf on my setup, as expected.
> 
> Thanks for the data. Real netperf workloads are hard-pressed to put much
> pressure on SLUB. As I mentioned before, I also found no real performance
> change in my loopback netperf testing.
> 
> I retested hackbench again. The ~1% performance increase still holds on
> my 2-socket SNB/WSM and 4-socket NHM machines, with no performance drop
> on the other machines.
> 
> Christoph, what comments would you like to offer on these results or on
> this code change?

I believe a far more aggressive mechanism is needed to help these
workloads.

Please note that the COLD/HOT page concept is not used very widely in
the kernel, because it is not really obvious that its decisions are
always good (or perhaps this is simply not well known).

We should try to batch things a bit, instead of doing a very small unit
of work in the slow path.

We now have a very fast fast path, but an inefficient slow path.

SLAB has a little per-cpu cache; we could add one to SLUB for freed
objects that do not belong to the current slab. This could avoid all of
this activate/deactivate overhead.




  reply	other threads:[~2011-12-14  6:44 UTC|newest]

Thread overview: 39+ messages
2011-12-02  8:23 Alex Shi
2011-12-02  8:23 ` [PATCH 2/3] slub: remove unnecessary statistics, deactivate_to_head/tail Alex Shi
2011-12-02  8:23   ` [PATCH 3/3] slub: fill per cpu partial only when free objects larger than one quarter Alex Shi
2011-12-02 14:44   ` [PATCH 2/3] slub: remove unnecessary statistics, deactivate_to_head/tail Christoph Lameter
2011-12-06 21:08   ` David Rientjes
2011-12-02 11:36 ` [PATCH 1/3] slub: set a criteria for slub node partial adding Eric Dumazet
2011-12-02 20:02   ` Christoph Lameter
2011-12-05  2:21     ` Shaohua Li
2011-12-05 10:01     ` Alex,Shi
2011-12-05  3:28   ` Alex,Shi
2011-12-02 14:43 ` Christoph Lameter
2011-12-05  9:22   ` Alex,Shi
2011-12-06 21:06     ` David Rientjes
2011-12-07  5:11       ` Shaohua Li
2011-12-07  7:28         ` David Rientjes
2011-12-12  2:43           ` Shaohua Li
2011-12-12  4:14             ` Alex,Shi
2011-12-12  4:35               ` Shaohua Li
2011-12-12  4:25                 ` Alex,Shi
2011-12-12  4:48                   ` Shaohua Li
2011-12-12  6:17                     ` Alex,Shi
2011-12-12  6:09             ` Eric Dumazet
2011-12-14  1:29             ` David Rientjes
2011-12-14  2:43               ` Shaohua Li
2011-12-14  2:38                 ` David Rientjes
2011-12-09  8:30   ` Alex,Shi
2011-12-09 10:10     ` David Rientjes
2011-12-09 13:40       ` Shi, Alex
2011-12-14  1:38         ` David Rientjes
2011-12-14  2:36           ` David Rientjes
2011-12-14  6:06             ` Alex,Shi
2011-12-14  6:44               ` Eric Dumazet [this message]
2011-12-14  6:47                 ` Pekka Enberg
2011-12-14 14:53                   ` Christoph Lameter
2011-12-14  6:56                 ` Alex,Shi
2011-12-14 14:59                   ` Christoph Lameter
2011-12-14 17:33                     ` Eric Dumazet
2011-12-14 18:26                       ` Christoph Lameter
2011-12-13 13:01       ` Shi, Alex
