From: Pekka Enberg <penberg@cs.helsinki.fi>
To: Christoph Lameter <cl@linux-foundation.org>
Cc: Mel Gorman <mel@csn.ul.ie>, Nick Piggin <nickpiggin@yahoo.com.au>,
	Nick Piggin <npiggin@suse.de>,
	Linux Memory Management List <linux-mm@kvack.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Lin Ming <ming.m.lin@intel.com>,
	"Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
Subject: Re: [patch] SLQB slab allocator (try 2)
Date: Tue, 17 Feb 2009 19:01:36 +0200
Message-ID: <1234890096.11511.6.camel@penberg-laptop>
In-Reply-To: <alpine.DEB.1.10.0902171120040.27813@qirst.com>

Hi Christoph,

On Mon, 16 Feb 2009, Pekka Enberg wrote:
> > Btw, Yanmin, do you have access to the tests Mel is running (especially the
> > ones where slub-rvrt seems to do worse)? Can you see this kind of regression?
> > The results make me wonder whether we should avoid reverting all of the page
> > allocator pass-through and just add a kmalloc cache for 8K allocations. Or not
> > address the netperf regression at all. Double-hmm.

On Tue, 2009-02-17 at 11:20 -0500, Christoph Lameter wrote:
> Going to 8k for the limit beyond which we pass through to the page allocator may
> be the simplest and best solution. Someone please work on the page
> allocator...

Yeah. Something like this totally untested patch, perhaps?

			Pekka

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 2f5c16b..e93cb3d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -201,6 +201,13 @@ static __always_inline struct kmem_cache *kmalloc_slab(size_t size)
 #define SLUB_DMA (__force gfp_t)0
 #endif
 
+/*
+ * The maximum allocation size that will be satisfied by the slab allocator for
+ * kmalloc(). Requests that exceed this limit are passed directly to the page
+ * allocator.
+ */
+#define SLAB_LIMIT (8 * 1024)
+
 void *kmem_cache_alloc(struct kmem_cache *, gfp_t);
 void *__kmalloc(size_t size, gfp_t flags);
 
@@ -212,7 +219,7 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
 static __always_inline void *kmalloc(size_t size, gfp_t flags)
 {
 	if (__builtin_constant_p(size)) {
-		if (size > PAGE_SIZE)
+		if (size > SLAB_LIMIT)
 			return kmalloc_large(size, flags);
 
 		if (!(flags & SLUB_DMA)) {
@@ -234,7 +241,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *, gfp_t flags, int node);
 static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
 	if (__builtin_constant_p(size) &&
-		size <= PAGE_SIZE && !(flags & SLUB_DMA)) {
+		size <= SLAB_LIMIT && !(flags & SLUB_DMA)) {
 			struct kmem_cache *s = kmalloc_slab(size);
 
 		if (!s)
diff --git a/mm/slub.c b/mm/slub.c
index 0280eee..a324188 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2658,7 +2658,7 @@ void *__kmalloc(size_t size, gfp_t flags)
 {
 	struct kmem_cache *s;
 
-	if (unlikely(size > PAGE_SIZE))
+	if (unlikely(size > SLAB_LIMIT))
 		return kmalloc_large(size, flags);
 
 	s = get_slab(size, flags);
@@ -2686,7 +2686,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 {
 	struct kmem_cache *s;
 
-	if (unlikely(size > PAGE_SIZE))
+	if (unlikely(size > SLAB_LIMIT))
 		return kmalloc_large_node(size, flags, node);
 
 	s = get_slab(size, flags);
@@ -3223,7 +3223,7 @@ void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
 {
 	struct kmem_cache *s;
 
-	if (unlikely(size > PAGE_SIZE))
+	if (unlikely(size > SLAB_LIMIT))
 		return kmalloc_large(size, gfpflags);
 
 	s = get_slab(size, gfpflags);
@@ -3239,7 +3239,7 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 {
 	struct kmem_cache *s;
 
-	if (unlikely(size > PAGE_SIZE))
+	if (unlikely(size > SLAB_LIMIT))
 		return kmalloc_large_node(size, gfpflags, node);
 
 	s = get_slab(size, gfpflags);
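
One caveat with the above, assuming the rest of SLUB is left as-is:
kmalloc_index() already maps 8K requests to index 13, but the statically
sized kmalloc_caches[] array and the boot-time loop that populates it
only cover sizes up to PAGE_SIZE, so with this patch a constant-size
kmalloc(8192, ...) would index past the end of the array on 4K-page
systems. A minimal, equally untested sketch of the companion change
(names as in the current slub_def.h/slub.c; exact bounds approximate):

	/* Sketch only: make room for kmalloc caches up to SLAB_LIMIT. */
	#define SLAB_LIMIT_SHIFT 13	/* ilog2(SLAB_LIMIT) */

	extern struct kmem_cache kmalloc_caches[SLAB_LIMIT_SHIFT + 1];

	/* ...and in kmem_cache_init(), create the extra caches: */
	for (i = KMALLOC_SHIFT_LOW; i <= SLAB_LIMIT_SHIFT; i++)
		create_kmalloc_cache(&kmalloc_caches[i],
				"kmalloc", 1 << i, GFP_KERNEL);

kfree() should be unaffected either way, since it already detects
page-allocator pages via !PageSlab(page) and hands them to put_page().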


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>

Thread overview: 55+ messages
2009-01-23 15:46 Nick Piggin
2009-01-24  2:38 ` Zhang, Yanmin
2009-01-26  8:48 ` Pekka Enberg
2009-01-26  9:07   ` Peter Zijlstra
2009-01-26  9:10     ` Peter Zijlstra
2009-01-26 17:22     ` Christoph Lameter
2009-01-27  9:07       ` Peter Zijlstra
2009-01-27 20:21         ` Christoph Lameter
2009-02-03  2:04           ` Nick Piggin
2009-02-03 10:12   ` Mel Gorman
2009-02-03 10:36     ` Nick Piggin
2009-02-03 11:22       ` Mel Gorman
2009-02-03 11:26         ` Mel Gorman
2009-02-04  6:48         ` Nick Piggin
2009-02-04 15:27           ` Mel Gorman
2009-02-05  3:59             ` Nick Piggin
2009-02-05 13:49               ` Mel Gorman
2009-02-16 18:42               ` Mel Gorman
2009-02-16 19:17                 ` Pekka Enberg
2009-02-16 19:41                   ` Mel Gorman
2009-02-16 19:43                     ` Pekka Enberg
2009-02-17  1:06                   ` Zhang, Yanmin
2009-02-17 16:20                   ` Christoph Lameter
2009-02-17 17:01                     ` Pekka Enberg [this message]
2009-02-17 17:05                       ` Christoph Lameter
2009-02-17 17:24                         ` Pekka Enberg
2009-02-17 18:11                         ` Johannes Weiner
2009-02-17 19:43                           ` Pekka Enberg
2009-02-17 20:04                             ` Christoph Lameter
2009-02-18  0:48                               ` KOSAKI Motohiro
2009-02-18  8:09                                 ` Pekka Enberg
2009-02-19  0:05                                   ` KOSAKI Motohiro
2009-02-19  9:16                                     ` Pekka Enberg
2009-02-19 12:51                                       ` KOSAKI Motohiro
2009-02-19 13:15                                         ` Pekka Enberg
2009-02-19 13:49                                           ` KOSAKI Motohiro
2009-02-19 14:19                                             ` Christoph Lameter
2009-02-18  1:05                         ` Zhang, Yanmin
2009-02-18  7:48                           ` Pekka Enberg
2009-02-18  8:43                             ` Zhang, Yanmin
2009-02-18  9:01                               ` Pekka Enberg
2009-02-18  9:19                                 ` Zhang, Yanmin
2009-02-19  8:40                         ` Pekka Enberg
2009-02-16 19:25                 ` Pekka Enberg
2009-02-16 19:44                   ` Mel Gorman
2009-02-16 19:42                     ` Pekka Enberg
2009-02-03 11:28       ` Mel Gorman
2009-02-03 11:50         ` Nick Piggin
2009-02-03 12:01           ` Mel Gorman
2009-02-03 12:07             ` Nick Piggin
2009-02-03 12:26               ` Mel Gorman
2009-02-04 15:49               ` Christoph Lameter
2009-02-04 15:48           ` Christoph Lameter
2009-02-03 18:58     ` Pekka Enberg
2009-02-04 16:06       ` Christoph Lameter
