From: Christoph Lameter <cl@linux.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Pekka Enberg <penberg@cs.helsinki.fi>,
akpm@linux-foundation.org, Mel Gorman <mel@skynet.ie>,
andi@firstfloor.org, Rik van Riel <riel@redhat.com>,
Dave Chinner <dchinner@redhat.com>,
Christoph Hellwig <hch@lst.de>, Michal Hocko <mhocko@suse.com>,
Mike Kravetz <mike.kravetz@oracle.com>
Subject: [RFC 4/7] slub: Sort slab cache list and establish maximum objects for defrag slabs
Date: Thu, 20 Dec 2018 19:22:00 +0000 [thread overview]
Message-ID: <01000167cd1143e3-1533fccc-7036-4a4e-97ea-5be8b347bbf0-000000@email.amazonses.com> (raw)
In-Reply-To: <20181220192145.023162076@linux.com>
[-- Attachment #1: sort_and_max --]
[-- Type: text/plain, Size: 2554 bytes --]
It is advantageous to have all defragmentable slabs together at the
beginning of the list of slabs so that there is no need to scan the
complete list. To that end, add new caches to the tail of the list and
move a cache to the head of the list when mobility is enabled for it.

Also determine the maximum number of objects that any defragmentable
slab can hold. This value is used later to size the scratch array that
holds references to the objects in a single slab.
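The sorted-list invariant can be illustrated with a minimal userspace sketch; the `cache` struct, `defrag` flag, and `count_defrag()` helper here are hypothetical stand-ins for `struct kmem_cache`, the mobility setup, and a scan over `slab_caches` — not kernel code:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for struct kmem_cache: only the fields
 * relevant to the sorted-list invariant. */
struct cache {
	int defrag;		/* non-zero if defragmentable */
	struct cache *next;
};

/* Count defragmentable caches. Because defragmentable caches are
 * kept at the head of the list, the scan can stop at the first
 * non-defragmentable cache instead of walking the whole list. */
static int count_defrag(const struct cache *head)
{
	int n = 0;
	const struct cache *c;

	for (c = head; c && c->defrag; c = c->next)
		n++;
	return n;
}
```

With a list built in the order the patch guarantees (defragmentable caches first), the scan terminates as soon as it reaches the first non-defragmentable cache.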
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slub.c | 26 ++++++++++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)
Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c
+++ linux/mm/slub.c
@@ -196,6 +196,9 @@ static inline bool kmem_cache_has_cpu_pa
/* Use cmpxchg_double */
#define __CMPXCHG_DOUBLE ((slab_flags_t __force)0x40000000U)
+/* Maximum objects in defragmentable slabs */
+static unsigned int max_defrag_slab_objects;
+
/*
* Tracking user of a slab.
*/
@@ -4310,22 +4313,45 @@ int __kmem_cache_create(struct kmem_cach
return err;
}
+/*
+ * Allocate a slab scratch space that is sufficient to keep at least
+ * max_defrag_slab_objects pointers to individual objects and also a bitmap
+ * for max_defrag_slab_objects.
+ */
+static inline void *alloc_scratch(void)
+{
+ return kmalloc(max_defrag_slab_objects * sizeof(void *) +
+ BITS_TO_LONGS(max_defrag_slab_objects) * sizeof(unsigned long),
+ GFP_KERNEL);
+}
+
void kmem_cache_setup_mobility(struct kmem_cache *s,
kmem_isolate_func isolate, kmem_migrate_func migrate)
{
+ int max_objects = oo_objects(s->max);
+
/*
* Defragmentable slabs must have a ctor otherwise objects may be
* in an undetermined state after they are allocated.
*/
BUG_ON(!s->ctor);
+
+ mutex_lock(&slab_mutex);
+
s->isolate = isolate;
s->migrate = migrate;
+
/*
* Sadly serialization requirements currently mean that we have
* to disable fast cmpxchg based processing.
*/
s->flags &= ~__CMPXCHG_DOUBLE;
+ list_move(&s->list, &slab_caches); /* Move to top */
+ if (max_objects > max_defrag_slab_objects)
+ max_defrag_slab_objects = max_objects;
+
+ mutex_unlock(&slab_mutex);
}
EXPORT_SYMBOL(kmem_cache_setup_mobility);
Index: linux/mm/slab_common.c
===================================================================
--- linux.orig/mm/slab_common.c
+++ linux/mm/slab_common.c
@@ -393,7 +393,7 @@ static struct kmem_cache *create_cache(c
goto out_free_cache;
s->refcount = 1;
- list_add(&s->list, &slab_caches);
+ list_add_tail(&s->list, &slab_caches);
memcg_link_cache(s);
out:
if (err)
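The size computed by alloc_scratch() in the patch above can be checked with a small userspace sketch. BITS_TO_LONGS and the `scratch_size()` helper are re-implemented here because the kernel headers are not available in userspace; this is an illustration of the sizing arithmetic, not kernel code:

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* Userspace re-implementation of the kernel's BITS_TO_LONGS():
 * number of longs needed to hold n bits, rounded up. */
#define BITS_PER_LONG	(CHAR_BIT * sizeof(long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* Scratch size for one slab: an array of object pointers plus a
 * bitmap with one bit per object, mirroring alloc_scratch(). */
static size_t scratch_size(unsigned int max_objects)
{
	return max_objects * sizeof(void *) +
	       BITS_TO_LONGS(max_objects) * sizeof(unsigned long);
}
```

For example, a single object needs one pointer plus one bitmap word, and crossing a word boundary (BITS_PER_LONG + 1 objects) adds a second bitmap word.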