From: Christoph Lameter <cl@linux.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org, Pekka Enberg <penberg@cs.helsinki.fi>,
akpm@linux-foundation.org, Mel Gorman <mel@skynet.ie>,
andi@firstfloor.org, Rik van Riel <riel@redhat.com>,
Dave Chinner <dchinner@redhat.com>,
Christoph Hellwig <hch@lst.de>
Subject: [RFC 3/8] slub: Add isolate() and migrate() methods
Date: Wed, 27 Dec 2017 16:06:39 -0600
Message-ID: <20171227220652.402842142@linux.com>
In-Reply-To: <20171227220636.361857279@linux.com>
Add the two methods needed for moving objects and enable the
display of the callbacks via the /sys/kernel/slab interface.

Add documentation explaining the use of these methods and the
prototypes to slab.h. Add a function to set up the callbacks for a
slab cache.

Add empty functions for SLAB/SLOB. The API is generic, so it could
in theory be implemented for these allocators as well.
Signed-off-by: Christoph Lameter <cl@linux.com>
---
include/linux/slab.h | 50 +++++++++++++++++++++++++++++++++++++++++++++++
include/linux/slub_def.h | 3 ++
mm/slub.c | 29 ++++++++++++++++++++++++++-
3 files changed, 81 insertions(+), 1 deletion(-)
Index: linux/include/linux/slub_def.h
===================================================================
--- linux.orig/include/linux/slub_def.h
+++ linux/include/linux/slub_def.h
@@ -98,6 +98,9 @@ struct kmem_cache {
gfp_t allocflags; /* gfp flags to use on each alloc */
int refcount; /* Refcount for slab cache destroy */
void (*ctor)(void *);
+ kmem_isolate_func *isolate;
+ kmem_migrate_func *migrate;
+
int inuse; /* Offset to metadata */
int align; /* Alignment */
int reserved; /* Reserved bytes at the end of slabs */
Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c
+++ linux/mm/slub.c
@@ -3479,7 +3479,6 @@ static int calculate_sizes(struct kmem_c
else
s->flags &= ~__OBJECT_POISON;
-
/*
* If we are Redzoning then check if there is some space between the
* end of the object and the free pointer. If not then add an
@@ -4275,6 +4274,25 @@ int __kmem_cache_create(struct kmem_cach
return err;
}
+void kmem_cache_setup_mobility(struct kmem_cache *s,
+ kmem_isolate_func isolate, kmem_migrate_func migrate)
+{
+ /*
+ * Defragmentable slabs must have a ctor, otherwise objects may be
+ * in an undetermined state after allocation.
+ */
+ BUG_ON(!s->ctor);
+ s->isolate = isolate;
+ s->migrate = migrate;
+ /*
+ * Sadly, serialization requirements currently mean that we have
+ * to disable the fast cmpxchg-based processing.
+ */
+ s->flags &= ~__CMPXCHG_DOUBLE;
+
+}
+EXPORT_SYMBOL(kmem_cache_setup_mobility);
+
void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
{
struct kmem_cache *s;
@@ -4969,6 +4987,20 @@ static ssize_t ops_show(struct kmem_cach
if (s->ctor)
x += sprintf(buf + x, "ctor : %pS\n", s->ctor);
+
+ if (s->isolate) {
+ x += sprintf(buf + x, "isolate : ");
+ x += sprint_symbol(buf + x,
+ (unsigned long)s->isolate);
+ x += sprintf(buf + x, "\n");
+ }
+
+ if (s->migrate) {
+ x += sprintf(buf + x, "migrate : ");
+ x += sprint_symbol(buf + x,
+ (unsigned long)s->migrate);
+ x += sprintf(buf + x, "\n");
+ }
return x;
}
SLAB_ATTR_RO(ops);
Index: linux/include/linux/slab.h
===================================================================
--- linux.orig/include/linux/slab.h
+++ linux/include/linux/slab.h
@@ -146,6 +146,68 @@ void memcg_deactivate_kmem_caches(struct
void memcg_destroy_kmem_caches(struct mem_cgroup *);
/*
+ * Function prototypes passed to kmem_cache_setup_mobility() to enable mobile
+ * objects and targeted reclaim in slab caches.
+ */
+
+/*
+ * kmem_cache_isolate_func() is called with locks held so that the slab
+ * objects cannot be freed. We are in an atomic context and no slab
+ * operations may be performed. The purpose of kmem_cache_isolate_func()
+ * is to pin the objects so that they cannot be freed until
+ * kmem_cache_migrate_func() has processed them. This may be accomplished
+ * by increasing the refcount or setting a flag.
+ *
+ * Parameters passed are the number of objects to process and an array of
+ * pointers to objects which are intended to be moved.
+ *
+ * Returns a pointer that is passed to the migrate function. If any objects
+ * cannot be touched at this point then the pointer may indicate
+ * failure, and the migration function can then simply drop the references
+ * that were already obtained. The private data could be used to track
+ * the objects that were already pinned.
+ *
+ * The object pointer array passed is also passed to kmem_cache_migrate().
+ * The function may remove objects from the array by setting pointers to
+ * NULL. This is useful if it can be determined that an object is already
+ * being freed, for example because kmem_cache_isolate_func() ran while
+ * the subsystem was in the middle of kmem_cache_free().
+ * In that case it is not necessary to increase the refcount or
+ * specially mark the object because the release of the slab lock
+ * will lead to the immediate freeing of the object.
+ */
+typedef void *kmem_isolate_func(struct kmem_cache *, void **, int);
+
+/*
+ * kmem_cache_migrate_func() is called with no locks held and interrupts
+ * enabled. Sleeping is possible, and any operation may be performed in
+ * migrate(). kmem_cache_migrate_func() should allocate replacement
+ * objects, move the contents over and free the original objects.
+ *
+ * Parameters passed are the number of objects in the array, the array of
+ * pointers to the objects, the NUMA node where the object should be
+ * allocated and the pointer returned by kmem_cache_isolate_func().
+ *
+ * Success is checked by examining the number of remaining objects in
+ * the slab. If the number is zero then the objects will be freed.
+ */
+typedef void kmem_migrate_func(struct kmem_cache *, void **, int nr, int node, void *private);
+
+/*
+ * kmem_cache_setup_mobility() is used to set up the callbacks for a slab cache.
+ */
+#ifdef CONFIG_SLUB
+void kmem_cache_setup_mobility(struct kmem_cache *, kmem_isolate_func,
+ kmem_migrate_func);
+#else
+static inline void kmem_cache_setup_mobility(struct kmem_cache *s,
+ kmem_isolate_func isolate, kmem_migrate_func migrate) {}
+#endif
+
+/*
+ * Allocator specific definitions. These are mainly used to establish optimized
+ * ways to convert kmalloc() calls to kmem_cache_alloc() invocations by
+ * selecting the appropriate general cache at compile time.
* Please use this macro to create slab caches. Simply specify the
* name of the structure and maybe some flags that are listed above.
*
Index: linux/mm/slab_common.c
===================================================================
--- linux.orig/mm/slab_common.c
+++ linux/mm/slab_common.c
@@ -278,7 +278,7 @@ int slab_unmergeable(struct kmem_cache *
if (!is_root_cache(s))
return 1;
- if (s->ctor)
+ if (s->ctor || s->isolate || s->migrate)
return 1;
/*
--