linux-mm.kvack.org archive mirror
* [RFC v2 0/3] zsmalloc: make its pages can be migrated
@ 2015-10-15  9:08 Hui Zhu
  2015-10-15  9:09 ` [RFC v2 1/3] migrate: new struct migration and add it to struct page Hui Zhu
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Hui Zhu @ 2015-10-15  9:08 UTC (permalink / raw)
  To: Minchan Kim, Nitin Gupta, Sergey Senozhatsky, Andrew Morton,
	Kirill A. Shutemov, Mel Gorman, Dave Hansen, Johannes Weiner,
	Michal Hocko, Konstantin Khlebnikov, Andrea Arcangeli,
	Alexander Duyck, Tejun Heo, Joonsoo Kim, Naoya Horiguchi,
	Jennifer Herbert, Hugh Dickins, Vladimir Davydov,
	Vlastimil Babka, David Rientjes, Sasha Levin,
	Steven Rostedt (Red Hat),
	Aneesh Kumar K.V, Wanpeng Li, Geert Uytterhoeven, Greg Thelen,
	Al Viro, linux-kernel, linux-mm
  Cc: teawater, Hui Zhu

According to the review of the previous version [1], I learned that I should
not increase the size of struct page, so this version avoids doing that.

I also added CONFIG_MIGRATION checks so that the new code is only active
when CONFIG_MIGRATION is enabled.
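
The core idea, roughly sketched below (patch 1 has the real change), is that
the new per-page pointer shares storage with "mapping", so struct page does
not grow:

struct page {
	union {
		struct address_space *mapping;
		void *s_mem;			/* slab first object */
#ifdef CONFIG_MIGRATION
		struct migration *migration;	/* new in this series */
#endif
	};
	/* ... the rest of struct page is unchanged ... */
};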

Hui Zhu (3):
migrate: new struct migration and add it to struct page
zsmalloc: mark its page "PageMigration"
zram: make create "__GFP_MOVABLE" pool

 drivers/block/zram/zram_drv.c |    6 
 include/linux/migrate.h       |   43 ++
 include/linux/mm_types.h      |    3 
 mm/compaction.c               |    8 
 mm/migrate.c                  |   17 -
 mm/vmscan.c                   |    2 
 mm/zsmalloc.c                 |  605 +++++++++++++++++++++++++++++++++++++++---
 7 files changed, 639 insertions(+), 45 deletions(-)

[1] http://comments.gmane.org/gmane.linux.kernel.mm/139724

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [RFC v2 1/3] migrate: new struct migration and add it to struct page
  2015-10-15  9:08 [RFC v2 0/3] zsmalloc: make its pages can be migrated Hui Zhu
@ 2015-10-15  9:09 ` Hui Zhu
  2015-10-15  9:27   ` Vlastimil Babka
  2015-10-15  9:09 ` [RFC v2 2/3] zsmalloc: mark its page "PageMigration" Hui Zhu
  2015-10-15  9:09 ` [RFC v2 3/3] zram: make create "__GFP_MOVABLE" pool Hui Zhu
  2 siblings, 1 reply; 7+ messages in thread
From: Hui Zhu @ 2015-10-15  9:09 UTC (permalink / raw)
  To: Minchan Kim, Nitin Gupta, Sergey Senozhatsky, Andrew Morton,
	Kirill A. Shutemov, Mel Gorman, Dave Hansen, Johannes Weiner,
	Michal Hocko, Konstantin Khlebnikov, Andrea Arcangeli,
	Alexander Duyck, Tejun Heo, Joonsoo Kim, Naoya Horiguchi,
	Jennifer Herbert, Hugh Dickins, Vladimir Davydov,
	Vlastimil Babka, David Rientjes, Sasha Levin,
	Steven Rostedt (Red Hat),
	Aneesh Kumar K.V, Wanpeng Li, Geert Uytterhoeven, Greg Thelen,
	Al Viro, linux-kernel, linux-mm
  Cc: teawater, Hui Zhu

I learned that adding separate function interfaces to struct page is
really not a good idea.  So I added a new struct migration that holds
all of the migration interfaces, and added it to struct page in the
union that contains "mapping".
That way the feature does not increase the size of struct page.

I also changed the flag from "PG_movable" to "PageMigration" according
to the review.
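
To make the intended use concrete, here is a rough, untested sketch of how a
driver could hook into this interface (the zzz_* names are made up for
illustration; zsmalloc in patch 2 allocates one struct migration per page
rather than sharing a static one, and a real ->move() also has to transfer
the driver's metadata and references, as zs_movepage does):

/* Needs <linux/mm.h>, <linux/highmem.h>, <linux/migrate.h>, <linux/string.h>. */
static int zzz_isolate(struct page *page)
{
	/* Pin the page so it cannot go away while compaction holds it. */
	if (!get_page_unless_zero(page))
		return -EBUSY;
	return 0;
}

static void zzz_put(struct page *page)
{
	/* Undo zzz_isolate() after a failed or aborted migration. */
	put_page(page);
}

static int zzz_move(struct page *page, struct page *newpage, int force,
		    enum migrate_mode mode)
{
	void *src = kmap_atomic(page);
	void *dst = kmap_atomic(newpage);

	memcpy(dst, src, PAGE_SIZE);
	kunmap_atomic(dst);
	kunmap_atomic(src);
	/* A real callback must also repoint its metadata to newpage. */
	return MIGRATEPAGE_SUCCESS;
}

static struct migration zzz_migration_ops = {
	.isolate	= zzz_isolate,
	.put		= zzz_put,
	.move		= zzz_move,
};

static void zzz_prepare_page(struct page *page)
{
	page->migration = &zzz_migration_ops;	/* shares the "mapping" slot */
	__SetPageMigration(page);
}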

Signed-off-by: Hui Zhu <zhuhui@xiaomi.com>
---
 include/linux/migrate.h  | 30 ++++++++++++++++++++++++++++++
 include/linux/mm_types.h |  3 +++
 mm/compaction.c          |  8 ++++++++
 mm/migrate.c             | 17 +++++++++++++----
 mm/vmscan.c              |  2 +-
 5 files changed, 55 insertions(+), 5 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index cac1c09..8b8caba 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -27,6 +27,31 @@ enum migrate_reason {
 };
 
 #ifdef CONFIG_MIGRATION
+struct migration {
+	int (*isolate)(struct page *page);
+	void (*put)(struct page *page);
+	int (*move)(struct page *page, struct page *newpage, int force,
+		       enum migrate_mode mode);
+};
+
+#define PAGE_MIGRATION_MAPCOUNT_VALUE (-512)
+
+static inline int PageMigration(struct page *page)
+{
+	return atomic_read(&page->_mapcount) == PAGE_MIGRATION_MAPCOUNT_VALUE;
+}
+
+static inline void __SetPageMigration(struct page *page)
+{
+	VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
+	atomic_set(&page->_mapcount, PAGE_MIGRATION_MAPCOUNT_VALUE);
+}
+
+static inline void __ClearPageMigration(struct page *page)
+{
+	VM_BUG_ON_PAGE(!PageMigration(page), page);
+	atomic_set(&page->_mapcount, -1);
+}
 
 extern void putback_movable_pages(struct list_head *l);
 extern int migrate_page(struct address_space *,
@@ -45,6 +70,11 @@ extern int migrate_page_move_mapping(struct address_space *mapping,
 		int extra_count);
 #else
 
+static inline int PageMigration(struct page *page)
+{
+	return false;
+}
+
 static inline void putback_movable_pages(struct list_head *l) {}
 static inline int migrate_pages(struct list_head *l, new_page_t new,
 		free_page_t free, unsigned long private, enum migrate_mode mode,
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 3d6baa7..61d5da4 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -56,6 +56,9 @@ struct page {
 						 * see PAGE_MAPPING_ANON below.
 						 */
 		void *s_mem;			/* slab first object */
+#ifdef CONFIG_MIGRATION
+		struct migration *migration;
+#endif
 	};
 
 	/* Second double word */
diff --git a/mm/compaction.c b/mm/compaction.c
index c5c627a..d05822e 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -752,6 +752,14 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 */
 		is_lru = PageLRU(page);
 		if (!is_lru) {
+#ifdef CONFIG_MIGRATION
+			if (PageMigration(page)) {
+				if (page->migration->isolate(page) == 0)
+					goto isolate_success;
+
+				continue;
+			}
+#endif
 			if (unlikely(balloon_page_movable(page))) {
 				if (balloon_page_isolate(page)) {
 					/* Successfully isolated */
diff --git a/mm/migrate.c b/mm/migrate.c
index 842ecd7..2e20d4e 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -93,7 +93,9 @@ void putback_movable_pages(struct list_head *l)
 		list_del(&page->lru);
 		dec_zone_page_state(page, NR_ISOLATED_ANON +
 				page_is_file_cache(page));
-		if (unlikely(isolated_balloon_page(page)))
+		if (PageMigration(page))
+			page->migration->put(page);
+		else if (unlikely(isolated_balloon_page(page)))
 			balloon_page_putback(page);
 		else
 			putback_lru_page(page);
@@ -953,7 +955,10 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 		if (unlikely(split_huge_page(page)))
 			goto out;
 
-	rc = __unmap_and_move(page, newpage, force, mode);
+	if (PageMigration(page))
+		rc = page->migration->move(page, newpage, force, mode);
+	else
+		rc = __unmap_and_move(page, newpage, force, mode);
 
 out:
 	if (rc != -EAGAIN) {
@@ -967,7 +972,9 @@ out:
 		dec_zone_page_state(page, NR_ISOLATED_ANON +
 				page_is_file_cache(page));
 		/* Soft-offlined page shouldn't go through lru cache list */
-		if (reason == MR_MEMORY_FAILURE) {
+		if (PageMigration(page)) {
+			page->migration->put(page);
+		} else if (reason == MR_MEMORY_FAILURE) {
 			put_page(page);
 			if (!test_set_page_hwpoison(page))
 				num_poisoned_pages_inc();
@@ -983,7 +990,9 @@ out:
 	if (rc != MIGRATEPAGE_SUCCESS && put_new_page) {
 		ClearPageSwapBacked(newpage);
 		put_new_page(newpage, private);
-	} else if (unlikely(__is_movable_balloon_page(newpage))) {
+	} else if (PageMigration(newpage)) {
+		put_page(newpage);
+	} else if (unlikely(__is_movable_balloon_page(newpage))) {
 		/* drop our reference, page already in the balloon */
 		put_page(newpage);
 	} else
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7f63a93..87d6934 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1245,7 +1245,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
 
 	list_for_each_entry_safe(page, next, page_list, lru) {
 		if (page_is_file_cache(page) && !PageDirty(page) &&
-		    !isolated_balloon_page(page)) {
+		    !isolated_balloon_page(page) && !PageMigration(page)) {
 			ClearPageActive(page);
 			list_move(&page->lru, &clean_pages);
 		}
-- 
1.9.1

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [RFC v2 2/3] zsmalloc: mark its page "PageMigration"
  2015-10-15  9:08 [RFC v2 0/3] zsmalloc: make its pages can be migrated Hui Zhu
  2015-10-15  9:09 ` [RFC v2 1/3] migrate: new struct migration and add it to struct page Hui Zhu
@ 2015-10-15  9:09 ` Hui Zhu
  2015-10-15  9:09 ` [RFC v2 3/3] zram: make create "__GFP_MOVABLE" pool Hui Zhu
  2 siblings, 0 replies; 7+ messages in thread
From: Hui Zhu @ 2015-10-15  9:09 UTC (permalink / raw)
  To: Minchan Kim, Nitin Gupta, Sergey Senozhatsky, Andrew Morton,
	Kirill A. Shutemov, Mel Gorman, Dave Hansen, Johannes Weiner,
	Michal Hocko, Konstantin Khlebnikov, Andrea Arcangeli,
	Alexander Duyck, Tejun Heo, Joonsoo Kim, Naoya Horiguchi,
	Jennifer Herbert, Hugh Dickins, Vladimir Davydov,
	Vlastimil Babka, David Rientjes, Sasha Levin,
	Steven Rostedt (Red Hat),
	Aneesh Kumar K.V, Wanpeng Li, Geert Uytterhoeven, Greg Thelen,
	Al Viro, linux-kernel, linux-mm
  Cc: teawater, Hui Zhu

Most of the idea is the same as the previous version: mark zsmalloc's pages
"PageMigration" and implement the migration interfaces with zs_isolatepage,
zs_putpage and zs_movepage.

The difference is that the zsmalloc metadata is moved from struct page into
struct migration.
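
To illustrate what moves where (a simplified sketch, not code from the patch;
it mirrors what get_zspage_mapping() does today), the first page's class and
fullness group used to be packed into the page->mapping bits and become plain
fields of the per-page struct migration:

/* Before: decode the metadata packed into page->mapping. */
static enum fullness_group zspage_fullness_packed(struct page *first_page)
{
	unsigned long m = (unsigned long)first_page->mapping;

	return m & FULLNESS_MASK;
}

/* After (CONFIG_MIGRATION): read it directly from struct migration. */
static enum fullness_group zspage_fullness_migration(struct page *first_page)
{
	return first_page->migration->zs_fg;
}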

Signed-off-by: Hui Zhu <zhuhui@xiaomi.com>
---
 include/linux/migrate.h |  13 ++
 mm/zsmalloc.c           | 605 ++++++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 578 insertions(+), 40 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 8b8caba..b8f9448 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -32,6 +32,19 @@ struct migration {
 	void (*put)(struct page *page);
 	int (*move)(struct page *page, struct page *newpage, int force,
 		       enum migrate_mode mode);
+#ifdef CONFIG_ZSMALLOC
+	struct {
+		/* Fields used by every page of a zspage.  */
+		struct list_head zs_lru;
+		struct page *zs_page;
+		void *zs_class;
+
+		/* Fields used only by the first page of a zspage.  */
+		int zs_fg;
+		unsigned zs_inuse;
+		unsigned zs_objects;
+	};
+#endif
 };
 
 #define PAGE_MIGRATION_MAPCOUNT_VALUE (-512)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3134a37..5282a03 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -21,8 +21,11 @@
  *		starting in this page. For the first page, this is
  *		always 0, so we use this field (aka freelist) to point
  *		to the first free object in zspage.
- *	page->lru: links together all component pages (except the first page)
- *		of a zspage
+ *	zs_page_lru(page): links together all component pages (except the
+ *		first page) of a zspage
+ *	page->migration->zs_class (CONFIG_MIGRATION): class of the zspage
+ *	page->migration->zs_fg (CONFIG_MIGRATION): fullness group
+ *		of the zspage
  *
  *	For _first_ page only:
  *
@@ -33,11 +36,12 @@
  *	page->freelist: points to the first free object in zspage.
  *		Free objects are linked together using in-place
  *		metadata.
- *	page->objects: maximum number of objects we can store in this
+ *	zs_page_objects(page): maximum number of objects we can store in this
  *		zspage (class->zspage_order * PAGE_SIZE / class->size)
- *	page->lru: links together first pages of various zspages.
+ *	zs_page_lru(page): links together first pages of various zspages.
  *		Basically forming list of zspages in a fullness group.
- *	page->mapping: class index and fullness group of the zspage
+ *	page->mapping(no CONFIG_MIGRATION): class index and fullness group
+ *		of the zspage
  *
  * Usage of struct page flags:
  *	PG_private: identifies the first component page
@@ -64,6 +68,9 @@
 #include <linux/debugfs.h>
 #include <linux/zsmalloc.h>
 #include <linux/zpool.h>
+#include <linux/migrate.h>
+#include <linux/rwlock.h>
+#include <linux/mm.h>
 
 /*
  * This must be power of 2 and greater than of equal to sizeof(link_free).
@@ -214,6 +221,8 @@ struct size_class {
 
 	/* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
 	bool huge;
+
+	atomic_t count;
 };
 
 /*
@@ -279,6 +288,12 @@ struct mapping_area {
 	bool huge;
 };
 
+#ifdef CONFIG_MIGRATION
+static rwlock_t zs_class_rwlock;
+static rwlock_t zs_tag_rwlock;
+static struct kmem_cache *zs_migration_cachep;
+#endif
+
 static int create_handle_cache(struct zs_pool *pool)
 {
 	pool->handle_cachep = kmem_cache_create("zs_handle", ZS_HANDLE_SIZE,
@@ -294,7 +309,7 @@ static void destroy_handle_cache(struct zs_pool *pool)
 static unsigned long alloc_handle(struct zs_pool *pool)
 {
 	return (unsigned long)kmem_cache_alloc(pool->handle_cachep,
-		pool->flags & ~__GFP_HIGHMEM);
+		pool->flags & ~(__GFP_HIGHMEM | __GFP_MOVABLE));
 }
 
 static void free_handle(struct zs_pool *pool, unsigned long handle)
@@ -307,6 +322,41 @@ static void record_obj(unsigned long handle, unsigned long obj)
 	*(unsigned long *)handle = obj;
 }
 
+#ifdef CONFIG_MIGRATION
+#define zs_page_lru(page)	((page)->migration->zs_lru)
+#define zs_page_inuse(page)	((page)->migration->zs_inuse)
+#define zs_page_objects(page)	((page)->migration->zs_objects)
+
+static struct migration *alloc_migration(gfp_t flags)
+{
+	return (struct migration *)kmem_cache_alloc(zs_migration_cachep,
+		flags & ~(__GFP_HIGHMEM | __GFP_MOVABLE));
+}
+
+static void free_migration(struct migration *migration)
+{
+	kmem_cache_free(zs_migration_cachep, (void *)migration);
+}
+
+void zs_put_page(struct page *page)
+{
+	if (put_page_testzero(page)) {
+		if (page->migration) {
+			free_migration(page->migration);
+			page->migration = NULL;
+		}
+		free_hot_cold_page(page, 0);
+	}
+}
+
+#else
+#define zs_page_lru(page)	((page)->lru)
+#define zs_page_inuse(page)	((page)->inuse)
+#define zs_page_objects(page)	((page)->objects)
+
+#define zs_put_page(page)	put_page(page)
+#endif
+
 /* zpool driver */
 
 #ifdef CONFIG_ZPOOL
@@ -404,6 +454,7 @@ static int is_last_page(struct page *page)
 	return PagePrivate2(page);
 }
 
+#ifndef CONFIG_MIGRATION
 static void get_zspage_mapping(struct page *page, unsigned int *class_idx,
 				enum fullness_group *fullness)
 {
@@ -425,6 +476,7 @@ static void set_zspage_mapping(struct page *page, unsigned int class_idx,
 			(fullness & FULLNESS_MASK);
 	page->mapping = (struct address_space *)m;
 }
+#endif
 
 /*
  * zsmalloc divides the pool into various size classes where each
@@ -612,8 +664,8 @@ static enum fullness_group get_fullness_group(struct page *page)
 	enum fullness_group fg;
 	BUG_ON(!is_first_page(page));
 
-	inuse = page->inuse;
-	max_objects = page->objects;
+	inuse = zs_page_inuse(page);
+	max_objects = zs_page_objects(page);
 
 	if (inuse == 0)
 		fg = ZS_EMPTY;
@@ -656,8 +708,8 @@ static void insert_zspage(struct page *page, struct size_class *class,
 	 * We want to see more ZS_FULL pages and less almost
 	 * empty/full. Put pages with higher ->inuse first.
 	 */
-	list_add_tail(&page->lru, &(*head)->lru);
-	if (page->inuse >= (*head)->inuse)
+	list_add_tail(&zs_page_lru(page), &zs_page_lru(*head));
+	if (zs_page_inuse(page) >= zs_page_inuse(*head))
 		*head = page;
 }
 
@@ -677,13 +729,23 @@ static void remove_zspage(struct page *page, struct size_class *class,
 
 	head = &class->fullness_list[fullness];
 	BUG_ON(!*head);
-	if (list_empty(&(*head)->lru))
+	if (list_empty(&zs_page_lru(*head)))
 		*head = NULL;
-	else if (*head == page)
-		*head = (struct page *)list_entry((*head)->lru.next,
+	else if (*head == page) {
+#ifdef CONFIG_MIGRATION
+		struct migration *migration;
+
+		migration = (struct migration *)
+				list_entry(zs_page_lru(*head).next,
+					   struct migration, zs_lru);
+		*head = migration->zs_page;
+#else
+		*head = (struct page *)list_entry(zs_page_lru(*head).next,
 					struct page, lru);
+#endif
+	}
 
-	list_del_init(&page->lru);
+	list_del_init(&zs_page_lru(page));
 	zs_stat_dec(class, fullness == ZS_ALMOST_EMPTY ?
 			CLASS_ALMOST_EMPTY : CLASS_ALMOST_FULL, 1);
 }
@@ -700,19 +762,29 @@ static void remove_zspage(struct page *page, struct size_class *class,
 static enum fullness_group fix_fullness_group(struct size_class *class,
 						struct page *page)
 {
+#ifndef CONFIG_MIGRATION
 	int class_idx;
+#endif
 	enum fullness_group currfg, newfg;
 
 	BUG_ON(!is_first_page(page));
 
+#ifdef CONFIG_MIGRATION
+	currfg = page->migration->zs_fg;
+#else
 	get_zspage_mapping(page, &class_idx, &currfg);
+#endif
 	newfg = get_fullness_group(page);
 	if (newfg == currfg)
 		goto out;
 
 	remove_zspage(page, class, currfg);
 	insert_zspage(page, class, newfg);
+#ifdef CONFIG_MIGRATION
+	page->migration->zs_fg = newfg;
+#else
 	set_zspage_mapping(page, class_idx, newfg);
+#endif
 
 out:
 	return newfg;
@@ -775,8 +847,18 @@ static struct page *get_next_page(struct page *page)
 		next = NULL;
 	else if (is_first_page(page))
 		next = (struct page *)page_private(page);
-	else
-		next = list_entry(page->lru.next, struct page, lru);
+	else {
+#ifdef CONFIG_MIGRATION
+		struct migration *migration;
+
+		migration = (struct migration *)
+				list_entry(zs_page_lru(page).next,
+					   struct migration, zs_lru);
+		next = migration->zs_page;
+#else
+		next = list_entry(zs_page_lru(page).next, struct page, lru);
+#endif
+	}
 
 	return next;
 }
@@ -809,9 +891,14 @@ static void *location_to_obj(struct page *page, unsigned long obj_idx)
 static void obj_to_location(unsigned long obj, struct page **page,
 				unsigned long *obj_idx)
 {
-	obj >>= OBJ_TAG_BITS;
-	*page = pfn_to_page(obj >> OBJ_INDEX_BITS);
-	*obj_idx = (obj & OBJ_INDEX_MASK);
+	if (obj == 0) {
+		*page = NULL;
+		*obj_idx = 0;
+	} else {
+		obj >>= OBJ_TAG_BITS;
+		*page = pfn_to_page(obj >> OBJ_INDEX_BITS);
+		*obj_idx = (obj & OBJ_INDEX_MASK);
+	}
 }
 
 static unsigned long handle_to_obj(unsigned long handle)
@@ -859,39 +946,59 @@ static void unpin_tag(unsigned long handle)
 	clear_bit_unlock(HANDLE_PIN_BIT, ptr);
 }
 
+
 static void reset_page(struct page *page)
 {
+#ifdef CONFIG_MIGRATION
+	/* Lock the page to protect the atomic access of page->migration.  */
+	lock_page(page);
+#endif
 	clear_bit(PG_private, &page->flags);
 	clear_bit(PG_private_2, &page->flags);
 	set_page_private(page, 0);
+#ifndef CONFIG_MIGRATION
 	page->mapping = NULL;
+#endif
 	page->freelist = NULL;
 	page_mapcount_reset(page);
+#ifdef CONFIG_MIGRATION
+	unlock_page(page);
+#endif
 }
 
 static void free_zspage(struct page *first_page)
 {
-	struct page *nextp, *tmp, *head_extra;
+#ifdef CONFIG_MIGRATION
+	struct migration *tmp, *nextm;
+#else
+	struct page *tmp;
+#endif
+	struct page *nextp, *head_extra;
 
 	BUG_ON(!is_first_page(first_page));
-	BUG_ON(first_page->inuse);
+	BUG_ON(zs_page_inuse(first_page));
 
 	head_extra = (struct page *)page_private(first_page);
 
 	reset_page(first_page);
-	__free_page(first_page);
+	zs_put_page(first_page);
 
 	/* zspage with only 1 system page */
 	if (!head_extra)
 		return;
-
-	list_for_each_entry_safe(nextp, tmp, &head_extra->lru, lru) {
-		list_del(&nextp->lru);
+#ifdef CONFIG_MIGRATION
+	list_for_each_entry_safe(nextm, tmp, &zs_page_lru(head_extra),
+				 zs_lru) {
+		nextp = nextm->zs_page;
+#else
+	list_for_each_entry_safe(nextp, tmp, &zs_page_lru(head_extra), lru) {
+#endif
+		list_del(&zs_page_lru(nextp));
 		reset_page(nextp);
-		__free_page(nextp);
+		zs_put_page(nextp);
 	}
 	reset_page(head_extra);
-	__free_page(head_extra);
+	zs_put_page(head_extra);
 }
 
 /* Initialize a newly allocated zspage */
@@ -937,6 +1044,311 @@ static void init_zspage(struct page *first_page, struct size_class *class)
 	}
 }
 
+#ifdef CONFIG_MIGRATION
+static void
+get_class(struct size_class *class)
+{
+	atomic_inc(&class->count);
+}
+
+static void
+put_class(struct size_class *class)
+{
+	if (atomic_dec_and_test(&class->count))
+		kfree(class);
+}
+
+static int zs_isolatepage(struct page *page)
+{
+	int ret = -EBUSY;
+
+	if (!get_page_unless_zero(page))
+		return -EBUSY;
+
+	read_lock(&zs_class_rwlock);
+	lock_page(page);
+
+	if (page_count(page) != 2)
+		goto put_out;
+	if (!page->migration)
+		goto put_out;
+	get_class(page->migration->zs_class);
+
+	ret = 0;
+out:
+	unlock_page(page);
+	read_unlock(&zs_class_rwlock);
+	return ret;
+
+put_out:
+	zs_put_page(page);
+	goto out;
+}
+
+static void zs_putpage(struct page *page)
+{
+	put_class(page->migration->zs_class);
+	zs_put_page(page);
+}
+
+struct zspage_loop_struct {
+	struct size_class *class;
+	struct page *page;
+	struct page *newpage;
+	void *newaddr;
+
+	struct page *cur_page;
+	void *cur_addr;
+
+	unsigned long offset;
+	unsigned int idx;
+};
+
+static void
+zspage_migratepage_obj_callback(unsigned long head,
+				struct zspage_loop_struct *zls)
+{
+	BUG_ON(zls == NULL);
+
+	if (head & OBJ_ALLOCATED_TAG) {
+		unsigned long copy_size;
+		unsigned long newobj;
+		unsigned long handle;
+
+		/* For an allocated object we only need to handle zls->page.  */
+		if (zls->cur_page != zls->page)
+			return;
+
+		copy_size = zls->class->size;
+
+		if (zls->offset + copy_size > PAGE_SIZE)
+			copy_size = PAGE_SIZE - zls->offset;
+
+		newobj = (unsigned long)location_to_obj(zls->newpage, zls->idx);
+
+		/* Clearing OBJ_ALLOCATED_TAG yields the real handle.  */
+		handle = head & ~OBJ_ALLOCATED_TAG;
+		record_obj(handle, newobj);
+
+		/* Copy the allocated chunk to the new page.
+		 * The handle stored in its header is copied along with it.
+		 */
+		memcpy(zls->newaddr + zls->offset,
+		       zls->cur_addr + zls->offset, copy_size);
+	} else {
+		struct link_free *link;
+		unsigned long obj;
+		unsigned long tmp_idx;
+		struct page *tmp_page;
+
+		link = (struct link_free *)(zls->cur_addr + zls->offset);
+		obj = (unsigned long)link->next;
+
+		obj_to_location(obj, &tmp_page, &tmp_idx);
+		if (tmp_page == zls->page) {
+			/* Link points into the old page; redirect it to newpage.  */
+			obj = (unsigned long)location_to_obj(zls->newpage,
+							     tmp_idx);
+			link->next = (void *)obj;
+		}
+
+		if (zls->cur_page == zls->page) {
+			/* Also write the link into the new page's copy.  */
+			link = (struct link_free *)(zls->newaddr + zls->offset);
+			link->next = (void *)obj;
+		}
+	}
+}
+
+static void
+zspage_loop_1(struct size_class *class, struct page *cur_page,
+	      struct zspage_loop_struct *zls,
+	      void (*callback)(unsigned long head,
+			       struct zspage_loop_struct *zls))
+{
+	void *addr;
+	unsigned long m_offset = 0;
+	unsigned int obj_idx = 0;
+
+	if (!is_first_page(cur_page))
+		m_offset = cur_page->index;
+
+	addr = kmap_atomic(cur_page);
+
+	if (zls) {
+		zls->cur_page = cur_page;
+		zls->cur_addr = addr;
+	}
+
+	while (m_offset < PAGE_SIZE) {
+		unsigned long head = obj_to_head(class, cur_page,
+						 addr + m_offset);
+
+		if (zls) {
+			zls->offset = m_offset;
+			zls->idx = obj_idx;
+		}
+
+		callback(head, zls);
+
+		m_offset += class->size;
+		obj_idx++;
+	}
+
+	kunmap_atomic(addr);
+}
+
+static void
+zspage_loop(struct size_class *class, struct page *first_page,
+	    struct page *page, struct page *newpage,
+	    void (*callback)(unsigned long head,
+			     struct zspage_loop_struct *zls))
+{
+	struct page *cur_page;
+	struct zspage_loop_struct zl;
+	struct zspage_loop_struct *zls = NULL;
+
+	BUG_ON(!is_first_page(first_page));
+
+	if (page) {
+		zls = &zl;
+		zls->class = class;
+		zls->page = page;
+		zls->newpage = newpage;
+		zls->newaddr = kmap_atomic(zls->newpage);
+	}
+
+	cur_page = first_page;
+	while (cur_page) {
+		zspage_loop_1(class, cur_page, zls, callback);
+		cur_page = get_next_page(cur_page);
+	}
+
+	if (zls)
+		kunmap_atomic(zls->newaddr);
+}
+
+static int
+zs_movepage(struct page *page, struct page *newpage, int force,
+	    enum migrate_mode mode)
+{
+	int ret = -EAGAIN;
+	struct size_class *class = page->migration->zs_class;
+	struct page *first_page;
+
+	write_lock(&zs_tag_rwlock);
+	spin_lock(&class->lock);
+
+	if (page_count(page) <= 1)
+		goto out;
+
+	first_page = get_first_page(page);
+
+	INIT_LIST_HEAD(&newpage->lru);
+	if (page == first_page) {	/* first page */
+		struct page **head;
+
+		newpage->freelist = page->freelist;
+		SetPagePrivate(newpage);
+
+		if (class->huge) {
+			unsigned long handle = page_private(page);
+			unsigned long obj
+				= (unsigned long)location_to_obj(newpage, 0);
+
+			if (handle != 0) {
+				void *addr, *newaddr;
+
+				/* The page is allocated.  */
+				handle = handle & ~OBJ_ALLOCATED_TAG;
+				record_obj(handle, obj);
+				addr = kmap_atomic(page);
+				newaddr = kmap_atomic(newpage);
+				memcpy(newaddr, addr, class->size);
+				kunmap_atomic(newaddr);
+				kunmap_atomic(addr);
+			} else
+				newpage->freelist = (void *)obj;
+			set_page_private(newpage, handle);
+		} else {
+			struct page *head_extra
+				= (struct page *)page_private(page);
+
+			if (head_extra) {
+				struct migration *nextm;
+
+				head_extra->first_page = newpage;
+				list_for_each_entry(nextm,
+						    &zs_page_lru(head_extra),
+						    zs_lru)
+					nextm->zs_page->first_page = newpage;
+			}
+			set_page_private(newpage, (unsigned long)head_extra);
+		}
+
+		head = &class->fullness_list[first_page->migration->zs_fg];
+		BUG_ON(!*head);
+		if (*head == page)
+			*head = newpage;
+	} else {
+		void *addr, *newaddr;
+
+		newpage->first_page = page->first_page;
+		newpage->index = page->index;
+
+		if (page->index > 0) {
+			addr = kmap_atomic(page);
+			newaddr = kmap_atomic(newpage);
+			memcpy(newaddr, addr, page->index);
+			kunmap_atomic(newaddr);
+			kunmap_atomic(addr);
+		}
+	}
+	if (is_last_page(page))	/* last page */
+		SetPagePrivate2(newpage);
+
+	if (!class->huge) {
+		zspage_loop(class, first_page, page, newpage,
+			    zspage_migratepage_obj_callback);
+	}
+
+	/* Add newpage to zspage.  */
+	if (first_page == page)
+		first_page = newpage;
+	else {
+		if ((struct page *)page_private(first_page) == page)
+			set_page_private(first_page, (unsigned long)newpage);
+	}
+	newpage->migration = page->migration;
+	newpage->migration->zs_page = newpage;
+
+	if (!class->huge) {
+		struct page *tmp_page;
+		unsigned long tmp_idx;
+
+		/* Update first_page->freelist if needed.  */
+		obj_to_location((unsigned long)first_page->freelist,
+				&tmp_page, &tmp_idx);
+		if (tmp_page == page)
+			first_page->freelist = location_to_obj(newpage,
+							       tmp_idx);
+	}
+
+	get_page(newpage);
+	__SetPageMigration(newpage);
+
+	page->migration = NULL;
+	reset_page(page);
+	zs_put_page(page);
+
+	ret = MIGRATEPAGE_SUCCESS;
+out:
+	spin_unlock(&class->lock);
+	write_unlock(&zs_tag_rwlock);
+	return ret;
+}
+#endif
+
 /*
  * Allocate a zspage for the given size class
  */
@@ -948,11 +1360,12 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
 	/*
 	 * Allocate individual pages and link them together as:
 	 * 1. first page->private = first sub-page
-	 * 2. all sub-pages are linked together using page->lru
+	 * 2. all sub-pages are linked together using zs_page_lru
 	 * 3. each sub-page is linked to the first page using page->first_page
 	 *
 	 * For each size class, First/Head pages are linked together using
-	 * page->lru. Also, we set PG_private to identify the first page
+	 * zs_page_lru.
+	 * Also, we set PG_private to identify the first page
 	 * (i.e. no other sub-page has this flag set) and PG_private_2 to
 	 * identify the last page.
 	 */
@@ -963,20 +1376,35 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
 		page = alloc_page(flags);
 		if (!page)
 			goto cleanup;
+#ifdef CONFIG_MIGRATION
+		page->migration = alloc_migration(flags);
+		if (!page->migration) {
+			__free_page(page);
+			goto cleanup;
+		}
+#endif
 
 		INIT_LIST_HEAD(&page->lru);
+#ifdef CONFIG_MIGRATION
+		page->migration->isolate = zs_isolatepage;
+		page->migration->put = zs_putpage;
+		page->migration->move = zs_movepage;
+		INIT_LIST_HEAD(&page->migration->zs_lru);
+		page->migration->zs_page = page;
+		page->migration->zs_class = class;
+#endif
 		if (i == 0) {	/* first page */
 			SetPagePrivate(page);
 			set_page_private(page, 0);
 			first_page = page;
-			first_page->inuse = 0;
+			zs_page_inuse(first_page) = 0;
 		}
 		if (i == 1)
 			set_page_private(first_page, (unsigned long)page);
 		if (i >= 1)
 			page->first_page = first_page;
 		if (i >= 2)
-			list_add(&page->lru, &prev_page->lru);
+			list_add(&zs_page_lru(page), &zs_page_lru(prev_page));
 		if (i == class->pages_per_zspage - 1)	/* last page */
 			SetPagePrivate2(page);
 		prev_page = page;
@@ -986,7 +1414,8 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
 
 	first_page->freelist = location_to_obj(first_page, 0);
 	/* Maximum number of objects we can store in this zspage */
-	first_page->objects = class->pages_per_zspage * PAGE_SIZE / class->size;
+	zs_page_objects(first_page)
+		= class->pages_per_zspage * PAGE_SIZE / class->size;
 
 	error = 0; /* Success */
 
@@ -1221,7 +1650,7 @@ static bool zspage_full(struct page *page)
 {
 	BUG_ON(!is_first_page(page));
 
-	return page->inuse == page->objects;
+	return zs_page_inuse(page) == zs_page_objects(page);
 }
 
 unsigned long zs_get_total_pages(struct zs_pool *pool)
@@ -1250,12 +1679,15 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	struct page *page;
 	unsigned long obj, obj_idx, off;
 
+#ifndef CONFIG_MIGRATION
 	unsigned int class_idx;
+#endif
 	enum fullness_group fg;
 	struct size_class *class;
 	struct mapping_area *area;
 	struct page *pages[2];
 	void *ret;
+	struct page *first_page;
 
 	BUG_ON(!handle);
 
@@ -1267,12 +1699,22 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	BUG_ON(in_interrupt());
 
 	/* From now on, migration cannot move the object */
+#ifdef CONFIG_MIGRATION
+	read_lock(&zs_tag_rwlock);
+#endif
 	pin_tag(handle);
 
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &page, &obj_idx);
-	get_zspage_mapping(get_first_page(page), &class_idx, &fg);
+	first_page = get_first_page(page);
+#ifdef CONFIG_MIGRATION
+	fg = first_page->migration->zs_fg;
+	class = first_page->migration->zs_class;
+#else
+	get_zspage_mapping(first_page, &class_idx, &fg);
 	class = pool->size_class[class_idx];
+#endif
+
 	off = obj_idx_to_offset(page, obj_idx, class->size);
 
 	area = &get_cpu_var(zs_map_area);
@@ -1302,18 +1744,26 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 {
 	struct page *page;
 	unsigned long obj, obj_idx, off;
-
+#ifndef CONFIG_MIGRATION
 	unsigned int class_idx;
+#endif
 	enum fullness_group fg;
 	struct size_class *class;
 	struct mapping_area *area;
+	struct page *first_page;
 
 	BUG_ON(!handle);
 
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &page, &obj_idx);
-	get_zspage_mapping(get_first_page(page), &class_idx, &fg);
+	first_page = get_first_page(page);
+#ifdef CONFIG_MIGRATION
+	fg = first_page->migration->zs_fg;
+	class = first_page->migration->zs_class;
+#else
+	get_zspage_mapping(first_page, &class_idx, &fg);
 	class = pool->size_class[class_idx];
+#endif
 	off = obj_idx_to_offset(page, obj_idx, class->size);
 
 	area = this_cpu_ptr(&zs_map_area);
@@ -1330,6 +1780,9 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	}
 	put_cpu_var(zs_map_area);
 	unpin_tag(handle);
+#ifdef CONFIG_MIGRATION
+	read_unlock(&zs_tag_rwlock);
+#endif
 }
 EXPORT_SYMBOL_GPL(zs_unmap_object);
 
@@ -1350,6 +1803,8 @@ static unsigned long obj_malloc(struct page *first_page,
 
 	vaddr = kmap_atomic(m_page);
 	link = (struct link_free *)vaddr + m_offset / sizeof(*link);
+	BUG_ON(first_page == NULL);
+	BUG_ON(link == NULL);
 	first_page->freelist = link->next;
 	if (!class->huge)
 		/* record handle in the header of allocated chunk */
@@ -1358,13 +1813,31 @@ static unsigned long obj_malloc(struct page *first_page,
 		/* record handle in first_page->private */
 		set_page_private(first_page, handle);
 	kunmap_atomic(vaddr);
-	first_page->inuse++;
+	zs_page_inuse(first_page)++;
 	zs_stat_inc(class, OBJ_USED, 1);
 
 	return obj;
 }
 
 
+#ifdef CONFIG_MIGRATION
+static void set_zspage_migration(struct size_class *class, struct page *page)
+{
+	struct page *head_extra = (struct page *)page_private(page);
+
+	BUG_ON(!is_first_page(page));
+
+	__SetPageMigration(page);
+	if (!class->huge && head_extra) {
+		struct migration *nextm;
+
+		__SetPageMigration(head_extra);
+		list_for_each_entry(nextm, &zs_page_lru(head_extra), zs_lru)
+			__SetPageMigration(nextm->zs_page);
+	}
+}
+#endif
+
 /**
  * zs_malloc - Allocate block of given size from pool.
  * @pool: pool to allocate from
@@ -1401,16 +1874,21 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size)
 			free_handle(pool, handle);
 			return 0;
 		}
-
+#ifdef CONFIG_MIGRATION
+		first_page->migration->zs_fg = ZS_EMPTY;
+#else
 		set_zspage_mapping(first_page, class->index, ZS_EMPTY);
+#endif
 		atomic_long_add(class->pages_per_zspage,
 					&pool->pages_allocated);
 
 		spin_lock(&class->lock);
+#ifdef CONFIG_MIGRATION
+		set_zspage_migration(class, first_page);
+#endif
 		zs_stat_inc(class, OBJ_ALLOCATED, get_maxobj_per_zspage(
 				class->size, class->pages_per_zspage));
 	}
-
 	obj = obj_malloc(first_page, class, handle);
 	/* Now move the zspage to another fullness group, if required */
 	fix_fullness_group(class, first_page);
@@ -1446,7 +1924,7 @@ static void obj_free(struct zs_pool *pool, struct size_class *class,
 		set_page_private(first_page, 0);
 	kunmap_atomic(vaddr);
 	first_page->freelist = (void *)obj;
-	first_page->inuse--;
+	zs_page_inuse(first_page)--;
 	zs_stat_dec(class, OBJ_USED, 1);
 }
 
@@ -1454,20 +1932,30 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 {
 	struct page *first_page, *f_page;
 	unsigned long obj, f_objidx;
+#ifndef CONFIG_MIGRATION
 	int class_idx;
+#endif
 	struct size_class *class;
 	enum fullness_group fullness;
 
 	if (unlikely(!handle))
 		return;
 
+#ifdef CONFIG_MIGRATION
+	read_lock(&zs_tag_rwlock);
+#endif
 	pin_tag(handle);
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &f_page, &f_objidx);
 	first_page = get_first_page(f_page);
 
+#ifdef CONFIG_MIGRATION
+	fullness = first_page->migration->zs_fg;
+	class = first_page->migration->zs_class;
+#else
 	get_zspage_mapping(first_page, &class_idx, &fullness);
 	class = pool->size_class[class_idx];
+#endif
 
 	spin_lock(&class->lock);
 	obj_free(pool, class, obj);
@@ -1481,6 +1969,9 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	}
 	spin_unlock(&class->lock);
 	unpin_tag(handle);
+#ifdef CONFIG_MIGRATION
+	read_unlock(&zs_tag_rwlock);
+#endif
 
 	free_handle(pool, handle);
 }
@@ -1672,7 +2163,12 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
 
 	fullness = get_fullness_group(first_page);
 	insert_zspage(first_page, class, fullness);
+#ifdef CONFIG_MIGRATION
+	first_page->migration->zs_class = class;
+	first_page->migration->zs_fg = fullness;
+#else
 	set_zspage_mapping(first_page, class->index, fullness);
+#endif
 
 	if (fullness == ZS_EMPTY) {
 		zs_stat_dec(class, OBJ_ALLOCATED, get_maxobj_per_zspage(
@@ -1928,6 +2424,10 @@ struct zs_pool *zs_create_pool(char *name, gfp_t flags)
 			get_maxobj_per_zspage(size, pages_per_zspage) == 1)
 			class->huge = true;
 		spin_lock_init(&class->lock);
+		atomic_set(&class->count, 0);
+#ifdef CONFIG_MIGRATION
+		get_class(class);
+#endif
 		pool->size_class[i] = class;
 
 		prev_class = class;
@@ -1975,7 +2475,13 @@ void zs_destroy_pool(struct zs_pool *pool)
 					class->size, fg);
 			}
 		}
+#ifdef CONFIG_MIGRATION
+		write_lock(&zs_class_rwlock);
+		put_class(class);
+		write_unlock(&zs_class_rwlock);
+#else
 		kfree(class);
+#endif
 	}
 
 	destroy_handle_cache(pool);
@@ -1992,6 +2498,11 @@ static int __init zs_init(void)
 	if (ret)
 		goto notifier_fail;
 
+#ifdef CONFIG_MIGRATION
+	rwlock_init(&zs_class_rwlock);
+	rwlock_init(&zs_tag_rwlock);
+#endif
+
 	init_zs_size_classes();
 
 #ifdef CONFIG_ZPOOL
@@ -2003,6 +2514,17 @@ static int __init zs_init(void)
 		pr_err("zs stat initialization failed\n");
 		goto stat_fail;
 	}
+
+#ifdef CONFIG_MIGRATION
+	zs_migration_cachep = kmem_cache_create("zs_migration",
+						sizeof(struct migration),
+						0, 0, NULL);
+	if (!zs_migration_cachep) {
+		pr_err("zs migration initialization failed\n");
+		goto stat_fail;
+	}
+#endif
+
 	return 0;
 
 stat_fail:
@@ -2017,6 +2539,9 @@ notifier_fail:
 
 static void __exit zs_exit(void)
 {
+#ifdef CONFIG_MIGRATION
+	kmem_cache_destroy(zs_migration_cachep);
+#endif
 #ifdef CONFIG_ZPOOL
 	zpool_unregister_driver(&zs_zpool_driver);
 #endif
-- 
1.9.1

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [RFC v2 3/3] zram: make create "__GFP_MOVABLE" pool
  2015-10-15  9:08 [RFC v2 0/3] zsmalloc: make its pages can be migrated Hui Zhu
  2015-10-15  9:09 ` [RFC v2 1/3] migrate: new struct migration and add it to struct page Hui Zhu
  2015-10-15  9:09 ` [RFC v2 2/3] zsmalloc: mark its page "PageMigration" Hui Zhu
@ 2015-10-15  9:09 ` Hui Zhu
  2 siblings, 0 replies; 7+ messages in thread
From: Hui Zhu @ 2015-10-15  9:09 UTC (permalink / raw)
  To: Minchan Kim, Nitin Gupta, Sergey Senozhatsky, Andrew Morton,
	Kirill A. Shutemov, Mel Gorman, Dave Hansen, Johannes Weiner,
	Michal Hocko, Konstantin Khlebnikov, Andrea Arcangeli,
	Alexander Duyck, Tejun Heo, Joonsoo Kim, Naoya Horiguchi,
	Jennifer Herbert, Hugh Dickins, Vladimir Davydov,
	Vlastimil Babka, David Rientjes, Sasha Levin,
	Steven Rostedt (Red Hat),
	Aneesh Kumar K.V, Wanpeng Li, Geert Uytterhoeven, Greg Thelen,
	Al Viro, linux-kernel, linux-mm
  Cc: teawater, Hui Zhu

Change the flags passed to zs_create_pool so that zram allocates movable
zsmalloc pages when CONFIG_MIGRATION is enabled.
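
For context (condensed from patch 2, not new code in this patch), only the
zspage data pages keep the movable flag; zsmalloc strips it again for its
internal handle and struct migration allocations, so those stay unmovable:

	/* zram (this patch): ask for a movable pool. */
	meta->mem_pool = zs_create_pool(pool_name,
				GFP_NOIO | __GFP_HIGHMEM | __GFP_MOVABLE);

	/* zsmalloc (patch 2): internal metadata drops the flag. */
	kmem_cache_alloc(pool->handle_cachep,
			 pool->flags & ~(__GFP_HIGHMEM | __GFP_MOVABLE));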

Signed-off-by: Hui Zhu <zhuhui@xiaomi.com>
---
 drivers/block/zram/zram_drv.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 9fa15bb..3e1e955 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -514,7 +514,13 @@ static struct zram_meta *zram_meta_alloc(char *pool_name, u64 disksize)
 		goto out_error;
 	}
 
+#ifdef CONFIG_MIGRATION
+	meta->mem_pool
+		= zs_create_pool(pool_name,
+				 GFP_NOIO | __GFP_HIGHMEM | __GFP_MOVABLE);
+#else
 	meta->mem_pool = zs_create_pool(pool_name, GFP_NOIO | __GFP_HIGHMEM);
+#endif
 	if (!meta->mem_pool) {
 		pr_err("Error creating memory pool\n");
 		goto out_error;
-- 
1.9.1

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [RFC v2 1/3] migrate: new struct migration and add it to struct page
  2015-10-15  9:09 ` [RFC v2 1/3] migrate: new struct migration and add it to struct page Hui Zhu
@ 2015-10-15  9:27   ` Vlastimil Babka
  2015-10-15  9:53     ` Minchan Kim
  0 siblings, 1 reply; 7+ messages in thread
From: Vlastimil Babka @ 2015-10-15  9:27 UTC (permalink / raw)
  To: Hui Zhu, Minchan Kim, Nitin Gupta, Sergey Senozhatsky,
	Andrew Morton, Kirill A. Shutemov, Mel Gorman, Dave Hansen,
	Johannes Weiner, Michal Hocko, Konstantin Khlebnikov,
	Andrea Arcangeli, Alexander Duyck, Tejun Heo, Joonsoo Kim,
	Naoya Horiguchi, Jennifer Herbert, Hugh Dickins,
	Vladimir Davydov, David Rientjes, Sasha Levin,
	Steven Rostedt (Red Hat),
	Aneesh Kumar K.V, Wanpeng Li, Geert Uytterhoeven, Greg Thelen,
	Al Viro, linux-kernel, linux-mm
  Cc: teawater

On 10/15/2015 11:09 AM, Hui Zhu wrote:
> I learned that adding separate function interfaces to struct page is
> really not a good idea.  So I added a new struct migration that holds
> all of the migration interfaces, and added it to struct page in the
> union that contains "mapping".

That's better, but not as flexible as the previously proposed approaches 
that Sergey pointed you at:

  http://lkml.iu.edu/hypermail/linux/kernel/1507.0/03233.html
  http://lkml.iu.edu/hypermail/linux/kernel/1508.1/00696.html

There the operations are reachable via mapping, so we can support the
special migration operations also when mapping is otherwise needed; your
patch excludes using mapping.
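
For reference, "reachable via mapping" means roughly the following (a loose
sketch of the linked proposals, not of this patch; the zzz_* names are
invented and the exact isolate/putback hooks differ between those postings):

static int zzz_migratepage(struct address_space *mapping,
			   struct page *newpage, struct page *page,
			   enum migrate_mode mode)
{
	/* copy the contents and fix up the driver's metadata here */
	return MIGRATEPAGE_SUCCESS;
}

static const struct address_space_operations zzz_aops = {
	.migratepage	= zzz_migratepage,
	/* the linked proposals add isolate/putback callbacks next to this */
};

/* The driver points page->mapping at a mapping backed by zzz_aops, so the
 * callbacks are found through mapping->a_ops and mapping itself remains
 * usable for the driver's own purposes.
 */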

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [RFC v2 1/3] migrate: new struct migration and add it to struct page
  2015-10-15  9:27   ` Vlastimil Babka
@ 2015-10-15  9:53     ` Minchan Kim
  2015-10-19 12:08       ` Hui Zhu
  0 siblings, 1 reply; 7+ messages in thread
From: Minchan Kim @ 2015-10-15  9:53 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Hui Zhu, Nitin Gupta, Sergey Senozhatsky, Andrew Morton,
	Kirill A. Shutemov, Mel Gorman, Dave Hansen, Johannes Weiner,
	Michal Hocko, Konstantin Khlebnikov, Andrea Arcangeli,
	Alexander Duyck, Tejun Heo, Joonsoo Kim, Naoya Horiguchi,
	Jennifer Herbert, Hugh Dickins, Vladimir Davydov, David Rientjes,
	Sasha Levin, Steven Rostedt (Red Hat),
	Aneesh Kumar K.V, Wanpeng Li, Geert Uytterhoeven, Greg Thelen,
	Al Viro, linux-kernel, linux-mm, teawater

On Thu, Oct 15, 2015 at 11:27:15AM +0200, Vlastimil Babka wrote:
> On 10/15/2015 11:09 AM, Hui Zhu wrote:
> >I learned that adding separate function interfaces to struct page is
> >really not a good idea.  So I added a new struct migration that holds
> >all of the migration interfaces, and added it to struct page in the
> >union that contains "mapping".
> 
> That's better, but not as flexible as the previously proposed
> approaches that Sergey pointed you at:
> 
>  http://lkml.iu.edu/hypermail/linux/kernel/1507.0/03233.html
>  http://lkml.iu.edu/hypermail/linux/kernel/1508.1/00696.html
> 
> There the operations are reachable via mapping, so we can support the
> special migration operations also when mapping is otherwise needed; your
> patch excludes using mapping.
> 

Hello Hui,

FYI, I have taken over the work from Gioh and plan to improve it.
So could you wait a bit? Of course, if you have a better idea, feel free
to post it.

Thanks.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [RFC v2 1/3] migrate: new struct migration and add it to struct page
  2015-10-15  9:53     ` Minchan Kim
@ 2015-10-19 12:08       ` Hui Zhu
  0 siblings, 0 replies; 7+ messages in thread
From: Hui Zhu @ 2015-10-19 12:08 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Vlastimil Babka, Hui Zhu, Nitin Gupta, Sergey Senozhatsky,
	Andrew Morton, Kirill A. Shutemov, Mel Gorman, Dave Hansen,
	Johannes Weiner, Michal Hocko, Konstantin Khlebnikov,
	Andrea Arcangeli, Alexander Duyck, Tejun Heo, Joonsoo Kim,
	Naoya Horiguchi, Jennifer Herbert, Hugh Dickins,
	Vladimir Davydov, David Rientjes, Sasha Levin,
	Steven Rostedt (Red Hat),
	Aneesh Kumar K.V, Wanpeng Li, Geert Uytterhoeven, Greg Thelen,
	Al Viro, linux-kernel, Linux Memory Management List

On Thu, Oct 15, 2015 at 5:53 PM, Minchan Kim <minchan@kernel.org> wrote:
> On Thu, Oct 15, 2015 at 11:27:15AM +0200, Vlastimil Babka wrote:
>> On 10/15/2015 11:09 AM, Hui Zhu wrote:
>> >I learned that adding separate function interfaces to struct page is
>> >really not a good idea.  So I added a new struct migration that holds
>> >all of the migration interfaces, and added it to struct page in the
>> >union that contains "mapping".
>>
>> That's better, but not as flexible as the previously proposed
>> approaches that Sergey pointed you at:
>>
>>  http://lkml.iu.edu/hypermail/linux/kernel/1507.0/03233.html
>>  http://lkml.iu.edu/hypermail/linux/kernel/1508.1/00696.html
>>
>> There the operations are reachable via mapping, so we can support the
>> special migration operations also when mapping is otherwise needed; your
>> patch excludes using mapping.
>>
>
> Hello Hui,
>
> FYI, I have taken over the work from Gioh and plan to improve it.
> So could you wait a bit? Of course, if you have a better idea, feel free
> to post it.
>
> Thanks.

Hi Minchan and Vlastimil,

If you don't mind, I would like to wait for those patches and focus on the
page-movable part of zsmalloc.
What do you think?

Best,
Hui

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2015-10-19 12:08 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-10-15  9:08 [RFC v2 0/3] zsmalloc: make its pages can be migrated Hui Zhu
2015-10-15  9:09 ` [RFC v2 1/3] migrate: new struct migration and add it to struct page Hui Zhu
2015-10-15  9:27   ` Vlastimil Babka
2015-10-15  9:53     ` Minchan Kim
2015-10-19 12:08       ` Hui Zhu
2015-10-15  9:09 ` [RFC v2 2/3] zsmalloc: mark its page "PageMigration" Hui Zhu
2015-10-15  9:09 ` [RFC v2 3/3] zram: make create "__GFP_MOVABLE" pool Hui Zhu
