linux-mm.kvack.org archive mirror
* [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page
@ 2023-11-30 10:12 Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 01/21] mm/zsmalloc: create new struct zsdesc Hyeonggon Yoo
                   ` (21 more replies)
  0 siblings, 22 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

RFC v2: https://lore.kernel.org/linux-mm/20230713042037.980211-1-42.hyeyoo@gmail.com/

v2 -> v3:
 - rebased to the latest mm-unstable
 - addressed comments from Sergey Senozhatsky (moving zsdesc definition,
   kerneldoc fix) and Yosry Ahmed (adding memcg_data field to zsdesc)


The v3 update is a bit late, but I still believe this work is worth doing.
Comments, reviews, and acks from maintainers and other interested people
would be appreciated.

Cover Letter:

The purpose of this series is to define zsmalloc's own memory descriptor
instead of reusing various fields of struct page. This is part of the
effort to reduce struct page to a single unsigned long and enable
dynamic allocation of memory descriptors.

While [1] outlines this ultimate objective, zsmalloc's current use of
struct page is tightly coupled to its layout, which makes allocating
memory descriptors separately challenging.

Therefore, this series introduces a new descriptor for zsmalloc, called
zsdesc. It overlays struct page for now, but will eventually be allocated
independently. Apart from paving the way for dynamic allocation of
descriptors, this is also a nice cleanup.
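
For reference, a simplified sketch of the overlay approach (the full
definition, together with compile-time checks that its layout matches
struct page, is added in patch 1):

struct zsdesc {
	unsigned long __page_flags;		/* overlays page->flags */
	struct list_head __page_lru;		/* unused, kept for layout */
	struct movable_operations *mops;	/* overlays page->mapping */
	union {
		struct zsdesc *next;		/* overlays page->index */
		unsigned long handle;		/* for huge zspages */
	};
	struct zspage *zspage;			/* overlays page->private */
	unsigned int first_obj_offset;		/* overlays page->page_type */
	unsigned int __page_refcount;		/* overlays page->_refcount */
};

static_assert(sizeof(struct zsdesc) <= sizeof(struct page));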

This work is also available at:
	https://gitlab.com/hyeyoo/linux/-/tree/separate_zsdesc_rfc-v3

[1] State Of The Page, August 2022
https://lore.kernel.org/lkml/YvV1KTyzZ+Jrtj9x@casper.infradead.org

Hyeonggon Yoo (21):
  mm/zsmalloc: create new struct zsdesc
  mm/zsmalloc: add utility functions for zsdesc
  mm/zsmalloc: replace first_page with first_zsdesc in struct zspage
  mm/zsmalloc: add alternatives of frequently used helper functions
  mm/zsmalloc: convert {try,}lock_zspage() to use zsdesc
  mm/zsmalloc: convert __zs_{map,unmap}_object() to use zsdesc
  mm/zsmalloc: convert obj_to_location() and its users to use zsdesc
  mm/zsmalloc: convert obj_malloc() to use zsdesc
  mm/zsmalloc: convert create_page_chain() and its users to use zsdesc
  mm/zsmalloc: convert obj_allocated() and related helpers to use zsdesc
  mm/zsmalloc: convert init_zspage() to use zsdesc
  mm/zsmalloc: convert obj_to_page() and zs_free() to use zsdesc
  mm/zsmalloc: convert reset_page() to reset_zsdesc()
  mm/zsmalloc: convert zs_page_{isolate,migrate,putback} to use zsdesc
  mm/zsmalloc: convert __free_zspage() to use zsdesc
  mm/zsmalloc: convert location_to_obj() to use zsdesc
  mm/zsmalloc: convert migrate_zspage() to use zsdesc
  mm/zsmalloc: convert get_zspage() to take zsdesc
  mm/zsmalloc: convert SetZsPageMovable() to use zsdesc
  mm/zsmalloc: remove now unused helper functions
  mm/zsmalloc: convert {get,set}_first_obj_offset() to use zsdesc

 mm/zsmalloc.c | 578 +++++++++++++++++++++++++++++++-------------------
 1 file changed, 364 insertions(+), 214 deletions(-)

-- 
2.39.3




* [RFC PATCH v3 01/21] mm/zsmalloc: create new struct zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 02/21] mm/zsmalloc: add utility functions for zsdesc Hyeonggon Yoo
                   ` (20 subsequent siblings)
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

Currently zsmalloc reuses fields of struct page. As part of simplifying
struct page, create a dedicated type for zsmalloc, called zsdesc.

Remove comments about how zsmalloc reuses fields of struct page, because
zsdesc uses more intuitive names.

Note that zsmalloc does not use PG_owner_priv_1 after commit a41ec880aa7b
("zsmalloc: move huge compressed obj from page to zspage"). Thus only
document how zsmalloc uses the PG_private flag.

It is very tempting to rearrange zsdesc, but the three words after the
flags field are not available to zsmalloc. Add comments about that.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 67 ++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 50 insertions(+), 17 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b1c0dad7f4cf..60ce2a4dfeeb 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -11,23 +11,6 @@
  * Released under the terms of GNU General Public License Version 2.0
  */
 
-/*
- * Following is how we use various fields and flags of underlying
- * struct page(s) to form a zspage.
- *
- * Usage of struct page fields:
- *	page->private: points to zspage
- *	page->index: links together all component pages of a zspage
- *		For the huge page, this is always 0, so we use this field
- *		to store handle.
- *	page->page_type: first object offset in a subpage of zspage
- *
- * Usage of struct page flags:
- *	PG_private: identifies the first component page
- *	PG_owner_priv_1: identifies the huge component page
- *
- */
-
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
 /*
@@ -241,6 +224,56 @@ struct zs_pool {
 	atomic_t compaction_in_progress;
 };
 
+/*
+ * struct zsdesc - memory descriptor for zsmalloc memory
+ *
+ * This struct overlays struct page for now. Do not modify without a
+ * good understanding of the issues.
+ *
+ * Usage of struct page flags on zsdesc:
+ *	PG_private: identifies the first component zsdesc
+ */
+struct zsdesc {
+	unsigned long __page_flags;
+
+	/*
+	 * Although not used by zsmalloc, this field is used by
+	 * non-LRU movable page migration code. Leave it unused.
+	 */
+	struct list_head __page_lru;
+
+	/* Always points to zsmalloc_mops with PAGE_MAPPING_MOVABLE set */
+	struct movable_operations *mops;
+
+	union {
+		/* linked list of all zsdescs in a zspage */
+		struct zsdesc *next;
+		/* for huge zspages */
+		unsigned long handle;
+	};
+
+	struct zspage *zspage;
+	unsigned int first_obj_offset;
+	unsigned int __page_refcount;
+#ifdef CONFIG_MEMCG
+	unsigned long __page_memcg_data;
+#endif
+};
+
+#define ZSDESC_MATCH(pg, zs) \
+	static_assert(offsetof(struct page, pg) == offsetof(struct zsdesc, zs))
+
+ZSDESC_MATCH(flags, __page_flags);
+ZSDESC_MATCH(lru, __page_lru);
+ZSDESC_MATCH(mapping, mops);
+ZSDESC_MATCH(index, next);
+ZSDESC_MATCH(index, handle);
+ZSDESC_MATCH(private, zspage);
+ZSDESC_MATCH(page_type, first_obj_offset);
+ZSDESC_MATCH(_refcount, __page_refcount);
+#undef ZSDESC_MATCH
+static_assert(sizeof(struct zsdesc) <= sizeof(struct page));
+
 struct zspage {
 	struct {
 		unsigned int huge:HUGE_BITS;
-- 
2.39.3




* [RFC PATCH v3 02/21] mm/zsmalloc: add utility functions for zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 01/21] mm/zsmalloc: create new struct zsdesc Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 03/21] mm/zsmalloc: replace first_page with first_zsdesc in struct zspage Hyeonggon Yoo
                   ` (19 subsequent siblings)
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

Introduce basic utility functions for zsdesc to avoid directly accessing
fields of struct page. More helpers will be defined later.

zsdesc_page() is defined with _Generic to preserve constness.
page_zsdesc() does not call compound_head() because a zsdesc always
describes a base page.
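
To illustrate the constness-preserving behaviour, here is a small
stand-alone sketch (userspace code with mock struct definitions, not part
of the patch):

#include <stdio.h>

/* mock types, only to demonstrate the _Generic trick used by zsdesc_page() */
struct page { unsigned long flags; };
struct zsdesc { unsigned long __page_flags; };

#define zsdesc_page(zdesc) (_Generic((zdesc),				\
		const struct zsdesc *:	(const struct page *)(zdesc),	\
		struct zsdesc *:	(struct page *)(zdesc)))

int main(void)
{
	struct zsdesc zd = { .__page_flags = 1 };
	const struct zsdesc *czd = &zd;

	struct page *p = zsdesc_page(&zd);	  /* non-const in, non-const out */
	const struct page *cp = zsdesc_page(czd); /* const in, const out */

	/* cp->flags = 0; would not compile: constness is preserved */
	printf("%lu %lu\n", p->flags, cp->flags);
	return 0;
}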

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 60ce2a4dfeeb..47df9103787e 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -274,6 +274,39 @@ ZSDESC_MATCH(_refcount, __page_refcount);
 #undef ZSDESC_MATCH
 static_assert(sizeof(struct zsdesc) <= sizeof(struct page));
 
+#define zsdesc_page(zdesc) (_Generic((zdesc),				\
+		const struct zsdesc *:	(const struct page *)zdesc,	\
+		struct zsdesc *:	(struct page *)zdesc))
+
+static inline struct zsdesc *page_zsdesc(struct page *page)
+{
+	return (struct zsdesc *)page;
+}
+
+static inline unsigned long zsdesc_pfn(const struct zsdesc *zsdesc)
+{
+	return page_to_pfn(zsdesc_page(zsdesc));
+}
+
+static inline struct zsdesc *pfn_zsdesc(unsigned long pfn)
+{
+	return page_zsdesc(pfn_to_page(pfn));
+}
+
+static inline void zsdesc_get(struct zsdesc *zsdesc)
+{
+	struct folio *folio = (struct folio *)zsdesc;
+
+	folio_get(folio);
+}
+
+static inline void zsdesc_put(struct zsdesc *zsdesc)
+{
+	struct folio *folio = (struct folio *)zsdesc;
+
+	folio_put(folio);
+}
+
 struct zspage {
 	struct {
 		unsigned int huge:HUGE_BITS;
-- 
2.39.3




* [RFC PATCH v3 03/21] mm/zsmalloc: replace first_page with first_zsdesc in struct zspage
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 01/21] mm/zsmalloc: create new struct zsdesc Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 02/21] mm/zsmalloc: add utility functions for zsdesc Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-12-01 19:23   ` Minchan Kim
  2023-11-30 10:12 ` [RFC PATCH v3 04/21] mm/zsmalloc: add alternatives of frequently used helper functions Hyeonggon Yoo
                   ` (18 subsequent siblings)
  21 siblings, 1 reply; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

Replace first_page with first_zsdesc in struct zspage for further
conversion.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 47df9103787e..4c9f9a2cb681 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -317,7 +317,7 @@ struct zspage {
 	};
 	unsigned int inuse;
 	unsigned int freeobj;
-	struct page *first_page;
+	struct zsdesc *first_zsdesc;
 	struct list_head list; /* fullness list */
 	struct zs_pool *pool;
 	rwlock_t lock;
@@ -516,7 +516,7 @@ static inline void mod_zspage_inuse(struct zspage *zspage, int val)
 
 static inline struct page *get_first_page(struct zspage *zspage)
 {
-	struct page *first_page = zspage->first_page;
+	struct page *first_page = zsdesc_page(zspage->first_zsdesc);
 
 	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 	return first_page;
@@ -1028,7 +1028,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage,
 		set_page_private(page, (unsigned long)zspage);
 		page->index = 0;
 		if (i == 0) {
-			zspage->first_page = page;
+			zspage->first_zsdesc = page_zsdesc(page);
 			SetPagePrivate(page);
 			if (unlikely(class->objs_per_zspage == 1 &&
 					class->pages_per_zspage == 1))
@@ -1402,7 +1402,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 		link->handle = handle;
 	else
 		/* record handle to page->index */
-		zspage->first_page->index = handle;
+		zspage->first_zsdesc->handle = handle;
 
 	kunmap_atomic(vaddr);
 	mod_zspage_inuse(zspage, 1);
-- 
2.39.3




* [RFC PATCH v3 04/21] mm/zsmalloc: add alternatives of frequently used helper functions
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (2 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 03/21] mm/zsmalloc: replace first_page with first_zsdesc in struct zspage Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-12-04  3:45   ` Matthew Wilcox
  2023-11-30 10:12 ` [RFC PATCH v3 05/21] mm/zsmalloc: convert {try,}lock_zspage() to use zsdesc Hyeonggon Yoo
                   ` (17 subsequent siblings)
  21 siblings, 1 reply; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

get_first_page(), get_next_page() and is_first_page() are frequently used
throughout the zsmalloc code. As replacing them all at once would be hard
to review, add alternative helpers first and gradually convert their users
to the new functions.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 4c9f9a2cb681..c511539bee8c 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -502,6 +502,11 @@ static __maybe_unused int is_first_page(struct page *page)
 	return PagePrivate(page);
 }
 
+static __maybe_unused int is_first_zsdesc(struct zsdesc *zsdesc)
+{
+	return PagePrivate(zsdesc_page(zsdesc));
+}
+
 /* Protected by pool->lock */
 static inline int get_zspage_inuse(struct zspage *zspage)
 {
@@ -514,7 +519,7 @@ static inline void mod_zspage_inuse(struct zspage *zspage, int val)
 	zspage->inuse += val;
 }
 
-static inline struct page *get_first_page(struct zspage *zspage)
+static __maybe_unused inline struct page *get_first_page(struct zspage *zspage)
 {
 	struct page *first_page = zsdesc_page(zspage->first_zsdesc);
 
@@ -522,6 +527,14 @@ static inline struct page *get_first_page(struct zspage *zspage)
 	return first_page;
 }
 
+static __maybe_unused struct zsdesc *get_first_zsdesc(struct zspage *zspage)
+{
+	struct zsdesc *first_zsdesc = zspage->first_zsdesc;
+
+	VM_BUG_ON_PAGE(!is_first_zsdesc(first_zsdesc), zsdesc_page(first_zsdesc));
+	return first_zsdesc;
+}
+
 static inline unsigned int get_first_obj_offset(struct page *page)
 {
 	return page->page_type;
@@ -810,7 +823,7 @@ static struct zspage *get_zspage(struct page *page)
 	return zspage;
 }
 
-static struct page *get_next_page(struct page *page)
+static __maybe_unused struct page *get_next_page(struct page *page)
 {
 	struct zspage *zspage = get_zspage(page);
 
@@ -820,6 +833,16 @@ static struct page *get_next_page(struct page *page)
 	return (struct page *)page->index;
 }
 
+static __maybe_unused struct zsdesc *get_next_zsdesc(struct zsdesc *zsdesc)
+{
+	struct zspage *zspage = get_zspage(zsdesc_page(zsdesc));
+
+	if (unlikely(ZsHugePage(zspage)))
+		return NULL;
+
+	return zsdesc->next;
+}
+
 /**
  * obj_to_location - get (<page>, <obj_idx>) from encoded object value
  * @obj: the encoded object value
-- 
2.39.3




* [RFC PATCH v3 05/21] mm/zsmalloc: convert {try,}lock_zspage() to use zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (3 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 04/21] mm/zsmalloc: add alternatives of frequently used helper functions Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 06/21] mm/zsmalloc: convert __zs_{map,unmap}_object() " Hyeonggon Yoo
                   ` (16 subsequent siblings)
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

Introduce trylock_zsdesc(), unlock_zsdesc(), wait_on_zsdesc_locked()
and convert trylock_zspage() and lock_zspage() to use zsdesc.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 55 ++++++++++++++++++++++++++++++++-------------------
 1 file changed, 35 insertions(+), 20 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index c511539bee8c..91fccc67185b 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -307,6 +307,21 @@ static inline void zsdesc_put(struct zsdesc *zsdesc)
 	folio_put(folio);
 }
 
+static inline int trylock_zsdesc(struct zsdesc *zsdesc)
+{
+	return trylock_page(zsdesc_page(zsdesc));
+}
+
+static inline void unlock_zsdesc(struct zsdesc *zsdesc)
+{
+	unlock_page(zsdesc_page(zsdesc));
+}
+
+static inline void wait_on_zsdesc_locked(struct zsdesc *zsdesc)
+{
+	wait_on_page_locked(zsdesc_page(zsdesc));
+}
+
 struct zspage {
 	struct {
 		unsigned int huge:HUGE_BITS;
@@ -915,11 +930,11 @@ static void reset_page(struct page *page)
 
 static int trylock_zspage(struct zspage *zspage)
 {
-	struct page *cursor, *fail;
+	struct zsdesc *cursor, *fail;
 
-	for (cursor = get_first_page(zspage); cursor != NULL; cursor =
-					get_next_page(cursor)) {
-		if (!trylock_page(cursor)) {
+	for (cursor = get_first_zsdesc(zspage); cursor != NULL; cursor =
+					get_next_zsdesc(cursor)) {
+		if (!trylock_zsdesc(cursor)) {
 			fail = cursor;
 			goto unlock;
 		}
@@ -927,9 +942,9 @@ static int trylock_zspage(struct zspage *zspage)
 
 	return 1;
 unlock:
-	for (cursor = get_first_page(zspage); cursor != fail; cursor =
-					get_next_page(cursor))
-		unlock_page(cursor);
+	for (cursor = get_first_zsdesc(zspage); cursor != fail; cursor =
+					get_next_zsdesc(cursor))
+		unlock_zsdesc(cursor);
 
 	return 0;
 }
@@ -1759,7 +1774,7 @@ static int putback_zspage(struct size_class *class, struct zspage *zspage)
  */
 static void lock_zspage(struct zspage *zspage)
 {
-	struct page *curr_page, *page;
+	struct zsdesc *curr_zsdesc, *zsdesc;
 
 	/*
 	 * Pages we haven't locked yet can be migrated off the list while we're
@@ -1771,24 +1786,24 @@ static void lock_zspage(struct zspage *zspage)
 	 */
 	while (1) {
 		migrate_read_lock(zspage);
-		page = get_first_page(zspage);
-		if (trylock_page(page))
+		zsdesc = get_first_zsdesc(zspage);
+		if (trylock_zsdesc(zsdesc))
 			break;
-		get_page(page);
+		zsdesc_get(zsdesc);
 		migrate_read_unlock(zspage);
-		wait_on_page_locked(page);
-		put_page(page);
+		wait_on_zsdesc_locked(zsdesc);
+		zsdesc_put(zsdesc);
 	}
 
-	curr_page = page;
-	while ((page = get_next_page(curr_page))) {
-		if (trylock_page(page)) {
-			curr_page = page;
+	curr_zsdesc = zsdesc;
+	while ((zsdesc = get_next_zsdesc(curr_zsdesc))) {
+		if (trylock_zsdesc(zsdesc)) {
+			curr_zsdesc = zsdesc;
 		} else {
-			get_page(page);
+			zsdesc_get(zsdesc);
 			migrate_read_unlock(zspage);
-			wait_on_page_locked(page);
-			put_page(page);
+			wait_on_zsdesc_locked(zsdesc);
+			zsdesc_put(zsdesc);
 			migrate_read_lock(zspage);
 		}
 	}
-- 
2.39.3




* [RFC PATCH v3 06/21] mm/zsmalloc: convert __zs_{map,unmap}_object() to use zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (4 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 05/21] mm/zsmalloc: convert {try,}lock_zspage() to use zsdesc Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 07/21] mm/zsmalloc: convert obj_to_location() and its users " Hyeonggon Yoo
                   ` (15 subsequent siblings)
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

These two functions take a pointer to an array of struct page. Introduce
zsdesc_kmap_atomic() and make __zs_{map,unmap}_object() take a pointer
to an array of zsdesc instead of page.

Add (admittedly ugly) type casts at the call sites; the casts will be
removed in the next patch.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 91fccc67185b..be3b8734bdf2 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -322,6 +322,11 @@ static inline void wait_on_zsdesc_locked(struct zsdesc *zsdesc)
 	wait_on_page_locked(zsdesc_page(zsdesc));
 }
 
+static inline void *zsdesc_kmap_atomic(struct zsdesc *zsdesc)
+{
+	return kmap_atomic(zsdesc_page(zsdesc));
+}
+
 struct zspage {
 	struct {
 		unsigned int huge:HUGE_BITS;
@@ -1155,7 +1160,7 @@ static inline void __zs_cpu_down(struct mapping_area *area)
 }
 
 static void *__zs_map_object(struct mapping_area *area,
-			struct page *pages[2], int off, int size)
+			struct zsdesc *zsdescs[2], int off, int size)
 {
 	int sizes[2];
 	void *addr;
@@ -1172,10 +1177,10 @@ static void *__zs_map_object(struct mapping_area *area,
 	sizes[1] = size - sizes[0];
 
 	/* copy object to per-cpu buffer */
-	addr = kmap_atomic(pages[0]);
+	addr = zsdesc_kmap_atomic(zsdescs[0]);
 	memcpy(buf, addr + off, sizes[0]);
 	kunmap_atomic(addr);
-	addr = kmap_atomic(pages[1]);
+	addr = zsdesc_kmap_atomic(zsdescs[1]);
 	memcpy(buf + sizes[0], addr, sizes[1]);
 	kunmap_atomic(addr);
 out:
@@ -1183,7 +1188,7 @@ static void *__zs_map_object(struct mapping_area *area,
 }
 
 static void __zs_unmap_object(struct mapping_area *area,
-			struct page *pages[2], int off, int size)
+			struct zsdesc *zsdescs[2], int off, int size)
 {
 	int sizes[2];
 	void *addr;
@@ -1202,10 +1207,10 @@ static void __zs_unmap_object(struct mapping_area *area,
 	sizes[1] = size - sizes[0];
 
 	/* copy per-cpu buffer to object */
-	addr = kmap_atomic(pages[0]);
+	addr = zsdesc_kmap_atomic(zsdescs[0]);
 	memcpy(addr + off, buf, sizes[0]);
 	kunmap_atomic(addr);
-	addr = kmap_atomic(pages[1]);
+	addr = zsdesc_kmap_atomic(zsdescs[1]);
 	memcpy(addr, buf + sizes[0], sizes[1]);
 	kunmap_atomic(addr);
 
@@ -1346,7 +1351,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	pages[1] = get_next_page(page);
 	BUG_ON(!pages[1]);
 
-	ret = __zs_map_object(area, pages, off, class->size);
+	ret = __zs_map_object(area, (struct zsdesc **)pages, off, class->size);
 out:
 	if (likely(!ZsHugePage(zspage)))
 		ret += ZS_HANDLE_SIZE;
@@ -1381,7 +1386,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 		pages[1] = get_next_page(page);
 		BUG_ON(!pages[1]);
 
-		__zs_unmap_object(area, pages, off, class->size);
+		__zs_unmap_object(area, (struct zsdesc **)pages, off, class->size);
 	}
 	local_unlock(&zs_map_area.lock);
 
-- 
2.39.3




* [RFC PATCH v3 07/21] mm/zsmalloc: convert obj_to_location() and its users to use zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (5 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 06/21] mm/zsmalloc: convert __zs_{map,unmap}_object() " Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 08/21] mm/zsmalloc: convert obj_malloc() " Hyeonggon Yoo
                   ` (14 subsequent siblings)
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

Convert obj_to_location() to take zsdesc and also convert its users
to use zsdesc.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 75 ++++++++++++++++++++++++++-------------------------
 1 file changed, 38 insertions(+), 37 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index be3b8734bdf2..f5a20c20ec19 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -864,16 +864,16 @@ static __maybe_unused struct zsdesc *get_next_zsdesc(struct zsdesc *zsdesc)
 }
 
 /**
- * obj_to_location - get (<page>, <obj_idx>) from encoded object value
+ * obj_to_location - get (<zsdesc>, <obj_idx>) from encoded object value
  * @obj: the encoded object value
- * @page: page object resides in zspage
+ * @zsdesc: zsdesc object resides in zspage
  * @obj_idx: object index
  */
-static void obj_to_location(unsigned long obj, struct page **page,
+static void obj_to_location(unsigned long obj, struct zsdesc **zsdesc,
 				unsigned int *obj_idx)
 {
 	obj >>= OBJ_TAG_BITS;
-	*page = pfn_to_page(obj >> OBJ_INDEX_BITS);
+	*zsdesc = pfn_zsdesc(obj >> OBJ_INDEX_BITS);
 	*obj_idx = (obj & OBJ_INDEX_MASK);
 }
 
@@ -1302,13 +1302,13 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 			enum zs_mapmode mm)
 {
 	struct zspage *zspage;
-	struct page *page;
+	struct zsdesc *zsdesc;
 	unsigned long obj, off;
 	unsigned int obj_idx;
 
 	struct size_class *class;
 	struct mapping_area *area;
-	struct page *pages[2];
+	struct zsdesc *zsdescs[2];
 	void *ret;
 
 	/*
@@ -1321,8 +1321,8 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	/* It guarantees it can get zspage from handle safely */
 	spin_lock(&pool->lock);
 	obj = handle_to_obj(handle);
-	obj_to_location(obj, &page, &obj_idx);
-	zspage = get_zspage(page);
+	obj_to_location(obj, &zsdesc, &obj_idx);
+	zspage = get_zspage(zsdesc_page(zsdesc));
 
 	/*
 	 * migration cannot move any zpages in this zspage. Here, pool->lock
@@ -1341,17 +1341,17 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	area->vm_mm = mm;
 	if (off + class->size <= PAGE_SIZE) {
 		/* this object is contained entirely within a page */
-		area->vm_addr = kmap_atomic(page);
+		area->vm_addr = zsdesc_kmap_atomic(zsdesc);
 		ret = area->vm_addr + off;
 		goto out;
 	}
 
 	/* this object spans two pages */
-	pages[0] = page;
-	pages[1] = get_next_page(page);
-	BUG_ON(!pages[1]);
+	zsdescs[0] = zsdesc;
+	zsdescs[1] = get_next_zsdesc(zsdesc);
+	BUG_ON(!zsdescs[1]);
 
-	ret = __zs_map_object(area, (struct zsdesc **)pages, off, class->size);
+	ret = __zs_map_object(area, zsdescs, off, class->size);
 out:
 	if (likely(!ZsHugePage(zspage)))
 		ret += ZS_HANDLE_SIZE;
@@ -1363,7 +1363,7 @@ EXPORT_SYMBOL_GPL(zs_map_object);
 void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 {
 	struct zspage *zspage;
-	struct page *page;
+	struct zsdesc *zsdesc;
 	unsigned long obj, off;
 	unsigned int obj_idx;
 
@@ -1371,8 +1371,8 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	struct mapping_area *area;
 
 	obj = handle_to_obj(handle);
-	obj_to_location(obj, &page, &obj_idx);
-	zspage = get_zspage(page);
+	obj_to_location(obj, &zsdesc, &obj_idx);
+	zspage = get_zspage(zsdesc_page(zsdesc));
 	class = zspage_class(pool, zspage);
 	off = offset_in_page(class->size * obj_idx);
 
@@ -1380,13 +1380,13 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	if (off + class->size <= PAGE_SIZE)
 		kunmap_atomic(area->vm_addr);
 	else {
-		struct page *pages[2];
+		struct zsdesc *zsdescs[2];
 
-		pages[0] = page;
-		pages[1] = get_next_page(page);
-		BUG_ON(!pages[1]);
+		zsdescs[0] = zsdesc;
+		zsdescs[1] = get_next_zsdesc(zsdesc);
+		BUG_ON(!zsdescs[1]);
 
-		__zs_unmap_object(area, (struct zsdesc **)pages, off, class->size);
+		__zs_unmap_object(area, zsdescs, off, class->size);
 	}
 	local_unlock(&zs_map_area.lock);
 
@@ -1528,23 +1528,24 @@ static void obj_free(int class_size, unsigned long obj)
 {
 	struct link_free *link;
 	struct zspage *zspage;
-	struct page *f_page;
+	struct zsdesc *f_zsdesc;
 	unsigned long f_offset;
 	unsigned int f_objidx;
 	void *vaddr;
 
-	obj_to_location(obj, &f_page, &f_objidx);
+
+	obj_to_location(obj, &f_zsdesc, &f_objidx);
 	f_offset = offset_in_page(class_size * f_objidx);
-	zspage = get_zspage(f_page);
+	zspage = get_zspage(zsdesc_page(f_zsdesc));
 
-	vaddr = kmap_atomic(f_page);
+	vaddr = zsdesc_kmap_atomic(f_zsdesc);
 	link = (struct link_free *)(vaddr + f_offset);
 
 	/* Insert this object in containing zspage's freelist */
 	if (likely(!ZsHugePage(zspage)))
 		link->next = get_freeobj(zspage) << OBJ_TAG_BITS;
 	else
-		f_page->index = 0;
+		f_zsdesc->next = NULL;
 	set_freeobj(zspage, f_objidx);
 
 	kunmap_atomic(vaddr);
@@ -1587,7 +1588,7 @@ EXPORT_SYMBOL_GPL(zs_free);
 static void zs_object_copy(struct size_class *class, unsigned long dst,
 				unsigned long src)
 {
-	struct page *s_page, *d_page;
+	struct zsdesc *s_zsdesc, *d_zsdesc;
 	unsigned int s_objidx, d_objidx;
 	unsigned long s_off, d_off;
 	void *s_addr, *d_addr;
@@ -1596,8 +1597,8 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
 
 	s_size = d_size = class->size;
 
-	obj_to_location(src, &s_page, &s_objidx);
-	obj_to_location(dst, &d_page, &d_objidx);
+	obj_to_location(src, &s_zsdesc, &s_objidx);
+	obj_to_location(dst, &d_zsdesc, &d_objidx);
 
 	s_off = offset_in_page(class->size * s_objidx);
 	d_off = offset_in_page(class->size * d_objidx);
@@ -1608,8 +1609,8 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
 	if (d_off + class->size > PAGE_SIZE)
 		d_size = PAGE_SIZE - d_off;
 
-	s_addr = kmap_atomic(s_page);
-	d_addr = kmap_atomic(d_page);
+	s_addr = zsdesc_kmap_atomic(s_zsdesc);
+	d_addr = zsdesc_kmap_atomic(d_zsdesc);
 
 	while (1) {
 		size = min(s_size, d_size);
@@ -1634,17 +1635,17 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
 		if (s_off >= PAGE_SIZE) {
 			kunmap_atomic(d_addr);
 			kunmap_atomic(s_addr);
-			s_page = get_next_page(s_page);
-			s_addr = kmap_atomic(s_page);
-			d_addr = kmap_atomic(d_page);
+			s_zsdesc = get_next_zsdesc(s_zsdesc);
+			s_addr = zsdesc_kmap_atomic(s_zsdesc);
+			d_addr = zsdesc_kmap_atomic(d_zsdesc);
 			s_size = class->size - written;
 			s_off = 0;
 		}
 
 		if (d_off >= PAGE_SIZE) {
 			kunmap_atomic(d_addr);
-			d_page = get_next_page(d_page);
-			d_addr = kmap_atomic(d_page);
+			d_zsdesc = get_next_zsdesc(d_zsdesc);
+			d_addr = zsdesc_kmap_atomic(d_zsdesc);
 			d_size = class->size - written;
 			d_off = 0;
 		}
@@ -1910,7 +1911,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	struct zs_pool *pool;
 	struct size_class *class;
 	struct zspage *zspage;
-	struct page *dummy;
+	struct zsdesc *dummy;
 	void *s_addr, *d_addr, *addr;
 	unsigned int offset;
 	unsigned long handle;
-- 
2.39.3




* [RFC PATCH v3 08/21] mm/zsmalloc: convert obj_malloc() to use zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (6 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 07/21] mm/zsmalloc: convert obj_to_location() and its users " Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 09/21] mm/zsmalloc: convert create_page_chain() and its users " Hyeonggon Yoo
                   ` (13 subsequent siblings)
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

Convert obj_malloc() to use zsdesc and the zsdesc helper functions
instead of struct page.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index f5a20c20ec19..74ed0477f40e 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1416,12 +1416,12 @@ EXPORT_SYMBOL_GPL(zs_huge_class_size);
 static unsigned long obj_malloc(struct zs_pool *pool,
 				struct zspage *zspage, unsigned long handle)
 {
-	int i, nr_page, offset;
+	int i, nr_zsdesc, offset;
 	unsigned long obj;
 	struct link_free *link;
 	struct size_class *class;
 
-	struct page *m_page;
+	struct zsdesc *m_zsdesc;
 	unsigned long m_offset;
 	void *vaddr;
 
@@ -1430,14 +1430,14 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 	obj = get_freeobj(zspage);
 
 	offset = obj * class->size;
-	nr_page = offset >> PAGE_SHIFT;
+	nr_zsdesc = offset >> PAGE_SHIFT;
 	m_offset = offset_in_page(offset);
-	m_page = get_first_page(zspage);
+	m_zsdesc = get_first_zsdesc(zspage);
 
-	for (i = 0; i < nr_page; i++)
-		m_page = get_next_page(m_page);
+	for (i = 0; i < nr_zsdesc; i++)
+		m_zsdesc = get_next_zsdesc(m_zsdesc);
 
-	vaddr = kmap_atomic(m_page);
+	vaddr = zsdesc_kmap_atomic(m_zsdesc);
 	link = (struct link_free *)vaddr + m_offset / sizeof(*link);
 	set_freeobj(zspage, link->next >> OBJ_TAG_BITS);
 	if (likely(!ZsHugePage(zspage)))
@@ -1450,7 +1450,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 	kunmap_atomic(vaddr);
 	mod_zspage_inuse(zspage, 1);
 
-	obj = location_to_obj(m_page, obj);
+	obj = location_to_obj(zsdesc_page(m_zsdesc), obj);
 
 	return obj;
 }
-- 
2.39.3




* [RFC PATCH v3 09/21] mm/zsmalloc: convert create_page_chain() and its users to use zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (7 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 08/21] mm/zsmalloc: convert obj_malloc() " Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 10/21] mm/zsmalloc: convert obj_allocated() and related helpers " Hyeonggon Yoo
                   ` (12 subsequent siblings)
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

Introduce a few helper functions for conversion.
Convert create_page_chain() and its user replace_sub_page() to use zsdesc.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 120 ++++++++++++++++++++++++++++++++++----------------
 1 file changed, 81 insertions(+), 39 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 74ed0477f40e..1b5b9322ec21 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -327,6 +327,48 @@ static inline void *zsdesc_kmap_atomic(struct zsdesc *zsdesc)
 	return kmap_atomic(zsdesc_page(zsdesc));
 }
 
+static inline void zsdesc_set_zspage(struct zsdesc *zsdesc,
+				     struct zspage *zspage)
+{
+	zsdesc->zspage = zspage;
+}
+
+static inline void zsdesc_set_first(struct zsdesc *zsdesc)
+{
+	SetPagePrivate(zsdesc_page(zsdesc));
+}
+
+static const struct movable_operations zsmalloc_mops;
+
+static inline void zsdesc_set_movable(struct zsdesc *zsdesc)
+{
+	__SetPageMovable(zsdesc_page(zsdesc), &zsmalloc_mops);
+}
+
+static inline void zsdesc_inc_zone_page_state(struct zsdesc *zsdesc)
+{
+	inc_zone_page_state(zsdesc_page(zsdesc), NR_ZSPAGES);
+}
+
+static inline void zsdesc_dec_zone_page_state(struct zsdesc *zsdesc)
+{
+	dec_zone_page_state(zsdesc_page(zsdesc), NR_ZSPAGES);
+}
+
+static inline struct zsdesc *alloc_zsdesc(gfp_t gfp)
+{
+	struct page *page = alloc_page(gfp);
+
+	return page_zsdesc(page);
+}
+
+static inline void free_zsdesc(struct zsdesc *zsdesc)
+{
+	struct page *page = zsdesc_page(zsdesc);
+
+	__free_page(page);
+}
+
 struct zspage {
 	struct {
 		unsigned int huge:HUGE_BITS;
@@ -1051,35 +1093,35 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 }
 
 static void create_page_chain(struct size_class *class, struct zspage *zspage,
-				struct page *pages[])
+				struct zsdesc *zsdescs[])
 {
 	int i;
-	struct page *page;
-	struct page *prev_page = NULL;
-	int nr_pages = class->pages_per_zspage;
+	struct zsdesc *zsdesc;
+	struct zsdesc *prev_zsdesc = NULL;
+	int nr_zsdescs = class->pages_per_zspage;
 
 	/*
 	 * Allocate individual pages and link them together as:
-	 * 1. all pages are linked together using page->index
-	 * 2. each sub-page point to zspage using page->private
+	 * 1. all pages are linked together using zsdesc->next
+	 * 2. each sub-page point to zspage using zsdesc->zspage
 	 *
-	 * we set PG_private to identify the first page (i.e. no other sub-page
+	 * we set PG_private to identify the first zsdesc (i.e. no other zsdesc
 	 * has this flag set).
 	 */
-	for (i = 0; i < nr_pages; i++) {
-		page = pages[i];
-		set_page_private(page, (unsigned long)zspage);
-		page->index = 0;
+	for (i = 0; i < nr_zsdescs; i++) {
+		zsdesc = zsdescs[i];
+		zsdesc_set_zspage(zsdesc, zspage);
+		zsdesc->next = NULL;
 		if (i == 0) {
-			zspage->first_zsdesc = page_zsdesc(page);
-			SetPagePrivate(page);
+			zspage->first_zsdesc = zsdesc;
+			zsdesc_set_first(zsdesc);
 			if (unlikely(class->objs_per_zspage == 1 &&
 					class->pages_per_zspage == 1))
 				SetZsHugePage(zspage);
 		} else {
-			prev_page->index = (unsigned long)page;
+			prev_zsdesc->next = zsdesc;
 		}
-		prev_page = page;
+		prev_zsdesc = zsdesc;
 	}
 }
 
@@ -1091,7 +1133,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 					gfp_t gfp)
 {
 	int i;
-	struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE];
+	struct zsdesc *zsdescs[ZS_MAX_PAGES_PER_ZSPAGE];
 	struct zspage *zspage = cache_alloc_zspage(pool, gfp);
 
 	if (!zspage)
@@ -1101,23 +1143,23 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 	migrate_lock_init(zspage);
 
 	for (i = 0; i < class->pages_per_zspage; i++) {
-		struct page *page;
+		struct zsdesc *zsdesc;
 
-		page = alloc_page(gfp);
-		if (!page) {
+		zsdesc = alloc_zsdesc(gfp);
+		if (!zsdesc) {
 			while (--i >= 0) {
-				dec_zone_page_state(pages[i], NR_ZSPAGES);
-				__free_page(pages[i]);
+				zsdesc_dec_zone_page_state(zsdescs[i]);
+				free_zsdesc(zsdescs[i]);
 			}
 			cache_free_zspage(pool, zspage);
 			return NULL;
 		}
 
-		inc_zone_page_state(page, NR_ZSPAGES);
-		pages[i] = page;
+		zsdesc_inc_zone_page_state(zsdesc);
+		zsdescs[i] = zsdesc;
 	}
 
-	create_page_chain(class, zspage, pages);
+	create_page_chain(class, zspage, zsdescs);
 	init_zspage(class, zspage);
 	zspage->pool = pool;
 
@@ -1860,29 +1902,29 @@ static void dec_zspage_isolation(struct zspage *zspage)
 	zspage->isolated--;
 }
 
-static const struct movable_operations zsmalloc_mops;
-
 static void replace_sub_page(struct size_class *class, struct zspage *zspage,
-				struct page *newpage, struct page *oldpage)
+				struct zsdesc *new_zsdesc, struct zsdesc *old_zsdesc)
 {
-	struct page *page;
-	struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE] = {NULL, };
+	struct zsdesc *zsdesc;
+	struct zsdesc *zsdescs[ZS_MAX_PAGES_PER_ZSPAGE] = {NULL, };
+	unsigned int first_obj_offset;
 	int idx = 0;
 
-	page = get_first_page(zspage);
+	zsdesc = get_first_zsdesc(zspage);
 	do {
-		if (page == oldpage)
-			pages[idx] = newpage;
+		if (zsdesc == old_zsdesc)
+			zsdescs[idx] = new_zsdesc;
 		else
-			pages[idx] = page;
+			zsdescs[idx] = zsdesc;
 		idx++;
-	} while ((page = get_next_page(page)) != NULL);
+	} while ((zsdesc = get_next_zsdesc(zsdesc)) != NULL);
 
-	create_page_chain(class, zspage, pages);
-	set_first_obj_offset(newpage, get_first_obj_offset(oldpage));
+	create_page_chain(class, zspage, zsdescs);
+	first_obj_offset = get_first_obj_offset(zsdesc_page(old_zsdesc));
+	set_first_obj_offset(zsdesc_page(new_zsdesc), first_obj_offset);
 	if (unlikely(ZsHugePage(zspage)))
-		newpage->index = oldpage->index;
-	__SetPageMovable(newpage, &zsmalloc_mops);
+		new_zsdesc->handle = old_zsdesc->handle;
+	zsdesc_set_movable(new_zsdesc);
 }
 
 static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
@@ -1965,7 +2007,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	}
 	kunmap_atomic(s_addr);
 
-	replace_sub_page(class, zspage, newpage, page);
+	replace_sub_page(class, zspage, page_zsdesc(newpage), page_zsdesc(page));
 	dec_zspage_isolation(zspage);
 	/*
 	 * Since we complete the data copy and set up new zspage structure,
-- 
2.39.3




* [RFC PATCH v3 10/21] mm/zsmalloc: convert obj_allocated() and related helpers to use zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (8 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 09/21] mm/zsmalloc: convert create_page_chain() and its users " Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 11/21] mm/zsmalloc: convert init_zspage() " Hyeonggon Yoo
                   ` (11 subsequent siblings)
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

Convert obj_allocated() and related helpers to take zsdesc. Also make
their callers cast (struct page *) to (struct zsdesc *) when calling them.
The callers will be converted gradually, as there are many.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 1b5b9322ec21..f625d991bab1 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -946,15 +946,15 @@ static unsigned long handle_to_obj(unsigned long handle)
 	return *(unsigned long *)handle;
 }
 
-static inline bool obj_allocated(struct page *page, void *obj,
+static inline bool obj_allocated(struct zsdesc *zsdesc, void *obj,
 				 unsigned long *phandle)
 {
 	unsigned long handle;
-	struct zspage *zspage = get_zspage(page);
+	struct zspage *zspage = get_zspage(zsdesc_page(zsdesc));
 
 	if (unlikely(ZsHugePage(zspage))) {
-		VM_BUG_ON_PAGE(!is_first_page(page), page);
-		handle = page->index;
+		VM_BUG_ON_PAGE(!is_first_zsdesc(zsdesc), zsdesc_page(zsdesc));
+		handle = zsdesc->handle;
 	} else
 		handle = *(unsigned long *)obj;
 
@@ -1702,18 +1702,18 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
  * return handle.
  */
 static unsigned long find_alloced_obj(struct size_class *class,
-				      struct page *page, int *obj_idx)
+				      struct zsdesc *zsdesc, int *obj_idx)
 {
 	unsigned int offset;
 	int index = *obj_idx;
 	unsigned long handle = 0;
-	void *addr = kmap_atomic(page);
+	void *addr = zsdesc_kmap_atomic(zsdesc);
 
-	offset = get_first_obj_offset(page);
+	offset = get_first_obj_offset(zsdesc_page(zsdesc));
 	offset += class->size * index;
 
 	while (offset < PAGE_SIZE) {
-		if (obj_allocated(page, addr + offset, &handle))
+		if (obj_allocated(zsdesc, addr + offset, &handle))
 			break;
 
 		offset += class->size;
@@ -1737,7 +1737,7 @@ static void migrate_zspage(struct zs_pool *pool, struct zspage *src_zspage,
 	struct size_class *class = pool->size_class[src_zspage->class];
 
 	while (1) {
-		handle = find_alloced_obj(class, s_page, &obj_idx);
+		handle = find_alloced_obj(class, page_zsdesc(s_page), &obj_idx);
 		if (!handle) {
 			s_page = get_next_page(s_page);
 			if (!s_page)
@@ -1996,7 +1996,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 
 	for (addr = s_addr + offset; addr < s_addr + PAGE_SIZE;
 					addr += class->size) {
-		if (obj_allocated(page, addr, &handle)) {
+		if (obj_allocated(page_zsdesc(page), addr, &handle)) {
 
 			old_obj = handle_to_obj(handle);
 			obj_to_location(old_obj, &dummy, &obj_idx);
-- 
2.39.3




* [RFC PATCH v3 11/21] mm/zsmalloc: convert init_zspage() to use zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (9 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 10/21] mm/zsmalloc: convert obj_allocated() and related helpers " Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 12/21] mm/zsmalloc: convert obj_to_page() and zs_free() " Hyeonggon Yoo
                   ` (10 subsequent siblings)
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

Convert init_zspage() to use zsdesc and the zsdesc helper functions
instead of struct page.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index f625d991bab1..8fe934df298e 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1052,16 +1052,16 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 {
 	unsigned int freeobj = 1;
 	unsigned long off = 0;
-	struct page *page = get_first_page(zspage);
+	struct zsdesc *zsdesc = get_first_zsdesc(zspage);
 
-	while (page) {
-		struct page *next_page;
+	while (zsdesc) {
+		struct zsdesc *next_zsdesc;
 		struct link_free *link;
 		void *vaddr;
 
-		set_first_obj_offset(page, off);
+		set_first_obj_offset(zsdesc_page(zsdesc), off);
 
-		vaddr = kmap_atomic(page);
+		vaddr = zsdesc_kmap_atomic(zsdesc);
 		link = (struct link_free *)vaddr + off / sizeof(*link);
 
 		while ((off += class->size) < PAGE_SIZE) {
@@ -1074,8 +1074,8 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 		 * page, which must point to the first object on the next
 		 * page (if present)
 		 */
-		next_page = get_next_page(page);
-		if (next_page) {
+		next_zsdesc = get_next_zsdesc(zsdesc);
+		if (next_zsdesc) {
 			link->next = freeobj++ << OBJ_TAG_BITS;
 		} else {
 			/*
@@ -1085,7 +1085,7 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 			link->next = -1UL << OBJ_TAG_BITS;
 		}
 		kunmap_atomic(vaddr);
-		page = next_page;
+		zsdesc = next_zsdesc;
 		off %= PAGE_SIZE;
 	}
 
-- 
2.39.3




* [RFC PATCH v3 12/21] mm/zsmalloc: convert obj_to_page() and zs_free() to use zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (10 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 11/21] mm/zsmalloc: convert init_zspage() " Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 13/21] mm/zsmalloc: convert reset_page() to reset_zsdesc() Hyeonggon Yoo
                   ` (9 subsequent siblings)
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

Rename obj_to_page() to obj_to_zsdesc() and also convert it and
its user zs_free() to use zsdesc.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 8fe934df298e..1140eefa3a1c 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -919,10 +919,10 @@ static void obj_to_location(unsigned long obj, struct zsdesc **zsdesc,
 	*obj_idx = (obj & OBJ_INDEX_MASK);
 }
 
-static void obj_to_page(unsigned long obj, struct page **page)
+static void obj_to_zsdesc(unsigned long obj, struct zsdesc **zsdesc)
 {
 	obj >>= OBJ_TAG_BITS;
-	*page = pfn_to_page(obj >> OBJ_INDEX_BITS);
+	*zsdesc = pfn_zsdesc(obj >> OBJ_INDEX_BITS);
 }
 
 /**
@@ -1597,7 +1597,7 @@ static void obj_free(int class_size, unsigned long obj)
 void zs_free(struct zs_pool *pool, unsigned long handle)
 {
 	struct zspage *zspage;
-	struct page *f_page;
+	struct zsdesc *f_zsdesc;
 	unsigned long obj;
 	struct size_class *class;
 	int fullness;
@@ -1611,8 +1611,8 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	 */
 	spin_lock(&pool->lock);
 	obj = handle_to_obj(handle);
-	obj_to_page(obj, &f_page);
-	zspage = get_zspage(f_page);
+	obj_to_zsdesc(obj, &f_zsdesc);
+	zspage = get_zspage(zsdesc_page(f_zsdesc));
 	class = zspage_class(pool, zspage);
 
 	class_stat_dec(class, ZS_OBJS_INUSE, 1);
-- 
2.39.3




* [RFC PATCH v3 13/21] mm/zsmalloc: convert reset_page() to reset_zsdesc()
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (11 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 12/21] mm/zsmalloc: convert obj_to_page() and zs_free() " Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 14/21] mm/zsmalloc: convert zs_page_{isolate,migrate,putback} to use zsdesc Hyeonggon Yoo
                   ` (8 subsequent siblings)
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

reset_page() is called prior to freeing the base pages of a zspage.
As it is closely tied to the details of struct page, rename it to
reset_zsdesc() and move it closer to the newly added zsdesc helper
functions.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 1140eefa3a1c..1252120c28bc 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -369,6 +369,17 @@ static inline void free_zsdesc(struct zsdesc *zsdesc)
 	__free_page(page);
 }
 
+static void reset_zsdesc(struct zsdesc *zsdesc)
+{
+	struct page *page = zsdesc_page(zsdesc);
+
+	__ClearPageMovable(page);
+	ClearPagePrivate(page);
+	set_page_private(page, 0);
+	page_mapcount_reset(page);
+	page->index = 0;
+}
+
 struct zspage {
 	struct {
 		unsigned int huge:HUGE_BITS;
@@ -966,15 +977,6 @@ static inline bool obj_allocated(struct zsdesc *zsdesc, void *obj,
 	return true;
 }
 
-static void reset_page(struct page *page)
-{
-	__ClearPageMovable(page);
-	ClearPagePrivate(page);
-	set_page_private(page, 0);
-	page_mapcount_reset(page);
-	page->index = 0;
-}
-
 static int trylock_zspage(struct zspage *zspage)
 {
 	struct zsdesc *cursor, *fail;
@@ -1014,7 +1016,7 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
 	do {
 		VM_BUG_ON_PAGE(!PageLocked(page), page);
 		next = get_next_page(page);
-		reset_page(page);
+		reset_zsdesc(page_zsdesc(page));
 		unlock_page(page);
 		dec_zone_page_state(page, NR_ZSPAGES);
 		put_page(page);
@@ -2022,7 +2024,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 		inc_zone_page_state(newpage, NR_ZSPAGES);
 	}
 
-	reset_page(page);
+	reset_zsdesc(page_zsdesc(page));
 	put_page(page);
 
 	return MIGRATEPAGE_SUCCESS;
-- 
2.39.3




* [RFC PATCH v3 14/21] mm/zsmalloc: convert zs_page_{isolate,migrate,putback} to use zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (12 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 13/21] mm/zsmalloc: convert reset_page() to reset_zsdesc() Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-12-04  3:32   ` Sergey Senozhatsky
  2023-11-30 10:12 ` [RFC PATCH v3 15/21] mm/zsmalloc: convert __free_zspage() " Hyeonggon Yoo
                   ` (7 subsequent siblings)
  21 siblings, 1 reply; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

Convert the functions for movable operations of zsmalloc to use zsdesc.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 50 ++++++++++++++++++++++++++++++++------------------
 1 file changed, 32 insertions(+), 18 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 1252120c28bc..92641a3b2d98 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -380,6 +380,16 @@ static void reset_zsdesc(struct zsdesc *zsdesc)
 	page->index = 0;
 }
 
+static inline bool zsdesc_is_isolated(struct zsdesc *zsdesc)
+{
+	return PageIsolated(zsdesc_page(zsdesc));
+}
+
+struct zone *zsdesc_zone(struct zsdesc *zsdesc)
+{
+	return page_zone(zsdesc_page(zsdesc));
+}
+
 struct zspage {
 	struct {
 		unsigned int huge:HUGE_BITS;
@@ -1933,14 +1943,15 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 {
 	struct zs_pool *pool;
 	struct zspage *zspage;
+	struct zsdesc *zsdesc = page_zsdesc(page);
 
 	/*
 	 * Page is locked so zspage couldn't be destroyed. For detail, look at
 	 * lock_zspage in free_zspage.
 	 */
-	VM_BUG_ON_PAGE(PageIsolated(page), page);
+	VM_BUG_ON_PAGE(zsdesc_is_isolated(zsdesc), zsdesc_page(zsdesc));
 
-	zspage = get_zspage(page);
+	zspage = get_zspage(zsdesc_page(zsdesc));
 	pool = zspage->pool;
 	spin_lock(&pool->lock);
 	inc_zspage_isolation(zspage);
@@ -1956,6 +1967,8 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	struct size_class *class;
 	struct zspage *zspage;
 	struct zsdesc *dummy;
+	struct zsdesc *new_zsdesc = page_zsdesc(newpage);
+	struct zsdesc *zsdesc = page_zsdesc(page);
 	void *s_addr, *d_addr, *addr;
 	unsigned int offset;
 	unsigned long handle;
@@ -1970,10 +1983,10 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	if (mode == MIGRATE_SYNC_NO_COPY)
 		return -EINVAL;
 
-	VM_BUG_ON_PAGE(!PageIsolated(page), page);
+	VM_BUG_ON_PAGE(!zsdesc_is_isolated(zsdesc), zsdesc_page(zsdesc));
 
 	/* The page is locked, so this pointer must remain valid */
-	zspage = get_zspage(page);
+	zspage = get_zspage(zsdesc_page(zsdesc));
 	pool = zspage->pool;
 
 	/*
@@ -1986,30 +1999,30 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	/* the migrate_write_lock protects zpage access via zs_map_object */
 	migrate_write_lock(zspage);
 
-	offset = get_first_obj_offset(page);
-	s_addr = kmap_atomic(page);
+	offset = get_first_obj_offset(zsdesc_page(zsdesc));
+	s_addr = zsdesc_kmap_atomic(zsdesc);
 
 	/*
 	 * Here, any user cannot access all objects in the zspage so let's move.
 	 */
-	d_addr = kmap_atomic(newpage);
+	d_addr = zsdesc_kmap_atomic(new_zsdesc);
 	copy_page(d_addr, s_addr);
 	kunmap_atomic(d_addr);
 
 	for (addr = s_addr + offset; addr < s_addr + PAGE_SIZE;
 					addr += class->size) {
-		if (obj_allocated(page_zsdesc(page), addr, &handle)) {
+		if (obj_allocated(zsdesc, addr, &handle)) {
 
 			old_obj = handle_to_obj(handle);
 			obj_to_location(old_obj, &dummy, &obj_idx);
-			new_obj = (unsigned long)location_to_obj(newpage,
+			new_obj = (unsigned long)location_to_obj(zsdesc_page(new_zsdesc),
 								obj_idx);
 			record_obj(handle, new_obj);
 		}
 	}
 	kunmap_atomic(s_addr);
 
-	replace_sub_page(class, zspage, page_zsdesc(newpage), page_zsdesc(page));
+	replace_sub_page(class, zspage, new_zsdesc, zsdesc);
 	dec_zspage_isolation(zspage);
 	/*
 	 * Since we complete the data copy and set up new zspage structure,
@@ -2018,14 +2031,14 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	spin_unlock(&pool->lock);
 	migrate_write_unlock(zspage);
 
-	get_page(newpage);
-	if (page_zone(newpage) != page_zone(page)) {
-		dec_zone_page_state(page, NR_ZSPAGES);
-		inc_zone_page_state(newpage, NR_ZSPAGES);
+	zsdesc_get(new_zsdesc);
+	if (zsdesc_zone(new_zsdesc) != zsdesc_zone(zsdesc)) {
+		zsdesc_dec_zone_page_state(zsdesc);
+		zsdesc_inc_zone_page_state(new_zsdesc);
 	}
 
-	reset_zsdesc(page_zsdesc(page));
-	put_page(page);
+	reset_zsdesc(zsdesc);
+	zsdesc_put(zsdesc);
 
 	return MIGRATEPAGE_SUCCESS;
 }
@@ -2034,10 +2047,11 @@ static void zs_page_putback(struct page *page)
 {
 	struct zs_pool *pool;
 	struct zspage *zspage;
+	struct zsdesc *zsdesc = page_zsdesc(page);
 
-	VM_BUG_ON_PAGE(!PageIsolated(page), page);
+	VM_BUG_ON_PAGE(!zsdesc_is_isolated(zsdesc), zsdesc_page(zsdesc));
 
-	zspage = get_zspage(page);
+	zspage = get_zspage(zsdesc_page(zsdesc));
 	pool = zspage->pool;
 	spin_lock(&pool->lock);
 	dec_zspage_isolation(zspage);
-- 
2.39.3




* [RFC PATCH v3 15/21] mm/zsmalloc: convert __free_zspage() to use zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (13 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 14/21] mm/zsmalloc: convert zs_page_{isolate,migrate,putback} to use zsdesc Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 16/21] mm/zsmalloc: convert location_to_obj() " Hyeonggon Yoo
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

Introduce zsdesc_is_locked() and convert __free_zspage() to use zsdesc.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 92641a3b2d98..fdcc47569644 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -317,6 +317,11 @@ static inline void unlock_zsdesc(struct zsdesc *zsdesc)
 	unlock_page(zsdesc_page(zsdesc));
 }
 
+static inline bool zsdesc_is_locked(struct zsdesc *zsdesc)
+{
+	return PageLocked(zsdesc_page(zsdesc));
+}
+
 static inline void wait_on_zsdesc_locked(struct zsdesc *zsdesc)
 {
 	wait_on_page_locked(zsdesc_page(zsdesc));
@@ -1011,7 +1016,7 @@ static int trylock_zspage(struct zspage *zspage)
 static void __free_zspage(struct zs_pool *pool, struct size_class *class,
 				struct zspage *zspage)
 {
-	struct page *page, *next;
+	struct zsdesc *zsdesc, *next;
 	int fg;
 	unsigned int class_idx;
 
@@ -1022,16 +1027,16 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
 	VM_BUG_ON(get_zspage_inuse(zspage));
 	VM_BUG_ON(fg != ZS_INUSE_RATIO_0);
 
-	next = page = get_first_page(zspage);
+	next = zsdesc = get_first_zsdesc(zspage);
 	do {
-		VM_BUG_ON_PAGE(!PageLocked(page), page);
-		next = get_next_page(page);
-		reset_zsdesc(page_zsdesc(page));
-		unlock_page(page);
-		dec_zone_page_state(page, NR_ZSPAGES);
-		put_page(page);
-		page = next;
-	} while (page != NULL);
+		VM_BUG_ON_PAGE(!zsdesc_is_locked(zsdesc), zsdesc_page(zsdesc));
+		next = get_next_zsdesc(zsdesc);
+		reset_zsdesc(zsdesc);
+		unlock_zsdesc(zsdesc);
+		zsdesc_dec_zone_page_state(zsdesc);
+		zsdesc_put(zsdesc);
+		zsdesc = next;
+	} while (zsdesc != NULL);
 
 	cache_free_zspage(pool, zspage);
 
-- 
2.39.3



^ permalink raw reply	[flat|nested] 32+ messages in thread

* [RFC PATCH v3 16/21] mm/zsmalloc: convert location_to_obj() to use zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (14 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 15/21] mm/zsmalloc: convert __free_zspage() " Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 17/21] mm/zsmalloc: convert migrate_zspage() " Hyeonggon Yoo
                   ` (5 subsequent siblings)
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

As all users of location_to_obj() now have a zsdesc at hand, convert
location_to_obj() to take zsdesc instead of struct page.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index fdcc47569644..317bb0e8939a 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -952,15 +952,15 @@ static void obj_to_zsdesc(unsigned long obj, struct zsdesc **zsdesc)
 }
 
 /**
- * location_to_obj - get obj value encoded from (<page>, <obj_idx>)
- * @page: page object resides in zspage
+ * location_to_obj - get obj value encoded from (<zsdesc>, <obj_idx>)
+ * @zsdesc: zsdesc object resides in zspage
  * @obj_idx: object index
  */
-static unsigned long location_to_obj(struct page *page, unsigned int obj_idx)
+static unsigned long location_to_obj(struct zsdesc *zsdesc, unsigned int obj_idx)
 {
 	unsigned long obj;
 
-	obj = page_to_pfn(page) << OBJ_INDEX_BITS;
+	obj = zsdesc_pfn(zsdesc) << OBJ_INDEX_BITS;
 	obj |= obj_idx & OBJ_INDEX_MASK;
 	obj <<= OBJ_TAG_BITS;
 
@@ -1509,7 +1509,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 	kunmap_atomic(vaddr);
 	mod_zspage_inuse(zspage, 1);
 
-	obj = location_to_obj(zsdesc_page(m_zsdesc), obj);
+	obj = location_to_obj(m_zsdesc, obj);
 
 	return obj;
 }
@@ -2020,7 +2020,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 
 			old_obj = handle_to_obj(handle);
 			obj_to_location(old_obj, &dummy, &obj_idx);
-			new_obj = (unsigned long)location_to_obj(zsdesc_page(new_zsdesc),
+			new_obj = (unsigned long)location_to_obj(new_zsdesc,
 								obj_idx);
 			record_obj(handle, new_obj);
 		}
-- 
2.39.3



^ permalink raw reply	[flat|nested] 32+ messages in thread

* [RFC PATCH v3 17/21] mm/zsmalloc: convert migrate_zspage() to use zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (15 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 16/21] mm/zsmalloc: convert location_to_obj() " Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 18/21] mm/zsmalloc: convert get_zspage() to take zsdesc Hyeonggon Yoo
                   ` (4 subsequent siblings)
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

Convert migrate_zspage() to use zsdesc instead of struct page.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 317bb0e8939a..91ff1f84455f 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1750,14 +1750,14 @@ static void migrate_zspage(struct zs_pool *pool, struct zspage *src_zspage,
 	unsigned long used_obj, free_obj;
 	unsigned long handle;
 	int obj_idx = 0;
-	struct page *s_page = get_first_page(src_zspage);
+	struct zsdesc *s_zsdesc = get_first_zsdesc(src_zspage);
 	struct size_class *class = pool->size_class[src_zspage->class];
 
 	while (1) {
-		handle = find_alloced_obj(class, page_zsdesc(s_page), &obj_idx);
+		handle = find_alloced_obj(class, s_zsdesc, &obj_idx);
 		if (!handle) {
-			s_page = get_next_page(s_page);
-			if (!s_page)
+			s_zsdesc = get_next_zsdesc(s_zsdesc);
+			if (!s_zsdesc)
 				break;
 			obj_idx = 0;
 			continue;
-- 
2.39.3



^ permalink raw reply	[flat|nested] 32+ messages in thread

* [RFC PATCH v3 18/21] mm/zsmalloc: convert get_zspage() to take zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (16 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 17/21] mm/zsmalloc: convert migrate_zspage() " Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 19/21] mm/zsmalloc: convert SetZsPageMovable() to use zsdesc Hyeonggon Yoo
                   ` (3 subsequent siblings)
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

Now that all users except get_next_page() (which will be removed in a
later patch) use zsdesc, convert get_zspage() to take zsdesc instead
of struct page.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 91ff1f84455f..828c45eba8ea 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -903,9 +903,9 @@ static int fix_fullness_group(struct size_class *class, struct zspage *zspage)
 	return newfg;
 }
 
-static struct zspage *get_zspage(struct page *page)
+static struct zspage *get_zspage(struct zsdesc *zsdesc)
 {
-	struct zspage *zspage = (struct zspage *)page_private(page);
+	struct zspage *zspage = zsdesc->zspage;
 
 	BUG_ON(zspage->magic != ZSPAGE_MAGIC);
 	return zspage;
@@ -913,7 +913,7 @@ static struct zspage *get_zspage(struct page *page)
 
 static __maybe_unused struct page *get_next_page(struct page *page)
 {
-	struct zspage *zspage = get_zspage(page);
+	struct zspage *zspage = get_zspage(page_zsdesc(page));
 
 	if (unlikely(ZsHugePage(zspage)))
 		return NULL;
@@ -923,7 +923,7 @@ static __maybe_unused struct page *get_next_page(struct page *page)
 
 static __maybe_unused struct zsdesc *get_next_zsdesc(struct zsdesc *zsdesc)
 {
-	struct zspage *zspage = get_zspage(zsdesc_page(zsdesc));
+	struct zspage *zspage = get_zspage(zsdesc);
 
 	if (unlikely(ZsHugePage(zspage)))
 		return NULL;
@@ -976,7 +976,7 @@ static inline bool obj_allocated(struct zsdesc *zsdesc, void *obj,
 				 unsigned long *phandle)
 {
 	unsigned long handle;
-	struct zspage *zspage = get_zspage(zsdesc_page(zsdesc));
+	struct zspage *zspage = get_zspage(zsdesc);
 
 	if (unlikely(ZsHugePage(zspage))) {
 		VM_BUG_ON_PAGE(!is_first_zsdesc(zsdesc), zsdesc_page(zsdesc));
@@ -1381,7 +1381,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	spin_lock(&pool->lock);
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &zsdesc, &obj_idx);
-	zspage = get_zspage(zsdesc_page(zsdesc));
+	zspage = get_zspage(zsdesc);
 
 	/*
 	 * migration cannot move any zpages in this zspage. Here, pool->lock
@@ -1431,7 +1431,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &zsdesc, &obj_idx);
-	zspage = get_zspage(zsdesc_page(zsdesc));
+	zspage = get_zspage(zsdesc);
 	class = zspage_class(pool, zspage);
 	off = offset_in_page(class->size * obj_idx);
 
@@ -1595,7 +1595,7 @@ static void obj_free(int class_size, unsigned long obj)
 
 	obj_to_location(obj, &f_zsdesc, &f_objidx);
 	f_offset = offset_in_page(class_size * f_objidx);
-	zspage = get_zspage(zsdesc_page(f_zsdesc));
+	zspage = get_zspage(f_zsdesc);
 
 	vaddr = zsdesc_kmap_atomic(f_zsdesc);
 	link = (struct link_free *)(vaddr + f_offset);
@@ -1629,7 +1629,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	spin_lock(&pool->lock);
 	obj = handle_to_obj(handle);
 	obj_to_zsdesc(obj, &f_zsdesc);
-	zspage = get_zspage(zsdesc_page(f_zsdesc));
+	zspage = get_zspage(f_zsdesc);
 	class = zspage_class(pool, zspage);
 
 	class_stat_dec(class, ZS_OBJS_INUSE, 1);
@@ -1956,7 +1956,7 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 	 */
 	VM_BUG_ON_PAGE(zsdesc_is_isolated(zsdesc), zsdesc_page(zsdesc));
 
-	zspage = get_zspage(zsdesc_page(zsdesc));
+	zspage = get_zspage(zsdesc);
 	pool = zspage->pool;
 	spin_lock(&pool->lock);
 	inc_zspage_isolation(zspage);
@@ -1991,7 +1991,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	VM_BUG_ON_PAGE(!zsdesc_is_isolated(zsdesc), zsdesc_page(zsdesc));
 
 	/* The page is locked, so this pointer must remain valid */
-	zspage = get_zspage(zsdesc_page(zsdesc));
+	zspage = get_zspage(zsdesc);
 	pool = zspage->pool;
 
 	/*
@@ -2056,7 +2056,7 @@ static void zs_page_putback(struct page *page)
 
 	VM_BUG_ON_PAGE(!zsdesc_is_isolated(zsdesc), zsdesc_page(zsdesc));
 
-	zspage = get_zspage(zsdesc_page(zsdesc));
+	zspage = get_zspage(zsdesc);
 	pool = zspage->pool;
 	spin_lock(&pool->lock);
 	dec_zspage_isolation(zspage);
-- 
2.39.3



^ permalink raw reply	[flat|nested] 32+ messages in thread

* [RFC PATCH v3 19/21] mm/zsmalloc: convert SetZsPageMovable() to use zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (17 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 18/21] mm/zsmalloc: convert get_zspage() to take zsdesc Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 20/21] mm/zsmalloc: remove now unused helper functions Hyeonggon Yoo
                   ` (2 subsequent siblings)
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

Convert SetZsPageMovable() to use zsdesc.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 828c45eba8ea..1ff83081616b 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -2125,13 +2125,13 @@ static void init_deferred_free(struct zs_pool *pool)
 
 static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage)
 {
-	struct page *page = get_first_page(zspage);
+	struct zsdesc *zsdesc = get_first_zsdesc(zspage);
 
 	do {
-		WARN_ON(!trylock_page(page));
-		__SetPageMovable(page, &zsmalloc_mops);
-		unlock_page(page);
-	} while ((page = get_next_page(page)) != NULL);
+		WARN_ON(!trylock_zsdesc(zsdesc));
+		zsdesc_set_movable(zsdesc);
+		unlock_zsdesc(zsdesc);
+	} while ((zsdesc = get_next_zsdesc(zsdesc)) != NULL);
 }
 #else
 static inline void zs_flush_migration(struct zs_pool *pool) { }
-- 
2.39.3



^ permalink raw reply	[flat|nested] 32+ messages in thread

* [RFC PATCH v3 20/21] mm/zsmalloc: remove now unused helper functions
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (18 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 19/21] mm/zsmalloc: convert SetZsPageMovable() to use zsdesc Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-11-30 10:12 ` [RFC PATCH v3 21/21] mm/zsmalloc: convert {get,set}_first_obj_offset() to use zsdesc Hyeonggon Yoo
  2023-12-01 19:28 ` [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Minchan Kim
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

All users of is_first_page(), get_first_page() and get_next_page()
are now converted to use the new helper functions that take zsdesc.

Remove now unused helper functions.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 29 +++--------------------------
 1 file changed, 3 insertions(+), 26 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 1ff83081616b..65387cd4cc5d 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -585,12 +585,7 @@ static DEFINE_PER_CPU(struct mapping_area, zs_map_area) = {
 	.lock	= INIT_LOCAL_LOCK(lock),
 };
 
-static __maybe_unused int is_first_page(struct page *page)
-{
-	return PagePrivate(page);
-}
-
-static __maybe_unused int is_first_zsdesc(struct zsdesc *zsdesc)
+static int is_first_zsdesc(struct zsdesc *zsdesc)
 {
 	return PagePrivate(zsdesc_page(zsdesc));
 }
@@ -607,15 +602,7 @@ static inline void mod_zspage_inuse(struct zspage *zspage, int val)
 	zspage->inuse += val;
 }
 
-static __maybe_unused inline struct page *get_first_page(struct zspage *zspage)
-{
-	struct page *first_page = zsdesc_page(zspage->first_zsdesc);
-
-	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
-	return first_page;
-}
-
-static __maybe_unused struct zsdesc *get_first_zsdesc(struct zspage *zspage)
+static struct zsdesc *get_first_zsdesc(struct zspage *zspage)
 {
 	struct zsdesc *first_zsdesc = zspage->first_zsdesc;
 
@@ -911,17 +898,7 @@ static struct zspage *get_zspage(struct zsdesc *zsdesc)
 	return zspage;
 }
 
-static __maybe_unused struct page *get_next_page(struct page *page)
-{
-	struct zspage *zspage = get_zspage(page_zsdesc(page));
-
-	if (unlikely(ZsHugePage(zspage)))
-		return NULL;
-
-	return (struct page *)page->index;
-}
-
-static __maybe_unused struct zsdesc *get_next_zsdesc(struct zsdesc *zsdesc)
+static struct zsdesc *get_next_zsdesc(struct zsdesc *zsdesc)
 {
 	struct zspage *zspage = get_zspage(zsdesc);
 
-- 
2.39.3



^ permalink raw reply	[flat|nested] 32+ messages in thread

* [RFC PATCH v3 21/21] mm/zsmalloc: convert {get,set}_first_obj_offset() to use zsdesc
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (19 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 20/21] mm/zsmalloc: remove now unused helper functions Hyeonggon Yoo
@ 2023-11-30 10:12 ` Hyeonggon Yoo
  2023-12-01 19:28 ` [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Minchan Kim
  21 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-11-30 10:12 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm, Hyeonggon Yoo

Now that all callers of {get,set}_first_obj_offset() already have a
zsdesc at hand, convert the two helpers to take zsdesc instead of
struct page.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/zsmalloc.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 65387cd4cc5d..0e1434f8ecdb 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -610,14 +610,14 @@ static struct zsdesc *get_first_zsdesc(struct zspage *zspage)
 	return first_zsdesc;
 }
 
-static inline unsigned int get_first_obj_offset(struct page *page)
+static inline unsigned int get_first_obj_offset(struct zsdesc *zsdesc)
 {
-	return page->page_type;
+	return zsdesc->first_obj_offset;
 }
 
-static inline void set_first_obj_offset(struct page *page, unsigned int offset)
+static inline void set_first_obj_offset(struct zsdesc *zsdesc, unsigned int offset)
 {
-	page->page_type = offset;
+	zsdesc->first_obj_offset = offset;
 }
 
 static inline unsigned int get_freeobj(struct zspage *zspage)
@@ -1053,7 +1053,7 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 		struct link_free *link;
 		void *vaddr;
 
-		set_first_obj_offset(zsdesc_page(zsdesc), off);
+		set_first_obj_offset(zsdesc, off);
 
 		vaddr = zsdesc_kmap_atomic(zsdesc);
 		link = (struct link_free *)vaddr + off / sizeof(*link);
@@ -1703,7 +1703,7 @@ static unsigned long find_alloced_obj(struct size_class *class,
 	unsigned long handle = 0;
 	void *addr = zsdesc_kmap_atomic(zsdesc);
 
-	offset = get_first_obj_offset(zsdesc_page(zsdesc));
+	offset = get_first_obj_offset(zsdesc);
 	offset += class->size * index;
 
 	while (offset < PAGE_SIZE) {
@@ -1914,8 +1914,8 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 	} while ((zsdesc = get_next_zsdesc(zsdesc)) != NULL);
 
 	create_page_chain(class, zspage, zsdescs);
-	first_obj_offset = get_first_obj_offset(zsdesc_page(old_zsdesc));
-	set_first_obj_offset(zsdesc_page(new_zsdesc), first_obj_offset);
+	first_obj_offset = get_first_obj_offset(old_zsdesc);
+	set_first_obj_offset(new_zsdesc, first_obj_offset);
 	if (unlikely(ZsHugePage(zspage)))
 		new_zsdesc->handle = old_zsdesc->handle;
 	zsdesc_set_movable(new_zsdesc);
@@ -1981,7 +1981,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	/* the migrate_write_lock protects zpage access via zs_map_object */
 	migrate_write_lock(zspage);
 
-	offset = get_first_obj_offset(zsdesc_page(zsdesc));
+	offset = get_first_obj_offset(zsdesc);
 	s_addr = zsdesc_kmap_atomic(zsdesc);
 
 	/*
-- 
2.39.3



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 03/21] mm/zsmalloc: replace first_page to first_zsdesc in struct zspage
  2023-11-30 10:12 ` [RFC PATCH v3 03/21] mm/zsmalloc: replace first_page to first_zsdesc in struct zspage Hyeonggon Yoo
@ 2023-12-01 19:23   ` Minchan Kim
  2023-12-03  5:22     ` Hyeonggon Yoo
  0 siblings, 1 reply; 32+ messages in thread
From: Minchan Kim @ 2023-12-01 19:23 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Sergey Senozhatsky, Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm

On Thu, Nov 30, 2023 at 07:12:24PM +0900, Hyeonggon Yoo wrote:
> Replace first_page to first_zsdesc in struct zspage for further
> conversion.
> 
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> ---
>  mm/zsmalloc.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 47df9103787e..4c9f9a2cb681 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -317,7 +317,7 @@ struct zspage {
>  	};
>  	unsigned int inuse;
>  	unsigned int freeobj;
> -	struct page *first_page;
> +	struct zsdesc *first_zsdesc;
>  	struct list_head list; /* fullness list */
>  	struct zs_pool *pool;
>  	rwlock_t lock;
> @@ -516,7 +516,7 @@ static inline void mod_zspage_inuse(struct zspage *zspage, int val)
>  
>  static inline struct page *get_first_page(struct zspage *zspage)
>  {
> -	struct page *first_page = zspage->first_page;
> +	struct page *first_page = zsdesc_page(zspage->first_zsdesc);
>  
>  	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
>  	return first_page;
> @@ -1028,7 +1028,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage,
>  		set_page_private(page, (unsigned long)zspage);
>  		page->index = 0;
>  		if (i == 0) {
> -			zspage->first_page = page;
> +			zspage->first_zsdesc = page_zsdesc(page);
>  			SetPagePrivate(page);
>  			if (unlikely(class->objs_per_zspage == 1 &&
>  					class->pages_per_zspage == 1))
> @@ -1402,7 +1402,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
>  		link->handle = handle;
>  	else
>  		/* record handle to page->index */
           
Can you update the comment, too?

> -		zspage->first_page->index = handle;
> +		zspage->first_zsdesc->handle = handle;
>  
>  	kunmap_atomic(vaddr);
>  	mod_zspage_inuse(zspage, 1);
> -- 
> 2.39.3
> 


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page
  2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
                   ` (20 preceding siblings ...)
  2023-11-30 10:12 ` [RFC PATCH v3 21/21] mm/zsmalloc: convert {get,set}_first_obj_offset() to use zsdesc Hyeonggon Yoo
@ 2023-12-01 19:28 ` Minchan Kim
  2023-12-02  4:36   ` Sergey Senozhatsky
  21 siblings, 1 reply; 32+ messages in thread
From: Minchan Kim @ 2023-12-01 19:28 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Sergey Senozhatsky, Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm

On Thu, Nov 30, 2023 at 07:12:21PM +0900, Hyeonggon Yoo wrote:
> RFC v2: https://lore.kernel.org/linux-mm/20230713042037.980211-1-42.hyeyoo@gmail.com/
> 
> v2 -> v3:
>  - rebased to the latest mm-unstable
>  - adjusted comments from Sergey Senozhatsky (Moving zsdesc definition,
>    kerneldoc fix) and Yosry Ahmed (adding memcg_data field to zsdesc)
> 
> 
> V3 update is a bit late, but I still believe this is worth doing.
> It would be nice to get comments/reviews/acks from maintainers/people.
> 
> Cover Letter:
> 
> The purpose of this series is to define own memory descriptor for zsmalloc,
> instead of re-using various fields of struct page. This is a part of the
> effort to reduce the size of struct page to unsigned long and enable
> dynamic allocation of memory descriptors.
> 
> While [1] outlines this ultimate objective, the current use of struct page
> is highly dependent on its definition, making it challenging to separately
> allocate memory descriptors.
> 
> Therefore, this series introduces new descriptor for zsmalloc, called
> zsdesc. It overlays struct page for now, but will eventually be allocated
> independently in the future. And apart from dynamic allocation of descriptors,
> this is a nice cleanup.

And the new descriptor doesn't bloat anything for zsmalloc meta size. Right?
Please specify it into the description.

> 
> This work is also available at:
> 	https://gitlab.com/hyeyoo/linux/-/tree/separate_zsdesc_rfc-v3
> 
> [1] State Of The Page, August 2022
> https://lore.kernel.org/lkml/YvV1KTyzZ+Jrtj9x@casper.infradead.org
> 
> Hyeonggon Yoo (21):
>   mm/zsmalloc: create new struct zsdesc
>   mm/zsmalloc: add utility functions for zsdesc
>   mm/zsmalloc: replace first_page to first_zsdesc in struct zspage
>   mm/zsmalloc: add alternatives of frequently used helper functions
>   mm/zsmalloc: convert {try,}lock_zspage() to use zsdesc
>   mm/zsmalloc: convert __zs_{map,unmap}_object() to use zsdesc
>   mm/zsmalloc: convert obj_to_location() and its users to use zsdesc
>   mm/zsmalloc: convert obj_malloc() to use zsdesc
>   mm/zsmalloc: convert create_page_chain() and its users to use zsdesc
>   mm/zsmalloc: convert obj_allocated() and related helpers to use zsdesc
>   mm/zsmalloc: convert init_zspage() to use zsdesc
>   mm/zsmalloc: convert obj_to_page() and zs_free() to use zsdesc
>   mm/zsmalloc: convert reset_page() to reset_zsdesc()
>   mm/zsmalloc: convert zs_page_{isolate,migrate,putback} to use zsdesc
>   mm/zsmalloc: convert __free_zspage() to use zsdesc
>   mm/zsmalloc: convert location_to_obj() to use zsdesc
>   mm/zsmalloc: convert migrate_zspage() to use zsdesc
>   mm/zsmalloc: convert get_zspage() to take zsdesc
>   mm/zsmalloc: convert SetZsPageMovable() to use zsdesc
>   mm/zsmalloc: remove now unused helper functions
>   mm/zsmalloc: convert {get,set}_first_obj_offset() to use zsdesc
> 
>  mm/zsmalloc.c | 578 +++++++++++++++++++++++++++++++-------------------
>  1 file changed, 364 insertions(+), 214 deletions(-)
> 
> -- 
> 2.39.3
> 

It's straightforward, and I don't see any problems. I sincerely appreciate
your dedication to making this change.

Thank you, Hyeonggon!


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page
  2023-12-01 19:28 ` [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Minchan Kim
@ 2023-12-02  4:36   ` Sergey Senozhatsky
  2023-12-02 22:46     ` Matthew Wilcox
  2023-12-03  5:21     ` Hyeonggon Yoo
  0 siblings, 2 replies; 32+ messages in thread
From: Sergey Senozhatsky @ 2023-12-02  4:36 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Hyeonggon Yoo, Sergey Senozhatsky, Matthew Wilcox, Mike Rapoport,
	Yosry Ahmed, linux-mm

On (23/12/01 11:28), Minchan Kim wrote:
> On Thu, Nov 30, 2023 at 07:12:21PM +0900, Hyeonggon Yoo wrote:
> > RFC v2: https://lore.kernel.org/linux-mm/20230713042037.980211-1-42.hyeyoo@gmail.com/
> > 
> > v2 -> v3:
> >  - rebased to the latest mm-unstable
> >  - adjusted comments from Sergey Senozhatsky (Moving zsdesc definition,
> >    kerneldoc fix) and Yosry Ahmed (adding memcg_data field to zsdesc)
> > 
> > 
> > V3 update is a bit late, but I still believe this is worth doing.
> > It would be nice to get comments/reviews/acks from maintainers/people.
> > 
> > Cover Letter:
> > 
> > The purpose of this series is to define own memory descriptor for zsmalloc,
> > instead of re-using various fields of struct page. This is a part of the
> > effort to reduce the size of struct page to unsigned long and enable
> > dynamic allocation of memory descriptors.
> > 
> > While [1] outlines this ultimate objective, the current use of struct page
> > is highly dependent on its definition, making it challenging to separately
> > allocate memory descriptors.
> > 
> > Therefore, this series introduces new descriptor for zsmalloc, called
> > zsdesc. It overlays struct page for now, but will eventually be allocated
> > independently in the future. And apart from dynamic allocation of descriptors,
> > this is a nice cleanup.
> 
> And the new descriptor doesn't bloat anything for zsmalloc meta size. Right?
> Please specify it into the description.

Right, that popped up [1] previously but I'm not sure if we had a definitive
answer.

https://lore.kernel.org/lkml/20230720071826.GE955071@google.com/


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page
  2023-12-02  4:36   ` Sergey Senozhatsky
@ 2023-12-02 22:46     ` Matthew Wilcox
  2023-12-03  5:21     ` Hyeonggon Yoo
  1 sibling, 0 replies; 32+ messages in thread
From: Matthew Wilcox @ 2023-12-02 22:46 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Minchan Kim, Hyeonggon Yoo, Mike Rapoport, Yosry Ahmed, linux-mm

On Sat, Dec 02, 2023 at 01:36:37PM +0900, Sergey Senozhatsky wrote:
> On (23/12/01 11:28), Minchan Kim wrote:
> > > Therefore, this series introduces new descriptor for zsmalloc, called
> > > zsdesc. It overlays struct page for now, but will eventually be allocated
> > > independently in the future. And apart from dynamic allocation of descriptors,
> > > this is a nice cleanup.
> > 
> > And the new descriptor doesn't bloat anything for zsmalloc meta size. Right?
> > Please specify it into the description.
> 
> Right, that popped up [1] previously but I'm not sure if we had a definitive
> answer.
> 
> https://lore.kernel.org/lkml/20230720071826.GE955071@google.com/

Today, it remains an overlay of struct page.  In the future, it will be
separately allocated, and may end up shrinking.  TBD.


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page
  2023-12-02  4:36   ` Sergey Senozhatsky
  2023-12-02 22:46     ` Matthew Wilcox
@ 2023-12-03  5:21     ` Hyeonggon Yoo
  1 sibling, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-12-03  5:21 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Minchan Kim, Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm

On Sat, Dec 2, 2023 at 1:36 PM Sergey Senozhatsky
<senozhatsky@chromium.org> wrote:
>
> On (23/12/01 11:28), Minchan Kim wrote:
> > On Thu, Nov 30, 2023 at 07:12:21PM +0900, Hyeonggon Yoo wrote:
> > > RFC v2: https://lore.kernel.org/linux-mm/20230713042037.980211-1-42.hyeyoo@gmail.com/
> > >
> > > v2 -> v3:
> > >  - rebased to the latest mm-unstable
> > >  - adjusted comments from Sergey Senozhatsky (Moving zsdesc definition,
> > >    kerneldoc fix) and Yosry Ahmed (adding memcg_data field to zsdesc)
> > >
> > >
> > > V3 update is a bit late, but I still believe this is worth doing.
> > > It would be nice to get comments/reviews/acks from maintainers/people.
> > >
> > > Cover Letter:
> > >
> > > The purpose of this series is to define own memory descriptor for zsmalloc,
> > > instead of re-using various fields of struct page. This is a part of the
> > > effort to reduce the size of struct page to unsigned long and enable
> > > dynamic allocation of memory descriptors.
> > >
> > > While [1] outlines this ultimate objective, the current use of struct page
> > > is highly dependent on its definition, making it challenging to separately
> > > allocate memory descriptors.
> > >
> > > Therefore, this series introduces new descriptor for zsmalloc, called
> > > zsdesc. It overlays struct page for now, but will eventually be allocated
> > > independently in the future. And apart from dynamic allocation of descriptors,
> > > this is a nice cleanup.
> >
> > And the new descriptor doesn't bloat anything for zsmalloc meta size. Right?
> > Please specify it into the description.
>
> Right, that popped up [1] previously but I'm not sure if we had a definitive
> answer.
> https://lore.kernel.org/lkml/20230720071826.GE955071@google.com/

Oh, I thought I did, maybe I missed it.

Let me explain it here: it does not bloat struct page, because of this check:
> static_assert(sizeof(struct zsdesc) <= sizeof(struct page));
Otherwise the compiler will complain.

So with this patch series, the size of struct page does not grow, nor is
any additional memory allocated for zsmalloc. Will add that to the
description in the next revision.
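
For illustration, here is a minimal self-contained userspace sketch of the
same compile-time guard; the field lists are made up for the example and
are not the actual struct page / struct zsdesc layouts:

/* Minimal userspace sketch (C11) of the size guard discussed above. */
#include <assert.h>

struct page_like {
	unsigned long words[8];		/* stand-in for sizeof(struct page) */
};

struct zsdesc_like {
	unsigned long flags;
	void *zspage;
	unsigned int first_obj_offset;
};

/* the build fails if the descriptor ever outgrows what it overlays */
static_assert(sizeof(struct zsdesc_like) <= sizeof(struct page_like),
	      "zsdesc must not be larger than struct page");

int main(void)
{
	return 0;
}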

As Matthew pointed out, struct page and zsdesc may end up being shrunk
and separately allocated in the future. What this series does is only
separate out the definition of the descriptor (while still overlaying
struct page).

Thanks a lot!
--
Hyeonggon


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 03/21] mm/zsmalloc: replace first_page to first_zsdesc in struct zspage
  2023-12-01 19:23   ` Minchan Kim
@ 2023-12-03  5:22     ` Hyeonggon Yoo
  0 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-12-03  5:22 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Sergey Senozhatsky, Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm

On Sat, Dec 2, 2023 at 4:23 AM Minchan Kim <minchan@kernel.org> wrote:
>
> On Thu, Nov 30, 2023 at 07:12:24PM +0900, Hyeonggon Yoo wrote:
> > Replace first_page to first_zsdesc in struct zspage for further
> > conversion.
> >
> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > ---
> >  mm/zsmalloc.c | 8 ++++----
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index 47df9103787e..4c9f9a2cb681 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -317,7 +317,7 @@ struct zspage {
> >       };
> >       unsigned int inuse;
> >       unsigned int freeobj;
> > -     struct page *first_page;
> > +     struct zsdesc *first_zsdesc;
> >       struct list_head list; /* fullness list */
> >       struct zs_pool *pool;
> >       rwlock_t lock;
> > @@ -516,7 +516,7 @@ static inline void mod_zspage_inuse(struct zspage *zspage, int val)
> >
> >  static inline struct page *get_first_page(struct zspage *zspage)
> >  {
> > -     struct page *first_page = zspage->first_page;
> > +     struct page *first_page = zsdesc_page(zspage->first_zsdesc);
> >
> >       VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
> >       return first_page;
> > @@ -1028,7 +1028,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage,
> >               set_page_private(page, (unsigned long)zspage);
> >               page->index = 0;
> >               if (i == 0) {
> > -                     zspage->first_page = page;
> > +                     zspage->first_zsdesc = page_zsdesc(page);
> >                       SetPagePrivate(page);
> >                       if (unlikely(class->objs_per_zspage == 1 &&
> >                                       class->pages_per_zspage == 1))
> > @@ -1402,7 +1402,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
> >               link->handle = handle;
> >       else
> >               /* record handle to page->index */
>
> Can you update the comment, too?

Will do in the next revision, thanks!
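
Something along these lines is what I have in mind (sketch only, not yet
part of the series):

		/* record handle to the first zsdesc's handle field */
		zspage->first_zsdesc->handle = handle;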

> > -             zspage->first_page->index = handle;
> > +             zspage->first_zsdesc->handle = handle;
> >
> >       kunmap_atomic(vaddr);
> >       mod_zspage_inuse(zspage, 1);
> > --
> > 2.39.3
> >


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 14/21] mm/zsmalloc: convert zs_page_{isolate,migrate,putback} to use zsdesc
  2023-11-30 10:12 ` [RFC PATCH v3 14/21] mm/zsmalloc: convert zs_page_{isolate,migrate,putback} to use zsdesc Hyeonggon Yoo
@ 2023-12-04  3:32   ` Sergey Senozhatsky
  2023-12-05  0:21     ` Hyeonggon Yoo
  0 siblings, 1 reply; 32+ messages in thread
From: Sergey Senozhatsky @ 2023-12-04  3:32 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox, Mike Rapoport,
	Yosry Ahmed, linux-mm

On (23/11/30 19:12), Hyeonggon Yoo wrote:
[..]
> +static inline bool zsdesc_is_isolated(struct zsdesc *zsdesc)
> +{
> +	return PageIsolated(zsdesc_page(zsdesc));
> +}
> +
> +struct zone *zsdesc_zone(struct zsdesc *zsdesc)

static struct zone

> +{
> +	return page_zone(zsdesc_page(zsdesc));
> +}


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 04/21] mm/zsmalloc: add alternatives of frequently used helper functions
  2023-11-30 10:12 ` [RFC PATCH v3 04/21] mm/zsmalloc: add alternatives of frequently used helper functions Hyeonggon Yoo
@ 2023-12-04  3:45   ` Matthew Wilcox
  2023-12-05  0:35     ` Hyeonggon Yoo
  0 siblings, 1 reply; 32+ messages in thread
From: Matthew Wilcox @ 2023-12-04  3:45 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Minchan Kim, Sergey Senozhatsky, Mike Rapoport, Yosry Ahmed, linux-mm

On Thu, Nov 30, 2023 at 07:12:25PM +0900, Hyeonggon Yoo wrote:
> +static __maybe_unused int is_first_zsdesc(struct zsdesc *zsdesc)
> +{
> +	return PagePrivate(zsdesc_page(zsdesc));
> +}

static inline bool is_first_zsdesc(struct zsdesc *zsdesc)
{
	return folio_test_private(zsdesc_folio(zsdesc));
}

> -static inline struct page *get_first_page(struct zspage *zspage)
> +static __maybe_unused inline struct page *get_first_page(struct zspage *zspage)

I don't think you need __maybe_unused with inline.

> +static __maybe_unused struct zsdesc *get_first_zsdesc(struct zspage *zspage)
> +{
> +	struct zsdesc *first_zsdesc = zspage->first_zsdesc;
> +
> +	VM_BUG_ON_PAGE(!is_first_zsdesc(first_zsdesc), zsdesc_page(first_zsdesc));

Do we want a VM_BUG_ON_ZSDESC?



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 14/21] mm/zsmalloc: convert zs_page_{isolate,migrate,putback} to use zsdesc
  2023-12-04  3:32   ` Sergey Senozhatsky
@ 2023-12-05  0:21     ` Hyeonggon Yoo
  0 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-12-05  0:21 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Minchan Kim, Matthew Wilcox, Mike Rapoport, Yosry Ahmed, linux-mm

On Mon, Dec 4, 2023 at 12:32 PM Sergey Senozhatsky
<senozhatsky@chromium.org> wrote:
>
> On (23/11/30 19:12), Hyeonggon Yoo wrote:
> [..]
> > +static inline bool zsdesc_is_isolated(struct zsdesc *zsdesc)
> > +{
> > +     return PageIsolated(zsdesc_page(zsdesc));
> > +}
> > +
> > +struct zone *zsdesc_zone(struct zsdesc *zsdesc)
>
> static struct zone

Will do in v4, thanks!

> > +{
> > +     return page_zone(zsdesc_page(zsdesc));
> > +}


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 04/21] mm/zsmalloc: add alternatives of frequently used helper functions
  2023-12-04  3:45   ` Matthew Wilcox
@ 2023-12-05  0:35     ` Hyeonggon Yoo
  0 siblings, 0 replies; 32+ messages in thread
From: Hyeonggon Yoo @ 2023-12-05  0:35 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Minchan Kim, Sergey Senozhatsky, Mike Rapoport, Yosry Ahmed, linux-mm

On Mon, Dec 4, 2023 at 12:45 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Thu, Nov 30, 2023 at 07:12:25PM +0900, Hyeonggon Yoo wrote:
> > +static __maybe_unused int is_first_zsdesc(struct zsdesc *zsdesc)
> > +{
> > +     return PagePrivate(zsdesc_page(zsdesc));
> > +}
>
> static inline bool is_first_zsdesc(struct zsdesc *zsdesc)
> {
>         return folio_test_private(zsdesc_folio(zsdesc));
> }

PagePrivate(zsdesc_page(zsdesc)) is fine, as zsmalloc always allocates
base pages and then builds a chain of them. That's not going to change
anytime soon. But I will drop __maybe_unused and add inline.

> > -static inline struct page *get_first_page(struct zspage *zspage)
> > +static __maybe_unused inline struct page *get_first_page(struct zspage *zspage)
>
> I don't think you need __maybe_unused with inline.

Right, will adjust in v4.

> > +static __maybe_unused struct zsdesc *get_first_zsdesc(struct zspage *zspage)
> > +{
> > +     struct zsdesc *first_zsdesc = zspage->first_zsdesc;
> > +
> > +     VM_BUG_ON_PAGE(!is_first_zsdesc(first_zsdesc), zsdesc_page(first_zsdesc));
>
> Do we want a VM_BUG_ON_ZSDESC?

If the kernel starts allocating zsdesc separately, we'll need that.
But for now, I don't think we need to implement it just yet.
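
If it turns out to be useful earlier, a straightforward form (just an
assumption on my side, not part of this series) could simply wrap the
page variant while zsdesc still overlays struct page:

	/* hypothetical helper, not in this series */
	#define VM_BUG_ON_ZSDESC(cond, zsdesc) \
		VM_BUG_ON_PAGE(cond, zsdesc_page(zsdesc))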

Thanks!
--
Hyeonggon


^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2023-12-05  0:35 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-11-30 10:12 [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 01/21] mm/zsmalloc: create new struct zsdesc Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 02/21] mm/zsmalloc: add utility functions for zsdesc Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 03/21] mm/zsmalloc: replace first_page to first_zsdesc in struct zspage Hyeonggon Yoo
2023-12-01 19:23   ` Minchan Kim
2023-12-03  5:22     ` Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 04/21] mm/zsmalloc: add alternatives of frequently used helper functions Hyeonggon Yoo
2023-12-04  3:45   ` Matthew Wilcox
2023-12-05  0:35     ` Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 05/21] mm/zsmalloc: convert {try,}lock_zspage() to use zsdesc Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 06/21] mm/zsmalloc: convert __zs_{map,unmap}_object() " Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 07/21] mm/zsmalloc: convert obj_to_location() and its users " Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 08/21] mm/zsmalloc: convert obj_malloc() " Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 09/21] mm/zsmalloc: convert create_page_chain() and its users " Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 10/21] mm/zsmalloc: convert obj_allocated() and related helpers " Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 11/21] mm/zsmalloc: convert init_zspage() " Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 12/21] mm/zsmalloc: convert obj_to_page() and zs_free() " Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 13/21] mm/zsmalloc: convert reset_page() to reset_zsdesc() Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 14/21] mm/zsmalloc: convert zs_page_{isolate,migrate,putback} to use zsdesc Hyeonggon Yoo
2023-12-04  3:32   ` Sergey Senozhatsky
2023-12-05  0:21     ` Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 15/21] mm/zsmalloc: convert __free_zspage() " Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 16/21] mm/zsmalloc: convert location_to_obj() " Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 17/21] mm/zsmalloc: convert migrate_zspage() " Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 18/21] mm/zsmalloc: convert get_zspage() to take zsdesc Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 19/21] mm/zsmalloc: convert SetZsPageMovable() to use zsdesc Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 20/21] mm/zsmalloc: remove now unused helper functions Hyeonggon Yoo
2023-11-30 10:12 ` [RFC PATCH v3 21/21] mm/zsmalloc: convert {get,set}_first_obj_offset() to use zsdesc Hyeonggon Yoo
2023-12-01 19:28 ` [RFC PATCH v3 00/21] mm/zsmalloc: Split zsdesc from struct page Minchan Kim
2023-12-02  4:36   ` Sergey Senozhatsky
2023-12-02 22:46     ` Matthew Wilcox
2023-12-03  5:21     ` Hyeonggon Yoo

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox