From: alexs@kernel.org
To: Vitaly Wool <vitaly.wool@konsulko.com>,
Miaohe Lin <linmiaohe@huawei.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
minchan@kernel.org, willy@infradead.org,
senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com,
Yosry Ahmed <yosryahmed@google.com>,
nphamcs@gmail.com
Cc: Alex Shi <alexs@kernel.org>
Subject: [PATCH v6 21/21] mm/zsmalloc: update comments for page->zpdesc changes
Date: Tue, 13 Aug 2024 16:46:07 +0800
Message-ID: <20240813084611.4122571-22-alexs@kernel.org>
In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org>
From: Alex Shi <alexs@kernel.org>
After the page to zpdesc conversion, a few comments and one function name
still refer to pages rather than zpdescs. Update those comments and rename
create_page_chain() to create_zpdesc_chain().
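For reference only, here is a minimal user-space sketch of the linking step
that create_zpdesc_chain() performs, matching the updated comments: every
component zpdesc is chained through ->next, each one points back to its
zspage, and only the first carries the "first" marker. The *_demo types and
names below are hypothetical stand-ins, not the kernel structures; the real
helper additionally handles PG_private, first_obj_offset and the huge-page
handle field.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-ins for struct zpdesc and struct zspage. */
struct zspage_demo;

struct zpdesc_demo {
	struct zspage_demo *zspage;	/* each sub-zpdesc points back to its zspage */
	struct zpdesc_demo *next;	/* links all component zpdescs of the zspage */
	bool first;			/* stands in for PG_private on the first zpdesc */
};

struct zspage_demo {
	struct zpdesc_demo *first_zpdesc;
};

/* Link nr component zpdescs into one chain, as the renamed helper does. */
static void demo_create_chain(struct zspage_demo *zspage,
			      struct zpdesc_demo *zpdescs[], int nr)
{
	for (int i = 0; i < nr; i++) {
		zpdescs[i]->zspage = zspage;
		zpdescs[i]->next = (i + 1 < nr) ? zpdescs[i + 1] : NULL;
		zpdescs[i]->first = (i == 0);
	}
	zspage->first_zpdesc = zpdescs[0];
}

int main(void)
{
	struct zpdesc_demo d[3] = { 0 };
	struct zpdesc_demo *descs[] = { &d[0], &d[1], &d[2] };
	struct zspage_demo zspage = { 0 };

	demo_create_chain(&zspage, descs, 3);
	for (struct zpdesc_demo *p = zspage.first_zpdesc; p; p = p->next)
		printf("first=%d\n", p->first);
	return 0;
}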
Signed-off-by: Alex Shi <alexs@kernel.org>
---
mm/zsmalloc.c | 47 ++++++++++++++++++++++++++---------------------
1 file changed, 26 insertions(+), 21 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3dbe6bfa656b..37619f4b074b 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -17,14 +17,16 @@
*
* Usage of struct zpdesc fields:
* zpdesc->zspage: points to zspage
- * zpdesc->next: links together all component pages of a zspage
+ * zpdesc->next: links together all component zpdescs of a zspage
* For the huge page, this is always 0, so we use this field
* to store handle.
* zpdesc->first_obj_offset: PG_zsmalloc, lower 16 bit locate the first
* object offset in a subpage of a zspage
*
* Usage of struct zpdesc(page) flags:
- * PG_private: identifies the first component page
+ * PG_private: identifies the first component zpdesc
+ * PG_lock: locks all component zpdescs when a zspage is freed, serializing
+ *	with migration
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -191,7 +193,10 @@ struct size_class {
*/
int size;
int objs_per_zspage;
- /* Number of PAGE_SIZE sized pages to combine to form a 'zspage' */
+ /*
+ * Number of PAGE_SIZE sized zpdescs/pages to combine to
+ * form a 'zspage'
+ */
int pages_per_zspage;
unsigned int index;
@@ -907,7 +912,7 @@ static void free_zspage(struct zs_pool *pool, struct size_class *class,
/*
* Since zs_free couldn't be sleepable, this function cannot call
- * lock_page. The page locks trylock_zspage got will be released
+ * lock_page. The zpdesc locks trylock_zspage got will be released
* by __free_zspage.
*/
if (!trylock_zspage(zspage)) {
@@ -964,7 +969,7 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
set_freeobj(zspage, 0);
}
-static void create_page_chain(struct size_class *class, struct zspage *zspage,
+static void create_zpdesc_chain(struct size_class *class, struct zspage *zspage,
struct zpdesc *zpdescs[])
{
int i;
@@ -973,9 +978,9 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage,
int nr_zpdescs = class->pages_per_zspage;
/*
- * Allocate individual pages and link them together as:
- * 1. all pages are linked together using zpdesc->next
- * 2. each sub-page point to zspage using zpdesc->zspage
+ * Allocate individual zpdescs and link them together as:
+ * 1. all zpdescs are linked together using zpdesc->next
+ * 2. each sub-zpdesc points to zspage using zpdesc->zspage
*
* we set PG_private to identify the first zpdesc (i.e. no other zpdesc
* has this flag set).
@@ -1033,7 +1038,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
zpdescs[i] = zpdesc;
}
- create_page_chain(class, zspage, zpdescs);
+ create_zpdesc_chain(class, zspage, zpdescs);
init_zspage(class, zspage);
zspage->pool = pool;
zspage->class = class->index;
@@ -1360,7 +1365,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
/* record handle in the header of allocated chunk */
link->handle = handle | OBJ_ALLOCATED_TAG;
else
- /* record handle to page->index */
+ /* record handle to zpdesc->handle */
zspage->first_zpdesc->handle = handle | OBJ_ALLOCATED_TAG;
kunmap_atomic(vaddr);
@@ -1693,19 +1698,19 @@ static int putback_zspage(struct size_class *class, struct zspage *zspage)
#ifdef CONFIG_COMPACTION
/*
* To prevent zspage destroy during migration, zspage freeing should
- * hold locks of all pages in the zspage.
+ * hold locks of all component zpdescs in the zspage.
*/
static void lock_zspage(struct zspage *zspage)
{
struct zpdesc *curr_zpdesc, *zpdesc;
/*
- * Pages we haven't locked yet can be migrated off the list while we're
+ * Zpdescs we haven't locked yet can be migrated off the list while we're
* trying to lock them, so we need to be careful and only attempt to
- * lock each page under migrate_read_lock(). Otherwise, the page we lock
- * may no longer belong to the zspage. This means that we may wait for
- * the wrong page to unlock, so we must take a reference to the page
- * prior to waiting for it to unlock outside migrate_read_lock().
+ * lock each zpdesc under migrate_read_lock(). Otherwise, the zpdesc we
+ * lock may no longer belong to the zspage. This means that we may wait
+ * for the wrong zpdesc to unlock, so we must take a reference to the
+ * zpdesc prior to waiting for it to unlock outside migrate_read_lock().
*/
while (1) {
migrate_read_lock(zspage);
@@ -1780,7 +1785,7 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
idx++;
} while ((zpdesc = get_next_zpdesc(zpdesc)) != NULL);
- create_page_chain(class, zspage, zpdescs);
+ create_zpdesc_chain(class, zspage, zpdescs);
first_obj_offset = get_first_obj_offset(oldzpdesc);
set_first_obj_offset(newzpdesc, first_obj_offset);
if (unlikely(ZsHugePage(zspage)))
@@ -1791,8 +1796,8 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
{
/*
- * Page is locked so zspage couldn't be destroyed. For detail, look at
- * lock_zspage in free_zspage.
+ * Page/zpdesc is locked so zspage couldn't be destroyed. For detail,
+ * look at lock_zspage in free_zspage.
*/
VM_BUG_ON_PAGE(PageIsolated(page), page);
@@ -1819,7 +1824,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
/* We're committed, tell the world that this is a Zsmalloc page. */
__zpdesc_set_zsmalloc(newzpdesc);
- /* The page is locked, so this pointer must remain valid */
+ /* The zpdesc/page is locked, so this pointer must remain valid */
zspage = get_zspage(zpdesc);
pool = zspage->pool;
@@ -1892,7 +1897,7 @@ static const struct movable_operations zsmalloc_mops = {
};
/*
- * Caller should hold page_lock of all pages in the zspage
+ * Caller should hold locks of all component zpdescs in the zspage
* In here, we cannot use zspage meta data.
*/
static void async_free_zspage(struct work_struct *work)
--
2.43.0