From: Wenchao Hao <haowenchao22@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
Chengming Zhou <chengming.zhou@linux.dev>,
Jens Axboe <axboe@kernel.dk>,
Johannes Weiner <hannes@cmpxchg.org>,
Minchan Kim <minchan@kernel.org>, Nhat Pham <nphamcs@gmail.com>,
Sergey Senozhatsky <senozhatsky@chromium.org>,
Yosry Ahmed <yosry@kernel.org>,
linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org
Cc: Barry Song <baohua@kernel.org>,
Xueyuan Chen <xueyuan.chen21@gmail.com>,
Wenchao Hao <haowenchao@xiaomi.com>
Subject: [RFC PATCH v2 1/4] mm/zsmalloc: drop class lock before freeing zspage
Date: Tue, 21 Apr 2026 20:16:13 +0800
Message-ID: <20260421121616.3298845-2-haowenchao@xiaomi.com>
In-Reply-To: <20260421121616.3298845-1-haowenchao@xiaomi.com>
From: Xueyuan Chen <xueyuan.chen21@gmail.com>
Currently in zs_free(), the class->lock is held until the zspage is
completely freed and the counters are updated. However, freeing pages back
to the buddy allocator requires acquiring the zone lock.
Under heavy memory pressure, zone lock contention can be severe. When this
happens, the CPU holding the class->lock will stall waiting for the zone
lock, thereby blocking all other CPUs attempting to acquire the same
class->lock.
This patch shrinks the class->lock critical section to reduce contention:
the actual page freeing is moved outside the class->lock, which improves
the concurrency of zs_free().
Testing on the RADXA O6 platform shows that with 12 CPUs concurrently
performing zs_free() operations, the execution time is reduced by 20%.
Signed-off-by: Xueyuan Chen <xueyuan.chen21@gmail.com>
Signed-off-by: Wenchao Hao <haowenchao@xiaomi.com>
---
mm/zsmalloc.c | 28 ++++++++++++++++++++++------
1 file changed, 22 insertions(+), 6 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 63128ddb7959..40687c8a7469 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -801,13 +801,10 @@ static int trylock_zspage(struct zspage *zspage)
return 0;
}
-static void __free_zspage(struct zs_pool *pool, struct size_class *class,
- struct zspage *zspage)
+static inline void __free_zspage_lockless(struct zs_pool *pool, struct zspage *zspage)
{
struct zpdesc *zpdesc, *next;
- assert_spin_locked(&class->lock);
-
VM_BUG_ON(get_zspage_inuse(zspage));
VM_BUG_ON(zspage->fullness != ZS_INUSE_RATIO_0);
@@ -823,7 +820,13 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
} while (zpdesc != NULL);
cache_free_zspage(zspage);
+}
+static void __free_zspage(struct zs_pool *pool, struct size_class *class,
+ struct zspage *zspage)
+{
+ assert_spin_locked(&class->lock);
+ __free_zspage_lockless(pool, zspage);
class_stat_sub(class, ZS_OBJS_ALLOCATED, class->objs_per_zspage);
atomic_long_sub(class->pages_per_zspage, &pool->pages_allocated);
}
@@ -1388,6 +1391,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
unsigned long obj;
struct size_class *class;
int fullness;
+ struct zspage *zspage_to_free = NULL;
if (IS_ERR_OR_NULL((void *)handle))
return;
@@ -1408,10 +1412,22 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
obj_free(class->size, obj);
fullness = fix_fullness_group(class, zspage);
- if (fullness == ZS_INUSE_RATIO_0)
- free_zspage(pool, class, zspage);
+ if (fullness == ZS_INUSE_RATIO_0) {
+ if (trylock_zspage(zspage)) {
+ remove_zspage(class, zspage);
+ class_stat_sub(class, ZS_OBJS_ALLOCATED,
+ class->objs_per_zspage);
+ zspage_to_free = zspage;
+ } else
+ kick_deferred_free(pool);
+ }
spin_unlock(&class->lock);
+
+ if (likely(zspage_to_free)) {
+ __free_zspage_lockless(pool, zspage_to_free);
+ atomic_long_sub(class->pages_per_zspage, &pool->pages_allocated);
+ }
cache_free_handle(handle);
}
EXPORT_SYMBOL_GPL(zs_free);
--
2.34.1
Thread overview: 6+ messages
2026-04-21 12:16 [RFC PATCH v2 0/4] mm/zsmalloc: reduce zs_free() latency on swap release path Wenchao Hao
2026-04-21 12:16 ` Wenchao Hao [this message]
2026-04-21 12:16 ` [RFC PATCH v2 2/4] mm/zsmalloc: introduce zs_free_deferred() for async handle freeing Wenchao Hao
2026-04-21 12:16 ` [RFC PATCH v2 3/4] zram: defer zs_free() in swap slot free notification path Wenchao Hao
2026-04-21 12:16 ` [RFC PATCH v2 4/4] mm/zswap: defer zs_free() in zswap_invalidate() path Wenchao Hao
2026-04-21 15:54 ` [RFC PATCH v2 0/4] mm/zsmalloc: reduce zs_free() latency on swap release path Nhat Pham