* [RFC PATCH v2 1/4] mm/zsmalloc: drop class lock before freeing zspage
2026-04-21 12:16 [RFC PATCH v2 0/4] mm/zsmalloc: reduce zs_free() latency on swap release path Wenchao Hao
@ 2026-04-21 12:16 ` Wenchao Hao
2026-04-21 12:16 ` [RFC PATCH v2 2/4] mm/zsmalloc: introduce zs_free_deferred() for async handle freeing Wenchao Hao
` (3 subsequent siblings)
4 siblings, 0 replies; 11+ messages in thread
From: Wenchao Hao @ 2026-04-21 12:16 UTC (permalink / raw)
To: Andrew Morton, Chengming Zhou, Jens Axboe, Johannes Weiner,
Minchan Kim, Nhat Pham, Sergey Senozhatsky, Yosry Ahmed,
linux-block, linux-kernel, linux-mm
Cc: Barry Song, Xueyuan Chen, Wenchao Hao
From: Xueyuan Chen <xueyuan.chen21@gmail.com>
Currently in zs_free(), the class->lock is held until the zspage is
completely freed and the counters are updated. However, freeing pages back
to the buddy allocator requires acquiring the zone lock.
Under heavy memory pressure, zone lock contention can be severe. When this
happens, the CPU holding the class->lock will stall waiting for the zone
lock, thereby blocking all other CPUs attempting to acquire the same
class->lock.
This patch shrinks the critical section of the class->lock to reduce lock
contention. By moving the actual page freeing process outside the
class->lock, we can improve the concurrency performance of zs_free().
Testing on the RADXA O6 platform shows that with 12 CPUs concurrently
performing zs_free() operations, the execution time is reduced by 20%.
Signed-off-by: Xueyuan Chen <xueyuan.chen21@gmail.com>
Signed-off-by: Wenchao Hao <haowenchao@xiaomi.com>
---
mm/zsmalloc.c | 28 ++++++++++++++++++++++------
1 file changed, 22 insertions(+), 6 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 63128ddb7959..40687c8a7469 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -801,13 +801,10 @@ static int trylock_zspage(struct zspage *zspage)
return 0;
}
-static void __free_zspage(struct zs_pool *pool, struct size_class *class,
- struct zspage *zspage)
+static inline void __free_zspage_lockless(struct zs_pool *pool, struct zspage *zspage)
{
struct zpdesc *zpdesc, *next;
- assert_spin_locked(&class->lock);
-
VM_BUG_ON(get_zspage_inuse(zspage));
VM_BUG_ON(zspage->fullness != ZS_INUSE_RATIO_0);
@@ -823,7 +820,13 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
} while (zpdesc != NULL);
cache_free_zspage(zspage);
+}
+static void __free_zspage(struct zs_pool *pool, struct size_class *class,
+ struct zspage *zspage)
+{
+ assert_spin_locked(&class->lock);
+ __free_zspage_lockless(pool, zspage);
class_stat_sub(class, ZS_OBJS_ALLOCATED, class->objs_per_zspage);
atomic_long_sub(class->pages_per_zspage, &pool->pages_allocated);
}
@@ -1388,6 +1391,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
unsigned long obj;
struct size_class *class;
int fullness;
+ struct zspage *zspage_to_free = NULL;
if (IS_ERR_OR_NULL((void *)handle))
return;
@@ -1408,10 +1412,22 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
obj_free(class->size, obj);
fullness = fix_fullness_group(class, zspage);
- if (fullness == ZS_INUSE_RATIO_0)
- free_zspage(pool, class, zspage);
+ if (fullness == ZS_INUSE_RATIO_0) {
+ if (trylock_zspage(zspage)) {
+ remove_zspage(class, zspage);
+ class_stat_sub(class, ZS_OBJS_ALLOCATED,
+ class->objs_per_zspage);
+ zspage_to_free = zspage;
+ } else
+ kick_deferred_free(pool);
+ }
spin_unlock(&class->lock);
+
+ if (likely(zspage_to_free)) {
+ __free_zspage_lockless(pool, zspage_to_free);
+ atomic_long_sub(class->pages_per_zspage, &pool->pages_allocated);
+ }
cache_free_handle(handle);
}
EXPORT_SYMBOL_GPL(zs_free);
--
2.34.1
^ permalink raw reply	[flat|nested] 11+ messages in thread

* [RFC PATCH v2 2/4] mm/zsmalloc: introduce zs_free_deferred() for async handle freeing
2026-04-21 12:16 [RFC PATCH v2 0/4] mm/zsmalloc: reduce zs_free() latency on swap release path Wenchao Hao
2026-04-21 12:16 ` [RFC PATCH v2 1/4] mm/zsmalloc: drop class lock before freeing zspage Wenchao Hao
@ 2026-04-21 12:16 ` Wenchao Hao
2026-04-21 19:46 ` Nhat Pham
2026-04-21 12:16 ` [RFC PATCH v2 3/4] zram: defer zs_free() in swap slot free notification path Wenchao Hao
` (2 subsequent siblings)
4 siblings, 1 reply; 11+ messages in thread
From: Wenchao Hao @ 2026-04-21 12:16 UTC (permalink / raw)
To: Andrew Morton, Chengming Zhou, Jens Axboe, Johannes Weiner,
Minchan Kim, Nhat Pham, Sergey Senozhatsky, Yosry Ahmed,
linux-block, linux-kernel, linux-mm
Cc: Barry Song, Xueyuan Chen, Wenchao Hao
zs_free() is expensive due to internal locking (pool->lock, class->lock)
and potential zspage freeing. On the process exit path, the slow
zs_free() blocks memory reclamation, delaying overall memory release.
This has been reported to significantly impact Android low-memory
killing where slot_free() accounts for over 80% of the total swap
entry freeing cost.
Introduce zs_free_deferred() which queues handles into a fixed-size
per-pool array for later processing by a workqueue. This allows callers
to defer the expensive zs_free() and return quickly, so the process
exit path can release memory faster. The array capacity is derived from
a 128MB uncompressed data budget (128MB >> PAGE_SHIFT entries), which
scales naturally with PAGE_SIZE. When the array reaches half capacity,
the workqueue is scheduled to drain pending handles.
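The capacity derivation above can be modeled with a small userspace sketch (the `DEMO_*` names are illustrative, not the kernel macros in this patch; `DEMO_PAGE_SHIFT` assumes 4KB pages, where arm64 may use 14 or 16):

```c
#include <assert.h>

/* A 128MB uncompressed-data budget expressed as page-sized entries,
 * mirroring ZS_DEFERRED_FREE_CAPACITY / _THRESHOLD from the patch. */
#define DEMO_PAGE_SHIFT 12                       /* 4KB pages assumed */
#define DEMO_MAX_BYTES  (128u << 20)             /* 128MB budget */
#define DEMO_CAPACITY   (DEMO_MAX_BYTES >> DEMO_PAGE_SHIFT)
#define DEMO_THRESHOLD  (DEMO_CAPACITY / 2)

unsigned int demo_capacity(void)  { return DEMO_CAPACITY; }   /* 32768 */
unsigned int demo_threshold(void) { return DEMO_THRESHOLD; }  /* 16384 */
```

With 4KB pages the array holds 32768 handles (256KB of `unsigned long` storage on 64-bit), and the drain work is kicked once 16384 handles are pending; a larger PAGE_SHIFT shrinks the array proportionally.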
zs_free_deferred() uses spin_trylock() to access the deferred queue.
If the lock is contended (e.g. drain in progress) or the queue is full,
it falls back to synchronous zs_free() to guarantee correctness.
Also introduce zs_free_deferred_flush() for use during pool teardown to
ensure all pending handles are freed.
Signed-off-by: Wenchao Hao <haowenchao@xiaomi.com>
---
include/linux/zsmalloc.h | 2 +
mm/zsmalloc.c | 111 +++++++++++++++++++++++++++++++++++++++
2 files changed, 113 insertions(+)
diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
index 478410c880b1..1e5ac1a39d41 100644
--- a/include/linux/zsmalloc.h
+++ b/include/linux/zsmalloc.h
@@ -30,6 +30,8 @@ void zs_destroy_pool(struct zs_pool *pool);
unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags,
const int nid);
void zs_free(struct zs_pool *pool, unsigned long obj);
+void zs_free_deferred(struct zs_pool *pool, unsigned long handle);
+void zs_free_deferred_flush(struct zs_pool *pool);
size_t zs_huge_class_size(struct zs_pool *pool);
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 40687c8a7469..defc892555e4 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -53,6 +53,10 @@
#define ZS_HANDLE_SIZE (sizeof(unsigned long))
+#define ZS_DEFERRED_FREE_MAX_BYTES (128 << 20)
+#define ZS_DEFERRED_FREE_CAPACITY (ZS_DEFERRED_FREE_MAX_BYTES >> PAGE_SHIFT)
+#define ZS_DEFERRED_FREE_THRESHOLD (ZS_DEFERRED_FREE_CAPACITY / 2)
+
/*
* Object location (<PFN>, <obj_idx>) is encoded as
* a single (unsigned long) handle value.
@@ -217,6 +221,13 @@ struct zs_pool {
/* protect zspage migration/compaction */
rwlock_t lock;
atomic_t compaction_in_progress;
+
+ /* deferred free support */
+ spinlock_t deferred_lock;
+ unsigned long *deferred_handles;
+ unsigned int deferred_count;
+ unsigned int deferred_capacity;
+ struct work_struct deferred_free_work;
};
static inline void zpdesc_set_first(struct zpdesc *zpdesc)
@@ -579,6 +590,19 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
}
DEFINE_SHOW_ATTRIBUTE(zs_stats_size);
+static int zs_stats_deferred_show(struct seq_file *s, void *v)
+{
+ struct zs_pool *pool = s->private;
+
+ spin_lock(&pool->deferred_lock);
+ seq_printf(s, "pending: %u\n", pool->deferred_count);
+ seq_printf(s, "capacity: %u\n", pool->deferred_capacity);
+ spin_unlock(&pool->deferred_lock);
+
+ return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(zs_stats_deferred);
+
static void zs_pool_stat_create(struct zs_pool *pool, const char *name)
{
if (!zs_stat_root) {
@@ -590,6 +614,9 @@ static void zs_pool_stat_create(struct zs_pool *pool, const char *name)
debugfs_create_file("classes", S_IFREG | 0444, pool->stat_dentry, pool,
&zs_stats_size_fops);
+ debugfs_create_file("deferred_free", S_IFREG | 0444,
+ pool->stat_dentry, pool,
+ &zs_stats_deferred_fops);
}
static void zs_pool_stat_destroy(struct zs_pool *pool)
@@ -1432,6 +1459,76 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
}
EXPORT_SYMBOL_GPL(zs_free);
+static void zs_deferred_free_work(struct work_struct *work)
+{
+ struct zs_pool *pool = container_of(work, struct zs_pool,
+ deferred_free_work);
+ unsigned long handle;
+
+ while (1) {
+ spin_lock(&pool->deferred_lock);
+ if (pool->deferred_count == 0) {
+ spin_unlock(&pool->deferred_lock);
+ break;
+ }
+ handle = pool->deferred_handles[--pool->deferred_count];
+ spin_unlock(&pool->deferred_lock);
+
+ zs_free(pool, handle);
+ cond_resched();
+ }
+}
+
+/**
+ * zs_free_deferred - queue a handle for asynchronous freeing
+ * @pool: pool to free from
+ * @handle: handle to free
+ *
+ * Place @handle into a deferred free queue for later processing by a
+ * workqueue. This is intended for callers that are in atomic context
+ * (e.g. under a spinlock) and cannot afford the cost of zs_free()
+ * directly. When the queue reaches a threshold the work is scheduled.
+ * Falls back to synchronous zs_free() if the lock is contended (drain
+ * in progress) or if the queue is full.
+ */
+void zs_free_deferred(struct zs_pool *pool, unsigned long handle)
+{
+ if (IS_ERR_OR_NULL((void *)handle))
+ return;
+
+ if (!spin_trylock(&pool->deferred_lock))
+ goto sync_free;
+
+ if (pool->deferred_count >= pool->deferred_capacity) {
+ spin_unlock(&pool->deferred_lock);
+ goto sync_free;
+ }
+
+ pool->deferred_handles[pool->deferred_count++] = handle;
+ if (pool->deferred_count >= ZS_DEFERRED_FREE_THRESHOLD)
+ queue_work(system_wq, &pool->deferred_free_work);
+ spin_unlock(&pool->deferred_lock);
+ return;
+
+sync_free:
+ zs_free(pool, handle);
+}
+EXPORT_SYMBOL_GPL(zs_free_deferred);
+
+/**
+ * zs_free_deferred_flush - flush all pending deferred frees
+ * @pool: pool to flush
+ *
+ * Wait for any scheduled work to complete, then drain any remaining
+ * handles. Must be called from process context.
+ */
+void zs_free_deferred_flush(struct zs_pool *pool)
+{
+ flush_work(&pool->deferred_free_work);
+ zs_deferred_free_work(&pool->deferred_free_work);
+}
+EXPORT_SYMBOL_GPL(zs_free_deferred_flush);
+
static void zs_object_copy(struct size_class *class, unsigned long dst,
unsigned long src)
{
@@ -2099,6 +2196,18 @@ struct zs_pool *zs_create_pool(const char *name)
rwlock_init(&pool->lock);
atomic_set(&pool->compaction_in_progress, 0);
+ spin_lock_init(&pool->deferred_lock);
+ pool->deferred_capacity = ZS_DEFERRED_FREE_CAPACITY;
+ pool->deferred_handles = kvmalloc_array(pool->deferred_capacity,
+ sizeof(unsigned long),
+ GFP_KERNEL);
+ if (!pool->deferred_handles) {
+ kfree(pool);
+ return NULL;
+ }
+ pool->deferred_count = 0;
+ INIT_WORK(&pool->deferred_free_work, zs_deferred_free_work);
+
pool->name = kstrdup(name, GFP_KERNEL);
if (!pool->name)
goto err;
@@ -2201,6 +2310,7 @@ void zs_destroy_pool(struct zs_pool *pool)
int i;
zs_unregister_shrinker(pool);
+ zs_free_deferred_flush(pool);
zs_flush_migration(pool);
zs_pool_stat_destroy(pool);
@@ -2224,6 +2334,7 @@ void zs_destroy_pool(struct zs_pool *pool)
kfree(class);
}
+ kvfree(pool->deferred_handles);
kfree(pool->name);
kfree(pool);
}
--
2.34.1
* Re: [RFC PATCH v2 2/4] mm/zsmalloc: introduce zs_free_deferred() for async handle freeing
2026-04-21 12:16 ` [RFC PATCH v2 2/4] mm/zsmalloc: introduce zs_free_deferred() for async handle freeing Wenchao Hao
@ 2026-04-21 19:46 ` Nhat Pham
0 siblings, 0 replies; 11+ messages in thread
From: Nhat Pham @ 2026-04-21 19:46 UTC (permalink / raw)
To: Wenchao Hao
Cc: Andrew Morton, Chengming Zhou, Jens Axboe, Johannes Weiner,
Minchan Kim, Sergey Senozhatsky, Yosry Ahmed, linux-block,
linux-kernel, linux-mm, Barry Song, Xueyuan Chen, Wenchao Hao
On Tue, Apr 21, 2026 at 5:16 AM Wenchao Hao <haowenchao22@gmail.com> wrote:
>
> zs_free() is expensive due to internal locking (pool->lock, class->lock)
> and potential zspage freeing. On the process exit path, the slow
> zs_free() blocks memory reclamation, delaying overall memory release.
> This has been reported to significantly impact Android low-memory
> killing where slot_free() accounts for over 80% of the total swap
> entry freeing cost.
>
> Introduce zs_free_deferred() which queues handles into a fixed-size
> per-pool array for later processing by a workqueue. This allows callers
> to defer the expensive zs_free() and return quickly, so the process
> exit path can release memory faster. The array capacity is derived from
> a 128MB uncompressed data budget (128MB >> PAGE_SHIFT entries), which
> scales naturally with PAGE_SIZE. When the array reaches half capacity,
> the workqueue is scheduled to drain pending handles.
>
> zs_free_deferred() uses spin_trylock() to access the deferred queue.
> If the lock is contended (e.g. drain in progress) or the queue is full,
> it falls back to synchronous zs_free() to guarantee correctness.
>
> Also introduce zs_free_deferred_flush() for use during pool teardown to
> ensure all pending handles are freed.
Hmmm per-pool workqueue.
Does that mean that if you only have one zs pool (in the case of
zswap, or if you only have one zram device), you'll have less
concurrency in freeing up zsmalloc memory for process teardown? Would
this be problematic?
I think Kairui was also suggesting per-cpu-fying these batches/queues.
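A minimal userspace sketch of that per-cpu idea (hypothetical names; C thread-local storage stands in for DEFINE_PER_CPU, and the real version would disable preemption and kick a drain worker rather than return a flag):

```c
#include <assert.h>

#define BATCH_CAP 8

/* Each "CPU" (here: thread) owns a private batch, so producers never
 * contend on a shared pool->deferred_lock. */
static __thread unsigned long demo_batch[BATCH_CAP];
static __thread unsigned int demo_batch_count;

/* Queue locally; returns 0 when the batch is full so the caller knows
 * a flush is needed (kernel version: schedule the drain worker). */
int demo_batch_add(unsigned long handle)
{
	if (demo_batch_count >= BATCH_CAP)
		return 0;
	demo_batch[demo_batch_count++] = handle;
	return 1;
}

/* Hand the whole batch to the drainer; returns how many were pending. */
unsigned int demo_batch_flush(void)
{
	unsigned int n = demo_batch_count;

	demo_batch_count = 0;
	return n;
}
```

The trade-off is that a flush/teardown path then has to visit every CPU's batch instead of one shared array.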
>
> Signed-off-by: Wenchao Hao <haowenchao@xiaomi.com>
> ---
> include/linux/zsmalloc.h | 2 +
> mm/zsmalloc.c | 111 +++++++++++++++++++++++++++++++++++++++
> 2 files changed, 113 insertions(+)
>
> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> index 478410c880b1..1e5ac1a39d41 100644
> --- a/include/linux/zsmalloc.h
> +++ b/include/linux/zsmalloc.h
> @@ -30,6 +30,8 @@ void zs_destroy_pool(struct zs_pool *pool);
> unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags,
> const int nid);
> void zs_free(struct zs_pool *pool, unsigned long obj);
> +void zs_free_deferred(struct zs_pool *pool, unsigned long handle);
> +void zs_free_deferred_flush(struct zs_pool *pool);
>
> size_t zs_huge_class_size(struct zs_pool *pool);
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 40687c8a7469..defc892555e4 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -53,6 +53,10 @@
>
> #define ZS_HANDLE_SIZE (sizeof(unsigned long))
>
> +#define ZS_DEFERRED_FREE_MAX_BYTES (128 << 20)
> +#define ZS_DEFERRED_FREE_CAPACITY (ZS_DEFERRED_FREE_MAX_BYTES >> PAGE_SHIFT)
> +#define ZS_DEFERRED_FREE_THRESHOLD (ZS_DEFERRED_FREE_CAPACITY / 2)
> +
> /*
> * Object location (<PFN>, <obj_idx>) is encoded as
> * a single (unsigned long) handle value.
> @@ -217,6 +221,13 @@ struct zs_pool {
> /* protect zspage migration/compaction */
> rwlock_t lock;
> atomic_t compaction_in_progress;
> +
> + /* deferred free support */
> + spinlock_t deferred_lock;
> + unsigned long *deferred_handles;
> + unsigned int deferred_count;
> + unsigned int deferred_capacity;
> + struct work_struct deferred_free_work;
> };
>
> static inline void zpdesc_set_first(struct zpdesc *zpdesc)
> @@ -579,6 +590,19 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
> }
> DEFINE_SHOW_ATTRIBUTE(zs_stats_size);
>
> +static int zs_stats_deferred_show(struct seq_file *s, void *v)
> +{
> + struct zs_pool *pool = s->private;
> +
> + spin_lock(&pool->deferred_lock);
> + seq_printf(s, "pending: %u\n", pool->deferred_count);
> + seq_printf(s, "capacity: %u\n", pool->deferred_capacity);
> + spin_unlock(&pool->deferred_lock);
> +
> + return 0;
> +}
> +DEFINE_SHOW_ATTRIBUTE(zs_stats_deferred);
> +
> static void zs_pool_stat_create(struct zs_pool *pool, const char *name)
> {
> if (!zs_stat_root) {
> @@ -590,6 +614,9 @@ static void zs_pool_stat_create(struct zs_pool *pool, const char *name)
>
> debugfs_create_file("classes", S_IFREG | 0444, pool->stat_dentry, pool,
> &zs_stats_size_fops);
> + debugfs_create_file("deferred_free", S_IFREG | 0444,
> + pool->stat_dentry, pool,
> + &zs_stats_deferred_fops);
> }
>
> static void zs_pool_stat_destroy(struct zs_pool *pool)
> @@ -1432,6 +1459,76 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
> }
> EXPORT_SYMBOL_GPL(zs_free);
>
> +static void zs_deferred_free_work(struct work_struct *work)
> +{
> + struct zs_pool *pool = container_of(work, struct zs_pool,
> + deferred_free_work);
> + unsigned long handle;
> +
> + while (1) {
> + spin_lock(&pool->deferred_lock);
> + if (pool->deferred_count == 0) {
> + spin_unlock(&pool->deferred_lock);
> + break;
> + }
> + handle = pool->deferred_handles[--pool->deferred_count];
> + spin_unlock(&pool->deferred_lock);
Any reason why we're locking, grabbing a handle, then unlocking, one
at a time? Why don't we just lock, grab all the handles (or at least a
batch of them), unlock, then process the handles one at a time?
We can also have a pair of handle arrays. Whenever the defer worker is
woken up, just swap the arrays under the lock, then free the handles
in the old array :)
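A rough userspace model of that double-buffer suggestion (illustrative names, not a kernel API; the pool spinlock is marked in comments rather than taken, so the sketch stays single-threaded):

```c
#include <assert.h>

#define DEMO_CAP 64

static unsigned long demo_bufs[2][DEMO_CAP];
static unsigned long *demo_active = demo_bufs[0];
static unsigned int demo_active_count;

/* Producer side: kernel version wraps this in
 * spin_lock(&pool->deferred_lock) / spin_unlock(). */
void demo_queue(unsigned long handle)
{
	if (demo_active_count < DEMO_CAP)
		demo_active[demo_active_count++] = handle;
}

/* One drain pass: swap the arrays "under the lock" (an O(1) critical
 * section), then walk the old array with the lock dropped.
 * Returns how many handles were drained. */
unsigned int demo_drain(void)
{
	unsigned long *old;
	unsigned int n;

	/* lock held: just swap pointers and reset the count */
	old = demo_active;
	n = demo_active_count;
	demo_active = (demo_active == demo_bufs[0]) ? demo_bufs[1]
						    : demo_bufs[0];
	demo_active_count = 0;
	/* lock dropped: zs_free() each of old[0..n-1] here */
	(void)old;
	return n;
}
```

Producers only ever wait for the pointer swap, instead of racing the drainer for the lock once per handle as in the posted patch.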
> +
> + zs_free(pool, handle);
> + cond_resched();
> + }
> +}
> +
> +/**
> + * zs_free_deferred - queue a handle for asynchronous freeing
> + * @pool: pool to free from
> + * @handle: handle to free
> + *
> + * Place @handle into a deferred free queue for later processing by a
> + * workqueue. This is intended for callers that are in atomic context
> + * (e.g. under a spinlock) and cannot afford the cost of zs_free()
> + * directly. When the queue reaches a threshold the work is scheduled.
> + * Falls back to synchronous zs_free() if the lock is contended (drain
> + * in progress) or if the queue is full.
> + */
> +void zs_free_deferred(struct zs_pool *pool, unsigned long handle)
> +{
> + if (IS_ERR_OR_NULL((void *)handle))
> + return;
> +
> + if (!spin_trylock(&pool->deferred_lock))
> + goto sync_free;
> +
> + if (pool->deferred_count >= pool->deferred_capacity) {
> + spin_unlock(&pool->deferred_lock);
> + goto sync_free;
> + }
> +
> + pool->deferred_handles[pool->deferred_count++] = handle;
> + if (pool->deferred_count >= ZS_DEFERRED_FREE_THRESHOLD)
> + queue_work(system_wq, &pool->deferred_free_work);
> + spin_unlock(&pool->deferred_lock);
> + return;
> +
> +sync_free:
> + zs_free(pool, handle);
> +}
> +EXPORT_SYMBOL_GPL(zs_free_deferred);
> +
> +/**
> + * zs_free_deferred_flush - flush all pending deferred frees
> + * @pool: pool to flush
> + *
> + * Wait for any scheduled work to complete, then drain any remaining
> + * handles. Must be called from process context.
> + */
> +void zs_free_deferred_flush(struct zs_pool *pool)
> +{
> + flush_work(&pool->deferred_free_work);
> + zs_deferred_free_work(&pool->deferred_free_work);
> +}
> +EXPORT_SYMBOL_GPL(zs_free_deferred_flush);
> +
> static void zs_object_copy(struct size_class *class, unsigned long dst,
> unsigned long src)
> {
> @@ -2099,6 +2196,18 @@ struct zs_pool *zs_create_pool(const char *name)
> rwlock_init(&pool->lock);
> atomic_set(&pool->compaction_in_progress, 0);
>
> + spin_lock_init(&pool->deferred_lock);
> + pool->deferred_capacity = ZS_DEFERRED_FREE_CAPACITY;
> + pool->deferred_handles = kvmalloc_array(pool->deferred_capacity,
> + sizeof(unsigned long),
> + GFP_KERNEL);
> + if (!pool->deferred_handles) {
> + kfree(pool);
> + return NULL;
> + }
> + pool->deferred_count = 0;
> + INIT_WORK(&pool->deferred_free_work, zs_deferred_free_work);
> +
> pool->name = kstrdup(name, GFP_KERNEL);
> if (!pool->name)
> goto err;
> @@ -2201,6 +2310,7 @@ void zs_destroy_pool(struct zs_pool *pool)
> int i;
>
> zs_unregister_shrinker(pool);
> + zs_free_deferred_flush(pool);
> zs_flush_migration(pool);
> zs_pool_stat_destroy(pool);
>
> @@ -2224,6 +2334,7 @@ void zs_destroy_pool(struct zs_pool *pool)
> kfree(class);
> }
>
> + kvfree(pool->deferred_handles);
> kfree(pool->name);
> kfree(pool);
> }
> --
> 2.34.1
>
* [RFC PATCH v2 3/4] zram: defer zs_free() in swap slot free notification path
2026-04-21 12:16 [RFC PATCH v2 0/4] mm/zsmalloc: reduce zs_free() latency on swap release path Wenchao Hao
2026-04-21 12:16 ` [RFC PATCH v2 1/4] mm/zsmalloc: drop class lock before freeing zspage Wenchao Hao
2026-04-21 12:16 ` [RFC PATCH v2 2/4] mm/zsmalloc: introduce zs_free_deferred() for async handle freeing Wenchao Hao
@ 2026-04-21 12:16 ` Wenchao Hao
2026-04-21 12:16 ` [RFC PATCH v2 4/4] mm/zswap: defer zs_free() in zswap_invalidate() path Wenchao Hao
2026-04-21 15:54 ` [RFC PATCH v2 0/4] mm/zsmalloc: reduce zs_free() latency on swap release path Nhat Pham
4 siblings, 0 replies; 11+ messages in thread
From: Wenchao Hao @ 2026-04-21 12:16 UTC (permalink / raw)
To: Andrew Morton, Chengming Zhou, Jens Axboe, Johannes Weiner,
Minchan Kim, Nhat Pham, Sergey Senozhatsky, Yosry Ahmed,
linux-block, linux-kernel, linux-mm
Cc: Barry Song, Xueyuan Chen, Wenchao Hao
From: "Barry Song (Xiaomi)" <baohua@kernel.org>
zram_slot_free_notify() is called on the process exit path when
unmapping swap entries. The slot_free() it calls internally invokes
zs_free(), which accounts for ~87% of slot_free() cost due to zsmalloc
internal locking (pool->lock, class->lock) and potential zspage freeing.
This blocks the process exit path, delaying overall memory release
during Android low-memory killing.
Split slot_free() into slot_free_extract() and the actual zs_free()
call. slot_free_extract() handles all slot metadata cleanup (clearing
flags, updating stats, zeroing handle/size) and returns the zsmalloc
handle that needs freeing. This separation has two benefits:
1. It makes the two responsibilities of slot_free() explicit: slot
metadata management (must be done under slot lock) vs zsmalloc
memory release (can be deferred).
2. It allows zram_slot_free_notify() to use zs_free_deferred() for
the handle, deferring the expensive zs_free() to a workqueue so
the exit path can release memory faster.
While at it, merge three separate clear_slot_flag() calls for
ZRAM_IDLE, ZRAM_INCOMPRESSIBLE, and ZRAM_PP_SLOT into a single
bitmask operation via clear_slot_flags_on_free(), reducing redundant
read-modify-write cycles on the same flags word.
All other slot_free() callers (write, discard, meta_free) continue
to use synchronous zs_free() through the unchanged slot_free()
wrapper.
Signed-off-by: Barry Song (Xiaomi) <baohua@kernel.org>
Signed-off-by: Wenchao Hao <haowenchao@xiaomi.com>
---
drivers/block/zram/zram_drv.c | 37 ++++++++++++++++++++++++++---------
1 file changed, 28 insertions(+), 9 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index c2afd1c34f4a..382c4dc57c8d 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -165,6 +165,15 @@ static inline bool slot_allocated(struct zram *zram, u32 index)
test_slot_flag(zram, index, ZRAM_WB);
}
+#define ZRAM_FLAGS_TO_CLEAR_ON_FREE (BIT(ZRAM_IDLE) | \
+ BIT(ZRAM_INCOMPRESSIBLE) | \
+ BIT(ZRAM_PP_SLOT))
+
+static inline void clear_slot_flags_on_free(struct zram *zram, u32 index)
+{
+ zram->table[index].attr.flags &= ~ZRAM_FLAGS_TO_CLEAR_ON_FREE;
+}
+
static inline void set_slot_comp_priority(struct zram *zram, u32 index,
u32 prio)
{
@@ -2000,17 +2009,20 @@ static bool zram_meta_alloc(struct zram *zram, u64 disksize)
return true;
}
-static void slot_free(struct zram *zram, u32 index)
+/*
+ * Clear slot metadata and extract the zsmalloc handle for freeing.
+ * Returns the handle that needs to be freed via zs_free(), or 0 if
+ * no zsmalloc freeing is needed (e.g. same-filled or writeback slots).
+ */
+static unsigned long slot_free_extract(struct zram *zram, u32 index)
{
- unsigned long handle;
+ unsigned long handle = 0;
#ifdef CONFIG_ZRAM_TRACK_ENTRY_ACTIME
zram->table[index].attr.ac_time = 0;
#endif
- clear_slot_flag(zram, index, ZRAM_IDLE);
- clear_slot_flag(zram, index, ZRAM_INCOMPRESSIBLE);
- clear_slot_flag(zram, index, ZRAM_PP_SLOT);
+ clear_slot_flags_on_free(zram, index);
set_slot_comp_priority(zram, index, 0);
if (test_slot_flag(zram, index, ZRAM_HUGE)) {
@@ -2041,9 +2053,7 @@ static void slot_free(struct zram *zram, u32 index)
handle = get_slot_handle(zram, index);
if (!handle)
- return;
-
- zs_free(zram->mem_pool, handle);
+ return 0;
atomic64_sub(get_slot_size(zram, index),
&zram->stats.compr_data_size);
@@ -2051,6 +2061,15 @@ static void slot_free(struct zram *zram, u32 index)
atomic64_dec(&zram->stats.pages_stored);
set_slot_handle(zram, index, 0);
set_slot_size(zram, index, 0);
+
+ return handle;
+}
+
+static void slot_free(struct zram *zram, u32 index)
+{
+ unsigned long handle = slot_free_extract(zram, index);
+
+ zs_free(zram->mem_pool, handle);
}
static int read_same_filled_page(struct zram *zram, struct page *page,
@@ -2794,7 +2813,7 @@ static void zram_slot_free_notify(struct block_device *bdev,
return;
}
- slot_free(zram, index);
+ zs_free_deferred(zram->mem_pool, slot_free_extract(zram, index));
slot_unlock(zram, index);
}
--
2.34.1
* [RFC PATCH v2 4/4] mm/zswap: defer zs_free() in zswap_invalidate() path
2026-04-21 12:16 [RFC PATCH v2 0/4] mm/zsmalloc: reduce zs_free() latency on swap release path Wenchao Hao
` (2 preceding siblings ...)
2026-04-21 12:16 ` [RFC PATCH v2 3/4] zram: defer zs_free() in swap slot free notification path Wenchao Hao
@ 2026-04-21 12:16 ` Wenchao Hao
2026-04-21 17:03 ` Nhat Pham
2026-04-21 15:54 ` [RFC PATCH v2 0/4] mm/zsmalloc: reduce zs_free() latency on swap release path Nhat Pham
4 siblings, 1 reply; 11+ messages in thread
From: Wenchao Hao @ 2026-04-21 12:16 UTC (permalink / raw)
To: Andrew Morton, Chengming Zhou, Jens Axboe, Johannes Weiner,
Minchan Kim, Nhat Pham, Sergey Senozhatsky, Yosry Ahmed,
linux-block, linux-kernel, linux-mm
Cc: Barry Song, Xueyuan Chen, Wenchao Hao
zswap_invalidate() is called on the same process exit path as
zram_slot_free_notify(). The zswap_entry_free() it calls internally
performs zs_free() which is expensive due to zsmalloc internal locking.
Unlike zram which has a trylock fallback, zswap_invalidate() executes
unconditionally, making the latency impact potentially worse.
Like zram, the expensive zs_free() here blocks the process exit path,
delaying overall memory release. Additionally, zswap_entry_free()
performs extra work beyond zs_free(): list_lru_del() (takes its own
spinlock), obj_cgroup accounting, and kmem_cache_free for the entry
itself.
Use zs_free_deferred() in zswap_invalidate() path to defer the
expensive zsmalloc handle freeing to a workqueue, allowing the exit
path to release memory faster. All other callers (zswap_load,
zswap_writeback_entry, zswap_store error paths) run in process context
and continue to use synchronous zs_free().
Signed-off-by: Wenchao Hao <haowenchao@xiaomi.com>
---
mm/zswap.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index 0823cadd02b6..7291f6deb5b6 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -713,11 +713,16 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
/*
* Carries out the common pattern of freeing an entry's zsmalloc allocation,
* freeing the entry itself, and decrementing the number of stored pages.
+ * When @deferred is true, the zsmalloc handle is queued for async freeing
+ * instead of being freed immediately.
*/
-static void zswap_entry_free(struct zswap_entry *entry)
+static void __zswap_entry_free(struct zswap_entry *entry, bool deferred)
{
zswap_lru_del(&zswap_list_lru, entry);
- zs_free(entry->pool->zs_pool, entry->handle);
+ if (deferred)
+ zs_free_deferred(entry->pool->zs_pool, entry->handle);
+ else
+ zs_free(entry->pool->zs_pool, entry->handle);
zswap_pool_put(entry->pool);
if (entry->objcg) {
obj_cgroup_uncharge_zswap(entry->objcg, entry->length);
@@ -729,6 +734,11 @@ static void zswap_entry_free(struct zswap_entry *entry)
atomic_long_dec(&zswap_stored_pages);
}
+static void zswap_entry_free(struct zswap_entry *entry)
+{
+ __zswap_entry_free(entry, false);
+}
+
/*********************************
* compressed storage functions
**********************************/
@@ -1655,7 +1665,7 @@ void zswap_invalidate(swp_entry_t swp)
entry = xa_erase(tree, offset);
if (entry)
- zswap_entry_free(entry);
+ __zswap_entry_free(entry, true);
}
int zswap_swapon(int type, unsigned long nr_pages)
--
2.34.1
* Re: [RFC PATCH v2 4/4] mm/zswap: defer zs_free() in zswap_invalidate() path
2026-04-21 12:16 ` [RFC PATCH v2 4/4] mm/zswap: defer zs_free() in zswap_invalidate() path Wenchao Hao
@ 2026-04-21 17:03 ` Nhat Pham
0 siblings, 0 replies; 11+ messages in thread
From: Nhat Pham @ 2026-04-21 17:03 UTC (permalink / raw)
To: Wenchao Hao
Cc: Andrew Morton, Chengming Zhou, Jens Axboe, Johannes Weiner,
Minchan Kim, Sergey Senozhatsky, Yosry Ahmed, linux-block,
linux-kernel, linux-mm, Barry Song, Xueyuan Chen, Wenchao Hao
On Tue, Apr 21, 2026 at 5:16 AM Wenchao Hao <haowenchao22@gmail.com> wrote:
>
> zswap_invalidate() is called on the same process exit path as
> zram_slot_free_notify(). The zswap_entry_free() it calls internally
> performs zs_free() which is expensive due to zsmalloc internal locking.
> Unlike zram which has a trylock fallback, zswap_invalidate() executes
> unconditionally, making the latency impact potentially worse.
Hmmm my understanding is that we don't have contention at this point,
because zswap mainly relies on swap cache to synchronize.
But yeah I can see the effect of slow zsmalloc entry freeing here.
>
> Like zram, the expensive zs_free() here blocks the process exit path,
> delaying overall memory release. Additionally, zswap_entry_free()
> performs extra work beyond zs_free(): list_lru_del() (takes its own
> spinlock), obj_cgroup accounting, and kmem_cache_free for the entry
> itself.
>
> Use zs_free_deferred() in zswap_invalidate() path to defer the
> expensive zsmalloc handle freeing to a workqueue, allowing the exit
> path to release memory faster. All other callers (zswap_load,
> zswap_writeback_entry, zswap_store error paths) run in process context
> and continue to use synchronous zs_free().
I wonder if this approach can speed up zswap_load() (i.e. page fault
latency) too?
Code LGTM correctness-wise (assuming zs_free_deferred works) :)
>
> Signed-off-by: Wenchao Hao <haowenchao@xiaomi.com>
> ---
> mm/zswap.c | 16 +++++++++++++---
> 1 file changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 0823cadd02b6..7291f6deb5b6 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -713,11 +713,16 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> /*
> * Carries out the common pattern of freeing an entry's zsmalloc allocation,
> * freeing the entry itself, and decrementing the number of stored pages.
> + * When @deferred is true, the zsmalloc handle is queued for async freeing
> + * instead of being freed immediately.
> */
> -static void zswap_entry_free(struct zswap_entry *entry)
> +static void __zswap_entry_free(struct zswap_entry *entry, bool deferred)
> {
> zswap_lru_del(&zswap_list_lru, entry);
> - zs_free(entry->pool->zs_pool, entry->handle);
> + if (deferred)
> + zs_free_deferred(entry->pool->zs_pool, entry->handle);
> + else
> + zs_free(entry->pool->zs_pool, entry->handle);
> zswap_pool_put(entry->pool);
> if (entry->objcg) {
> obj_cgroup_uncharge_zswap(entry->objcg, entry->length);
> @@ -729,6 +734,11 @@ static void zswap_entry_free(struct zswap_entry *entry)
> atomic_long_dec(&zswap_stored_pages);
> }
>
> +static void zswap_entry_free(struct zswap_entry *entry)
> +{
> + __zswap_entry_free(entry, false);
> +}
> +
> /*********************************
> * compressed storage functions
> **********************************/
> @@ -1655,7 +1665,7 @@ void zswap_invalidate(swp_entry_t swp)
>
> entry = xa_erase(tree, offset);
> if (entry)
> - zswap_entry_free(entry);
> + __zswap_entry_free(entry, true);
> }
>
> int zswap_swapon(int type, unsigned long nr_pages)
> --
> 2.34.1
>
* Re: [RFC PATCH v2 0/4] mm/zsmalloc: reduce zs_free() latency on swap release path
2026-04-21 12:16 [RFC PATCH v2 0/4] mm/zsmalloc: reduce zs_free() latency on swap release path Wenchao Hao
` (3 preceding siblings ...)
2026-04-21 12:16 ` [RFC PATCH v2 4/4] mm/zswap: defer zs_free() in zswap_invalidate() path Wenchao Hao
@ 2026-04-21 15:54 ` Nhat Pham
2026-04-21 17:17 ` Kairui Song
4 siblings, 1 reply; 11+ messages in thread
From: Nhat Pham @ 2026-04-21 15:54 UTC (permalink / raw)
To: Wenchao Hao
Cc: Andrew Morton, Chengming Zhou, Jens Axboe, Johannes Weiner,
Minchan Kim, Sergey Senozhatsky, Yosry Ahmed, linux-block,
linux-kernel, linux-mm, Barry Song, Xueyuan Chen, Wenchao Hao,
Kairui Song
On Tue, Apr 21, 2026 at 5:16 AM Wenchao Hao <haowenchao22@gmail.com> wrote:
>
> Swap freeing can be expensive when unmapping a VMA containing
> many swap entries. This has been reported to significantly
> delay memory reclamation during Android's low-memory killing,
> especially when multiple processes are terminated to free
> memory, with slot_free() accounting for more than 80% of
> the total cost of freeing swap entries.
>
> Two earlier attempts by Lei and Zhiguo added a new thread in the mm core
> to asynchronously collect and free swap entries [1][2], but the
> design itself is fairly complex.
>
> When anon folios and swap entries are mixed within a
> process, reclaiming anon folios from killed processes
> helps return memory to the system as quickly as possible,
> so that newly launched applications can satisfy their
> memory demands. It is not ideal for swap freeing to block
> anon folio freeing. On the other hand, swap freeing can
> still return memory to the system, although at a slower
> rate due to memory compression.
Is this correct? I don't think we do decompression in the
zswap_invalidate() path. We do decompression in zswap_load(), but as a
separate step from zswap_invalidate().
zswap/zsmalloc entry freeing is decoupled from decompression. For
example, on process teardown, we free the zsmalloc memory but never
decompress (if we do then it's a bug to be fixed lol, but I doubt it).
Zsmalloc freeing might not be worth as much, bang-for-your-buck wise,
as anon folio freeing, but if it's "expensive", then I think that
points to a different root cause: zsmalloc's poor scalability in the
free path.
I've stared at this code path for a bit, because my other patch series
(vswap - see [1]) was reported to show a regression on the free path
on the usemem benchmark. And one of the issues was the contention
between compaction (both systemwide compaction, i.e. zs_page_migrate,
and zsmalloc's internal compaction, but mostly the former):
* zs_free read-acquires pool->lock, and compaction write-acquires the
same lock. So the compaction thread will make all zs free-ers wait for
it. I saw this read lock delay when I perfed the free step of usemem.
* If this lock has fair queueing semantics (I have not checked), then
if a compaction is queued behind a bunch of zs_free calls, all the
subsequent zs_free-ers are blocked :)
* I'm also curious about cache-friendliness of this rwlock, bouncing
across CPUs, if you have multiple processes being torn down
concurrently.
Have you perf-ed process teardown yet? Can I ask you for a perf trace
of this part? I'm not against async zs-freeing (it might still be
required after all), but if it's something fixable on the zsmalloc
side, we should probably prioritize that :) Otherwise these swap
freeing workers will exhibit the same poor scalability - we might be
better off because we manage to get rid of bigger chunks of
uncompressed memory first, but we will still be slow in releasing
the system's and the cgroup's (in zswap's case) compressed memory.
I'd love to hear thoughts from Yosry, Johannes, Sergey and
Minchan too.
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [RFC PATCH v2 0/4] mm/zsmalloc: reduce zs_free() latency on swap release path
2026-04-21 15:54 ` [RFC PATCH v2 0/4] mm/zsmalloc: reduce zs_free() latency on swap release path Nhat Pham
@ 2026-04-21 17:17 ` Kairui Song
2026-04-21 18:07 ` Nhat Pham
0 siblings, 1 reply; 11+ messages in thread
From: Kairui Song @ 2026-04-21 17:17 UTC (permalink / raw)
To: Nhat Pham
Cc: Wenchao Hao, Andrew Morton, Chengming Zhou, Jens Axboe,
Johannes Weiner, Minchan Kim, Sergey Senozhatsky, Yosry Ahmed,
linux-block, linux-kernel, linux-mm, Barry Song, Xueyuan Chen,
Wenchao Hao
On Tue, Apr 21, 2026 at 11:55 PM Nhat Pham <nphamcs@gmail.com> wrote:
>
Thanks for adding me to the Cc list :). Barry started this idea with
ZRAM, and it looks very interesting to me.
> On Tue, Apr 21, 2026 at 5:16 AM Wenchao Hao <haowenchao22@gmail.com> wrote:
> >
> > Swap freeing can be expensive when unmapping a VMA containing
> > many swap entries. This has been reported to significantly
> > delay memory reclamation during Android's low-memory killing,
> > especially when multiple processes are terminated to free
> > memory, with slot_free() accounting for more than 80% of
> > the total cost of freeing swap entries.
> >
> > Two earlier attempts by Lei and Zhiguo added a new thread in the mm core
> > to asynchronously collect and free swap entries [1][2], but the
> > design itself is fairly complex.
> >
> > When anon folios and swap entries are mixed within a
> > process, reclaiming anon folios from killed processes
> > helps return memory to the system as quickly as possible,
> > so that newly launched applications can satisfy their
> > memory demands. It is not ideal for swap freeing to block
> > anon folio freeing. On the other hand, swap freeing can
> > still return memory to the system, although at a slower
> > rate due to memory compression.
>
> Is this correct? I don't think we do decompression in
> zswap_invalidate() path. We do decompression in zswap_load(), but as a
> separate step from zswap_invalidate().
It's not about decompression. I think what Wenchao means here is that
freeing the swap entry also releases the backing compressed data, but
compared to freeing an actual folio (which brings a free folio back to
reduce memory pressure), you may need to free a lot of swap entries to
get one whole folio's worth of memory back, because the compressed
data can be much smaller than a folio, and fragmented. And swap entry
freeing is still not fast enough to be ignored.
>
> zswap/zsmalloc entry freeing is decoupled from decompression. For
> example, on process teardown, we free the zsmalloc memory but never
> decompress (if we do then it's a bug to be fixed lol, but I doubt it).
>
> Zsmalloc freeing might not be worth as much bang-for-your-buck wise
> compared to anon folio freeing, but if it's "expensive", then I think
> that points to a different root-cause: zsmalloc's poor scalability in
> the free path.
That's a very nice insight. I previously had an idea: could we have
something like a bulk zs_free? Freeing handles one by one does seem
expensive.
https://lore.kernel.org/linux-mm/adt3Q_SRToF6fb3W@KASONG-MC4/
It might be tricky to do so though.
It would be best if we could speed up everything: doing things async
doesn't reduce the total amount of work, and might cause more trouble,
like worker overhead, or delayed freeing causing more memory pressure
if the workqueue doesn't run in time. Or a process may be almost
completely swapped out, in which case this won't help at all.
I'm not against the async idea, they might combine well.
>
> I've stared at this code path for a bit, because my other patch series
> (vswap - see [1]) was reported to display regression on the free path
> on the usemem benchmark. And one of the issues was the contention
> between compaction (both systemwide compaction, i.e. zs_page_migrate,
> and zsmalloc's internal compaction, but mostly the former):
>
> * zs_free read-acquires pool->lock, and compaction write-acquires the
> same lock. So the compaction thread will make all zs free-ers wait for
> it. I saw this read lock delay when I perfed the free step of usemem.
>
> * If this lock has fair queueing semantics (I have not checked), then
> if a compaction is queued behind a bunch of zs_free calls, all the
> subsequent zs_free-ers are blocked :)
>
> * I'm also curious about cache-friendliness of this rwlock, bouncing
> across CPUs, if you have multiple processes being torn down
> concurrently.
That's interesting. When I mentioned bulk zs_free I was thinking that,
if we have a percpu queue, we could at least try to read-lock the pool
on every enqueue, free the whole queue if successful, then release the
lock. I'm sure there are more ways to optimize that; just a random idea :)
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [RFC PATCH v2 0/4] mm/zsmalloc: reduce zs_free() latency on swap release path
2026-04-21 17:17 ` Kairui Song
@ 2026-04-21 18:07 ` Nhat Pham
2026-04-21 18:25 ` Nhat Pham
0 siblings, 1 reply; 11+ messages in thread
From: Nhat Pham @ 2026-04-21 18:07 UTC (permalink / raw)
To: Kairui Song
Cc: Wenchao Hao, Andrew Morton, Chengming Zhou, Jens Axboe,
Johannes Weiner, Minchan Kim, Sergey Senozhatsky, Yosry Ahmed,
linux-block, linux-kernel, linux-mm, Barry Song, Xueyuan Chen,
Wenchao Hao
On Tue, Apr 21, 2026 at 10:18 AM Kairui Song <ryncsn@gmail.com> wrote:
>
> On Tue, Apr 21, 2026 at 11:55 PM Nhat Pham <nphamcs@gmail.com> wrote:
> >
>
> Thanks for adding me to the Cc list :), Barry started this idea with
> ZRAM, which looks very interesting to me.
>
> > On Tue, Apr 21, 2026 at 5:16 AM Wenchao Hao <haowenchao22@gmail.com> wrote:
> > >
> > > Swap freeing can be expensive when unmapping a VMA containing
> > > many swap entries. This has been reported to significantly
> > > delay memory reclamation during Android's low-memory killing,
> > > especially when multiple processes are terminated to free
> > > memory, with slot_free() accounting for more than 80% of
> > > the total cost of freeing swap entries.
> > >
> > > Two earlier attempts by Lei and Zhiguo added a new thread in the mm core
> > > to asynchronously collect and free swap entries [1][2], but the
> > > design itself is fairly complex.
> > >
> > > When anon folios and swap entries are mixed within a
> > > process, reclaiming anon folios from killed processes
> > > helps return memory to the system as quickly as possible,
> > > so that newly launched applications can satisfy their
> > > memory demands. It is not ideal for swap freeing to block
> > > anon folio freeing. On the other hand, swap freeing can
> > > still return memory to the system, although at a slower
> > > rate due to memory compression.
> >
> > Is this correct? I don't think we do decompression in
> > zswap_invalidate() path. We do decompression in zswap_load(), but as a
> > separate step from zswap_invalidate().
>
> It's not about decompression. I think what Wenchao means here is that:
> freeing the swap entry also releases the backing compression data, but
> compared to freeing an actual folio (which bring back a free folio to
> reduce memory pressure), you may need to free a lot of swap entries to
> free one whole folio, because the compressed data could be much
> smaller than folio and with fragmentation. And swap entry freeing is
> still not fast enough to be ignored.
Ah I see, yeah. That's the not-"as-much-bang-for-your-buck"-as-folio-freeing
category. I agree on this point.
>
> >
> > zswap/zsmalloc entry freeing is decoupled from decompression. For
> > example, on process teardown, we free the zsmalloc memory but never
> > decompress (if we do then it's a bug to be fixed lol, but I doubt it).
> >
> > Zsmalloc freeing might not be worth as much bang-for-your-buck wise
> > compared to anon folio freeing, but if it's "expensive", then I think
> > that points to a different root-cause: zsmalloc's poor scalability in
> > the free path.
>
> That's a very nice insight. I had an idea previously that can we have
> something like a zs free bulk? Freeing handles one by one does seem
> expensive.
> https://lore.kernel.org/linux-mm/adt3Q_SRToF6fb3W@KASONG-MC4/
>
> It might be tricky to do so though.
>
> It will be best if we can speed up everything, doing things async
> doesn't reduce the total amount of work, and might cause more trouble
> like worker overhead or delayed freeing causing more memory pressure,
> if the workqueue didn't run in time. Or maybe a process is almost
> completely swapped out, then this won't help at all.
>
> I'm not against the async idea, they might combine well.
Completely agree! I was thinking about batching the free operations
for zsmalloc. Right now it seems like even if we have a contiguous
range of swap slots to be freed, we call one
zram_slot_free_notify()/zswap_invalidate() at a time, which then calls
zs_free() one at a time? I wonder if there's any batching opportunity
here. It might be complicated by the pool lock and class lock dance in
zs_free() though :)
And yeah the async stuff is orthogonal too.
>
> >
> > I've stared at this code path for a bit, because my other patch series
> > (vswap - see [1]) was reported to display regression on the free path
> > on the usemem benchmark. And one of the issues was the contention
> > between compaction (both systemwide compaction, i.e. zs_page_migrate,
> > and zsmalloc's internal compaction, but mostly the former):
> >
> > * zs_free read-acquires pool->lock, and compaction write-acquires the
> > same lock. So the compaction thread will make all zs free-ers wait for
> > it. I saw this read lock delay when I perfed the free step of usemem.
> >
> > * If this lock has fair queueing semantics (I have not checked), then
> > if a compaction is queued behind a bunch of zs_free calls, all the
> > subsequent zs_free-ers are blocked :)
> >
> > * I'm also curious about cache-friendliness of this rwlock, bouncing
> > across CPUs, if you have multiple processes being torn down
> > concurrently.
>
> That's interesting, when I mentioned zs free bulk I was thinking that,
> if we have a percpu queue, at least we may try read lock that on every
> enqueue, free the whole queue if successful, then release the lock.
> I'm sure there are more ways to optimize that, just a random idea :)
Yep! It would be nice to have some perf traces to pinpoint where the
overhead is.
On my end, I perfed the free phase of usemem. It varies a bit based on
the exact build config, kernel version, or even between runs, but the
cheapest I've seen for the pool lock contention overhead is about 3%
of the free phase (this is on the baseline kernel, not the vswap
kernel). That's pretty big (bigger than the vswap overhead even on the
kernels with vswap, which is kinda silly). Obviously the host was very
overcommitted, so compaction was running in the background at the same
time, but still...
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [RFC PATCH v2 0/4] mm/zsmalloc: reduce zs_free() latency on swap release path
2026-04-21 18:07 ` Nhat Pham
@ 2026-04-21 18:25 ` Nhat Pham
0 siblings, 0 replies; 11+ messages in thread
From: Nhat Pham @ 2026-04-21 18:25 UTC (permalink / raw)
To: Kairui Song
Cc: Wenchao Hao, Andrew Morton, Chengming Zhou, Jens Axboe,
Johannes Weiner, Minchan Kim, Sergey Senozhatsky, Yosry Ahmed,
linux-block, linux-kernel, linux-mm, Barry Song, Xueyuan Chen,
Wenchao Hao
On Tue, Apr 21, 2026 at 11:07 AM Nhat Pham <nphamcs@gmail.com> wrote:
>
> On Tue, Apr 21, 2026 at 10:18 AM Kairui Song <ryncsn@gmail.com> wrote:
> >
> > On Tue, Apr 21, 2026 at 11:55 PM Nhat Pham <nphamcs@gmail.com> wrote:
> > >
> >
> > Thanks for adding me to the Cc list :), Barry started this idea with
> > ZRAM, which looks very interesting to me.
> >
> > > On Tue, Apr 21, 2026 at 5:16 AM Wenchao Hao <haowenchao22@gmail.com> wrote:
> > > >
> > > > Swap freeing can be expensive when unmapping a VMA containing
> > > > many swap entries. This has been reported to significantly
> > > > delay memory reclamation during Android's low-memory killing,
> > > > especially when multiple processes are terminated to free
> > > > memory, with slot_free() accounting for more than 80% of
> > > > the total cost of freeing swap entries.
> > > >
> > > > Two earlier attempts by Lei and Zhiguo added a new thread in the mm core
> > > > to asynchronously collect and free swap entries [1][2], but the
> > > > design itself is fairly complex.
> > > >
> > > > When anon folios and swap entries are mixed within a
> > > > process, reclaiming anon folios from killed processes
> > > > helps return memory to the system as quickly as possible,
> > > > so that newly launched applications can satisfy their
> > > > memory demands. It is not ideal for swap freeing to block
> > > > anon folio freeing. On the other hand, swap freeing can
> > > > still return memory to the system, although at a slower
> > > > rate due to memory compression.
> > >
> > > Is this correct? I don't think we do decompression in
> > > zswap_invalidate() path. We do decompression in zswap_load(), but as a
> > > separate step from zswap_invalidate().
> >
> > It's not about decompression. I think what Wenchao means here is that:
> > freeing the swap entry also releases the backing compression data, but
> > compared to freeing an actual folio (which bring back a free folio to
> > reduce memory pressure), you may need to free a lot of swap entries to
> > free one whole folio, because the compressed data could be much
> > smaller than folio and with fragmentation. And swap entry freeing is
> > still not fast enough to be ignored.
>
> Ah I see yeah. That's the not "as much bang-for-your-buck" as folio
> freeing category. I agree on this point.
>
> >
> > >
> > > zswap/zsmalloc entry freeing is decoupled from decompression. For
> > > example, on process teardown, we free the zsmalloc memory but never
> > > decompress (if we do then it's a bug to be fixed lol, but I doubt it).
> > >
> > > Zsmalloc freeing might not be worth as much bang-for-your-buck wise
> > > compared to anon folio freeing, but if it's "expensive", then I think
> > > that points to a different root-cause: zsmalloc's poor scalability in
> > > the free path.
> >
> > That's a very nice insight. I had an idea previously that can we have
> > something like a zs free bulk? Freeing handles one by one does seem
> > expensive.
> > https://lore.kernel.org/linux-mm/adt3Q_SRToF6fb3W@KASONG-MC4/
> >
> > It might be tricky to do so though.
> >
> > It will be best if we can speed up everything, doing things async
> > doesn't reduce the total amount of work, and might cause more trouble
> > like worker overhead or delayed freeing causing more memory pressure,
> > if the workqueue didn't run in time. Or maybe a process is almost
> > completely swapped out, then this won't help at all.
> >
> > I'm not against the async idea, they might combine well.
>
> Completely agree! I was thinking about batching the free operations
> for zsmalloc. Right now seems like even if we have a contiguous range
> of swap slots to be freed, we call one
> zram_slot_free_notify/zswap_invalidate at a time, which then call
> zs_free one at a time? I wonder if there's any batching opportunity
> here. Might be complicated with the pool lock and class lock dance in
> zs_free() though :)
>
> And yeah the async stuff is orthogonal too.
>
> >
> > >
> > > I've stared at this code path for a bit, because my other patch series
> > > (vswap - see [1]) was reported to display regression on the free path
> > > on the usemem benchmark. And one of the issues was the contention
> > > between compaction (both systemwide compaction, i.e. zs_page_migrate,
> > > and zsmalloc's internal compaction, but mostly the former):
> > >
> > > * zs_free read-acquires pool->lock, and compaction write-acquires the
> > > same lock. So the compaction thread will make all zs free-ers wait for
> > > it. I saw this read lock delay when I perfed the free step of usemem.
> > >
> > > * If this lock has fair queueing semantics (I have not checked), then
> > > if a compaction is queued behind a bunch of zs_free calls, all the
> > > subsequent zs_free-ers are blocked :)
> > >
> > > * I'm also curious about cache-friendliness of this rwlock, bouncing
> > > across CPUs, if you have multiple processes being torn down
> > > concurrently.
> >
> > That's interesting, when I mentioned zs free bulk I was thinking that,
> > if we have a percpu queue, at least we may try read lock that on every
> > enqueue, free the whole queue if successful, then release the lock.
> > I'm sure there are more ways to optimize that, just a random idea :)
>
> Yep! Would be nice to have some perf trace to pinpoint where the overhead is.
>
Ah OK - I found this thread now:
https://lore.kernel.org/linux-mm/20260414054930.225853-1-xueyuan.chen21@gmail.com/
Hmm, free_zspage() and kmem_cache_free().
* kmem_cache_free() is just handle freeing. Bulk-freeing, maybe?
* free_zspage() looks like just ordinary teardown work :( It seems like
we're not spinning on any lock here - we just trylock the backing
pages, and the rest is normal work. Not sure how to optimize this -
perhaps deferring is the only way.
^ permalink raw reply [flat|nested] 11+ messages in thread