From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Yosry Ahmed <yosry.ahmed@linux.dev>,
Hillf Danton <hdanton@sina.com>, Kairui Song <ryncsn@gmail.com>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Minchan Kim <minchan@kernel.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Sergey Senozhatsky <senozhatsky@chromium.org>
Subject: [PATCH v9 10/19] zram: rework recompression loop
Date: Thu, 27 Feb 2025 13:35:28 +0900
Message-ID: <20250227043618.88380-11-senozhatsky@chromium.org>
In-Reply-To: <20250227043618.88380-1-senozhatsky@chromium.org>
This reworks recompression loop handling:

- set a rule that stream-put NULLs the stream pointer

  If the loop exits with a non-NULL stream then the recompression
  was successful; on every other path the stream pointer is always
  NULL.

- do not count the number of recompressions

  Mark the object as incompressible as soon as the highest-priority
  algorithm fails to compress it.

- count compression errors as resource usage

  Even if compression has failed, we still need to decrement the
  num_recomp_pages counter.
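
The put-then-NULL rule can be illustrated with a standalone sketch.
This is not the zram code itself: stream_get()/stream_put()/
try_compress() are hypothetical stand-ins for zcomp_stream_get()/
zcomp_stream_put()/zcomp_compress(), and the fixed-size pool exists
only to make the example self-contained:

```c
#include <stddef.h>

struct stream { int in_use; };

/* Toy per-priority stream pool standing in for zram->comps[]. */
static struct stream pool[3];

static struct stream *stream_get(int prio)
{
	pool[prio].in_use = 1;
	return &pool[prio];
}

static void stream_put(struct stream *s)
{
	s->in_use = 0;
}

/*
 * Stand-in for zcomp_compress(): in this sketch only the
 * highest-priority algorithm (prio 2) manages to save memory.
 */
static int try_compress(int prio)
{
	return prio == 2 ? 0 : -1;
}

static struct stream *recompress(int num_prios)
{
	struct stream *zstrm = NULL;
	int prio;

	for (prio = 0; prio < num_prios; prio++) {
		zstrm = stream_get(prio);
		if (try_compress(prio)) {
			stream_put(zstrm);
			zstrm = NULL;	/* the rule: every put NULLs */
			continue;
		}
		break;	/* success: zstrm stays non-NULL, still held */
	}
	/* non-NULL here <=> a successful recompression */
	return zstrm;
}
```

With the invariant in place, the caller needs no separate success
flag: a NULL return means every attempted priority put its stream
back, a non-NULL return means a held stream and a usable result.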
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
drivers/block/zram/zram_drv.c | 54 +++++++++++++----------------------
1 file changed, 20 insertions(+), 34 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 3a23dfed9542..6755ca90f737 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1888,9 +1888,8 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
unsigned int comp_len_new;
unsigned int class_index_old;
unsigned int class_index_new;
- u32 num_recomps = 0;
void *src, *dst;
- int ret;
+ int ret = 0;
handle_old = zram_get_handle(zram, index);
if (!handle_old)
@@ -1933,7 +1932,6 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
if (!zram->comps[prio])
continue;
- num_recomps++;
zstrm = zcomp_stream_get(zram->comps[prio]);
src = kmap_local_page(page);
ret = zcomp_compress(zram->comps[prio], zstrm,
@@ -1942,7 +1940,8 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
if (ret) {
zcomp_stream_put(zstrm);
- return ret;
+ zstrm = NULL;
+ break;
}
class_index_new = zs_lookup_class_index(zram->mem_pool,
@@ -1952,6 +1951,7 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
if (class_index_new >= class_index_old ||
(threshold && comp_len_new >= threshold)) {
zcomp_stream_put(zstrm);
+ zstrm = NULL;
continue;
}
@@ -1959,14 +1959,6 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
break;
}
- /*
- * We did not try to recompress, e.g. when we have only one
- * secondary algorithm and the page is already recompressed
- * using that algorithm
- */
- if (!zstrm)
- return 0;
-
/*
* Decrement the limit (if set) on pages we can recompress, even
* when current recompression was unsuccessful or did not compress
@@ -1976,38 +1968,32 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
if (*num_recomp_pages)
*num_recomp_pages -= 1;
- if (class_index_new >= class_index_old) {
+ /* Compression error */
+ if (ret)
+ return ret;
+
+ if (!zstrm) {
/*
* Secondary algorithms failed to re-compress the page
- * in a way that would save memory, mark the object as
- * incompressible so that we will not try to compress
- * it again.
+ * in a way that would save memory.
*
- * We need to make sure that all secondary algorithms have
- * failed, so we test if the number of recompressions matches
- * the number of active secondary algorithms.
+ * Mark the object incompressible if the max-priority
+ * algorithm couldn't re-compress it.
*/
- if (num_recomps == zram->num_active_comps - 1)
- zram_set_flag(zram, index, ZRAM_INCOMPRESSIBLE);
+ if (prio < zram->num_active_comps)
+ return 0;
+ zram_set_flag(zram, index, ZRAM_INCOMPRESSIBLE);
return 0;
}
- /* Successful recompression but above threshold */
- if (threshold && comp_len_new >= threshold)
- return 0;
-
/*
- * No direct reclaim (slow path) for handle allocation and no
- * re-compression attempt (unlike in zram_write_bvec()) since
- * we already have stored that object in zsmalloc. If we cannot
- * alloc memory for recompressed object then we bail out and
- * simply keep the old (existing) object in zsmalloc.
+ * We are holding per-CPU stream mutex and entry lock so better
+ * avoid direct reclaim. Allocation error is not fatal since
+ * we still have the old object in the mem_pool.
*/
handle_new = zs_malloc(zram->mem_pool, comp_len_new,
- __GFP_KSWAPD_RECLAIM |
- __GFP_NOWARN |
- __GFP_HIGHMEM |
- __GFP_MOVABLE);
+ GFP_NOIO | __GFP_NOWARN |
+ __GFP_HIGHMEM | __GFP_MOVABLE);
if (IS_ERR_VALUE(handle_new)) {
zcomp_stream_put(zstrm);
return PTR_ERR((void *)handle_new);
--
2.48.1.658.g4767266eb4-goog