From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Yuwen Chen <ywen.chen@foxmail.com>
Cc: senozhatsky@chromium.org, akpm@linux-foundation.org,
bgeffon@google.com, licayy@outlook.com,
linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, minchan@kernel.org, richardycc@google.com
Subject: Re: [RFC PATCHv5 0/6] zram: introduce writeback bio batching
Date: Fri, 21 Nov 2025 18:12:29 +0900 [thread overview]
Message-ID: <buckmtxvdfnpgo56owip3fjqbzraws2wvtomzfkywhczckoqlt@fifgyl5fjpbt> (raw)
In-Reply-To: <tencent_865DD78A73BC3C9CAFCBAEBE222B6EA5F107@qq.com>
On (25/11/21 16:23), Yuwen Chen wrote:
> I used the following code for testing here, and the result was 32.
>
> code:
> @@ -983,6 +983,7 @@ static int zram_writeback_slots(struct zram *zram,
> struct zram_pp_slot *pps;
> int ret = 0, err = 0;
> u32 index = 0;
> + int inflight = 0;
>
> while ((pps = select_pp_slot(ctl))) {
> spin_lock(&zram->wb_limit_lock);
> @@ -993,6 +994,9 @@ static int zram_writeback_slots(struct zram *zram,
> }
> spin_unlock(&zram->wb_limit_lock);
>
> + if (inflight < atomic_read(&wb_ctl->num_inflight))
> + inflight = atomic_read(&wb_ctl->num_inflight);
> +
> while (!req) {
> req = zram_select_idle_req(wb_ctl);
> if (req)
> @@ -1074,6 +1078,7 @@ next:
> ret = err;
> }
>
> + pr_err("%s: inflight max: %d\n", __func__, inflight);
> return ret;
> }
I think this will always give you 32 (or your current batch-size limit),
just because of the way the loop works: we first deplete all ->idle
requests (pushing ->inflight to its max) and only then complete finished
requests (dropping ->inflight).
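To make that concrete, here is a condensed sketch of the current main
loop (pieced together from the hunks in your patch and mine; locking
and error handling omitted):

	while ((pps = select_pp_slot(ctl))) {
		while (!req) {
			/* hand out idle requests until none are left */
			req = zram_select_idle_req(wb_ctl);
			if (req)
				break;

			/*
			 * We only get here once ->idle is empty, i.e. once
			 * ->num_inflight already sits at the batch-size max.
			 */
			wait_event(wb_ctl->done_wait,
				   !list_empty(&wb_ctl->done_reqs));
			err = zram_complete_done_reqs(zram, wb_ctl);
		}
		/* ... submit req, then req = NULL for the next iteration ... */
	}

So the first call to zram_complete_done_reqs() can only happen after the
counter has already been pushed to the maximum.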
I had a version of the patch with a different main loop: it would
always complete finished requests first. I think this one will give an
accurate ->inflight number.
---
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index ab0785878069..398609e9d061 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -999,13 +999,6 @@ static int zram_writeback_slots(struct zram *zram,
}
while (!req) {
- req = zram_select_idle_req(wb_ctl);
- if (req)
- break;
-
- wait_event(wb_ctl->done_wait,
- !list_empty(&wb_ctl->done_reqs));
-
err = zram_complete_done_reqs(zram, wb_ctl);
/*
* BIO errors are not fatal, we continue and simply
@@ -1017,6 +1010,13 @@ static int zram_writeback_slots(struct zram *zram,
*/
if (err)
ret = err;
+
+ req = zram_select_idle_req(wb_ctl);
+ if (req)
+ break;
+
+ wait_event(wb_ctl->done_wait,
+ !list_empty(&wb_ctl->done_reqs));
}
if (blk_idx == INVALID_BDEV_BLOCK) {
---
> > I think page-fault latency of a written-back page is expected to be
> > higher, that's a trade-off that we agree on. Off the top of my head,
> > I don't think we can do anything about it.
> >
> > Is the loop device always used as the writeback target?
>
> On the Android platform, currently only the loop device is supported as
> the backend for writeback, possibly for security reasons. I noticed that
> EROFS has implemented CONFIG_EROFS_FS_BACKED_BY_FILE to reduce this
> latency. I think ZRAM might also be able to do this.
I see. Do you use S/W or H/W compression?
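For illustration of the file-backed idea above, a minimal and purely
hypothetical sketch (zram_writeback_file() does not exist in zram;
kernel_write() is the ordinary VFS helper) of what writing a page
straight to a backing file, with no loop device in between, could look
like:

	#include <linux/fs.h>
	#include <linux/mm.h>

	/*
	 * Hypothetical sketch, not existing zram code: write one page to
	 * a backing regular file through the VFS, skipping the loop
	 * device.
	 */
	static int zram_writeback_file(struct file *backing_file,
				       struct page *page,
				       unsigned long blk_idx)
	{
		loff_t pos = (loff_t)blk_idx << PAGE_SHIFT;
		void *src = kmap_local_page(page);
		ssize_t ret;

		/* kernel_write() does the VFS write from kernel memory */
		ret = kernel_write(backing_file, src, PAGE_SIZE, &pos);
		kunmap_local(src);

		return ret == PAGE_SIZE ? 0 : (ret < 0 ? (int)ret : -EIO);
	}

Whether zram's writeback path can safely call into the VFS like this
(reclaim recursion, error handling, O_DIRECT vs. page cache) is exactly
what a real implementation would have to sort out; EROFS's file-backed
mode only has to deal with reads.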