From: Yuwen Chen <ywen.chen@foxmail.com>
To: senozhatsky@chromium.org
Cc: akpm@linux-foundation.org, bgeffon@google.com,
	licayy@outlook.com, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	minchan@kernel.org, richardycc@google.com, ywen.chen@foxmail.com
Subject: Re: [RFC PATCHv5 0/6] zram: introduce writeback bio batching
Date: Fri, 21 Nov 2025 16:23:58 +0800
Message-ID: <tencent_865DD78A73BC3C9CAFCBAEBE222B6EA5F107@qq.com>
In-Reply-To: <ts32xzxrpxmwf3okxo4bu2ynbgnfe6mehf5h6eibp7dp3r6jp7@4f7oz6tzqwxn>

On Fri, 21 Nov 2025 16:58:41 +0900, Sergey Senozhatsky wrote:
> No problem.  I wonder if the effect is more visible on larger data sets.
> 0.3 second sounds like a very short write.  In my VM tests I couldn't get
> more than 2 inflight requests at a time, I guess because decompression
> was much slower than IO.  I wonder how many inflight requests you had in
> your tests.

I used the following debug patch to measure it here; the maximum number
of inflight requests was 32.

code:
@@ -983,6 +983,7 @@ static int zram_writeback_slots(struct zram *zram,
        struct zram_pp_slot *pps;
        int ret = 0, err = 0;
        u32 index = 0;
+       int inflight = 0;
 
        while ((pps = select_pp_slot(ctl))) {
                spin_lock(&zram->wb_limit_lock);
@@ -993,6 +994,9 @@ static int zram_writeback_slots(struct zram *zram,
                }
                spin_unlock(&zram->wb_limit_lock);
 
+               if (inflight < atomic_read(&wb_ctl->num_inflight))
+                       inflight = atomic_read(&wb_ctl->num_inflight);
+
                while (!req) {
                        req = zram_select_idle_req(wb_ctl);
                        if (req)
@@ -1074,6 +1078,7 @@ next:
                        ret = err;
        }
 
+       pr_err("%s: inflight max: %d\n", __func__, inflight);
        return ret;
 }

log: 
[3741949.842927] zram: zram_writeback_slots: inflight max: 32

Changing ZRAM_WB_REQ_CNT to 64 didn't shorten the overall time.
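
(For reference, the change was just the define, along these lines; the
current value of 32 is my assumption, inferred from the observed
inflight max:)

-#define ZRAM_WB_REQ_CNT	32
+#define ZRAM_WB_REQ_CNT	64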

> I think page-fault latency of a written-back page is expected to be
> higher, that's a trade-off that we agree on.  Off the top of my head,
> I don't think we can do anything about it.
>
> Is loop device always used as for writeback targets?

On the Android platform, currently only a loop device is supported as
the writeback backend, possibly for security reasons. I noticed that
EROFS implemented CONFIG_EROFS_FS_BACKED_BY_FILE (file-backed mounts)
to avoid going through a loop device and reduce this latency. I think
zram might be able to do something similar.
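
Very roughly, what I have in mind is something like the sketch below
(not from this series; zram_read_from_backing_file() and the
->backing_filp member are made up for illustration): zram would hold a
struct file * for the backing store and read a written-back page back
with kernel_read() on fault, instead of submitting a bio through the
loop device's block layer.

/*
 * Sketch only.  ->backing_filp is a hypothetical struct file * that
 * zram would keep open for the writeback backing file.
 */
static int zram_read_from_backing_file(struct zram *zram,
				       struct page *page,
				       unsigned long blk_idx)
{
	loff_t pos = (loff_t)blk_idx << PAGE_SHIFT;
	void *dst = kmap_local_page(page);
	ssize_t n;

	/* synchronous in-kernel read of one page from the backing file */
	n = kernel_read(zram->backing_filp, dst, PAGE_SIZE, &pos);
	kunmap_local(dst);

	if (n < 0)
		return n;
	return n == PAGE_SIZE ? 0 : -EIO;
}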


