linux-mm.kvack.org archive mirror
From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Yuwen Chen <ywen.chen@foxmail.com>
Cc: axboe@kernel.dk, akpm@linux-foundation.org, bgeffon@google.com,
	 licayy@outlook.com, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,  linux-mm@kvack.org,
	liumartin@google.com, minchan@kernel.org, richardycc@google.com,
	 senozhatsky@chromium.org
Subject: Re: [PATCH v4] zram: Implement multi-page write-back
Date: Mon, 10 Nov 2025 13:49:26 +0900	[thread overview]
Message-ID: <yv2ktkwwu3hadzkw6wb4inqzihndfpwb42svuu25ngmn6eb7c4@hclvcrnsmvvk> (raw)
In-Reply-To: <tencent_0FBBFC8AE0B97BC63B5D47CE1FF2BABFDA09@qq.com>

On (25/11/06 09:49), Yuwen Chen wrote:
> For block devices, sequential write performance is significantly
> better than random write. Currently, zram's write-back function
> only supports single-page operations, which fails to leverage
> the sequential write advantage and leads to suboptimal performance.

As a side note:
You almost never do sequential writes to the backing device. The
thing is, when zram is used as swap, for example, page faults happen
at random and free up (slot-free) random page-size chunks, so random
bits in zram->bitmap become clear. Those slots then get overwritten
during the next writeback, because zram simply picks the first
available bit from zram->bitmap. There is nothing sequential about
that: on systems with sufficiently long uptime and sufficiently
frequent writeback/readback events, the writeback bitmap becomes
sparse, which results in random IO. So your test exercises an ideal
case that almost never happens in practice.



Thread overview: 21+ messages
     [not found] <tencent_78FC2C4FE16BA1EBAF0897DB60FCD675ED05@qq.com>
2025-11-05  3:33 ` [PATCH v2] " Yuwen Chen
2025-11-05  6:48   ` [PATCH v3] " Yuwen Chen
2025-11-05 15:25     ` Jens Axboe
2025-11-06  1:49       ` [PATCH v4] " Yuwen Chen
2025-11-10  4:49         ` Sergey Senozhatsky [this message]
2025-11-10  7:16           ` Yuwen Chen
2025-11-12  5:16             ` Sergey Senozhatsky
2025-11-12  5:18         ` Sergey Senozhatsky
2025-11-12  6:57           ` Yuwen Chen
2025-11-13  2:04         ` Sergey Senozhatsky
2025-11-13  5:10           ` Sergey Senozhatsky
2025-11-13  2:11         ` Sergey Senozhatsky
2025-11-13  2:20         ` Sergey Senozhatsky
2025-11-13  4:44           ` Sergey Senozhatsky
2025-11-13  7:55           ` Yuwen Chen
2025-11-13  5:40         ` Minchan Kim
2025-11-13  6:03           ` Sergey Senozhatsky
2025-11-13  8:27             ` Yuwen Chen
2025-11-13  7:37         ` Sergey Senozhatsky
2025-11-13  7:55         ` Sergey Senozhatsky
2025-11-06  2:28       ` [PATCH v3] " Yuwen Chen
