From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Yuwen Chen <ywen.chen@foxmail.com>
Cc: senozhatsky@chromium.org, akpm@linux-foundation.org,
	axboe@kernel.dk, bgeffon@google.com, licayy@outlook.com,
	linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, liumartin@google.com, minchan@kernel.org,
	richardycc@google.com
Subject: Re: [PATCH v4] zram: Implement multi-page write-back
Date: Wed, 12 Nov 2025 14:16:20 +0900
Message-ID: <3enjvepoexpm567kfyz3bxwr4md7xvsrehgt4hoc54pynuhisu@75nxt6b5cmkb>
In-Reply-To: <tencent_B2DC37E3A2AED0E7F179365FCB5D82455B08@qq.com>

On (25/11/10 15:16), Yuwen Chen wrote:
> On 10 Nov 2025 13:49:26 +0900, Sergey Senozhatsky wrote:
> > As a side note:
> > You almost never do sequential writes to the backing device. The
> > thing is, e.g. when zram is used as swap, page faults happen randomly
> > and free up (slot-free) random page-size chunks (so random bits in
> > zram->bitmap become clear), which then get overwritten (zram simply
> > picks the first available bit from zram->bitmap) during next writeback.
> > There is nothing sequential about that, in systems with sufficiently
> > large uptime and sufficiently frequent writeback/readback events
> > writeback bitmap becomes sparse, which results in random IO, so your
> > test tests an ideal case that almost never happens in practice.
> 
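
For reference, since "picks the first available bit" is the crux here,
a minimal sketch of that first-fit pick (a hypothetical simplification;
IIRC the real alloc_block_bdev() also skips bit 0 and retries with
test_and_set_bit(), which is omitted here):

	#include <linux/bitops.h>

	/* return the lowest free slot in the writeback bitmap, 0 if full */
	static unsigned long pick_slot(unsigned long *bitmap,
				       unsigned long nr_pages)
	{
		unsigned long blk_idx;

		/* first-fit: whichever freed slot sits closest to the
		 * head is reused first, so randomly scattered frees
		 * turn the next writeback into randomly scattered IO */
		blk_idx = find_first_zero_bit(bitmap, nr_pages);
		if (blk_idx == nr_pages)
			return 0;
		set_bit(blk_idx, bitmap);
		return blk_idx;
	}
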
> Thank you very much for your reply.
> As you mentioned, the current test data was measured with all writes
> being sequential. In a normal user environment there are a large
> number of random writes. However, the concurrent submission of
> multiple pages implemented in this patch still has a performance
> advantage on storage devices. I artificially created the worst-case
> scenario (all writes are random) with the following code:
> 
> /* fill every slot in zram->bitmap */
> for (int i = 0; i < nr_pages; i++)
>     alloc_block_bdev(zram);
> 
> /* then free every other slot, leaving a maximally fragmented bitmap */
> for (int i = 0; i < nr_pages; i += 2)
>     free_block_bdev(zram, i);

Well, technically, I guess that's not the worst case.  The worst case
is when writeback races with page-fault/slot-free events that clear
->bitmap bits on opposite ends of the writeback device, so writeback
alternates all the time, picking head and tail slots (->bitmap bits)
in turn.  But you don't need to test it, it's just a note.
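
Not that you need to run anything, but for completeness: a hypothetical
variant of your snippet that sets up that two-ended free pattern (minus
the race itself) could look like this:

	for (int i = 0; i < nr_pages; i++)
		alloc_block_bdev(zram);

	/* clear bits at both ends of zram->bitmap; the real worst case
	 * additionally needs the frees to race with the writeback loop,
	 * which a static setup like this can't model */
	for (int i = 0; i < nr_pages / 2; i += 2) {
		free_block_bdev(zram, i);
		free_block_bdev(zram, nr_pages - 1 - i);
	}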

The thing that I'm curious about is why this helps on flash storage.
It's not a spinning disk, where seek times dominate the IO time.

> On the physical machine, the measured data is as follows:
> before modification:
> real	0m0.624s
> user	0m0.000s
> sys	0m0.347s
> 
> real	0m0.663s
> user	0m0.001s
> sys	0m0.354s
> 
> real	0m0.635s
> user	0m0.000s
> sys	0m0.335s
> 
> after modification:
> real	0m0.340s
> user	0m0.000s
> sys	0m0.239s
> 
> real	0m0.326s
> user	0m0.000s
> sys	0m0.230s
> 
> real	0m0.313s
> user	0m0.000s
> sys	0m0.223s

Thanks for testing.

My next question is: what problem do you solve with this?  I mean,
do you use it in production (somewhere)?  If so, do you have a rough
number of how many MiBs you write back and how often, and what is the
performance impact of this patch?  Again, only if you use it in
production.

