From: Gao Xiang <hsiangkao@linux.alibaba.com>
To: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Yuwen Chen <ywen.chen@foxmail.com>,
akpm@linux-foundation.org, bgeffon@google.com,
licayy@outlook.com, linux-block@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
minchan@kernel.org, richardycc@google.com
Subject: Re: [RFC PATCHv5 0/6] zram: introduce writeback bio batching
Date: Sat, 22 Nov 2025 20:24:40 +0800
Message-ID: <853796e3-fd44-4fc2-8fd2-5810342a6ebe@linux.alibaba.com>
In-Reply-To: <kvgy5ms2xlkcjuzuq7xx5lmjwx3frguosve7sqbp6wh3gpih5k@kjuwfbdd2cqz>
On 2025/11/22 18:07, Sergey Senozhatsky wrote:
> On (25/11/21 20:21), Gao Xiang wrote:
>>>>> I think page-fault latency of a written-back page is expected to be
>>>>> higher; that's a trade-off that we agree on. Off the top of my head,
>>>>> I don't think we can do anything about it.
>>>>>
>>>>> Is a loop device always used as the writeback target?
>>>>
>>>> On the Android platform, currently only the loop device is supported as
>>>> the backend for writeback, possibly for security reasons. I noticed that
>>>> EROFS has implemented CONFIG_EROFS_FS_BACKED_BY_FILE to reduce this
>>>> latency. I think zram might also be able to do this.
>>>
>>> I see. Do you use S/W or H/W compression?
>>
>> No, I'm pretty sure it's impossible for zram to do backing
>> file I/O without another thread context (e.g. a workqueue),
>> especially for write I/Os. This is unlike EROFS:
>>
>> EROFS can do this because EROFS is a filesystem in its own
>> right: it is a separate fs, and it only reads (there is no
>> write context) its backing files, whether they live on EROFS
>> or on other fses. That is much like vfs/overlayfs ->read_iter()
>> calling directly into the backing fses without nested contexts.
>> (Even if loop is used, loop creates its own thread contexts
>> with workqueues, which is safe.)
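
To be concrete, that read-only pattern is roughly the following
hypothetical sketch (not actual EROFS code; the helper name is made
up for illustration), where the read goes straight into the backing
file from the caller's context:

static int ro_read_backing_page(struct file *backing_file,
				struct page *page, loff_t pos)
{
	void *dst = kmap_local_page(page);
	ssize_t ret;

	/* kernel_read() ends up in the backing fs's ->read_iter(). */
	ret = kernel_read(backing_file, dst, PAGE_SIZE, &pos);
	kunmap_local(dst);

	return ret < 0 ? ret : 0;
}

Because nothing is written, the caller's journal or writeback state
cannot collide with the backing fs.
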
>>
>> On the other hand, zram/loop act as virtual block devices,
>> which is rather different: you could format zram with an ext4
>> filesystem while it is backed by another ext4/btrfs,
>> like this:
>>
>> zram(ext4) -> backing ext4/btrfs
>>
>> That is unsafe (in addition to the GFP_NOIO allocation
>> restriction), since zram cannot manage the existing contexts
>> of those ext4/btrfs filesystems:
>>
>> - To take one concrete example: if the upper ext4 on zram
>> sets current->journal_info = xxx and then calls submit_bio()
>> into zram, that confuses the backing ext4, which assumes
>> current->journal_info == NULL. So a virtual block device
>> needs another thread context to isolate these two different
>> uncontrolled contexts.
>>
>> So I don't think it's feasible for block drivers to work
>> this way, especially when writes to backing fses are mixed
>> in.
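
In other words, a virtual block device that writes to a backing
file has to bounce the I/O into its own worker context. A minimal,
hypothetical sketch of that pattern (not actual zram or loop code;
the vblk_* names are made up for illustration):

struct vblk_work {
	struct work_struct work;
	struct bio *bio;
	/* a real driver would also carry the backing file, offset, ... */
};

static void vblk_writeback_workfn(struct work_struct *work)
{
	struct vblk_work *vw = container_of(work, struct vblk_work, work);

	/* Worker thread: current->journal_info is NULL here. */
	WARN_ON_ONCE(current->journal_info);

	/* ... write vw->bio's data to the backing file ... */

	bio_endio(vw->bio);
	kfree(vw);
}

static void vblk_submit_bio(struct bio *bio)
{
	/* GFP_NOIO: we are already on the block I/O submission path. */
	struct vblk_work *vw = kzalloc(sizeof(*vw), GFP_NOIO);

	if (!vw) {
		bio_io_error(bio);
		return;
	}
	vw->bio = bio;
	INIT_WORK(&vw->work, vblk_writeback_workfn);
	/* Never call into the backing fs from the submitter's context. */
	queue_work(system_unbound_wq, &vw->work);
}

Whatever journal/writeback state the submitter carries stays in its
own task; the backing fs only ever sees the clean worker context.
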
>
> Sorry, I don't completely understand your point, but backing
> device is never expected to have any fs on it. So from your
> email:
zram(ext4) means the zram device itself is formatted as ext4.
>
>> zram(ext4) -> backing ext4/btrfs
>
> This is not a valid configuration, as far as I'm concerned.
> Unless I'm missing your point.
Why isn't it valid? zram can be used as a regular virtual
block device, formatted with any fs, and then mounted.
Thanks,
Gao Xiang