From: Hannes Reinecke <hare@suse.de>
To: Sergey Senozhatsky <senozhatsky@chromium.org>,
Andrew Morton <akpm@linux-foundation.org>,
Minchan Kim <minchan@kernel.org>,
Yuwen Chen <ywen.chen@foxmail.com>,
Richard Chang <richardycc@google.com>
Cc: Brian Geffon <bgeffon@google.com>,
Fengyu Lian <licayy@outlook.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-block@vger.kernel.org, Minchan Kim <minchan@google.com>
Subject: Re: [RFC PATCHv5 1/6] zram: introduce writeback bio batching
Date: Fri, 21 Nov 2025 08:40:23 +0100
Message-ID: <90c06ff6-009f-430a-9b81-ca795e3115b0@suse.de>
In-Reply-To: <20251120152126.3126298-2-senozhatsky@chromium.org>
On 11/20/25 16:21, Sergey Senozhatsky wrote:
> Currently, zram writeback supports only a single bio writeback
> operation, waiting for bio completion before post-processing the
> next pp-slot. This works, in general, but has certain throughput
> limitations. Introduce batched (multiple) bio writeback support
> to take advantage of parallel request processing and better
> request scheduling.
>
> For the time being the writeback batch size (maximum number of
> in-flight bio requests) is set to 32 for all devices. A follow-up
> patch adds a writeback_batch_size device attribute, so the
> batch size becomes run-time configurable.
>
> Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
> Co-developed-by: Yuwen Chen <ywen.chen@foxmail.com>
> Co-developed-by: Richard Chang <richardycc@google.com>
> Suggested-by: Minchan Kim <minchan@google.com>
> ---
> drivers/block/zram/zram_drv.c | 366 +++++++++++++++++++++++++++-------
> 1 file changed, 298 insertions(+), 68 deletions(-)
>
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index a43074657531..37c1416ac902 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
[ .. ]
> +static int zram_complete_done_reqs(struct zram *zram,
> + struct zram_wb_ctl *wb_ctl)
> +{
> + struct zram_wb_req *req;
> + unsigned long flags;
> int ret = 0, err;
> - u32 index;
>
> - page = alloc_page(GFP_KERNEL);
> - if (!page)
> - return -ENOMEM;
> + while (1) {
> + spin_lock_irqsave(&wb_ctl->done_lock, flags);
> + req = list_first_entry_or_null(&wb_ctl->done_reqs,
> + struct zram_wb_req, entry);
> + if (req)
> + list_del(&req->entry);
> + spin_unlock_irqrestore(&wb_ctl->done_lock, flags);
> +
> + if (!req)
> + break;
> +
> + err = zram_writeback_complete(zram, req);
> + if (err)
> + ret = err;
> +
> + atomic_dec(&wb_ctl->num_inflight);
> + release_pp_slot(zram, req->pps);
> + req->pps = NULL;
> +
> + list_add(&req->entry, &wb_ctl->idle_reqs);
Shouldn't this be locked? You take 'done_lock' for the removal from
'done_reqs' above, but 'idle_reqs' is modified here without any
protection.
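Something along these lines would close the window ('idle_lock' is a
hypothetical field here, not in the patch as posted; reusing 'done_lock'
would work just as well):

	/* Sketch: serialize insertion into idle_reqs. */
	spin_lock_irqsave(&wb_ctl->idle_lock, flags);
	list_add(&req->entry, &wb_ctl->idle_reqs);
	spin_unlock_irqrestore(&wb_ctl->idle_lock, flags);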
> + }
> +
> + return ret;
> +}
> +
> +static struct zram_wb_req *zram_select_idle_req(struct zram_wb_ctl *wb_ctl)
> +{
> + struct zram_wb_req *req;
> +
> + req = list_first_entry_or_null(&wb_ctl->idle_reqs,
> + struct zram_wb_req, entry);
> + if (req)
> + list_del(&req->entry);
See above. I think you need to lock this, too, to avoid someone
stepping in and modifying the element under you.
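For illustration, the selection path with the same (hypothetical)
'idle_lock' applied; any lock covering 'idle_reqs' would do:

static struct zram_wb_req *zram_select_idle_req(struct zram_wb_ctl *wb_ctl)
{
	struct zram_wb_req *req;
	unsigned long flags;

	/* Sketch: take the lock across lookup and removal. */
	spin_lock_irqsave(&wb_ctl->idle_lock, flags);
	req = list_first_entry_or_null(&wb_ctl->idle_reqs,
				       struct zram_wb_req, entry);
	if (req)
		list_del(&req->entry);
	spin_unlock_irqrestore(&wb_ctl->idle_lock, flags);

	return req;
}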
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich