From: Barry Song <baohua@kernel.org>
To: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Xueyuan Chen <xueyuan.chen21@gmail.com>,
	ryncsn@gmail.com, minchan@kernel.org,  akpm@linux-foundation.org,
	linux-mm@kvack.org, axboe@kernel.dk,
	 linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	kasong@tencent.com,  chrisl@kernel.org, justinjiang@vivo.com,
	liulei.rjpt@vivo.com
Subject: Re: [RFC PATCH] zram: support asynchronous GC for lazy slot freeing
Date: Thu, 16 Apr 2026 16:09:00 +0800
Message-ID: <CAGsJ_4z-3WQQR6f_mMYHNEMAc07h+JPMxQTsoR0+GWgkvQVRTg@mail.gmail.com>
In-Reply-To: <aeCRnAhfg3fNi6Ey@google.com>

On Thu, Apr 16, 2026 at 3:41 PM Sergey Senozhatsky
<senozhatsky@chromium.org> wrote:
>
> On (26/04/14 13:49), Xueyuan Chen wrote:
> > On Sun, Apr 12, 2026 at 07:48:48PM +0800, Kairui Song wrote:
> > [...]
> > >What is making this slot_free so costly? zs_free?
> >
> > Yes, I've captured some perf data on RK3588 cpu2:
> >
> >   -    3.79%     0.42%  zram  [zram]  [k] slot_free
> >      - 89.04% slot_free
> >         - 65.40% zs_free
> >            + 77.29% free_zspage
> >            + 21.75% kmem_cache_free
> >              0.68% __kern_my_cpu_offset
> >         + 13.19% _raw_spin_unlock
> >         + 4.86% _raw_read_unlock
> >           4.75% obj_free
> >         + 4.72% _raw_read_lock
> >           3.64% fix_fullness_group
> >         + 2.02% _raw_spin_lock
> >         + 1.31% kmem_cache_free
> >
> > It's clear that zs_free is the primary hotspot, accounting for ~65.40%
> > of the total slot_free cycles. Beyond that, there is also some
> > read-lock and spin-lock overhead in slot_free.
>
> Just a random thought: if zs_free() is costly then it likely also affects
> zswap, which makes me wonder if doing something on the zsmalloc side is a
> "better" way forward.

Xueyuan's perf data shows that ~65.4% of slot_free is spent in
zs_free, which still leaves roughly 35% elsewhere. That said, this
may be a measurement artifact; if we can confirm the zs_free share
is really >=90% or so, moving GC into zsmalloc looks like the better
option. My real use case in the Android world is zram rather than
zswap, but doing this on the zsmalloc side would benefit both.
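
To make that concrete, below is a rough, untested sketch of what
deferred freeing on the zsmalloc side could look like. Note that
zs_free_deferred() and the deferred_free_list/deferred_free_work
members of struct zs_pool are made-up names rather than existing
zsmalloc API, and a real patch would still have to deal with pool
teardown and flushing:

/*
 * Hypothetical sketch only (assumes <linux/llist.h> and
 * <linux/workqueue.h>): queue handles on a lock-free list and let a
 * workqueue batch the actual zs_free() calls, so callers such as
 * zram's slot_free return quickly and the expensive free_zspage
 * work runs asynchronously.
 */
struct zs_deferred_handle {
        struct llist_node node;
        unsigned long handle;
};

static void zs_deferred_free_work(struct work_struct *work)
{
        struct zs_pool *pool = container_of(work, struct zs_pool,
                                            deferred_free_work);
        struct llist_node *list = llist_del_all(&pool->deferred_free_list);
        struct zs_deferred_handle *dh, *tmp;

        llist_for_each_entry_safe(dh, tmp, list, node) {
                zs_free(pool, dh->handle);
                kfree(dh);
        }
}

void zs_free_deferred(struct zs_pool *pool, unsigned long handle)
{
        /* GFP_ATOMIC because callers may hold the slot lock */
        struct zs_deferred_handle *dh = kmalloc(sizeof(*dh), GFP_ATOMIC);

        if (!dh) {
                /* no memory to defer, fall back to synchronous free */
                zs_free(pool, handle);
                return;
        }
        dh->handle = handle;
        /* llist_add() returns true if the list was previously empty */
        if (llist_add(&dh->node, &pool->deferred_free_list))
                schedule_work(&pool->deferred_free_work);
}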

Meanwhile, I would also like to check whether batching the many
per-slot bit operations, such as clear_slot_flag(), can help further.
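
For instance, something like the sketch below. slot_clear_all_flags()
is a made-up helper, and the mask relies on the existing zram layout
where the bits below ZRAM_FLAG_SHIFT hold the compressed object size:

/*
 * Hypothetical sketch: clear every flag bit in a single store rather
 * than issuing one clear_slot_flag() call per flag.  Assumes the
 * caller already holds the slot lock, as the existing flag helpers do.
 */
static void slot_clear_all_flags(struct zram *zram, u32 index)
{
        unsigned long flags = zram->table[index].flags;

        /* keep the size bits below ZRAM_FLAG_SHIFT, drop all flags */
        zram->table[index].flags = flags & (BIT(ZRAM_FLAG_SHIFT) - 1);
}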

Thanks
Barry



Thread overview: 6+ messages
2026-04-12  6:04 Barry Song (Xiaomi)
2026-04-12 11:48 ` Kairui Song
2026-04-14  5:49   ` Xueyuan Chen
2026-04-16  7:41     ` Sergey Senozhatsky
2026-04-16  8:09       ` Barry Song [this message]
2026-04-17 21:59   ` Barry Song
