From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>,
Andrew Morton <akpm@linux-foundation.org>,
Nitin Gupta <ngupta@vflare.org>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCHv4 0/9] zsmalloc/zram: configurable zspage size
Date: Mon, 14 Nov 2022 12:53:57 +0900 [thread overview]
Message-ID: <Y3G71RFFZ/HL+yXN@google.com> (raw)
In-Reply-To: <Y26AbHxhPBJdWZQE@google.com>
Hi Minchan,
On (22/11/11 09:03), Minchan Kim wrote:
> > Sorry, not sure I'm following. So you want a .config value
> > for the zspage limit? I really like the sysfs knob, because then
> > one may set values on a per-device basis (if there are multiple
> > zram devices in a system with different data patterns):
>
> Yes, I wanted to have just a global policy that drives zsmalloc smarter
> without requiring a big effort from users to decide the right tuning
> value (I thought the decision process would be quite painful for a
> normal user who doesn't have enough resources), since zsmalloc's design
> makes that possible. But as an interim solution, until we prove there is
> no regression, just provide the config and then remove it later when we
> add aggressive zspage compaction (if necessary, please see below), since
> a config is easier to deprecate than a sysfs knob.
[..]
> I understand what you want to achieve with a per-pool config exposing
> the knob to the user, but my worry is still how the user could decide
> the best fit, since workloads are so dynamic. Some groups have enough
> resources to run fleet-wide experiments while many others don't, so if
> we really need the per-pool config step, at the least I'd like to
> provide a default guide to users in the documentation along with the
> tunable knobs for experimenting. Maybe we can suggest 4 for the swap
> case and 8 for the fs case.
>
> I don't disagree with the sysfs knobs for these use cases, but can't we
> deal with the issue in a better way?
[..]
> with *aggressive zspage compaction*. Now we rely on the shrinker
> (it might already be enough) to trigger it, but we could change the
> policy to compact once the wasted memory in a size class crosses a
> threshold we define, for the zram fs use case, since that would be
> used without memory pressure.
>
> What do you think?
This is tricky. I didn't want us to come up with any sort of policies
based on assumptions. For instance, we know that SUSE uses zram with a
filesystem under severe memory pressure (so severe that they immediately
noticed, and reported a regression, when we removed the zsmalloc handle
allocation slow path), so the assumption that the fs zram use case is not
memory sensitive does not always hold.
There are too many variables. We have different data patterns, yes, but
even the same data patterns have different characteristics when
compressed with different algorithms; then we also have different host
states (memory pressure, etc.) and so on.
I think it will be safer for us to approach this the other way around.
We can (that's what I was going to do) reach out to people (Android,
SUSE, Meta, ChromeOS, Google Cloud, WebOS, Tizen) and ask them to run
experiments (try out various numbers). Then (several months later) we
can take a look at the data - what numbers work for which workloads -
and introduce or change policies based on evidence and real use cases.
Who knows, maybe a zspage_chain_size of 6 can be the new default and
then we can add a .config policy, maybe 7 or 8. Or maybe we won't find
a single number that works equally well for everyone (even in similar
use cases).
This is where a sysfs knob is very useful. Unlike .config, which has no
flexibility, especially when your entire fleet uses the same .config for
all builds, a sysfs knob lets people run numerous A/B tests
simultaneously (not to mention that some setups have many zram devices,
which can have different zspage_chain_size-s). And we don't even need to
deprecate it if we introduce a generic one, like allocator_tunables,
which would accept `key=val` tuples. Then we can just deprecate a
specific `key`.