From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Nitin Gupta <ngupta@vflare.org>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCHv4 6/9] zsmalloc: pass limit on pages per-zspage to zs_create_pool()
Date: Fri, 11 Nov 2022 19:32:44 +0900
Message-ID: <Y24kzGHyjY9G0d7T@google.com>
In-Reply-To: <Y22vGNX0yGaAeoDR@google.com>

On (22/11/10 18:10), Minchan Kim wrote:
> On Mon, Oct 31, 2022 at 02:41:05PM +0900, Sergey Senozhatsky wrote:
> > Allow zsmalloc pool owner to specify max number of pages
> > per-zspage (during pool creation), so that different pools
> > can have different characteristics.
> > 
> > By default we pass ZS_DEFAULT_PAGES_PER_ZSPAGE which is 4
> > (matches the current order 2 zspages limit).
> 
> How could user decide what's the best size for their workload?

[..]

For starters, in a manner similar to what I showed during our meeting.
They can run tests, gather stats (zsmalloc object distribution),
analyze where most of the objects sit, how things change under
different cluster configurations, and so on.
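
For example, with CONFIG_ZSMALLOC_STAT=y zsmalloc exports per-class
stats in debugfs (for zram devices the pool name matches the device
name), which show exactly that distribution:

	# Sketch: dump per size-class object/page stats for zram0's pool
	# (requires CONFIG_ZSMALLOC_STAT=y and a mounted debugfs).
	cat /sys/kernel/debug/zsmalloc/zram0/classes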

But more importantly: they need lots of zramX mm_stat data, which is
perfectly traceable and collectable during fleet A/B testing: a number
of devices get randomly assigned to different experiments and receive
different zspage len configurations, which they feed to the zram sysfs
knobs during system startup (when the init script configures zram).
Then one looks for statistically significant improvements or
regressions.
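
Schematically, the collection step on each device can be as simple as
the following sketch ($ARM here is a made-up variable naming the
experiment group; the first two mm_stat columns are orig_data_size and
compr_data_size):

	# Sketch: report the compression ratio (orig_data_size /
	# compr_data_size, mm_stat columns 1 and 2) together with the
	# experiment arm. Assumes the device already holds data.
	awk -v arm="$ARM" '{ printf "arm=%s ratio=%.2f\n", arm, $1 / $2 }' \
		/sys/block/zram0/mm_stat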

This is how things are done in ChromeOS, and I'm sure in many other
places.

In this regard, finding the best zspage len value is no different from
finding the best zram disksize or the best compression algorithm.
Exactly the same approach: feed different configurations to devices
and then analyze the data, looking at mm_stat-s before and after the
experiment, per device class/type.

We can discuss this in more detail internally.

> >  static void zs_zpool_destroy(void *pool)
> > @@ -2195,6 +2195,7 @@ static int zs_register_shrinker(struct zs_pool *pool)
> >  /**
> >   * zs_create_pool - Creates an allocation pool to work from.
> >   * @name: pool name to be created
> > + * @num_pages: maximum number of pages per-zspage
> 
> How about "max_page_chain:"? 

OK.

Do you dislike the idea of creating a `struct zs_tunables` that would
hold all of the fields we can tune? zsmalloc users could then pass
that struct (a pointer to it) to zs_create_pool().

There can be various tunables, including policy changes: whether we
use a static zspool configuration or a dynamic one, and so on.
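
Something along these lines, so that all knobs live in one place (a
sketch only, all names are tentative):

	/* A sketch only, all names are tentative: */
	struct zs_tunables {
		/* max number of 0-order pages chained into a zspage */
		unsigned int max_page_chain;
		/* room for future knobs: static vs dynamic zspool policy, etc. */
	};

	struct zs_pool *zs_create_pool(const char *name,
				       const struct zs_tunables *tunables);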

On the zram side, we can have a generic sysfs knob, allocator_tuning,
which will accept named params, the same way we did it for
recomp_algorithm and recompress.

	echo "tuneable=VAL tuneable=VAL" > /sys/block/zramX/allocator_tuning

