From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Yosry Ahmed <yosry.ahmed@linux.dev>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>,
Nhat Pham <nphamcs@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Minchan Kim <minchan@kernel.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Brian Geffon <bgeffon@google.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH] zsmalloc: make common caches global
Date: Thu, 22 Jan 2026 12:28:56 +0900
Message-ID: <uodv6dukliy7bnfprh4yoxjkrn77uqljarlg5pmlippxsxygzv@gthjss7yyrlf>
In-Reply-To: <cevxzukpt4363kdlb5ofre5tquuvm6jyphstp6nlxjxqibj4wx@2yg4xwlcxwg5>
On (26/01/21 23:58), Yosry Ahmed wrote:
> On Wed, Jan 21, 2026 at 12:41:39PM +0900, Sergey Senozhatsky wrote:
> > On (26/01/19 13:44), Nhat Pham wrote:
> > > On Thu, Jan 15, 2026 at 9:53 PM Sergey Senozhatsky
> > > <senozhatsky@chromium.org> wrote:
> > > >
> > > > On (26/01/16 13:48), Sergey Senozhatsky wrote:
> > > > > Currently, zsmalloc creates kmem_cache of handles and zspages
> > > > > for each pool, which may be suboptimal from the memory usage
> > > > > point of view (extra internal fragmentation per pool). Systems
> > > > > that create multiple zsmalloc pools may benefit from shared
> > > > > common zsmalloc caches.
> > > >
> > > > This is step 1.
> > > >
> > > > Step 2 is to look into possibility of sharing zsmalloc pools.
> > > > E.g. if there are N zram devices in the system, do we really need
> > > > N zsmalloc pools? Can we just share a single pool between them?
> > >
> > > Ditto for zswap (although here, we almost always only have a single zswap pool).
> >
> > COMPLETELY UNTESTED (current linux-next doesn't boot for me, hitting
> > an "Oops: stack guard page: 0000" early during boot).
> >
> > So I'm thinking of something like below. Basically have a Kconfig
> > option to turn zsmalloc into a singleton pool mode, transparently
> > for zsmalloc users.
>
> Why do we need a config option? Is the main concern with a single pool
> lock contention? If yes, we can probably measure it by spawning many
> zram devices and stressing them at the same time.
That's a good question. I haven't thought about simply converting
zsmalloc to a singleton pool by default. I don't think lock contention
is my main concern: the upper bound on contention should be the same
either way, since at most num_online_cpus() tasks can concurrently
access any given zsmalloc pool, be it a singleton or not. I will
certainly try to measure it once I have linux-next booting again.
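
Just to make "singleton pool mode, transparently for zsmalloc users" a
bit more concrete, I'm thinking of roughly the following shape. This is
illustrative only, not the posted patch; __zs_create_pool() and
__zs_destroy_pool() below are stand-ins for the existing per-pool
setup/teardown paths, and the zs_global_* names are made up:

/*
 * Rough sketch: zs_create_pool() hands out one refcounted global pool,
 * zs_destroy_pool() drops the reference and tears the pool down when
 * the last user goes away.
 */
static struct zs_pool *zs_global_pool;
static int zs_global_refcount;
static DEFINE_MUTEX(zs_global_lock);

struct zs_pool *zs_create_pool(const char *name)
{
	struct zs_pool *pool;

	mutex_lock(&zs_global_lock);
	if (!zs_global_pool)
		zs_global_pool = __zs_create_pool(name);
	pool = zs_global_pool;
	if (pool)
		zs_global_refcount++;
	mutex_unlock(&zs_global_lock);

	return pool;
}

void zs_destroy_pool(struct zs_pool *pool)
{
	mutex_lock(&zs_global_lock);
	if (!--zs_global_refcount) {
		__zs_destroy_pool(zs_global_pool);
		zs_global_pool = NULL;
	}
	mutex_unlock(&zs_global_lock);
}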
What was the reason for allocating multiple zsmalloc pools in zswap?
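
FWIW, the step 1 part of this RFC (shared handle/zspage caches) boils
down to roughly the sketch below; the names and error handling here are
illustrative, the actual patch may differ:

/*
 * Rough sketch: create the handle and zspage kmem caches once at
 * module init, instead of once per zs_create_pool().
 */
static struct kmem_cache *zs_handle_cache;
static struct kmem_cache *zs_zspage_cache;

static int __init zs_init(void)
{
	zs_handle_cache = kmem_cache_create("zs_handle", ZS_HANDLE_SIZE,
					    0, 0, NULL);
	if (!zs_handle_cache)
		return -ENOMEM;

	zs_zspage_cache = kmem_cache_create("zspage", sizeof(struct zspage),
					    0, 0, NULL);
	if (!zs_zspage_cache) {
		kmem_cache_destroy(zs_handle_cache);
		return -ENOMEM;
	}

	return 0;
}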