From: Yosry Ahmed <yosry.ahmed@linux.dev>
To: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Nhat Pham <nphamcs@gmail.com>,
	 Andrew Morton <akpm@linux-foundation.org>,
	Minchan Kim <minchan@kernel.org>,
	 Johannes Weiner <hannes@cmpxchg.org>,
	Brian Geffon <bgeffon@google.com>,
	linux-kernel@vger.kernel.org,  linux-mm@kvack.org
Subject: Re: [RFC PATCH] zsmalloc: make common caches global
Date: Thu, 22 Jan 2026 03:39:30 +0000	[thread overview]
Message-ID: <fjb3hzhbmnlgqquahaevekydn5enb45rhgzhixqrtykxaxjk5f@xlcyzanq6qxp> (raw)
In-Reply-To: <uodv6dukliy7bnfprh4yoxjkrn77uqljarlg5pmlippxsxygzv@gthjss7yyrlf>

On Thu, Jan 22, 2026 at 12:28:56PM +0900, Sergey Senozhatsky wrote:
> On (26/01/21 23:58), Yosry Ahmed wrote:
> > On Wed, Jan 21, 2026 at 12:41:39PM +0900, Sergey Senozhatsky wrote:
> > > On (26/01/19 13:44), Nhat Pham wrote:
> > > > On Thu, Jan 15, 2026 at 9:53 PM Sergey Senozhatsky
> > > > <senozhatsky@chromium.org> wrote:
> > > > >
> > > > > On (26/01/16 13:48), Sergey Senozhatsky wrote:
> > > > > > Currently, zsmalloc creates kmem_caches for handles and zspages
> > > > > > for each pool, which may be suboptimal from the memory usage
> > > > > > point of view (extra internal fragmentation per pool).  Systems
> > > > > > that create multiple zsmalloc pools may benefit from shared
> > > > > > common zsmalloc caches.
> > > > >
> > > > > This is step 1.
> > > > >
> > > > > Step 2 is to look into the possibility of sharing zsmalloc pools.
> > > > > E.g. if there are N zram devices in the system, do we really need
> > > > > N zsmalloc pools?  Can we just share a single pool between them?
> > > > 
> > > > Ditto for zswap (although here, we almost always have only a single zswap pool).
> > > 
> > > COMPLETELY UNTESTED (current linux-next doesn't boot for me, hitting
> > > an "Oops: stack guard page: 0000" early during boot).
> > > 
> > > So I'm thinking of something like below.  Basically, have a Kconfig
> > > option that turns zsmalloc into a singleton-pool mode, transparently
> > > to zsmalloc users.
> > 
> > Why do we need a config option? Is the main concern lock contention
> > on a single pool? If so, we can probably measure it by spawning many
> > zram devices and stressing them at the same time.
> 
> That's a good question.  I haven't thought about just converting
> zsmalloc to a singleton pool by default.  I don't think I'm concerned
> about lock contention; the thing is, we should have the same upper
> bound contention-wise (there are only num_online_cpus() tasks that
> can concurrently access any zsmalloc pool, be it a singleton or not).
> I certainly will try to measure once I have linux-next booting
> again.
> 
> What was the reason you allocated many zsmalloc pools in zswap?

IIRC it was actually lock contention, specifically the pool spinlock.
When the change was made to per-class spinlocks, we dropped the multiple
pools:
http://lore.kernel.org/linux-mm/20240617-zsmalloc-lock-mm-everything-v1-0-5e5081ea11b3@linux.dev/.

So having multiple pools does mitigate lock contention in some cases.
Even though the upper bound might be the same, the actual number of
CPUs contending on any single lock goes down in practice.
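
For context, the way zswap spread that contention was an array of
zpools per zswap_pool, picked by hashing the entry pointer.  A
simplified sketch from memory (the constant and helper names
approximate what mm/zswap.c had at the time, and the struct is
trimmed to the relevant field):

#define ZSWAP_NR_ZPOOLS 32      /* must be a power of 2 for hash_ptr() */

struct zswap_pool {
        struct zpool *zpools[ZSWAP_NR_ZPOOLS];
        /* ... other fields elided ... */
};

/*
 * Hash the entry pointer so that concurrent stores scatter across
 * ZSWAP_NR_ZPOOLS pool locks instead of all hitting a single one.
 */
static struct zpool *zswap_find_zpool(struct zswap_entry *entry)
{
        return entry->pool->zpools[hash_ptr(entry, ilog2(ZSWAP_NR_ZPOOLS))];
}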

While looking for this, I actually found something more interesting: I
proposed more or less the same patch back when zswap used multiple
pools:
https://lore.kernel.org/all/20240604175340.218175-1-yosryahmed@google.com/.

Seems like Minchan had some concerns back then. I wonder if those still
apply.
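
For anyone skimming the thread, the shape of the change in both
patches (mine back then and Sergey's RFC) is to create the handle and
zspage kmem_caches once, refcounted across pools, instead of one pair
per pool.  This is purely an illustrative sketch: the function names
and refcounting details are mine rather than from either patch, while
ZS_HANDLE_SIZE and struct zspage are the existing definitions in
mm/zsmalloc.c:

static struct kmem_cache *zs_handle_cache;
static struct kmem_cache *zs_zspage_cache;
static int zs_cache_users;      /* protected by zs_cache_mutex */
static DEFINE_MUTEX(zs_cache_mutex);

static int zs_get_common_caches(void)
{
        int ret = 0;

        mutex_lock(&zs_cache_mutex);
        if (zs_cache_users++ == 0) {
                zs_handle_cache = kmem_cache_create("zs_handle",
                                ZS_HANDLE_SIZE, 0, 0, NULL);
                zs_zspage_cache = kmem_cache_create("zspage",
                                sizeof(struct zspage), 0, 0, NULL);
                if (!zs_handle_cache || !zs_zspage_cache) {
                        /* kmem_cache_destroy() tolerates NULL */
                        kmem_cache_destroy(zs_handle_cache);
                        kmem_cache_destroy(zs_zspage_cache);
                        zs_cache_users--;
                        ret = -ENOMEM;
                }
        }
        mutex_unlock(&zs_cache_mutex);
        return ret;
}

static void zs_put_common_caches(void)
{
        mutex_lock(&zs_cache_mutex);
        if (--zs_cache_users == 0) {
                kmem_cache_destroy(zs_handle_cache);
                kmem_cache_destroy(zs_zspage_cache);
        }
        mutex_unlock(&zs_cache_mutex);
}

zs_create_pool() would then take a reference instead of creating its
own caches, and zs_destroy_pool() would drop it, keeping the change
transparent to zsmalloc users.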



Thread overview: 14+ messages
2026-01-16  4:48 Sergey Senozhatsky
2026-01-16  5:52 ` Sergey Senozhatsky
2026-01-19 21:44   ` Nhat Pham
2026-01-21  3:41     ` Sergey Senozhatsky
2026-01-21 23:58       ` Yosry Ahmed
2026-01-22  3:28         ` Sergey Senozhatsky
2026-01-22  3:39           ` Yosry Ahmed [this message]
2026-01-22  3:55             ` Sergey Senozhatsky
2026-01-16 20:49 ` Yosry Ahmed
2026-01-17  2:24   ` Sergey Senozhatsky
2026-01-21  1:30     ` Yosry Ahmed
2026-01-21  1:56       ` Sergey Senozhatsky
2026-01-19 21:43 ` Nhat Pham
2026-01-20  1:19   ` Sergey Senozhatsky
