From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Yosry Ahmed <yosry.ahmed@linux.dev>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>,
Andrew Morton <akpm@linux-foundation.org>,
Nhat Pham <nphamcs@gmail.com>, Minchan Kim <minchan@kernel.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Brian Geffon <bgeffon@google.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH] zsmalloc: use actual object size to detect spans
Date: Wed, 7 Jan 2026 14:38:36 +0900
Message-ID: <3tiwggkz53gkl3ysemobzny5ymvjzn7ssfxbevae6ptpfbdzph@riv2id277ctm>
In-Reply-To: <mumv2eouuqepj5btx4jeghevlleucdslt76erxtt4sm5ntavx5@oipdwrdhzlta>
On (26/01/07 05:22), Yosry Ahmed wrote:
> On Wed, Jan 07, 2026 at 12:03:37PM +0900, Sergey Senozhatsky wrote:
> > On (26/01/07 02:10), Yosry Ahmed wrote:
> > > I think the changes need to be shuffled around to avoid this, or just
> > > have a combined patch, which would be less pretty.
> >
> > Dunno. Do we want to completely separate HugePage handling
> > and make it a fast path? That seems to make things work.
>
> HugePage should always be PAGE_SIZE, so never spans two pages, right?
Right:

	if (unlikely(class->objs_per_zspage == 1 && class->pages_per_zspage == 1))
		SetZsHugePage(zspage);
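
To spell out why (just a sketch of the reasoning, reusing the existing
zsmalloc names, so take the exact expressions with a grain of salt):

	/*
	 * For a class marked huge above:
	 *   objs_per_zspage == 1  ->  obj_idx is always 0
	 *   ->  off == offset_in_page(class->size * obj_idx) == 0
	 * and class->size never exceeds ZS_MAX_ALLOC_SIZE (== PAGE_SIZE),
	 * so off + class->size <= PAGE_SIZE: the single object always
	 * fits entirely in its one physical page.
	 */

IOW the spanning case simply cannot occur for huge classes.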
> I like separating the logic because it's cleaner, but I want us to
> understand the problem first (see my other reply) instead of just
> papering over it.
Sure.
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index cb449acc8809..9b067853b6c2 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1077,6 +1077,7 @@ void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
> > unsigned long obj, off;
> > unsigned int obj_idx;
> > struct size_class *class;
> > + size_t sizes[2];
> > void *addr;
> >
> > /* Guarantee we can get zspage from handle safely */
> > @@ -1089,35 +1090,27 @@ void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
> > zspage_read_lock(zspage);
> > read_unlock(&pool->lock);
> >
> > + /* Fast path for huge size class */
> > + if (ZsHugePage(zspage))
> > + return kmap_local_zpdesc(zpdesc);
>
> Can we WARN here if somehow the HugePage is spanning two pages?
I can add a WARN, but that really cannot happen. We always allocate just
one physical page per zspage for such size classes.
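
If we do add it, here is a rough (untested) sketch of what I'd put in
the fast path above, reusing the existing helpers (zspage_class(),
offset_in_page()) and assuming obj_idx is already decoded at that
point:

	/* Fast path for huge size class */
	if (ZsHugePage(zspage)) {
		class = zspage_class(pool, zspage);
		off = offset_in_page(class->size * obj_idx);
		/*
		 * A huge zspage holds a single object in a single
		 * physical page, so it must never span pages.
		 */
		WARN_ON_ONCE(off + class->size > PAGE_SIZE);
		return kmap_local_zpdesc(zpdesc);
	}

Note that this brings the size-class lookup back into the fast path,
which is part of why I'd rather skip the WARN.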