linux-mm.kvack.org archive mirror
From: Johannes Weiner <hannes@cmpxchg.org>
To: Vitaly Wool <vitaly.wool@konsulko.se>
Cc: Igor Belousov <igor.b@beldev.am>,
	linux-mm@kvack.org, akpm@linux-foundation.org,
	linux-kernel@vger.kernel.org, Nhat Pham <nphamcs@gmail.com>,
	Shakeel Butt <shakeel.butt@linux.dev>,
	Yosry Ahmed <yosry.ahmed@linux.dev>,
	Minchan Kim <minchan@kernel.org>,
	Sergey Senozhatsky <senozhatsky@chromium.org>
Subject: Re: [PATCH] mm/zblock: use vmalloc for page allocations
Date: Mon, 5 May 2025 10:08:12 -0400
Message-ID: <20250505140812.GA30814@cmpxchg.org>
In-Reply-To: <83CB359A-955E-48B6-B0D9-DD4F2E1146D4@konsulko.se>

On Sat, May 03, 2025 at 08:46:07PM +0200, Vitaly Wool wrote:
> 
> 
> > On May 2, 2025, at 10:07 AM, Igor Belousov <igor.b@beldev.am> wrote:
> > 
> > On 2025-05-02 12:01, Vitaly Wool wrote:
> >> From: Igor Belousov <igor.b@beldev.am>
> >> Use vmalloc for page allocations for zblock blocks to avoid extra
> >> pressure on the memory subsystem from multiple higher-order
> >> allocations.
> >> While at it, introduce a module parameter to opportunistically
> >> allocate pages of lower orders via try_page_alloc() for faster
> >> allocations whenever possible.
> >> Since vmalloc works fine with non-power-of-2 numbers of pages,
> >> rewrite the block size tables to take advantage of that.
> >> Signed-off-by: Igor Belousov <igor.b@beldev.am>
> >> Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
> >> ---
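
[The allocation pattern the changelog describes -- try a physically
contiguous low-order allocation first, fall back to vmalloc(), which is
happy with any page count -- might look roughly like the sketch below.
This is only an illustration against the stock page-allocator and vmalloc
APIs: the helper names zblock_alloc_block()/zblock_free_block(), the
fast_alloc parameter name, and the power-of-2 / nr_pages <= 4 cut-off are
invented here and are not taken from the patch; the changelog's
try_page_alloc() presumably wraps something like the fast path shown.

/*
 * Sketch only: helper names and the fast-path cut-off are invented for
 * illustration and are not taken from the zblock patch itself.
 */
#include <linux/gfp.h>
#include <linux/log2.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/vmalloc.h>

static bool fast_alloc = true;
module_param(fast_alloc, bool, 0644);

/* Allocate a block of nr_pages pages; nr_pages need not be a power of 2. */
static void *zblock_alloc_block(unsigned int nr_pages)
{
	size_t size = (size_t)nr_pages << PAGE_SHIFT;

	/*
	 * Opportunistic fast path: small, power-of-2-sized blocks can come
	 * straight from the page allocator, avoiding vmalloc's mapping
	 * overhead.  __GFP_NOWARN because failure is not an error here --
	 * we simply fall back to vmalloc below.
	 */
	if (fast_alloc && is_power_of_2(nr_pages) && nr_pages <= 4) {
		struct page *page = alloc_pages(GFP_KERNEL | __GFP_NOWARN,
						get_order(size));
		if (page)
			return page_address(page);
	}

	/*
	 * Fallback: vmalloc only needs order-0 pages, so it works for any
	 * page count and avoids piling higher-order requests onto the
	 * buddy allocator under memory pressure.
	 */
	return vmalloc(size);
}

static void zblock_free_block(void *block, unsigned int nr_pages)
{
	if (is_vmalloc_addr(block))
		vfree(block);
	else
		free_pages((unsigned long)block,
			   get_order((size_t)nr_pages << PAGE_SHIFT));
}

Whether the real patch gates the fast path on a size threshold, a
power-of-2 check, or something else is not visible from the changelog
alone.]
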
> >> Tests run on qemu-arm64 (8 CPUs, 1.5G RAM, 4K pages):
> >> 1. zblock
> >> 43205.38user
> >> 7320.53system
> >> 2:12:04elapsed
> >> zswpin 346127
> >> zswpout 1642438
> >> 2. zsmalloc
> >> 47194.61user
> >> 7978.48system
> >> 2:25:03elapsed
> >> zswpin 448031
> >> zswpout 1810485
> >> So zblock gives a nearly 10% advantage.
> >> Please note that zsmalloc *crashes* on 16K page tests so I couldn't
> >> compare performance in that case.
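
[For reference, the "nearly 10%" figure presumably comes from the elapsed
times quoted above:

  zblock:   2:12:04 elapsed = 7924 s
  zsmalloc: 2:25:03 elapsed = 8703 s
  (8703 - 7924) / 8703 ≈ 9 %]
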
> > 
> > Right, and it looks like this:
> > 
> > [  762.499278]  bug_handler+0x0/0xa8
> > [  762.499433]  die_kernel_fault+0x1c4/0x36c
> > [  762.499616]  fault_from_pkey+0x0/0x98
> > [  762.499784]  do_translation_fault+0x3c/0x94
> > [  762.499969]  do_mem_abort+0x44/0x94
> > [  762.500140]  el1_abort+0x40/0x64
> > [  762.500306]  el1h_64_sync_handler+0xa4/0x120
> > [  762.500502]  el1h_64_sync+0x6c/0x70
> > [  762.500718]  __pi_memcpy_generic+0x1e4/0x22c (P)
> > [  762.500931]  zs_zpool_obj_write+0x10/0x1c
> > [  762.501117]  zpool_obj_write+0x18/0x24
> > [  762.501305]  zswap_store+0x490/0x7c4
> > [  762.501474]  swap_writepage+0x260/0x448
> > [  762.501654]  pageout+0x148/0x340
> > [  762.501816]  shrink_folio_list+0xa7c/0xf34
> > [  762.502008]  shrink_lruvec+0x6fc/0xbd0
> > [  762.502189]  shrink_node+0x52c/0x960
> > [  762.502359]  balance_pgdat+0x344/0x738
> > [  762.502537]  kswapd+0x210/0x37c
> > [  762.502691]  kthread+0x12c/0x204
> > [  762.502920]  ret_from_fork+0x10/0x20
> 
> In fact we don’t know if zsmalloc is actually supposed to work with
> 16K pages. That’s a question for Sergey and Minchan. If it is
> indeed supposed to handle 16K pages, I would suggest that you
> submit a full report with reproduction steps and/or provide a
> fix if possible.

I've been using zsmalloc with 16k pages just fine for ~a year,
currently running it on 6.14.2-asahi. This machine sees a lot of
memory pressure, too.

Could this be a more recent regression, maybe in the new obj_write()?



Thread overview: 22+ messages
2025-05-02  8:01 Vitaly Wool
2025-05-02  8:07 ` Igor Belousov
2025-05-03 18:46   ` Vitaly Wool
2025-05-04  5:02     ` Sergey Senozhatsky
2025-05-04  6:14       ` Sergey Senozhatsky
2025-05-05 14:08     ` Johannes Weiner [this message]
2025-05-06  2:13       ` Sergey Senozhatsky
2025-05-05 14:29 ` Johannes Weiner
2025-05-06  9:42   ` Uladzislau Rezki
2025-05-06 13:13 ` Yosry Ahmed
2025-05-06 13:27   ` Herbert Xu
2025-05-06 13:37     ` Christoph Hellwig
2025-05-07  5:57   ` Sergey Senozhatsky
2025-05-07  6:08     ` Sergey Senozhatsky
2025-05-07  6:14       ` Sergey Senozhatsky
2025-05-07  6:54       ` Christoph Hellwig
2025-05-08  5:58         ` Sergey Senozhatsky
2025-05-08  6:00           ` Christoph Hellwig
2025-05-08  6:17             ` Sergey Senozhatsky
2025-05-08  6:33               ` Sergey Senozhatsky
2025-05-07  8:50       ` Uladzislau Rezki
2025-05-08  6:07         ` Sergey Senozhatsky
