linux-mm.kvack.org archive mirror
From: Vitaly Wool <vitaly.wool@konsulko.se>
To: Igor Belousov <igor.b@beldev.am>
Cc: linux-mm@kvack.org, akpm@linux-foundation.org,
	linux-kernel@vger.kernel.org, Nhat Pham <nphamcs@gmail.com>,
	Shakeel Butt <shakeel.butt@linux.dev>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Yosry Ahmed <yosry.ahmed@linux.dev>,
	Minchan Kim <minchan@kernel.org>,
	Sergey Senozhatsky <senozhatsky@chromium.org>
Subject: Re: [PATCH] mm/zblock: use vmalloc for page allocations
Date: Sat, 3 May 2025 20:46:07 +0200
Message-ID: <83CB359A-955E-48B6-B0D9-DD4F2E1146D4@konsulko.se>
In-Reply-To: <fddf0457275576c890d16921465cf473@beldev.am>



> On May 2, 2025, at 10:07 AM, Igor Belousov <igor.b@beldev.am> wrote:
> 
> On 2025-05-02 12:01, Vitaly Wool wrote:
>> From: Igor Belousov <igor.b@beldev.am>
>> Use vmalloc for page allocations for zblock blocks to avoid extra
>> pressure on the memory subsystem from multiple higher-order
>> allocations.
>> While at it, introduce a module parameter to opportunistically
>> allocate pages of lower orders via try_page_alloc() for faster
>> allocations whenever possible.
>> Since vmalloc works fine with non-power-of-2 page counts,
>> rewrite the block size tables to take advantage of that.
>> Signed-off-by: Igor Belousov <igor.b@beldev.am>
>> Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
>> ---
>> Tests run on qemu-arm64 (8 CPUs, 1.5G RAM, 4K pages):
>> 1. zblock
>> 43205.38user
>> 7320.53system
>> 2:12:04elapsed
>> zswpin 346127
>> zswpout 1642438
>> 2. zsmalloc
>> 47194.61user
>> 7978.48system
>> 2:25:03elapsed
>> zswpin 448031
>> zswpout 1810485
>> So zblock gives a nearly 10% advantage (elapsed: 7,924 s vs 8,703 s,
>> i.e. zsmalloc takes ~9.8% longer).
>> Please note that zsmalloc *crashes* on 16K page tests, so I couldn't
>> compare performance in that case.
> 
> Right, and it looks like this:
> 
> [  762.499278]  bug_handler+0x0/0xa8
> [  762.499433]  die_kernel_fault+0x1c4/0x36c
> [  762.499616]  fault_from_pkey+0x0/0x98
> [  762.499784]  do_translation_fault+0x3c/0x94
> [  762.499969]  do_mem_abort+0x44/0x94
> [  762.500140]  el1_abort+0x40/0x64
> [  762.500306]  el1h_64_sync_handler+0xa4/0x120
> [  762.500502]  el1h_64_sync+0x6c/0x70
> [  762.500718]  __pi_memcpy_generic+0x1e4/0x22c (P)
> [  762.500931]  zs_zpool_obj_write+0x10/0x1c
> [  762.501117]  zpool_obj_write+0x18/0x24
> [  762.501305]  zswap_store+0x490/0x7c4
> [  762.501474]  swap_writepage+0x260/0x448
> [  762.501654]  pageout+0x148/0x340
> [  762.501816]  shrink_folio_list+0xa7c/0xf34
> [  762.502008]  shrink_lruvec+0x6fc/0xbd0
> [  762.502189]  shrink_node+0x52c/0x960
> [  762.502359]  balance_pgdat+0x344/0x738
> [  762.502537]  kswapd+0x210/0x37c
> [  762.502691]  kthread+0x12c/0x204
> [  762.502920]  ret_from_fork+0x10/0x20

In fact we don’t know whether zsmalloc is actually supposed to work with 16K pages; that’s a question for Sergey and Minchan. If it is indeed supposed to handle 16K pages, I would suggest that you submit a full report with reproduction steps and/or provide a fix if possible.
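
For reference, the opportunistic allocation scheme the patch description refers to boils down to something like the following. This is a minimal sketch with hypothetical names (opportunistic_alloc, block_alloc_pages, block_free_pages), not the actual zblock code:

#include <linux/gfp.h>
#include <linux/log2.h>
#include <linux/mm.h>
#include <linux/moduleparam.h>
#include <linux/vmalloc.h>

/* hypothetical module parameter gating the fast-path allocation */
static bool opportunistic_alloc = true;
module_param(opportunistic_alloc, bool, 0644);

/* allocate a block spanning nr_pages pages */
static void *block_alloc_pages(unsigned int nr_pages)
{
	if (opportunistic_alloc && is_power_of_2(nr_pages)) {
		/* try a physically contiguous allocation once, quietly */
		struct page *page = alloc_pages(GFP_KERNEL | __GFP_NORETRY |
						__GFP_NOWARN,
						ilog2(nr_pages));
		if (page)
			return page_address(page);
	}

	/* fallback: vmalloc copes with any (also non-power-of-2) page count */
	return vmalloc(nr_pages * PAGE_SIZE);
}

static void block_free_pages(void *ptr, unsigned int nr_pages)
{
	if (is_vmalloc_addr(ptr))
		vfree(ptr);
	else
		free_pages((unsigned long)ptr, ilog2(nr_pages));
}

The idea is that the fast path skips the vmalloc mapping overhead whenever a contiguous run happens to be available, while the vmalloc fallback never fails merely because physical memory is fragmented.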

~Vitaly




Thread overview: 22+ messages
2025-05-02  8:01 Vitaly Wool
2025-05-02  8:07 ` Igor Belousov
2025-05-03 18:46   ` Vitaly Wool [this message]
2025-05-04  5:02     ` Sergey Senozhatsky
2025-05-04  6:14       ` Sergey Senozhatsky
2025-05-05 14:08     ` Johannes Weiner
2025-05-06  2:13       ` Sergey Senozhatsky
2025-05-05 14:29 ` Johannes Weiner
2025-05-06  9:42   ` Uladzislau Rezki
2025-05-06 13:13 ` Yosry Ahmed
2025-05-06 13:27   ` Herbert Xu
2025-05-06 13:37     ` Christoph Hellwig
2025-05-07  5:57   ` Sergey Senozhatsky
2025-05-07  6:08     ` Sergey Senozhatsky
2025-05-07  6:14       ` Sergey Senozhatsky
2025-05-07  6:54       ` Christoph Hellwig
2025-05-08  5:58         ` Sergey Senozhatsky
2025-05-08  6:00           ` Christoph Hellwig
2025-05-08  6:17             ` Sergey Senozhatsky
2025-05-08  6:33               ` Sergey Senozhatsky
2025-05-07  8:50       ` Uladzislau Rezki
2025-05-08  6:07         ` Sergey Senozhatsky
