linux-mm.kvack.org archive mirror
From: Nhat Pham <nphamcs@gmail.com>
To: Barry Song <21cnbao@gmail.com>
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, axboe@kernel.dk,
	 bala.seshasayee@linux.intel.com, chrisl@kernel.org,
	david@redhat.com,  hannes@cmpxchg.org,
	kanchana.p.sridhar@intel.com, kasong@tencent.com,
	 linux-block@vger.kernel.org, minchan@kernel.org,
	senozhatsky@chromium.org,  surenb@google.com, terrelln@fb.com,
	v-songbaohua@oppo.com,  wajdi.k.feghali@intel.com,
	willy@infradead.org, ying.huang@intel.com,
	 yosryahmed@google.com, yuzhao@google.com,
	zhengtangquan@oppo.com,  zhouchengming@bytedance.com,
	usamaarif642@gmail.com, ryan.roberts@arm.com
Subject: Re: [PATCH RFC v2 0/2] mTHP-friendly compression in zsmalloc and zram based on multi-pages
Date: Mon, 11 Nov 2024 11:30:10 -0800	[thread overview]
Message-ID: <CAKEwX=OAE9r_VH38c3M0ekmBYWb5qKS-LPiBFBqToaJwEg3hJw@mail.gmail.com> (raw)
In-Reply-To: <20241107101005.69121-1-21cnbao@gmail.com>

On Thu, Nov 7, 2024 at 2:10 AM Barry Song <21cnbao@gmail.com> wrote:
>
> From: Barry Song <v-songbaohua@oppo.com>
>
> When large folios are compressed at a larger granularity, we observe
> a notable reduction in CPU usage and a significant improvement in
> compression ratios.
>
> mTHP's ability to be swapped out without splitting and swapped back in
> as a whole allows compression and decompression at larger granularities.
>
> This patchset enhances zsmalloc and zram by adding support for dividing
> large folios into multi-page blocks, typically configured with order-2
> granularity. Without this patchset, a large folio is always divided
> into `nr_pages` 4KiB blocks.
>
> The granularity can be set using the `ZSMALLOC_MULTI_PAGES_ORDER`
> setting, where the default of 2 allows all anonymous THP to benefit.
>
> Examples include:
> * A 16KiB large folio will be compressed and stored as a single 16KiB
>   block.
> * A 64KiB large folio will be compressed and stored as four 16KiB
>   blocks.
>
> For example, swapping out and swapping in 100MiB of typical anonymous
> data 100 times (with 16KiB mTHP enabled) using zstd yields the
> following results:
>
>                         w/o patches        w/ patches
> swap-out time(ms)       68711              49908
> swap-in time(ms)        30687              20685
> compression ratio       20.49%             16.9%

The data looks very promising :) My understanding is that this also
results in memory savings, right? Since zstd operates better on bigger
inputs.

Is there any end-to-end benchmarking? My intuition is that this patch
series will improve the situation overall, assuming we don't fall back
to individual zero-order page swap-in too often, but it'd be nice to
have some data backing this intuition (especially with the upstream
setup, i.e. without any private patches). If the fallback scenario
happens frequently, the patch series can make a page fault more
expensive (since we have to decompress the entire chunk, then discard
everything but the single page being loaded in), so it might make a
difference.

Otherwise I'm not super qualified to comment on the zram changes - just
a casual observer seeing whether we can adopt this for zswap. zswap has
the added complexity of not supporting THP zswap-in (until Usama's
patch series lands), plus the presence of mixed backing states (due to
zswap writeback), which increases the likelihood of fallback :)


Thread overview: 21+ messages
2024-11-07 10:10 Barry Song
2024-11-07 10:10 ` [PATCH RFC v2 1/2] mm: zsmalloc: support objects compressed based on multiple pages Barry Song
2024-11-07 10:10 ` [PATCH RFC v2 2/2] zram: support compression at the granularity of multi-pages Barry Song
2024-11-08  5:19 ` [PATCH RFC v2 0/2] mTHP-friendly compression in zsmalloc and zram based on multi-pages Huang, Ying
2024-11-08  6:51   ` Barry Song
2024-11-11 16:43     ` Usama Arif
2024-11-11 20:31       ` Barry Song
2024-11-18  9:56         ` Sergey Senozhatsky
2024-11-18 20:27           ` Barry Song
2024-11-19  2:45             ` Sergey Senozhatsky
2024-11-19  2:51               ` Barry Song
2024-11-12  1:07     ` Huang, Ying
2024-11-12  1:25       ` Barry Song
2024-11-12  1:25         ` Huang, Ying
2024-11-11 19:30 ` Nhat Pham [this message]
2024-11-11 21:37   ` Barry Song
2024-11-18 10:27     ` Barry Song
2024-11-18 20:00       ` Nhat Pham
2024-11-18 20:28       ` Usama Arif
2024-11-18 20:51         ` Barry Song
2024-11-18 21:48           ` Barry Song
