From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, minchan@kernel.org,
	senozhatsky@chromium.org,  linux-block@vger.kernel.org,
	axboe@kernel.dk, linux-mm@kvack.org,
	 Ryan Roberts <ryan.roberts@arm.com>
Cc: terrelln@fb.com, chrisl@kernel.org, david@redhat.com,
	kasong@tencent.com, yuzhao@google.com, yosryahmed@google.com,
	nphamcs@gmail.com, willy@infradead.org, hannes@cmpxchg.org,
	ying.huang@intel.com, surenb@google.com,
	wajdi.k.feghali@intel.com, kanchana.p.sridhar@intel.com,
	corbet@lwn.net, zhouchengming@bytedance.com,
	"Barry Song" <v-songbaohua@oppo.com>,
	"郑堂权(Blues Zheng)" <zhengtangquan@oppo.com>
Subject: Re: [PATCH RFC 0/2] mTHP-friendly compression in zsmalloc and zram based on multi-pages
Date: Thu, 28 Mar 2024 11:01:17 +1300	[thread overview]
Message-ID: <CAGsJ_4y6xFXXUt78VLK1o_+AEMkf=NJEXuye0dtvGvk+i6xXRw@mail.gmail.com> (raw)
In-Reply-To: <20240327214816.31191-1-21cnbao@gmail.com>

Apologies for the top posting.

+Ryan, I missed adding Ryan at the last moment :-)

On Thu, Mar 28, 2024 at 10:48 AM Barry Song <21cnbao@gmail.com> wrote:
>
> From: Barry Song <v-songbaohua@oppo.com>
>
> mTHP is often regarded as a potential source of memory waste due to fragmentation,
> but it can also be a source of memory savings: when large folios are compressed at
> a larger granularity, we observe a remarkable decrease in CPU utilization and a
> significant improvement in compression ratio.
>
> The following data show the compression time and compressed size for typical
> anonymous pages gathered from Android phones (sizes in bytes):
>
> granularity   orig_data_size   compr_data_size   time(us)
> 4KiB-zstd      1048576000       246876055        50259962
> 64KiB-zstd     1048576000       199763892        18330605
>
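> Derived from these numbers: the compression ratio improves from about 4.25x
> (1048576000 / 246876055) at 4KiB granularity to about 5.25x (1048576000 /
> 199763892) at 64KiB granularity, while compression time drops from roughly
> 50.3s to 18.3s, i.e. about 2.7x less CPU time.
>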
> Because mTHP can be swapped out without splitting[1] and swapped in as a
> whole[2], compression and decompression can be performed at larger
> granularities.
>
> This patchset enhances zsmalloc and zram by introducing support for dividing large
> folios into multi-pages, typically configured with an order-4 granularity (64KiB
> units with 4KiB base pages). Here are concrete examples (see also the sketch
> further below):
>
> * If a large folio's size is 32KiB (smaller than the 64KiB unit), it will still be
>   compressed and stored at a 4KiB granularity.
> * If a large folio's size is 64KiB, it will be compressed and stored as a single 64KiB
>   block.
> * If a large folio's size is 128KiB, it will be compressed and stored as two 64KiB
>   multi-pages.
>
> Without the patchset, a large folio is always divided into nr_pages 4KiB blocks.
>
> The granularity can be configured using the ZSMALLOC_MULTI_PAGES_ORDER setting.
>
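> Below is a minimal C sketch of the intended size-to-unit mapping, assuming
> 4KiB base pages and the default order-4 (64KiB) multi-page size; the macro
> values and helper names are illustrative, not the ones used in the patches:
>
>   #include <stddef.h>
>
>   #define PAGE_SHIFT          12    /* assuming 4KiB base pages */
>   #define PAGE_SIZE           ((size_t)1 << PAGE_SHIFT)
>   #define MULTI_PAGES_ORDER   4     /* cf. the ZSMALLOC_MULTI_PAGES_ORDER setting */
>   #define MULTI_PAGE_SIZE     (PAGE_SIZE << MULTI_PAGES_ORDER)    /* 64KiB */
>
>   /*
>    * Unit a folio is compressed in: whole 64KiB multi-pages when the folio
>    * is at least one multi-page large, otherwise fall back to 4KiB pages.
>    */
>   static size_t compress_unit(size_t folio_size)
>   {
>           return folio_size >= MULTI_PAGE_SIZE ? MULTI_PAGE_SIZE : PAGE_SIZE;
>   }
>
>   /*
>    * Number of compression calls per folio (folio sizes are powers of two):
>    * 32KiB -> 8 x 4KiB, 64KiB -> 1 x 64KiB, 128KiB -> 2 x 64KiB.
>    */
>   static size_t nr_compress_units(size_t folio_size)
>   {
>           return folio_size / compress_unit(folio_size);
>   }
>
> With this mapping, a 256KiB folio would be handled as four 64KiB units rather
> than 64 separate 4KiB pages.
>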
> [1] https://lore.kernel.org/linux-mm/20240327144537.4165578-1-ryan.roberts@arm.com/
> [2] https://lore.kernel.org/linux-mm/20240304081348.197341-1-21cnbao@gmail.com/
>
> Tangquan Zheng (2):
>   mm: zsmalloc: support objects compressed based on multiple pages
>   zram: support compression at the granularity of multi-pages
>
>  drivers/block/zram/Kconfig    |   9 +
>  drivers/block/zram/zcomp.c    |  23 ++-
>  drivers/block/zram/zcomp.h    |  12 +-
>  drivers/block/zram/zram_drv.c | 372 +++++++++++++++++++++++++++++++---
>  drivers/block/zram/zram_drv.h |  21 ++
>  include/linux/zsmalloc.h      |  10 +-
>  mm/Kconfig                    |  18 ++
>  mm/zsmalloc.c                 | 215 +++++++++++++++-----
>  8 files changed, 586 insertions(+), 94 deletions(-)
>
> --
> 2.34.1
>


Thread overview: 18+ messages
2024-03-27 21:48 Barry Song
2024-03-27 21:48 ` [PATCH RFC 1/2] mm: zsmalloc: support objects compressed based on multiple pages Barry Song
2024-10-21 23:26   ` Barry Song
2024-03-27 21:48 ` [PATCH RFC 2/2] zram: support compression at the granularity of multi-pages Barry Song
2024-04-11  0:40   ` Sergey Senozhatsky
2024-04-11  1:24     ` Barry Song
2024-04-11  1:42   ` Sergey Senozhatsky
2024-04-11  2:03     ` Barry Song
2024-04-11  4:14       ` Sergey Senozhatsky
2024-04-11  7:49         ` Barry Song
2024-04-19  3:41           ` Sergey Senozhatsky
2024-10-21 23:28   ` Barry Song
2024-11-06 16:23     ` Usama Arif
2024-11-07 10:25       ` Barry Song
2024-11-07 10:31         ` Barry Song
2024-11-07 11:49           ` Usama Arif
2024-11-07 20:53             ` Barry Song
2024-03-27 22:01 ` Barry Song [this message]
