From: Barry Song <21cnbao@gmail.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, axboe@kernel.dk,
bala.seshasayee@linux.intel.com, chrisl@kernel.org,
david@redhat.com, hannes@cmpxchg.org,
kanchana.p.sridhar@intel.com, kasong@tencent.com,
linux-block@vger.kernel.org, minchan@kernel.org,
nphamcs@gmail.com, senozhatsky@chromium.org, surenb@google.com,
terrelln@fb.com, v-songbaohua@oppo.com,
wajdi.k.feghali@intel.com, willy@infradead.org,
yosryahmed@google.com, yuzhao@google.com,
zhengtangquan@oppo.com, zhouchengming@bytedance.com,
usamaarif642@gmail.com, ryan.roberts@arm.com
Subject: Re: [PATCH RFC v2 0/2] mTHP-friendly compression in zsmalloc and zram based on multi-pages
Date: Tue, 12 Nov 2024 14:25:08 +1300 [thread overview]
Message-ID: <CAGsJ_4xVB4EKR9pitLa3aUmxiHUr8VNSS=QQ12_peO7PGvTZfg@mail.gmail.com> (raw)
In-Reply-To: <87pln1z2da.fsf@yhuang6-desk2.ccr.corp.intel.com>
On Tue, Nov 12, 2024 at 2:11 PM Huang, Ying <ying.huang@intel.com> wrote:
>
> Barry Song <21cnbao@gmail.com> writes:
>
> > On Fri, Nov 8, 2024 at 6:23 PM Huang, Ying <ying.huang@intel.com> wrote:
> >>
> >> Hi, Barry,
> >>
> >> Barry Song <21cnbao@gmail.com> writes:
> >>
> >> > From: Barry Song <v-songbaohua@oppo.com>
> >> >
> >> > When large folios are compressed at a larger granularity, we observe
> >> > a notable reduction in CPU usage and a significant improvement in
> >> > compression ratios.
> >> >
> >> > mTHP's ability to be swapped out without splitting and swapped back in
> >> > as a whole allows compression and decompression at larger granularities.
> >> >
> >> > This patchset enhances zsmalloc and zram by adding support for dividing
> >> > large folios into multi-page blocks, typically configured with order-2
> >> > granularity. Without this patchset, a large folio is always divided
> >> > into `nr_pages` 4KiB blocks.
> >> >
> >> > The granularity can be set using the `ZSMALLOC_MULTI_PAGES_ORDER`
> >> > setting, where the default of 2 (16KiB blocks) allows all anonymous
> >> > mTHP sizes to benefit.
> >> >
> >> > Examples include:
> >> > * A 16KiB large folio will be compressed and stored as a single 16KiB
> >> > block.
> >> > * A 64KiB large folio will be compressed and stored as four 16KiB
> >> > blocks.
> >> >
> >> > For example, swapping out and swapping in 100MiB of typical anonymous
> >> > data 100 times (with 16KiB mTHP enabled) using zstd yields the
> >> > following results:
> >> >
> >> >                        w/o patches    w/ patches
> >> > swap-out time (ms)     68711          49908
> >> > swap-in time (ms)      30687          20685
> >> > compression ratio      20.49%         16.9%
> >>
> >> The data looks good. Thanks!
> >>
> >> Have you considered the situation where the large folio fails to be
> >> allocated during swap-in? It's possible because memory may be very
> >> fragmented.
> >
> > That's correct, good question. On phones, we use a large folio pool to maintain
> > a relatively high allocation success rate. When mTHP allocation fails, we have
> > a workaround to allocate nr_pages small folios and map them together to
> > avoid partial reads. This ensures that the benefits of larger block compression
> > and decompression are consistently maintained. That was the code running
> > on production phones.
> >
> > We also previously experimented with maintaining multiple buffers for
> > decompressed large blocks in zRAM, allowing upcoming do_swap_page()
> > calls to use them when falling back to small folios. In this setup,
> > the buffers achieved a high hit rate, though I don't recall the exact
> > number.
> >
> > I'm concerned that this fault-around-like fallback to nr_pages small
> > folios may not gain traction upstream. Do you have any suggestions
> > for improvement?
>
> It appears that we still don't have a way to guarantee a 100% mTHP
> allocation success rate. If so, we need a fallback solution for that.
>
> Another possible solution is:
>
> 1) If allocating an mTHP of nr_pages fails, allocate nr_pages normal
> (4K) folios instead.
>
> 2) Revise the decompression interface to accept a set of folios
> (instead of one folio) as the target. Then, we can decompress to the
> normal folios allocated in 1).
>
> 3) In do_swap_page(), we can either map all folios or just the
> faulting folio. We can put the non-fault folios into the swap cache
> if necessary.
>
> Does this work?
This is exactly what we did on production phones:

[1] https://github.com/OnePlusOSS/android_kernel_oneplus_sm8650/blob/oneplus/sm8650_u_14.0.0_oneplus12/mm/memory.c#L4656
[2] https://github.com/OnePlusOSS/android_kernel_oneplus_sm8650/blob/oneplus/sm8650_u_14.0.0_oneplus12/mm/memory.c#L5439

I feel that we don't need to fall back to nr_pages small folios (though
that's what we used on phones); a dedicated set of four small folios
should be sufficient, since zsmalloc is then still handling compression
and decompression at 16KiB granularity. However, we do not add them to
the swapcache; instead, they are mapped immediately.
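
To make that concrete, here is a rough compile-and-run userspace sketch
of the fallback flow (plain C rather than kernel code; alloc_mthp(),
alloc_small_folio() and decompress_block_to() are hypothetical stubs
standing in for the real MM and zram calls, and error handling/freeing
is omitted for brevity):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE         4096
#define MULTI_PAGES_ORDER 2                 /* one block = 4 pages = 16KiB */
#define MULTI_PAGES_NR    (1 << MULTI_PAGES_ORDER)

/* Hypothetical stubs standing in for the real MM/zram calls. */
static void *alloc_mthp(int order)
{
        return NULL;                        /* simulate fragmentation failure */
}

static void *alloc_small_folio(void)
{
        return malloc(PAGE_SIZE);
}

static bool decompress_block_to(void **pages, int nr)
{
        return true;                        /* real code would decompress here */
}

/*
 * Swap in one 16KiB compressed block: try an order-2 mTHP first; on
 * failure, decompress into four order-0 pages and map them right away
 * instead of parking them in the swapcache.
 */
int main(void)
{
        void *pages[MULTI_PAGES_NR];
        void *mthp = alloc_mthp(MULTI_PAGES_ORDER);

        if (mthp) {
                for (int i = 0; i < MULTI_PAGES_NR; i++)
                        pages[i] = (char *)mthp + i * PAGE_SIZE;
        } else {
                for (int i = 0; i < MULTI_PAGES_NR; i++)
                        pages[i] = alloc_small_folio();
        }

        if (decompress_block_to(pages, MULTI_PAGES_NR))
                printf("decompressed one %d-page block\n", MULTI_PAGES_NR);
        return 0;
}

The key piece is your step 2): the decompression target becomes a set
of four pages, regardless of whether they came from one order-2 folio
or four order-0 ones.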
>
> >>
> >> > -v2:
> >> > While it is not mature yet, I know some people are waiting for
> >> > an update :-)
> >> > * Fixed some stability issues.
> >> > * Rebased against the latest mm-unstable.
> >> > * Set the default order to 2, which benefits all anon mTHP.
> >> > * Multi-page ZsPageMovable is not supported yet.
> >> >
> >> > Tangquan Zheng (2):
> >> > mm: zsmalloc: support objects compressed based on multiple pages
> >> > zram: support compression at the granularity of multi-pages
> >> >
> >> >  drivers/block/zram/Kconfig    |   9 +
> >> >  drivers/block/zram/zcomp.c    |  17 +-
> >> >  drivers/block/zram/zcomp.h    |  12 +-
> >> >  drivers/block/zram/zram_drv.c | 450 +++++++++++++++++++++++++++++++---
> >> >  drivers/block/zram/zram_drv.h |  45 ++++
> >> >  include/linux/zsmalloc.h      |  10 +-
> >> >  mm/Kconfig                    |  18 ++
> >> >  mm/zsmalloc.c                 | 232 +++++++++++++-----
> >> >  8 files changed, 699 insertions(+), 94 deletions(-)
> >>
>
> --
> Best Regards,
> Huang, Ying
Thanks
barry