From: Barry Song <21cnbao@gmail.com>
To: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Chuanhua Han <chuanhuahan@gmail.com>,
Ryan Roberts <ryan.roberts@arm.com>,
akpm@linux-foundation.org, linux-mm@kvack.org,
chengming.zhou@linux.dev, chrisl@kernel.org, david@redhat.com,
hannes@cmpxchg.org, kasong@tencent.com,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, mhocko@suse.com,
nphamcs@gmail.com, shy828301@gmail.com, steven.price@arm.com,
surenb@google.com, wangkefeng.wang@huawei.com,
willy@infradead.org, xiang@kernel.org, ying.huang@intel.com,
yosryahmed@google.com, yuzhao@google.com,
Chuanhua Han <hanchuanhua@oppo.com>,
Barry Song <v-songbaohua@oppo.com>
Subject: Re: [RFC PATCH v3 5/5] mm: support large folios swapin as a whole
Date: Wed, 12 Jun 2024 10:13:06 +1200
Message-ID: <CAGsJ_4xVerq0bukCeZgXmjn2uBUviBEBjY6AWM4wm1M4D2N0og@mail.gmail.com>
In-Reply-To: <ly745k53gpkef6ktaoilbib4bzrwyuobli7adlylk5yf24ddhk@l4x2swggwm3f>

On Wed, Jun 12, 2024 at 5:24 AM Shakeel Butt <shakeel.butt@linux.dev> wrote:
>
> On Tue, Jun 11, 2024 at 12:23:41PM GMT, Barry Song wrote:
> > On Tue, Jun 11, 2024 at 8:43 AM Shakeel Butt <shakeel.butt@linux.dev> wrote:
> > >
> > > On Thu, Mar 14, 2024 at 08:56:17PM GMT, Chuanhua Han wrote:
> > > [...]
> > > > >
> > > > > So in the common case, swap-in will pull in the same size of folio as was
> > > > > swapped-out. Is that definitely the right policy for all folio sizes? Certainly
> > > > > it makes sense for "small" large folios (e.g. up to 64K IMHO). But I'm not sure
> > > > > it makes sense for 2M THP; As the size increases the chances of actually needing
> > > > > all of the folio reduces so chances are we are wasting IO. There are similar
> > > > > arguments for CoW, where we currently copy 1 page per fault - it probably makes
> > > > > sense to copy the whole folio up to a certain size.
> > > > For 2M THP, the IO overhead may not necessarily be large? :)
> > > > 1. If a 2M THP is stored contiguously on the swap device, the IO
> > > > overhead may not be very large (e.g., submitting a bio with one
> > > > bio_vec at a time; see the sketch after this quote).
> > > > 2. If the process really needs this 2M of data, one page fault may
> > > > perform much better than multiple faults.
> > > > 3. For swap devices like zram, using 2M THP might also improve
> > > > decompression efficiency.
> > > >
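
To make point 1 above concrete: below is a minimal sketch, assuming the
folio's swap slots are contiguous on the backing device. It is
illustrative only, not code from this series; the helper name is
hypothetical, while bio_alloc(), bio_add_folio_nofail() and
submit_bio() are existing block-layer APIs. One bio with a single
bio_vec describes the whole folio, instead of 512 separate 4KB bios for
a 2MB THP:

	/* Hypothetical helper: write one large folio as a single bio. */
	static void swap_write_folio_sketch(struct folio *folio,
					    struct block_device *bdev,
					    sector_t sector)
	{
		/* one bio_vec is enough to cover the whole folio */
		struct bio *bio = bio_alloc(bdev, 1, REQ_OP_WRITE, GFP_NOIO);

		bio->bi_iter.bi_sector = sector;
		bio_add_folio_nofail(bio, folio, folio_size(folio), 0);
		submit_bio(bio);
	}

The read side is the same with REQ_OP_READ, which is where the
single-fault swap-in benefit of point 2 comes from.
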
> > >
> > > Sorry for late response, do we have any performance data backing the
> > > above claims particularly for zswap/swap-on-zram cases?
> >
> > No need to say sorry. You are always welcome to give comments.
> >
> > This, combined with the zram modification, not only improves the
> > compression ratio but also reduces CPU time significantly. You may
> > find some data here [1].
> >
> > granularity    orig_data_size    compr_data_size    time(us)
> > 4KiB-zstd      1048576000        246876055          50259962
> > 64KiB-zstd     1048576000        199763892          18330605
> >
> > On mobile devices, we tested swap-in performance by running 100
> > iterations of swapping in 100MB of data, and the results were as
> > follows; the swap-in speed increased by about 45%.
> >
> > time consumption of swapin (ms)
> > lz4    4k     45274
> > lz4    64k    22942
> >
> > zstdn  4k     85035
> > zstdn  64k    46558
>
> Thanks for the response. Above numbers are actually very fascinating and
> counter intuitive (at least to me). Do you also have numbers for 2MiB
> THP? I am assuming 64k is the right balance between too small or too
> large. Did you experiment on server machines as well?
I don't have data for 2MiB, and regrettably I lack a server machine
for testing. However, I believe this kind of higher compression ratio
and lower CPU consumption generally holds true for generic anonymous
memory.

64KB is the right balance. But nothing stops THP from using 64KB for
swap-in, compression, and decompression. As you can see from the
zram/zsmalloc series, we actually have a configuration option,
CONFIG_ZSMALLOC_MULTI_PAGES_ORDER, whose default value is 4. That
means a 2MB THP can be compressed/decompressed as 32 * 64KB chunks.
If we use 64KB as the swap-in granularity, we still keep that balance
and all its benefits even when 2MB would be too large a swap-in
granularity and might waste memory.
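
To spell out the arithmetic (an illustrative snippet, not code from
the series; PAGE_SIZE and HPAGE_PMD_SIZE are standard kernel macros,
and CONFIG_ZSMALLOC_MULTI_PAGES_ORDER is the option referenced above):

	unsigned long unit = PAGE_SIZE << CONFIG_ZSMALLOC_MULTI_PAGES_ORDER;
						/* 4KB << 4 = 64KB */
	unsigned long nr_units = HPAGE_PMD_SIZE / unit;
						/* 2MB / 64KB = 32 */

So a 2MB THP is handled as 32 independent 64KB compression units.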
>
> >
> > [1] https://lore.kernel.org/linux-mm/20240327214816.31191-1-21cnbao@gmail.com/
> >
Thanks
Barry