From: Shakeel Butt <shakeel.butt@linux.dev>
To: Chuanhua Han <chuanhuahan@gmail.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>,
Barry Song <21cnbao@gmail.com>,
akpm@linux-foundation.org, linux-mm@kvack.org,
chengming.zhou@linux.dev, chrisl@kernel.org, david@redhat.com,
hannes@cmpxchg.org, kasong@tencent.com,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, mhocko@suse.com, nphamcs@gmail.com,
shy828301@gmail.com, steven.price@arm.com, surenb@google.com,
wangkefeng.wang@huawei.com, willy@infradead.org,
xiang@kernel.org, ying.huang@intel.com, yosryahmed@google.com,
yuzhao@google.com, Chuanhua Han <hanchuanhua@oppo.com>,
Barry Song <v-songbaohua@oppo.com>
Subject: Re: [RFC PATCH v3 5/5] mm: support large folios swapin as a whole
Date: Mon, 10 Jun 2024 13:43:10 -0700
Message-ID: <emvsj7wfy24dzr6uxyac2qotp7nsdi7hnesihaldkvgo3mfzrf@u7fafr7mc3e7>
In-Reply-To: <CANzGp4+p3xSo9uX2i7K2bSZ3VKEQQChAVzdmBD3O2qXq_cE2yA@mail.gmail.com>
On Thu, Mar 14, 2024 at 08:56:17PM GMT, Chuanhua Han wrote:
[...]
> >
> > So in the common case, swap-in will pull in the same size of folio as was
> > swapped out. Is that definitely the right policy for all folio sizes? Certainly
> > it makes sense for "small" large folios (e.g. up to 64K IMHO). But I'm not sure
> > it makes sense for 2M THP; as the size increases, the chances of actually
> > needing all of the folio reduce, so chances are we are wasting IO. There are
> > similar arguments for CoW, where we currently copy 1 page per fault - it
> > probably makes sense to copy the whole folio up to a certain size.
> For 2M THP, the IO overhead may not necessarily be large. :)
> 1. If the 2M THP is stored contiguously in the swap device, the IO
> overhead may not be very large (e.g. submitting a bio with a single
> bio_vec at a time).
> 2. If the process really needs this 2M of data, a single page fault may
> perform much better than multiple smaller faults.
> 3. For swap devices like zram, using 2M THP might also improve
> decompression efficiency.
>
Sorry for the late response. Do we have any performance data backing the
above claims, particularly for the zswap/swap-on-zram cases?
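
On point 1, for what it's worth, my reading is that a physically
contiguous large folio lets the swap-in path issue a single read bio
whose one bio_vec covers the whole folio, roughly like the sketch below.
This is illustrative only, loosely modelled on the style of
mm/page_io.c; swap_folio_sector() and end_swap_bio_read() stand in for
whatever the real submission path uses, and error handling is omitted:

  static void swap_read_folio_one_bio(struct folio *folio,
                                      struct swap_info_struct *sis)
  {
          struct bio *bio;

          /* nr_vecs == 1: a single bio_vec spans the whole 2M folio */
          bio = bio_alloc(sis->bdev, 1, REQ_OP_READ | REQ_SWAP, GFP_KERNEL);
          bio->bi_iter.bi_sector = swap_folio_sector(folio);
          bio->bi_end_io = end_swap_bio_read; /* unlocks the folio when done */
          bio_add_folio_nofail(bio, folio, folio_size(folio), 0);
          submit_bio(bio);
  }

The point being that one request per folio, rather than one per page, is
where the IO saving would come from; whether that actually holds for 2M
on real devices is exactly the data I'd like to see.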
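
And on point 3, the kind of data I am hoping for is e.g. fault-in
latency of a 2M region from zram-backed swap, with and without this
series. Something along the lines of the hypothetical userspace sketch
below (assumes swap is already on zram, THP is enabled, and
MADV_PAGEOUT is available, i.e. Linux 5.4+):

  /* gcc -O2 swapin_bench.c -o swapin_bench */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <time.h>

  #define SZ_2M (2UL << 20)

  static double now_sec(void)
  {
          struct timespec ts;
          clock_gettime(CLOCK_MONOTONIC, &ts);
          return ts.tv_sec + ts.tv_nsec / 1e9;
  }

  int main(void)
  {
          /* 2M-aligned so the fault path can use a PMD-sized THP */
          char *buf = aligned_alloc(SZ_2M, SZ_2M);
          double t0, t1;

          if (!buf)
                  return 1;
          madvise(buf, SZ_2M, MADV_HUGEPAGE);
          memset(buf, 0xa5, SZ_2M);                /* populate the region */

          /* force the range out to (zram-backed) swap */
          if (madvise(buf, SZ_2M, MADV_PAGEOUT)) {
                  perror("MADV_PAGEOUT");
                  return 1;
          }

          t0 = now_sec();
          for (size_t i = 0; i < SZ_2M; i += 4096) /* fault everything back */
                  (void)*(volatile char *)(buf + i);
          t1 = now_sec();

          printf("swap-in of 2M took %.3f ms\n", (t1 - t0) * 1e3);
          return 0;
  }

Comparing that number with large folio swap-in enabled against a
4K-only baseline (plus /proc/vmstat pswpin deltas) would make the
decompression-efficiency claim easy to evaluate.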