From: Huang Ying <ying.huang@intel.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
"Huang, Ying" <ying.huang@intel.com>, Zi Yan <ziy@nvidia.com>,
Yang Shi <shy828301@gmail.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Oscar Salvador <osalvador@suse.de>,
Matthew Wilcox <willy@infradead.org>
Subject: [RFC 0/6] migrate_pages(): batch TLB flushing
Date: Wed, 21 Sep 2022 14:06:10 +0800
Message-ID: <20220921060616.73086-1-ying.huang@intel.com>
From: "Huang, Ying" <ying.huang@intel.com>
Currently, migrate_pages() migrates pages one by one, roughly as in the
following pseudo-code:
  for each page
    unmap
    flush TLB
    copy
    restore map
If multiple pages are passed to migrate_pages(), there are
opportunities to batch the TLB flushing and copying. That is, we can
change the code to something like the following:
  for each page
    unmap
  for each page
    flush TLB
  for each page
    copy
  for each page
    restore map
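To make the intended structure concrete, below is a minimal C sketch of
the batched flow. All helpers (unmap_one(), flush_tlb_once_for_batch(),
copy_one(), remap_one()) and migrate_pages_batched() itself are
illustrative names only, not the real kernel interfaces, and real code
needs error handling, retries, huge page handling, etc.

  /*
   * Illustrative sketch only: batch the unmap/flush/copy/remap phases
   * over a list of pages so that one TLB flush covers the whole batch.
   */
  static int migrate_pages_batched(struct list_head *pages)
  {
          struct page *page;

          list_for_each_entry(page, pages, lru)
                  unmap_one(page);        /* unmap, but defer the TLB flush */

          flush_tlb_once_for_batch();     /* one flush for the whole batch */

          list_for_each_entry(page, pages, lru)
                  copy_one(page);         /* copy old page to new page */

          list_for_each_entry(page, pages, lru)
                  remap_one(page);        /* restore the mappings */

          return 0;
  }

The point is that the expensive TLB flush (and its IPIs) is issued once
per batch instead of once per page.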
The total number of TLB flush IPIs can be reduced considerably. And a
hardware accelerator such as DSA may be used to accelerate the page
copying.
So in this patchset, we refactor the migrate_pages() implementation and
implement batched TLB flushing. Based on this, hardware-accelerated
page copying can be implemented.
If too many pages are passed to migrate_pages(), a naive batched
implementation may unmap too many pages at the same time. That
increases the chance that a task has to wait for the migrated pages to
be mapped again, so latency may suffer. To deal with this issue, the
maximum number of pages unmapped in one batch is restricted to
HPAGE_PMD_NR. That is, the impact is at the same level as that of THP
migration.
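As an illustration of that restriction, the batch could be built up
roughly as in the sketch below; migrate_pages_in_chunks() and the reuse
of the migrate_pages_batched() sketch above are hypothetical, only the
HPAGE_PMD_NR limit comes from the patchset description.

  /*
   * Illustrative sketch only: never unmap more than HPAGE_PMD_NR pages
   * at a time, i.e. at most one THP's worth of base pages per batch.
   */
  static int migrate_pages_in_chunks(struct list_head *from,
                                     struct list_head *done)
  {
          LIST_HEAD(batch);
          struct page *page, *next;
          int nr = 0, err = 0;

          list_for_each_entry_safe(page, next, from, lru) {
                  list_move_tail(&page->lru, &batch);
                  if (++nr < HPAGE_PMD_NR)
                          continue;
                  err |= migrate_pages_batched(&batch);   /* see sketch above */
                  list_splice_tail_init(&batch, done);
                  nr = 0;
          }
          if (nr) {
                  err |= migrate_pages_batched(&batch);
                  list_splice_tail_init(&batch, done);
          }
          return err;
  }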
We use the following test to measure the performance impact of the
patchset. On a 2-socket Intel server:
- Run the pmbench memory accessing benchmark.
- Run `migratepages` to migrate pages of pmbench between node 0 and
  node 1 back and forth.
With the patchset, the number of TLB flush IPIs is reduced by 99.1%
during the test, and the number of pages migrated successfully per
second increases by 291.7%.
This patchset is based on v6.0-rc5 and the following patchset:
[PATCH -V3 0/8] migrate_pages(): fix several bugs in error path
https://lore.kernel.org/lkml/20220817081408.513338-1-ying.huang@intel.com/
The migrate_pages() related code is being converted to folios now, so
this patchset cannot be applied to the recent akpm/mm-unstable branch.
This patchset is meant to check the basic idea. If it is OK, I will
rebase the patchset on top of the folio changes.
Best Regards,
Huang, Ying
Thread overview:
2022-09-21 6:06 Huang Ying [this message]
2022-09-21 6:06 ` [RFC 1/6] mm/migrate_pages: separate huge page and normal pages migration Huang Ying
2022-09-21 15:55 ` Zi Yan
2022-09-22 1:14 ` Huang, Ying
2022-09-22 6:03 ` Baolin Wang
2022-09-22 6:22 ` Huang, Ying
2022-09-21 6:06 ` [RFC 2/6] mm/migrate_pages: split unmap_and_move() to _unmap() and _move() Huang Ying
2022-09-21 16:08 ` Zi Yan
2022-09-22 1:15 ` Huang, Ying
2022-09-22 6:36 ` Baolin Wang
2022-09-26 9:28 ` Alistair Popple
2022-09-26 18:06 ` Yang Shi
2022-09-27 0:02 ` Alistair Popple
2022-09-27 1:51 ` Huang, Ying
2022-09-27 20:34 ` John Hubbard
2022-09-27 20:57 ` Yang Shi
2022-09-28 0:59 ` Alistair Popple
2022-09-28 1:41 ` Huang, Ying
2022-09-28 1:44 ` John Hubbard
2022-09-28 1:49 ` Yang Shi
2022-09-28 1:56 ` John Hubbard
2022-09-28 2:14 ` Yang Shi
2022-09-28 2:57 ` John Hubbard
2022-09-28 3:25 ` Yang Shi
2022-09-28 3:39 ` Yang Shi
2022-09-27 20:56 ` Yang Shi
2022-09-27 20:54 ` Yang Shi
2022-09-21 6:06 ` [RFC 3/6] mm/migrate_pages: restrict number of pages to migrate in batch Huang Ying
2022-09-21 16:10 ` Zi Yan
2022-09-21 16:15 ` Zi Yan
2022-09-22 1:15 ` Huang, Ying
2022-09-21 6:06 ` [RFC 4/6] mm/migrate_pages: batch _unmap and _move Huang Ying
2022-09-21 6:06 ` [RFC 5/6] mm/migrate_pages: share more code between " Huang Ying
2022-09-21 6:06 ` [RFC 6/6] mm/migrate_pages: batch flushing TLB Huang Ying
2022-09-21 15:47 ` [RFC 0/6] migrate_pages(): batch TLB flushing Zi Yan
2022-09-22 1:45 ` Huang, Ying
2022-09-22 3:47 ` haoxin
2022-09-22 4:36 ` Huang, Ying
2022-09-22 12:50 ` Bharata B Rao
2022-09-23 7:52 ` Huang, Ying
2022-09-27 10:46 ` Bharata B Rao
2022-09-28 1:46 ` Huang, Ying
2022-09-26 9:11 ` Alistair Popple
2022-09-27 11:21 ` haoxin
2022-09-28 2:01 ` Huang, Ying
2022-09-28 3:33 ` haoxin
2022-09-28 4:53 ` Huang, Ying
2022-11-01 14:49 ` Hesham Almatary
2022-11-02 3:14 ` Huang, Ying
2022-11-02 14:13 ` Hesham Almatary