From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: shy828301@gmail.com, ying.huang@intel.com,
baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: [PATCH] mm: huge_memory: batch tlb flush when splitting a pte-mapped THP
Date: Mon, 30 Oct 2023 09:11:47 +0800
Message-ID: <431d9fb6823036369dcb1d3b2f63732f01df21a7.1698488264.git.baolin.wang@linux.alibaba.com>
I can observe an obvious tlb flush hotspot when splitting a pte-mapped THP on
my ARM64 server, and the distribution of this hotspot is as follows:
   - 16.85% split_huge_page_to_list
      + 7.80% down_write
      - 7.49% try_to_migrate
         - 7.48% rmap_walk_anon
              7.23% ptep_clear_flush
      + 1.52% __split_huge_page
The reason is that split_huge_page_to_list() builds migration entries for
each subpage of a pte-mapped anonymous THP via try_to_migrate(), or unmaps
the subpages of a file-backed THP, and it clears the pte and flushes the TLB
for each subpage individually. Moreover, split_huge_page_to_list() sets the
TTU_SPLIT_HUGE_PMD flag to ensure the THP is already pte-mapped before it is
split into normal pages.

Actually, there is no need to flush the TLB for each subpage immediately;
instead we can batch the TLB flush for the whole pte-mapped THP to improve
the performance.
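As a rough sketch of how the existing flush-batching machinery is used here
(simplified from the try_to_unmap_one()/try_to_migrate_one() paths in
mm/rmap.c; helper signatures differ slightly between kernel versions, so
treat this as illustrative only):

	/*
	 * Illustrative sketch, not the exact mm/rmap.c code: with
	 * TTU_BATCH_FLUSH set (and the architecture supporting batched
	 * unmap TLB flushing), the pte is cleared without an immediate
	 * flush and the flush is only recorded as pending.
	 */
	if (should_defer_flush(mm, flags)) {
		pteval = ptep_get_and_clear(mm, address, pvmw.pte);
		set_tlb_ubc_flush_pending(mm, pteval, address);
	} else {
		/* Per-pte flush: the ptep_clear_flush() hotspot above. */
		pteval = ptep_clear_flush(vma, address, pvmw.pte);
	}

	/*
	 * Later, the caller (unmap_folio() with this patch) drains all
	 * pending flushes in a single operation:
	 */
	try_to_unmap_flush();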
After this patch, batching the TLB flush noticeably improves the fault
latency when running thpscale.
                              k6.5-base               patched
Amean     fault-both-1       1071.17 (   0.00%)      901.83 *  15.81%*
Amean     fault-both-3       2386.08 (   0.00%)     1865.32 *  21.82%*
Amean     fault-both-5       2851.10 (   0.00%)     2273.84 *  20.25%*
Amean     fault-both-7       3679.91 (   0.00%)     2881.66 *  21.69%*
Amean     fault-both-12      5916.66 (   0.00%)     4369.55 *  26.15%*
Amean     fault-both-18      7981.36 (   0.00%)     6303.57 *  21.02%*
Amean     fault-both-24     10950.79 (   0.00%)     8752.56 *  20.07%*
Amean     fault-both-30     14077.35 (   0.00%)    10170.01 *  27.76%*
Amean     fault-both-32     13061.57 (   0.00%)    11630.08 *  10.96%*
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/huge_memory.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f31f02472396..0e4c14bf6872 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2379,7 +2379,7 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
 static void unmap_folio(struct folio *folio)
 {
 	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
-		TTU_SYNC;
+		TTU_SYNC | TTU_BATCH_FLUSH;
 
 	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
 
@@ -2392,6 +2392,8 @@ static void unmap_folio(struct folio *folio)
 		try_to_migrate(folio, ttu_flags);
 	else
 		try_to_unmap(folio, ttu_flags | TTU_IGNORE_MLOCK);
+
+	try_to_unmap_flush();
 }
 
 static void remap_page(struct folio *folio, unsigned long nr)
--
2.39.3