From: Wei Yang <richard.weiyang@gmail.com>
To: akpm@linux-foundation.org, david@kernel.org, ljs@kernel.org,
ziy@nvidia.com, baolin.wang@linux.alibaba.com,
Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com,
dev.jain@arm.com, baohua@kernel.org, lance.yang@linux.dev,
riel@surriel.com, vbabka@kernel.org, harry@kernel.org,
jannh@google.com, rppt@kernel.org, surenb@google.com,
mhocko@suse.com, shuah@kernel.org
Cc: linux-mm@kvack.org, Wei Yang <richard.weiyang@gmail.com>,
Gavin Guo <gavinguo@igalia.com>
Subject: [PATCH 2/2] mm/selftests: add split_shared_pmd()
Date: Wed, 15 Apr 2026 01:08:39 +0000
Message-ID: <20260415010839.20124-3-richard.weiyang@gmail.com>
In-Reply-To: <20260415010839.20124-1-richard.weiyang@gmail.com>

Commit 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and
split_huge_pmd_locked()") introduced a bug that made try_to_migrate()
fail early by returning false unconditionally after
split_huge_pmd_locked() when the large PMD is shared by multiple
processes.

This was fixed by commit 939080834fef ("mm/huge_memory: fix
early failure try_to_migrate() when split huge pmd for shared THP").

Let's add a selftest to make sure this does not get broken again.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Gavin Guo <gavinguo@igalia.com>
Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Lance Yang <lance.yang@linux.dev>
---
.../selftests/mm/split_huge_page_test.c | 73 ++++++++++++++++++-
1 file changed, 72 insertions(+), 1 deletion(-)
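
(Not part of the patch: the test drives the THP split machinery through the
/sys/kernel/debug/split_huge_pages debugfs file, writing
"<pid>,<vaddr_start>,<vaddr_end>,<new_order>" in the same spirit as the
write_debugfs(PID_FMT, ...) call below.  For reviewers who want to try that
interface by hand, here is a minimal stand-alone sketch of the same kind of
write.  It is illustrative only: it hard-codes a 2MiB PMD size instead of
reading the real value like the selftest does, assumes debugfs is mounted at
/sys/kernel/debug and that it runs as root, and does almost no error
handling.)

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/mman.h>

	#define PMD_SIZE (2UL << 20)	/* assumed; the selftest reads it at runtime */

	int main(void)
	{
		char *buf;
		FILE *f;

		/* PMD-aligned anonymous buffer that THP can back with a huge page. */
		buf = aligned_alloc(PMD_SIZE, PMD_SIZE);
		if (!buf)
			return 1;
		madvise(buf, PMD_SIZE, MADV_HUGEPAGE);
		memset(buf, 1, PMD_SIZE);	/* fault the range in */

		/* Ask the kernel to split the huge pages mapped in this range. */
		f = fopen("/sys/kernel/debug/split_huge_pages", "w");
		if (!f)
			return 1;
		fprintf(f, "%d,0x%lx,0x%lx,%d", getpid(),
			(unsigned long)buf, (unsigned long)buf + PMD_SIZE, 0);
		fclose(f);
		return 0;
	}
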
diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c
index 500d07c4938b..9d1de67f9929 100644
--- a/tools/testing/selftests/mm/split_huge_page_test.c
+++ b/tools/testing/selftests/mm/split_huge_page_test.c
@@ -16,6 +16,7 @@
 #include <sys/mman.h>
 #include <sys/mount.h>
 #include <sys/param.h>
+#include <sys/wait.h>
 #include <malloc.h>
 #include <stdbool.h>
 #include <time.h>
@@ -332,6 +333,74 @@ static void split_pmd_zero_pages(void)
 	free(one_page);
 }
 
+static void split_shared_pmd(void)
+{
+	char *one_page;
+	int nr_pmds = 1;
+	size_t len = nr_pmds * pmd_pagesize;
+	size_t i;
+	pid_t pid;
+	int status;
+	int ret = 0, level = 0;
+
+	one_page = memalign(pmd_pagesize, len);
+	if (!one_page)
+		ksft_exit_fail_msg("Fail to allocate memory: %s\n", strerror(errno));
+
+	madvise(one_page, len, MADV_HUGEPAGE);
+
+	for (i = 0; i < len; i++)
+		one_page[i] = (char)i;
+
+	if (!check_huge_anon(one_page, nr_pmds, pmd_pagesize))
+		ksft_exit_fail_msg("No THP is allocated\n");
+
+	for (;;) {
+		pid = fork();
+
+		if (pid < 0) {
+			perror("Error: fork\n");
+			exit(KSFT_SKIP);
+		}
+
+		if (pid != 0)
+			break;
+
+		/*
+		 * The current /sys/kernel/debug/split_huge_pages interface
+		 * calls folio_split() for each page in the range, so create
+		 * one more mapping of the PMD at each level; otherwise the
+		 * split would still succeed after pmd_pagesize / pagesize trials.
+		 */
+		if (++level == (pmd_pagesize / pagesize)) {
+			/* split THPs */
+			write_debugfs(PID_FMT, getpid(), (uint64_t)one_page,
+				      (uint64_t)one_page + len, 0);
+
+			memset(expected_orders, 0, sizeof(int) * (pmd_order + 1));
+			expected_orders[0] = nr_pmds << pmd_order;
+
+			if (check_after_split_folio_orders(one_page, len, pagemap_fd,
+							   kpageflags_fd, expected_orders,
+							   (pmd_order + 1)))
+				exit(KSFT_FAIL);
+
+			exit(KSFT_PASS);
+		}
+	}
+
+	wait(&status);
+	free(one_page);
+
+	if (WIFEXITED(status))
+		ret = WEXITSTATUS(status);
+
+	if (level != 0)
+		exit(ret);
+
+	ksft_test_result_report(ret, "Split shared pmd\n");
+}
+
 static void split_pmd_thp_to_order(int order)
 {
 	char *one_page;
@@ -777,7 +846,7 @@ int main(int argc, char **argv)
 	if (!expected_orders)
 		ksft_exit_fail_msg("Fail to allocate memory: %s\n", strerror(errno));
 
-	tests = 2 + (pmd_order - 1) + (2 * pmd_order) + (pmd_order - 1) * 4 + 2;
+	tests = 3 + (pmd_order - 1) + (2 * pmd_order) + (pmd_order - 1) * 4 + 2;
 	ksft_set_plan(tests);
 
 	pagemap_fd = open(pagemap_proc, O_RDONLY);
@@ -792,6 +861,8 @@ int main(int argc, char **argv)
 	split_pmd_zero_pages();
 
+	split_shared_pmd();
+
 	for (i = 0; i < pmd_order; i++)
 		if (i != 1)
 			split_pmd_thp_to_order(i);
--
2.34.1