From: Zi Yan <ziy@nvidia.com>
To: linux-mm@kvack.org,
"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>,
Hugh Dickins <hughd@google.com>,
David Hildenbrand <david@redhat.com>,
Yang Shi <yang@os.amperecomputing.com>,
Miaohe Lin <linmiaohe@huawei.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Yu Zhao <yuzhao@google.com>, John Hubbard <jhubbard@nvidia.com>,
linux-kernel@vger.kernel.org, Zi Yan <ziy@nvidia.com>
Subject: [PATCH v4 10/10] selftests/mm: add tests for folio_split(), buddy-allocator-like split.
Date: Mon, 6 Jan 2025 11:55:13 -0500
Message-ID: <20250106165513.104899-11-ziy@nvidia.com>
In-Reply-To: <20250106165513.104899-1-ziy@nvidia.com>
Add tests that split page cache folios to each order from 0 to 8 at
different in-folio offsets, using the folio_split() debugfs interface
added earlier in the series. For each order, the offset advances in
steps of max(pmd_pagesize / pagesize / 4, 1 << order) base pages, so
orders 0 through 7 are exercised at four offsets each and order 8 at
two.
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
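To make the new coverage concrete, here is a minimal stand-alone sketch
(illustration only, not part of the patch) that enumerates the offsets
the new loop walks, assuming a 2MB PMD folio with 4KB base pages so
that pmd_pagesize / pagesize == 512:

#include <stdio.h>
#include <sys/param.h>	/* MAX(), as used by the test itself */

int main(void)
{
	/* Assumption: 2MB PMD folio, 4KB base pages. */
	const int nr_pages = 512;
	int order, offset;

	for (order = 0; order < 9; order++)
		for (offset = 0; offset < nr_pages;
		     offset += MAX(nr_pages / 4, 1 << order))
			printf("order %d: split at offset %d\n", order, offset);
	return 0;
}

Orders 0 through 7 each get four offsets (0, 128, 256 and 384, since
1 << order <= 128 there) and order 8 gets two (0 and 256), matching the
8*4+2 term added to the ksft plan below.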
.../selftests/mm/split_huge_page_test.c | 29 ++++++++++++++-----
1 file changed, 22 insertions(+), 7 deletions(-)
diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c
index 5bb159ebc83d..1af8d6fa4465 100644
--- a/tools/testing/selftests/mm/split_huge_page_test.c
+++ b/tools/testing/selftests/mm/split_huge_page_test.c
@@ -14,6 +14,7 @@
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/mount.h>
+#include <sys/param.h>
#include <malloc.h>
#include <stdbool.h>
#include <time.h>
@@ -420,7 +421,8 @@ int create_pagecache_thp_and_fd(const char *testfile, size_t fd_size, int *fd,
return -1;
}
-void split_thp_in_pagecache_to_order(size_t fd_size, int order, const char *fs_loc)
+void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc,
+ int order, int offset)
{
int fd;
char *addr;
@@ -438,7 +440,12 @@ void split_thp_in_pagecache_to_order(size_t fd_size, int order, const char *fs_l
return;
err = 0;
- write_debugfs(PID_FMT, getpid(), (uint64_t)addr, (uint64_t)addr + fd_size, order);
+ if (offset == -1)
+ write_debugfs(PID_FMT, getpid(), (uint64_t)addr,
+ (uint64_t)addr + fd_size, order);
+ else
+ write_debugfs(PID_FMT, getpid(), (uint64_t)addr,
+ (uint64_t)addr + fd_size, order, offset);
for (i = 0; i < fd_size; i++)
if (*(addr + i) != (char)i) {
@@ -458,8 +465,8 @@ void split_thp_in_pagecache_to_order(size_t fd_size, int order, const char *fs_l
close(fd);
unlink(testfile);
if (err)
- ksft_exit_fail_msg("Split PMD-mapped pagecache folio to order %d failed\n", order);
- ksft_test_result_pass("Split PMD-mapped pagecache folio to order %d passed\n", order);
+ ksft_exit_fail_msg("Split PMD-mapped pagecache folio to order %d at in-folio offset %d failed\n", order, offset);
+ ksft_test_result_pass("Split PMD-mapped pagecache folio to order %d at in-folio offset %d passed\n", order, offset);
}
int main(int argc, char **argv)
@@ -470,6 +477,7 @@ int main(int argc, char **argv)
char fs_loc_template[] = "/tmp/thp_fs_XXXXXX";
const char *fs_loc;
bool created_tmp;
+ int offset;
ksft_print_header();
@@ -481,7 +489,7 @@ int main(int argc, char **argv)
if (argc > 1)
optional_xfs_path = argv[1];
- ksft_set_plan(1+9+2+9);
+ ksft_set_plan(1+8+2+9+8*4+2);
pagesize = getpagesize();
pageshift = ffs(pagesize) - 1;
@@ -494,7 +502,8 @@ int main(int argc, char **argv)
split_pmd_zero_pages();
for (i = 0; i < 9; i++)
- split_pmd_thp_to_order(i);
+ if (i != 1)
+ split_pmd_thp_to_order(i);
split_pte_mapped_thp();
split_file_backed_thp();
@@ -502,7 +511,13 @@ int main(int argc, char **argv)
created_tmp = prepare_thp_fs(optional_xfs_path, fs_loc_template,
&fs_loc);
for (i = 8; i >= 0; i--)
- split_thp_in_pagecache_to_order(fd_size, i, fs_loc);
+ split_thp_in_pagecache_to_order_at(fd_size, fs_loc, i, -1);
+
+ for (i = 0; i < 9; i++)
+ for (offset = 0;
+ offset < pmd_pagesize / pagesize;
+ offset += MAX(pmd_pagesize / pagesize / 4, 1 << i))
+ split_thp_in_pagecache_to_order_at(fd_size, fs_loc, i, offset);
cleanup_thp_fs(fs_loc, created_tmp);
ksft_finished();
--
2.45.2
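A typical local run might look like the following sketch (assuming the
usual kselftest build flow; /mnt/xfs is a hypothetical mount point for
the optional XFS argument, and the test creates its own temporary mount
under /tmp when no argument is given):

$ make -C tools/testing/selftests/mm
# ./tools/testing/selftests/mm/split_huge_page_test
# ./tools/testing/selftests/mm/split_huge_page_test /mnt/xfs

The test writes to debugfs, so the test binary itself needs root, hence
the # prompt.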