From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton <akpm@linux-foundation.org>,
Shuah Khan <shuah@kernel.org>, Peter Xu <peterx@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>,
linux-mm@kvack.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v1 3/3] selftests/mm: Speed up split_huge_page_test
Date: Tue, 18 Mar 2025 17:43:41 +0000
Message-ID: <20250318174343.243631-3-ryan.roberts@arm.com>
In-Reply-To: <20250318174343.243631-1-ryan.roberts@arm.com>

create_pagecache_thp_and_fd() was previously writing a file sized at
twice the PMD size by making a per-byte write syscall. This was quite
slow when the PMD size is 2M, but completely intolerable for 32M (PMD
size for arm64's 16K page size) and 512M (PMD size for arm64's 64K page
size).
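
(For scale: since fd_size is twice the PMD size, the 64K case meant
roughly 2 * 512M = 1G separate one-byte write() syscalls just to create
the test file.)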

The byte pattern has a 256-byte period, so let's create a 1K buffer and
fill it with exactly 4 periods. Then we can write the buffer as many
times as is required to fill the file. This makes things much more
tolerable.
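
Outside the selftest harness, the buffered approach amounts to roughly
the standalone sketch below. The file name and fd_size value are
illustrative only; the real test derives fd_size from the PMD size and
reports failures via the ksft helpers rather than perror().

#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	/* Illustrative size; the test uses twice the PMD size. */
	const size_t fd_size = 4UL << 20;
	unsigned char buf[1024];	/* four 256-byte pattern periods */
	size_t i;
	int fd;

	/* (unsigned char)i wraps every 256 bytes, so a 1K buffer holds
	 * the pattern exactly four times. */
	for (i = 0; i < sizeof(buf); i++)
		buf[i] = (unsigned char)i;

	fd = open("testfile", O_CREAT | O_WRONLY | O_TRUNC, 0644);
	if (fd == -1) {
		perror("open");
		return EXIT_FAILURE;
	}

	/* One 1K write per iteration instead of one write per byte. */
	assert(fd_size % sizeof(buf) == 0);
	for (i = 0; i < fd_size; i += sizeof(buf)) {
		if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
			perror("write");
			close(fd);
			return EXIT_FAILURE;
		}
	}

	close(fd);
	return EXIT_SUCCESS;
}

The assert() mirrors the same assumption the patch makes: fd_size is
always a whole multiple of the buffer size.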

The test now passes for 16K page size. It still fails for 64K page size
because MAX_PAGECACHE_ORDER is too small for a 512M folio size (I think).
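
(Assuming 64K base pages, a 512M folio spans 512M / 64K = 8192 = 2^13
pages, i.e. order 13, so if MAX_PAGECACHE_ORDER caps folio allocation
below that, the page cache can never give the test the folio it needs.)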

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
tools/testing/selftests/mm/split_huge_page_test.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c
index 3f353f3d070f..499333d75fff 100644
--- a/tools/testing/selftests/mm/split_huge_page_test.c
+++ b/tools/testing/selftests/mm/split_huge_page_test.c
@@ -5,6 +5,7 @@
*/
#define _GNU_SOURCE
+#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
@@ -361,6 +362,7 @@ int create_pagecache_thp_and_fd(const char *testfile, size_t fd_size, int *fd,
{
size_t i;
int dummy = 0;
+ unsigned char buf[1024];
srand(time(NULL));
@@ -368,11 +370,12 @@ int create_pagecache_thp_and_fd(const char *testfile, size_t fd_size, int *fd,
if (*fd == -1)
ksft_exit_fail_msg("Failed to create a file at %s\n", testfile);
- for (i = 0; i < fd_size; i++) {
- unsigned char byte = (unsigned char)i;
+ assert(fd_size % sizeof(buf) == 0);
+ for (i = 0; i < sizeof(buf); i++)
+ buf[i] = (unsigned char)i;
+ for (i = 0; i < fd_size; i += sizeof(buf))
+ write(*fd, buf, sizeof(buf));
- write(*fd, &byte, sizeof(byte));
- }
close(*fd);
sync();
*fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
--
2.43.0