* [PATCH] selftests/mm: Simplify byte pattern checking in mremap_test
@ 2026-04-10 14:30 Dev Jain
2026-04-13 19:27 ` David Hildenbrand (Arm)
0 siblings, 1 reply; 5+ messages in thread
From: Dev Jain @ 2026-04-10 14:30 UTC (permalink / raw)
To: akpm, david, shuah
Cc: ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko, linux-mm,
linux-kselftest, linux-kernel, ryan.roberts, anshuman.khandual,
Dev Jain, Sarthak Sharma
The original version of mremap_test (7df666253f26: "kselftests: vm: add
mremap tests") validated remapped contents byte-by-byte and printed a
mismatch index if the byte streams were not equal. That made
validation expensive in both cases: for "no mismatch" (the common case when
mremap is not buggy), it still walked all bytes in C; for "mismatch", it
broke out of the loop after printing the mismatch index.
Later, my commit 7033c6cc9620 ("selftests/mm: mremap_test: optimize
execution time from minutes to seconds using chunkwise memcmp") tried to
optimize both cases by using chunk-wise memcmp() and only scanning bytes
within a range which has been determined by memcmp as mismatching.
But get_sqrt() in that commit is buggy: `high = mid - 1` is applied
unconditionally, so it computes a wrong square root and the validation
speed is suboptimal.
The mismatch index does not provide useful debugging value here: if
validation fails, we know mremap behavior is wrong, and the specific byte
offset does not make root-causing easier.
So instead of fixing get_sqrt(), bite the bullet, drop mismatch index
scanning and just compare the two byte streams with memcmp().
Reported-by: Sarthak Sharma <sarthak.sharma@arm.com>
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
Sorry for sending two patchsets the same day - the problem was made known
to me today, and I couldn't help but fix it immediately. Imagine my
embarrassment when I found out that I had made a typo in the binary search
code, which I had been writing consistently throughout college :)
Applies on mm-unstable.
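For completeness, the fix this patch deliberately avoids is a one-line
`else`. A reference sketch of the corrected helper (not proposed for
merging; it keeps the deleted helper's bounds and return conventions):

```c
#include <assert.h>

/* Binary-search integer square root. The bug was that `high = mid - 1`
 * ran unconditionally instead of only when mid * mid overshoots val.
 * Reference sketch only, matching the deleted helper's conventions
 * (returns the exact root for perfect squares, low otherwise). */
unsigned long get_sqrt_fixed(unsigned long val)
{
	unsigned long low = 1;
	/* assuming val is less than 1TB, as in the original */
	unsigned long high = 1UL << 20;

	while (low <= high) {
		unsigned long mid = low + (high - low) / 2;
		unsigned long temp = mid * mid;

		if (temp == val)
			return mid;
		if (temp < val)
			low = mid + 1;
		else
			high = mid - 1;
	}
	return low;
}
```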
tools/testing/selftests/mm/mremap_test.c | 109 +++--------------------
1 file changed, 10 insertions(+), 99 deletions(-)
diff --git a/tools/testing/selftests/mm/mremap_test.c b/tools/testing/selftests/mm/mremap_test.c
index 308576437228c..131d9d6db8679 100644
--- a/tools/testing/selftests/mm/mremap_test.c
+++ b/tools/testing/selftests/mm/mremap_test.c
@@ -76,27 +76,6 @@ enum {
.expect_failure = should_fail \
}
-/* compute square root using binary search */
-static unsigned long get_sqrt(unsigned long val)
-{
- unsigned long low = 1;
-
- /* assuming rand_size is less than 1TB */
- unsigned long high = (1UL << 20);
-
- while (low <= high) {
- unsigned long mid = low + (high - low) / 2;
- unsigned long temp = mid * mid;
-
- if (temp == val)
- return mid;
- if (temp < val)
- low = mid + 1;
- high = mid - 1;
- }
- return low;
-}
-
/*
* Returns false if the requested remap region overlaps with an
* existing mapping (e.g text, stack) else returns true.
@@ -995,11 +974,9 @@ static long long remap_region(struct config c, unsigned int threshold_mb,
char *rand_addr)
{
void *addr, *tmp_addr, *src_addr, *dest_addr, *dest_preamble_addr = NULL;
- unsigned long long t, d;
struct timespec t_start = {0, 0}, t_end = {0, 0};
long long start_ns, end_ns, align_mask, ret, offset;
unsigned long long threshold;
- unsigned long num_chunks;
if (threshold_mb == VALIDATION_NO_THRESHOLD)
threshold = c.region_size;
@@ -1068,87 +1045,21 @@ static long long remap_region(struct config c, unsigned int threshold_mb,
goto clean_up_dest_preamble;
}
- /*
- * Verify byte pattern after remapping. Employ an algorithm with a
- * square root time complexity in threshold: divide the range into
- * chunks, if memcmp() returns non-zero, only then perform an
- * iteration in that chunk to find the mismatch index.
- */
- num_chunks = get_sqrt(threshold);
- for (unsigned long i = 0; i < num_chunks; ++i) {
- size_t chunk_size = threshold / num_chunks;
- unsigned long shift = i * chunk_size;
-
- if (!memcmp(dest_addr + shift, rand_addr + shift, chunk_size))
- continue;
-
- /* brute force iteration only over mismatch segment */
- for (t = shift; t < shift + chunk_size; ++t) {
- if (((char *) dest_addr)[t] != rand_addr[t]) {
- ksft_print_msg("Data after remap doesn't match at offset %llu\n",
- t);
- ksft_print_msg("Expected: %#x\t Got: %#x\n", rand_addr[t] & 0xff,
- ((char *) dest_addr)[t] & 0xff);
- ret = -1;
- goto clean_up_dest;
- }
- }
- }
-
- /*
- * if threshold is not divisible by num_chunks, then check the
- * last chunk
- */
- for (t = num_chunks * (threshold / num_chunks); t < threshold; ++t) {
- if (((char *) dest_addr)[t] != rand_addr[t]) {
- ksft_print_msg("Data after remap doesn't match at offset %llu\n",
- t);
- ksft_print_msg("Expected: %#x\t Got: %#x\n", rand_addr[t] & 0xff,
- ((char *) dest_addr)[t] & 0xff);
- ret = -1;
- goto clean_up_dest;
- }
+ /* Verify byte pattern after remapping */
+ if (memcmp(dest_addr, rand_addr, threshold)) {
+ ksft_print_msg("Data after remap doesn't match\n");
+ ret = -1;
+ goto clean_up_dest;
}
/* Verify the dest preamble byte pattern after remapping */
- if (!c.dest_preamble_size)
- goto no_preamble;
-
- num_chunks = get_sqrt(c.dest_preamble_size);
-
- for (unsigned long i = 0; i < num_chunks; ++i) {
- size_t chunk_size = c.dest_preamble_size / num_chunks;
- unsigned long shift = i * chunk_size;
-
- if (!memcmp(dest_preamble_addr + shift, rand_addr + shift,
- chunk_size))
- continue;
-
- /* brute force iteration only over mismatched segment */
- for (d = shift; d < shift + chunk_size; ++d) {
- if (((char *) dest_preamble_addr)[d] != rand_addr[d]) {
- ksft_print_msg("Preamble data after remap doesn't match at offset %llu\n",
- d);
- ksft_print_msg("Expected: %#x\t Got: %#x\n", rand_addr[d] & 0xff,
- ((char *) dest_preamble_addr)[d] & 0xff);
- ret = -1;
- goto clean_up_dest;
- }
- }
- }
-
- for (d = num_chunks * (c.dest_preamble_size / num_chunks); d < c.dest_preamble_size; ++d) {
- if (((char *) dest_preamble_addr)[d] != rand_addr[d]) {
- ksft_print_msg("Preamble data after remap doesn't match at offset %llu\n",
- d);
- ksft_print_msg("Expected: %#x\t Got: %#x\n", rand_addr[d] & 0xff,
- ((char *) dest_preamble_addr)[d] & 0xff);
- ret = -1;
- goto clean_up_dest;
- }
+ if (c.dest_preamble_size &&
+ memcmp(dest_preamble_addr, rand_addr, c.dest_preamble_size)) {
+ ksft_print_msg("Preamble data after remap doesn't match\n");
+ ret = -1;
+ goto clean_up_dest;
}
-no_preamble:
start_ns = t_start.tv_sec * NS_PER_SEC + t_start.tv_nsec;
end_ns = t_end.tv_sec * NS_PER_SEC + t_end.tv_nsec;
ret = end_ns - start_ns;
--
2.34.1
* Re: [PATCH] selftests/mm: Simplify byte pattern checking in mremap_test
2026-04-10 14:30 [PATCH] selftests/mm: Simplify byte pattern checking in mremap_test Dev Jain
@ 2026-04-13 19:27 ` David Hildenbrand (Arm)
2026-04-14 5:09 ` Dev Jain
0 siblings, 1 reply; 5+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-13 19:27 UTC (permalink / raw)
To: Dev Jain, akpm, shuah
Cc: ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko, linux-mm,
linux-kselftest, linux-kernel, ryan.roberts, anshuman.khandual,
Sarthak Sharma
On 4/10/26 16:30, Dev Jain wrote:
> The original version of mremap_test (7df666253f26: "kselftests: vm: add
> mremap tests") validated remapped contents byte-by-byte and printed a
> mismatch index in case the bytes streams are not equal. That made
> validation expensive in both cases: for "no mismatch" (the common case when
> mremap is not buggy), it still walked all bytes in C; for "mismatch", it
> broke out of the loop after printing the mismatch index.
>
> Later, my commit 7033c6cc9620 ("selftests/mm: mremap_test: optimize
> execution time from minutes to seconds using chunkwise memcmp") tried to
> optimize both cases by using chunk-wise memcmp() and only scanning bytes
> within a range which has been determined by memcmp as mismatching.
>
> But get_sqrt() in that commit is buggy: `high = mid - 1` is applied
> unconditionally. This makes the speed of checking the mismatch index
> suboptimal.
So is that the only problem with 7033c6cc9620: the speed?
>
> The mismatch index does not provide useful debugging value here: if
> validation fails, we know mremap behavior is wrong, and the specific byte
> offset does not make root-causing easier.
Fully agreed.
>
> So instead of fixing get_sqrt(), bite the bullet, drop mismatch index
> scanning and just compare the two byte streams with memcmp().
How does this affect the execution time of the test?
>
> Reported-by: Sarthak Sharma <sarthak.sharma@arm.com>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
Fixes: 7033c6cc9620 ("selftests/mm: mremap_test: optimize execution time
from minutes to seconds using chunkwise memcmp")
?
> ---
> Sorry for sending two patchsets the same day - the problem was made known
> to me today, and I couldn't help myself but fix it immediately, imagine
> my embarrassment when I found out that I made a typo in the binary search
> code which I had been writing consistently throughout college :)
:)
>
> Applies on mm-unstable.
>
> tools/testing/selftests/mm/mremap_test.c | 109 +++--------------------
> 1 file changed, 10 insertions(+), 99 deletions(-)
I mean, it certainly looks like a nice cleanup.
--
Cheers,
David
* Re: [PATCH] selftests/mm: Simplify byte pattern checking in mremap_test
2026-04-13 19:27 ` David Hildenbrand (Arm)
@ 2026-04-14 5:09 ` Dev Jain
2026-04-14 7:31 ` Ryan Roberts
2026-04-14 8:01 ` David Hildenbrand (Arm)
0 siblings, 2 replies; 5+ messages in thread
From: Dev Jain @ 2026-04-14 5:09 UTC (permalink / raw)
To: David Hildenbrand (Arm), akpm, shuah
Cc: ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko, linux-mm,
linux-kselftest, linux-kernel, ryan.roberts, anshuman.khandual,
Sarthak Sharma
On 14/04/26 12:57 am, David Hildenbrand (Arm) wrote:
> On 4/10/26 16:30, Dev Jain wrote:
>> The original version of mremap_test (7df666253f26: "kselftests: vm: add
>> mremap tests") validated remapped contents byte-by-byte and printed a
>> mismatch index in case the bytes streams are not equal. That made
>> validation expensive in both cases: for "no mismatch" (the common case when
>> mremap is not buggy), it still walked all bytes in C; for "mismatch", it
>> broke out of the loop after printing the mismatch index.
>>
>> Later, my commit 7033c6cc9620 ("selftests/mm: mremap_test: optimize
>> execution time from minutes to seconds using chunkwise memcmp") tried to
>> optimize both cases by using chunk-wise memcmp() and only scanning bytes
>> within a range which has been determined by memcmp as mismatching.
>>
>> But get_sqrt() in that commit is buggy: `high = mid - 1` is applied
>> unconditionally. This makes the speed of checking the mismatch index
>> suboptimal.
>
> So is that the only problem with 7033c6cc9620: the speed?
Yes.
I'll explain the algorithm in 7033c6cc9620.
The problem statement is: given two buffers of equal length n, find the
first mismatch index.
Algorithm: Divide the buffers into sqrt(n) chunks. Do a memcmp() over
each chunk. If all of them succeed, the buffers are equal, giving the
result in O(sqrt(n)) * t, where t = time taken by memcmp().
Otherwise, worst case is that we find the mismatch in the last chunk.
Now brute-force iterate this chunk to find the mismatch. Since chunk
size is sqrt(n), complexity is again
sqrt(n) * t + sqrt(n) = O(sqrt(n)) * t.
So if get_sqrt() computes a wrong square root, we lose this time
complexity.
Maybe there is an optimal value of x = #number of chunks of the buffer,
which may not be sqrt(n).
But given the information we have, a CS course on algorithms will
say this is one of the optimal ways to do it.
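A compact sketch of that scheme (`first_mismatch` is a hypothetical
helper for illustration, not the selftest code; the tail loop mirrors
the patch's handling of a remainder when n is not divisible by the
chunk count):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Sqrt-decomposition search described above: memcmp() each of ~sqrt(len)
 * chunks and byte-scan only the first chunk that differs. Returns the
 * first mismatch index, or len if the buffers are equal.
 */
size_t first_mismatch(const char *a, const char *b, size_t len)
{
	size_t chunks = 1, chunk, off, t, i;

	/* integer sqrt: largest value whose square fits in len */
	while ((chunks + 1) * (chunks + 1) <= len)
		chunks++;
	chunk = len / chunks;

	for (i = 0; i < chunks; i++) {
		off = i * chunk;
		if (!memcmp(a + off, b + off, chunk))
			continue;
		/* brute-force scan only within the mismatching chunk */
		for (t = off; t < off + chunk; t++)
			if (a[t] != b[t])
				return t;
	}
	/* tail bytes when len is not divisible by the chunk count */
	for (t = chunks * chunk; t < len; t++)
		if (a[t] != b[t])
			return t;
	return len;
}
```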
>
>>
>> The mismatch index does not provide useful debugging value here: if
>> validation fails, we know mremap behavior is wrong, and the specific byte
>> offset does not make root-causing easier.
>
> Fully agreed.
>
>>
>> So instead of fixing get_sqrt(), bite the bullet, drop mismatch index
>> scanning and just compare the two byte streams with memcmp().
>
> How does this affect the execution time of the test?
I just checked with ./mremap_test -t 0; the variance is very high on my
system.
In the common case of the test passing:
before patch, there are multiple sub-length calls to memcmp.
after patch, there is a single full-length call to memcmp.
So the time should reduce but may not be very distinguishable.
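One way to sanity-check that outside the selftest is a standalone timing
harness. Illustrative only: `LEN`, `CHUNKS`, and the function names are
arbitrary choices, not values from the test, and variance will still
dominate on small sizes:

```c
#define _POSIX_C_SOURCE 199309L
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Arbitrary illustrative sizes, not taken from mremap_test */
#define LEN    (16UL * 1024 * 1024)
#define CHUNKS 4096UL		/* ~sqrt(LEN) */

static long long ns_elapsed(const struct timespec *s, const struct timespec *e)
{
	return (e->tv_sec - s->tv_sec) * 1000000000LL +
	       (e->tv_nsec - s->tv_nsec);
}

/* Elapsed ns for a single full-length memcmp(), or -1 on mismatch */
long long time_single(const char *a, const char *b)
{
	struct timespec s, e;
	int r;

	clock_gettime(CLOCK_MONOTONIC, &s);
	r = memcmp(a, b, LEN);
	clock_gettime(CLOCK_MONOTONIC, &e);
	return r ? -1 : ns_elapsed(&s, &e);
}

/* Elapsed ns for CHUNKS sub-length memcmp() calls, or -1 on mismatch */
long long time_chunked(const char *a, const char *b)
{
	struct timespec s, e;
	size_t chunk = LEN / CHUNKS, i;
	int r = 0;

	clock_gettime(CLOCK_MONOTONIC, &s);
	for (i = 0; i < CHUNKS && !r; i++)
		r = memcmp(a + i * chunk, b + i * chunk, chunk);
	clock_gettime(CLOCK_MONOTONIC, &e);
	return r ? -1 : ns_elapsed(&s, &e);
}
```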
>
>>
>> Reported-by: Sarthak Sharma <sarthak.sharma@arm.com>
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>
> Fixes: 7033c6cc9620 ("selftests/mm: mremap_test: optimize execution time
> from minutes to seconds using chunkwise memcmp")
>
> ?
Not needed. 7033c6cc9620 does not create any incorrectness in the checking
of mismatch index.
>
>> ---
>> Sorry for sending two patchsets the same day - the problem was made known
>> to me today, and I couldn't help myself but fix it immediately, imagine
>> my embarrassment when I found out that I made a typo in the binary search
>> code which I had been writing consistently throughout college :)
>
> :)
>
>>
>> Applies on mm-unstable.
>>
>> tools/testing/selftests/mm/mremap_test.c | 109 +++--------------------
>> 1 file changed, 10 insertions(+), 99 deletions(-)
>
> I mean, it certainly looks like a nice cleanup.
>
* Re: [PATCH] selftests/mm: Simplify byte pattern checking in mremap_test
2026-04-14 5:09 ` Dev Jain
@ 2026-04-14 7:31 ` Ryan Roberts
2026-04-14 8:01 ` David Hildenbrand (Arm)
1 sibling, 0 replies; 5+ messages in thread
From: Ryan Roberts @ 2026-04-14 7:31 UTC (permalink / raw)
To: Dev Jain, David Hildenbrand (Arm), akpm, shuah
Cc: ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko, linux-mm,
linux-kselftest, linux-kernel, anshuman.khandual, Sarthak Sharma
On 14/04/2026 06:09, Dev Jain wrote:
>
>
> On 14/04/26 12:57 am, David Hildenbrand (Arm) wrote:
>> On 4/10/26 16:30, Dev Jain wrote:
>>> The original version of mremap_test (7df666253f26: "kselftests: vm: add
>>> mremap tests") validated remapped contents byte-by-byte and printed a
>>> mismatch index in case the bytes streams are not equal. That made
>>> validation expensive in both cases: for "no mismatch" (the common case when
>>> mremap is not buggy), it still walked all bytes in C; for "mismatch", it
>>> broke out of the loop after printing the mismatch index.
>>>
>>> Later, my commit 7033c6cc9620 ("selftests/mm: mremap_test: optimize
>>> execution time from minutes to seconds using chunkwise memcmp") tried to
>>> optimize both cases by using chunk-wise memcmp() and only scanning bytes
>>> within a range which has been determined by memcmp as mismatching.
>>>
>>> But get_sqrt() in that commit is buggy: `high = mid - 1` is applied
>>> unconditionally. This makes the speed of checking the mismatch index
>>> suboptimal.
>>
>> So is that the only problem with 7033c6cc9620: the speed?
>
> Yes.
>
> I'll explain the algorithm in 7033c6cc9620.
>
> The problem statement is: given two buffers of equal length n, find the
> first mismatch index.
>
> Algorithm: Divide the buffers into sqrt(n) chunks. Do a memcmp() over
> each chunk. If all of them succeed, the buffers are equal, giving the
> result in O(sqrt(n)) * t, where t = time taken by memcmp().
>
> Otherwise, worst case is that we find the mismatch in the last chunk.
> Now brute-force iterate this chunk to find the mismatch. Since chunk
> size is sqrt(n), complexity is again
> sqrt(n) * t + sqrt(n) = O(sqrt(n)) * t.
>
> So if get_sqrt() computes a wrong square root, we lose this time
> complexity.
>
> Maybe there is an optimal value of x = #number of chunks of the buffer,
> which may not be sqrt(n).
>
> But given the information we have, a CS course on algorithms will
> say this is one of the optimal ways to do it.
>
>>
>>>
>>> The mismatch index does not provide useful debugging value here: if
>>> validation fails, we know mremap behavior is wrong, and the specific byte
>>> offset does not make root-causing easier.
>>
>> Fully agreed.
>>
>>>
>>> So instead of fixing get_sqrt(), bite the bullet, drop mismatch index
>>> scanning and just compare the two byte streams with memcmp().
>>
>> How does this affect the execution time of the test?
>
> I just checked with ./mremap_test -t 0, the variance is very high on my
> system.
>
> In the common case of the test passing:
>
> before patch, there are multiple sub-length calls to memcmp.
> after patch, there is a single full-length call to memcmp.
>
> So the time should reduce but may not be very distinguishable.
My intuition would be the opposite; if you have a 4096 byte buffer, I would
have thought that a single memcmp would be significantly faster than
sqrt(4096) = 64 calls, each over 64 bytes.
If you want to keep the common case fast, but also find the first differing
offset on failure, I expect you can exploit the fact that the buffers are all
page aligned. With some prompting, Codex gave me this:
---8<---
static size_t first_mismatch_offset(const void *buf1, const void *buf2,
				    size_t len)
{
	const uint64_t *ptr1 = buf1;
	const uint64_t *ptr2 = buf2;
	size_t word;
	size_t words = len / sizeof(*ptr1);

	assert(!((uintptr_t)buf1 & (sizeof(*ptr1) - 1)));
	assert(!((uintptr_t)buf2 & (sizeof(*ptr2) - 1)));
	assert(!(len & (sizeof(*ptr1) - 1)));

	if (!memcmp(buf1, buf2, len))
		return len;

	for (word = 0; word < words; word++) {
		if (ptr1[word] != ptr2[word]) {
			const unsigned char *bytes1 =
				(const unsigned char *)&ptr1[word];
			const unsigned char *bytes2 =
				(const unsigned char *)&ptr2[word];
			size_t i;

			for (i = 0; i < sizeof(*ptr1); i++) {
				if (bytes1[i] != bytes2[i])
					return word * sizeof(*ptr1) + i;
			}
		}
	}

	return len;
}
---8<---
I've not benchmarked it though...
Thanks,
Ryan
>
>>
>>>
>>> Reported-by: Sarthak Sharma <sarthak.sharma@arm.com>
>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>
>> Fixes: 7033c6cc9620 ("selftests/mm: mremap_test: optimize execution time
>> from minutes to seconds using chunkwise memcmp")
>>
>> ?
>
> Not needed. 7033c6cc9620 does not create any incorrectness in the checking
> of mismatch index.
>
>>
>>> ---
>>> Sorry for sending two patchsets the same day - the problem was made known
>>> to me today, and I couldn't help myself but fix it immediately, imagine
>>> my embarrassment when I found out that I made a typo in the binary search
>>> code which I had been writing consistently throughout college :)
>>
>> :)
>>
>>>
>>> Applies on mm-unstable.
>>>
>>> tools/testing/selftests/mm/mremap_test.c | 109 +++--------------------
>>> 1 file changed, 10 insertions(+), 99 deletions(-)
>>
>> I mean, it certainly looks like a nice cleanup.
>>
>
* Re: [PATCH] selftests/mm: Simplify byte pattern checking in mremap_test
2026-04-14 5:09 ` Dev Jain
2026-04-14 7:31 ` Ryan Roberts
@ 2026-04-14 8:01 ` David Hildenbrand (Arm)
1 sibling, 0 replies; 5+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-14 8:01 UTC (permalink / raw)
To: Dev Jain, akpm, shuah
Cc: ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko, linux-mm,
linux-kselftest, linux-kernel, ryan.roberts, anshuman.khandual,
Sarthak Sharma
On 4/14/26 07:09, Dev Jain wrote:
>
>
> On 14/04/26 12:57 am, David Hildenbrand (Arm) wrote:
>> On 4/10/26 16:30, Dev Jain wrote:
>>> The original version of mremap_test (7df666253f26: "kselftests: vm: add
>>> mremap tests") validated remapped contents byte-by-byte and printed a
>>> mismatch index in case the bytes streams are not equal. That made
>>> validation expensive in both cases: for "no mismatch" (the common case when
>>> mremap is not buggy), it still walked all bytes in C; for "mismatch", it
>>> broke out of the loop after printing the mismatch index.
>>>
>>> Later, my commit 7033c6cc9620 ("selftests/mm: mremap_test: optimize
>>> execution time from minutes to seconds using chunkwise memcmp") tried to
>>> optimize both cases by using chunk-wise memcmp() and only scanning bytes
>>> within a range which has been determined by memcmp as mismatching.
>>>
>>> But get_sqrt() in that commit is buggy: `high = mid - 1` is applied
>>> unconditionally. This makes the speed of checking the mismatch index
>>> suboptimal.
>>
>> So is that the only problem with 7033c6cc9620: the speed?
>
> Yes.
>
> I'll explain the algorithm in 7033c6cc9620.
>
> The problem statement is: given two buffers of equal length n, find the
> first mismatch index.
>
> Algorithm: Divide the buffers into sqrt(n) chunks. Do a memcmp() over
> each chunk. If all of them succeed, the buffers are equal, giving the
> result in O(sqrt(n)) * t, where t = time taken by memcmp().
>
> Otherwise, worst case is that we find the mismatch in the last chunk.
> Now brute-force iterate this chunk to find the mismatch. Since chunk
> size is sqrt(n), complexity is again
> sqrt(n) * t + sqrt(n) = O(sqrt(n)) * t.
>
> So if get_sqrt() computes a wrong square root, we lose this time
> complexity.
Ah, thanks for clarifying.
>
> Maybe there is an optimal value of x = #number of chunks of the buffer,
> which may not be sqrt(n).
>
> But given the information we have, a CS course on algorithms will
> say this is one of the optimal ways to do it.
>
>>
>>>
>>> The mismatch index does not provide useful debugging value here: if
>>> validation fails, we know mremap behavior is wrong, and the specific byte
>>> offset does not make root-causing easier.
>>
>> Fully agreed.
>>
>>>
>>> So instead of fixing get_sqrt(), bite the bullet, drop mismatch index
>>> scanning and just compare the two byte streams with memcmp().
>>
>> How does this affect the execution time of the test?
>
> I just checked with ./mremap_test -t 0, the variance is very high on my
> system.
>
> In the common case of the test passing:
>
> before patch, there are multiple sub-length calls to memcmp.
> after patch, there is a single full-length call to memcmp.
>
> So the time should reduce but may not be very distinguishable.
Okay, so doesn't matter. I agree that we should simplify all that.
The exact index is irrelevant. Whoever wants to debug the test failure
could modify the test to find that out. It's one of the tests we don't
really expect to fail (often).
>
>>
>>>
>>> Reported-by: Sarthak Sharma <sarthak.sharma@arm.com>
>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>
>> Fixes: 7033c6cc9620 ("selftests/mm: mremap_test: optimize execution time
>> from minutes to seconds using chunkwise memcmp")
>>
>> ?
>
> Not needed. 7033c6cc9620 does not create any incorrectness in the checking
> of mismatch index.
Yes, agreed.
I would suggest to rewrite/simplify/clarify the patch description, not
talking about "buggy" etc, focusing on the simplification.
"
The original version of mremap_test (7df666253f26: "kselftests: vm: add
mremap tests") validated remapped contents byte-by-byte and printed a
mismatch index if the byte streams didn't match. That was rather
inefficient, especially also if the test passed.
Later, commit 7033c6cc9620 ("selftests/mm: mremap_test: optimize
execution time from minutes to seconds using chunkwise memcmp") used
memcmp() on bigger chunks, falling back to byte-wise scanning to detect
the problematic index only when it discovered a mismatch.
However, the implementation is overly complicated (e.g., get_sqrt() is
currently not optimal) and we don't really have to report the exact
index: whoever debugs the failing test can figure that out.
Let's simplify by just comparing both byte streams with memcmp() and not
detecting the exact failed index.
"
--
Cheers,
David