linux-mm.kvack.org archive mirror
* [PATCH v2] selftests/mm: Simplify byte pattern checking in mremap_test
@ 2026-04-15  4:45 Dev Jain
  2026-04-15  6:00 ` Mike Rapoport
  2026-04-15  7:48 ` David Hildenbrand (Arm)
  0 siblings, 2 replies; 7+ messages in thread
From: Dev Jain @ 2026-04-15  4:45 UTC (permalink / raw)
  To: akpm, david, shuah
  Cc: ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko, linux-mm,
	linux-kselftest, linux-kernel, ryan.roberts, anshuman.khandual,
	Dev Jain, Sarthak Sharma

The original version of mremap_test (7df666253f26: "kselftests: vm: add
mremap tests") validated remapped contents byte-by-byte and printed a
mismatch index in case the byte streams didn't match. That was rather
inefficient, especially when the test passed.

Later, commit 7033c6cc9620 ("selftests/mm: mremap_test: optimize
execution time from minutes to seconds using chunkwise memcmp") used
memcmp() on bigger chunks, falling back to byte-wise scanning to detect
the problematic index only if it discovered a problem.

However, the implementation is overly complicated (e.g., get_sqrt() is
currently not optimal) and we don't really have to report the exact
index: whoever debugs the failing test can figure that out.

Let's simplify by just comparing both byte streams with memcmp() and not
detecting the exact failed index.

Reported-by: Sarthak Sharma <sarthak.sharma@arm.com>
Tested-by: Sarthak Sharma <sarthak.sharma@arm.com>
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
Applies on mm-unstable.

v1->v2:
 - Simplify patch description

v1:
 - https://lore.kernel.org/all/20260410143031.148173-1-dev.jain@arm.com/

 tools/testing/selftests/mm/mremap_test.c | 109 +++--------------------
 1 file changed, 10 insertions(+), 99 deletions(-)

diff --git a/tools/testing/selftests/mm/mremap_test.c b/tools/testing/selftests/mm/mremap_test.c
index 308576437228c..131d9d6db8679 100644
--- a/tools/testing/selftests/mm/mremap_test.c
+++ b/tools/testing/selftests/mm/mremap_test.c
@@ -76,27 +76,6 @@ enum {
 	.expect_failure = should_fail				\
 }
 
-/* compute square root using binary search */
-static unsigned long get_sqrt(unsigned long val)
-{
-	unsigned long low = 1;
-
-	/* assuming rand_size is less than 1TB */
-	unsigned long high = (1UL << 20);
-
-	while (low <= high) {
-		unsigned long mid = low + (high - low) / 2;
-		unsigned long temp = mid * mid;
-
-		if (temp == val)
-			return mid;
-		if (temp < val)
-			low = mid + 1;
-		high = mid - 1;
-	}
-	return low;
-}
-
 /*
  * Returns false if the requested remap region overlaps with an
  * existing mapping (e.g text, stack) else returns true.
@@ -995,11 +974,9 @@ static long long remap_region(struct config c, unsigned int threshold_mb,
 			      char *rand_addr)
 {
 	void *addr, *tmp_addr, *src_addr, *dest_addr, *dest_preamble_addr = NULL;
-	unsigned long long t, d;
 	struct timespec t_start = {0, 0}, t_end = {0, 0};
 	long long  start_ns, end_ns, align_mask, ret, offset;
 	unsigned long long threshold;
-	unsigned long num_chunks;
 
 	if (threshold_mb == VALIDATION_NO_THRESHOLD)
 		threshold = c.region_size;
@@ -1068,87 +1045,21 @@ static long long remap_region(struct config c, unsigned int threshold_mb,
 		goto clean_up_dest_preamble;
 	}
 
-	/*
-	 * Verify byte pattern after remapping. Employ an algorithm with a
-	 * square root time complexity in threshold: divide the range into
-	 * chunks, if memcmp() returns non-zero, only then perform an
-	 * iteration in that chunk to find the mismatch index.
-	 */
-	num_chunks = get_sqrt(threshold);
-	for (unsigned long i = 0; i < num_chunks; ++i) {
-		size_t chunk_size = threshold / num_chunks;
-		unsigned long shift = i * chunk_size;
-
-		if (!memcmp(dest_addr + shift, rand_addr + shift, chunk_size))
-			continue;
-
-		/* brute force iteration only over mismatch segment */
-		for (t = shift; t < shift + chunk_size; ++t) {
-			if (((char *) dest_addr)[t] != rand_addr[t]) {
-				ksft_print_msg("Data after remap doesn't match at offset %llu\n",
-						t);
-				ksft_print_msg("Expected: %#x\t Got: %#x\n", rand_addr[t] & 0xff,
-						((char *) dest_addr)[t] & 0xff);
-				ret = -1;
-				goto clean_up_dest;
-			}
-		}
-	}
-
-	/*
-	 * if threshold is not divisible by num_chunks, then check the
-	 * last chunk
-	 */
-	for (t = num_chunks * (threshold / num_chunks); t < threshold; ++t) {
-		if (((char *) dest_addr)[t] != rand_addr[t]) {
-			ksft_print_msg("Data after remap doesn't match at offset %llu\n",
-					t);
-			ksft_print_msg("Expected: %#x\t Got: %#x\n", rand_addr[t] & 0xff,
-					((char *) dest_addr)[t] & 0xff);
-			ret = -1;
-			goto clean_up_dest;
-		}
+	/* Verify byte pattern after remapping */
+	if (memcmp(dest_addr, rand_addr, threshold)) {
+		ksft_print_msg("Data after remap doesn't match\n");
+		ret = -1;
+		goto clean_up_dest;
 	}
 
 	/* Verify the dest preamble byte pattern after remapping */
-	if (!c.dest_preamble_size)
-		goto no_preamble;
-
-	num_chunks = get_sqrt(c.dest_preamble_size);
-
-	for (unsigned long i = 0; i < num_chunks; ++i) {
-		size_t chunk_size = c.dest_preamble_size / num_chunks;
-		unsigned long shift = i * chunk_size;
-
-		if (!memcmp(dest_preamble_addr + shift, rand_addr + shift,
-			    chunk_size))
-			continue;
-
-		/* brute force iteration only over mismatched segment */
-		for (d = shift; d < shift + chunk_size; ++d) {
-			if (((char *) dest_preamble_addr)[d] != rand_addr[d]) {
-				ksft_print_msg("Preamble data after remap doesn't match at offset %llu\n",
-						d);
-				ksft_print_msg("Expected: %#x\t Got: %#x\n", rand_addr[d] & 0xff,
-						((char *) dest_preamble_addr)[d] & 0xff);
-				ret = -1;
-				goto clean_up_dest;
-			}
-		}
-	}
-
-	for (d = num_chunks * (c.dest_preamble_size / num_chunks); d < c.dest_preamble_size; ++d) {
-		if (((char *) dest_preamble_addr)[d] != rand_addr[d]) {
-			ksft_print_msg("Preamble data after remap doesn't match at offset %llu\n",
-					d);
-			ksft_print_msg("Expected: %#x\t Got: %#x\n", rand_addr[d] & 0xff,
-					((char *) dest_preamble_addr)[d] & 0xff);
-			ret = -1;
-			goto clean_up_dest;
-		}
+	if (c.dest_preamble_size &&
+	    memcmp(dest_preamble_addr, rand_addr, c.dest_preamble_size)) {
+		ksft_print_msg("Preamble data after remap doesn't match\n");
+		ret = -1;
+		goto clean_up_dest;
 	}
 
-no_preamble:
 	start_ns = t_start.tv_sec * NS_PER_SEC + t_start.tv_nsec;
 	end_ns = t_end.tv_sec * NS_PER_SEC + t_end.tv_nsec;
 	ret = end_ns - start_ns;
-- 
2.34.1


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v2] selftests/mm: Simplify byte pattern checking in mremap_test
  2026-04-15  4:45 [PATCH v2] selftests/mm: Simplify byte pattern checking in mremap_test Dev Jain
@ 2026-04-15  6:00 ` Mike Rapoport
  2026-04-15  7:48 ` David Hildenbrand (Arm)
  1 sibling, 0 replies; 7+ messages in thread
From: Mike Rapoport @ 2026-04-15  6:00 UTC (permalink / raw)
  To: Dev Jain
  Cc: akpm, david, shuah, ljs, Liam.Howlett, vbabka, surenb, mhocko,
	linux-mm, linux-kselftest, linux-kernel, ryan.roberts,
	anshuman.khandual, Sarthak Sharma

On Wed, Apr 15, 2026 at 10:15:09AM +0530, Dev Jain wrote:
> The original version of mremap_test (7df666253f26: "kselftests: vm: add
> mremap tests") validated remapped contents byte-by-byte and printed a
> mismatch index in case the byte streams didn't match. That was rather
> inefficient, especially when the test passed.
> 
> Later, commit 7033c6cc9620 ("selftests/mm: mremap_test: optimize
> execution time from minutes to seconds using chunkwise memcmp") used
> memcmp() on bigger chunks, falling back to byte-wise scanning to detect
> the problematic index only if it discovered a problem.
> 
> However, the implementation is overly complicated (e.g., get_sqrt() is
> currently not optimal) and we don't really have to report the exact
> index: whoever debugs the failing test can figure that out.
> 
> Let's simplify by just comparing both byte streams with memcmp() and not
> detecting the exact failed index.
> 
> Reported-by: Sarthak Sharma <sarthak.sharma@arm.com>
> Tested-by: Sarthak Sharma <sarthak.sharma@arm.com>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
> Applies on mm-unstable.
> 
> v1->v2:
>  - Simplify patch description
> 
> v1:
>  - https://lore.kernel.org/all/20260410143031.148173-1-dev.jain@arm.com/
> 
>  tools/testing/selftests/mm/mremap_test.c | 109 +++--------------------
>  1 file changed, 10 insertions(+), 99 deletions(-)

I like it :)

Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>

-- 
Sincerely yours,
Mike.



* Re: [PATCH v2] selftests/mm: Simplify byte pattern checking in mremap_test
  2026-04-15  4:45 [PATCH v2] selftests/mm: Simplify byte pattern checking in mremap_test Dev Jain
  2026-04-15  6:00 ` Mike Rapoport
@ 2026-04-15  7:48 ` David Hildenbrand (Arm)
  2026-04-15  8:35   ` Dev Jain
  1 sibling, 1 reply; 7+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-15  7:48 UTC (permalink / raw)
  To: Dev Jain, akpm, shuah
  Cc: ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko, linux-mm,
	linux-kselftest, linux-kernel, ryan.roberts, anshuman.khandual,
	Sarthak Sharma

On 4/15/26 06:45, Dev Jain wrote:
> The original version of mremap_test (7df666253f26: "kselftests: vm: add
> mremap tests") validated remapped contents byte-by-byte and printed a
> mismatch index in case the byte streams didn't match. That was rather
> inefficient, especially when the test passed.
> 
> Later, commit 7033c6cc9620 ("selftests/mm: mremap_test: optimize
> execution time from minutes to seconds using chunkwise memcmp") used
> memcmp() on bigger chunks, falling back to byte-wise scanning to detect
> the problematic index only if it discovered a problem.
> 
> However, the implementation is overly complicated (e.g., get_sqrt() is
> currently not optimal) and we don't really have to report the exact
> index: whoever debugs the failing test can figure that out.
> 
> Let's simplify by just comparing both byte streams with memcmp() and not
> detecting the exact failed index.
> 
> Reported-by: Sarthak Sharma <sarthak.sharma@arm.com>
> Tested-by: Sarthak Sharma <sarthak.sharma@arm.com>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---

I'll note something interesting: before 7033c6cc9620, we would check
random bytes in the stream. With 7033c6cc9620 we only check the first
threshold bytes IIUC.

That means that we are not actually verifying most of the area at all
anymore?


The whole test options are extremely questionable:

$ ./mremap_test --help
./mremap_test: invalid option -- '-'
Usage: ./mremap_test [[-t <threshold_mb>] [-p <pattern_seed>]]
-t       only validate threshold_mb of the remapped region
         if 0 is supplied no threshold is used; all tests
         are run and remapped regions validated fully.
         The default threshold used is 4MB.
-p       provide a seed to generate the random pattern for
         validating the remapped region.

Nobody will ever set these parameters, really. And tests that test
different things each time they are run are not particularly helpful.

We should just remove all that and do something reasonable internally.

That is

a) Remove all the perf crap (ehm sorry, "advanced tests that don't
   belong here and that nobody ever runs") from this functional test

b) Remove all options from the test. Nobody ever uses them. They are
   stupid.

c) Remove any randomization from the test. There is no need for random
   patterns, just fill pages with increasing numbers.

d) Just always verify the whole regions. Without the rand() magic this
   will probably be just ... fairly fast?

-- 
Cheers,

David



* Re: [PATCH v2] selftests/mm: Simplify byte pattern checking in mremap_test
  2026-04-15  7:48 ` David Hildenbrand (Arm)
@ 2026-04-15  8:35   ` Dev Jain
  2026-04-15  8:50     ` David Hildenbrand (Arm)
  2026-04-15  9:16     ` David Laight
  0 siblings, 2 replies; 7+ messages in thread
From: Dev Jain @ 2026-04-15  8:35 UTC (permalink / raw)
  To: David Hildenbrand (Arm), akpm, shuah
  Cc: ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko, linux-mm,
	linux-kselftest, linux-kernel, ryan.roberts, anshuman.khandual,
	Sarthak Sharma



On 15/04/26 1:18 pm, David Hildenbrand (Arm) wrote:
> On 4/15/26 06:45, Dev Jain wrote:
>> The original version of mremap_test (7df666253f26: "kselftests: vm: add
>> mremap tests") validated remapped contents byte-by-byte and printed a
>> mismatch index in case the byte streams didn't match. That was rather
>> inefficient, especially when the test passed.
>>
>> Later, commit 7033c6cc9620 ("selftests/mm: mremap_test: optimize
>> execution time from minutes to seconds using chunkwise memcmp") used
>> memcmp() on bigger chunks, falling back to byte-wise scanning to detect
>> the problematic index only if it discovered a problem.
>>
>> However, the implementation is overly complicated (e.g., get_sqrt() is
>> currently not optimal) and we don't really have to report the exact
>> index: whoever debugs the failing test can figure that out.
>>
>> Let's simplify by just comparing both byte streams with memcmp() and not
>> detecting the exact failed index.
>>
>> Reported-by: Sarthak Sharma <sarthak.sharma@arm.com>
>> Tested-by: Sarthak Sharma <sarthak.sharma@arm.com>
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>> ---
> 
> I'll note something interesting: before 7033c6cc9620, we would check
> random bytes in the stream. With 7033c6cc9620 we only check the first
> threshold bytes IIUC.

Before 7033c6cc9620, the block of code was:

	/* Verify byte pattern after remapping */
	srand(pattern_seed);
	for (t = 0; t < threshold; t++) {
		char c = (char) rand();

		if (((char *) dest_addr)[t] != c) {
			ksft_print_msg("Data after remap doesn't match at offset %llu\n",
				       t);
			ksft_print_msg("Expected: %#x\t Got: %#x\n", c & 0xff,
					((char *) dest_addr)[t] & 0xff);
			ret = -1;
			goto clean_up_dest;
		}
	}

which is still checking the first threshold bytes only. Note that
pattern_seed remains constant at runtime, so 7033c6cc9620 just replaces
this with a buffer filled with the rand() stream.
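
The property relied on here, that a fixed seed makes the rand() stream
fully reproducible, can be sketched in a few lines of plain C
(fill_pattern() and the buffer size below are illustrative, not the
selftest's actual code):

```c
#include <stdlib.h>

/*
 * Fill buf with the byte stream rand() produces for a given seed.
 * Since the seed fully determines the stream, filling a reference
 * buffer once up front is equivalent to re-running srand()/rand()
 * at every verification, which is what 7033c6cc9620 exploits.
 */
void fill_pattern(char *buf, size_t size, unsigned int seed)
{
	srand(seed);
	for (size_t i = 0; i < size; i++)
		buf[i] = (char)rand();
}
```

Two buffers filled from the same seed compare equal with memcmp(), so a
single pre-filled copy suffices for all later comparisons.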

> 
> That means that we are not actually verifying most of the area at all
> anymore?
> 
> 
> The whole test options are extremely questionable:
> 
> $ ./mremap_test --help
> ./mremap_test: invalid option -- '-'
> Usage: ./mremap_test [[-t <threshold_mb>] [-p <pattern_seed>]]
> -t       only validate threshold_mb of the remapped region
>          if 0 is supplied no threshold is used; all tests
>          are run and remapped regions validated fully.
>          The default threshold used is 4MB.
> -p       provide a seed to generate the random pattern for
>          validating the remapped region.
> 
> Nobody will ever set these parameters, really. And tests that test
> different things each time they are run are not particularly helpful.
> 
> We should just remove all that and do something reasonable internally.
> 
> That is
> 
> a) Remove all the perf crap (ehm sorry, "advanced tests that don't
>    belong here and that nobody ever runs") from this functional test

Hmm... perhaps this is useful: we can keep this by default so we can
detect if a bug comes up in PMD/PUD mremap? If the test takes too long
we know we have messed up something there. Although "test taking too
long" is not a nice way to know that there is a bug ...

And the test perhaps won't even take long, since memcmp() on 1G will be
fairly fast. If we mess up PMD/PUD mremap, real bug reports will come
sooner than anyone detecting it from mremap_test.

So I'll remove this.

> 
> b) Remove all options from the test. Nobody ever uses them. They are
>    stupid.

Agreed.

> 
> c) Remove any randomization from the test. There is no need for random
>    patterns, just fill pages with increasing numbers.

Agreed.

> 
> d) Just always verify the whole regions. Without the rand() magic this
>    will probably be just ... fairly fast?

Yeah we are doing a simple memcmp() so it is fine.

I'll implement these changes.

> 




* Re: [PATCH v2] selftests/mm: Simplify byte pattern checking in mremap_test
  2026-04-15  8:35   ` Dev Jain
@ 2026-04-15  8:50     ` David Hildenbrand (Arm)
  2026-04-15  9:16     ` David Laight
  1 sibling, 0 replies; 7+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-15  8:50 UTC (permalink / raw)
  To: Dev Jain, akpm, shuah
  Cc: ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko, linux-mm,
	linux-kselftest, linux-kernel, ryan.roberts, anshuman.khandual,
	Sarthak Sharma

On 4/15/26 10:35, Dev Jain wrote:
> 
> 
> On 15/04/26 1:18 pm, David Hildenbrand (Arm) wrote:
>> On 4/15/26 06:45, Dev Jain wrote:
>>> The original version of mremap_test (7df666253f26: "kselftests: vm: add
>>> mremap tests") validated remapped contents byte-by-byte and printed a
>>> mismatch index in case the byte streams didn't match. That was rather
>>> inefficient, especially when the test passed.
>>>
>>> Later, commit 7033c6cc9620 ("selftests/mm: mremap_test: optimize
>>> execution time from minutes to seconds using chunkwise memcmp") used
>>> memcmp() on bigger chunks, falling back to byte-wise scanning to detect
>>> the problematic index only if it discovered a problem.
>>>
>>> However, the implementation is overly complicated (e.g., get_sqrt() is
>>> currently not optimal) and we don't really have to report the exact
>>> index: whoever debugs the failing test can figure that out.
>>>
>>> Let's simplify by just comparing both byte streams with memcmp() and not
>>> detecting the exact failed index.
>>>
>>> Reported-by: Sarthak Sharma <sarthak.sharma@arm.com>
>>> Tested-by: Sarthak Sharma <sarthak.sharma@arm.com>
>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>> ---
>>
>> I'll note something interesting: before 7033c6cc9620, we would check
>> random bytes in the stream. With 7033c6cc9620 we only check the first
>> threshold bytes IIUC.
> 
> Before 7033c6cc9620, the block of code was:
> 
> 	/* Verify byte pattern after remapping */
> 	srand(pattern_seed);
> 	for (t = 0; t < threshold; t++) {
> 		char c = (char) rand();
> 
> 		if (((char *) dest_addr)[t] != c) {
> 			ksft_print_msg("Data after remap doesn't match at offset %llu\n",
> 				       t);
> 			ksft_print_msg("Expected: %#x\t Got: %#x\n", c & 0xff,
> 					((char *) dest_addr)[t] & 0xff);
> 			ret = -1;
> 			goto clean_up_dest;
> 		}
> 	}
> 
> which is still checking the first threshold bytes only. Note that
> pattern_seed remains constant at runtime, so 7033c6cc9620 just replaces
> this with a buffer filled with the rand() stream.

Ah, thanks for clarifying, I got lost in the changes.

So to your patch here, definitely

Acked-by: David Hildenbrand (Arm) <david@kernel.org>

> 
>>
>> That means that we are not actually verifying most of the area at all
>> anymore?
>>
>>
>> The whole test options are extremely questionable:
>>
>> $ ./mremap_test --help
>> ./mremap_test: invalid option -- '-'
>> Usage: ./mremap_test [[-t <threshold_mb>] [-p <pattern_seed>]]
>> -t       only validate threshold_mb of the remapped region
>>          if 0 is supplied no threshold is used; all tests
>>          are run and remapped regions validated fully.
>>          The default threshold used is 4MB.
>> -p       provide a seed to generate the random pattern for
>>          validating the remapped region.
>>
>> Nobody will ever set these parameters, really. And tests that test
>> different things each time they are run are not particularly helpful.
>>
>> We should just remove all that and do something reasonable internally.
>>
>> That is
>>
>> a) Remove all the perf crap (ehm sorry, "advanced tests that don't
>>    belong here and that nobody ever runs") from this functional test
> 
> Hmm... perhaps this is useful: we can keep this by default so we can
> detect if a bug comes up in PMD/PUD mremap? If the test takes too long
> we know we have messed up something there. Although "test taking too
> long" is not a nice way to know that there is a bug ...

Exactly. And we should strive for tests in selftests/mm that just work
(tm). Any other useful (benchmarking) tools can go to tools/mm.

> 
> And the test perhaps won't even take long, since memcmp() on 1G will be
> fairly fast. If we mess up PMD/PUD mremap, real bug reports will come
> sooner than anyone detecting it from mremap_test.
> 
> So I'll remove this.
> 
>>
>> b) Remove all options from the test. Nobody ever uses them. They are
>>    stupid.
> 
> Agreed.
> 
>>
>> c) Remove any randomization from the test. There is no need for random
>>    patterns, just fill pages with increasing numbers.
> 
> Agreed.
> 
>>
>> d) Just always verify the whole regions. Without the rand() magic this
>>    will probably be just ... fairly fast?
> 
> Yeah we are doing a simple memcmp() so it is fine.
> 
> I'll implement these changes.


Thanks a bunch!


-- 
Cheers,

David



* Re: [PATCH v2] selftests/mm: Simplify byte pattern checking in mremap_test
  2026-04-15  8:35   ` Dev Jain
  2026-04-15  8:50     ` David Hildenbrand (Arm)
@ 2026-04-15  9:16     ` David Laight
  2026-04-15  9:21       ` David Hildenbrand (Arm)
  1 sibling, 1 reply; 7+ messages in thread
From: David Laight @ 2026-04-15  9:16 UTC (permalink / raw)
  To: Dev Jain
  Cc: David Hildenbrand (Arm),
	akpm, shuah, ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko,
	linux-mm, linux-kselftest, linux-kernel, ryan.roberts,
	anshuman.khandual, Sarthak Sharma

On Wed, 15 Apr 2026 14:05:01 +0530
Dev Jain <dev.jain@arm.com> wrote:

...
> > 
> > c) Remove any randomization from the test. There is no need for random
> >    patterns, just fill pages with increasing numbers.  
> 
> Agreed.

I'd start from (say) 0x123456789abcdef0 rather than zero to avoid lots
of zero bytes.

> > d) Just always verify the whole regions. Without the rand() magic this
> >    will probably be just ... fairly fast?  
> 
> Yeah we are doing a simple memcmp() so it is fine.

You don't need a copy of the buffer, just compare against the pattern
that filled it.
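
That could look something like the following sketch (expected_byte() and
verify_region() are hypothetical names; the pattern is the counter-based
one starting from a non-zero constant, here laid out byte-wise from
64-bit words):

```c
#include <stddef.h>

/* Expected byte at offset i of a region filled with an incrementing
 * 64-bit counter starting from a non-zero constant, stored one byte
 * at a time, least significant byte first. */
unsigned char expected_byte(size_t i)
{
	unsigned long long base = 0x123456789abcdef0ULL;
	unsigned long long word = base + i / sizeof(unsigned long long);

	return (word >> (8 * (i % sizeof(unsigned long long)))) & 0xff;
}

/* Verify a region against the generated pattern: no reference copy
 * of the buffer is needed, the pattern is recomputed per position. */
int verify_region(const unsigned char *buf, size_t size)
{
	for (size_t i = 0; i < size; i++)
		if (buf[i] != expected_byte(i))
			return -1;
	return 0;
}
```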

	David

> 
> I'll implement these changes.
> 
> >   
> 
> 




* Re: [PATCH v2] selftests/mm: Simplify byte pattern checking in mremap_test
  2026-04-15  9:16     ` David Laight
@ 2026-04-15  9:21       ` David Hildenbrand (Arm)
  0 siblings, 0 replies; 7+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-15  9:21 UTC (permalink / raw)
  To: David Laight, Dev Jain
  Cc: akpm, shuah, ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko,
	linux-mm, linux-kselftest, linux-kernel, ryan.roberts,
	anshuman.khandual, Sarthak Sharma

On 4/15/26 11:16, David Laight wrote:
> On Wed, 15 Apr 2026 14:05:01 +0530
> Dev Jain <dev.jain@arm.com> wrote:
> 
> ...
>>>
>>> c) Remove any randomization from the test. There is no need for random
>>>    patterns, just fill pages with increasing numbers.  
>>
>> Agreed.
> 
> I'd start from (say) 0x123456789abcdef0 rather than zero to avoid lots
> of zero bytes.

I'd just initialize the first page with (unsigned int) 0's, the second
page with (unsigned int) 1's etc.

mremap only moved full pages, so it would even be sufficient to only
check the first (unsigned int) in a page. But obviously, just comparing
full pages might be easier.

-- 
Cheers,

David



end of thread, other threads:[~2026-04-15  9:21 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-15  4:45 [PATCH v2] selftests/mm: Simplify byte pattern checking in mremap_test Dev Jain
2026-04-15  6:00 ` Mike Rapoport
2026-04-15  7:48 ` David Hildenbrand (Arm)
2026-04-15  8:35   ` Dev Jain
2026-04-15  8:50     ` David Hildenbrand (Arm)
2026-04-15  9:16     ` David Laight
2026-04-15  9:21       ` David Hildenbrand (Arm)
