linux-mm.kvack.org archive mirror
From: Zi Yan <ziy@nvidia.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@kernel.org>,
	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	Zi Yan <ziy@nvidia.com>, Hugh Dickins <hughd@google.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Nico Pache <npache@redhat.com>,
	Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
	Barry Song <baohua@kernel.org>, Lance Yang <lance.yang@linux.dev>,
	Matthew Wilcox <willy@infradead.org>,
	Bas van Dijk <bas@dfinity.org>,
	Eero Kelly <eero.kelly@dfinity.org>,
	Andrew Battat <andrew.battat@dfinity.org>,
	Adam Bratschi-Kaye <adam.bratschikaye@dfinity.org>,
	<linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>,
	<linux-fsdevel@vger.kernel.org>
Subject: Re: [PATCH v4] selftests/mm: add folio_split() and filemap_get_entry() race test
Date: Sun, 22 Mar 2026 21:12:36 -0400	[thread overview]
Message-ID: <B0985749-939D-4256-B9B9-C54C0734CF72@nvidia.com> (raw)
In-Reply-To: <20260320142219.375118-1-ziy@nvidia.com>

On 20 Mar 2026, at 10:22, Zi Yan wrote:

> The added folio_split_race_test is a modified C port of the race condition
> test from [1]. The test creates shmem huge pages; the main thread
> punches holes in the shmem to trigger folio_split() in the kernel, while
> a set of 16 reader threads reads the shmem to trigger filemap_get_entry().
> filemap_get_entry() locklessly reads the folio and xarray being split by
> folio_split(). The original test[2] is written in Rust and uses
> memfd (shmem-backed). This C port uses shmem directly and runs in a single
> process.
>
> Note: the initial Rust-to-C conversion was done by Cursor.
>
> Link: https://lore.kernel.org/all/CAKNNEtw5_kZomhkugedKMPOG-sxs5Q5OLumWJdiWXv+C9Yct0w@mail.gmail.com/ [1]
> Link: https://github.com/dfinity/thp-madv-remove-test [2]
> Signed-off-by: Bas van Dijk <bas@dfinity.org>
> Signed-off-by: Adam Bratschi-Kaye <adam.bratschikaye@dfinity.org>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
> From V3:
> 1. fixed for loop stepping issue
> 2. used PRIu64 instead of %zu for uint64_t.
>
> From V2:
> 1. simplified the program by removing fork.
>
> From V1:
> 1. added prctl(PR_SET_PDEATHSIG, SIGTERM) to avoid child looping
>    forever.
> 2. removed page_idx % PUNCH_INTERVAL >= 0, since it is a nop. Added a
>    comment.
> 3. added a child process status check to prevent parent looping forever
>    and record that as a failure.
> 4. used ksft_exit_skip() instead of ksft_finished() when the program is
>    not running as root.
> 5. restored THP settings properly when the program exits abnormally.
>  tools/testing/selftests/mm/Makefile           |   1 +
>  .../selftests/mm/folio_split_race_test.c      | 293 ++++++++++++++++++
>  tools/testing/selftests/mm/run_vmtests.sh     |   2 +
>  3 files changed, 296 insertions(+)
>  create mode 100644 tools/testing/selftests/mm/folio_split_race_test.c
>

Hi Andrew,

The fixup below addresses the new issues (the first and the third) raised
by sashiko[1].

The second issue is that the test only verifies the first 8 bytes of each
page. That is intentional: the test is meant to detect the race condition
in which a wrong page index is used, and comparing the index stored in
those first 8 bytes is enough to catch that.

The fourth issue is addressed in Q3 of the V3 thread[2].


[1] https://sashiko.dev/#/patchset/20260320142219.375118-1-ziy%40nvidia.com
[2] https://lore.kernel.org/all/8B720FB8-DE4D-487A-9AEF-AC204E9F5755@nvidia.com/


From a66945de00f33c163cf814ac7c2d9620a725bfed Mon Sep 17 00:00:00 2001
From: Zi Yan <ziy@nvidia.com>
Date: Sun, 22 Mar 2026 19:53:54 -0400
Subject: [PATCH] selftests/mm: fix sashiko complaints in folio_split_race_test

1. used PRIu64 for uint64_t
2. added a pthread_barrier_t to ensure the main thread starts punching
   holes only after all reader threads have been spawned.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 .../selftests/mm/folio_split_race_test.c        | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/mm/folio_split_race_test.c b/tools/testing/selftests/mm/folio_split_race_test.c
index c264cc625a7cb..ab6868e7e2efe 100644
--- a/tools/testing/selftests/mm/folio_split_race_test.c
+++ b/tools/testing/selftests/mm/folio_split_race_test.c
@@ -46,6 +46,7 @@ struct shared_ctl {
 	atomic_uint_fast32_t stop;
 	atomic_size_t reader_failures;
 	atomic_size_t reader_verified;
+	pthread_barrier_t barrier;
 };

 static void fill_page(unsigned char *base, size_t page_idx)
@@ -78,14 +79,14 @@ static bool check_page(unsigned char *base, size_t page_idx)
 		}
 		if (all_zero) {
 			ksft_print_msg(
-				"CORRUPTED: page %zu (huge page %zu) is ALL ZEROS\n",
+				"CORRUPTED: page %zu (huge page %" PRIu64 ") is ALL ZEROS\n",
 				page_idx,
 				(page_idx * page_size) / pmd_pagesize);
 		} else {
 			ksft_print_msg(
-				"CORRUPTED: page %zu (huge page %zu): expected idx %zu, got %lu\n",
+				"CORRUPTED: page %zu (huge page %" PRIu64 "): expected idx %zu, got %" PRIu64 "\n",
 				page_idx, (page_idx * page_size) / pmd_pagesize,
-				page_idx, (unsigned long)got_idx);
+				page_idx, got_idx);
 		}
 		return false;
 	}
@@ -110,6 +111,8 @@ static void *reader_thread(void *arg)
 	atomic_size_t *verified = ra->verified;
 	size_t page_idx;

+	pthread_barrier_wait(&ctl->barrier);
+
 	while (atomic_load_explicit(&ctl->stop, memory_order_acquire) == 0) {
 		for (page_idx = (size_t)tid; page_idx < TOTAL_PAGES;
 		     page_idx += NUM_READER_THREADS) {
@@ -178,8 +181,14 @@ static size_t run_iteration(void)
 	if (!check_huge_shmem(mmap_base, NR_PMD_PAGE, pmd_pagesize))
 		ksft_exit_fail_msg("No shmem THP is allocated\n");

+	if (pthread_barrier_init(&ctl.barrier, NULL, NUM_READER_THREADS + 1) != 0)
+		ksft_exit_fail_msg("pthread_barrier_init failed\n");
+
 	create_readers(threads, args, mmap_base, &ctl);

+	/* Wait for all reader threads to be ready before punching holes. */
+	pthread_barrier_wait(&ctl.barrier);
+
 	for (i = 0; i < TOTAL_PAGES; i++) {
 		if (i % PUNCH_INTERVAL != 0)
 			continue;
@@ -198,6 +207,8 @@ static size_t run_iteration(void)
 	for (i = 0; i < NUM_READER_THREADS; i++)
 		pthread_join(threads[i], NULL);

+	pthread_barrier_destroy(&ctl.barrier);
+
 	reader_failures = atomic_load_explicit(&ctl.reader_failures,
 					       memory_order_acquire);
 	reader_verified = atomic_load_explicit(&ctl.reader_verified,
-- 
2.53.0



--
Best Regards,
Yan, Zi



Thread overview: 5+ messages
2026-03-20 14:22 Zi Yan
2026-03-20 18:00 ` Zi Yan
2026-03-23  1:12 ` Zi Yan [this message]
2026-03-23 12:48 ` David Hildenbrand (Arm)
2026-03-23 15:24   ` Zi Yan
