linux-mm.kvack.org archive mirror
* [PATCH v2 0/4] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge
@ 2026-01-05 20:11 Lorenzo Stoakes
  2026-01-05 20:11 ` [PATCH v2 1/4] " Lorenzo Stoakes
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Lorenzo Stoakes @ 2026-01-05 20:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato,
	Yeoreum Yun, linux-mm, linux-kernel, David Hildenbrand,
	Jeongjun Park, Rik van Riel, Harry Yoo

Commit 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA
merges") introduced the ability to merge previously unavailable VMA merge
scenarios.

However, it is handling merges incorrectly when it comes to mremap() of a
faulted VMA adjacent to an unfaulted VMA. The issues arise in three cases:

1. Previous VMA unfaulted:

              copied -----|
                          v
	|-----------|.............|
	| unfaulted |(faulted VMA)|
	|-----------|.............|
	     prev

2. Next VMA unfaulted:

              copied -----|
                          v
	            |.............|-----------|
	            |(faulted VMA)| unfaulted |
                    |.............|-----------|
		                      next

3. Both adjacent VMAs unfaulted:

              copied -----|
                          v
	|-----------|.............|-----------|
	| unfaulted |(faulted VMA)| unfaulted |
	|-----------|.............|-----------|
	     prev                      next

This series fixes each of these cases, and introduces self tests to assert
that the issues are corrected.

I also test a further case which was already handled, to assert that my
changes continue to correctly handle it:

4. prev unfaulted, next faulted:

              copied -----|
                          v
	|-----------|.............|-----------|
	| unfaulted |(faulted VMA)|  faulted  |
	|-----------|.............|-----------|
	     prev                      next

This bug was discovered via a syzbot report, linked in the first patch in
the series. I have confirmed that this series fixes the bug.

I also discovered that we are failing to check that the faulted VMA was not
forked when merging a copied VMA in cases 1-3 above, an issue this series
also addresses.

I also added self tests to assert that this is resolved (and confirmed that
the tests failed prior to this).

I also cleaned up vma_expand() as part of this work, renamed
vma_had_uncowed_parents() to vma_is_fork_child() as the previous name was
unduly confusing, and simplified the comments around this function.

v2:
* Provide a more general solution that fixes the failure raised by Harry
  (thanks very much for raising the issues!)
* Additionally discovered another failure case (prev unfaulted merge with
  faulted). The general solution solves this also.
* Reworked vma_expand() to be more logical and understandable.
* Added vma_merge_copied_range() specifically for mremap() so we abstract
  out the invocation of vma_merge_new_range() to make things a little more
  straightforward.
* Added exhaustive self tests for every case, including the unfaulted,
  faulted, faulted case (which was previously correctly handled by
  vma_expand()).
* Discovered that we are incorrectly allowing merges between
  faulted/unfaulted mremap() for forked VMAs, so adjusted
  is_mergeable_anon_vma() to correctly check for this for the mremap()
  case.
* While I was there, renamed vma_had_uncowed_parents() to
  vma_is_fork_child() as the name was confusing, and removed duplicative
  comments.
* Added self tests to assert correctness for the forked VMA changes.

v1:
https://lore.kernel.org/all/20260102205520.986725-1-lorenzo.stoakes@oracle.com/

Lorenzo Stoakes (4):
  mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge
  tools/testing/selftests: add tests for !tgt, src mremap() merges
  mm/vma: enforce VMA fork limit on unfaulted,faulted mremap merge too
  tools/testing/selftests: add forked (un)/faulted VMA merge tests

 mm/vma.c                           | 111 ++++++---
 mm/vma.h                           |   3 +
 tools/testing/selftests/mm/merge.c | 384 +++++++++++++++++++++++++++--
 3 files changed, 434 insertions(+), 64 deletions(-)

--
2.52.0


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH v2 1/4] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge
  2026-01-05 20:11 [PATCH v2 0/4] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge Lorenzo Stoakes
@ 2026-01-05 20:11 ` Lorenzo Stoakes
  2026-01-06  3:15   ` Harry Yoo
  2026-01-06 15:01   ` Jeongjun Park
  2026-01-05 20:11 ` [PATCH v2 2/4] tools/testing/selftests: add tests for !tgt, src mremap() merges Lorenzo Stoakes
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 9+ messages in thread
From: Lorenzo Stoakes @ 2026-01-05 20:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato,
	Yeoreum Yun, linux-mm, linux-kernel, David Hildenbrand,
	Jeongjun Park, Rik van Riel, Harry Yoo

Commit 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA
merges") introduced the ability to merge previously unavailable VMA merge
scenarios.

The key piece of logic introduced was the ability to merge a faulted VMA
immediately next to an unfaulted VMA, which relies upon dup_anon_vma() to
correctly handle anon_vma state.

In the case of the merge of an existing VMA (that is changing properties
of a VMA and then merging if those properties are shared by adjacent
VMAs), dup_anon_vma() is invoked correctly.

However in the case of the merge of a new VMA, a corner case peculiar to
mremap() was missed.

The issue is that vma_expand() only performs dup_anon_vma() if the target
(the VMA that will ultimately become the merged VMA) is not the next VMA,
i.e. the one that appears after the range in which the new VMA is to be
established.

A key insight here is that in all cases other than mremap(), a new VMA
merge either expands an existing VMA, meaning that the target VMA will be
that VMA, or has no anon_vma in place at all.

Specifically:

* __mmap_region() - no anon_vma in place, initial mapping.
* do_brk_flags() - expanding an existing VMA.
* vma_merge_extend() - expanding an existing VMA.
* relocate_vma_down() - no anon_vma in place, initial mapping.

In addition, we are in the unique situation of needing to duplicate
anon_vma state from a VMA that is neither the previous nor the next VMA
being merged with.

dup_anon_vma() deals exclusively with the target=unfaulted, src=faulted
case. This leaves four possibilities, in each case where the copied VMA is
faulted:

1. Previous VMA unfaulted:

              copied -----|
                          v
	|-----------|.............|
	| unfaulted |(faulted VMA)|
	|-----------|.............|
	     prev

target = prev, expand prev to cover.

2. Next VMA unfaulted:

              copied -----|
                          v
	            |.............|-----------|
	            |(faulted VMA)| unfaulted |
                    |.............|-----------|
		                      next

target = next, expand next to cover.

3. Both adjacent VMAs unfaulted:

              copied -----|
                          v
	|-----------|.............|-----------|
	| unfaulted |(faulted VMA)| unfaulted |
	|-----------|.............|-----------|
	     prev                      next

target = prev, expand prev to cover.

4. prev unfaulted, next faulted:

              copied -----|
                          v
	|-----------|.............|-----------|
	| unfaulted |(faulted VMA)|  faulted  |
	|-----------|.............|-----------|
	     prev                      next

target = prev, expand prev to cover. Essentially equivalent to 3, but with
the additional requirement that next's anon_vma is the same as the copied
VMA's. This is covered by the existing logic.

To account for this very explicitly, we introduce vma_merge_copied_range(),
which sets a newly introduced vmg->copied_from field, then invokes
vma_merge_new_range() which handles the rest of the logic.

We then update the key vma_expand() function to clean up the logic and make
what's going on clearer, making the 'remove next' case less special, before
invoking dup_anon_vma() unconditionally should we be copying from a VMA.

Note that in case 3, the if (remove_next) ... branch will be a no-op, as
next=src in this instance and src is unfaulted.

In case 4, it won't be, but since in this instance next=src and it is
faulted, this will have required tgt=faulted, src=faulted to be compatible,
meaning that next->anon_vma == vmg->copied_from->anon_vma, and thus a
single dup_anon_vma() of next suffices to copy anon_vma state for the
copied-from VMA also.

If we are copying from a VMA in a successful merge we must _always_
propagate anon_vma state.

This issue can be observed most directly by invoking mremap() to move a
VMA around and cause this kind of merge with the MREMAP_DONTUNMAP flag
specified.

This will result in unlink_anon_vmas() being called after failing to
duplicate anon_vma state to the target VMA, which results in the anon_vma
itself being freed with folios still possessing dangling pointers to the
anon_vma and thus a use-after-free bug.

This bug was discovered via a syzbot report, which this patch resolves.

We further update the mergeable anon_vma check to assert that the
copied-from anon_vma did not have CoW parents, as otherwise dup_anon_vma()
might incorrectly propagate CoW ancestors from the next VMA in case 4,
despite the anon_vmas being identical for both VMAs.

Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges")
Reported-by: syzbot+b165fc2e11771c66d8ba@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/694a2745.050a0220.19928e.0017.GAE@google.com/
Cc: stable@kernel.org
---
 mm/vma.c | 84 +++++++++++++++++++++++++++++++++++++++-----------------
 mm/vma.h |  3 ++
 2 files changed, 62 insertions(+), 25 deletions(-)

diff --git a/mm/vma.c b/mm/vma.c
index 6377aa290a27..660f4732f8a5 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -829,6 +829,8 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
 	VM_WARN_ON_VMG(middle &&
 		       !(vma_iter_addr(vmg->vmi) >= middle->vm_start &&
 			 vma_iter_addr(vmg->vmi) < middle->vm_end), vmg);
+	/* An existing merge can never be used by the mremap() logic. */
+	VM_WARN_ON_VMG(vmg->copied_from, vmg);
 
 	vmg->state = VMA_MERGE_NOMERGE;
 
@@ -1098,6 +1100,33 @@ struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg)
 	return NULL;
 }
 
+/*
+ * vma_merge_copied_range - Attempt to merge a VMA that is being copied by
+ * mremap()
+ *
+ * @vmg: Describes the VMA we are adding, in the copied-to range @vmg->start to
+ *       @vmg->end (exclusive), which we try to merge with any adjacent VMAs if
+ *       possible.
+ *
+ * vmg->prev, next, start, end, pgoff should all be relative to the COPIED TO
+ * range, i.e. the target range for the VMA.
+ *
+ * Returns: In instances where no merge was possible, NULL. Otherwise, a pointer
+ *          to the VMA we expanded.
+ *
+ * ASSUMPTIONS: Same as vma_merge_new_range(), except vmg->middle must contain
+ *              the copied-from VMA.
+ */
+static struct vm_area_struct *vma_merge_copied_range(struct vma_merge_struct *vmg)
+{
+	/* We must have a copied-from VMA. */
+	VM_WARN_ON_VMG(!vmg->middle, vmg);
+
+	vmg->copied_from = vmg->middle;
+	vmg->middle = NULL;
+	return vma_merge_new_range(vmg);
+}
+
 /*
  * vma_expand - Expand an existing VMA
  *
@@ -1117,46 +1146,52 @@ struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg)
 int vma_expand(struct vma_merge_struct *vmg)
 {
 	struct vm_area_struct *anon_dup = NULL;
-	bool remove_next = false;
 	struct vm_area_struct *target = vmg->target;
 	struct vm_area_struct *next = vmg->next;
+	bool remove_next = false;
 	vm_flags_t sticky_flags;
-
-	sticky_flags = vmg->vm_flags & VM_STICKY;
-	sticky_flags |= target->vm_flags & VM_STICKY;
-
-	VM_WARN_ON_VMG(!target, vmg);
+	int ret = 0;
 
 	mmap_assert_write_locked(vmg->mm);
-
 	vma_start_write(target);
-	if (next && (target != next) && (vmg->end == next->vm_end)) {
-		int ret;
 
-		sticky_flags |= next->vm_flags & VM_STICKY;
+	if (next && target != next && vmg->end == next->vm_end)
 		remove_next = true;
-		/* This should already have been checked by this point. */
-		VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
-		vma_start_write(next);
-		/*
-		 * In this case we don't report OOM, so vmg->give_up_on_mm is
-		 * safe.
-		 */
-		ret = dup_anon_vma(target, next, &anon_dup);
-		if (ret)
-			return ret;
-	}
 
+	/* We must have a target. */
+	VM_WARN_ON_VMG(!target, vmg);
+	/* This should have already been checked by this point. */
+	VM_WARN_ON_VMG(remove_next && !can_merge_remove_vma(next), vmg);
 	/* Not merging but overwriting any part of next is not handled. */
 	VM_WARN_ON_VMG(next && !remove_next &&
 		       next != target && vmg->end > next->vm_start, vmg);
-	/* Only handles expanding */
+	/* Only handles expanding. */
 	VM_WARN_ON_VMG(target->vm_start < vmg->start ||
 		       target->vm_end > vmg->end, vmg);
 
+	sticky_flags = vmg->vm_flags & VM_STICKY;
+	sticky_flags |= target->vm_flags & VM_STICKY;
 	if (remove_next)
-		vmg->__remove_next = true;
+		sticky_flags |= next->vm_flags & VM_STICKY;
 
+	/*
+	 * If we are removing the next VMA or copying from a VMA
+	 * (e.g. mremap()'ing), we must propagate anon_vma state.
+	 *
+	 * Note that, by convention, callers ignore OOM for this case, so
+	 * we don't need to account for vmg->give_up_on_mm here.
+	 */
+	if (remove_next)
+		ret = dup_anon_vma(target, next, &anon_dup);
+	if (!ret && vmg->copied_from)
+		ret = dup_anon_vma(target, vmg->copied_from, &anon_dup);
+	if (ret)
+		return ret;
+
+	if (remove_next) {
+		vma_start_write(next);
+		vmg->__remove_next = true;
+	}
 	if (commit_merge(vmg))
 		goto nomem;
 
@@ -1828,10 +1863,9 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 	if (new_vma && new_vma->vm_start < addr + len)
 		return NULL;	/* should never get here */
 
-	vmg.middle = NULL; /* New VMA range. */
 	vmg.pgoff = pgoff;
 	vmg.next = vma_iter_next_rewind(&vmi, NULL);
-	new_vma = vma_merge_new_range(&vmg);
+	new_vma = vma_merge_copied_range(&vmg);
 
 	if (new_vma) {
 		/*
diff --git a/mm/vma.h b/mm/vma.h
index e4c7bd79de5f..d51efd9da113 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -106,6 +106,9 @@ struct vma_merge_struct {
 	struct anon_vma_name *anon_name;
 	enum vma_merge_state state;
 
+	/* If copied from (i.e. mremap()'d) the VMA from which we are copying. */
+	struct vm_area_struct *copied_from;
+
 	/* Flags which callers can use to modify merge behaviour: */
 
 	/*
-- 
2.52.0




* [PATCH v2 2/4] tools/testing/selftests: add tests for !tgt, src mremap() merges
  2026-01-05 20:11 [PATCH v2 0/4] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge Lorenzo Stoakes
  2026-01-05 20:11 ` [PATCH v2 1/4] " Lorenzo Stoakes
@ 2026-01-05 20:11 ` Lorenzo Stoakes
  2026-01-05 20:11 ` [PATCH v2 3/4] mm/vma: enforce VMA fork limit on unfaulted,faulted mremap merge too Lorenzo Stoakes
  2026-01-05 20:11 ` [PATCH v2 4/4] tools/testing/selftests: add forked (un)/faulted VMA merge tests Lorenzo Stoakes
  3 siblings, 0 replies; 9+ messages in thread
From: Lorenzo Stoakes @ 2026-01-05 20:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato,
	Yeoreum Yun, linux-mm, linux-kernel, David Hildenbrand,
	Jeongjun Park, Rik van Riel, Harry Yoo

Test that mremap()'ing a VMA into a position such that the target VMA on
merge is unfaulted and the source faulted is correctly performed.

We cover 4 cases:

    1. Previous VMA unfaulted:

                  copied -----|
                              v
            |-----------|.............|
            | unfaulted |(faulted VMA)|
            |-----------|.............|
                 prev

    target = prev, expand prev to cover.

    2. Next VMA unfaulted:

                  copied -----|
                              v
                        |.............|-----------|
                        |(faulted VMA)| unfaulted |
                        |.............|-----------|
                                          next

    target = next, expand next to cover.

    3. Both adjacent VMAs unfaulted:

                  copied -----|
                              v
            |-----------|.............|-----------|
            | unfaulted |(faulted VMA)| unfaulted |
            |-----------|.............|-----------|
                 prev                      next

    target = prev, expand prev to cover.

    4. prev unfaulted, next faulted:

                  copied -----|
                              v
            |-----------|.............|-----------|
            | unfaulted |(faulted VMA)|  faulted  |
            |-----------|.............|-----------|
                 prev                      next

    target = prev, expand prev to cover. Essentially equivalent to 3, but
    with the additional requirement that next's anon_vma is the same as the
    copied VMA's.

Each of these is performed with MREMAP_DONTUNMAP set, which will trigger a
KASAN UAF report, or an assert on a zero-refcount anon_vma, if a bug exists
in propagating anon_vma state in each scenario.

Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges")
Cc: stable@kernel.org
---
 tools/testing/selftests/mm/merge.c | 232 +++++++++++++++++++++++++++++
 1 file changed, 232 insertions(+)

diff --git a/tools/testing/selftests/mm/merge.c b/tools/testing/selftests/mm/merge.c
index 363c1033cc7d..22be149f7109 100644
--- a/tools/testing/selftests/mm/merge.c
+++ b/tools/testing/selftests/mm/merge.c
@@ -1171,4 +1171,236 @@ TEST_F(merge, mremap_correct_placed_faulted)
 	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr + 15 * page_size);
 }
 
+TEST_F(merge, mremap_faulted_to_unfaulted_prev)
+{
+	struct procmap_fd *procmap = &self->procmap;
+	unsigned int page_size = self->page_size;
+	char *ptr_a, *ptr_b;
+
+	/*
+	 * mremap() such that A and B merge:
+	 *
+	 *                             |------------|
+	 *                             |    \       |
+	 *           |-----------|     |    /  |---------|
+	 *           | unfaulted |     v    \  | faulted |
+	 *           |-----------|          /  |---------|
+	 *                 B                \       A
+	 */
+
+	/* Map VMA A into place. */
+	ptr_a = mmap(&self->carveout[page_size + 3 * page_size],
+		     3 * page_size,
+		     PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_a, MAP_FAILED);
+	/* Fault it in. */
+	ptr_a[0] = 'x';
+
+	/*
+	 * Now move it out of the way so we can place VMA B in position,
+	 * unfaulted.
+	 */
+	ptr_a = mremap(ptr_a, 3 * page_size, 3 * page_size,
+		       MREMAP_FIXED | MREMAP_MAYMOVE, &self->carveout[20 * page_size]);
+	ASSERT_NE(ptr_a, MAP_FAILED);
+
+	/* Map VMA B into place. */
+	ptr_b = mmap(&self->carveout[page_size], 3 * page_size,
+		     PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_b, MAP_FAILED);
+
+	/*
+	 * Now move VMA A into position with MREMAP_DONTUNMAP to catch incorrect
+	 * anon_vma propagation.
+	 */
+	ptr_a = mremap(ptr_a, 3 * page_size, 3 * page_size,
+		       MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
+		       &self->carveout[page_size + 3 * page_size]);
+	ASSERT_NE(ptr_a, MAP_FAILED);
+
+	/* The VMAs should have merged. */
+	ASSERT_TRUE(find_vma_procmap(procmap, ptr_b));
+	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_b);
+	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_b + 6 * page_size);
+}
+
+TEST_F(merge, mremap_faulted_to_unfaulted_next)
+{
+	struct procmap_fd *procmap = &self->procmap;
+	unsigned int page_size = self->page_size;
+	char *ptr_a, *ptr_b;
+
+	/*
+	 * mremap() such that A and B merge:
+	 *
+	 *      |---------------------------|
+	 *      |                   \       |
+	 *      |    |-----------|  /  |---------|
+	 *      v    | unfaulted |  \  | faulted |
+	 *           |-----------|  /  |---------|
+	 *                 B        \       A
+	 *
+	 * Then unmap VMA A to trigger the bug.
+	 */
+
+	/* Map VMA A into place. */
+	ptr_a = mmap(&self->carveout[page_size], 3 * page_size,
+		     PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_a, MAP_FAILED);
+	/* Fault it in. */
+	ptr_a[0] = 'x';
+
+	/*
+	 * Now move it out of the way so we can place VMA B in position,
+	 * unfaulted.
+	 */
+	ptr_a = mremap(ptr_a, 3 * page_size, 3 * page_size,
+		       MREMAP_FIXED | MREMAP_MAYMOVE, &self->carveout[20 * page_size]);
+	ASSERT_NE(ptr_a, MAP_FAILED);
+
+	/* Map VMA B into place. */
+	ptr_b = mmap(&self->carveout[page_size + 3 * page_size], 3 * page_size,
+		     PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_b, MAP_FAILED);
+
+	/*
+	 * Now move VMA A into position with MREMAP_DONTUNMAP to catch incorrect
+	 * anon_vma propagation.
+	 */
+	ptr_a = mremap(ptr_a, 3 * page_size, 3 * page_size,
+		       MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
+		       &self->carveout[page_size]);
+	ASSERT_NE(ptr_a, MAP_FAILED);
+
+	/* The VMAs should have merged. */
+	ASSERT_TRUE(find_vma_procmap(procmap, ptr_a));
+	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_a);
+	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_a + 6 * page_size);
+}
+
+TEST_F(merge, mremap_faulted_to_unfaulted_prev_unfaulted_next)
+{
+	struct procmap_fd *procmap = &self->procmap;
+	unsigned int page_size = self->page_size;
+	char *ptr_a, *ptr_b, *ptr_c;
+
+	/*
+	 * mremap() with MREMAP_DONTUNMAP such that A, B and C merge:
+	 *
+	 *                  |---------------------------|
+	 *                  |                   \       |
+	 * |-----------|    |    |-----------|  /  |---------|
+	 * | unfaulted |    v    | unfaulted |  \  | faulted |
+	 * |-----------|         |-----------|  /  |---------|
+	 *       A                     C        \        B
+	 */
+
+	/* Map VMA B into place. */
+	ptr_b = mmap(&self->carveout[page_size + 3 * page_size], 3 * page_size,
+		     PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_b, MAP_FAILED);
+	/* Fault it in. */
+	ptr_b[0] = 'x';
+
+	/*
+	 * Now move it out of the way so we can place VMAs A, C in position,
+	 * unfaulted.
+	 */
+	ptr_b = mremap(ptr_b, 3 * page_size, 3 * page_size,
+		       MREMAP_FIXED | MREMAP_MAYMOVE, &self->carveout[20 * page_size]);
+	ASSERT_NE(ptr_b, MAP_FAILED);
+
+	/* Map VMA A into place. */
+
+	ptr_a = mmap(&self->carveout[page_size], 3 * page_size,
+		     PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_a, MAP_FAILED);
+
+	/* Map VMA C into place. */
+	ptr_c = mmap(&self->carveout[page_size + 3 * page_size + 3 * page_size],
+		     3 * page_size, PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_c, MAP_FAILED);
+
+	/*
+	 * Now move VMA B into position with MREMAP_DONTUNMAP to catch incorrect
+	 * anon_vma propagation.
+	 */
+	ptr_b = mremap(ptr_b, 3 * page_size, 3 * page_size,
+		       MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
+		       &self->carveout[page_size + 3 * page_size]);
+	ASSERT_NE(ptr_b, MAP_FAILED);
+
+	/* The VMAs should have merged. */
+	ASSERT_TRUE(find_vma_procmap(procmap, ptr_a));
+	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_a);
+	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_a + 9 * page_size);
+}
+
+TEST_F(merge, mremap_faulted_to_unfaulted_prev_faulted_next)
+{
+	struct procmap_fd *procmap = &self->procmap;
+	unsigned int page_size = self->page_size;
+	char *ptr_a, *ptr_b, *ptr_bc;
+
+	/*
+	 * mremap() with MREMAP_DONTUNMAP such that A, B and C merge:
+	 *
+	 *                  |---------------------------|
+	 *                  |                   \       |
+	 * |-----------|    |    |-----------|  /  |---------|
+	 * | unfaulted |    v    |  faulted  |  \  | faulted |
+	 * |-----------|         |-----------|  /  |---------|
+	 *       A                     C        \       B
+	 */
+
+	/*
+	 * Map VMA B and C into place. We have to map them together so their
+	 * anon_vma is the same and the vma->vm_pgoff's are correctly aligned.
+	 */
+	ptr_bc = mmap(&self->carveout[page_size + 3 * page_size],
+		      3 * page_size + 3 * page_size,
+		     PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_bc, MAP_FAILED);
+
+	/* Fault it in. */
+	ptr_bc[0] = 'x';
+
+	/*
+	 * Now move VMA B out the way (splitting VMA BC) so we can place VMA A
+	 * in position, unfaulted, and leave the remainder of the VMA we just
+	 * moved in place, faulted, as VMA C.
+	 */
+	ptr_b = mremap(ptr_bc, 3 * page_size, 3 * page_size,
+		       MREMAP_FIXED | MREMAP_MAYMOVE, &self->carveout[20 * page_size]);
+	ASSERT_NE(ptr_b, MAP_FAILED);
+
+	/* Map VMA A into place. */
+	ptr_a = mmap(&self->carveout[page_size], 3 * page_size,
+		     PROT_READ | PROT_WRITE,
+		     MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr_a, MAP_FAILED);
+
+	/*
+	 * Now move VMA B into position with MREMAP_DONTUNMAP to catch incorrect
+	 * anon_vma propagation.
+	 */
+	ptr_b = mremap(ptr_b, 3 * page_size, 3 * page_size,
+		       MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
+		       &self->carveout[page_size + 3 * page_size]);
+	ASSERT_NE(ptr_b, MAP_FAILED);
+
+	/* The VMAs should have merged. */
+	ASSERT_TRUE(find_vma_procmap(procmap, ptr_a));
+	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_a);
+	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_a + 9 * page_size);
+}
+
 TEST_HARNESS_MAIN
-- 
2.52.0




* [PATCH v2 3/4] mm/vma: enforce VMA fork limit on unfaulted,faulted mremap merge too
  2026-01-05 20:11 [PATCH v2 0/4] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge Lorenzo Stoakes
  2026-01-05 20:11 ` [PATCH v2 1/4] " Lorenzo Stoakes
  2026-01-05 20:11 ` [PATCH v2 2/4] tools/testing/selftests: add tests for !tgt, src mremap() merges Lorenzo Stoakes
@ 2026-01-05 20:11 ` Lorenzo Stoakes
  2026-01-06  6:03   ` Harry Yoo
  2026-01-06 15:23   ` Jeongjun Park
  2026-01-05 20:11 ` [PATCH v2 4/4] tools/testing/selftests: add forked (un)/faulted VMA merge tests Lorenzo Stoakes
  3 siblings, 2 replies; 9+ messages in thread
From: Lorenzo Stoakes @ 2026-01-05 20:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato,
	Yeoreum Yun, linux-mm, linux-kernel, David Hildenbrand,
	Jeongjun Park, Rik van Riel, Harry Yoo

The is_mergeable_anon_vma() function uses vmg->middle as the source
VMA. However, when merging a new VMA, this field is NULL.

In all cases except mremap(), the new VMA will either be newly established
and thus lack an anon_vma, or will be an expansion of an existing VMA, in
which case we do not care whether the VMA is CoW'd or not.

In the case of an mremap(), we can end up in a situation where we can
accidentally allow an unfaulted/faulted merge with a VMA that has been
forked, violating the general rule that we do not permit this for reasons
of anon_vma lock scalability.

Now that we can detect that we are copying a VMA, and know which VMA that
is, we can explicitly check for this, so do so.

This is pertinent since commit 879bca0a2c4f ("mm/vma: fix incorrectly
disallowed anonymous VMA merges"), as that commit permits previously
disallowed unfaulted/faulted merges which can run afoul of this issue.

While we are here, vma_had_uncowed_parents() is a confusing name, so make
it simple and rename it to vma_is_fork_child().

Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges")
Cc: stable@kernel.org
---
 mm/vma.c | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/mm/vma.c b/mm/vma.c
index 660f4732f8a5..fb45a6be7417 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -67,18 +67,13 @@ struct mmap_state {
 		.state = VMA_MERGE_START,				\
 	}
 
-/*
- * If, at any point, the VMA had unCoW'd mappings from parents, it will maintain
- * more than one anon_vma_chain connecting it to more than one anon_vma. A merge
- * would mean a wider range of folios sharing the root anon_vma lock, and thus
- * potential lock contention, we do not wish to encourage merging such that this
- * scales to a problem.
- */
-static bool vma_had_uncowed_parents(struct vm_area_struct *vma)
+/* Was this VMA ever forked from a parent, i.e. maybe contains CoW mappings? */
+static bool vma_is_fork_child(struct vm_area_struct *vma)
 {
 	/*
 	 * The list_is_singular() test is to avoid merging VMA cloned from
-	 * parents. This can improve scalability caused by anon_vma lock.
+	 * parents. This can improve scalability caused by the anon_vma root
+	 * lock.
 	 */
 	return vma && vma->anon_vma && !list_is_singular(&vma->anon_vma_chain);
 }
@@ -115,11 +110,19 @@ static bool is_mergeable_anon_vma(struct vma_merge_struct *vmg, bool merge_next)
 	VM_WARN_ON(src && src_anon != src->anon_vma);
 
 	/* Case 1 - we will dup_anon_vma() from src into tgt. */
-	if (!tgt_anon && src_anon)
-		return !vma_had_uncowed_parents(src);
+	if (!tgt_anon && src_anon) {
+		struct vm_area_struct *copied_from = vmg->copied_from;
+
+		if (vma_is_fork_child(src))
+			return false;
+		if (vma_is_fork_child(copied_from))
+			return false;
+
+		return true;
+	}
 	/* Case 2 - we will simply use tgt's anon_vma. */
 	if (tgt_anon && !src_anon)
-		return !vma_had_uncowed_parents(tgt);
+		return !vma_is_fork_child(tgt);
 	/* Case 3 - the anon_vma's are already shared. */
 	return src_anon == tgt_anon;
 }
-- 
2.52.0




* [PATCH v2 4/4] tools/testing/selftests: add forked (un)/faulted VMA merge tests
  2026-01-05 20:11 [PATCH v2 0/4] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge Lorenzo Stoakes
                   ` (2 preceding siblings ...)
  2026-01-05 20:11 ` [PATCH v2 3/4] mm/vma: enforce VMA fork limit on unfaulted,faulted mremap merge too Lorenzo Stoakes
@ 2026-01-05 20:11 ` Lorenzo Stoakes
  3 siblings, 0 replies; 9+ messages in thread
From: Lorenzo Stoakes @ 2026-01-05 20:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato,
	Yeoreum Yun, linux-mm, linux-kernel, David Hildenbrand,
	Jeongjun Park, Rik van Riel, Harry Yoo

Now that we correctly handle forked faulted/unfaulted merges on mremap(),
exhaustively assert that we handle them correctly.

Do this in a less duplicative way by adding a new merge_with_fork fixture
with forked/unforked variants, abstracting out the forking logic as needed
to avoid code duplication.

Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges")
Cc: stable@kernel.org
---
 tools/testing/selftests/mm/merge.c | 180 ++++++++++++++++++++++-------
 1 file changed, 139 insertions(+), 41 deletions(-)

diff --git a/tools/testing/selftests/mm/merge.c b/tools/testing/selftests/mm/merge.c
index 22be149f7109..10b686102b79 100644
--- a/tools/testing/selftests/mm/merge.c
+++ b/tools/testing/selftests/mm/merge.c
@@ -22,12 +22,37 @@ FIXTURE(merge)
 	struct procmap_fd procmap;
 };
 
+static char *map_carveout(unsigned int page_size)
+{
+	return mmap(NULL, 30 * page_size, PROT_NONE,
+		    MAP_ANON | MAP_PRIVATE, -1, 0);
+}
+
+static pid_t do_fork(struct procmap_fd *procmap)
+{
+	pid_t pid = fork();
+
+	if (pid == -1)
+		return -1;
+	if (pid != 0) {
+		wait(NULL);
+		return pid;
+	}
+
+	/* Reopen for child. */
+	if (close_procmap(procmap))
+		return -1;
+	if (open_self_procmap(procmap))
+		return -1;
+
+	return 0;
+}
+
 FIXTURE_SETUP(merge)
 {
 	self->page_size = psize();
 	/* Carve out PROT_NONE region to map over. */
-	self->carveout = mmap(NULL, 30 * self->page_size, PROT_NONE,
-			      MAP_ANON | MAP_PRIVATE, -1, 0);
+	self->carveout = map_carveout(self->page_size);
 	ASSERT_NE(self->carveout, MAP_FAILED);
 	/* Setup PROCMAP_QUERY interface. */
 	ASSERT_EQ(open_self_procmap(&self->procmap), 0);
@@ -36,7 +61,8 @@ FIXTURE_SETUP(merge)
 FIXTURE_TEARDOWN(merge)
 {
 	ASSERT_EQ(munmap(self->carveout, 30 * self->page_size), 0);
-	ASSERT_EQ(close_procmap(&self->procmap), 0);
+	/* May fail for parent of forked process. */
+	close_procmap(&self->procmap);
 	/*
 	 * Clear unconditionally, as some tests set this. It is no issue if this
 	 * fails (KSM may be disabled for instance).
@@ -44,6 +70,44 @@ FIXTURE_TEARDOWN(merge)
 	prctl(PR_SET_MEMORY_MERGE, 0, 0, 0, 0);
 }
 
+FIXTURE(merge_with_fork)
+{
+	unsigned int page_size;
+	char *carveout;
+	struct procmap_fd procmap;
+};
+
+FIXTURE_VARIANT(merge_with_fork)
+{
+	bool forked;
+};
+
+FIXTURE_VARIANT_ADD(merge_with_fork, forked)
+{
+	.forked = true,
+};
+
+FIXTURE_VARIANT_ADD(merge_with_fork, unforked)
+{
+	.forked = false,
+};
+
+FIXTURE_SETUP(merge_with_fork)
+{
+	self->page_size = psize();
+	self->carveout = map_carveout(self->page_size);
+	ASSERT_NE(self->carveout, MAP_FAILED);
+	ASSERT_EQ(open_self_procmap(&self->procmap), 0);
+}
+
+FIXTURE_TEARDOWN(merge_with_fork)
+{
+	ASSERT_EQ(munmap(self->carveout, 30 * self->page_size), 0);
+	ASSERT_EQ(close_procmap(&self->procmap), 0);
+	/* See above. */
+	prctl(PR_SET_MEMORY_MERGE, 0, 0, 0, 0);
+}
+
 TEST_F(merge, mprotect_unfaulted_left)
 {
 	unsigned int page_size = self->page_size;
@@ -322,8 +386,8 @@ TEST_F(merge, forked_target_vma)
 	unsigned int page_size = self->page_size;
 	char *carveout = self->carveout;
 	struct procmap_fd *procmap = &self->procmap;
-	pid_t pid;
 	char *ptr, *ptr2;
+	pid_t pid;
 	int i;
 
 	/*
@@ -344,19 +408,10 @@ TEST_F(merge, forked_target_vma)
 	 */
 	ptr[0] = 'x';
 
-	pid = fork();
+	pid = do_fork(&self->procmap);
 	ASSERT_NE(pid, -1);
-
-	if (pid != 0) {
-		wait(NULL);
+	if (pid != 0)
 		return;
-	}
-
-	/* Child process below: */
-
-	/* Reopen for child. */
-	ASSERT_EQ(close_procmap(&self->procmap), 0);
-	ASSERT_EQ(open_self_procmap(&self->procmap), 0);
 
 	/* unCOWing everything does not cause the AVC to go away. */
 	for (i = 0; i < 5 * page_size; i += page_size)
@@ -386,8 +441,8 @@ TEST_F(merge, forked_source_vma)
 	unsigned int page_size = self->page_size;
 	char *carveout = self->carveout;
 	struct procmap_fd *procmap = &self->procmap;
-	pid_t pid;
 	char *ptr, *ptr2;
+	pid_t pid;
 	int i;
 
 	/*
@@ -408,19 +463,10 @@ TEST_F(merge, forked_source_vma)
 	 */
 	ptr[0] = 'x';
 
-	pid = fork();
+	pid = do_fork(&self->procmap);
 	ASSERT_NE(pid, -1);
-
-	if (pid != 0) {
-		wait(NULL);
+	if (pid != 0)
 		return;
-	}
-
-	/* Child process below: */
-
-	/* Reopen for child. */
-	ASSERT_EQ(close_procmap(&self->procmap), 0);
-	ASSERT_EQ(open_self_procmap(&self->procmap), 0);
 
 	/* unCOWing everything does not cause the AVC to go away. */
 	for (i = 0; i < 5 * page_size; i += page_size)
@@ -1171,10 +1217,11 @@ TEST_F(merge, mremap_correct_placed_faulted)
 	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr + 15 * page_size);
 }
 
-TEST_F(merge, mremap_faulted_to_unfaulted_prev)
+TEST_F(merge_with_fork, mremap_faulted_to_unfaulted_prev)
 {
 	struct procmap_fd *procmap = &self->procmap;
 	unsigned int page_size = self->page_size;
+	unsigned long offset;
 	char *ptr_a, *ptr_b;
 
 	/*
@@ -1197,6 +1244,14 @@ TEST_F(merge, mremap_faulted_to_unfaulted_prev)
 	/* Fault it in. */
 	ptr_a[0] = 'x';
 
+	if (variant->forked) {
+		pid_t pid = do_fork(&self->procmap);
+
+		ASSERT_NE(pid, -1);
+		if (pid != 0)
+			return;
+	}
+
 	/*
 	 * Now move it out of the way so we can place VMA B in position,
 	 * unfaulted.
@@ -1220,16 +1275,19 @@ TEST_F(merge, mremap_faulted_to_unfaulted_prev)
 		       &self->carveout[page_size + 3 * page_size]);
 	ASSERT_NE(ptr_a, MAP_FAILED);
 
-	/* The VMAs should have merged. */
+	/* The VMAs should have merged, if not forked. */
 	ASSERT_TRUE(find_vma_procmap(procmap, ptr_b));
 	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_b);
-	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_b + 6 * page_size);
+
+	offset = variant->forked ? 3 * page_size : 6 * page_size;
+	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_b + offset);
 }
 
-TEST_F(merge, mremap_faulted_to_unfaulted_next)
+TEST_F(merge_with_fork, mremap_faulted_to_unfaulted_next)
 {
 	struct procmap_fd *procmap = &self->procmap;
 	unsigned int page_size = self->page_size;
+	unsigned long offset;
 	char *ptr_a, *ptr_b;
 
 	/*
@@ -1253,6 +1311,14 @@ TEST_F(merge, mremap_faulted_to_unfaulted_next)
 	/* Fault it in. */
 	ptr_a[0] = 'x';
 
+	if (variant->forked) {
+		pid_t pid = do_fork(&self->procmap);
+
+		ASSERT_NE(pid, -1);
+		if (pid != 0)
+			return;
+	}
+
 	/*
 	 * Now move it out of the way so we can place VMA B in position,
 	 * unfaulted.
@@ -1276,16 +1342,18 @@ TEST_F(merge, mremap_faulted_to_unfaulted_next)
 		       &self->carveout[page_size]);
 	ASSERT_NE(ptr_a, MAP_FAILED);
 
-	/* The VMAs should have merged. */
+	/* The VMAs should have merged, if not forked. */
 	ASSERT_TRUE(find_vma_procmap(procmap, ptr_a));
 	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_a);
-	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_a + 6 * page_size);
+	offset = variant->forked ? 3 * page_size : 6 * page_size;
+	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_a + offset);
 }
 
-TEST_F(merge, mremap_faulted_to_unfaulted_prev_unfaulted_next)
+TEST_F(merge_with_fork, mremap_faulted_to_unfaulted_prev_unfaulted_next)
 {
 	struct procmap_fd *procmap = &self->procmap;
 	unsigned int page_size = self->page_size;
+	unsigned long offset;
 	char *ptr_a, *ptr_b, *ptr_c;
 
 	/*
@@ -1307,6 +1375,14 @@ TEST_F(merge, mremap_faulted_to_unfaulted_prev_unfaulted_next)
 	/* Fault it in. */
 	ptr_b[0] = 'x';
 
+	if (variant->forked) {
+		pid_t pid = do_fork(&self->procmap);
+
+		ASSERT_NE(pid, -1);
+		if (pid != 0)
+			return;
+	}
+
 	/*
 	 * Now move it out of the way so we can place VMAs A, C in position,
 	 * unfaulted.
@@ -1337,13 +1413,21 @@ TEST_F(merge, mremap_faulted_to_unfaulted_prev_unfaulted_next)
 		       &self->carveout[page_size + 3 * page_size]);
 	ASSERT_NE(ptr_b, MAP_FAILED);
 
-	/* The VMAs should have merged. */
+	/* The VMAs should have merged, if not forked. */
 	ASSERT_TRUE(find_vma_procmap(procmap, ptr_a));
 	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_a);
-	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_a + 9 * page_size);
+	offset = variant->forked ? 3 * page_size : 9 * page_size;
+	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_a + offset);
+
+	/* If forked, B and C should also not have merged. */
+	if (variant->forked) {
+		ASSERT_TRUE(find_vma_procmap(procmap, ptr_b));
+		ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_b);
+		ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_b + 3 * page_size);
+	}
 }
 
-TEST_F(merge, mremap_faulted_to_unfaulted_prev_faulted_next)
+TEST_F(merge_with_fork, mremap_faulted_to_unfaulted_prev_faulted_next)
 {
 	struct procmap_fd *procmap = &self->procmap;
 	unsigned int page_size = self->page_size;
@@ -1373,6 +1457,14 @@ TEST_F(merge, mremap_faulted_to_unfaulted_prev_faulted_next)
 	/* Fault it in. */
 	ptr_bc[0] = 'x';
 
+	if (variant->forked) {
+		pid_t pid = do_fork(&self->procmap);
+
+		ASSERT_NE(pid, -1);
+		if (pid != 0)
+			return;
+	}
+
 	/*
 	 * Now move VMA B out the way (splitting VMA BC) so we can place VMA A
 	 * in position, unfaulted, and leave the remainder of the VMA we just
@@ -1397,10 +1489,16 @@ TEST_F(merge, mremap_faulted_to_unfaulted_prev_faulted_next)
 		       &self->carveout[page_size + 3 * page_size]);
 	ASSERT_NE(ptr_b, MAP_FAILED);
 
-	/* The VMAs should have merged. */
-	ASSERT_TRUE(find_vma_procmap(procmap, ptr_a));
-	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_a);
-	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_a + 9 * page_size);
+	/* The VMAs should have merged. A,B,C if unforked, B, C if forked. */
+	if (variant->forked) {
+		ASSERT_TRUE(find_vma_procmap(procmap, ptr_b));
+		ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_b);
+		ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_b + 6 * page_size);
+	} else {
+		ASSERT_TRUE(find_vma_procmap(procmap, ptr_a));
+		ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_a);
+		ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_a + 9 * page_size);
+	}
 }
 
 TEST_HARNESS_MAIN
-- 
2.52.0




* Re: [PATCH v2 1/4] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge
  2026-01-05 20:11 ` [PATCH v2 1/4] " Lorenzo Stoakes
@ 2026-01-06  3:15   ` Harry Yoo
  2026-01-06 15:01   ` Jeongjun Park
  1 sibling, 0 replies; 9+ messages in thread
From: Harry Yoo @ 2026-01-06  3:15 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Andrew Morton, Liam R . Howlett, Vlastimil Babka, Jann Horn,
	Pedro Falcato, Yeoreum Yun, linux-mm, linux-kernel,
	David Hildenbrand, Jeongjun Park, Rik van Riel

On Mon, Jan 05, 2026 at 08:11:47PM +0000, Lorenzo Stoakes wrote:
> Commit 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA
> merges") introduced the ability to merge previously unavailable VMA merge
> scenarios.
> 
> The key piece of logic introduced was the ability to merge a faulted VMA
> immediately next to an unfaulted VMA, which relies upon dup_anon_vma() to
> correctly handle anon_vma state.
> 
> In the case of the merge of an existing VMA (that is changing properties
> of a VMA and then merging if those properties are shared by adjacent
> VMAs), dup_anon_vma() is invoked correctly.
> 
> However in the case of the merge of a new VMA, a corner case peculiar to
> mremap() was missed.
> 
> The issue is that vma_expand() only performs dup_anon_vma() if the target
> (the VMA that will ultimately become the merged VMA) is not the next VMA,
> i.e. the one that appears after the range in which the new VMA is to be
> established.
> 
> A key insight here is that in all cases other than mremap(), a new
> VMA merge either expands an existing VMA, meaning that the target VMA will
> be that VMA, or would have anon_vma be NULL.
> 
> Specifically:
> 
> * __mmap_region() - no anon_vma in place, initial mapping.
> * do_brk_flags() - expanding an existing VMA.
> * vma_merge_extend() - expanding an existing VMA.
> * relocate_vma_down() - no anon_vma in place, initial mapping.
> 
> In addition, we are in the unique situation of needing to duplicate
> anon_vma state from a VMA that is neither the previous nor the next VMA being
> merged with.
> 
> dup_anon_vma() deals exclusively with the target=unfaulted, src=faulted
> case. This leaves four possibilities, in each case where the copied VMA is
> faulted:
> 
> 1. Previous VMA unfaulted:
> 
>               copied -----|
>                           v
> 	|-----------|.............|
> 	| unfaulted |(faulted VMA)|
> 	|-----------|.............|
> 	     prev
> 
> target = prev, expand prev to cover.

Oops, I missed this case!

> 2. Next VMA unfaulted:
> 
>               copied -----|
>                           v
> 	            |.............|-----------|
> 	            |(faulted VMA)| unfaulted |
>                     |.............|-----------|
> 		                      next
> 
> target = next, expand next to cover.
> 
> 3. Both adjacent VMAs unfaulted:
> 
>               copied -----|
>                           v
> 	|-----------|.............|-----------|
> 	| unfaulted |(faulted VMA)| unfaulted |
> 	|-----------|.............|-----------|
> 	     prev                      next
> 
> target = prev, expand prev to cover.
> 
> 4. prev unfaulted, next faulted:
> 
>               copied -----|
>                           v
> 	|-----------|.............|-----------|
> 	| unfaulted |(faulted VMA)|  faulted  |
> 	|-----------|.............|-----------|
> 	     prev                      next
> 
> target = prev, expand prev to cover. Essentially equivalent to 3, but with
> additional requirement that next's anon_vma is the same as the copied
> VMA's. This is covered by the existing logic.
> 
> To account for this very explicitly, we introduce vma_merge_copied_range(),
> which sets a newly introduced vmg->copied_from field, then invokes
> vma_merge_new_range() which handles the rest of the logic.
> 
> We then update the key vma_expand() function to clean up the logic and make
> what's going on clearer, making the 'remove next' case less special, before
> invoking dup_anon_vma() unconditionally should we be copying from a VMA.
> 
> Note that in case 3, the if (remove_next) ... branch will be a no-op, as
> next=src in this instance and src is unfaulted.
> 
> In case 4, it won't be, but since in this instance next=src and it is
> faulted, this will have required tgt=faulted, src=faulted to be compatible,
> meaning that next->anon_vma == vmg->copied_from->anon_vma, and thus a
> single dup_anon_vma() of next suffices to copy anon_vma state for the
> copied-from VMA also.

Makes sense.

> If we are copying from a VMA in a successful merge we must _always_
> propagate anon_vma state.
> 
> This issue can be observed most directly by invoking mremap() to move
> around a VMA and cause this kind of merge with the MREMAP_DONTUNMAP flag
> specified.
> 
> This will result in unlink_anon_vmas() being called after failing to
> duplicate anon_vma state to the target VMA, which results in the anon_vma
> itself being freed with folios still possessing dangling pointers to the
> anon_vma and thus a use-after-free bug.
> 
> This bug was discovered via a syzbot report, which this patch resolves.
 
> We further make a change to update the mergeable anon_vma check to assert
> the copied-from anon_vma did not have CoW parents, as otherwise

I guess that part is in patch 3/4.

> dup_anon_vma() might incorrectly propagate CoW ancestors from the next VMA
> in case 4 despite the anon_vma's being identical for both VMAs.
>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges")
> Reported-by: syzbot+b165fc2e11771c66d8ba@syzkaller.appspotmail.com
> Closes: https://lore.kernel.org/all/694a2745.050a0220.19928e.0017.GAE@google.com/
> Cc: stable@kernel.org
> ---

Looks good to me, so:
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>

-- 
Cheers,
Harry / Hyeonggon



* Re: [PATCH v2 3/4] mm/vma: enforce VMA fork limit on unfaulted,faulted mremap merge too
  2026-01-05 20:11 ` [PATCH v2 3/4] mm/vma: enforce VMA fork limit on unfaulted,faulted mremap merge too Lorenzo Stoakes
@ 2026-01-06  6:03   ` Harry Yoo
  2026-01-06 15:23   ` Jeongjun Park
  1 sibling, 0 replies; 9+ messages in thread
From: Harry Yoo @ 2026-01-06  6:03 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Andrew Morton, Liam R . Howlett, Vlastimil Babka, Jann Horn,
	Pedro Falcato, Yeoreum Yun, linux-mm, linux-kernel,
	David Hildenbrand, Jeongjun Park, Rik van Riel

On Mon, Jan 05, 2026 at 08:11:49PM +0000, Lorenzo Stoakes wrote:
> The is_mergeable_anon_vma() function uses vmg->middle as the source
> VMA. However when merging a new VMA, this field is NULL.
> 
> In all cases except mremap(), the new VMA will either be newly established
> and thus lack an anon_vma, or will be an expansion of an existing VMA, in
> which case we do not care whether the VMA is CoW'd or not.
> 
> In the case of an mremap(), we can end up in a situation where we can
> accidentally allow an unfaulted/faulted merge with a VMA that has been
> forked, violating the general rule that we do not permit this for reasons
> of anon_vma lock scalability.
> 
> Now we have the ability to be aware of the fact we are copying a VMA and
> also know which VMA that is, we can explicitly check for this, so do so.
> 
> This is pertinent since commit 879bca0a2c4f ("mm/vma: fix incorrectly
> disallowed anonymous VMA merges"), as this patch permits unfaulted/faulted
> merges that were previously disallowed, which can run afoul of this issue.
> 
> While we are here, vma_had_uncowed_parents() is a confusing name, so make
> it simple and rename it to vma_is_fork_child().
> 
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges")
> Cc: stable@kernel.org
> ---

LGTM, so:
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>

-- 
Cheers,
Harry / Hyeonggon



* Re: [PATCH v2 1/4] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge
  2026-01-05 20:11 ` [PATCH v2 1/4] " Lorenzo Stoakes
  2026-01-06  3:15   ` Harry Yoo
@ 2026-01-06 15:01   ` Jeongjun Park
  1 sibling, 0 replies; 9+ messages in thread
From: Jeongjun Park @ 2026-01-06 15:01 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Andrew Morton, Liam R . Howlett, Vlastimil Babka, Jann Horn,
	Pedro Falcato, Yeoreum Yun, linux-mm, linux-kernel,
	David Hildenbrand, Rik van Riel, Harry Yoo

Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:
>
> Commit 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA
> merges") introduced the ability to merge previously unavailable VMA merge
> scenarios.
>
> The key piece of logic introduced was the ability to merge a faulted VMA
> immediately next to an unfaulted VMA, which relies upon dup_anon_vma() to
> correctly handle anon_vma state.
>
> In the case of the merge of an existing VMA (that is changing properties
> of a VMA and then merging if those properties are shared by adjacent
> VMAs), dup_anon_vma() is invoked correctly.
>
> However in the case of the merge of a new VMA, a corner case peculiar to
> mremap() was missed.
>
> The issue is that vma_expand() only performs dup_anon_vma() if the target
> (the VMA that will ultimately become the merged VMA) is not the next VMA,
> i.e. the one that appears after the range in which the new VMA is to be
> established.
>
> A key insight here is that in all cases other than mremap(), a new
> VMA merge either expands an existing VMA, meaning that the target VMA will
> be that VMA, or would have anon_vma be NULL.
>
> Specifically:
>
> * __mmap_region() - no anon_vma in place, initial mapping.
> * do_brk_flags() - expanding an existing VMA.
> * vma_merge_extend() - expanding an existing VMA.
> * relocate_vma_down() - no anon_vma in place, initial mapping.
>
> In addition, we are in the unique situation of needing to duplicate
> anon_vma state from a VMA that is neither the previous nor the next VMA being
> merged with.
>
> dup_anon_vma() deals exclusively with the target=unfaulted, src=faulted
> case. This leaves four possibilities, in each case where the copied VMA is
> faulted:
>
> 1. Previous VMA unfaulted:
>
>               copied -----|
>                           v
>         |-----------|.............|
>         | unfaulted |(faulted VMA)|
>         |-----------|.............|
>              prev
>
> target = prev, expand prev to cover.
>
> 2. Next VMA unfaulted:
>
>               copied -----|
>                           v
>                     |.............|-----------|
>                     |(faulted VMA)| unfaulted |
>                     |.............|-----------|
>                                       next
>
> target = next, expand next to cover.
>
> 3. Both adjacent VMAs unfaulted:
>
>               copied -----|
>                           v
>         |-----------|.............|-----------|
>         | unfaulted |(faulted VMA)| unfaulted |
>         |-----------|.............|-----------|
>              prev                      next
>
> target = prev, expand prev to cover.
>
> 4. prev unfaulted, next faulted:
>
>               copied -----|
>                           v
>         |-----------|.............|-----------|
>         | unfaulted |(faulted VMA)|  faulted  |
>         |-----------|.............|-----------|
>              prev                      next
>
> target = prev, expand prev to cover. Essentially equivalent to 3, but with
> additional requirement that next's anon_vma is the same as the copied
> VMA's. This is covered by the existing logic.
>
> To account for this very explicitly, we introduce vma_merge_copied_range(),
> which sets a newly introduced vmg->copied_from field, then invokes
> vma_merge_new_range() which handles the rest of the logic.
>
> We then update the key vma_expand() function to clean up the logic and make
> what's going on clearer, making the 'remove next' case less special, before
> invoking dup_anon_vma() unconditionally should we be copying from a VMA.
>
> Note that in case 3, the if (remove_next) ... branch will be a no-op, as
> next=src in this instance and src is unfaulted.
>
> In case 4, it won't be, but since in this instance next=src and it is
> faulted, this will have required tgt=faulted, src=faulted to be compatible,
> meaning that next->anon_vma == vmg->copied_from->anon_vma, and thus a
> single dup_anon_vma() of next suffices to copy anon_vma state for the
> copied-from VMA also.
>
> If we are copying from a VMA in a successful merge we must _always_
> propagate anon_vma state.
>
> This issue can be observed most directly by invoking mremap() to move
> around a VMA and cause this kind of merge with the MREMAP_DONTUNMAP flag
> specified.
>
> This will result in unlink_anon_vmas() being called after failing to
> duplicate anon_vma state to the target VMA, which results in the anon_vma
> itself being freed with folios still possessing dangling pointers to the
> anon_vma and thus a use-after-free bug.
>
> This bug was discovered via a syzbot report, which this patch resolves.
>
> We further make a change to update the mergeable anon_vma check to assert
> the copied-from anon_vma did not have CoW parents, as otherwise
> dup_anon_vma() might incorrectly propagate CoW ancestors from the next VMA
> in case 4 despite the anon_vma's being identical for both VMAs.
>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges")
> Reported-by: syzbot+b165fc2e11771c66d8ba@syzkaller.appspotmail.com
> Closes: https://lore.kernel.org/all/694a2745.050a0220.19928e.0017.GAE@google.com/
> Cc: stable@kernel.org
> ---

Wow, I didn't know there would be this many problems. LGTM
Reviewed-by: Jeongjun Park <aha310510@gmail.com>

And this syzbot report seems to have the same root cause.
Reported-by: syzbot+5272541ccbbb14e2ec30@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/694e3dc6.050a0220.35954c.0066.GAE@google.com/
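For anyone wanting to poke at this from userspace, the case 1 layout the
commit message describes can be driven roughly as follows. This is a hedged
sketch of the scenario only, not the syzkaller reproducer; MREMAP_DONTUNMAP
requires kernel 5.7+, and the helper name is made up here:

```c
#define _GNU_SOURCE /* for the mremap() flags */
#include <assert.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MREMAP_DONTUNMAP
#define MREMAP_DONTUNMAP 4 /* older libc headers may lack this */
#endif

/*
 * Drive case 1 (unfaulted prev, faulted copied-from VMA): fault a VMA, then
 * mremap() it flush against an unfaulted neighbour with MREMAP_DONTUNMAP, so
 * the old VMA is left in place and the kernel must propagate anon_vma state
 * into the merge target.
 *
 * Returns 1 if the layout was established, 0 if the kernel rejects
 * MREMAP_DONTUNMAP, -1 on setup failure.
 */
static int drive_case1(void)
{
	size_t ps = (size_t)sysconf(_SC_PAGESIZE);
	char *carveout, *prev, *src, *moved;

	/* PROT_NONE carveout so the target addresses are guaranteed free. */
	carveout = mmap(NULL, 12 * ps, PROT_NONE,
			MAP_ANON | MAP_PRIVATE, -1, 0);
	if (carveout == MAP_FAILED)
		return -1;

	/* Unfaulted prev VMA: no anon_vma yet. */
	prev = mmap(carveout + ps, 3 * ps, PROT_READ | PROT_WRITE,
		    MAP_ANON | MAP_PRIVATE | MAP_FIXED, -1, 0);
	if (prev == MAP_FAILED)
		return -1;

	/* Faulted VMA elsewhere, to be moved into place. */
	src = mmap(NULL, 3 * ps, PROT_READ | PROT_WRITE,
		   MAP_ANON | MAP_PRIVATE, -1, 0);
	if (src == MAP_FAILED)
		return -1;
	src[0] = 'x'; /* Write fault attaches an anon_vma. */

	/* Case 1: faulted VMA copied immediately after unfaulted prev. */
	moved = mremap(src, 3 * ps, 3 * ps,
		       MREMAP_MAYMOVE | MREMAP_FIXED | MREMAP_DONTUNMAP,
		       prev + 3 * ps);
	if (moved == MAP_FAILED)
		return 0;
	return moved == prev + 3 * ps;
}
```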

>  mm/vma.c | 84 +++++++++++++++++++++++++++++++++++++++-----------------
>  mm/vma.h |  3 ++
>  2 files changed, 62 insertions(+), 25 deletions(-)
>
> diff --git a/mm/vma.c b/mm/vma.c
> index 6377aa290a27..660f4732f8a5 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -829,6 +829,8 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
>         VM_WARN_ON_VMG(middle &&
>                        !(vma_iter_addr(vmg->vmi) >= middle->vm_start &&
>                          vma_iter_addr(vmg->vmi) < middle->vm_end), vmg);
> +       /* An existing merge can never be used by the mremap() logic. */
> +       VM_WARN_ON_VMG(vmg->copied_from, vmg);
>
>         vmg->state = VMA_MERGE_NOMERGE;
>
> @@ -1098,6 +1100,33 @@ struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg)
>         return NULL;
>  }
>
> +/*
> + * vma_merge_copied_range - Attempt to merge a VMA that is being copied by
> + * mremap()
> + *
> + * @vmg: Describes the VMA we are adding, in the copied-to range @vmg->start to
> + *       @vmg->end (exclusive), which we try to merge with any adjacent VMAs if
> + *       possible.
> + *
> + * vmg->prev, next, start, end, pgoff should all be relative to the COPIED TO
> + * range, i.e. the target range for the VMA.
> + *
> + * Returns: In instances where no merge was possible, NULL. Otherwise, a pointer
> + *          to the VMA we expanded.
> + *
> + * ASSUMPTIONS: Same as vma_merge_new_range(), except vmg->middle must contain
> + *              the copied-from VMA.
> + */
> +static struct vm_area_struct *vma_merge_copied_range(struct vma_merge_struct *vmg)
> +{
> +       /* We must have a copied-from VMA. */
> +       VM_WARN_ON_VMG(!vmg->middle, vmg);
> +
> +       vmg->copied_from = vmg->middle;
> +       vmg->middle = NULL;
> +       return vma_merge_new_range(vmg);
> +}
> +
>  /*
>   * vma_expand - Expand an existing VMA
>   *
> @@ -1117,46 +1146,52 @@ struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg)
>  int vma_expand(struct vma_merge_struct *vmg)
>  {
>         struct vm_area_struct *anon_dup = NULL;
> -       bool remove_next = false;
>         struct vm_area_struct *target = vmg->target;
>         struct vm_area_struct *next = vmg->next;
> +       bool remove_next = false;
>         vm_flags_t sticky_flags;
> -
> -       sticky_flags = vmg->vm_flags & VM_STICKY;
> -       sticky_flags |= target->vm_flags & VM_STICKY;
> -
> -       VM_WARN_ON_VMG(!target, vmg);
> +       int ret = 0;
>
>         mmap_assert_write_locked(vmg->mm);
> -
>         vma_start_write(target);
> -       if (next && (target != next) && (vmg->end == next->vm_end)) {
> -               int ret;
>
> -               sticky_flags |= next->vm_flags & VM_STICKY;
> +       if (next && target != next && vmg->end == next->vm_end)
>                 remove_next = true;
> -               /* This should already have been checked by this point. */
> -               VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
> -               vma_start_write(next);
> -               /*
> -                * In this case we don't report OOM, so vmg->give_up_on_mm is
> -                * safe.
> -                */
> -               ret = dup_anon_vma(target, next, &anon_dup);
> -               if (ret)
> -                       return ret;
> -       }
>
> +       /* We must have a target. */
> +       VM_WARN_ON_VMG(!target, vmg);
> +       /* This should have already been checked by this point. */
> +       VM_WARN_ON_VMG(remove_next && !can_merge_remove_vma(next), vmg);
>         /* Not merging but overwriting any part of next is not handled. */
>         VM_WARN_ON_VMG(next && !remove_next &&
>                        next != target && vmg->end > next->vm_start, vmg);
> -       /* Only handles expanding */
> +       /* Only handles expanding. */
>         VM_WARN_ON_VMG(target->vm_start < vmg->start ||
>                        target->vm_end > vmg->end, vmg);
>
> +       sticky_flags = vmg->vm_flags & VM_STICKY;
> +       sticky_flags |= target->vm_flags & VM_STICKY;
>         if (remove_next)
> -               vmg->__remove_next = true;
> +               sticky_flags |= next->vm_flags & VM_STICKY;
>
> +       /*
> +        * If we are removing the next VMA or copying from a VMA
> +        * (e.g. mremap()'ing), we must propagate anon_vma state.
> +        *
> +        * Note that, by convention, callers ignore OOM for this case, so
> +        * we don't need to account for vmg->give_up_on_mm here.
> +        */
> +       if (remove_next)
> +               ret = dup_anon_vma(target, next, &anon_dup);
> +       if (!ret && vmg->copied_from)
> +               ret = dup_anon_vma(target, vmg->copied_from, &anon_dup);
> +       if (ret)
> +               return ret;
> +
> +       if (remove_next) {
> +               vma_start_write(next);
> +               vmg->__remove_next = true;
> +       }
>         if (commit_merge(vmg))
>                 goto nomem;
>
> @@ -1828,10 +1863,9 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
>         if (new_vma && new_vma->vm_start < addr + len)
>                 return NULL;    /* should never get here */
>
> -       vmg.middle = NULL; /* New VMA range. */
>         vmg.pgoff = pgoff;
>         vmg.next = vma_iter_next_rewind(&vmi, NULL);
> -       new_vma = vma_merge_new_range(&vmg);
> +       new_vma = vma_merge_copied_range(&vmg);
>
>         if (new_vma) {
>                 /*
> diff --git a/mm/vma.h b/mm/vma.h
> index e4c7bd79de5f..d51efd9da113 100644
> --- a/mm/vma.h
> +++ b/mm/vma.h
> @@ -106,6 +106,9 @@ struct vma_merge_struct {
>         struct anon_vma_name *anon_name;
>         enum vma_merge_state state;
>
> +       /* If copied from (i.e. mremap()'d) the VMA from which we are copying. */
> +       struct vm_area_struct *copied_from;
> +
>         /* Flags which callers can use to modify merge behaviour: */
>
>         /*
> --
> 2.52.0
>



* Re: [PATCH v2 3/4] mm/vma: enforce VMA fork limit on unfaulted,faulted mremap merge too
  2026-01-05 20:11 ` [PATCH v2 3/4] mm/vma: enforce VMA fork limit on unfaulted,faulted mremap merge too Lorenzo Stoakes
  2026-01-06  6:03   ` Harry Yoo
@ 2026-01-06 15:23   ` Jeongjun Park
  1 sibling, 0 replies; 9+ messages in thread
From: Jeongjun Park @ 2026-01-06 15:23 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Andrew Morton, Liam R . Howlett, Vlastimil Babka, Jann Horn,
	Pedro Falcato, Yeoreum Yun, linux-mm, linux-kernel,
	David Hildenbrand, Rik van Riel, Harry Yoo

Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:
>
> The is_mergeable_anon_vma() function uses vmg->middle as the source
> VMA. However when merging a new VMA, this field is NULL.
>
> In all cases except mremap(), the new VMA will either be newly established
> and thus lack an anon_vma, or will be an expansion of an existing VMA, in
> which case we do not care whether the VMA is CoW'd or not.
>
> In the case of an mremap(), we can end up in a situation where we can
> accidentally allow an unfaulted/faulted merge with a VMA that has been
> forked, violating the general rule that we do not permit this for reasons
> of anon_vma lock scalability.
>
> Now we have the ability to be aware of the fact we are copying a VMA and
> also know which VMA that is, we can explicitly check for this, so do so.
>
> This is pertinent since commit 879bca0a2c4f ("mm/vma: fix incorrectly
> disallowed anonymous VMA merges"), as this patch permits unfaulted/faulted
> merges that were previously disallowed, which can run afoul of this issue.
>
> While we are here, vma_had_uncowed_parents() is a confusing name, so make
> it simple and rename it to vma_is_fork_child().
>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges")
> Cc: stable@kernel.org
> ---

Reviewed-by: Jeongjun Park <aha310510@gmail.com>
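
As a side note, the decision table the hunk below implements can be modeled
in isolation with stand-in types (toy code, not kernel code; the `forked`
flag here stands in for `!list_is_singular(&vma->anon_vma_chain)`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in types for the toy model only. */
struct toy_anon_vma { int id; };
struct toy_vma {
	struct toy_anon_vma *anon_vma;
	bool forked; /* models !list_is_singular(&vma->anon_vma_chain) */
};

/*
 * Toy model of is_mergeable_anon_vma() after this patch: tgt is the merge
 * target, src the VMA merged into it, copied_from the mremap() copied-from
 * VMA (NULL outside the mremap() path).
 */
static bool toy_mergeable(const struct toy_vma *tgt, const struct toy_vma *src,
			  const struct toy_vma *copied_from)
{
	struct toy_anon_vma *tgt_anon = tgt ? tgt->anon_vma : NULL;
	struct toy_anon_vma *src_anon = src ? src->anon_vma : NULL;

	/*
	 * Case 1: we would dup_anon_vma() from src into tgt - refuse if src,
	 * or the VMA we are copying from, was ever forked.
	 */
	if (!tgt_anon && src_anon) {
		if (src->forked)
			return false;
		if (copied_from && copied_from->forked)
			return false;
		return true;
	}
	/* Case 2: simply reuse tgt's anon_vma. */
	if (tgt_anon && !src_anon)
		return !tgt->forked;
	/* Case 3: mergeable only if the anon_vma is already shared. */
	return src_anon == tgt_anon;
}
```

The new copied_from check only matters in case 1, which matches the hunk:
case 2 and case 3 are untouched apart from the rename.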

>  mm/vma.c | 27 +++++++++++++++------------
>  1 file changed, 15 insertions(+), 12 deletions(-)
>
> diff --git a/mm/vma.c b/mm/vma.c
> index 660f4732f8a5..fb45a6be7417 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -67,18 +67,13 @@ struct mmap_state {
>                 .state = VMA_MERGE_START,                               \
>         }
>
> -/*
> - * If, at any point, the VMA had unCoW'd mappings from parents, it will maintain
> - * more than one anon_vma_chain connecting it to more than one anon_vma. A merge
> - * would mean a wider range of folios sharing the root anon_vma lock, and thus
> - * potential lock contention, we do not wish to encourage merging such that this
> - * scales to a problem.
> - */
> -static bool vma_had_uncowed_parents(struct vm_area_struct *vma)
> +/* Was this VMA ever forked from a parent, i.e. maybe contains CoW mappings? */
> +static bool vma_is_fork_child(struct vm_area_struct *vma)
>  {
>         /*
>          * The list_is_singular() test is to avoid merging VMA cloned from
> -        * parents. This can improve scalability caused by anon_vma lock.
> +        * parents. This can improve scalability caused by the anon_vma root
> +        * lock.
>          */
>         return vma && vma->anon_vma && !list_is_singular(&vma->anon_vma_chain);
>  }
> @@ -115,11 +110,19 @@ static bool is_mergeable_anon_vma(struct vma_merge_struct *vmg, bool merge_next)
>         VM_WARN_ON(src && src_anon != src->anon_vma);
>
>         /* Case 1 - we will dup_anon_vma() from src into tgt. */
> -       if (!tgt_anon && src_anon)
> -               return !vma_had_uncowed_parents(src);
> +       if (!tgt_anon && src_anon) {
> +               struct vm_area_struct *copied_from = vmg->copied_from;
> +
> +               if (vma_is_fork_child(src))
> +                       return false;
> +               if (vma_is_fork_child(copied_from))
> +                       return false;
> +
> +               return true;
> +       }
>         /* Case 2 - we will simply use tgt's anon_vma. */
>         if (tgt_anon && !src_anon)
> -               return !vma_had_uncowed_parents(tgt);
> +               return !vma_is_fork_child(tgt);
>         /* Case 3 - the anon_vma's are already shared. */
>         return src_anon == tgt_anon;
>  }
> --
> 2.52.0
>
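For readers following along, the list_is_singular() condition the patch relies on can be modelled in plain userspace C. The struct names below mirror the kernel's, but the types are simplified stand-ins, not the real <linux/mm_types.h> definitions:

```c
#include <stdbool.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

/* True for a non-empty list whose first entry is also its last. */
static bool list_is_singular(const struct list_head *head)
{
	return head->next != head && head->next == head->prev;
}

struct anon_vma { int dummy; };

struct vm_area_struct {
	struct anon_vma *anon_vma;
	struct list_head anon_vma_chain;
};

/* Mirrors the patch: a VMA with more than one anon_vma_chain entry
 * was cloned from a parent at fork time. */
static bool vma_is_fork_child(struct vm_area_struct *vma)
{
	return vma && vma->anon_vma &&
	       !list_is_singular(&vma->anon_vma_chain);
}
```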




Thread overview: 9+ messages
2026-01-05 20:11 [PATCH v2 0/4] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge Lorenzo Stoakes
2026-01-05 20:11 ` [PATCH v2 1/4] " Lorenzo Stoakes
2026-01-06  3:15   ` Harry Yoo
2026-01-06 15:01   ` Jeongjun Park
2026-01-05 20:11 ` [PATCH v2 2/4] tools/testing/selftests: add tests for !tgt, src mremap() merges Lorenzo Stoakes
2026-01-05 20:11 ` [PATCH v2 3/4] mm/vma: enforce VMA fork limit on unfaulted,faulted mremap merge too Lorenzo Stoakes
2026-01-06  6:03   ` Harry Yoo
2026-01-06 15:23   ` Jeongjun Park
2026-01-05 20:11 ` [PATCH v2 4/4] tools/testing/selftests: add forked (un)/faulted VMA merge tests Lorenzo Stoakes
