* [PATCH] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge
@ 2026-01-02 20:55 Lorenzo Stoakes
2026-01-02 21:00 ` Lorenzo Stoakes
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Lorenzo Stoakes @ 2026-01-02 20:55 UTC (permalink / raw)
To: Andrew Morton
Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato,
Yeoreum Yun, linux-mm, linux-kernel, David Hildenbrand,
Jeongjun Park, Rik van Riel, Harry Yoo
Commit 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA
merges") introduced the ability to merge previously unavailable VMA merge
scenarios.
The key piece of logic introduced was the ability to merge a faulted VMA
immediately next to an unfaulted VMA, which relies upon dup_anon_vma() to
correctly handle anon_vma state.
In the case of the merge of an existing VMA (that is, changing the properties
of a VMA and then merging it if those properties are shared by adjacent VMAs),
dup_anon_vma() is invoked correctly.
However, in the case of the merge of a new VMA, a corner case peculiar to
mremap() was missed.
The issue is that vma_expand() only performs dup_anon_vma() if the target
(the VMA that will ultimately become the merged VMA) is not the next VMA,
i.e. the one that appears after the range in which the new VMA is to be
established.
A key insight here is that, in all cases other than mremap(), a new VMA merge
either expands an existing VMA (meaning that the target VMA will be that VMA)
or has a NULL anon_vma.
Specifically:
* __mmap_region() - no anon_vma in place, initial mapping.
* do_brk_flags() - expanding an existing VMA.
* vma_merge_extend() - expanding an existing VMA.
* relocate_vma_down() - no anon_vma in place, initial mapping.
In addition, we are in the unique situation of needing to duplicate
anon_vma state from a VMA that is neither the previous nor the next VMA being
merged with.
To account for this, introduce a new field in struct vma_merge_struct
specifically for the mremap() case, and update vma_expand() to explicitly
check for this case and invoke dup_anon_vma() to ensure anon_vma state is
correctly propagated.
This issue can be observed most directly by invoking mremap() with the
MREMAP_DONTUNMAP flag specified to move a VMA around and cause this kind of
merge.
This results in unlink_anon_vmas() being called after we fail to duplicate
anon_vma state to the target VMA, which causes the anon_vma itself to be
freed while folios still possess dangling pointers to it, and thus a
use-after-free bug.
This bug was discovered via a syzbot report, which this patch resolves.
The following program reproduces the issue (and is fixed by this patch):
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#define RESERVED_PGS (100)
#define VMA_A_PGS (10)
#define VMA_B_PGS (10)
#define NUM_ITERS (1000)
static void trigger_bug(void)
{
unsigned long page_size = sysconf(_SC_PAGE_SIZE);
char *reserved, *ptr_a, *ptr_b;
/*
* The goal here is to achieve:
*
* mremap() with MREMAP_DONTUNMAP such that A and B merge:
*
* |-------------------------|
* | |
* | |-----------| |---------|
* v | unfaulted | | faulted |
* |-----------| |---------|
* B A
*
* Then unmap VMA A to trigger the bug.
*/
/* Reserve a region of memory to operate in. */
reserved = mmap(NULL, RESERVED_PGS * page_size, PROT_NONE,
MAP_PRIVATE | MAP_ANON, -1, 0);
if (reserved == MAP_FAILED) {
perror("mmap reserved");
exit(EXIT_FAILURE);
}
/* Map VMA A into place. */
ptr_a = mmap(&reserved[page_size], VMA_A_PGS * page_size,
PROT_READ | PROT_WRITE,
MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
if (ptr_a == MAP_FAILED) {
perror("mmap VMA A");
exit(EXIT_FAILURE);
}
/* Fault it in. */
ptr_a[0] = 'x';
/*
* Now move it out of the way so we can place VMA B in position,
* unfaulted.
*/
ptr_a = mremap(ptr_a, VMA_A_PGS * page_size, VMA_A_PGS * page_size,
MREMAP_FIXED | MREMAP_MAYMOVE, &reserved[50 * page_size]);
if (ptr_a == MAP_FAILED) {
perror("mremap VMA A out of the way");
exit(EXIT_FAILURE);
}
/* Map VMA B into place. */
ptr_b = mmap(&reserved[page_size + VMA_A_PGS * page_size],
VMA_B_PGS * page_size, PROT_READ | PROT_WRITE,
MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
if (ptr_b == MAP_FAILED) {
perror("mmap VMA B");
exit(EXIT_FAILURE);
}
/* Now move VMA A into position w/MREMAP_DONTUNMAP + free anon_vma. */
ptr_a = mremap(ptr_a, VMA_A_PGS * page_size, VMA_A_PGS * page_size,
MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
&reserved[page_size]);
if (ptr_a == MAP_FAILED) {
perror("mremap VMA A with MREMAP_DONTUNMAP");
exit(EXIT_FAILURE);
}
/* Finally, unmap VMA A which should trigger the bug. */
munmap(ptr_a, VMA_A_PGS * page_size);
/* Cleanup in case bug didn't trigger sufficiently visibly... */
munmap(reserved, RESERVED_PGS * page_size);
}
int main(void)
{
int i;
for (i = 0; i < NUM_ITERS; i++)
trigger_bug();
return EXIT_SUCCESS;
}
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges")
Reported-by: syzbot+b165fc2e11771c66d8ba@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/694a2745.050a0220.19928e.0017.GAE@google.com/
Cc: stable@kernel.org
---
mm/vma.c | 58 ++++++++++++++++++++++++++++++++++++++++++--------------
mm/vma.h | 3 +++
2 files changed, 47 insertions(+), 14 deletions(-)
diff --git a/mm/vma.c b/mm/vma.c
index 6377aa290a27..2268f518a89b 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -1130,26 +1130,50 @@ int vma_expand(struct vma_merge_struct *vmg)
mmap_assert_write_locked(vmg->mm);
vma_start_write(target);
- if (next && (target != next) && (vmg->end == next->vm_end)) {
+ if (next && vmg->end == next->vm_end) {
+ struct vm_area_struct *copied_from = vmg->copied_from;
int ret;
- sticky_flags |= next->vm_flags & VM_STICKY;
- remove_next = true;
- /* This should already have been checked by this point. */
- VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
- vma_start_write(next);
- /*
- * In this case we don't report OOM, so vmg->give_up_on_mm is
- * safe.
- */
- ret = dup_anon_vma(target, next, &anon_dup);
- if (ret)
- return ret;
+ if (target != next) {
+ sticky_flags |= next->vm_flags & VM_STICKY;
+ remove_next = true;
+ /* This should already have been checked by this point. */
+ VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
+ vma_start_write(next);
+ /*
+ * In this case we don't report OOM, so vmg->give_up_on_mm is
+ * safe.
+ */
+ ret = dup_anon_vma(target, next, &anon_dup);
+ if (ret)
+ return ret;
+ } else if (copied_from) {
+ vma_start_write(next);
+
+ /*
+ * We are copying from a VMA (i.e. mremap()'ing) to
+ * next, and thus must ensure that either anon_vma's are
+ * already compatible (in which case this call is a nop)
+ * or all anon_vma state is propagated to next
+ */
+ ret = dup_anon_vma(next, copied_from, &anon_dup);
+ if (ret)
+ return ret;
+ } else {
+ /* In no other case may the anon_vma differ. */
+ VM_WARN_ON_VMG(target->anon_vma != next->anon_vma, vmg);
+ }
}
/* Not merging but overwriting any part of next is not handled. */
VM_WARN_ON_VMG(next && !remove_next &&
next != target && vmg->end > next->vm_start, vmg);
+ /*
+ * We should only see a copy with next as the target on a new merge
+ * which sets the end to the end of next.
+ */
+ VM_WARN_ON_VMG(target == next && vmg->copied_from &&
+ vmg->end != next->vm_end, vmg);
/* Only handles expanding */
VM_WARN_ON_VMG(target->vm_start < vmg->start ||
target->vm_end > vmg->end, vmg);
@@ -1807,6 +1831,13 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
VMA_ITERATOR(vmi, mm, addr);
VMG_VMA_STATE(vmg, &vmi, NULL, vma, addr, addr + len);
+ /*
+ * VMG_VMA_STATE() installs vma in middle, but this is a new VMA, so inform
+ * the merging logic accordingly.
+ */
+ vmg.copied_from = vma;
+ vmg.middle = NULL;
+
/*
* If anonymous vma has not yet been faulted, update new pgoff
* to match new location, to increase its chance of merging.
@@ -1828,7 +1859,6 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
if (new_vma && new_vma->vm_start < addr + len)
return NULL; /* should never get here */
- vmg.middle = NULL; /* New VMA range. */
vmg.pgoff = pgoff;
vmg.next = vma_iter_next_rewind(&vmi, NULL);
new_vma = vma_merge_new_range(&vmg);
diff --git a/mm/vma.h b/mm/vma.h
index e4c7bd79de5f..50f0bdb0eb79 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -106,6 +106,9 @@ struct vma_merge_struct {
struct anon_vma_name *anon_name;
enum vma_merge_state state;
+ /* If we are copying a VMA, which VMA are we copying from? */
+ struct vm_area_struct *copied_from;
+
/* Flags which callers can use to modify merge behaviour: */
/*
--
2.52.0
* Re: [PATCH] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge
2026-01-02 20:55 [PATCH] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge Lorenzo Stoakes
@ 2026-01-02 21:00 ` Lorenzo Stoakes
2026-01-04 19:25 ` David Hildenbrand (Red Hat)
2026-01-05 5:11 ` Harry Yoo
2 siblings, 0 replies; 8+ messages in thread
From: Lorenzo Stoakes @ 2026-01-02 21:00 UTC (permalink / raw)
To: Andrew Morton
Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato,
Yeoreum Yun, linux-mm, linux-kernel, David Hildenbrand,
Jeongjun Park, Rik van Riel, Harry Yoo
Andrew - obviously pending review scrutiny, could we get this into an -rc
relatively soon? This is quite a serious bug.
Also, many thanks are due to Jeongjun for his work analysing this bug and
ensuring it got attention, and to Harry + David for their insightful
contributions, much appreciated!
Cheers, Lorenzo
On Fri, Jan 02, 2026 at 08:55:20PM +0000, Lorenzo Stoakes wrote:
> Commit 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA
> merges") introduced the ability to merge previously unavailable VMA merge
> scenarios.
>
> The key piece of logic introduced was the ability to merge a faulted VMA
> immediately next to an unfaulted VMA, which relies upon dup_anon_vma() to
> correctly handle anon_vma state.
>
> In the case of the merge of an existing VMA (that is changing properties of
> a VMA and then merging if those properties are shared by adjacent VMAs),
> dup_anon_vma() is invoked correctly.
>
> However in the case of the merge of a new VMA, a corner case peculiar to
> mremap() was missed.
>
> The issue is that vma_expand() only performs dup_anon_vma() if the target
> (the VMA that will ultimately become the merged VMA): is not the next VMA,
> i.e. the one that appears after the range in which the new VMA is to be
> established.
>
> A key insight here is that in all other cases other than mremap(), a new
> VMA merge either expands an existing VMA, meaning that the target VMA will
> be that VMA, or would have anon_vma be NULL.
>
> Specifically:
>
> * __mmap_region() - no anon_vma in place, initial mapping.
> * do_brk_flags() - expanding an existing VMA.
> * vma_merge_extend() - expanding an existing VMA.
> * relocate_vma_down() - no anon_vma in place, initial mapping.
>
> In addition, we are in the unique situation of needing to duplicate
> anon_vma state from a VMA that is neither the previous or next VMA being
> merged with.
>
> To account for this, introduce a new field in struct vma_merge_struct
> specifically for the mremap() case, and update vma_expand() to explicitly
> check for this case and invoke dup_anon_vma() to ensure anon_vma state is
> correctly propagated.
>
> This issue can be observed most directly by invoked mremap() to move around
> a VMA and cause this kind of merge with the MREMAP_DONTUNMAP flag
> specified.
>
> This will result in unlink_anon_vmas() being called after failing to
> duplicate anon_vma state to the target VMA, which results in the anon_vma
> itself being freed with folios still possessing dangling pointers to the
> anon_vma and thus a use-after-free bug.
>
> This bug was discovered via a syzbot report, which this patch resolves.
>
> The following program reproduces the issue (and is fixed by this patch):
>
> #define _GNU_SOURCE
> #include <stdio.h>
> #include <stdlib.h>
> #include <unistd.h>
> #include <sys/mman.h>
>
> #define RESERVED_PGS (100)
> #define VMA_A_PGS (10)
> #define VMA_B_PGS (10)
> #define NUM_ITERS (1000)
>
> static void trigger_bug(void)
> {
> unsigned long page_size = sysconf(_SC_PAGE_SIZE);
> char *reserved, *ptr_a, *ptr_b;
>
> /*
> * The goal here is to achieve:
> *
> * mremap() with MREMAP_DONTUNMAP such that A and B merge:
> *
> * |-------------------------|
> * | |
> * | |-----------| |---------|
> * v | unfaulted | | faulted |
> * |-----------| |---------|
> * B A
> *
> * Then unmap VMA A to trigger the bug.
> */
>
> /* Reserve a region of memory to operate in. */
> reserved = mmap(NULL, RESERVED_PGS * page_size, PROT_NONE,
> MAP_PRIVATE | MAP_ANON, -1, 0);
> if (reserved == MAP_FAILED) {
> perror("mmap reserved");
> exit(EXIT_FAILURE);
> }
>
> /* Map VMA A into place. */
> ptr_a = mmap(&reserved[page_size], VMA_A_PGS * page_size,
> PROT_READ | PROT_WRITE,
> MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
> if (ptr_a == MAP_FAILED) {
> perror("mmap VMA A");
> exit(EXIT_FAILURE);
> }
> /* Fault it in. */
> ptr_a[0] = 'x';
>
> /*
> * Now move it out of the way so we can place VMA B in position,
> * unfaulted.
> */
> ptr_a = mremap(ptr_a, VMA_A_PGS * page_size, VMA_A_PGS * page_size,
> MREMAP_FIXED | MREMAP_MAYMOVE, &reserved[50 * page_size]);
> if (ptr_a == MAP_FAILED) {
> perror("mremap VMA A out of the way");
> exit(EXIT_FAILURE);
> }
>
> /* Map VMA B into place. */
> ptr_b = mmap(&reserved[page_size + VMA_A_PGS * page_size],
> VMA_B_PGS * page_size, PROT_READ | PROT_WRITE,
> MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
> if (ptr_b == MAP_FAILED) {
> perror("mmap VMA B");
> exit(EXIT_FAILURE);
> }
>
> /* Now move VMA A into position w/MREMAP_DONTUNMAP + free anon_vma. */
> ptr_a = mremap(ptr_a, VMA_A_PGS * page_size, VMA_A_PGS * page_size,
> MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
> &reserved[page_size]);
> if (ptr_a == MAP_FAILED) {
> perror("mremap VMA A with MREMAP_DONTUNMAP");
> exit(EXIT_FAILURE);
> }
>
> /* Finally, unmap VMA A which should trigger the bug. */
> munmap(ptr_a, VMA_A_PGS * page_size);
>
> /* Cleanup in case bug didn't trigger sufficiently visibly... */
> munmap(reserved, RESERVED_PGS * page_size);
> }
>
> int main(void)
> {
> int i;
>
> for (i = 0; i < NUM_ITERS; i++)
> trigger_bug();
>
> return EXIT_SUCCESS;
> }
>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges")
> Reported-by: syzbot+b165fc2e11771c66d8ba@syzkaller.appspotmail.com
> Closes: https://lore.kernel.org/all/694a2745.050a0220.19928e.0017.GAE@google.com/
> Cc: stable@kernel.org
> ---
> mm/vma.c | 58 ++++++++++++++++++++++++++++++++++++++++++--------------
> mm/vma.h | 3 +++
> 2 files changed, 47 insertions(+), 14 deletions(-)
>
> diff --git a/mm/vma.c b/mm/vma.c
> index 6377aa290a27..2268f518a89b 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -1130,26 +1130,50 @@ int vma_expand(struct vma_merge_struct *vmg)
> mmap_assert_write_locked(vmg->mm);
>
> vma_start_write(target);
> - if (next && (target != next) && (vmg->end == next->vm_end)) {
> + if (next && vmg->end == next->vm_end) {
> + struct vm_area_struct *copied_from = vmg->copied_from;
> int ret;
>
> - sticky_flags |= next->vm_flags & VM_STICKY;
> - remove_next = true;
> - /* This should already have been checked by this point. */
> - VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
> - vma_start_write(next);
> - /*
> - * In this case we don't report OOM, so vmg->give_up_on_mm is
> - * safe.
> - */
> - ret = dup_anon_vma(target, next, &anon_dup);
> - if (ret)
> - return ret;
> + if (target != next) {
> + sticky_flags |= next->vm_flags & VM_STICKY;
> + remove_next = true;
> + /* This should already have been checked by this point. */
> + VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
> + vma_start_write(next);
> + /*
> + * In this case we don't report OOM, so vmg->give_up_on_mm is
> + * safe.
> + */
> + ret = dup_anon_vma(target, next, &anon_dup);
> + if (ret)
> + return ret;
> + } else if (copied_from) {
> + vma_start_write(next);
> +
> + /*
> + * We are copying from a VMA (i.e. mremap()'ing) to
> + * next, and thus must ensure that either anon_vma's are
> + * already compatible (in which case this call is a nop)
> + * or all anon_vma state is propagated to next
> + */
> + ret = dup_anon_vma(next, copied_from, &anon_dup);
> + if (ret)
> + return ret;
> + } else {
> + /* In no other case may the anon_vma differ. */
> + VM_WARN_ON_VMG(target->anon_vma != next->anon_vma, vmg);
> + }
> }
>
> /* Not merging but overwriting any part of next is not handled. */
> VM_WARN_ON_VMG(next && !remove_next &&
> next != target && vmg->end > next->vm_start, vmg);
> + /*
> + * We should only see a copy with next as the target on a new merge
> + * which sets the end to the next of next.
> + */
> + VM_WARN_ON_VMG(target == next && vmg->copied_from &&
> + vmg->end != next->vm_end, vmg);
> /* Only handles expanding */
> VM_WARN_ON_VMG(target->vm_start < vmg->start ||
> target->vm_end > vmg->end, vmg);
> @@ -1807,6 +1831,13 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
> VMA_ITERATOR(vmi, mm, addr);
> VMG_VMA_STATE(vmg, &vmi, NULL, vma, addr, addr + len);
>
> + /*
> + * VMG_VMA_STATE() installs vma in middle, but this is a new VMA, inform
> + * merging logic correctly.
> + */
> + vmg.copied_from = vma;
> + vmg.middle = NULL;
> +
> /*
> * If anonymous vma has not yet been faulted, update new pgoff
> * to match new location, to increase its chance of merging.
> @@ -1828,7 +1859,6 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
> if (new_vma && new_vma->vm_start < addr + len)
> return NULL; /* should never get here */
>
> - vmg.middle = NULL; /* New VMA range. */
> vmg.pgoff = pgoff;
> vmg.next = vma_iter_next_rewind(&vmi, NULL);
> new_vma = vma_merge_new_range(&vmg);
> diff --git a/mm/vma.h b/mm/vma.h
> index e4c7bd79de5f..50f0bdb0eb79 100644
> --- a/mm/vma.h
> +++ b/mm/vma.h
> @@ -106,6 +106,9 @@ struct vma_merge_struct {
> struct anon_vma_name *anon_name;
> enum vma_merge_state state;
>
> + /* If we are copying a VMA, which VMA are we copying from? */
> + struct vm_area_struct *copied_from;
> +
> /* Flags which callers can use to modify merge behaviour: */
>
> /*
> --
> 2.52.0
* Re: [PATCH] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge
2026-01-02 20:55 [PATCH] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge Lorenzo Stoakes
2026-01-02 21:00 ` Lorenzo Stoakes
@ 2026-01-04 19:25 ` David Hildenbrand (Red Hat)
2026-01-05 12:53 ` Lorenzo Stoakes
2026-01-05 5:11 ` Harry Yoo
2 siblings, 1 reply; 8+ messages in thread
From: David Hildenbrand (Red Hat) @ 2026-01-04 19:25 UTC (permalink / raw)
To: Lorenzo Stoakes, Andrew Morton
Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato,
Yeoreum Yun, linux-mm, linux-kernel, Jeongjun Park, Rik van Riel,
Harry Yoo
On 1/2/26 21:55, Lorenzo Stoakes wrote:
> Commit 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA
> merges") introduced the ability to merge previously unavailable VMA merge
> scenarios.
>
> The key piece of logic introduced was the ability to merge a faulted VMA
> immediately next to an unfaulted VMA, which relies upon dup_anon_vma() to
> correctly handle anon_vma state.
>
> In the case of the merge of an existing VMA (that is changing properties of
> a VMA and then merging if those properties are shared by adjacent VMAs),
> dup_anon_vma() is invoked correctly.
>
> However in the case of the merge of a new VMA, a corner case peculiar to
> mremap() was missed.
>
> The issue is that vma_expand() only performs dup_anon_vma() if the target
> (the VMA that will ultimately become the merged VMA): is not the next VMA,
> i.e. the one that appears after the range in which the new VMA is to be
> established.
>
> A key insight here is that in all other cases other than mremap(), a new
> VMA merge either expands an existing VMA, meaning that the target VMA will
> be that VMA, or would have anon_vma be NULL.
>
> Specifically:
>
> * __mmap_region() - no anon_vma in place, initial mapping.
> * do_brk_flags() - expanding an existing VMA.
> * vma_merge_extend() - expanding an existing VMA.
> * relocate_vma_down() - no anon_vma in place, initial mapping.
>
> In addition, we are in the unique situation of needing to duplicate
> anon_vma state from a VMA that is neither the previous or next VMA being
> merged with.
>
> To account for this, introduce a new field in struct vma_merge_struct
> specifically for the mremap() case, and update vma_expand() to explicitly
> check for this case and invoke dup_anon_vma() to ensure anon_vma state is
> correctly propagated.
>
> This issue can be observed most directly by invoked mremap() to move around
> a VMA and cause this kind of merge with the MREMAP_DONTUNMAP flag
> specified.
>
> This will result in unlink_anon_vmas() being called after failing to
> duplicate anon_vma state to the target VMA, which results in the anon_vma
> itself being freed with folios still possessing dangling pointers to the
> anon_vma and thus a use-after-free bug.
Makes sense to me.
>
> This bug was discovered via a syzbot report, which this patch resolves.
>
> The following program reproduces the issue (and is fixed by this patch):
>
> #define _GNU_SOURCE
> #include <stdio.h>
> #include <stdlib.h>
> #include <unistd.h>
> #include <sys/mman.h>
>
> #define RESERVED_PGS (100)
> #define VMA_A_PGS (10)
> #define VMA_B_PGS (10)
> #define NUM_ITERS (1000)
>
> static void trigger_bug(void)
> {
> unsigned long page_size = sysconf(_SC_PAGE_SIZE);
> char *reserved, *ptr_a, *ptr_b;
>
> /*
> * The goal here is to achieve:
> *
> * mremap() with MREMAP_DONTUNMAP such that A and B merge:
> *
> * |-------------------------|
> * | |
> * | |-----------| |---------|
> * v | unfaulted | | faulted |
> * |-----------| |---------|
> * B A
> *
> * Then unmap VMA A to trigger the bug.
> */
>
> /* Reserve a region of memory to operate in. */
> reserved = mmap(NULL, RESERVED_PGS * page_size, PROT_NONE,
> MAP_PRIVATE | MAP_ANON, -1, 0);
> if (reserved == MAP_FAILED) {
> perror("mmap reserved");
> exit(EXIT_FAILURE);
> }
>
> /* Map VMA A into place. */
> ptr_a = mmap(&reserved[page_size], VMA_A_PGS * page_size,
> PROT_READ | PROT_WRITE,
> MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
> if (ptr_a == MAP_FAILED) {
> perror("mmap VMA A");
> exit(EXIT_FAILURE);
> }
> /* Fault it in. */
> ptr_a[0] = 'x';
>
> /*
> * Now move it out of the way so we can place VMA B in position,
> * unfaulted.
> */
> ptr_a = mremap(ptr_a, VMA_A_PGS * page_size, VMA_A_PGS * page_size,
> MREMAP_FIXED | MREMAP_MAYMOVE, &reserved[50 * page_size]);
> if (ptr_a == MAP_FAILED) {
> perror("mremap VMA A out of the way");
> exit(EXIT_FAILURE);
> }
>
> /* Map VMA B into place. */
> ptr_b = mmap(&reserved[page_size + VMA_A_PGS * page_size],
> VMA_B_PGS * page_size, PROT_READ | PROT_WRITE,
> MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
> if (ptr_b == MAP_FAILED) {
> perror("mmap VMA B");
> exit(EXIT_FAILURE);
> }
>
> /* Now move VMA A into position w/MREMAP_DONTUNMAP + free anon_vma. */
> ptr_a = mremap(ptr_a, VMA_A_PGS * page_size, VMA_A_PGS * page_size,
> MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
> &reserved[page_size]);
> if (ptr_a == MAP_FAILED) {
> perror("mremap VMA A with MREMAP_DONTUNMAP");
> exit(EXIT_FAILURE);
> }
>
> /* Finally, unmap VMA A which should trigger the bug. */
> munmap(ptr_a, VMA_A_PGS * page_size);
>
> /* Cleanup in case bug didn't trigger sufficiently visibly... */
> munmap(reserved, RESERVED_PGS * page_size);
> }
>
> int main(void)
> {
> int i;
>
> for (i = 0; i < NUM_ITERS; i++)
> trigger_bug();
Just wondering: why do we have to loop? I would have thought that this would
trigger deterministically.
>
> return EXIT_SUCCESS;
> }
>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges")
> Reported-by: syzbot+b165fc2e11771c66d8ba@syzkaller.appspotmail.com
> Closes: https://lore.kernel.org/all/694a2745.050a0220.19928e.0017.GAE@google.com/
> Cc: stable@kernel.org
I was wondering whether this commit actually fixes older reports that Jann
mentioned in his commit a222439e1e27 ("mm/rmap: add anon_vma lifetime debug check").
[1] https://lore.kernel.org/r/67abaeaf.050a0220.110943.0041.GAE@google.com
[2] https://lore.kernel.org/r/67a76f33.050a0220.3d72c.0028.GAE@google.com
However, 879bca0a2c4f went into v6.16, while [1] and [2] are against v6.14.
So naturally I wonder, could it be that we had a bug even before 879bca0a2c4f that
resulted in similar symptoms?
Option (1): [1] and [2] are already fixed
Option (2): [1] and [2] are still broken
Option (3): [1] and [2] would be fixed by your patch as well
But we don't even have reproducers, so [1] and [2] could just be a side-effect of
another bug, maybe.
> ---
> mm/vma.c | 58 ++++++++++++++++++++++++++++++++++++++++++--------------
> mm/vma.h | 3 +++
> 2 files changed, 47 insertions(+), 14 deletions(-)
>
> diff --git a/mm/vma.c b/mm/vma.c
> index 6377aa290a27..2268f518a89b 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -1130,26 +1130,50 @@ int vma_expand(struct vma_merge_struct *vmg)
> mmap_assert_write_locked(vmg->mm);
>
> vma_start_write(target);
> - if (next && (target != next) && (vmg->end == next->vm_end)) {
> + if (next && vmg->end == next->vm_end) {
> + struct vm_area_struct *copied_from = vmg->copied_from;
> int ret;
>
> - sticky_flags |= next->vm_flags & VM_STICKY;
> - remove_next = true;
> - /* This should already have been checked by this point. */
> - VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
> - vma_start_write(next);
> - /*
> - * In this case we don't report OOM, so vmg->give_up_on_mm is
> - * safe.
> - */
> - ret = dup_anon_vma(target, next, &anon_dup);
> - if (ret)
> - return ret;
> + if (target != next) {
> + sticky_flags |= next->vm_flags & VM_STICKY;
> + remove_next = true;
> + /* This should already have been checked by this point. */
> + VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
> + vma_start_write(next);
> + /*
> + * In this case we don't report OOM, so vmg->give_up_on_mm is
> + * safe.
> + */
> + ret = dup_anon_vma(target, next, &anon_dup);
> + if (ret)
> + return ret;
> + } else if (copied_from) {
> + vma_start_write(next);
> +
> + /*
> + * We are copying from a VMA (i.e. mremap()'ing) to
> + * next, and thus must ensure that either anon_vma's are
> + * already compatible (in which case this call is a nop)
> + * or all anon_vma state is propagated to next
> + */
> + ret = dup_anon_vma(next, copied_from, &anon_dup);
> + if (ret)
> + return ret;
> + } else {
> + /* In no other case may the anon_vma differ. */
> + VM_WARN_ON_VMG(target->anon_vma != next->anon_vma, vmg);
> + }
No expert on that code, but looks reasonable to me.
Wondering whether we want to pull the vma_start_write(next) out of the
conditional (for the warn we certainly don't care).
--
Cheers
David
* Re: [PATCH] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge
2026-01-02 20:55 [PATCH] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge Lorenzo Stoakes
2026-01-02 21:00 ` Lorenzo Stoakes
2026-01-04 19:25 ` David Hildenbrand (Red Hat)
@ 2026-01-05 5:11 ` Harry Yoo
2026-01-05 9:12 ` Lorenzo Stoakes
2026-01-05 15:24 ` Liam R. Howlett
2 siblings, 2 replies; 8+ messages in thread
From: Harry Yoo @ 2026-01-05 5:11 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Andrew Morton, Liam R . Howlett, Vlastimil Babka, Jann Horn,
Pedro Falcato, Yeoreum Yun, linux-mm, linux-kernel,
David Hildenbrand, Jeongjun Park, Rik van Riel
On Fri, Jan 02, 2026 at 08:55:20PM +0000, Lorenzo Stoakes wrote:
> Commit 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA
> merges") introduced the ability to merge previously unavailable VMA merge
> scenarios.
>
> The key piece of logic introduced was the ability to merge a faulted VMA
> immediately next to an unfaulted VMA, which relies upon dup_anon_vma() to
> correctly handle anon_vma state.
>
> In the case of the merge of an existing VMA (that is changing properties of
> a VMA and then merging if those properties are shared by adjacent VMAs),
> dup_anon_vma() is invoked correctly.
>
> However in the case of the merge of a new VMA, a corner case peculiar to
> mremap() was missed.
>
> The issue is that vma_expand() only performs dup_anon_vma() if the target
> (the VMA that will ultimately become the merged VMA): is not the next VMA,
> i.e. the one that appears after the range in which the new VMA is to be
> established.
>
> A key insight here is that in all other cases other than mremap(), a new
> VMA merge either expands an existing VMA, meaning that the target VMA will
> be that VMA, or would have anon_vma be NULL.
>
> Specifically:
>
> * __mmap_region() - no anon_vma in place, initial mapping.
> * do_brk_flags() - expanding an existing VMA.
> * vma_merge_extend() - expanding an existing VMA.
> * relocate_vma_down() - no anon_vma in place, initial mapping.
>
> In addition, we are in the unique situation of needing to duplicate
> anon_vma state from a VMA that is neither the previous or next VMA being
> merged with.
>
> To account for this, introduce a new field in struct vma_merge_struct
> specifically for the mremap() case, and update vma_expand() to explicitly
> check for this case and invoke dup_anon_vma() to ensure anon_vma state is
> correctly propagated.
>
> This issue can be observed most directly by invoked mremap() to move around
> a VMA and cause this kind of merge with the MREMAP_DONTUNMAP flag
> specified.
>
> This will result in unlink_anon_vmas() being called after failing to
> duplicate anon_vma state to the target VMA, which results in the anon_vma
> itself being freed with folios still possessing dangling pointers to the
> anon_vma and thus a use-after-free bug.
>
> This bug was discovered via a syzbot report, which this patch resolves.
>
> The following program reproduces the issue (and is fixed by this patch):
[...]
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges")
> Reported-by: syzbot+b165fc2e11771c66d8ba@syzkaller.appspotmail.com
> Closes: https://lore.kernel.org/all/694a2745.050a0220.19928e.0017.GAE@google.com/
> Cc: stable@kernel.org
> ---
Hi Lorenzo, I really appreciate that you've done a very thorough analysis of
the bug so quickly and precisely, and wrote a fix. Also, having a simpler
repro (that works on my machine!) is hugely helpful.
My comment inlined below.
> mm/vma.c | 58 ++++++++++++++++++++++++++++++++++++++++++--------------
> mm/vma.h | 3 +++
> 2 files changed, 47 insertions(+), 14 deletions(-)
>
> diff --git a/mm/vma.c b/mm/vma.c
> index 6377aa290a27..2268f518a89b 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -1130,26 +1130,50 @@ int vma_expand(struct vma_merge_struct *vmg)
> mmap_assert_write_locked(vmg->mm);
>
> vma_start_write(target);
> - if (next && (target != next) && (vmg->end == next->vm_end)) {
> + if (next && vmg->end == next->vm_end) {
> + struct vm_area_struct *copied_from = vmg->copied_from;
> int ret;
>
> - sticky_flags |= next->vm_flags & VM_STICKY;
> - remove_next = true;
> - /* This should already have been checked by this point. */
> - VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
> - vma_start_write(next);
> - /*
> - * In this case we don't report OOM, so vmg->give_up_on_mm is
> - * safe.
> - */
> - ret = dup_anon_vma(target, next, &anon_dup);
> - if (ret)
> - return ret;
> + if (target != next) {
> + sticky_flags |= next->vm_flags & VM_STICKY;
> + remove_next = true;
> + /* This should already have been checked by this point. */
> + VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
> + vma_start_write(next);
> + /*
> + * In this case we don't report OOM, so vmg->give_up_on_mm is
> + * safe.
> + */
> + ret = dup_anon_vma(target, next, &anon_dup);
> + if (ret)
> + return ret;
While this fix works when we're expanding the next VMA to cover the new
range, I don't think it covers the case where we're expanding the
prev VMA to cover the new range and the next VMA.
Previously I argued [1] that when mremap()'ing into a gap between two unfaulted
VMAs that are compatible, calling `dup_anon_vma(target, next, &anon_dup);`
is incorrect:
mremap()
|-----------------------------------|
| |
v |
[ VMA C, unfaulted ][ gap ][ VMA B, unfaulted ][ gap ][ VMA A, faulted ]
I suspected this patch doesn't cover the case, so I slightly modified your
repro to test my theory (added to the end of the email).
The test confirmed my theory. The patch doesn't cover the case above because
target is not next but prev ((target != next) returns true), and neither
target nor next has an anon_vma, but the VMA that is copied from does.
With the modified repro, I'm still seeing the warning that Jann added,
on top of mm-hotfixes-unstable (HEAD: 871cf622a8ba) which already has
your fix (65769f3b9877).
[1] https://lore.kernel.org/linux-mm/aVd-UZQGW4ltH6hY@hyeyoo
> + } else if (copied_from) {
> + vma_start_write(next);
> +
> + /*
> + * We are copying from a VMA (i.e. mremap()'ing) to
> + * next, and thus must ensure that either anon_vma's are
> + * already compatible (in which case this call is a nop)
> + * or all anon_vma state is propagated to next
> + */
> + ret = dup_anon_vma(next, copied_from, &anon_dup);
> + if (ret)
> + return ret;
So we need to fix this to work even when (target != next) returns true.
Modified repro:
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#define RESERVED_PGS (100)
#define VMA_A_PGS (10)
#define VMA_B_PGS (10)
#define VMA_C_PGS (10)
#define NUM_ITERS (1000)
static void trigger_bug(void)
{
unsigned long page_size = sysconf(_SC_PAGE_SIZE);
char *reserved, *ptr_a, *ptr_b, *ptr_c;
/*
* The goal here is to achieve:
* mremap()
* |-----------------------------------|
* | |
* v |
* [ VMA C, unfaulted ][ gap ][ VMA B, unfaulted ][ gap ][ VMA A, faulted ]
*
* Merge VMA C, B, A by expanding VMA C.
*/
/* Reserve a region of memory to operate in. */
reserved = mmap(NULL, RESERVED_PGS * page_size, PROT_NONE,
MAP_PRIVATE | MAP_ANON, -1, 0);
if (reserved == MAP_FAILED) {
perror("mmap reserved");
exit(EXIT_FAILURE);
}
/* Map VMA A into place. */
ptr_a = mmap(&reserved[page_size + VMA_C_PGS * page_size], VMA_A_PGS * page_size,
PROT_READ | PROT_WRITE,
MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
if (ptr_a == MAP_FAILED) {
perror("mmap VMA A");
exit(EXIT_FAILURE);
}
/* Fault it in. */
ptr_a[0] = 'x';
/*
* Now move it out of the way so we can place VMA B in position,
* unfaulted.
*/
ptr_a = mremap(ptr_a, VMA_A_PGS * page_size, VMA_A_PGS * page_size,
MREMAP_FIXED | MREMAP_MAYMOVE, &reserved[50 * page_size]);
if (ptr_a == MAP_FAILED) {
perror("mremap VMA A out of the way");
exit(EXIT_FAILURE);
}
/* Map VMA C into place. */
ptr_c = mmap(&reserved[page_size], VMA_C_PGS * page_size,
PROT_READ | PROT_WRITE,
MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
if (ptr_c == MAP_FAILED) {
perror("mmap VMA C");
exit(EXIT_FAILURE);
}
/* Map VMA B into place. */
ptr_b = mmap(&reserved[page_size + VMA_C_PGS * page_size + VMA_A_PGS * page_size],
VMA_B_PGS * page_size, PROT_READ | PROT_WRITE,
MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
if (ptr_b == MAP_FAILED) {
perror("mmap VMA B");
exit(EXIT_FAILURE);
}
/* Now move VMA A into position w/MREMAP_DONTUNMAP + free anon_vma. */
ptr_a = mremap(ptr_a, VMA_A_PGS * page_size, VMA_A_PGS * page_size,
MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
&reserved[page_size + VMA_C_PGS * page_size]);
if (ptr_a == MAP_FAILED) {
perror("mremap VMA A with MREMAP_DONTUNMAP");
exit(EXIT_FAILURE);
}
/* Finally, unmap VMA A which should trigger the bug. */
munmap(ptr_a, VMA_A_PGS * page_size);
/* Cleanup in case bug didn't trigger sufficiently visibly... */
munmap(reserved, RESERVED_PGS * page_size);
}
int main(void)
{
int i;
for (i = 0; i < NUM_ITERS; i++)
trigger_bug();
return EXIT_SUCCESS;
}
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge
2026-01-05 5:11 ` Harry Yoo
@ 2026-01-05 9:12 ` Lorenzo Stoakes
2026-01-05 15:24 ` Liam R. Howlett
1 sibling, 0 replies; 8+ messages in thread
From: Lorenzo Stoakes @ 2026-01-05 9:12 UTC (permalink / raw)
To: Harry Yoo
Cc: Andrew Morton, Liam R . Howlett, Vlastimil Babka, Jann Horn,
Pedro Falcato, Yeoreum Yun, linux-mm, linux-kernel,
David Hildenbrand, Jeongjun Park, Rik van Riel
Andrew - please drop the original, I will rework a v2, thanks.
On Mon, Jan 05, 2026 at 02:11:23PM +0900, Harry Yoo wrote:
> On Fri, Jan 02, 2026 at 08:55:20PM +0000, Lorenzo Stoakes wrote:
> > Commit 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA
> > merges") introduced the ability to merge previously unavailable VMA merge
> > scenarios.
> >
> > The key piece of logic introduced was the ability to merge a faulted VMA
> > immediately next to an unfaulted VMA, which relies upon dup_anon_vma() to
> > correctly handle anon_vma state.
> >
> > In the case of the merge of an existing VMA (that is changing properties of
> > a VMA and then merging if those properties are shared by adjacent VMAs),
> > dup_anon_vma() is invoked correctly.
> >
> > However in the case of the merge of a new VMA, a corner case peculiar to
> > mremap() was missed.
> >
> > The issue is that vma_expand() only performs dup_anon_vma() if the target
> > (the VMA that will ultimately become the merged VMA): is not the next VMA,
> > i.e. the one that appears after the range in which the new VMA is to be
> > established.
> >
> > A key insight here is that in all other cases other than mremap(), a new
> > VMA merge either expands an existing VMA, meaning that the target VMA will
> > be that VMA, or would have anon_vma be NULL.
> >
> > Specifically:
> >
> > * __mmap_region() - no anon_vma in place, initial mapping.
> > * do_brk_flags() - expanding an existing VMA.
> > * vma_merge_extend() - expanding an existing VMA.
> > * relocate_vma_down() - no anon_vma in place, initial mapping.
> >
> > In addition, we are in the unique situation of needing to duplicate
> > anon_vma state from a VMA that is neither the previous or next VMA being
> > merged with.
> >
> > To account for this, introduce a new field in struct vma_merge_struct
> > specifically for the mremap() case, and update vma_expand() to explicitly
> > check for this case and invoke dup_anon_vma() to ensure anon_vma state is
> > correctly propagated.
> >
> > This issue can be observed most directly by invoked mremap() to move around
> > a VMA and cause this kind of merge with the MREMAP_DONTUNMAP flag
> > specified.
> >
> > This will result in unlink_anon_vmas() being called after failing to
> > duplicate anon_vma state to the target VMA, which results in the anon_vma
> > itself being freed with folios still possessing dangling pointers to the
> > anon_vma and thus a use-after-free bug.
> >
> > This bug was discovered via a syzbot report, which this patch resolves.
> >
> > The following program reproduces the issue (and is fixed by this patch):
>
> [...]
>
> > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> > Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges")
> > Reported-by: syzbot+b165fc2e11771c66d8ba@syzkaller.appspotmail.com
> > Closes: https://lore.kernel.org/all/694a2745.050a0220.19928e.0017.GAE@google.com/
> > Cc: stable@kernel.org
> > ---
>
> Hi Lorenzo, I really appreciate that you've done very through analysis of
> the bug so quickly and precisely, and wrote a fix. Also having a simpler
> repro (that works on my machine!) is hugely helpful.
>
> My comment inlined below.
>
> > mm/vma.c | 58 ++++++++++++++++++++++++++++++++++++++++++--------------
> > mm/vma.h | 3 +++
> > 2 files changed, 47 insertions(+), 14 deletions(-)
> >
> > diff --git a/mm/vma.c b/mm/vma.c
> > index 6377aa290a27..2268f518a89b 100644
> > --- a/mm/vma.c
> > +++ b/mm/vma.c
> > @@ -1130,26 +1130,50 @@ int vma_expand(struct vma_merge_struct *vmg)
> > mmap_assert_write_locked(vmg->mm);
> >
> > vma_start_write(target);
> > - if (next && (target != next) && (vmg->end == next->vm_end)) {
> > + if (next && vmg->end == next->vm_end) {
> > + struct vm_area_struct *copied_from = vmg->copied_from;
> > int ret;
> >
> > - sticky_flags |= next->vm_flags & VM_STICKY;
> > - remove_next = true;
> > - /* This should already have been checked by this point. */
> > - VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
> > - vma_start_write(next);
> > - /*
> > - * In this case we don't report OOM, so vmg->give_up_on_mm is
> > - * safe.
> > - */
> > - ret = dup_anon_vma(target, next, &anon_dup);
> > - if (ret)
> > - return ret;
> > + if (target != next) {
> > + sticky_flags |= next->vm_flags & VM_STICKY;
> > + remove_next = true;
> > + /* This should already have been checked by this point. */
> > + VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
> > + vma_start_write(next);
> > + /*
> > + * In this case we don't report OOM, so vmg->give_up_on_mm is
> > + * safe.
> > + */
> > + ret = dup_anon_vma(target, next, &anon_dup);
> > + if (ret)
> > + return ret;
>
> While this fix works when we're expanding the next VMA to cover the new
> range, I don't think it's covering the case where we're expanding the
> prev VMA to cover the new range and next VMA.
Right, yeah, because we're also not propagating the mremap'd VMA's anon_vma
state, damn.
I missed the wood for the trees: this just needs to be a general pattern for
propagating mremap() state. This is probably why we disallowed these merges in
the past :) but it's good to address this; it also shows what a mess anon
mremap() makes, and further underlines the need for the anon rmap rework.
Let me rework this, and come back with a general solution.
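Roughly, I'd expect the shape in vma_expand() to become something like this
(untested sketch only, reusing the copied_from field from this patch and
glossing over locking/OOM handling):

        vma_start_write(target);
        if (next && vmg->end == next->vm_end) {
                int ret;

                if (target != next) {
                        sticky_flags |= next->vm_flags & VM_STICKY;
                        remove_next = true;
                        /* This should already have been checked by this point. */
                        VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
                        vma_start_write(next);
                        ret = dup_anon_vma(target, next, &anon_dup);
                        if (ret)
                                return ret;
                }

                /*
                 * A copied (mremap()'d) range must propagate its anon_vma
                 * state to whichever VMA becomes the target, regardless of
                 * whether that is prev or next.
                 */
                if (vmg->copied_from) {
                        ret = dup_anon_vma(target, vmg->copied_from, &anon_dup);
                        if (ret)
                                return ret;
                }
        }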
I will also add some self tests whose failure case will be 'breaks
CONFIG_DEBUG_VM kernel', and just assert basic stuff on them. We already have a
merge.c test suite for this.
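Something along these lines, perhaps (a rough sketch using the generic
kselftest harness; the fixture/test names and sizes are illustrative only and
don't match the existing merge.c fixture, and the "real" assertion is simply
the absence of a CONFIG_DEBUG_VM splat on the final munmap()):

        #define _GNU_SOURCE
        #include <sys/mman.h>
        #include <unistd.h>
        #include "../kselftest_harness.h"

        FIXTURE(mremap_merge)
        {
                unsigned long page_size;
                char *reserved;
        };

        FIXTURE_SETUP(mremap_merge)
        {
                self->page_size = sysconf(_SC_PAGE_SIZE);
                self->reserved = mmap(NULL, 100 * self->page_size, PROT_NONE,
                                      MAP_PRIVATE | MAP_ANON, -1, 0);
                ASSERT_NE(self->reserved, MAP_FAILED);
        }

        FIXTURE_TEARDOWN(mremap_merge)
        {
                munmap(self->reserved, 100 * self->page_size);
        }

        TEST_F(mremap_merge, dontunmap_faulted_unfaulted)
        {
                const unsigned long pgsz = self->page_size;
                char *ptr_a, *ptr_b;

                /* Faulted VMA A, then moved out of the way. */
                ptr_a = mmap(&self->reserved[pgsz], 10 * pgsz,
                             PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
                ASSERT_NE(ptr_a, MAP_FAILED);
                ptr_a[0] = 'x';
                ptr_a = mremap(ptr_a, 10 * pgsz, 10 * pgsz,
                               MREMAP_FIXED | MREMAP_MAYMOVE,
                               &self->reserved[50 * pgsz]);
                ASSERT_NE(ptr_a, MAP_FAILED);

                /* Unfaulted VMA B immediately after A's eventual destination. */
                ptr_b = mmap(&self->reserved[11 * pgsz], 10 * pgsz,
                             PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
                ASSERT_NE(ptr_b, MAP_FAILED);

                /* Move A back with MREMAP_DONTUNMAP so it merges with B. */
                ptr_a = mremap(ptr_a, 10 * pgsz, 10 * pgsz,
                               MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
                               &self->reserved[pgsz]);
                ASSERT_NE(ptr_a, MAP_FAILED);
                /* MREMAP_DONTUNMAP moves the contents to the destination. */
                ASSERT_EQ(ptr_a[0], 'x');

                ASSERT_EQ(munmap(ptr_a, 10 * pgsz), 0);
        }

        TEST_HARNESS_MAIN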
>
> Previously I argued [1] that when mremap()'ing into a gap between two unfaulted
Right, sorry, I missed this as I was so focused on fixing the reported issue
(and was on leave at the time, so wanted this done asap :)
> VMAs that are compatible, calling `dup_anon_vma(target, next, &anon_dup);`
> is incorrect:
> mremap()
> |-----------------------------------|
> | |
> v |
> [ VMA C, unfaulted ][ gap ][ VMA B, unfaulted ][ gap ][ VMA A, faulted ]
>
>
> I suspected this patch doesn't cover the case, so I slightly modified your
> repro to test my theory (added to the end of the email).
>
> The test confirmed my theory. It doesn't cover the case above because
> target is not next but prev ((target != next) returns true), and neither
> target nor next have anon_vma, but the VMA that is copied from does.
>
> With the modified repro, I'm still seeing the warning that Jann added,
> on top of mm-hotfixes-unstable (HEAD: 871cf622a8ba) which already has
> your fix (65769f3b9877).
>
> [1] https://lore.kernel.org/linux-mm/aVd-UZQGW4ltH6hY@hyeyoo
>
> > + } else if (copied_from) {
> > + vma_start_write(next);
> > +
> > + /*
> > + * We are copying from a VMA (i.e. mremap()'ing) to
> > + * next, and thus must ensure that either anon_vma's are
> > + * already compatible (in which case this call is a nop)
> > + * or all anon_vma state is propagated to next
> > + */
> > + ret = dup_anon_vma(next, copied_from, &anon_dup);
> > + if (ret)
> > + return ret;
>
> So we need to fix this to work even when (target != next) returns true.
>
> Modified repro:
>
> #define _GNU_SOURCE
> #include <stdio.h>
> #include <stdlib.h>
> #include <unistd.h>
> #include <sys/mman.h>
>
> #define RESERVED_PGS (100)
> #define VMA_A_PGS (10)
> #define VMA_B_PGS (10)
> #define VMA_C_PGS (10)
> #define NUM_ITERS (1000)
>
> static void trigger_bug(void)
> {
> unsigned long page_size = sysconf(_SC_PAGE_SIZE);
> char *reserved, *ptr_a, *ptr_b, *ptr_c;
>
> /*
> * The goal here is to achieve:
> * mremap()
> * |-----------------------------------|
> * | |
> * v |
> * [ VMA C, unfaulted ][ gap ][ VMA B, unfaulted ][ gap ][ VMA A, faulted ]
> *
> * Merge VMA C, B, A by expanding VMA C.
> */
>
> /* Reserve a region of memory to operate in. */
> reserved = mmap(NULL, RESERVED_PGS * page_size, PROT_NONE,
> MAP_PRIVATE | MAP_ANON, -1, 0);
> if (reserved == MAP_FAILED) {
> perror("mmap reserved");
> exit(EXIT_FAILURE);
> }
>
> /* Map VMA A into place. */
> ptr_a = mmap(&reserved[page_size + VMA_C_PGS * page_size], VMA_A_PGS * page_size,
> PROT_READ | PROT_WRITE,
> MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
> if (ptr_a == MAP_FAILED) {
> perror("mmap VMA A");
> exit(EXIT_FAILURE);
> }
> /* Fault it in. */
> ptr_a[0] = 'x';
>
> /*
> * Now move it out of the way so we can place VMA B in position,
> * unfaulted.
> */
> ptr_a = mremap(ptr_a, VMA_A_PGS * page_size, VMA_A_PGS * page_size,
> MREMAP_FIXED | MREMAP_MAYMOVE, &reserved[50 * page_size]);
> if (ptr_a == MAP_FAILED) {
> perror("mremap VMA A out of the way");
> exit(EXIT_FAILURE);
> }
>
> /* Map VMA C into place. */
> ptr_c = mmap(&reserved[page_size], VMA_C_PGS * page_size,
> PROT_READ | PROT_WRITE,
> MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
> if (ptr_c == MAP_FAILED) {
> perror("mmap VMA C");
> exit(EXIT_FAILURE);
> }
>
> /* Map VMA B into place. */
> ptr_b = mmap(&reserved[page_size + VMA_C_PGS * page_size + VMA_A_PGS * page_size],
> VMA_B_PGS * page_size, PROT_READ | PROT_WRITE,
> MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
> if (ptr_b == MAP_FAILED) {
> perror("mmap VMA B");
> exit(EXIT_FAILURE);
> }
>
> /* Now move VMA A into position w/MREMAP_DONTUNMAP + free anon_vma. */
> ptr_a = mremap(ptr_a, VMA_A_PGS * page_size, VMA_A_PGS * page_size,
> MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
> &reserved[page_size + VMA_C_PGS * page_size]);
> if (ptr_a == MAP_FAILED) {
> perror("mremap VMA A with MREMAP_DONTUNMAP");
> exit(EXIT_FAILURE);
> }
>
> /* Finally, unmap VMA A which should trigger the bug. */
> munmap(ptr_a, VMA_A_PGS * page_size);
>
> /* Cleanup in case bug didn't trigger sufficiently visibly... */
> munmap(reserved, RESERVED_PGS * page_size);
> }
>
> int main(void)
> {
> int i;
>
> for (i = 0; i < NUM_ITERS; i++)
> trigger_bug();
>
> return EXIT_SUCCESS;
> }
Thanks for this! Will go through all this and come up with a v2.
>
> --
> Cheers,
> Harry / Hyeonggon
* Re: [PATCH] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge
2026-01-04 19:25 ` David Hildenbrand (Red Hat)
@ 2026-01-05 12:53 ` Lorenzo Stoakes
0 siblings, 0 replies; 8+ messages in thread
From: Lorenzo Stoakes @ 2026-01-05 12:53 UTC (permalink / raw)
To: David Hildenbrand (Red Hat)
Cc: Andrew Morton, Liam R . Howlett, Vlastimil Babka, Jann Horn,
Pedro Falcato, Yeoreum Yun, linux-mm, linux-kernel,
Jeongjun Park, Rik van Riel, Harry Yoo
On Sun, Jan 04, 2026 at 08:25:08PM +0100, David Hildenbrand (Red Hat) wrote:
> On 1/2/26 21:55, Lorenzo Stoakes wrote:
> > Commit 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA
> > merges") introduced the ability to merge previously unavailable VMA merge
> > scenarios.
> >
> > The key piece of logic introduced was the ability to merge a faulted VMA
> > immediately next to an unfaulted VMA, which relies upon dup_anon_vma() to
> > correctly handle anon_vma state.
> >
> > In the case of the merge of an existing VMA (that is changing properties of
> > a VMA and then merging if those properties are shared by adjacent VMAs),
> > dup_anon_vma() is invoked correctly.
> >
> > However in the case of the merge of a new VMA, a corner case peculiar to
> > mremap() was missed.
> >
> > The issue is that vma_expand() only performs dup_anon_vma() if the target
> > (the VMA that will ultimately become the merged VMA): is not the next VMA,
> > i.e. the one that appears after the range in which the new VMA is to be
> > established.
> >
> > A key insight here is that in all other cases other than mremap(), a new
> > VMA merge either expands an existing VMA, meaning that the target VMA will
> > be that VMA, or would have anon_vma be NULL.
> >
> > Specifically:
> >
> > * __mmap_region() - no anon_vma in place, initial mapping.
> > * do_brk_flags() - expanding an existing VMA.
> > * vma_merge_extend() - expanding an existing VMA.
> > * relocate_vma_down() - no anon_vma in place, initial mapping.
> >
> > In addition, we are in the unique situation of needing to duplicate
> > anon_vma state from a VMA that is neither the previous or next VMA being
> > merged with.
> >
> > To account for this, introduce a new field in struct vma_merge_struct
> > specifically for the mremap() case, and update vma_expand() to explicitly
> > check for this case and invoke dup_anon_vma() to ensure anon_vma state is
> > correctly propagated.
> >
> > This issue can be observed most directly by invoked mremap() to move around
> > a VMA and cause this kind of merge with the MREMAP_DONTUNMAP flag
> > specified.
> >
> > This will result in unlink_anon_vmas() being called after failing to
> > duplicate anon_vma state to the target VMA, which results in the anon_vma
> > itself being freed with folios still possessing dangling pointers to the
> > anon_vma and thus a use-after-free bug.
>
> Makes sense to me.
>
> >
> > This bug was discovered via a syzbot report, which this patch resolves.
> >
> > The following program reproduces the issue (and is fixed by this patch):
> >
> > #define _GNU_SOURCE
> > #include <stdio.h>
> > #include <stdlib.h>
> > #include <unistd.h>
> > #include <sys/mman.h>
> >
> > #define RESERVED_PGS (100)
> > #define VMA_A_PGS (10)
> > #define VMA_B_PGS (10)
> > #define NUM_ITERS (1000)
> >
> > static void trigger_bug(void)
> > {
> > unsigned long page_size = sysconf(_SC_PAGE_SIZE);
> > char *reserved, *ptr_a, *ptr_b;
> >
> > /*
> > * The goal here is to achieve:
> > *
> > * mremap() with MREMAP_DONTUNMAP such that A and B merge:
> > *
> > * |-------------------------|
> > * | |
> > * | |-----------| |---------|
> > * v | unfaulted | | faulted |
> > * |-----------| |---------|
> > * B A
> > *
> > * Then unmap VMA A to trigger the bug.
> > */
> >
> > /* Reserve a region of memory to operate in. */
> > reserved = mmap(NULL, RESERVED_PGS * page_size, PROT_NONE,
> > MAP_PRIVATE | MAP_ANON, -1, 0);
> > if (reserved == MAP_FAILED) {
> > perror("mmap reserved");
> > exit(EXIT_FAILURE);
> > }
> >
> > /* Map VMA A into place. */
> > ptr_a = mmap(&reserved[page_size], VMA_A_PGS * page_size,
> > PROT_READ | PROT_WRITE,
> > MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
> > if (ptr_a == MAP_FAILED) {
> > perror("mmap VMA A");
> > exit(EXIT_FAILURE);
> > }
> > /* Fault it in. */
> > ptr_a[0] = 'x';
> >
> > /*
> > * Now move it out of the way so we can place VMA B in position,
> > * unfaulted.
> > */
> > ptr_a = mremap(ptr_a, VMA_A_PGS * page_size, VMA_A_PGS * page_size,
> > MREMAP_FIXED | MREMAP_MAYMOVE, &reserved[50 * page_size]);
> > if (ptr_a == MAP_FAILED) {
> > perror("mremap VMA A out of the way");
> > exit(EXIT_FAILURE);
> > }
> >
> > /* Map VMA B into place. */
> > ptr_b = mmap(&reserved[page_size + VMA_A_PGS * page_size],
> > VMA_B_PGS * page_size, PROT_READ | PROT_WRITE,
> > MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
> > if (ptr_b == MAP_FAILED) {
> > perror("mmap VMA B");
> > exit(EXIT_FAILURE);
> > }
> >
> > /* Now move VMA A into position w/MREMAP_DONTUNMAP + free anon_vma. */
> > ptr_a = mremap(ptr_a, VMA_A_PGS * page_size, VMA_A_PGS * page_size,
> > MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
> > &reserved[page_size]);
> > if (ptr_a == MAP_FAILED) {
> > perror("mremap VMA A with MREMAP_DONTUNMAP");
> > exit(EXIT_FAILURE);
> > }
> >
> > /* Finally, unmap VMA A which should trigger the bug. */
> > munmap(ptr_a, VMA_A_PGS * page_size);
> >
> > /* Cleanup in case bug didn't trigger sufficiently visibly... */
> > munmap(reserved, RESERVED_PGS * page_size);
> > }
> >
> > int main(void)
> > {
> > int i;
> >
> > for (i = 0; i < NUM_ITERS; i++)
> > trigger_bug();
>
> Just wondering, why do we have to loop, I would have thought that this would
> trigger deterministically.
>
> >
> > return EXIT_SUCCESS;
> > }
> >
> > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> > Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges")
> > Reported-by: syzbot+b165fc2e11771c66d8ba@syzkaller.appspotmail.com
> > Closes: https://lore.kernel.org/all/694a2745.050a0220.19928e.0017.GAE@google.com/
> > Cc: stable@kernel.org
>
> I was wondering whether this commit actually fixes older reports that Jann
> mentioned in his commit a222439e1e27 ("mm/rmap: add anon_vma lifetime debug check").
>
> [1] https://lore.kernel.org/r/67abaeaf.050a0220.110943.0041.GAE@google.com
> [2] https://lore.kernel.org/r/67a76f33.050a0220.3d72c.0028.GAE@google.com
They feel similar given the splats.
So what we had before was:
static inline bool is_mergeable_anon_vma(struct anon_vma *anon_vma1,
struct anon_vma *anon_vma2, struct vm_area_struct *vma)
{
/*
* The list_is_singular() test is to avoid merging VMA cloned from
* parents. This can improve scalability caused by anon_vma lock.
*/
if ((!anon_vma1 || !anon_vma2) && (!vma ||
list_is_singular(&vma->anon_vma_chain)))
return true;
return anon_vma1 == anon_vma2;
}
static bool can_vma_merge_before(struct vma_merge_struct *vmg)
{
pgoff_t pglen = PHYS_PFN(vmg->end - vmg->start);
if (is_mergeable_vma(vmg, /* merge_next = */ true) &&
is_mergeable_anon_vma(vmg->anon_vma, vmg->next->anon_vma, vmg->next)) {
if (vmg->next->vm_pgoff == vmg->pgoff + pglen)
return true;
}
return false;
}
So we'd disallow this kind of merge as vma == next and vmg->anon_vma = the
faulted-in VMA's anon_vma.
And after 6.16:
static bool is_mergeable_anon_vma(struct vma_merge_struct *vmg, bool merge_next)
{
struct vm_area_struct *tgt = merge_next ? vmg->next : vmg->prev;
struct vm_area_struct *src = vmg->middle; /* existing merge case. */
struct anon_vma *tgt_anon = tgt->anon_vma;
struct anon_vma *src_anon = vmg->anon_vma;
/*
* We _can_ have !src, vmg->anon_vma via copy_vma(). In this instance we
* will remove the existing VMA's anon_vma's so there's no scalability
* concerns.
*/
VM_WARN_ON(src && src_anon != src->anon_vma);
/* Case 1 - we will dup_anon_vma() from src into tgt. */
if (!tgt_anon && src_anon)
return !vma_had_uncowed_parents(src);
/* Case 2 - we will simply use tgt's anon_vma. */
if (tgt_anon && !src_anon)
return !vma_had_uncowed_parents(tgt);
/* Case 3 - the anon_vma's are already shared. */
return src_anon == tgt_anon;
}
Where we _will_ allow this merge.
So I don't think this can be the same case.
That is bizarre.
By the way, this also makes me think we should do something like:
&& !vma_had_uncowed_parents(vmg->copied_from)
for case 1, as otherwise we're treating this case differently from merging
with the VMA already moved.
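I.e. something like this for case 1 (sketch only, on top of the copied_from
field this patch adds; the explicit NULL check is just to keep the non-copied
case unchanged):

        /* Case 1 - we will dup_anon_vma() from src into tgt. */
        if (!tgt_anon && src_anon)
                return !vma_had_uncowed_parents(src) &&
                       (!vmg->copied_from ||
                        !vma_had_uncowed_parents(vmg->copied_from));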
Really using vma_merge_new_range() in copy_vma() is a hack as we're not merging
a _new_ VMA.
Hm, maybe it would be clearer to add vma_merge_copied_range() and just put all
this horrid stuff in one single place. Let me play around with this.
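Something like this, maybe (hypothetical helper, the name is mine and not in
the tree; it would just make the copied-range case explicit rather than
reusing the new-VMA path directly in copy_vma()):

        static struct vm_area_struct *vma_merge_copied_range(struct vma_merge_struct *vmg,
                        struct vm_area_struct *copied_from)
        {
                /*
                 * A copied (mremap()'d) range behaves like a new range for
                 * merging purposes, except that anon_vma state must be
                 * propagated from the VMA we are copying from.
                 */
                vmg->copied_from = copied_from;
                vmg->middle = NULL;
                return vma_merge_new_range(vmg);
        }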
I will be adding tests.
>
>
> But 879bca0a2c4f went into v6.16, but [1] and [2] are against v6.14.
>
> So naturally I wonder, could it be that we had a bug even before 879bca0a2c4f that
> resulted in similar symptoms?
>
> Option (1): [1] and [2] are already fixed
>
> Option (2): [1] and [2] are still broken
Probably this.
>
> Option (3): [1] and [2] would be fixed by your patch as well
>
> But we don't even have reproducers, so [1] and [2] could just be a side-effect of
> another bug, maybe.
Yeah... so good idea to keep this assert here :)
>
>
>
> > ---
> > mm/vma.c | 58 ++++++++++++++++++++++++++++++++++++++++++--------------
> > mm/vma.h | 3 +++
> > 2 files changed, 47 insertions(+), 14 deletions(-)
> >
> > diff --git a/mm/vma.c b/mm/vma.c
> > index 6377aa290a27..2268f518a89b 100644
> > --- a/mm/vma.c
> > +++ b/mm/vma.c
> > @@ -1130,26 +1130,50 @@ int vma_expand(struct vma_merge_struct *vmg)
> > mmap_assert_write_locked(vmg->mm);
> >
> > vma_start_write(target);
> > - if (next && (target != next) && (vmg->end == next->vm_end)) {
> > + if (next && vmg->end == next->vm_end) {
> > + struct vm_area_struct *copied_from = vmg->copied_from;
> > int ret;
> >
> > - sticky_flags |= next->vm_flags & VM_STICKY;
> > - remove_next = true;
> > - /* This should already have been checked by this point. */
> > - VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
> > - vma_start_write(next);
> > - /*
> > - * In this case we don't report OOM, so vmg->give_up_on_mm is
> > - * safe.
> > - */
> > - ret = dup_anon_vma(target, next, &anon_dup);
> > - if (ret)
> > - return ret;
> > + if (target != next) {
> > + sticky_flags |= next->vm_flags & VM_STICKY;
> > + remove_next = true;
> > + /* This should already have been checked by this point. */
> > + VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
> > + vma_start_write(next);
> > + /*
> > + * In this case we don't report OOM, so vmg->give_up_on_mm is
> > + * safe.
> > + */
> > + ret = dup_anon_vma(target, next, &anon_dup);
> > + if (ret)
> > + return ret;
> > + } else if (copied_from) {
> > + vma_start_write(next);
> > +
> > + /*
> > + * We are copying from a VMA (i.e. mremap()'ing) to
> > + * next, and thus must ensure that either anon_vma's are
> > + * already compatible (in which case this call is a nop)
> > + * or all anon_vma state is propagated to next
> > + */
> > + ret = dup_anon_vma(next, copied_from, &anon_dup);
> > + if (ret)
> > + return ret;
> > + } else {
> > + /* In no other case may the anon_vma differ. */
> > + VM_WARN_ON_VMG(target->anon_vma != next->anon_vma, vmg);
> > + }
>
>
> No expert on that code, but looks reasonable to me.
>
> Wondering whether we want to pull the vma_start_write(next) before the loop
> (for the warn we certainly don't care).
Actually I think we don't need vma_start_write(next) after all, since we're not
removing next in this case.
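So that branch could shrink to roughly this (sketch only, and it'll be reworked
for v2 anyway):
		} else if (copied_from) {
			/*
			 * target == next here, and vma_start_write(target) has
			 * already been called above; next is not being removed,
			 * we only propagate anon_vma state into it.
			 */
			ret = dup_anon_vma(next, copied_from, &anon_dup);
			if (ret)
				return ret;
		}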
>
> --
> Cheers
>
> David
v2 incoming...
Thanks, Lorenzo
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge
2026-01-05 5:11 ` Harry Yoo
2026-01-05 9:12 ` Lorenzo Stoakes
@ 2026-01-05 15:24 ` Liam R. Howlett
2026-01-05 15:32 ` Lorenzo Stoakes
1 sibling, 1 reply; 8+ messages in thread
From: Liam R. Howlett @ 2026-01-05 15:24 UTC (permalink / raw)
To: Harry Yoo
Cc: Lorenzo Stoakes, Andrew Morton, Vlastimil Babka, Jann Horn,
Pedro Falcato, Yeoreum Yun, linux-mm, linux-kernel,
David Hildenbrand, Jeongjun Park, Rik van Riel
* Harry Yoo <harry.yoo@oracle.com> [260105 00:11]:
> On Fri, Jan 02, 2026 at 08:55:20PM +0000, Lorenzo Stoakes wrote:
> > Commit 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA
> > merges") introduced the ability to merge previously unavailable VMA merge
> > scenarios.
> >
> > The key piece of logic introduced was the ability to merge a faulted VMA
> > immediately next to an unfaulted VMA, which relies upon dup_anon_vma() to
> > correctly handle anon_vma state.
> >
> > In the case of the merge of an existing VMA (that is changing properties of
> > a VMA and then merging if those properties are shared by adjacent VMAs),
> > dup_anon_vma() is invoked correctly.
> >
> > However in the case of the merge of a new VMA, a corner case peculiar to
> > mremap() was missed.
> >
> > The issue is that vma_expand() only performs dup_anon_vma() if the target
> > (the VMA that will ultimately become the merged VMA): is not the next VMA,
> > i.e. the one that appears after the range in which the new VMA is to be
> > established.
> >
> > A key insight here is that in all other cases other than mremap(), a new
> > VMA merge either expands an existing VMA, meaning that the target VMA will
> > be that VMA, or would have anon_vma be NULL.
> >
> > Specifically:
> >
> > * __mmap_region() - no anon_vma in place, initial mapping.
> > * do_brk_flags() - expanding an existing VMA.
> > * vma_merge_extend() - expanding an existing VMA.
> > * relocate_vma_down() - no anon_vma in place, initial mapping.
> >
> > In addition, we are in the unique situation of needing to duplicate
> > anon_vma state from a VMA that is neither the previous or next VMA being
> > merged with.
> >
> > To account for this, introduce a new field in struct vma_merge_struct
> > specifically for the mremap() case, and update vma_expand() to explicitly
> > check for this case and invoke dup_anon_vma() to ensure anon_vma state is
> > correctly propagated.
> >
> > This issue can be observed most directly by invoked mremap() to move around
> > a VMA and cause this kind of merge with the MREMAP_DONTUNMAP flag
> > specified.
> >
> > This will result in unlink_anon_vmas() being called after failing to
> > duplicate anon_vma state to the target VMA, which results in the anon_vma
> > itself being freed with folios still possessing dangling pointers to the
> > anon_vma and thus a use-after-free bug.
> >
> > This bug was discovered via a syzbot report, which this patch resolves.
> >
> > The following program reproduces the issue (and is fixed by this patch):
>
> [...]
>
> > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> > Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges")
> > Reported-by: syzbot+b165fc2e11771c66d8ba@syzkaller.appspotmail.com
> > Closes: https://lore.kernel.org/all/694a2745.050a0220.19928e.0017.GAE@google.com/
> > Cc: stable@kernel.org
> > ---
>
> Hi Lorenzo, I really appreciate that you've done a very thorough analysis of
> the bug so quickly and precisely, and written a fix. Also having a simpler
> repro (that works on my machine!) is hugely helpful.
>
> My comment is inlined below.
>
> > mm/vma.c | 58 ++++++++++++++++++++++++++++++++++++++++++--------------
> > mm/vma.h | 3 +++
> > 2 files changed, 47 insertions(+), 14 deletions(-)
> >
> > diff --git a/mm/vma.c b/mm/vma.c
> > index 6377aa290a27..2268f518a89b 100644
> > --- a/mm/vma.c
> > +++ b/mm/vma.c
> > @@ -1130,26 +1130,50 @@ int vma_expand(struct vma_merge_struct *vmg)
> > mmap_assert_write_locked(vmg->mm);
> >
> > vma_start_write(target);
> > - if (next && (target != next) && (vmg->end == next->vm_end)) {
> > + if (next && vmg->end == next->vm_end) {
> > + struct vm_area_struct *copied_from = vmg->copied_from;
> > int ret;
> >
> > - sticky_flags |= next->vm_flags & VM_STICKY;
> > - remove_next = true;
> > - /* This should already have been checked by this point. */
> > - VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
> > - vma_start_write(next);
> > - /*
> > - * In this case we don't report OOM, so vmg->give_up_on_mm is
> > - * safe.
> > - */
> > - ret = dup_anon_vma(target, next, &anon_dup);
> > - if (ret)
> > - return ret;
> > + if (target != next) {
> > + sticky_flags |= next->vm_flags & VM_STICKY;
> > + remove_next = true;
> > + /* This should already have been checked by this point. */
> > + VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
> > + vma_start_write(next);
> > + /*
> > + * In this case we don't report OOM, so vmg->give_up_on_mm is
> > + * safe.
> > + */
> > + ret = dup_anon_vma(target, next, &anon_dup);
> > + if (ret)
> > + return ret;
>
> While this fix works when we're expanding the next VMA to cover the new
> range, I don't think it's covering the case where we're expanding the
> prev VMA to cover the new range and next VMA.
>
> Previously I argued [1] that when mremap()'ing into a gap between two unfaulted
> VMAs that are compatible, calling `dup_anon_vma(target, next, &anon_dup);`
> is incorrect:
> mremap()
> |-----------------------------------|
> | |
> v |
> [ VMA C, unfaulted ][ gap ][ VMA B, unfaulted ][ gap ][ VMA A, faulted ]
The key part here is that target == prev in this case (as stated in the
email linked). So we're going to dup nothing, but we really need to dup
VMA A's anon vma - right?
>
>
> I suspected this patch doesn't cover the case, so I slightly modified your
> repro to test my theory (added to the end of the email).
>
> The test confirmed my theory. It doesn't cover the case above because
> target is not next but prev ((target != next) returns true), and neither
> target nor next has an anon_vma, but the VMA that is copied from does.
>
> With the modified repro, I'm still seeing the warning that Jann added,
> on top of mm-hotfixes-unstable (HEAD: 871cf622a8ba) which already has
> your fix (65769f3b9877).
>
> [1] https://lore.kernel.org/linux-mm/aVd-UZQGW4ltH6hY@hyeyoo
>
> > + } else if (copied_from) {
> > + vma_start_write(next);
> > +
> > + /*
> > + * We are copying from a VMA (i.e. mremap()'ing) to
> > + * next, and thus must ensure that either anon_vma's are
> > + * already compatible (in which case this call is a nop)
> > + * or all anon_vma state is propagated to next
> > + */
> > + ret = dup_anon_vma(next, copied_from, &anon_dup);
> > + if (ret)
> > + return ret;
>
> So we need to fix this to work even when (target != next) returns true.
>
> Modified repro:
>
> #define _GNU_SOURCE
> #include <stdio.h>
> #include <stdlib.h>
> #include <unistd.h>
> #include <sys/mman.h>
>
> #define RESERVED_PGS (100)
> #define VMA_A_PGS (10)
> #define VMA_B_PGS (10)
> #define VMA_C_PGS (10)
> #define NUM_ITERS (1000)
>
> static void trigger_bug(void)
> {
> unsigned long page_size = sysconf(_SC_PAGE_SIZE);
> char *reserved, *ptr_a, *ptr_b, *ptr_c;
>
> /*
> * The goal here is to achieve:
> * mremap()
> * |-----------------------------------|
> * | |
> * v |
> * [ VMA C, unfaulted ][ gap ][ VMA B, unfaulted ][ gap ][ VMA A, faulted ]
> *
> * Merge VMA C, B, A by expanding VMA C.
> */
>
> /* Reserve a region of memory to operate in. */
> reserved = mmap(NULL, RESERVED_PGS * page_size, PROT_NONE,
> MAP_PRIVATE | MAP_ANON, -1, 0);
> if (reserved == MAP_FAILED) {
> perror("mmap reserved");
> exit(EXIT_FAILURE);
> }
>
> /* Map VMA A into place. */
> ptr_a = mmap(&reserved[page_size + VMA_C_PGS * page_size], VMA_A_PGS * page_size,
> PROT_READ | PROT_WRITE,
> MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
> if (ptr_a == MAP_FAILED) {
> perror("mmap VMA A");
> exit(EXIT_FAILURE);
> }
> /* Fault it in. */
> ptr_a[0] = 'x';
>
> /*
> * Now move it out of the way so we can place VMA B in position,
> * unfaulted.
> */
> ptr_a = mremap(ptr_a, VMA_A_PGS * page_size, VMA_A_PGS * page_size,
> MREMAP_FIXED | MREMAP_MAYMOVE, &reserved[50 * page_size]);
> if (ptr_a == MAP_FAILED) {
> perror("mremap VMA A out of the way");
> exit(EXIT_FAILURE);
> }
>
> /* Map VMA C into place. */
> ptr_c = mmap(&reserved[page_size], VMA_C_PGS * page_size,
> PROT_READ | PROT_WRITE,
> MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
> if (ptr_c == MAP_FAILED) {
> perror("mmap VMA C");
> exit(EXIT_FAILURE);
> }
>
> /* Map VMA B into place. */
> ptr_b = mmap(&reserved[page_size + VMA_C_PGS * page_size + VMA_A_PGS * page_size],
> VMA_B_PGS * page_size, PROT_READ | PROT_WRITE,
> MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
> if (ptr_b == MAP_FAILED) {
> perror("mmap VMA B");
> exit(EXIT_FAILURE);
> }
>
> /* Now move VMA A into position w/MREMAP_DONTUNMAP + free anon_vma. */
> ptr_a = mremap(ptr_a, VMA_A_PGS * page_size, VMA_A_PGS * page_size,
> MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
> &reserved[page_size + VMA_C_PGS * page_size]);
> if (ptr_a == MAP_FAILED) {
> perror("mremap VMA A with MREMAP_DONTUNMAP");
> exit(EXIT_FAILURE);
> }
>
> /* Finally, unmap VMA A which should trigger the bug. */
> munmap(ptr_a, VMA_A_PGS * page_size);
>
> /* Cleanup in case bug didn't trigger sufficiently visibly... */
> munmap(reserved, RESERVED_PGS * page_size);
> }
>
> int main(void)
> {
> int i;
>
> for (i = 0; i < NUM_ITERS; i++)
> trigger_bug();
>
> return EXIT_SUCCESS;
> }
>
> --
> Cheers,
> Harry / Hyeonggon
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge
2026-01-05 15:24 ` Liam R. Howlett
@ 2026-01-05 15:32 ` Lorenzo Stoakes
0 siblings, 0 replies; 8+ messages in thread
From: Lorenzo Stoakes @ 2026-01-05 15:32 UTC (permalink / raw)
To: Liam R. Howlett, Harry Yoo, Andrew Morton, Vlastimil Babka,
Jann Horn, Pedro Falcato, Yeoreum Yun, linux-mm, linux-kernel,
David Hildenbrand, Jeongjun Park, Rik van Riel
On Mon, Jan 05, 2026 at 10:24:13AM -0500, Liam R. Howlett wrote:
> > mremap()
> > |-----------------------------------|
> > | |
> > v |
> > [ VMA C, unfaulted ][ gap ][ VMA B, unfaulted ][ gap ][ VMA A, faulted ]
>
> The key part here is that target == prev in this case (as stated in the
> email linked). So we're going to dup nothing, but we really need to dup
> VMA A's anon vma - right?
Yup.
There are a number of other cases like this where mremap() of anon memory
breaks things, because the copy_vma() case violates sensible assumptions, in
the way anon mremap() violates many other sensible assumptions.
I have a generalised fix; I'm just finishing up the tests for it now.
You'll see exactly which cases in the v2, which I'll send later today.
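Very roughly, the direction is: whenever vmg->copied_from is set and the target
(whether that's prev or next) doesn't already share its anon_vma, propagate the
copied-from VMA's anon_vma state into the target. Something like this fragment
(a sketch only, the real v2 may look quite different):
	/* Sketch only - the real v2 may look quite different. */
	if (copied_from && target->anon_vma != copied_from->anon_vma) {
		ret = dup_anon_vma(target, copied_from, &anon_dup);
		if (ret)
			return ret;
	}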
I also noticed another issue which I'll fix in the same series...
Happy New Year! ;)
^ permalink raw reply [flat|nested] 8+ messages in thread
end of thread, other threads:[~2026-01-05 15:32 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-02 20:55 [PATCH] mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge Lorenzo Stoakes
2026-01-02 21:00 ` Lorenzo Stoakes
2026-01-04 19:25 ` David Hildenbrand (Red Hat)
2026-01-05 12:53 ` Lorenzo Stoakes
2026-01-05 5:11 ` Harry Yoo
2026-01-05 9:12 ` Lorenzo Stoakes
2026-01-05 15:24 ` Liam R. Howlett
2026-01-05 15:32 ` Lorenzo Stoakes