* [PATCH v2 0/2] Fix VMA confusion in Rust Binder
@ 2026-02-18 11:53 Alice Ryhl
2026-02-18 11:53 ` [PATCH v2 1/2] rust_binder: check ownership before using vma Alice Ryhl
2026-02-18 11:53 ` [PATCH v2 2/2] rust_binder: avoid reading the written value in offsets array Alice Ryhl
0 siblings, 2 replies; 7+ messages in thread
From: Alice Ryhl @ 2026-02-18 11:53 UTC (permalink / raw)
To: Greg Kroah-Hartman, Carlos Llamas, Jann Horn
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Danilo Krummrich,
Lorenzo Stoakes, Liam R. Howlett, linux-kernel, rust-for-linux,
linux-mm, Alice Ryhl, stable
This series contains two bugfixes for Rust Binder. I'd like to follow
them up with better solutions by changing the VMA API, but this should
work as an immediate fix.
See the first commit for an explanation of the actual bug.
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
Changes in v2:
- Use imperative mood.
- Add some comments about why reuse of ShrinkablePageRange is not a
problem.
- Use ptr::from_ref()
- Rustfmt
- Link to v1: https://lore.kernel.org/r/20260217-binder-vma-check-v1-0-1a2b37f7b762@google.com
---
Alice Ryhl (2):
rust_binder: check ownership before using vma
rust_binder: avoid reading the written value in offsets array
drivers/android/binder/page_range.rs | 83 +++++++++++++++++++++++++++---------
drivers/android/binder/thread.rs | 17 +++-----
2 files changed, 69 insertions(+), 31 deletions(-)
---
base-commit: 0f2acd3148e0ef42bdacbd477f90e8533f96b2ac
change-id: 20260217-binder-vma-check-b6fca42e986c
Best regards,
--
Alice Ryhl <aliceryhl@google.com>
* [PATCH v2 1/2] rust_binder: check ownership before using vma
2026-02-18 11:53 [PATCH v2 0/2] Fix VMA confusion in Rust Binder Alice Ryhl
@ 2026-02-18 11:53 ` Alice Ryhl
2026-02-18 13:47 ` Danilo Krummrich
2026-02-18 15:54 ` Liam R. Howlett
2026-02-18 11:53 ` [PATCH v2 2/2] rust_binder: avoid reading the written value in offsets array Alice Ryhl
1 sibling, 2 replies; 7+ messages in thread
From: Alice Ryhl @ 2026-02-18 11:53 UTC (permalink / raw)
To: Greg Kroah-Hartman, Carlos Llamas, Jann Horn
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Danilo Krummrich,
Lorenzo Stoakes, Liam R. Howlett, linux-kernel, rust-for-linux,
linux-mm, Alice Ryhl, stable

When installing missing pages (or zapping them), Rust Binder will look
up the vma in the mm by address, and then call vm_insert_page (or
zap_page_range_single). However, if the vma is closed and replaced with
a different vma at the same address, this can lead to Rust Binder
installing pages into the wrong vma.

If a page is installed into a writable vma, it becomes possible to
write to your own binder pages, which are normally read-only. Although
you're not supposed to be able to write to those pages, the intent
behind the design of Rust Binder is that even if you gain that ability,
it should not lead to anything bad. Unfortunately, due to another bug,
that is not the case.

To fix this, store a pointer in vm_private_data and check that the vma
returned by vma_lookup() has the right vm_ops and vm_private_data
before trying to use the vma. This should ensure that Rust Binder will
refuse to interact with any other VMA. The plan is to introduce more
vma abstractions to avoid this unsafe access to vm_ops and
vm_private_data, but for now let's start with the simplest possible
fix.

C Binder performs the same check in a slightly different way: it
provides a vm_ops->close that sets a boolean to true, then checks that
boolean after calling vma_lookup(). That is more fragile than the
solution in this patch. (We probably still want to do both, but the
vm_ops->close callback will be added later as part of the follow-up
vma API changes.)

It's still possible to remap the vma so that pages appear in the right
vma but at the wrong offset; this is a separate issue and will be
fixed when Rust Binder gets a vm_ops->close callback.

Cc: stable@vger.kernel.org
Fixes: eafedbc7c050 ("rust_binder: add Rust Binder driver")
Reported-by: Jann Horn <jannh@google.com>
Reviewed-by: Jann Horn <jannh@google.com>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 drivers/android/binder/page_range.rs | 83 +++++++++++++++++++++++++++---------
 1 file changed, 63 insertions(+), 20 deletions(-)

diff --git a/drivers/android/binder/page_range.rs b/drivers/android/binder/page_range.rs
index fdd97112ef5c8b2341e498dc3567b659f05e3fd7..67aae783e8b8b7cf60ecf7e711d5f6f6f5d1dbe3 100644
--- a/drivers/android/binder/page_range.rs
+++ b/drivers/android/binder/page_range.rs
@@ -142,6 +142,30 @@ pub(crate) struct ShrinkablePageRange {
     _pin: PhantomPinned,
 }
 
+// We do not define any ops. For now, used only to check identity of vmas.
+static BINDER_VM_OPS: bindings::vm_operations_struct = pin_init::zeroed();
+
+// To ensure that we do not accidentally install pages into or zap pages from the wrong vma, we
+// check its vm_ops and private data before using it.
+fn check_vma(vma: &virt::VmaRef, owner: *const ShrinkablePageRange) -> Option<&virt::VmaMixedMap> {
+    // SAFETY: Just reading the vm_ops pointer of any active vma is safe.
+    let vm_ops = unsafe { (*vma.as_ptr()).vm_ops };
+    if !ptr::eq(vm_ops, &BINDER_VM_OPS) {
+        return None;
+    }
+
+    // SAFETY: Reading the vm_private_data pointer of a binder-owned vma is safe.
+    let vm_private_data = unsafe { (*vma.as_ptr()).vm_private_data };
+    // The ShrinkablePageRange is only dropped when the Process is dropped, which only happens once
+    // the file's ->release handler is invoked, which means the ShrinkablePageRange outlives any
+    // VMA associated with it, so there can't be any false positives due to pointer reuse here.
+    if !ptr::eq(vm_private_data, owner.cast()) {
+        return None;
+    }
+
+    vma.as_mixedmap_vma()
+}
+
 struct Inner {
     /// Array of pages.
     ///
@@ -308,6 +332,18 @@ pub(crate) fn register_with_vma(&self, vma: &virt::VmaNew) -> Result<usize> {
         inner.size = num_pages;
         inner.vma_addr = vma.start();
 
+        // This pointer is only used for comparison - it's not dereferenced.
+        //
+        // SAFETY: We own the vma, and we don't use any methods on VmaNew that rely on
+        // `vm_private_data`.
+        unsafe {
+            (*vma.as_ptr()).vm_private_data = ptr::from_ref(self).cast_mut().cast::<c_void>()
+        };
+
+        // SAFETY: We own the vma, and we don't use any methods on VmaNew that rely on
+        // `vm_ops`.
+        unsafe { (*vma.as_ptr()).vm_ops = &BINDER_VM_OPS };
+
         Ok(num_pages)
     }
 
@@ -399,22 +435,24 @@ unsafe fn use_page_slow(&self, i: usize) -> Result<()> {
         //
         // Using `mmput_async` avoids this, because then the `mm` cleanup is instead queued to a
         // workqueue.
-        MmWithUser::into_mmput_async(self.mm.mmget_not_zero().ok_or(ESRCH)?)
-            .mmap_read_lock()
-            .vma_lookup(vma_addr)
-            .ok_or(ESRCH)?
-            .as_mixedmap_vma()
-            .ok_or(ESRCH)?
-            .vm_insert_page(user_page_addr, &new_page)
-            .inspect_err(|err| {
-                pr_warn!(
-                    "Failed to vm_insert_page({}): vma_addr:{} i:{} err:{:?}",
-                    user_page_addr,
-                    vma_addr,
-                    i,
-                    err
-                )
-            })?;
+        check_vma(
+            MmWithUser::into_mmput_async(self.mm.mmget_not_zero().ok_or(ESRCH)?)
+                .mmap_read_lock()
+                .vma_lookup(vma_addr)
+                .ok_or(ESRCH)?,
+            self,
+        )
+        .ok_or(ESRCH)?
+        .vm_insert_page(user_page_addr, &new_page)
+        .inspect_err(|err| {
+            pr_warn!(
+                "Failed to vm_insert_page({}): vma_addr:{} i:{} err:{:?}",
+                user_page_addr,
+                vma_addr,
+                i,
+                err
+            )
+        })?;
 
         let inner = self.lock.lock();
 
@@ -667,12 +705,15 @@ fn drop(self: Pin<&mut Self>) {
         let mmap_read;
         let mm_mutex;
         let vma_addr;
+        let range_ptr;
 
         {
            // CAST: The `list_head` field is first in `PageInfo`.
            let info = item as *mut PageInfo;
            // SAFETY: The `range` field of `PageInfo` is immutable.
-           let range = unsafe { &*((*info).range) };
+           range_ptr = unsafe { (*info).range };
+           // SAFETY: The `range` outlives its `PageInfo` values.
+           let range = unsafe { &*range_ptr };
 
            mm = match range.mm.mmget_not_zero() {
                Some(mm) => MmWithUser::into_mmput_async(mm),
@@ -717,9 +758,11 @@ fn drop(self: Pin<&mut Self>) {
         // SAFETY: The lru lock is locked when this method is called.
         unsafe { bindings::spin_unlock(&raw mut (*lru).lock) };
 
-        if let Some(vma) = mmap_read.vma_lookup(vma_addr) {
-            let user_page_addr = vma_addr + (page_index << PAGE_SHIFT);
-            vma.zap_page_range_single(user_page_addr, PAGE_SIZE);
+        if let Some(unchecked_vma) = mmap_read.vma_lookup(vma_addr) {
+            if let Some(vma) = check_vma(unchecked_vma, range_ptr) {
+                let user_page_addr = vma_addr + (page_index << PAGE_SHIFT);
+                vma.zap_page_range_single(user_page_addr, PAGE_SIZE);
+            }
         }
 
         drop(mmap_read);

--
2.53.0.310.g728cabbaf7-goog
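The fix boils down to an identity check: tag the vma at mmap time with
a known vm_ops pointer plus an owner pointer, and compare both on every
later lookup before touching the vma. A minimal standalone sketch of
that pattern, with hypothetical stand-in types (Ops for
vm_operations_struct, Owner for ShrinkablePageRange) rather than the
real kernel bindings:

use core::ptr;

struct Ops(u8);           // stand-in for vm_operations_struct
static OPS: Ops = Ops(0); // stand-in for BINDER_VM_OPS

struct Owner(u8); // stand-in for ShrinkablePageRange

struct Vma {
    ops: *const Ops,         // stand-in for vm_ops
    private_data: *const (), // stand-in for vm_private_data
}

// Mirrors check_vma(): accept the vma only if both tags match.
fn check_vma<'a>(vma: &'a Vma, owner: *const Owner) -> Option<&'a Vma> {
    if !ptr::eq(vma.ops, &OPS) {
        return None; // some other mapping ended up at this address
    }
    if !ptr::eq(vma.private_data, owner.cast()) {
        return None; // a binder mapping, but owned by someone else
    }
    Some(vma)
}

fn main() {
    let owner = Owner(0);
    let ours = Vma { ops: &OPS, private_data: ptr::from_ref(&owner).cast() };
    assert!(check_vma(&ours, &owner).is_some());

    let stranger = Vma { ops: ptr::null(), private_data: ptr::null() };
    assert!(check_vma(&stranger, &owner).is_none());
}

The comparison is pure pointer identity, which is why the commit
message cares that the ShrinkablePageRange outlives every associated
vma: that lifetime argument is what rules out false positives from a
reused allocation address.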
* Re: [PATCH v2 1/2] rust_binder: check ownership before using vma
2026-02-18 11:53 ` [PATCH v2 1/2] rust_binder: check ownership before using vma Alice Ryhl
@ 2026-02-18 13:47 ` Danilo Krummrich
2026-02-18 15:54 ` Liam R. Howlett
1 sibling, 0 replies; 7+ messages in thread
From: Danilo Krummrich @ 2026-02-18 13:47 UTC (permalink / raw)
To: Alice Ryhl
Cc: Greg Kroah-Hartman, Carlos Llamas, Jann Horn, Miguel Ojeda,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Trevor Gross, Lorenzo Stoakes,
Liam R. Howlett, linux-kernel, rust-for-linux, linux-mm, stable

On Wed Feb 18, 2026 at 12:53 PM CET, Alice Ryhl wrote:
> [...]
> Cc: stable@vger.kernel.org
> Fixes: eafedbc7c050 ("rust_binder: add Rust Binder driver")
> Reported-by: Jann Horn <jannh@google.com>
> Reviewed-by: Jann Horn <jannh@google.com>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>

FWIW, in terms of my drive-by feedback from v1,

Acked-by: Danilo Krummrich <dakr@kernel.org>

(I'd offer an RB, but I did not dig deep enough into binder to justify
it.)
* Re: [PATCH v2 1/2] rust_binder: check ownership before using vma
2026-02-18 11:53 ` [PATCH v2 1/2] rust_binder: check ownership before using vma Alice Ryhl
2026-02-18 13:47 ` Danilo Krummrich
@ 2026-02-18 15:54 ` Liam R. Howlett
2026-02-18 16:39 ` Alice Ryhl
1 sibling, 1 reply; 7+ messages in thread
From: Liam R. Howlett @ 2026-02-18 15:54 UTC (permalink / raw)
To: Alice Ryhl
Cc: Greg Kroah-Hartman, Carlos Llamas, Jann Horn, Miguel Ojeda,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Trevor Gross, Danilo Krummrich, Lorenzo Stoakes,
linux-kernel, rust-for-linux, linux-mm, stable

* Alice Ryhl <aliceryhl@google.com> [260218 06:53]:
> When installing missing pages (or zapping them), Rust Binder will look
> up the vma in the mm by address, and then call vm_insert_page (or
> zap_page_range_single). However, if the vma is closed and replaced with
> a different vma at the same address, this can lead to Rust Binder
> installing pages into the wrong vma.
>
> If a page is installed into a writable vma, it becomes possible to
> write to your own binder pages, which are normally read-only. Although
> you're not supposed to be able to write to those pages, the intent
> behind the design of Rust Binder is that even if you gain that ability,
> it should not lead to anything bad. Unfortunately, due to another bug,
> that is not the case.
>
> To fix this, store a pointer in vm_private_data and check that the vma
> returned by vma_lookup() has the right vm_ops and vm_private_data
> before trying to use the vma. This should ensure that Rust Binder will
> refuse to interact with any other VMA. The plan is to introduce more
> vma abstractions to avoid this unsafe access to vm_ops and
> vm_private_data, but for now let's start with the simplest possible
> fix.

You probably already know this, but there is a list of ways we can
ensure the vma is stable in Documentation/mm/process_addrs.rst; check
the "Lock usage" section.

I'd feel more comfortable using one of the described ways to maintain a
stable vma instead of rolling your own here - we may break your way by
accident, or it might cause issues with future changes.

When do you think we can move to one of the standard ways of ensuring
the vma is stable?

> C Binder performs the same check in a slightly different way: it
> provides a vm_ops->close that sets a boolean to true, then checks that
> boolean after calling vma_lookup(). That is more fragile than the
> solution in this patch. (We probably still want to do both, but the
> vm_ops->close callback will be added later as part of the follow-up
> vma API changes.)

If I understand this correctly, setting the boolean to true will close
the loophole of replacing the vma with an exact duplicate (including
private data and vm_ops) but with different write permissions. I assume
that is why we want both?

> It's still possible to remap the vma so that pages appear in the right
> vma but at the wrong offset; this is a separate issue and will be
> fixed when Rust Binder gets a vm_ops->close callback.
>
> Cc: stable@vger.kernel.org
> Fixes: eafedbc7c050 ("rust_binder: add Rust Binder driver")
> Reported-by: Jann Horn <jannh@google.com>
> Reviewed-by: Jann Horn <jannh@google.com>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>

Acked-by: Liam R. Howlett <Liam.Howlett@oracle.com>
* Re: [PATCH v2 1/2] rust_binder: check ownership before using vma
2026-02-18 15:54 ` Liam R. Howlett
@ 2026-02-18 16:39 ` Alice Ryhl
0 siblings, 0 replies; 7+ messages in thread
From: Alice Ryhl @ 2026-02-18 16:39 UTC (permalink / raw)
To: Liam R. Howlett, Greg Kroah-Hartman, Carlos Llamas, Jann Horn,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Danilo Krummrich,
Lorenzo Stoakes, linux-kernel, rust-for-linux, linux-mm, stable

On Wed, Feb 18, 2026 at 10:54:46AM -0500, Liam R. Howlett wrote:
> * Alice Ryhl <aliceryhl@google.com> [260218 06:53]:
> > [...]
>
> You probably already know this, but there is a list of ways we can
> ensure the vma is stable in Documentation/mm/process_addrs.rst; check
> the "Lock usage" section.
>
> I'd feel more comfortable using one of the described ways to maintain a
> stable vma instead of rolling your own here - we may break your way by
> accident, or it might cause issues with future changes.
>
> When do you think we can move to one of the standard ways of ensuring
> the vma is stable?

If you're referring to the fact that the vma can't change while you
hold a lock, then that doesn't apply here: this is about finding the
vma again from an ioctl or shrinker callback, not keeping it stable
for the duration of a single function call.

It would be nice to get rid of all this special mm logic in Binder,
though. For the vm_insert_page() call from ioctls, we can replace it
with a vm_fault callback (pending perf analysis). But I have no idea
how to get rid of the zap_page_range_single() in the shrinker.

To give a quick recap: the basic idea behind what Binder does is that
it maintains an array of nullable struct page pointers. Each page may
be in one of three states:

1. In use.
2. Not in use.
3. Completely missing. (Accessing it segfaults.)

Accessing a page in state 2 or 3 isn't legal. Pages may alternate
between 1 and 2 in very quick succession, so for perf reasons we do
not free or unmap pages when they stop being in use. That happens only
in the shrinker callback, which is when pages are moved from 2 to 3 by
unmapping and freeing the page. Binder explicitly calls
vm_insert_page() to move from 3 to 1 (from ioctl context), and
explicitly calls zap_page_range_single() to move from 2 to 3 (from
shrinker context).
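As a standalone illustration of that lifecycle (not the driver code:
the real array holds nullable struct page pointers, and the function
names here are made up):

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum PageState {
    InUse,   // 1: owned by a live allocation
    Cached,  // 2: free but still mapped, kept for cheap reuse
    Missing, // 3: unmapped and freed; userspace access faults
}

fn on_alloc(s: PageState) -> PageState {
    match s {
        PageState::Cached => PageState::InUse,  // fast path: no mm work
        PageState::Missing => PageState::InUse, // slow path: vm_insert_page()
        PageState::InUse => panic!("page already in use"),
    }
}

fn on_free(s: PageState) -> PageState {
    assert_eq!(s, PageState::InUse);
    PageState::Cached // never unmapped here, for performance
}

fn on_shrink(s: PageState) -> Option<PageState> {
    // Only the shrinker moves 2 -> 3, via zap_page_range_single().
    (s == PageState::Cached).then_some(PageState::Missing)
}

on_alloc's slow path and on_shrink correspond to the two explicit mm
calls above.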
This way, the vma reflects Binder's internal struct page array at all
times. Changing a Binder vma after creation is not really supported at
all.

Note that vm_insert_page() is called from the ioctl context of a
*different* process than the one the vma is mapped in. That's because
it's called from the sender process, while the vma is mapped into the
receiver's address space.

> > C Binder performs the same check in a slightly different way: it
> > provides a vm_ops->close that sets a boolean to true, then checks
> > that boolean after calling vma_lookup(). That is more fragile than
> > the solution in this patch. (We probably still want to do both, but
> > the vm_ops->close callback will be added later as part of the
> > follow-up vma API changes.)
>
> If I understand this correctly, setting the boolean to true will close
> the loophole of replacing the vma with an exact duplicate (including
> private data and vm_ops) but with different write permissions. I assume
> that is why we want both?

No, Binder clears VM_MAYWRITE in mmap, so you can never create a
writable version of a Binder vma.

> > It's still possible to remap the vma so that pages appear in the
> > right vma but at the wrong offset; this is a separate issue and
> > will be fixed when Rust Binder gets a vm_ops->close callback.

The main thing a close callback would give you is ensuring the Binder
fd becomes unusable once you close the vma.

Alice
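For reference, the boolean-on-close pattern discussed throughout the
thread reduces to a latched flag. A sketch in plain Rust (hypothetical
type and method names, not the planned kernel abstraction):

use std::sync::atomic::{AtomicBool, Ordering};

struct BinderMapping {
    vma_closed: AtomicBool,
}

impl BinderMapping {
    // What a vm_ops->close callback would do: latch the flag forever.
    fn on_vma_close(&self) {
        self.vma_closed.store(true, Ordering::Release);
    }

    // Every path that looks the vma up again checks the flag first, so
    // a replacement vma at the same address is never touched, and the
    // fd stays unusable after the original mapping goes away.
    fn vma_still_ours(&self) -> bool {
        !self.vma_closed.load(Ordering::Acquire)
    }
}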
* [PATCH v2 2/2] rust_binder: avoid reading the written value in offsets array
2026-02-18 11:53 [PATCH v2 0/2] Fix VMA confusion in Rust Binder Alice Ryhl
2026-02-18 11:53 ` [PATCH v2 1/2] rust_binder: check ownership before using vma Alice Ryhl
@ 2026-02-18 11:53 ` Alice Ryhl
2026-02-18 16:02 ` Liam R. Howlett
1 sibling, 1 reply; 7+ messages in thread
From: Alice Ryhl @ 2026-02-18 11:53 UTC (permalink / raw)
To: Greg Kroah-Hartman, Carlos Llamas, Jann Horn
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Danilo Krummrich,
Lorenzo Stoakes, Liam R. Howlett, linux-kernel, rust-for-linux,
linux-mm, Alice Ryhl, stable

When sending a transaction, its offsets array is first copied into the
target proc's vma, and then the values are read back from there. This
is normally fine because the vma is a read-only mapping, so the target
process cannot change the values under us.

However, if the target process somehow gains the ability to write to
its own vma, it could change an offset before it is read back, causing
the kernel to misinterpret what the sender meant. If the sender happens
to send a payload with a specific shape, this could in the worst case
allow the receiver to escalate privileges into the sender.

The intent is that gaining the ability to change the read-only vma of
your own process should not be exploitable, so remove this TOCTOU read
even though it's unexploitable without another Binder bug.

Cc: stable@vger.kernel.org
Fixes: eafedbc7c050 ("rust_binder: add Rust Binder driver")
Reported-by: Jann Horn <jannh@google.com>
Reviewed-by: Jann Horn <jannh@google.com>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 drivers/android/binder/thread.rs | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/drivers/android/binder/thread.rs b/drivers/android/binder/thread.rs
index 1f1709a6a77abc1c865cc9387e7ba7493448c71d..a81910f4cedf9bf485bf1cf954b95aee6c122cfd 100644
--- a/drivers/android/binder/thread.rs
+++ b/drivers/android/binder/thread.rs
@@ -1016,12 +1016,9 @@ pub(crate) fn copy_transaction_data(
 
         // Copy offsets if there are any.
         if offsets_size > 0 {
-            {
-                let mut reader =
-                    UserSlice::new(UserPtr::from_addr(trd_data_ptr.offsets as _), offsets_size)
-                        .reader();
-                alloc.copy_into(&mut reader, aligned_data_size, offsets_size)?;
-            }
+            let mut offsets_reader =
+                UserSlice::new(UserPtr::from_addr(trd_data_ptr.offsets as _), offsets_size)
+                    .reader();
 
             let offsets_start = aligned_data_size;
             let offsets_end = aligned_data_size + offsets_size;
@@ -1042,11 +1039,9 @@ pub(crate) fn copy_transaction_data(
                 .step_by(size_of::<u64>())
                 .enumerate()
             {
-                let offset: usize = view
-                    .alloc
-                    .read::<u64>(index_offset)?
-                    .try_into()
-                    .map_err(|_| EINVAL)?;
+                let offset = offsets_reader.read::<u64>()?;
+                view.alloc.write(index_offset, &offset)?;
+                let offset: usize = offset.try_into().map_err(|_| EINVAL)?;
 
                 if offset < end_of_previous_object || !is_aligned(offset, size_of::<u32>()) {
                     pr_warn!("Got transaction with invalid offset.");

--
2.53.0.310.g728cabbaf7-goog
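The shape of the fix is the classic single-read TOCTOU discipline: read
each untrusted value once, publish it, and validate only the local
copy. A simplified sketch with plain slices standing in for the
user-space reader and the receiver-visible allocation:

fn copy_offsets(src: &[u64], dst: &mut [u64]) -> Result<(), &'static str> {
    let mut end_of_previous_object = 0u64;
    for (i, slot) in dst.iter_mut().enumerate() {
        let offset = src[i]; // read ONCE from the sender
        *slot = offset;      // publish to the receiver-visible buffer
        // Validate the local copy; never read `*slot` back.
        if offset < end_of_previous_object || offset % 4 != 0 {
            // 4-byte alignment, as in is_aligned(offset, size_of::<u32>())
            return Err("invalid offset");
        }
        end_of_previous_object = offset; // the real code adds the object size
    }
    Ok(())
}

The old code wrote first and then read the value back out of the
destination, which a receiver with (illegitimate) write access to the
mapping could race against.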
* Re: [PATCH v2 2/2] rust_binder: avoid reading the written value in offsets array
2026-02-18 11:53 ` [PATCH v2 2/2] rust_binder: avoid reading the written value in offsets array Alice Ryhl
@ 2026-02-18 16:02 ` Liam R. Howlett
0 siblings, 0 replies; 7+ messages in thread
From: Liam R. Howlett @ 2026-02-18 16:02 UTC (permalink / raw)
To: Alice Ryhl
Cc: Greg Kroah-Hartman, Carlos Llamas, Jann Horn, Miguel Ojeda,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Trevor Gross, Danilo Krummrich, Lorenzo Stoakes,
linux-kernel, rust-for-linux, linux-mm, stable

* Alice Ryhl <aliceryhl@google.com> [260218 06:53]:
> [...]
> Cc: stable@vger.kernel.org
> Fixes: eafedbc7c050 ("rust_binder: add Rust Binder driver")
> Reported-by: Jann Horn <jannh@google.com>
> Reviewed-by: Jann Horn <jannh@google.com>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>

Acked-by: Liam R. Howlett <Liam.Howlett@oracle.com>
end of thread, other threads:[~2026-02-18 16:39 UTC | newest]

Thread overview: 7+ messages
2026-02-18 11:53 [PATCH v2 0/2] Fix VMA confusion in Rust Binder Alice Ryhl
2026-02-18 11:53 ` [PATCH v2 1/2] rust_binder: check ownership before using vma Alice Ryhl
2026-02-18 13:47 ` Danilo Krummrich
2026-02-18 15:54 ` Liam R. Howlett
2026-02-18 16:39 ` Alice Ryhl
2026-02-18 11:53 ` [PATCH v2 2/2] rust_binder: avoid reading the written value in offsets array Alice Ryhl
2026-02-18 16:02 ` Liam R. Howlett