* [RFC PATCH 0/7] support for mm-local memory allocations and use it
@ 2024-09-11 14:33 Fares Mehanna
2024-09-11 14:34 ` [RFC PATCH 1/7] mseal: expose interface to seal / unseal user memory ranges Fares Mehanna
` (9 more replies)
0 siblings, 10 replies; 24+ messages in thread
From: Fares Mehanna @ 2024-09-11 14:33 UTC (permalink / raw)
Cc: nh-open-source, Fares Mehanna, Marc Zyngier, Oliver Upton,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Andrew Morton, Kemeng Shi, Pierre-Clément Tosi,
Ard Biesheuvel, Mark Rutland, Javier Martinez Canillas,
Arnd Bergmann, Fuad Tabba, Mark Brown, Joey Gouly,
Kristina Martsenko, Randy Dunlap, Bjorn Helgaas,
Jean-Philippe Brucker, Mike Rapoport (IBM),
David Hildenbrand, Roman Kagan,
moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list, open list:MEMORY MANAGEMENT
In a series posted a few years ago [1], a proposal was put forward to allow the
kernel to allocate memory local to a mm and thus push it out of reach for
current and future speculation-based cross-process attacks. We still believe
this is a nice thing to have.
However, in the time since that post the Linux mm has grown quite a few new
facilities, so we'd like to explore implementing this functionality with less
effort and churn by leveraging what is now available.
An RFC was posted a few months back [2] to show the proof of concept and a simple
test driver.
In this RFC, we're using the same approach of implementing mm-local allocations
by piggy-backing on memfd_secret(): using regular user addresses but pinning the
pages and flipping the user/supervisor flag on the respective PTEs to make them
directly accessible from the kernel.
In addition to that, we are submitting 5 patches that use the secret memory to
hide the vCPU gp-regs and fp-regs on arm64 VHE systems.
The generic drawbacks of using user virtual addresses mentioned in the previous
RFC [2] still hold, in addition to a more specific one:
- While the user virtual addresses allocated for kernel secret memory are not
directly accessible by userspace, since the PTEs restrict that, copy_from_user()
and copy_to_user() can still operate on those ranges. Userspace can e.g. guess
the address and pass it as the target buffer for read(), making the kernel
overwrite it with user-controlled content. Effectively, the secret memory in the
current implementation lacks confidentiality and integrity guarantees.
In the specific case of vCPU registers this is fine, because the owner process
can read and write them via KVM ioctls anyway. In the general case, however,
this represents a security concern and needs to be addressed.
A possible way forward for the arch-agnostic implementation is to limit the user
virtual addresses used for the kernel to a specific range that can be checked
against in copy_from_user() and copy_to_user().
For an arch-specific implementation, using a separate PGD is the way to go.
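To illustrate the arch-agnostic idea only (this is not part of the series), such
a check could be hooked into the uaccess path roughly as in the sketch below.
The MM_LOCAL_UADDR_START/MM_LOCAL_UADDR_END bounds and the exact hook point are
hypothetical and only meant to show the shape of the check:

	/*
	 * Hypothetical sketch: reject copy_{from,to}_user() on the address
	 * range reserved for mm-local kernel allocations. The range bounds
	 * and the hook placement are made up for illustration.
	 */
	static inline bool uaddr_overlaps_mm_local(const void __user *uaddr, size_t size)
	{
		unsigned long start = (unsigned long)uaddr;

		return start < MM_LOCAL_UADDR_END && start + size > MM_LOCAL_UADDR_START;
	}

	/* in the copy_from_user()/copy_to_user() slow path, before copying: */
	if (uaddr_overlaps_mm_local(uaddr, n))
		return n;	/* report "nothing copied" */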
[1] https://lore.kernel.org/lkml/20190612170834.14855-1-mhillenb@amazon.de/
[2] https://lore.kernel.org/lkml/20240621201501.1059948-1-rkagan@amazon.de/
Fares Mehanna / Roman Kagan (2):
mseal: expose interface to seal / unseal user memory ranges
mm/secretmem: implement mm-local kernel allocations
Fares Mehanna (5):
arm64: KVM: Refactor C-code to access vCPU gp-registers through macros
KVM: Refactor Assembly-code to access vCPU gp-registers through a
macro
arm64: KVM: Allocate vCPU gp-regs dynamically on VHE and
KERNEL_SECRETMEM enabled systems
arm64: KVM: Refactor C-code to access vCPU fp-registers through macros
arm64: KVM: Allocate vCPU fp-regs dynamically on VHE and
KERNEL_SECRETMEM enabled systems
arch/arm64/include/asm/kvm_asm.h | 50 ++--
arch/arm64/include/asm/kvm_emulate.h | 2 +-
arch/arm64/include/asm/kvm_host.h | 41 +++-
arch/arm64/kernel/asm-offsets.c | 1 +
arch/arm64/kernel/image-vars.h | 2 +
arch/arm64/kvm/arm.c | 90 +++++++-
arch/arm64/kvm/fpsimd.c | 2 +-
arch/arm64/kvm/guest.c | 14 +-
arch/arm64/kvm/hyp/entry.S | 15 ++
arch/arm64/kvm/hyp/include/hyp/switch.h | 6 +-
arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 10 +-
.../arm64/kvm/hyp/include/nvhe/trap_handler.h | 2 +-
arch/arm64/kvm/hyp/nvhe/host.S | 20 +-
arch/arm64/kvm/hyp/nvhe/hyp-main.c | 4 +-
arch/arm64/kvm/reset.c | 2 +-
arch/arm64/kvm/va_layout.c | 38 ++++
include/linux/secretmem.h | 29 +++
mm/Kconfig | 10 +
mm/gup.c | 4 +-
mm/internal.h | 7 +
mm/mseal.c | 81 ++++---
mm/secretmem.c | 213 ++++++++++++++++++
22 files changed, 559 insertions(+), 84 deletions(-)
--
2.40.1
Amazon Web Services Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 257764 B
Sitz: Berlin
Ust-ID: DE 365 538 597
* [RFC PATCH 1/7] mseal: expose interface to seal / unseal user memory ranges
2024-09-11 14:33 [RFC PATCH 0/7] support for mm-local memory allocations and use it Fares Mehanna
@ 2024-09-11 14:34 ` Fares Mehanna
2024-09-12 16:40 ` Liam R. Howlett
2024-09-11 14:34 ` [RFC PATCH 2/7] mm/secretmem: implement mm-local kernel allocations Fares Mehanna
` (8 subsequent siblings)
9 siblings, 1 reply; 24+ messages in thread
From: Fares Mehanna @ 2024-09-11 14:34 UTC (permalink / raw)
Cc: nh-open-source, Fares Mehanna, Roman Kagan, Marc Zyngier,
Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
Catalin Marinas, Will Deacon, Andrew Morton, Kemeng Shi,
Pierre-Clément Tosi, Ard Biesheuvel, Mark Rutland,
Javier Martinez Canillas, Arnd Bergmann, Fuad Tabba, Mark Brown,
Joey Gouly, Kristina Martsenko, Randy Dunlap, Bjorn Helgaas,
Jean-Philippe Brucker, Mike Rapoport (IBM),
David Hildenbrand,
moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list, open list:MEMORY MANAGEMENT
To make sure the kernel mm-local mapping is untouched by userspace, we seal the
VMA before changing its protection for kernel use.
This guarantees that userspace can't unmap or alter this VMA while it is being
used by the kernel.
Once the kernel is done with the secret memory, it unseals the VMA so that it
can be unmapped and freed.
The unseal operation is not exposed to userspace.
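For context, the kernel-side caller introduced later in this series uses the
interface roughly as follows (a trimmed sketch of the secretmem allocation and
release paths; error handling omitted):

	if (mmap_write_lock_killable(mm))
		return NULL;
	/* ... do_mmap() the secretmem file at uvaddr ... */
	/* seal: userspace can no longer unmap/remap/mprotect this VMA */
	ret = do_mseal(uvaddr, uvaddr + bytes_length, true);
	mmap_write_unlock(mm);

	/* on release, while the mm is still alive: */
	mmap_write_lock(mm);
	ret = do_mseal(uvaddr, uvaddr + bytes_length, false);	/* unseal */
	ret = do_munmap(mm, uvaddr, bytes_length, NULL);	/* unmapping allowed again */
	mmap_write_unlock(mm);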
Signed-off-by: Fares Mehanna <faresx@amazon.de>
Signed-off-by: Roman Kagan <rkagan@amazon.de>
---
mm/internal.h | 7 +++++
mm/mseal.c | 81 ++++++++++++++++++++++++++++++++-------------------
2 files changed, 58 insertions(+), 30 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index b4d86436565b..cf7280d101e9 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1501,6 +1501,8 @@ bool can_modify_mm(struct mm_struct *mm, unsigned long start,
unsigned long end);
bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
unsigned long end, int behavior);
+/* mm's mmap write lock must be taken before seal/unseal operation */
+int do_mseal(unsigned long start, unsigned long end, bool seal);
#else
static inline int can_do_mseal(unsigned long flags)
{
@@ -1518,6 +1520,11 @@ static inline bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
{
return true;
}
+
+static inline int do_mseal(unsigned long start, unsigned long end, bool seal)
+{
+ return -EINVAL;
+}
#endif
#ifdef CONFIG_SHRINKER_DEBUG
diff --git a/mm/mseal.c b/mm/mseal.c
index 15bba28acc00..aac9399ffd5d 100644
--- a/mm/mseal.c
+++ b/mm/mseal.c
@@ -26,6 +26,11 @@ static inline void set_vma_sealed(struct vm_area_struct *vma)
vm_flags_set(vma, VM_SEALED);
}
+static inline void clear_vma_sealed(struct vm_area_struct *vma)
+{
+ vm_flags_clear(vma, VM_SEALED);
+}
+
/*
* check if a vma is sealed for modification.
* return true, if modification is allowed.
@@ -117,7 +122,7 @@ bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start, unsigned long
static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
struct vm_area_struct **prev, unsigned long start,
- unsigned long end, vm_flags_t newflags)
+ unsigned long end, vm_flags_t newflags, bool seal)
{
int ret = 0;
vm_flags_t oldflags = vma->vm_flags;
@@ -131,7 +136,10 @@ static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
goto out;
}
- set_vma_sealed(vma);
+ if (seal)
+ set_vma_sealed(vma);
+ else
+ clear_vma_sealed(vma);
out:
*prev = vma;
return ret;
@@ -167,9 +175,9 @@ static int check_mm_seal(unsigned long start, unsigned long end)
}
/*
- * Apply sealing.
+ * Apply sealing / unsealing.
*/
-static int apply_mm_seal(unsigned long start, unsigned long end)
+static int apply_mm_seal(unsigned long start, unsigned long end, bool seal)
{
unsigned long nstart;
struct vm_area_struct *vma, *prev;
@@ -191,11 +199,14 @@ static int apply_mm_seal(unsigned long start, unsigned long end)
unsigned long tmp;
vm_flags_t newflags;
- newflags = vma->vm_flags | VM_SEALED;
+ if (seal)
+ newflags = vma->vm_flags | VM_SEALED;
+ else
+ newflags = vma->vm_flags & ~(VM_SEALED);
tmp = vma->vm_end;
if (tmp > end)
tmp = end;
- error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
+ error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags, seal);
if (error)
return error;
nstart = vma_iter_end(&vmi);
@@ -204,6 +215,37 @@ static int apply_mm_seal(unsigned long start, unsigned long end)
return 0;
}
+int do_mseal(unsigned long start, unsigned long end, bool seal)
+{
+ int ret;
+
+ if (end < start)
+ return -EINVAL;
+
+ if (end == start)
+ return 0;
+
+ /*
+ * First pass, this helps to avoid
+ * partial sealing in case of error in input address range,
+ * e.g. ENOMEM error.
+ */
+ ret = check_mm_seal(start, end);
+ if (ret)
+ goto out;
+
+ /*
+ * Second pass, this should success, unless there are errors
+ * from vma_modify_flags, e.g. merge/split error, or process
+ * reaching the max supported VMAs, however, those cases shall
+ * be rare.
+ */
+ ret = apply_mm_seal(start, end, seal);
+
+out:
+ return ret;
+}
+
/*
* mseal(2) seals the VM's meta data from
* selected syscalls.
@@ -256,7 +298,7 @@ static int apply_mm_seal(unsigned long start, unsigned long end)
*
* unseal() is not supported.
*/
-static int do_mseal(unsigned long start, size_t len_in, unsigned long flags)
+static int __do_mseal(unsigned long start, size_t len_in, unsigned long flags)
{
size_t len;
int ret = 0;
@@ -277,33 +319,12 @@ static int do_mseal(unsigned long start, size_t len_in, unsigned long flags)
return -EINVAL;
end = start + len;
- if (end < start)
- return -EINVAL;
-
- if (end == start)
- return 0;
if (mmap_write_lock_killable(mm))
return -EINTR;
- /*
- * First pass, this helps to avoid
- * partial sealing in case of error in input address range,
- * e.g. ENOMEM error.
- */
- ret = check_mm_seal(start, end);
- if (ret)
- goto out;
-
- /*
- * Second pass, this should success, unless there are errors
- * from vma_modify_flags, e.g. merge/split error, or process
- * reaching the max supported VMAs, however, those cases shall
- * be rare.
- */
- ret = apply_mm_seal(start, end);
+ ret = do_mseal(start, end, true);
-out:
mmap_write_unlock(current->mm);
return ret;
}
@@ -311,5 +332,5 @@ static int do_mseal(unsigned long start, size_t len_in, unsigned long flags)
SYSCALL_DEFINE3(mseal, unsigned long, start, size_t, len, unsigned long,
flags)
{
- return do_mseal(start, len, flags);
+ return __do_mseal(start, len, flags);
}
--
2.40.1
* [RFC PATCH 2/7] mm/secretmem: implement mm-local kernel allocations
2024-09-11 14:33 [RFC PATCH 0/7] support for mm-local memory allocations and use it Fares Mehanna
2024-09-11 14:34 ` [RFC PATCH 1/7] mseal: expose interface to seal / unseal user memory ranges Fares Mehanna
@ 2024-09-11 14:34 ` Fares Mehanna
2024-09-11 14:34 ` [RFC PATCH 3/7] arm64: KVM: Refactor C-code to access vCPU gp-registers through macros Fares Mehanna
` (7 subsequent siblings)
9 siblings, 0 replies; 24+ messages in thread
From: Fares Mehanna @ 2024-09-11 14:34 UTC (permalink / raw)
Cc: nh-open-source, Fares Mehanna, Roman Kagan, Marc Zyngier,
Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
Catalin Marinas, Will Deacon, Andrew Morton, Kemeng Shi,
Pierre-Clément Tosi, Ard Biesheuvel, Mark Rutland,
Javier Martinez Canillas, Arnd Bergmann, Fuad Tabba, Mark Brown,
Joey Gouly, Kristina Martsenko, Randy Dunlap, Bjorn Helgaas,
Jean-Philippe Brucker, Mike Rapoport (IBM),
David Hildenbrand,
moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list, open list:MEMORY MANAGEMENT
In order to be resilient against cross-process speculation-based attacks, it
makes sense to store certain (secret) items in kernel memory local to the mm.
Implement such allocations on top of the secretmem infrastructure.
Specifically, on allocation:
1. Create a secretmem file.
2. To distinguish it from a conventional memfd_secret()-created one and to
maintain the associated mm-local allocation context, put the latter in
->private_data of the file.
3. Create a virtual mapping in the user virtual address space using mmap().
4. Seal the virtual mapping to disallow the user from affecting it in any way.
5. Fault the pages in, effectively calling the secretmem fault handler to remove
the pages from the kernel linear map and make them local to the process mm.
6. Change the PTEs from user mode to kernel mode; any access from userspace now
results in a segmentation fault, while the kernel can access these virtual
addresses.
7. Return the secure area as a struct containing the pointer to the actual
memory, which also provides the context for the release function later.
On release:
- if called while the mm is still in use, remove the mapping;
- otherwise, if performed at mm teardown, no unmapping is necessary.
The rest is taken care of by the secretmem file cleanup, including returning
the pages to the kernel direct map.
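A minimal sketch of a kernel consumer of the new interface (mirroring how the
KVM patches later in this series use it; the example_* names are illustrative):

	#include <linux/secretmem.h>

	static struct secretmem_area *example_area;

	static int example_alloc_secret(void)
	{
		/* order-0: one page, local to current->mm, removed from the direct map */
		example_area = secretmem_allocate_pages(0);
		if (!example_area)
			return -ENOMEM;

		/* example_area->ptr is only valid in the context of the owning mm */
		memset(example_area->ptr, 0, PAGE_SIZE);
		return 0;
	}

	static void example_free_secret(void)
	{
		secretmem_release_pages(example_area);
	}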
Signed-off-by: Fares Mehanna <faresx@amazon.de>
Signed-off-by: Roman Kagan <rkagan@amazon.de>
---
include/linux/secretmem.h | 29 ++++++
mm/Kconfig | 10 ++
mm/gup.c | 4 +-
mm/secretmem.c | 213 ++++++++++++++++++++++++++++++++++++++
4 files changed, 254 insertions(+), 2 deletions(-)
diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
index e918f96881f5..39cc73a0e4bd 100644
--- a/include/linux/secretmem.h
+++ b/include/linux/secretmem.h
@@ -2,6 +2,10 @@
#ifndef _LINUX_SECRETMEM_H
#define _LINUX_SECRETMEM_H
+struct secretmem_area {
+ void *ptr;
+};
+
#ifdef CONFIG_SECRETMEM
extern const struct address_space_operations secretmem_aops;
@@ -33,4 +37,29 @@ static inline bool secretmem_active(void)
#endif /* CONFIG_SECRETMEM */
+#ifdef CONFIG_KERNEL_SECRETMEM
+
+bool can_access_secretmem_vma(struct vm_area_struct *vma);
+struct secretmem_area *secretmem_allocate_pages(unsigned int order);
+void secretmem_release_pages(struct secretmem_area *data);
+
+#else
+
+static inline bool can_access_secretmem_vma(struct vm_area_struct *vma)
+{
+ return true;
+}
+
+static inline struct secretmem_area *secretmem_allocate_pages(unsigned int order)
+{
+ return NULL;
+}
+
+static inline void secretmem_release_pages(struct secretmem_area *data)
+{
+ WARN_ONCE(1, "Called secret memory release page without support\n");
+}
+
+#endif /* CONFIG_KERNEL_SECRETMEM */
+
#endif /* _LINUX_SECRETMEM_H */
diff --git a/mm/Kconfig b/mm/Kconfig
index b72e7d040f78..a327d8def179 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1168,6 +1168,16 @@ config SECRETMEM
memory areas visible only in the context of the owning process and
not mapped to other processes and other kernel page tables.
+config KERNEL_SECRETMEM
+ default y
+ bool "Enable kernel usage of memfd_secret()" if EXPERT
+ depends on SECRETMEM
+ depends on MMU
+ help
+ Enable the kernel usage of memfd_secret() for kernel memory allocations.
+ The allocated memory is visible only to the kernel in the context of
+ the owning process.
+
config ANON_VMA_NAME
bool "Anonymous VMA name support"
depends on PROC_FS && ADVISE_SYSCALLS && MMU
diff --git a/mm/gup.c b/mm/gup.c
index 54d0dc3831fb..6c2c6a0cbe2a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1076,7 +1076,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
struct follow_page_context ctx = { NULL };
struct page *page;
- if (vma_is_secretmem(vma))
+ if (!can_access_secretmem_vma(vma))
return NULL;
if (WARN_ON_ONCE(foll_flags & FOLL_PIN))
@@ -1281,7 +1281,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
return -EOPNOTSUPP;
- if (vma_is_secretmem(vma))
+ if (!can_access_secretmem_vma(vma))
return -EFAULT;
if (write) {
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 3afb5ad701e1..86afedc65889 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -13,13 +13,17 @@
#include <linux/bitops.h>
#include <linux/printk.h>
#include <linux/pagemap.h>
+#include <linux/hugetlb.h>
#include <linux/syscalls.h>
#include <linux/pseudo_fs.h>
#include <linux/secretmem.h>
#include <linux/set_memory.h>
#include <linux/sched/signal.h>
+#include <linux/sched/mm.h>
+#include <uapi/asm-generic/mman-common.h>
#include <uapi/linux/magic.h>
+#include <uapi/linux/mman.h>
#include <asm/tlbflush.h>
@@ -42,6 +46,16 @@ MODULE_PARM_DESC(secretmem_enable,
static atomic_t secretmem_users;
+/* secretmem file private context */
+struct secretmem_ctx {
+ struct secretmem_area _area;
+ struct page **_pages;
+ unsigned long _nr_pages;
+ struct file *_file;
+ struct mm_struct *_mm;
+};
+
+
bool secretmem_active(void)
{
return !!atomic_read(&secretmem_users);
@@ -116,6 +130,7 @@ static const struct vm_operations_struct secretmem_vm_ops = {
static int secretmem_release(struct inode *inode, struct file *file)
{
+ kfree(file->private_data);
atomic_dec(&secretmem_users);
return 0;
}
@@ -123,13 +138,23 @@ static int secretmem_release(struct inode *inode, struct file *file)
static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
{
unsigned long len = vma->vm_end - vma->vm_start;
+ struct secretmem_ctx *ctx = file->private_data;
+ unsigned long kernel_no_permissions;
+
+ kernel_no_permissions = (VM_READ | VM_WRITE | VM_EXEC | VM_MAYEXEC);
if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
return -EINVAL;
+ if (ctx && (vma->vm_flags & kernel_no_permissions))
+ return -EINVAL;
+
if (!mlock_future_ok(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
return -EAGAIN;
+ if (ctx)
+ vm_flags_set(vma, VM_MIXEDMAP);
+
vm_flags_set(vma, VM_LOCKED | VM_DONTDUMP);
vma->vm_ops = &secretmem_vm_ops;
@@ -230,6 +255,194 @@ static struct file *secretmem_file_create(unsigned long flags)
return file;
}
+#ifdef CONFIG_KERNEL_SECRETMEM
+
+struct secretmem_area *secretmem_allocate_pages(unsigned int order)
+{
+ unsigned long uvaddr, uvaddr_inc, unused, nr_pages, bytes_length;
+ struct file *kernel_secfile;
+ struct vm_area_struct *vma;
+ struct secretmem_ctx *ctx;
+ struct page **sec_pages;
+ struct mm_struct *mm;
+ long nr_pinned_pages;
+ pte_t pte, old_pte;
+ spinlock_t *ptl;
+ pte_t *upte;
+ int rc;
+
+ nr_pages = (1 << order);
+ bytes_length = nr_pages * PAGE_SIZE;
+ mm = current->mm;
+
+ if (!mm || !mmget_not_zero(mm))
+ return NULL;
+
+ /* Create secret memory file / truncate it */
+ kernel_secfile = secretmem_file_create(0);
+ if (IS_ERR(kernel_secfile))
+ goto put_mm;
+
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ if (!ctx)
+ goto close_secfile;
+ kernel_secfile->private_data = ctx;
+
+ rc = do_truncate(file_mnt_idmap(kernel_secfile),
+ file_dentry(kernel_secfile), bytes_length, 0, NULL);
+ if (rc)
+ goto close_secfile;
+
+ if (mmap_write_lock_killable(mm))
+ goto close_secfile;
+
+ /* Map pages to the secretmem file */
+ uvaddr = do_mmap(kernel_secfile, 0, bytes_length, PROT_NONE,
+ MAP_SHARED, 0, 0, &unused, NULL);
+ if (IS_ERR_VALUE(uvaddr))
+ goto unlock_mmap;
+
+ /* mseal() the VMA to make sure it won't change */
+ rc = do_mseal(uvaddr, uvaddr + bytes_length, true);
+ if (rc)
+ goto unmap_pages;
+
+ /* Make sure VMA is there, and is kernel-secure */
+ vma = find_vma(current->mm, uvaddr);
+ if (!vma)
+ goto unseal_vma;
+
+ if (!vma_is_secretmem(vma) ||
+ !can_access_secretmem_vma(vma))
+ goto unseal_vma;
+
+ /* Pin user pages; fault them in */
+ sec_pages = kzalloc(sizeof(struct page *) * nr_pages, GFP_KERNEL);
+ if (!sec_pages)
+ goto unseal_vma;
+
+ nr_pinned_pages = pin_user_pages(uvaddr, nr_pages, FOLL_FORCE | FOLL_LONGTERM, sec_pages);
+ if (nr_pinned_pages < 0)
+ goto free_sec_pages;
+ if (nr_pinned_pages != nr_pages)
+ goto unpin_pages;
+
+ /* Modify the existing mapping to be kernel accessible, local to this process mm */
+ uvaddr_inc = uvaddr;
+ while (uvaddr_inc < uvaddr + bytes_length) {
+ upte = get_locked_pte(mm, uvaddr_inc, &ptl);
+ if (!upte)
+ goto unpin_pages;
+ old_pte = ptep_modify_prot_start(vma, uvaddr_inc, upte);
+ pte = pte_modify(old_pte, PAGE_KERNEL);
+ ptep_modify_prot_commit(vma, uvaddr_inc, upte, old_pte, pte);
+ pte_unmap_unlock(upte, ptl);
+ uvaddr_inc += PAGE_SIZE;
+ }
+ flush_tlb_range(vma, uvaddr, uvaddr + bytes_length);
+
+ /* Return data */
+ mmgrab(mm);
+ ctx->_area.ptr = (void *) uvaddr;
+ ctx->_pages = sec_pages;
+ ctx->_nr_pages = nr_pages;
+ ctx->_mm = mm;
+ ctx->_file = kernel_secfile;
+
+ mmap_write_unlock(mm);
+ mmput(mm);
+
+ return &ctx->_area;
+
+unpin_pages:
+ unpin_user_pages(sec_pages, nr_pinned_pages);
+free_sec_pages:
+ kfree(sec_pages);
+unseal_vma:
+ rc = do_mseal(uvaddr, uvaddr + bytes_length, false);
+ if (rc)
+ BUG();
+unmap_pages:
+ rc = do_munmap(mm, uvaddr, bytes_length, NULL);
+ if (rc)
+ BUG();
+unlock_mmap:
+ mmap_write_unlock(mm);
+close_secfile:
+ fput(kernel_secfile);
+put_mm:
+ mmput(mm);
+ return NULL;
+}
+
+void secretmem_release_pages(struct secretmem_area *data)
+{
+ unsigned long uvaddr, bytes_length;
+ struct secretmem_ctx *ctx;
+ int rc;
+
+ if (!data || !data->ptr)
+ BUG();
+
+ ctx = container_of(data, struct secretmem_ctx, _area);
+ if (!ctx || !ctx->_file || !ctx->_pages || !ctx->_mm)
+ BUG();
+
+ bytes_length = ctx->_nr_pages * PAGE_SIZE;
+ uvaddr = (unsigned long) data->ptr;
+
+ /*
+ * Remove the mapping if mm is still in use.
+ * Not secure to continue if unmapping failed.
+ */
+ if (mmget_not_zero(ctx->_mm)) {
+ mmap_write_lock(ctx->_mm);
+ rc = do_mseal(uvaddr, uvaddr + bytes_length, false);
+ if (rc) {
+ mmap_write_unlock(ctx->_mm);
+ BUG();
+ }
+ rc = do_munmap(ctx->_mm, uvaddr, bytes_length, NULL);
+ if (rc) {
+ mmap_write_unlock(ctx->_mm);
+ BUG();
+ }
+ mmap_write_unlock(ctx->_mm);
+ mmput(ctx->_mm);
+ }
+
+ mmdrop(ctx->_mm);
+ unpin_user_pages(ctx->_pages, ctx->_nr_pages);
+ fput(ctx->_file);
+ kfree(ctx->_pages);
+
+ ctx->_nr_pages = 0;
+ ctx->_pages = NULL;
+ ctx->_file = NULL;
+ ctx->_mm = NULL;
+ ctx->_area.ptr = NULL;
+}
+
+bool can_access_secretmem_vma(struct vm_area_struct *vma)
+{
+ struct secretmem_ctx *ctx;
+
+ if (!vma_is_secretmem(vma))
+ return true;
+
+ /*
+ * If VMA is owned by running process, and marked for kernel
+ * usage, then allow access.
+ */
+ ctx = vma->vm_file->private_data;
+ if (ctx && current->mm == vma->vm_mm)
+ return true;
+
+ return false;
+}
+
+#endif /* CONFIG_KERNEL_SECRETMEM */
+
SYSCALL_DEFINE1(memfd_secret, unsigned int, flags)
{
struct file *file;
--
2.40.1
* [RFC PATCH 3/7] arm64: KVM: Refactor C-code to access vCPU gp-registers through macros
2024-09-11 14:33 [RFC PATCH 0/7] support for mm-local memory allocations and use it Fares Mehanna
2024-09-11 14:34 ` [RFC PATCH 1/7] mseal: expose interface to seal / unseal user memory ranges Fares Mehanna
2024-09-11 14:34 ` [RFC PATCH 2/7] mm/secretmem: implement mm-local kernel allocations Fares Mehanna
@ 2024-09-11 14:34 ` Fares Mehanna
2024-09-11 14:34 ` [RFC PATCH 4/7] KVM: Refactor Assembly-code to access vCPU gp-registers through a macro Fares Mehanna
` (6 subsequent siblings)
9 siblings, 0 replies; 24+ messages in thread
From: Fares Mehanna @ 2024-09-11 14:34 UTC (permalink / raw)
Cc: nh-open-source, Fares Mehanna, Marc Zyngier, Oliver Upton,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Andrew Morton, Kemeng Shi, Pierre-Clément Tosi,
Ard Biesheuvel, Mark Rutland, Javier Martinez Canillas,
Arnd Bergmann, Fuad Tabba, Mark Brown, Joey Gouly,
Kristina Martsenko, Randy Dunlap, Bjorn Helgaas,
Jean-Philippe Brucker, Mike Rapoport (IBM),
David Hildenbrand, Roman Kagan,
moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list, open list:MEMORY MANAGEMENT
Unify how KVM accesses vCPU gp-regs by using the two macros vcpu_gp_regs() and
ctxt_gp_regs(). This is a prerequisite for later making the gp-regs dynamically
allocated for vCPUs.
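For example (taken from the sysreg save/restore hunk below), an access changes
from dereferencing the embedded regs field to going through the macro:

	/* before */
	ctxt->regs.pc = read_sysreg_el2(SYS_ELR);
	/* after */
	ctxt_gp_regs(ctxt)->pc = read_sysreg_el2(SYS_ELR);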
Signed-off-by: Fares Mehanna <faresx@amazon.de>
---
arch/arm64/include/asm/kvm_emulate.h | 2 +-
arch/arm64/include/asm/kvm_host.h | 3 ++-
arch/arm64/kvm/guest.c | 8 ++++----
arch/arm64/kvm/hyp/include/hyp/switch.h | 2 +-
arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 10 +++++-----
arch/arm64/kvm/hyp/include/nvhe/trap_handler.h | 2 +-
6 files changed, 14 insertions(+), 13 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index a601a9305b10..cabfb76ca514 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -170,7 +170,7 @@ static __always_inline void vcpu_set_reg(struct kvm_vcpu *vcpu, u8 reg_num,
static inline bool vcpu_is_el2_ctxt(const struct kvm_cpu_context *ctxt)
{
- switch (ctxt->regs.pstate & (PSR_MODE32_BIT | PSR_MODE_MASK)) {
+ switch (ctxt_gp_regs(ctxt)->pstate & (PSR_MODE32_BIT | PSR_MODE_MASK)) {
case PSR_MODE_EL2h:
case PSR_MODE_EL2t:
return true;
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a33f5996ca9f..31cbd62a5d06 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -946,7 +946,8 @@ struct kvm_vcpu_arch {
#define vcpu_clear_on_unsupported_cpu(vcpu) \
vcpu_clear_flag(vcpu, ON_UNSUPPORTED_CPU)
-#define vcpu_gp_regs(v) (&(v)->arch.ctxt.regs)
+#define ctxt_gp_regs(ctxt) (&(ctxt)->regs)
+#define vcpu_gp_regs(v) (ctxt_gp_regs(&(v)->arch.ctxt))
/*
* Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 11098eb7eb44..821a2b7de388 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -134,16 +134,16 @@ static void *core_reg_addr(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
KVM_REG_ARM_CORE_REG(regs.regs[30]):
off -= KVM_REG_ARM_CORE_REG(regs.regs[0]);
off /= 2;
- return &vcpu->arch.ctxt.regs.regs[off];
+ return &vcpu_gp_regs(vcpu)->regs[off];
case KVM_REG_ARM_CORE_REG(regs.sp):
- return &vcpu->arch.ctxt.regs.sp;
+ return &vcpu_gp_regs(vcpu)->sp;
case KVM_REG_ARM_CORE_REG(regs.pc):
- return &vcpu->arch.ctxt.regs.pc;
+ return &vcpu_gp_regs(vcpu)->pc;
case KVM_REG_ARM_CORE_REG(regs.pstate):
- return &vcpu->arch.ctxt.regs.pstate;
+ return &vcpu_gp_regs(vcpu)->pstate;
case KVM_REG_ARM_CORE_REG(sp_el1):
return __ctxt_sys_reg(&vcpu->arch.ctxt, SP_EL1);
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 37ff87d782b6..d2ed0938fc90 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -649,7 +649,7 @@ static inline void synchronize_vcpu_pstate(struct kvm_vcpu *vcpu, u64 *exit_code
ESR_ELx_EC(read_sysreg_el2(SYS_ESR)) == ESR_ELx_EC_PAC)
write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR);
- vcpu->arch.ctxt.regs.pstate = read_sysreg_el2(SYS_SPSR);
+ vcpu_gp_regs(vcpu)->pstate = read_sysreg_el2(SYS_SPSR);
}
/*
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index 4c0fdabaf8ae..d17033766010 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -105,13 +105,13 @@ static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
static inline void __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt)
{
- ctxt->regs.pc = read_sysreg_el2(SYS_ELR);
+ ctxt_gp_regs(ctxt)->pc = read_sysreg_el2(SYS_ELR);
/*
* Guest PSTATE gets saved at guest fixup time in all
* cases. We still need to handle the nVHE host side here.
*/
if (!has_vhe() && ctxt->__hyp_running_vcpu)
- ctxt->regs.pstate = read_sysreg_el2(SYS_SPSR);
+ ctxt_gp_regs(ctxt)->pstate = read_sysreg_el2(SYS_SPSR);
if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
ctxt_sys_reg(ctxt, DISR_EL1) = read_sysreg_s(SYS_VDISR_EL2);
@@ -202,7 +202,7 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
/* Read the VCPU state's PSTATE, but translate (v)EL2 to EL1. */
static inline u64 to_hw_pstate(const struct kvm_cpu_context *ctxt)
{
- u64 mode = ctxt->regs.pstate & (PSR_MODE_MASK | PSR_MODE32_BIT);
+ u64 mode = ctxt_gp_regs(ctxt)->pstate & (PSR_MODE_MASK | PSR_MODE32_BIT);
switch (mode) {
case PSR_MODE_EL2t:
@@ -213,7 +213,7 @@ static inline u64 to_hw_pstate(const struct kvm_cpu_context *ctxt)
break;
}
- return (ctxt->regs.pstate & ~(PSR_MODE_MASK | PSR_MODE32_BIT)) | mode;
+ return (ctxt_gp_regs(ctxt)->pstate & ~(PSR_MODE_MASK | PSR_MODE32_BIT)) | mode;
}
static inline void __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt)
@@ -235,7 +235,7 @@ static inline void __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctx
if (!(mode & PSR_MODE32_BIT) && mode >= PSR_MODE_EL2t)
pstate = PSR_MODE_EL2h | PSR_IL_BIT;
- write_sysreg_el2(ctxt->regs.pc, SYS_ELR);
+ write_sysreg_el2(ctxt_gp_regs(ctxt)->pc, SYS_ELR);
write_sysreg_el2(pstate, SYS_SPSR);
if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
diff --git a/arch/arm64/kvm/hyp/include/nvhe/trap_handler.h b/arch/arm64/kvm/hyp/include/nvhe/trap_handler.h
index 45a84f0ade04..dfe5be0d70ef 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/trap_handler.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/trap_handler.h
@@ -11,7 +11,7 @@
#include <asm/kvm_host.h>
-#define cpu_reg(ctxt, r) (ctxt)->regs.regs[r]
+#define cpu_reg(ctxt, r) (ctxt_gp_regs((ctxt))->regs[r])
#define DECLARE_REG(type, name, ctxt, reg) \
type name = (type)cpu_reg(ctxt, (reg))
--
2.40.1
* [RFC PATCH 4/7] KVM: Refactor Assembly-code to access vCPU gp-registers through a macro
2024-09-11 14:33 [RFC PATCH 0/7] support for mm-local memory allocations and use it Fares Mehanna
` (2 preceding siblings ...)
2024-09-11 14:34 ` [RFC PATCH 3/7] arm64: KVM: Refactor C-code to access vCPU gp-registers through macros Fares Mehanna
@ 2024-09-11 14:34 ` Fares Mehanna
2024-09-11 14:34 ` [RFC PATCH 5/7] arm64: KVM: Allocate vCPU gp-regs dynamically on VHE and KERNEL_SECRETMEM enabled systems Fares Mehanna
` (5 subsequent siblings)
9 siblings, 0 replies; 24+ messages in thread
From: Fares Mehanna @ 2024-09-11 14:34 UTC (permalink / raw)
Cc: nh-open-source, Fares Mehanna, Marc Zyngier, Oliver Upton,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Andrew Morton, Kemeng Shi, Pierre-Clément Tosi,
Ard Biesheuvel, Mark Rutland, Javier Martinez Canillas,
Arnd Bergmann, Fuad Tabba, Mark Brown, Joey Gouly,
Kristina Martsenko, Randy Dunlap, Bjorn Helgaas,
Jean-Philippe Brucker, Mike Rapoport (IBM),
David Hildenbrand, Roman Kagan,
moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list, open list:MEMORY MANAGEMENT
Right now, assembly code accesses vCPU gp-regs directly from the context struct
"struct kvm_cpu_context" using "CPU_XREG_OFFSET()".
Since we want to move the gp-regs to dynamic memory, we can no longer assume
that they will be embedded in the context struct, so the access is split into
two steps.
The first is to get the gp-regs pointer from the context using the assembly
macro "get_ctxt_gp_regs".
The second is to access the gp-registers directly within "struct user_pt_regs"
by removing the offset "CPU_USER_PT_REGS" from the access macro
"CPU_XREG_OFFSET()".
Variable naming and comments are also updated where appropriate.
Signed-off-by: Fares Mehanna <faresx@amazon.de>
---
arch/arm64/include/asm/kvm_asm.h | 48 +++++++++++++++++---------------
arch/arm64/kvm/hyp/entry.S | 15 ++++++++++
arch/arm64/kvm/hyp/nvhe/host.S | 20 ++++++++++---
3 files changed, 57 insertions(+), 26 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 2181a11b9d92..fa4fb642a5f5 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -313,6 +313,10 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr_virt,
str \vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
.endm
+.macro get_ctxt_gp_regs ctxt, regs
+ add \regs, \ctxt, #CPU_USER_PT_REGS
+.endm
+
/*
* KVM extable for unexpected exceptions.
* Create a struct kvm_exception_table_entry output to a section that can be
@@ -329,7 +333,7 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr_virt,
.popsection
.endm
-#define CPU_XREG_OFFSET(x) (CPU_USER_PT_REGS + 8*x)
+#define CPU_XREG_OFFSET(x) (8 * (x))
#define CPU_LR_OFFSET CPU_XREG_OFFSET(30)
#define CPU_SP_EL0_OFFSET (CPU_LR_OFFSET + 8)
@@ -337,34 +341,34 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr_virt,
* We treat x18 as callee-saved as the host may use it as a platform
* register (e.g. for shadow call stack).
*/
-.macro save_callee_saved_regs ctxt
- str x18, [\ctxt, #CPU_XREG_OFFSET(18)]
- stp x19, x20, [\ctxt, #CPU_XREG_OFFSET(19)]
- stp x21, x22, [\ctxt, #CPU_XREG_OFFSET(21)]
- stp x23, x24, [\ctxt, #CPU_XREG_OFFSET(23)]
- stp x25, x26, [\ctxt, #CPU_XREG_OFFSET(25)]
- stp x27, x28, [\ctxt, #CPU_XREG_OFFSET(27)]
- stp x29, lr, [\ctxt, #CPU_XREG_OFFSET(29)]
+.macro save_callee_saved_regs regs
+ str x18, [\regs, #CPU_XREG_OFFSET(18)]
+ stp x19, x20, [\regs, #CPU_XREG_OFFSET(19)]
+ stp x21, x22, [\regs, #CPU_XREG_OFFSET(21)]
+ stp x23, x24, [\regs, #CPU_XREG_OFFSET(23)]
+ stp x25, x26, [\regs, #CPU_XREG_OFFSET(25)]
+ stp x27, x28, [\regs, #CPU_XREG_OFFSET(27)]
+ stp x29, lr, [\regs, #CPU_XREG_OFFSET(29)]
.endm
-.macro restore_callee_saved_regs ctxt
- // We require \ctxt is not x18-x28
- ldr x18, [\ctxt, #CPU_XREG_OFFSET(18)]
- ldp x19, x20, [\ctxt, #CPU_XREG_OFFSET(19)]
- ldp x21, x22, [\ctxt, #CPU_XREG_OFFSET(21)]
- ldp x23, x24, [\ctxt, #CPU_XREG_OFFSET(23)]
- ldp x25, x26, [\ctxt, #CPU_XREG_OFFSET(25)]
- ldp x27, x28, [\ctxt, #CPU_XREG_OFFSET(27)]
- ldp x29, lr, [\ctxt, #CPU_XREG_OFFSET(29)]
+.macro restore_callee_saved_regs regs
+ // We require \regs is not x18-x28
+ ldr x18, [\regs, #CPU_XREG_OFFSET(18)]
+ ldp x19, x20, [\regs, #CPU_XREG_OFFSET(19)]
+ ldp x21, x22, [\regs, #CPU_XREG_OFFSET(21)]
+ ldp x23, x24, [\regs, #CPU_XREG_OFFSET(23)]
+ ldp x25, x26, [\regs, #CPU_XREG_OFFSET(25)]
+ ldp x27, x28, [\regs, #CPU_XREG_OFFSET(27)]
+ ldp x29, lr, [\regs, #CPU_XREG_OFFSET(29)]
.endm
-.macro save_sp_el0 ctxt, tmp
+.macro save_sp_el0 regs, tmp
mrs \tmp, sp_el0
- str \tmp, [\ctxt, #CPU_SP_EL0_OFFSET]
+ str \tmp, [\regs, #CPU_SP_EL0_OFFSET]
.endm
-.macro restore_sp_el0 ctxt, tmp
- ldr \tmp, [\ctxt, #CPU_SP_EL0_OFFSET]
+.macro restore_sp_el0 regs, tmp
+ ldr \tmp, [\regs, #CPU_SP_EL0_OFFSET]
msr sp_el0, \tmp
.endm
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 4433a234aa9b..628a123bcdc1 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -28,6 +28,9 @@ SYM_FUNC_START(__guest_enter)
adr_this_cpu x1, kvm_hyp_ctxt, x2
+ // Get gp-regs pointer from the context
+ get_ctxt_gp_regs x1, x1
+
// Store the hyp regs
save_callee_saved_regs x1
@@ -62,6 +65,9 @@ alternative_else_nop_endif
// when this feature is enabled for kernel code.
ptrauth_switch_to_guest x29, x0, x1, x2
+ // Get gp-regs pointer from the context
+ get_ctxt_gp_regs x29, x29
+
// Restore the guest's sp_el0
restore_sp_el0 x29, x0
@@ -108,6 +114,7 @@ SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
// current state is saved to the guest context but it will only be
// accurate if the guest had been completely restored.
adr_this_cpu x0, kvm_hyp_ctxt, x1
+ get_ctxt_gp_regs x0, x0
adr_l x1, hyp_panic
str x1, [x0, #CPU_XREG_OFFSET(30)]
@@ -120,6 +127,7 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
// vcpu x0-x1 on the stack
add x1, x1, #VCPU_CONTEXT
+ get_ctxt_gp_regs x1, x1
ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
@@ -145,6 +153,10 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
// Store the guest's sp_el0
save_sp_el0 x1, x2
+ // Recover vCPU context to x1
+ get_vcpu_ptr x1, x2
+ add x1, x1, #VCPU_CONTEXT
+
adr_this_cpu x2, kvm_hyp_ctxt, x3
// Macro ptrauth_switch_to_hyp format:
@@ -157,6 +169,9 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
// mte_switch_to_hyp(g_ctxt, h_ctxt, reg1)
mte_switch_to_hyp x1, x2, x3
+ // Get gp-regs pointer from the context
+ get_ctxt_gp_regs x2, x2
+
// Restore hyp's sp_el0
restore_sp_el0 x2, x3
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 3d610fc51f4d..31afa7396294 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -17,6 +17,12 @@
SYM_FUNC_START(__host_exit)
get_host_ctxt x0, x1
+ /* Keep host context in x1 */
+ mov x1, x0
+
+ /* Get gp-regs pointer from the context */
+ get_ctxt_gp_regs x0, x0
+
/* Store the host regs x2 and x3 */
stp x2, x3, [x0, #CPU_XREG_OFFSET(2)]
@@ -36,7 +42,10 @@ SYM_FUNC_START(__host_exit)
/* Store the host regs x18-x29, lr */
save_callee_saved_regs x0
- /* Save the host context pointer in x29 across the function call */
+ /* Save the host context pointer in x28 across the function call */
+ mov x28, x1
+
+ /* Save the host gp-regs pointer in x29 across the function call */
mov x29, x0
#ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
@@ -46,7 +55,7 @@ alternative_else_nop_endif
alternative_if ARM64_KVM_PROTECTED_MODE
/* Save kernel ptrauth keys. */
- add x18, x29, #CPU_APIAKEYLO_EL1
+ add x18, x28, #CPU_APIAKEYLO_EL1
ptrauth_save_state x18, x19, x20
/* Use hyp keys. */
@@ -58,6 +67,7 @@ alternative_else_nop_endif
__skip_pauth_save:
#endif /* CONFIG_ARM64_PTR_AUTH_KERNEL */
+ mov x0, x28
bl handle_trap
__host_enter_restore_full:
@@ -68,7 +78,7 @@ b __skip_pauth_restore
alternative_else_nop_endif
alternative_if ARM64_KVM_PROTECTED_MODE
- add x18, x29, #CPU_APIAKEYLO_EL1
+ add x18, x28, #CPU_APIAKEYLO_EL1
ptrauth_restore_state x18, x19, x20
alternative_else_nop_endif
__skip_pauth_restore:
@@ -101,7 +111,8 @@ SYM_FUNC_END(__host_exit)
* void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
*/
SYM_FUNC_START(__host_enter)
- mov x29, x0
+ mov x28, x0
+ get_ctxt_gp_regs x0, x29
b __host_enter_restore_full
SYM_FUNC_END(__host_enter)
@@ -141,6 +152,7 @@ SYM_FUNC_START(__hyp_do_panic)
/* Enter the host, conditionally restoring the host context. */
cbz x29, __host_enter_without_restoring
+ get_ctxt_gp_regs x29, x29
b __host_enter_for_panic
SYM_FUNC_END(__hyp_do_panic)
--
2.40.1
* [RFC PATCH 5/7] arm64: KVM: Allocate vCPU gp-regs dynamically on VHE and KERNEL_SECRETMEM enabled systems
2024-09-11 14:33 [RFC PATCH 0/7] support for mm-local memory allocations and use it Fares Mehanna
` (3 preceding siblings ...)
2024-09-11 14:34 ` [RFC PATCH 4/7] KVM: Refactor Assembly-code to access vCPU gp-registers through a macro Fares Mehanna
@ 2024-09-11 14:34 ` Fares Mehanna
2024-09-11 14:34 ` [RFC PATCH 6/7] arm64: KVM: Refactor C-code to access vCPU fp-registers through macros Fares Mehanna
` (4 subsequent siblings)
9 siblings, 0 replies; 24+ messages in thread
From: Fares Mehanna @ 2024-09-11 14:34 UTC (permalink / raw)
Cc: nh-open-source, Fares Mehanna, Marc Zyngier, Oliver Upton,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Andrew Morton, Kemeng Shi, Pierre-Clément Tosi,
Ard Biesheuvel, Mark Rutland, Javier Martinez Canillas,
Arnd Bergmann, Fuad Tabba, Mark Brown, Joey Gouly,
Kristina Martsenko, Randy Dunlap, Bjorn Helgaas,
Jean-Philippe Brucker, Mike Rapoport (IBM),
David Hildenbrand, Roman Kagan,
moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list, open list:MEMORY MANAGEMENT
To allocate the vCPU gp-regs from secret memory, we first need to make the vCPU
gp-regs dynamically allocated.
This is tricky with nVHE (non-Virtualization Host Extensions) since it would
require adjusting the virtual address on every access. With a large codebase
shared between the OS and the hypervisor, it would be cumbersome to duplicate
the code with one version using `kern_hyp_va()`.
To avoid this issue, and since the secret memory feature will not be enabled on
nVHE systems, we're introducing the following changes:
1. Maintain a `struct user_pt_regs regs_storage` in the vCPU context struct as a
fallback storage for the vCPU gp-regs.
2. Introduce a pointer `struct user_pt_regs *regs` in the vCPU context struct to
hold the dynamically allocated vCPU gp-regs.
If we are on an nVHE system, or a VHE (Virtualization Host Extensions) system
without `KERNEL_SECRETMEM` support, we use `regs_storage`. Accessing the gp-regs
in this case does not require a de-reference operation.
If we are on a VHE system with `KERNEL_SECRETMEM` support, we use the `regs`
pointer. In this case, one extra de-reference is added every time the vCPU
gp-regs are accessed.
Accessing the gp-regs embedded in the vCPU context without de-reference is done
as:
add \regs, \ctxt, #CPU_USER_PT_REGS_STRG
Accessing the dynamically allocated gp-regs with de-reference is done as:
ldr \regs, [\ctxt, #CPU_USER_PT_REGS]
By default, we are using the first version. If we are booting on a system that
supports VHE and `KERNEL_SECRETMEM`, we switch to the second version.
We also allocate the required gp-regs storage for the vCPU, kvm_hyp_ctxt and
kvm_host_data structs where needed.
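For the vCPU this boils down to a single order-0 secretmem allocation, since
struct user_pt_regs is far smaller than a page (the arithmetic below assumes the
usual case of PAGE_SIZE >= sizeof(struct user_pt_regs)):

	/* trimmed from kvm_arch_vcpu_create() below; NULL check omitted */
	pages_needed = (sizeof(*vcpu_gp_regs(vcpu)) + PAGE_SIZE - 1) / PAGE_SIZE;
	/* struct user_pt_regs is 34 * 8 bytes, so pages_needed == 1 */
	vcpu->arch.ctxt.regs_area = secretmem_allocate_pages(fls(pages_needed - 1));
	/* fls(0) == 0, i.e. an order-0 (single page) allocation */
	vcpu->arch.ctxt.regs = vcpu->arch.ctxt.regs_area->ptr;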
Signed-off-by: Fares Mehanna <faresx@amazon.de>
---
arch/arm64/include/asm/kvm_asm.h | 4 +-
arch/arm64/include/asm/kvm_host.h | 24 +++++++++++-
arch/arm64/kernel/asm-offsets.c | 1 +
arch/arm64/kernel/image-vars.h | 1 +
arch/arm64/kvm/arm.c | 63 ++++++++++++++++++++++++++++++-
arch/arm64/kvm/va_layout.c | 23 +++++++++++
6 files changed, 112 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index fa4fb642a5f5..1d6de0806dbd 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -314,7 +314,9 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr_virt,
.endm
.macro get_ctxt_gp_regs ctxt, regs
- add \regs, \ctxt, #CPU_USER_PT_REGS
+alternative_cb ARM64_HAS_VIRT_HOST_EXTN, kvm_update_ctxt_gp_regs
+ add \regs, \ctxt, #CPU_USER_PT_REGS_STRG
+alternative_cb_end
.endm
/*
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 31cbd62a5d06..23a10178d1b0 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -541,7 +541,9 @@ struct kvm_sysreg_masks {
};
struct kvm_cpu_context {
- struct user_pt_regs regs; /* sp = sp_el0 */
+ struct user_pt_regs *regs; /* sp = sp_el0 */
+ struct user_pt_regs regs_storage;
+ struct secretmem_area *regs_area;
u64 spsr_abt;
u64 spsr_und;
@@ -946,7 +948,25 @@ struct kvm_vcpu_arch {
#define vcpu_clear_on_unsupported_cpu(vcpu) \
vcpu_clear_flag(vcpu, ON_UNSUPPORTED_CPU)
-#define ctxt_gp_regs(ctxt) (&(ctxt)->regs)
+/* Static allocation is used if NVHE-host or if KERNEL_SECRETMEM is not enabled */
+static __inline bool kvm_use_dynamic_regs(void)
+{
+#ifndef CONFIG_KERNEL_SECRETMEM
+ return false;
+#endif
+ return cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN);
+}
+
+static __always_inline struct user_pt_regs *ctxt_gp_regs(const struct kvm_cpu_context *ctxt)
+{
+ struct user_pt_regs *regs = (void *) ctxt;
+ asm volatile(ALTERNATIVE_CB("add %0, %0, %1\n",
+ ARM64_HAS_VIRT_HOST_EXTN,
+ kvm_update_ctxt_gp_regs)
+ : "+r" (regs)
+ : "I" (offsetof(struct kvm_cpu_context, regs_storage)));
+ return regs;
+}
#define vcpu_gp_regs(v) (ctxt_gp_regs(&(v)->arch.ctxt))
/*
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 27de1dddb0ab..275d480f5e65 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -128,6 +128,7 @@ int main(void)
DEFINE(VCPU_FAULT_DISR, offsetof(struct kvm_vcpu, arch.fault.disr_el1));
DEFINE(VCPU_HCR_EL2, offsetof(struct kvm_vcpu, arch.hcr_el2));
DEFINE(CPU_USER_PT_REGS, offsetof(struct kvm_cpu_context, regs));
+ DEFINE(CPU_USER_PT_REGS_STRG, offsetof(struct kvm_cpu_context, regs_storage));
DEFINE(CPU_ELR_EL2, offsetof(struct kvm_cpu_context, sys_regs[ELR_EL2]));
DEFINE(CPU_RGSR_EL1, offsetof(struct kvm_cpu_context, sys_regs[RGSR_EL1]));
DEFINE(CPU_GCR_EL1, offsetof(struct kvm_cpu_context, sys_regs[GCR_EL1]));
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 8f5422ed1b75..e3bb626e299c 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -86,6 +86,7 @@ KVM_NVHE_ALIAS(kvm_patch_vector_branch);
KVM_NVHE_ALIAS(kvm_update_va_mask);
KVM_NVHE_ALIAS(kvm_get_kimage_voffset);
KVM_NVHE_ALIAS(kvm_compute_final_ctr_el0);
+KVM_NVHE_ALIAS(kvm_update_ctxt_gp_regs);
KVM_NVHE_ALIAS(spectre_bhb_patch_loop_iter);
KVM_NVHE_ALIAS(spectre_bhb_patch_loop_mitigation_enable);
KVM_NVHE_ALIAS(spectre_bhb_patch_wa3);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9bef7638342e..78c562a060de 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -16,6 +16,7 @@
#include <linux/fs.h>
#include <linux/mman.h>
#include <linux/sched.h>
+#include <linux/secretmem.h>
#include <linux/kvm.h>
#include <linux/kvm_irqfd.h>
#include <linux/irqbypass.h>
@@ -452,6 +453,7 @@ int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
{
+ unsigned long pages_needed;
int err;
spin_lock_init(&vcpu->arch.mp_state_lock);
@@ -469,6 +471,14 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
+ if (kvm_use_dynamic_regs()) {
+ pages_needed = (sizeof(*vcpu_gp_regs(vcpu)) + PAGE_SIZE - 1) / PAGE_SIZE;
+ vcpu->arch.ctxt.regs_area = secretmem_allocate_pages(fls(pages_needed - 1));
+ if (!vcpu->arch.ctxt.regs_area)
+ return -ENOMEM;
+ vcpu->arch.ctxt.regs = vcpu->arch.ctxt.regs_area->ptr;
+ }
+
/* Set up the timer */
kvm_timer_vcpu_init(vcpu);
@@ -489,9 +499,14 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
err = kvm_vgic_vcpu_init(vcpu);
if (err)
- return err;
+ goto free_vcpu_ctxt;
return kvm_share_hyp(vcpu, vcpu + 1);
+
+free_vcpu_ctxt:
+ if (kvm_use_dynamic_regs())
+ secretmem_release_pages(vcpu->arch.ctxt.regs_area);
+ return err;
}
void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
@@ -508,6 +523,9 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
kvm_pmu_vcpu_destroy(vcpu);
kvm_vgic_vcpu_destroy(vcpu);
kvm_arm_vcpu_destroy(vcpu);
+
+ if (kvm_use_dynamic_regs())
+ secretmem_release_pages(vcpu->arch.ctxt.regs_area);
}
void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
@@ -2683,6 +2701,45 @@ static int __init init_hyp_mode(void)
return err;
}
+static int init_hyp_hve_mode(void)
+{
+ int cpu;
+ int err = 0;
+
+ if (!kvm_use_dynamic_regs())
+ return 0;
+
+ /* Allocate gp-regs */
+ for_each_possible_cpu(cpu) {
+ void *hyp_ctxt_regs;
+ void *kvm_host_data_regs;
+
+ hyp_ctxt_regs = kzalloc(sizeof(struct user_pt_regs), GFP_KERNEL);
+ if (!hyp_ctxt_regs) {
+ err = -ENOMEM;
+ goto free_regs;
+ }
+ per_cpu(kvm_hyp_ctxt, cpu).regs = hyp_ctxt_regs;
+
+ kvm_host_data_regs = kzalloc(sizeof(struct user_pt_regs), GFP_KERNEL);
+ if (!kvm_host_data_regs) {
+ err = -ENOMEM;
+ goto free_regs;
+ }
+ per_cpu(kvm_host_data, cpu).host_ctxt.regs = kvm_host_data_regs;
+ }
+
+ return 0;
+
+free_regs:
+ for_each_possible_cpu(cpu) {
+ kfree(per_cpu(kvm_hyp_ctxt, cpu).regs);
+ kfree(per_cpu(kvm_host_data, cpu).host_ctxt.regs);
+ }
+
+ return err;
+}
+
struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr)
{
struct kvm_vcpu *vcpu = NULL;
@@ -2806,6 +2863,10 @@ static __init int kvm_arm_init(void)
err = init_hyp_mode();
if (err)
goto out_err;
+ } else {
+ err = init_hyp_hve_mode();
+ if (err)
+ goto out_err;
}
err = kvm_init_vector_slots();
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index 91b22a014610..fcef7e89d042 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -185,6 +185,29 @@ void __init kvm_update_va_mask(struct alt_instr *alt,
}
}
+void __init kvm_update_ctxt_gp_regs(struct alt_instr *alt,
+ __le32 *origptr, __le32 *updptr, int nr_inst)
+{
+ u32 rd, rn, imm, insn, oinsn;
+
+ BUG_ON(nr_inst != 1);
+
+ if (!kvm_use_dynamic_regs())
+ return;
+
+ oinsn = le32_to_cpu(origptr[0]);
+ rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, oinsn);
+ rn = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RN, oinsn);
+ imm = offsetof(struct kvm_cpu_context, regs);
+
+ insn = aarch64_insn_gen_load_store_imm(rd, rn, imm,
+ AARCH64_INSN_SIZE_64,
+ AARCH64_INSN_LDST_LOAD_IMM_OFFSET);
+ BUG_ON(insn == AARCH64_BREAK_FAULT);
+
+ updptr[0] = cpu_to_le32(insn);
+}
+
void kvm_patch_vector_branch(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst)
{
--
2.40.1
* [RFC PATCH 6/7] arm64: KVM: Refactor C-code to access vCPU fp-registers through macros
2024-09-11 14:33 [RFC PATCH 0/7] support for mm-local memory allocations and use it Fares Mehanna
` (4 preceding siblings ...)
2024-09-11 14:34 ` [RFC PATCH 5/7] arm64: KVM: Allocate vCPU gp-regs dynamically on VHE and KERNEL_SECRETMEM enabled systems Fares Mehanna
@ 2024-09-11 14:34 ` Fares Mehanna
2024-09-11 14:34 ` [RFC PATCH 7/7] arm64: KVM: Allocate vCPU fp-regs dynamically on VHE and KERNEL_SECRETMEM enabled systems Fares Mehanna
` (3 subsequent siblings)
9 siblings, 0 replies; 24+ messages in thread
From: Fares Mehanna @ 2024-09-11 14:34 UTC (permalink / raw)
Cc: nh-open-source, Fares Mehanna, Marc Zyngier, Oliver Upton,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Andrew Morton, Kemeng Shi, Pierre-Clément Tosi,
Ard Biesheuvel, Mark Rutland, Javier Martinez Canillas,
Arnd Bergmann, Fuad Tabba, Mark Brown, Joey Gouly,
Kristina Martsenko, Randy Dunlap, Bjorn Helgaas,
Jean-Philippe Brucker, Mike Rapoport (IBM),
David Hildenbrand, Roman Kagan,
moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list, open list:MEMORY MANAGEMENT
Unify how KVM accesses vCPU fp-regs by using vcpu_fp_regs(). This is a
prerequisite for later making the fp-regs dynamically allocated for vCPUs.
Signed-off-by: Fares Mehanna <faresx@amazon.de>
---
arch/arm64/include/asm/kvm_host.h | 2 ++
arch/arm64/kvm/arm.c | 2 +-
arch/arm64/kvm/fpsimd.c | 2 +-
arch/arm64/kvm/guest.c | 6 +++---
arch/arm64/kvm/hyp/include/hyp/switch.h | 4 ++--
arch/arm64/kvm/hyp/nvhe/hyp-main.c | 4 ++--
arch/arm64/kvm/reset.c | 2 +-
7 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 23a10178d1b0..e8ed2c12479f 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -968,6 +968,8 @@ static __always_inline struct user_pt_regs *ctxt_gp_regs(const struct kvm_cpu_co
return regs;
}
#define vcpu_gp_regs(v) (ctxt_gp_regs(&(v)->arch.ctxt))
+#define ctxt_fp_regs(ctxt) (&(ctxt)->fp_regs)
+#define vcpu_fp_regs(v) (ctxt_fp_regs(&(v)->arch.ctxt))
/*
* Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 78c562a060de..7542af3f766a 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -2507,7 +2507,7 @@ static void finalize_init_hyp_mode(void)
for_each_possible_cpu(cpu) {
struct user_fpsimd_state *fpsimd_state;
- fpsimd_state = &per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->host_ctxt.fp_regs;
+ fpsimd_state = ctxt_fp_regs(&per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->host_ctxt);
per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->fpsimd_state =
kern_hyp_va(fpsimd_state);
}
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index c53e5b14038d..c27c96ae22e1 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -130,7 +130,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
* Currently we do not support SME guests so SVCR is
* always 0 and we just need a variable to point to.
*/
- fp_state.st = &vcpu->arch.ctxt.fp_regs;
+ fp_state.st = vcpu_fp_regs(vcpu);
fp_state.sve_state = vcpu->arch.sve_state;
fp_state.sve_vl = vcpu->arch.sve_max_vl;
fp_state.sme_state = NULL;
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 821a2b7de388..3474874a00a7 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -170,13 +170,13 @@ static void *core_reg_addr(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
KVM_REG_ARM_CORE_REG(fp_regs.vregs[31]):
off -= KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]);
off /= 4;
- return &vcpu->arch.ctxt.fp_regs.vregs[off];
+ return &vcpu_fp_regs(vcpu)->vregs[off];
case KVM_REG_ARM_CORE_REG(fp_regs.fpsr):
- return &vcpu->arch.ctxt.fp_regs.fpsr;
+ return &vcpu_fp_regs(vcpu)->fpsr;
case KVM_REG_ARM_CORE_REG(fp_regs.fpcr):
- return &vcpu->arch.ctxt.fp_regs.fpcr;
+ return &vcpu_fp_regs(vcpu)->fpcr;
default:
return NULL;
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index d2ed0938fc90..1444bad519db 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -319,7 +319,7 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
*/
sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
__sve_restore_state(vcpu_sve_pffr(vcpu),
- &vcpu->arch.ctxt.fp_regs.fpsr,
+ &vcpu_fp_regs(vcpu)->fpsr,
true);
/*
@@ -401,7 +401,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
if (sve_guest)
__hyp_sve_restore_guest(vcpu);
else
- __fpsimd_restore_state(&vcpu->arch.ctxt.fp_regs);
+ __fpsimd_restore_state(vcpu_fp_regs(vcpu));
/* Skip restoring fpexc32 for AArch64 guests */
if (!(read_sysreg(hcr_el2) & HCR_RW))
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index f43d845f3c4e..feb1dd37f2a5 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -32,7 +32,7 @@ static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu)
* on the VL, so use a consistent (i.e., the maximum) guest VL.
*/
sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
- __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr, true);
+ __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu_fp_regs(vcpu)->fpsr, true);
write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
}
@@ -71,7 +71,7 @@ static void fpsimd_sve_sync(struct kvm_vcpu *vcpu)
if (vcpu_has_sve(vcpu))
__hyp_sve_save_guest(vcpu);
else
- __fpsimd_save_state(&vcpu->arch.ctxt.fp_regs);
+ __fpsimd_save_state(vcpu_fp_regs(vcpu));
if (system_supports_sve())
__hyp_sve_restore_host();
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 0b0ae5ae7bc2..5f38acf5d156 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -229,7 +229,7 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu)
/* Reset core registers */
memset(vcpu_gp_regs(vcpu), 0, sizeof(*vcpu_gp_regs(vcpu)));
- memset(&vcpu->arch.ctxt.fp_regs, 0, sizeof(vcpu->arch.ctxt.fp_regs));
+ memset(vcpu_fp_regs(vcpu), 0, sizeof(*vcpu_fp_regs(vcpu)));
vcpu->arch.ctxt.spsr_abt = 0;
vcpu->arch.ctxt.spsr_und = 0;
vcpu->arch.ctxt.spsr_irq = 0;
--
2.40.1
* [RFC PATCH 7/7] arm64: KVM: Allocate vCPU fp-regs dynamically on VHE and KERNEL_SECRETMEM enabled systems
2024-09-11 14:33 [RFC PATCH 0/7] support for mm-local memory allocations and use it Fares Mehanna
` (5 preceding siblings ...)
2024-09-11 14:34 ` [RFC PATCH 6/7] arm64: KVM: Refactor C-code to access vCPU fp-registers through macros Fares Mehanna
@ 2024-09-11 14:34 ` Fares Mehanna
2024-09-20 12:34 ` [RFC PATCH 0/7] support for mm-local memory allocations and use it Mike Rapoport
` (2 subsequent siblings)
9 siblings, 0 replies; 24+ messages in thread
From: Fares Mehanna @ 2024-09-11 14:34 UTC (permalink / raw)
Cc: nh-open-source, Fares Mehanna, Marc Zyngier, Oliver Upton,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Andrew Morton, Kemeng Shi, Pierre-Clément Tosi,
Ard Biesheuvel, Mark Rutland, Javier Martinez Canillas,
Arnd Bergmann, Fuad Tabba, Mark Brown, Joey Gouly,
Kristina Martsenko, Randy Dunlap, Bjorn Helgaas,
Jean-Philippe Brucker, Mike Rapoport (IBM),
David Hildenbrand, Roman Kagan,
moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list, open list:MEMORY MANAGEMENT
Similar to what was done in the commit:
"arm64: KVM: Allocate vCPU gp-regs dynamically on VHE and KERNEL_SECRETMEM enabled systems"
we're moving the fp-regs to dynamically allocated memory on systems that support VHE and are
compiled with KERNEL_SECRETMEM support. Otherwise, we will use the "fp_regs_storage"
struct embedded in the vCPU context.
Accessing the fp-regs embedded in the vCPU context, without a dereference, is done as:
add \regs, \ctxt, #offsetof(struct kvm_cpu_context, fp_regs_storage)
Accessing the dynamically allocated fp-regs, with a dereference, is done as:
ldr \regs, [\ctxt, #offsetof(struct kvm_cpu_context, fp_regs)]
Signed-off-by: Fares Mehanna <faresx@amazon.de>
---
arch/arm64/include/asm/kvm_host.h | 16 ++++++++++++++--
arch/arm64/kernel/image-vars.h | 1 +
arch/arm64/kvm/arm.c | 29 +++++++++++++++++++++++++++--
arch/arm64/kvm/va_layout.c | 23 +++++++++++++++++++----
4 files changed, 61 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e8ed2c12479f..4132c57d7e69 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -550,7 +550,9 @@ struct kvm_cpu_context {
u64 spsr_irq;
u64 spsr_fiq;
- struct user_fpsimd_state fp_regs;
+ struct user_fpsimd_state *fp_regs;
+ struct user_fpsimd_state fp_regs_storage;
+ struct secretmem_area *fp_regs_area;
u64 sys_regs[NR_SYS_REGS];
@@ -968,7 +970,17 @@ static __always_inline struct user_pt_regs *ctxt_gp_regs(const struct kvm_cpu_co
return regs;
}
#define vcpu_gp_regs(v) (ctxt_gp_regs(&(v)->arch.ctxt))
-#define ctxt_fp_regs(ctxt) (&(ctxt).fp_regs)
+
+static __always_inline struct user_fpsimd_state *ctxt_fp_regs(const struct kvm_cpu_context *ctxt)
+{
+ struct user_fpsimd_state *fp_regs = (void *) ctxt;
+ asm volatile(ALTERNATIVE_CB("add %0, %0, %1\n",
+ ARM64_HAS_VIRT_HOST_EXTN,
+ kvm_update_ctxt_fp_regs)
+ : "+r" (fp_regs)
+ : "I" (offsetof(struct kvm_cpu_context, fp_regs_storage)));
+ return fp_regs;
+}
#define vcpu_fp_regs(v) (ctxt_fp_regs(&(v)->arch.ctxt))
/*
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index e3bb626e299c..904573598e0f 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -87,6 +87,7 @@ KVM_NVHE_ALIAS(kvm_update_va_mask);
KVM_NVHE_ALIAS(kvm_get_kimage_voffset);
KVM_NVHE_ALIAS(kvm_compute_final_ctr_el0);
KVM_NVHE_ALIAS(kvm_update_ctxt_gp_regs);
+KVM_NVHE_ALIAS(kvm_update_ctxt_fp_regs);
KVM_NVHE_ALIAS(spectre_bhb_patch_loop_iter);
KVM_NVHE_ALIAS(spectre_bhb_patch_loop_mitigation_enable);
KVM_NVHE_ALIAS(spectre_bhb_patch_wa3);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 7542af3f766a..17b42e9099c3 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -477,6 +477,14 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
if (!vcpu->arch.ctxt.regs_area)
return -ENOMEM;
vcpu->arch.ctxt.regs = vcpu->arch.ctxt.regs_area->ptr;
+
+ pages_needed = (sizeof(*vcpu_fp_regs(vcpu)) + PAGE_SIZE - 1) / PAGE_SIZE;
+ vcpu->arch.ctxt.fp_regs_area = secretmem_allocate_pages(fls(pages_needed - 1));
+ if (!vcpu->arch.ctxt.fp_regs_area) {
+ err = -ENOMEM;
+ goto free_vcpu_ctxt;
+ }
+ vcpu->arch.ctxt.fp_regs = vcpu->arch.ctxt.fp_regs_area->ptr;
}
/* Set up the timer */
@@ -504,8 +512,10 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
return kvm_share_hyp(vcpu, vcpu + 1);
free_vcpu_ctxt:
- if (kvm_use_dynamic_regs())
+ if (kvm_use_dynamic_regs()) {
secretmem_release_pages(vcpu->arch.ctxt.regs_area);
+ secretmem_release_pages(vcpu->arch.ctxt.fp_regs_area);
+ }
return err;
}
@@ -524,8 +534,10 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
kvm_vgic_vcpu_destroy(vcpu);
kvm_arm_vcpu_destroy(vcpu);
- if (kvm_use_dynamic_regs())
+ if (kvm_use_dynamic_regs()) {
secretmem_release_pages(vcpu->arch.ctxt.regs_area);
+ secretmem_release_pages(vcpu->arch.ctxt.fp_regs_area);
+ }
}
void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
@@ -2729,12 +2741,25 @@ static int init_hyp_hve_mode(void)
per_cpu(kvm_host_data, cpu).host_ctxt.regs = kvm_host_data_regs;
}
+ /* Allocate fp-regs */
+ for_each_possible_cpu(cpu) {
+ void *kvm_host_data_regs;
+
+ kvm_host_data_regs = kzalloc(sizeof(struct user_fpsimd_state), GFP_KERNEL);
+ if (!kvm_host_data_regs) {
+ err = -ENOMEM;
+ goto free_regs;
+ }
+ per_cpu(kvm_host_data, cpu).host_ctxt.fp_regs = kvm_host_data_regs;
+ }
+
return 0;
free_regs:
for_each_possible_cpu(cpu) {
kfree(per_cpu(kvm_hyp_ctxt, cpu).regs);
kfree(per_cpu(kvm_host_data, cpu).host_ctxt.regs);
+ kfree(per_cpu(kvm_host_data, cpu).host_ctxt.fp_regs);
}
return err;
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index fcef7e89d042..ba1030fa5b08 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -185,10 +185,12 @@ void __init kvm_update_va_mask(struct alt_instr *alt,
}
}
-void __init kvm_update_ctxt_gp_regs(struct alt_instr *alt,
- __le32 *origptr, __le32 *updptr, int nr_inst)
+static __always_inline void __init kvm_update_ctxt_regs(struct alt_instr *alt,
+ __le32 *origptr,
+ __le32 *updptr,
+ int nr_inst, u32 imm)
{
- u32 rd, rn, imm, insn, oinsn;
+ u32 rd, rn, insn, oinsn;
BUG_ON(nr_inst != 1);
@@ -198,7 +200,6 @@ void __init kvm_update_ctxt_gp_regs(struct alt_instr *alt,
oinsn = le32_to_cpu(origptr[0]);
rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, oinsn);
rn = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RN, oinsn);
- imm = offsetof(struct kvm_cpu_context, regs);
insn = aarch64_insn_gen_load_store_imm(rd, rn, imm,
AARCH64_INSN_SIZE_64,
@@ -208,6 +209,20 @@ void __init kvm_update_ctxt_gp_regs(struct alt_instr *alt,
updptr[0] = cpu_to_le32(insn);
}
+void __init kvm_update_ctxt_gp_regs(struct alt_instr *alt,
+ __le32 *origptr, __le32 *updptr, int nr_inst)
+{
+ u32 offset = offsetof(struct kvm_cpu_context, regs);
+ kvm_update_ctxt_regs(alt, origptr, updptr, nr_inst, offset);
+}
+
+void __init kvm_update_ctxt_fp_regs(struct alt_instr *alt,
+ __le32 *origptr, __le32 *updptr, int nr_inst)
+{
+ u32 offset = offsetof(struct kvm_cpu_context, fp_regs);
+ kvm_update_ctxt_regs(alt, origptr, updptr, nr_inst, offset);
+}
+
void kvm_patch_vector_branch(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst)
{
--
2.40.1
* Re: [RFC PATCH 1/7] mseal: expose interface to seal / unseal user memory ranges
2024-09-11 14:34 ` [RFC PATCH 1/7] mseal: expose interface to seal / unseal user memory ranges Fares Mehanna
@ 2024-09-12 16:40 ` Liam R. Howlett
2024-09-25 15:25 ` Fares Mehanna
0 siblings, 1 reply; 24+ messages in thread
From: Liam R. Howlett @ 2024-09-12 16:40 UTC (permalink / raw)
To: Fares Mehanna
Cc: nh-open-source, Roman Kagan, Marc Zyngier, Oliver Upton,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Andrew Morton, Kemeng Shi, Pierre-Clément Tosi,
Ard Biesheuvel, Mark Rutland, Javier Martinez Canillas,
Arnd Bergmann, Fuad Tabba, Mark Brown, Joey Gouly,
Kristina Martsenko, Randy Dunlap, Bjorn Helgaas,
Jean-Philippe Brucker, Mike Rapoport (IBM),
David Hildenbrand,
moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list, open list:MEMORY MANAGEMENT
* Fares Mehanna <faresx@amazon.de> [240911 10:36]:
> To make sure the kernel mm-local mapping is untouched by the user, we will seal
> the VMA before changing the protection to be used by the kernel.
>
> This will guarantee that userspace can't unmap or alter this VMA while it is
> being used by the kernel.
>
> After the kernel is done with the secret memory, it will unseal the VMA to be
> able to unmap and free it.
>
> Unseal operation is not exposed to userspace.
We can't use the mseal feature for this; it is supposed to be a one way
transition.
Willy describes the feature best here [1].
It is not clear from the change log above or the cover letter as to why
you need to go this route instead of using the mmap lock.
[1] https://lore.kernel.org/lkml/ZS%2F3GCKvNn5qzhC4@casper.infradead.org/
>
> Signed-off-by: Fares Mehanna <faresx@amazon.de>
> Signed-off-by: Roman Kagan <rkagan@amazon.de>
> ---
> mm/internal.h | 7 +++++
> mm/mseal.c | 81 ++++++++++++++++++++++++++++++++-------------------
> 2 files changed, 58 insertions(+), 30 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index b4d86436565b..cf7280d101e9 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1501,6 +1501,8 @@ bool can_modify_mm(struct mm_struct *mm, unsigned long start,
> unsigned long end);
> bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
> unsigned long end, int behavior);
> +/* mm's mmap write lock must be taken before seal/unseal operation */
> +int do_mseal(unsigned long start, unsigned long end, bool seal);
> #else
> static inline int can_do_mseal(unsigned long flags)
> {
> @@ -1518,6 +1520,11 @@ static inline bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
> {
> return true;
> }
> +
> +static inline int do_mseal(unsigned long start, unsigned long end, bool seal)
> +{
> + return -EINVAL;
> +}
> #endif
>
> #ifdef CONFIG_SHRINKER_DEBUG
> diff --git a/mm/mseal.c b/mm/mseal.c
> index 15bba28acc00..aac9399ffd5d 100644
> --- a/mm/mseal.c
> +++ b/mm/mseal.c
> @@ -26,6 +26,11 @@ static inline void set_vma_sealed(struct vm_area_struct *vma)
> vm_flags_set(vma, VM_SEALED);
> }
>
> +static inline void clear_vma_sealed(struct vm_area_struct *vma)
> +{
> + vm_flags_clear(vma, VM_SEALED);
> +}
> +
> /*
> * check if a vma is sealed for modification.
> * return true, if modification is allowed.
> @@ -117,7 +122,7 @@ bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start, unsigned long
>
> static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
> struct vm_area_struct **prev, unsigned long start,
> - unsigned long end, vm_flags_t newflags)
> + unsigned long end, vm_flags_t newflags, bool seal)
> {
> int ret = 0;
> vm_flags_t oldflags = vma->vm_flags;
> @@ -131,7 +136,10 @@ static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
> goto out;
> }
>
> - set_vma_sealed(vma);
> + if (seal)
> + set_vma_sealed(vma);
> + else
> + clear_vma_sealed(vma);
> out:
> *prev = vma;
> return ret;
> @@ -167,9 +175,9 @@ static int check_mm_seal(unsigned long start, unsigned long end)
> }
>
> /*
> - * Apply sealing.
> + * Apply sealing / unsealing.
> */
> -static int apply_mm_seal(unsigned long start, unsigned long end)
> +static int apply_mm_seal(unsigned long start, unsigned long end, bool seal)
> {
> unsigned long nstart;
> struct vm_area_struct *vma, *prev;
> @@ -191,11 +199,14 @@ static int apply_mm_seal(unsigned long start, unsigned long end)
> unsigned long tmp;
> vm_flags_t newflags;
>
> - newflags = vma->vm_flags | VM_SEALED;
> + if (seal)
> + newflags = vma->vm_flags | VM_SEALED;
> + else
> + newflags = vma->vm_flags & ~(VM_SEALED);
> tmp = vma->vm_end;
> if (tmp > end)
> tmp = end;
> - error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
> + error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags, seal);
> if (error)
> return error;
> nstart = vma_iter_end(&vmi);
> @@ -204,6 +215,37 @@ static int apply_mm_seal(unsigned long start, unsigned long end)
> return 0;
> }
>
> +int do_mseal(unsigned long start, unsigned long end, bool seal)
> +{
> + int ret;
> +
> + if (end < start)
> + return -EINVAL;
> +
> + if (end == start)
> + return 0;
> +
> + /*
> + * First pass, this helps to avoid
> + * partial sealing in case of error in input address range,
> + * e.g. ENOMEM error.
> + */
> + ret = check_mm_seal(start, end);
> + if (ret)
> + goto out;
> +
> + /*
> + * Second pass, this should success, unless there are errors
> + * from vma_modify_flags, e.g. merge/split error, or process
> + * reaching the max supported VMAs, however, those cases shall
> + * be rare.
> + */
> + ret = apply_mm_seal(start, end, seal);
> +
> +out:
> + return ret;
> +}
> +
> /*
> * mseal(2) seals the VM's meta data from
> * selected syscalls.
> @@ -256,7 +298,7 @@ static int apply_mm_seal(unsigned long start, unsigned long end)
> *
> * unseal() is not supported.
> */
> -static int do_mseal(unsigned long start, size_t len_in, unsigned long flags)
> +static int __do_mseal(unsigned long start, size_t len_in, unsigned long flags)
> {
> size_t len;
> int ret = 0;
> @@ -277,33 +319,12 @@ static int do_mseal(unsigned long start, size_t len_in, unsigned long flags)
> return -EINVAL;
>
> end = start + len;
> - if (end < start)
> - return -EINVAL;
> -
> - if (end == start)
> - return 0;
>
> if (mmap_write_lock_killable(mm))
> return -EINTR;
>
> - /*
> - * First pass, this helps to avoid
> - * partial sealing in case of error in input address range,
> - * e.g. ENOMEM error.
> - */
> - ret = check_mm_seal(start, end);
> - if (ret)
> - goto out;
> -
> - /*
> - * Second pass, this should success, unless there are errors
> - * from vma_modify_flags, e.g. merge/split error, or process
> - * reaching the max supported VMAs, however, those cases shall
> - * be rare.
> - */
> - ret = apply_mm_seal(start, end);
> + ret = do_mseal(start, end, true);
>
> -out:
> mmap_write_unlock(current->mm);
> return ret;
> }
> @@ -311,5 +332,5 @@ static int do_mseal(unsigned long start, size_t len_in, unsigned long flags)
> SYSCALL_DEFINE3(mseal, unsigned long, start, size_t, len, unsigned long,
> flags)
> {
> - return do_mseal(start, len, flags);
> + return __do_mseal(start, len, flags);
> }
> --
> 2.40.1
>
>
>
>
>
>
* Re: [RFC PATCH 0/7] support for mm-local memory allocations and use it
2024-09-11 14:33 [RFC PATCH 0/7] support for mm-local memory allocations and use it Fares Mehanna
` (6 preceding siblings ...)
2024-09-11 14:34 ` [RFC PATCH 7/7] arm64: KVM: Allocate vCPU fp-regs dynamically on VHE and KERNEL_SECRETMEM enabled systems Fares Mehanna
@ 2024-09-20 12:34 ` Mike Rapoport
2024-09-25 15:33 ` Fares Mehanna
2024-09-20 13:19 ` Alexander Graf
2024-09-27 12:59 ` David Hildenbrand
9 siblings, 1 reply; 24+ messages in thread
From: Mike Rapoport @ 2024-09-20 12:34 UTC (permalink / raw)
To: Fares Mehanna
Cc: nh-open-source, Marc Zyngier, Oliver Upton, James Morse,
Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon,
Andrew Morton, Kemeng Shi, Pierre-Clément Tosi,
Ard Biesheuvel, Mark Rutland, Javier Martinez Canillas,
Arnd Bergmann, Fuad Tabba, Mark Brown, Joey Gouly,
Kristina Martsenko, Randy Dunlap, Bjorn Helgaas,
Jean-Philippe Brucker, David Hildenbrand, Roman Kagan,
moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list, open list:MEMORY MANAGEMENT
Hi,
On Wed, Sep 11, 2024 at 02:33:59PM +0000, Fares Mehanna wrote:
> In a series posted a few years ago [1], a proposal was put forward to allow the
> kernel to allocate memory local to a mm and thus push it out of reach for
> current and future speculation-based cross-process attacks. We still believe
> this is a nice thing to have.
>
> However, in the time passed since that post Linux mm has grown quite a few new
> goodies, so we'd like to explore possibilities to implement this functionality
> with less effort and churn leveraging the now available facilities.
>
> An RFC was posted few months back [2] to show the proof of concept and a simple
> test driver.
>
> In this RFC, we're using the same approach of implementing mm-local allocations
> piggy-backing on memfd_secret(), using regular user addresses but pinning the
> pages and flipping the user/supervisor flag on the respective PTEs to make them
> directly accessible from kernel.
> In addition to that we are submitting 5 patches to use the secret memory to hide
> the vCPU gp-regs and fp-regs on arm64 VHE systems.
>
> The generic drawbacks of using user virtual addresses mentioned in the previous
> RFC [2] still hold, in addition to a more specific one:
>
> - While the user virtual addresses allocated for kernel secret memory are not
> directly accessible by userspace as the PTEs restrict that, copy_from_user()
> and copy_to_user() can operate on those ranges, so that e.g. the usermode can
> guess the address and pass it as the target buffer for read(), making the
> kernel overwrite it with the user-controlled content. Effectively making the
> secret memory in the current implementation missing confidentiality and
> integrity guarantees.
Having a VMA in user mappings for kernel memory seems weird to say the
least.
Core MM does not expect to have VMAs for kernel memory. What will happen if
userspace ftruncates that VMA? Or registers it with userfaultfd?
> In the specific case of vCPU registers, this is fine because the owner process
> can read and write to them using KVM IOCTLs anyway. But in the general case this
> represents a security concern and needs to be addressed.
>
> A possible way forward for the arch-agnostic implementation is to limit the user
> virtual addresses used for kernel to specific range that can be checked against
> in copy_from_user() and copy_to_user().
>
> For arch specific implementation, using separate PGD is the way to go.
>
> [1] https://lore.kernel.org/lkml/20190612170834.14855-1-mhillenb@amazon.de/
This approach seems much more reasonable and it's not that it was entirely
arch-specific. There is some plumbing at arch level, but the allocator is
anyway arch-independent.
--
Sincerely yours,
Mike.
* Re: [RFC PATCH 0/7] support for mm-local memory allocations and use it
2024-09-11 14:33 [RFC PATCH 0/7] support for mm-local memory allocations and use it Fares Mehanna
` (7 preceding siblings ...)
2024-09-20 12:34 ` [RFC PATCH 0/7] support for mm-local memory allocations and use it Mike Rapoport
@ 2024-09-20 13:19 ` Alexander Graf
2024-09-27 12:59 ` David Hildenbrand
9 siblings, 0 replies; 24+ messages in thread
From: Alexander Graf @ 2024-09-20 13:19 UTC (permalink / raw)
To: Fares Mehanna
Cc: nh-open-source, Marc Zyngier, Oliver Upton, James Morse,
Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon,
Andrew Morton, Kemeng Shi, Pierre-Clément Tosi,
Ard Biesheuvel, Mark Rutland, Javier Martinez Canillas,
Arnd Bergmann, Fuad Tabba, Mark Brown, Joey Gouly,
Kristina Martsenko, Randy Dunlap, Bjorn Helgaas,
Jean-Philippe Brucker, Mike Rapoport (IBM),
David Hildenbrand, Roman Kagan,
moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list, open list:MEMORY MANAGEMENT, mark.rutland,
Mike Rapoport
On 11.09.24 16:33, Fares Mehanna wrote:
> In a series posted a few years ago [1], a proposal was put forward to allow the
> kernel to allocate memory local to a mm and thus push it out of reach for
> current and future speculation-based cross-process attacks. We still believe
> this is a nice thing to have.
>
> However, in the time passed since that post Linux mm has grown quite a few new
> goodies, so we'd like to explore possibilities to implement this functionality
> with less effort and churn leveraging the now available facilities.
>
> An RFC was posted few months back [2] to show the proof of concept and a simple
> test driver.
>
> In this RFC, we're using the same approach of implementing mm-local allocations
> piggy-backing on memfd_secret(), using regular user addresses but pinning the
> pages and flipping the user/supervisor flag on the respective PTEs to make them
> directly accessible from kernel.
> In addition to that we are submitting 5 patches to use the secret memory to hide
> the vCPU gp-regs and fp-regs on arm64 VHE systems.
>
> The generic drawbacks of using user virtual addresses mentioned in the previous
> RFC [2] still hold, in addition to a more specific one:
>
> - While the user virtual addresses allocated for kernel secret memory are not
> directly accessible by userspace as the PTEs restrict that, copy_from_user()
> and copy_to_user() can operate on those ranges, so that e.g. the usermode can
> guess the address and pass it as the target buffer for read(), making the
> kernel overwrite it with the user-controlled content. Effectively making the
> secret memory in the current implementation missing confidentiality and
> integrity guarantees.
>
> In the specific case of vCPU registers, this is fine because the owner process
> can read and write to them using KVM IOCTLs anyway. But in the general case this
> represents a security concern and needs to be addressed.
>
> A possible way forward for the arch-agnostic implementation is to limit the user
> virtual addresses used for kernel to specific range that can be checked against
> in copy_from_user() and copy_to_user().
>
> For arch specific implementation, using separate PGD is the way to go.
>
> [1] https://lore.kernel.org/lkml/20190612170834.14855-1-mhillenb@amazon.de/
> [2] https://lore.kernel.org/lkml/20240621201501.1059948-1-rkagan@amazon.de/
Hey Mark and Mike,
We talked at LPC about mm-local memory and you had some inputs. It would
be amazing to write them down here so I don't end up playing a game of
telephone :)
Thanks!
* Re: [RFC PATCH 1/7] mseal: expose interface to seal / unseal user memory ranges
2024-09-12 16:40 ` Liam R. Howlett
@ 2024-09-25 15:25 ` Fares Mehanna
0 siblings, 0 replies; 24+ messages in thread
From: Fares Mehanna @ 2024-09-25 15:25 UTC (permalink / raw)
To: liam.howlett
Cc: akpm, ardb, arnd, bhelgaas, broonie, catalin.marinas, david,
faresx, james.morse, javierm, jean-philippe, joey.gouly,
kristina.martsenko, kvmarm, linux-arm-kernel, linux-kernel,
linux-mm, mark.rutland, maz, nh-open-source, oliver.upton, ptosi,
rdunlap, rkagan, rppt, shikemeng, suzuki.poulose, tabba, will,
yuzenghui
Hi,
Thanks for taking a look and apologies for my delayed response.
> It is not clear from the change log above or the cover letter as to why
> you need to go this route instead of using the mmap lock.
In the current form of the patches I use memfd_secret() to allocate the pages
and remove them from the kernel linear mapping. [1]
This allocates the pages, maps them at user virtual addresses and tracks them in a VMA.
Before flipping the permissions on those pages so the kernel can use them, I need
to make sure that those virtual addresses and this VMA are off-limits to the
owning process.
memfd_secret() pages are locked by default, so they won't be swapped out. I need to
seal the VMA to make sure the owner process can't unmap/remap/... or change the
protection of this VMA.
So before changing the permissions on the secret pages, I make sure the pages
are faulted in, locked and sealed, so userspace can't influence this mapping.
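To make the ordering concrete, here is a minimal sketch of that sequence (not the
series' actual code): do_mseal() is the helper exposed by patch 1/7, while
fault_in_and_mlock() and mark_ptes_kernel_only() are hypothetical placeholders for
the fault-in/mlock and PTE-flipping steps.

static int prepare_mm_local_range(struct mm_struct *mm,
                                  unsigned long start, unsigned long end)
{
        int ret;

        if (mmap_write_lock_killable(mm))
                return -EINTR;

        /* 1. fault the pages in and lock them so they are never swapped out */
        ret = fault_in_and_mlock(mm, start, end);               /* hypothetical */
        if (!ret)
                /* 2. seal the VMA so userspace can't unmap/remap/mprotect it */
                ret = do_mseal(start, end, true);
        if (!ret)
                /* 3. only now flip the PTEs so the range is kernel-only */
                ret = mark_ptes_kernel_only(mm, start, end);    /* hypothetical */

        mmap_write_unlock(mm);
        return ret;
}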
> We can't use the mseal feature for this; it is supposed to be a one way
> transition.
For this approach, I need the unseal operation when releasing the memory range.
The kernel can be done with the secret pages in one of two scenarios:
1. During the lifecycle of the process.
2. When the process terminates.
For the first case, I need to unmap the VMA so it can be reused by the owning
process later, so I need the unseal operation. For the second case, however, we
don't need that, since the process mm is already destroyed or just about to be
destroyed anyway, regardless of sealed/unsealed VMAs. [1]
I didn't expose the unseal operation to userspace.
[1] https://lore.kernel.org/linux-arm-kernel/20240911143421.85612-3-faresx@amazon.de/
Thanks!
Fares.
* Re: [RFC PATCH 0/7] support for mm-local memory allocations and use it
2024-09-20 12:34 ` [RFC PATCH 0/7] support for mm-local memory allocations and use it Mike Rapoport
@ 2024-09-25 15:33 ` Fares Mehanna
2024-09-27 7:08 ` Mike Rapoport
0 siblings, 1 reply; 24+ messages in thread
From: Fares Mehanna @ 2024-09-25 15:33 UTC (permalink / raw)
To: rppt
Cc: akpm, ardb, arnd, bhelgaas, broonie, catalin.marinas, david,
faresx, james.morse, javierm, jean-philippe, joey.gouly,
kristina.martsenko, kvmarm, linux-arm-kernel, linux-kernel,
linux-mm, mark.rutland, maz, nh-open-source, oliver.upton, ptosi,
rdunlap, rkagan, shikemeng, suzuki.poulose, tabba, will,
yuzenghui
Hi,
Thanks for taking a look and apologies for my delayed response.
> Having a VMA in user mappings for kernel memory seems weird to say the
> least.
I see your point and agree with you. Let me explain the motivation, pros and
cons of the approach after answering your questions.
> Core MM does not expect to have VMAs for kernel memory. What will happen if
> userspace ftruncates that VMA? Or registers it with userfaultfd?
In the patch, I make sure the pages are faulted in, locked and sealed to make
sure the VMA is practically off-limits to the owner process. Only after that
do I change the permissions so the pages can be used by the kernel.
> This approach seems much more reasonable and it's not that it was entirely
> arch-specific. There is some plumbing at arch level, but the allocator is
> anyway arch-independent.
So I wanted to explore a simple solution to implement mm-local kernel secret
memory without much arch dependent code. I also wanted to reuse as much of
memfd_secret() as possible to benefit from what is done already and possible
future improvements to it.
Keeping the secret pages at user virtual addresses is easier, as the page table
entries are not global by default, so no special handling is needed for spawn(). Keeping
them tracked in a VMA shouldn't require special handling for fork().
The challenge was to keep the virtual addresses / VMA away from user control as
long as the kernel is using them, and to signal the mm core that this VMA is special
so it is not merged with other VMAs.
I believe locking the pages, sealing the VMA and prefaulting the pages should put
the mapping practically out of reach of userspace influence.
But the current approach has these downsides (that I can think of):
1. Kernel secret user virtual addresses can still be used in functions accepting
user virtual addresses like copy_from_user / copy_to_user.
2. Even if we are sure the VMA is off-limits to userspace, adding a VMA with
kernel addresses increases the attack surface between userspace and the
kernel.
3. Since kernel secret memory is mapped at user virtual addresses, it is very
easy to guess the exact virtual address (using binary search), and since
this functionality is designed to keep user data, it is fair to assume
userspace will always be able to influence what is written there.
So it kind of breaks KASLR for those specific pages.
4. It locks user virtual memory away, which may break software that assumes
it can mmap() at specific addresses.
One way to address most of those concerns while keeping the solution almost arch
agnostic is to allocate a reasonable chunk of user virtual memory to be used only
for kernel secret memory, and not track the allocations in VMAs.
This is similar to the old approach, but instead of creating a non-global kernel
PGD per arch it would use a chunk of user virtual memory. This chunk can be defined
per arch, and this solution won't use memfd_secret().
We can then easily enlighten the kernel about this range so the kernel can test
against it in functions like access_ok(). This approach, however, will make
downside #4 even worse, as it will reserve a bigger chunk of user virtual memory
if this feature is enabled.
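To illustrate what such a check could look like (a rough sketch only; the window
boundaries and helper name are invented here and are not part of the series):

/* example: a per-arch user-VA window reserved for mm-local kernel secrets */
#define MM_LOCAL_SECRET_START   0x7f0000000000UL
#define MM_LOCAL_SECRET_END     0x7f8000000000UL

static inline bool range_is_mm_local_secret(unsigned long addr, size_t len)
{
        return addr < MM_LOCAL_SECRET_END &&
               addr + len > MM_LOCAL_SECRET_START;
}

/*
 * access_ok() / copy_{from,to}_user() would then additionally reject any
 * user range for which range_is_mm_local_secret() returns true.
 */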
I'm also very okay switching back to the old approach, at the expense of:
1. Supporting fewer architectures, those that can afford to give away a single PGD.
2. More complicated arch-specific code.
Also @graf mentioned how aarch64 uses TTBR0/TTBR1 for user and kernel page
tables; I haven't looked at this yet, but it probably means that the kernel page
table will be tracked per process and TTBR1 will be switched during context
switching.
What do you think? I would appreciate your opinion before working on the next
RFC patch set.
Thanks!
Fares.
* Re: [RFC PATCH 0/7] support for mm-local memory allocations and use it
2024-09-25 15:33 ` Fares Mehanna
@ 2024-09-27 7:08 ` Mike Rapoport
2024-10-08 20:06 ` Fares Mehanna
0 siblings, 1 reply; 24+ messages in thread
From: Mike Rapoport @ 2024-09-27 7:08 UTC (permalink / raw)
To: Fares Mehanna
Cc: akpm, ardb, arnd, bhelgaas, broonie, catalin.marinas, david,
james.morse, javierm, jean-philippe, joey.gouly,
kristina.martsenko, kvmarm, linux-arm-kernel, linux-kernel,
linux-mm, mark.rutland, maz, nh-open-source, oliver.upton, ptosi,
rdunlap, rkagan, shikemeng, suzuki.poulose, tabba, will,
yuzenghui
On Wed, Sep 25, 2024 at 03:33:47PM +0000, Fares Mehanna wrote:
> Hi,
>
> Thanks for taking a look and apologies for my delayed response.
>
> > Having a VMA in user mappings for kernel memory seems weird to say the
> > least.
>
> I see your point and agree with you. Let me explain the motivation, pros and
> cons of the approach after answering your questions.
>
> > Core MM does not expect to have VMAs for kernel memory. What will happen if
> > userspace ftruncates that VMA? Or registers it with userfaultfd?
>
> In the patch, I make sure the pages are faulted in, locked and sealed to make
> sure the VMA is practically off-limits from the owner process. Only after that
> I change the permissions to be used by the kernel.
And what about VMA accesses from the kernel? How do you verify that
everything that works with VMAs in the kernel can deal with that being a
kernel mapping rather than userspace?
> > This approach seems much more reasonable and it's not that it was entirely
> > arch-specific. There is some plumbing at arch level, but the allocator is
> > anyway arch-independent.
>
> So I wanted to explore a simple solution to implement mm-local kernel secret
> memory without much arch dependent code. I also wanted to reuse as much of
> memfd_secret() as possible to benefit from what is done already and possible
> future improvements to it.
Adding functionality that normally belongs to userspace into mm/secretmem.c
does not feel like a reuse, sorry.
The only thing you actually share is removal of the allocated pages from
the direct map. And hijacking the userspace mapping instead of properly
implementing a kernel mapping does not seem like a proper solution.
> Keeping the secret pages in user virtual addresses is easier as the page table
> entries are not global by default so no special handling for spawn(). keeping
> them tracked in VMA shouldn't require special handling for fork().
>
> The challenge was to keep the virtual addresses / VMA away from user control as
> long as the kernel is using it, and signal the mm core that this VMA is special
> so it is not merged with other VMAs.
>
> I believe locking the pages, sealing the VMA, prefaulting the pages should make
> it practicality away of user space influence.
>
> But the current approach have those downsides: (That I can think of)
> 1. Kernel secret user virtual addresses can still be used in functions accepting
> user virtual addresses like copy_from_user / copy_to_user.
> 2. Even if we are sure the VMA is off-limits to userspace, adding VMA with
> kernel addresses will increase attack surface between userspace and the
> kernel.
> 3. Since kernel secret memory is mapped in user virtual addresses, it is very
> easy to guess the exact virtual address (using binary search), and since
> this functionality is designed to keep user data, it is fair to assume the
> userspace will always be able to influence what is written there.
> So it kind of breaks KASLR for those specific pages.
There is no need even to guess; it will appear in /proc/pid/maps
> 4. It locks user virtual memory away, this may break some software if they
> assumed they can mmap() into specific places.
>
> One way to address most of those concerns while keeping the solution almost arch
> agnostic is is to allocate reasonable chunk of user virtual memory to be only
> used for kernel secret memory, and not track them in VMAs.
> This is similar to the old approach but instead of creating non-global kernel
> PGD per arch it will use chunk of user virtual memory. This chunk can be defined
> per arch, and this solution won't use memfd_secret().
> We can then easily enlighten the kernel about this range so the kernel can test
> for this range in functions like access_ok(). This approach however will make
> downside #4 even worse, as it will reserve bigger chunk of user virtual memory
> if this feature is enabled.
>
> I'm also very okay switching back to the old approach with the expense of:
> 1. Supporting fewer architectures that can afford to give away single PGD.
Only a few architectures can modify their direct map, and all of these can spare
a PGD entry.
> 2. More complicated arch specific code.
On x86 similar code already exists for LDT, you may want to look at Andy's
comments on old proclocal posting:
https://lore.kernel.org/lkml/CALCETrXHbS9VXfZ80kOjiTrreM2EbapYeGp68mvJPbosUtorYA@mail.gmail.com/
> Also @graf mentioned how aarch64 uses TTBR0/TTBR1 for user and kernel page
> tables, I haven't looked at this yet but it probably means that kernel page
> table will be tracked per process and TTBR1 will be switched during context
> switching.
>
> What do you think? I would appreciate your opinion before working on the next
> RFC patch set.
>
> Thanks!
> Fares.
>
>
>
>
--
Sincerely yours,
Mike.
* Re: [RFC PATCH 0/7] support for mm-local memory allocations and use it
2024-09-11 14:33 [RFC PATCH 0/7] support for mm-local memory allocations and use it Fares Mehanna
` (8 preceding siblings ...)
2024-09-20 13:19 ` Alexander Graf
@ 2024-09-27 12:59 ` David Hildenbrand
2024-10-10 15:52 ` Fares Mehanna
9 siblings, 1 reply; 24+ messages in thread
From: David Hildenbrand @ 2024-09-27 12:59 UTC (permalink / raw)
To: Fares Mehanna
Cc: nh-open-source, Marc Zyngier, Oliver Upton, James Morse,
Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon,
Andrew Morton, Kemeng Shi, Pierre-Clément Tosi,
Ard Biesheuvel, Mark Rutland, Javier Martinez Canillas,
Arnd Bergmann, Fuad Tabba, Mark Brown, Joey Gouly,
Kristina Martsenko, Randy Dunlap, Bjorn Helgaas,
Jean-Philippe Brucker, Mike Rapoport (IBM),
Roman Kagan,
moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
open list, open list:MEMORY MANAGEMENT
On 11.09.24 16:33, Fares Mehanna wrote:
> In a series posted a few years ago [1], a proposal was put forward to allow the
> kernel to allocate memory local to a mm and thus push it out of reach for
> current and future speculation-based cross-process attacks. We still believe
> this is a nice thing to have.
>
> However, in the time passed since that post Linux mm has grown quite a few new
> goodies, so we'd like to explore possibilities to implement this functionality
> with less effort and churn leveraging the now available facilities.
>
> An RFC was posted few months back [2] to show the proof of concept and a simple
> test driver.
>
> In this RFC, we're using the same approach of implementing mm-local allocations
> piggy-backing on memfd_secret(), using regular user addresses but pinning the
> pages and flipping the user/supervisor flag on the respective PTEs to make them
> directly accessible from kernel.
> In addition to that we are submitting 5 patches to use the secret memory to hide
> the vCPU gp-regs and fp-regs on arm64 VHE systems.
I'm a bit lost on what exactly we want to achieve. The point where we
start flipping user/supervisor flags confuses me :)
With secretmem, you'd get memory allocated that
(a) Is accessible by user space -- mapped into user space.
(b) Is inaccessible by kernel space -- not mapped into the direct map
(c) GUP will fail, but copy_from / copy_to user will work.
Another way, without secretmem, would be to consider these "secrets"
kernel allocations that can be mapped into user space using mmap() of a
special fd. That is, they wouldn't have their origin in secretmem, but
in KVM as a kernel allocation. It could be achieved by using VM_MIXEDMAP
with vm_insert_pages(), manually removing them from the directmap.
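As a rough sketch of that idea (illustrative only, not actual KVM code: the page
array setup is assumed to happen elsewhere, and error handling / TLB maintenance
are omitted):

static int secret_mmap(struct file *file, struct vm_area_struct *vma)
{
        struct page **pages = file->private_data;   /* assumed: kernel-allocated pages */
        unsigned long nr = vma_pages(vma);
        unsigned long num = nr;
        unsigned long i;

        vm_flags_set(vma, VM_MIXEDMAP);
        /* drop the pages from the direct map before exposing them */
        for (i = 0; i < nr; i++)
                set_direct_map_invalid_noflush(pages[i]);

        return vm_insert_pages(vma, vma->vm_start, pages, &num);
}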
But, I am not sure who is supposed to access what. Let's explore the
requirements. I assume we want:
(a) Pages accessible by user space -- mapped into user space.
(b) Pages inaccessible by kernel space -- not mapped into the direct map
(c) GUP to fail (no direct map).
(d) copy_from / copy_to user to fail?
And on top of that, some way to access these pages on demand from kernel
space? (temporary CPU-local mapping?)
Or how would the kernel make use of these allocations?
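For the on-demand access part, one simple (though not CPU-local) option would be a
temporary vmap() of the pages while their direct map entries stay removed; a truly
CPU-local window would need arch-specific help:

/* temporary kernel mapping of the secret pages; vunmap() when done */
static void *map_secret_pages(struct page **pages, unsigned int nr)
{
        return vmap(pages, nr, VM_MAP, PAGE_KERNEL);
}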
--
Cheers,
David / dhildenb
* Re: [RFC PATCH 0/7] support for mm-local memory allocations and use it
2024-09-27 7:08 ` Mike Rapoport
@ 2024-10-08 20:06 ` Fares Mehanna
0 siblings, 0 replies; 24+ messages in thread
From: Fares Mehanna @ 2024-10-08 20:06 UTC (permalink / raw)
To: rppt
Cc: akpm, ardb, arnd, bhelgaas, broonie, catalin.marinas, david,
faresx, james.morse, javierm, jean-philippe, joey.gouly,
kristina.martsenko, kvmarm, linux-arm-kernel, linux-kernel,
linux-mm, mark.rutland, maz, nh-open-source, oliver.upton, ptosi,
rdunlap, rkagan, shikemeng, suzuki.poulose, tabba, will,
yuzenghui
> > Hi,
> >
> > Thanks for taking a look and apologies for my delayed response.
> >
> > > Having a VMA in user mappings for kernel memory seems weird to say the
> > > least.
> >
> > I see your point and agree with you. Let me explain the motivation, pros and
> > cons of the approach after answering your questions.
> >
> > > Core MM does not expect to have VMAs for kernel memory. What will happen if
> > > userspace ftruncates that VMA? Or registers it with userfaultfd?
> >
> > In the patch, I make sure the pages are faulted in, locked and sealed to make
> > sure the VMA is practically off-limits from the owner process. Only after that
> > I change the permissions to be used by the kernel.
>
> And what about VMA accesses from the kernel? How do you verify that
> everything that works with VMAs in the kernel can deal with that being a
> kernel mapping rather than userspace?
I added `VM_MIXEDMAP` when the secret allocation is intended for kernel usage;
this should make the VMA special and prevent a lot of operations, like VMA merging.
Maybe the usage of `VM_MIXEDMAP` is not ideal and we could introduce a new kernel
flag for that. But I'm not aware of a destructive VMA operation from the kernel side
while the VMA is marked special, mixed-map and sealed.
> > > This approach seems much more reasonable and it's not that it was entirely
> > > arch-specific. There is some plumbing at arch level, but the allocator is
> > > anyway arch-independent.
> >
> > So I wanted to explore a simple solution to implement mm-local kernel secret
> > memory without much arch dependent code. I also wanted to reuse as much of
> > memfd_secret() as possible to benefit from what is done already and possible
> > future improvements to it.
>
> Adding functionality that normally belongs to userspace into mm/secretmem.c
> does not feel like a reuse, sorry.
Right; because the mapping is in user virtual space, most of the operations do belong
to userspace, yes. I thought this way would be easier to demonstrate the approach
for the RFC.
> The only thing your actually share is removal of the allocated pages from
> the direct map. And hijacking userspace mapping instead of properly
> implementing a kernel mapping does not seem like proper solution.
Also we get:
1. The PGD is private when creating a new process.
2. Existing kernel-secret mappings for a given process will be cloned on fork(),
so there is no need to track them separately for cloning.
3. No special handling for context switching.
> > Keeping the secret pages in user virtual addresses is easier as the page table
> > entries are not global by default so no special handling for spawn(). keeping
> > them tracked in VMA shouldn't require special handling for fork().
> >
> > The challenge was to keep the virtual addresses / VMA away from user control as
> > long as the kernel is using it, and signal the mm core that this VMA is special
> > so it is not merged with other VMAs.
> >
> > I believe locking the pages, sealing the VMA, prefaulting the pages should make
> > it practicality away of user space influence.
> >
> > But the current approach have those downsides: (That I can think of)
> > 1. Kernel secret user virtual addresses can still be used in functions accepting
> > user virtual addresses like copy_from_user / copy_to_user.
> > 2. Even if we are sure the VMA is off-limits to userspace, adding VMA with
> > kernel addresses will increase attack surface between userspace and the
> > kernel.
> > 3. Since kernel secret memory is mapped in user virtual addresses, it is very
> > easy to guess the exact virtual address (using binary search), and since
> > this functionality is designed to keep user data, it is fair to assume the
> > userspace will always be able to influence what is written there.
> > So it kind of breaks KASLR for those specific pages.
>
> There is even no need to guess, it will appear on /proc/pid/maps
Yeah, but that is easily fixable; the other issue, however, stays the same unless
I allocate a bigger chunk of user virtual memory and move away from VMA tracking.
> > 4. It locks user virtual memory away, this may break some software if they
> > assumed they can mmap() into specific places.
> >
> > One way to address most of those concerns while keeping the solution almost arch
> > agnostic is is to allocate reasonable chunk of user virtual memory to be only
> > used for kernel secret memory, and not track them in VMAs.
> > This is similar to the old approach but instead of creating non-global kernel
> > PGD per arch it will use chunk of user virtual memory. This chunk can be defined
> > per arch, and this solution won't use memfd_secret().
> > We can then easily enlighten the kernel about this range so the kernel can test
> > for this range in functions like access_ok(). This approach however will make
> > downside #4 even worse, as it will reserve bigger chunk of user virtual memory
> > if this feature is enabled.
> >
> > I'm also very okay switching back to the old approach with the expense of:
> > 1. Supporting fewer architectures that can afford to give away single PGD.
>
> Only few architectures can modify their direct map, and all these can spare
> a PGD entry.
>
> > 2. More complicated arch specific code.
>
> On x86 similar code already exists for LDT, you may want to look at Andy's
> comments on old proclocal posting:
>
> https://lore.kernel.org/lkml/CALCETrXHbS9VXfZ80kOjiTrreM2EbapYeGp68mvJPbosUtorYA@mail.gmail.com/
Ah I see, so no need to think about architectures that can't spare a PGD. Thanks!
I read the discussion; LDT is x86-specific, and I wanted to start with aarch64.
I'm still thinking about the best approach for aarch64 for my next PoC. aarch64
tracks two tables in TTBR0/TTBR1, so what I'm thinking of is:
1. Have a kernel page table per process, with all its PGD entries shared other than
a single PGD reserved for kernel secret allocations.
2. On fork, traverse the private PGD part and clone the existing page table for the
new process.
3. On context switch, write the table to TTBR1, so the kernel has
access to all secret allocations of this process (rough sketch below).
This moves away from user vaddrs and VMA tracking, at the expense of each
architecture supporting it in its own way.
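As a very rough, arm64-flavoured sketch of step 3 above (the mm->secret_kernel_pgd
field and the hook placement are invented for illustration; a real implementation
would also need the usual care taken when replacing TTBR1 while running on it):

static void switch_secret_kernel_table(struct mm_struct *next)
{
        /* hypothetical per-mm kernel table that carries the private secret PGD */
        phys_addr_t pgd_phys = virt_to_phys(next->secret_kernel_pgd);

        write_sysreg(phys_to_ttbr(pgd_phys), ttbr1_el1);
        isb();
}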
Does that sound more decent?
Thank you!
Fares.
> > Also @graf mentioned how aarch64 uses TTBR0/TTBR1 for user and kernel page
> > tables, I haven't looked at this yet but it probably means that kernel page
> > table will be tracked per process and TTBR1 will be switched during context
> > switching.
> >
> > What do you think? I would appreciate your opinion before working on the next
> > RFC patch set.
> >
> > Thanks!
> > Fares.
* Re: [RFC PATCH 0/7] support for mm-local memory allocations and use it
2024-09-27 12:59 ` David Hildenbrand
@ 2024-10-10 15:52 ` Fares Mehanna
2024-10-11 12:04 ` David Hildenbrand
0 siblings, 1 reply; 24+ messages in thread
From: Fares Mehanna @ 2024-10-10 15:52 UTC (permalink / raw)
To: david
Cc: akpm, ardb, arnd, bhelgaas, broonie, catalin.marinas, faresx,
james.morse, javierm, jean-philippe, joey.gouly,
kristina.martsenko, kvmarm, linux-arm-kernel, linux-kernel,
linux-mm, mark.rutland, maz, nh-open-source, oliver.upton, ptosi,
rdunlap, rkagan, rppt, shikemeng, suzuki.poulose, tabba, will,
yuzenghui
> > In a series posted a few years ago [1], a proposal was put forward to allow the
> > kernel to allocate memory local to a mm and thus push it out of reach for
> > current and future speculation-based cross-process attacks. We still believe
> > this is a nice thing to have.
> >
> > However, in the time passed since that post Linux mm has grown quite a few new
> > goodies, so we'd like to explore possibilities to implement this functionality
> > with less effort and churn leveraging the now available facilities.
> >
> > An RFC was posted few months back [2] to show the proof of concept and a simple
> > test driver.
> >
> > In this RFC, we're using the same approach of implementing mm-local allocations
> > piggy-backing on memfd_secret(), using regular user addresses but pinning the
> > pages and flipping the user/supervisor flag on the respective PTEs to make them
> > directly accessible from kernel.
> > In addition to that we are submitting 5 patches to use the secret memory to hide
> > the vCPU gp-regs and fp-regs on arm64 VHE systems.
>
> I'm a bit lost on what exactly we want to achieve. The point where we
> start flipping user/supervisor flags confuses me :)
>
> With secretmem, you'd get memory allocated that
> (a) Is accessible by user space -- mapped into user space.
> (b) Is inaccessible by kernel space -- not mapped into the direct map
> (c) GUP will fail, but copy_from / copy_to user will work.
>
>
> Another way, without secretmem, would be to consider these "secrets"
> kernel allocations that can be mapped into user space using mmap() of a
> special fd. That is, they wouldn't have their origin in secretmem, but
> in KVM as a kernel allocation. It could be achieved by using VM_MIXEDMAP
> with vm_insert_pages(), manually removing them from the directmap.
>
> But, I am not sure who is supposed to access what. Let's explore the
> requirements. I assume we want:
>
> (a) Pages accessible by user space -- mapped into user space.
> (b) Pages inaccessible by kernel space -- not mapped into the direct map
> (c) GUP to fail (no direct map).
> (d) copy_from / copy_to user to fail?
>
> And on top of that, some way to access these pages on demand from kernel
> space? (temporary CPU-local mapping?)
>
> Or how would the kernel make use of these allocations?
>
> --
> Cheers,
>
> David / dhildenb
Hi David,
Thanks for taking a look at the patches!
We're trying to allocate kernel memory that is accessible to the kernel, but
only while the context of the owning process is loaded.
So this is kernel memory that is not needed to operate the kernel itself; it
is there to store & process data on behalf of a process. The requirement for this
memory is that it is never touched unless the process is scheduled on this
core; otherwise any other access will crash the kernel.
So this memory should only be directly readable and writable by the kernel, and
only when the process context is loaded. The memory shouldn't be readable or
writable by the owner process at all.
This is basically done by removing those pages from the kernel linear mapping and
attaching them only to the process mm_struct. So during a context switch the
kernel loses access to the secret memory of the process being scheduled out and gains
access to the new process's secret memory.
This generally protects against speculation attacks, and against another process
managing to trick the kernel into leaking data from memory: in that case the kernel
will crash if it tries to access another process's secret memory.
Since this memory is special, in the sense that it is kernel memory but only makes
sense in the context of the owner process, I tried in this patch series to explore
the possibility of reusing memfd_secret() to allocate this memory in user virtual
address space, manage it in a VMA, and flip the permissions while keeping
control of the mapping exclusively with the kernel.
Right now it is:
(a) Pages not accessible by user space -- even though they are mapped into user
space, the PTEs are marked for kernel usage.
(b) Pages accessible by kernel space -- even though they are not mapped into the
direct map, the PTEs at the uvaddr are marked for kernel usage.
(c) copy_from / copy_to user won't fail -- because it is in the user range, but
this can be fixed by dedicating a specific user-vaddr range to this feature
and checking against that range there.
(d) The secret memory vaddr is guessable by the owner process -- that can also
be fixed by allocating a bigger chunk of user vaddr for this feature and
randomly placing the secret memory within it.
(e) The mapping is off-limits to the owner process, by marking the VMA as locked,
sealed and special.
The other alternative (which was implemented in the first submission) is to track those
allocations in a non-shared kernel PGD per process, and then handle creating, forking
and context-switching this PGD.
What I like about the memfd_secret() approach is the simplicity and being arch
agnostic; what I don't like is the increased attack surface from using VMAs to
track those allocations.
I'm thinking of working on a PoC to implement the first approach of using a
non-shared kernel PGD for secret memory allocations on arm64. This includes
adding a kernel page table per process where all PGDs are shared except one, which
will be used for mapping secret allocations, and handling fork & context
switching (TTBR1 switching(?)) correctly for the secret memory PGD.
What do you think? I'd really appreciate opinions and possible ways forward.
Thanks!
Fares.
* Re: [RFC PATCH 0/7] support for mm-local memory allocations and use it
2024-10-10 15:52 ` Fares Mehanna
@ 2024-10-11 12:04 ` David Hildenbrand
2024-10-11 12:36 ` Mediouni, Mohamed
0 siblings, 1 reply; 24+ messages in thread
From: David Hildenbrand @ 2024-10-11 12:04 UTC (permalink / raw)
To: Fares Mehanna
Cc: akpm, ardb, arnd, bhelgaas, broonie, catalin.marinas,
james.morse, javierm, jean-philippe, joey.gouly,
kristina.martsenko, kvmarm, linux-arm-kernel, linux-kernel,
linux-mm, mark.rutland, maz, nh-open-source, oliver.upton, ptosi,
rdunlap, rkagan, rppt, shikemeng, suzuki.poulose, tabba, will,
yuzenghui
On 10.10.24 17:52, Fares Mehanna wrote:
>>> In a series posted a few years ago [1], a proposal was put forward to allow the
>>> kernel to allocate memory local to a mm and thus push it out of reach for
>>> current and future speculation-based cross-process attacks. We still believe
>>> this is a nice thing to have.
>>>
>>> However, in the time passed since that post Linux mm has grown quite a few new
>>> goodies, so we'd like to explore possibilities to implement this functionality
>>> with less effort and churn leveraging the now available facilities.
>>>
>>> An RFC was posted few months back [2] to show the proof of concept and a simple
>>> test driver.
>>>
>>> In this RFC, we're using the same approach of implementing mm-local allocations
>>> piggy-backing on memfd_secret(), using regular user addresses but pinning the
>>> pages and flipping the user/supervisor flag on the respective PTEs to make them
>>> directly accessible from kernel.
>>> In addition to that we are submitting 5 patches to use the secret memory to hide
>>> the vCPU gp-regs and fp-regs on arm64 VHE systems.
>>
>> I'm a bit lost on what exactly we want to achieve. The point where we
>> start flipping user/supervisor flags confuses me :)
>>
>> With secretmem, you'd get memory allocated that
>> (a) Is accessible by user space -- mapped into user space.
>> (b) Is inaccessible by kernel space -- not mapped into the direct map
>> (c) GUP will fail, but copy_from / copy_to user will work.
>>
>>
>> Another way, without secretmem, would be to consider these "secrets"
>> kernel allocations that can be mapped into user space using mmap() of a
>> special fd. That is, they wouldn't have their origin in secretmem, but
>> in KVM as a kernel allocation. It could be achieved by using VM_MIXEDMAP
>> with vm_insert_pages(), manually removing them from the directmap.
>>
>> But, I am not sure who is supposed to access what. Let's explore the
>> requirements. I assume we want:
>>
>> (a) Pages accessible by user space -- mapped into user space.
>> (b) Pages inaccessible by kernel space -- not mapped into the direct map
>> (c) GUP to fail (no direct map).
>> (d) copy_from / copy_to user to fail?
>>
>> And on top of that, some way to access these pages on demand from kernel
>> space? (temporary CPU-local mapping?)
>>
>> Or how would the kernel make use of these allocations?
>>
>> --
>> Cheers,
>>
>> David / dhildenb
>
> Hi David,
Hi Fares!
>
> Thanks for taking a look at the patches!
>
> We're trying to allocate a kernel memory that is accessible to the kernel but
> only when the context of the process is loaded.
>
> So this is a kernel memory that is not needed to operate the kernel itself, it
> is to store & process data on behalf of a process. The requirement for this
> memory is that it would never be touched unless the process is scheduled on this
> core. otherwise any other access will crash the kernel.
>
> So this memory should only be directly readable and writable by the kernel, but
> only when the process context is loaded. The memory shouldn't be readable or
> writable by the owner process at all.
>
> This is basically done by removing those pages from kernel linear address and
> attaching them only in the process mm_struct. So during context switching the
> kernel loses access to the secret memory scheduled out and gain access to the
> new process secret memory.
>
> This generally protects against speculation attacks, and if other process managed
> to trick the kernel to leak data from memory. In this case the kernel will crash
> if it tries to access other processes secret memory.
>
> Since this memory is special in the sense that it is kernel memory but only make
> sense in the term of the owner process, I tried in this patch series to explore
> the possibility of reusing memfd_secret() to allocate this memory in user virtual
> address space, manage it in a VMA, flipping the permissions while keeping the
> control of the mapping exclusively with the kernel.
>
> Right now it is:
> (a) Pages not accessible by user space -- even though they are mapped into user
> space, the PTEs are marked for kernel usage.
Ah, that is the detail I was missing, now I see what you are trying to
achieve, thanks!
It is a bit architecture specific, because ... imagine architectures
that have separate kernel+user space page table hierarchies, and not a
simple PTE flag to change access permissions between kernel/user space.
IIRC s390 is one such architecture that uses separate page tables for
the user-space + kernel-space portions.
> (b) Pages accessible by kernel space -- even though they are not mapped into the
> direct map, the PTEs in uvaddr are marked for kernel usage.
> (c) copy_from / copy_to user won't fail -- because it is in the user range, but
> this can be fixed by allocating specific range in user vaddr to this feature
> and check against this range there.
> (d) The secret memory vaddr is guessable by the owner process -- that can also
> be fixed by allocating bigger chunk of user vaddr for this feature and
> randomly placing the secret memory there.
> (e) Mapping is off-limits to the owner process by marking the VMA as locked,
> sealed and special.
Okay, so in this RFC you are jumping through quite some hoops to have a
kernel allocation unmapped from the direct map but mapped into a
per-process page table only accessible by kernel space. :)
So you really don't want this mapped into user space at all
(consequently, no GUP, no access, no copy_from_user ...). In this RFC
it's mapped but turned inaccessible by flipping the "kernel vs. user"
switch.
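To make that concrete, a minimal sketch of what flipping that switch could
look like, using x86-64 naming purely for illustration; the helper name and
the use of apply_to_page_range() are my assumptions, not the actual code in
this series:

    /* Walk the user-range mapping that backs the secretmem allocation and
     * clear the user bit on each PTE: the pages stay mapped in the per-mm
     * page table but become reachable only at supervisor privilege. */
    static int clear_user_bit(pte_t *ptep, unsigned long addr, void *data)
    {
            struct mm_struct *mm = data;

            set_pte_at(mm, addr, ptep,
                       pte_clear_flags(ptep_get(ptep), _PAGE_USER));
            return 0;
    }

    static int secretmem_make_supervisor(struct mm_struct *mm,
                                         unsigned long start, unsigned long len)
    {
            return apply_to_page_range(mm, start, len, clear_user_bit, mm);
    }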
>
> Other alternative (that was implemented in the first submission) is to track those
> allocations in a non-shared kernel PGD per process, then handle creating, forking
> and context-switching this PGD.
That sounds like a better approach. So we would remove the pages from
the shared kernel direct map and map them into a separate kernel-portion
in the per-MM page tables?
Can you envision that would also work with architectures like s390x? I
assume we would not only need the per-MM user space page table
hierarchy, but also a per-MM kernel space page table hierarchy, into
which we also map the common/shared-among-all-processes kernel space
page tables (e.g., directmap).
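Just to spell out what I mean by a per-MM kernel-space hierarchy, a rough
sketch, again with x86-64 naming and purely as an assumption about how it
could be wired up (the slot number and helper name are invented):

    /* At mm creation, share all the common kernel mappings (direct map,
     * vmalloc, ...) with init_mm, but reserve one top-level slot -- assumed
     * unused by the shared kernel mappings -- for this process's mm-local
     * allocations. */
    #define MM_LOCAL_PGD_SLOT   (PTRS_PER_PGD - 1)      /* invented */

    static void mm_local_init_pgd(struct mm_struct *mm)
    {
            memcpy(mm->pgd + KERNEL_PGD_BOUNDARY,
                   init_mm.pgd + KERNEL_PGD_BOUNDARY,
                   KERNEL_PGD_PTRS * sizeof(pgd_t));
            pgd_clear(mm->pgd + MM_LOCAL_PGD_SLOT);
    }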
>
> What I like about the memfd_secret() approach is the simplicity and being arch
> agnostic, what I don't like is the increased attack surface by using VMAs to
> track those allocations.
Yes, but memfd_secret() was really designed for user space to hold
secrets. But I can see how you came to this solution.
>
> I'm thinking of working on a PoC to implement the first approach of using a
> non-shared kernel PGD for secret memory allocations on arm64. This includes
> adding kernel page table per process where all PGDs are shared but one which
> will be used for secret allocations mapping. And handle the fork & context
> switching (TTBR1 switching(?)) correctly for the secret memory PGD.
>
> What do you think? I'd really appreciate opinions and possible ways forward.
Naive question: does arm64 rather resemble the s390x model or the x86-64
model?
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC PATCH 0/7] support for mm-local memory allocations and use it
2024-10-11 12:04 ` David Hildenbrand
@ 2024-10-11 12:36 ` Mediouni, Mohamed
2024-10-11 12:56 ` Mediouni, Mohamed
0 siblings, 1 reply; 24+ messages in thread
From: Mediouni, Mohamed @ 2024-10-11 12:36 UTC (permalink / raw)
To: David Hildenbrand
Cc: Mehanna, Fares, akpm, ardb, arnd, bhelgaas, broonie,
catalin.marinas, james.morse, javierm, jean-philippe, joey.gouly,
kristina.martsenko, kvmarm, linux-arm-kernel, linux-kernel,
linux-mm, mark.rutland, maz, nh-open-source, oliver.upton, ptosi,
rdunlap, Kagan, Roman, rppt, shikemeng, suzuki.poulose, tabba,
will, yuzenghui
> On 11. Oct 2024, at 14:04, David Hildenbrand <david@redhat.com> wrote:
>
> On 10.10.24 17:52, Fares Mehanna wrote:
>>>> In a series posted a few years ago [1], a proposal was put forward to allow the
>>>> kernel to allocate memory local to a mm and thus push it out of reach for
>>>> current and future speculation-based cross-process attacks. We still believe
>>>> this is a nice thing to have.
>>>>
>>>> However, in the time passed since that post Linux mm has grown quite a few new
>>>> goodies, so we'd like to explore possibilities to implement this functionality
>>>> with less effort and churn leveraging the now available facilities.
>>>>
>>>> An RFC was posted few months back [2] to show the proof of concept and a simple
>>>> test driver.
>>>>
>>>> In this RFC, we're using the same approach of implementing mm-local allocations
>>>> piggy-backing on memfd_secret(), using regular user addresses but pinning the
>>>> pages and flipping the user/supervisor flag on the respective PTEs to make them
>>>> directly accessible from kernel.
>>>> In addition to that we are submitting 5 patches to use the secret memory to hide
>>>> the vCPU gp-regs and fp-regs on arm64 VHE systems.
>>>
>>> I'm a bit lost on what exactly we want to achieve. The point where we
>>> start flipping user/supervisor flags confuses me :)
>>>
>>> With secretmem, you'd get memory allocated that
>>> (a) Is accessible by user space -- mapped into user space.
>>> (b) Is inaccessible by kernel space -- not mapped into the direct map
>>> (c) GUP will fail, but copy_from / copy_to user will work.
>>>
>>>
>>> Another way, without secretmem, would be to consider these "secrets"
>>> kernel allocations that can be mapped into user space using mmap() of a
>>> special fd. That is, they wouldn't have their origin in secretmem, but
>>> in KVM as a kernel allocation. It could be achieved by using VM_MIXEDMAP
>>> with vm_insert_pages(), manually removing them from the directmap.
>>>
>>> But, I am not sure who is supposed to access what. Let's explore the
>>> requirements. I assume we want:
>>>
>>> (a) Pages accessible by user space -- mapped into user space.
>>> (b) Pages inaccessible by kernel space -- not mapped into the direct map
>>> (c) GUP to fail (no direct map).
>>> (d) copy_from / copy_to user to fail?
>>>
>>> And on top of that, some way to access these pages on demand from kernel
>>> space? (temporary CPU-local mapping?)
>>>
>>> Or how would the kernel make use of these allocations?
>>>
>>> --
>>> Cheers,
>>>
>>> David / dhildenb
>> Hi David,
>
> Hi Fares!
>
>> Thanks for taking a look at the patches!
>> We're trying to allocate a kernel memory that is accessible to the kernel but
>> only when the context of the process is loaded.
>> So this is a kernel memory that is not needed to operate the kernel itself, it
>> is to store & process data on behalf of a process. The requirement for this
>> memory is that it would never be touched unless the process is scheduled on this
>> core. otherwise any other access will crash the kernel.
>> So this memory should only be directly readable and writable by the kernel, but
>> only when the process context is loaded. The memory shouldn't be readable or
>> writable by the owner process at all.
>> This is basically done by removing those pages from kernel linear address and
>> attaching them only in the process mm_struct. So during context switching the
>> kernel loses access to the secret memory scheduled out and gain access to the
>> new process secret memory.
>> This generally protects against speculation attacks, and if other process managed
>> to trick the kernel to leak data from memory. In this case the kernel will crash
>> if it tries to access other processes secret memory.
>> Since this memory is special in the sense that it is kernel memory but only make
>> sense in the term of the owner process, I tried in this patch series to explore
>> the possibility of reusing memfd_secret() to allocate this memory in user virtual
>> address space, manage it in a VMA, flipping the permissions while keeping the
>> control of the mapping exclusively with the kernel.
>> Right now it is:
>> (a) Pages not accessible by user space -- even though they are mapped into user
>> space, the PTEs are marked for kernel usage.
>
> Ah, that is the detail I was missing, now I see what you are trying to achieve, thanks!
>
> It is a bit architecture specific, because ... imagine architectures that have separate kernel+user space page table hierarchies, and not a simple PTE flag to change access permissions between kernel/user space.
>
> IIRC s390 is one such architecture that uses separate page tables for the user-space + kernel-space portions.
>
>> (b) Pages accessible by kernel space -- even though they are not mapped into the
>> direct map, the PTEs in uvaddr are marked for kernel usage.
>> (c) copy_from / copy_to user won't fail -- because it is in the user range, but
>> this can be fixed by allocating specific range in user vaddr to this feature
>> and check against this range there.
>> (d) The secret memory vaddr is guessable by the owner process -- that can also
>> be fixed by allocating bigger chunk of user vaddr for this feature and
>> randomly placing the secret memory there.
>> (e) Mapping is off-limits to the owner process by marking the VMA as locked,
>> sealed and special.
>
> Okay, so in this RFC you are jumping through quite some hoops to have a kernel allocation unmapped from the direct map but mapped into a per-process page table only accessible by kernel space. :)
>
> So you really don't want this mapped into user space at all (consequently, no GUP, no access, no copy_from_user ...). In this RFC it's mapped but turned inaccessible by flipping the "kernel vs. user" switch.
>
>> Other alternative (that was implemented in the first submission) is to track those
>> allocations in a non-shared kernel PGD per process, then handle creating, forking
>> and context-switching this PGD.
>
> That sounds like a better approach. So we would remove the pages from the shared kernel direct map and map them into a separate kernel-portion in the per-MM page tables?
>
> Can you envision that would also work with architectures like s390x? I assume we would not only need the per-MM user space page table hierarchy, but also a per-MM kernel space page table hierarchy, into which we also map the common/shared-among-all-processes kernel space page tables (e.g., directmap).
Yes, that’s also applicable to arm64. There’s currently no separate per-mm user space page hierarchy there.
>> What I like about the memfd_secret() approach is the simplicity and being arch
>> agnostic, what I don't like is the increased attack surface by using VMAs to
>> track those allocations.
>
> Yes, but memfd_secret() was really design for user space to hold secrets. But I can see how you came to this solution.
>
>> I'm thinking of working on a PoC to implement the first approach of using a
>> non-shared kernel PGD for secret memory allocations on arm64. This includes
>> adding kernel page table per process where all PGDs are shared but one which
>> will be used for secret allocations mapping. And handle the fork & context
>> switching (TTBR1 switching(?)) correctly for the secret memory PGD.
>> What do you think? I'd really appreciate opinions and possible ways forward.
>
> Naive question: does arm64 rather resemble the s390x model or the x86-64 model?
arm64 has separate page tables for kernel and user-mode. Except for the KPTI case, the kernel page tables aren’t swapped per-process and stay the same all the time.
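(For completeness: on arm64 the user tables live in TTBR0_EL1 and the kernel
tables in TTBR1_EL1, and only TTBR0 gets reprogrammed per mm on a context
switch, which is what makes the kernel half effectively global today. A tiny
illustration only, not code from this series:)

    #include <linux/printk.h>
    #include <asm/sysreg.h>

    /* Illustration: the per-mm user tables vs. the shared kernel tables. */
    static void show_ttbrs(void)
    {
            unsigned long user_ttbr   = read_sysreg(ttbr0_el1); /* per mm */
            unsigned long kernel_ttbr = read_sysreg(ttbr1_el1); /* global */

            pr_info("ttbr0=%lx ttbr1=%lx\n", user_ttbr, kernel_ttbr);
    }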
Thanks,
-Mohamed
> --
> Cheers,
>
> David / dhildenb
>
Amazon Web Services Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 257764 B
Sitz: Berlin
Ust-ID: DE 365 538 597
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC PATCH 0/7] support for mm-local memory allocations and use it
2024-10-11 12:36 ` Mediouni, Mohamed
@ 2024-10-11 12:56 ` Mediouni, Mohamed
2024-10-11 12:58 ` David Hildenbrand
0 siblings, 1 reply; 24+ messages in thread
From: Mediouni, Mohamed @ 2024-10-11 12:56 UTC (permalink / raw)
To: David Hildenbrand
Cc: Mehanna, Fares, akpm, ardb, arnd, bhelgaas, broonie,
catalin.marinas, james.morse, javierm, jean-philippe, joey.gouly,
kristina.martsenko, kvmarm, linux-arm-kernel, linux-kernel,
linux-mm, mark.rutland, maz, nh-open-source, oliver.upton, ptosi,
rdunlap, Kagan, Roman, rppt, shikemeng, suzuki.poulose, tabba,
will, yuzenghui
> On 11. Oct 2024, at 14:36, Mediouni, Mohamed <mediou@amazon.de> wrote:
>
>
>
>> On 11. Oct 2024, at 14:04, David Hildenbrand <david@redhat.com> wrote:
>>
>> On 10.10.24 17:52, Fares Mehanna wrote:
>>>>> In a series posted a few years ago [1], a proposal was put forward to allow the
>>>>> kernel to allocate memory local to a mm and thus push it out of reach for
>>>>> current and future speculation-based cross-process attacks. We still believe
>>>>> this is a nice thing to have.
>>>>>
>>>>> However, in the time passed since that post Linux mm has grown quite a few new
>>>>> goodies, so we'd like to explore possibilities to implement this functionality
>>>>> with less effort and churn leveraging the now available facilities.
>>>>>
>>>>> An RFC was posted few months back [2] to show the proof of concept and a simple
>>>>> test driver.
>>>>>
>>>>> In this RFC, we're using the same approach of implementing mm-local allocations
>>>>> piggy-backing on memfd_secret(), using regular user addresses but pinning the
>>>>> pages and flipping the user/supervisor flag on the respective PTEs to make them
>>>>> directly accessible from kernel.
>>>>> In addition to that we are submitting 5 patches to use the secret memory to hide
>>>>> the vCPU gp-regs and fp-regs on arm64 VHE systems.
>>>>
>>>> I'm a bit lost on what exactly we want to achieve. The point where we
>>>> start flipping user/supervisor flags confuses me :)
>>>>
>>>> With secretmem, you'd get memory allocated that
>>>> (a) Is accessible by user space -- mapped into user space.
>>>> (b) Is inaccessible by kernel space -- not mapped into the direct map
>>>> (c) GUP will fail, but copy_from / copy_to user will work.
>>>>
>>>>
>>>> Another way, without secretmem, would be to consider these "secrets"
>>>> kernel allocations that can be mapped into user space using mmap() of a
>>>> special fd. That is, they wouldn't have their origin in secretmem, but
>>>> in KVM as a kernel allocation. It could be achieved by using VM_MIXEDMAP
>>>> with vm_insert_pages(), manually removing them from the directmap.
>>>>
>>>> But, I am not sure who is supposed to access what. Let's explore the
>>>> requirements. I assume we want:
>>>>
>>>> (a) Pages accessible by user space -- mapped into user space.
>>>> (b) Pages inaccessible by kernel space -- not mapped into the direct map
>>>> (c) GUP to fail (no direct map).
>>>> (d) copy_from / copy_to user to fail?
>>>>
>>>> And on top of that, some way to access these pages on demand from kernel
>>>> space? (temporary CPU-local mapping?)
>>>>
>>>> Or how would the kernel make use of these allocations?
>>>>
>>>> --
>>>> Cheers,
>>>>
>>>> David / dhildenb
>>> Hi David,
>>
>> Hi Fares!
>>
>>> Thanks for taking a look at the patches!
>>> We're trying to allocate a kernel memory that is accessible to the kernel but
>>> only when the context of the process is loaded.
>>> So this is a kernel memory that is not needed to operate the kernel itself, it
>>> is to store & process data on behalf of a process. The requirement for this
>>> memory is that it would never be touched unless the process is scheduled on this
>>> core. otherwise any other access will crash the kernel.
>>> So this memory should only be directly readable and writable by the kernel, but
>>> only when the process context is loaded. The memory shouldn't be readable or
>>> writable by the owner process at all.
>>> This is basically done by removing those pages from kernel linear address and
>>> attaching them only in the process mm_struct. So during context switching the
>>> kernel loses access to the secret memory scheduled out and gain access to the
>>> new process secret memory.
>>> This generally protects against speculation attacks, and if other process managed
>>> to trick the kernel to leak data from memory. In this case the kernel will crash
>>> if it tries to access other processes secret memory.
>>> Since this memory is special in the sense that it is kernel memory but only make
>>> sense in the term of the owner process, I tried in this patch series to explore
>>> the possibility of reusing memfd_secret() to allocate this memory in user virtual
>>> address space, manage it in a VMA, flipping the permissions while keeping the
>>> control of the mapping exclusively with the kernel.
>>> Right now it is:
>>> (a) Pages not accessible by user space -- even though they are mapped into user
>>> space, the PTEs are marked for kernel usage.
>>
>> Ah, that is the detail I was missing, now I see what you are trying to achieve, thanks!
>>
>> It is a bit architecture specific, because ... imagine architectures that have separate kernel+user space page table hierarchies, and not a simple PTE flag to change access permissions between kernel/user space.
>>
>> IIRC s390 is one such architecture that uses separate page tables for the user-space + kernel-space portions.
>>
>>> (b) Pages accessible by kernel space -- even though they are not mapped into the
>>> direct map, the PTEs in uvaddr are marked for kernel usage.
>>> (c) copy_from / copy_to user won't fail -- because it is in the user range, but
>>> this can be fixed by allocating specific range in user vaddr to this feature
>>> and check against this range there.
>>> (d) The secret memory vaddr is guessable by the owner process -- that can also
>>> be fixed by allocating bigger chunk of user vaddr for this feature and
>>> randomly placing the secret memory there.
>>> (e) Mapping is off-limits to the owner process by marking the VMA as locked,
>>> sealed and special.
>>
>> Okay, so in this RFC you are jumping through quite some hoops to have a kernel allocation unmapped from the direct map but mapped into a per-process page table only accessible by kernel space. :)
>>
>> So you really don't want this mapped into user space at all (consequently, no GUP, no access, no copy_from_user ...). In this RFC it's mapped but turned inaccessible by flipping the "kernel vs. user" switch.
>>
>>> Other alternative (that was implemented in the first submission) is to track those
>>> allocations in a non-shared kernel PGD per process, then handle creating, forking
>>> and context-switching this PGD.
>>
>> That sounds like a better approach. So we would remove the pages from the shared kernel direct map and map them into a separate kernel-portion in the per-MM page tables?
>>
>> Can you envision that would also work with architectures like s390x? I assume we would not only need the per-MM user space page table hierarchy, but also a per-MM kernel space page table hierarchy, into which we also map the common/shared-among-all-processes kernel space page tables (e.g., directmap).
> Yes, that’s also applicable to arm64. There’s currently no separate per-mm user space page hierarchy there.
typo, read kernel
Thanks,
-Mohamed
>>> What I like about the memfd_secret() approach is the simplicity and being arch
>>> agnostic, what I don't like is the increased attack surface by using VMAs to
>>> track those allocations.
>>
>> Yes, but memfd_secret() was really design for user space to hold secrets. But I can see how you came to this solution.
>>
>>> I'm thinking of working on a PoC to implement the first approach of using a
>>> non-shared kernel PGD for secret memory allocations on arm64. This includes
>>> adding kernel page table per process where all PGDs are shared but one which
>>> will be used for secret allocations mapping. And handle the fork & context
>>> switching (TTBR1 switching(?)) correctly for the secret memory PGD.
>>> What do you think? I'd really appreciate opinions and possible ways forward.
>>
>> Naive question: does arm64 rather resemble the s390x model or the x86-64 model?
> arm64 has separate page tables for kernel and user-mode. Except for the KPTI case, the kernel page tables aren’t swapped per-process and stay the same all the time.
>
> Thanks,
> -Mohamed
>> --
>> Cheers,
>>
>> David / dhildenb
>>
>
Amazon Web Services Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 257764 B
Sitz: Berlin
Ust-ID: DE 365 538 597
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC PATCH 0/7] support for mm-local memory allocations and use it
2024-10-11 12:56 ` Mediouni, Mohamed
@ 2024-10-11 12:58 ` David Hildenbrand
2024-10-11 14:25 ` Fares Mehanna
0 siblings, 1 reply; 24+ messages in thread
From: David Hildenbrand @ 2024-10-11 12:58 UTC (permalink / raw)
To: Mediouni, Mohamed
Cc: Mehanna, Fares, akpm, ardb, arnd, bhelgaas, broonie,
catalin.marinas, james.morse, javierm, jean-philippe, joey.gouly,
kristina.martsenko, kvmarm, linux-arm-kernel, linux-kernel,
linux-mm, mark.rutland, maz, nh-open-source, oliver.upton, ptosi,
rdunlap, Kagan, Roman, rppt, shikemeng, suzuki.poulose, tabba,
will, yuzenghui
On 11.10.24 14:56, Mediouni, Mohamed wrote:
>
>
>> On 11. Oct 2024, at 14:36, Mediouni, Mohamed <mediou@amazon.de> wrote:
>>
>>
>>
>>> On 11. Oct 2024, at 14:04, David Hildenbrand <david@redhat.com> wrote:
>>>
>>> On 10.10.24 17:52, Fares Mehanna wrote:
>>>>>> In a series posted a few years ago [1], a proposal was put forward to allow the
>>>>>> kernel to allocate memory local to a mm and thus push it out of reach for
>>>>>> current and future speculation-based cross-process attacks. We still believe
>>>>>> this is a nice thing to have.
>>>>>>
>>>>>> However, in the time passed since that post Linux mm has grown quite a few new
>>>>>> goodies, so we'd like to explore possibilities to implement this functionality
>>>>>> with less effort and churn leveraging the now available facilities.
>>>>>>
>>>>>> An RFC was posted few months back [2] to show the proof of concept and a simple
>>>>>> test driver.
>>>>>>
>>>>>> In this RFC, we're using the same approach of implementing mm-local allocations
>>>>>> piggy-backing on memfd_secret(), using regular user addresses but pinning the
>>>>>> pages and flipping the user/supervisor flag on the respective PTEs to make them
>>>>>> directly accessible from kernel.
>>>>>> In addition to that we are submitting 5 patches to use the secret memory to hide
>>>>>> the vCPU gp-regs and fp-regs on arm64 VHE systems.
>>>>>
>>>>> I'm a bit lost on what exactly we want to achieve. The point where we
>>>>> start flipping user/supervisor flags confuses me :)
>>>>>
>>>>> With secretmem, you'd get memory allocated that
>>>>> (a) Is accessible by user space -- mapped into user space.
>>>>> (b) Is inaccessible by kernel space -- not mapped into the direct map
>>>>> (c) GUP will fail, but copy_from / copy_to user will work.
>>>>>
>>>>>
>>>>> Another way, without secretmem, would be to consider these "secrets"
>>>>> kernel allocations that can be mapped into user space using mmap() of a
>>>>> special fd. That is, they wouldn't have their origin in secretmem, but
>>>>> in KVM as a kernel allocation. It could be achieved by using VM_MIXEDMAP
>>>>> with vm_insert_pages(), manually removing them from the directmap.
>>>>>
>>>>> But, I am not sure who is supposed to access what. Let's explore the
>>>>> requirements. I assume we want:
>>>>>
>>>>> (a) Pages accessible by user space -- mapped into user space.
>>>>> (b) Pages inaccessible by kernel space -- not mapped into the direct map
>>>>> (c) GUP to fail (no direct map).
>>>>> (d) copy_from / copy_to user to fail?
>>>>>
>>>>> And on top of that, some way to access these pages on demand from kernel
>>>>> space? (temporary CPU-local mapping?)
>>>>>
>>>>> Or how would the kernel make use of these allocations?
>>>>>
>>>>> --
>>>>> Cheers,
>>>>>
>>>>> David / dhildenb
>>>> Hi David,
>>>
>>> Hi Fares!
>>>
>>>> Thanks for taking a look at the patches!
>>>> We're trying to allocate a kernel memory that is accessible to the kernel but
>>>> only when the context of the process is loaded.
>>>> So this is a kernel memory that is not needed to operate the kernel itself, it
>>>> is to store & process data on behalf of a process. The requirement for this
>>>> memory is that it would never be touched unless the process is scheduled on this
>>>> core. otherwise any other access will crash the kernel.
>>>> So this memory should only be directly readable and writable by the kernel, but
>>>> only when the process context is loaded. The memory shouldn't be readable or
>>>> writable by the owner process at all.
>>>> This is basically done by removing those pages from kernel linear address and
>>>> attaching them only in the process mm_struct. So during context switching the
>>>> kernel loses access to the secret memory scheduled out and gain access to the
>>>> new process secret memory.
>>>> This generally protects against speculation attacks, and if other process managed
>>>> to trick the kernel to leak data from memory. In this case the kernel will crash
>>>> if it tries to access other processes secret memory.
>>>> Since this memory is special in the sense that it is kernel memory but only make
>>>> sense in the term of the owner process, I tried in this patch series to explore
>>>> the possibility of reusing memfd_secret() to allocate this memory in user virtual
>>>> address space, manage it in a VMA, flipping the permissions while keeping the
>>>> control of the mapping exclusively with the kernel.
>>>> Right now it is:
>>>> (a) Pages not accessible by user space -- even though they are mapped into user
>>>> space, the PTEs are marked for kernel usage.
>>>
>>> Ah, that is the detail I was missing, now I see what you are trying to achieve, thanks!
>>>
>>> It is a bit architecture specific, because ... imagine architectures that have separate kernel+user space page table hierarchies, and not a simple PTE flag to change access permissions between kernel/user space.
>>>
>>> IIRC s390 is one such architecture that uses separate page tables for the user-space + kernel-space portions.
>>>
>>>> (b) Pages accessible by kernel space -- even though they are not mapped into the
>>>> direct map, the PTEs in uvaddr are marked for kernel usage.
>>>> (c) copy_from / copy_to user won't fail -- because it is in the user range, but
>>>> this can be fixed by allocating specific range in user vaddr to this feature
>>>> and check against this range there.
>>>> (d) The secret memory vaddr is guessable by the owner process -- that can also
>>>> be fixed by allocating bigger chunk of user vaddr for this feature and
>>>> randomly placing the secret memory there.
>>>> (e) Mapping is off-limits to the owner process by marking the VMA as locked,
>>>> sealed and special.
>>>
>>> Okay, so in this RFC you are jumping through quite some hoops to have a kernel allocation unmapped from the direct map but mapped into a per-process page table only accessible by kernel space. :)
>>>
>>> So you really don't want this mapped into user space at all (consequently, no GUP, no access, no copy_from_user ...). In this RFC it's mapped but turned inaccessible by flipping the "kernel vs. user" switch.
>>>
>>>> Other alternative (that was implemented in the first submission) is to track those
>>>> allocations in a non-shared kernel PGD per process, then handle creating, forking
>>>> and context-switching this PGD.
>>>
>>> That sounds like a better approach. So we would remove the pages from the shared kernel direct map and map them into a separate kernel-portion in the per-MM page tables?
>>>
>>> Can you envision that would also work with architectures like s390x? I assume we would not only need the per-MM user space page table hierarchy, but also a per-MM kernel space page table hierarchy, into which we also map the common/shared-among-all-processes kernel space page tables (e.g., directmap).
>> Yes, that’s also applicable to arm64. There’s currently no separate per-mm user space page hierarchy there.
> typo, read kernel
Okay, thanks. So going in that direction makes more sense.
I do wonder if we really have to deal with fork() ... if the primary
users don't really have meaning in the forked child (e.g., just like
fork() with KVM IIRC) we might just get away by "losing" these
allocations in the child process.
Happy to learn why fork() must be supported.
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC PATCH 0/7] support for mm-local memory allocations and use it
2024-10-11 12:58 ` David Hildenbrand
@ 2024-10-11 14:25 ` Fares Mehanna
2024-10-18 18:52 ` David Hildenbrand
0 siblings, 1 reply; 24+ messages in thread
From: Fares Mehanna @ 2024-10-11 14:25 UTC (permalink / raw)
To: david
Cc: akpm, ardb, arnd, bhelgaas, broonie, catalin.marinas, faresx,
james.morse, javierm, jean-philippe, joey.gouly,
kristina.martsenko, kvmarm, linux-arm-kernel, linux-kernel,
linux-mm, mark.rutland, maz, mediou, nh-open-source,
oliver.upton, ptosi, rdunlap, rkagan, rppt, shikemeng,
suzuki.poulose, tabba, will, yuzenghui
>>
>>
>>> On 11. Oct 2024, at 14:36, Mediouni, Mohamed <mediou@amazon.de> wrote:
>>>
>>>
>>>
>>>> On 11. Oct 2024, at 14:04, David Hildenbrand <david@redhat.com> wrote:
>>>>
>>>> On 10.10.24 17:52, Fares Mehanna wrote:
>>>>>>> In a series posted a few years ago [1], a proposal was put forward to allow the
>>>>>>> kernel to allocate memory local to a mm and thus push it out of reach for
>>>>>>> current and future speculation-based cross-process attacks. We still believe
>>>>>>> this is a nice thing to have.
>>>>>>>
>>>>>>> However, in the time passed since that post Linux mm has grown quite a few new
>>>>>>> goodies, so we'd like to explore possibilities to implement this functionality
>>>>>>> with less effort and churn leveraging the now available facilities.
>>>>>>>
>>>>>>> An RFC was posted few months back [2] to show the proof of concept and a simple
>>>>>>> test driver.
>>>>>>>
>>>>>>> In this RFC, we're using the same approach of implementing mm-local allocations
>>>>>>> piggy-backing on memfd_secret(), using regular user addresses but pinning the
>>>>>>> pages and flipping the user/supervisor flag on the respective PTEs to make them
>>>>>>> directly accessible from kernel.
>>>>>>> In addition to that we are submitting 5 patches to use the secret memory to hide
>>>>>>> the vCPU gp-regs and fp-regs on arm64 VHE systems.
>>>>>>
>>>>>> I'm a bit lost on what exactly we want to achieve. The point where we
>>>>>> start flipping user/supervisor flags confuses me :)
>>>>>>
>>>>>> With secretmem, you'd get memory allocated that
>>>>>> (a) Is accessible by user space -- mapped into user space.
>>>>>> (b) Is inaccessible by kernel space -- not mapped into the direct map
>>>>>> (c) GUP will fail, but copy_from / copy_to user will work.
>>>>>>
>>>>>>
>>>>>> Another way, without secretmem, would be to consider these "secrets"
>>>>>> kernel allocations that can be mapped into user space using mmap() of a
>>>>>> special fd. That is, they wouldn't have their origin in secretmem, but
>>>>>> in KVM as a kernel allocation. It could be achieved by using VM_MIXEDMAP
>>>>>> with vm_insert_pages(), manually removing them from the directmap.
>>>>>>
>>>>>> But, I am not sure who is supposed to access what. Let's explore the
>>>>>> requirements. I assume we want:
>>>>>>
>>>>>> (a) Pages accessible by user space -- mapped into user space.
>>>>>> (b) Pages inaccessible by kernel space -- not mapped into the direct map
>>>>>> (c) GUP to fail (no direct map).
>>>>>> (d) copy_from / copy_to user to fail?
>>>>>>
>>>>>> And on top of that, some way to access these pages on demand from kernel
>>>>>> space? (temporary CPU-local mapping?)
>>>>>>
>>>>>> Or how would the kernel make use of these allocations?
>>>>>>
>>>>>> --
>>>>>> Cheers,
>>>>>>
>>>>>> David / dhildenb
>>>>> Hi David,
>>>>
>>>> Hi Fares!
>>>>
>>>>> Thanks for taking a look at the patches!
>>>>> We're trying to allocate a kernel memory that is accessible to the kernel but
>>>>> only when the context of the process is loaded.
>>>>> So this is a kernel memory that is not needed to operate the kernel itself, it
>>>>> is to store & process data on behalf of a process. The requirement for this
>>>>> memory is that it would never be touched unless the process is scheduled on this
>>>>> core. otherwise any other access will crash the kernel.
>>>>> So this memory should only be directly readable and writable by the kernel, but
>>>>> only when the process context is loaded. The memory shouldn't be readable or
>>>>> writable by the owner process at all.
>>>>> This is basically done by removing those pages from kernel linear address and
>>>>> attaching them only in the process mm_struct. So during context switching the
>>>>> kernel loses access to the secret memory scheduled out and gain access to the
>>>>> new process secret memory.
>>>>> This generally protects against speculation attacks, and if other process managed
>>>>> to trick the kernel to leak data from memory. In this case the kernel will crash
>>>>> if it tries to access other processes secret memory.
>>>>> Since this memory is special in the sense that it is kernel memory but only make
>>>>> sense in the term of the owner process, I tried in this patch series to explore
>>>>> the possibility of reusing memfd_secret() to allocate this memory in user virtual
>>>>> address space, manage it in a VMA, flipping the permissions while keeping the
>>>>> control of the mapping exclusively with the kernel.
>>>>> Right now it is:
>>>>> (a) Pages not accessible by user space -- even though they are mapped into user
>>>>> space, the PTEs are marked for kernel usage.
>>>>
>>>> Ah, that is the detail I was missing, now I see what you are trying to achieve, thanks!
>>>>
>>>> It is a bit architecture specific, because ... imagine architectures that have separate kernel+user space page table hierarchies, and not a simple PTE flag
> to change access permissions between kernel/user space.
>>>>
>>>> IIRC s390 is one such architecture that uses separate page tables for the user-space + kernel-space portions.
>>>>
>>>>> (b) Pages accessible by kernel space -- even though they are not mapped into the
>>>>> direct map, the PTEs in uvaddr are marked for kernel usage.
>>>>> (c) copy_from / copy_to user won't fail -- because it is in the user range, but
>>>>> this can be fixed by allocating specific range in user vaddr to this feature
>>>>> and check against this range there.
>>>>> (d) The secret memory vaddr is guessable by the owner process -- that can also
>>>>> be fixed by allocating bigger chunk of user vaddr for this feature and
>>>>> randomly placing the secret memory there.
>>>>> (e) Mapping is off-limits to the owner process by marking the VMA as locked,
>>>>> sealed and special.
>>>>
>>>> Okay, so in this RFC you are jumping through quite some hoops to have a kernel allocation unmapped from the direct map but mapped into a per-process page
> table only accessible by kernel space. :)
>>>>
>>>> So you really don't want this mapped into user space at all (consequently, no GUP, no access, no copy_from_user ...). In this RFC it's mapped but turned
> inaccessible by flipping the "kernel vs. user" switch.
>>>>
>>>>> Other alternative (that was implemented in the first submission) is to track those
>>>>> allocations in a non-shared kernel PGD per process, then handle creating, forking
>>>>> and context-switching this PGD.
>>>>
>>>> That sounds like a better approach. So we would remove the pages from the shared kernel direct map and map them into a separate kernel-portion in the per-MM
> page tables?
>>>>
>>>> Can you envision that would also work with architectures like s390x? I assume we would not only need the per-MM user space page table hierarchy, but also a
> per-MM kernel space page table hierarchy, into which we also map the common/shared-among-all-processes kernel space page tables (e.g., directmap).
>>> Yes, that’s also applicable to arm64. There’s currently no separate per-mm user space page hierarchy there.
>> typo, read kernel
>
>
> Okay, thanks. So going into that direction makes more sense.
>
> I do wonder if we really have to deal with fork() ... if the primary
> users don't really have meaning in the forked child (e.g., just like
> fork() with KVM IIRC) we might just get away by "losing" these
> allocations in the child process.
>
> Happy to learn why fork() must be supported.
It really depends on the use cases of the kernel secret allocation, but here is
a troubling scenario that comes to mind:
1. Process A had a resource X.
2. The kernel decided to keep some data related to resource X in process A's
secret memory.
3. Process A decided to fork, so now process B shares resource X.
4. Process B started using resource X. <-- This will crash the kernel, as the
kernel page table used in process B has no mapping for the secret memory
backing resource X.
I haven't tried to trigger this crash myself though.
I haven't thought about this issue in depth yet, but I need to, because
duplicating the secret memory mappings in the newly forked process is easy (to
give the kernel access to the secret memory), while tearing them down across all
forked processes is a bit complicated (to clean up stale mappings in parent and
child processes). Right now, tearing down the mapping only happens on the
mm_struct which allocated the secret memory.
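To illustrate the asymmetry (sketch only, all names invented -- this is not
the code in this series): the teardown path can easily recognize the
allocating mm, but has no cheap way to reach copies of the mapping that
fork() would have created in children:

    struct mm_local_area {                      /* invented for illustration */
            struct mm_struct *owner_mm;         /* mm that did the allocation */
            /* pinned pages, length, ... */
    };

    static void mm_local_free_pages(struct mm_local_area *area); /* unpin +
                                                          restore direct map */

    static void mm_local_vma_close(struct vm_area_struct *vma)
    {
            struct mm_local_area *area = vma->vm_private_data;

            /* Only the allocating mm tears the backing pages down ... */
            if (vma->vm_mm == area->owner_mm)
                    mm_local_free_pages(area);
            /* ... a duplicated VMA in a forked child would be missed here. */
    }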
Thanks!
Fares.
Amazon Web Services Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 257764 B
Sitz: Berlin
Ust-ID: DE 365 538 597
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC PATCH 0/7] support for mm-local memory allocations and use it
2024-10-11 14:25 ` Fares Mehanna
@ 2024-10-18 18:52 ` David Hildenbrand
2024-10-18 19:02 ` David Hildenbrand
0 siblings, 1 reply; 24+ messages in thread
From: David Hildenbrand @ 2024-10-18 18:52 UTC (permalink / raw)
To: Fares Mehanna
Cc: akpm, ardb, arnd, bhelgaas, broonie, catalin.marinas,
james.morse, javierm, jean-philippe, joey.gouly,
kristina.martsenko, kvmarm, linux-arm-kernel, linux-kernel,
linux-mm, mark.rutland, maz, mediou, nh-open-source,
oliver.upton, ptosi, rdunlap, rkagan, rppt, shikemeng,
suzuki.poulose, tabba, will, yuzenghui
On 11.10.24 16:25, Fares Mehanna wrote:
>>>
>>>
>>>> On 11. Oct 2024, at 14:36, Mediouni, Mohamed <mediou@amazon.de> wrote:
>>>>
>>>>
>>>>
>>>>> On 11. Oct 2024, at 14:04, David Hildenbrand <david@redhat.com> wrote:
>>>>>
>>>>> On 10.10.24 17:52, Fares Mehanna wrote:
>>>>>>>> In a series posted a few years ago [1], a proposal was put forward to allow the
>>>>>>>> kernel to allocate memory local to a mm and thus push it out of reach for
>>>>>>>> current and future speculation-based cross-process attacks. We still believe
>>>>>>>> this is a nice thing to have.
>>>>>>>>
>>>>>>>> However, in the time passed since that post Linux mm has grown quite a few new
>>>>>>>> goodies, so we'd like to explore possibilities to implement this functionality
>>>>>>>> with less effort and churn leveraging the now available facilities.
>>>>>>>>
>>>>>>>> An RFC was posted few months back [2] to show the proof of concept and a simple
>>>>>>>> test driver.
>>>>>>>>
>>>>>>>> In this RFC, we're using the same approach of implementing mm-local allocations
>>>>>>>> piggy-backing on memfd_secret(), using regular user addresses but pinning the
>>>>>>>> pages and flipping the user/supervisor flag on the respective PTEs to make them
>>>>>>>> directly accessible from kernel.
>>>>>>>> In addition to that we are submitting 5 patches to use the secret memory to hide
>>>>>>>> the vCPU gp-regs and fp-regs on arm64 VHE systems.
>>>>>>>
>>>>>>> I'm a bit lost on what exactly we want to achieve. The point where we
>>>>>>> start flipping user/supervisor flags confuses me :)
>>>>>>>
>>>>>>> With secretmem, you'd get memory allocated that
>>>>>>> (a) Is accessible by user space -- mapped into user space.
>>>>>>> (b) Is inaccessible by kernel space -- not mapped into the direct map
>>>>>>> (c) GUP will fail, but copy_from / copy_to user will work.
>>>>>>>
>>>>>>>
>>>>>>> Another way, without secretmem, would be to consider these "secrets"
>>>>>>> kernel allocations that can be mapped into user space using mmap() of a
>>>>>>> special fd. That is, they wouldn't have their origin in secretmem, but
>>>>>>> in KVM as a kernel allocation. It could be achieved by using VM_MIXEDMAP
>>>>>>> with vm_insert_pages(), manually removing them from the directmap.
>>>>>>>
>>>>>>> But, I am not sure who is supposed to access what. Let's explore the
>>>>>>> requirements. I assume we want:
>>>>>>>
>>>>>>> (a) Pages accessible by user space -- mapped into user space.
>>>>>>> (b) Pages inaccessible by kernel space -- not mapped into the direct map
>>>>>>> (c) GUP to fail (no direct map).
>>>>>>> (d) copy_from / copy_to user to fail?
>>>>>>>
>>>>>>> And on top of that, some way to access these pages on demand from kernel
>>>>>>> space? (temporary CPU-local mapping?)
>>>>>>>
>>>>>>> Or how would the kernel make use of these allocations?
>>>>>>>
>>>>>>> --
>>>>>>> Cheers,
>>>>>>>
>>>>>>> David / dhildenb
>>>>>> Hi David,
>>>>>
>>>>> Hi Fares!
>>>>>
>>>>>> Thanks for taking a look at the patches!
>>>>>> We're trying to allocate a kernel memory that is accessible to the kernel but
>>>>>> only when the context of the process is loaded.
>>>>>> So this is a kernel memory that is not needed to operate the kernel itself, it
>>>>>> is to store & process data on behalf of a process. The requirement for this
>>>>>> memory is that it would never be touched unless the process is scheduled on this
>>>>>> core. otherwise any other access will crash the kernel.
>>>>>> So this memory should only be directly readable and writable by the kernel, but
>>>>>> only when the process context is loaded. The memory shouldn't be readable or
>>>>>> writable by the owner process at all.
>>>>>> This is basically done by removing those pages from kernel linear address and
>>>>>> attaching them only in the process mm_struct. So during context switching the
>>>>>> kernel loses access to the secret memory scheduled out and gain access to the
>>>>>> new process secret memory.
>>>>>> This generally protects against speculation attacks, and if other process managed
>>>>>> to trick the kernel to leak data from memory. In this case the kernel will crash
>>>>>> if it tries to access other processes secret memory.
>>>>>> Since this memory is special in the sense that it is kernel memory but only make
>>>>>> sense in the term of the owner process, I tried in this patch series to explore
>>>>>> the possibility of reusing memfd_secret() to allocate this memory in user virtual
>>>>>> address space, manage it in a VMA, flipping the permissions while keeping the
>>>>>> control of the mapping exclusively with the kernel.
>>>>>> Right now it is:
>>>>>> (a) Pages not accessible by user space -- even though they are mapped into user
>>>>>> space, the PTEs are marked for kernel usage.
>>>>>
>>>>> Ah, that is the detail I was missing, now I see what you are trying to achieve, thanks!
>>>>>
>>>>> It is a bit architecture specific, because ... imagine architectures that have separate kernel+user space page table hierarchies, and not a simple PTE flag
>> to change access permissions between kernel/user space.
>>>>>
>>>>> IIRC s390 is one such architecture that uses separate page tables for the user-space + kernel-space portions.
>>>>>
>>>>>> (b) Pages accessible by kernel space -- even though they are not mapped into the
>>>>>> direct map, the PTEs in uvaddr are marked for kernel usage.
>>>>>> (c) copy_from / copy_to user won't fail -- because it is in the user range, but
>>>>>> this can be fixed by allocating specific range in user vaddr to this feature
>>>>>> and check against this range there.
>>>>>> (d) The secret memory vaddr is guessable by the owner process -- that can also
>>>>>> be fixed by allocating bigger chunk of user vaddr for this feature and
>>>>>> randomly placing the secret memory there.
>>>>>> (e) Mapping is off-limits to the owner process by marking the VMA as locked,
>>>>>> sealed and special.
>>>>>
>>>>> Okay, so in this RFC you are jumping through quite some hoops to have a kernel allocation unmapped from the direct map but mapped into a per-process page
>> table only accessible by kernel space. :)
>>>>>
>>>>> So you really don't want this mapped into user space at all (consequently, no GUP, no access, no copy_from_user ...). In this RFC it's mapped but turned
>> inaccessible by flipping the "kernel vs. user" switch.
>>>>>
>>>>>> Other alternative (that was implemented in the first submission) is to track those
>>>>>> allocations in a non-shared kernel PGD per process, then handle creating, forking
>>>>>> and context-switching this PGD.
>>>>>
>>>>> That sounds like a better approach. So we would remove the pages from the shared kernel direct map and map them into a separate kernel-portion in the per-MM
>> page tables?
>>>>>
>>>>> Can you envision that would also work with architectures like s390x? I assume we would not only need the per-MM user space page table hierarchy, but also a
>> per-MM kernel space page table hierarchy, into which we also map the common/shared-among-all-processes kernel space page tables (e.g., directmap).
>>>> Yes, that’s also applicable to arm64. There’s currently no separate per-mm user space page hierarchy there.
>>> typo, read kernel
>>
>>
>> Okay, thanks. So going into that direction makes more sense.
>>
>> I do wonder if we really have to deal with fork() ... if the primary
>> users don't really have meaning in the forked child (e.g., just like
>> fork() with KVM IIRC) we might just get away by "losing" these
>> allocations in the child process.
>>
>> Happy to learn why fork() must be supported.
>
> It really depends on the use cases of the kernel secret allocation, but in my
> mind a troubling scenario:
> 1. Process A had a resource X.
> 2. Kernel decided to keep some data related to resource X in process A secret
> memory.
> 3. Process A decided to fork, now process B share the resource X.
> 4. Process B started using resource X. <-- This will crash the kernel as the
> used kernel page table on process B has no mapping for the secret memory used
> in resource X.
>
> I haven't tried to trigger this crash myself though.
>
Right, and if we can rule out any users that are supposed to work after
fork(), we can just disregard that in the first version.
I never played with this, but let's assume you make use of these
mm-local allocations in KVM context.
What would happen if you fork() with a KVM fd and try accessing that fd
from the other process using ioctls? I recall that KVM will not be
"duplicated".
What would happen if you send that fd over to a completely different
process and try accessing that fd from the other process using ioctls?
Of course, question being: if you have MM-local allocations in both
cases and there is suddenly a different MM ... assuming that both cases
are even possible (if they are not possible, great! :) ).
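(For completeness, the fd-passing mechanism I mean is plain SCM_RIGHTS over a
UNIX domain socket. A minimal userspace illustration, unrelated to the code in
this series:)

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Hand 'fd' to whoever holds the other end of the AF_UNIX socket 'sock';
     * the receiver gets its own descriptor for the same open file (e.g. a
     * KVM VM or vCPU fd). */
    static int send_fd(int sock, int fd)
    {
            char iobuf[1] = { 0 }, cbuf[CMSG_SPACE(sizeof(int))];
            struct iovec iov = { .iov_base = iobuf, .iov_len = 1 };
            struct msghdr msg = {
                    .msg_iov = &iov, .msg_iovlen = 1,
                    .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
            };
            struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

            cmsg->cmsg_level = SOL_SOCKET;
            cmsg->cmsg_type = SCM_RIGHTS;
            cmsg->cmsg_len = CMSG_LEN(sizeof(int));
            memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

            return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
    }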
I think I am supposed to know if these things are possible or not and
what would happen, but it's late Friday and my brain is begging for some
Weekend :D
> I didn't think in depth about this issue yet, but I need to because duplicating
> the secret memory mappings in the new forked process is easy (To give kernel
> access on the secret memory), but tearing them down across all forked processes
> is a bit complicated (To clean stale mappings on parent/child processes). Right
> now tearing down the mapping will only happen on mm_struct which allocated the
> secret memory.
If an allocation is MM-local, I would assume that fork() would
*duplicate* that allocation (leaving CoW out of the picture :D ), but
that's where the fun begins (see above regarding my confusion about KVM
and fork() behavior ... ).
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC PATCH 0/7] support for mm-local memory allocations and use it
2024-10-18 18:52 ` David Hildenbrand
@ 2024-10-18 19:02 ` David Hildenbrand
0 siblings, 0 replies; 24+ messages in thread
From: David Hildenbrand @ 2024-10-18 19:02 UTC (permalink / raw)
To: Fares Mehanna
Cc: akpm, ardb, arnd, bhelgaas, broonie, catalin.marinas,
james.morse, javierm, jean-philippe, joey.gouly,
kristina.martsenko, kvmarm, linux-arm-kernel, linux-kernel,
linux-mm, mark.rutland, maz, mediou, nh-open-source,
oliver.upton, ptosi, rdunlap, rkagan, rppt, shikemeng,
suzuki.poulose, tabba, will, yuzenghui
On 18.10.24 20:52, David Hildenbrand wrote:
> On 11.10.24 16:25, Fares Mehanna wrote:
>>>>
>>>>
>>>>> On 11. Oct 2024, at 14:36, Mediouni, Mohamed <mediou@amazon.de> wrote:
>>>>>
>>>>>
>>>>>
>>>>>> On 11. Oct 2024, at 14:04, David Hildenbrand <david@redhat.com> wrote:
>>>>>>
>>>>>> On 10.10.24 17:52, Fares Mehanna wrote:
>>>>>>>>> In a series posted a few years ago [1], a proposal was put forward to allow the
>>>>>>>>> kernel to allocate memory local to a mm and thus push it out of reach for
>>>>>>>>> current and future speculation-based cross-process attacks. We still believe
>>>>>>>>> this is a nice thing to have.
>>>>>>>>>
>>>>>>>>> However, in the time passed since that post Linux mm has grown quite a few new
>>>>>>>>> goodies, so we'd like to explore possibilities to implement this functionality
>>>>>>>>> with less effort and churn leveraging the now available facilities.
>>>>>>>>>
>>>>>>>>> An RFC was posted few months back [2] to show the proof of concept and a simple
>>>>>>>>> test driver.
>>>>>>>>>
>>>>>>>>> In this RFC, we're using the same approach of implementing mm-local allocations
>>>>>>>>> piggy-backing on memfd_secret(), using regular user addresses but pinning the
>>>>>>>>> pages and flipping the user/supervisor flag on the respective PTEs to make them
>>>>>>>>> directly accessible from kernel.
>>>>>>>>> In addition to that we are submitting 5 patches to use the secret memory to hide
>>>>>>>>> the vCPU gp-regs and fp-regs on arm64 VHE systems.
>>>>>>>>
>>>>>>>> I'm a bit lost on what exactly we want to achieve. The point where we
>>>>>>>> start flipping user/supervisor flags confuses me :)
>>>>>>>>
>>>>>>>> With secretmem, you'd get memory allocated that
>>>>>>>> (a) Is accessible by user space -- mapped into user space.
>>>>>>>> (b) Is inaccessible by kernel space -- not mapped into the direct map
>>>>>>>> (c) GUP will fail, but copy_from / copy_to user will work.
>>>>>>>>
>>>>>>>>
>>>>>>>> Another way, without secretmem, would be to consider these "secrets"
>>>>>>>> kernel allocations that can be mapped into user space using mmap() of a
>>>>>>>> special fd. That is, they wouldn't have their origin in secretmem, but
>>>>>>>> in KVM as a kernel allocation. It could be achieved by using VM_MIXEDMAP
>>>>>>>> with vm_insert_pages(), manually removing them from the directmap.
>>>>>>>>
>>>>>>>> But, I am not sure who is supposed to access what. Let's explore the
>>>>>>>> requirements. I assume we want:
>>>>>>>>
>>>>>>>> (a) Pages accessible by user space -- mapped into user space.
>>>>>>>> (b) Pages inaccessible by kernel space -- not mapped into the direct map
>>>>>>>> (c) GUP to fail (no direct map).
>>>>>>>> (d) copy_from / copy_to user to fail?
>>>>>>>>
>>>>>>>> And on top of that, some way to access these pages on demand from kernel
>>>>>>>> space? (temporary CPU-local mapping?)
>>>>>>>>
>>>>>>>> Or how would the kernel make use of these allocations?
>>>>>>>>
>>>>>>>> --
>>>>>>>> Cheers,
>>>>>>>>
>>>>>>>> David / dhildenb
>>>>>>> Hi David,
>>>>>>
>>>>>> Hi Fares!
>>>>>>
>>>>>>> Thanks for taking a look at the patches!
>>>>>>> We're trying to allocate a kernel memory that is accessible to the kernel but
>>>>>>> only when the context of the process is loaded.
>>>>>>> So this is a kernel memory that is not needed to operate the kernel itself, it
>>>>>>> is to store & process data on behalf of a process. The requirement for this
>>>>>>> memory is that it would never be touched unless the process is scheduled on this
>>>>>>> core. otherwise any other access will crash the kernel.
>>>>>>> So this memory should only be directly readable and writable by the kernel, but
>>>>>>> only when the process context is loaded. The memory shouldn't be readable or
>>>>>>> writable by the owner process at all.
>>>>>>> This is basically done by removing those pages from kernel linear address and
>>>>>>> attaching them only in the process mm_struct. So during context switching the
>>>>>>> kernel loses access to the secret memory scheduled out and gain access to the
>>>>>>> new process secret memory.
>>>>>>> This generally protects against speculation attacks, and if other process managed
>>>>>>> to trick the kernel to leak data from memory. In this case the kernel will crash
>>>>>>> if it tries to access other processes secret memory.
>>>>>>> Since this memory is special in the sense that it is kernel memory but only make
>>>>>>> sense in the term of the owner process, I tried in this patch series to explore
>>>>>>> the possibility of reusing memfd_secret() to allocate this memory in user virtual
>>>>>>> address space, manage it in a VMA, flipping the permissions while keeping the
>>>>>>> control of the mapping exclusively with the kernel.
>>>>>>> Right now it is:
>>>>>>> (a) Pages not accessible by user space -- even though they are mapped into user
>>>>>>> space, the PTEs are marked for kernel usage.
>>>>>>
>>>>>> Ah, that is the detail I was missing, now I see what you are trying to achieve, thanks!
>>>>>>
>>>>>> It is a bit architecture specific, because ... imagine architectures that have separate kernel+user space page table hierarchies, and not a simple PTE flag
>>> to change access permissions between kernel/user space.
>>>>>>
>>>>>> IIRC s390 is one such architecture that uses separate page tables for the user-space + kernel-space portions.
>>>>>>
>>>>>>> (b) Pages accessible by kernel space -- even though they are not mapped into the
>>>>>>> direct map, the PTEs in uvaddr are marked for kernel usage.
>>>>>>> (c) copy_from / copy_to user won't fail -- because it is in the user range, but
>>>>>>> this can be fixed by allocating specific range in user vaddr to this feature
>>>>>>> and check against this range there.
>>>>>>> (d) The secret memory vaddr is guessable by the owner process -- that can also
>>>>>>> be fixed by allocating bigger chunk of user vaddr for this feature and
>>>>>>> randomly placing the secret memory there.
>>>>>>> (e) Mapping is off-limits to the owner process by marking the VMA as locked,
>>>>>>> sealed and special.
>>>>>>
>>>>>> Okay, so in this RFC you are jumping through quite some hoops to have a kernel allocation unmapped from the direct map but mapped into a per-process page
>>> table only accessible by kernel space. :)
>>>>>>
>>>>>> So you really don't want this mapped into user space at all (consequently, no GUP, no access, no copy_from_user ...). In this RFC it's mapped but turned
>>> inaccessible by flipping the "kernel vs. user" switch.
>>>>>>
>>>>>>> Other alternative (that was implemented in the first submission) is to track those
>>>>>>> allocations in a non-shared kernel PGD per process, then handle creating, forking
>>>>>>> and context-switching this PGD.
>>>>>>
>>>>>> That sounds like a better approach. So we would remove the pages from the shared kernel direct map and map them into a separate kernel-portion in the per-MM
>>> page tables?
>>>>>>
>>>>>> Can you envision that would also work with architectures like s390x? I assume we would not only need the per-MM user space page table hierarchy, but also a
>>> per-MM kernel space page table hierarchy, into which we also map the common/shared-among-all-processes kernel space page tables (e.g., directmap).
>>>>> Yes, that’s also applicable to arm64. There’s currently no separate per-mm user space page hierarchy there.
>>>> typo, read kernel
>>>
>>>
>>> Okay, thanks. So going into that direction makes more sense.
>>>
>>> I do wonder if we really have to deal with fork() ... if the primary
>>> users don't really have meaning in the forked child (e.g., just like
>>> fork() with KVM IIRC) we might just get away with "losing" these
>>> allocations in the child process.
>>>
>>> Happy to learn why fork() must be supported.
>>
>> It really depends on the use cases of the kernel secret allocation, but a
>> troubling scenario comes to mind:
>> 1. Process A had a resource X.
>> 2. Kernel decided to keep some data related to resource X in process A secret
>> memory.
>> 3. Process A decided to fork, now process B share the resource X.
>> 4. Process B started using resource X. <-- This will crash the kernel, as the
>> kernel page table in use for process B has no mapping for the secret memory
>> used by resource X.
>>
>> I haven't tried to trigger this crash myself though.
>>
>
> Right, and if we can rule out any users that are supposed to work after
> fork(), we can just disregard that in the first version.
>
> I never played with this, but let's assume you make use of these
> mm-local allocations in KVM context.
>
> What would happen if you fork() with a KVM fd and try accessing that fd
> from the other process using ioctls? I recall that KVM will not be
> "duplicated".
>
> What would happen if you send that fd over to a completely different
> process and try accessing that fd from the other process using ioctls?
Stumbling over Documentation/virtual/kvm/api.txt:
"In general file descriptors can be migrated among processes by means
of fork() and the SCM_RIGHTS facility of unix domain socket. These
kinds of tricks are explicitly not supported by kvm. While they will
not cause harm to the host, their actual behavior is not guaranteed by
the API. See "General description" for details on the ioctl usage
model that is supported by KVM.
It is important to note that although VM ioctls may only be issued from
the process that created the VM, a VM's lifecycle is associated with its
file descriptor, not its creator (process). In other words, the VM and
its resources, *including the associated address space*, are not freed
until the last reference to the VM's file descriptor has been released.
For example, if fork() is issued after ioctl(KVM_CREATE_VM), the VM will
not be freed until both the parent (original) process and its child have
put their references to the VM's file descriptor.
Because a VM's resources are not freed until the last reference to its
file descriptor is released, creating additional references to a VM
via fork(), dup(), etc... without careful consideration is strongly
discouraged and may have unwanted side effects, e.g. memory allocated
by and on behalf of the VM's process may not be freed/unaccounted when
the VM is shut down.
"
The "may only be issued" doesn't make it clear if that is actively enforced.
But staring at kvm_vcpu_ioctl():
if (vcpu->kvm->mm != current->mm || vcpu->kvm->vm_dead)
return -EIO;
So with KVM it would likely work to *not* care about mm-local memory
allocations during fork().
But of course, what I am getting at is: if we had some fd that
uses an mm-local allocation, and it could be accessed (via ioctl) from
another active MM, we would likely be in trouble ... and fork() is not
the only problem.
We should really not try to handle fork() and should instead restrict
the use cases in which these allocations can be used.
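A minimal sketch of what such a restriction could look like (all names below are hypothetical; this is not from the posted series): mark the mm-local VMA so fork() never duplicates it, and have any fd built on top of such allocations refuse ioctls from a foreign mm, mirroring the kvm_vcpu_ioctl() check quoted above:

#include <linux/fs.h>
#include <linux/mm.h>

struct mm_local_ctx {
        struct mm_struct *mm;   /* mm that owns the mm-local allocation */
};

/* Hypothetical: fork() skips VM_DONTCOPY VMAs, so a child never inherits the
 * mapping; VM_DONTEXPAND/VM_DONTDUMP keep mremap() and coredumps away. */
static void secretmem_mark_mm_local(struct vm_area_struct *vma)
{
        vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
}

/* Hypothetical fd user: only the owning mm may issue ioctls. */
static long mm_local_ioctl(struct file *file, unsigned int ioctl, unsigned long arg)
{
        struct mm_local_ctx *ctx = file->private_data;

        if (ctx->mm != current->mm)
                return -EIO;

        /* normal ioctl handling; safe to touch the mm-local memory here */
        return 0;
}

That would not make fork() work, but it keeps a child (or an fd recipient) from ever reaching kernel code paths that dereference another process's mm-local allocations.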
--
Cheers,
David / dhildenb
Thread overview: 24+ messages
2024-09-11 14:33 [RFC PATCH 0/7] support for mm-local memory allocations and use it Fares Mehanna
2024-09-11 14:34 ` [RFC PATCH 1/7] mseal: expose interface to seal / unseal user memory ranges Fares Mehanna
2024-09-12 16:40 ` Liam R. Howlett
2024-09-25 15:25 ` Fares Mehanna
2024-09-11 14:34 ` [RFC PATCH 2/7] mm/secretmem: implement mm-local kernel allocations Fares Mehanna
2024-09-11 14:34 ` [RFC PATCH 3/7] arm64: KVM: Refactor C-code to access vCPU gp-registers through macros Fares Mehanna
2024-09-11 14:34 ` [RFC PATCH 4/7] KVM: Refactor Assembly-code to access vCPU gp-registers through a macro Fares Mehanna
2024-09-11 14:34 ` [RFC PATCH 5/7] arm64: KVM: Allocate vCPU gp-regs dynamically on VHE and KERNEL_SECRETMEM enabled systems Fares Mehanna
2024-09-11 14:34 ` [RFC PATCH 6/7] arm64: KVM: Refactor C-code to access vCPU fp-registers through macros Fares Mehanna
2024-09-11 14:34 ` [RFC PATCH 7/7] arm64: KVM: Allocate vCPU fp-regs dynamically on VHE and KERNEL_SECRETMEM enabled systems Fares Mehanna
2024-09-20 12:34 ` [RFC PATCH 0/7] support for mm-local memory allocations and use it Mike Rapoport
2024-09-25 15:33 ` Fares Mehanna
2024-09-27 7:08 ` Mike Rapoport
2024-10-08 20:06 ` Fares Mehanna
2024-09-20 13:19 ` Alexander Graf
2024-09-27 12:59 ` David Hildenbrand
2024-10-10 15:52 ` Fares Mehanna
2024-10-11 12:04 ` David Hildenbrand
2024-10-11 12:36 ` Mediouni, Mohamed
2024-10-11 12:56 ` Mediouni, Mohamed
2024-10-11 12:58 ` David Hildenbrand
2024-10-11 14:25 ` Fares Mehanna
2024-10-18 18:52 ` David Hildenbrand
2024-10-18 19:02 ` David Hildenbrand