From: Fuad Tabba <tabba@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Marc Zyngier" <maz@kernel.org>,
"Oliver Upton" <oliver.upton@linux.dev>,
"Huacai Chen" <chenhuacai@kernel.org>,
"Michael Ellerman" <mpe@ellerman.id.au>,
"Anup Patel" <anup@brainfault.org>,
"Paul Walmsley" <paul.walmsley@sifive.com>,
"Palmer Dabbelt" <palmer@dabbelt.com>,
"Albert Ou" <aou@eecs.berkeley.edu>,
"Sean Christopherson" <seanjc@google.com>,
"Alexander Viro" <viro@zeniv.linux.org.uk>,
"Christian Brauner" <brauner@kernel.org>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
"Andrew Morton" <akpm@linux-foundation.org>,
kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
"Xiaoyao Li" <xiaoyao.li@intel.com>,
"Xu Yilun" <yilun.xu@intel.com>,
"Chao Peng" <chao.p.peng@linux.intel.com>,
"Jarkko Sakkinen" <jarkko@kernel.org>,
"Anish Moorthy" <amoorthy@google.com>,
"David Matlack" <dmatlack@google.com>,
"Yu Zhang" <yu.c.zhang@linux.intel.com>,
"Isaku Yamahata" <isaku.yamahata@intel.com>,
"Mickaël Salaün" <mic@digikod.net>,
"Vlastimil Babka" <vbabka@suse.cz>,
"Vishal Annapurve" <vannapurve@google.com>,
"Ackerley Tng" <ackerleytng@google.com>,
"Maciej Szmigiero" <mail@maciej.szmigiero.name>,
"David Hildenbrand" <david@redhat.com>,
"Quentin Perret" <qperret@google.com>,
"Michael Roth" <michael.roth@amd.com>,
"Wei W Wang" <wei.w.wang@intel.com>,
"Liam Merwick" <liam.merwick@oracle.com>,
"Isaku Yamahata" <isaku.yamahata@gmail.com>,
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [PATCH 27/34] KVM: selftests: Introduce VM "shape" to allow tests to specify the VM type
Date: Mon, 6 Nov 2023 11:54:42 +0000
Message-ID: <CA+EHjTxz-e_JKYTtEjjYJTXmpvizRXe8EUbhY2E7bwFjkkHVFw@mail.gmail.com>
In-Reply-To: <20231105163040.14904-28-pbonzini@redhat.com>
On Sun, Nov 5, 2023 at 4:34 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> From: Sean Christopherson <seanjc@google.com>
>
> Add a "vm_shape" structure to encapsulate the selftests-defined "mode",
> along with the KVM-defined "type" for use when creating a new VM. "mode"
> tracks physical and virtual address properties, as well as the preferred
> backing memory type, while "type" corresponds to the VM type.
>
> Taking the VM type will allow adding tests for KVM_CREATE_GUEST_MEMFD,
> a.k.a. guest private memory, without needing an entirely separate set of
> helpers. Guest private memory is effectively usable only by confidential
> VM types, and it's expected that x86 will double down and require unique
> VM types for TDX and SNP guests.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> Message-Id: <20231027182217.3615211-30-seanjc@google.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
nit: as in prior selftest commit messages, the commit message here refers
to guest _private_ memory. Should these references be changed to just
guest memory?
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Cheers,
/fuad
> tools/testing/selftests/kvm/dirty_log_test.c | 2 +-
> .../selftests/kvm/include/kvm_util_base.h | 54 +++++++++++++++----
> .../selftests/kvm/kvm_page_table_test.c | 2 +-
> tools/testing/selftests/kvm/lib/kvm_util.c | 43 +++++++--------
> tools/testing/selftests/kvm/lib/memstress.c | 3 +-
> .../kvm/x86_64/ucna_injection_test.c | 2 +-
> 6 files changed, 72 insertions(+), 34 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
> index 936f3a8d1b83..6cbecf499767 100644
> --- a/tools/testing/selftests/kvm/dirty_log_test.c
> +++ b/tools/testing/selftests/kvm/dirty_log_test.c
> @@ -699,7 +699,7 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, struct kvm_vcpu **vcpu,
>
> pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
>
> - vm = __vm_create(mode, 1, extra_mem_pages);
> + vm = __vm_create(VM_SHAPE(mode), 1, extra_mem_pages);
>
> log_mode_create_vm_done(vm);
> *vcpu = vm_vcpu_add(vm, 0, guest_code);
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> index 1441fca6c273..157508c071f3 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> @@ -188,6 +188,23 @@ enum vm_guest_mode {
> NUM_VM_MODES,
> };
>
> +struct vm_shape {
> + enum vm_guest_mode mode;
> + unsigned int type;
> +};
> +
> +#define VM_TYPE_DEFAULT 0
> +
> +#define VM_SHAPE(__mode) \
> +({ \
> + struct vm_shape shape = { \
> + .mode = (__mode), \
> + .type = VM_TYPE_DEFAULT \
> + }; \
> + \
> + shape; \
> +})
> +
> #if defined(__aarch64__)
>
> extern enum vm_guest_mode vm_mode_default;
> @@ -220,6 +237,8 @@ extern enum vm_guest_mode vm_mode_default;
>
> #endif
>
> +#define VM_SHAPE_DEFAULT VM_SHAPE(VM_MODE_DEFAULT)
> +
> #define MIN_PAGE_SIZE (1U << MIN_PAGE_SHIFT)
> #define PTES_PER_MIN_PAGE ptes_per_page(MIN_PAGE_SIZE)
>
> @@ -784,21 +803,21 @@ vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
> * __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to
> * calculate the amount of memory needed for per-vCPU data, e.g. stacks.
> */
> -struct kvm_vm *____vm_create(enum vm_guest_mode mode);
> -struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
> +struct kvm_vm *____vm_create(struct vm_shape shape);
> +struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
> uint64_t nr_extra_pages);
>
> static inline struct kvm_vm *vm_create_barebones(void)
> {
> - return ____vm_create(VM_MODE_DEFAULT);
> + return ____vm_create(VM_SHAPE_DEFAULT);
> }
>
> static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
> {
> - return __vm_create(VM_MODE_DEFAULT, nr_runnable_vcpus, 0);
> + return __vm_create(VM_SHAPE_DEFAULT, nr_runnable_vcpus, 0);
> }
>
> -struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
> +struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, uint32_t nr_vcpus,
> uint64_t extra_mem_pages,
> void *guest_code, struct kvm_vcpu *vcpus[]);
>
> @@ -806,17 +825,27 @@ static inline struct kvm_vm *vm_create_with_vcpus(uint32_t nr_vcpus,
> void *guest_code,
> struct kvm_vcpu *vcpus[])
> {
> - return __vm_create_with_vcpus(VM_MODE_DEFAULT, nr_vcpus, 0,
> + return __vm_create_with_vcpus(VM_SHAPE_DEFAULT, nr_vcpus, 0,
> guest_code, vcpus);
> }
>
> +
> +struct kvm_vm *__vm_create_shape_with_one_vcpu(struct vm_shape shape,
> + struct kvm_vcpu **vcpu,
> + uint64_t extra_mem_pages,
> + void *guest_code);
> +
> /*
> * Create a VM with a single vCPU with reasonable defaults and @extra_mem_pages
> * additional pages of guest memory. Returns the VM and vCPU (via out param).
> */
> -struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> - uint64_t extra_mem_pages,
> - void *guest_code);
> +static inline struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> + uint64_t extra_mem_pages,
> + void *guest_code)
> +{
> + return __vm_create_shape_with_one_vcpu(VM_SHAPE_DEFAULT, vcpu,
> + extra_mem_pages, guest_code);
> +}
>
> static inline struct kvm_vm *vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> void *guest_code)
> @@ -824,6 +853,13 @@ static inline struct kvm_vm *vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> return __vm_create_with_one_vcpu(vcpu, 0, guest_code);
> }
>
> +static inline struct kvm_vm *vm_create_shape_with_one_vcpu(struct vm_shape shape,
> + struct kvm_vcpu **vcpu,
> + void *guest_code)
> +{
> + return __vm_create_shape_with_one_vcpu(shape, vcpu, 0, guest_code);
> +}
> +
> struct kvm_vcpu *vm_recreate_with_one_vcpu(struct kvm_vm *vm);
>
> void kvm_pin_this_task_to_pcpu(uint32_t pcpu);
> diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/testing/selftests/kvm/kvm_page_table_test.c
> index 69f26d80c821..e37dc9c21888 100644
> --- a/tools/testing/selftests/kvm/kvm_page_table_test.c
> +++ b/tools/testing/selftests/kvm/kvm_page_table_test.c
> @@ -254,7 +254,7 @@ static struct kvm_vm *pre_init_before_test(enum vm_guest_mode mode, void *arg)
>
> /* Create a VM with enough guest pages */
> guest_num_pages = test_mem_size / guest_page_size;
> - vm = __vm_create_with_vcpus(mode, nr_vcpus, guest_num_pages,
> + vm = __vm_create_with_vcpus(VM_SHAPE(mode), nr_vcpus, guest_num_pages,
> guest_code, test_args.vcpus);
>
> /* Align down GPA of the testing memslot */
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 95a553400ea9..1c74310f1d44 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -209,7 +209,7 @@ __weak void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
> (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
> }
>
> -struct kvm_vm *____vm_create(enum vm_guest_mode mode)
> +struct kvm_vm *____vm_create(struct vm_shape shape)
> {
> struct kvm_vm *vm;
>
> @@ -221,13 +221,13 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
> vm->regions.hva_tree = RB_ROOT;
> hash_init(vm->regions.slot_hash);
>
> - vm->mode = mode;
> - vm->type = 0;
> + vm->mode = shape.mode;
> + vm->type = shape.type;
>
> - vm->pa_bits = vm_guest_mode_params[mode].pa_bits;
> - vm->va_bits = vm_guest_mode_params[mode].va_bits;
> - vm->page_size = vm_guest_mode_params[mode].page_size;
> - vm->page_shift = vm_guest_mode_params[mode].page_shift;
> + vm->pa_bits = vm_guest_mode_params[vm->mode].pa_bits;
> + vm->va_bits = vm_guest_mode_params[vm->mode].va_bits;
> + vm->page_size = vm_guest_mode_params[vm->mode].page_size;
> + vm->page_shift = vm_guest_mode_params[vm->mode].page_shift;
>
> /* Setup mode specific traits. */
> switch (vm->mode) {
> @@ -265,7 +265,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
> /*
> * Ignore KVM support for 5-level paging (vm->va_bits == 57),
> * it doesn't take effect unless a CR4.LA57 is set, which it
> - * isn't for this VM_MODE.
> + * isn't for this mode (48-bit virtual address space).
> */
> TEST_ASSERT(vm->va_bits == 48 || vm->va_bits == 57,
> "Linear address width (%d bits) not supported",
> @@ -285,10 +285,11 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
> vm->pgtable_levels = 5;
> break;
> default:
> - TEST_FAIL("Unknown guest mode, mode: 0x%x", mode);
> + TEST_FAIL("Unknown guest mode: 0x%x", vm->mode);
> }
>
> #ifdef __aarch64__
> + TEST_ASSERT(!vm->type, "ARM doesn't support test-provided types");
> if (vm->pa_bits != 40)
> vm->type = KVM_VM_TYPE_ARM_IPA_SIZE(vm->pa_bits);
> #endif
> @@ -347,19 +348,19 @@ static uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
> return vm_adjust_num_guest_pages(mode, nr_pages);
> }
>
> -struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
> +struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
> uint64_t nr_extra_pages)
> {
> - uint64_t nr_pages = vm_nr_pages_required(mode, nr_runnable_vcpus,
> + uint64_t nr_pages = vm_nr_pages_required(shape.mode, nr_runnable_vcpus,
> nr_extra_pages);
> struct userspace_mem_region *slot0;
> struct kvm_vm *vm;
> int i;
>
> - pr_debug("%s: mode='%s' pages='%ld'\n", __func__,
> - vm_guest_mode_string(mode), nr_pages);
> + pr_debug("%s: mode='%s' type='%d', pages='%ld'\n", __func__,
> + vm_guest_mode_string(shape.mode), shape.type, nr_pages);
>
> - vm = ____vm_create(mode);
> + vm = ____vm_create(shape);
>
> vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, nr_pages, 0);
> for (i = 0; i < NR_MEM_REGIONS; i++)
> @@ -400,7 +401,7 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
> * extra_mem_pages is only used to calculate the maximum page table size,
> * no real memory allocation for non-slot0 memory in this function.
> */
> -struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
> +struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, uint32_t nr_vcpus,
> uint64_t extra_mem_pages,
> void *guest_code, struct kvm_vcpu *vcpus[])
> {
> @@ -409,7 +410,7 @@ struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus
>
> TEST_ASSERT(!nr_vcpus || vcpus, "Must provide vCPU array");
>
> - vm = __vm_create(mode, nr_vcpus, extra_mem_pages);
> + vm = __vm_create(shape, nr_vcpus, extra_mem_pages);
>
> for (i = 0; i < nr_vcpus; ++i)
> vcpus[i] = vm_vcpu_add(vm, i, guest_code);
> @@ -417,15 +418,15 @@ struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus
> return vm;
> }
>
> -struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> - uint64_t extra_mem_pages,
> - void *guest_code)
> +struct kvm_vm *__vm_create_shape_with_one_vcpu(struct vm_shape shape,
> + struct kvm_vcpu **vcpu,
> + uint64_t extra_mem_pages,
> + void *guest_code)
> {
> struct kvm_vcpu *vcpus[1];
> struct kvm_vm *vm;
>
> - vm = __vm_create_with_vcpus(VM_MODE_DEFAULT, 1, extra_mem_pages,
> - guest_code, vcpus);
> + vm = __vm_create_with_vcpus(shape, 1, extra_mem_pages, guest_code, vcpus);
>
> *vcpu = vcpus[0];
> return vm;
> diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
> index df457452d146..d05487e5a371 100644
> --- a/tools/testing/selftests/kvm/lib/memstress.c
> +++ b/tools/testing/selftests/kvm/lib/memstress.c
> @@ -168,7 +168,8 @@ struct kvm_vm *memstress_create_vm(enum vm_guest_mode mode, int nr_vcpus,
> * The memory is also added to memslot 0, but that's a benign side
> * effect as KVM allows aliasing HVAs in meslots.
> */
> - vm = __vm_create_with_vcpus(mode, nr_vcpus, slot0_pages + guest_num_pages,
> + vm = __vm_create_with_vcpus(VM_SHAPE(mode), nr_vcpus,
> + slot0_pages + guest_num_pages,
> memstress_guest_code, vcpus);
>
> args->vm = vm;
> diff --git a/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c b/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
> index 85f34ca7e49e..0ed32ec903d0 100644
> --- a/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
> @@ -271,7 +271,7 @@ int main(int argc, char *argv[])
>
> kvm_check_cap(KVM_CAP_MCE);
>
> - vm = __vm_create(VM_MODE_DEFAULT, 3, 0);
> + vm = __vm_create(VM_SHAPE_DEFAULT, 3, 0);
>
> kvm_ioctl(vm->kvm_fd, KVM_X86_GET_MCE_CAP_SUPPORTED,
> &supported_mcg_caps);
> --
> 2.39.1
>
>