From: Fuad Tabba <tabba@google.com>
Date: Mon, 6 Nov 2023 11:54:42 +0000
Subject: Re: [PATCH 27/34] KVM: selftests: Introduce VM "shape" to allow tests to specify the VM type
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marc Zyngier, Oliver Upton, Huacai Chen, Michael Ellerman, Anup Patel,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 Alexander Viro, Christian Brauner, "Matthew Wilcox (Oracle)",
 Andrew Morton, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Xiaoyao Li, Xu Yilun,
 Chao Peng, Jarkko Sakkinen, Anish Moorthy, David Matlack, Yu Zhang,
 Isaku Yamahata, Mickaël Salaün, Vlastimil Babka, Vishal Annapurve,
 Ackerley Tng, Maciej Szmigiero, David Hildenbrand, Quentin Perret,
 Michael Roth, Wang, Liam Merwick, Isaku Yamahata, "Kirill A. Shutemov"
In-Reply-To: <20231105163040.14904-28-pbonzini@redhat.com>
References: <20231105163040.14904-1-pbonzini@redhat.com>
 <20231105163040.14904-28-pbonzini@redhat.com>

On Sun, Nov 5, 2023 at 4:34 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> From: Sean Christopherson <seanjc@google.com>
>
> Add a "vm_shape" structure to encapsulate the selftests-defined "mode",
> along with the KVM-defined
> "type" for use when creating a new VM. "mode"
> tracks physical and virtual address properties, as well as the preferred
> backing memory type, while "type" corresponds to the VM type.
>
> Taking the VM type will allow adding tests for KVM_CREATE_GUEST_MEMFD,
> a.k.a. guest private memory, without needing an entirely separate set of
> helpers. Guest private memory is effectively usable only by confidential
> VM types, and it's expected that x86 will double down and require unique
> VM types for TDX and SNP guests.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> Message-Id: <20231027182217.3615211-30-seanjc@google.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---

nit: as in prior selftests commit messages, the commit message still
refers to guest _private_ memory. Should these references be changed to
just guest memory?

Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad

>  tools/testing/selftests/kvm/dirty_log_test.c  |  2 +-
>  .../selftests/kvm/include/kvm_util_base.h     | 54 +++++++++++++++----
>  .../selftests/kvm/kvm_page_table_test.c       |  2 +-
>  tools/testing/selftests/kvm/lib/kvm_util.c    | 43 +++++++--------
>  tools/testing/selftests/kvm/lib/memstress.c   |  3 +-
>  .../kvm/x86_64/ucna_injection_test.c          |  2 +-
>  6 files changed, 72 insertions(+), 34 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
> index 936f3a8d1b83..6cbecf499767 100644
> --- a/tools/testing/selftests/kvm/dirty_log_test.c
> +++ b/tools/testing/selftests/kvm/dirty_log_test.c
> @@ -699,7 +699,7 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, struct kvm_vcpu **vcpu,
>
> 	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
>
> -	vm = __vm_create(mode, 1, extra_mem_pages);
> +	vm = __vm_create(VM_SHAPE(mode), 1, extra_mem_pages);
>
> 	log_mode_create_vm_done(vm);
> 	*vcpu = vm_vcpu_add(vm, 0, guest_code);
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> index 1441fca6c273..157508c071f3 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> @@ -188,6 +188,23 @@ enum vm_guest_mode {
> 	NUM_VM_MODES,
> };
>
> +struct vm_shape {
> +	enum vm_guest_mode mode;
> +	unsigned int type;
> +};
> +
> +#define VM_TYPE_DEFAULT 0
> +
> +#define VM_SHAPE(__mode)			\
> +({						\
> +	struct vm_shape shape = {		\
> +		.mode = (__mode),		\
> +		.type = VM_TYPE_DEFAULT		\
> +	};					\
> +						\
> +	shape;					\
> +})
> +
> #if defined(__aarch64__)
>
> extern enum vm_guest_mode vm_mode_default;
> @@ -220,6 +237,8 @@ extern enum vm_guest_mode vm_mode_default;
>
> #endif
>
> +#define VM_SHAPE_DEFAULT	VM_SHAPE(VM_MODE_DEFAULT)
> +
> #define MIN_PAGE_SIZE		(1U << MIN_PAGE_SHIFT)
> #define PTES_PER_MIN_PAGE	ptes_per_page(MIN_PAGE_SIZE)
>
> @@ -784,21 +803,21 @@ vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
>  * __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to
>  * calculate the amount of memory needed for per-vCPU data, e.g. stacks.
>  */
> -struct kvm_vm *____vm_create(enum vm_guest_mode mode);
> -struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
> +struct kvm_vm *____vm_create(struct vm_shape shape);
> +struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
> 			   uint64_t nr_extra_pages);
>
> static inline struct kvm_vm *vm_create_barebones(void)
> {
> -	return ____vm_create(VM_MODE_DEFAULT);
> +	return ____vm_create(VM_SHAPE_DEFAULT);
> }
>
> static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
> {
> -	return __vm_create(VM_MODE_DEFAULT, nr_runnable_vcpus, 0);
> +	return __vm_create(VM_SHAPE_DEFAULT, nr_runnable_vcpus, 0);
> }
>
> -struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
> +struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, uint32_t nr_vcpus,
> 				      uint64_t extra_mem_pages,
> 				      void *guest_code, struct kvm_vcpu *vcpus[]);
>
> @@ -806,17 +825,27 @@ static inline struct kvm_vm *vm_create_with_vcpus(uint32_t nr_vcpus,
> 						  void *guest_code,
> 						  struct kvm_vcpu *vcpus[])
> {
> -	return __vm_create_with_vcpus(VM_MODE_DEFAULT, nr_vcpus, 0,
> +	return __vm_create_with_vcpus(VM_SHAPE_DEFAULT, nr_vcpus, 0,
> 				      guest_code, vcpus);
> }
>
> +
> +struct kvm_vm *__vm_create_shape_with_one_vcpu(struct vm_shape shape,
> +					       struct kvm_vcpu **vcpu,
> +					       uint64_t extra_mem_pages,
> +					       void *guest_code);
> +
> /*
>  * Create a VM with a single vCPU with reasonable defaults and @extra_mem_pages
>  * additional pages of guest memory.  Returns the VM and vCPU (via out param).
>  */
> -struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> -					 uint64_t extra_mem_pages,
> -					 void *guest_code);
> +static inline struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> +						       uint64_t extra_mem_pages,
> +						       void *guest_code)
> +{
> +	return __vm_create_shape_with_one_vcpu(VM_SHAPE_DEFAULT, vcpu,
> +					       extra_mem_pages, guest_code);
> +}
>
> static inline struct kvm_vm *vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> 						     void *guest_code)
> @@ -824,6 +853,13 @@ static inline struct kvm_vm *vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> 	return __vm_create_with_one_vcpu(vcpu, 0, guest_code);
> }
>
> +static inline struct kvm_vm *vm_create_shape_with_one_vcpu(struct vm_shape shape,
> +							   struct kvm_vcpu **vcpu,
> +							   void *guest_code)
> +{
> +	return __vm_create_shape_with_one_vcpu(shape, vcpu, 0, guest_code);
> +}
> +
> struct kvm_vcpu *vm_recreate_with_one_vcpu(struct kvm_vm *vm);
>
> void kvm_pin_this_task_to_pcpu(uint32_t pcpu);
> diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/testing/selftests/kvm/kvm_page_table_test.c
> index 69f26d80c821..e37dc9c21888 100644
> --- a/tools/testing/selftests/kvm/kvm_page_table_test.c
> +++ b/tools/testing/selftests/kvm/kvm_page_table_test.c
> @@ -254,7 +254,7 @@ static struct kvm_vm *pre_init_before_test(enum vm_guest_mode mode, void *arg)
>
> 	/* Create a VM with enough guest pages */
> 	guest_num_pages = test_mem_size / guest_page_size;
> -	vm = __vm_create_with_vcpus(mode, nr_vcpus, guest_num_pages,
> +	vm = __vm_create_with_vcpus(VM_SHAPE(mode), nr_vcpus, guest_num_pages,
> 				    guest_code, test_args.vcpus);
>
> 	/* Align down GPA of the testing memslot */
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 95a553400ea9..1c74310f1d44 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -209,7 +209,7 @@ __weak void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
> 		(1ULL << (vm->va_bits - 1)) >> vm->page_shift);
> }
>
> -struct kvm_vm *____vm_create(enum vm_guest_mode mode)
> +struct kvm_vm *____vm_create(struct vm_shape shape)
> {
> 	struct kvm_vm *vm;
>
> @@ -221,13 +221,13 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
> 	vm->regions.hva_tree = RB_ROOT;
> 	hash_init(vm->regions.slot_hash);
>
> -	vm->mode = mode;
> -	vm->type = 0;
> +	vm->mode = shape.mode;
> +	vm->type = shape.type;
>
> -	vm->pa_bits = vm_guest_mode_params[mode].pa_bits;
> -	vm->va_bits = vm_guest_mode_params[mode].va_bits;
> -	vm->page_size = vm_guest_mode_params[mode].page_size;
> -	vm->page_shift = vm_guest_mode_params[mode].page_shift;
> +	vm->pa_bits = vm_guest_mode_params[vm->mode].pa_bits;
> +	vm->va_bits = vm_guest_mode_params[vm->mode].va_bits;
> +	vm->page_size = vm_guest_mode_params[vm->mode].page_size;
> +	vm->page_shift = vm_guest_mode_params[vm->mode].page_shift;
>
> 	/* Setup mode specific traits. */
> 	switch (vm->mode) {
> @@ -265,7 +265,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
> 		/*
> 		 * Ignore KVM support for 5-level paging (vm->va_bits == 57),
> 		 * it doesn't take effect unless a CR4.LA57 is set, which it
> -		 * isn't for this VM_MODE.
> +		 * isn't for this mode (48-bit virtual address space).
> 		 */
> 		TEST_ASSERT(vm->va_bits == 48 || vm->va_bits == 57,
> 			    "Linear address width (%d bits) not supported",
> @@ -285,10 +285,11 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
> 		vm->pgtable_levels = 5;
> 		break;
> 	default:
> -		TEST_FAIL("Unknown guest mode, mode: 0x%x", mode);
> +		TEST_FAIL("Unknown guest mode: 0x%x", vm->mode);
> 	}
>
> #ifdef __aarch64__
> +	TEST_ASSERT(!vm->type, "ARM doesn't support test-provided types");
> 	if (vm->pa_bits != 40)
> 		vm->type = KVM_VM_TYPE_ARM_IPA_SIZE(vm->pa_bits);
> #endif
> @@ -347,19 +348,19 @@ static uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
> 	return vm_adjust_num_guest_pages(mode, nr_pages);
> }
>
> -struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
> +struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
> 			   uint64_t nr_extra_pages)
> {
> -	uint64_t nr_pages = vm_nr_pages_required(mode, nr_runnable_vcpus,
> +	uint64_t nr_pages = vm_nr_pages_required(shape.mode, nr_runnable_vcpus,
> 						 nr_extra_pages);
> 	struct userspace_mem_region *slot0;
> 	struct kvm_vm *vm;
> 	int i;
>
> -	pr_debug("%s: mode='%s' pages='%ld'\n", __func__,
> -		 vm_guest_mode_string(mode), nr_pages);
> +	pr_debug("%s: mode='%s' type='%d', pages='%ld'\n", __func__,
> +		 vm_guest_mode_string(shape.mode), shape.type, nr_pages);
>
> -	vm = ____vm_create(mode);
> +	vm = ____vm_create(shape);
>
> 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, nr_pages, 0);
> 	for (i = 0; i < NR_MEM_REGIONS; i++)
> @@ -400,7 +401,7 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
>  * extra_mem_pages is only used to calculate the maximum page table size,
>  * no real memory allocation for non-slot0 memory in this function.
>  */
> -struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
> +struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, uint32_t nr_vcpus,
> 				      uint64_t extra_mem_pages,
> 				      void *guest_code, struct kvm_vcpu *vcpus[])
> {
> @@ -409,7 +410,7 @@ struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
>
> 	TEST_ASSERT(!nr_vcpus || vcpus, "Must provide vCPU array");
>
> -	vm = __vm_create(mode, nr_vcpus, extra_mem_pages);
> +	vm = __vm_create(shape, nr_vcpus, extra_mem_pages);
>
> 	for (i = 0; i < nr_vcpus; ++i)
> 		vcpus[i] = vm_vcpu_add(vm, i, guest_code);
> @@ -417,15 +418,15 @@ struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
> 	return vm;
> }
>
> -struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> -					 uint64_t extra_mem_pages,
> -					 void *guest_code)
> +struct kvm_vm *__vm_create_shape_with_one_vcpu(struct vm_shape shape,
> +					       struct kvm_vcpu **vcpu,
> +					       uint64_t extra_mem_pages,
> +					       void *guest_code)
> {
> 	struct kvm_vcpu *vcpus[1];
> 	struct kvm_vm *vm;
>
> -	vm = __vm_create_with_vcpus(VM_MODE_DEFAULT, 1, extra_mem_pages,
> -				    guest_code, vcpus);
> +	vm = __vm_create_with_vcpus(shape, 1, extra_mem_pages, guest_code, vcpus);
>
> 	*vcpu = vcpus[0];
> 	return vm;
> diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
> index df457452d146..d05487e5a371 100644
> --- a/tools/testing/selftests/kvm/lib/memstress.c
> +++ b/tools/testing/selftests/kvm/lib/memstress.c
> @@ -168,7 +168,8 @@ struct kvm_vm *memstress_create_vm(enum vm_guest_mode mode, int nr_vcpus,
> 	 * The memory is also added to memslot 0, but that's a benign side
> 	 * effect as KVM allows aliasing HVAs in meslots.
> 	 */
> -	vm = __vm_create_with_vcpus(mode, nr_vcpus, slot0_pages + guest_num_pages,
> +	vm = __vm_create_with_vcpus(VM_SHAPE(mode), nr_vcpus,
> +				    slot0_pages + guest_num_pages,
> 				    memstress_guest_code, vcpus);
>
> 	args->vm = vm;
> diff --git a/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c b/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
> index 85f34ca7e49e..0ed32ec903d0 100644
> --- a/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
> @@ -271,7 +271,7 @@ int main(int argc, char *argv[])
>
> 	kvm_check_cap(KVM_CAP_MCE);
>
> -	vm = __vm_create(VM_MODE_DEFAULT, 3, 0);
> +	vm = __vm_create(VM_SHAPE_DEFAULT, 3, 0);
>
> 	kvm_ioctl(vm->kvm_fd, KVM_X86_GET_MCE_CAP_SUPPORTED,
> 		  &supported_mcg_caps);
> --
> 2.39.1