Date: Wed, 14 May 2025 16:41:53 -0700
In-Reply-To:
Mime-Version: 1.0
References:
X-Mailer: git-send-email 2.49.0.1045.g170613ef41-goog
Message-ID: <45a932753580d21627779ccfc1a2400e17dfdd79.1747264138.git.ackerleytng@google.com>
Subject: [RFC PATCH v2 14/51] KVM: selftests: Update private_mem_conversions_test to mmap guest_memfd
From: Ackerley Tng <ackerleytng@google.com>
To: kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	x86@kernel.org, linux-fsdevel@vger.kernel.org
Cc: ackerleytng@google.com, aik@amd.com, ajones@ventanamicro.com,
	akpm@linux-foundation.org, amoorthy@google.com, anthony.yznaga@oracle.com,
	anup@brainfault.org, aou@eecs.berkeley.edu, bfoster@redhat.com,
	binbin.wu@linux.intel.com, brauner@kernel.org, catalin.marinas@arm.com,
	chao.p.peng@intel.com, chenhuacai@kernel.org, dave.hansen@intel.com,
	david@redhat.com, dmatlack@google.com, dwmw@amazon.co.uk,
	erdemaktas@google.com, fan.du@intel.com, fvdl@google.com, graf@amazon.com,
	haibo1.xu@intel.com, hch@infradead.org, hughd@google.com,
	ira.weiny@intel.com, isaku.yamahata@intel.com, jack@suse.cz,
	james.morse@arm.com, jarkko@kernel.org, jgg@ziepe.ca, jgowans@amazon.com,
	jhubbard@nvidia.com, jroedel@suse.de, jthoughton@google.com,
	jun.miao@intel.com, kai.huang@intel.com, keirf@google.com,
	kent.overstreet@linux.dev, kirill.shutemov@intel.com,
	liam.merwick@oracle.com, maciej.wieczor-retman@intel.com,
	mail@maciej.szmigiero.name, maz@kernel.org, mic@digikod.net,
	michael.roth@amd.com, mpe@ellerman.id.au, muchun.song@linux.dev,
	nikunj@amd.com, nsaenz@amazon.es, oliver.upton@linux.dev,
	palmer@dabbelt.com, pankaj.gupta@amd.com, paul.walmsley@sifive.com,
	pbonzini@redhat.com, pdurrant@amazon.co.uk, peterx@redhat.com,
	pgonda@google.com, pvorel@suse.cz, qperret@google.com,
	quic_cvanscha@quicinc.com, quic_eberman@quicinc.com,
	quic_mnalajal@quicinc.com, quic_pderrin@quicinc.com,
	quic_pheragu@quicinc.com, quic_svaddagi@quicinc.com,
	quic_tsoni@quicinc.com, richard.weiyang@gmail.com,
	rick.p.edgecombe@intel.com, rientjes@google.com, roypat@amazon.co.uk,
	rppt@kernel.org, seanjc@google.com, shuah@kernel.org,
	steven.price@arm.com, steven.sistare@oracle.com, suzuki.poulose@arm.com,
	tabba@google.com, thomas.lendacky@amd.com, usama.arif@bytedance.com,
	vannapurve@google.com, vbabka@suse.cz, viro@zeniv.linux.org.uk,
	vkuznets@redhat.com, wei.w.wang@intel.com, will@kernel.org,
	willy@infradead.org, xiaoyao.li@intel.com, yan.y.zhao@intel.com,
	yilun.xu@intel.com, yuzenghui@huawei.com, zhiquan1.li@intel.com
Content-Type: text/plain; charset="UTF-8"

Update private_mem_conversions_test to use guest_memfd for both private
and shared memory, with the guest_memfd conversion ioctls used to
perform the conversions. Specify -g to also back shared memory with
memory from guest_memfd.
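
For example, an illustrative invocation (the binary path below is assumed
from where the test source lives; -g, -n and -m are the flags this test
defines) that runs 4 vCPUs over 2 memslots with shared memory also backed
by guest_memfd:

  $ ./x86/private_mem_conversions_test -g -n 4 -m 2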

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Change-Id: Ibc647dc43fbdddac7cc465886bed92c07bbf4f00
---
 .../testing/selftests/kvm/include/kvm_util.h  |   1 +
 tools/testing/selftests/kvm/lib/kvm_util.c    |  36 ++++
 .../kvm/x86/private_mem_conversions_test.c    | 163 +++++++++++++++---
 3 files changed, 176 insertions(+), 24 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index ffe0625f2d71..ded65a15abea 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -721,6 +721,7 @@ void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa);
 void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva);
 vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
 void *addr_gpa2alias(struct kvm_vm *vm, vm_paddr_t gpa);
+int addr_gpa2guest_memfd(struct kvm_vm *vm, vm_paddr_t gpa, loff_t *offset);
 
 #ifndef vcpu_arch_put_guest
 #define vcpu_arch_put_guest(mem, val) do { (mem) = (val); } while (0)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 58a3365f479c..253d0c00e2f0 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1734,6 +1734,42 @@ void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa)
 		+ (gpa - region->region.guest_phys_addr));
 }
 
+/*
+ * Address VM Physical to guest_memfd
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   gpa - VM physical address
+ *
+ * Output Args:
+ *   offset - offset in guest_memfd for gpa
+ *
+ * Return:
+ *   guest_memfd for the memory region containing gpa
+ *
+ * Locates the memory region containing the VM physical address given by gpa,
+ * within the VM given by vm. When found, the guest_memfd providing the memory
+ * to the vm physical address and the offset in the file corresponding to the
+ * requested gpa is returned. A TEST_ASSERT failure occurs if no region
+ * containing gpa exists.
+ */
+int addr_gpa2guest_memfd(struct kvm_vm *vm, vm_paddr_t gpa, loff_t *offset)
+{
+	struct userspace_mem_region *region;
+
+	gpa = vm_untag_gpa(vm, gpa);
+
+	region = userspace_mem_region_find(vm, gpa, gpa);
+	if (!region) {
+		TEST_FAIL("No vm physical memory at 0x%lx", gpa);
+		return -1;
+	}
+
+	*offset = region->region.guest_memfd_offset + gpa - region->region.guest_phys_addr;
+
+	return region->region.guest_memfd;
+}
+
 /*
  * Address Host Virtual to VM Physical
  *
diff --git a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
index 82a8d88b5338..ec20bb7e95c8 100644
--- a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -202,15 +203,19 @@ static void guest_test_explicit_conversion(uint64_t base_gpa, bool do_fallocate)
 		guest_sync_shared(gpa, size, p3, p4);
 		memcmp_g(gpa, p4, size);
 
-		/* Reset the shared memory back to the initial pattern. */
-		memset((void *)gpa, init_p, size);
-
 		/*
 		 * Free (via PUNCH_HOLE) *all* private memory so that the next
 		 * iteration starts from a clean slate, e.g. with respect to
 		 * whether or not there are pages/folios in guest_mem.
 		 */
 		guest_map_shared(base_gpa, PER_CPU_DATA_SIZE, true);
+
+		/*
+		 * Reset the entire block back to the initial pattern. Do this
+		 * after fallocate(PUNCH_HOLE) because hole-punching zeroes
+		 * memory.
+		 */
+		memset((void *)base_gpa, init_p, PER_CPU_DATA_SIZE);
 	}
 }
 
@@ -286,7 +291,8 @@ static void guest_code(uint64_t base_gpa)
 	GUEST_DONE();
 }
 
-static void handle_exit_hypercall(struct kvm_vcpu *vcpu)
+static void handle_exit_hypercall(struct kvm_vcpu *vcpu,
+				  bool back_shared_memory_with_guest_memfd)
 {
 	struct kvm_run *run = vcpu->run;
 	uint64_t gpa = run->hypercall.args[0];
@@ -303,17 +309,81 @@ static void handle_exit_hypercall(struct kvm_vcpu *vcpu)
 	if (do_fallocate)
 		vm_guest_mem_fallocate(vm, gpa, size, map_shared);
 
-	if (set_attributes)
-		vm_set_memory_attributes(vm, gpa, size,
-					 map_shared ? 0 : KVM_MEMORY_ATTRIBUTE_PRIVATE);
+	if (set_attributes) {
+		if (back_shared_memory_with_guest_memfd) {
+			loff_t offset;
+			int guest_memfd;
+
+			guest_memfd = addr_gpa2guest_memfd(vm, gpa, &offset);
+
+			if (map_shared)
+				guest_memfd_convert_shared(guest_memfd, offset, size);
+			else
+				guest_memfd_convert_private(guest_memfd, offset, size);
+		} else {
+			uint64_t attrs;
+
+			attrs = map_shared ? 0 : KVM_MEMORY_ATTRIBUTE_PRIVATE;
+			vm_set_memory_attributes(vm, gpa, size, attrs);
+		}
+	}
 	run->hypercall.ret = 0;
 }
 
+static void assert_not_faultable(uint8_t *address)
+{
+	pid_t child_pid;
+
+	child_pid = fork();
+	TEST_ASSERT(child_pid != -1, "fork failed");
+
+	if (child_pid == 0) {
+		*address = 'A';
+		TEST_FAIL("Child should have exited with a signal");
+	} else {
+		int status;
+
+		waitpid(child_pid, &status, 0);
+
+		TEST_ASSERT(WIFSIGNALED(status),
+			    "Child should have exited with a signal");
+		TEST_ASSERT_EQ(WTERMSIG(status), SIGBUS);
+	}
+}
+
+static void add_memslot(struct kvm_vm *vm, uint64_t gpa, uint32_t slot,
+			uint64_t size, int guest_memfd,
+			uint64_t guest_memfd_offset)
+{
+	struct userspace_mem_region *region;
+
+	region = vm_mem_region_alloc(vm);
+
+	guest_memfd = vm_mem_region_install_guest_memfd(region, guest_memfd);
+
+	vm_mem_region_mmap(region, size, MAP_SHARED, guest_memfd, guest_memfd_offset);
+	vm_mem_region_install_memory(region, size, getpagesize());
+
+	region->region.slot = slot;
+	region->region.flags = KVM_MEM_GUEST_MEMFD;
+	region->region.guest_phys_addr = gpa;
+	region->region.guest_memfd_offset = guest_memfd_offset;
+
+	vm_mem_region_add(vm, region);
+}
+
 static bool run_vcpus;
 
-static void *__test_mem_conversions(void *__vcpu)
+struct test_thread_args
 {
-	struct kvm_vcpu *vcpu = __vcpu;
+	struct kvm_vcpu *vcpu;
+	bool back_shared_memory_with_guest_memfd;
+};
+
+static void *__test_mem_conversions(void *params)
+{
+	struct test_thread_args *args = params;
+	struct kvm_vcpu *vcpu = args->vcpu;
 	struct kvm_run *run = vcpu->run;
 	struct kvm_vm *vm = vcpu->vm;
 	struct ucall uc;
@@ -325,7 +395,10 @@ static void *__test_mem_conversions(void *__vcpu)
 		vcpu_run(vcpu);
 
 		if (run->exit_reason == KVM_EXIT_HYPERCALL) {
-			handle_exit_hypercall(vcpu);
+			handle_exit_hypercall(
+				vcpu,
+				args->back_shared_memory_with_guest_memfd);
+
 			continue;
 		}
 
@@ -349,8 +422,18 @@ static void *__test_mem_conversions(void *__vcpu)
 			size_t nr_bytes = min_t(size_t, vm->page_size, size - i);
 			uint8_t *hva = addr_gpa2hva(vm, gpa + i);
 
-			/* In all cases, the host should observe the shared data. */
-			memcmp_h(hva, gpa + i, uc.args[3], nr_bytes);
+			/* Check contents of memory */
+			if (args->back_shared_memory_with_guest_memfd &&
+			    uc.args[0] == SYNC_PRIVATE) {
+				assert_not_faultable(hva);
+			} else {
+				/*
+				 * If shared and private memory use
+				 * separate backing memory, the host
+				 * should always observe shared data.
+				 */
+				memcmp_h(hva, gpa + i, uc.args[3], nr_bytes);
+			}
 
 			/* For shared, write the new pattern to guest memory. */
 			if (uc.args[0] == SYNC_SHARED)
@@ -366,14 +449,16 @@ static void *__test_mem_conversions(void *__vcpu)
 	}
 }
 
-static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t nr_vcpus,
-				 uint32_t nr_memslots)
+static void test_mem_conversions(enum vm_mem_backing_src_type src_type,
+				 uint32_t nr_vcpus, uint32_t nr_memslots,
+				 bool back_shared_memory_with_guest_memfd)
 {
 	/*
 	 * Allocate enough memory so that each vCPU's chunk of memory can be
 	 * naturally aligned with respect to the size of the backing store.
 	 */
 	const size_t alignment = max_t(size_t, SZ_2M, get_backing_src_pagesz(src_type));
+	struct test_thread_args *thread_args[KVM_MAX_VCPUS];
 	const size_t per_cpu_size = align_up(PER_CPU_DATA_SIZE, alignment);
 	const size_t memfd_size = per_cpu_size * nr_vcpus;
 	const size_t slot_size = memfd_size / nr_memslots;
@@ -381,6 +466,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
 	pthread_t threads[KVM_MAX_VCPUS];
 	struct kvm_vm *vm;
 	int memfd, i, r;
+	uint64_t flags;
 
 	const struct vm_shape shape = {
 		.mode = VM_MODE_DEFAULT,
@@ -394,12 +480,23 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
 
 	vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, (1 << KVM_HC_MAP_GPA_RANGE));
 
-	memfd = vm_create_guest_memfd(vm, memfd_size, 0);
+	flags = back_shared_memory_with_guest_memfd ?
+			GUEST_MEMFD_FLAG_SUPPORT_SHARED :
+			0;
+	memfd = vm_create_guest_memfd(vm, memfd_size, flags);
 
-	for (i = 0; i < nr_memslots; i++)
-		vm_mem_add(vm, src_type, BASE_DATA_GPA + slot_size * i,
-			   BASE_DATA_SLOT + i, slot_size / vm->page_size,
-			   KVM_MEM_GUEST_MEMFD, memfd, slot_size * i);
+	for (i = 0; i < nr_memslots; i++) {
+		if (back_shared_memory_with_guest_memfd) {
+			add_memslot(vm, BASE_DATA_GPA + slot_size * i,
+				    BASE_DATA_SLOT + i, slot_size, memfd,
+				    slot_size * i);
+		} else {
+			vm_mem_add(vm, src_type, BASE_DATA_GPA + slot_size * i,
+				   BASE_DATA_SLOT + i,
+				   slot_size / vm->page_size,
+				   KVM_MEM_GUEST_MEMFD, memfd, slot_size * i);
+		}
+	}
 
 	for (i = 0; i < nr_vcpus; i++) {
 		uint64_t gpa = BASE_DATA_GPA + i * per_cpu_size;
@@ -412,13 +509,23 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
 		 */
 		virt_map(vm, gpa, gpa, PER_CPU_DATA_SIZE / vm->page_size);
 
-		pthread_create(&threads[i], NULL, __test_mem_conversions, vcpus[i]);
+		thread_args[i] = malloc(sizeof(struct test_thread_args));
+		TEST_ASSERT(thread_args[i] != NULL,
+			    "Could not allocate memory for thread parameters");
+		thread_args[i]->vcpu = vcpus[i];
+		thread_args[i]->back_shared_memory_with_guest_memfd =
+			back_shared_memory_with_guest_memfd;
+
+		pthread_create(&threads[i], NULL, __test_mem_conversions,
+			       (void *)thread_args[i]);
 	}
 
 	WRITE_ONCE(run_vcpus, true);
 
-	for (i = 0; i < nr_vcpus; i++)
+	for (i = 0; i < nr_vcpus; i++) {
 		pthread_join(threads[i], NULL);
+		free(thread_args[i]);
+	}
 
 	kvm_vm_free(vm);
 
@@ -440,7 +547,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
 static void usage(const char *cmd)
 {
 	puts("");
-	printf("usage: %s [-h] [-m nr_memslots] [-s mem_type] [-n nr_vcpus]\n", cmd);
+	printf("usage: %s [-h] [-g] [-m nr_memslots] [-s mem_type] [-n nr_vcpus]\n", cmd);
 	puts("");
 	backing_src_help("-s");
 	puts("");
@@ -448,18 +555,21 @@ static void usage(const char *cmd)
 	puts("");
 	puts(" -m: specify the number of memslots (default: 1)");
 	puts("");
+	puts(" -g: back shared memory with guest_memfd (default: false)");
+	puts("");
 }
 
 int main(int argc, char *argv[])
 {
 	enum vm_mem_backing_src_type src_type = DEFAULT_VM_MEM_SRC;
+	bool back_shared_memory_with_guest_memfd = false;
 	uint32_t nr_memslots = 1;
 	uint32_t nr_vcpus = 1;
 	int opt;
 
 	TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM));
 
-	while ((opt = getopt(argc, argv, "hm:s:n:")) != -1) {
+	while ((opt = getopt(argc, argv, "hgm:s:n:")) != -1) {
 		switch (opt) {
 		case 's':
 			src_type = parse_backing_src_type(optarg);
@@ -470,6 +580,9 @@ int main(int argc, char *argv[])
 		case 'm':
 			nr_memslots = atoi_positive("nr_memslots", optarg);
 			break;
+		case 'g':
+			back_shared_memory_with_guest_memfd = true;
+			break;
 		case 'h':
 		default:
 			usage(argv[0]);
@@ -477,7 +590,9 @@ int main(int argc, char *argv[])
 		}
 	}
 
-	test_mem_conversions(src_type, nr_vcpus, nr_memslots);
+	test_mem_conversions(src_type, nr_vcpus, nr_memslots,
+			     back_shared_memory_with_guest_memfd);
+
 	return 0;
 }
-- 
2.49.0.1045.g170613ef41-goog