From: Fuad Tabba <tabba@google.com>
Date: Wed, 4 Jun 2025 10:48:51 +0100
Subject: Re: [PATCH v10 16/16] KVM: selftests: guest_memfd mmap() test when mapping is allowed
To: Gavin Shan
Cc: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
	pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
	anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
	brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
	xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
	jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
	isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
	vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
	david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
	liam.merwick@oracle.com, isaku.yamahata@gmail.com,
	kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
	steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
	quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
	quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
	quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
	yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
	will@kernel.org, qperret@google.com, keirf@google.com,
	roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org,
	jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com,
	fvdl@google.com, hughd@google.com, jthoughton@google.com,
	peterx@redhat.com, pankaj.gupta@amd.com, ira.weiny@intel.com
In-Reply-To: <025ae4ea-822b-44a1-8f78-38afc0e4b07e@redhat.com>
References: <20250527180245.1413463-1-tabba@google.com> <20250527180245.1413463-17-tabba@google.com> <025ae4ea-822b-44a1-8f78-38afc0e4b07e@redhat.com>
Content-Type: text/plain; charset="UTF-8"
Hi Gavin,

On Wed, 4 Jun 2025 at 10:20, Gavin Shan wrote:
>
> Hi Fuad,
>
> On 5/28/25 4:02 AM, Fuad Tabba wrote:
> > Expand the guest_memfd selftests to include testing mapping guest
> > memory for VM types that support it.
> >
> > Also, build the guest_memfd selftest for arm64.
> >
> > Co-developed-by: Ackerley Tng
> > Signed-off-by: Ackerley Tng
> > Signed-off-by: Fuad Tabba
> > ---
> >  tools/testing/selftests/kvm/Makefile.kvm     |   1 +
> >  .../testing/selftests/kvm/guest_memfd_test.c | 162 +++++++++++++++---
> >  2 files changed, 142 insertions(+), 21 deletions(-)
> >
>
> The test case fails on a 64KB host, since the file sizes in
> test_create_guest_memfd_multiple() would be page_size and (2 * page_size).
> The fixed sizes 4096 and 8192 aren't aligned to 64KB.

Yes, however, this patch didn't introduce or modify this test. I think
it's better to fix it in a separate patch, independent of this series.

> # ./guest_memfd_test
> Random seed: 0x6b8b4567
> ==== Test Assertion Failure ====
>   guest_memfd_test.c:178: fd1 != -1
>   pid=7565 tid=7565 errno=22 - Invalid argument
>      1 0x000000000040252f: test_create_guest_memfd_multiple at guest_memfd_test.c:178
>      2  (inlined by) test_with_type at guest_memfd_test.c:231
>      3 0x00000000004020c7: main at guest_memfd_test.c:306
>      4 0x0000ffff8cec733f: ?? ??:0
>      5 0x0000ffff8cec7417: ?? ??:0
>      6 0x00000000004021ef: _start at ??:?
> memfd creation should succeed
>
> > diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
> > index f62b0a5aba35..ccf95ed037c3 100644
> > --- a/tools/testing/selftests/kvm/Makefile.kvm
> > +++ b/tools/testing/selftests/kvm/Makefile.kvm
> > @@ -163,6 +163,7 @@ TEST_GEN_PROGS_arm64 += access_tracking_perf_test
> >  TEST_GEN_PROGS_arm64 += arch_timer
> >  TEST_GEN_PROGS_arm64 += coalesced_io_test
> >  TEST_GEN_PROGS_arm64 += dirty_log_perf_test
> > +TEST_GEN_PROGS_arm64 += guest_memfd_test
> >  TEST_GEN_PROGS_arm64 += get-reg-list
> >  TEST_GEN_PROGS_arm64 += memslot_modification_stress_test
> >  TEST_GEN_PROGS_arm64 += memslot_perf_test
> > diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
> > index ce687f8d248f..3d6765bc1f28 100644
> > --- a/tools/testing/selftests/kvm/guest_memfd_test.c
> > +++ b/tools/testing/selftests/kvm/guest_memfd_test.c
> > @@ -34,12 +34,46 @@ static void test_file_read_write(int fd)
> >  		    "pwrite on a guest_mem fd should fail");
> >  }
> >
> > -static void test_mmap(int fd, size_t page_size)
> > +static void test_mmap_allowed(int fd, size_t page_size, size_t total_size)
> > +{
> > +	const char val = 0xaa;
> > +	char *mem;
> > +	size_t i;
> > +	int ret;
> > +
> > +	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> > +	TEST_ASSERT(mem != MAP_FAILED, "mmaping() guest memory should pass.");
> > +
>
> If you agree, I think it would be nice to ensure guest-memfd doesn't support
> copy-on-write; more details are provided below.

Good idea. I think we can do this without adding much more code. I'll
add a check in test_mmap_allowed(), since the idea is that even if
mmap() is supported, we still can't COW. I'll also rename the functions
to make this a bit clearer (i.e., supported instead of allowed).

Thank you for this and thank you for the reviews!
/fuad

> > +	memset(mem, val, total_size);
> > +	for (i = 0; i < total_size; i++)
> > +		TEST_ASSERT_EQ(mem[i], val);
> > +
> > +	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
> > +			page_size);
> > +	TEST_ASSERT(!ret, "fallocate the first page should succeed");
> > +
> > +	for (i = 0; i < page_size; i++)
> > +		TEST_ASSERT_EQ(mem[i], 0x00);
> > +	for (; i < total_size; i++)
> > +		TEST_ASSERT_EQ(mem[i], val);
> > +
> > +	memset(mem, val, page_size);
> > +	for (i = 0; i < total_size; i++)
> > +		TEST_ASSERT_EQ(mem[i], val);
> > +
> > +	ret = munmap(mem, total_size);
> > +	TEST_ASSERT(!ret, "munmap should succeed");
> > +}
> > +
> > +static void test_mmap_denied(int fd, size_t page_size, size_t total_size)
> >  {
> >  	char *mem;
> >
> >  	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> >  	TEST_ASSERT_EQ(mem, MAP_FAILED);
> > +
> > +	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> > +	TEST_ASSERT_EQ(mem, MAP_FAILED);
> >  }
>
> Add one more argument to test_mmap_denied as the flags passed to mmap():
>
> static void test_mmap_denied(int fd, size_t page_size, size_t total_size, int mmap_flags)
> {
>         mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, mmap_flags, fd, 0);
> }
>
> >
> >  static void test_file_size(int fd, size_t page_size, size_t total_size)
> > @@ -120,26 +154,19 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
> >  	}
> >  }
> >
> > -static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
> > +static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm,
> > +						  uint64_t guest_memfd_flags,
> > +						  size_t page_size)
> >  {
> > -	size_t page_size = getpagesize();
> > -	uint64_t flag;
> >  	size_t size;
> >  	int fd;
> >
> >  	for (size = 1; size < page_size; size++) {
> > -		fd = __vm_create_guest_memfd(vm, size, 0);
> > -		TEST_ASSERT(fd == -1 && errno == EINVAL,
> > +		fd = __vm_create_guest_memfd(vm, size, guest_memfd_flags);
> > +		TEST_ASSERT(fd < 0 && errno == EINVAL,
> >  			    "guest_memfd() with non-page-aligned page size '0x%lx' should fail with EINVAL",
> >  			    size);
> >  	}
> > -
> > -	for (flag = BIT(0); flag; flag <<= 1) {
> > -		fd = __vm_create_guest_memfd(vm, page_size, flag);
> > -		TEST_ASSERT(fd == -1 && errno == EINVAL,
> > -			    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
> > -			    flag);
> > -	}
> >  }
> >
> >  static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
> > @@ -170,30 +197,123 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
> >  	close(fd1);
> >  }
> >
> > -int main(int argc, char *argv[])
> > +#define GUEST_MEMFD_TEST_SLOT	10
> > +#define GUEST_MEMFD_TEST_GPA	0x100000000
> > +
> > +static bool check_vm_type(unsigned long vm_type)
> >  {
> > -	size_t page_size;
> > +	/*
> > +	 * Not all architectures support KVM_CAP_VM_TYPES. However, those that
> > +	 * support guest_memfd have that support for the default VM type.
> > +	 */
> > +	if (vm_type == VM_TYPE_DEFAULT)
> > +		return true;
> > +
> > +	return kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type);
> > +}
> > +
> > +static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
> > +			   bool expect_mmap_allowed)
> > +{
> > +	struct kvm_vm *vm;
> >  	size_t total_size;
> > +	size_t page_size;
> >  	int fd;
> > -	struct kvm_vm *vm;
> >
> > -	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
> > +	if (!check_vm_type(vm_type))
> > +		return;
> >
> >  	page_size = getpagesize();
> >  	total_size = page_size * 4;
> >
> > -	vm = vm_create_barebones();
> > +	vm = vm_create_barebones_type(vm_type);
> >
> > -	test_create_guest_memfd_invalid(vm);
> >  	test_create_guest_memfd_multiple(vm);
> > +	test_create_guest_memfd_invalid_sizes(vm, guest_memfd_flags, page_size);
> >
> > -	fd = vm_create_guest_memfd(vm, total_size, 0);
> > +	fd = vm_create_guest_memfd(vm, total_size, guest_memfd_flags);
> >
> >  	test_file_read_write(fd);
> > -	test_mmap(fd, page_size);
> > +
> > +	if (expect_mmap_allowed)
> > +		test_mmap_allowed(fd, page_size, total_size);
> > +	else
> > +		test_mmap_denied(fd, page_size, total_size);
> > +
>
> if (expect_mmap_allowed) {
>         test_mmap_denied(fd, page_size, total_size, MAP_PRIVATE);
>         test_mmap_allowed(fd, page_size, total_size);
> } else {
>         test_mmap_denied(fd, page_size, total_size, MAP_SHARED);
> }
>
> >  	test_file_size(fd, page_size, total_size);
> >  	test_fallocate(fd, page_size, total_size);
> >  	test_invalid_punch_hole(fd, page_size, total_size);
> >
> >  	close(fd);
> > +	kvm_vm_release(vm);
> > +}
> > +
> > +static void test_vm_type_gmem_flag_validity(unsigned long vm_type,
> > +					    uint64_t expected_valid_flags)
> > +{
> > +	size_t page_size = getpagesize();
> > +	struct kvm_vm *vm;
> > +	uint64_t flag = 0;
> > +	int fd;
> > +
> > +	if (!check_vm_type(vm_type))
> > +		return;
> > +
> > +	vm = vm_create_barebones_type(vm_type);
> > +
> > +	for (flag = BIT(0); flag; flag <<= 1) {
> > +		fd = __vm_create_guest_memfd(vm,
> > +					     page_size, flag);
> > +
> > +		if (flag & expected_valid_flags) {
> > +			TEST_ASSERT(fd >= 0,
> > +				    "guest_memfd() with flag '0x%lx' should be valid",
> > +				    flag);
> > +			close(fd);
> > +		} else {
> > +			TEST_ASSERT(fd < 0 && errno == EINVAL,
> > +				    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
> > +				    flag);
> > +		}
> > +	}
> > +
> > +	kvm_vm_release(vm);
> > +}
> > +
> > +static void test_gmem_flag_validity(void)
> > +{
> > +	uint64_t non_coco_vm_valid_flags = 0;
> > +
> > +	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
> > +		non_coco_vm_valid_flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> > +
> > +	test_vm_type_gmem_flag_validity(VM_TYPE_DEFAULT, non_coco_vm_valid_flags);
> > +
> > +#ifdef __x86_64__
> > +	test_vm_type_gmem_flag_validity(KVM_X86_SW_PROTECTED_VM, non_coco_vm_valid_flags);
> > +	test_vm_type_gmem_flag_validity(KVM_X86_SEV_VM, 0);
> > +	test_vm_type_gmem_flag_validity(KVM_X86_SEV_ES_VM, 0);
> > +	test_vm_type_gmem_flag_validity(KVM_X86_SNP_VM, 0);
> > +	test_vm_type_gmem_flag_validity(KVM_X86_TDX_VM, 0);
> > +#endif
> > +}
> > +
> > +int main(int argc, char *argv[])
> > +{
> > +	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
> > +
> > +	test_gmem_flag_validity();
> > +
> > +	test_with_type(VM_TYPE_DEFAULT, 0, false);
> > +	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
> > +		test_with_type(VM_TYPE_DEFAULT, GUEST_MEMFD_FLAG_SUPPORT_SHARED,
> > +			       true);
> > +	}
> > +
> > +#ifdef __x86_64__
> > +	test_with_type(KVM_X86_SW_PROTECTED_VM, 0, false);
> > +	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
> > +		test_with_type(KVM_X86_SW_PROTECTED_VM,
> > +			       GUEST_MEMFD_FLAG_SUPPORT_SHARED, true);
> > +	}
> > +#endif
> >  }
>
> Thanks,
> Gavin
>