Message-ID: <7bbe7aea-3b17-444b-8cd3-c5941950307c@redhat.com>
Date: Wed, 21 May 2025 16:53:27 +1000
Subject: Re: [PATCH v9 16/17] KVM: selftests: guest_memfd mmap() test when mapping is allowed
To: Fuad Tabba, kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com,
    quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, ira.weiny@intel.com
References: <20250513163438.3942405-1-tabba@google.com> <20250513163438.3942405-17-tabba@google.com>
From: Gavin Shan
In-Reply-To: <20250513163438.3942405-17-tabba@google.com>

Hi Fuad,

On 5/14/25 2:34 AM, Fuad Tabba wrote:
> Expand the guest_memfd selftests to include testing mapping guest
> memory for VM types that support it.
>
> Also, build the guest_memfd selftest for arm64.
>
> Co-developed-by: Ackerley Tng
> Signed-off-by: Ackerley Tng
> Signed-off-by: Fuad Tabba
> ---
>  tools/testing/selftests/kvm/Makefile.kvm      |   1 +
>  .../testing/selftests/kvm/guest_memfd_test.c  | 145 +++++++++++++++---
>  2 files changed, 126 insertions(+), 20 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
> index f62b0a5aba35..ccf95ed037c3 100644
> --- a/tools/testing/selftests/kvm/Makefile.kvm
> +++ b/tools/testing/selftests/kvm/Makefile.kvm
> @@ -163,6 +163,7 @@ TEST_GEN_PROGS_arm64 += access_tracking_perf_test
>  TEST_GEN_PROGS_arm64 += arch_timer
>  TEST_GEN_PROGS_arm64 += coalesced_io_test
>  TEST_GEN_PROGS_arm64 += dirty_log_perf_test
> +TEST_GEN_PROGS_arm64 += guest_memfd_test
>  TEST_GEN_PROGS_arm64 += get-reg-list
>  TEST_GEN_PROGS_arm64 += memslot_modification_stress_test
>  TEST_GEN_PROGS_arm64 += memslot_perf_test
> diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
> index ce687f8d248f..443c49185543 100644
> --- a/tools/testing/selftests/kvm/guest_memfd_test.c
> +++ b/tools/testing/selftests/kvm/guest_memfd_test.c
> @@ -34,12 +34,46 @@ static void test_file_read_write(int fd)
>               "pwrite on a guest_mem fd should fail");
>  }
>
> -static void test_mmap(int fd, size_t page_size)
> +static void test_mmap_allowed(int fd, size_t page_size, size_t total_size)
> +{
> +        const char val = 0xaa;
> +        char *mem;
> +        size_t i;
> +        int ret;
> +
> +        mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> +        TEST_ASSERT(mem != MAP_FAILED, "mmaping() guest memory should pass.");
> +
> +        memset(mem, val, total_size);
> +        for (i = 0; i < total_size; i++)
> +                TEST_ASSERT_EQ(mem[i], val);
> +
> +        ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
> +                        page_size);
> +        TEST_ASSERT(!ret, "fallocate the first page should succeed");
> +
> +        for (i = 0; i < page_size; i++)
> +                TEST_ASSERT_EQ(mem[i], 0x00);
> +        for (; i < total_size; i++)
> +                TEST_ASSERT_EQ(mem[i], val);
> +
> +        memset(mem, val, total_size);
> +        for (i = 0; i < total_size; i++)
> +                TEST_ASSERT_EQ(mem[i], val);
> +

The last memset() and the check of the resident values look redundant because
the same test has already been covered by the first memset(). If we really want
to double-confirm that the page cache is writable, it would be enough to cover
the first page.
Otherwise, I guess this hunk of code can be removed :)

        memset(mem, val, page_size);
        for (i = 0; i < page_size; i++)
                TEST_ASSERT_EQ(mem[i], val);

> +        ret = munmap(mem, total_size);
> +        TEST_ASSERT(!ret, "munmap should succeed");
> +}
> +
> +static void test_mmap_denied(int fd, size_t page_size, size_t total_size)
>  {
>          char *mem;
>
>          mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
>          TEST_ASSERT_EQ(mem, MAP_FAILED);
> +
> +        mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> +        TEST_ASSERT_EQ(mem, MAP_FAILED);
>  }
>
>  static void test_file_size(int fd, size_t page_size, size_t total_size)
> @@ -120,26 +154,19 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
>          }
>  }
>
> -static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
> +static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm,
> +                                                  uint64_t guest_memfd_flags,
> +                                                  size_t page_size)
>  {
> -        size_t page_size = getpagesize();
> -        uint64_t flag;
>          size_t size;
>          int fd;
>
>          for (size = 1; size < page_size; size++) {
> -                fd = __vm_create_guest_memfd(vm, size, 0);
> +                fd = __vm_create_guest_memfd(vm, size, guest_memfd_flags);
>                  TEST_ASSERT(fd == -1 && errno == EINVAL,
>                              "guest_memfd() with non-page-aligned page size '0x%lx' should fail with EINVAL",
>                              size);
>          }
> -
> -        for (flag = BIT(0); flag; flag <<= 1) {
> -                fd = __vm_create_guest_memfd(vm, page_size, flag);
> -                TEST_ASSERT(fd == -1 && errno == EINVAL,
> -                            "guest_memfd() with flag '0x%lx' should fail with EINVAL",
> -                            flag);
> -        }
>  }
>
>  static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
> @@ -170,30 +197,108 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
>          close(fd1);
>  }
>
> -int main(int argc, char *argv[])
> +static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
> +                           bool expect_mmap_allowed)
>  {
> -        size_t page_size;
> +        struct kvm_vm *vm;
>          size_t total_size;
> +        size_t page_size;
>          int fd;
> -        struct kvm_vm *vm;
>
> -        TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
> +        if (!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type)))
> +                return;
>

The check seems incorrect for aarch64 since 0 is always returned from
kvm_check_cap() there, so the test would be skipped even for VM_TYPE_DEFAULT
on aarch64.
So it would be something like below:

#define VM_TYPE_DEFAULT 0

        if (vm_type != VM_TYPE_DEFAULT &&
            !(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type)))
                return;

>          page_size = getpagesize();
>          total_size = page_size * 4;
>
> -        vm = vm_create_barebones();
> +        vm = vm_create_barebones_type(vm_type);
>
> -        test_create_guest_memfd_invalid(vm);
>          test_create_guest_memfd_multiple(vm);
> +        test_create_guest_memfd_invalid_sizes(vm, guest_memfd_flags, page_size);
>
> -        fd = vm_create_guest_memfd(vm, total_size, 0);
> +        fd = vm_create_guest_memfd(vm, total_size, guest_memfd_flags);
>
>          test_file_read_write(fd);
> -        test_mmap(fd, page_size);
> +
> +        if (expect_mmap_allowed)
> +                test_mmap_allowed(fd, page_size, total_size);
> +        else
> +                test_mmap_denied(fd, page_size, total_size);
> +
>          test_file_size(fd, page_size, total_size);
>          test_fallocate(fd, page_size, total_size);
>          test_invalid_punch_hole(fd, page_size, total_size);
>
>          close(fd);
> +        kvm_vm_release(vm);
> +}
> +
> +static void test_vm_type_gmem_flag_validity(unsigned long vm_type,
> +                                            uint64_t expected_valid_flags)
> +{
> +        size_t page_size = getpagesize();
> +        struct kvm_vm *vm;
> +        uint64_t flag = 0;
> +        int fd;
> +
> +        if (!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type)))
> +                return;

Same as above

> +
> +        vm = vm_create_barebones_type(vm_type);
> +
> +        for (flag = BIT(0); flag; flag <<= 1) {
> +                fd = __vm_create_guest_memfd(vm, page_size, flag);
> +
> +                if (flag & expected_valid_flags) {
> +                        TEST_ASSERT(fd > 0,
> +                                    "guest_memfd() with flag '0x%lx' should be valid",
> +                                    flag);
> +                        close(fd);
> +                } else {
> +                        TEST_ASSERT(fd == -1 && errno == EINVAL,
> +                                    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
> +                                    flag);

It's more robust to have:

        TEST_ASSERT(fd < 0 && errno == EINVAL, ...);

> +                }
> +        }
> +
> +        kvm_vm_release(vm);
> +}
> +
> +static void test_gmem_flag_validity(void)
> +{
> +        uint64_t non_coco_vm_valid_flags = 0;
> +
> +        if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
> +                non_coco_vm_valid_flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> +
> +        test_vm_type_gmem_flag_validity(VM_TYPE_DEFAULT, non_coco_vm_valid_flags);
> +
> +#ifdef __x86_64__
> +        test_vm_type_gmem_flag_validity(KVM_X86_SW_PROTECTED_VM, non_coco_vm_valid_flags);
> +        test_vm_type_gmem_flag_validity(KVM_X86_SEV_VM, 0);
> +        test_vm_type_gmem_flag_validity(KVM_X86_SEV_ES_VM, 0);
> +        test_vm_type_gmem_flag_validity(KVM_X86_SNP_VM, 0);
> +        test_vm_type_gmem_flag_validity(KVM_X86_TDX_VM, 0);
> +#endif
> +}
> +
> +int main(int argc, char *argv[])
> +{
> +        TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
> +
> +        test_gmem_flag_validity();
> +
> +        test_with_type(VM_TYPE_DEFAULT, 0, false);
> +        if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
> +                test_with_type(VM_TYPE_DEFAULT, GUEST_MEMFD_FLAG_SUPPORT_SHARED,
> +                               true);
> +        }
> +
> +#ifdef __x86_64__
> +        test_with_type(KVM_X86_SW_PROTECTED_VM, 0, false);
> +        if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
> +                test_with_type(KVM_X86_SW_PROTECTED_VM,
> +                               GUEST_MEMFD_FLAG_SUPPORT_SHARED, true);
> +        }
> +#endif
> }

Thanks,
Gavin