From: Fuad Tabba <tabba@google.com>
Date: Wed, 21 May 2025 10:38:49 +0100
Subject: Re: [PATCH v9 16/17] KVM: selftests: guest_memfd mmap() test when mapping is allowed
To: Gavin Shan
Cc: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
 pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
 anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
 aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
 brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
 xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
 jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
 isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
 vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
 david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
 liam.merwick@oracle.com, isaku.yamahata@gmail.com,
 kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com,
 quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com,
 quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
 quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
 yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org,
 qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org,
 hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com,
 fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com,
 pankaj.gupta@amd.com, ira.weiny@intel.com
In-Reply-To: <7bbe7aea-3b17-444b-8cd3-c5941950307c@redhat.com>
References: <20250513163438.3942405-1-tabba@google.com>
 <20250513163438.3942405-17-tabba@google.com>
 <7bbe7aea-3b17-444b-8cd3-c5941950307c@redhat.com>

Hi Gavin,

On Wed, 21 May 2025 at 07:53, Gavin Shan wrote:
>
> Hi Fuad,
>
> On 5/14/25 2:34 AM, Fuad Tabba wrote:
> > Expand the guest_memfd selftests to include testing mapping guest
> > memory for VM types that support it.
> >
> > Also, build the guest_memfd selftest for arm64.
> >
> > Co-developed-by: Ackerley Tng <ackerleytng@google.com>
> > Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >   tools/testing/selftests/kvm/Makefile.kvm      |   1 +
> >   .../testing/selftests/kvm/guest_memfd_test.c  | 145 +++++++++++++++---
> >   2 files changed, 126 insertions(+), 20 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
> > index f62b0a5aba35..ccf95ed037c3 100644
> > --- a/tools/testing/selftests/kvm/Makefile.kvm
> > +++ b/tools/testing/selftests/kvm/Makefile.kvm
> > @@ -163,6 +163,7 @@ TEST_GEN_PROGS_arm64 += access_tracking_perf_test
> >   TEST_GEN_PROGS_arm64 += arch_timer
> >   TEST_GEN_PROGS_arm64 += coalesced_io_test
> >   TEST_GEN_PROGS_arm64 += dirty_log_perf_test
> > +TEST_GEN_PROGS_arm64 += guest_memfd_test
> >   TEST_GEN_PROGS_arm64 += get-reg-list
> >   TEST_GEN_PROGS_arm64 += memslot_modification_stress_test
> >   TEST_GEN_PROGS_arm64 += memslot_perf_test
> > diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
> > index ce687f8d248f..443c49185543 100644
> > --- a/tools/testing/selftests/kvm/guest_memfd_test.c
> > +++ b/tools/testing/selftests/kvm/guest_memfd_test.c
> > @@ -34,12 +34,46 @@ static void test_file_read_write(int fd)
> >                       "pwrite on a guest_mem fd should fail");
> >   }
> >
> > -static void test_mmap(int fd, size_t page_size)
> > +static void test_mmap_allowed(int fd, size_t page_size, size_t total_size)
> > +{
> > +        const char val = 0xaa;
> > +        char *mem;
> > +        size_t i;
> > +        int ret;
> > +
> > +        mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> > +        TEST_ASSERT(mem != MAP_FAILED, "mmaping() guest memory should pass.");
> > +
> > +        memset(mem, val, total_size);
> > +        for (i = 0; i < total_size; i++)
> > +                TEST_ASSERT_EQ(mem[i], val);
> > +
> > +        ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
> > +                        page_size);
> > +        TEST_ASSERT(!ret, "fallocate the first page should succeed");
> > +
> > +        for (i = 0; i < page_size; i++)
> > +                TEST_ASSERT_EQ(mem[i], 0x00);
> > +        for (; i < total_size; i++)
> > +                TEST_ASSERT_EQ(mem[i], val);
> > +
> > +        memset(mem, val, total_size);
> > +        for (i = 0; i < total_size; i++)
> > +                TEST_ASSERT_EQ(mem[i], val);
> > +
>
> The last memset() and check of the resident values look redundant because
> the same test has been covered by the first memset(). If we really want to
> double confirm that the page-cache is writable, it would be enough to cover
> the first page. Otherwise, I guess this hunk of code can be removed :)

My goal was to check that it is in fact writable, and that it stores
the expected value, after the punch_hole. I'll limit it to the first
page.
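
To be concrete, what I have in mind for the next version is to keep the
zero-check after the punch-hole and then re-verify writability on the
first page only, i.e. an untested sketch along the lines of your
suggestion below:

     /* The hole-punched range should still be writable via the mapping. */
     memset(mem, val, page_size);
     for (i = 0; i < page_size; i++)
             TEST_ASSERT_EQ(mem[i], val);
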
>
>      memset(mem, val, page_size);
>      for (i = 0; i < page_size; i++)
>              TEST_ASSERT_EQ(mem[i], val);
>
> > +        ret = munmap(mem, total_size);
> > +        TEST_ASSERT(!ret, "munmap should succeed");
> > +}
> > +
> > +static void test_mmap_denied(int fd, size_t page_size, size_t total_size)
> >   {
> >           char *mem;
> >
> >           mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> >           TEST_ASSERT_EQ(mem, MAP_FAILED);
> > +
> > +        mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> > +        TEST_ASSERT_EQ(mem, MAP_FAILED);
> >   }
> >
> >   static void test_file_size(int fd, size_t page_size, size_t total_size)
> > @@ -120,26 +154,19 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
> >           }
> >   }
> >
> > -static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
> > +static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm,
> > +                                                  uint64_t guest_memfd_flags,
> > +                                                  size_t page_size)
> >   {
> > -        size_t page_size = getpagesize();
> > -        uint64_t flag;
> >           size_t size;
> >           int fd;
> >
> >           for (size = 1; size < page_size; size++) {
> > -                fd = __vm_create_guest_memfd(vm, size, 0);
> > +                fd = __vm_create_guest_memfd(vm, size, guest_memfd_flags);
> >                   TEST_ASSERT(fd == -1 && errno == EINVAL,
> >                               "guest_memfd() with non-page-aligned page size '0x%lx' should fail with EINVAL",
> >                               size);
> >           }
> > -
> > -        for (flag = BIT(0); flag; flag <<= 1) {
> > -                fd = __vm_create_guest_memfd(vm, page_size, flag);
> > -                TEST_ASSERT(fd == -1 && errno == EINVAL,
> > -                            "guest_memfd() with flag '0x%lx' should fail with EINVAL",
> > -                            flag);
> > -        }
> >   }
> >
> >   static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
> > @@ -170,30 +197,108 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
> >           close(fd1);
> >   }
> >
> > -int main(int argc, char *argv[])
> > +static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
> > +                           bool expect_mmap_allowed)
> >   {
> > -        size_t page_size;
> > +        struct kvm_vm *vm;
> >           size_t total_size;
> > +        size_t page_size;
> >           int fd;
> > -        struct kvm_vm *vm;
> >
> > -        TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
> > +        if (!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type)))
> > +                return;
> >
>
> The check seems incorrect for aarch64 since 0 is always returned from
> kvm_check_cap() there. The test is skipped for VM_TYPE_DEFAULT on aarch64.
> So it would be something like below:
>
>     #define VM_TYPE_DEFAULT    0
>
>     if (vm_type != VM_TYPE_DEFAULT &&
>         !(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type)))
>             return;

Ack. Thanks for this, and for all the other reviews.
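
I'll probably fold that check into a small helper shared by
test_with_type() and test_vm_type_gmem_flag_validity(); roughly the
following untested sketch (the helper name is just illustrative):

     #define VM_TYPE_DEFAULT 0

     /*
      * kvm_check_cap(KVM_CAP_VM_TYPES) returns 0 on aarch64, which still
      * supports the default VM type, so only require the capability bit
      * for non-default VM types.
      */
     static bool vm_type_is_supported(unsigned long vm_type)
     {
             return vm_type == VM_TYPE_DEFAULT ||
                    (kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type));
     }

     ...

     if (!vm_type_is_supported(vm_type))
             return;
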
Cheers,
/fuad

> >           page_size = getpagesize();
> >           total_size = page_size * 4;
> >
> > -        vm = vm_create_barebones();
> > +        vm = vm_create_barebones_type(vm_type);
> >
> > -        test_create_guest_memfd_invalid(vm);
> >           test_create_guest_memfd_multiple(vm);
> > +        test_create_guest_memfd_invalid_sizes(vm, guest_memfd_flags, page_size);
> >
> > -        fd = vm_create_guest_memfd(vm, total_size, 0);
> > +        fd = vm_create_guest_memfd(vm, total_size, guest_memfd_flags);
> >
> >           test_file_read_write(fd);
> > -        test_mmap(fd, page_size);
> > +
> > +        if (expect_mmap_allowed)
> > +                test_mmap_allowed(fd, page_size, total_size);
> > +        else
> > +                test_mmap_denied(fd, page_size, total_size);
> > +
> >           test_file_size(fd, page_size, total_size);
> >           test_fallocate(fd, page_size, total_size);
> >           test_invalid_punch_hole(fd, page_size, total_size);
> >
> >           close(fd);
> > +        kvm_vm_release(vm);
> > +}
> > +
> > +static void test_vm_type_gmem_flag_validity(unsigned long vm_type,
> > +                                            uint64_t expected_valid_flags)
> > +{
> > +        size_t page_size = getpagesize();
> > +        struct kvm_vm *vm;
> > +        uint64_t flag = 0;
> > +        int fd;
> > +
> > +        if (!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type)))
> > +                return;
>
> Same as above

Ack.

> > +
> > +        vm = vm_create_barebones_type(vm_type);
> > +
> > +        for (flag = BIT(0); flag; flag <<= 1) {
> > +                fd = __vm_create_guest_memfd(vm, page_size, flag);
> > +
> > +                if (flag & expected_valid_flags) {
> > +                        TEST_ASSERT(fd > 0,
> > +                                    "guest_memfd() with flag '0x%lx' should be valid",
> > +                                    flag);
> > +                        close(fd);
> > +                } else {
> > +                        TEST_ASSERT(fd == -1 && errno == EINVAL,
> > +                                    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
> > +                                    flag);
>
> It's more robust to have:
>
>     TEST_ASSERT(fd < 0 && errno == EINVAL, ...);

Ack.

> > +                }
> > +        }
> > +
> > +        kvm_vm_release(vm);
> > +}
> > +
> > +static void test_gmem_flag_validity(void)
> > +{
> > +        uint64_t non_coco_vm_valid_flags = 0;
> > +
> > +        if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
> > +                non_coco_vm_valid_flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> > +
> > +        test_vm_type_gmem_flag_validity(VM_TYPE_DEFAULT, non_coco_vm_valid_flags);
> > +
> > +#ifdef __x86_64__
> > +        test_vm_type_gmem_flag_validity(KVM_X86_SW_PROTECTED_VM, non_coco_vm_valid_flags);
> > +        test_vm_type_gmem_flag_validity(KVM_X86_SEV_VM, 0);
> > +        test_vm_type_gmem_flag_validity(KVM_X86_SEV_ES_VM, 0);
> > +        test_vm_type_gmem_flag_validity(KVM_X86_SNP_VM, 0);
> > +        test_vm_type_gmem_flag_validity(KVM_X86_TDX_VM, 0);
> > +#endif
> > +}
> > +
> > +int main(int argc, char *argv[])
> > +{
> > +        TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
> > +
> > +        test_gmem_flag_validity();
> > +
> > +        test_with_type(VM_TYPE_DEFAULT, 0, false);
> > +        if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
> > +                test_with_type(VM_TYPE_DEFAULT, GUEST_MEMFD_FLAG_SUPPORT_SHARED,
> > +                               true);
> > +        }
> > +
> > +#ifdef __x86_64__
> > +        test_with_type(KVM_X86_SW_PROTECTED_VM, 0, false);
> > +        if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
> > +                test_with_type(KVM_X86_SW_PROTECTED_VM,
> > +                               GUEST_MEMFD_FLAG_SUPPORT_SHARED, true);
> > +        }
> > +#endif
> > }
>
> Thanks,
> Gavin
>