From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <20250527180245.1413463-1-tabba@google.com>
 <20250527180245.1413463-17-tabba@google.com>
 <025ae4ea-822b-44a1-8f78-38afc0e4b07e@redhat.com>
In-Reply-To:
From: Fuad Tabba
Date: Wed, 4 Jun 2025 11:25:06 +0100
Message-ID:
Subject: Re: [PATCH v10 16/16] KVM: selftests: guest_memfd mmap() test when mapping is allowed
To: Gavin Shan
Cc: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
 pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
 anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
 aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
 brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
 xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
 jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
 isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
 vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
 david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
 liam.merwick@oracle.com, isaku.yamahata@gmail.com,
 kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
 steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
 quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com,
 quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com,
 james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev,
 maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com,
 roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com,
 rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
 jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com,
 ira.weiny@intel.com
Content-Type: text/plain; charset="UTF-8"

Hi Gavin,

On Wed, 4 Jun 2025 at 11:05, Gavin Shan wrote:
>
> Hi Fuad,
>
> On 6/4/25 7:48 PM, Fuad Tabba wrote:
> > On Wed, 4 Jun 2025 at 10:20, Gavin Shan wrote:
> >>
> >> On 5/28/25 4:02 AM, Fuad Tabba wrote:
> >>> Expand the guest_memfd selftests to include testing mapping guest
> >>> memory for VM types that support it.
> >>>
> >>> Also, build the guest_memfd selftest for arm64.
> >>>
> >>> Co-developed-by: Ackerley Tng
> >>> Signed-off-by: Ackerley Tng
> >>> Signed-off-by: Fuad Tabba
> >>> ---
> >>>  tools/testing/selftests/kvm/Makefile.kvm     |   1 +
> >>>  .../testing/selftests/kvm/guest_memfd_test.c | 162 +++++++++++++++---
> >>>  2 files changed, 142 insertions(+), 21 deletions(-)
> >>>
> >>
> >> The test case fails on 64KB host, and the file size in test_create_guest_memfd_multiple()
> >> would be page_size and (2 * page_size). The fixed size 4096 and 8192 aren't aligned to 64KB.
> >
> > Yes, however, this patch didn't introduce or modify this test. I think
> > it's better to fix it in a separate patch independent of this series.
> >
>
> Yeah, it can be separate patch or a preparatory patch before PATCH[16/16]
> of this series because x86 hasn't 64KB page size. The currently fixed sizes
> (4096 and 8192) are aligned to page size on x86. and 'guest-memfd-test' is
> enabled on arm64 by this series.

You're right. This patch enables support for arm64, so it should be
fixed in conjunction with that. As you suggested, I'll add a separate
patch before this one that fixes this and enables support for arm64.

Thanks again!
/fuad
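
Roughly what I have in mind for that preparatory patch is to derive the
sizes in test_create_guest_memfd_multiple() from getpagesize() instead of
hardcoding 4096/8192. Only a sketch (it relies on the existing includes in
guest_memfd_test.c and omits the existing size checks on the two fds), but
something along these lines:

static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
{
        size_t page_size = getpagesize();
        int fd1, fd2;

        /* Sizes derived from the host page size, so 64KB hosts pass too. */
        fd1 = __vm_create_guest_memfd(vm, page_size, 0);
        TEST_ASSERT(fd1 != -1, "memfd creation should succeed");

        fd2 = __vm_create_guest_memfd(vm, page_size * 2, 0);
        TEST_ASSERT(fd2 != -1, "memfd creation should succeed");

        close(fd2);
        close(fd1);
}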

> >> # ./guest_memfd_test
> >> Random seed: 0x6b8b4567
> >> ==== Test Assertion Failure ====
> >>   guest_memfd_test.c:178: fd1 != -1
> >>   pid=7565 tid=7565 errno=22 - Invalid argument
> >>      1  0x000000000040252f: test_create_guest_memfd_multiple at guest_memfd_test.c:178
> >>      2   (inlined by) test_with_type at guest_memfd_test.c:231
> >>      3  0x00000000004020c7: main at guest_memfd_test.c:306
> >>      4  0x0000ffff8cec733f: ?? ??:0
> >>      5  0x0000ffff8cec7417: ?? ??:0
> >>      6  0x00000000004021ef: _start at ??:?
> >>   memfd creation should succeed
> >>
> >>> diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
> >>> index f62b0a5aba35..ccf95ed037c3 100644
> >>> --- a/tools/testing/selftests/kvm/Makefile.kvm
> >>> +++ b/tools/testing/selftests/kvm/Makefile.kvm
> >>> @@ -163,6 +163,7 @@ TEST_GEN_PROGS_arm64 += access_tracking_perf_test
> >>>  TEST_GEN_PROGS_arm64 += arch_timer
> >>>  TEST_GEN_PROGS_arm64 += coalesced_io_test
> >>>  TEST_GEN_PROGS_arm64 += dirty_log_perf_test
> >>> +TEST_GEN_PROGS_arm64 += guest_memfd_test
> >>>  TEST_GEN_PROGS_arm64 += get-reg-list
> >>>  TEST_GEN_PROGS_arm64 += memslot_modification_stress_test
> >>>  TEST_GEN_PROGS_arm64 += memslot_perf_test
> >>> diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
> >>> index ce687f8d248f..3d6765bc1f28 100644
> >>> --- a/tools/testing/selftests/kvm/guest_memfd_test.c
> >>> +++ b/tools/testing/selftests/kvm/guest_memfd_test.c
> >>> @@ -34,12 +34,46 @@ static void test_file_read_write(int fd)
> >>>  		    "pwrite on a guest_mem fd should fail");
> >>>  }
> >>>
> >>> -static void test_mmap(int fd, size_t page_size)
> >>> +static void test_mmap_allowed(int fd, size_t page_size, size_t total_size)
> >>> +{
> >>> +	const char val = 0xaa;
> >>> +	char *mem;
> >>> +	size_t i;
> >>> +	int ret;
> >>> +
> >>> +	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> >>> +	TEST_ASSERT(mem != MAP_FAILED, "mmaping() guest memory should pass.");
> >>> +
> >>
> >> If you agree, I think it would be nice to ensure guest-memfd doesn't support
> >> copy-on-write, more details are provided below.
> >
> > Good idea. I think we can do this without adding much more code. I'll
> > add a check in test_mmap_allowed(), since the idea is, even if mmap()
> > is supported, we still can't COW. I'll rename the functions to make
> > this a bit clearer (i.e., supported instead of allowed).
> >
> > Thank you for this and thank you for the reviews!
>
> Sounds good to me :)
>
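For reference, this is roughly the shape I have in mind for that check --
just a sketch, assuming the renamed test_mmap_supported() and that
guest_memfd rejects copy-on-write (MAP_PRIVATE) mappings outright:

static void test_mmap_supported(int fd, size_t page_size, size_t total_size)
{
        char *mem;

        /*
         * Sketch: a copy-on-write (MAP_PRIVATE) mapping of guest_memfd
         * should fail even when shared mmap() is supported.
         */
        mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
        TEST_ASSERT_EQ(mem, MAP_FAILED);

        /* ... followed by the existing MAP_SHARED read/write and PUNCH_HOLE checks ... */
}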
> >>> +	memset(mem, val, total_size);
> >>> +	for (i = 0; i < total_size; i++)
> >>> +		TEST_ASSERT_EQ(mem[i], val);
> >>> +
> >>> +	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
> >>> +			page_size);
> >>> +	TEST_ASSERT(!ret, "fallocate the first page should succeed");
> >>> +
> >>> +	for (i = 0; i < page_size; i++)
> >>> +		TEST_ASSERT_EQ(mem[i], 0x00);
> >>> +	for (; i < total_size; i++)
> >>> +		TEST_ASSERT_EQ(mem[i], val);
> >>> +
> >>> +	memset(mem, val, page_size);
> >>> +	for (i = 0; i < total_size; i++)
> >>> +		TEST_ASSERT_EQ(mem[i], val);
> >>> +
> >>> +	ret = munmap(mem, total_size);
> >>> +	TEST_ASSERT(!ret, "munmap should succeed");
> >>> +}
> >>> +
> >>> +static void test_mmap_denied(int fd, size_t page_size, size_t total_size)
> >>>  {
> >>>  	char *mem;
> >>>
> >>>  	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> >>>  	TEST_ASSERT_EQ(mem, MAP_FAILED);
> >>> +
> >>> +	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> >>> +	TEST_ASSERT_EQ(mem, MAP_FAILED);
> >>>  }
> >>
> >> Add one more argument to test_mmap_denied as the flags passed to mmap().
> >>
> >> static void test_mmap_denied(int fd, size_t page_size, size_t total_size, int mmap_flags)
> >> {
> >>     mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, mmap_flags, fd, 0);
> >> }
> >>
> >>>
> >>>  static void test_file_size(int fd, size_t page_size, size_t total_size)
> >>> @@ -120,26 +154,19 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
> >>>  	}
> >>>  }
> >>>
> >>> -static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
> >>> +static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm,
> >>> +						  uint64_t guest_memfd_flags,
> >>> +						  size_t page_size)
> >>>  {
> >>> -	size_t page_size = getpagesize();
> >>> -	uint64_t flag;
> >>>  	size_t size;
> >>>  	int fd;
> >>>
> >>>  	for (size = 1; size < page_size; size++) {
> >>> -		fd = __vm_create_guest_memfd(vm, size, 0);
> >>> -		TEST_ASSERT(fd == -1 && errno == EINVAL,
> >>> +		fd = __vm_create_guest_memfd(vm, size, guest_memfd_flags);
> >>> +		TEST_ASSERT(fd < 0 && errno == EINVAL,
> >>>  			    "guest_memfd() with non-page-aligned page size '0x%lx' should fail with EINVAL",
> >>>  			    size);
> >>>  	}
> >>> -
> >>> -	for (flag = BIT(0); flag; flag <<= 1) {
> >>> -		fd = __vm_create_guest_memfd(vm, page_size, flag);
> >>> -		TEST_ASSERT(fd == -1 && errno == EINVAL,
> >>> -			    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
> >>> -			    flag);
> >>> -	}
> >>>  }
> >>>
> >>>  static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
> >>> @@ -170,30 +197,123 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
> >>>  	close(fd1);
> >>>  }
> >>>
> >>> -int main(int argc, char *argv[])
> >>> +#define GUEST_MEMFD_TEST_SLOT	10
> >>> +#define GUEST_MEMFD_TEST_GPA	0x100000000
> >>> +
> >>> +static bool check_vm_type(unsigned long vm_type)
> >>>  {
> >>> -	size_t page_size;
> >>> +	/*
> >>> +	 * Not all architectures support KVM_CAP_VM_TYPES. However, those that
> >>> +	 * support guest_memfd have that support for the default VM type.
> >>> +	 */
> >>> +	if (vm_type == VM_TYPE_DEFAULT)
> >>> +		return true;
> >>> +
> >>> +	return kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type);
> >>> +}
> >>> +
> >>> +static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
> >>> +			   bool expect_mmap_allowed)
> >>> +{
> >>> +	struct kvm_vm *vm;
> >>>  	size_t total_size;
> >>> +	size_t page_size;
> >>>  	int fd;
> >>> -	struct kvm_vm *vm;
> >>>
> >>> -	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
> >>> +	if (!check_vm_type(vm_type))
> >>> +		return;
> >>>
> >>>  	page_size = getpagesize();
> >>>  	total_size = page_size * 4;
> >>>
> >>> -	vm = vm_create_barebones();
> >>> +	vm = vm_create_barebones_type(vm_type);
> >>>
> >>> -	test_create_guest_memfd_invalid(vm);
> >>>  	test_create_guest_memfd_multiple(vm);
> >>> +	test_create_guest_memfd_invalid_sizes(vm, guest_memfd_flags, page_size);
> >>>
> >>> -	fd = vm_create_guest_memfd(vm, total_size, 0);
> >>> +	fd = vm_create_guest_memfd(vm, total_size, guest_memfd_flags);
> >>>
> >>>  	test_file_read_write(fd);
> >>> -	test_mmap(fd, page_size);
> >>> +
> >>> +	if (expect_mmap_allowed)
> >>> +		test_mmap_allowed(fd, page_size, total_size);
> >>> +	else
> >>> +		test_mmap_denied(fd, page_size, total_size);
> >>> +
> >>
> >>    if (expect_mmap_allowed) {
> >>            test_mmap_denied(fd, page_size, total_size, MAP_PRIVATE);
> >>            test_mmap_allowed(fd, page_size, total_size);
> >>    } else {
> >>            test_mmap_denied(fd, page_size, total_size, MAP_SHARED);
> >>    }
> >>
> >>>  	test_file_size(fd, page_size, total_size);
> >>>  	test_fallocate(fd, page_size, total_size);
> >>>  	test_invalid_punch_hole(fd, page_size, total_size);
> >>>
> >>>  	close(fd);
> >>> +	kvm_vm_release(vm);
> >>> +}
> >>> +
> >>> +static void test_vm_type_gmem_flag_validity(unsigned long vm_type,
> >>> +					    uint64_t expected_valid_flags)
> >>> +{
> >>> +	size_t page_size = getpagesize();
> >>> +	struct kvm_vm *vm;
> >>> +	uint64_t flag = 0;
> >>> +	int fd;
> >>> +
> >>> +	if (!check_vm_type(vm_type))
> >>> +		return;
> >>> +
> >>> +	vm = vm_create_barebones_type(vm_type);
> >>> +
> >>> +	for (flag = BIT(0); flag; flag <<= 1) {
> >>> +		fd = __vm_create_guest_memfd(vm, page_size, flag);
> >>> +
> >>> +		if (flag & expected_valid_flags) {
> >>> +			TEST_ASSERT(fd >= 0,
> >>> +				    "guest_memfd() with flag '0x%lx' should be valid",
> >>> +				    flag);
> >>> +			close(fd);
> >>> +		} else {
> >>> +			TEST_ASSERT(fd < 0 && errno == EINVAL,
> >>> +				    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
> >>> +				    flag);
> >>> +		}
> >>> +	}
> >>> +
> >>> +	kvm_vm_release(vm);
> >>> +}
> >>> +
> >>> +static void test_gmem_flag_validity(void)
> >>> +{
> >>> +	uint64_t non_coco_vm_valid_flags = 0;
> >>> +
> >>> +	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
> >>> +		non_coco_vm_valid_flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> >>> +
> >>> +	test_vm_type_gmem_flag_validity(VM_TYPE_DEFAULT, non_coco_vm_valid_flags);
> >>> +
> >>> +#ifdef __x86_64__
> >>> +	test_vm_type_gmem_flag_validity(KVM_X86_SW_PROTECTED_VM, non_coco_vm_valid_flags);
> >>> +	test_vm_type_gmem_flag_validity(KVM_X86_SEV_VM, 0);
> >>> +	test_vm_type_gmem_flag_validity(KVM_X86_SEV_ES_VM, 0);
> >>> +	test_vm_type_gmem_flag_validity(KVM_X86_SNP_VM, 0);
> >>> +	test_vm_type_gmem_flag_validity(KVM_X86_TDX_VM, 0);
> >>> +#endif
> >>> +}
> >>> +
> >>> +int main(int argc, char *argv[])
> >>> +{
> >>> +	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
> >>> +
> >>> +	test_gmem_flag_validity();
> >>> +
> >>> +	test_with_type(VM_TYPE_DEFAULT, 0, false);
> >>> +	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
> >>> +		test_with_type(VM_TYPE_DEFAULT, GUEST_MEMFD_FLAG_SUPPORT_SHARED,
> >>> +			       true);
> >>> +	}
> >>> +
> >>> +#ifdef __x86_64__
> >>> +	test_with_type(KVM_X86_SW_PROTECTED_VM, 0, false);
> >>> +	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
> >>> +		test_with_type(KVM_X86_SW_PROTECTED_VM,
> >>> +			       GUEST_MEMFD_FLAG_SUPPORT_SHARED, true);
> >>> +	}
> >>> +#endif
> >>>  }
> >>
>
> Thanks,
> Gavin
>