From mboxrd@z Thu Jan  1 00:00:00 1970
From: Fuad Tabba <tabba@google.com>
Date: Thu, 5 Jun 2025 09:25:28 +0100
Subject: Re: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
To: Gavin Shan
Cc: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
	pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
	anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
	brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
	xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
	jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
	isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
	vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
	david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
	liam.merwick@oracle.com, isaku.yamahata@gmail.com,
	kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
	steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
	quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
	quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
	quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
	yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
	will@kernel.org, qperret@google.com, keirf@google.com,
	roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com,
	rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com,
	hughd@google.com, jthoughton@google.com, peterx@redhat.com,
	pankaj.gupta@amd.com, ira.weiny@intel.com
References: <20250527180245.1413463-1-tabba@google.com> <20250527180245.1413463-9-tabba@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Hi Gavin,

On Thu, 5 Jun 2025 at 07:41, Gavin Shan wrote:
>
> Hi Fuad,
>
> On 5/28/25 4:02 AM, Fuad Tabba wrote:
> > This patch enables support for shared memory in guest_memfd, including
> > mapping that memory at the host userspace. This support is gated by the
> > configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
> > flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
> > guest_memfd instance.
> >
> > Co-developed-by: Ackerley Tng
> > Signed-off-by: Ackerley Tng
> > Signed-off-by: Fuad Tabba
> > ---
> >  arch/x86/include/asm/kvm_host.h | 10 ++++
> >  arch/x86/kvm/x86.c              |  3 +-
> >  include/linux/kvm_host.h        | 13 ++++++
> >  include/uapi/linux/kvm.h        |  1 +
> >  virt/kvm/Kconfig                |  5 ++
> >  virt/kvm/guest_memfd.c          | 81 +++++++++++++++++++++++++++++++++
> >  6 files changed, 112 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 709cc2a7ba66..ce9ad4cd93c5 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -2255,8 +2255,18 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
> >
> >  #ifdef CONFIG_KVM_GMEM
> >  #define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
> > +
> > +/*
> > + * CoCo VMs with hardware support that use guest_memfd only for backing private
> > + * memory, e.g., TDX, cannot use guest_memfd with userspace mapping enabled.
> > + */
> > +#define kvm_arch_supports_gmem_shared_mem(kvm)			\
> > +	(IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) &&		\
> > +	 ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM ||	\
> > +	  (kvm)->arch.vm_type == KVM_X86_DEFAULT_VM))
> >  #else
> >  #define kvm_arch_supports_gmem(kvm) false
> > +#define kvm_arch_supports_gmem_shared_mem(kvm) false
> >  #endif
> >
> >  #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 035ced06b2dd..2a02f2457c42 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -12718,7 +12718,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> >  		return -EINVAL;
> >
> >  	kvm->arch.vm_type = type;
> > -	kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
> > +	kvm->arch.supports_gmem =
> > +		type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
> >  	/* Decided by the vendor code for other VM types. */
> >  	kvm->arch.pre_fault_allowed =
> >  		type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 80371475818f..ba83547e62b0 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -729,6 +729,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
> >  }
> >  #endif
> >
> > +/*
> > + * Returns true if this VM supports shared mem in guest_memfd.
> > + *
> > + * Arch code must define kvm_arch_supports_gmem_shared_mem if support for
> > + * guest_memfd is enabled.
> > + */
> > +#if !defined(kvm_arch_supports_gmem_shared_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
> > +static inline bool kvm_arch_supports_gmem_shared_mem(struct kvm *kvm)
> > +{
> > +	return false;
> > +}
> > +#endif
> > +
> >  #ifndef kvm_arch_has_readonly_mem
> >  static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
> >  {
> > diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> > index b6ae8ad8934b..c2714c9d1a0e 100644
> > --- a/include/uapi/linux/kvm.h
> > +++ b/include/uapi/linux/kvm.h
> > @@ -1566,6 +1566,7 @@ struct kvm_memory_attributes {
> >  #define KVM_MEMORY_ATTRIBUTE_PRIVATE	(1ULL << 3)
> >
> >  #define KVM_CREATE_GUEST_MEMFD	_IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)
> > +#define GUEST_MEMFD_FLAG_SUPPORT_SHARED	(1ULL << 0)
> >
> >  struct kvm_create_guest_memfd {
> >  	__u64 size;
> > diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> > index 559c93ad90be..df225298ab10 100644
> > --- a/virt/kvm/Kconfig
> > +++ b/virt/kvm/Kconfig
> > @@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
> >  config HAVE_KVM_ARCH_GMEM_INVALIDATE
> >  	bool
> >  	depends on KVM_GMEM
> > +
> > +config KVM_GMEM_SHARED_MEM
> > +	select KVM_GMEM
> > +	bool
> > +	prompt "Enable support for non-private (shared) memory in guest_memfd"
> > diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> > index 6db515833f61..5d34712f64fc 100644
> > --- a/virt/kvm/guest_memfd.c
> > +++ b/virt/kvm/guest_memfd.c
> > @@ -312,7 +312,81 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
> >  	return gfn - slot->base_gfn + slot->gmem.pgoff;
> >  }
> >
> > +static bool kvm_gmem_supports_shared(struct inode *inode)
> > +{
> > +	u64 flags;
> > +
> > +	if (!IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM))
> > +		return false;
> > +
> > +	flags = (u64)inode->i_private;
> > +
> > +	return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> > +}
> > +
> > +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
> > +static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
> > +{
> > +	struct inode *inode = file_inode(vmf->vma->vm_file);
> > +	struct folio *folio;
> > +	vm_fault_t ret = VM_FAULT_LOCKED;
> > +
> > +	folio = kvm_gmem_get_folio(inode, vmf->pgoff);
> > +	if (IS_ERR(folio)) {
> > +		int err = PTR_ERR(folio);
> > +
> > +		if (err == -EAGAIN)
> > +			return VM_FAULT_RETRY;
> > +
> > +		return vmf_error(err);
> > +	}
> > +
> > +	if (WARN_ON_ONCE(folio_test_large(folio))) {
> > +		ret = VM_FAULT_SIGBUS;
> > +		goto out_folio;
> > +	}
> > +
> > +	if (!folio_test_uptodate(folio)) {
> > +		clear_highpage(folio_page(folio, 0));
> > +		kvm_gmem_mark_prepared(folio);
> > +	}
> > +
> > +	vmf->page = folio_file_page(folio, vmf->pgoff);
> > +
> > +out_folio:
> > +	if (ret != VM_FAULT_LOCKED) {
> > +		folio_unlock(folio);
> > +		folio_put(folio);
> > +	}
> > +
> > +	return ret;
> > +}
> > +
> > +static const struct vm_operations_struct kvm_gmem_vm_ops = {
> > +	.fault = kvm_gmem_fault_shared,
> > +};
> > +
> > +static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
> > +{
> > +	if (!kvm_gmem_supports_shared(file_inode(file)))
> > +		return -ENODEV;
> > +
> > +	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
> > +	    (VM_SHARED | VM_MAYSHARE)) {
> > +		return -EINVAL;
> > +	}
> > +
> > +	vma->vm_ops = &kvm_gmem_vm_ops;
> > +
> > +	return 0;
> > +}
> > +#else
> > +#define kvm_gmem_mmap NULL
> > +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
> > +
> >  static struct file_operations kvm_gmem_fops = {
> > +	.mmap		= kvm_gmem_mmap,
> >  	.open		= generic_file_open,
> >  	.release	= kvm_gmem_release,
> >  	.fallocate	= kvm_gmem_fallocate,
> > @@ -463,6 +537,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
> >  	u64 flags = args->flags;
> >  	u64 valid_flags = 0;
> >
>
> It seems there is an uncovered corner case, which exists in the current code (not
> directly caused by this patch): after .mmap is hooked, the address space
> (inode->i_mapping) is exposed to user space for further requests like madvise().
> madvise(MADV_COLLAPSE) can potentially collapse the pages into a huge page (folio)
> under the following assumptions. That's not the expected behavior, since huge pages
> aren't supported yet.
>
> - CONFIG_READ_ONLY_THP_FOR_FS = y
> - the folios in the pagecache have been fully populated, which can be done by
>   kvm_gmem_fallocate() or kvm_gmem_get_pfn().
> - mmap(0x00000f0100000000, ..., MAP_FIXED_NOREPLACE) on the guest-memfd, and then
>   madvise(buf, size, MADV_COLLAPSE).
>
>   sys_madvise
>     do_madvise
>       madvise_do_behavior
>         madvise_vma_behavior
>           madvise_collapse
>             thp_vma_allowable_order
>               file_thp_enabled    // need to return false to bail from the path earlier, at least
>             hpage_collapse_scan_file
>               collapse_pte_mapped_thp
>
> The fix would be to increase inode->i_writecount using allow_write_access() in
> __kvm_gmem_create() to break the check done by file_thp_enabled().

Thanks for catching this. Even though it's not an issue until huge page support is
added, we might as well handle it now. Out of curiosity, how did you spot this?
Cheers,
/fuad

> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 0cd12f94958b..fe706c9f21cf 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -502,6 +502,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
>  	}
>
>  	file->f_flags |= O_LARGEFILE;
> +	allow_write_access(file);
>
> > +	if (kvm_arch_supports_gmem_shared_mem(kvm))
> > +		valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> > +
> >  	if (flags & ~valid_flags)
> >  		return -EINVAL;
> >
> > @@ -501,6 +578,10 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
> >  	    offset + size > i_size_read(inode))
> >  		goto err;
> >
> > +	if (kvm_gmem_supports_shared(inode) &&
> > +	    !kvm_arch_supports_gmem_shared_mem(kvm))
> > +		goto err;
> > +
> >  	filemap_invalidate_lock(inode->i_mapping);
> >
> >  	start = offset >> PAGE_SHIFT;
>
> Thanks,
> Gavin
>