From mboxrd@z Thu Jan 1 00:00:00 1970
From: Fuad Tabba <tabba@google.com>
Date: Wed, 14 May 2025 12:30:20 +0100
Subject: Re: [PATCH v9 07/17] KVM: guest_memfd: Allow host to map guest_memfd() pages
To: "Roy, Patrick"
Cc: ackerleytng@google.com, akpm@linux-foundation.org, amoorthy@google.com,
	anup@brainfault.org, aou@eecs.berkeley.edu, brauner@kernel.org,
	catalin.marinas@arm.com, chao.p.peng@linux.intel.com, chenhuacai@kernel.org,
	david@redhat.com, dmatlack@google.com, fvdl@google.com, hch@infradead.org,
	hughd@google.com, ira.weiny@intel.com, isaku.yamahata@gmail.com,
	isaku.yamahata@intel.com, james.morse@arm.com, jarkko@kernel.org,
	jgg@nvidia.com, jhubbard@nvidia.com, jthoughton@google.com, keirf@google.com,
	kirill.shutemov@linux.intel.com, kvm@vger.kernel.org, liam.merwick@oracle.com,
	linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, mail@maciej.szmigiero.name,
	maz@kernel.org, mic@digikod.net, michael.roth@amd.com, mpe@ellerman.id.au,
	oliver.upton@linux.dev, palmer@dabbelt.com, pankaj.gupta@amd.com,
	paul.walmsley@sifive.com, pbonzini@redhat.com, peterx@redhat.com,
	qperret@google.com, quic_cvanscha@quicinc.com, quic_eberman@quicinc.com,
	quic_mnalajal@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com,
	quic_svaddagi@quicinc.com, quic_tsoni@quicinc.com, rientjes@google.com,
	seanjc@google.com, shuah@kernel.org, steven.price@arm.com,
	suzuki.poulose@arm.com, vannapurve@google.com, vbabka@suse.cz,
	viro@zeniv.linux.org.uk, wei.w.wang@intel.com, will@kernel.org,
	willy@infradead.org, xiaoyao.li@intel.com, yilun.xu@intel.com,
	yuzenghui@huawei.com
In-Reply-To: <20250514100733.4079-1-roypat@amazon.co.uk>
References: <20250513163438.3942405-8-tabba@google.com> <20250514100733.4079-1-roypat@amazon.co.uk>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Hi Patrick,

On Wed, 14 May 2025 at 11:07, Roy, Patrick wrote:
>
> On Tue, 2025-05-13 at 17:34 +0100, Fuad Tabba wrote:
> > This patch enables support for shared memory in guest_memfd, including
> > mapping that memory at the host userspace. This support is gated by the
> > configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
> > flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
> > guest_memfd instance.
> >
> > Co-developed-by: Ackerley Tng
> > Signed-off-by: Ackerley Tng
> > Signed-off-by: Fuad Tabba
> > ---
> >  arch/x86/include/asm/kvm_host.h | 10 ++++
> >  include/linux/kvm_host.h        | 13 +++++
> >  include/uapi/linux/kvm.h        |  1 +
> >  virt/kvm/Kconfig                |  5 ++
> >  virt/kvm/guest_memfd.c          | 88 +++++++++++++++++++++++++++++++++
> >  5 files changed, 117 insertions(+)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 709cc2a7ba66..f72722949cae 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -2255,8 +2255,18 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
> >
> >  #ifdef CONFIG_KVM_GMEM
> >  #define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
> > +
> > +/*
> > + * CoCo VMs with hardware support that use guest_memfd only for backing private
> > + * memory, e.g., TDX, cannot use guest_memfd with userspace mapping enabled.
> > + */
> > +#define kvm_arch_vm_supports_gmem_shared_mem(kvm)               \
> > +        (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) &&               \
> > +         ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM ||      \
> > +          (kvm)->arch.vm_type == KVM_X86_DEFAULT_VM))
>
> I forgot what we ended up deciding wrt "allow guest_memfd usage for default VMs
> on x86" in the call two weeks ago, but if we want to do that as part of this
> series, then this also needs

Yes we did. I missed it in this patch. I'll fix it.
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 12433b1e755b..904b15c678d6 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12716,7 +12716,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>                 return -EINVAL;
>
>         kvm->arch.vm_type = type;
> -       kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
> +       kvm->arch.supports_gmem = type == KVM_X86_SW_PROTECTED_VM || type == KVM_X86_DEFAULT_VM;
>         /* Decided by the vendor code for other VM types. */
>         kvm->arch.pre_fault_allowed =
>                 type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
>
> and with that I was able to run my firecracker tests on top of this patch
> series with X86_DEFAULT_VM. But I did wonder about this define in
> x86/include/asm/kvm_host.h:
>
> /* SMM is currently unsupported for guests with guest_memfd (esp private) memory. */
> # define kvm_arch_nr_memslot_as_ids(kvm) (kvm_arch_supports_gmem(kvm) ? 1 : 2)
>
> which I'm not really sure what to make of, but which I think means enabling
> guest_memfd for X86_DEFAULT_VM isn't as straight-forward as the above diff :/

Not quite, but I'll sort it out.

Thanks,
/fuad

> Best,
> Patrick
>
> >  #else
> >  #define kvm_arch_supports_gmem(kvm) false
> > +#define kvm_arch_vm_supports_gmem_shared_mem(kvm) false
> >  #endif
> >
> >  #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index ae70e4e19700..2ec89c214978 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -729,6 +729,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
> >  }
> >  #endif
> >
> > +/*
> > + * Returns true if this VM supports shared mem in guest_memfd.
> > + *
> > + * Arch code must define kvm_arch_vm_supports_gmem_shared_mem if support for
> > + * guest_memfd is enabled.
> > + */
> > +#if !defined(kvm_arch_vm_supports_gmem_shared_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
> > +static inline bool kvm_arch_vm_supports_gmem_shared_mem(struct kvm *kvm)
> > +{
> > +        return false;
> > +}
> > +#endif
> > +
> >  #ifndef kvm_arch_has_readonly_mem
> >  static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
> >  {
> > diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> > index b6ae8ad8934b..9857022a0f0c 100644
> > --- a/include/uapi/linux/kvm.h
> > +++ b/include/uapi/linux/kvm.h
> > @@ -1566,6 +1566,7 @@ struct kvm_memory_attributes {
> >  #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3)
> >
> >  #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)
> > +#define GUEST_MEMFD_FLAG_SUPPORT_SHARED (1UL << 0)
> >
> >  struct kvm_create_guest_memfd {
> >         __u64 size;
> > diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> > index 559c93ad90be..f4e469a62a60 100644
> > --- a/virt/kvm/Kconfig
> > +++ b/virt/kvm/Kconfig
> > @@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
> >  config HAVE_KVM_ARCH_GMEM_INVALIDATE
> >         bool
> >         depends on KVM_GMEM
> > +
> > +config KVM_GMEM_SHARED_MEM
> > +       select KVM_GMEM
> > +       bool
> > +       prompt "Enables in-place shared memory for guest_memfd"
> > diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> > index 6db515833f61..8e6d1866b55e 100644
> > --- a/virt/kvm/guest_memfd.c
> > +++ b/virt/kvm/guest_memfd.c
> > @@ -312,7 +312,88 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
> >         return gfn - slot->base_gfn + slot->gmem.pgoff;
> >  }
> >
> > +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
> > +
> > +static bool kvm_gmem_supports_shared(struct inode *inode)
> > +{
> > +        uint64_t flags = (uint64_t)inode->i_private;
> > +
> > +        return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> > +}
> > +
> > +static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
> > +{
> > +        struct inode *inode = file_inode(vmf->vma->vm_file);
> > +        struct folio *folio;
> > +        vm_fault_t ret = VM_FAULT_LOCKED;
> > +
> > +        filemap_invalidate_lock_shared(inode->i_mapping);
> > +
> > +        folio = kvm_gmem_get_folio(inode, vmf->pgoff);
> > +        if (IS_ERR(folio)) {
> > +                int err = PTR_ERR(folio);
> > +
> > +                if (err == -EAGAIN)
> > +                        ret = VM_FAULT_RETRY;
> > +                else
> > +                        ret = vmf_error(err);
> > +
> > +                goto out_filemap;
> > +        }
> > +
> > +        if (folio_test_hwpoison(folio)) {
> > +                ret = VM_FAULT_HWPOISON;
> > +                goto out_folio;
> > +        }
> > +
> > +        if (WARN_ON_ONCE(folio_test_large(folio))) {
> > +                ret = VM_FAULT_SIGBUS;
> > +                goto out_folio;
> > +        }
> > +
> > +        if (!folio_test_uptodate(folio)) {
> > +                clear_highpage(folio_page(folio, 0));
> > +                kvm_gmem_mark_prepared(folio);
> > +        }
> > +
> > +        vmf->page = folio_file_page(folio, vmf->pgoff);
> > +
> > +out_folio:
> > +        if (ret != VM_FAULT_LOCKED) {
> > +                folio_unlock(folio);
> > +                folio_put(folio);
> > +        }
> > +
> > +out_filemap:
> > +        filemap_invalidate_unlock_shared(inode->i_mapping);
> > +
> > +        return ret;
> > +}
> > +
> > +static const struct vm_operations_struct kvm_gmem_vm_ops = {
> > +        .fault = kvm_gmem_fault_shared,
> > +};
> > +
> > +static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
> > +{
> > +        if (!kvm_gmem_supports_shared(file_inode(file)))
> > +                return -ENODEV;
> > +
> > +        if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
> > +            (VM_SHARED | VM_MAYSHARE)) {
> > +                return -EINVAL;
> > +        }
> > +
> > +        vma->vm_ops = &kvm_gmem_vm_ops;
> > +
> > +        return 0;
> > +}
> > +#else
> > +#define kvm_gmem_mmap NULL
> > +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
> > +
> >  static struct file_operations kvm_gmem_fops = {
> > +        .mmap           = kvm_gmem_mmap,
> >         .open           = generic_file_open,
> >         .release        = kvm_gmem_release,
> >         .fallocate      = kvm_gmem_fallocate,
> > @@ -463,6 +544,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
> >         u64 flags = args->flags;
> >         u64 valid_flags = 0;
> >
> > +        if (kvm_arch_vm_supports_gmem_shared_mem(kvm))
> > +                valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> > +
> >         if (flags & ~valid_flags)
> >                 return -EINVAL;
> >
> > @@ -501,6 +585,10 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
> >                 offset + size > i_size_read(inode))
> >                 goto err;
> >
> > +        if (kvm_gmem_supports_shared(inode) &&
> > +            !kvm_arch_vm_supports_gmem_shared_mem(kvm))
> > +                goto err;
> > +
> >         filemap_invalidate_lock(inode->i_mapping);
> >
> >         start = offset >> PAGE_SHIFT;
> > --
> > 2.49.0.1045.g170613ef41-goog
> >
>
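
For completeness, a minimal userspace sketch (not part of the series) of how
the new flag is meant to be exercised: it assumes the uapi additions above
(GUEST_MEMFD_FLAG_SUPPORT_SHARED) are available in <linux/kvm.h>, uses
KVM_X86_SW_PROTECTED_VM as a VM type the x86 helper in this patch accepts, and
keeps error handling to a minimum.

/*
 * Hypothetical usage sketch: create a VM type that
 * kvm_arch_vm_supports_gmem_shared_mem() accepts, create a guest_memfd with
 * GUEST_MEMFD_FLAG_SUPPORT_SHARED, and mmap() it MAP_SHARED so host-side
 * faults are served by kvm_gmem_fault_shared().
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm_fd = open("/dev/kvm", O_RDWR | O_CLOEXEC);
	if (kvm_fd < 0) {
		perror("open(/dev/kvm)");
		return 1;
	}

	/* One of the VM types accepted by the x86 helper in this patch. */
	int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, KVM_X86_SW_PROTECTED_VM);
	if (vm_fd < 0) {
		perror("KVM_CREATE_VM");
		return 1;
	}

	struct kvm_create_guest_memfd gmem = {
		.size  = 0x200000,                        /* 2 MiB backing file */
		.flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED, /* request host mapping */
	};
	int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
	if (gmem_fd < 0) {
		perror("KVM_CREATE_GUEST_MEMFD");
		return 1;
	}

	/* kvm_gmem_mmap() only accepts MAP_SHARED mappings. */
	void *mem = mmap(NULL, gmem.size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, gmem_fd, 0);
	if (mem == MAP_FAILED) {
		perror("mmap(guest_memfd)");
		return 1;
	}

	memset(mem, 0, gmem.size);  /* host access faults pages in via the new handler */

	munmap(mem, gmem.size);
	close(gmem_fd);
	close(vm_fd);
	close(kvm_fd);
	return 0;
}

Whether KVM_X86_DEFAULT_VM can be used the same way is exactly the open point
discussed above.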