Date: Thu, 29 May 2025 11:20:16 -0700
Subject: Re: [RFC PATCH v2 02/51] KVM: guest_memfd: Introduce and use shareability to guard faulting
From: Ackerley Tng
To: Yan Zhao
Cc: kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    x86@kernel.org, linux-fsdevel@vger.kernel.org, aik@amd.com,
    ajones@ventanamicro.com, akpm@linux-foundation.org, amoorthy@google.com,
    anthony.yznaga@oracle.com, anup@brainfault.org, aou@eecs.berkeley.edu,
    bfoster@redhat.com, binbin.wu@linux.intel.com, brauner@kernel.org,
    catalin.marinas@arm.com, chao.p.peng@intel.com, chenhuacai@kernel.org,
    dave.hansen@intel.com, david@redhat.com, dmatlack@google.com,
    dwmw@amazon.co.uk, erdemaktas@google.com, fan.du@intel.com,
    fvdl@google.com, graf@amazon.com, haibo1.xu@intel.com, hch@infradead.org,
    hughd@google.com, ira.weiny@intel.com, isaku.yamahata@intel.com,
    jack@suse.cz, james.morse@arm.com, jarkko@kernel.org, jgg@ziepe.ca,
    jgowans@amazon.com, jhubbard@nvidia.com, jroedel@suse.de,
    jthoughton@google.com, jun.miao@intel.com, kai.huang@intel.com,
    keirf@google.com, kent.overstreet@linux.dev, kirill.shutemov@intel.com,
    liam.merwick@oracle.com, maciej.wieczor-retman@intel.com,
    mail@maciej.szmigiero.name, maz@kernel.org, mic@digikod.net,
    michael.roth@amd.com, mpe@ellerman.id.au, muchun.song@linux.dev,
    nikunj@amd.com, nsaenz@amazon.es, oliver.upton@linux.dev,
    palmer@dabbelt.com, pankaj.gupta@amd.com, paul.walmsley@sifive.com,
    pbonzini@redhat.com, pdurrant@amazon.co.uk, peterx@redhat.com,
    pgonda@google.com, pvorel@suse.cz, qperret@google.com,
    quic_cvanscha@quicinc.com, quic_eberman@quicinc.com,
    quic_mnalajal@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, quic_svaddagi@quicinc.com,
    quic_tsoni@quicinc.com, richard.weiyang@gmail.com,
    rick.p.edgecombe@intel.com, rientjes@google.com, roypat@amazon.co.uk,
    rppt@kernel.org, seanjc@google.com, shuah@kernel.org,
    steven.price@arm.com, steven.sistare@oracle.com, suzuki.poulose@arm.com,
    tabba@google.com, thomas.lendacky@amd.com, usama.arif@bytedance.com,
    vannapurve@google.com, vbabka@suse.cz, viro@zeniv.linux.org.uk,
    vkuznets@redhat.com, wei.w.wang@intel.com, will@kernel.org,
    willy@infradead.org, xiaoyao.li@intel.com, yilun.xu@intel.com,
    yuzenghui@huawei.com, zhiquan1.li@intel.com

Yan Zhao writes:

> On Wed, May 14, 2025 at 04:41:41PM -0700, Ackerley
> Tng wrote:
>> Track guest_memfd memory's shareability status within the inode as
>> opposed to the file, since it is a property of the guest_memfd's memory
>> contents.
>>
>> Shareability is a property of the memory and is indexed using the
>> page's index in the inode. Because shareability is the memory's
>> property, it is stored within guest_memfd instead of within KVM, like
>> in kvm->mem_attr_array.
>>
>> KVM_MEMORY_ATTRIBUTE_PRIVATE in kvm->mem_attr_array must still be
>> retained to allow VMs to only use guest_memfd for private memory and
>> some other memory for shared memory.
>>
>> Not all use cases require guest_memfd() to be shared with the host
>> when first created. Add a new flag, GUEST_MEMFD_FLAG_INIT_PRIVATE,
>> which, when set on KVM_CREATE_GUEST_MEMFD, initializes the memory as
>> private to the guest, and therefore not mappable by the
>> host. Otherwise, memory is shared until explicitly converted to
>> private.
>>
>> Signed-off-by: Ackerley Tng
>> Co-developed-by: Vishal Annapurve
>> Signed-off-by: Vishal Annapurve
>> Co-developed-by: Fuad Tabba
>> Signed-off-by: Fuad Tabba
>> Change-Id: If03609cbab3ad1564685c85bdba6dcbb6b240c0f
>> ---
>>  Documentation/virt/kvm/api.rst |   5 ++
>>  include/uapi/linux/kvm.h       |   2 +
>>  virt/kvm/guest_memfd.c         | 124 ++++++++++++++++++++++++++++++++-
>>  3 files changed, 129 insertions(+), 2 deletions(-)
>>
>> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
>> index 86f74ce7f12a..f609337ae1c2 100644
>> --- a/Documentation/virt/kvm/api.rst
>> +++ b/Documentation/virt/kvm/api.rst
>> @@ -6408,6 +6408,11 @@ belonging to the slot via its userspace_addr.
>>  The use of GUEST_MEMFD_FLAG_SUPPORT_SHARED will not be allowed for CoCo VMs.
>>  This is validated when the guest_memfd instance is bound to the VM.
>>
>> +If the capability KVM_CAP_GMEM_CONVERSIONS is supported, then the 'flags' field
>> +supports GUEST_MEMFD_FLAG_INIT_PRIVATE.
>> Setting GUEST_MEMFD_FLAG_INIT_PRIVATE
>> +will initialize the memory for the guest_memfd as guest-only and not faultable
>> +by the host.
>> +
>>  See KVM_SET_USER_MEMORY_REGION2 for additional details.
>>
>>  4.143 KVM_PRE_FAULT_MEMORY
>> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
>> index 4cc824a3a7c9..d7df312479aa 100644
>> --- a/include/uapi/linux/kvm.h
>> +++ b/include/uapi/linux/kvm.h
>> @@ -1567,7 +1567,9 @@ struct kvm_memory_attributes {
>>  #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3)
>>
>>  #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)
>> +
>>  #define GUEST_MEMFD_FLAG_SUPPORT_SHARED (1UL << 0)
>> +#define GUEST_MEMFD_FLAG_INIT_PRIVATE (1UL << 1)
>>
>>  struct kvm_create_guest_memfd {
>>  	__u64 size;
>> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
>> index 239d0f13dcc1..590932499eba 100644
>> --- a/virt/kvm/guest_memfd.c
>> +++ b/virt/kvm/guest_memfd.c
>> @@ -4,6 +4,7 @@
>>  #include
>>  #include
>>  #include
>> +#include <linux/maple_tree.h>
>>  #include
>>  #include
>>
>> @@ -17,6 +18,24 @@ struct kvm_gmem {
>>  	struct list_head entry;
>>  };
>>
>> +struct kvm_gmem_inode_private {
>> +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
>> +	struct maple_tree shareability;
>> +#endif
>> +};
>> +
>> +enum shareability {
>> +	SHAREABILITY_GUEST = 1, /* Only the guest can map (fault) folios in this range. */
>> +	SHAREABILITY_ALL = 2,   /* Both guest and host can fault folios in this range. */
>> +};
>> +
>> +static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index);
>> +
>> +static struct kvm_gmem_inode_private *kvm_gmem_private(struct inode *inode)
>> +{
>> +	return inode->i_mapping->i_private_data;
>> +}
>> +
>>  /**
>>   * folio_file_pfn - like folio_file_page, but return a pfn.
>>   * @folio: The folio which contains this index.
>> @@ -29,6 +48,58 @@ static inline kvm_pfn_t folio_file_pfn(struct folio *folio, pgoff_t index)
>>  	return folio_pfn(folio) + (index & (folio_nr_pages(folio) - 1));
>>  }
>>
>> +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
>> +
>> +static int kvm_gmem_shareability_setup(struct kvm_gmem_inode_private *private,
>> +				       loff_t size, u64 flags)
>> +{
>> +	enum shareability m;
>> +	pgoff_t last;
>> +
>> +	last = (size >> PAGE_SHIFT) - 1;
>> +	m = flags & GUEST_MEMFD_FLAG_INIT_PRIVATE ? SHAREABILITY_GUEST :
>> +						    SHAREABILITY_ALL;
>> +	return mtree_store_range(&private->shareability, 0, last, xa_mk_value(m),
>> +				 GFP_KERNEL);
>> +}
>> +
>> +static enum shareability kvm_gmem_shareability_get(struct inode *inode,
>> +						   pgoff_t index)
>> +{
>> +	struct maple_tree *mt;
>> +	void *entry;
>> +
>> +	mt = &kvm_gmem_private(inode)->shareability;
>> +	entry = mtree_load(mt, index);
>> +	WARN(!entry,
>> +	     "Shareability should always be defined for all indices in inode.");
>
> I noticed that in [1], kvm_gmem_mmap() does not check the range.
> So, the WARN() here can be hit when userspace mmap()s an area larger
> than the inode size and accesses the out-of-range HVA.
>
> Maybe limit the mmap() range?
>
> @@ -1609,6 +1620,10 @@ static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
>  	if (!kvm_gmem_supports_shared(file_inode(file)))
>  		return -ENODEV;
>
> +	if (vma->vm_end - vma->vm_start + (vma->vm_pgoff << PAGE_SHIFT) > i_size_read(file_inode(file)))
> +		return -EINVAL;
> +
>  	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
>  	    (VM_SHARED | VM_MAYSHARE)) {
>  		return -EINVAL;
>
> [1] https://lore.kernel.org/all/20250513163438.3942405-8-tabba@google.com/
>

This is a good idea. Thanks! I also think it is a good idea to include
this with the guest_memfd mmap base series that Fuad is working on [1],
maybe in v11.
[1] https://lore.kernel.org/all/20250527180245.1413463-1-tabba@google.com/

>> +	return xa_to_value(entry);
>> +}
>> +
>> +static struct folio *kvm_gmem_get_shared_folio(struct inode *inode, pgoff_t index)
>> +{
>> +	if (kvm_gmem_shareability_get(inode, index) != SHAREABILITY_ALL)
>> +		return ERR_PTR(-EACCES);
>> +
>> +	return kvm_gmem_get_folio(inode, index);
>> +}
>> +
>> +#else
>> +
>> +static int kvm_gmem_shareability_setup(struct maple_tree *mt, loff_t size, u64 flags)
>> +{
>> +	return 0;
>> +}
>> +
>> +static inline struct folio *kvm_gmem_get_shared_folio(struct inode *inode, pgoff_t index)
>> +{
>> +	WARN_ONCE(1, "Unexpected call to get shared folio.");
>> +	return NULL;
>> +}
>> +
>> +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
>> +
>>  static int __kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
>>  				    pgoff_t index, struct folio *folio)
>>  {
>> @@ -333,7 +404,7 @@ static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
>>
>>  	filemap_invalidate_lock_shared(inode->i_mapping);
>>
>> -	folio = kvm_gmem_get_folio(inode, vmf->pgoff);
>> +	folio = kvm_gmem_get_shared_folio(inode, vmf->pgoff);
>>  	if (IS_ERR(folio)) {
>>  		int err = PTR_ERR(folio);
>>
>> @@ -420,8 +491,33 @@ static struct file_operations kvm_gmem_fops = {
>>  	.fallocate = kvm_gmem_fallocate,
>>  };
>>
>> +static void kvm_gmem_free_inode(struct inode *inode)
>> +{
>> +	struct kvm_gmem_inode_private *private = kvm_gmem_private(inode);
>> +
>> +	kfree(private);
>> +
>> +	free_inode_nonrcu(inode);
>> +}
>> +
>> +static void kvm_gmem_destroy_inode(struct inode *inode)
>> +{
>> +	struct kvm_gmem_inode_private *private = kvm_gmem_private(inode);
>> +
>> +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
>> +	/*
>> +	 * mtree_destroy() can't be used within rcu callback, hence can't be
>> +	 * done in ->free_inode().
>> +	 */
>> +	if (private)
>> +		mtree_destroy(&private->shareability);
>> +#endif
>> +}
>> +
>>  static const struct super_operations kvm_gmem_super_operations = {
>>  	.statfs		= simple_statfs,
>> +	.destroy_inode	= kvm_gmem_destroy_inode,
>> +	.free_inode	= kvm_gmem_free_inode,
>>  };
>>
>>  static int kvm_gmem_init_fs_context(struct fs_context *fc)
>> @@ -549,12 +645,26 @@ static const struct inode_operations kvm_gmem_iops = {
>>  static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
>>  						      loff_t size, u64 flags)
>>  {
>> +	struct kvm_gmem_inode_private *private;
>>  	struct inode *inode;
>> +	int err;
>>
>>  	inode = alloc_anon_secure_inode(kvm_gmem_mnt->mnt_sb, name);
>>  	if (IS_ERR(inode))
>>  		return inode;
>>
>> +	err = -ENOMEM;
>> +	private = kzalloc(sizeof(*private), GFP_KERNEL);
>> +	if (!private)
>> +		goto out;
>> +
>> +	mt_init(&private->shareability);
>
> Wrap the mt_init() inside "#ifdef CONFIG_KVM_GMEM_SHARED_MEM"?
>

Will fix this in the next revision. Will also update this to only
initialize shareability if (flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED).
>> +	inode->i_mapping->i_private_data = private;
>> +
>> +	err = kvm_gmem_shareability_setup(private, size, flags);
>> +	if (err)
>> +		goto out;
>> +
>>  	inode->i_private = (void *)(unsigned long)flags;
>>  	inode->i_op = &kvm_gmem_iops;
>>  	inode->i_mapping->a_ops = &kvm_gmem_aops;
>> @@ -566,6 +676,11 @@ static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
>>  	WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
>>
>>  	return inode;
>> +
>> +out:
>> +	iput(inode);
>> +
>> +	return ERR_PTR(err);
>>  }
>>
>>  static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size,
>> @@ -654,6 +769,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
>>  	if (kvm_arch_vm_supports_gmem_shared_mem(kvm))
>>  		valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
>>
>> +	if (flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED)
>> +		valid_flags |= GUEST_MEMFD_FLAG_INIT_PRIVATE;
>> +
>>  	if (flags & ~valid_flags)
>>  		return -EINVAL;
>>
>> @@ -842,6 +960,8 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>>  	if (!file)
>>  		return -EFAULT;
>>
>> +	filemap_invalidate_lock_shared(file_inode(file)->i_mapping);
>> +
>>  	folio = __kvm_gmem_get_pfn(file, slot, index, pfn, &is_prepared, max_order);
>>  	if (IS_ERR(folio)) {
>>  		r = PTR_ERR(folio);
>> @@ -857,8 +977,8 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>>  		*page = folio_file_page(folio, index);
>>  	else
>>  		folio_put(folio);
>> -
>>  out:
>> +	filemap_invalidate_unlock_shared(file_inode(file)->i_mapping);
>>  	fput(file);
>>  	return r;
>>  }
>> --
>> 2.49.0.1045.g170613ef41-goog
>>
>>