From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 14 May 2025 16:42:11 -0700
In-Reply-To:
Mime-Version: 1.0
References:
X-Mailer: git-send-email 2.49.0.1045.g170613ef41-goog
Message-ID: <4d16522293c9a3eacdbe30148b6d6c8ad2eb5908.1747264138.git.ackerleytng@google.com>
Subject: [RFC PATCH v2 32/51] KVM: guest_memfd: Support guestmem_hugetlb as custom allocator
From: Ackerley Tng <ackerleytng@google.com>
To: kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 x86@kernel.org, linux-fsdevel@vger.kernel.org
Cc: ackerleytng@google.com, aik@amd.com, ajones@ventanamicro.com,
 akpm@linux-foundation.org, amoorthy@google.com, anthony.yznaga@oracle.com,
 anup@brainfault.org, aou@eecs.berkeley.edu, bfoster@redhat.com,
 binbin.wu@linux.intel.com, brauner@kernel.org, catalin.marinas@arm.com,
 chao.p.peng@intel.com, chenhuacai@kernel.org, dave.hansen@intel.com,
 david@redhat.com, dmatlack@google.com, dwmw@amazon.co.uk,
 erdemaktas@google.com, fan.du@intel.com, fvdl@google.com, graf@amazon.com,
 haibo1.xu@intel.com, hch@infradead.org, hughd@google.com,
 ira.weiny@intel.com, isaku.yamahata@intel.com, jack@suse.cz,
 james.morse@arm.com, jarkko@kernel.org, jgg@ziepe.ca, jgowans@amazon.com,
 jhubbard@nvidia.com, jroedel@suse.de, jthoughton@google.com,
 jun.miao@intel.com, kai.huang@intel.com, keirf@google.com,
 kent.overstreet@linux.dev, kirill.shutemov@intel.com,
 liam.merwick@oracle.com, maciej.wieczor-retman@intel.com,
 mail@maciej.szmigiero.name, maz@kernel.org, mic@digikod.net,
 michael.roth@amd.com, mpe@ellerman.id.au, muchun.song@linux.dev,
 nikunj@amd.com, nsaenz@amazon.es, oliver.upton@linux.dev,
 palmer@dabbelt.com, pankaj.gupta@amd.com, paul.walmsley@sifive.com,
 pbonzini@redhat.com, pdurrant@amazon.co.uk, peterx@redhat.com,
 pgonda@google.com, pvorel@suse.cz, qperret@google.com,
 quic_cvanscha@quicinc.com, quic_eberman@quicinc.com,
 quic_mnalajal@quicinc.com, quic_pderrin@quicinc.com,
 quic_pheragu@quicinc.com, quic_svaddagi@quicinc.com,
 quic_tsoni@quicinc.com, richard.weiyang@gmail.com,
 rick.p.edgecombe@intel.com, rientjes@google.com, roypat@amazon.co.uk,
 rppt@kernel.org, seanjc@google.com, shuah@kernel.org, steven.price@arm.com,
 steven.sistare@oracle.com, suzuki.poulose@arm.com, tabba@google.com,
 thomas.lendacky@amd.com, usama.arif@bytedance.com, vannapurve@google.com,
 vbabka@suse.cz, viro@zeniv.linux.org.uk, vkuznets@redhat.com,
 wei.w.wang@intel.com, will@kernel.org, willy@infradead.org,
 xiaoyao.li@intel.com, yan.y.zhao@intel.com, yilun.xu@intel.com,
 yuzenghui@huawei.com, zhiquan1.li@intel.com
Content-Type: text/plain; charset="UTF-8"

Add support for guestmem_hugetlb as the first custom allocator in
guest_memfd. If requested at guest_memfd creation time, the custom
allocator is used during initialization and cleanup.
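
For example, userspace could request hugetlb-backed guest memory roughly
as follows. This is a sketch, not part of the patch: the vm_fd handling is
assumed, and any hugetlb page-size selection would travel in the
allocator-opaque flag window (see SUPPORTED_CUSTOM_ALLOCATOR_MASK below)
and is omitted here.

	#include <linux/kvm.h>
	#include <stdint.h>
	#include <sys/ioctl.h>

	/* Sketch: create a hugetlb-backed guest_memfd on an existing VM fd. */
	static int create_hugetlb_gmem(int vm_fd, uint64_t size)
	{
		struct kvm_create_guest_memfd args = {
			.size = size,	/* expected to be hugepage-aligned */
			.flags = GUEST_MEMFD_FLAG_HUGETLB,
		};

		/* Returns a new guest_memfd file descriptor, or -1 on error. */
		return ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &args);
	}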
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Change-Id: I1eb9625dc761ecadcc2aa21480cfdfcf9ab7ce67
---
 include/uapi/linux/kvm.h |   1 +
 virt/kvm/Kconfig         |   5 +
 virt/kvm/guest_memfd.c   | 203 +++++++++++++++++++++++++++++++++++++--
 3 files changed, 199 insertions(+), 10 deletions(-)

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 433e184f83ea..af486b2e4862 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1571,6 +1571,7 @@ struct kvm_memory_attributes {
 
 #define GUEST_MEMFD_FLAG_SUPPORT_SHARED	(1UL << 0)
 #define GUEST_MEMFD_FLAG_INIT_PRIVATE	(1UL << 1)
+#define GUEST_MEMFD_FLAG_HUGETLB	(1UL << 2)
 
 struct kvm_create_guest_memfd {
 	__u64 size;
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 14ffd9c1d480..ff917bb57371 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -133,3 +133,8 @@ config KVM_GMEM_SHARED_MEM
 	select KVM_GMEM
 	bool
 	prompt "Enables in-place shared memory for guest_memfd"
+
+config KVM_GMEM_HUGETLB
+	select KVM_PRIVATE_MEM
+	depends on GUESTMEM_HUGETLB
+	bool "Enables using a custom allocator with guest_memfd, see CONFIG_GUESTMEM_HUGETLB"
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 8c9c9e54616b..c65d93c5a443 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -3,11 +3,14 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
 #include
+#include
+
 #include "kvm_mm.h"
 
 static struct vfsmount *kvm_gmem_mnt;
@@ -22,6 +25,10 @@ struct kvm_gmem_inode_private {
 #ifdef CONFIG_KVM_GMEM_SHARED_MEM
 	struct maple_tree shareability;
 #endif
+#ifdef CONFIG_KVM_GMEM_HUGETLB
+	const struct guestmem_allocator_operations *allocator_ops;
+	void *allocator_private;
+#endif
 };
 
 enum shareability {
@@ -40,6 +47,44 @@ static struct kvm_gmem_inode_private *kvm_gmem_private(struct inode *inode)
 	return inode->i_mapping->i_private_data;
 }
 
+#ifdef CONFIG_KVM_GMEM_HUGETLB
+
+static const struct guestmem_allocator_operations *
+kvm_gmem_allocator_ops(struct inode *inode)
+{
+	return kvm_gmem_private(inode)->allocator_ops;
+}
+
+static void *kvm_gmem_allocator_private(struct inode *inode)
+{
+	return kvm_gmem_private(inode)->allocator_private;
+}
+
+static bool kvm_gmem_has_custom_allocator(struct inode *inode)
+{
+	return kvm_gmem_allocator_ops(inode) != NULL;
+}
+
+#else
+
+static const struct guestmem_allocator_operations *
+kvm_gmem_allocator_ops(struct inode *inode)
+{
+	return NULL;
+}
+
+static void *kvm_gmem_allocator_private(struct inode *inode)
+{
+	return NULL;
+}
+
+static bool kvm_gmem_has_custom_allocator(struct inode *inode)
+{
+	return false;
+}
+
+#endif
+
 /**
  * folio_file_pfn - like folio_file_page, but return a pfn.
  * @folio: The folio which contains this index.
@@ -510,7 +555,6 @@ static int kvm_gmem_filemap_add_folio(struct address_space *mapping,
 static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
 {
 	struct folio *folio;
-	gfp_t gfp;
 	int ret;
 
 repeat:
@@ -518,17 +562,24 @@ static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
 	if (!IS_ERR(folio))
 		return folio;
 
-	gfp = mapping_gfp_mask(inode->i_mapping);
+	if (kvm_gmem_has_custom_allocator(inode)) {
+		void *p = kvm_gmem_allocator_private(inode);
 
-	/* TODO: Support huge pages. */
-	folio = filemap_alloc_folio(gfp, 0);
-	if (!folio)
-		return ERR_PTR(-ENOMEM);
+		folio = kvm_gmem_allocator_ops(inode)->alloc_folio(p);
+		if (IS_ERR(folio))
+			return folio;
+	} else {
+		gfp_t gfp = mapping_gfp_mask(inode->i_mapping);
 
-	ret = mem_cgroup_charge(folio, NULL, gfp);
-	if (ret) {
-		folio_put(folio);
-		return ERR_PTR(ret);
+		folio = filemap_alloc_folio(gfp, 0);
+		if (!folio)
+			return ERR_PTR(-ENOMEM);
+
+		ret = mem_cgroup_charge(folio, NULL, gfp);
+		if (ret) {
+			folio_put(folio);
+			return ERR_PTR(ret);
+		}
 	}
 
 	ret = kvm_gmem_filemap_add_folio(inode->i_mapping, folio, index);
@@ -611,6 +662,80 @@ static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
 	}
 }
 
+/**
+ * kvm_gmem_truncate_indices() - Truncates all folios beginning at @index, for
+ * @nr_pages pages.
+ *
+ * @mapping: filemap to truncate pages from.
+ * @index: the index in the filemap at which to begin truncation.
+ * @nr_pages: number of PAGE_SIZE pages to truncate.
+ *
+ * Return: the number of PAGE_SIZE pages that were actually truncated.
+ */
+static long kvm_gmem_truncate_indices(struct address_space *mapping,
+				      pgoff_t index, size_t nr_pages)
+{
+	struct folio_batch fbatch;
+	long truncated;
+	pgoff_t last;
+
+	last = index + nr_pages - 1;
+
+	truncated = 0;
+	folio_batch_init(&fbatch);
+	while (filemap_get_folios(mapping, &index, last, &fbatch)) {
+		unsigned int i;
+
+		for (i = 0; i < folio_batch_count(&fbatch); ++i) {
+			struct folio *f = fbatch.folios[i];
+
+			truncated += folio_nr_pages(f);
+			folio_lock(f);
+			truncate_inode_folio(f->mapping, f);
+			folio_unlock(f);
+		}
+
+		folio_batch_release(&fbatch);
+		cond_resched();
+	}
+
+	return truncated;
+}
+
+/**
+ * kvm_gmem_truncate_inode_aligned_pages() - Removes entire folios from the
+ * filemap in @inode.
+ *
+ * @inode: inode to remove folios from.
+ * @index: start of the range to be truncated. Must be hugepage aligned.
+ * @nr_pages: number of PAGE_SIZE pages to be iterated over.
+ *
+ * Removes folios beginning at @index, for @nr_pages pages, from the filemap
+ * in @inode, and updates the inode metadata.
+ */
+static void kvm_gmem_truncate_inode_aligned_pages(struct inode *inode,
+						  pgoff_t index,
+						  size_t nr_pages)
+{
+	size_t nr_per_huge_page;
+	long num_freed;
+	pgoff_t idx;
+	void *priv;
+
+	priv = kvm_gmem_allocator_private(inode);
+	nr_per_huge_page = kvm_gmem_allocator_ops(inode)->nr_pages_in_folio(priv);
+
+	num_freed = 0;
+	for (idx = index; idx < index + nr_pages; idx += nr_per_huge_page) {
+		num_freed += kvm_gmem_truncate_indices(
+			inode->i_mapping, idx, nr_per_huge_page);
+	}
+
+	spin_lock(&inode->i_lock);
+	inode->i_blocks -= (num_freed << PAGE_SHIFT) / 512;
+	spin_unlock(&inode->i_lock);
+}
+
 static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 {
 	struct list_head *gmem_list = &inode->i_mapping->i_private_list;
@@ -940,6 +1065,13 @@ static void kvm_gmem_free_inode(struct inode *inode)
 {
 	struct kvm_gmem_inode_private *private = kvm_gmem_private(inode);
 
+	/* private may be NULL if the inode creation process hit an error. */
+	if (private && kvm_gmem_has_custom_allocator(inode)) {
+		void *p = kvm_gmem_allocator_private(inode);
+
+		kvm_gmem_allocator_ops(inode)->inode_teardown(p, inode->i_size);
+	}
+
 	kfree(private);
 
 	free_inode_nonrcu(inode);
@@ -959,8 +1091,24 @@ static void kvm_gmem_destroy_inode(struct inode *inode)
 #endif
 }
 
+static void kvm_gmem_evict_inode(struct inode *inode)
+{
+	truncate_inode_pages_final_prepare(inode->i_mapping);
+
+	if (kvm_gmem_has_custom_allocator(inode)) {
+		size_t nr_pages = inode->i_size >> PAGE_SHIFT;
+
+		kvm_gmem_truncate_inode_aligned_pages(inode, 0, nr_pages);
+	} else {
+		truncate_inode_pages(inode->i_mapping, 0);
+	}
+
+	clear_inode(inode);
+}
+
 static const struct super_operations kvm_gmem_super_operations = {
 	.statfs		= simple_statfs,
+	.evict_inode	= kvm_gmem_evict_inode,
 	.destroy_inode	= kvm_gmem_destroy_inode,
 	.free_inode	= kvm_gmem_free_inode,
 };
@@ -1062,6 +1210,12 @@ static void kvm_gmem_free_folio(struct folio *folio)
 {
 	folio_clear_unevictable(folio);
 
+	/*
+	 * A no-op for 4K pages, since PG_uptodate is cleared as part of
+	 * freeing, but may be required for other allocators to reset the page.
+	 */
+	folio_clear_uptodate(folio);
+
 	kvm_gmem_invalidate(folio);
 }
@@ -1115,6 +1269,25 @@ static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
 	if (err)
 		goto out;
 
+#ifdef CONFIG_KVM_GMEM_HUGETLB
+	if (flags & GUEST_MEMFD_FLAG_HUGETLB) {
+		void *allocator_priv;
+		size_t nr_pages;
+
+		allocator_priv = guestmem_hugetlb_ops.inode_setup(size, flags);
+		if (IS_ERR(allocator_priv)) {
+			err = PTR_ERR(allocator_priv);
+			goto out;
+		}
+
+		private->allocator_ops = &guestmem_hugetlb_ops;
+		private->allocator_private = allocator_priv;
+
+		nr_pages = guestmem_hugetlb_ops.nr_pages_in_folio(allocator_priv);
+		inode->i_blkbits = ilog2(nr_pages << PAGE_SHIFT);
+	}
+#endif
+
 	inode->i_private = (void *)(unsigned long)flags;
 	inode->i_op = &kvm_gmem_iops;
 	inode->i_mapping->a_ops = &kvm_gmem_aops;
@@ -1210,6 +1383,10 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 	return err;
 }
 
+/* Mask of bits that belong to allocators and are opaque to guest_memfd. */
+#define SUPPORTED_CUSTOM_ALLOCATOR_MASK \
+	(GUESTMEM_HUGETLB_FLAG_MASK << GUESTMEM_HUGETLB_FLAG_SHIFT)
+
 int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
 {
 	loff_t size = args->size;
@@ -1222,6 +1399,12 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
 	if (flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED)
 		valid_flags |= GUEST_MEMFD_FLAG_INIT_PRIVATE;
 
+	if (IS_ENABLED(CONFIG_KVM_GMEM_HUGETLB) &&
+	    flags & GUEST_MEMFD_FLAG_HUGETLB) {
+		valid_flags |= GUEST_MEMFD_FLAG_HUGETLB |
+			       SUPPORTED_CUSTOM_ALLOCATOR_MASK;
+	}
+
 	if (flags & ~valid_flags)
 		return -EINVAL;
-- 
2.49.0.1045.g170613ef41-goog
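
For reference, the flag layout this patch establishes can be sanity-checked
in isolation. The sketch below mirrors the kvm_gmem_create() validation; it
is illustrative only: the GUESTMEM_HUGETLB_FLAG_SHIFT/_MASK values are
placeholders (the real definitions come from the guestmem_hugetlb patches
earlier in this series), the initial valid_flags value is assumed, and the
IS_ENABLED(CONFIG_KVM_GMEM_HUGETLB) gating is omitted.

	#include <stdbool.h>
	#include <stdint.h>

	#define GUEST_MEMFD_FLAG_SUPPORT_SHARED	(1UL << 0)
	#define GUEST_MEMFD_FLAG_INIT_PRIVATE	(1UL << 1)
	#define GUEST_MEMFD_FLAG_HUGETLB	(1UL << 2)

	/* Placeholder values, for illustration only. */
	#define GUESTMEM_HUGETLB_FLAG_SHIFT	32
	#define GUESTMEM_HUGETLB_FLAG_MASK	0xffUL

	/* Bits opaque to guest_memfd, passed through to the allocator. */
	#define SUPPORTED_CUSTOM_ALLOCATOR_MASK \
		(GUESTMEM_HUGETLB_FLAG_MASK << GUESTMEM_HUGETLB_FLAG_SHIFT)

	static bool gmem_flags_valid(uint64_t flags)
	{
		uint64_t valid_flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED;

		if (flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED)
			valid_flags |= GUEST_MEMFD_FLAG_INIT_PRIVATE;

		/* Allocator-opaque bits are accepted only with the hugetlb flag. */
		if (flags & GUEST_MEMFD_FLAG_HUGETLB)
			valid_flags |= GUEST_MEMFD_FLAG_HUGETLB |
				       SUPPORTED_CUSTOM_ALLOCATOR_MASK;

		return !(flags & ~valid_flags);
	}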