* [PATCH v6 0/7] KVM: Restricted mapping of guest_memfd at the host and arm64 support
@ 2025-03-18 16:20 Fuad Tabba
2025-03-18 16:20 ` [PATCH v6 1/7] KVM: guest_memfd: Make guest mem use guest mem inodes instead of anonymous inodes Fuad Tabba
` (6 more replies)
0 siblings, 7 replies; 11+ messages in thread
From: Fuad Tabba @ 2025-03-18 16:20 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, tabba
This series adds restricted mmap() support to guest_memfd, as well as
support for guest_memfd on arm64. Please see v3 for the context [1].
Main changes since v5 [2]:
- Freeze folio refcounts when checking them to avoid races (Kirill,
Vlastimil, Ackerley)
- Handle invalidation (e.g., on truncation) of potentially shared memory
(Ackerley)
- Rebased on the `KVM: Mapping guest_memfd backed memory at the host for
software protected VMs` series [3], which entails renaming of MAPPABLE
to SHAREABLE and a rebase on Linux 6.14-rc7.
A state diagram showing the new states introduced in this patch series, and
how they interact with sharing/unsharing in pKVM, is available at [4].
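For illustration only, here is a minimal userspace sketch of how the new
GUEST_MEMFD_FLAG_INIT_SHARED flag (patch 7) and the restricted mmap() support
might be used together. It assumes vm_fd is an existing KVM VM fd and that
KVM_CAP_GMEM_SHARED_MEM is advertised; map_shared_gmem() is just an
illustrative name and error handling is omitted:

#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

static void *map_shared_gmem(int vm_fd, size_t size, int *gmem_fd)
{
        struct kvm_create_guest_memfd gmem = {
                .size  = size,
                .flags = GUEST_MEMFD_FLAG_INIT_SHARED,
        };

        /* Create a guest_memfd whose memory starts out shared with the host. */
        *gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

        /* Host accesses fault with SIGBUS once the guest unshares a range. */
        return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                    *gmem_fd, 0);
}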
Cheers,
/fuad
[1] https://lore.kernel.org/all/20241010085930.1546800-1-tabba@google.com/
[2] https://lore.kernel.org/all/20250117163001.2326672-1-tabba@google.com/
[3] https://lore.kernel.org/all/20250318161823.4005529-1-tabba@google.com/
[4] https://lpc.events/event/18/contributions/1758/attachments/1457/3699/Guestmemfd%20folio%20state%20page_type.pdf
Ackerley Tng (2):
KVM: guest_memfd: Make guest mem use guest mem inodes instead of
anonymous inodes
KVM: guest_memfd: Track folio sharing within a struct kvm_gmem_private
Fuad Tabba (5):
KVM: guest_memfd: Introduce kvm_gmem_get_pfn_locked(), which retains
the folio lock
KVM: guest_memfd: Folio sharing states and functions that manage their
transition
KVM: guest_memfd: Restore folio state after final folio_put()
KVM: guest_memfd: Handle invalidation of shared memory
KVM: guest_memfd: Add a guest_memfd() flag to initialize it as shared
Documentation/virt/kvm/api.rst | 4 +
include/linux/kvm_host.h | 56 +-
include/uapi/linux/kvm.h | 1 +
include/uapi/linux/magic.h | 1 +
.../testing/selftests/kvm/guest_memfd_test.c | 7 +-
virt/kvm/guest_memfd.c | 589 ++++++++++++++++--
virt/kvm/kvm_main.c | 62 ++
7 files changed, 682 insertions(+), 38 deletions(-)
base-commit: 1ea0414b447c8c96e6a6f6f953323c3df71b85a6
--
2.49.0.rc1.451.g8f38331e32-goog
* [PATCH v6 1/7] KVM: guest_memfd: Make guest mem use guest mem inodes instead of anonymous inodes
2025-03-18 16:20 [PATCH v6 0/7] KVM: Restricted mapping of guest_memfd at the host and arm64 support Fuad Tabba
@ 2025-03-18 16:20 ` Fuad Tabba
2025-03-18 16:20 ` [PATCH v6 2/7] KVM: guest_memfd: Introduce kvm_gmem_get_pfn_locked(), which retains the folio lock Fuad Tabba
` (5 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Fuad Tabba @ 2025-03-18 16:20 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, tabba
From: Ackerley Tng <ackerleytng@google.com>
Using guest mem inodes allows us to store metadata for the backing
memory on the inode. Metadata will be added in a later patch to support
HugeTLB pages.
Metadata about backing memory should not be stored on the file, since
the file represents a guest_memfd's binding with a struct kvm, and
metadata about backing memory is not unique to a specific binding and
struct kvm.
Signed-off-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
include/uapi/linux/magic.h | 1 +
virt/kvm/guest_memfd.c | 130 +++++++++++++++++++++++++++++++------
2 files changed, 111 insertions(+), 20 deletions(-)
diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index bb575f3ab45e..169dba2a6920 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -103,5 +103,6 @@
#define DEVMEM_MAGIC 0x454d444d /* "DMEM" */
#define SECRETMEM_MAGIC 0x5345434d /* "SECM" */
#define PID_FS_MAGIC 0x50494446 /* "PIDF" */
+#define GUEST_MEMORY_MAGIC 0x474d454d /* "GMEM" */
#endif /* __LINUX_MAGIC_H__ */
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index fbf89e643add..844e70c82558 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -1,12 +1,16 @@
// SPDX-License-Identifier: GPL-2.0
+#include <linux/fs.h>
#include <linux/backing-dev.h>
#include <linux/falloc.h>
#include <linux/kvm_host.h>
+#include <linux/pseudo_fs.h>
#include <linux/pagemap.h>
#include <linux/anon_inodes.h>
#include "kvm_mm.h"
+static struct vfsmount *kvm_gmem_mnt;
+
struct kvm_gmem {
struct kvm *kvm;
struct xarray bindings;
@@ -320,6 +324,38 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
return gfn - slot->base_gfn + slot->gmem.pgoff;
}
+static const struct super_operations kvm_gmem_super_operations = {
+ .statfs = simple_statfs,
+};
+
+static int kvm_gmem_init_fs_context(struct fs_context *fc)
+{
+ struct pseudo_fs_context *ctx;
+
+ if (!init_pseudo(fc, GUEST_MEMORY_MAGIC))
+ return -ENOMEM;
+
+ ctx = fc->fs_private;
+ ctx->ops = &kvm_gmem_super_operations;
+
+ return 0;
+}
+
+static struct file_system_type kvm_gmem_fs = {
+ .name = "kvm_guest_memory",
+ .init_fs_context = kvm_gmem_init_fs_context,
+ .kill_sb = kill_anon_super,
+};
+
+static void kvm_gmem_init_mount(void)
+{
+ kvm_gmem_mnt = kern_mount(&kvm_gmem_fs);
+ BUG_ON(IS_ERR(kvm_gmem_mnt));
+
+ /* For giggles. Userspace can never map this anyways. */
+ kvm_gmem_mnt->mnt_flags |= MNT_NOEXEC;
+}
+
#ifdef CONFIG_KVM_GMEM_SHARED_MEM
static bool kvm_gmem_offset_is_shared(struct file *file, pgoff_t index)
{
@@ -430,6 +466,8 @@ static struct file_operations kvm_gmem_fops = {
void kvm_gmem_init(struct module *module)
{
kvm_gmem_fops.owner = module;
+
+ kvm_gmem_init_mount();
}
static int kvm_gmem_migrate_folio(struct address_space *mapping,
@@ -511,11 +549,79 @@ static const struct inode_operations kvm_gmem_iops = {
.setattr = kvm_gmem_setattr,
};
+static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
+ loff_t size, u64 flags)
+{
+ const struct qstr qname = QSTR_INIT(name, strlen(name));
+ struct inode *inode;
+ int err;
+
+ inode = alloc_anon_inode(kvm_gmem_mnt->mnt_sb);
+ if (IS_ERR(inode))
+ return inode;
+
+ err = security_inode_init_security_anon(inode, &qname, NULL);
+ if (err) {
+ iput(inode);
+ return ERR_PTR(err);
+ }
+
+ inode->i_private = (void *)(unsigned long)flags;
+ inode->i_op = &kvm_gmem_iops;
+ inode->i_mapping->a_ops = &kvm_gmem_aops;
+ inode->i_mode |= S_IFREG;
+ inode->i_size = size;
+ mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+ mapping_set_inaccessible(inode->i_mapping);
+ /* Unmovable mappings are supposed to be marked unevictable as well. */
+ WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
+
+ return inode;
+}
+
+static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size,
+ u64 flags)
+{
+ static const char *name = "[kvm-gmem]";
+ struct inode *inode;
+ struct file *file;
+ int err;
+
+ err = -ENOENT;
+ if (!try_module_get(kvm_gmem_fops.owner))
+ goto err;
+
+ inode = kvm_gmem_inode_make_secure_inode(name, size, flags);
+ if (IS_ERR(inode)) {
+ err = PTR_ERR(inode);
+ goto err_put_module;
+ }
+
+ file = alloc_file_pseudo(inode, kvm_gmem_mnt, name, O_RDWR,
+ &kvm_gmem_fops);
+ if (IS_ERR(file)) {
+ err = PTR_ERR(file);
+ goto err_put_inode;
+ }
+
+ file->f_flags |= O_LARGEFILE;
+ file->private_data = priv;
+
+out:
+ return file;
+
+err_put_inode:
+ iput(inode);
+err_put_module:
+ module_put(kvm_gmem_fops.owner);
+err:
+ file = ERR_PTR(err);
+ goto out;
+}
+
static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
{
- const char *anon_name = "[kvm-gmem]";
struct kvm_gmem *gmem;
- struct inode *inode;
struct file *file;
int fd, err;
@@ -529,32 +635,16 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
goto err_fd;
}
- file = anon_inode_create_getfile(anon_name, &kvm_gmem_fops, gmem,
- O_RDWR, NULL);
+ file = kvm_gmem_inode_create_getfile(gmem, size, flags);
if (IS_ERR(file)) {
err = PTR_ERR(file);
goto err_gmem;
}
- file->f_flags |= O_LARGEFILE;
-
- inode = file->f_inode;
- WARN_ON(file->f_mapping != inode->i_mapping);
-
- inode->i_private = (void *)(unsigned long)flags;
- inode->i_op = &kvm_gmem_iops;
- inode->i_mapping->a_ops = &kvm_gmem_aops;
- inode->i_mode |= S_IFREG;
- inode->i_size = size;
- mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
- mapping_set_inaccessible(inode->i_mapping);
- /* Unmovable mappings are supposed to be marked unevictable as well. */
- WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
-
kvm_get_kvm(kvm);
gmem->kvm = kvm;
xa_init(&gmem->bindings);
- list_add(&gmem->entry, &inode->i_mapping->i_private_list);
+ list_add(&gmem->entry, &file_inode(file)->i_mapping->i_private_list);
fd_install(fd, file);
return fd;
--
2.49.0.rc1.451.g8f38331e32-goog
* [PATCH v6 2/7] KVM: guest_memfd: Introduce kvm_gmem_get_pfn_locked(), which retains the folio lock
2025-03-18 16:20 [PATCH v6 0/7] KVM: Restricted mapping of guest_memfd at the host and arm64 support Fuad Tabba
2025-03-18 16:20 ` [PATCH v6 1/7] KVM: guest_memfd: Make guest mem use guest mem inodes instead of anonymous inodes Fuad Tabba
@ 2025-03-18 16:20 ` Fuad Tabba
2025-03-18 16:20 ` [PATCH v6 3/7] KVM: guest_memfd: Track folio sharing within a struct kvm_gmem_private Fuad Tabba
` (4 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Fuad Tabba @ 2025-03-18 16:20 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, tabba
Create a new variant of kvm_gmem_get_pfn(), which retains the folio lock
if it returns successfully. This is needed in subsequent patches to
protect against races when checking whether a folio can be shared with
the host.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
include/linux/kvm_host.h | 11 +++++++++++
virt/kvm/guest_memfd.c | 27 ++++++++++++++++++++-------
2 files changed, 31 insertions(+), 7 deletions(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ec3bedc18eab..bc73d7426363 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2535,6 +2535,9 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
int *max_order);
+int kvm_gmem_get_pfn_locked(struct kvm *kvm, struct kvm_memory_slot *slot,
+ gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
+ int *max_order);
#else
static inline int kvm_gmem_get_pfn(struct kvm *kvm,
struct kvm_memory_slot *slot, gfn_t gfn,
@@ -2544,6 +2547,14 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
KVM_BUG_ON(1, kvm);
return -EIO;
}
+static inline int kvm_gmem_get_pfn_locked(struct kvm *kvm,
+ struct kvm_memory_slot *slot,
+ gfn_t gfn, kvm_pfn_t *pfn,
+ struct page **page, int *max_order)
+{
+ KVM_BUG_ON(1, kvm);
+ return -EIO;
+}
#endif /* CONFIG_KVM_PRIVATE_MEM */
#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 844e70c82558..ac6b8853699d 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -802,9 +802,9 @@ static struct folio *__kvm_gmem_get_pfn(struct file *file,
return folio;
}
-int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
- gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
- int *max_order)
+int kvm_gmem_get_pfn_locked(struct kvm *kvm, struct kvm_memory_slot *slot,
+ gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
+ int *max_order)
{
pgoff_t index = kvm_gmem_get_index(slot, gfn);
struct file *file = kvm_gmem_get_file(slot);
@@ -824,17 +824,30 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
if (!is_prepared)
r = kvm_gmem_prepare_folio(kvm, slot, gfn, folio);
- folio_unlock(folio);
-
- if (!r)
+ if (!r) {
*page = folio_file_page(folio, index);
- else
+ } else {
+ folio_unlock(folio);
folio_put(folio);
+ }
out:
fput(file);
return r;
}
+EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn_locked);
+
+int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
+ gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
+ int *max_order)
+{
+ int r = kvm_gmem_get_pfn_locked(kvm, slot, gfn, pfn, page, max_order);
+
+ if (!r)
+ unlock_page(*page);
+
+ return r;
+}
EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
--
2.49.0.rc1.451.g8f38331e32-goog
* [PATCH v6 3/7] KVM: guest_memfd: Track folio sharing within a struct kvm_gmem_private
2025-03-18 16:20 [PATCH v6 0/7] KVM: Restricted mapping of guest_memfd at the host and arm64 support Fuad Tabba
2025-03-18 16:20 ` [PATCH v6 1/7] KVM: guest_memfd: Make guest mem use guest mem inodes instead of anonymous inodes Fuad Tabba
2025-03-18 16:20 ` [PATCH v6 2/7] KVM: guest_memfd: Introduce kvm_gmem_get_pfn_locked(), which retains the folio lock Fuad Tabba
@ 2025-03-18 16:20 ` Fuad Tabba
2025-03-18 16:20 ` [PATCH v6 4/7] KVM: guest_memfd: Folio sharing states and functions that manage their transition Fuad Tabba
` (3 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Fuad Tabba @ 2025-03-18 16:20 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, tabba
From: Ackerley Tng <ackerleytng@google.com>
Track guest_memfd folio sharing state within the inode, since it is a
property of the guest_memfd's memory contents.
The guest_memfd PRIVATE memory attribute is not used for two reasons.
First, it reflects the userspace expectation for the memory state, and can
therefore be toggled by userspace. Second, although each guest_memfd file
has a 1:1 binding with a KVM instance, the plan is to allow multiple files
per inode, e.g., to allow intra-host migration to a new KVM instance without
destroying the guest_memfd.
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Co-developed-by: Vishal Annapurve <vannapurve@google.com>
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
Co-developed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
virt/kvm/guest_memfd.c | 56 ++++++++++++++++++++++++++++++++++++++----
1 file changed, 51 insertions(+), 5 deletions(-)
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index ac6b8853699d..a7f7c6eb6b4a 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -17,6 +17,17 @@ struct kvm_gmem {
struct list_head entry;
};
+struct kvm_gmem_inode_private {
+#ifdef CONFIG_KVM_GMEM_SHARED_MEM
+ struct xarray shared_offsets;
+#endif
+};
+
+static struct kvm_gmem_inode_private *kvm_gmem_private(struct inode *inode)
+{
+ return inode->i_mapping->i_private_data;
+}
+
#ifdef CONFIG_KVM_GMEM_SHARED_MEM
void kvm_gmem_handle_folio_put(struct folio *folio)
{
@@ -324,8 +335,28 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
return gfn - slot->base_gfn + slot->gmem.pgoff;
}
+static void kvm_gmem_evict_inode(struct inode *inode)
+{
+ struct kvm_gmem_inode_private *private = kvm_gmem_private(inode);
+
+#ifdef CONFIG_KVM_GMEM_SHARED_MEM
+ /*
+ * .evict_inode can be called before private data is set up if there are
+ * issues during inode creation.
+ */
+ if (private)
+ xa_destroy(&private->shared_offsets);
+#endif
+
+ truncate_inode_pages_final(inode->i_mapping);
+
+ kfree(private);
+ clear_inode(inode);
+}
+
static const struct super_operations kvm_gmem_super_operations = {
- .statfs = simple_statfs,
+ .statfs = simple_statfs,
+ .evict_inode = kvm_gmem_evict_inode,
};
static int kvm_gmem_init_fs_context(struct fs_context *fc)
@@ -553,6 +584,7 @@ static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
loff_t size, u64 flags)
{
const struct qstr qname = QSTR_INIT(name, strlen(name));
+ struct kvm_gmem_inode_private *private;
struct inode *inode;
int err;
@@ -561,10 +593,19 @@ static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
return inode;
err = security_inode_init_security_anon(inode, &qname, NULL);
- if (err) {
- iput(inode);
- return ERR_PTR(err);
- }
+ if (err)
+ goto out;
+
+ err = -ENOMEM;
+ private = kzalloc(sizeof(*private), GFP_KERNEL);
+ if (!private)
+ goto out;
+
+#ifdef CONFIG_KVM_GMEM_SHARED_MEM
+ xa_init(&private->shared_offsets);
+#endif
+
+ inode->i_mapping->i_private_data = private;
inode->i_private = (void *)(unsigned long)flags;
inode->i_op = &kvm_gmem_iops;
@@ -577,6 +618,11 @@ static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
return inode;
+
+out:
+ iput(inode);
+
+ return ERR_PTR(err);
}
static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size,
--
2.49.0.rc1.451.g8f38331e32-goog
* [PATCH v6 4/7] KVM: guest_memfd: Folio sharing states and functions that manage their transition
2025-03-18 16:20 [PATCH v6 0/7] KVM: Restricted mapping of guest_memfd at the host and arm64 support Fuad Tabba
` (2 preceding siblings ...)
2025-03-18 16:20 ` [PATCH v6 3/7] KVM: guest_memfd: Track folio sharing within a struct kvm_gmem_private Fuad Tabba
@ 2025-03-18 16:20 ` Fuad Tabba
2025-03-18 16:20 ` [PATCH v6 5/7] KVM: guest_memfd: Restore folio state after final folio_put() Fuad Tabba
` (2 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Fuad Tabba @ 2025-03-18 16:20 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, tabba
To allow in-place sharing of guest_memfd folios with the host, guest_memfd
needs to track their sharing state, because mapping of shared folios will
only be allowed where it is safe to access them. It is safe to map and
access these folios when they have been explicitly shared with the host, or
potentially when they have not yet been exposed to the guest (e.g., at
initialization).
This patch introduces sharing states for guest_memfd folios as well as
the functions that manage transitioning between those states.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
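As an illustration of how these helpers are expected to be driven (not part
of this series; the hypercall handlers below are hypothetical pKVM plumbing),
a share or unshare request from the guest would reduce to:

/* Guest asks to share [gfn, gfn + nr_pages) with the host. */
static int pkvm_handle_share(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
{
        return kvm_gmem_set_shared(kvm, gfn, gfn + nr_pages);
}

/*
 * Guest takes the range back. Folios that the host still references pass
 * through the transient NONE_SHARED state until those references drop.
 */
static int pkvm_handle_unshare(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
{
        return kvm_gmem_clear_shared(kvm, gfn, gfn + nr_pages);
}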
include/linux/kvm_host.h | 39 +++++++-
virt/kvm/guest_memfd.c | 188 ++++++++++++++++++++++++++++++++++++---
virt/kvm/kvm_main.c | 62 +++++++++++++
3 files changed, 275 insertions(+), 14 deletions(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index bc73d7426363..bf82faf16c53 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2600,7 +2600,44 @@ long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
#endif
#ifdef CONFIG_KVM_GMEM_SHARED_MEM
+int kvm_gmem_set_shared(struct kvm *kvm, gfn_t start, gfn_t end);
+int kvm_gmem_clear_shared(struct kvm *kvm, gfn_t start, gfn_t end);
+int kvm_gmem_slot_set_shared(struct kvm_memory_slot *slot, gfn_t start,
+ gfn_t end);
+int kvm_gmem_slot_clear_shared(struct kvm_memory_slot *slot, gfn_t start,
+ gfn_t end);
+bool kvm_gmem_slot_is_guest_shared(struct kvm_memory_slot *slot, gfn_t gfn);
void kvm_gmem_handle_folio_put(struct folio *folio);
-#endif
+#else
+static inline int kvm_gmem_set_shared(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+ WARN_ON_ONCE(1);
+ return -EINVAL;
+}
+static inline int kvm_gmem_clear_shared(struct kvm *kvm, gfn_t start,
+ gfn_t end)
+{
+ WARN_ON_ONCE(1);
+ return -EINVAL;
+}
+static inline int kvm_gmem_slot_set_shared(struct kvm_memory_slot *slot,
+ gfn_t start, gfn_t end)
+{
+ WARN_ON_ONCE(1);
+ return -EINVAL;
+}
+static inline int kvm_gmem_slot_clear_shared(struct kvm_memory_slot *slot,
+ gfn_t start, gfn_t end)
+{
+ WARN_ON_ONCE(1);
+ return -EINVAL;
+}
+static inline bool kvm_gmem_slot_is_guest_shared(struct kvm_memory_slot *slot,
+ gfn_t gfn)
+{
+ WARN_ON_ONCE(1);
+ return false;
+}
+#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
#endif
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index a7f7c6eb6b4a..4b857ab421bf 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -28,14 +28,6 @@ static struct kvm_gmem_inode_private *kvm_gmem_private(struct inode *inode)
return inode->i_mapping->i_private_data;
}
-#ifdef CONFIG_KVM_GMEM_SHARED_MEM
-void kvm_gmem_handle_folio_put(struct folio *folio)
-{
- WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress.");
-}
-EXPORT_SYMBOL_GPL(kvm_gmem_handle_folio_put);
-#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
-
/**
* folio_file_pfn - like folio_file_page, but return a pfn.
* @folio: The folio which contains this index.
@@ -388,13 +380,183 @@ static void kvm_gmem_init_mount(void)
}
#ifdef CONFIG_KVM_GMEM_SHARED_MEM
-static bool kvm_gmem_offset_is_shared(struct file *file, pgoff_t index)
+/*
+ * An enum of the valid folio sharing states:
+ * Bit 0: set if not shared with the guest (guest cannot fault it in)
+ * Bit 1: set if not shared with the host (host cannot fault it in)
+ */
+enum folio_shareability {
+ KVM_GMEM_ALL_SHARED = 0b00, /* Shared with host and guest. */
+ KVM_GMEM_GUEST_SHARED = 0b10, /* Shared only with guest. */
+ KVM_GMEM_NONE_SHARED = 0b11, /* Not shared, transient state. */
+};
+
+static int kvm_gmem_offset_set_shared(struct inode *inode, pgoff_t index)
{
- struct kvm_gmem *gmem = file->private_data;
+ struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
+ void *xval = xa_mk_value(KVM_GMEM_ALL_SHARED);
+
+ rwsem_assert_held_write_nolockdep(&inode->i_mapping->invalidate_lock);
+
+ return xa_err(xa_store(shared_offsets, index, xval, GFP_KERNEL));
+}
+
+/*
+ * Marks the range [start, end) as shared with both the host and the guest.
+ * Called when guest shares memory with the host.
+ */
+static int kvm_gmem_offset_range_set_shared(struct inode *inode,
+ pgoff_t start, pgoff_t end)
+{
+ pgoff_t i;
+ int r = 0;
+
+ filemap_invalidate_lock(inode->i_mapping);
+ for (i = start; i < end; i++) {
+ r = kvm_gmem_offset_set_shared(inode, i);
+ if (WARN_ON_ONCE(r))
+ break;
+ }
+ filemap_invalidate_unlock(inode->i_mapping);
+
+ return r;
+}
+
+static int kvm_gmem_offset_clear_shared(struct inode *inode, pgoff_t index)
+{
+ struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
+ void *xval_guest = xa_mk_value(KVM_GMEM_GUEST_SHARED);
+ void *xval_none = xa_mk_value(KVM_GMEM_NONE_SHARED);
+ struct folio *folio;
+ int refcount;
+ int r;
+
+ rwsem_assert_held_write_nolockdep(&inode->i_mapping->invalidate_lock);
+
+ folio = filemap_lock_folio(inode->i_mapping, index);
+ if (!IS_ERR(folio)) {
+ /* +1 references are expected because of filemap_lock_folio(). */
+ refcount = folio_nr_pages(folio) + 1;
+ } else {
+ r = PTR_ERR(folio);
+ if (WARN_ON_ONCE(r != -ENOENT))
+ return r;
+
+ folio = NULL;
+ }
+
+ if (!folio || folio_ref_freeze(folio, refcount)) {
+ /*
+ * No outstanding references: transition to guest shared.
+ */
+ r = xa_err(xa_store(shared_offsets, index, xval_guest, GFP_KERNEL));
+
+ if (folio)
+ folio_ref_unfreeze(folio, refcount);
+ } else {
+ /*
+ * Outstanding references: the folio cannot be faulted in by
+ * anyone until they're dropped.
+ */
+ r = xa_err(xa_store(shared_offsets, index, xval_none, GFP_KERNEL));
+ }
+
+ if (folio) {
+ folio_unlock(folio);
+ folio_put(folio);
+ }
+
+ return r;
+}
+
+/*
+ * Marks the range [start, end) as not shared with the host. If the host doesn't
+ * have any references to a particular folio, then that folio is marked as
+ * shared with the guest.
+ *
+ * However, if the host still has references to the folio, then the folio is
+ * marked as not shared with anyone. Marking it as not shared allows draining
+ * all references from the host, and ensures that the hypervisor does not
+ * transition the folio to private, since the host still might access it.
+ *
+ * Called when guest unshares memory with the host.
+ */
+static int kvm_gmem_offset_range_clear_shared(struct inode *inode,
+ pgoff_t start, pgoff_t end)
+{
+ pgoff_t i;
+ int r = 0;
+
+ filemap_invalidate_lock(inode->i_mapping);
+ for (i = start; i < end; i++) {
+ r = kvm_gmem_offset_clear_shared(inode, i);
+ if (WARN_ON_ONCE(r))
+ break;
+ }
+ filemap_invalidate_unlock(inode->i_mapping);
+
+ return r;
+}
+
+void kvm_gmem_handle_folio_put(struct folio *folio)
+{
+ WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress.");
+}
+EXPORT_SYMBOL_GPL(kvm_gmem_handle_folio_put);
+
+static bool kvm_gmem_offset_is_shared(struct inode *inode, pgoff_t index)
+{
+ struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
+ unsigned long r;
+
+ rwsem_assert_held_nolockdep(&inode->i_mapping->invalidate_lock);
+
+ r = xa_to_value(xa_load(shared_offsets, index));
+
+ return r == KVM_GMEM_ALL_SHARED;
+}
+
+static bool kvm_gmem_offset_is_guest_shared(struct inode *inode, pgoff_t index)
+{
+ struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
+ unsigned long r;
+
+ rwsem_assert_held_nolockdep(&inode->i_mapping->invalidate_lock);
+
+ r = xa_to_value(xa_load(shared_offsets, index));
+
+ return (r == KVM_GMEM_ALL_SHARED || r == KVM_GMEM_GUEST_SHARED);
+}
+
+int kvm_gmem_slot_set_shared(struct kvm_memory_slot *slot, gfn_t start, gfn_t end)
+{
+ struct inode *inode = file_inode(READ_ONCE(slot->gmem.file));
+ pgoff_t start_off = slot->gmem.pgoff + start - slot->base_gfn;
+ pgoff_t end_off = start_off + end - start;
+
+ return kvm_gmem_offset_range_set_shared(inode, start_off, end_off);
+}
+
+int kvm_gmem_slot_clear_shared(struct kvm_memory_slot *slot, gfn_t start, gfn_t end)
+{
+ struct inode *inode = file_inode(READ_ONCE(slot->gmem.file));
+ pgoff_t start_off = slot->gmem.pgoff + start - slot->base_gfn;
+ pgoff_t end_off = start_off + end - start;
+
+ return kvm_gmem_offset_range_clear_shared(inode, start_off, end_off);
+}
+
+bool kvm_gmem_slot_is_guest_shared(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+ struct inode *inode = file_inode(READ_ONCE(slot->gmem.file));
+ unsigned long pgoff = slot->gmem.pgoff + gfn - slot->base_gfn;
+ bool r;
+ filemap_invalidate_lock_shared(inode->i_mapping);
+ r = kvm_gmem_offset_is_guest_shared(inode, pgoff);
+ filemap_invalidate_unlock_shared(inode->i_mapping);
- /* For now, VMs that support shared memory share all their memory. */
- return kvm_arch_gmem_supports_shared_mem(gmem->kvm);
+ return r;
}
static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
@@ -422,7 +584,7 @@ static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
goto out_folio;
}
- if (!kvm_gmem_offset_is_shared(vmf->vma->vm_file, vmf->pgoff)) {
+ if (!kvm_gmem_offset_is_shared(inode, vmf->pgoff)) {
ret = VM_FAULT_SIGBUS;
goto out_folio;
}
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3e40acb9f5c0..90762252381c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3091,6 +3091,68 @@ static int next_segment(unsigned long len, int offset)
return len;
}
+#ifdef CONFIG_KVM_GMEM_SHARED_MEM
+int kvm_gmem_set_shared(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+ struct kvm_memslot_iter iter;
+ int r = 0;
+
+ mutex_lock(&kvm->slots_lock);
+
+ kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) {
+ struct kvm_memory_slot *memslot = iter.slot;
+ gfn_t gfn_start, gfn_end;
+
+ if (!kvm_slot_can_be_private(memslot))
+ continue;
+
+ gfn_start = max(start, memslot->base_gfn);
+ gfn_end = min(end, memslot->base_gfn + memslot->npages);
+ if (WARN_ON_ONCE(gfn_start >= gfn_end))
+ continue;
+
+ r = kvm_gmem_slot_set_shared(memslot, gfn_start, gfn_end);
+ if (WARN_ON_ONCE(r))
+ break;
+ }
+
+ mutex_unlock(&kvm->slots_lock);
+
+ return r;
+}
+EXPORT_SYMBOL_GPL(kvm_gmem_set_shared);
+
+int kvm_gmem_clear_shared(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+ struct kvm_memslot_iter iter;
+ int r = 0;
+
+ mutex_lock(&kvm->slots_lock);
+
+ kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) {
+ struct kvm_memory_slot *memslot = iter.slot;
+ gfn_t gfn_start, gfn_end;
+
+ if (!kvm_slot_can_be_private(memslot))
+ continue;
+
+ gfn_start = max(start, memslot->base_gfn);
+ gfn_end = min(end, memslot->base_gfn + memslot->npages);
+ if (WARN_ON_ONCE(gfn_start >= gfn_end))
+ continue;
+
+ r = kvm_gmem_slot_clear_shared(memslot, gfn_start, gfn_end);
+ if (WARN_ON_ONCE(r))
+ break;
+ }
+
+ mutex_unlock(&kvm->slots_lock);
+
+ return r;
+}
+EXPORT_SYMBOL_GPL(kvm_gmem_clear_shared);
+#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
+
/* Copy @len bytes from guest memory at '(@gfn * PAGE_SIZE) + @offset' to @data */
static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
void *data, int offset, int len)
--
2.49.0.rc1.451.g8f38331e32-goog
* [PATCH v6 5/7] KVM: guest_memfd: Restore folio state after final folio_put()
2025-03-18 16:20 [PATCH v6 0/7] KVM: Restricted mapping of guest_memfd at the host and arm64 support Fuad Tabba
` (3 preceding siblings ...)
2025-03-18 16:20 ` [PATCH v6 4/7] KVM: guest_memfd: Folio sharing states and functions that manage their transition Fuad Tabba
@ 2025-03-18 16:20 ` Fuad Tabba
2025-03-21 20:09 ` Vishal Annapurve
2025-03-18 16:20 ` [PATCH v6 6/7] KVM: guest_memfd: Handle invalidation of shared memory Fuad Tabba
2025-03-18 16:20 ` [PATCH v6 7/7] KVM: guest_memfd: Add a guest_memfd() flag to initialize it as shared Fuad Tabba
6 siblings, 1 reply; 11+ messages in thread
From: Fuad Tabba @ 2025-03-18 16:20 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, tabba
Before transitioning a guest_memfd folio to unshared, thereby disallowing
access by the host and allowing the hypervisor to transition its view of the
guest page to private, we need to be sure that the host doesn't have any
references to the folio.
This patch uses the guest_memfd folio type to register a callback that
informs the guest_memfd subsystem when the last reference is dropped, so
that it knows the host no longer holds any references.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
The function kvm_gmem_slot_register_callback() isn't used in this
series. It will be used later in code that performs unsharing of
memory. I have tested it with pKVM, based on downstream code [*].
It's included here since it demonstrates the plan for handling the
unsharing of private folios.
[*] https://android-kvm.googlesource.com/linux/+/refs/heads/tabba/guestmem-6.13-v6-pkvm
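A sketch of how the callback is expected to be used by that future unsharing
code (illustrative only; gmem_unshare_gfn() is a hypothetical helper, not
part of this series):

static int gmem_unshare_gfn(struct kvm_memory_slot *slot, gfn_t gfn)
{
        int r;

        /* Stop the host from faulting the page in; may leave NONE_SHARED. */
        r = kvm_gmem_slot_clear_shared(slot, gfn, gfn + 1);
        if (r)
                return r;

        /*
         * Arrange for the final folio_put() to complete the transition to
         * GUEST_SHARED; may return -EAGAIN while the host still maps it.
         */
        return kvm_gmem_slot_register_callback(slot, gfn);
}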
---
include/linux/kvm_host.h | 6 ++
virt/kvm/guest_memfd.c | 142 ++++++++++++++++++++++++++++++++++++++-
2 files changed, 147 insertions(+), 1 deletion(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index bf82faf16c53..d9d9d72d8beb 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2607,6 +2607,7 @@ int kvm_gmem_slot_set_shared(struct kvm_memory_slot *slot, gfn_t start,
int kvm_gmem_slot_clear_shared(struct kvm_memory_slot *slot, gfn_t start,
gfn_t end);
bool kvm_gmem_slot_is_guest_shared(struct kvm_memory_slot *slot, gfn_t gfn);
+int kvm_gmem_slot_register_callback(struct kvm_memory_slot *slot, gfn_t gfn);
void kvm_gmem_handle_folio_put(struct folio *folio);
#else
static inline int kvm_gmem_set_shared(struct kvm *kvm, gfn_t start, gfn_t end)
@@ -2638,6 +2639,11 @@ static inline bool kvm_gmem_slot_is_guest_shared(struct kvm_memory_slot *slot,
WARN_ON_ONCE(1);
return false;
}
+static inline int kvm_gmem_slot_register_callback(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+ WARN_ON_ONCE(1);
+ return -EINVAL;
+}
#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
#endif
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 4b857ab421bf..4fd9e5760503 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -391,6 +391,28 @@ enum folio_shareability {
KVM_GMEM_NONE_SHARED = 0b11, /* Not shared, transient state. */
};
+/*
+ * Unregisters the __folio_put() callback from the folio.
+ *
+ * Restores a folio's refcount after all pending references have been released,
+ * and removes the folio type, thereby removing the callback. Now the folio can
+ * be freed normally once all actual references have been dropped.
+ *
+ * Must be called with the filemap (inode->i_mapping) invalidate_lock held, and
+ * the folio must be locked.
+ */
+static void kvm_gmem_restore_pending_folio(struct folio *folio, const struct inode *inode)
+{
+ rwsem_assert_held_write_nolockdep(&inode->i_mapping->invalidate_lock);
+ WARN_ON_ONCE(!folio_test_locked(folio));
+
+ if (WARN_ON_ONCE(folio_mapped(folio) || !folio_test_guestmem(folio)))
+ return;
+
+ __folio_clear_guestmem(folio);
+ folio_ref_add(folio, folio_nr_pages(folio));
+}
+
static int kvm_gmem_offset_set_shared(struct inode *inode, pgoff_t index)
{
struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
@@ -398,6 +420,24 @@ static int kvm_gmem_offset_set_shared(struct inode *inode, pgoff_t index)
rwsem_assert_held_write_nolockdep(&inode->i_mapping->invalidate_lock);
+ /*
+ * If the folio is NONE_SHARED, it indicates that it is transitioning to
+ * private (GUEST_SHARED). Transition it to shared (ALL_SHARED)
+ * immediately, and remove the callback.
+ */
+ if (xa_to_value(xa_load(shared_offsets, index)) == KVM_GMEM_NONE_SHARED) {
+ struct folio *folio = filemap_lock_folio(inode->i_mapping, index);
+
+ if (WARN_ON_ONCE(IS_ERR(folio)))
+ return PTR_ERR(folio);
+
+ if (folio_test_guestmem(folio))
+ kvm_gmem_restore_pending_folio(folio, inode);
+
+ folio_unlock(folio);
+ folio_put(folio);
+ }
+
return xa_err(xa_store(shared_offsets, index, xval, GFP_KERNEL));
}
@@ -498,9 +538,109 @@ static int kvm_gmem_offset_range_clear_shared(struct inode *inode,
return r;
}
+/*
+ * Registers a callback to __folio_put(), so that gmem knows that the host does
+ * not have any references to the folio. The callback itself is registered by
+ * setting the folio type to guestmem.
+ *
+ * Returns 0 if a callback was registered or already has been registered, or
+ * -EAGAIN if the host has references, indicating a callback wasn't registered.
+ *
+ * Must be called with the filemap (inode->i_mapping) invalidate_lock held, and
+ * the folio must be locked.
+ */
+static int kvm_gmem_register_callback(struct folio *folio, struct inode *inode, pgoff_t index)
+{
+ struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
+ void *xval_guest = xa_mk_value(KVM_GMEM_GUEST_SHARED);
+ int refcount;
+ int r = 0;
+
+ rwsem_assert_held_write_nolockdep(&inode->i_mapping->invalidate_lock);
+ WARN_ON_ONCE(!folio_test_locked(folio));
+
+ if (folio_test_guestmem(folio))
+ return 0;
+
+ if (folio_mapped(folio))
+ return -EAGAIN;
+
+ refcount = folio_ref_count(folio);
+ if (!folio_ref_freeze(folio, refcount))
+ return -EAGAIN;
+
+ /*
+ * Register callback by setting the folio type and subtracting gmem's
+ * references for it to trigger once outstanding references are dropped.
+ */
+ if (refcount > 1) {
+ __folio_set_guestmem(folio);
+ refcount -= folio_nr_pages(folio);
+ } else {
+ /* No outstanding references, transition it to guest shared. */
+ r = WARN_ON_ONCE(xa_err(xa_store(shared_offsets, index, xval_guest, GFP_KERNEL)));
+ }
+
+ folio_ref_unfreeze(folio, refcount);
+ return r;
+}
+
+int kvm_gmem_slot_register_callback(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+ unsigned long pgoff = slot->gmem.pgoff + gfn - slot->base_gfn;
+ struct inode *inode = file_inode(READ_ONCE(slot->gmem.file));
+ struct folio *folio;
+ int r;
+
+ filemap_invalidate_lock(inode->i_mapping);
+
+ folio = filemap_lock_folio(inode->i_mapping, pgoff);
+ if (WARN_ON_ONCE(IS_ERR(folio))) {
+ r = PTR_ERR(folio);
+ goto out;
+ }
+
+ r = kvm_gmem_register_callback(folio, inode, pgoff);
+
+ folio_unlock(folio);
+ folio_put(folio);
+out:
+ filemap_invalidate_unlock(inode->i_mapping);
+
+ return r;
+}
+EXPORT_SYMBOL_GPL(kvm_gmem_slot_register_callback);
+
+/*
+ * Callback function for __folio_put(), i.e., called once all references by the
+ * host to the folio have been dropped. This allows gmem to transition the state
+ * of the folio to shared with the guest, and allows the hypervisor to continue
+ * transitioning its state to private, since the host cannot attempt to access
+ * it anymore.
+ */
void kvm_gmem_handle_folio_put(struct folio *folio)
{
- WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress.");
+ struct address_space *mapping;
+ struct xarray *shared_offsets;
+ struct inode *inode;
+ pgoff_t index;
+ void *xval;
+
+ mapping = folio->mapping;
+ if (WARN_ON_ONCE(!mapping))
+ return;
+
+ inode = mapping->host;
+ index = folio->index;
+ shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
+ xval = xa_mk_value(KVM_GMEM_GUEST_SHARED);
+
+ filemap_invalidate_lock(inode->i_mapping);
+ folio_lock(folio);
+ kvm_gmem_restore_pending_folio(folio, inode);
+ folio_unlock(folio);
+ WARN_ON_ONCE(xa_err(xa_store(shared_offsets, index, xval, GFP_KERNEL)));
+ filemap_invalidate_unlock(inode->i_mapping);
}
EXPORT_SYMBOL_GPL(kvm_gmem_handle_folio_put);
--
2.49.0.rc1.451.g8f38331e32-goog
* [PATCH v6 6/7] KVM: guest_memfd: Handle invalidation of shared memory
2025-03-18 16:20 [PATCH v6 0/7] KVM: Restricted mapping of guest_memfd at the host and arm64 support Fuad Tabba
` (4 preceding siblings ...)
2025-03-18 16:20 ` [PATCH v6 5/7] KVM: guest_memfd: Restore folio state after final folio_put() Fuad Tabba
@ 2025-03-18 16:20 ` Fuad Tabba
2025-03-18 16:20 ` [PATCH v6 7/7] KVM: guest_memfd: Add a guest_memfd() flag to initialize it as shared Fuad Tabba
6 siblings, 0 replies; 11+ messages in thread
From: Fuad Tabba @ 2025-03-18 16:20 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, tabba
When guest_memfd-backed memory is invalidated, e.g., on hole punching or
release, ensure that the sharing states are updated and that any folios in a
transient state are restored to an appropriate state.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
virt/kvm/guest_memfd.c | 56 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 56 insertions(+)
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 4fd9e5760503..0487a08615f0 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -117,6 +117,16 @@ static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
return filemap_grab_folio(inode->i_mapping, index);
}
+#ifdef CONFIG_KVM_GMEM_SHARED_MEM
+static void kvm_gmem_offset_range_invalidate_shared(struct inode *inode,
+ pgoff_t start, pgoff_t end);
+#else
+static inline void kvm_gmem_offset_range_invalidate_shared(struct inode *inode,
+ pgoff_t start, pgoff_t end)
+{
+}
+#endif
+
static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
pgoff_t end)
{
@@ -126,6 +136,7 @@ static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
unsigned long index;
xa_for_each_range(&gmem->bindings, index, slot, start, end - 1) {
+ struct file *file = READ_ONCE(slot->gmem.file);
pgoff_t pgoff = slot->gmem.pgoff;
struct kvm_gfn_range gfn_range = {
@@ -145,6 +156,16 @@ static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
}
flush |= kvm_mmu_unmap_gfn_range(kvm, &gfn_range);
+
+ /*
+ * If this gets called after kvm_gmem_unbind() it means that all
+ * in-flight operations are gone, and the file has been closed.
+ */
+ if (file) {
+ kvm_gmem_offset_range_invalidate_shared(file_inode(file),
+ gfn_range.start,
+ gfn_range.end);
+ }
}
if (flush)
@@ -509,6 +530,41 @@ static int kvm_gmem_offset_clear_shared(struct inode *inode, pgoff_t index)
return r;
}
+/*
+ * Callback when invalidating memory that is potentially shared.
+ *
+ * Must be called with the filemap (inode->i_mapping) invalidate_lock held.
+ */
+static void kvm_gmem_offset_range_invalidate_shared(struct inode *inode,
+ pgoff_t start, pgoff_t end)
+{
+ struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
+ pgoff_t i;
+
+ rwsem_assert_held_write_nolockdep(&inode->i_mapping->invalidate_lock);
+
+ for (i = start; i < end; i++) {
+ /*
+ * If the folio is NONE_SHARED, it indicates that it is
+ * transitioning to private (GUEST_SHARED). Transition it to
+ * shared (ALL_SHARED) and remove the callback.
+ */
+ if (xa_to_value(xa_load(shared_offsets, i)) == KVM_GMEM_NONE_SHARED) {
+ struct folio *folio = filemap_lock_folio(inode->i_mapping, i);
+
+ if (!WARN_ON_ONCE(IS_ERR(folio))) {
+ if (folio_test_guestmem(folio))
+ kvm_gmem_restore_pending_folio(folio, inode);
+
+ folio_unlock(folio);
+ folio_put(folio);
+ }
+ }
+
+ xa_erase(shared_offsets, i);
+ }
+}
+
/*
* Marks the range [start, end) as not shared with the host. If the host doesn't
* have any references to a particular folio, then that folio is marked as
--
2.49.0.rc1.451.g8f38331e32-goog
* [PATCH v6 7/7] KVM: guest_memfd: Add a guest_memfd() flag to initialize it as shared
2025-03-18 16:20 [PATCH v6 0/7] KVM: Restricted mapping of guest_memfd at the host and arm64 support Fuad Tabba
` (5 preceding siblings ...)
2025-03-18 16:20 ` [PATCH v6 6/7] KVM: guest_memfd: Handle invalidation of shared memory Fuad Tabba
@ 2025-03-18 16:20 ` Fuad Tabba
6 siblings, 0 replies; 11+ messages in thread
From: Fuad Tabba @ 2025-03-18 16:20 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, tabba
Not all use cases require guest_memfd() to be shared with the host when
first created. Add a new flag, GUEST_MEMFD_FLAG_INIT_SHARED, which, when set
on KVM_CREATE_GUEST_MEMFD, initializes the memory as shared with the host
and therefore mappable by it. Otherwise, memory is private until explicitly
shared by the guest with the host.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
Documentation/virt/kvm/api.rst | 4 ++++
include/uapi/linux/kvm.h | 1 +
tools/testing/selftests/kvm/guest_memfd_test.c | 7 +++++--
virt/kvm/guest_memfd.c | 12 ++++++++++++
4 files changed, 22 insertions(+), 2 deletions(-)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 2b52eb77e29c..a5496d7d323b 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6386,6 +6386,10 @@ most one mapping per page, i.e. binding multiple memory regions to a single
guest_memfd range is not allowed (any number of memory regions can be bound to
a single guest_memfd file, but the bound ranges must not overlap).
+If the capability KVM_CAP_GMEM_SHARED_MEM is supported, then the flags field
+supports GUEST_MEMFD_FLAG_INIT_SHARED, which initializes the memory as shared
+with the host, and thereby mappable by it.
+
See KVM_SET_USER_MEMORY_REGION2 for additional details.
4.143 KVM_PRE_FAULT_MEMORY
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 117937a895da..22d7e33bf09c 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1566,6 +1566,7 @@ struct kvm_memory_attributes {
#define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3)
#define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)
+#define GUEST_MEMFD_FLAG_INIT_SHARED (1UL << 0)
struct kvm_create_guest_memfd {
__u64 size;
diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index 38c501e49e0e..4a7fcd6aa372 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -159,7 +159,7 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
{
size_t page_size = getpagesize();
- uint64_t flag;
+ uint64_t flag = BIT(0);
size_t size;
int fd;
@@ -170,7 +170,10 @@ static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
size);
}
- for (flag = BIT(0); flag; flag <<= 1) {
+ if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
+ flag = GUEST_MEMFD_FLAG_INIT_SHARED << 1;
+
+ for (; flag; flag <<= 1) {
fd = __vm_create_guest_memfd(vm, page_size, flag);
TEST_ASSERT(fd == -1 && errno == EINVAL,
"guest_memfd() with flag '0x%lx' should fail with EINVAL",
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 0487a08615f0..d7313e11c2cb 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -1045,6 +1045,15 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
goto err_gmem;
}
+ if (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) &&
+ (flags & GUEST_MEMFD_FLAG_INIT_SHARED)) {
+ err = kvm_gmem_offset_range_set_shared(file_inode(file), 0, size >> PAGE_SHIFT);
+ if (err) {
+ fput(file);
+ goto err_gmem;
+ }
+ }
+
kvm_get_kvm(kvm);
gmem->kvm = kvm;
xa_init(&gmem->bindings);
@@ -1066,6 +1075,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
u64 flags = args->flags;
u64 valid_flags = 0;
+ if (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM))
+ valid_flags |= GUEST_MEMFD_FLAG_INIT_SHARED;
+
if (flags & ~valid_flags)
return -EINVAL;
--
2.49.0.rc1.451.g8f38331e32-goog
* Re: [PATCH v6 5/7] KVM: guest_memfd: Restore folio state after final folio_put()
2025-03-18 16:20 ` [PATCH v6 5/7] KVM: guest_memfd: Restore folio state after final folio_put() Fuad Tabba
@ 2025-03-21 20:09 ` Vishal Annapurve
2025-03-25 15:57 ` Fuad Tabba
0 siblings, 1 reply; 11+ messages in thread
From: Vishal Annapurve @ 2025-03-21 20:09 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vbabka, ackerleytng, mail, david,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat,
shuah, hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton,
peterx
On Tue, Mar 18, 2025 at 9:20 AM Fuad Tabba <tabba@google.com> wrote:
> ...
> +/*
> + * Callback function for __folio_put(), i.e., called once all references by the
> + * host to the folio have been dropped. This allows gmem to transition the state
> + * of the folio to shared with the guest, and allows the hypervisor to continue
> + * transitioning its state to private, since the host cannot attempt to access
> + * it anymore.
> + */
> void kvm_gmem_handle_folio_put(struct folio *folio)
> {
> - WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress.");
> + struct address_space *mapping;
> + struct xarray *shared_offsets;
> + struct inode *inode;
> + pgoff_t index;
> + void *xval;
> +
> + mapping = folio->mapping;
> + if (WARN_ON_ONCE(!mapping))
> + return;
> +
> + inode = mapping->host;
> + index = folio->index;
> + shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
> + xval = xa_mk_value(KVM_GMEM_GUEST_SHARED);
> +
> + filemap_invalidate_lock(inode->i_mapping);
As discussed in the guest_memfd upstream discussion, folio_put() can happen
from atomic context [1], so we need a way to either defer the work outside
kvm_gmem_handle_folio_put() (which is very likely needed to handle hugepages
and merge operations anyway) or ensure the logic executes using
synchronization primitives that will not sleep.
[1] https://elixir.bootlin.com/linux/v6.14-rc6/source/include/linux/mm.h#L1483
> + folio_lock(folio);
> + kvm_gmem_restore_pending_folio(folio, inode);
> + folio_unlock(folio);
> + WARN_ON_ONCE(xa_err(xa_store(shared_offsets, index, xval, GFP_KERNEL)));
> + filemap_invalidate_unlock(inode->i_mapping);
> }
> EXPORT_SYMBOL_GPL(kvm_gmem_handle_folio_put);
>
> --
> 2.49.0.rc1.451.g8f38331e32-goog
>
* Re: [PATCH v6 5/7] KVM: guest_memfd: Restore folio state after final folio_put()
2025-03-21 20:09 ` Vishal Annapurve
@ 2025-03-25 15:57 ` Fuad Tabba
2025-04-02 22:17 ` Michael Roth
0 siblings, 1 reply; 11+ messages in thread
From: Fuad Tabba @ 2025-03-25 15:57 UTC (permalink / raw)
To: Vishal Annapurve
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vbabka, ackerleytng, mail, david,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat,
shuah, hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton,
peterx
Hi Vishal,
On Fri, 21 Mar 2025 at 20:09, Vishal Annapurve <vannapurve@google.com> wrote:
>
> On Tue, Mar 18, 2025 at 9:20 AM Fuad Tabba <tabba@google.com> wrote:
> > ...
> > +/*
> > + * Callback function for __folio_put(), i.e., called once all references by the
> > + * host to the folio have been dropped. This allows gmem to transition the state
> > + * of the folio to shared with the guest, and allows the hypervisor to continue
> > + * transitioning its state to private, since the host cannot attempt to access
> > + * it anymore.
> > + */
> > void kvm_gmem_handle_folio_put(struct folio *folio)
> > {
> > - WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress.");
> > + struct address_space *mapping;
> > + struct xarray *shared_offsets;
> > + struct inode *inode;
> > + pgoff_t index;
> > + void *xval;
> > +
> > + mapping = folio->mapping;
> > + if (WARN_ON_ONCE(!mapping))
> > + return;
> > +
> > + inode = mapping->host;
> > + index = folio->index;
> > + shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
> > + xval = xa_mk_value(KVM_GMEM_GUEST_SHARED);
> > +
> > + filemap_invalidate_lock(inode->i_mapping);
>
> As discussed in the guest_memfd upstream, folio_put can happen from
> atomic context [1], so we need a way to either defer the work outside
> kvm_gmem_handle_folio_put() (which is very likely needed to handle
> hugepages and merge operation) or ensure to execute the logic using
> synchronization primitives that will not sleep.
Thanks for pointing this out. For now, rather than deferring (which
we'll come to when hugepages come into play), I think it would be
possible to resolve this by ensuring we have exclusive access* to the
folio instead, and using that to guarantee that we can safely access
the shared_offsets map.
* By exclusive access I mean either holding the folio lock, or knowing
that no one else has references to the folio (which is the case when
kvm_gmem_handle_folio_put() is called).
I'll try to respin something in time for folks to look at it before
the next sync.
Cheers,
/fuad
> [1] https://elixir.bootlin.com/linux/v6.14-rc6/source/include/linux/mm.h#L1483
>
> > + folio_lock(folio);
> > + kvm_gmem_restore_pending_folio(folio, inode);
> > + folio_unlock(folio);
> > + WARN_ON_ONCE(xa_err(xa_store(shared_offsets, index, xval, GFP_KERNEL)));
> > + filemap_invalidate_unlock(inode->i_mapping);
> > }
> > EXPORT_SYMBOL_GPL(kvm_gmem_handle_folio_put);
> >
> > --
> > 2.49.0.rc1.451.g8f38331e32-goog
> >
* Re: [PATCH v6 5/7] KVM: guest_memfd: Restore folio state after final folio_put()
2025-03-25 15:57 ` Fuad Tabba
@ 2025-04-02 22:17 ` Michael Roth
0 siblings, 0 replies; 11+ messages in thread
From: Michael Roth @ 2025-04-02 22:17 UTC (permalink / raw)
To: Fuad Tabba
Cc: Vishal Annapurve, kvm, linux-arm-msm, linux-mm, pbonzini,
chenhuacai, mpe, anup, paul.walmsley, palmer, aou, seanjc, viro,
brauner, willy, akpm, xiaoyao.li, yilun.xu, chao.p.peng, jarkko,
amoorthy, dmatlack, isaku.yamahata, mic, vbabka, ackerleytng,
mail, david, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat,
shuah, hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton,
peterx
On Tue, Mar 25, 2025 at 03:57:00PM +0000, Fuad Tabba wrote:
> Hi Vishal,
>
>
> On Fri, 21 Mar 2025 at 20:09, Vishal Annapurve <vannapurve@google.com> wrote:
> >
> > On Tue, Mar 18, 2025 at 9:20 AM Fuad Tabba <tabba@google.com> wrote:
> > > ...
> > > +/*
> > > + * Callback function for __folio_put(), i.e., called once all references by the
> > > + * host to the folio have been dropped. This allows gmem to transition the state
> > > + * of the folio to shared with the guest, and allows the hypervisor to continue
> > > + * transitioning its state to private, since the host cannot attempt to access
> > > + * it anymore.
> > > + */
> > > void kvm_gmem_handle_folio_put(struct folio *folio)
> > > {
> > > - WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress.");
> > > + struct address_space *mapping;
> > > + struct xarray *shared_offsets;
> > > + struct inode *inode;
> > > + pgoff_t index;
> > > + void *xval;
> > > +
> > > + mapping = folio->mapping;
> > > + if (WARN_ON_ONCE(!mapping))
> > > + return;
> > > +
> > > + inode = mapping->host;
> > > + index = folio->index;
> > > + shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
> > > + xval = xa_mk_value(KVM_GMEM_GUEST_SHARED);
> > > +
> > > + filemap_invalidate_lock(inode->i_mapping);
> >
> > As discussed in the guest_memfd upstream, folio_put can happen from
> > atomic context [1], so we need a way to either defer the work outside
> > kvm_gmem_handle_folio_put() (which is very likely needed to handle
> > hugepages and merge operation) or ensure to execute the logic using
> > synchronization primitives that will not sleep.
>
> Thanks for pointing this out. For now, rather than deferring (which
> we'll come to when hugepages come into play), I think this would be
FWIW, with SNP, it's only possible to unsplit an RMP entry if the guest
cooperates with re-validating/re-accepting the memory at a higher order.
Currently, this guest support is not implemented in Linux.
So, if we were to opportunistically unsplit hugepages, we'd zap the
mappings in KVM, let it fault in at a higher order so we could reduce
TLB misses, and then KVM would (via
kvm_x86_call(private_max_mapping_level)(kvm, pfn)) find that the RMP
entry is still split to 4K, and remap everything right back to the 4K
granularity it was already at to begin with.
TDX seems to have a bit more flexibility in being able to
'unsplit'/promote private ranges back up to higher orders, so it could
potentially benefit from doing things opportunistically...
However, ideally... the guest would just avoid unnecessarily carving up
ranges to begin with and pack all its shared mappings into smaller GPA
ranges. Then, all this unsplitting of huge pages could be completely
avoided until cleanup/truncate time. So maybe even for hugepages we
should just plan to do things this way, at least as a start?
> possible to resolve by ensuring we have exclusive access* to the folio
> instead, and using that to ensure that we can access the
> shared_offsets maps.
>
> * By exclusive access I mean either holding the folio lock, or knowing
> that no one else has references to the folio (which is the case when
> kvm_gmem_handle_folio_put() is called).
>
> I'll try to respin something in time for folks to look at it before
> the next sync.
Thanks for posting. I was looking at how to get rid of
filemap_invalidate_lock() from the conversion path, and having that separate
rwlock seems to resolve a lot of the potential races I was looking at.
I'm working on rebasing SNP 2MB support on top of your v7 series now.
-Mike
>
> Cheers,
> /fuad
>
> > [1] https://elixir.bootlin.com/linux/v6.14-rc6/source/include/linux/mm.h#L1483
> >
> > > + folio_lock(folio);
> > > + kvm_gmem_restore_pending_folio(folio, inode);
> > > + folio_unlock(folio);
> > > + WARN_ON_ONCE(xa_err(xa_store(shared_offsets, index, xval, GFP_KERNEL)));
> > > + filemap_invalidate_unlock(inode->i_mapping);
> > > }
> > > EXPORT_SYMBOL_GPL(kvm_gmem_handle_folio_put);
> > >
> > > --
> > > 2.49.0.rc1.451.g8f38331e32-goog
> > >
>