From: Michael Roth <michael.roth@amd.com>
To: <kvm@vger.kernel.org>
Cc: <linux-coco@lists.linux.dev>, <linux-mm@kvack.org>,
<linux-kernel@vger.kernel.org>, <thomas.lendacky@amd.com>,
<pbonzini@redhat.com>, <seanjc@google.com>, <vbabka@suse.cz>,
<ashish.kalra@amd.com>, <liam.merwick@oracle.com>,
<david@redhat.com>, <vannapurve@google.com>,
<ackerleytng@google.com>, <aik@amd.com>, <ira.weiny@intel.com>,
<yan.y.zhao@intel.com>
Subject: [PATCH v2 5/5] KVM: guest_memfd: GUP source pages prior to populating guest memory
Date: Mon, 15 Dec 2025 09:34:11 -0600
Message-ID: <20251215153411.3613928-6-michael.roth@amd.com>
In-Reply-To: <20251215153411.3613928-1-michael.roth@amd.com>

Currently, the post-populate callbacks handle copying source pages into
private GPA ranges backed by guest_memfd: kvm_gmem_populate() acquires
the filemap invalidate lock, then calls a post-populate callback, which
may issue a get_user_pages() on the source pages prior to copying them
into the private GPA (e.g. TDX).

This will not be compatible with in-place conversion, where the
userspace page fault path will attempt to acquire the filemap
invalidate lock while holding mm->mmap_lock, leading to a potential
ABBA deadlock[1].

Address this by hoisting the GUP above the filemap invalidate lock so
that the source pages can be faulted in early, prior to acquiring the
filemap invalidate lock.

It's not currently clear whether this issue is reachable with the
current implementation of guest_memfd, which doesn't support in-place
conversion. However, hoisting the GUP also provides a consistent
mechanism for handing stable source/target PFNs to the callbacks
rather than punting to vendor-specific code. That allows for more
commonality across architectures, which may be worthwhile even without
in-place conversion.

As part of this change, also begin enforcing that the 'src' argument
to kvm_gmem_populate() must be page-aligned. This greatly reduces the
complexity of the post-populate callbacks, and no current in-tree user
relies on a non-page-aligned 'src' argument.

Suggested-by: Sean Christopherson <seanjc@google.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Co-developed-by: Vishal Annapurve <vannapurve@google.com>
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
---
arch/x86/kvm/svm/sev.c | 32 ++++++++-------
arch/x86/kvm/vmx/tdx.c | 15 +------
include/linux/kvm_host.h | 4 +-
virt/kvm/guest_memfd.c | 84 +++++++++++++++++++++++++++-------------
4 files changed, 77 insertions(+), 58 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 90c512ca24a9..11ae008aec8a 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2268,7 +2268,7 @@ struct sev_gmem_populate_args {
};
static int sev_gmem_post_populate(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
- void __user *src, void *opaque)
+ struct page *src_page, void *opaque)
{
struct sev_gmem_populate_args *sev_populate_args = opaque;
struct sev_data_snp_launch_update fw_args = {0};
@@ -2277,7 +2277,7 @@ static int sev_gmem_post_populate(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
int level;
int ret;
- if (WARN_ON_ONCE(sev_populate_args->type != KVM_SEV_SNP_PAGE_TYPE_ZERO && !src))
+ if (WARN_ON_ONCE(sev_populate_args->type != KVM_SEV_SNP_PAGE_TYPE_ZERO && !src_page))
return -EINVAL;
ret = snp_lookup_rmpentry((u64)pfn, &assigned, &level);
@@ -2288,14 +2288,14 @@ static int sev_gmem_post_populate(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
goto out;
}
- if (src) {
- void *vaddr = kmap_local_pfn(pfn);
+ if (src_page) {
+ void *src_vaddr = kmap_local_pfn(page_to_pfn(src_page));
+ void *dst_vaddr = kmap_local_pfn(pfn);
- if (copy_from_user(vaddr, src, PAGE_SIZE)) {
- ret = -EFAULT;
- goto out;
- }
- kunmap_local(vaddr);
+ memcpy(dst_vaddr, src_vaddr, PAGE_SIZE);
+
+ kunmap_local(src_vaddr);
+ kunmap_local(dst_vaddr);
}
ret = rmp_make_private(pfn, gfn << PAGE_SHIFT, PG_LEVEL_4K,
@@ -2325,17 +2325,19 @@ static int sev_gmem_post_populate(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
if (ret && !snp_page_reclaim(kvm, pfn) &&
sev_populate_args->type == KVM_SEV_SNP_PAGE_TYPE_CPUID &&
sev_populate_args->fw_error == SEV_RET_INVALID_PARAM) {
- void *vaddr = kmap_local_pfn(pfn);
+ void *src_vaddr = kmap_local_pfn(page_to_pfn(src_page));
+ void *dst_vaddr = kmap_local_pfn(pfn);
- if (copy_to_user(src, vaddr, PAGE_SIZE))
- pr_debug("Failed to write CPUID page back to userspace\n");
+ memcpy(src_vaddr, dst_vaddr, PAGE_SIZE);
- kunmap_local(vaddr);
+ kunmap_local(src_vaddr);
+ kunmap_local(dst_vaddr);
}
out:
- pr_debug("%s: exiting with return code %d (fw_error %d)\n",
- __func__, ret, sev_populate_args->fw_error);
+ if (ret)
+ pr_debug("%s: error updating GFN %llx, return code %d (fw_error %d)\n",
+ __func__, gfn, ret, sev_populate_args->fw_error);
return ret;
}
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 4fb042ce8ed1..3eb597c0e79f 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -3118,34 +3118,21 @@ struct tdx_gmem_post_populate_arg {
};
static int tdx_gmem_post_populate(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
- void __user *src, void *_arg)
+ struct page *src_page, void *_arg)
{
struct tdx_gmem_post_populate_arg *arg = _arg;
struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
u64 err, entry, level_state;
gpa_t gpa = gfn_to_gpa(gfn);
- struct page *src_page;
int ret, i;
if (KVM_BUG_ON(kvm_tdx->page_add_src, kvm))
return -EIO;
- /*
- * Get the source page if it has been faulted in. Return failure if the
- * source page has been swapped out or unmapped in primary memory.
- */
- ret = get_user_pages_fast((unsigned long)src, 1, 0, &src_page);
- if (ret < 0)
- return ret;
- if (ret != 1)
- return -ENOMEM;
-
kvm_tdx->page_add_src = src_page;
ret = kvm_tdp_mmu_map_private_pfn(arg->vcpu, gfn, pfn);
kvm_tdx->page_add_src = NULL;
- put_page(src_page);
-
if (ret || !(arg->flags & KVM_TDX_MEASURE_MEMORY_REGION))
return ret;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 1d0cee72e560..49c0cfe24fd8 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2566,7 +2566,7 @@ int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_ord
* @gfn: starting GFN to be populated
* @src: userspace-provided buffer containing data to copy into GFN range
* (passed to @post_populate, and incremented on each iteration
- * if not NULL)
+ * if not NULL). Must be page-aligned.
* @npages: number of pages to copy from userspace-buffer
* @post_populate: callback to issue for each gmem page that backs the GPA
* range
@@ -2581,7 +2581,7 @@ int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_ord
* Returns the number of pages that were populated.
*/
typedef int (*kvm_gmem_populate_cb)(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
- void __user *src, void *opaque);
+ struct page *page, void *opaque);
long kvm_gmem_populate(struct kvm *kvm, gfn_t gfn, void __user *src, long npages,
kvm_gmem_populate_cb post_populate, void *opaque);
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 8b1248f42aae..18ae59b92257 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -820,12 +820,48 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_gmem_get_pfn);
#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_POPULATE
+
+static long __kvm_gmem_populate(struct kvm *kvm, struct kvm_memory_slot *slot,
+ struct file *file, gfn_t gfn, struct page *src_page,
+ kvm_gmem_populate_cb post_populate, void *opaque)
+{
+ pgoff_t index = kvm_gmem_get_index(slot, gfn);
+ struct folio *folio;
+ kvm_pfn_t pfn;
+ int ret;
+
+ filemap_invalidate_lock(file->f_mapping);
+
+ folio = __kvm_gmem_get_pfn(file, slot, index, &pfn, NULL);
+ if (IS_ERR(folio)) {
+ ret = PTR_ERR(folio);
+ goto out_unlock;
+ }
+
+ folio_unlock(folio);
+
+ if (!kvm_range_has_memory_attributes(kvm, gfn, gfn + 1,
+ KVM_MEMORY_ATTRIBUTE_PRIVATE,
+ KVM_MEMORY_ATTRIBUTE_PRIVATE)) {
+ ret = -EINVAL;
+ goto out_put_folio;
+ }
+
+ ret = post_populate(kvm, gfn, pfn, src_page, opaque);
+ if (!ret)
+ folio_mark_uptodate(folio);
+
+out_put_folio:
+ folio_put(folio);
+out_unlock:
+ filemap_invalidate_unlock(file->f_mapping);
+ return ret;
+}
+
long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages,
kvm_gmem_populate_cb post_populate, void *opaque)
{
struct kvm_memory_slot *slot;
- void __user *p;
-
int ret = 0;
long i;
@@ -834,6 +870,9 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
if (WARN_ON_ONCE(npages <= 0))
return -EINVAL;
+ if (WARN_ON_ONCE(!PAGE_ALIGNED(src)))
+ return -EINVAL;
+
slot = gfn_to_memslot(kvm, start_gfn);
if (!kvm_slot_has_gmem(slot))
return -EINVAL;
@@ -842,47 +881,38 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
if (!file)
return -EFAULT;
- filemap_invalidate_lock(file->f_mapping);
-
npages = min_t(ulong, slot->npages - (start_gfn - slot->base_gfn), npages);
for (i = 0; i < npages; i++) {
- struct folio *folio;
- gfn_t gfn = start_gfn + i;
- pgoff_t index = kvm_gmem_get_index(slot, gfn);
- kvm_pfn_t pfn;
+ struct page *src_page = NULL;
+ void __user *p;
if (signal_pending(current)) {
ret = -EINTR;
break;
}
- folio = __kvm_gmem_get_pfn(file, slot, index, &pfn, NULL);
- if (IS_ERR(folio)) {
- ret = PTR_ERR(folio);
- break;
- }
+ p = src ? src + i * PAGE_SIZE : NULL;
- folio_unlock(folio);
+ if (p) {
+ ret = get_user_pages_fast((unsigned long)p, 1, 0, &src_page);
+ if (ret < 0)
+ break;
+ if (ret != 1) {
+ ret = -ENOMEM;
+ break;
+ }
+ }
- ret = -EINVAL;
- if (!kvm_range_has_memory_attributes(kvm, gfn, gfn + 1,
- KVM_MEMORY_ATTRIBUTE_PRIVATE,
- KVM_MEMORY_ATTRIBUTE_PRIVATE))
- goto put_folio_and_exit;
+ ret = __kvm_gmem_populate(kvm, slot, file, start_gfn + i, src_page,
+ post_populate, opaque);
- p = src ? src + i * PAGE_SIZE : NULL;
- ret = post_populate(kvm, gfn, pfn, p, opaque);
- if (!ret)
- folio_mark_uptodate(folio);
+ if (src_page)
+ put_page(src_page);
-put_folio_and_exit:
- folio_put(folio);
if (ret)
break;
}
- filemap_invalidate_unlock(file->f_mapping);
-
return ret && !i ? ret : i;
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_gmem_populate);
--
2.25.1