From: Fuad Tabba <tabba@google.com>
To: Ackerley Tng <ackerleytng@google.com>
Cc: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org,
linux-mm@kvack.org, pbonzini@redhat.com, chenhuacai@kernel.org,
mpe@ellerman.id.au, anup@brainfault.org,
paul.walmsley@sifive.com, palmer@dabbelt.com,
aou@eecs.berkeley.edu, seanjc@google.com,
viro@zeniv.linux.org.uk, brauner@kernel.org,
willy@infradead.org, akpm@linux-foundation.org,
xiaoyao.li@intel.com, yilun.xu@intel.com,
chao.p.peng@linux.intel.com, jarkko@kernel.org,
amoorthy@google.com, dmatlack@google.com,
isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
vannapurve@google.com, mail@maciej.szmigiero.name,
david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
liam.merwick@oracle.com, isaku.yamahata@gmail.com,
kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
steven.price@arm.com, quic_eberman@quicinc.com,
quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com,
quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com,
quic_pderrin@quicinc.com, quic_pheragu@quicinc.com,
catalin.marinas@arm.com, james.morse@arm.com,
yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
will@kernel.org, qperret@google.com, keirf@google.com,
roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org,
jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com,
fvdl@google.com, hughd@google.com, jthoughton@google.com,
peterx@redhat.com
Subject: Re: [PATCH v6 04/10] KVM: guest_memfd: Allow host to map guest_memfd() pages
Date: Mon, 17 Mar 2025 10:42:55 +0000
Message-ID: <CA+EHjTwWM3Wkp6qEF=H2q0BEa1uoWmjY+vgchQsLjH8t9E2auQ@mail.gmail.com>
In-Reply-To: <diqzplijjvq5.fsf@ackerleytng-ctop.c.googlers.com>
Hi Ackerley,
On Fri, 14 Mar 2025 at 18:47, Ackerley Tng <ackerleytng@google.com> wrote:
>
> Fuad Tabba <tabba@google.com> writes:
>
> > Add support for mmap() and fault() for guest_memfd backed memory
> > in the host for VMs that support in-place conversion between
> > shared and private. To that end, this patch adds the ability to
> > check whether the VM type supports in-place conversion, and only
> > allows mapping its memory if that's the case.
> >
> > Also add the KVM capability KVM_CAP_GMEM_SHARED_MEM, which
> > indicates that the VM supports shared memory in guest_memfd, or
> > that the host can create VMs that support shared memory.
> > Supporting shared memory implies that memory can be mapped when
> > shared with the host.
> >
> > This is controlled by the KVM_GMEM_SHARED_MEM configuration
> > option.
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> > include/linux/kvm_host.h | 11 +++++
> > include/uapi/linux/kvm.h | 1 +
> > virt/kvm/guest_memfd.c | 102 +++++++++++++++++++++++++++++++++++++++
> > virt/kvm/kvm_main.c | 4 ++
> > 4 files changed, 118 insertions(+)
> >
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 3ad0719bfc4f..601bbcaa5e41 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -728,6 +728,17 @@ static inline bool kvm_arch_has_private_mem(struct kvm *kvm)
> > }
> > #endif
> >
> > +/*
> > + * Arch code must define kvm_arch_gmem_supports_shared_mem if support for
> > + * private memory is enabled and it supports in-place shared/private conversion.
> > + */
> > +#if !defined(kvm_arch_gmem_supports_shared_mem) && !IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM)
> > +static inline bool kvm_arch_gmem_supports_shared_mem(struct kvm *kvm)
> > +{
> > + return false;
> > +}
> > +#endif
> > +
> > #ifndef kvm_arch_has_readonly_mem
> > static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
> > {
> > diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> > index 45e6d8fca9b9..117937a895da 100644
> > --- a/include/uapi/linux/kvm.h
> > +++ b/include/uapi/linux/kvm.h
> > @@ -929,6 +929,7 @@ struct kvm_enable_cap {
> > #define KVM_CAP_PRE_FAULT_MEMORY 236
> > #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
> > #define KVM_CAP_X86_GUEST_MODE 238
> > +#define KVM_CAP_GMEM_SHARED_MEM 239
> >
> > struct kvm_irq_routing_irqchip {
> > __u32 irqchip;
> > diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> > index 5fc414becae5..eea44e003ed1 100644
> > --- a/virt/kvm/guest_memfd.c
> > +++ b/virt/kvm/guest_memfd.c
> > @@ -320,7 +320,109 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
> > return gfn - slot->base_gfn + slot->gmem.pgoff;
> > }
> >
> > +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
> > +static bool folio_offset_is_shared(const struct folio *folio, struct file *file, pgoff_t index)
> > +{
> > + struct kvm_gmem *gmem = file->private_data;
> > +
> > + VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
>
> I should've commented on this in the last series, but why must folio
> lock be held to check if this offset is shared?
>
> I was thinking to use the filemap's lock (filemap_invalidate_lock()) to
> guard mappability races. Does that work too?
I was thinking the same thing while preparing the sharing-state patch
series for posting. I had (wrongly) assumed that it wasn't possible to
protect all the cases with the invalidate_lock, but they are in fact
already covered by it. A rough sketch of what I have in mind is below.
I will fix this in the respin of both series. Thanks!
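
Something along these lines (illustrative only; the function name and
the lockdep assertion are just to show the intended locking rule, and
reuse kvm_arch_gmem_supports_shared_mem() from the patch above):

/*
 * Sketch: serialise the shared/private check against mappability
 * changes with the filemap invalidate lock instead of the folio lock.
 * Callers (e.g. the fault path above) already take
 * filemap_invalidate_lock_shared().
 */
static bool kvm_gmem_offset_is_shared(struct file *file, pgoff_t index)
{
        struct kvm_gmem *gmem = file->private_data;

        /* Illustrative: document the locking rule for this check. */
        lockdep_assert_held(&file_inode(file)->i_mapping->invalidate_lock);

        /* For now, VMs that support shared memory share all their memory. */
        return kvm_arch_gmem_supports_shared_mem(gmem->kvm);
}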
/fuad
> > +
> > + /* For now, VMs that support shared memory share all their memory. */
> > + return kvm_arch_gmem_supports_shared_mem(gmem->kvm);
> > +}
> > +
> > +static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
> > +{
> > + struct inode *inode = file_inode(vmf->vma->vm_file);
> > + struct folio *folio;
> > + vm_fault_t ret = VM_FAULT_LOCKED;
> > +
> > + filemap_invalidate_lock_shared(inode->i_mapping);
> > +
> > + folio = kvm_gmem_get_folio(inode, vmf->pgoff);
> > + if (IS_ERR(folio)) {
> > + int err = PTR_ERR(folio);
> > +
> > + if (err == -EAGAIN)
> > + ret = VM_FAULT_RETRY;
> > + else
> > + ret = vmf_error(err);
> > +
> > + goto out_filemap;
> > + }
> > +
> > + if (folio_test_hwpoison(folio)) {
> > + ret = VM_FAULT_HWPOISON;
> > + goto out_folio;
> > + }
> > +
> > + if (!folio_offset_is_shared(folio, vmf->vma->vm_file, vmf->pgoff)) {
> > + ret = VM_FAULT_SIGBUS;
> > + goto out_folio;
> > + }
> > +
> > + /*
> > + * Shared folios would not be marked as "guestmem" so far, and we only
> > + * expect shared folios at this point.
> > + */
> > + if (WARN_ON_ONCE(folio_test_guestmem(folio))) {
> > + ret = VM_FAULT_SIGBUS;
> > + goto out_folio;
> > + }
> > +
> > + /* No support for huge pages. */
> > + if (WARN_ON_ONCE(folio_test_large(folio))) {
> > + ret = VM_FAULT_SIGBUS;
> > + goto out_folio;
> > + }
> > +
> > + if (!folio_test_uptodate(folio)) {
> > + clear_highpage(folio_page(folio, 0));
> > + kvm_gmem_mark_prepared(folio);
> > + }
> > +
> > + vmf->page = folio_file_page(folio, vmf->pgoff);
> > +
> > +out_folio:
> > + if (ret != VM_FAULT_LOCKED) {
> > + folio_unlock(folio);
> > + folio_put(folio);
> > + }
> > +
> > +out_filemap:
> > + filemap_invalidate_unlock_shared(inode->i_mapping);
> > +
> > + return ret;
> > +}
> > +
> > <snip>