From: Fuad Tabba <tabba@google.com>
To: Quentin Perret <qperret@google.com>
Cc: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org,
linux-mm@kvack.org, pbonzini@redhat.com, chenhuacai@kernel.org,
mpe@ellerman.id.au, anup@brainfault.org,
paul.walmsley@sifive.com, palmer@dabbelt.com,
aou@eecs.berkeley.edu, seanjc@google.com,
viro@zeniv.linux.org.uk, brauner@kernel.org,
willy@infradead.org, akpm@linux-foundation.org,
xiaoyao.li@intel.com, yilun.xu@intel.com,
chao.p.peng@linux.intel.com, jarkko@kernel.org,
amoorthy@google.com, dmatlack@google.com,
yu.c.zhang@linux.intel.com, isaku.yamahata@intel.com,
mic@digikod.net, vbabka@suse.cz, vannapurve@google.com,
ackerleytng@google.com, mail@maciej.szmigiero.name,
david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
liam.merwick@oracle.com, isaku.yamahata@gmail.com,
kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
steven.price@arm.com, quic_eberman@quicinc.com,
quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com,
quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com,
quic_pderrin@quicinc.com, quic_pheragu@quicinc.com,
catalin.marinas@arm.com, james.morse@arm.com,
yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
will@kernel.org, keirf@google.com, roypat@amazon.co.uk,
shuah@kernel.org, hch@infradead.org, jgg@nvidia.com,
rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com,
hughd@google.com, jthoughton@google.com
Subject: Re: [PATCH v3 08/11] KVM: arm64: Handle guest_memfd()-backed guest page faults
Date: Tue, 11 Feb 2025 16:34:02 +0000
Message-ID: <CA+EHjTyU5K4Ro+gx1RcBcs2P2bjoVM24LO0AHSU+yjjQFCsw8Q@mail.gmail.com>
In-Reply-To: <Z6t6FSNwREpyMrG3@google.com>
Hi Quentin,
On Tue, 11 Feb 2025 at 16:26, Quentin Perret <qperret@google.com> wrote:
>
> On Tuesday 11 Feb 2025 at 16:13:27 (+0000), Fuad Tabba wrote:
> > Hi Quentin,
> >
> > On Tue, 11 Feb 2025 at 15:57, Quentin Perret <qperret@google.com> wrote:
> > >
> > > Hey Fuad,
> > >
> > > On Tuesday 11 Feb 2025 at 12:11:24 (+0000), Fuad Tabba wrote:
> > > > Add arm64 support for handling guest page faults on guest_memfd
> > > > backed memslots.
> > > >
> > > > For now, the fault granule is restricted to PAGE_SIZE.
> > > >
> > > > Signed-off-by: Fuad Tabba <tabba@google.com>
> > > > ---
> > > > arch/arm64/kvm/mmu.c | 84 ++++++++++++++++++++++++++--------------
> > > > include/linux/kvm_host.h | 5 +++
> > > > virt/kvm/kvm_main.c | 5 ---
> > > > 3 files changed, 61 insertions(+), 33 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > > > index b6c0acb2311c..305060518766 100644
> > > > --- a/arch/arm64/kvm/mmu.c
> > > > +++ b/arch/arm64/kvm/mmu.c
> > > > @@ -1454,6 +1454,33 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
> > > > return vma->vm_flags & VM_MTE_ALLOWED;
> > > > }
> > > >
> > > > +static kvm_pfn_t faultin_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
> > > > + gfn_t gfn, bool write_fault, bool *writable,
> > > > + struct page **page, bool is_private)
> > > > +{
> > > > + kvm_pfn_t pfn;
> > > > + int ret;
> > > > +
> > > > + if (!is_private)
> > > > + return __kvm_faultin_pfn(slot, gfn, write_fault ? FOLL_WRITE : 0, writable, page);
> > > > +
> > > > + *writable = false;
> > > > +
> > > > + if (WARN_ON_ONCE(write_fault && memslot_is_readonly(slot)))
> > > > + return KVM_PFN_ERR_NOSLOT_MASK;
> > >
> > > I believe this check is superfluous; we should decide to report an MMIO
> > > exit to userspace for write faults to RO memslots and not get anywhere
> > > near user_mem_abort(). And, nit, but the error code should probably be
> > > KVM_PFN_ERR_RO_FAULT or something instead?
> >
> > I tried to replicate the behavior of __kvm_faultin_pfn() here (but got
> > the wrong error!). I think you're right, though, that in the arm64 case
> > this check isn't needed. Should I fix the return error but keep the
> > warning?
>
> __kvm_faultin_pfn() will apparently just set *writable to false if it
> finds an RO memslot, not return an error. So I'd vote for dropping that
> check so we align with that behaviour.
Ack.
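For v4, with the check dropped and with your *writable suggestion below
folded in, the gmem path would end up looking roughly like the untested
sketch below. Note that I'd also replace the bogus KVM_PFN_ERR_NOSLOT_MASK
fallback (which is a mask, not an error pfn) with KVM_PFN_ERR_FAULT:

static kvm_pfn_t faultin_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
			     gfn_t gfn, bool write_fault, bool *writable,
			     struct page **page, bool is_private)
{
	kvm_pfn_t pfn;
	int ret;

	if (!is_private)
		return __kvm_faultin_pfn(slot, gfn, write_fault ? FOLL_WRITE : 0,
					 writable, page);

	/* Assume not writable until the gmem lookup succeeds. */
	*writable = false;

	ret = kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, page, NULL);
	if (!ret) {
		/* gmem pages aren't subject to CoW; only the memslot can be RO. */
		*writable = !memslot_is_readonly(slot);
		return pfn;
	}

	if (ret == -EHWPOISON)
		return KVM_PFN_ERR_HWPOISON;

	return KVM_PFN_ERR_FAULT;
}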
> > > > +
> > > > + ret = kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, page, NULL);
> > > > + if (!ret) {
> > > > + *writable = write_fault;
> > >
> > > In normal KVM, if we're not dirty logging we'll actively map the page as
> > > writable if both the memslot and the userspace mappings are writable.
> > > With gmem, the latter doesn't make much sense, but essentially the
> > > underlying page should really be writable (e.g. no CoW getting in the
> > > way and such?). If so, then perhaps make this
> > >
> > > *writable = !memslot_is_readonly(slot);
> > >
> > > Wdyt?
> >
> > Ack.
> >
> > > > + return pfn;
> > > > + }
> > > > +
> > > > + if (ret == -EHWPOISON)
> > > > + return KVM_PFN_ERR_HWPOISON;
> > > > +
> > > > + return KVM_PFN_ERR_NOSLOT_MASK;
> > > > +}
> > > > +
> > > > static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > > > struct kvm_s2_trans *nested,
> > > > struct kvm_memory_slot *memslot, unsigned long hva,
> > > > @@ -1461,25 +1488,26 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > > > {
> > > > int ret = 0;
> > > > bool write_fault, writable;
> > > > - bool exec_fault, mte_allowed;
> > > > + bool exec_fault, mte_allowed = false;
> > > > bool device = false, vfio_allow_any_uc = false;
> > > > unsigned long mmu_seq;
> > > > phys_addr_t ipa = fault_ipa;
> > > > struct kvm *kvm = vcpu->kvm;
> > > > - struct vm_area_struct *vma;
> > > > + struct vm_area_struct *vma = NULL;
> > > > short vma_shift;
> > > > void *memcache;
> > > > - gfn_t gfn;
> > > > + gfn_t gfn = ipa >> PAGE_SHIFT;
> > > > kvm_pfn_t pfn;
> > > > bool logging_active = memslot_is_logging(memslot);
> > > > - bool force_pte = logging_active || is_protected_kvm_enabled();
> > > > - long vma_pagesize, fault_granule;
> > > > + bool is_private = kvm_mem_is_private(kvm, gfn);
> > >
> > > Just trying to understand the locking rule for the xarray behind this.
> > > Is it kvm->srcu that protects it for reads here? Something else?
> >
> > I'm not sure I follow. Which xarray are you referring to?
>
> Sorry, yes, that wasn't clear. I meant that kvm_mem_is_private() calls
> kvm_get_memory_attributes() which indexes kvm->mem_attr_array. The
> comment in struct kvm indicates that this xarray is protected by RCU for
> readers, so I was just checking if we were relying on
> kvm_handle_guest_abort() to take srcu_read_lock(&kvm->srcu) for us, or
> if there was something else more subtle here.
I was kind of afraid that people would be confused by this, and I
commented on it in the commit message of the earlier patch:
https://lore.kernel.org/all/20250211121128.703390-6-tabba@google.com/
> Note that the word "private" in the name of the function
> kvm_mem_is_private() doesn't necessarily indicate that the memory
> isn't shared, but is due to the history and evolution of
> guest_memfd and the various names it has received. In effect,
> this function is used to multiplex between the path of a normal
> page fault and the path of a guest_memfd backed page fault.
kvm_mem_is_private() is a property of the memslot itself. No xarrays were
harmed in the process :)
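To spell it out, after the earlier patch this effectively reduces to a
flag check on the memslot. Illustrative sketch only (not the literal
helper from that patch):

bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
{
	/*
	 * kvm_slot_can_be_private() just tests the KVM_MEM_GUEST_MEMFD
	 * memslot flag, so there's no kvm->mem_attr_array lookup and no
	 * RCU/srcu subtlety on this path.
	 */
	return kvm_slot_can_be_private(gfn_to_memslot(kvm, gfn));
}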
Cheers,
/fuad
> Cheers,
> Quentin
>
> > >
> > > > + bool force_pte = logging_active || is_private || is_protected_kvm_enabled();
> > > > + long vma_pagesize, fault_granule = PAGE_SIZE;
> > > > enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
> > > > struct kvm_pgtable *pgt;
> > > > struct page *page;
> > > > enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
> > > >
> > > > - if (fault_is_perm)
> > > > + if (fault_is_perm && !is_private)
> > >
> > > Nit: not strictly necessary I think.
> >
> > You're right: since force_pte already covers the private case, any
> > stage-2 mapping there is PAGE_SIZE, so the granule read is harmless
> > either way. I'll drop it.
> >
> > Thanks,
> > /fuad
> >
> > > > fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
> > > > write_fault = kvm_is_write_fault(vcpu);
> > > > exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
> > > > @@ -1510,24 +1538,30 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > > > return ret;
> > > > }
> > > >
> > > > + mmap_read_lock(current->mm);
> > > > +
> > > > /*
> > > > * Let's check if we will get back a huge page backed by hugetlbfs, or
> > > > * get block mapping for device MMIO region.
> > > > */
> > > > - mmap_read_lock(current->mm);
> > > > - vma = vma_lookup(current->mm, hva);
> > > > - if (unlikely(!vma)) {
> > > > - kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
> > > > - mmap_read_unlock(current->mm);
> > > > - return -EFAULT;
> > > > - }
> > > > + if (!is_private) {
> > > > + vma = vma_lookup(current->mm, hva);
> > > > + if (unlikely(!vma)) {
> > > > + kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
> > > > + mmap_read_unlock(current->mm);
> > > > + return -EFAULT;
> > > > + }
> > > >
> > > > - /*
> > > > - * logging_active is guaranteed to never be true for VM_PFNMAP
> > > > - * memslots.
> > > > - */
> > > > - if (WARN_ON_ONCE(logging_active && (vma->vm_flags & VM_PFNMAP)))
> > > > - return -EFAULT;
> > > > + /*
> > > > + * logging_active is guaranteed to never be true for VM_PFNMAP
> > > > + * memslots.
> > > > + */
> > > > + if (WARN_ON_ONCE(logging_active && (vma->vm_flags & VM_PFNMAP)))
> > > > + return -EFAULT;
> > > > +
> > > > + vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
> > > > + mte_allowed = kvm_vma_mte_allowed(vma);
> > > > + }
> > > >
> > > > if (force_pte)
> > > > vma_shift = PAGE_SHIFT;
> > > > @@ -1597,18 +1631,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > > > ipa &= ~(vma_pagesize - 1);
> > > > }
> > > >
> > > > - gfn = ipa >> PAGE_SHIFT;
> > > > - mte_allowed = kvm_vma_mte_allowed(vma);
> > > > -
> > > > - vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
> > > > -
> > > > /* Don't use the VMA after the unlock -- it may have vanished */
> > > > vma = NULL;
> > > >
> > > > /*
> > > > * Read mmu_invalidate_seq so that KVM can detect if the results of
> > > > - * vma_lookup() or __kvm_faultin_pfn() become stale prior to
> > > > - * acquiring kvm->mmu_lock.
> > > > + * vma_lookup() or faultin_pfn() become stale prior to acquiring
> > > > + * kvm->mmu_lock.
> > > > *
> > > > * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
> > > > * with the smp_wmb() in kvm_mmu_invalidate_end().
> > > > @@ -1616,8 +1645,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > > > mmu_seq = vcpu->kvm->mmu_invalidate_seq;
> > > > mmap_read_unlock(current->mm);
> > > >
> > > > - pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
> > > > - &writable, &page);
> > > > + pfn = faultin_pfn(kvm, memslot, gfn, write_fault, &writable, &page, is_private);
> > > > if (pfn == KVM_PFN_ERR_HWPOISON) {
> > > > kvm_send_hwpoison_signal(hva, vma_shift);
> > > > return 0;
> > > > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > > > index 39fd6e35c723..415c6274aede 100644
> > > > --- a/include/linux/kvm_host.h
> > > > +++ b/include/linux/kvm_host.h
> > > > @@ -1882,6 +1882,11 @@ static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
> > > > return gfn_to_memslot(kvm, gfn)->id;
> > > > }
> > > >
> > > > +static inline bool memslot_is_readonly(const struct kvm_memory_slot *slot)
> > > > +{
> > > > + return slot->flags & KVM_MEM_READONLY;
> > > > +}
> > > > +
> > > > static inline gfn_t
> > > > hva_to_gfn_memslot(unsigned long hva, struct kvm_memory_slot *slot)
> > > > {
> > > > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > > > index 38f0f402ea46..3e40acb9f5c0 100644
> > > > --- a/virt/kvm/kvm_main.c
> > > > +++ b/virt/kvm/kvm_main.c
> > > > @@ -2624,11 +2624,6 @@ unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn)
> > > > return size;
> > > > }
> > > >
> > > > -static bool memslot_is_readonly(const struct kvm_memory_slot *slot)
> > > > -{
> > > > - return slot->flags & KVM_MEM_READONLY;
> > > > -}
> > > > -
> > > > static unsigned long __gfn_to_hva_many(const struct kvm_memory_slot *slot, gfn_t gfn,
> > > > gfn_t *nr_pages, bool write)
> > > > {
> > > > --
> > > > 2.48.1.502.g6dc24dfdaf-goog
> > > >