Date: Wed, 30 Apr 2025 17:56:52 +0100
In-Reply-To: <20250430165655.605595-1-tabba@google.com>
Mime-Version: 1.0
References: <20250430165655.605595-1-tabba@google.com>
X-Mailer: git-send-email 2.49.0.967.g6a0df3ecc3-goog
Message-ID: <20250430165655.605595-11-tabba@google.com>
Subject: [PATCH v8 10/13] KVM: arm64: Handle guest_memfd()-backed guest page faults
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
	anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
	brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
	xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
	jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
	isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
	vannapurve@google.com, ackerleytng@google.com,
	mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com,
	wei.w.wang@intel.com, liam.merwick@oracle.com,
	isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com,
	suzuki.poulose@arm.com, steven.price@arm.com,
	quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
	quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
	quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
	quic_pheragu@quicinc.com, catalin.marinas@arm.com,
	james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev,
	maz@kernel.org, will@kernel.org, qperret@google.com,
	keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org,
	hch@infradead.org, jgg@nvidia.com, rientjes@google.com,
	jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
	jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com,
	tabba@google.com
Content-Type: text/plain; charset="UTF-8"

Add arm64 support for handling guest page faults on guest_memfd backed
memslots. For now, the fault granule is restricted to PAGE_SIZE.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
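
Note for reviewers (not part of the commit message): with is_gmem set,
user_mem_abort() below skips the VMA lookup and resolves the fault
through kvm_gmem_get_pfn() instead. A minimal userspace sketch of a slot
that exercises this path, assuming the existing KVM_CREATE_GUEST_MEMFD /
KVM_SET_USER_MEMORY_REGION2 UAPI; bind_gmem_slot() and the size and
guest PA are illustrative only:

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	static int bind_gmem_slot(int vm_fd)
	{
		struct kvm_create_guest_memfd gmem = {
			.size = 0x200000,		/* illustrative: 2MiB of backing */
		};
		struct kvm_userspace_memory_region2 region = {
			.slot = 0,
			.flags = KVM_MEM_GUEST_MEMFD,
			.guest_phys_addr = 0x80000000,	/* illustrative IPA */
			.memory_size = 0x200000,
			.guest_memfd_offset = 0,
		};
		int gmem_fd;

		/* Create the guest_memfd that backs the slot. */
		gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
		if (gmem_fd < 0)
			return -1;

		region.guest_memfd = gmem_fd;

		/* Stage-2 faults on this range now take the is_gmem path. */
		return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
	}

Since force_pte is set for gmem-backed slots, each such fault is mapped
at PAGE_SIZE granule, matching the restriction noted above.
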
 arch/arm64/kvm/mmu.c     | 65 +++++++++++++++++++++++++++-------------
 include/linux/kvm_host.h |  5 ++++
 virt/kvm/kvm_main.c      |  5 ----
 3 files changed, 50 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 148a97c129de..d1044c7f78bb 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1466,6 +1466,30 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
 	return vma->vm_flags & VM_MTE_ALLOWED;
 }
 
+static kvm_pfn_t faultin_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
+			     gfn_t gfn, bool write_fault, bool *writable,
+			     struct page **page, bool is_gmem)
+{
+	kvm_pfn_t pfn;
+	int ret;
+
+	if (!is_gmem)
+		return __kvm_faultin_pfn(slot, gfn, write_fault ? FOLL_WRITE : 0, writable, page);
+
+	*writable = false;
+
+	ret = kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, page, NULL);
+	if (!ret) {
+		*writable = !memslot_is_readonly(slot);
+		return pfn;
+	}
+
+	if (ret == -EHWPOISON)
+		return KVM_PFN_ERR_HWPOISON;
+
+	return KVM_PFN_ERR_NOSLOT_MASK;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_s2_trans *nested,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
@@ -1473,19 +1497,20 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 {
 	int ret = 0;
 	bool write_fault, writable;
-	bool exec_fault, mte_allowed;
+	bool exec_fault, mte_allowed = false;
 	bool device = false, vfio_allow_any_uc = false;
 	unsigned long mmu_seq;
 	phys_addr_t ipa = fault_ipa;
 	struct kvm *kvm = vcpu->kvm;
-	struct vm_area_struct *vma;
+	struct vm_area_struct *vma = NULL;
 	short vma_shift;
 	void *memcache;
-	gfn_t gfn;
+	gfn_t gfn = ipa >> PAGE_SHIFT;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
-	bool force_pte = logging_active || is_protected_kvm_enabled();
-	long vma_pagesize, fault_granule;
+	bool is_gmem = kvm_slot_has_gmem(memslot) && kvm_mem_from_gmem(kvm, gfn);
+	bool force_pte = logging_active || is_gmem || is_protected_kvm_enabled();
+	long vma_pagesize, fault_granule = PAGE_SIZE;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
 	struct page *page;
@@ -1522,16 +1547,22 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return ret;
 	}
 
+	mmap_read_lock(current->mm);
+
 	/*
 	 * Let's check if we will get back a huge page backed by hugetlbfs, or
 	 * get block mapping for device MMIO region.
 	 */
-	mmap_read_lock(current->mm);
-	vma = vma_lookup(current->mm, hva);
-	if (unlikely(!vma)) {
-		kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
-		mmap_read_unlock(current->mm);
-		return -EFAULT;
+	if (!is_gmem) {
+		vma = vma_lookup(current->mm, hva);
+		if (unlikely(!vma)) {
+			kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
+			mmap_read_unlock(current->mm);
+			return -EFAULT;
+		}
+
+		vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
+		mte_allowed = kvm_vma_mte_allowed(vma);
 	}
 
 	if (force_pte)
@@ -1602,18 +1633,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		ipa &= ~(vma_pagesize - 1);
 	}
 
-	gfn = ipa >> PAGE_SHIFT;
-	mte_allowed = kvm_vma_mte_allowed(vma);
-
-	vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
-
 	/* Don't use the VMA after the unlock -- it may have vanished */
 	vma = NULL;
 
 	/*
 	 * Read mmu_invalidate_seq so that KVM can detect if the results of
-	 * vma_lookup() or __kvm_faultin_pfn() become stale prior to
-	 * acquiring kvm->mmu_lock.
+	 * vma_lookup() or faultin_pfn() become stale prior to acquiring
+	 * kvm->mmu_lock.
 	 *
 	 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
 	 * with the smp_wmb() in kvm_mmu_invalidate_end().
@@ -1621,8 +1647,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
 	mmap_read_unlock(current->mm);
 
-	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
-				&writable, &page);
+	pfn = faultin_pfn(kvm, memslot, gfn, write_fault, &writable, &page, is_gmem);
 	if (pfn == KVM_PFN_ERR_HWPOISON) {
 		kvm_send_hwpoison_signal(hva, vma_shift);
 		return 0;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f3af6bff3232..1b2e4e9a7802 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1882,6 +1882,11 @@ static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
 	return gfn_to_memslot(kvm, gfn)->id;
 }
 
+static inline bool memslot_is_readonly(const struct kvm_memory_slot *slot)
+{
+	return slot->flags & KVM_MEM_READONLY;
+}
+
 static inline gfn_t hva_to_gfn_memslot(unsigned long hva,
 				       struct kvm_memory_slot *slot)
 {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c75d8e188eb7..d9bca5ba19dc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2640,11 +2640,6 @@ unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn)
 	return size;
 }
 
-static bool memslot_is_readonly(const struct kvm_memory_slot *slot)
-{
-	return slot->flags & KVM_MEM_READONLY;
-}
-
 static unsigned long __gfn_to_hva_many(const struct kvm_memory_slot *slot,
 				       gfn_t gfn, gfn_t *nr_pages, bool write)
 {
-- 
2.49.0.901.g37484f566f-goog