Date: Thu, 5 Jun 2025 16:37:55 +0100
In-Reply-To: <20250605153800.557144-1-tabba@google.com>
Mime-Version: 1.0
References: <20250605153800.557144-1-tabba@google.com>
X-Mailer: git-send-email 2.49.0.1266.g31b7d2e469-goog
Message-ID: <20250605153800.557144-14-tabba@google.com>
Subject: [PATCH v11 13/18] KVM: arm64: Refactor user_mem_abort()
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
	kvmarm@lists.linux.dev
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
	anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
	brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
	xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
	jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
	isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
	vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
	david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
	liam.merwick@oracle.com, isaku.yamahata@gmail.com,
	kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
	steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
	quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
	quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
	quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
	yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
	will@kernel.org, qperret@google.com, keirf@google.com,
	roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com,
	rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com,
	hughd@google.com, jthoughton@google.com, peterx@redhat.com,
	pankaj.gupta@amd.com, ira.weiny@intel.com, tabba@google.com
Content-Type: text/plain; charset="UTF-8"

To simplify the code and to make the assumptions clearer, refactor
user_mem_abort() by immediately setting force_pte to true if the
conditions are met.

Remove the comment about logging_active being guaranteed to never be
true for VM_PFNMAP memslots, since it's not actually correct.

Move code that will be reused in the following patch into separate
functions.

Other small instances of tidying up. No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
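Note for reviewers, kept below the fold so it stays out of the commit
message: the net effect of the memcache and force_pte changes is easiest
to see condensed. The sketch below is assembled from the diff that
follows, with the unrelated parts of user_mem_abort() elided, so it is
not meant to compile on its own:

	bool logging_active = memslot_is_logging(memslot);
	/* force_pte is now decided once, at declaration time. */
	bool force_pte = logging_active || is_protected_kvm_enabled();
	/*
	 * Permission faults only need a top-up when dirty logging may
	 * collapse a block entry into a table.
	 */
	bool topup_memcache = !fault_is_perm || (logging_active && write_fault);
	void *memcache;
	int ret;

	/* Picks mmu_page_cache vs. pkvm_memcache, topping up if asked to. */
	ret = prepare_mmu_memcache(vcpu, topup_memcache, &memcache);
	if (ret)
		return ret;

	vma_shift = force_pte ? PAGE_SHIFT : get_vma_page_shift(vma, hva);

The ternary stands in for the if/else in the patch; the point is that
the PTE-forcing and memcache policy are now fixed up front rather than
threaded through the body of the function.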
 arch/arm64/kvm/mmu.c | 100 ++++++++++++++++++++++++-------------------
 1 file changed, 55 insertions(+), 45 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index eeda92330ade..ce80be116a30 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1466,13 +1466,56 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
 	return vma->vm_flags & VM_MTE_ALLOWED;
 }
 
+static int prepare_mmu_memcache(struct kvm_vcpu *vcpu, bool topup_memcache,
+				void **memcache)
+{
+	int min_pages;
+
+	if (!is_protected_kvm_enabled())
+		*memcache = &vcpu->arch.mmu_page_cache;
+	else
+		*memcache = &vcpu->arch.pkvm_memcache;
+
+	if (!topup_memcache)
+		return 0;
+
+	min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
+
+	if (!is_protected_kvm_enabled())
+		return kvm_mmu_topup_memory_cache(*memcache, min_pages);
+
+	return topup_hyp_memcache(*memcache, min_pages);
+}
+
+/*
+ * Potentially reduce shadow S2 permissions to match the guest's own S2. For
+ * exec faults, we'd only reach this point if the guest actually allowed it (see
+ * kvm_s2_handle_perm_fault).
+ *
+ * Also encode the level of the original translation in the SW bits of the leaf
+ * entry as a proxy for the span of that translation. This will be retrieved on
+ * TLB invalidation from the guest and used to limit the invalidation scope if a
+ * TTL hint or a range isn't provided.
+ */
+static void adjust_nested_fault_perms(struct kvm_s2_trans *nested,
+				      enum kvm_pgtable_prot *prot,
+				      bool *writable)
+{
+	*writable &= kvm_s2_trans_writable(nested);
+	if (!kvm_s2_trans_readable(nested))
+		*prot &= ~KVM_PGTABLE_PROT_R;
+
+	*prot |= kvm_encode_nested_level(nested);
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_s2_trans *nested,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
 			  bool fault_is_perm)
 {
 	int ret = 0;
-	bool write_fault, writable, force_pte = false;
+	bool topup_memcache;
+	bool write_fault, writable;
 	bool exec_fault, mte_allowed;
 	bool device = false, vfio_allow_any_uc = false;
 	unsigned long mmu_seq;
@@ -1484,6 +1527,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
+	bool force_pte = logging_active || is_protected_kvm_enabled();
 	long vma_pagesize, fault_granule;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
@@ -1501,28 +1545,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}
 
-	if (!is_protected_kvm_enabled())
-		memcache = &vcpu->arch.mmu_page_cache;
-	else
-		memcache = &vcpu->arch.pkvm_memcache;
-
 	/*
 	 * Permission faults just need to update the existing leaf entry,
 	 * and so normally don't require allocations from the memcache. The
 	 * only exception to this is when dirty logging is enabled at runtime
 	 * and a write fault needs to collapse a block entry into a table.
 	 */
-	if (!fault_is_perm || (logging_active && write_fault)) {
-		int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
-
-		if (!is_protected_kvm_enabled())
-			ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
-		else
-			ret = topup_hyp_memcache(memcache, min_pages);
-
-		if (ret)
-			return ret;
-	}
+	topup_memcache = !fault_is_perm || (logging_active && write_fault);
+	ret = prepare_mmu_memcache(vcpu, topup_memcache, &memcache);
+	if (ret)
+		return ret;
 
 	/*
 	 * Let's check if we will get back a huge page backed by hugetlbfs, or
@@ -1536,16 +1568,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}
 
-	/*
-	 * logging_active is guaranteed to never be true for VM_PFNMAP
-	 * memslots.
-	 */
-	if (logging_active || is_protected_kvm_enabled()) {
-		force_pte = true;
+	if (force_pte)
 		vma_shift = PAGE_SHIFT;
-	} else {
+	else
 		vma_shift = get_vma_page_shift(vma, hva);
-	}
 
 	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
@@ -1597,7 +1623,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			max_map_size = PAGE_SIZE;
 
 		force_pte = (max_map_size == PAGE_SIZE);
-		vma_pagesize = min(vma_pagesize, (long)max_map_size);
+		vma_pagesize = min_t(long, vma_pagesize, max_map_size);
 	}
 
 	/*
@@ -1626,7 +1652,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
 	 * with the smp_wmb() in kvm_mmu_invalidate_end().
 	 */
-	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
+	mmu_seq = kvm->mmu_invalidate_seq;
 	mmap_read_unlock(current->mm);
 
 	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
@@ -1661,24 +1687,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (exec_fault && device)
 		return -ENOEXEC;
 
-	/*
-	 * Potentially reduce shadow S2 permissions to match the guest's own
-	 * S2. For exec faults, we'd only reach this point if the guest
-	 * actually allowed it (see kvm_s2_handle_perm_fault).
-	 *
-	 * Also encode the level of the original translation in the SW bits
-	 * of the leaf entry as a proxy for the span of that translation.
-	 * This will be retrieved on TLB invalidation from the guest and
-	 * used to limit the invalidation scope if a TTL hint or a range
-	 * isn't provided.
-	 */
-	if (nested) {
-		writable &= kvm_s2_trans_writable(nested);
-		if (!kvm_s2_trans_readable(nested))
-			prot &= ~KVM_PGTABLE_PROT_R;
-
-		prot |= kvm_encode_nested_level(nested);
-	}
+	if (nested)
+		adjust_nested_fault_perms(nested, &prot, &writable);
 
 	kvm_fault_lock(kvm);
 	pgt = vcpu->arch.hw_mmu->pgt;
-- 
2.49.0.1266.g31b7d2e469-goog
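
P.S. A worked example of the extracted adjust_nested_fault_perms(), as a
review aid only; the guest S2 state here is hypothetical, not patch
content. Suppose the guest's own S2 maps the faulting page read-only at
level 2:

	*writable &= kvm_s2_trans_writable(nested);	/* false: clamp to non-writable */
	if (!kvm_s2_trans_readable(nested))		/* readable here, so R is kept */
		*prot &= ~KVM_PGTABLE_PROT_R;
	*prot |= kvm_encode_nested_level(nested);	/* stash level 2 in the SW bits */

The shadow S2 never grants more than the guest's own S2, and the
original translation level rides along in the SW bits for later TLBI
scoping.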