From: Gavin Shan <gshan@redhat.com>
Date: Mon, 9 Jun 2025 10:27:21 +1000
Subject: Re: [PATCH v11 13/18] KVM: arm64: Refactor user_mem_abort()
To: Fuad Tabba <tabba@google.com>, kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-mm@kvack.org, kvmarm@lists.linux.dev
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org,
	paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com,
	viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
	xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org,
	amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net,
	vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
	david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com,
	isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
	steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
	quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com,
	quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com,
	james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
	will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org,
	hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com,
	hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com,
	ira.weiny@intel.com
In-Reply-To: <20250605153800.557144-14-tabba@google.com>
References: <20250605153800.557144-1-tabba@google.com> <20250605153800.557144-14-tabba@google.com>

Hi Fuad,

On 6/6/25 1:37 AM, Fuad Tabba wrote:
> To simplify the code and to make the assumptions clearer,
> refactor user_mem_abort() by immediately setting force_pte to
> true if the conditions are met.
>
> Remove the comment about logging_active being guaranteed to never be
> true for VM_PFNMAP memslots, since it's not actually correct.
>
> Move code that will be reused in the following patch into separate
> functions.
>
> Other small instances of tidying up.
>
> No functional change intended.
>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/kvm/mmu.c | 100 ++++++++++++++++++++++++-------------------
>  1 file changed, 55 insertions(+), 45 deletions(-)
>

One nitpick below in case v12 is needed. Either way, it looks good to me:

Reviewed-by: Gavin Shan <gshan@redhat.com>

> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index eeda92330ade..ce80be116a30 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1466,13 +1466,56 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
>  	return vma->vm_flags & VM_MTE_ALLOWED;
>  }
>
> +static int prepare_mmu_memcache(struct kvm_vcpu *vcpu, bool topup_memcache,
> +				void **memcache)
> +{
> +	int min_pages;
> +
> +	if (!is_protected_kvm_enabled())
> +		*memcache = &vcpu->arch.mmu_page_cache;
> +	else
> +		*memcache = &vcpu->arch.pkvm_memcache;
> +
> +	if (!topup_memcache)
> +		return 0;
> +

It's unnecessary to initialize 'memcache' when topup_memcache is false. Something
like the following would avoid that (a consolidated sketch of the reordered helper
is appended at the end of this reply):

	if (!topup_memcache)
		return 0;

	min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);

	if (!is_protected_kvm_enabled())
		*memcache = &vcpu->arch.mmu_page_cache;
	else
		*memcache = &vcpu->arch.pkvm_memcache;

Thanks,
Gavin

> +	min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
> +
> +	if (!is_protected_kvm_enabled())
> +		return kvm_mmu_topup_memory_cache(*memcache, min_pages);
> +
> +	return topup_hyp_memcache(*memcache, min_pages);
> +}
> +
> +/*
> + * Potentially reduce shadow S2 permissions to match the guest's own S2. For
> + * exec faults, we'd only reach this point if the guest actually allowed it (see
> + * kvm_s2_handle_perm_fault).
> + *
> + * Also encode the level of the original translation in the SW bits of the leaf
> + * entry as a proxy for the span of that translation. This will be retrieved on
> + * TLB invalidation from the guest and used to limit the invalidation scope if a
> + * TTL hint or a range isn't provided.
> + */
> +static void adjust_nested_fault_perms(struct kvm_s2_trans *nested,
> +				      enum kvm_pgtable_prot *prot,
> +				      bool *writable)
> +{
> +	*writable &= kvm_s2_trans_writable(nested);
> +	if (!kvm_s2_trans_readable(nested))
> +		*prot &= ~KVM_PGTABLE_PROT_R;
> +
> +	*prot |= kvm_encode_nested_level(nested);
> +}
> +
>  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  			  struct kvm_s2_trans *nested,
>  			  struct kvm_memory_slot *memslot, unsigned long hva,
>  			  bool fault_is_perm)
>  {
>  	int ret = 0;
> -	bool write_fault, writable, force_pte = false;
> +	bool topup_memcache;
> +	bool write_fault, writable;
>  	bool exec_fault, mte_allowed;
>  	bool device = false, vfio_allow_any_uc = false;
>  	unsigned long mmu_seq;
> @@ -1484,6 +1527,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	gfn_t gfn;
>  	kvm_pfn_t pfn;
>  	bool logging_active = memslot_is_logging(memslot);
> +	bool force_pte = logging_active || is_protected_kvm_enabled();
>  	long vma_pagesize, fault_granule;
>  	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
>  	struct kvm_pgtable *pgt;
> @@ -1501,28 +1545,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		return -EFAULT;
>  	}
>
> -	if (!is_protected_kvm_enabled())
> -		memcache = &vcpu->arch.mmu_page_cache;
> -	else
> -		memcache = &vcpu->arch.pkvm_memcache;
> -
>  	/*
>  	 * Permission faults just need to update the existing leaf entry,
>  	 * and so normally don't require allocations from the memcache. The
>  	 * only exception to this is when dirty logging is enabled at runtime
>  	 * and a write fault needs to collapse a block entry into a table.
>  	 */
> -	if (!fault_is_perm || (logging_active && write_fault)) {
> -		int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
> -
> -		if (!is_protected_kvm_enabled())
> -			ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
> -		else
> -			ret = topup_hyp_memcache(memcache, min_pages);
> -
> -		if (ret)
> -			return ret;
> -	}
> +	topup_memcache = !fault_is_perm || (logging_active && write_fault);
> +	ret = prepare_mmu_memcache(vcpu, topup_memcache, &memcache);
> +	if (ret)
> +		return ret;
>
>  	/*
>  	 * Let's check if we will get back a huge page backed by hugetlbfs, or
> @@ -1536,16 +1568,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		return -EFAULT;
>  	}
>
> -	/*
> -	 * logging_active is guaranteed to never be true for VM_PFNMAP
> -	 * memslots.
> -	 */
> -	if (logging_active || is_protected_kvm_enabled()) {
> -		force_pte = true;
> +	if (force_pte)
>  		vma_shift = PAGE_SHIFT;
> -	} else {
> +	else
>  		vma_shift = get_vma_page_shift(vma, hva);
> -	}
>
>  	switch (vma_shift) {
>  #ifndef __PAGETABLE_PMD_FOLDED
> @@ -1597,7 +1623,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		max_map_size = PAGE_SIZE;
>
>  		force_pte = (max_map_size == PAGE_SIZE);
> -		vma_pagesize = min(vma_pagesize, (long)max_map_size);
> +		vma_pagesize = min_t(long, vma_pagesize, max_map_size);
>  	}
>
>  	/*
> @@ -1626,7 +1652,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
>  	 * with the smp_wmb() in kvm_mmu_invalidate_end().
>  	 */
> -	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
> +	mmu_seq = kvm->mmu_invalidate_seq;
>  	mmap_read_unlock(current->mm);
>
>  	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
> @@ -1661,24 +1687,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	if (exec_fault && device)
>  		return -ENOEXEC;
>
> -	/*
> -	 * Potentially reduce shadow S2 permissions to match the guest's own
> -	 * S2. For exec faults, we'd only reach this point if the guest
> -	 * actually allowed it (see kvm_s2_handle_perm_fault).
> -	 *
> -	 * Also encode the level of the original translation in the SW bits
> -	 * of the leaf entry as a proxy for the span of that translation.
> -	 * This will be retrieved on TLB invalidation from the guest and
> -	 * used to limit the invalidation scope if a TTL hint or a range
> -	 * isn't provided.
> -	 */
> -	if (nested) {
> -		writable &= kvm_s2_trans_writable(nested);
> -		if (!kvm_s2_trans_readable(nested))
> -			prot &= ~KVM_PGTABLE_PROT_R;
> -
> -		prot |= kvm_encode_nested_level(nested);
> -	}
> +	if (nested)
> +		adjust_nested_fault_perms(nested, &prot, &writable);
>
>  	kvm_fault_lock(kvm);
>  	pgt = vcpu->arch.hw_mmu->pgt;
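
For reference, stitching the nitpick above together with the unchanged tail of the
helper from the patch, the reordered prepare_mmu_memcache() would read roughly as
below. This is only a sketch of the suggested ordering, with the two duplicated
is_protected_kvm_enabled() checks folded together; it uses only the calls and fields
already in the patch and is not a tested change:

	static int prepare_mmu_memcache(struct kvm_vcpu *vcpu, bool topup_memcache,
					void **memcache)
	{
		int min_pages;

		/* No allocation expected: skip both the lookup and the top-up. */
		if (!topup_memcache)
			return 0;

		min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);

		if (!is_protected_kvm_enabled()) {
			*memcache = &vcpu->arch.mmu_page_cache;
			return kvm_mmu_topup_memory_cache(*memcache, min_pages);
		}

		*memcache = &vcpu->arch.pkvm_memcache;
		return topup_hyp_memcache(*memcache, min_pages);
	}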