From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 13 May 2025 17:34:33 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Mime-Version: 1.0
References: <20250513163438.3942405-1-tabba@google.com>
X-Mailer: git-send-email 2.49.0.1045.g170613ef41-goog
Message-ID: <20250513163438.3942405-13-tabba@google.com>
Subject: [PATCH v9 12/17] KVM: arm64: Rename variables in user_mem_abort()
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
	anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
	brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
	xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
	jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
	isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
	vannapurve@google.com, ackerleytng@google.com,
	mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com,
	wei.w.wang@intel.com, liam.merwick@oracle.com,
	isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com,
	suzuki.poulose@arm.com, steven.price@arm.com,
	quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
	quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
	quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
	quic_pheragu@quicinc.com, catalin.marinas@arm.com,
	james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev,
	maz@kernel.org, will@kernel.org, qperret@google.com,
	keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org,
	hch@infradead.org, jgg@nvidia.com, rientjes@google.com,
	jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
	jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com,
	ira.weiny@intel.com, tabba@google.com
Content-Type: text/plain; charset="UTF-8"

Guest memory can be backed by guest_memfd or by anonymous memory.
Rename vma_shift to page_shift and vma_pagesize to page_size to improve
readability in subsequent patches.

Suggested-by: James Houghton <jthoughton@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/mmu.c | 54 ++++++++++++++++++++++----------------------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 9865ada04a81..d756c2b5913f 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1479,13 +1479,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	phys_addr_t ipa = fault_ipa;
 	struct kvm *kvm = vcpu->kvm;
 	struct vm_area_struct *vma;
-	short vma_shift;
+	short page_shift;
 	void *memcache;
 	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
 	bool force_pte = logging_active || is_protected_kvm_enabled();
-	long vma_pagesize, fault_granule;
+	long page_size, fault_granule;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
 	struct page *page;
@@ -1538,11 +1538,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	}
 
 	if (force_pte)
-		vma_shift = PAGE_SHIFT;
+		page_shift = PAGE_SHIFT;
 	else
-		vma_shift = get_vma_page_shift(vma, hva);
+		page_shift = get_vma_page_shift(vma, hva);
 
-	switch (vma_shift) {
+	switch (page_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
 	case PUD_SHIFT:
 		if (fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
@@ -1550,23 +1550,23 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		fallthrough;
 #endif
 	case CONT_PMD_SHIFT:
-		vma_shift = PMD_SHIFT;
+		page_shift = PMD_SHIFT;
 		fallthrough;
 	case PMD_SHIFT:
 		if (fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE))
 			break;
 		fallthrough;
 	case CONT_PTE_SHIFT:
-		vma_shift = PAGE_SHIFT;
+		page_shift = PAGE_SHIFT;
 		force_pte = true;
 		fallthrough;
 	case PAGE_SHIFT:
 		break;
 	default:
-		WARN_ONCE(1, "Unknown vma_shift %d", vma_shift);
+		WARN_ONCE(1, "Unknown page_shift %d", page_shift);
 	}
 
-	vma_pagesize = 1UL << vma_shift;
+	page_size = 1UL << page_shift;
 
 	if (nested) {
 		unsigned long max_map_size;
@@ -1592,7 +1592,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			max_map_size = PAGE_SIZE;
 
 		force_pte = (max_map_size == PAGE_SIZE);
-		vma_pagesize = min(vma_pagesize, (long)max_map_size);
+		page_size = min_t(long, page_size, max_map_size);
 	}
 
 	/*
@@ -1600,9 +1600,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * ensure we find the right PFN and lay down the mapping in the right
 	 * place.
 	 */
-	if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE) {
-		fault_ipa &= ~(vma_pagesize - 1);
-		ipa &= ~(vma_pagesize - 1);
+	if (page_size == PMD_SIZE || page_size == PUD_SIZE) {
+		fault_ipa &= ~(page_size - 1);
+		ipa &= ~(page_size - 1);
 	}
 
 	gfn = ipa >> PAGE_SHIFT;
@@ -1627,7 +1627,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
				&writable, &page);
 	if (pfn == KVM_PFN_ERR_HWPOISON) {
-		kvm_send_hwpoison_signal(hva, vma_shift);
+		kvm_send_hwpoison_signal(hva, page_shift);
 		return 0;
 	}
 	if (is_error_noslot_pfn(pfn))
@@ -1636,9 +1636,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (kvm_is_device_pfn(pfn)) {
 		/*
 		 * If the page was identified as device early by looking at
-		 * the VMA flags, vma_pagesize is already representing the
+		 * the VMA flags, page_size is already representing the
 		 * largest quantity we can map. If instead it was mapped
-		 * via __kvm_faultin_pfn(), vma_pagesize is set to PAGE_SIZE
+		 * via __kvm_faultin_pfn(), page_size is set to PAGE_SIZE
 		 * and must not be upgraded.
 		 *
 		 * In both cases, we don't let transparent_hugepage_adjust()
@@ -1686,16 +1686,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * If we are not forced to use page mapping, check if we are
 	 * backed by a THP and thus use block mapping if possible.
 	 */
-	if (vma_pagesize == PAGE_SIZE && !(force_pte || device)) {
+	if (page_size == PAGE_SIZE && !(force_pte || device)) {
 		if (fault_is_perm && fault_granule > PAGE_SIZE)
-			vma_pagesize = fault_granule;
+			page_size = fault_granule;
 		else
-			vma_pagesize = transparent_hugepage_adjust(kvm, memslot,
-								   hva, &pfn,
-								   &fault_ipa);
+			page_size = transparent_hugepage_adjust(kvm, memslot,
+								hva, &pfn,
+								&fault_ipa);
 
-		if (vma_pagesize < 0) {
-			ret = vma_pagesize;
+		if (page_size < 0) {
+			ret = page_size;
 			goto out_unlock;
 		}
 	}
@@ -1703,7 +1703,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (!fault_is_perm && !device && kvm_has_mte(kvm)) {
 		/* Check the VMM hasn't introduced a new disallowed VMA */
 		if (mte_allowed) {
-			sanitise_mte_tags(kvm, pfn, vma_pagesize);
+			sanitise_mte_tags(kvm, pfn, page_size);
 		} else {
 			ret = -EFAULT;
 			goto out_unlock;
@@ -1728,10 +1728,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 	/*
 	 * Under the premise of getting a FSC_PERM fault, we just need to relax
-	 * permissions only if vma_pagesize equals fault_granule. Otherwise,
+	 * permissions only if page_size equals fault_granule. Otherwise,
 	 * kvm_pgtable_stage2_map() should be called to change block size.
 	 */
-	if (fault_is_perm && vma_pagesize == fault_granule) {
+	if (fault_is_perm && page_size == fault_granule) {
 		/*
 		 * Drop the SW bits in favour of those stored in the
 		 * PTE, which will be preserved.
@@ -1739,7 +1739,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		prot &= ~KVM_NV_GUEST_MAP_SZ;
 		ret = KVM_PGT_FN(kvm_pgtable_stage2_relax_perms)(pgt, fault_ipa, prot, flags);
 	} else {
-		ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, fault_ipa, vma_pagesize,
+		ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, fault_ipa, page_size,
							 __pfn_to_phys(pfn), prot,
							 memcache, flags);
 	}
-- 
2.49.0.1045.g170613ef41-goog