Message-ID: <7c2bea4b-7a89-4875-ac83-50960f90da8c@redhat.com>
Date: Mon, 9 Jun 2025 19:02:29 +1000
Subject: Re: [PATCH v11 13/18] KVM: arm64: Refactor user_mem_abort()
From: Gavin Shan <gshan@redhat.com>
To: Fuad Tabba
Cc: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
    kvmarm@lists.linux.dev, pbonzini@redhat.com, chenhuacai@kernel.org,
    mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com,
    palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com,
    viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org,
    akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com,
    chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com,
    dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net,
    vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com,
    mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com,
    wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com,
    kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
    steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
    quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
    quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
    yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
    will@kernel.org, qperret@google.com, keirf@google.com,
    roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com,
    rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com,
    hughd@google.com, jthoughton@google.com, peterx@redhat.com,
    pankaj.gupta@amd.com, ira.weiny@intel.com
References: <20250605153800.557144-1-tabba@google.com>
 <20250605153800.557144-14-tabba@google.com>

Hi Fuad,

On 6/9/25 5:01 PM, Fuad Tabba wrote:
> On Mon, 9 Jun 2025 at 01:27, Gavin Shan <gshan@redhat.com> wrote:
>>
>> On 6/6/25 1:37 AM, Fuad Tabba wrote:
>>> To simplify the code and to make the assumptions clearer,
>>> refactor user_mem_abort() by immediately setting force_pte to
>>> true if the conditions are met.
>>>
>>> Remove the comment about logging_active being guaranteed to never be
>>> true for VM_PFNMAP memslots, since it's not actually correct.
>>>
>>> Move code that will be reused in the following patch into separate
>>> functions.
>>>
>>> Other small instances of tidying up.
>>>
>>> No functional change intended.
>>>
>>> Signed-off-by: Fuad Tabba
>>> ---
>>>   arch/arm64/kvm/mmu.c | 100 ++++++++++++++++++++++++-------------------
>>>   1 file changed, 55 insertions(+), 45 deletions(-)
>>>
>>
>> One nitpick below in case v12 is needed. Either way, it looks good to me:
>>
>> Reviewed-by: Gavin Shan <gshan@redhat.com>
>>
>>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>>> index eeda92330ade..ce80be116a30 100644
>>> --- a/arch/arm64/kvm/mmu.c
>>> +++ b/arch/arm64/kvm/mmu.c
>>> @@ -1466,13 +1466,56 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
>>>  	return vma->vm_flags & VM_MTE_ALLOWED;
>>>  }
>>>
>>> +static int prepare_mmu_memcache(struct kvm_vcpu *vcpu, bool topup_memcache,
>>> +				void **memcache)
>>> +{
>>> +	int min_pages;
>>> +
>>> +	if (!is_protected_kvm_enabled())
>>> +		*memcache = &vcpu->arch.mmu_page_cache;
>>> +	else
>>> +		*memcache = &vcpu->arch.pkvm_memcache;
>>> +
>>> +	if (!topup_memcache)
>>> +		return 0;
>>> +
>>
>> It's unnecessary to initialize 'memcache' when topup_memcache is false.
>
> I thought about this before, and I _think_ you're right. However, I
> couldn't completely convince myself that that's always the case for
> the code to be functionally equivalent (looking at the condition for
> kvm_pgtable_stage2_relax_perms() at the end of the function). Which is
> why, if I were to do that, I'd do it as a separate patch.
>

Thanks for the pointer, which I didn't notice. Yeah, it's out of scope
and can be fixed up in a separate patch after this series gets merged.
Please leave it as is, and sorry for the noise.

To follow up on the discussion, I think it's safe to skip initializing
'memcache' when 'topup_memcache' is false. The conditions that set
'topup_memcache' to true guarantee that kvm_pgtable_stage2_map() will be
executed; in other words, kvm_pgtable_stage2_relax_perms() is what runs
when 'topup_memcache' is false. Besides, it seems meaningless to
dereference 'vcpu->arch.mmu_page_cache' or 'vcpu->arch.pkvm_memcache'
without topping it up.
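For completeness, a minimal sketch of that hypothetical follow-up
(untested, assuming the reordering suggested earlier in the thread, and
that no caller reads 'memcache' unless 'topup_memcache' is true):

  static int prepare_mmu_memcache(struct kvm_vcpu *vcpu, bool topup_memcache,
                                  void **memcache)
  {
          int min_pages;

          /* Hypothetical reordering: bail out before touching *memcache */
          if (!topup_memcache)
                  return 0;

          min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);

          if (!is_protected_kvm_enabled()) {
                  *memcache = &vcpu->arch.mmu_page_cache;
                  return kvm_mmu_topup_memory_cache(*memcache, min_pages);
          }

          *memcache = &vcpu->arch.pkvm_memcache;
          return topup_hyp_memcache(*memcache, min_pages);
  }

Again, that would be a separate patch on top of this series, not a change
to this one.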
There are already comments explaining why 'topup_memcache' can still be
true for permission faults:

  /*
   * Permission faults just need to update the existing leaf entry,
   * and so normally don't require allocations from the memcache. The
   * only exception to this is when dirty logging is enabled at runtime
   * and a write fault needs to collapse a block entry into a table.
   */
  topup_memcache = !fault_is_perm || (logging_active && write_fault);

  if (fault_is_perm && vma_pagesize == fault_granule)
          kvm_pgtable_stage2_relax_perms(...);

> Thanks,
> /fuad
>

Thanks,
Gavin

>> if (!topup_memcache)
>>         return 0;
>>
>> min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
>> if (!is_protected_kvm_enabled())
>>         *memcache = &vcpu->arch.mmu_page_cache;
>> else
>>         *memcache = &vcpu->arch.pkvm_memcache;
>>
>> Thanks,
>> Gavin
>>
>>> +	min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
>>> +
>>> +	if (!is_protected_kvm_enabled())
>>> +		return kvm_mmu_topup_memory_cache(*memcache, min_pages);
>>> +
>>> +	return topup_hyp_memcache(*memcache, min_pages);
>>> +}
>>> +
>>> +/*
>>> + * Potentially reduce shadow S2 permissions to match the guest's own S2. For
>>> + * exec faults, we'd only reach this point if the guest actually allowed it (see
>>> + * kvm_s2_handle_perm_fault).
>>> + *
>>> + * Also encode the level of the original translation in the SW bits of the leaf
>>> + * entry as a proxy for the span of that translation. This will be retrieved on
>>> + * TLB invalidation from the guest and used to limit the invalidation scope if a
>>> + * TTL hint or a range isn't provided.
>>> + */
>>> +static void adjust_nested_fault_perms(struct kvm_s2_trans *nested,
>>> +				      enum kvm_pgtable_prot *prot,
>>> +				      bool *writable)
>>> +{
>>> +	*writable &= kvm_s2_trans_writable(nested);
>>> +	if (!kvm_s2_trans_readable(nested))
>>> +		*prot &= ~KVM_PGTABLE_PROT_R;
>>> +
>>> +	*prot |= kvm_encode_nested_level(nested);
>>> +}
>>> +
>>>  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>  			  struct kvm_s2_trans *nested,
>>>  			  struct kvm_memory_slot *memslot, unsigned long hva,
>>>  			  bool fault_is_perm)
>>>  {
>>>  	int ret = 0;
>>> -	bool write_fault, writable, force_pte = false;
>>> +	bool topup_memcache;
>>> +	bool write_fault, writable;
>>>  	bool exec_fault, mte_allowed;
>>>  	bool device = false, vfio_allow_any_uc = false;
>>>  	unsigned long mmu_seq;
>>> @@ -1484,6 +1527,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>  	gfn_t gfn;
>>>  	kvm_pfn_t pfn;
>>>  	bool logging_active = memslot_is_logging(memslot);
>>> +	bool force_pte = logging_active || is_protected_kvm_enabled();
>>>  	long vma_pagesize, fault_granule;
>>>  	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
>>>  	struct kvm_pgtable *pgt;
>>> @@ -1501,28 +1545,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>  		return -EFAULT;
>>>  	}
>>>
>>> -	if (!is_protected_kvm_enabled())
>>> -		memcache = &vcpu->arch.mmu_page_cache;
>>> -	else
>>> -		memcache = &vcpu->arch.pkvm_memcache;
>>> -
>>>  	/*
>>>  	 * Permission faults just need to update the existing leaf entry,
>>>  	 * and so normally don't require allocations from the memcache. The
>>>  	 * only exception to this is when dirty logging is enabled at runtime
>>>  	 * and a write fault needs to collapse a block entry into a table.
>>>  	 */
>>> -	if (!fault_is_perm || (logging_active && write_fault)) {
>>> -		int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
>>> -
>>> -		if (!is_protected_kvm_enabled())
>>> -			ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
>>> -		else
>>> -			ret = topup_hyp_memcache(memcache, min_pages);
>>> -
>>> -		if (ret)
>>> -			return ret;
>>> -	}
>>> +	topup_memcache = !fault_is_perm || (logging_active && write_fault);
>>> +	ret = prepare_mmu_memcache(vcpu, topup_memcache, &memcache);
>>> +	if (ret)
>>> +		return ret;
>>>
>>>  	/*
>>>  	 * Let's check if we will get back a huge page backed by hugetlbfs, or
>>> @@ -1536,16 +1568,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>  		return -EFAULT;
>>>  	}
>>>
>>> -	/*
>>> -	 * logging_active is guaranteed to never be true for VM_PFNMAP
>>> -	 * memslots.
>>> -	 */
>>> -	if (logging_active || is_protected_kvm_enabled()) {
>>> -		force_pte = true;
>>> +	if (force_pte)
>>>  		vma_shift = PAGE_SHIFT;
>>> -	} else {
>>> +	else
>>>  		vma_shift = get_vma_page_shift(vma, hva);
>>> -	}
>>>
>>>  	switch (vma_shift) {
>>>  #ifndef __PAGETABLE_PMD_FOLDED
>>> @@ -1597,7 +1623,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>  		max_map_size = PAGE_SIZE;
>>>
>>>  		force_pte = (max_map_size == PAGE_SIZE);
>>> -		vma_pagesize = min(vma_pagesize, (long)max_map_size);
>>> +		vma_pagesize = min_t(long, vma_pagesize, max_map_size);
>>>  	}
>>>
>>>  	/*
>>> @@ -1626,7 +1652,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>  	 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
>>>  	 * with the smp_wmb() in kvm_mmu_invalidate_end().
>>>  	 */
>>> -	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
>>> +	mmu_seq = kvm->mmu_invalidate_seq;
>>>  	mmap_read_unlock(current->mm);
>>>
>>>  	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
>>> @@ -1661,24 +1687,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>  	if (exec_fault && device)
>>>  		return -ENOEXEC;
>>>
>>> -	/*
>>> -	 * Potentially reduce shadow S2 permissions to match the guest's own
>>> -	 * S2. For exec faults, we'd only reach this point if the guest
>>> -	 * actually allowed it (see kvm_s2_handle_perm_fault).
>>> -	 *
>>> -	 * Also encode the level of the original translation in the SW bits
>>> -	 * of the leaf entry as a proxy for the span of that translation.
>>> -	 * This will be retrieved on TLB invalidation from the guest and
>>> -	 * used to limit the invalidation scope if a TTL hint or a range
>>> -	 * isn't provided.
>>> -	 */
>>> -	if (nested) {
>>> -		writable &= kvm_s2_trans_writable(nested);
>>> -		if (!kvm_s2_trans_readable(nested))
>>> -			prot &= ~KVM_PGTABLE_PROT_R;
>>> -
>>> -		prot |= kvm_encode_nested_level(nested);
>>> -	}
>>> +	if (nested)
>>> +		adjust_nested_fault_perms(nested, &prot, &writable);
>>>
>>>  	kvm_fault_lock(kvm);
>>>  	pgt = vcpu->arch.hw_mmu->pgt;
>>
>