From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Fri, 10 May 2024 19:09:07 +0200
Subject: Re: [PATCH v15 23/23] KVM: SEV: Fix PSC handling for SMASH/UNSMASH
 and partial update ops
To: Michael Roth, kvm@vger.kernel.org
Cc: linux-coco@lists.linux.dev, linux-mm@kvack.org,
 linux-crypto@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
 tglx@linutronix.de, mingo@redhat.com, jroedel@suse.de,
 thomas.lendacky@amd.com, hpa@zytor.com, ardb@kernel.org, seanjc@google.com,
 vkuznets@redhat.com, jmattson@google.com, luto@kernel.org,
 dave.hansen@linux.intel.com, slp@redhat.com, pgonda@google.com,
 peterz@infradead.org, srinivas.pandruvada@linux.intel.com,
 rientjes@google.com, dovmurik@linux.ibm.com, tobin@ibm.com, bp@alien8.de,
 vbabka@suse.cz, kirill@shutemov.name, ak@linux.intel.com,
 tony.luck@intel.com, sathyanarayanan.kuppuswamy@linux.intel.com,
 alpergun@google.com, jarkko@kernel.org, ashish.kalra@amd.com,
 nikunj.dadhania@amd.com, pankaj.gupta@amd.com, liam.merwick@oracle.com,
 papaluri@amd.com
References: <20240501085210.2213060-1-michael.roth@amd.com>
 <20240510015822.503071-1-michael.roth@amd.com>
 <20240510015822.503071-3-michael.roth@amd.com>
In-Reply-To: <20240510015822.503071-3-michael.roth@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed

On 5/10/24 03:58, Michael Roth wrote:
> There are a few edge cases that the current processing for GHCB PSC
> requests doesn't handle properly:
>
> - KVM properly ignores SMASH/UNSMASH ops when they are embedded in a
>   PSC request buffer which contains other PSC operations, but
>   inadvertently forwards them to userspace as private->shared PSC
>   requests if they appear at the end of the buffer. Make sure these are
>   ignored instead, just like cases where they are not at the end of the
>   request buffer.
>
> - Current code handles non-zero 'cur_page' fields when they are at the
>   beginning of a new GPA range, but does not handle them properly when
>   iterating through subsequent entries which are otherwise part of a
>   contiguous range. Fix up the handling so that these entries are not
>   combined into a larger contiguous range that includes unintended GPA
>   ranges and are instead processed later as the start of a new
>   contiguous range.
>
> - The page size variable used to track 2M entries in KVM for in-flight
>   PSCs might be artificially set to a different value, which can lead to
>   unexpected values in the entry's final 'cur_page' update. Use the
>   entry's 'pagesize' field instead to determine what the value of
>   'cur_page' should be upon completion of processing.
>
> While here, also add a small helper for clearing in-flight PSC
> variables and fix up comments for better readability.
>
> Fixes: 266205d810d2 ("KVM: SEV: Add support to handle Page State Change VMGEXIT")
> Signed-off-by: Michael Roth

There are some more improvements that can be made to the readability of
the code... this one is already better than the code the patch is fixing
up, but I don't like having code in the loop that is unconditionally
followed by a "break".

Here's my attempt at replacing this patch; it is really more of a rewrite
of the whole function... Untested beyond compilation.
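For reference while reading the diff, the guest-provided PSC buffer being
parsed here is laid out roughly like this (a simplified sketch of the
definitions in arch/x86/include/asm/sev-common.h, following the GHCB spec;
not part of the patch):

struct psc_hdr {
	u16 cur_entry;		/* first entry still to be processed */
	u16 end_entry;		/* index of the last valid entry */
	u32 reserved;
} __packed;

struct psc_entry {
	u64 cur_page	: 12,	/* sub-pages already processed (0..512) */
	    gfn		: 40,	/* guest frame number of the range */
	    pagesize	: 1,	/* 0 = 4K entry, 1 = 2M entry */
	    operation	: 4,	/* PRIVATE/SHARED/SMASH/UNSMASH */
	    reserved	: 7;
} __packed;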
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 35f0bd91f92e..6e612789c35f 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -3555,23 +3555,25 @@ struct psc_buffer {
 
 static int snp_begin_psc(struct vcpu_svm *svm, struct psc_buffer *psc);
 
-static int snp_complete_psc(struct kvm_vcpu *vcpu)
+static void snp_complete_psc(struct vcpu_svm *svm, u64 psc_ret)
+{
+	svm->sev_es.psc_inflight = 0;
+	svm->sev_es.psc_idx = 0;
+	svm->sev_es.psc_2m = 0;
+	ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, psc_ret);
+}
+
+static void __snp_complete_one_psc(struct vcpu_svm *svm)
 {
-	struct vcpu_svm *svm = to_svm(vcpu);
 	struct psc_buffer *psc = svm->sev_es.ghcb_sa;
 	struct psc_entry *entries = psc->entries;
 	struct psc_hdr *hdr = &psc->hdr;
-	__u64 psc_ret;
 	__u16 idx;
 
-	if (vcpu->run->hypercall.ret) {
-		psc_ret = VMGEXIT_PSC_ERROR_GENERIC;
-		goto out_resume;
-	}
-
 	/*
 	 * Everything in-flight has been processed successfully. Update the
-	 * corresponding entries in the guest's PSC buffer.
+	 * corresponding entries in the guest's PSC buffer and zero out the
+	 * count of in-flight PSC entries.
 	 */
 	for (idx = svm->sev_es.psc_idx; svm->sev_es.psc_inflight;
 	     svm->sev_es.psc_inflight--, idx++) {
@@ -3581,17 +3583,22 @@ static int snp_complete_psc(struct kvm_vcpu *vcpu)
 	}
 
 	hdr->cur_entry = idx;
+}
+
+static int snp_complete_one_psc(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	struct psc_buffer *psc = svm->sev_es.ghcb_sa;
+
+	if (vcpu->run->hypercall.ret) {
+		snp_complete_psc(svm, VMGEXIT_PSC_ERROR_GENERIC);
+		return 1; /* resume guest */
+	}
+
+	__snp_complete_one_psc(svm);
 
 	/* Handle the next range (if any). */
 	return snp_begin_psc(svm, psc);
-
-out_resume:
-	svm->sev_es.psc_idx = 0;
-	svm->sev_es.psc_inflight = 0;
-	svm->sev_es.psc_2m = false;
-	ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, psc_ret);
-
-	return 1; /* resume guest */
 }
 
 static int snp_begin_psc(struct vcpu_svm *svm, struct psc_buffer *psc)
@@ -3601,18 +3608,20 @@ static int snp_begin_psc(struct vcpu_svm *svm, struct psc_buffer *psc)
 	struct psc_hdr *hdr = &psc->hdr;
 	struct psc_entry entry_start;
 	u16 idx, idx_start, idx_end;
-	__u64 psc_ret, gpa;
+	u64 gfn;
 	int npages;
-
-	/* There should be no other PSCs in-flight at this point. */
-	if (WARN_ON_ONCE(svm->sev_es.psc_inflight)) {
-		psc_ret = VMGEXIT_PSC_ERROR_GENERIC;
-		goto out_resume;
-	}
+	bool huge;
 
 	if (!(vcpu->kvm->arch.hypercall_exit_enabled & (1 << KVM_HC_MAP_GPA_RANGE))) {
-		psc_ret = VMGEXIT_PSC_ERROR_GENERIC;
-		goto out_resume;
+		snp_complete_psc(svm, VMGEXIT_PSC_ERROR_GENERIC);
+		return 1;
+	}
+
+next_range:
+	/* There should be no other PSCs in-flight at this point. */
+	if (WARN_ON_ONCE(svm->sev_es.psc_inflight)) {
+		snp_complete_psc(svm, VMGEXIT_PSC_ERROR_GENERIC);
+		return 1;
 	}
 
 	/*
@@ -3624,97 +3633,99 @@ static int snp_begin_psc(struct vcpu_svm *svm, struct psc_buffer *psc)
 	idx_end = hdr->end_entry;
 
 	if (idx_end >= VMGEXIT_PSC_MAX_COUNT) {
-		psc_ret = VMGEXIT_PSC_ERROR_INVALID_HDR;
-		goto out_resume;
-	}
-
-	/* Nothing more to process. */
-	if (idx_start > idx_end) {
-		psc_ret = 0;
-		goto out_resume;
+		snp_complete_psc(svm, VMGEXIT_PSC_ERROR_INVALID_HDR);
+		return 1;
 	}
 
 	/* Find the start of the next range which needs processing. */
 	for (idx = idx_start; idx <= idx_end; idx++, hdr->cur_entry++) {
-		__u16 cur_page;
-		gfn_t gfn;
-		bool huge;
-
 		entry_start = entries[idx];
-
-		/* Only private/shared conversions are currently supported. */
-		if (entry_start.operation != VMGEXIT_PSC_OP_PRIVATE &&
-		    entry_start.operation != VMGEXIT_PSC_OP_SHARED)
-			continue;
-
 		gfn = entry_start.gfn;
-		cur_page = entry_start.cur_page;
 		huge = entry_start.pagesize;
+		npages = huge ? 512 : 1;
 
-		if ((huge && (cur_page > 512 || !IS_ALIGNED(gfn, 512))) ||
-		    (!huge && cur_page > 1)) {
-			psc_ret = VMGEXIT_PSC_ERROR_INVALID_ENTRY;
-			goto out_resume;
+		if (entry_start.cur_page > npages || !IS_ALIGNED(gfn, npages)) {
+			snp_complete_psc(svm, VMGEXIT_PSC_ERROR_INVALID_ENTRY);
+			return 1;
 		}
 
+		if (entry_start.cur_page) {
+			/*
+			 * If this is a partially-completed 2M range, force 4K
+			 * handling for the remaining pages since they're effectively
+			 * split at this point. Subsequent code should ensure this
+			 * doesn't get combined with adjacent PSC entries where 2M
+			 * handling is still possible.
+			 */
+			npages -= entry_start.cur_page;
+			gfn += entry_start.cur_page;
+			huge = false;
+		}
+
+		if (npages)
+			break;
+
 		/* All sub-pages already processed. */
-		if ((huge && cur_page == 512) || (!huge && cur_page == 1))
-			continue;
-
-		/*
-		 * If this is a partially-completed 2M range, force 4K handling
-		 * for the remaining pages since they're effectively split at
-		 * this point. Subsequent code should ensure this doesn't get
-		 * combined with adjacent PSC entries where 2M handling is still
-		 * possible.
-		 */
-		svm->sev_es.psc_2m = cur_page ? false : huge;
-		svm->sev_es.psc_idx = idx;
-		svm->sev_es.psc_inflight = 1;
-
-		gpa = gfn_to_gpa(gfn + cur_page);
-		npages = huge ? 512 - cur_page : 1;
-		break;
 	}
 
+	if (idx > idx_end) {
+		/* Nothing more to process. */
+		snp_complete_psc(svm, 0);
+		return 1;
+	}
+
+	svm->sev_es.psc_2m = huge;
+	svm->sev_es.psc_idx = idx;
+	svm->sev_es.psc_inflight = 1;
+
 	/*
 	 * Find all subsequent PSC entries that contain adjacent GPA
 	 * ranges/operations and can be combined into a single
 	 * KVM_HC_MAP_GPA_RANGE exit.
 	 */
-	for (idx = svm->sev_es.psc_idx + 1; idx <= idx_end; idx++) {
+	while (++idx <= idx_end) {
 		struct psc_entry entry = entries[idx];
 
 		if (entry.operation != entry_start.operation ||
-		    entry.gfn != entry_start.gfn + npages ||
-		    !!entry.pagesize != svm->sev_es.psc_2m)
+		    entry.gfn != gfn + npages ||
+		    entry.cur_page ||
+		    !!entry.pagesize != huge)
			break;
 
 		svm->sev_es.psc_inflight++;
-		npages += entry_start.pagesize ? 512 : 1;
+		npages += huge ? 512 : 1;
 	}
 
-	vcpu->run->exit_reason = KVM_EXIT_HYPERCALL;
-	vcpu->run->hypercall.nr = KVM_HC_MAP_GPA_RANGE;
-	vcpu->run->hypercall.args[0] = gpa;
-	vcpu->run->hypercall.args[1] = npages;
-	vcpu->run->hypercall.args[2] = entry_start.operation == VMGEXIT_PSC_OP_PRIVATE
-				       ? KVM_MAP_GPA_RANGE_ENCRYPTED
-				       : KVM_MAP_GPA_RANGE_DECRYPTED;
-	vcpu->run->hypercall.args[2] |= entry_start.pagesize
-					? KVM_MAP_GPA_RANGE_PAGE_SZ_2M
-					: KVM_MAP_GPA_RANGE_PAGE_SZ_4K;
-	vcpu->arch.complete_userspace_io = snp_complete_psc;
+	switch (entry_start.operation) {
+	case VMGEXIT_PSC_OP_PRIVATE:
+	case VMGEXIT_PSC_OP_SHARED:
+		vcpu->run->exit_reason = KVM_EXIT_HYPERCALL;
+		vcpu->run->hypercall.nr = KVM_HC_MAP_GPA_RANGE;
+		vcpu->run->hypercall.args[0] = gfn_to_gpa(gfn);
+		vcpu->run->hypercall.args[1] = npages;
+		vcpu->run->hypercall.args[2] = entry_start.operation == VMGEXIT_PSC_OP_PRIVATE
					       ? KVM_MAP_GPA_RANGE_ENCRYPTED
					       : KVM_MAP_GPA_RANGE_DECRYPTED;
+		vcpu->run->hypercall.args[2] |= huge
						? KVM_MAP_GPA_RANGE_PAGE_SZ_2M
						: KVM_MAP_GPA_RANGE_PAGE_SZ_4K;
+		vcpu->arch.complete_userspace_io = snp_complete_one_psc;
 
-	return 0; /* forward request to userspace */
+		return 0; /* forward request to userspace */
 
-out_resume:
-	svm->sev_es.psc_idx = 0;
-	svm->sev_es.psc_inflight = 0;
-	svm->sev_es.psc_2m = false;
-	ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, psc_ret);
+	default:
+		/*
+		 * Only shared/private PSC operations are currently supported, so if the
+		 * entire range consists of unsupported operations (e.g. SMASH/UNSMASH),
+		 * then consider the entire range completed and avoid exiting to
+		 * userspace. In theory snp_complete_psc() can be called directly
+		 * at this point to complete the current range and start the next one,
+		 * but that could lead to unexpected levels of recursion.
+		 */
+		__snp_complete_one_psc(svm);
+		goto next_range;
+	}
 
-	return 1; /* resume guest */
+	unreachable();
 }
 
 static int __sev_snp_update_protected_guest_state(struct kvm_vcpu *vcpu)
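
To illustrate the partially-completed 2M case concretely, here is a tiny
standalone demonstration of the arithmetic the rewritten loop applies
(plain userspace C with made-up values, not part of the patch):

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical entry: 2M range at gfn 0x200, 200 sub-pages done. */
	uint64_t gfn = 0x200;
	unsigned int cur_page = 200;
	bool huge = true;
	int npages = huge ? 512 : 1;

	/* Same arithmetic as the new snp_begin_psc() above. */
	if (cur_page) {
		npages -= cur_page;	/* 312 sub-pages remain */
		gfn += cur_page;	/* resume at gfn 0x2c8 */
		huge = false;		/* effectively split: force 4K */
	}

	printf("gfn=%#" PRIx64 " npages=%d huge=%d\n", gfn, npages, huge);
	return 0;
}

Because 'huge' is cleared here, the adjacency check in the coalescing loop
(!!entry.pagesize != huge) keeps this remainder from being merged with a
neighboring untouched 2M entry, which is the second problem the commit
message describes.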