From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 24 Apr 2024 16:58:13 -0700
Subject: Re: [PATCH v14 06/22] KVM: SEV: Add KVM_SEV_SNP_LAUNCH_UPDATE command
From: Sean Christopherson
To: Michael Roth
Cc: kvm@vger.kernel.org, linux-coco@lists.linux.dev, linux-mm@kvack.org,
	linux-crypto@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
	tglx@linutronix.de, mingo@redhat.com, jroedel@suse.de,
	thomas.lendacky@amd.com, hpa@zytor.com, ardb@kernel.org,
	pbonzini@redhat.com, vkuznets@redhat.com, jmattson@google.com,
	luto@kernel.org, dave.hansen@linux.intel.com, slp@redhat.com,
	pgonda@google.com, peterz@infradead.org,
	srinivas.pandruvada@linux.intel.com, rientjes@google.com,
	dovmurik@linux.ibm.com, tobin@ibm.com, bp@alien8.de, vbabka@suse.cz,
	kirill@shutemov.name, ak@linux.intel.com, tony.luck@intel.com,
	sathyanarayanan.kuppuswamy@linux.intel.com, alpergun@google.com,
	jarkko@kernel.org, ashish.kalra@amd.com, nikunj.dadhania@amd.com,
	pankaj.gupta@amd.com, liam.merwick@oracle.com, Brijesh Singh
In-Reply-To: <20240421180122.1650812-7-michael.roth@amd.com>
References: <20240421180122.1650812-1-michael.roth@amd.com>
	<20240421180122.1650812-7-michael.roth@amd.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
On Sun, Apr 21, 2024, Michael Roth wrote:
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 9d08d1202544..d3ae4ded91df 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -258,6 +258,35 @@ static void sev_decommission(unsigned int handle)
>  	sev_guest_decommission(&decommission, NULL);
>  }
>  
> +static int snp_page_reclaim(u64 pfn)
> +{
> +	struct sev_data_snp_page_reclaim data = {0};
> +	int err, rc;
> +
> +	data.paddr = __sme_set(pfn << PAGE_SHIFT);
> +	rc = sev_do_cmd(SEV_CMD_SNP_PAGE_RECLAIM, &data, &err);
> +	if (WARN_ON_ONCE(rc)) {

This now WARNs here *and* in the caller, which is overkill.  Pick one.

> +		/*
> +		 * This shouldn't happen under normal circumstances, but if the
> +		 * reclaim failed, then the page is no longer safe to use.

Explain _why_ it's unsafe.  The code makes it quite clear that reclaim
shouldn't fail.  E.g. I assume it's bound to the ASID still?  Something else?

This is all messy, too.  snp_launch_update_vmsa() does RECLAIM but doesn't
convert the page back to shared, which seems wrong.
sev_gmem_post_populate() open codes the calls separately, and then
snp_cleanup_guest_buf() bundles them together.

Yeesh, and the return types are all mixed.  snp_cleanup_guest_buf() returns a
bool on success, but the helpers it uses return 0/-errno.  Please convert
everything to return 0/-errno, boolean "error codes" are hard to follow and
make the code unnecessarily brittle.

> +	if (WARN_ON_ONCE(sev_populate_args->type != KVM_SEV_SNP_PAGE_TYPE_ZERO && !src)) {
> +		ret = -EINVAL;

Just return -EINVAL, no?

> +		goto out;
> +	}
> +
> +	for (gfn = gfn_start, i = 0; gfn < gfn_start + npages; gfn++, i++) {
> +		struct sev_data_snp_launch_update fw_args = {0};
> +		bool assigned;
> +		void *vaddr;
> +		int level;
> +
> +		if (!kvm_mem_is_private(kvm, gfn)) {
> +			pr_debug("%s: Failed to ensure GFN 0x%llx has private memory attribute set\n",
> +				 __func__, gfn);
> +			ret = -EINVAL;
> +			break;
> +		}
> +
> +		ret = snp_lookup_rmpentry((u64)pfn + i, &assigned, &level);
> +		if (ret || assigned) {
> +			pr_debug("%s: Failed to ensure GFN 0x%llx RMP entry is initial shared state, ret: %d assigned: %d\n",
> +				 __func__, gfn, ret, assigned);
> +			ret = -EINVAL;
> +			break;
> +		}
> +
> +		if (src) {
> +			vaddr = kmap_local_pfn(pfn + i);
> +			ret = copy_from_user(vaddr, src + i * PAGE_SIZE, PAGE_SIZE);
> +			if (ret) {
> +				pr_debug("Failed to copy source page into GFN 0x%llx\n", gfn);
> +				goto out_unmap;
> +			}
> +		}
> +
> +		ret = rmp_make_private(pfn + i, gfn << PAGE_SHIFT, PG_LEVEL_4K,
> +				       sev_get_asid(kvm), true);
> +		if (ret) {
> +			pr_debug("%s: Failed to mark RMP entry for GFN 0x%llx as private, ret: %d\n",
> +				 __func__, gfn, ret);
> +			goto out_unmap;
> +		}
> +
> +		n_private++;
> +
> +		fw_args.gctx_paddr = __psp_pa(sev->snp_context);
> +		fw_args.address = __sme_set(pfn_to_hpa(pfn + i));
> +		fw_args.page_size = PG_LEVEL_TO_RMP(PG_LEVEL_4K);
> +		fw_args.page_type = sev_populate_args->type;
> +		ret = __sev_issue_cmd(sev_populate_args->sev_fd, SEV_CMD_SNP_LAUNCH_UPDATE,
> +				      &fw_args,
> +				      &sev_populate_args->fw_error);
> +		if (ret) {
> +			pr_debug("%s: SEV-SNP launch update failed, ret: 0x%x, fw_error: 0x%x\n",
> +				 __func__, ret, sev_populate_args->fw_error);
> +
> +			if (WARN_ON_ONCE(snp_page_reclaim(pfn + i)))
> +				goto out_unmap;
> +
> +			/*
> +			 * When invalid CPUID function entries are detected,
> +			 * firmware writes the expected values into the page and
> +			 * leaves it unencrypted so it can be used for debugging
> +			 * and error-reporting.
> +			 *
> +			 * Copy this page back into the source buffer so
> +			 * userspace can use this information to provide
> +			 * information on which CPUID leaves/fields failed CPUID
> +			 * validation.
> +			 */
> +			if (sev_populate_args->type == KVM_SEV_SNP_PAGE_TYPE_CPUID &&
> +			    sev_populate_args->fw_error == SEV_RET_INVALID_PARAM) {
> +				if (WARN_ON_ONCE(host_rmp_make_shared(pfn + i, PG_LEVEL_4K)))
> +					goto out_unmap;
> +
> +				if (copy_to_user(src + i * PAGE_SIZE, vaddr,
> +						 PAGE_SIZE))
> +					pr_debug("Failed to write CPUID page back to userspace\n");
> +
> +				/* PFN is hypervisor-owned at this point, skip cleanup for it. */
> +				n_private--;

If reclaim or make_shared fails, KVM will attempt make_shared again.

And I am not a fan of keeping vaddr mapped after making the pfn private.
Functionally it probably changes nothing, but conceptually it's odd.  And
"mapping" is free (AFAIK).  Oh, and vaddr is subtly guaranteed to be valid
based on the type, which is fun.
So why not unmap immediately after the first copy_from_user(), and then this
can be:

		if (ret)
			goto err;
	}

	return 0;

err:
	if (!() && sev_populate_args->type == KVM_SEV_SNP_PAGE_TYPE_CPUID &&
	    sev_populate_args->fw_error == SEV_RET_INVALID_PARAM) {
		vaddr = kmap_local(...);
		copy_to_user(...);
		kunmap_local(vaddr);
	}

	for (i = 0; i < n_private - 1; i++)

	return ret;

> +			}
> +		}
> +
> +out_unmap:
> +		kunmap_local(vaddr);
> +		if (ret)
> +			break;
> +	}
> +
> +out:
> +	if (ret) {
> +		pr_debug("%s: exiting with error ret %d, restoring %d gmem PFNs to shared.\n",
> +			 __func__, ret, n_private);
> +		for (i = 0; i < n_private; i++)
> +			WARN_ON_ONCE(host_rmp_make_shared(pfn + i, PG_LEVEL_4K));
> +	}
> +
> +	return ret;
> +}

...

> +	src = params.type == KVM_SEV_SNP_PAGE_TYPE_ZERO ? NULL : u64_to_user_ptr(params.uaddr);
> +
> +	count = kvm_gmem_populate(kvm, params.gfn_start, src, npages,
> +				  sev_gmem_post_populate, &sev_populate_args);
> +	if (count < 0) {
> +		argp->error = sev_populate_args.fw_error;
> +		pr_debug("%s: kvm_gmem_populate failed, ret %ld (fw_error %d)\n",
> +			 __func__, count, argp->error);
> +		ret = -EIO;
> +	} else if (count <= npages) {

This seems like excessive paranoia.

> +		params.gfn_start += count;
> +		params.len -= count * PAGE_SIZE;
> +		if (params.type != KVM_SEV_SNP_PAGE_TYPE_ZERO)
> +			params.uaddr += count * PAGE_SIZE;
> +
> +		ret = copy_to_user(u64_to_user_ptr(argp->data), &params, sizeof(params))
> +		      ? -EIO : 0;

copy_to_user() failures typically return -EFAULT.

The style is not the most readable.  Maybe this?
	ret = 0;
	if (copy_to_user(...))
		ret = -EFAULT;

> +	} else {
> +		WARN_ONCE(1, "Completed page count %ld exceeds requested amount %ld",
> +			  count, npages);
> +		ret = -EINVAL;
> +	}
> +
> +out:
> +	mutex_unlock(&kvm->slots_lock);
> +
> +	return ret;
> +}
> +
>  int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
>  {
>  	struct kvm_sev_cmd sev_cmd;
> @@ -2217,6 +2451,9 @@ int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
>  	case KVM_SEV_SNP_LAUNCH_START:
>  		r = snp_launch_start(kvm, &sev_cmd);
>  		break;
> +	case KVM_SEV_SNP_LAUNCH_UPDATE:
> +		r = snp_launch_update(kvm, &sev_cmd);
> +		break;
>  	default:
>  		r = -EINVAL;
>  		goto out;
> -- 
> 2.25.1
> 