Date: Fri, 26 Apr 2024 13:14:32 -0700
In-Reply-To: <20240426171644.r6dvvfvduzvlrv5c@amd.com>
References: <20240421180122.1650812-1-michael.roth@amd.com>
 <20240421180122.1650812-10-michael.roth@amd.com>
 <20240425220008.boxnurujlxbx62pg@amd.com>
 <20240426171644.r6dvvfvduzvlrv5c@amd.com>
Subject: Re: [PATCH v14 09/22] KVM: SEV: Add support to handle MSR based Page State Change VMGEXIT
From: Sean Christopherson
To: Michael Roth
Cc: kvm@vger.kernel.org, linux-coco@lists.linux.dev, linux-mm@kvack.org,
 linux-crypto@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
 tglx@linutronix.de, mingo@redhat.com, jroedel@suse.de,
 thomas.lendacky@amd.com, hpa@zytor.com, ardb@kernel.org,
 pbonzini@redhat.com, vkuznets@redhat.com, jmattson@google.com,
 luto@kernel.org, dave.hansen@linux.intel.com, slp@redhat.com,
 pgonda@google.com, peterz@infradead.org,
 srinivas.pandruvada@linux.intel.com, rientjes@google.com,
 dovmurik@linux.ibm.com, tobin@ibm.com, bp@alien8.de, vbabka@suse.cz,
 kirill@shutemov.name, ak@linux.intel.com, tony.luck@intel.com,
 sathyanarayanan.kuppuswamy@linux.intel.com, alpergun@google.com,
 jarkko@kernel.org, ashish.kalra@amd.com, nikunj.dadhania@amd.com,
 pankaj.gupta@amd.com, liam.merwick@oracle.com, Brijesh Singh

On Fri, Apr 26, 2024, Michael Roth wrote:
> On Thu, Apr 25, 2024 at 03:13:40PM -0700, Sean Christopherson wrote:
> > On Thu, Apr 25, 2024, Michael Roth wrote:
> > > On Wed, Apr 24, 2024 at 01:59:48PM -0700, Sean Christopherson wrote:
> > > > On Sun, Apr 21, 2024, Michael Roth wrote:
> > > > > +static int snp_begin_psc_msr(struct kvm_vcpu *vcpu, u64 ghcb_msr)
> > > > > +{
> > > > > +	u64 gpa = gfn_to_gpa(GHCB_MSR_PSC_REQ_TO_GFN(ghcb_msr));
> > > > > +	u8 op = GHCB_MSR_PSC_REQ_TO_OP(ghcb_msr);
> > > > > +	struct vcpu_svm *svm = to_svm(vcpu);
> > > > > +
> > > > > +	if (op != SNP_PAGE_STATE_PRIVATE && op != SNP_PAGE_STATE_SHARED) {
> > > > > +		set_ghcb_msr(svm, GHCB_MSR_PSC_RESP_ERROR);
> > > > > +		return 1; /* resume guest */
> > > > > +	}
> > > > > +
> > > > > +	vcpu->run->exit_reason = KVM_EXIT_VMGEXIT;
> > > > > +	vcpu->run->vmgexit.type = KVM_USER_VMGEXIT_PSC_MSR;
> > > > > +	vcpu->run->vmgexit.psc_msr.gpa = gpa;
> > > > > +	vcpu->run->vmgexit.psc_msr.op = op;
> > > >
> > > > Argh, no.
> > > >
> > > > This is the same crud that TDX tried to push[*]. Use KVM's existing user
> > > > exits, and extend as *needed*. There is no good reason page state change
> > > > requests need *two* exit reasons. The *only* thing KVM supports right now
> > > > is private<=>shared conversions, and that can be handled with either
> > > > KVM_HC_MAP_GPA_RANGE or KVM_EXIT_MEMORY_FAULT.
> > > >
> > > > The non-MSR flavor can batch requests, but I'm willing to bet that the
> > > > overwhelming majority of requests are contiguous, i.e. can be combined
> > > > into a range by KVM, and that handling any outliers by performing
> > > > multiple exits to userspace will provide sufficient performance.
> > >
> > > That does tend to be the case. We won't have as much granularity with
> > > the per-entry error codes, but KVM_SET_MEMORY_ATTRIBUTES would be
> > > expected to be for the entire range anyway, and if that fails for
> > > whatever reason then we KVM_BUG_ON() anyway. We do have to have handling
> > > for cases where the entries aren't contiguous however, which would
> > > involve multiple KVM_EXIT_HYPERCALLs until everything is satisfied. But
> > > not a huge deal since it doesn't seem to be a common case.
> >
> > If it was less complex overall, I wouldn't be opposed to KVM marshalling
> > everything into a buffer, but I suspect it will be simpler to just have
> > KVM loop until the PSC request is complete.
>
> Agreed. But *if* we decided to introduce a buffer, where would you
> suggest adding it? The kvm_run union fields are set to 256 bytes, and
> we'd need close to 4K to handle a full GHCB PSC buffer in 1 go. Would
> additional storage at the end of struct kvm_run be acceptable?

Don't even need more memory, just use vcpu->arch.pio_data, which is always
allocated and is mmap()able by userspace via KVM_PIO_PAGE_OFFSET.

> > > KVM_HC_MAP_GPA_RANGE seems like a nice option because we'd also have the
> > > flexibility to just issue that directly within a guest rather than
> > > relying on SNP/TDX specific hcalls. I don't know if that approach is
> > > practical for a real guest, but it could be useful for having re-usable
> > > guest code in KVM selftests that "just works" for all variants of
> > > SNP/TDX/sw-protected. (though we'd still want stuff that exercises
> > > SNP/TDX->KVM_HC_MAP_GPA_RANGE translation).
> > >
> > > I think there is some potential baggage there with the previous SEV
> > > live migration use cases. There's some potential that existing guest
> > > kernels will use it once it gets advertised and issue them alongside
> > > GHCB-based page-state changes. It might make sense to use one of the
> > > reserved bits to denote this flavor of KVM_HC_MAP_GPA_RANGE as being for
> > > hardware/software-protected VMs and not interchangeable with calls that
> > > were used for SEV live migration stuff.
> >
> > I don't think I follow, what exactly wouldn't be interchangeable, and why?
>
> For instance, if KVM_FEATURE_MIGRATION_CONTROL is advertised, then when
> amd_enc_status_change_finish() is triggered as a result of
> set_memory_encrypted(), we'd see
>
> 1) a GHCB PSC for SNP, which will get forwarded to userspace via
>    KVM_HC_MAP_GPA_RANGE
> 2) KVM_HC_MAP_GPA_RANGE issued directly by the guest.
>
> In that case, we'd be duplicating PSCs but it wouldn't necessarily hurt
> anything. But ideally we'd be able to distinguish the 2 cases so we
> could rightly treat 1) as only being expected for SNP, and 2) as only
> being expected for SEV/SEV-ES.

Why would the guest issue both? That's a guest bug. Or if suppressing the
second hypercall is an issue, simply don't enumerate MIGRATION_CONTROL for
SNP guests.
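
To make the direction above concrete, here is a rough sketch of what reusing
the existing KVM_HC_MAP_GPA_RANGE userspace exit for the MSR-based PSC could
look like on the KVM side. This is purely illustrative and not code from the
posted series: the snp_complete_psc_msr() completion callback, the
GHCB_MSR_PSC_RESP success encoding, and the flag plumbing are assumptions
layered on top of the snp_begin_psc_msr() skeleton quoted above.

static int snp_complete_psc_msr(struct kvm_vcpu *vcpu)
{
	struct vcpu_svm *svm = to_svm(vcpu);

	/* Relay userspace's result back through the GHCB MSR protocol. */
	if (vcpu->run->hypercall.ret)
		set_ghcb_msr(svm, GHCB_MSR_PSC_RESP_ERROR);
	else
		set_ghcb_msr(svm, GHCB_MSR_PSC_RESP); /* assumed "success" value */

	return 1; /* resume guest */
}

static int snp_begin_psc_msr(struct kvm_vcpu *vcpu, u64 ghcb_msr)
{
	u64 gpa = gfn_to_gpa(GHCB_MSR_PSC_REQ_TO_GFN(ghcb_msr));
	u8 op = GHCB_MSR_PSC_REQ_TO_OP(ghcb_msr);
	struct vcpu_svm *svm = to_svm(vcpu);

	if (op != SNP_PAGE_STATE_PRIVATE && op != SNP_PAGE_STATE_SHARED) {
		set_ghcb_msr(svm, GHCB_MSR_PSC_RESP_ERROR);
		return 1; /* resume guest */
	}

	/*
	 * Reuse the existing KVM_HC_MAP_GPA_RANGE exit rather than adding a
	 * new KVM_EXIT_VMGEXIT sub-type.  Assumes userspace opted in via
	 * KVM_CAP_EXIT_HYPERCALL; a single MSR-based request covers one 4K
	 * page, so args[1] (npages) is always 1.
	 */
	vcpu->run->exit_reason = KVM_EXIT_HYPERCALL;
	vcpu->run->hypercall.nr = KVM_HC_MAP_GPA_RANGE;
	vcpu->run->hypercall.args[0] = gpa;
	vcpu->run->hypercall.args[1] = 1;
	vcpu->run->hypercall.args[2] = op == SNP_PAGE_STATE_PRIVATE ?
				       KVM_MAP_GPA_RANGE_ENCRYPTED :
				       KVM_MAP_GPA_RANGE_DECRYPTED;
	vcpu->run->hypercall.args[2] |= KVM_MAP_GPA_RANGE_PAGE_SZ_4K;

	vcpu->arch.complete_userspace_io = snp_complete_psc_msr;

	return 0; /* exit to userspace */
}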
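
And on the userspace side, a minimal sketch of handling that exit, assuming
4K pages, a kernel new enough to expose KVM_SET_MEMORY_ATTRIBUTES, and a
made-up handle_map_gpa_range() helper invoked from the VMM's
KVM_EXIT_HYPERCALL dispatch. A non-contiguous GHCB PSC buffer would then
simply surface as several such exits, matching the loop-until-complete
approach discussed above.

#include <errno.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>
#include <linux/kvm_para.h>

/*
 * Convert the requested GPA range between private and shared by flipping
 * its memory attributes, then report the result back to KVM via
 * run->hypercall.ret before re-entering the vCPU.
 */
static void handle_map_gpa_range(int vm_fd, struct kvm_run *run)
{
	struct kvm_memory_attributes attrs = {
		.address    = run->hypercall.args[0],
		.size       = run->hypercall.args[1] * 4096, /* assumes 4K pages */
		.attributes = (run->hypercall.args[2] & KVM_MAP_GPA_RANGE_ENCRYPTED) ?
			      KVM_MEMORY_ATTRIBUTE_PRIVATE : 0,
	};

	run->hypercall.ret = ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs) ? -errno : 0;
}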