From: Atish Patra <atishp@rivosinc.com>
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Alexandre Ghiti, Andrew Jones, Andrew Morton, Anup Patel,
	Atish Patra, Björn Töpel, Suzuki K Poulose, Will Deacon, Marc Zyngier,
	Sean Christopherson, linux-coco@lists.linux.dev, Dylan Reid,
	abrestic@rivosinc.com, Samuel Ortiz, Christoph Hellwig, Conor Dooley,
	Greg Kroah-Hartman, Guo Ren, Heiko Stuebner, Jiri Slaby,
	kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org,
	linux-riscv@lists.infradead.org, Mayuresh Chitale, Palmer Dabbelt,
	Paolo Bonzini, Paul Walmsley, Rajnesh Kanwal, Uladzislau Rezki
Subject: [RFC 28/48] RISC-V: KVM: Add interrupt management functions for TVM
Date: Wed, 19 Apr 2023 15:16:56 -0700
Message-Id: <20230419221716.3603068-29-atishp@rivosinc.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The COVI SBI extension defines the functions related to interrupt
management for TVMs. These functions are the glue logic between the AIA
code and the actual CoVE Interrupt SBI extension (COVI).

Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
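
Notes (below the fold, not for the commit log): a rough C sketch of how
the AIA glue is expected to drive the new hooks over a TVM's lifetime.
The wrapper function below is hypothetical and purely illustrative; only
the kvm_riscv_cove_* calls inside it are added by this patch, and the
exact call sites/ordering live in the AIA patches of this series.

	/* Hypothetical lifecycle sketch -- illustrative only. */
	static int tvm_aia_lifecycle_sketch(struct kvm *kvm, struct kvm_vcpu *vcpu,
					    int old_pcpu, unsigned long iid)
	{
		int rc;

		/* Once per TVM, after the IMSIC geometry is known. */
		rc = kvm_riscv_cove_aia_init(kvm);
		if (rc)
			return rc;

		/* Tell the TSM where this vCPU's IMSIC lives in guest physical space. */
		rc = kvm_riscv_cove_vcpu_imsic_addr(vcpu);
		if (rc)
			return rc;

		/* Bind the vCPU to the guest interrupt file allocated on this pCPU. */
		rc = kvm_riscv_cove_vcpu_imsic_bind(vcpu,
				BIT(vcpu->arch.tc->imsic.vsfile_hgei));
		if (rc)
			return rc;

		/* Inject guest external interrupt 'iid' through the TSM. */
		rc = kvm_riscv_cove_vcpu_inject_interrupt(vcpu, iid);
		if (rc)
			return rc;

		/* Tear the binding down once the vCPU has left old_pcpu. */
		return kvm_riscv_cove_vcpu_imsic_unbind(vcpu, old_pcpu);
	}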
 arch/riscv/include/asm/kvm_cove.h |  34 ++++
 arch/riscv/kvm/cove.c             | 256 ++++++++++++++++++++++++++++++
 2 files changed, 290 insertions(+)

diff --git a/arch/riscv/include/asm/kvm_cove.h b/arch/riscv/include/asm/kvm_cove.h
index b63682f..74bad2f 100644
--- a/arch/riscv/include/asm/kvm_cove.h
+++ b/arch/riscv/include/asm/kvm_cove.h
@@ -61,10 +61,19 @@ struct kvm_riscv_cove_page {
 	unsigned long gpa;
 };
 
+struct imsic_tee_state {
+	bool bind_required;
+	bool bound;
+	int vsfile_hgei;
+};
+
 struct kvm_cove_tvm_vcpu_context {
 	struct kvm_vcpu *vcpu;
 	/* Pages storing each vcpu state of the TVM in TSM */
 	struct kvm_riscv_cove_page vcpu_state;
+
+	/* Per VCPU imsic state */
+	struct imsic_tee_state imsic;
 };
 
 struct kvm_cove_tvm_context {
@@ -133,6 +142,16 @@ int kvm_riscv_cove_vm_add_memreg(struct kvm *kvm, unsigned long gpa, unsigned lo
 int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva);
 /* Fence related function */
 int kvm_riscv_cove_tvm_fence(struct kvm_vcpu *vcpu);
+
+/* AIA related CoVE functions */
+int kvm_riscv_cove_aia_init(struct kvm *kvm);
+int kvm_riscv_cove_vcpu_inject_interrupt(struct kvm_vcpu *vcpu, unsigned long iid);
+int kvm_riscv_cove_vcpu_imsic_unbind(struct kvm_vcpu *vcpu, int old_cpu);
+int kvm_riscv_cove_vcpu_imsic_bind(struct kvm_vcpu *vcpu, unsigned long imsic_mask);
+int kvm_riscv_cove_vcpu_imsic_rebind(struct kvm_vcpu *vcpu, int old_pcpu);
+int kvm_riscv_cove_aia_claim_imsic(struct kvm_vcpu *vcpu, phys_addr_t imsic_pa);
+int kvm_riscv_cove_aia_convert_imsic(struct kvm_vcpu *vcpu, phys_addr_t imsic_pa);
+int kvm_riscv_cove_vcpu_imsic_addr(struct kvm_vcpu *vcpu);
 #else
 static inline bool kvm_riscv_cove_enabled(void) {return false; };
 static inline int kvm_riscv_cove_init(void) { return -1; }
@@ -162,6 +181,21 @@ static inline int kvm_riscv_cove_vm_measure_pages(struct kvm *kvm,
 }
 static inline int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa,
 					    unsigned long hva) {return -1; }
+/* AIA related TEE functions */
+static inline int kvm_riscv_cove_aia_init(struct kvm *kvm) { return -1; }
+static inline int kvm_riscv_cove_vcpu_inject_interrupt(struct kvm_vcpu *vcpu,
+						       unsigned long iid) { return -1; }
+static inline int kvm_riscv_cove_vcpu_imsic_unbind(struct kvm_vcpu *vcpu,
+						   int old_cpu) { return -1; }
+static inline int kvm_riscv_cove_vcpu_imsic_bind(struct kvm_vcpu *vcpu,
+						 unsigned long imsic_mask) { return -1; }
+static inline int kvm_riscv_cove_aia_claim_imsic(struct kvm_vcpu *vcpu,
+						 phys_addr_t imsic_pa) { return -1; }
+static inline int kvm_riscv_cove_aia_convert_imsic(struct kvm_vcpu *vcpu,
+						   phys_addr_t imsic_pa) { return -1; }
+static inline int
+kvm_riscv_cove_vcpu_imsic_addr(struct kvm_vcpu *vcpu) { return -1; }
+static inline int kvm_riscv_cove_vcpu_imsic_rebind(struct kvm_vcpu *vcpu,
+						   int old_pcpu) { return -1; }
 #endif /* CONFIG_RISCV_COVE_HOST */
 
 #endif /* __KVM_RISCV_COVE_H */
diff --git a/arch/riscv/kvm/cove.c b/arch/riscv/kvm/cove.c
index 4a8a8db..154b01a 100644
--- a/arch/riscv/kvm/cove.c
+++ b/arch/riscv/kvm/cove.c
@@ -8,6 +8,7 @@
  * Atish Patra
  */
 
+#include 
 #include 
 #include 
 #include 
@@ -137,6 +138,247 @@ __always_inline bool kvm_riscv_cove_enabled(void)
 	return riscv_cove_enabled;
 }
 
+static void kvm_cove_imsic_clone(void *info)
+{
+	int rc;
+	struct kvm_vcpu *vcpu = info;
+	struct kvm *kvm = vcpu->kvm;
+
+	rc = sbi_covi_rebind_vcpu_imsic_clone(kvm->arch.tvmc->tvm_guest_id, vcpu->vcpu_idx);
+	if (rc)
+		kvm_err("Imsic clone failed guest %ld vcpu %d pcpu %d\n",
+			kvm->arch.tvmc->tvm_guest_id, vcpu->vcpu_idx, smp_processor_id());
+}
+
+static void kvm_cove_imsic_unbind(void *info)
+{
+	struct kvm_vcpu *vcpu = info;
+	struct kvm_cove_tvm_context *tvmc = vcpu->kvm->arch.tvmc;
+
+	/* TODO: We should propagate failures, but an IPI callback cannot return a value */
+	if (sbi_covi_unbind_vcpu_imsic_begin(tvmc->tvm_guest_id, vcpu->vcpu_idx))
+		return;
+
+	/* This may issue IPIs to running vcpus. */
+	if (kvm_riscv_cove_tvm_fence(vcpu))
+		return;
+
+	if (sbi_covi_unbind_vcpu_imsic_end(tvmc->tvm_guest_id, vcpu->vcpu_idx))
+		return;
+
+	kvm_info("Unbind success for guest %ld vcpu %d pcpu %d\n",
+		 tvmc->tvm_guest_id, vcpu->vcpu_idx, smp_processor_id());
+}
+
+int kvm_riscv_cove_vcpu_imsic_addr(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cove_tvm_context *tvmc;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_vcpu_aia *vaia = &vcpu->arch.aia_context;
+	int ret;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	tvmc = kvm->arch.tvmc;
+
+	ret = sbi_covi_set_vcpu_imsic_addr(tvmc->tvm_guest_id, vcpu->vcpu_idx, vaia->imsic_addr);
+	if (ret)
+		return -EPERM;
+
+	return 0;
+}
+
+int kvm_riscv_cove_aia_convert_imsic(struct kvm_vcpu *vcpu, phys_addr_t imsic_pa)
+{
+	struct kvm *kvm = vcpu->kvm;
+	int ret;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	ret = sbi_covi_convert_imsic(imsic_pa);
+	if (ret)
+		return -EPERM;
+
+	ret = kvm_riscv_cove_fence();
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+int kvm_riscv_cove_aia_claim_imsic(struct kvm_vcpu *vcpu, phys_addr_t imsic_pa)
+{
+	int ret;
+	struct kvm *kvm = vcpu->kvm;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	ret = sbi_covi_reclaim_imsic(imsic_pa);
+	if (ret)
+		return -EPERM;
+
+	return 0;
+}
+
+int kvm_riscv_cove_vcpu_imsic_rebind(struct kvm_vcpu *vcpu, int old_pcpu)
+{
+	struct kvm_cove_tvm_context *tvmc;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_cove_tvm_vcpu_context *tvcpu = vcpu->arch.tc;
+	int ret;
+	cpumask_t tmpmask;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	tvmc = kvm->arch.tvmc;
+
+	ret = sbi_covi_rebind_vcpu_imsic_begin(tvmc->tvm_guest_id, vcpu->vcpu_idx,
+					       BIT(tvcpu->imsic.vsfile_hgei));
+	if (ret) {
+		kvm_err("Imsic rebind begin failed guest %ld vcpu %d pcpu %d\n",
+			tvmc->tvm_guest_id, vcpu->vcpu_idx, smp_processor_id());
+		return ret;
+	}
+
+	ret = kvm_riscv_cove_tvm_fence(vcpu);
+	if (ret)
+		return ret;
+
+	cpumask_clear(&tmpmask);
+	cpumask_set_cpu(old_pcpu, &tmpmask);
+	on_each_cpu_mask(&tmpmask, kvm_cove_imsic_clone, vcpu, 1);
+
+	ret = sbi_covi_rebind_vcpu_imsic_end(tvmc->tvm_guest_id, vcpu->vcpu_idx);
+	if (ret) {
+		kvm_err("Imsic rebind end failed guest %ld vcpu %d pcpu %d\n",
+			tvmc->tvm_guest_id, vcpu->vcpu_idx, smp_processor_id());
+		return ret;
+	}
+
+	tvcpu->imsic.bound = true;
+
+	return 0;
+}
+
+int kvm_riscv_cove_vcpu_imsic_bind(struct kvm_vcpu *vcpu, unsigned long imsic_mask)
+{
+	struct kvm_cove_tvm_context *tvmc;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_cove_tvm_vcpu_context *tvcpu = vcpu->arch.tc;
+	int ret;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	tvmc = kvm->arch.tvmc;
+
+	ret = sbi_covi_bind_vcpu_imsic(tvmc->tvm_guest_id, vcpu->vcpu_idx, imsic_mask);
+	if (ret) {
+		kvm_err("Imsic bind failed for imsic %lx guest %ld vcpu %d pcpu %d\n",
+			imsic_mask, tvmc->tvm_guest_id, vcpu->vcpu_idx, smp_processor_id());
+		return ret;
+	}
+	tvcpu->imsic.bound = true;
+	kvm_info("%s: bind success vcpu %d hgei %d pcpu %d\n", __func__,
+		 vcpu->vcpu_idx, tvcpu->imsic.vsfile_hgei, smp_processor_id());
+
+	return 0;
+}
+
+int kvm_riscv_cove_vcpu_imsic_unbind(struct kvm_vcpu *vcpu, int old_pcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_cove_tvm_vcpu_context *tvcpu = vcpu->arch.tc;
+	cpumask_t tmpmask;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	/* No need to unbind if it is not bound already */
+	if (!tvcpu->imsic.bound)
+		return 0;
+
+	/* Clear the flag first so that a failure below does not trigger a retry */
+	tvcpu->imsic.bound = false;
+
+	if (smp_processor_id() == old_pcpu) {
+		kvm_cove_imsic_unbind(vcpu);
+	} else {
+		/* Unbind can be invoked from a different physical cpu */
+		cpumask_clear(&tmpmask);
+		cpumask_set_cpu(old_pcpu, &tmpmask);
+		on_each_cpu_mask(&tmpmask, kvm_cove_imsic_unbind, vcpu, 1);
+	}
+
+	return 0;
+}
+
+int kvm_riscv_cove_vcpu_inject_interrupt(struct kvm_vcpu *vcpu, unsigned long iid)
+{
+	struct kvm_cove_tvm_context *tvmc;
+	struct kvm *kvm = vcpu->kvm;
+	int ret;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	tvmc = kvm->arch.tvmc;
+
+	ret = sbi_covi_inject_external_interrupt(tvmc->tvm_guest_id, vcpu->vcpu_idx, iid);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+int kvm_riscv_cove_aia_init(struct kvm *kvm)
+{
+	struct kvm_aia *aia = &kvm->arch.aia;
+	struct sbi_cove_tvm_aia_params *tvm_aia;
+	struct kvm_vcpu *vcpu;
+	struct kvm_cove_tvm_context *tvmc;
+	int ret;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	tvmc = kvm->arch.tvmc;
+
+	/* Sanity check */
+	if (aia->aplic_addr != KVM_RISCV_AIA_UNDEF_ADDR)
+		return -EINVAL;
+
+	/* TVMs must have a physical guest interrupt file */
+	if (aia->mode != KVM_DEV_RISCV_AIA_MODE_HWACCEL)
+		return -ENODEV;
+
+	tvm_aia = kzalloc(sizeof(*tvm_aia), GFP_KERNEL);
+	if (!tvm_aia)
+		return -ENOMEM;
+
+	/* Address of the IMSIC with group ID, hart ID & guest ID of 0 */
+	vcpu = kvm_get_vcpu_by_id(kvm, 0);
+	tvm_aia->imsic_base_addr = vcpu->arch.aia_context.imsic_addr;
+
+	tvm_aia->group_index_bits = aia->nr_group_bits;
+	tvm_aia->group_index_shift = aia->nr_group_shift;
+	tvm_aia->hart_index_bits = aia->nr_hart_bits;
+	tvm_aia->guest_index_bits = aia->nr_guest_bits;
+	/* Nested TVMs are not supported yet */
+	tvm_aia->guests_per_hart = 0;
+
+	ret = sbi_covi_tvm_aia_init(tvmc->tvm_guest_id, tvm_aia);
+	if (ret)
+		kvm_err("TVM AIA init failed with rc %d\n", ret);
+
+	return ret;
+}
+
 void kvm_riscv_cove_vcpu_load(struct kvm_vcpu *vcpu)
 {
 	kvm_riscv_vcpu_timer_restore(vcpu);
@@ -283,6 +525,7 @@ void noinstr kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_
 	struct kvm_cpu_context *cntx = &vcpu->arch.guest_context;
 	void *nshmem;
 	struct kvm_guest_timer *gt = &kvm->arch.timer;
+	struct kvm_cove_tvm_vcpu_context *tvcpuc = vcpu->arch.tc;
 
 	if (!kvm->arch.tvmc)
 		return;
@@ -301,6 +544,19 @@ void noinstr kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_
 		tvmc->finalized_done = true;
 	}
 
+	/*
+	 * Bind the vsfile here instead of during the new vsfile allocation
+	 * because the COVI bind call requires the TVM to be in finalized state.
+	 */
+	if (tvcpuc->imsic.bind_required) {
+		tvcpuc->imsic.bind_required = false;
+		rc = kvm_riscv_cove_vcpu_imsic_bind(vcpu, BIT(tvcpuc->imsic.vsfile_hgei));
+		if (rc) {
+			kvm_err("bind failed with rc %d\n", rc);
+			return;
+		}
+	}
+
 	rc = sbi_covh_run_tvm_vcpu(tvmc->tvm_guest_id, vcpu->vcpu_idx);
 	if (rc) {
 		trap->scause = EXC_CUSTOM_KVM_COVE_RUN_FAIL;
-- 
2.25.1
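
A note on the cross-CPU calls above: the TSM's unbind/clone TEECALLs must
run on the physical CPU that currently owns the guest interrupt file, so
the helpers bounce themselves there via on_each_cpu_mask(). The snippet
below is a stripped-down, self-contained illustration of that kernel
pattern (the callback and wrapper names are hypothetical, not from this
patch):

	#include <linux/cpumask.h>
	#include <linux/smp.h>

	static void owner_cpu_work(void *info)
	{
		/* Runs on the target CPU in IPI context; cannot return a value. */
	}

	static void run_on_owner_cpu(int owner_pcpu, void *data)
	{
		cpumask_t mask;

		cpumask_clear(&mask);
		cpumask_set_cpu(owner_pcpu, &mask);
		/* Final argument '1' waits for the remote callback to finish. */
		on_each_cpu_mask(&mask, owner_cpu_work, data, 1);
	}

This is also why kvm_cove_imsic_unbind() can only log its failures: an
IPI callback has no way to propagate an error code back to the caller.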
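
On the deferred bind consumed in kvm_riscv_cove_vcpu_switchto(): the bind
call is only legal once the TVM is finalized, so whichever path allocates
a vsfile for a TVM vCPU is expected to record the file and set
bind_required rather than bind immediately. A sketch of that producer
side (hypothetical helper; assumes the imsic_tee_state layout added to
kvm_cove.h above):

	static void cove_record_new_vsfile(struct kvm_vcpu *vcpu, int hgei)
	{
		struct kvm_cove_tvm_vcpu_context *tvcpuc = vcpu->arch.tc;

		tvcpuc->imsic.vsfile_hgei = hgei;
		tvcpuc->imsic.bound = false;
		/* Defer the actual COVI bind to the next TVM entry. */
		tvcpuc->imsic.bind_required = true;
	}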