From: Elliot Berman <quic_eberman@quicinc.com>
To: Alex Elder <elder@linaro.org>,
Srinivas Kandagatla <srinivas.kandagatla@linaro.org>,
Murali Nalajal <quic_mnalajal@quicinc.com>,
Trilok Soni <quic_tsoni@quicinc.com>,
Srivatsa Vaddagiri <quic_svaddagi@quicinc.com>,
Carl van Schaik <quic_cvanscha@quicinc.com>,
Philip Derrin <quic_pderrin@quicinc.com>,
Prakruthi Deepak Heragu <quic_pheragu@quicinc.com>,
Jonathan Corbet <corbet@lwn.net>,
Rob Herring <robh+dt@kernel.org>,
Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>,
Conor Dooley <conor+dt@kernel.org>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
Konrad Dybcio <konrad.dybcio@linaro.org>,
Bjorn Andersson <andersson@kernel.org>,
Dmitry Baryshkov <dmitry.baryshkov@linaro.org>,
"Fuad Tabba" <tabba@google.com>,
Sean Christopherson <seanjc@google.com>,
"Andrew Morton" <akpm@linux-foundation.org>
Cc: <linux-arm-msm@vger.kernel.org>, <linux-doc@vger.kernel.org>,
<linux-kernel@vger.kernel.org>, <devicetree@vger.kernel.org>,
<linux-arm-kernel@lists.infradead.org>, <linux-mm@kvack.org>,
Elliot Berman <quic_eberman@quicinc.com>
Subject: [PATCH v16 20/34] virt: gunyah: Add Qualcomm Gunyah platform ops
Date: Tue, 9 Jan 2024 11:37:58 -0800
Message-ID: <20240109-gunyah-v16-20-634904bf4ce9@quicinc.com>
In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>

Qualcomm platforms have a firmware entity that performs access control
on physical pages. Dynamically started Gunyah virtual machines use
QCOM_SCM_RM_MANAGED_VMID for access. Linux thus needs to assign access
to the memory used by guest VMs. Gunyah doesn't perform this operation
for us since it is the current VM (typically VMID_HLOS) delegating the
access, not Gunyah itself. Use the Gunyah platform ops to achieve this
so that only Qualcomm platforms attempt to make the needed SCM calls.
Reviewed-by: Alex Elder <elder@linaro.org>
Co-developed-by: Prakruthi Deepak Heragu <quic_pheragu@quicinc.com>
Signed-off-by: Prakruthi Deepak Heragu <quic_pheragu@quicinc.com>
Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
---
drivers/virt/gunyah/Kconfig | 13 +++
drivers/virt/gunyah/Makefile | 1 +
drivers/virt/gunyah/gunyah_qcom.c | 218 ++++++++++++++++++++++++++++++++++++++
3 files changed, 232 insertions(+)
diff --git a/drivers/virt/gunyah/Kconfig b/drivers/virt/gunyah/Kconfig
index 23ba523d25dc..fe2823dc48ba 100644
--- a/drivers/virt/gunyah/Kconfig
+++ b/drivers/virt/gunyah/Kconfig
@@ -4,6 +4,7 @@ config GUNYAH
tristate "Gunyah Virtualization drivers"
depends on ARM64
select GUNYAH_PLATFORM_HOOKS
+ imply GUNYAH_QCOM_PLATFORM if ARCH_QCOM
help
The Gunyah drivers are the helper interfaces that run in a guest VM
such as basic inter-VM IPC and signaling mechanisms, and higher level
@@ -14,3 +15,15 @@ config GUNYAH
config GUNYAH_PLATFORM_HOOKS
tristate
+
+config GUNYAH_QCOM_PLATFORM
+ tristate "Support for Gunyah on Qualcomm platforms"
+ depends on GUNYAH
+ select GUNYAH_PLATFORM_HOOKS
+ select QCOM_SCM
+ help
+ Enable support for interacting with Gunyah on Qualcomm
+ platforms. Interaction with Qualcomm firmware requires
+ extra platform-specific support.
+
+ Say Y/M here to use Gunyah on Qualcomm platforms.
diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile
index ffcde0e0ccfa..a6c6f29b887a 100644
--- a/drivers/virt/gunyah/Makefile
+++ b/drivers/virt/gunyah/Makefile
@@ -4,3 +4,4 @@ gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o vm_mgr_mem.o
obj-$(CONFIG_GUNYAH) += gunyah.o gunyah_rsc_mgr.o gunyah_vcpu.o
obj-$(CONFIG_GUNYAH_PLATFORM_HOOKS) += gunyah_platform_hooks.o
+obj-$(CONFIG_GUNYAH_QCOM_PLATFORM) += gunyah_qcom.o
diff --git a/drivers/virt/gunyah/gunyah_qcom.c b/drivers/virt/gunyah/gunyah_qcom.c
new file mode 100644
index 000000000000..2381d75482ca
--- /dev/null
+++ b/drivers/virt/gunyah/gunyah_qcom.c
@@ -0,0 +1,218 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2023-2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include <linux/arm-smccc.h>
+#include <linux/gunyah.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/firmware/qcom/qcom_scm.h>
+#include <linux/types.h>
+#include <linux/uuid.h>
+
+#define QCOM_SCM_RM_MANAGED_VMID 0x3A
+#define QCOM_SCM_MAX_MANAGED_VMID 0x3F
+
+static int
+qcom_scm_gunyah_rm_pre_mem_share(struct gunyah_rm *rm,
+ struct gunyah_rm_mem_parcel *mem_parcel)
+{
+ struct qcom_scm_vmperm *new_perms __free(kfree) = NULL;
+ u64 src, src_cpy;
+ int ret = 0, i, n;
+ u16 vmid;
+
+ new_perms = kcalloc(mem_parcel->n_acl_entries, sizeof(*new_perms),
+ GFP_KERNEL);
+ if (!new_perms)
+ return -ENOMEM;
+
+ for (n = 0; n < mem_parcel->n_acl_entries; n++) {
+ vmid = le16_to_cpu(mem_parcel->acl_entries[n].vmid);
+ if (vmid <= QCOM_SCM_MAX_MANAGED_VMID)
+ new_perms[n].vmid = vmid;
+ else
+ new_perms[n].vmid = QCOM_SCM_RM_MANAGED_VMID;
+ if (mem_parcel->acl_entries[n].perms & GUNYAH_RM_ACL_X)
+ new_perms[n].perm |= QCOM_SCM_PERM_EXEC;
+ if (mem_parcel->acl_entries[n].perms & GUNYAH_RM_ACL_W)
+ new_perms[n].perm |= QCOM_SCM_PERM_WRITE;
+ if (mem_parcel->acl_entries[n].perms & GUNYAH_RM_ACL_R)
+ new_perms[n].perm |= QCOM_SCM_PERM_READ;
+ }
+
+ src = BIT_ULL(QCOM_SCM_VMID_HLOS);
+
+ for (i = 0; i < mem_parcel->n_mem_entries; i++) {
+ src_cpy = src;
+ ret = qcom_scm_assign_mem(
+ le64_to_cpu(mem_parcel->mem_entries[i].phys_addr),
+ le64_to_cpu(mem_parcel->mem_entries[i].size), &src_cpy,
+ new_perms, mem_parcel->n_acl_entries);
+ if (ret)
+ break;
+ }
+
+ /* Did it work ok? */
+ if (!ret)
+ return 0;
+
+ src = 0;
+ for (n = 0; n < mem_parcel->n_acl_entries; n++) {
+ vmid = le16_to_cpu(mem_parcel->acl_entries[n].vmid);
+ if (vmid <= QCOM_SCM_MAX_MANAGED_VMID)
+ src |= BIT_ULL(vmid);
+ else
+ src |= BIT_ULL(QCOM_SCM_RM_MANAGED_VMID);
+ }
+
+ new_perms[0].vmid = QCOM_SCM_VMID_HLOS;
+
+ for (i--; i >= 0; i--) {
+ src_cpy = src;
+ WARN_ON_ONCE(qcom_scm_assign_mem(
+ le64_to_cpu(mem_parcel->mem_entries[i].phys_addr),
+ le64_to_cpu(mem_parcel->mem_entries[i].size), &src_cpy,
+ new_perms, 1));
+ }
+
+ return ret;
+}
+
+static int
+qcom_scm_gunyah_rm_post_mem_reclaim(struct gunyah_rm *rm,
+ struct gunyah_rm_mem_parcel *mem_parcel)
+{
+ struct qcom_scm_vmperm new_perms;
+ u64 src = 0, src_cpy;
+ int ret = 0, i, n;
+ u16 vmid;
+
+ new_perms.vmid = QCOM_SCM_VMID_HLOS;
+ new_perms.perm = QCOM_SCM_PERM_EXEC | QCOM_SCM_PERM_WRITE |
+ QCOM_SCM_PERM_READ;
+
+ for (n = 0; n < mem_parcel->n_acl_entries; n++) {
+ vmid = le16_to_cpu(mem_parcel->acl_entries[n].vmid);
+ if (vmid <= QCOM_SCM_MAX_MANAGED_VMID)
+ src |= (1ull << vmid);
+ else
+ src |= (1ull << QCOM_SCM_RM_MANAGED_VMID);
+ }
+
+ for (i = 0; i < mem_parcel->n_mem_entries; i++) {
+ src_cpy = src;
+ ret = qcom_scm_assign_mem(
+ le64_to_cpu(mem_parcel->mem_entries[i].phys_addr),
+ le64_to_cpu(mem_parcel->mem_entries[i].size), &src_cpy,
+ &new_perms, 1);
+ WARN_ON_ONCE(ret);
+ }
+
+ return ret;
+}
+
+static int
+qcom_scm_gunyah_rm_pre_demand_page(struct gunyah_rm *rm, u16 vmid,
+ enum gunyah_pagetable_access access,
+ struct folio *folio)
+{
+ struct qcom_scm_vmperm new_perms[2];
+ unsigned int n = 1;
+ u64 src;
+
+ new_perms[0].vmid = QCOM_SCM_RM_MANAGED_VMID;
+ new_perms[0].perm = QCOM_SCM_PERM_EXEC | QCOM_SCM_PERM_WRITE |
+ QCOM_SCM_PERM_READ;
+ if (access != GUNYAH_PAGETABLE_ACCESS_X &&
+ access != GUNYAH_PAGETABLE_ACCESS_RX &&
+ access != GUNYAH_PAGETABLE_ACCESS_RWX) {
+ new_perms[1].vmid = QCOM_SCM_VMID_HLOS;
+ new_perms[1].perm = QCOM_SCM_PERM_EXEC | QCOM_SCM_PERM_WRITE |
+ QCOM_SCM_PERM_READ;
+ n++;
+ }
+
+ src = BIT_ULL(QCOM_SCM_VMID_HLOS);
+
+ return qcom_scm_assign_mem(__pfn_to_phys(folio_pfn(folio)),
+ folio_size(folio), &src, new_perms, n);
+}
+
+static int
+qcom_scm_gunyah_rm_release_demand_page(struct gunyah_rm *rm, u16 vmid,
+ enum gunyah_pagetable_access access,
+ struct folio *folio)
+{
+ struct qcom_scm_vmperm new_perms;
+ u64 src;
+
+ new_perms.vmid = QCOM_SCM_VMID_HLOS;
+ new_perms.perm = QCOM_SCM_PERM_EXEC | QCOM_SCM_PERM_WRITE |
+ QCOM_SCM_PERM_READ;
+
+ src = BIT_ULL(QCOM_SCM_RM_MANAGED_VMID);
+
+ if (access != GUNYAH_PAGETABLE_ACCESS_X &&
+ access != GUNYAH_PAGETABLE_ACCESS_RX &&
+ access != GUNYAH_PAGETABLE_ACCESS_RWX)
+ src |= BIT_ULL(QCOM_SCM_VMID_HLOS);
+
+ return qcom_scm_assign_mem(__pfn_to_phys(folio_pfn(folio)),
+ folio_size(folio), &src, &new_perms, 1);
+}
+
+static struct gunyah_rm_platform_ops qcom_scm_gunyah_rm_platform_ops = {
+ .pre_mem_share = qcom_scm_gunyah_rm_pre_mem_share,
+ .post_mem_reclaim = qcom_scm_gunyah_rm_post_mem_reclaim,
+ .pre_demand_page = qcom_scm_gunyah_rm_pre_demand_page,
+ .release_demand_page = qcom_scm_gunyah_rm_release_demand_page,
+};
+
+/* {19bd54bd-0b37-571b-946f-609b54539de6} */
+static const uuid_t QCOM_EXT_UUID = UUID_INIT(0x19bd54bd, 0x0b37, 0x571b, 0x94,
+ 0x6f, 0x60, 0x9b, 0x54, 0x53,
+ 0x9d, 0xe6);
+
+#define GUNYAH_QCOM_EXT_CALL_UUID_ID \
+ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_32, \
+ ARM_SMCCC_OWNER_VENDOR_HYP, 0x3f01)
+
+static bool gunyah_has_qcom_extensions(void)
+{
+ struct arm_smccc_res res;
+ uuid_t uuid;
+ u32 *up;
+
+ arm_smccc_1_1_smc(GUNYAH_QCOM_EXT_CALL_UUID_ID, &res);
+
+ up = (u32 *)&uuid.b[0];
+ up[0] = lower_32_bits(res.a0);
+ up[1] = lower_32_bits(res.a1);
+ up[2] = lower_32_bits(res.a2);
+ up[3] = lower_32_bits(res.a3);
+
+ return uuid_equal(&uuid, &QCOM_EXT_UUID);
+}
+
+static int __init qcom_gunyah_platform_hooks_register(void)
+{
+ if (!gunyah_has_qcom_extensions())
+ return -ENODEV;
+
+ pr_info("Enabling Gunyah hooks for Qualcomm platforms.\n");
+
+ return gunyah_rm_register_platform_ops(
+ &qcom_scm_gunyah_rm_platform_ops);
+}
+
+static void __exit qcom_gunyah_platform_hooks_unregister(void)
+{
+ gunyah_rm_unregister_platform_ops(&qcom_scm_gunyah_rm_platform_ops);
+}
+
+module_init(qcom_gunyah_platform_hooks_register);
+module_exit(qcom_gunyah_platform_hooks_unregister);
+MODULE_DESCRIPTION("Qualcomm Technologies, Inc. Platform Hooks for Gunyah");
+MODULE_LICENSE("GPL");
--
2.34.1