From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Tue, 1 Apr 2025 14:26:59 +0200
Subject: Re: [PATCH v6 09/10] optee: FF-A: dynamic restricted memory allocation
To: Sumit Garg
Cc: akpm@linux-foundation.org, david@redhat.com, rppt@linux.ibm.com,
	linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
	op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org,
	Olivier Masse, Thierry Reding, Yong Wu, Sumit Semwal,
	Benjamin Gaignard, Brian Starkey, John Stultz, T.J. Mercier,
	Christian König, Matthias Brugger, AngeloGioacchino Del Regno,
	azarrabi@qti.qualcomm.com, Simona Vetter, Daniel Stone,
	linux-mm@kvack.org
References: <20250305130634.1850178-1-jens.wiklander@linaro.org> <20250305130634.1850178-10-jens.wiklander@linaro.org>

On Tue, Apr 1, 2025 at
12:13 PM Sumit Garg wrote:
>
> + MM folks to seek guidance here.
>
> On Thu, Mar 27, 2025 at 09:07:34AM +0100, Jens Wiklander wrote:
> > Hi Sumit,
> >
> > On Tue, Mar 25, 2025 at 8:42 AM Sumit Garg wrote:
> > >
> > > On Wed, Mar 05, 2025 at 02:04:15PM +0100, Jens Wiklander wrote:
> > > > Add support in the OP-TEE backend driver for dynamic restricted memory
> > > > allocation with FF-A.
> > > >
> > > > The restricted memory pools for dynamically allocated restricted memory
> > > > are instantiated when requested by user-space. This instantiation can
> > > > fail if OP-TEE doesn't support the requested use-case of restricted
> > > > memory.
> > > >
> > > > Restricted memory pools based on a static carveout or dynamic allocation
> > > > can coexist for different use-cases. We use only dynamic allocation with
> > > > FF-A.
> > > >
> > > > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > > > ---
> > > >  drivers/tee/optee/Makefile        |   1 +
> > > >  drivers/tee/optee/ffa_abi.c       | 143 ++++++++++++-
> > > >  drivers/tee/optee/optee_private.h |  13 +-
> > > >  drivers/tee/optee/rstmem.c        | 329 ++++++++++++++++++++++++++++++
> > > >  4 files changed, 483 insertions(+), 3 deletions(-)
> > > >  create mode 100644 drivers/tee/optee/rstmem.c
> > > >
> > >
> > > > diff --git a/drivers/tee/optee/rstmem.c b/drivers/tee/optee/rstmem.c
> > > > new file mode 100644
> > > > index 000000000000..ea27769934d4
> > > > --- /dev/null
> > > > +++ b/drivers/tee/optee/rstmem.c
> > > > @@ -0,0 +1,329 @@
> > > > +// SPDX-License-Identifier: GPL-2.0-only
> > > > +/*
> > > > + * Copyright (c) 2025, Linaro Limited
> > > > + */
> > > > +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> > > > +
> > > > +#include
> > > > +#include
> > > > +#include
> > > > +#include
> > > > +#include
> > > > +#include
> > > > +#include "optee_private.h"
> > > > +
> > > > +struct optee_rstmem_cma_pool {
> > > > +	struct tee_rstmem_pool pool;
> > > > +	struct gen_pool *gen_pool;
> > > > +	struct optee *optee;
> > > > +	size_t page_count;
> > > > +	u16 *end_points;
> > > > +	u_int end_point_count;
> > > > +	u_int align;
> > > > +	refcount_t refcount;
> > > > +	u32 use_case;
> > > > +	struct tee_shm *rstmem;
> > > > +	/* Protects when initializing and tearing down this struct */
> > > > +	struct mutex mutex;
> > > > +};
> > > > +
> > > > +static struct optee_rstmem_cma_pool *
> > > > +to_rstmem_cma_pool(struct tee_rstmem_pool *pool)
> > > > +{
> > > > +	return container_of(pool, struct optee_rstmem_cma_pool, pool);
> > > > +}
> > > > +
> > > > +static int init_cma_rstmem(struct optee_rstmem_cma_pool *rp)
> > > > +{
> > > > +	int rc;
> > > > +
> > > > +	rp->rstmem = tee_shm_alloc_cma_phys_mem(rp->optee->ctx, rp->page_count,
> > > > +						rp->align);
> > > > +	if (IS_ERR(rp->rstmem)) {
> > > > +		rc = PTR_ERR(rp->rstmem);
> > > > +		goto err_null_rstmem;
> > > > +	}
> > > > +
> > > > +	/*
> > > > +	 * TODO unmap the memory range since the physical memory will
> > > > +	 * become inaccessible after the lend_rstmem() call.
> > > > +	 */
> > >
> > > What's your plan for this TODO? I think we need a CMA allocator here
> > > which can allocate un-mapped memory such that any cache speculation
> > > won't lead to CPU hangs once the memory restriction comes into picture.
> >
> > What happens is platform-specific. For some platforms, it might be
> > enough to avoid explicit access. Yes, a CMA allocator with unmapped
> > memory, or where memory can be unmapped, is one option.
>
> Did you get a chance to enable real memory protection on the RockPi board?

No, I don't think I have access to the documentation needed to set up
the board for the relevant peripherals.

> This will at least ensure that mapped restricted memory without explicit
> access works fine. Otherwise, once people start to enable real memory
> restriction in OP-TEE, there is a chance of random hang-ups due to
> cache speculation.

A hypervisor in the normal world can also make the memory inaccessible
to the kernel.
That shouldn't cause any hang-ups due to cache speculation.

Cheers,
Jens

>
> MM folks,
>
> Basically, what we are trying to achieve here is a "no-map" DT behaviour
> [1] which is rather dynamic in nature. The use-case is that a memory
> block allocated from CMA can be marked restricted at runtime, at which
> point we would like Linux to be unable to access it either directly or
> indirectly (via cache speculation). Once the memory-restriction use-case
> has completed, the memory block can be marked as normal again and freed
> for further CMA allocation.
>
> It would be appreciated if you could guide us regarding the appropriate
> APIs to use for un-mapping/mapping CMA allocations for this use-case.
>
> [1] https://github.com/devicetree-org/dt-schema/blob/main/dtschema/schemas/reserved-memory/reserved-memory.yaml#L79
>
> -Sumit
>
> > >
> > > > +	rc = rp->optee->ops->lend_rstmem(rp->optee, rp->rstmem, rp->end_points,
> > > > +					 rp->end_point_count, rp->use_case);
> > > > +	if (rc)
> > > > +		goto err_put_shm;
> > > > +	rp->rstmem->flags |= TEE_SHM_DYNAMIC;
> > > > +
> > > > +	rp->gen_pool = gen_pool_create(PAGE_SHIFT, -1);
> > > > +	if (!rp->gen_pool) {
> > > > +		rc = -ENOMEM;
> > > > +		goto err_reclaim;
> > > > +	}
> > > > +
> > > > +	rc = gen_pool_add(rp->gen_pool, rp->rstmem->paddr,
> > > > +			  rp->rstmem->size, -1);
> > > > +	if (rc)
> > > > +		goto err_free_pool;
> > > > +
> > > > +	refcount_set(&rp->refcount, 1);
> > > > +	return 0;
> > > > +
> > > > +err_free_pool:
> > > > +	gen_pool_destroy(rp->gen_pool);
> > > > +	rp->gen_pool = NULL;
> > > > +err_reclaim:
> > > > +	rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem);
> > > > +err_put_shm:
> > > > +	tee_shm_put(rp->rstmem);
> > > > +err_null_rstmem:
> > > > +	rp->rstmem = NULL;
> > > > +	return rc;
> > > > +}
> > > > +
> > > > +static int get_cma_rstmem(struct optee_rstmem_cma_pool *rp)
> > > > +{
> > > > +	int rc = 0;
> > > > +
> > > > +	if (!refcount_inc_not_zero(&rp->refcount)) {
> > > > +		mutex_lock(&rp->mutex);
> > > > +		if (rp->gen_pool) {
> > > > +			/*
> > > > +			 * Another thread has already initialized the pool
> > > > +			 * before us, or the pool was just about to be torn
> > > > +			 * down. Either way we only need to increase the
> > > > +			 * refcount and we're done.
> > > > +			 */
> > > > +			refcount_inc(&rp->refcount);
> > > > +		} else {
> > > > +			rc = init_cma_rstmem(rp);
> > > > +		}
> > > > +		mutex_unlock(&rp->mutex);
> > > > +	}
> > > > +
> > > > +	return rc;
> > > > +}
> > > > +
> > > > +static void release_cma_rstmem(struct optee_rstmem_cma_pool *rp)
> > > > +{
> > > > +	gen_pool_destroy(rp->gen_pool);
> > > > +	rp->gen_pool = NULL;
> > > > +
> > > > +	rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem);
> > > > +	rp->rstmem->flags &= ~TEE_SHM_DYNAMIC;
> > > > +
> > > > +	WARN(refcount_read(&rp->rstmem->refcount) != 1, "Unexpected refcount");
> > > > +	tee_shm_put(rp->rstmem);
> > > > +	rp->rstmem = NULL;
> > > > +}
> > > > +
> > > > +static void put_cma_rstmem(struct optee_rstmem_cma_pool *rp)
> > > > +{
> > > > +	if (refcount_dec_and_test(&rp->refcount)) {
> > > > +		mutex_lock(&rp->mutex);
> > > > +		if (rp->gen_pool)
> > > > +			release_cma_rstmem(rp);
> > > > +		mutex_unlock(&rp->mutex);
> > > > +	}
> > > > +}
> > > > +
> > > > +static int rstmem_pool_op_cma_alloc(struct tee_rstmem_pool *pool,
> > > > +				    struct sg_table *sgt, size_t size,
> > > > +				    size_t *offs)
> > > > +{
> > > > +	struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
> > > > +	size_t sz = ALIGN(size, PAGE_SIZE);
> > > > +	phys_addr_t pa;
> > > > +	int rc;
> > > > +
> > > > +	rc = get_cma_rstmem(rp);
> > > > +	if (rc)
> > > > +		return rc;
> > > > +
> > > > +	pa = gen_pool_alloc(rp->gen_pool, sz);
> > > > +	if (!pa) {
> > > > +		rc = -ENOMEM;
> > > > +		goto err_put;
> > > > +	}
> > > > +
> > > > +	rc = sg_alloc_table(sgt, 1, GFP_KERNEL);
> > > > +	if (rc)
> > > > +		goto err_free;
> > > > +
> > > > +	sg_set_page(sgt->sgl, phys_to_page(pa), size, 0);
> > > > +	*offs = pa - rp->rstmem->paddr;
> > > > +
> > > > +	return 0;
> > > > +err_free:
> > > > +	gen_pool_free(rp->gen_pool, pa, size);
> > > > +err_put:
> > > > +	put_cma_rstmem(rp);
> > > > +
> > > > +	return rc;
> > > > +}
> > > > +
> > > > +static void rstmem_pool_op_cma_free(struct tee_rstmem_pool *pool,
> > > > +				    struct sg_table *sgt)
> > > > +{
> > > > +	struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
> > > > +	struct scatterlist *sg;
> > > > +	int i;
> > > > +
> > > > +	for_each_sgtable_sg(sgt, sg, i)
> > > > +		gen_pool_free(rp->gen_pool, sg_phys(sg), sg->length);
> > > > +	sg_free_table(sgt);
> > > > +	put_cma_rstmem(rp);
> > > > +}
> > > > +
> > > > +static int rstmem_pool_op_cma_update_shm(struct tee_rstmem_pool *pool,
> > > > +					 struct sg_table *sgt, size_t offs,
> > > > +					 struct tee_shm *shm,
> > > > +					 struct tee_shm **parent_shm)
> > > > +{
> > > > +	struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
> > > > +
> > > > +	*parent_shm = rp->rstmem;
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static void pool_op_cma_destroy_pool(struct tee_rstmem_pool *pool)
> > > > +{
> > > > +	struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
> > > > +
> > > > +	mutex_destroy(&rp->mutex);
> > > > +	kfree(rp);
> > > > +}
> > > > +
> > > > +static struct tee_rstmem_pool_ops rstmem_pool_ops_cma = {
> > > > +	.alloc = rstmem_pool_op_cma_alloc,
> > > > +	.free = rstmem_pool_op_cma_free,
> > > > +	.update_shm = rstmem_pool_op_cma_update_shm,
> > > > +	.destroy_pool = pool_op_cma_destroy_pool,
> > > > +};
> > > > +
> > > > +static int get_rstmem_config(struct optee *optee, u32 use_case,
> > > > +			     size_t *min_size, u_int *min_align,
> > > > +			     u16 *end_points, u_int *ep_count)
> > >
> > > I guess this endpoints terminology is specific to the FF-A ABI. Is there
> > > any relevance for it in the common APIs?
> >
> > Yes, endpoints are specific to the FF-A ABI.
> > The list of endpoints must be presented to FFA_MEM_LEND. We're relying
> > on the secure world to know which endpoints are needed for a specific
> > use case.
> >
> > Cheers,
> > Jens
> >
> > >
> > > -Sumit
> > >
> > > > +{
> > > > +	struct tee_param params[2] = {
> > > > +		[0] = {
> > > > +			.attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT,
> > > > +			.u.value.a = use_case,
> > > > +		},
> > > > +		[1] = {
> > > > +			.attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT,
> > > > +		},
> > > > +	};
> > > > +	struct optee_shm_arg_entry *entry;
> > > > +	struct tee_shm *shm_param = NULL;
> > > > +	struct optee_msg_arg *msg_arg;
> > > > +	struct tee_shm *shm;
> > > > +	u_int offs;
> > > > +	int rc;
> > > > +
> > > > +	if (end_points && *ep_count) {
> > > > +		params[1].u.memref.size = *ep_count * sizeof(*end_points);
> > > > +		shm_param = tee_shm_alloc_priv_buf(optee->ctx,
> > > > +						   params[1].u.memref.size);
> > > > +		if (IS_ERR(shm_param))
> > > > +			return PTR_ERR(shm_param);
> > > > +		params[1].u.memref.shm = shm_param;
> > > > +	}
> > > > +
> > > > +	msg_arg = optee_get_msg_arg(optee->ctx, ARRAY_SIZE(params), &entry,
> > > > +				    &shm, &offs);
> > > > +	if (IS_ERR(msg_arg)) {
> > > > +		rc = PTR_ERR(msg_arg);
> > > > +		goto out_free_shm;
> > > > +	}
> > > > +	msg_arg->cmd = OPTEE_MSG_CMD_GET_RSTMEM_CONFIG;
> > > > +
> > > > +	rc = optee->ops->to_msg_param(optee, msg_arg->params,
> > > > +				      ARRAY_SIZE(params), params,
> > > > +				      false /*!update_out*/);
> > > > +	if (rc)
> > > > +		goto out_free_msg;
> > > > +
> > > > +	rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false);
> > > > +	if (rc)
> > > > +		goto out_free_msg;
> > > > +	if (msg_arg->ret && msg_arg->ret != TEEC_ERROR_SHORT_BUFFER) {
> > > > +		rc = -EINVAL;
> > > > +		goto out_free_msg;
> > > > +	}
> > > > +
> > > > +	rc = optee->ops->from_msg_param(optee, params, ARRAY_SIZE(params),
> > > > +					msg_arg->params, true /*update_out*/);
> > > > +	if (rc)
> > > > +		goto out_free_msg;
> > > > +
> > > > +	if (!msg_arg->ret && end_points &&
> > > > +	    *ep_count < params[1].u.memref.size / sizeof(u16)) {
> > > > +		rc = -EINVAL;
> > > > +		goto out_free_msg;
> > > > +	}
> > > > +
> > > > +	*min_size = params[0].u.value.a;
> > > > +	*min_align = params[0].u.value.b;
> > > > +	*ep_count = params[1].u.memref.size / sizeof(u16);
> > > > +
> > > > +	if (msg_arg->ret == TEEC_ERROR_SHORT_BUFFER) {
> > > > +		rc = -ENOSPC;
> > > > +		goto out_free_msg;
> > > > +	}
> > > > +
> > > > +	if (end_points)
> > > > +		memcpy(end_points, tee_shm_get_va(shm_param, 0),
> > > > +		       params[1].u.memref.size);
> > > > +
> > > > +out_free_msg:
> > > > +	optee_free_msg_arg(optee->ctx, entry, offs);
> > > > +out_free_shm:
> > > > +	if (shm_param)
> > > > +		tee_shm_free(shm_param);
> > > > +	return rc;
> > > > +}
> > > > +
> > > > +struct tee_rstmem_pool *optee_rstmem_alloc_cma_pool(struct optee *optee,
> > > > +						    enum tee_dma_heap_id id)
> > > > +{
> > > > +	struct optee_rstmem_cma_pool *rp;
> > > > +	u32 use_case = id;
> > > > +	size_t min_size;
> > > > +	int rc;
> > > > +
> > > > +	rp = kzalloc(sizeof(*rp), GFP_KERNEL);
> > > > +	if (!rp)
> > > > +		return ERR_PTR(-ENOMEM);
> > > > +	rp->use_case = use_case;
> > > > +
> > > > +	rc = get_rstmem_config(optee, use_case, &min_size, &rp->align, NULL,
> > > > +			       &rp->end_point_count);
> > > > +	if (rc) {
> > > > +		if (rc != -ENOSPC)
> > > > +			goto err;
> > > > +		rp->end_points = kcalloc(rp->end_point_count,
> > > > +					 sizeof(*rp->end_points), GFP_KERNEL);
> > > > +		if (!rp->end_points) {
> > > > +			rc = -ENOMEM;
> > > > +			goto err;
> > > > +		}
> > > > +		rc = get_rstmem_config(optee, use_case, &min_size, &rp->align,
> > > > +				       rp->end_points, &rp->end_point_count);
> > > > +		if (rc)
> > > > +			goto err_kfree_eps;
> > > > +	}
> > > > +
> > > > +	rp->pool.ops = &rstmem_pool_ops_cma;
> > > > +	rp->optee = optee;
> > > > +	rp->page_count = min_size / PAGE_SIZE;
> > > > +	mutex_init(&rp->mutex);
> > > > +
> > > > +	return &rp->pool;
> > > > +
> > > > +err_kfree_eps:
> > > > +	kfree(rp->end_points);
> > > > +err:
> > > > +	kfree(rp);
> > > > +	return ERR_PTR(rc);
> > > > +}
> > > > --
> > > > 2.43.0
> > > >