From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
	yosry.ahmed@linux.dev, nphamcs@gmail.com, chengming.zhou@linux.dev,
	usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
	ying.huang@linux.alibaba.com, akpm@linux-foundation.org,
	senozhatsky@chromium.org, linux-crypto@vger.kernel.org,
	herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com,
	ardb@kernel.org, ebiggers@google.com, surenb@google.com,
	kristen.c.accardi@intel.com, vinicius.gomes@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com,
	kanchana.p.sridhar@intel.com
Subject: [PATCH v10 11/25] crypto: iaa - Enablers for submitting descriptors then polling for completion.
Date: Thu, 3 Jul 2025 21:23:09 -0700
Message-Id: <20250704042323.10318-12-kanchana.p.sridhar@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20250704042323.10318-1-kanchana.p.sridhar@intel.com>
References: <20250704042323.10318-1-kanchana.p.sridhar@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch adds capabilities to the IAA driver that allow kernel users
to benefit from compressing/decompressing multiple jobs in parallel
using IAA hardware acceleration, without the use of interrupts.
Instead, this is accomplished using an async "submit-poll" mechanism.
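
For illustration, below is a rough sketch (not part of this patch) of how
a batching caller could drive this submit-poll flow. The submission helper
iaa_submit_compress() is a hypothetical stand-in for the driver path that
reaches iaa_compress() with IAA_REQ_POLL_FLAG set in iaa_req->flags; the
actual batching interfaces are introduced in subsequent patches, and error
handling is elided:

/* Illustrative only: poll-mode batching from the caller's perspective. */
static int example_poll_batch(struct iaa_compression_ctx *ctx,
			      struct iaa_req *reqs[], int errs[], int nr_reqs)
{
	bool pending = false;
	int i, err;

	/* Submit every request; each submission returns -EINPROGRESS. */
	for (i = 0; i < nr_reqs; i++) {
		reqs[i]->flags |= IAA_REQ_POLL_FLAG;
		errs[i] = iaa_submit_compress(ctx, reqs[i]); /* hypothetical */
		if (errs[i] == -EINPROGRESS)
			pending = true;
	}

	/* Poll outstanding descriptors; iaa_comp_poll() returns -EAGAIN until done. */
	while (pending) {
		pending = false;
		for (i = 0; i < nr_reqs; i++) {
			if (errs[i] != -EINPROGRESS)
				continue;
			err = iaa_comp_poll(ctx, reqs[i]);
			if (err == -EAGAIN)
				pending = true;	/* still running in hardware */
			else
				errs[i] = err;	/* 0 on success, or completion error */
		}
	}

	for (i = 0; i < nr_reqs; i++) {
		if (errs[i])
			return errs[i];
	}

	return 0;
}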

To achieve this, we break down a compress/decompress job into two
separate activities if the driver is configured for non-irq async mode:

1) Submit a descriptor after caching the "idxd_desc" descriptor in
   req->drv_data, and return -EINPROGRESS.
2) Poll: Given a request, retrieve the descriptor and poll its
   completion status for success/error.

This is enabled by the following additions in the driver:

1) The idxd_desc is cached in the "drv_data" member of "struct iaa_req".
2) IAA_REQ_POLL_FLAG: if set in the iaa_req's flags, this tells the
   driver that it should submit the descriptor and return -EINPROGRESS.
   If not set, the driver will proceed to call check_completion() in
   fully synchronous mode, until the hardware returns a completion
   status.
3) iaa_comp_poll() procedure: This routine is intended to be called
   after submission returns -EINPROGRESS. It checks the completion
   status once, and returns -EAGAIN if the job has not completed. If
   the job has completed, it returns the completion status.

The purpose of this commit is to allow kernel users of iaa_crypto, such
as zswap, to invoke the crypto_acomp_compress() API in fully synchronous
mode for sequential/non-batching use cases (i.e., today's status quo),
wherein zswap calls:

    crypto_wait_req(crypto_acomp_compress(req), wait);

and to non-intrusively invoke the fully asynchronous batch
compress/decompress functionality that will be introduced in subsequent
patches. Both use cases need to reuse the same code paths in the driver
to interface with hardware: the IAA_REQ_POLL_FLAG allows this shared
code to determine whether an iaa_req needs to be processed synchronously
or asynchronously. The idea is to simplify iaa_crypto's
sequential/batching interfaces for use by zswap and zram.

Thus, regardless of the iaa_crypto driver's 'sync_mode' setting, it can
still be forced to use synchronous mode by *not setting* the
IAA_REQ_POLL_FLAG in iaa_req->flags: this is the default, to support
sequential use cases in zswap today.

When IAA batching functionality is introduced subsequently, it will set
the IAA_REQ_POLL_FLAG for the requests in a batch. We will submit the
descriptors for each request in the batch in iaa_[de]compress(), and
return -EINPROGRESS. The hardware begins processing each request as soon
as it is submitted, so essentially all compress/decompress jobs in the
batch are parallelized. The polling function, iaa_comp_poll(), will
retrieve the descriptor from each iaa_req->drv_data to check its
completion status. This enables the iaa_crypto driver to implement true
async "submit-poll" for parallel compressions and decompressions in the
IAA hardware accelerator.

Both of these conditions need to be met for a request to be processed in
fully async submit-poll mode:

1) use_irq should be "false"
2) iaa_req->flags & IAA_REQ_POLL_FLAG should be "true"

Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
---
 drivers/crypto/intel/iaa/iaa_crypto.h      |  6 ++
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 71 +++++++++++++++++++++-
 2 files changed, 75 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h
index 190157967e3ba..1cc383c94fb80 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto.h
+++ b/drivers/crypto/intel/iaa/iaa_crypto.h
@@ -41,6 +41,12 @@
 					 IAA_DECOMP_CHECK_FOR_EOB | \
 					 IAA_DECOMP_STOP_ON_EOB)
 
+/*
+ * If set, the driver must have a way to submit the req, then
+ * poll its completion status for success/error.
+ */
+#define IAA_REQ_POLL_FLAG	0x00000002
+
 /* Representation of IAA workqueue */
 struct iaa_wq {
 	struct list_head list;
diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index ecb737c70b53e..4b25235d6636c 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -1890,13 +1890,14 @@ static int iaa_compress(struct iaa_compression_ctx *ctx, struct iaa_req *req,
 			      ctx->mode, iaa_device->compression_modes[ctx->mode]);
 
 	if (likely(!ctx->use_irq)) {
+		req->drv_data = idxd_desc;
 		iaa_submit_desc_movdir64b(wq, idxd_desc);
 
 		/* Update stats */
 		update_total_comp_calls();
 		update_wq_comp_calls(wq);
 
-		if (ctx->async_mode)
+		if (req->flags & IAA_REQ_POLL_FLAG)
 			return -EINPROGRESS;
 
 		ret = check_completion(dev, idxd_desc->iax_completion, true, false);
@@ -1978,13 +1979,14 @@ static int iaa_decompress(struct iaa_compression_ctx *ctx, struct iaa_req *req,
 	desc = iaa_setup_decompress_hw_desc(idxd_desc, src_addr, slen, dst_addr, *dlen);
 
 	if (likely(!ctx->use_irq)) {
+		req->drv_data = idxd_desc;
 		iaa_submit_desc_movdir64b(wq, idxd_desc);
 
 		/* Update stats */
 		update_total_decomp_calls();
 		update_wq_decomp_calls(wq);
 
-		if (ctx->async_mode)
+		if (req->flags & IAA_REQ_POLL_FLAG)
 			return -EINPROGRESS;
 
 		ret = check_completion(dev, idxd_desc->iax_completion, false, false);
@@ -2187,6 +2189,71 @@ static int iaa_comp_adecompress(struct iaa_compression_ctx *ctx, struct iaa_req
 	return ret;
 }
 
+static int __maybe_unused iaa_comp_poll(struct iaa_compression_ctx *ctx, struct iaa_req *req)
+{
+	struct idxd_desc *idxd_desc;
+	struct idxd_device *idxd;
+	struct iaa_wq *iaa_wq;
+	struct pci_dev *pdev;
+	struct device *dev;
+	struct idxd_wq *wq;
+	bool compress_op;
+	int ret;
+
+	idxd_desc = req->drv_data;
+	if (!idxd_desc)
+		return -EAGAIN;
+
+	compress_op = (idxd_desc->iax_hw->opcode == IAX_OPCODE_COMPRESS);
+	wq = idxd_desc->wq;
+	iaa_wq = idxd_wq_get_private(wq);
+	idxd = iaa_wq->iaa_device->idxd;
+	pdev = idxd->pdev;
+	dev = &pdev->dev;
+
+	ret = check_completion(dev, idxd_desc->iax_completion, compress_op, true);
+	if (ret == -EAGAIN)
+		return ret;
+	if (ret)
+		goto out;
+
+	req->dlen = idxd_desc->iax_completion->output_size;
+
+	/* Update stats */
+	if (compress_op) {
+		update_total_comp_bytes_out(req->dlen);
+		update_wq_comp_bytes(wq, req->dlen);
+	} else {
+		update_total_decomp_bytes_in(req->slen);
+		update_wq_decomp_bytes(wq, req->slen);
+	}
+
+	if (compress_op && ctx->verify_compress) {
+		dma_addr_t src_addr, dst_addr;
+
+		req->compression_crc = idxd_desc->iax_completion->crc;
+
+		dma_sync_sg_for_device(dev, req->dst, 1, DMA_FROM_DEVICE);
+		dma_sync_sg_for_device(dev, req->src, 1, DMA_TO_DEVICE);
+
+		src_addr = sg_dma_address(req->src);
+		dst_addr = sg_dma_address(req->dst);
+
+		ret = iaa_compress_verify(ctx, req, wq, src_addr, req->slen,
+					  dst_addr, req->dlen);
+	}
+
+out:
+	/* caller doesn't call crypto_wait_req, so no acomp_request_complete() */
+	dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
+	dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
+
+	idxd_free_desc(idxd_desc->wq, idxd_desc);
+	percpu_ref_put(&iaa_wq->ref);
+
+	return ret;
+}
+
 static void compression_ctx_init(struct iaa_compression_ctx *ctx, enum iaa_mode mode)
 {
 	ctx->mode = mode;
-- 
2.27.0