linux-mm.kvack.org archive mirror
* [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support
@ 2025-03-04  9:25 Herbert Xu
  2025-03-04  9:25 ` [v2 PATCH 1/7] crypto: api - Add cra_type->destroy hook Herbert Xu
                   ` (9 more replies)
  0 siblings, 10 replies; 16+ messages in thread
From: Herbert Xu @ 2025-03-04  9:25 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: linux-mm, Yosry Ahmed, Kanchana P Sridhar

This patch series adds request chaining and virtual address support
to the crypto_acomp interface.

Herbert Xu (7):
  crypto: api - Add cra_type->destroy hook
  crypto: scomp - Remove tfm argument from alloc/free_ctx
  crypto: acomp - Add request chaining and virtual addresses
  crypto: testmgr - Remove NULL dst acomp tests
  crypto: scomp - Remove support for most non-trivial destination SG
    lists
  crypto: scomp - Add chaining and virtual address support
  crypto: acomp - Move stream management into scomp layer

 crypto/842.c                           |   8 +-
 crypto/acompress.c                     | 208 ++++++++++++++++++++---
 crypto/algapi.c                        |   9 +
 crypto/compress.h                      |   2 -
 crypto/deflate.c                       |   4 +-
 crypto/internal.h                      |   6 +-
 crypto/lz4.c                           |   8 +-
 crypto/lz4hc.c                         |   8 +-
 crypto/lzo-rle.c                       |   8 +-
 crypto/lzo.c                           |   8 +-
 crypto/scompress.c                     | 226 +++++++++++++++----------
 crypto/testmgr.c                       |  29 ----
 crypto/zstd.c                          |   4 +-
 drivers/crypto/cavium/zip/zip_crypto.c |   6 +-
 drivers/crypto/cavium/zip/zip_crypto.h |   6 +-
 include/crypto/acompress.h             | 118 ++++++++++---
 include/crypto/internal/acompress.h    |  39 +++--
 include/crypto/internal/scompress.h    |  18 +-
 18 files changed, 488 insertions(+), 227 deletions(-)

-- 
2.39.5



^ permalink raw reply	[flat|nested] 16+ messages in thread

* [v2 PATCH 1/7] crypto: api - Add cra_type->destroy hook
  2025-03-04  9:25 [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support Herbert Xu
@ 2025-03-04  9:25 ` Herbert Xu
  2025-03-04  9:25 ` [v2 PATCH 2/7] crypto: scomp - Remove tfm argument from alloc/free_ctx Herbert Xu
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Herbert Xu @ 2025-03-04  9:25 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: linux-mm, Yosry Ahmed, Kanchana P Sridhar

Add a cra_type->destroy hook so that resources can be freed after
the last user of a registered algorithm is gone.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 crypto/algapi.c   | 9 +++++++++
 crypto/internal.h | 6 ++++--
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/crypto/algapi.c b/crypto/algapi.c
index e7a9a2ada2cf..8f72dd15cf9c 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -89,6 +89,15 @@ static void crypto_destroy_instance(struct crypto_alg *alg)
 	schedule_work(&inst->free_work);
 }
 
+void crypto_destroy_alg(struct crypto_alg *alg)
+{
+	if (alg->cra_type && alg->cra_type->destroy)
+		alg->cra_type->destroy(alg);
+
+	if (alg->cra_destroy)
+		alg->cra_destroy(alg);
+}
+
 /*
  * This function adds a spawn to the list secondary_spawns which
  * will be used at the end of crypto_remove_spawns to unregister
diff --git a/crypto/internal.h b/crypto/internal.h
index 08d43b40e7db..11567ea24fc3 100644
--- a/crypto/internal.h
+++ b/crypto/internal.h
@@ -40,6 +40,7 @@ struct crypto_type {
 	void (*show)(struct seq_file *m, struct crypto_alg *alg);
 	int (*report)(struct sk_buff *skb, struct crypto_alg *alg);
 	void (*free)(struct crypto_instance *inst);
+	void (*destroy)(struct crypto_alg *alg);
 
 	unsigned int type;
 	unsigned int maskclear;
@@ -127,6 +128,7 @@ void *crypto_create_tfm_node(struct crypto_alg *alg,
 			const struct crypto_type *frontend, int node);
 void *crypto_clone_tfm(const struct crypto_type *frontend,
 		       struct crypto_tfm *otfm);
+void crypto_destroy_alg(struct crypto_alg *alg);
 
 static inline void *crypto_create_tfm(struct crypto_alg *alg,
 			const struct crypto_type *frontend)
@@ -163,8 +165,8 @@ static inline struct crypto_alg *crypto_alg_get(struct crypto_alg *alg)
 
 static inline void crypto_alg_put(struct crypto_alg *alg)
 {
-	if (refcount_dec_and_test(&alg->cra_refcnt) && alg->cra_destroy)
-		alg->cra_destroy(alg);
+	if (refcount_dec_and_test(&alg->cra_refcnt))
+		crypto_destroy_alg(alg);
 }
 
 static inline int crypto_tmpl_get(struct crypto_template *tmpl)
-- 
2.39.5




* [v2 PATCH 2/7] crypto: scomp - Remove tfm argument from alloc/free_ctx
  2025-03-04  9:25 [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support Herbert Xu
  2025-03-04  9:25 ` [v2 PATCH 1/7] crypto: api - Add cra_type->destroy hook Herbert Xu
@ 2025-03-04  9:25 ` Herbert Xu
  2025-03-04  9:25 ` [v2 PATCH 3/7] crypto: acomp - Add request chaining and virtual addresses Herbert Xu
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Herbert Xu @ 2025-03-04  9:25 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: linux-mm, Yosry Ahmed, Kanchana P Sridhar

The tfm argument is completely unused and meaningless as the
same stream object is identical over all transforms of a given
algorithm.  Remove it.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 crypto/842.c                           | 8 ++++----
 crypto/deflate.c                       | 4 ++--
 crypto/lz4.c                           | 8 ++++----
 crypto/lz4hc.c                         | 8 ++++----
 crypto/lzo-rle.c                       | 8 ++++----
 crypto/lzo.c                           | 8 ++++----
 crypto/zstd.c                          | 4 ++--
 drivers/crypto/cavium/zip/zip_crypto.c | 6 +++---
 drivers/crypto/cavium/zip/zip_crypto.h | 6 +++---
 include/crypto/internal/scompress.h    | 8 ++++----
 10 files changed, 34 insertions(+), 34 deletions(-)

diff --git a/crypto/842.c b/crypto/842.c
index e59e54d76960..2238478c3493 100644
--- a/crypto/842.c
+++ b/crypto/842.c
@@ -28,7 +28,7 @@ struct crypto842_ctx {
 	void *wmem;	/* working memory for compress */
 };
 
-static void *crypto842_alloc_ctx(struct crypto_scomp *tfm)
+static void *crypto842_alloc_ctx(void)
 {
 	void *ctx;
 
@@ -43,14 +43,14 @@ static int crypto842_init(struct crypto_tfm *tfm)
 {
 	struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	ctx->wmem = crypto842_alloc_ctx(NULL);
+	ctx->wmem = crypto842_alloc_ctx();
 	if (IS_ERR(ctx->wmem))
 		return -ENOMEM;
 
 	return 0;
 }
 
-static void crypto842_free_ctx(struct crypto_scomp *tfm, void *ctx)
+static void crypto842_free_ctx(void *ctx)
 {
 	kfree(ctx);
 }
@@ -59,7 +59,7 @@ static void crypto842_exit(struct crypto_tfm *tfm)
 {
 	struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	crypto842_free_ctx(NULL, ctx->wmem);
+	crypto842_free_ctx(ctx->wmem);
 }
 
 static int crypto842_compress(struct crypto_tfm *tfm,
diff --git a/crypto/deflate.c b/crypto/deflate.c
index 98e8bcb81a6a..1bf7184ad670 100644
--- a/crypto/deflate.c
+++ b/crypto/deflate.c
@@ -112,7 +112,7 @@ static int __deflate_init(void *ctx)
 	return ret;
 }
 
-static void *deflate_alloc_ctx(struct crypto_scomp *tfm)
+static void *deflate_alloc_ctx(void)
 {
 	struct deflate_ctx *ctx;
 	int ret;
@@ -143,7 +143,7 @@ static void __deflate_exit(void *ctx)
 	deflate_decomp_exit(ctx);
 }
 
-static void deflate_free_ctx(struct crypto_scomp *tfm, void *ctx)
+static void deflate_free_ctx(void *ctx)
 {
 	__deflate_exit(ctx);
 	kfree_sensitive(ctx);
diff --git a/crypto/lz4.c b/crypto/lz4.c
index 0606f8862e78..e66c6d1ba34f 100644
--- a/crypto/lz4.c
+++ b/crypto/lz4.c
@@ -16,7 +16,7 @@ struct lz4_ctx {
 	void *lz4_comp_mem;
 };
 
-static void *lz4_alloc_ctx(struct crypto_scomp *tfm)
+static void *lz4_alloc_ctx(void)
 {
 	void *ctx;
 
@@ -31,14 +31,14 @@ static int lz4_init(struct crypto_tfm *tfm)
 {
 	struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	ctx->lz4_comp_mem = lz4_alloc_ctx(NULL);
+	ctx->lz4_comp_mem = lz4_alloc_ctx();
 	if (IS_ERR(ctx->lz4_comp_mem))
 		return -ENOMEM;
 
 	return 0;
 }
 
-static void lz4_free_ctx(struct crypto_scomp *tfm, void *ctx)
+static void lz4_free_ctx(void *ctx)
 {
 	vfree(ctx);
 }
@@ -47,7 +47,7 @@ static void lz4_exit(struct crypto_tfm *tfm)
 {
 	struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	lz4_free_ctx(NULL, ctx->lz4_comp_mem);
+	lz4_free_ctx(ctx->lz4_comp_mem);
 }
 
 static int __lz4_compress_crypto(const u8 *src, unsigned int slen,
diff --git a/crypto/lz4hc.c b/crypto/lz4hc.c
index d7cc94aa2fcf..25a95b65aca5 100644
--- a/crypto/lz4hc.c
+++ b/crypto/lz4hc.c
@@ -15,7 +15,7 @@ struct lz4hc_ctx {
 	void *lz4hc_comp_mem;
 };
 
-static void *lz4hc_alloc_ctx(struct crypto_scomp *tfm)
+static void *lz4hc_alloc_ctx(void)
 {
 	void *ctx;
 
@@ -30,14 +30,14 @@ static int lz4hc_init(struct crypto_tfm *tfm)
 {
 	struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	ctx->lz4hc_comp_mem = lz4hc_alloc_ctx(NULL);
+	ctx->lz4hc_comp_mem = lz4hc_alloc_ctx();
 	if (IS_ERR(ctx->lz4hc_comp_mem))
 		return -ENOMEM;
 
 	return 0;
 }
 
-static void lz4hc_free_ctx(struct crypto_scomp *tfm, void *ctx)
+static void lz4hc_free_ctx(void *ctx)
 {
 	vfree(ctx);
 }
@@ -46,7 +46,7 @@ static void lz4hc_exit(struct crypto_tfm *tfm)
 {
 	struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	lz4hc_free_ctx(NULL, ctx->lz4hc_comp_mem);
+	lz4hc_free_ctx(ctx->lz4hc_comp_mem);
 }
 
 static int __lz4hc_compress_crypto(const u8 *src, unsigned int slen,
diff --git a/crypto/lzo-rle.c b/crypto/lzo-rle.c
index 0631d975bfac..261ef327637a 100644
--- a/crypto/lzo-rle.c
+++ b/crypto/lzo-rle.c
@@ -15,7 +15,7 @@ struct lzorle_ctx {
 	void *lzorle_comp_mem;
 };
 
-static void *lzorle_alloc_ctx(struct crypto_scomp *tfm)
+static void *lzorle_alloc_ctx(void)
 {
 	void *ctx;
 
@@ -30,14 +30,14 @@ static int lzorle_init(struct crypto_tfm *tfm)
 {
 	struct lzorle_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	ctx->lzorle_comp_mem = lzorle_alloc_ctx(NULL);
+	ctx->lzorle_comp_mem = lzorle_alloc_ctx();
 	if (IS_ERR(ctx->lzorle_comp_mem))
 		return -ENOMEM;
 
 	return 0;
 }
 
-static void lzorle_free_ctx(struct crypto_scomp *tfm, void *ctx)
+static void lzorle_free_ctx(void *ctx)
 {
 	kvfree(ctx);
 }
@@ -46,7 +46,7 @@ static void lzorle_exit(struct crypto_tfm *tfm)
 {
 	struct lzorle_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	lzorle_free_ctx(NULL, ctx->lzorle_comp_mem);
+	lzorle_free_ctx(ctx->lzorle_comp_mem);
 }
 
 static int __lzorle_compress(const u8 *src, unsigned int slen,
diff --git a/crypto/lzo.c b/crypto/lzo.c
index ebda132dd22b..ae40e80a4094 100644
--- a/crypto/lzo.c
+++ b/crypto/lzo.c
@@ -15,7 +15,7 @@ struct lzo_ctx {
 	void *lzo_comp_mem;
 };
 
-static void *lzo_alloc_ctx(struct crypto_scomp *tfm)
+static void *lzo_alloc_ctx(void)
 {
 	void *ctx;
 
@@ -30,14 +30,14 @@ static int lzo_init(struct crypto_tfm *tfm)
 {
 	struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	ctx->lzo_comp_mem = lzo_alloc_ctx(NULL);
+	ctx->lzo_comp_mem = lzo_alloc_ctx();
 	if (IS_ERR(ctx->lzo_comp_mem))
 		return -ENOMEM;
 
 	return 0;
 }
 
-static void lzo_free_ctx(struct crypto_scomp *tfm, void *ctx)
+static void lzo_free_ctx(void *ctx)
 {
 	kvfree(ctx);
 }
@@ -46,7 +46,7 @@ static void lzo_exit(struct crypto_tfm *tfm)
 {
 	struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	lzo_free_ctx(NULL, ctx->lzo_comp_mem);
+	lzo_free_ctx(ctx->lzo_comp_mem);
 }
 
 static int __lzo_compress(const u8 *src, unsigned int slen,
diff --git a/crypto/zstd.c b/crypto/zstd.c
index 154a969c83a8..68a093427944 100644
--- a/crypto/zstd.c
+++ b/crypto/zstd.c
@@ -103,7 +103,7 @@ static int __zstd_init(void *ctx)
 	return ret;
 }
 
-static void *zstd_alloc_ctx(struct crypto_scomp *tfm)
+static void *zstd_alloc_ctx(void)
 {
 	int ret;
 	struct zstd_ctx *ctx;
@@ -134,7 +134,7 @@ static void __zstd_exit(void *ctx)
 	zstd_decomp_exit(ctx);
 }
 
-static void zstd_free_ctx(struct crypto_scomp *tfm, void *ctx)
+static void zstd_free_ctx(void *ctx)
 {
 	__zstd_exit(ctx);
 	kfree_sensitive(ctx);
diff --git a/drivers/crypto/cavium/zip/zip_crypto.c b/drivers/crypto/cavium/zip/zip_crypto.c
index 1046a746d36f..a9c3efce8f2d 100644
--- a/drivers/crypto/cavium/zip/zip_crypto.c
+++ b/drivers/crypto/cavium/zip/zip_crypto.c
@@ -236,7 +236,7 @@ int  zip_comp_decompress(struct crypto_tfm *tfm,
 } /* Legacy compress framework end */
 
 /* SCOMP framework start */
-void *zip_alloc_scomp_ctx_deflate(struct crypto_scomp *tfm)
+void *zip_alloc_scomp_ctx_deflate(void)
 {
 	int ret;
 	struct zip_kernel_ctx *zip_ctx;
@@ -255,7 +255,7 @@ void *zip_alloc_scomp_ctx_deflate(struct crypto_scomp *tfm)
 	return zip_ctx;
 }
 
-void *zip_alloc_scomp_ctx_lzs(struct crypto_scomp *tfm)
+void *zip_alloc_scomp_ctx_lzs(void)
 {
 	int ret;
 	struct zip_kernel_ctx *zip_ctx;
@@ -274,7 +274,7 @@ void *zip_alloc_scomp_ctx_lzs(struct crypto_scomp *tfm)
 	return zip_ctx;
 }
 
-void zip_free_scomp_ctx(struct crypto_scomp *tfm, void *ctx)
+void zip_free_scomp_ctx(void *ctx)
 {
 	struct zip_kernel_ctx *zip_ctx = ctx;
 
diff --git a/drivers/crypto/cavium/zip/zip_crypto.h b/drivers/crypto/cavium/zip/zip_crypto.h
index b59ddfcacd34..dbe20bfeb3e9 100644
--- a/drivers/crypto/cavium/zip/zip_crypto.h
+++ b/drivers/crypto/cavium/zip/zip_crypto.h
@@ -67,9 +67,9 @@ int  zip_comp_decompress(struct crypto_tfm *tfm,
 			 const u8 *src, unsigned int slen,
 			 u8 *dst, unsigned int *dlen);
 
-void *zip_alloc_scomp_ctx_deflate(struct crypto_scomp *tfm);
-void *zip_alloc_scomp_ctx_lzs(struct crypto_scomp *tfm);
-void  zip_free_scomp_ctx(struct crypto_scomp *tfm, void *zip_ctx);
+void *zip_alloc_scomp_ctx_deflate(void);
+void *zip_alloc_scomp_ctx_lzs(void);
+void  zip_free_scomp_ctx(void *zip_ctx);
 int   zip_scomp_compress(struct crypto_scomp *tfm,
 			 const u8 *src, unsigned int slen,
 			 u8 *dst, unsigned int *dlen, void *ctx);
diff --git a/include/crypto/internal/scompress.h b/include/crypto/internal/scompress.h
index 07a10fd2d321..6ba9974df7d3 100644
--- a/include/crypto/internal/scompress.h
+++ b/include/crypto/internal/scompress.h
@@ -31,8 +31,8 @@ struct crypto_scomp {
  * @calg:	Cmonn algorithm data structure shared with acomp
  */
 struct scomp_alg {
-	void *(*alloc_ctx)(struct crypto_scomp *tfm);
-	void (*free_ctx)(struct crypto_scomp *tfm, void *ctx);
+	void *(*alloc_ctx)(void);
+	void (*free_ctx)(void *ctx);
 	int (*compress)(struct crypto_scomp *tfm, const u8 *src,
 			unsigned int slen, u8 *dst, unsigned int *dlen,
 			void *ctx);
@@ -73,13 +73,13 @@ static inline struct scomp_alg *crypto_scomp_alg(struct crypto_scomp *tfm)
 
 static inline void *crypto_scomp_alloc_ctx(struct crypto_scomp *tfm)
 {
-	return crypto_scomp_alg(tfm)->alloc_ctx(tfm);
+	return crypto_scomp_alg(tfm)->alloc_ctx();
 }
 
 static inline void crypto_scomp_free_ctx(struct crypto_scomp *tfm,
 					 void *ctx)
 {
-	return crypto_scomp_alg(tfm)->free_ctx(tfm, ctx);
+	return crypto_scomp_alg(tfm)->free_ctx(ctx);
 }
 
 static inline int crypto_scomp_compress(struct crypto_scomp *tfm,
-- 
2.39.5




* [v2 PATCH 3/7] crypto: acomp - Add request chaining and virtual addresses
  2025-03-04  9:25 [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support Herbert Xu
  2025-03-04  9:25 ` [v2 PATCH 1/7] crypto: api - Add cra_type->destroy hook Herbert Xu
  2025-03-04  9:25 ` [v2 PATCH 2/7] crypto: scomp - Remove tfm argument from alloc/free_ctx Herbert Xu
@ 2025-03-04  9:25 ` Herbert Xu
  2025-03-04 21:59   ` Sridhar, Kanchana P
  2025-03-04  9:25 ` [v2 PATCH 4/7] crypto: testmgr - Remove NULL dst acomp tests Herbert Xu
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 16+ messages in thread
From: Herbert Xu @ 2025-03-04  9:25 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: linux-mm, Yosry Ahmed, Kanchana P Sridhar

This adds request chaining and virtual address support to the
acomp interface.

It is identical to the ahash interface, except that a new flag
CRYPTO_ACOMP_REQ_NONDMA has been added to indicate that the
virtual addresses are not suitable for DMA.  This is because
all existing and potential acomp users can provide memory that
is suitable for DMA, so there is no need for a fall-back copy
path.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 crypto/acompress.c                  | 201 ++++++++++++++++++++++++++++
 include/crypto/acompress.h          |  89 ++++++++++--
 include/crypto/internal/acompress.h |  22 +++
 3 files changed, 299 insertions(+), 13 deletions(-)

diff --git a/crypto/acompress.c b/crypto/acompress.c
index 30176316140a..d2103d4e42cc 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -23,6 +23,8 @@ struct crypto_scomp;
 
 static const struct crypto_type crypto_acomp_type;
 
+static void acomp_reqchain_done(void *data, int err);
+
 static inline struct acomp_alg *__crypto_acomp_alg(struct crypto_alg *alg)
 {
 	return container_of(alg, struct acomp_alg, calg.base);
@@ -153,6 +155,205 @@ void acomp_request_free(struct acomp_req *req)
 }
 EXPORT_SYMBOL_GPL(acomp_request_free);
 
+static bool acomp_request_has_nondma(struct acomp_req *req)
+{
+	struct acomp_req *r2;
+
+	if (acomp_request_isnondma(req))
+		return true;
+
+	list_for_each_entry(r2, &req->base.list, base.list)
+		if (acomp_request_isnondma(r2))
+			return true;
+
+	return false;
+}
+
+static void acomp_save_req(struct acomp_req *req, crypto_completion_t cplt)
+{
+	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+	struct acomp_req_chain *state = &req->chain;
+
+	if (!acomp_is_async(tfm))
+		return;
+
+	state->compl = req->base.complete;
+	state->data = req->base.data;
+	req->base.complete = cplt;
+	req->base.data = state;
+	state->req0 = req;
+}
+
+static void acomp_restore_req(struct acomp_req_chain *state)
+{
+	struct acomp_req *req = state->req0;
+	struct crypto_acomp *tfm;
+
+	tfm = crypto_acomp_reqtfm(req);
+	if (!acomp_is_async(tfm))
+		return;
+
+	req->base.complete = state->compl;
+	req->base.data = state->data;
+}
+
+static void acomp_reqchain_virt(struct acomp_req_chain *state, int err)
+{
+	struct acomp_req *req = state->cur;
+	unsigned int slen = req->slen;
+	unsigned int dlen = req->dlen;
+
+	req->base.err = err;
+	if (!state->src)
+		return;
+
+	acomp_request_set_virt(req, state->src, state->dst, slen, dlen);
+	state->src = NULL;
+}
+
+static int acomp_reqchain_finish(struct acomp_req_chain *state,
+				 int err, u32 mask)
+{
+	struct acomp_req *req0 = state->req0;
+	struct acomp_req *req = state->cur;
+	struct acomp_req *n;
+
+	acomp_reqchain_virt(state, err);
+
+	if (req != req0)
+		list_add_tail(&req->base.list, &req0->base.list);
+
+	list_for_each_entry_safe(req, n, &state->head, base.list) {
+		list_del_init(&req->base.list);
+
+		req->base.flags &= mask;
+		req->base.complete = acomp_reqchain_done;
+		req->base.data = state;
+		state->cur = req;
+
+		if (acomp_request_isvirt(req)) {
+			unsigned int slen = req->slen;
+			unsigned int dlen = req->dlen;
+			const u8 *svirt = req->svirt;
+			u8 *dvirt = req->dvirt;
+
+			state->src = svirt;
+			state->dst = dvirt;
+
+			sg_init_one(&state->ssg, svirt, slen);
+			sg_init_one(&state->dsg, dvirt, dlen);
+
+			acomp_request_set_params(req, &state->ssg, &state->dsg,
+						 slen, dlen);
+		}
+
+		err = state->op(req);
+
+		if (err == -EINPROGRESS) {
+			if (!list_empty(&state->head))
+				err = -EBUSY;
+			goto out;
+		}
+
+		if (err == -EBUSY)
+			goto out;
+
+		acomp_reqchain_virt(state, err);
+		list_add_tail(&req->base.list, &req0->base.list);
+	}
+
+	acomp_restore_req(state);
+
+out:
+	return err;
+}
+
+static void acomp_reqchain_done(void *data, int err)
+{
+	struct acomp_req_chain *state = data;
+	crypto_completion_t compl = state->compl;
+
+	data = state->data;
+
+	if (err == -EINPROGRESS) {
+		if (!list_empty(&state->head))
+			return;
+		goto notify;
+	}
+
+	err = acomp_reqchain_finish(state, err, CRYPTO_TFM_REQ_MAY_BACKLOG);
+	if (err == -EBUSY)
+		return;
+
+notify:
+	compl(data, err);
+}
+
+static int acomp_do_req_chain(struct acomp_req *req,
+			      int (*op)(struct acomp_req *req))
+{
+	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+	struct acomp_req_chain *state = &req->chain;
+	int err;
+
+	if (crypto_acomp_req_chain(tfm) ||
+	    (!acomp_request_chained(req) && !acomp_request_isvirt(req)))
+		return op(req);
+
+	/*
+	 * There are no in-kernel users that do this.  If and ever
+	 * such users come into being then we could add a fall-back
+	 * path.
+	 */
+	if (acomp_request_has_nondma(req))
+		return -EINVAL;
+
+	if (acomp_is_async(tfm)) {
+		acomp_save_req(req, acomp_reqchain_done);
+		state = req->base.data;
+	}
+
+	state->op = op;
+	state->cur = req;
+	state->src = NULL;
+	INIT_LIST_HEAD(&state->head);
+	list_splice_init(&req->base.list, &state->head);
+
+	if (acomp_request_isvirt(req)) {
+		unsigned int slen = req->slen;
+		unsigned int dlen = req->dlen;
+		const u8 *svirt = req->svirt;
+		u8 *dvirt = req->dvirt;
+
+		state->src = svirt;
+		state->dst = dvirt;
+
+		sg_init_one(&state->ssg, svirt, slen);
+		sg_init_one(&state->dsg, dvirt, dlen);
+
+		acomp_request_set_params(req, &state->ssg, &state->dsg,
+					 slen, dlen);
+	}
+
+	err = op(req);
+	if (err == -EBUSY || err == -EINPROGRESS)
+		return -EBUSY;
+
+	return acomp_reqchain_finish(state, err, ~0);
+}
+
+int crypto_acomp_compress(struct acomp_req *req)
+{
+	return acomp_do_req_chain(req, crypto_acomp_reqtfm(req)->compress);
+}
+EXPORT_SYMBOL_GPL(crypto_acomp_compress);
+
+int crypto_acomp_decompress(struct acomp_req *req)
+{
+	return acomp_do_req_chain(req, crypto_acomp_reqtfm(req)->decompress);
+}
+EXPORT_SYMBOL_GPL(crypto_acomp_decompress);
+
 void comp_prepare_alg(struct comp_alg_common *alg)
 {
 	struct crypto_alg *base = &alg->base;
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index b6d5136e689d..15bb13e47f8b 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -12,10 +12,34 @@
 #include <linux/atomic.h>
 #include <linux/container_of.h>
 #include <linux/crypto.h>
+#include <linux/scatterlist.h>
+#include <linux/types.h>
 
 #define CRYPTO_ACOMP_ALLOC_OUTPUT	0x00000001
+
+/* Set this bit for virtual address instead of SG list. */
+#define CRYPTO_ACOMP_REQ_VIRT		0x00000002
+
+/* Set this bit if the virtual address buffer cannot be used for DMA. */
+#define CRYPTO_ACOMP_REQ_NONDMA		0x00000004
+
 #define CRYPTO_ACOMP_DST_MAX		131072
 
+struct acomp_req;
+
+struct acomp_req_chain {
+	struct list_head head;
+	struct acomp_req *req0;
+	struct acomp_req *cur;
+	int (*op)(struct acomp_req *req);
+	crypto_completion_t compl;
+	void *data;
+	struct scatterlist ssg;
+	struct scatterlist dsg;
+	const u8 *src;
+	u8 *dst;
+};
+
 /**
  * struct acomp_req - asynchronous (de)compression request
  *
@@ -24,14 +48,24 @@
  * @dst:	Destination data
  * @slen:	Size of the input buffer
  * @dlen:	Size of the output buffer and number of bytes produced
+ * @chain:	Private API code data, do not use
  * @__ctx:	Start of private context data
  */
 struct acomp_req {
 	struct crypto_async_request base;
-	struct scatterlist *src;
-	struct scatterlist *dst;
+	union {
+		struct scatterlist *src;
+		const u8 *svirt;
+	};
+	union {
+		struct scatterlist *dst;
+		u8 *dvirt;
+	};
 	unsigned int slen;
 	unsigned int dlen;
+
+	struct acomp_req_chain chain;
+
 	void *__ctx[] CRYPTO_MINALIGN_ATTR;
 };
 
@@ -200,10 +234,14 @@ static inline void acomp_request_set_callback(struct acomp_req *req,
 					      crypto_completion_t cmpl,
 					      void *data)
 {
+	u32 keep = CRYPTO_ACOMP_ALLOC_OUTPUT | CRYPTO_ACOMP_REQ_VIRT;
+
 	req->base.complete = cmpl;
 	req->base.data = data;
-	req->base.flags &= CRYPTO_ACOMP_ALLOC_OUTPUT;
-	req->base.flags |= flgs & ~CRYPTO_ACOMP_ALLOC_OUTPUT;
+	req->base.flags &= keep;
+	req->base.flags |= flgs & ~keep;
+
+	crypto_reqchain_init(&req->base);
 }
 
 /**
@@ -230,11 +268,42 @@ static inline void acomp_request_set_params(struct acomp_req *req,
 	req->slen = slen;
 	req->dlen = dlen;
 
-	req->base.flags &= ~CRYPTO_ACOMP_ALLOC_OUTPUT;
+	req->base.flags &= ~(CRYPTO_ACOMP_ALLOC_OUTPUT | CRYPTO_ACOMP_REQ_VIRT);
 	if (!req->dst)
 		req->base.flags |= CRYPTO_ACOMP_ALLOC_OUTPUT;
 }
 
+/**
+ * acomp_request_set_virt() -- Sets virtual address request parameters
+ *
+ * Sets virtual address parameters required by an acomp operation
+ *
+ * @req:	asynchronous compress request
+ * @src:	virtual address pointer to input buffer
+ * @dst:	virtual address pointer to output buffer.
+ * @slen:	size of the input buffer
+ * @dlen:	size of the output buffer.
+ */
+static inline void acomp_request_set_virt(struct acomp_req *req,
+					  const u8 *src, u8 *dst,
+					  unsigned int slen,
+					  unsigned int dlen)
+{
+	req->svirt = src;
+	req->dvirt = dst;
+	req->slen = slen;
+	req->dlen = dlen;
+
+	req->base.flags &= ~CRYPTO_ACOMP_ALLOC_OUTPUT;
+	req->base.flags |= CRYPTO_ACOMP_REQ_VIRT;
+}
+
+static inline void acomp_request_chain(struct acomp_req *req,
+				       struct acomp_req *head)
+{
+	crypto_request_chain(&req->base, &head->base);
+}
+
 /**
  * crypto_acomp_compress() -- Invoke asynchronous compress operation
  *
@@ -244,10 +313,7 @@ static inline void acomp_request_set_params(struct acomp_req *req,
  *
  * Return:	zero on success; error code in case of error
  */
-static inline int crypto_acomp_compress(struct acomp_req *req)
-{
-	return crypto_acomp_reqtfm(req)->compress(req);
-}
+int crypto_acomp_compress(struct acomp_req *req);
 
 /**
  * crypto_acomp_decompress() -- Invoke asynchronous decompress operation
@@ -258,9 +324,6 @@ static inline int crypto_acomp_compress(struct acomp_req *req)
  *
  * Return:	zero on success; error code in case of error
  */
-static inline int crypto_acomp_decompress(struct acomp_req *req)
-{
-	return crypto_acomp_reqtfm(req)->decompress(req);
-}
+int crypto_acomp_decompress(struct acomp_req *req);
 
 #endif
diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h
index 8831edaafc05..b3b48dea7f2f 100644
--- a/include/crypto/internal/acompress.h
+++ b/include/crypto/internal/acompress.h
@@ -109,4 +109,26 @@ void crypto_unregister_acomp(struct acomp_alg *alg);
 int crypto_register_acomps(struct acomp_alg *algs, int count);
 void crypto_unregister_acomps(struct acomp_alg *algs, int count);
 
+static inline bool acomp_request_chained(struct acomp_req *req)
+{
+	return crypto_request_chained(&req->base);
+}
+
+static inline bool acomp_request_isvirt(struct acomp_req *req)
+{
+	return req->base.flags & CRYPTO_ACOMP_REQ_VIRT;
+}
+
+static inline bool acomp_request_isnondma(struct acomp_req *req)
+{
+	return (req->base.flags &
+		(CRYPTO_ACOMP_REQ_NONDMA | CRYPTO_ACOMP_REQ_VIRT)) ==
+	       (CRYPTO_ACOMP_REQ_NONDMA | CRYPTO_ACOMP_REQ_VIRT);
+}
+
+static inline bool crypto_acomp_req_chain(struct crypto_acomp *tfm)
+{
+	return crypto_tfm_req_chain(&tfm->base);
+}
+
 #endif
-- 
2.39.5




* [v2 PATCH 4/7] crypto: testmgr - Remove NULL dst acomp tests
  2025-03-04  9:25 [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support Herbert Xu
                   ` (2 preceding siblings ...)
  2025-03-04  9:25 ` [v2 PATCH 3/7] crypto: acomp - Add request chaining and virtual addresses Herbert Xu
@ 2025-03-04  9:25 ` Herbert Xu
  2025-03-04  9:25 ` [v2 PATCH 5/7] crypto: scomp - Remove support for most non-trivial destination SG lists Herbert Xu
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Herbert Xu @ 2025-03-04  9:25 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: linux-mm, Yosry Ahmed, Kanchana P Sridhar

In preparation for the partial removal of NULL dst acomp support,
remove the tests for them.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 crypto/testmgr.c | 29 -----------------------------
 1 file changed, 29 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index b69877db3f33..95b973a391cc 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -3522,21 +3522,6 @@ static int test_acomp(struct crypto_acomp *tfm,
 			goto out;
 		}
 
-#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
-		crypto_init_wait(&wait);
-		sg_init_one(&src, input_vec, ilen);
-		acomp_request_set_params(req, &src, NULL, ilen, 0);
-
-		ret = crypto_wait_req(crypto_acomp_compress(req), &wait);
-		if (ret) {
-			pr_err("alg: acomp: compression failed on NULL dst buffer test %d for %s: ret=%d\n",
-			       i + 1, algo, -ret);
-			kfree(input_vec);
-			acomp_request_free(req);
-			goto out;
-		}
-#endif
-
 		kfree(input_vec);
 		acomp_request_free(req);
 	}
@@ -3598,20 +3583,6 @@ static int test_acomp(struct crypto_acomp *tfm,
 			goto out;
 		}
 
-#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
-		crypto_init_wait(&wait);
-		acomp_request_set_params(req, &src, NULL, ilen, 0);
-
-		ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
-		if (ret) {
-			pr_err("alg: acomp: decompression failed on NULL dst buffer test %d for %s: ret=%d\n",
-			       i + 1, algo, -ret);
-			kfree(input_vec);
-			acomp_request_free(req);
-			goto out;
-		}
-#endif
-
 		kfree(input_vec);
 		acomp_request_free(req);
 	}
-- 
2.39.5




* [v2 PATCH 5/7] crypto: scomp - Remove support for most non-trivial destination SG lists
  2025-03-04  9:25 [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support Herbert Xu
                   ` (3 preceding siblings ...)
  2025-03-04  9:25 ` [v2 PATCH 4/7] crypto: testmgr - Remove NULL dst acomp tests Herbert Xu
@ 2025-03-04  9:25 ` Herbert Xu
  2025-03-04  9:25 ` [v2 PATCH 6/7] crypto: scomp - Add chaining and virtual address support Herbert Xu
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Herbert Xu @ 2025-03-04  9:25 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: linux-mm, Yosry Ahmed, Kanchana P Sridhar

As the only user of acomp/scomp uses a trivial single-page SG
list, remove support for everything else in preparation for the
addition of virtual address support.

However, keep support for non-trivial source SG lists as that
user is currently jumping through hoops in order to linearise
the source data.

Limit the source SG linearisation buffer to a single page as
that user never goes over that.  The only other potential user
is also unlikely to exceed that (IPComp) and it can easily do
its own linearisation if necessary.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 crypto/acompress.c                  |  6 --
 crypto/scompress.c                  | 98 ++++++++++++-----------------
 include/crypto/acompress.h          | 12 +---
 include/crypto/internal/scompress.h |  2 -
 4 files changed, 42 insertions(+), 76 deletions(-)

diff --git a/crypto/acompress.c b/crypto/acompress.c
index d2103d4e42cc..8914d0c4cc75 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -73,7 +73,6 @@ static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
 
 	acomp->compress = alg->compress;
 	acomp->decompress = alg->decompress;
-	acomp->dst_free = alg->dst_free;
 	acomp->reqsize = alg->reqsize;
 
 	if (alg->exit)
@@ -146,11 +145,6 @@ void acomp_request_free(struct acomp_req *req)
 	if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
 		crypto_acomp_scomp_free_ctx(req);
 
-	if (req->base.flags & CRYPTO_ACOMP_ALLOC_OUTPUT) {
-		acomp->dst_free(req->dst);
-		req->dst = NULL;
-	}
-
 	__acomp_request_free(req);
 }
 EXPORT_SYMBOL_GPL(acomp_request_free);
diff --git a/crypto/scompress.c b/crypto/scompress.c
index 1cef6bb06a81..d78f307343ac 100644
--- a/crypto/scompress.c
+++ b/crypto/scompress.c
@@ -18,15 +18,18 @@
 #include <linux/seq_file.h>
 #include <linux/slab.h>
 #include <linux/string.h>
-#include <linux/vmalloc.h>
 #include <net/netlink.h>
 
 #include "compress.h"
 
+#define SCOMP_SCRATCH_SIZE	PAGE_SIZE
+
 struct scomp_scratch {
 	spinlock_t	lock;
-	void		*src;
-	void		*dst;
+	union {
+		void *src;
+		unsigned long saddr;
+	};
 };
 
 static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch) = {
@@ -66,10 +69,8 @@ static void crypto_scomp_free_scratches(void)
 	for_each_possible_cpu(i) {
 		scratch = per_cpu_ptr(&scomp_scratch, i);
 
-		vfree(scratch->src);
-		vfree(scratch->dst);
+		free_page(scratch->saddr);
 		scratch->src = NULL;
-		scratch->dst = NULL;
 	}
 }
 
@@ -79,18 +80,14 @@ static int crypto_scomp_alloc_scratches(void)
 	int i;
 
 	for_each_possible_cpu(i) {
-		void *mem;
+		unsigned long mem;
 
 		scratch = per_cpu_ptr(&scomp_scratch, i);
 
-		mem = vmalloc_node(SCOMP_SCRATCH_SIZE, cpu_to_node(i));
+		mem = __get_free_page(GFP_KERNEL);
 		if (!mem)
 			goto error;
-		scratch->src = mem;
-		mem = vmalloc_node(SCOMP_SCRATCH_SIZE, cpu_to_node(i));
-		if (!mem)
-			goto error;
-		scratch->dst = mem;
+		scratch->saddr = mem;
 	}
 	return 0;
 error:
@@ -113,72 +110,58 @@ static int crypto_scomp_init_tfm(struct crypto_tfm *tfm)
 static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
 {
 	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
-	void **tfm_ctx = acomp_tfm_ctx(tfm);
+	struct crypto_scomp **tfm_ctx = acomp_tfm_ctx(tfm);
 	struct crypto_scomp *scomp = *tfm_ctx;
 	void **ctx = acomp_request_ctx(req);
 	struct scomp_scratch *scratch;
+	unsigned int slen = req->slen;
+	unsigned int dlen = req->dlen;
 	void *src, *dst;
-	unsigned int dlen;
 	int ret;
 
-	if (!req->src || !req->slen || req->slen > SCOMP_SCRATCH_SIZE)
+	if (!req->src || !slen)
 		return -EINVAL;
 
-	if (req->dst && !req->dlen)
+	if (req->dst && !dlen)
 		return -EINVAL;
 
-	if (!req->dlen || req->dlen > SCOMP_SCRATCH_SIZE)
-		req->dlen = SCOMP_SCRATCH_SIZE;
+	if (sg_nents(req->dst) > 1)
+		return -ENOSYS;
 
-	dlen = req->dlen;
+	if (req->dst->offset >= PAGE_SIZE)
+		return -ENOSYS;
+
+	if (req->dst->offset + dlen > PAGE_SIZE)
+		dlen = PAGE_SIZE - req->dst->offset;
+
+	if (sg_nents(req->src) == 1 && (!PageHighMem(sg_page(req->src)) ||
+					req->src->offset + slen <= PAGE_SIZE))
+		src = kmap_local_page(sg_page(req->src)) + req->src->offset;
+	else
+		src = scratch->src;
+
+	dst = kmap_local_page(sg_page(req->dst)) + req->dst->offset;
 
 	scratch = raw_cpu_ptr(&scomp_scratch);
 	spin_lock(&scratch->lock);
 
-	if (sg_nents(req->src) == 1 && !PageHighMem(sg_page(req->src))) {
-		src = page_to_virt(sg_page(req->src)) + req->src->offset;
-	} else {
-		scatterwalk_map_and_copy(scratch->src, req->src, 0,
-					 req->slen, 0);
-		src = scratch->src;
-	}
-
-	if (req->dst && sg_nents(req->dst) == 1 && !PageHighMem(sg_page(req->dst)))
-		dst = page_to_virt(sg_page(req->dst)) + req->dst->offset;
-	else
-		dst = scratch->dst;
+	if (src == scratch->src)
+		memcpy_from_sglist(src, req->src, 0, req->slen);
 
 	if (dir)
-		ret = crypto_scomp_compress(scomp, src, req->slen,
+		ret = crypto_scomp_compress(scomp, src, slen,
 					    dst, &req->dlen, *ctx);
 	else
-		ret = crypto_scomp_decompress(scomp, src, req->slen,
+		ret = crypto_scomp_decompress(scomp, src, slen,
 					      dst, &req->dlen, *ctx);
-	if (!ret) {
-		if (!req->dst) {
-			req->dst = sgl_alloc(req->dlen, GFP_ATOMIC, NULL);
-			if (!req->dst) {
-				ret = -ENOMEM;
-				goto out;
-			}
-		} else if (req->dlen > dlen) {
-			ret = -ENOSPC;
-			goto out;
-		}
-		if (dst == scratch->dst) {
-			scatterwalk_map_and_copy(scratch->dst, req->dst, 0,
-						 req->dlen, 1);
-		} else {
-			int nr_pages = DIV_ROUND_UP(req->dst->offset + req->dlen, PAGE_SIZE);
-			int i;
-			struct page *dst_page = sg_page(req->dst);
 
-			for (i = 0; i < nr_pages; i++)
-				flush_dcache_page(dst_page + i);
-		}
-	}
-out:
 	spin_unlock(&scratch->lock);
+
+	if (src != scratch->src)
+		kunmap_local(src);
+	kunmap_local(dst);
+	flush_dcache_page(sg_page(req->dst));
+
 	return ret;
 }
 
@@ -225,7 +208,6 @@ int crypto_init_scomp_ops_async(struct crypto_tfm *tfm)
 
 	crt->compress = scomp_acomp_compress;
 	crt->decompress = scomp_acomp_decompress;
-	crt->dst_free = sgl_free;
 	crt->reqsize = sizeof(void *);
 
 	return 0;
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index 15bb13e47f8b..25e193b0b8b4 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -15,8 +15,6 @@
 #include <linux/scatterlist.h>
 #include <linux/types.h>
 
-#define CRYPTO_ACOMP_ALLOC_OUTPUT	0x00000001
-
 /* Set this bit for virtual address instead of SG list. */
 #define CRYPTO_ACOMP_REQ_VIRT		0x00000002
 
@@ -75,15 +73,12 @@ struct acomp_req {
  *
  * @compress:		Function performs a compress operation
  * @decompress:		Function performs a de-compress operation
- * @dst_free:		Frees destination buffer if allocated inside the
- *			algorithm
  * @reqsize:		Context size for (de)compression requests
  * @base:		Common crypto API algorithm data structure
  */
 struct crypto_acomp {
 	int (*compress)(struct acomp_req *req);
 	int (*decompress)(struct acomp_req *req);
-	void (*dst_free)(struct scatterlist *dst);
 	unsigned int reqsize;
 	struct crypto_tfm base;
 };
@@ -234,7 +229,7 @@ static inline void acomp_request_set_callback(struct acomp_req *req,
 					      crypto_completion_t cmpl,
 					      void *data)
 {
-	u32 keep = CRYPTO_ACOMP_ALLOC_OUTPUT | CRYPTO_ACOMP_REQ_VIRT;
+	u32 keep = CRYPTO_ACOMP_REQ_VIRT;
 
 	req->base.complete = cmpl;
 	req->base.data = data;
@@ -268,9 +263,7 @@ static inline void acomp_request_set_params(struct acomp_req *req,
 	req->slen = slen;
 	req->dlen = dlen;
 
-	req->base.flags &= ~(CRYPTO_ACOMP_ALLOC_OUTPUT | CRYPTO_ACOMP_REQ_VIRT);
-	if (!req->dst)
-		req->base.flags |= CRYPTO_ACOMP_ALLOC_OUTPUT;
+	req->base.flags &= ~CRYPTO_ACOMP_REQ_VIRT;
 }
 
 /**
@@ -294,7 +287,6 @@ static inline void acomp_request_set_virt(struct acomp_req *req,
 	req->slen = slen;
 	req->dlen = dlen;
 
-	req->base.flags &= ~CRYPTO_ACOMP_ALLOC_OUTPUT;
 	req->base.flags |= CRYPTO_ACOMP_REQ_VIRT;
 }
 
diff --git a/include/crypto/internal/scompress.h b/include/crypto/internal/scompress.h
index 6ba9974df7d3..2a6b15c0a32d 100644
--- a/include/crypto/internal/scompress.h
+++ b/include/crypto/internal/scompress.h
@@ -12,8 +12,6 @@
 #include <crypto/acompress.h>
 #include <crypto/algapi.h>
 
-#define SCOMP_SCRATCH_SIZE	131072
-
 struct acomp_req;
 
 struct crypto_scomp {
-- 
2.39.5



^ permalink raw reply	[flat|nested] 16+ messages in thread

* [v2 PATCH 6/7] crypto: scomp - Add chaining and virtual address support
  2025-03-04  9:25 [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support Herbert Xu
                   ` (4 preceding siblings ...)
  2025-03-04  9:25 ` [v2 PATCH 5/7] crypto: scomp - Remove support for most non-trivial destination SG lists Herbert Xu
@ 2025-03-04  9:25 ` Herbert Xu
  2025-03-04  9:25 ` [v2 PATCH 7/7] crypto: acomp - Move stream management into scomp layer Herbert Xu
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Herbert Xu @ 2025-03-04  9:25 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: linux-mm, Yosry Ahmed, Kanchana P Sridhar

Add chaining and virtual address support to all scomp algorithms.
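The chaining semantics added here (process the head request first and
return its status; process every chained request unconditionally and
record each one's error in its own request) can be sketched in
userspace C.  struct req, do_one() and do_chain() below are
illustrative stand-ins for this sketch, not the kernel API:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for an acomp request on a chain. */
struct req {
	struct req *next;	/* stands in for base.list chaining */
	int input;		/* stand-in for the request payload */
	int err;		/* per-request completion status */
};

/* Stand-in for one synchronous (de)compression operation. */
static int do_one(struct req *r)
{
	return r->input < 0 ? -22 /* -EINVAL */ : 0;
}

static int do_chain(struct req *head)
{
	struct req *r;
	int err = do_one(head);

	head->err = err;
	for (r = head->next; r; r = r->next)
		r->err = do_one(r);	/* later failures don't stop the walk */

	return err;	/* the caller sees the head request's status */
}
```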

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 crypto/scompress.c | 82 +++++++++++++++++++++++++++++++---------------
 1 file changed, 56 insertions(+), 26 deletions(-)

diff --git a/crypto/scompress.c b/crypto/scompress.c
index d78f307343ac..8ef2d71ad908 100644
--- a/crypto/scompress.c
+++ b/crypto/scompress.c
@@ -116,7 +116,8 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
 	struct scomp_scratch *scratch;
 	unsigned int slen = req->slen;
 	unsigned int dlen = req->dlen;
-	void *src, *dst;
+	const u8 *src;
+	u8 *dst;
 	int ret;
 
 	if (!req->src || !slen)
@@ -125,28 +126,32 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
 	if (req->dst && !dlen)
 		return -EINVAL;
 
-	if (sg_nents(req->dst) > 1)
-		return -ENOSYS;
-
-	if (req->dst->offset >= PAGE_SIZE)
-		return -ENOSYS;
-
-	if (req->dst->offset + dlen > PAGE_SIZE)
-		dlen = PAGE_SIZE - req->dst->offset;
-
-	if (sg_nents(req->src) == 1 && (!PageHighMem(sg_page(req->src)) ||
-					req->src->offset + slen <= PAGE_SIZE))
-		src = kmap_local_page(sg_page(req->src)) + req->src->offset;
-	else
-		src = scratch->src;
-
-	dst = kmap_local_page(sg_page(req->dst)) + req->dst->offset;
-
 	scratch = raw_cpu_ptr(&scomp_scratch);
+
+	if (acomp_request_isvirt(req)) {
+		src = req->svirt;
+		dst = req->dvirt;
+	} else if (sg_nents(req->dst) > 1)
+		return -ENOSYS;
+	else if (req->dst->offset >= PAGE_SIZE)
+		return -ENOSYS;
+	else {
+		if (req->dst->offset + dlen > PAGE_SIZE)
+			dlen = PAGE_SIZE - req->dst->offset;
+
+		src = scratch->src;
+		if (sg_nents(req->src) == 1 &&
+		    (!PageHighMem(sg_page(req->src)) ||
+		     req->src->offset + slen <= PAGE_SIZE))
+			src = kmap_local_page(sg_page(req->src)) + req->src->offset;
+
+		dst = kmap_local_page(sg_page(req->dst)) + req->dst->offset;
+	}
+
 	spin_lock(&scratch->lock);
 
 	if (src == scratch->src)
-		memcpy_from_sglist(src, req->src, 0, req->slen);
+		memcpy_from_sglist(scratch->src, req->src, 0, req->slen);
 
 	if (dir)
 		ret = crypto_scomp_compress(scomp, src, slen,
@@ -157,22 +162,38 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
 
 	spin_unlock(&scratch->lock);
 
-	if (src != scratch->src)
-		kunmap_local(src);
-	kunmap_local(dst);
-	flush_dcache_page(sg_page(req->dst));
+	if (!acomp_request_isvirt(req)) {
+		if (src != scratch->src)
+			kunmap_local(src);
+		kunmap_local(dst);
+		flush_dcache_page(sg_page(req->dst));
+	}
 
 	return ret;
 }
 
+static int scomp_acomp_chain(struct acomp_req *req, int dir)
+{
+	struct acomp_req *r2;
+	int err;
+
+	err = scomp_acomp_comp_decomp(req, dir);
+	req->base.err = err;
+
+	list_for_each_entry(r2, &req->base.list, base.list)
+		r2->base.err = scomp_acomp_comp_decomp(r2, dir);
+
+	return err;
+}
+
 static int scomp_acomp_compress(struct acomp_req *req)
 {
-	return scomp_acomp_comp_decomp(req, 1);
+	return scomp_acomp_chain(req, 1);
 }
 
 static int scomp_acomp_decompress(struct acomp_req *req)
 {
-	return scomp_acomp_comp_decomp(req, 0);
+	return scomp_acomp_chain(req, 0);
 }
 
 static void crypto_exit_scomp_ops_async(struct crypto_tfm *tfm)
@@ -259,12 +280,21 @@ static const struct crypto_type crypto_scomp_type = {
 	.tfmsize = offsetof(struct crypto_scomp, base),
 };
 
-int crypto_register_scomp(struct scomp_alg *alg)
+static void scomp_prepare_alg(struct scomp_alg *alg)
 {
 	struct crypto_alg *base = &alg->calg.base;
 
 	comp_prepare_alg(&alg->calg);
 
+	base->cra_flags |= CRYPTO_ALG_REQ_CHAIN;
+}
+
+int crypto_register_scomp(struct scomp_alg *alg)
+{
+	struct crypto_alg *base = &alg->calg.base;
+
+	scomp_prepare_alg(alg);
+
 	base->cra_type = &crypto_scomp_type;
 	base->cra_flags |= CRYPTO_ALG_TYPE_SCOMPRESS;
 
-- 
2.39.5




* [v2 PATCH 7/7] crypto: acomp - Move stream management into scomp layer
  2025-03-04  9:25 [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support Herbert Xu
                   ` (5 preceding siblings ...)
  2025-03-04  9:25 ` [v2 PATCH 6/7] crypto: scomp - Add chaining and virtual address support Herbert Xu
@ 2025-03-04  9:25 ` Herbert Xu
  2025-03-05  1:46 ` [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support Jonathan Cameron
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Herbert Xu @ 2025-03-04  9:25 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: linux-mm, Yosry Ahmed, Kanchana P Sridhar

Rather than allocating the stream memory in the request object,
move it into a per-cpu buffer managed by scomp.  This relieves
users of having to manage large request objects and of setting
up their own per-cpu buffers in order to do so.
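The allocation pattern (one working context per possible CPU, with
unwinding on partial failure) can be sketched in userspace C.  This is
a rough model under the assumption of a fixed CPU count; the names
alloc_streams()/free_streams() mirror the patch but the code below is
not the kernel implementation:

```c
#include <assert.h>
#include <stdlib.h>

#define NR_CPUS 4	/* stand-in for the possible-CPU mask */

/* Illustrative model of a per-CPU stream: a lock (modelled as an int
 * here) serialising use of the algorithm's working state. */
struct stream {
	int locked;
	void *ctx;	/* algorithm working state for this CPU */
};

static struct stream *alloc_streams(size_t ctx_size)
{
	struct stream *s = calloc(NR_CPUS, sizeof(*s));
	int i;

	if (!s)
		return NULL;
	for (i = 0; i < NR_CPUS; i++) {
		s[i].ctx = calloc(1, ctx_size);
		if (!s[i].ctx) {	/* unwind on partial failure */
			while (i--)
				free(s[i].ctx);
			free(s);
			return NULL;
		}
	}
	return s;
}

static void free_streams(struct stream *s)
{
	int i;

	for (i = 0; i < NR_CPUS; i++)
		free(s[i].ctx);
	free(s);
}
```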

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 crypto/acompress.c                  | 25 --------
 crypto/compress.h                   |  2 -
 crypto/scompress.c                  | 90 +++++++++++++++++++----------
 include/crypto/acompress.h          | 25 +++++++-
 include/crypto/internal/acompress.h | 17 +-----
 include/crypto/internal/scompress.h | 12 +---
 6 files changed, 84 insertions(+), 87 deletions(-)

diff --git a/crypto/acompress.c b/crypto/acompress.c
index 8914d0c4cc75..f8e18f32478e 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -124,31 +124,6 @@ struct crypto_acomp *crypto_alloc_acomp_node(const char *alg_name, u32 type,
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_acomp_node);
 
-struct acomp_req *acomp_request_alloc(struct crypto_acomp *acomp)
-{
-	struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
-	struct acomp_req *req;
-
-	req = __acomp_request_alloc(acomp);
-	if (req && (tfm->__crt_alg->cra_type != &crypto_acomp_type))
-		return crypto_acomp_scomp_alloc_ctx(req);
-
-	return req;
-}
-EXPORT_SYMBOL_GPL(acomp_request_alloc);
-
-void acomp_request_free(struct acomp_req *req)
-{
-	struct crypto_acomp *acomp = crypto_acomp_reqtfm(req);
-	struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
-
-	if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
-		crypto_acomp_scomp_free_ctx(req);
-
-	__acomp_request_free(req);
-}
-EXPORT_SYMBOL_GPL(acomp_request_free);
-
 static bool acomp_request_has_nondma(struct acomp_req *req)
 {
 	struct acomp_req *r2;
diff --git a/crypto/compress.h b/crypto/compress.h
index c3cedfb5e606..f7737a1fcbbd 100644
--- a/crypto/compress.h
+++ b/crypto/compress.h
@@ -15,8 +15,6 @@ struct acomp_req;
 struct comp_alg_common;
 
 int crypto_init_scomp_ops_async(struct crypto_tfm *tfm);
-struct acomp_req *crypto_acomp_scomp_alloc_ctx(struct acomp_req *req);
-void crypto_acomp_scomp_free_ctx(struct acomp_req *req);
 
 void comp_prepare_alg(struct comp_alg_common *alg);
 
diff --git a/crypto/scompress.c b/crypto/scompress.c
index 8ef2d71ad908..d9cf1696bff5 100644
--- a/crypto/scompress.c
+++ b/crypto/scompress.c
@@ -95,13 +95,62 @@ static int crypto_scomp_alloc_scratches(void)
 	return -ENOMEM;
 }
 
+static void scomp_free_streams(struct scomp_alg *alg)
+{
+	struct crypto_acomp_stream __percpu *stream = alg->stream;
+	int i;
+
+	for_each_possible_cpu(i) {
+		struct crypto_acomp_stream *ps = per_cpu_ptr(stream, i);
+
+		if (!ps->ctx)
+			break;
+
+		alg->free_ctx(ps);
+	}
+
+	free_percpu(stream);
+}
+
+static int scomp_alloc_streams(struct scomp_alg *alg)
+{
+	struct crypto_acomp_stream __percpu *stream;
+	int i;
+
+	stream = alloc_percpu(struct crypto_acomp_stream);
+	if (!stream)
+		return -ENOMEM;
+
+	for_each_possible_cpu(i) {
+		struct crypto_acomp_stream *ps = per_cpu_ptr(stream, i);
+
+		ps->ctx = alg->alloc_ctx();
+		if (IS_ERR(ps->ctx)) {
+			scomp_free_streams(alg);
+			return PTR_ERR(ps->ctx);
+		}
+
+		spin_lock_init(&ps->lock);
+	}
+
+	alg->stream = stream;
+	return 0;
+}
+
 static int crypto_scomp_init_tfm(struct crypto_tfm *tfm)
 {
+	struct scomp_alg *alg = crypto_scomp_alg(__crypto_scomp_tfm(tfm));
 	int ret = 0;
 
 	mutex_lock(&scomp_lock);
+	if (!alg->stream) {
+		ret = scomp_alloc_streams(alg);
+		if (ret)
+			goto unlock;
+	}
 	if (!scomp_scratch_users++)
 		ret = crypto_scomp_alloc_scratches();
+unlock:
 	mutex_unlock(&scomp_lock);
 
 	return ret;
@@ -112,7 +161,7 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
 	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
 	struct crypto_scomp **tfm_ctx = acomp_tfm_ctx(tfm);
 	struct crypto_scomp *scomp = *tfm_ctx;
-	void **ctx = acomp_request_ctx(req);
+	struct crypto_acomp_stream *stream;
 	struct scomp_scratch *scratch;
 	unsigned int slen = req->slen;
 	unsigned int dlen = req->dlen;
@@ -148,18 +197,22 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
 		dst = kmap_local_page(sg_page(req->dst)) + req->dst->offset;
 	}
 
+	stream = raw_cpu_ptr(crypto_scomp_alg(scomp)->stream);
+
 	spin_lock(&scratch->lock);
+	spin_lock(&stream->lock);
 
 	if (src == scratch->src)
 		memcpy_from_sglist(scratch->src, req->src, 0, req->slen);
 
 	if (dir)
 		ret = crypto_scomp_compress(scomp, src, slen,
-					    dst, &req->dlen, *ctx);
+					    dst, &req->dlen, stream->ctx);
 	else
 		ret = crypto_scomp_decompress(scomp, src, slen,
-					      dst, &req->dlen, *ctx);
+					      dst, &req->dlen, stream->ctx);
 
+	spin_unlock(&stream->lock);
 	spin_unlock(&scratch->lock);
 
 	if (!acomp_request_isvirt(req)) {
@@ -234,40 +287,15 @@ int crypto_init_scomp_ops_async(struct crypto_tfm *tfm)
 	return 0;
 }
 
-struct acomp_req *crypto_acomp_scomp_alloc_ctx(struct acomp_req *req)
+static void crypto_scomp_destroy(struct crypto_alg *alg)
 {
-	struct crypto_acomp *acomp = crypto_acomp_reqtfm(req);
-	struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
-	struct crypto_scomp **tfm_ctx = crypto_tfm_ctx(tfm);
-	struct crypto_scomp *scomp = *tfm_ctx;
-	void *ctx;
-
-	ctx = crypto_scomp_alloc_ctx(scomp);
-	if (IS_ERR(ctx)) {
-		kfree(req);
-		return NULL;
-	}
-
-	*req->__ctx = ctx;
-
-	return req;
-}
-
-void crypto_acomp_scomp_free_ctx(struct acomp_req *req)
-{
-	struct crypto_acomp *acomp = crypto_acomp_reqtfm(req);
-	struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
-	struct crypto_scomp **tfm_ctx = crypto_tfm_ctx(tfm);
-	struct crypto_scomp *scomp = *tfm_ctx;
-	void *ctx = *req->__ctx;
-
-	if (ctx)
-		crypto_scomp_free_ctx(scomp, ctx);
+	scomp_free_streams(__crypto_scomp_alg(alg));
 }
 
 static const struct crypto_type crypto_scomp_type = {
 	.extsize = crypto_alg_extsize,
 	.init_tfm = crypto_scomp_init_tfm,
+	.destroy = crypto_scomp_destroy,
 #ifdef CONFIG_PROC_FS
 	.show = crypto_scomp_show,
 #endif
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index 25e193b0b8b4..5e0602d2c827 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -10,9 +10,12 @@
 #define _CRYPTO_ACOMP_H
 
 #include <linux/atomic.h>
+#include <linux/compiler_types.h>
 #include <linux/container_of.h>
 #include <linux/crypto.h>
 #include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/spinlock_types.h>
 #include <linux/types.h>
 
 /* Set this bit for virtual address instead of SG list. */
@@ -83,8 +86,14 @@ struct crypto_acomp {
 	struct crypto_tfm base;
 };
 
+struct crypto_acomp_stream {
+	spinlock_t lock;
+	void *ctx;
+};
+
 #define COMP_ALG_COMMON {			\
 	struct crypto_alg base;			\
+	struct crypto_acomp_stream __percpu *stream;	\
 }
 struct comp_alg_common COMP_ALG_COMMON;
 
@@ -202,7 +211,16 @@ static inline int crypto_has_acomp(const char *alg_name, u32 type, u32 mask)
  *
  * Return:	allocated handle in case of success or NULL in case of an error
  */
-struct acomp_req *acomp_request_alloc(struct crypto_acomp *tfm);
+static inline struct acomp_req *acomp_request_alloc_noprof(struct crypto_acomp *tfm)
+{
+	struct acomp_req *req;
+
+	req = kzalloc_noprof(sizeof(*req) + crypto_acomp_reqsize(tfm), GFP_KERNEL);
+	if (likely(req))
+		acomp_request_set_tfm(req, tfm);
+	return req;
+}
+#define acomp_request_alloc(...)	alloc_hooks(acomp_request_alloc_noprof(__VA_ARGS__))
 
 /**
  * acomp_request_free() -- zeroize and free asynchronous (de)compression
@@ -211,7 +229,10 @@ struct acomp_req *acomp_request_alloc(struct crypto_acomp *tfm);
  *
  * @req:	request to free
  */
-void acomp_request_free(struct acomp_req *req);
+static inline void acomp_request_free(struct acomp_req *req)
+{
+	kfree_sensitive(req);
+}
 
 /**
  * acomp_request_set_callback() -- Sets an asynchronous callback
diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h
index b3b48dea7f2f..2877053286e3 100644
--- a/include/crypto/internal/acompress.h
+++ b/include/crypto/internal/acompress.h
@@ -32,6 +32,7 @@
  *
  * @reqsize:	Context size for (de)compression requests
  * @base:	Common crypto API algorithm data structure
+ * @stream:	Per-cpu memory for algorithm
  * @calg:	Cmonn algorithm data structure shared with scomp
  */
 struct acomp_alg {
@@ -68,22 +69,6 @@ static inline void acomp_request_complete(struct acomp_req *req,
 	crypto_request_complete(&req->base, err);
 }
 
-static inline struct acomp_req *__acomp_request_alloc_noprof(struct crypto_acomp *tfm)
-{
-	struct acomp_req *req;
-
-	req = kzalloc_noprof(sizeof(*req) + crypto_acomp_reqsize(tfm), GFP_KERNEL);
-	if (likely(req))
-		acomp_request_set_tfm(req, tfm);
-	return req;
-}
-#define __acomp_request_alloc(...)	alloc_hooks(__acomp_request_alloc_noprof(__VA_ARGS__))
-
-static inline void __acomp_request_free(struct acomp_req *req)
-{
-	kfree_sensitive(req);
-}
-
 /**
  * crypto_register_acomp() -- Register asynchronous compression algorithm
  *
diff --git a/include/crypto/internal/scompress.h b/include/crypto/internal/scompress.h
index 2a6b15c0a32d..f25aa2ea3b48 100644
--- a/include/crypto/internal/scompress.h
+++ b/include/crypto/internal/scompress.h
@@ -26,6 +26,7 @@ struct crypto_scomp {
  * @compress:	Function performs a compress operation
  * @decompress:	Function performs a de-compress operation
  * @base:	Common crypto API algorithm data structure
+ * @stream:	Per-cpu memory for algorithm
  * @calg:	Cmonn algorithm data structure shared with acomp
  */
 struct scomp_alg {
@@ -69,17 +70,6 @@ static inline struct scomp_alg *crypto_scomp_alg(struct crypto_scomp *tfm)
 	return __crypto_scomp_alg(crypto_scomp_tfm(tfm)->__crt_alg);
 }
 
-static inline void *crypto_scomp_alloc_ctx(struct crypto_scomp *tfm)
-{
-	return crypto_scomp_alg(tfm)->alloc_ctx();
-}
-
-static inline void crypto_scomp_free_ctx(struct crypto_scomp *tfm,
-					 void *ctx)
-{
-	return crypto_scomp_alg(tfm)->free_ctx(ctx);
-}
-
 static inline int crypto_scomp_compress(struct crypto_scomp *tfm,
 					const u8 *src, unsigned int slen,
 					u8 *dst, unsigned int *dlen, void *ctx)
-- 
2.39.5




* RE: [v2 PATCH 3/7] crypto: acomp - Add request chaining and virtual addresses
  2025-03-04  9:25 ` [v2 PATCH 3/7] crypto: acomp - Add request chaining and virtual addresses Herbert Xu
@ 2025-03-04 21:59   ` Sridhar, Kanchana P
  2025-03-05  1:51     ` Herbert Xu
  0 siblings, 1 reply; 16+ messages in thread
From: Sridhar, Kanchana P @ 2025-03-04 21:59 UTC (permalink / raw)
  To: Herbert Xu, Linux Crypto Mailing List
  Cc: linux-mm, Yosry Ahmed, Sridhar, Kanchana P


> -----Original Message-----
> From: Herbert Xu <herbert@gondor.apana.org.au>
> Sent: Tuesday, March 4, 2025 1:25 AM
> To: Linux Crypto Mailing List <linux-crypto@vger.kernel.org>
> Cc: linux-mm@kvack.org; Yosry Ahmed <yosry.ahmed@linux.dev>; Sridhar,
> Kanchana P <kanchana.p.sridhar@intel.com>
> Subject: [v2 PATCH 3/7] crypto: acomp - Add request chaining and virtual
> addresses
> 
> This adds request chaining and virtual address support to the
> acomp interface.
> 
> It is identical to the ahash interface, except that a new flag
> CRYPTO_ACOMP_REQ_NONDMA has been added to indicate that the
> virtual addresses are not suitable for DMA.  This is because
> all existing and potential acomp users can provide memory that
> is suitable for DMA so there is no need for a fall-back copy
> path.
> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
> ---
>  crypto/acompress.c                  | 201 ++++++++++++++++++++++++++++
>  include/crypto/acompress.h          |  89 ++++++++++--
>  include/crypto/internal/acompress.h |  22 +++
>  3 files changed, 299 insertions(+), 13 deletions(-)
> 
> diff --git a/crypto/acompress.c b/crypto/acompress.c
> index 30176316140a..d2103d4e42cc 100644
> --- a/crypto/acompress.c
> +++ b/crypto/acompress.c
> @@ -23,6 +23,8 @@ struct crypto_scomp;
> 
>  static const struct crypto_type crypto_acomp_type;
> 
> +static void acomp_reqchain_done(void *data, int err);
> +
>  static inline struct acomp_alg *__crypto_acomp_alg(struct crypto_alg *alg)
>  {
>  	return container_of(alg, struct acomp_alg, calg.base);
> @@ -153,6 +155,205 @@ void acomp_request_free(struct acomp_req *req)
>  }
>  EXPORT_SYMBOL_GPL(acomp_request_free);
> 
> +static bool acomp_request_has_nondma(struct acomp_req *req)
> +{
> +	struct acomp_req *r2;
> +
> +	if (acomp_request_isnondma(req))
> +		return true;
> +
> +	list_for_each_entry(r2, &req->base.list, base.list)
> +		if (acomp_request_isnondma(r2))
> +			return true;
> +
> +	return false;
> +}
> +
> +static void acomp_save_req(struct acomp_req *req, crypto_completion_t cplt)
> +{
> +	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
> +	struct acomp_req_chain *state = &req->chain;
> +
> +	if (!acomp_is_async(tfm))
> +		return;
> +
> +	state->compl = req->base.complete;
> +	state->data = req->base.data;
> +	req->base.complete = cplt;
> +	req->base.data = state;
> +	state->req0 = req;
> +}
> +
> +static void acomp_restore_req(struct acomp_req_chain *state)
> +{
> +	struct acomp_req *req = state->req0;
> +	struct crypto_acomp *tfm;
> +
> +	tfm = crypto_acomp_reqtfm(req);
> +	if (!acomp_is_async(tfm))
> +		return;
> +
> +	req->base.complete = state->compl;
> +	req->base.data = state->data;
> +}
> +
> +static void acomp_reqchain_virt(struct acomp_req_chain *state, int err)
> +{
> +	struct acomp_req *req = state->cur;
> +	unsigned int slen = req->slen;
> +	unsigned int dlen = req->dlen;
> +
> +	req->base.err = err;
> +	if (!state->src)
> +		return;
> +
> +	acomp_request_set_virt(req, state->src, state->dst, slen, dlen);
> +	state->src = NULL;
> +}
> +
> +static int acomp_reqchain_finish(struct acomp_req_chain *state,
> +				 int err, u32 mask)
> +{
> +	struct acomp_req *req0 = state->req0;
> +	struct acomp_req *req = state->cur;
> +	struct acomp_req *n;
> +
> +	acomp_reqchain_virt(state, err);

Unless I am missing something, this seems to be future-proofing, based
on the initial checks you've implemented in acomp_do_req_chain().

> +
> +	if (req != req0)
> +		list_add_tail(&req->base.list, &req0->base.list);
> +
> +	list_for_each_entry_safe(req, n, &state->head, base.list) {
> +		list_del_init(&req->base.list);
> +
> +		req->base.flags &= mask;
> +		req->base.complete = acomp_reqchain_done;
> +		req->base.data = state;
> +		state->cur = req;
> +
> +		if (acomp_request_isvirt(req)) {
> +			unsigned int slen = req->slen;
> +			unsigned int dlen = req->dlen;
> +			const u8 *svirt = req->svirt;
> +			u8 *dvirt = req->dvirt;
> +
> +			state->src = svirt;
> +			state->dst = dvirt;
> +
> +			sg_init_one(&state->ssg, svirt, slen);
> +			sg_init_one(&state->dsg, dvirt, dlen);
> +
> +			acomp_request_set_params(req, &state->ssg, &state->dsg,
> +						 slen, dlen);
> +		}
> +
> +		err = state->op(req);
> +
> +		if (err == -EINPROGRESS) {
> +			if (!list_empty(&state->head))
> +				err = -EBUSY;
> +			goto out;
> +		}
> +
> +		if (err == -EBUSY)
> +			goto out;

This is a fully synchronous way of processing the request chain, and
will not work with iaa_crypto's submit-then-poll-for-completions
paradigm, which is essential for processing the compressions in
parallel in hardware.  Without parallelism, we will not derive the
full benefits of IAA.

Would you be willing to incorporate the acomp_do_async_req_chain()
that I implemented in v8 of my patch series [1], which enables the
iaa_crypto driver's asynchronous processing of the request chain for
parallelism, and/or adapt your implementation to enable this?

Better still, if you agree that the virtual address support is entirely
future-proofing, I would ask you to consider reviewing and improving my
well-validated implementation of request chaining in [1], with the goal
of merging it with parallel/serial support for the request chain, and
introducing virtual address support at a later time.

[1] https://patchwork.kernel.org/project/linux-mm/patch/20250303084724.6490-2-kanchana.p.sridhar@intel.com/


> +
> +		acomp_reqchain_virt(state, err);

Is this really needed? From what I can understand, the main thing this
call does for the implementation is set req->base.err.  That seems like
compute overhead (which matters for kernel users like zswap) just to
set the request's error status.

In general, the virtual address handling is a bit confusing, since you
check right up front in acomp_do_req_chain():
"if (acomp_request_has_nondma(req)) return -EINVAL".

Imo, it appears that this is all we need until there are in-kernel
users that require the virtual address future-proofing.  Please correct
me if I am missing something significant.

Also, is my understanding correct that the zswap code that sets up the
SG lists for compress/decompress is not impacted by this?


> +		list_add_tail(&req->base.list, &req0->base.list);
> +	}
> +
> +	acomp_restore_req(state);
> +
> +out:
> +	return err;
> +}
> +
> +static void acomp_reqchain_done(void *data, int err)
> +{
> +	struct acomp_req_chain *state = data;
> +	crypto_completion_t compl = state->compl;
> +
> +	data = state->data;
> +
> +	if (err == -EINPROGRESS) {
> +		if (!list_empty(&state->head))
> +			return;
> +		goto notify;
> +	}
> +
> +	err = acomp_reqchain_finish(state, err, CRYPTO_TFM_REQ_MAY_BACKLOG);
> +	if (err == -EBUSY)
> +		return;
> +
> +notify:
> +	compl(data, err);
> +}
> +
> +static int acomp_do_req_chain(struct acomp_req *req,
> +			      int (*op)(struct acomp_req *req))
> +{
> +	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
> +	struct acomp_req_chain *state = &req->chain;
> +	int err;
> +
> +	if (crypto_acomp_req_chain(tfm) ||
> +	    (!acomp_request_chained(req) && !acomp_request_isvirt(req)))
> +		return op(req);

Isn't this a bug? If an algorithm opts in and sets CRYPTO_ALG_REQ_CHAIN
in its cra_flags, the above statement will always be true, "op" will be
called only on the first request, and this will return.  Am I missing
something?

> +
> +	/*
> +	 * There are no in-kernel users that do this.  If and ever
> +	 * such users come into being then we could add a fall-back
> +	 * path.
> +	 */
> +	if (acomp_request_has_nondma(req))
> +		return -EINVAL;

As mentioned earlier, is this sufficient for now, and is the virtual address
support really future-proofing?

> +
> +	if (acomp_is_async(tfm)) {
> +		acomp_save_req(req, acomp_reqchain_done);
> +		state = req->base.data;
> +	}
> +
> +	state->op = op;
> +	state->cur = req;
> +	state->src = NULL;
> +	INIT_LIST_HEAD(&state->head);
> +	list_splice_init(&req->base.list, &state->head);
> +
> +	if (acomp_request_isvirt(req)) {

Based on the above check for acomp_request_has_nondma(), it should never
get here, IIUC?

In general, can you shed some light on how you envision zswap code to
change based on this patchset?

Thanks,
Kanchana

> +		unsigned int slen = req->slen;
> +		unsigned int dlen = req->dlen;
> +		const u8 *svirt = req->svirt;
> +		u8 *dvirt = req->dvirt;
> +
> +		state->src = svirt;
> +		state->dst = dvirt;
> +
> +		sg_init_one(&state->ssg, svirt, slen);
> +		sg_init_one(&state->dsg, dvirt, dlen);
> +
> +		acomp_request_set_params(req, &state->ssg, &state->dsg,
> +					 slen, dlen);
> +	}
> +
> +	err = op(req);
> +	if (err == -EBUSY || err == -EINPROGRESS)
> +		return -EBUSY;
> +
> +	return acomp_reqchain_finish(state, err, ~0);
> +}
> +
> +int crypto_acomp_compress(struct acomp_req *req)
> +{
> +	return acomp_do_req_chain(req, crypto_acomp_reqtfm(req)->compress);
> +}
> +EXPORT_SYMBOL_GPL(crypto_acomp_compress);
> +
> +int crypto_acomp_decompress(struct acomp_req *req)
> +{
> +	return acomp_do_req_chain(req, crypto_acomp_reqtfm(req)->decompress);
> +}
> +EXPORT_SYMBOL_GPL(crypto_acomp_decompress);
> +
>  void comp_prepare_alg(struct comp_alg_common *alg)
>  {
>  	struct crypto_alg *base = &alg->base;
> diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
> index b6d5136e689d..15bb13e47f8b 100644
> --- a/include/crypto/acompress.h
> +++ b/include/crypto/acompress.h
> @@ -12,10 +12,34 @@
>  #include <linux/atomic.h>
>  #include <linux/container_of.h>
>  #include <linux/crypto.h>
> +#include <linux/scatterlist.h>
> +#include <linux/types.h>
> 
>  #define CRYPTO_ACOMP_ALLOC_OUTPUT	0x00000001
> +
> +/* Set this bit for virtual address instead of SG list. */
> +#define CRYPTO_ACOMP_REQ_VIRT		0x00000002
> +
> +/* Set this bit for if virtual address buffer cannot be used for DMA. */
> +#define CRYPTO_ACOMP_REQ_NONDMA		0x00000004
> +
>  #define CRYPTO_ACOMP_DST_MAX		131072
> 
> +struct acomp_req;
> +
> +struct acomp_req_chain {
> +	struct list_head head;
> +	struct acomp_req *req0;
> +	struct acomp_req *cur;
> +	int (*op)(struct acomp_req *req);
> +	crypto_completion_t compl;
> +	void *data;
> +	struct scatterlist ssg;
> +	struct scatterlist dsg;
> +	const u8 *src;
> +	u8 *dst;
> +};
> +
>  /**
>   * struct acomp_req - asynchronous (de)compression request
>   *
> @@ -24,14 +48,24 @@
>   * @dst:	Destination data
>   * @slen:	Size of the input buffer
>   * @dlen:	Size of the output buffer and number of bytes produced
> + * @chain:	Private API code data, do not use
>   * @__ctx:	Start of private context data
>   */
>  struct acomp_req {
>  	struct crypto_async_request base;
> -	struct scatterlist *src;
> -	struct scatterlist *dst;
> +	union {
> +		struct scatterlist *src;
> +		const u8 *svirt;
> +	};
> +	union {
> +		struct scatterlist *dst;
> +		u8 *dvirt;
> +	};
>  	unsigned int slen;
>  	unsigned int dlen;
> +
> +	struct acomp_req_chain chain;
> +
>  	void *__ctx[] CRYPTO_MINALIGN_ATTR;
>  };
> 
> @@ -200,10 +234,14 @@ static inline void acomp_request_set_callback(struct acomp_req *req,
>  					      crypto_completion_t cmpl,
>  					      void *data)
>  {
> +	u32 keep = CRYPTO_ACOMP_ALLOC_OUTPUT | CRYPTO_ACOMP_REQ_VIRT;
> +
>  	req->base.complete = cmpl;
>  	req->base.data = data;
> -	req->base.flags &= CRYPTO_ACOMP_ALLOC_OUTPUT;
> -	req->base.flags |= flgs & ~CRYPTO_ACOMP_ALLOC_OUTPUT;
> +	req->base.flags &= keep;
> +	req->base.flags |= flgs & ~keep;
> +
> +	crypto_reqchain_init(&req->base);
>  }
> 
>  /**
> @@ -230,11 +268,42 @@ static inline void acomp_request_set_params(struct acomp_req *req,
>  	req->slen = slen;
>  	req->dlen = dlen;
> 
> -	req->base.flags &= ~CRYPTO_ACOMP_ALLOC_OUTPUT;
> -	req->base.flags &= ~(CRYPTO_ACOMP_ALLOC_OUTPUT | CRYPTO_ACOMP_REQ_VIRT);
>  	if (!req->dst)
>  		req->base.flags |= CRYPTO_ACOMP_ALLOC_OUTPUT;
>  }
> 
> +/**
> + * acomp_request_set_virt() -- Sets virtual address request parameters
> + *
> + * Sets virtual address parameters required by an acomp operation
> + *
> + * @req:	asynchronous compress request
> + * @src:	virtual address pointer to input buffer
> + * @dst:	virtual address pointer to output buffer.
> + * @slen:	size of the input buffer
> + * @dlen:	size of the output buffer.
> + */
> +static inline void acomp_request_set_virt(struct acomp_req *req,
> +					  const u8 *src, u8 *dst,
> +					  unsigned int slen,
> +					  unsigned int dlen)
> +{
> +	req->svirt = src;
> +	req->dvirt = dst;
> +	req->slen = slen;
> +	req->dlen = dlen;
> +
> +	req->base.flags &= ~CRYPTO_ACOMP_ALLOC_OUTPUT;
> +	req->base.flags |= CRYPTO_ACOMP_REQ_VIRT;
> +}
> +
> +static inline void acomp_request_chain(struct acomp_req *req,
> +				       struct acomp_req *head)
> +{
> +	crypto_request_chain(&req->base, &head->base);
> +}
> +
>  /**
>   * crypto_acomp_compress() -- Invoke asynchronous compress operation
>   *
> @@ -244,10 +313,7 @@ static inline void acomp_request_set_params(struct acomp_req *req,
>   *
>   * Return:	zero on success; error code in case of error
>   */
> -static inline int crypto_acomp_compress(struct acomp_req *req)
> -{
> -	return crypto_acomp_reqtfm(req)->compress(req);
> -}
> +int crypto_acomp_compress(struct acomp_req *req);
> 
>  /**
>   * crypto_acomp_decompress() -- Invoke asynchronous decompress operation
> @@ -258,9 +324,6 @@ static inline int crypto_acomp_decompress(struct acomp_req *req)
>   *
>   * Return:	zero on success; error code in case of error
>   */
> -static inline int crypto_acomp_decompress(struct acomp_req *req)
> -{
> -	return crypto_acomp_reqtfm(req)->decompress(req);
> -}
> +int crypto_acomp_decompress(struct acomp_req *req);
> 
>  #endif
> diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h
> index 8831edaafc05..b3b48dea7f2f 100644
> --- a/include/crypto/internal/acompress.h
> +++ b/include/crypto/internal/acompress.h
> @@ -109,4 +109,26 @@ void crypto_unregister_acomp(struct acomp_alg *alg);
>  int crypto_register_acomps(struct acomp_alg *algs, int count);
>  void crypto_unregister_acomps(struct acomp_alg *algs, int count);
> 
> +static inline bool acomp_request_chained(struct acomp_req *req)
> +{
> +	return crypto_request_chained(&req->base);
> +}
> +
> +static inline bool acomp_request_isvirt(struct acomp_req *req)
> +{
> +	return req->base.flags & CRYPTO_ACOMP_REQ_VIRT;
> +}
> +
> +static inline bool acomp_request_isnondma(struct acomp_req *req)
> +{
> +	return (req->base.flags &
> +		(CRYPTO_ACOMP_REQ_NONDMA | CRYPTO_ACOMP_REQ_VIRT)) ==
> +	       (CRYPTO_ACOMP_REQ_NONDMA | CRYPTO_ACOMP_REQ_VIRT);
> +}
> +
> +static inline bool crypto_acomp_req_chain(struct crypto_acomp *tfm)
> +{
> +	return crypto_tfm_req_chain(&tfm->base);
> +}
> +
>  #endif
> --
> 2.39.5
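The flag handling in the patched acomp_request_set_callback() above deserves spelling out: CRYPTO_ACOMP_ALLOC_OUTPUT and CRYPTO_ACOMP_REQ_VIRT describe request state owned by the API, so they are preserved across a callback update, while all other bits are taken from the caller's flags. A minimal user-space sketch of that mask logic (the constants are copied from the patch; update_flags() is an illustrative stand-in, not a kernel function):

```c
#include <assert.h>
#include <stdint.h>

#define CRYPTO_ACOMP_ALLOC_OUTPUT 0x00000001u
#define CRYPTO_ACOMP_REQ_VIRT     0x00000002u

/* Mirrors the flag handling in the patched acomp_request_set_callback():
 * API-owned state bits survive, caller bits replace everything else. */
static uint32_t update_flags(uint32_t cur, uint32_t flgs)
{
	const uint32_t keep = CRYPTO_ACOMP_ALLOC_OUTPUT | CRYPTO_ACOMP_REQ_VIRT;

	cur &= keep;          /* retain only the API-owned bits */
	cur |= flgs & ~keep;  /* take all other bits from the caller */
	return cur;
}
```

Note that a caller cannot set (or clear) ALLOC_OUTPUT or REQ_VIRT through this path; those bits are only changed by acomp_request_set_params() and acomp_request_set_virt().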



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support
  2025-03-04  9:25 [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support Herbert Xu
                   ` (6 preceding siblings ...)
  2025-03-04  9:25 ` [v2 PATCH 7/7] crypto: acomp - Move stream management into scomp layer Herbert Xu
@ 2025-03-05  1:46 ` Jonathan Cameron
  2025-03-05 21:37 ` Cabiddu, Giovanni
  2025-03-06  8:15 ` Ard Biesheuvel
  9 siblings, 0 replies; 16+ messages in thread
From: Jonathan Cameron @ 2025-03-05  1:46 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Linux Crypto Mailing List, linux-mm, Yosry Ahmed, Kanchana P Sridhar

On Tue, 04 Mar 2025 17:25:01 +0800
Herbert Xu <herbert@gondor.apana.org.au> wrote:

> This patch series adds reqeust chaining and virtual address support
fwiw "request"

> to the crypto_acomp interface.



* Re: [v2 PATCH 3/7] crypto: acomp - Add request chaining and virtual addresses
  2025-03-04 21:59   ` Sridhar, Kanchana P
@ 2025-03-05  1:51     ` Herbert Xu
  2025-03-05 20:09       ` Sridhar, Kanchana P
  0 siblings, 1 reply; 16+ messages in thread
From: Herbert Xu @ 2025-03-05  1:51 UTC (permalink / raw)
  To: Sridhar, Kanchana P; +Cc: Linux Crypto Mailing List, linux-mm, Yosry Ahmed

On Tue, Mar 04, 2025 at 09:59:59PM +0000, Sridhar, Kanchana P wrote:
>
> > +static int acomp_reqchain_finish(struct acomp_req_chain *state,
> > +				 int err, u32 mask)
> > +{
> > +	struct acomp_req *req0 = state->req0;
> > +	struct acomp_req *req = state->cur;
> > +	struct acomp_req *n;
> > +
> > +	acomp_reqchain_virt(state, err);
> 
> Unless I am missing something, this seems to be future-proofing, based
> on the initial checks you've implemented in acomp_do_req_chain().
> 
> > +
> > +	if (req != req0)
> > +		list_add_tail(&req->base.list, &req0->base.list);
> > +
> > +	list_for_each_entry_safe(req, n, &state->head, base.list) {
> > +		list_del_init(&req->base.list);
> > +
> > +		req->base.flags &= mask;
> > +		req->base.complete = acomp_reqchain_done;
> > +		req->base.data = state;
> > +		state->cur = req;
> > +
> > +		if (acomp_request_isvirt(req)) {
> > +			unsigned int slen = req->slen;
> > +			unsigned int dlen = req->dlen;
> > +			const u8 *svirt = req->svirt;
> > +			u8 *dvirt = req->dvirt;
> > +
> > +			state->src = svirt;
> > +			state->dst = dvirt;
> > +
> > +			sg_init_one(&state->ssg, svirt, slen);
> > +			sg_init_one(&state->dsg, dvirt, dlen);
> > +
> > +			acomp_request_set_params(req, &state->ssg, &state->dsg,
> > +						 slen, dlen);
> > +		}
> > +
> > +		err = state->op(req);
> > +
> > +		if (err == -EINPROGRESS) {
> > +			if (!list_empty(&state->head))
> > +				err = -EBUSY;
> > +			goto out;
> > +		}
> > +
> > +		if (err == -EBUSY)
> > +			goto out;
> 
> This is a fully synchronous way of processing the request chain, and
> will not work for iaa_crypto's submit-then-poll-for-completions paradigm,
> essential for us to process the compressions in parallel in hardware.
> Without parallelism, we will not derive the full benefits of IAA.

This function is not for chaining drivers at all.  It's for existing
drivers that do *not* support chaining.

If your driver supports chaining, then it should not come through
acomp_reqchain_finish in the first place.  The acomp_reqchain code
translates chained requests to simple unchained ones for the
existing drivers.  If the driver supports chaining natively, then
it will bypass all this and go straight to the driver, where you can do
whatever you want with the chained request.
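To restate the dispatch rule as a toy user-space model: a transform that supports chaining natively receives the whole chain in a single driver call, while for legacy transforms the core walks the chain one request at a time. (The struct and function names below are illustrative only, not the kernel API.)

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct toy_req { struct toy_req *next; int processed; };

static int driver_calls;

/* Native driver: one submission handles every request in the chain. */
static void native_op(struct toy_req *head)
{
	driver_calls++;
	for (; head; head = head->next)
		head->processed = 1;
}

/* Legacy driver: handles exactly one request per call. */
static void legacy_op(struct toy_req *req)
{
	driver_calls++;
	req->processed = 1;
}

static void do_req_chain(struct toy_req *head, bool tfm_supports_chaining)
{
	if (tfm_supports_chaining) {
		native_op(head);        /* bypass the core, per the thread */
		return;
	}
	for (; head; head = head->next) /* core unrolls the chain */
		legacy_op(head);
}
```

In the native case the driver is free to submit all requests to hardware and poll for completions in parallel, which is the behavior the iaa_crypto batching work depends on.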

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt



* RE: [v2 PATCH 3/7] crypto: acomp - Add request chaining and virtual addresses
  2025-03-05  1:51     ` Herbert Xu
@ 2025-03-05 20:09       ` Sridhar, Kanchana P
  0 siblings, 0 replies; 16+ messages in thread
From: Sridhar, Kanchana P @ 2025-03-05 20:09 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Linux Crypto Mailing List, linux-mm, Yosry Ahmed, Sridhar,
	Kanchana P, Feghali, Wajdi K, Gopal, Vinodh


> -----Original Message-----
> From: Herbert Xu <herbert@gondor.apana.org.au>
> Sent: Tuesday, March 4, 2025 5:51 PM
> To: Sridhar, Kanchana P <kanchana.p.sridhar@intel.com>
> Cc: Linux Crypto Mailing List <linux-crypto@vger.kernel.org>; linux-mm@kvack.org; Yosry Ahmed <yosry.ahmed@linux.dev>
> Subject: Re: [v2 PATCH 3/7] crypto: acomp - Add request chaining and virtual
> addresses
> 
> On Tue, Mar 04, 2025 at 09:59:59PM +0000, Sridhar, Kanchana P wrote:
> >
> > > +static int acomp_reqchain_finish(struct acomp_req_chain *state,
> > > +				 int err, u32 mask)
> > > +{
> > > +	struct acomp_req *req0 = state->req0;
> > > +	struct acomp_req *req = state->cur;
> > > +	struct acomp_req *n;
> > > +
> > > +	acomp_reqchain_virt(state, err);
> >
> > Unless I am missing something, this seems to be future-proofing, based
> > on the initial checks you've implemented in acomp_do_req_chain().
> >
> > > +
> > > +	if (req != req0)
> > > +		list_add_tail(&req->base.list, &req0->base.list);
> > > +
> > > +	list_for_each_entry_safe(req, n, &state->head, base.list) {
> > > +		list_del_init(&req->base.list);
> > > +
> > > +		req->base.flags &= mask;
> > > +		req->base.complete = acomp_reqchain_done;
> > > +		req->base.data = state;
> > > +		state->cur = req;
> > > +
> > > +		if (acomp_request_isvirt(req)) {
> > > +			unsigned int slen = req->slen;
> > > +			unsigned int dlen = req->dlen;
> > > +			const u8 *svirt = req->svirt;
> > > +			u8 *dvirt = req->dvirt;
> > > +
> > > +			state->src = svirt;
> > > +			state->dst = dvirt;
> > > +
> > > +			sg_init_one(&state->ssg, svirt, slen);
> > > +			sg_init_one(&state->dsg, dvirt, dlen);
> > > +
> > > +			acomp_request_set_params(req, &state->ssg, &state->dsg,
> > > +						 slen, dlen);
> > > +		}
> > > +
> > > +		err = state->op(req);
> > > +
> > > +		if (err == -EINPROGRESS) {
> > > +			if (!list_empty(&state->head))
> > > +				err = -EBUSY;
> > > +			goto out;
> > > +		}
> > > +
> > > +		if (err == -EBUSY)
> > > +			goto out;
> >
> > This is a fully synchronous way of processing the request chain, and
> > will not work for iaa_crypto's submit-then-poll-for-completions paradigm,
> > essential for us to process the compressions in parallel in hardware.
> > Without parallelism, we will not derive the full benefits of IAA.
> 
> This function is not for chaining drivers at all.  It's for existing
> drivers that do *not* support chaining.
> 
> If your driver supports chaining, then it should not come through
> acomp_reqchain_finish in the first place.  The acomp_reqchain code
> translates chained requests to simple unchained ones for the
> existing drivers.  If the driver supports chaining natively, then
> it will bypass all this and go straight to the driver, where you can do
> whatever you want with the chained request.

Hi Herbert,

Can you please take a look at patches 1 (only the acomp_do_async_req_chain() interface),
2 and 4 in my latest v8 "zswap IAA compress batching" series [2], in which I have tried
to address the comments [1] you gave on v6, and let me know whether this implements
batching with request chaining as you envision?

[1] https://patchwork.kernel.org/comment/26246560/
[2] https://patchwork.kernel.org/project/linux-mm/list/?series=939487

If this architecture looks OK from your perspective, could you please let me know
whether acomp_do_async_req_chain() would be helpful in general, outside of the
iaa_crypto driver, or whether you would recommend keeping it specific to iaa_crypto?

Thanks,
Kanchana

> 
> Cheers,
> --
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt



* Re: [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support
  2025-03-04  9:25 [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support Herbert Xu
                   ` (7 preceding siblings ...)
  2025-03-05  1:46 ` [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support Jonathan Cameron
@ 2025-03-05 21:37 ` Cabiddu, Giovanni
  2025-03-06  0:42   ` Herbert Xu
  2025-03-06  8:15 ` Ard Biesheuvel
  9 siblings, 1 reply; 16+ messages in thread
From: Cabiddu, Giovanni @ 2025-03-05 21:37 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Linux Crypto Mailing List, linux-mm, Yosry Ahmed, Kanchana P Sridhar

On Tue, Mar 04, 2025 at 05:25:01PM +0800, Herbert Xu wrote:
> This patch series adds reqeust chaining and virtual address support
> to the crypto_acomp interface.

What is the target tree for this set? It doesn't cleanly apply to
cryptodev.

Thanks,

-- 
Giovanni



* Re: [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support
  2025-03-05 21:37 ` Cabiddu, Giovanni
@ 2025-03-06  0:42   ` Herbert Xu
  2025-03-06 12:14     ` Cabiddu, Giovanni
  0 siblings, 1 reply; 16+ messages in thread
From: Herbert Xu @ 2025-03-06  0:42 UTC (permalink / raw)
  To: Cabiddu, Giovanni
  Cc: Linux Crypto Mailing List, linux-mm, Yosry Ahmed, Kanchana P Sridhar

On Wed, Mar 05, 2025 at 09:37:43PM +0000, Cabiddu, Giovanni wrote:
> 
> What is the target tree for this set? It doesn't cleanly apply to
> cryptodev.

It's based on the other two acomp patches in my queue:

https://patchwork.kernel.org/project/linux-crypto/list/

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt



* Re: [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support
  2025-03-04  9:25 [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support Herbert Xu
                   ` (8 preceding siblings ...)
  2025-03-05 21:37 ` Cabiddu, Giovanni
@ 2025-03-06  8:15 ` Ard Biesheuvel
  9 siblings, 0 replies; 16+ messages in thread
From: Ard Biesheuvel @ 2025-03-06  8:15 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Linux Crypto Mailing List, linux-mm, Yosry Ahmed, Kanchana P Sridhar

On Tue, 4 Mar 2025 at 10:25, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> This patch series adds reqeust chaining and virtual address support
> to the crypto_acomp interface.
>
> Herbert Xu (7):
>   crypto: api - Add cra_type->destroy hook
>   crypto: scomp - Remove tfm argument from alloc/free_ctx
>   crypto: acomp - Add request chaining and virtual addresses
>   crypto: testmgr - Remove NULL dst acomp tests
>   crypto: scomp - Remove support for most non-trivial destination SG
>     lists
>   crypto: scomp - Add chaining and virtual address support
>   crypto: acomp - Move stream management into scomp layer
>

How does this v2 differ from the previous version?


>  crypto/842.c                           |   8 +-
>  crypto/acompress.c                     | 208 ++++++++++++++++++++---
>  crypto/algapi.c                        |   9 +
>  crypto/compress.h                      |   2 -
>  crypto/deflate.c                       |   4 +-
>  crypto/internal.h                      |   6 +-
>  crypto/lz4.c                           |   8 +-
>  crypto/lz4hc.c                         |   8 +-
>  crypto/lzo-rle.c                       |   8 +-
>  crypto/lzo.c                           |   8 +-
>  crypto/scompress.c                     | 226 +++++++++++++++----------
>  crypto/testmgr.c                       |  29 ----
>  crypto/zstd.c                          |   4 +-
>  drivers/crypto/cavium/zip/zip_crypto.c |   6 +-
>  drivers/crypto/cavium/zip/zip_crypto.h |   6 +-
>  include/crypto/acompress.h             | 118 ++++++++++---
>  include/crypto/internal/acompress.h    |  39 +++--
>  include/crypto/internal/scompress.h    |  18 +-
>  18 files changed, 488 insertions(+), 227 deletions(-)
>
> --
> 2.39.5
>
>



* Re: [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support
  2025-03-06  0:42   ` Herbert Xu
@ 2025-03-06 12:14     ` Cabiddu, Giovanni
  0 siblings, 0 replies; 16+ messages in thread
From: Cabiddu, Giovanni @ 2025-03-06 12:14 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Linux Crypto Mailing List, linux-mm, Yosry Ahmed, Kanchana P Sridhar

On Thu, Mar 06, 2025 at 08:42:20AM +0800, Herbert Xu wrote:
> On Wed, Mar 05, 2025 at 09:37:43PM +0000, Cabiddu, Giovanni wrote:
> > 
> > What is the target tree for this set? It doesn't cleanly apply to
> > cryptodev.
> 
> It's based on the other two acomp patches in my queue:
> 
> https://patchwork.kernel.org/project/linux-crypto/list/
It is also dependent on `crypto: api - Move struct crypto_type into
internal.h`.

In case someone else wants to give it a go, this is the complete list of
dependencies:

  https://patchwork.kernel.org/project/linux-crypto/patch/Z71PHnpl0FeqChRE@gondor.apana.org.au/
  https://patchwork.kernel.org/project/linux-crypto/patch/aa2a2230a135b79b6f128d3a8beb21b49800e812.1740651138.git.herbert@gondor.apana.org.au/
  https://patchwork.kernel.org/project/linux-crypto/patch/bb32fbfe34c7f5f70dc5802d97e66ec88c470c66.1740651138.git.herbert@gondor.apana.org.au/

Regards,

-- 
Giovanni



end of thread, other threads:[~2025-03-06 12:15 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-03-04  9:25 [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support Herbert Xu
2025-03-04  9:25 ` [v2 PATCH 1/7] crypto: api - Add cra_type->destroy hook Herbert Xu
2025-03-04  9:25 ` [v2 PATCH 2/7] crypto: scomp - Remove tfm argument from alloc/free_ctx Herbert Xu
2025-03-04  9:25 ` [v2 PATCH 3/7] crypto: acomp - Add request chaining and virtual addresses Herbert Xu
2025-03-04 21:59   ` Sridhar, Kanchana P
2025-03-05  1:51     ` Herbert Xu
2025-03-05 20:09       ` Sridhar, Kanchana P
2025-03-04  9:25 ` [v2 PATCH 4/7] crypto: testmgr - Remove NULL dst acomp tests Herbert Xu
2025-03-04  9:25 ` [v2 PATCH 5/7] crypto: scomp - Remove support for most non-trivial destination SG lists Herbert Xu
2025-03-04  9:25 ` [v2 PATCH 6/7] crypto: scomp - Add chaining and virtual address support Herbert Xu
2025-03-04  9:25 ` [v2 PATCH 7/7] crypto: acomp - Move stream management into scomp layer Herbert Xu
2025-03-05  1:46 ` [v2 PATCH 0/7] crypto: acomp - Add request chaining and virtual address support Jonathan Cameron
2025-03-05 21:37 ` Cabiddu, Giovanni
2025-03-06  0:42   ` Herbert Xu
2025-03-06 12:14     ` Cabiddu, Giovanni
2025-03-06  8:15 ` Ard Biesheuvel
