Date: Thu, 7 Nov 2024 12:20:56 -0500
From: Johannes Weiner
To: Kanchana P Sridhar
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, yosryahmed@google.com,
	nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com,
	ryan.roberts@arm.com, ying.huang@intel.com, 21cnbao@gmail.com,
	akpm@linux-foundation.org, linux-crypto@vger.kernel.org,
	herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com,
	ardb@kernel.org, ebiggers@google.com, surenb@google.com,
	kristen.c.accardi@intel.com, zanussi@kernel.org,
	wajdi.k.feghali@intel.com, vinodh.gopal@intel.com
Subject: Re: [PATCH v3 09/13] mm: zswap: Modify struct crypto_acomp_ctx to be
 configurable in nr of acomp_reqs.
Message-ID: <20241107172056.GC1172372@cmpxchg.org>
References: <20241106192105.6731-1-kanchana.p.sridhar@intel.com>
 <20241106192105.6731-10-kanchana.p.sridhar@intel.com>
In-Reply-To: <20241106192105.6731-10-kanchana.p.sridhar@intel.com>

On Wed, Nov 06, 2024 at 11:21:01AM -0800, Kanchana P Sridhar wrote:
> Modified the definition of "struct crypto_acomp_ctx" to represent a
> configurable number of acomp_reqs and the required number of buffers.
>
> Accordingly, refactored the code that allocates/deallocates the acomp_ctx
> resources, so that it can be called to create a regular acomp_ctx with
> exactly one acomp_req/buffer, for use in the the existing non-batching
> zswap_store(), as well as to create a separate "batching acomp_ctx" with
> multiple acomp_reqs/buffers for IAA compress batching.
>
> Signed-off-by: Kanchana P Sridhar
> ---
>  mm/zswap.c | 149 ++++++++++++++++++++++++++++++++++++++---------------
>  1 file changed, 107 insertions(+), 42 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 3e899fa61445..02e031122fdf 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -143,9 +143,10 @@ bool zswap_never_enabled(void)
>
>  struct crypto_acomp_ctx {
>  	struct crypto_acomp *acomp;
> -	struct acomp_req *req;
> +	struct acomp_req **reqs;
> +	u8 **buffers;
> +	unsigned int nr_reqs;
>  	struct crypto_wait wait;
> -	u8 *buffer;
>  	struct mutex mutex;
>  	bool is_sleepable;
>  };
> @@ -241,6 +242,11 @@ static inline struct xarray *swap_zswap_tree(swp_entry_t swp)
>  	pr_debug("%s pool %s/%s\n", msg, (p)->tfm_name,		\
>  		 zpool_get_type((p)->zpool))
>
> +static int zswap_create_acomp_ctx(unsigned int cpu,
> +				  struct crypto_acomp_ctx *acomp_ctx,
> +				  char *tfm_name,
> +				  unsigned int nr_reqs);

This looks unnecessary.

> +
>  /*********************************
>  * pool functions
>  **********************************/
> @@ -813,69 +819,128 @@ static void zswap_entry_free(struct zswap_entry *entry)
>  /*********************************
>  * compressed storage functions
>  **********************************/
> -static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
> +static int zswap_create_acomp_ctx(unsigned int cpu,
> +				  struct crypto_acomp_ctx *acomp_ctx,
> +				  char *tfm_name,
> +				  unsigned int nr_reqs)
>  {
> -	struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
> -	struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
>  	struct crypto_acomp *acomp;
> -	struct acomp_req *req;
> -	int ret;
> +	int ret = -ENOMEM;
> +	int i, j;
>
> +	acomp_ctx->nr_reqs = 0;
>  	mutex_init(&acomp_ctx->mutex);
>
> -	acomp_ctx->buffer = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, cpu_to_node(cpu));
> -	if (!acomp_ctx->buffer)
> -		return -ENOMEM;
> -
> -	acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu));
> +	acomp = crypto_alloc_acomp_node(tfm_name, 0, 0, cpu_to_node(cpu));
>  	if (IS_ERR(acomp)) {
>  		pr_err("could not alloc crypto acomp %s : %ld\n",
> -		       pool->tfm_name, PTR_ERR(acomp));
> -		ret = PTR_ERR(acomp);
> -		goto acomp_fail;
> +		       tfm_name, PTR_ERR(acomp));
> +		return PTR_ERR(acomp);
>  	}
> +
>  	acomp_ctx->acomp = acomp;
>  	acomp_ctx->is_sleepable = acomp_is_async(acomp);
>
> -	req = acomp_request_alloc(acomp_ctx->acomp);
> -	if (!req) {
> -		pr_err("could not alloc crypto acomp_request %s\n",
> -		       pool->tfm_name);
> -		ret = -ENOMEM;
> +	acomp_ctx->buffers = kmalloc_node(nr_reqs * sizeof(u8 *),
> +					  GFP_KERNEL, cpu_to_node(cpu));
> +	if (!acomp_ctx->buffers)
> +		goto buf_fail;
> +
> +	for (i = 0; i < nr_reqs; ++i) {
> +		acomp_ctx->buffers[i] = kmalloc_node(PAGE_SIZE * 2,
> +						     GFP_KERNEL, cpu_to_node(cpu));
> +		if (!acomp_ctx->buffers[i]) {
> +			for (j = 0; j < i; ++j)
> +				kfree(acomp_ctx->buffers[j]);
> +			kfree(acomp_ctx->buffers);
> +			ret = -ENOMEM;
> +			goto buf_fail;
> +		}
> +	}
> +
> +	acomp_ctx->reqs = kmalloc_node(nr_reqs * sizeof(struct acomp_req *),
> +				       GFP_KERNEL, cpu_to_node(cpu));
> +	if (!acomp_ctx->reqs)
>  		goto req_fail;
> +
> +	for (i = 0; i < nr_reqs; ++i) {
> +		acomp_ctx->reqs[i] = acomp_request_alloc(acomp_ctx->acomp);
> +		if (!acomp_ctx->reqs[i]) {
> +			pr_err("could not alloc crypto acomp_request reqs[%d] %s\n",
> +			       i, tfm_name);
> +			for (j = 0; j < i; ++j)
> +				acomp_request_free(acomp_ctx->reqs[j]);
> +			kfree(acomp_ctx->reqs);
> +			ret = -ENOMEM;
> +			goto req_fail;
> +		}
>  	}
> -	acomp_ctx->req = req;
>
> +	/*
> +	 * The crypto_wait is used only in fully synchronous, i.e., with scomp
> +	 * or non-poll mode of acomp, hence there is only one "wait" per
> +	 * acomp_ctx, with callback set to reqs[0], under the assumption that
> +	 * there is at least 1 request per acomp_ctx.
> +	 */
>  	crypto_init_wait(&acomp_ctx->wait);
>  	/*
>  	 * if the backend of acomp is async zip, crypto_req_done() will wakeup
>  	 * crypto_wait_req(); if the backend of acomp is scomp, the callback
>  	 * won't be called, crypto_wait_req() will return without blocking.
>  	 */
> -	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
> +	acomp_request_set_callback(acomp_ctx->reqs[0], CRYPTO_TFM_REQ_MAY_BACKLOG,
>  				   crypto_req_done, &acomp_ctx->wait);
>
> +	acomp_ctx->nr_reqs = nr_reqs;
>  	return 0;
>
>  req_fail:
> +	for (i = 0; i < nr_reqs; ++i)
> +		kfree(acomp_ctx->buffers[i]);
> +	kfree(acomp_ctx->buffers);
> +buf_fail:
>  	crypto_free_acomp(acomp_ctx->acomp);
> -acomp_fail:
> -	kfree(acomp_ctx->buffer);
>  	return ret;
>  }
>
> -static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
> +static void zswap_delete_acomp_ctx(struct crypto_acomp_ctx *acomp_ctx)
>  {
> -	struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
> -	struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
> -
>  	if (!IS_ERR_OR_NULL(acomp_ctx)) {
> -		if (!IS_ERR_OR_NULL(acomp_ctx->req))
> -			acomp_request_free(acomp_ctx->req);
> +		int i;
> +
> +		for (i = 0; i < acomp_ctx->nr_reqs; ++i)
> +			if (!IS_ERR_OR_NULL(acomp_ctx->reqs[i]))
> +				acomp_request_free(acomp_ctx->reqs[i]);
> +		kfree(acomp_ctx->reqs);
> +
> +		for (i = 0; i < acomp_ctx->nr_reqs; ++i)
> +			kfree(acomp_ctx->buffers[i]);
> +		kfree(acomp_ctx->buffers);
> +
>  		if (!IS_ERR_OR_NULL(acomp_ctx->acomp))
>  			crypto_free_acomp(acomp_ctx->acomp);
> -		kfree(acomp_ctx->buffer);
> +
> +		acomp_ctx->nr_reqs = 0;
> +		acomp_ctx = NULL;
>  	}
> +}
> +
> +static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
> +{
> +	struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
> +	struct crypto_acomp_ctx *acomp_ctx;
> +
> +	acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
> +	return zswap_create_acomp_ctx(cpu, acomp_ctx, pool->tfm_name, 1);
> +}
> +
> +static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
> +{
> +	struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
> +	struct crypto_acomp_ctx *acomp_ctx;
> +
> +	acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
> +	zswap_delete_acomp_ctx(acomp_ctx);
>
>  	return 0;
>  }

There are no other callers to these functions. Just do the work directly
in the cpu callbacks here like it used to be.

Otherwise it looks good to me.
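
For illustration only, a rough, untested sketch of what doing the work
directly in the cpu callback could look like. The pool->nr_reqs field is a
made-up placeholder for however the per-CPU request count ends up being
chosen (the existing non-batching case would simply use 1); everything else
follows the code quoted above.

/* Untested sketch: allocation done directly in the hotplug callback. */
static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
{
	struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
	struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
	unsigned int nr_reqs = pool->nr_reqs;	/* hypothetical field */
	int i;

	mutex_init(&acomp_ctx->mutex);

	acomp_ctx->acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0,
						   cpu_to_node(cpu));
	if (IS_ERR(acomp_ctx->acomp))
		return PTR_ERR(acomp_ctx->acomp);
	acomp_ctx->is_sleepable = acomp_is_async(acomp_ctx->acomp);

	/* kcalloc so unused slots stay NULL and the unwind below is safe */
	acomp_ctx->buffers = kcalloc_node(nr_reqs, sizeof(*acomp_ctx->buffers),
					  GFP_KERNEL, cpu_to_node(cpu));
	acomp_ctx->reqs = kcalloc_node(nr_reqs, sizeof(*acomp_ctx->reqs),
				       GFP_KERNEL, cpu_to_node(cpu));
	if (!acomp_ctx->buffers || !acomp_ctx->reqs)
		goto fail;

	for (i = 0; i < nr_reqs; i++) {
		acomp_ctx->buffers[i] = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL,
						     cpu_to_node(cpu));
		acomp_ctx->reqs[i] = acomp_request_alloc(acomp_ctx->acomp);
		if (!acomp_ctx->buffers[i] || !acomp_ctx->reqs[i])
			goto fail;
	}

	/* One wait for the whole context, hooked up to reqs[0] as before. */
	crypto_init_wait(&acomp_ctx->wait);
	acomp_request_set_callback(acomp_ctx->reqs[0], CRYPTO_TFM_REQ_MAY_BACKLOG,
				   crypto_req_done, &acomp_ctx->wait);
	acomp_ctx->nr_reqs = nr_reqs;
	return 0;
fail:
	for (i = 0; i < nr_reqs; i++) {
		if (acomp_ctx->reqs && acomp_ctx->reqs[i])
			acomp_request_free(acomp_ctx->reqs[i]);
		if (acomp_ctx->buffers)
			kfree(acomp_ctx->buffers[i]);
	}
	kfree(acomp_ctx->reqs);
	kfree(acomp_ctx->buffers);
	crypto_free_acomp(acomp_ctx->acomp);
	return -ENOMEM;
}

The matching zswap_cpu_comp_dead() would then just free reqs[i]/buffers[i]
up to nr_reqs in place, much like the zswap_delete_acomp_ctx() body in the
patch, without the extra level of helpers.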