From: Yosry Ahmed <yosryahmed@google.com>
Date: Mon, 18 Dec 2023 01:37:17 -0800
Subject: Re: [PATCH v2 6/6] mm/zswap: directly use percpu mutex and buffer in load/store
To: Chengming Zhou
Cc: Seth Jennings, Dan Streetman, Chris Li, Nhat Pham, Vitaly Wool,
    Andrew Morton, Johannes Weiner, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Chris Li
In-Reply-To: <20231213-zswap-dstmem-v2-6-daa5d9ae41a7@bytedance.com>
References: <20231213-zswap-dstmem-v2-0-daa5d9ae41a7@bytedance.com>
    <20231213-zswap-dstmem-v2-6-daa5d9ae41a7@bytedance.com>

On Mon, Dec 18, 2023 at 12:22 AM Chengming Zhou wrote:
>
> Since the introduction of reusing the dstmem in the load path, it seems
> confusing that we are now using acomp_ctx->dstmem and acomp_ctx->mutex
> for purposes other than what the naming suggests.
>
> Yosry suggested removing these two fields from acomp_ctx and directly
> using zswap_dstmem and zswap_mutex in both the load and store paths,
> renaming them, and adding proper comments above their definitions noting
> that they are for generic percpu buffering on the load and store paths.
>
> So this patch removes dstmem and mutex from acomp_ctx, renames
> zswap_dstmem to zswap_buffer, and uses the percpu mutex and buffer on
> both the load and store paths. It also refactors out __zswap_store() to
> cover only the compress & store steps, since I found zswap_store() was
> getting too long.

I am not sure refactoring out __zswap_store() is useful to be honest,
but I am not objecting to it; it mirrors __zswap_load() in a sense.

However, if you want to do so, please do it in a separate patch from
renaming the percpu buffers and mutex. This will make reviewing easier
(and keep my Suggested-by correctly scoped).

Also, any reason why raw_smp_processor_id() is used here instead of
smp_processor_id()?
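For reference, the distinction I have in mind, as a rough sketch only
(not taken from this patch; the helper name below is made up):

/* Rough illustration only -- not part of this patch. */
#include <linux/preempt.h>
#include <linux/smp.h>

static int pick_cpu_example(void)
{
        int cpu;

        cpu = raw_smp_processor_id();   /* no debug check; the task may
                                         * migrate to another CPU right
                                         * after this returns */

        preempt_disable();
        cpu = smp_processor_id();       /* checked variant: without the
                                         * preempt_disable() above, this
                                         * WARNs when CONFIG_DEBUG_PREEMPT
                                         * is enabled */
        preempt_enable();

        return cpu;
}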
>
> Suggested-by: Yosry Ahmed
> Signed-off-by: Chengming Zhou
> ---
>  mm/zswap.c | 193 +++++++++++++++++++++++++++++++++----------------------------
>  1 file changed, 104 insertions(+), 89 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 2c349fd88904..b7449294ec3a 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -166,8 +166,6 @@ struct crypto_acomp_ctx {
>         struct crypto_acomp *acomp;
>         struct acomp_req *req;
>         struct crypto_wait wait;
> -       u8 *dstmem;
> -       struct mutex *mutex;
>  };
>
>  /*
> @@ -694,7 +692,7 @@ static void zswap_alloc_shrinker(struct zswap_pool *pool)
>  /*********************************
>  * per-cpu code
>  **********************************/
> -static DEFINE_PER_CPU(u8 *, zswap_dstmem);
> +static DEFINE_PER_CPU(u8 *, zswap_buffer);
>  /*
>   * If users dynamically change the zpool type and compressor at runtime, i.e.
>   * zswap is running, zswap can have more than one zpool on one cpu, but they
> @@ -702,39 +700,39 @@ static DEFINE_PER_CPU(u8 *, zswap_dstmem);
>   */
>  static DEFINE_PER_CPU(struct mutex *, zswap_mutex);
>
> -static int zswap_dstmem_prepare(unsigned int cpu)
> +static int zswap_buffer_prepare(unsigned int cpu)
>  {
>         struct mutex *mutex;
> -       u8 *dst;
> +       u8 *buf;
>
> -       dst = kmalloc_node(PAGE_SIZE, GFP_KERNEL, cpu_to_node(cpu));
> -       if (!dst)
> +       buf = kmalloc_node(PAGE_SIZE, GFP_KERNEL, cpu_to_node(cpu));
> +       if (!buf)
>                 return -ENOMEM;
>
>         mutex = kmalloc_node(sizeof(*mutex), GFP_KERNEL, cpu_to_node(cpu));
>         if (!mutex) {
> -               kfree(dst);
> +               kfree(buf);
>                 return -ENOMEM;
>         }
>
>         mutex_init(mutex);
> -       per_cpu(zswap_dstmem, cpu) = dst;
> +       per_cpu(zswap_buffer, cpu) = buf;
>         per_cpu(zswap_mutex, cpu) = mutex;
>         return 0;
>  }
>
> -static int zswap_dstmem_dead(unsigned int cpu)
> +static int zswap_buffer_dead(unsigned int cpu)
>  {
>         struct mutex *mutex;
> -       u8 *dst;
> +       u8 *buf;
>
>         mutex = per_cpu(zswap_mutex, cpu);
>         kfree(mutex);
>         per_cpu(zswap_mutex, cpu) = NULL;
>
> -       dst = per_cpu(zswap_dstmem, cpu);
> -       kfree(dst);
> -       per_cpu(zswap_dstmem, cpu) = NULL;
> +       buf = per_cpu(zswap_buffer, cpu);
> +       kfree(buf);
> +       per_cpu(zswap_buffer, cpu) = NULL;
>
>         return 0;
>  }
> @@ -772,9 +770,6 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
>         acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
>                                    crypto_req_done, &acomp_ctx->wait);
>
> -       acomp_ctx->mutex = per_cpu(zswap_mutex, cpu);
> -       acomp_ctx->dstmem = per_cpu(zswap_dstmem, cpu);
> -
>         return 0;
>  }
>
> @@ -1392,20 +1387,98 @@ static int zswap_enabled_param_set(const char *val,
>         return ret;
>  }
>
> +static int __zswap_store(struct zswap_entry *entry, struct page *page)
> +{
> +       struct scatterlist input, output;
> +       struct crypto_acomp_ctx *acomp_ctx;
> +       struct zpool *zpool;
> +       unsigned long handle;
> +       unsigned int dlen;
> +       u8 *buf, *dst;
> +       gfp_t gfp;
> +       int ret;
> +       int cpu;
> +       struct mutex *mutex;
> +
> +       cpu = raw_smp_processor_id();
> +       mutex = per_cpu(zswap_mutex, cpu);
> +       mutex_lock(mutex);
> +
> +       acomp_ctx = per_cpu_ptr(entry->pool->acomp_ctx, cpu);
> +       buf = per_cpu(zswap_buffer, cpu);
> +
> +       sg_init_table(&input, 1);
> +       sg_set_page(&input, page, PAGE_SIZE, 0);
> +       sg_init_one(&output, buf, PAGE_SIZE);
> +       acomp_request_set_params(acomp_ctx->req, &input, &output, PAGE_SIZE, PAGE_SIZE);
> +       /*
> +        * it maybe looks a little bit silly that we send an asynchronous request,
> +        * then wait for its completion synchronously. This makes the process look
> +        * synchronous in fact.
> +        * Theoretically, acomp supports users send multiple acomp requests in one
> +        * acomp instance, then get those requests done simultaneously. but in this
> +        * case, zswap actually does store and load page by page, there is no
> +        * existing method to send the second page before the first page is done
> +        * in one thread doing zwap.
> +        * but in different threads running on different cpu, we have different
> +        * acomp instance, so multiple threads can do (de)compression in parallel.
> +        */
> +       ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait);
> +       dlen = acomp_ctx->req->dlen;
> +
> +       if (ret) {
> +               zswap_reject_compress_fail++;
> +               goto unlock;
> +       }
> +
> +       /* store */
> +       zpool = zswap_find_zpool(entry);
> +       gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
> +       if (zpool_malloc_support_movable(zpool))
> +               gfp |= __GFP_HIGHMEM | __GFP_MOVABLE;
> +       ret = zpool_malloc(zpool, dlen, gfp, &handle);
> +       if (ret == -ENOSPC) {
> +               zswap_reject_compress_poor++;
> +               goto unlock;
> +       }
> +       if (ret) {
> +               zswap_reject_alloc_fail++;
> +               goto unlock;
> +       }
> +       dst = zpool_map_handle(zpool, handle, ZPOOL_MM_WO);
> +       memcpy(dst, buf, dlen);
> +       zpool_unmap_handle(zpool, handle);
> +       mutex_unlock(mutex);
> +
> +       entry->handle = handle;
> +       entry->length = dlen;
> +       return 0;
> +
> +unlock:
> +       mutex_unlock(mutex);
> +       return ret;
> +}
> +
>  static void __zswap_load(struct zswap_entry *entry, struct page *page)
>  {
>         struct zpool *zpool = zswap_find_zpool(entry);
>         struct scatterlist input, output;
>         struct crypto_acomp_ctx *acomp_ctx;
> -       u8 *src;
> +       u8 *src, *buf;
> +       int cpu;
> +       struct mutex *mutex;
> +
> +       cpu = raw_smp_processor_id();
> +       mutex = per_cpu(zswap_mutex, cpu);
> +       mutex_lock(mutex);
>
> -       acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> -       mutex_lock(acomp_ctx->mutex);
> +       acomp_ctx = per_cpu_ptr(entry->pool->acomp_ctx, cpu);
>
>         src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
>         if (!zpool_can_sleep_mapped(zpool)) {
> -               memcpy(acomp_ctx->dstmem, src, entry->length);
> -               src = acomp_ctx->dstmem;
> +               buf = per_cpu(zswap_buffer, cpu);
> +               memcpy(buf, src, entry->length);
> +               src = buf;
>                 zpool_unmap_handle(zpool, entry->handle);
>         }
>
> @@ -1415,7 +1488,7 @@ static void __zswap_load(struct zswap_entry *entry, struct page *page)
>         acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, PAGE_SIZE);
>         BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait));
>         BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE);
> -       mutex_unlock(acomp_ctx->mutex);
> +       mutex_unlock(mutex);
>
>         if (zpool_can_sleep_mapped(zpool))
>                 zpool_unmap_handle(zpool, entry->handle);
> @@ -1539,18 +1612,11 @@ bool zswap_store(struct folio *folio)
>         struct page *page = &folio->page;
>         struct zswap_tree *tree = zswap_trees[type];
>         struct zswap_entry *entry, *dupentry;
> -       struct scatterlist input, output;
> -       struct crypto_acomp_ctx *acomp_ctx;
>         struct obj_cgroup *objcg = NULL;
>         struct mem_cgroup *memcg = NULL;
>         struct zswap_pool *pool;
> -       struct zpool *zpool;
> -       unsigned int dlen = PAGE_SIZE;
> -       unsigned long handle, value;
> -       char *buf;
> -       u8 *src, *dst;
> -       gfp_t gfp;
> -       int ret;
> +       u8 *src;
> +       unsigned long value;
>
>         VM_WARN_ON_ONCE(!folio_test_locked(folio));
>         VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
> @@ -1635,60 +1701,11 @@ bool zswap_store(struct folio *folio)
>                 mem_cgroup_put(memcg);
>         }
>
> -       /* compress */
> -       acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> -
> -       mutex_lock(acomp_ctx->mutex);
> -
> -       dst = acomp_ctx->dstmem;
> -       sg_init_table(&input, 1);
> -       sg_set_page(&input, page, PAGE_SIZE, 0);
> -
> -       sg_init_one(&output, dst, PAGE_SIZE);
> -       acomp_request_set_params(acomp_ctx->req, &input, &output, PAGE_SIZE, dlen);
> -       /*
> -        * it maybe looks a little bit silly that we send an asynchronous request,
> -        * then wait for its completion synchronously. This makes the process look
> -        * synchronous in fact.
> -        * Theoretically, acomp supports users send multiple acomp requests in one
> -        * acomp instance, then get those requests done simultaneously. but in this
> -        * case, zswap actually does store and load page by page, there is no
> -        * existing method to send the second page before the first page is done
> -        * in one thread doing zwap.
> -        * but in different threads running on different cpu, we have different
> -        * acomp instance, so multiple threads can do (de)compression in parallel.
> -        */
> -       ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait);
> -       dlen = acomp_ctx->req->dlen;
> -
> -       if (ret) {
> -               zswap_reject_compress_fail++;
> -               goto put_dstmem;
> -       }
> -
> -       /* store */
> -       zpool = zswap_find_zpool(entry);
> -       gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
> -       if (zpool_malloc_support_movable(zpool))
> -               gfp |= __GFP_HIGHMEM | __GFP_MOVABLE;
> -       ret = zpool_malloc(zpool, dlen, gfp, &handle);
> -       if (ret == -ENOSPC) {
> -               zswap_reject_compress_poor++;
> -               goto put_dstmem;
> -       }
> -       if (ret) {
> -               zswap_reject_alloc_fail++;
> -               goto put_dstmem;
> -       }
> -       buf = zpool_map_handle(zpool, handle, ZPOOL_MM_WO);
> -       memcpy(buf, dst, dlen);
> -       zpool_unmap_handle(zpool, handle);
> -       mutex_unlock(acomp_ctx->mutex);
> +       if (__zswap_store(entry, page))
> +               goto put_pool;
>
>         /* populate entry */
>         entry->swpentry = swp_entry(type, offset);
> -       entry->handle = handle;
> -       entry->length = dlen;
>
>  insert_entry:
>         entry->objcg = objcg;
> @@ -1725,8 +1742,6 @@ bool zswap_store(struct folio *folio)
>
>         return true;
>
> -put_dstmem:
> -       mutex_unlock(acomp_ctx->mutex);
>  put_pool:
>         zswap_pool_put(entry->pool);
>  freepage:
> @@ -1902,10 +1917,10 @@ static int zswap_setup(void)
>         }
>
>         ret = cpuhp_setup_state(CPUHP_MM_ZSWP_MEM_PREPARE, "mm/zswap:prepare",
> -                               zswap_dstmem_prepare, zswap_dstmem_dead);
> +                               zswap_buffer_prepare, zswap_buffer_dead);
>         if (ret) {
> -               pr_err("dstmem alloc failed\n");
> -               goto dstmem_fail;
> +               pr_err("buffer alloc failed\n");
> +               goto buffer_fail;
>         }
>
>         ret = cpuhp_setup_state_multi(CPUHP_MM_ZSWP_POOL_PREPARE,
> @@ -1940,7 +1955,7 @@ static int zswap_setup(void)
>         zswap_pool_destroy(pool);
>  hp_fail:
>         cpuhp_remove_state(CPUHP_MM_ZSWP_MEM_PREPARE);
> -dstmem_fail:
> +buffer_fail:
>         kmem_cache_destroy(zswap_entry_cache);
>  cache_fail:
>         /* if built-in, we aren't unloaded on failure; don't allow use */
>
> --
> b4 0.10.1