From: Nhat Pham <nphamcs@gmail.com>
Date: Tue, 26 Dec 2023 11:08:44 -0800
Subject: Re: [PATCH v4 6/6] mm/zswap: change per-cpu mutex and buffer to per-acomp_ctx
To: Chengming Zhou
Cc: Andrew Morton, Seth Jennings, Johannes Weiner, Vitaly Wool, Chris Li, Yosry Ahmed, Dan Streetman, linux-kernel@vger.kernel.org, linux-mm@kvack.org
In-Reply-To: <20231213-zswap-dstmem-v4-6-f228b059dd89@bytedance.com>
References: <20231213-zswap-dstmem-v4-0-f228b059dd89@bytedance.com> <20231213-zswap-dstmem-v4-6-f228b059dd89@bytedance.com>
On Tue, Dec 26, 2023 at 7:55 AM Chengming Zhou wrote:
>
> First of all, we need to rename the acomp_ctx->dstmem field to buffer,
> since it is now used for purposes other than compression.
>
> Then we change the per-cpu mutex and buffer to per-acomp_ctx, since
> they belong to the acomp_ctx and are necessary parts when used in the
> compress/decompress contexts.
>
> So we can remove the old per-cpu mutex and dstmem.
>
> Acked-by: Chris Li (Google)
> Signed-off-by: Chengming Zhou

Fairly straightforward, and it actually deletes some code :) LGTM.

Reviewed-by: Nhat Pham

> ---
>  include/linux/cpuhotplug.h |  1 -
>  mm/zswap.c                 | 98 +++++++++++++----------------------------
>  2 files changed, 28 insertions(+), 71 deletions(-)
>
> diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
> index efc0c0b07efb..c3e06e21766a 100644
> --- a/include/linux/cpuhotplug.h
> +++ b/include/linux/cpuhotplug.h
> @@ -124,7 +124,6 @@ enum cpuhp_state {
>         CPUHP_ARM_BL_PREPARE,
>         CPUHP_TRACE_RB_PREPARE,
>         CPUHP_MM_ZS_PREPARE,
> -       CPUHP_MM_ZSWP_MEM_PREPARE,
>         CPUHP_MM_ZSWP_POOL_PREPARE,
>         CPUHP_KVM_PPC_BOOK3S_PREPARE,
>         CPUHP_ZCOMP_PREPARE,
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 40ee9f109f98..8014509736ad 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -166,8 +166,8 @@ struct crypto_acomp_ctx {
>         struct crypto_acomp *acomp;
>         struct acomp_req *req;
>         struct crypto_wait wait;
> -       u8 *dstmem;
> -       struct mutex *mutex;
> +       u8 *buffer;
> +       struct mutex mutex;
>  };
>
>  /*
> @@ -694,63 +694,26 @@ static void zswap_alloc_shrinker(struct zswap_pool *pool)
>  /*********************************
>  * per-cpu code
>  **********************************/
> -static DEFINE_PER_CPU(u8 *, zswap_dstmem);
> -/*
> - * If users dynamically change the zpool type and compressor at runtime, i.e.
> - * zswap is running, zswap can have more than one zpool on one cpu, but they
> - * are sharing dtsmem. So we need this mutex to be per-cpu.
> - */
> -static DEFINE_PER_CPU(struct mutex *, zswap_mutex);
> -
> -static int zswap_dstmem_prepare(unsigned int cpu)
> -{
> -       struct mutex *mutex;
> -       u8 *dst;
> -
> -       dst = kmalloc_node(PAGE_SIZE, GFP_KERNEL, cpu_to_node(cpu));
> -       if (!dst)
> -               return -ENOMEM;
> -
> -       mutex = kmalloc_node(sizeof(*mutex), GFP_KERNEL, cpu_to_node(cpu));
> -       if (!mutex) {
> -               kfree(dst);
> -               return -ENOMEM;
> -       }
> -
> -       mutex_init(mutex);
> -       per_cpu(zswap_dstmem, cpu) = dst;
> -       per_cpu(zswap_mutex, cpu) = mutex;
> -       return 0;
> -}
> -
> -static int zswap_dstmem_dead(unsigned int cpu)
> -{
> -       struct mutex *mutex;
> -       u8 *dst;
> -
> -       mutex = per_cpu(zswap_mutex, cpu);
> -       kfree(mutex);
> -       per_cpu(zswap_mutex, cpu) = NULL;
> -
> -       dst = per_cpu(zswap_dstmem, cpu);
> -       kfree(dst);
> -       per_cpu(zswap_dstmem, cpu) = NULL;
> -
> -       return 0;
> -}
> -
>  static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
>  {
>         struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
>         struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
>         struct crypto_acomp *acomp;
>         struct acomp_req *req;
> +       int ret;
> +
> +       mutex_init(&acomp_ctx->mutex);
> +
> +       acomp_ctx->buffer = kmalloc_node(PAGE_SIZE, GFP_KERNEL, cpu_to_node(cpu));
> +       if (!acomp_ctx->buffer)
> +               return -ENOMEM;
>
>         acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu));
>         if (IS_ERR(acomp)) {
>                 pr_err("could not alloc crypto acomp %s : %ld\n",
>                                 pool->tfm_name, PTR_ERR(acomp));
> -               return PTR_ERR(acomp);
> +               ret = PTR_ERR(acomp);
> +               goto acomp_fail;
>         }
>         acomp_ctx->acomp = acomp;
>
> @@ -758,8 +721,8 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
>         if (!req) {
>                 pr_err("could not alloc crypto acomp_request %s\n",
>                        pool->tfm_name);
> -               crypto_free_acomp(acomp_ctx->acomp);
> -               return -ENOMEM;
> +               ret = -ENOMEM;
> +               goto req_fail;
>         }
>         acomp_ctx->req = req;
>
> @@ -772,10 +735,13 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
>         acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
>                                    crypto_req_done, &acomp_ctx->wait);
>
> -       acomp_ctx->mutex = per_cpu(zswap_mutex, cpu);
> -       acomp_ctx->dstmem = per_cpu(zswap_dstmem, cpu);
> -
>         return 0;
> +
> +req_fail:
> +       crypto_free_acomp(acomp_ctx->acomp);
> +acomp_fail:
> +       kfree(acomp_ctx->buffer);
> +       return ret;
>  }
>
>  static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
> @@ -788,6 +754,7 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
>                 acomp_request_free(acomp_ctx->req);
>                 if (!IS_ERR_OR_NULL(acomp_ctx->acomp))
>                         crypto_free_acomp(acomp_ctx->acomp);
> +               kfree(acomp_ctx->buffer);
>         }
>
>         return 0;
> @@ -1400,12 +1367,12 @@ static void __zswap_load(struct zswap_entry *entry, struct page *page)
>         u8 *src;
>
>         acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> -       mutex_lock(acomp_ctx->mutex);
> +       mutex_lock(&acomp_ctx->mutex);
>
>         src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
>         if (!zpool_can_sleep_mapped(zpool)) {
> -               memcpy(acomp_ctx->dstmem, src, entry->length);
> -               src = acomp_ctx->dstmem;
> +               memcpy(acomp_ctx->buffer, src, entry->length);
> +               src = acomp_ctx->buffer;
>                 zpool_unmap_handle(zpool, entry->handle);
>         }
>
> @@ -1415,7 +1382,7 @@ static void __zswap_load(struct zswap_entry *entry, struct page *page)
>         acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, PAGE_SIZE);
>         BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait));
>         BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE);
> -       mutex_unlock(acomp_ctx->mutex);
> +       mutex_unlock(&acomp_ctx->mutex);
>
>         if (zpool_can_sleep_mapped(zpool))
>                 zpool_unmap_handle(zpool, entry->handle);
> @@ -1636,9 +1603,9 @@ bool zswap_store(struct folio *folio)
>         /* compress */
>         acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
>
> -       mutex_lock(acomp_ctx->mutex);
> +       mutex_lock(&acomp_ctx->mutex);
>
> -       dst = acomp_ctx->dstmem;
> +       dst = acomp_ctx->buffer;
>         sg_init_table(&input, 1);
>         sg_set_page(&input, page, PAGE_SIZE, 0);
>
> @@ -1681,7 +1648,7 @@ bool zswap_store(struct folio *folio)
>         buf = zpool_map_handle(zpool, handle, ZPOOL_MM_WO);
>         memcpy(buf, dst, dlen);
>         zpool_unmap_handle(zpool, handle);
> -       mutex_unlock(acomp_ctx->mutex);
> +       mutex_unlock(&acomp_ctx->mutex);
>
>         /* populate entry */
>         entry->swpentry = swp_entry(type, offset);
> @@ -1724,7 +1691,7 @@ bool zswap_store(struct folio *folio)
>         return true;
>
> put_dstmem:
> -       mutex_unlock(acomp_ctx->mutex);
> +       mutex_unlock(&acomp_ctx->mutex);
> put_pool:
>         zswap_pool_put(entry->pool);
> freepage:
> @@ -1899,13 +1866,6 @@ static int zswap_setup(void)
>                 goto cache_fail;
>         }
>
> -       ret = cpuhp_setup_state(CPUHP_MM_ZSWP_MEM_PREPARE, "mm/zswap:prepare",
> -                               zswap_dstmem_prepare, zswap_dstmem_dead);
> -       if (ret) {
> -               pr_err("dstmem alloc failed\n");
> -               goto dstmem_fail;
> -       }
> -
>         ret = cpuhp_setup_state_multi(CPUHP_MM_ZSWP_POOL_PREPARE,
>                                       "mm/zswap_pool:prepare",
>                                       zswap_cpu_comp_prepare,
> @@ -1937,8 +1897,6 @@ static int zswap_setup(void)
>         if (pool)
>                 zswap_pool_destroy(pool);
> hp_fail:
> -       cpuhp_remove_state(CPUHP_MM_ZSWP_MEM_PREPARE);
> -dstmem_fail:
>         kmem_cache_destroy(zswap_entry_cache);
> cache_fail:
>         /* if built-in, we aren't unloaded on failure; don't allow use */
>
> --
> b4 0.10.1