From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nhat Pham <nphamcs@gmail.com>
Date: Tue, 19 Dec 2023 10:43:21 -0800
Subject: Re: [PATCH v3 6/6] mm/zswap: directly use percpu mutex and buffer in load/store
To: Chris Li
Cc: Chengming Zhou, Seth Jennings, Yosry Ahmed, Vitaly Wool, Dan Streetman,
 Johannes Weiner, Andrew Morton, linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <20231213-zswap-dstmem-v3-0-4eac09b94ece@bytedance.com>
 <20231213-zswap-dstmem-v3-6-4eac09b94ece@bytedance.com>
Content-Type: text/plain; charset="UTF-8"
On Tue, Dec 19, 2023 at 5:29 AM Chris Li wrote:
>
> Hi Chengming and Yosry,
>
> On Mon, Dec 18, 2023 at 3:50 AM Chengming Zhou wrote:
> >
> > Since the introduction of dstmem reuse in the load path, it seems
> > confusing that we are now using acomp_ctx->dstmem and acomp_ctx->mutex
> > for purposes other than what the naming suggests.
> >
> > Yosry suggested removing these two fields from acomp_ctx, directly
> > using zswap_dstmem and zswap_mutex in both the load and store paths,
> > renaming them, and adding proper comments above their definitions
> > saying that they are for generic percpu buffering on the load and
> > store paths.
> >
> > So this patch removes dstmem and mutex from acomp_ctx, renames
> > zswap_dstmem to zswap_buffer, and uses the percpu mutex and buffer on
> > the load and store paths.
>
> Sorry for joining this discussion late.
>
> I get the rename of "dstmem" to "buffer", because the buffer is used
> for both load and store. What I don't get is why we move it out of the
> acomp_ctx struct. Now we have 3 per-cpu entries: buffer, mutex and
> acomp_ctx. I think we should do the reverse: fold these three per-cpu
> entries into one struct, the acomp_ctx. Each per_cpu() load does a
> dance around the cpu id, preempt disabling, etc., while each struct
> member load is just a plain memory load. It seems to me it would be
> more optimal to combine these three per-cpu entries into acomp_ctx,
> and do the per-cpu lookup for acomp_ctx just once.

I agree with Chris. From a practicality POV, what Chris says here makes
sense. From a semantic POV, this buffer is only used in (de)compression
contexts - be it in store, load, or writeback - so keeping it in the
original struct still makes sense to me. Why separate it out, without
any benefit? Just rename the old field to buffer or zswap_buffer and
call it a day? It will be a smaller patch too!
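
Roughly what I have in mind - an untested sketch, not part of this
patch (embedding the mutex rather than keeping today's pointer is my
own liberty here):

	struct crypto_acomp_ctx {
		struct crypto_acomp *acomp;
		struct acomp_req *req;
		struct crypto_wait wait;
		u8 *buffer;		/* was dstmem; shared by load/store/writeback */
		struct mutex mutex;	/* protects this CPU's ctx, incl. buffer */
	};

Embedding the mutex would also let the hotplug callback just
mutex_init() it in place, dropping one kmalloc_node() and its failure
path.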
> >
> > Suggested-by: Yosry Ahmed
> > Signed-off-by: Chengming Zhou
> > ---
> >  mm/zswap.c | 69 +++++++++++++++++++++++++++++++++-----------------------
> >  1 file changed, 37 insertions(+), 32 deletions(-)
> >
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index 2c349fd88904..71bdcd552e5b 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -166,8 +166,6 @@ struct crypto_acomp_ctx {
> >         struct crypto_acomp *acomp;
> >         struct acomp_req *req;
> >         struct crypto_wait wait;
> > -       u8 *dstmem;
> > -       struct mutex *mutex;
> >  };
> >
> >  /*
> > @@ -694,7 +692,7 @@ static void zswap_alloc_shrinker(struct zswap_pool *pool)
> >  /*********************************
> >  * per-cpu code
> >  **********************************/
> > -static DEFINE_PER_CPU(u8 *, zswap_dstmem);
> > +static DEFINE_PER_CPU(u8 *, zswap_buffer);
> >  /*
> >   * If users dynamically change the zpool type and compressor at runtime, i.e.
> >   * zswap is running, zswap can have more than one zpool on one cpu, but they
> > @@ -702,39 +700,39 @@ static DEFINE_PER_CPU(u8 *, zswap_dstmem);
> >   */
> >  static DEFINE_PER_CPU(struct mutex *, zswap_mutex);
> >
> > -static int zswap_dstmem_prepare(unsigned int cpu)
> > +static int zswap_buffer_prepare(unsigned int cpu)
> >  {
> >         struct mutex *mutex;
> > -       u8 *dst;
> > +       u8 *buf;
> >
> > -       dst = kmalloc_node(PAGE_SIZE, GFP_KERNEL, cpu_to_node(cpu));
> > -       if (!dst)
> > +       buf = kmalloc_node(PAGE_SIZE, GFP_KERNEL, cpu_to_node(cpu));
> > +       if (!buf)
> >                 return -ENOMEM;
> >
> >         mutex = kmalloc_node(sizeof(*mutex), GFP_KERNEL, cpu_to_node(cpu));
> >         if (!mutex) {
> > -               kfree(dst);
> > +               kfree(buf);
> >                 return -ENOMEM;
> >         }
> >
> >         mutex_init(mutex);
> > -       per_cpu(zswap_dstmem, cpu) = dst;
> > +       per_cpu(zswap_buffer, cpu) = buf;
> >         per_cpu(zswap_mutex, cpu) = mutex;
> >         return 0;
> >  }
> >
> > -static int zswap_dstmem_dead(unsigned int cpu)
> > +static int zswap_buffer_dead(unsigned int cpu)
> >  {
> >         struct mutex *mutex;
> > -       u8 *dst;
> > +       u8 *buf;
> >
> >         mutex = per_cpu(zswap_mutex, cpu);
> >         kfree(mutex);
> >         per_cpu(zswap_mutex, cpu) = NULL;
> >
> > -       dst = per_cpu(zswap_dstmem, cpu);
> > -       kfree(dst);
> > -       per_cpu(zswap_dstmem, cpu) = NULL;
> > +       buf = per_cpu(zswap_buffer, cpu);
> > +       kfree(buf);
> > +       per_cpu(zswap_buffer, cpu) = NULL;
> >
> >         return 0;
> >  }
> > @@ -772,9 +770,6 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
> >         acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
> >                                    crypto_req_done, &acomp_ctx->wait);
> >
> > -       acomp_ctx->mutex = per_cpu(zswap_mutex, cpu);
> > -       acomp_ctx->dstmem = per_cpu(zswap_dstmem, cpu);
> > -
> >         return 0;
> >  }
> >
> > @@ -1397,15 +1392,21 @@ static void __zswap_load(struct zswap_entry *entry, struct page *page)
> >         struct zpool *zpool = zswap_find_zpool(entry);
> >         struct scatterlist input, output;
> >         struct crypto_acomp_ctx *acomp_ctx;
> > -       u8 *src;
> > +       u8 *src, *buf;
> > +       int cpu;
> > +       struct mutex *mutex;
> >
> > -       acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> > -       mutex_lock(acomp_ctx->mutex);
> > +       cpu = raw_smp_processor_id();
> > +       mutex = per_cpu(zswap_mutex, cpu);
>
> First per-cpu call.
>
> > +       mutex_lock(mutex);
> > +
> > +       acomp_ctx = per_cpu_ptr(entry->pool->acomp_ctx, cpu);
>
> Second per-cpu call.
>
> >
> >         src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
> >         if (!zpool_can_sleep_mapped(zpool)) {
> > -               memcpy(acomp_ctx->dstmem, src, entry->length);
> > -               src = acomp_ctx->dstmem;
> > +               buf = per_cpu(zswap_buffer, cpu);
>
> Here is the third per-cpu call. I think doing one per-cpu lookup and
> having the rest load from the context is more optimal.
> Conceptually it is cleaner as well: it is clear what this mutex is
> supposed to protect - the compression context struct. Move it out as
> a per-cpu variable, and the protection scope of the mutex is less
> clear.
>
> What am I missing?
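
Nothing, I think. For comparison, with buffer and mutex folded back
into acomp_ctx, the load side would look roughly like this (sketch
only, assuming the embedded mutex from the struct above; not tested):

	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
	mutex_lock(&acomp_ctx->mutex);	/* single per-cpu lookup above */

	src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
	if (!zpool_can_sleep_mapped(zpool)) {
		/* plain member loads from here on, no further per-cpu dance */
		memcpy(acomp_ctx->buffer, src, entry->length);
		src = acomp_ctx->buffer;
		zpool_unmap_handle(zpool, entry->handle);
	}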
>
> Chris
>
> > +               memcpy(buf, src, entry->length);
> > +               src = buf;
> >                 zpool_unmap_handle(zpool, entry->handle);
> >         }
> >
> > @@ -1415,7 +1416,7 @@ static void __zswap_load(struct zswap_entry *entry, struct page *page)
> >         acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, PAGE_SIZE);
> >         BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait));
> >         BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE);
> > -       mutex_unlock(acomp_ctx->mutex);
> > +       mutex_unlock(mutex);
> >
> >         if (zpool_can_sleep_mapped(zpool))
> >                 zpool_unmap_handle(zpool, entry->handle);
> > @@ -1551,6 +1552,8 @@ bool zswap_store(struct folio *folio)
> >         u8 *src, *dst;
> >         gfp_t gfp;
> >         int ret;
> > +       int cpu;
> > +       struct mutex *mutex;
> >
> >         VM_WARN_ON_ONCE(!folio_test_locked(folio));
> >         VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
> > @@ -1636,11 +1639,13 @@ bool zswap_store(struct folio *folio)
> >         }
> >
> >         /* compress */
> > -       acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> > +       cpu = raw_smp_processor_id();
> > +       mutex = per_cpu(zswap_mutex, cpu);
> > +       mutex_lock(mutex);
> >
> > -       mutex_lock(acomp_ctx->mutex);
> > +       acomp_ctx = per_cpu_ptr(entry->pool->acomp_ctx, cpu);
> > +       dst = per_cpu(zswap_buffer, cpu);
> >
> > -       dst = acomp_ctx->dstmem;
> >         sg_init_table(&input, 1);
> >         sg_set_page(&input, page, PAGE_SIZE, 0);
> >
> > @@ -1683,7 +1688,7 @@ bool zswap_store(struct folio *folio)
> >         buf = zpool_map_handle(zpool, handle, ZPOOL_MM_WO);
> >         memcpy(buf, dst, dlen);
> >         zpool_unmap_handle(zpool, handle);
> > -       mutex_unlock(acomp_ctx->mutex);
> > +       mutex_unlock(mutex);
> >
> >         /* populate entry */
> >         entry->swpentry = swp_entry(type, offset);
> > @@ -1726,7 +1731,7 @@ bool zswap_store(struct folio *folio)
> >         return true;
> >
> >  put_dstmem:
> > -       mutex_unlock(acomp_ctx->mutex);
> > +       mutex_unlock(mutex);
> >  put_pool:
> >         zswap_pool_put(entry->pool);
> >  freepage:
> > @@ -1902,10 +1907,10 @@ static int zswap_setup(void)
> >         }
> >
> >         ret = cpuhp_setup_state(CPUHP_MM_ZSWP_MEM_PREPARE, "mm/zswap:prepare",
> > -                               zswap_dstmem_prepare, zswap_dstmem_dead);
> > +                               zswap_buffer_prepare, zswap_buffer_dead);
> >         if (ret) {
> > -               pr_err("dstmem alloc failed\n");
> > -               goto dstmem_fail;
> > +               pr_err("buffer alloc failed\n");
> > +               goto buffer_fail;
> >         }
> >
> >         ret = cpuhp_setup_state_multi(CPUHP_MM_ZSWP_POOL_PREPARE,
> > @@ -1940,7 +1945,7 @@ static int zswap_setup(void)
> >         zswap_pool_destroy(pool);
> >  hp_fail:
> >         cpuhp_remove_state(CPUHP_MM_ZSWP_MEM_PREPARE);
> > -dstmem_fail:
> > +buffer_fail:
> >         kmem_cache_destroy(zswap_entry_cache);
> >  cache_fail:
> >         /* if built-in, we aren't unloaded on failure; don't allow use */
> >
> > --
> > b4 0.10.1
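
FWIW, the consolidated version could also simplify the hotplug side:
zswap_cpu_comp_prepare() would allocate the buffer and init the
embedded mutex in place, and the separate zswap_buffer_prepare()/
zswap_buffer_dead() pair would no longer be needed. Untested sketch,
with the existing acomp/req/wait setup elided:

	static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
	{
		struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
		struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);

		/* ... existing acomp/req/wait setup ... */

		acomp_ctx->buffer = kmalloc_node(PAGE_SIZE, GFP_KERNEL,
						 cpu_to_node(cpu));
		if (!acomp_ctx->buffer)
			return -ENOMEM;

		mutex_init(&acomp_ctx->mutex);
		return 0;
	}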