From: Ruan Jinjie <ruanjinjie@huawei.com>
To: Miaohe Lin, Vitaly Wool, Andrew Morton
Subject: Re: [PATCH -next] mm: zswap: use helper function put_z3fold_locked()
Date: Thu, 3 Aug 2023 17:35:33 +0800
Message-ID: <8cb1257e-36dc-5e37-1b73-8d3054115b83@huawei.com>
In-Reply-To: <12320046-cc1e-17e3-a320-df2a2e677e6e@huawei.com>
References: <20230803070820.3775663-1-ruanjinjie@huawei.com> <12320046-cc1e-17e3-a320-df2a2e677e6e@huawei.com>
On 2023/8/3 16:38, Miaohe Lin wrote:
> On 2023/8/3 15:08, Ruan Jinjie wrote:
>> This code is already duplicated six times; use the helper function
>> put_z3fold_locked() to release the z3fold page instead of open-coding it,
>> to help improve code readability a bit. No functional change involved.
>>
>
> There is one "if (kref_put(&zhdr->refcount, release_z3fold_page_locked_list))"
> left in z3fold_free(). It might be better to add another helper for it to make
> the code more consistent? But no strong preference.

Ok, I will add it soon.

>
> Thanks.
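For reference, a minimal sketch of what that second helper might look like,
assuming it simply mirrors put_z3fold_locked() for the list variant (the name
put_z3fold_locked_list() here is only illustrative, not final):

/* Sketch only: wrap the list-variant release callback the same way
 * put_z3fold_locked() wraps release_z3fold_page_locked(). */
static inline int put_z3fold_locked_list(struct z3fold_header *zhdr)
{
	return kref_put(&zhdr->refcount, release_z3fold_page_locked_list);
}

With that in place, the remaining open-coded
kref_put(&zhdr->refcount, release_z3fold_page_locked_list) in z3fold_free()
could become a single put_z3fold_locked_list(zhdr) call, matching the other
call sites converted below.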
>
>> Signed-off-by: Ruan Jinjie
>> ---
>>  mm/z3fold.c | 18 +++++++++++-------
>>  1 file changed, 11 insertions(+), 7 deletions(-)
>>
>> diff --git a/mm/z3fold.c b/mm/z3fold.c
>> index e84de91ecccb..0b483de83d95 100644
>> --- a/mm/z3fold.c
>> +++ b/mm/z3fold.c
>> @@ -480,6 +480,11 @@ static void release_z3fold_page_locked_list(struct kref *ref)
>>  	__release_z3fold_page(zhdr, true);
>>  }
>>
>> +static inline int put_z3fold_locked(struct z3fold_header *zhdr)
>> +{
>> +	return kref_put(&zhdr->refcount, release_z3fold_page_locked);
>> +}
>> +
>>  static void free_pages_work(struct work_struct *w)
>>  {
>>  	struct z3fold_pool *pool = container_of(w, struct z3fold_pool, work);
>> @@ -666,7 +671,7 @@ static struct z3fold_header *compact_single_buddy(struct z3fold_header *zhdr)
>>  	return new_zhdr;
>>
>>  out_fail:
>> -	if (new_zhdr && !kref_put(&new_zhdr->refcount, release_z3fold_page_locked)) {
>> +	if (new_zhdr && !put_z3fold_locked(new_zhdr)) {
>>  		add_to_unbuddied(pool, new_zhdr);
>>  		z3fold_page_unlock(new_zhdr);
>>  	}
>> @@ -741,7 +746,7 @@ static void do_compact_page(struct z3fold_header *zhdr, bool locked)
>>  	list_del_init(&zhdr->buddy);
>>  	spin_unlock(&pool->lock);
>>
>> -	if (kref_put(&zhdr->refcount, release_z3fold_page_locked))
>> +	if (put_z3fold_locked(zhdr))
>>  		return;
>>
>>  	if (test_bit(PAGE_STALE, &page->private) ||
>> @@ -752,7 +757,7 @@ static void do_compact_page(struct z3fold_header *zhdr, bool locked)
>>
>>  	if (!zhdr->foreign_handles && buddy_single(zhdr) &&
>>  	    zhdr->mapped_count == 0 && compact_single_buddy(zhdr)) {
>> -		if (!kref_put(&zhdr->refcount, release_z3fold_page_locked)) {
>> +		if (!put_z3fold_locked(zhdr)) {
>>  			clear_bit(PAGE_CLAIMED, &page->private);
>>  			z3fold_page_unlock(zhdr);
>>  		}
>> @@ -878,7 +883,7 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
>>  	return zhdr;
>>
>>  out_fail:
>> -	if (!kref_put(&zhdr->refcount, release_z3fold_page_locked)) {
>> +	if (!put_z3fold_locked(zhdr)) {
>>  		add_to_unbuddied(pool, zhdr);
>>  		z3fold_page_unlock(zhdr);
>>  	}
>> @@ -1012,8 +1017,7 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
>>  	if (zhdr) {
>>  		bud = get_free_buddy(zhdr, chunks);
>>  		if (bud == HEADLESS) {
>> -			if (!kref_put(&zhdr->refcount,
>> -					release_z3fold_page_locked))
>> +			if (!put_z3fold_locked(zhdr))
>>  				z3fold_page_unlock(zhdr);
>>  			pr_err("No free chunks in unbuddied\n");
>>  			WARN_ON(1);
>> @@ -1346,7 +1350,7 @@ static void z3fold_page_putback(struct page *page)
>>  	if (!list_empty(&zhdr->buddy))
>>  		list_del_init(&zhdr->buddy);
>>  	INIT_LIST_HEAD(&page->lru);
>> -	if (kref_put(&zhdr->refcount, release_z3fold_page_locked))
>> +	if (put_z3fold_locked(zhdr))
>>  		return;
>>  	if (list_empty(&zhdr->buddy))
>>  		add_to_unbuddied(pool, zhdr);
>>
>