From: Nhat Pham <nphamcs@gmail.com>
Date: Fri, 2 Feb 2024 14:33:37 -0800
Subject: Re: [PATCH 6/6] mm/zswap: zswap entry doesn't need refcount anymore
To: Chengming Zhou
Cc: Johannes Weiner, Andrew Morton, Yosry Ahmed, linux-kernel@vger.kernel.org, linux-mm@kvack.org
In-Reply-To: <20240201-b4-zswap-invalidate-entry-v1-6-56ed496b6e55@bytedance.com>
On Thu, Feb 1, 2024 at 7:50 AM Chengming Zhou wrote:
>
> Since we don't need to leave the zswap entry on the zswap tree anymore,
> we should remove it from the tree as soon as we find it there.
>
> Then after using it, we can free it directly; no concurrent path
> can find it through the tree.
> Only the shrinker can see it from the LRU list,
> which will also double-check under the tree lock, so there is no race.
>
> So we no longer need a refcount in the zswap entry, and we don't need
> to take the spinlock a second time to invalidate it.
>
> The side effect is that zswap_entry_free() may no longer happen under the
> tree spinlock, but that's OK since nothing there needs to be protected
> by the lock.
>
> Signed-off-by: Chengming Zhou

Oh this is sweet! Fewer things to keep in mind.
Reviewed-by: Nhat Pham <nphamcs@gmail.com>

> ---
>  mm/zswap.c | 63 +++++++++++----------------------------------------------
>  1 file changed, 11 insertions(+), 52 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index cbf379abb6c7..cd67f7f6b302 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -193,12 +193,6 @@ struct zswap_pool {
>   *
>   * rbnode - links the entry into red-black tree for the appropriate swap type
>   * swpentry - associated swap entry, the offset indexes into the red-black tree
> - * refcount - the number of outstanding reference to the entry. This is needed
> - *            to protect against premature freeing of the entry by code
> - *            concurrent calls to load, invalidate, and writeback.  The lock
> - *            for the zswap_tree structure that contains the entry must
> - *            be held while changing the refcount.  Since the lock must
> - *            be held, there is no reason to also make refcount atomic.
>   * length - the length in bytes of the compressed page data.  Needed during
>   *          decompression. For a same value filled page length is 0, and both
>   *          pool and lru are invalid and must be ignored.
> @@ -211,7 +205,6 @@ struct zswap_pool {
>  struct zswap_entry {
>  	struct rb_node rbnode;
>  	swp_entry_t swpentry;
> -	int refcount;

Hah, this should even make zswap a bit more space-efficient.
IIRC Yosry has some analysis of how much less efficient zswap becomes
every time we add a new field to the zswap entry - this goes in the
opposite direction :)

>  	unsigned int length;
>  	struct zswap_pool *pool;
>  	union {
> @@ -222,11 +215,6 @@ struct zswap_entry {
>  	struct list_head lru;
>  };
>
> -/*
> - * The tree lock in the zswap_tree struct protects a few things:
> - * - the rbtree
> - * - the refcount field of each entry in the tree
> - */
>  struct zswap_tree {
>  	struct rb_root rbroot;
>  	spinlock_t lock;
> @@ -890,14 +878,10 @@ static int zswap_rb_insert(struct rb_root *root, struct zswap_entry *entry,
>  	return 0;
>  }
>
> -static bool zswap_rb_erase(struct rb_root *root, struct zswap_entry *entry)
> +static void zswap_rb_erase(struct rb_root *root, struct zswap_entry *entry)
>  {
> -	if (!RB_EMPTY_NODE(&entry->rbnode)) {
> -		rb_erase(&entry->rbnode, root);
> -		RB_CLEAR_NODE(&entry->rbnode);
> -		return true;
> -	}
> -	return false;
> +	rb_erase(&entry->rbnode, root);
> +	RB_CLEAR_NODE(&entry->rbnode);
>  }
>
>  /*********************************
> @@ -911,7 +895,6 @@ static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
>  	entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
>  	if (!entry)
>  		return NULL;
> -	entry->refcount = 1;
>  	RB_CLEAR_NODE(&entry->rbnode);
>  	return entry;
>  }
> @@ -954,33 +937,15 @@ static void zswap_entry_free(struct zswap_entry *entry)
>  	zswap_update_total_size();
>  }
>
> -/* caller must hold the tree lock */
> -static void zswap_entry_get(struct zswap_entry *entry)
> -{
> -	WARN_ON_ONCE(!entry->refcount);
> -	entry->refcount++;
> -}
> -
> -/* caller must hold the tree lock */
> -static void zswap_entry_put(struct zswap_entry *entry)
> -{
> -	WARN_ON_ONCE(!entry->refcount);
> -	if (--entry->refcount == 0) {
> -		WARN_ON_ONCE(!RB_EMPTY_NODE(&entry->rbnode));
> -		zswap_entry_free(entry);
> -	}
> -}
> -
>  /*
> - * If the entry is still valid in the tree, drop the initial ref and remove it
> - * from the tree. This function must be called with an additional ref held,
> - * otherwise it may race with another invalidation freeing the entry.
> + * The caller holds the tree lock and has found the entry in the tree,
> + * so it must be on the tree; remove it from the tree and free it.
>   */
>  static void zswap_invalidate_entry(struct zswap_tree *tree,
>  				   struct zswap_entry *entry)
>  {
> -	if (zswap_rb_erase(&tree->rbroot, entry))
> -		zswap_entry_put(entry);
> +	zswap_rb_erase(&tree->rbroot, entry);
> +	zswap_entry_free(entry);
>  }
>
>  /*********************************
> @@ -1219,7 +1184,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
>  	}
>
>  	/* Safe to deref entry after the entry is verified above. */
> -	zswap_entry_get(entry);
> +	zswap_rb_erase(&tree->rbroot, entry);
>  	spin_unlock(&tree->lock);
>
>  	zswap_decompress(entry, &folio->page);
> @@ -1228,10 +1193,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
>  	if (entry->objcg)
>  		count_objcg_event(entry->objcg, ZSWPWB);
>
> -	spin_lock(&tree->lock);
> -	zswap_invalidate_entry(tree, entry);
> -	zswap_entry_put(entry);
> -	spin_unlock(&tree->lock);
> +	zswap_entry_free(entry);
>
>  	/* folio is up to date */
>  	folio_mark_uptodate(folio);
> @@ -1702,7 +1664,7 @@ bool zswap_load(struct folio *folio)
>  		spin_unlock(&tree->lock);
>  		return false;
>  	}
> -	zswap_entry_get(entry);
> +	zswap_rb_erase(&tree->rbroot, entry);
>  	spin_unlock(&tree->lock);
>
>  	if (entry->length)
> @@ -1717,10 +1679,7 @@ bool zswap_load(struct folio *folio)
>  	if (entry->objcg)
>  		count_objcg_event(entry->objcg, ZSWPIN);
>
> -	spin_lock(&tree->lock);
> -	zswap_invalidate_entry(tree, entry);
> -	zswap_entry_put(entry);
> -	spin_unlock(&tree->lock);
> +	zswap_entry_free(entry);
>
>  	folio_mark_dirty(folio);
>
>
> --
> b4 0.10.1