From: Chris Li <chrisl@kernel.org>
Date: Wed, 13 Mar 2024 16:24:34 -0700
Subject: Re: [PATCH v6] zswap: replace RB tree with xarray
To: Johannes Weiner
Cc: Andrew Morton, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 Yosry Ahmed, Nhat Pham, "Matthew Wilcox (Oracle)", Chengming Zhou,
 Barry Song
References: <20240312-zswap-xarray-v6-1-1b82027d7082@kernel.org>
 <20240312184329.GA3501@cmpxchg.org>
In-Reply-To: <20240312184329.GA3501@cmpxchg.org>

Hi Johannes,

On Tue, Mar 12, 2024 at 11:43 AM Johannes Weiner wrote:
>
> On Tue, Mar 12, 2024 at 10:31:12AM -0700, Chris Li wrote:
> > A very deep RB tree requires rebalancing at times, which
> > contributes to zswap fault latencies. The xarray does not
> > need to perform tree rebalancing, so replacing the RB tree
> > with an xarray yields a small performance gain.
> >
> > One small difference is that an xarray insert might fail
> > with ENOMEM, while an RB tree insert does not allocate
> > additional memory.
> >
> > The zswap_entry size shrinks a bit because the RB node,
> > which holds two pointers and a color field, is removed. The
> > xarray stores the pointer in the xarray tree rather than in
> > the zswap_entry, so every entry costs one pointer in the
> > xarray tree. Overall, switching to the xarray should save
> > some memory if the swap entries are densely packed.
> >
> > Note that zswap_rb_search and zswap_rb_insert are always
> > followed by zswap_rb_erase. Using xa_erase and xa_store
> > directly saves one tree lookup as well.
> >
> > Remove zswap_invalidate_entry, since there is no longer a
> > need to call zswap_rb_erase; use zswap_free_entry instead.
> >
> > The "struct zswap_tree" has been replaced by "struct xarray",
> > and the tree spinlock has been replaced by the xarray lock.
> >
> > Kernel build test, run 10 times for each version, averages:
> > (memory.max=2GB, zswap shrinker and writeback enabled,
> > one 50GB swapfile, 24 HT cores, 32 jobs)
> >
> >          mm-9a0181a3710eb    xarray v5
> > user     3532.385            3535.658
> > sys      536.231             530.083
> > real     200.431             200.176
>
> This is a great improvement, code- and complexity-wise. Thanks!
>
> I have a few questions and comments below:
>
> What kernel version is this based on? It doesn't apply to
> mm-everything, and I can't find 9a0181a3710eb anywhere.

It is based on an old version of mm-unstable. I can try to rebase on
mm-everything or a later mm-unstable.

> > @@ -1555,28 +1473,35 @@ bool zswap_store(struct folio *folio)
> >  insert_entry:
> >       entry->swpentry = swp;
> >       entry->objcg = objcg;
> > -     if (objcg) {
> > -             obj_cgroup_charge_zswap(objcg, entry->length);
> > -             /* Account before objcg ref is moved to tree */
> > -             count_objcg_event(objcg, ZSWPOUT);
> > -     }
> >
> > -     /* map */
> > -     spin_lock(&tree->lock);
> >       /*
> >        * The folio may have been dirtied again, invalidate the
> >        * possibly stale entry before inserting the new entry.
> >        */
>
> The comment is now somewhat stale and somewhat out of place. It
> should be above that `if (old)` part... See below.

Ack.

> > -     if (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
> > -             zswap_invalidate_entry(tree, dupentry);
> > -             WARN_ON(zswap_rb_insert(&tree->rbroot, entry, &dupentry));
> > +     old = xa_store(tree, offset, entry, GFP_KERNEL);
> > +     if (xa_is_err(old)) {
> > +             int err = xa_err(old);
> > +             if (err == -ENOMEM)
> > +                     zswap_reject_alloc_fail++;
> > +             else
> > +                     WARN_ONCE(err, "%s: xa_store failed: %d\n",
> > +                               __func__, err);
> > +             goto store_failed;
>
> No need to complicate it. If we have a bug there, an incorrect fail
> stat bump is the least of our concerns. Also, no need for __func__
> since that information is included in the WARN:
>
>         if (xa_is_err(old)) {
>                 WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
>                 zswap_reject_alloc_fail++;
>                 goto store_failed;
>         }

Ah, I see. Thanks for the simplification.
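Folded in, the check in zswap_store() would read something like this
(just a sketch for the next version; it keeps the err local from
xa_err(old) so WARN_ONCE can report the unexpected value):

        old = xa_store(tree, offset, entry, GFP_KERNEL);
        if (xa_is_err(old)) {
                int err = xa_err(old);

                /* Only -ENOMEM is expected from xa_store() here. */
                WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
                zswap_reject_alloc_fail++;
                goto store_failed;
        }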
> I think here is where that comment above should go:

Ack.

>         /*
>          * We may have had an existing entry that became stale when
>          * the folio was redirtied and now the new version is being
>          * swapped out. Get rid of the old.
>          */
>
> > +     if (old)
> > +             zswap_entry_free(old);
> > +
> > +     if (objcg) {
> > +             obj_cgroup_charge_zswap(objcg, entry->length);
> > +             /* Account before objcg ref is moved to tree */
> > +             count_objcg_event(objcg, ZSWPOUT);
> >       }
> > +
> >       if (entry->length) {
> >               INIT_LIST_HEAD(&entry->lru);
> >               zswap_lru_add(&zswap.list_lru, entry);
> >               atomic_inc(&zswap.nr_stored);
> >       }
> > -     spin_unlock(&tree->lock);
>
> We previously relied on the tree lock to finish initializing the
> entry while it's already in the tree. Now we rely on something else:
>
> 1. Concurrent stores and invalidations are excluded by folio lock.
>
> 2. Writeback is excluded by the entry not being on the LRU yet.
>
> The publishing order matters to prevent writeback from seeing an
> incoherent entry.
>
> I think this deserves a comment.

I will add your 1. and 2. into a comment block. Thanks for the
suggestion.
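Roughly like this, next to where the entry is published (exact wording
and placement are a sketch):

        /*
         * The entry is published to the xarray before it is fully
         * initialized. This is safe because:
         *
         * 1. Concurrent stores and invalidations are excluded by the
         *    folio lock.
         *
         * 2. Writeback is excluded by the entry not being on the LRU
         *    yet, so the entry must only be added to the LRU once it
         *    is fully coherent.
         */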
> >       /* update stats */
> >       atomic_inc(&zswap_stored_pages);
> > @@ -1585,6 +1510,12 @@ bool zswap_store(struct folio *folio)
> >
> >       return true;
> >
> > +store_failed:
> > +     if (!entry->length) {
> > +             atomic_dec(&zswap_same_filled_pages);
> > +             goto freepage;
> > +     }
>
> It'd be good to avoid the nested goto. Why not make the pool
> operations conditional on entry->length instead:
>
> store_failed:
>         if (!entry->length)
>                 atomic_dec(&zswap_same_filled_pages);
>         else {
>                 zpool_free(zswap_find_zpool(...));
> put_pool:
>                 zswap_pool_put(entry->pool);
>         }
> freepage:

Sure, I have one internal version exactly like that. I later changed
it again to get rid of the else. I can use your version as well.

> Not super pretty either, but it's a linear flow at least.

Thanks for your suggestions. I will send out a new version.

Chris