From: Yosry Ahmed <yosryahmed@google.com>
Date: Wed, 7 Jun 2023 01:14:18 -0700
Subject: Re: [RFC PATCH v2 1/7] mm: zswap: add pool shrinking mechanism
To: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Cc: vitaly.wool@konsulko.com, minchan@kernel.org, senozhatsky@chromium.org,
        linux-mm@kvack.org, ddstreet@ieee.org, sjenning@redhat.com,
        nphamcs@gmail.com, hannes@cmpxchg.org, akpm@linux-foundation.org,
        linux-kernel@vger.kernel.org, kernel-team@meta.com
In-Reply-To: <20230606145611.704392-2-cerasuolodomenico@gmail.com>
References: <20230606145611.704392-1-cerasuolodomenico@gmail.com>
        <20230606145611.704392-2-cerasuolodomenico@gmail.com>

On Tue, Jun 6, 2023 at 7:56 AM Domenico Cerasuolo wrote:
>
> Each zpool driver (zbud, z3fold and zsmalloc) implements its own shrink
> function, which is called from zpool_shrink. However, with this commit,
> a unified shrink function is added to zswap. The ultimate goal is to
> eliminate the need for zpool_shrink once all zpool implementations have
> dropped their shrink code.
>
> To ensure the functionality of each commit, this change focuses solely
> on adding the mechanism itself. No modifications are made to
> the backends, meaning that functionally, there are no immediate changes.
> The zswap mechanism will only come into effect once the backends have
> removed their shrink code. The subsequent commits will address the
> modifications needed in the backends.
>
> Signed-off-by: Domenico Cerasuolo
> ---
>  mm/zswap.c | 96 +++++++++++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 91 insertions(+), 5 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index bcb82e09eb64..c99bafcefecf 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -150,6 +150,12 @@ struct crypto_acomp_ctx {
>         struct mutex *mutex;
>  };
>
> +/*
> + * The lock ordering is zswap_tree.lock -> zswap_pool.lru_lock.
> + * The only case where lru_lock is not acquired while holding tree.lock is
> + * when a zswap_entry is taken off the lru for writeback, in that case it
> + * needs to be verified that it's still valid in the tree.
> + */
>  struct zswap_pool {
>         struct zpool *zpool;
>         struct crypto_acomp_ctx __percpu *acomp_ctx;
> @@ -159,6 +165,8 @@ struct zswap_pool {
>         struct work_struct shrink_work;
>         struct hlist_node node;
>         char tfm_name[CRYPTO_MAX_ALG_NAME];
> +       struct list_head lru;
> +       spinlock_t lru_lock;
>  };
>
>  /*
> @@ -176,10 +184,12 @@ struct zswap_pool {
>   *            be held while changing the refcount. Since the lock must
>   *            be held, there is no reason to also make refcount atomic.
>   * length - the length in bytes of the compressed page data. Needed during
> - *          decompression. For a same value filled page length is 0.
> + *          decompression. For a same value filled page length is 0, and both
> + *          pool and lru are invalid and must be ignored.
>   * pool - the zswap_pool the entry's data is in
>   * handle - zpool allocation handle that stores the compressed page data
>   * value - value of the same-value filled pages which have same content
> + * lru - handle to the pool's lru used to evict pages.
>   */
>  struct zswap_entry {
>         struct rb_node rbnode;
> @@ -192,6 +202,7 @@ struct zswap_entry {
>                 unsigned long value;
>         };
>         struct obj_cgroup *objcg;
> +       struct list_head lru;
>  };
>
>  struct zswap_header {
> @@ -364,6 +375,12 @@ static void zswap_free_entry(struct zswap_entry *entry)
>         if (!entry->length)
>                 atomic_dec(&zswap_same_filled_pages);
>         else {
> +               /* zpool_evictable will be removed once all 3 backends have migrated */
> +               if (!zpool_evictable(entry->pool->zpool)) {
> +                       spin_lock(&entry->pool->lru_lock);
> +                       list_del(&entry->lru);
> +                       spin_unlock(&entry->pool->lru_lock);
> +               }
>                 zpool_free(entry->pool->zpool, entry->handle);
>                 zswap_pool_put(entry->pool);
>         }
> @@ -584,14 +601,70 @@ static struct zswap_pool *zswap_pool_find_get(char *type, char *compressor)
>         return NULL;
>  }
>
> +static int zswap_shrink(struct zswap_pool *pool)

Nit: rename to zswap_shrink_one() so that it's clear we always
writeback one entry per call?
> +{
> +       struct zswap_entry *lru_entry, *tree_entry = NULL;
> +       struct zswap_header *zhdr;
> +       struct zswap_tree *tree;
> +       int swpoffset;
> +       int ret;
> +
> +       /* get a reclaimable entry from LRU */
> +       spin_lock(&pool->lru_lock);
> +       if (list_empty(&pool->lru)) {
> +               spin_unlock(&pool->lru_lock);
> +               return -EINVAL;
> +       }
> +       lru_entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
> +       list_del_init(&lru_entry->lru);
> +       zhdr = zpool_map_handle(pool->zpool, lru_entry->handle, ZPOOL_MM_RO);
> +       tree = zswap_trees[swp_type(zhdr->swpentry)];
> +       zpool_unmap_handle(pool->zpool, lru_entry->handle);
> +       /*
> +        * Once the pool lock is dropped, the lru_entry might get freed. The

Nit: lru lock*

> +        * swpoffset is copied to the stack, and lru_entry isn't deref'd again
> +        * until the entry is verified to still be alive in the tree.
> +        */
> +       swpoffset = swp_offset(zhdr->swpentry);
> +       spin_unlock(&pool->lru_lock);
> +
> +       /* hold a reference from tree so it won't be freed during writeback */
> +       spin_lock(&tree->lock);
> +       tree_entry = zswap_entry_find_get(&tree->rbroot, swpoffset);
> +       if (tree_entry != lru_entry) {
> +               if (tree_entry)
> +                       zswap_entry_put(tree, tree_entry);
> +               spin_unlock(&tree->lock);
> +               return -EAGAIN;
> +       }
> +       spin_unlock(&tree->lock);
> +
> +       ret = zswap_writeback_entry(pool->zpool, lru_entry->handle);
> +
> +       spin_lock(&tree->lock);
> +       if (ret) {
> +               spin_lock(&pool->lru_lock);
> +               list_move(&lru_entry->lru, &pool->lru);
> +               spin_unlock(&pool->lru_lock);
> +       }
> +       zswap_entry_put(tree, tree_entry);
> +       spin_unlock(&tree->lock);
> +
> +       return ret ? -EAGAIN : 0;
> +}
> +
>  static void shrink_worker(struct work_struct *w)
>  {
>         struct zswap_pool *pool = container_of(w, typeof(*pool),
>                                                 shrink_work);
>         int ret, failures = 0;
>
> +       /* zpool_evictable will be removed once all 3 backends have migrated */
>         do {
> -               ret = zpool_shrink(pool->zpool, 1, NULL);
> +               if (zpool_evictable(pool->zpool))
> +                       ret = zpool_shrink(pool->zpool, 1, NULL);
> +               else
> +                       ret = zswap_shrink(pool);
>                 if (ret) {
>                         zswap_reject_reclaim_fail++;
>                         if (ret != -EAGAIN)
> @@ -655,6 +728,8 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
>          */
>         kref_init(&pool->kref);
>         INIT_LIST_HEAD(&pool->list);
> +       INIT_LIST_HEAD(&pool->lru);
> +       spin_lock_init(&pool->lru_lock);
>         INIT_WORK(&pool->shrink_work, shrink_worker);
>
>         zswap_pool_debug("created", pool);
> @@ -1270,7 +1345,7 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
>         }
>
>         /* store */
> -       hlen = zpool_evictable(entry->pool->zpool) ? sizeof(zhdr) : 0;
> +       hlen = sizeof(zhdr);
>         gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
>         if (zpool_malloc_support_movable(entry->pool->zpool))
>                 gfp |= __GFP_HIGHMEM | __GFP_MOVABLE;
> @@ -1313,6 +1388,12 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
>                         zswap_entry_put(tree, dupentry);
>                 }
>         } while (ret == -EEXIST);
> +       /* zpool_evictable will be removed once all 3 backends have migrated */
> +       if (entry->length && !zpool_evictable(entry->pool->zpool)) {
> +               spin_lock(&entry->pool->lru_lock);
> +               list_add(&entry->lru, &entry->pool->lru);
> +               spin_unlock(&entry->pool->lru_lock);
> +       }
>         spin_unlock(&tree->lock);
>
>         /* update stats */
> @@ -1384,8 +1465,7 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
>         /* decompress */
>         dlen = PAGE_SIZE;
>         src = zpool_map_handle(entry->pool->zpool, entry->handle, ZPOOL_MM_RO);
> -       if (zpool_evictable(entry->pool->zpool))
> -               src += sizeof(struct zswap_header);
> +       src += sizeof(struct zswap_header);
>
>         if (!zpool_can_sleep_mapped(entry->pool->zpool)) {
>                 memcpy(tmp, src, entry->length);
> @@ -1415,6 +1495,12 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
>  freeentry:
>         spin_lock(&tree->lock);
>         zswap_entry_put(tree, entry);
> +       /* zpool_evictable will be removed once all 3 backends have migrated */
> +       if (entry->length && !zpool_evictable(entry->pool->zpool)) {
> +               spin_lock(&entry->pool->lru_lock);
> +               list_move(&entry->lru, &entry->pool->lru);
> +               spin_unlock(&entry->pool->lru_lock);
> +       }

It's not really this patch's fault, but when merged with commit
fe1d1f7d0fb5 ("mm: zswap: support exclusive loads") from mm-unstable
[1], and with CONFIG_ZSWAP_EXCLUSIVE_LOADS=y, this causes a crash.

This happens because fe1d1f7d0fb5 makes the loads exclusive, so
zswap_entry_put(tree, entry) above the added code causes the entry to
be freed, then we go ahead and dereference multiple fields within it
in the added chunk.

Moving the chunk above zswap_entry_put() (and consequently also above
zswap_invalidate_entry() from fe1d1f7d0fb5) makes this work correctly.

Perhaps it would be useful to rebase on top of fe1d1f7d0fb5 for your
next version(s), if any. Maybe the outcome would be something like:

        zswap_entry_put(tree, entry);
        if (!ret && IS_ENABLED(CONFIG_ZSWAP_EXCLUSIVE_LOADS)) {
                zswap_invalidate_entry(tree, entry);
        } else if (entry->length && !zpool_evictable(entry->pool->zpool)) {
                spin_lock(&entry->pool->lru_lock);
                list_move(&entry->lru, &entry->pool->lru);
                spin_unlock(&entry->pool->lru_lock);
        }

I am assuming that if we are going to invalidate the entry anyway there
is no need to move it to the front of the lru -- but I didn't really
think it through.

[1] https://lore.kernel.org/lkml/20230530210251.493194-1-yosryahmed@google.com/

>         spin_unlock(&tree->lock);
>
>         return ret;
> --
> 2.34.1
>
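
For what it's worth, spelled out with the surrounding locking, the tail
of zswap_frontswap_load() along the lines of the snippet above would
look roughly like the below. This is an untested sketch only, assuming
the zswap_invalidate_entry() helper and CONFIG_ZSWAP_EXCLUSIVE_LOADS
option introduced by fe1d1f7d0fb5:

freeentry:
        spin_lock(&tree->lock);
        /*
         * Drops only the reference taken by zswap_entry_find_get() at the
         * top of zswap_frontswap_load(); the tree still holds its base
         * reference, so the entry is still valid below.
         */
        zswap_entry_put(tree, entry);
        if (!ret && IS_ENABLED(CONFIG_ZSWAP_EXCLUSIVE_LOADS)) {
                /*
                 * Exclusive load: erase the entry from the tree and drop
                 * the tree's reference as well, freeing it -- so there is
                 * no point touching the LRU in this branch.
                 */
                zswap_invalidate_entry(tree, entry);
        } else if (entry->length && !zpool_evictable(entry->pool->zpool)) {
                /* Entry stays cached: rotate it to the MRU end of the LRU. */
                spin_lock(&entry->pool->lru_lock);
                list_move(&entry->lru, &entry->pool->lru);
                spin_unlock(&entry->pool->lru_lock);
        }
        spin_unlock(&tree->lock);

        return ret;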