From: Nhat Pham <nphamcs@gmail.com>
Date: Sun, 11 Feb 2024 13:04:48 -0800
Subject: Re: [PATCH 1/2] mm/zswap: global lru and shrinker shared by all zswap_pools
To: Chengming Zhou
Cc: Andrew Morton, Johannes Weiner, Yosry Ahmed, linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <20240210-zswap-global-lru-v1-1-853473d7b0da@bytedance.com>

On Sun, Feb 11, 2024 at 5:57 AM Chengming Zhou wrote:
>
> Dynamic zswap_pool creation may leave multiple zswap_pools on the
> zswap_pools list, but only the first (current) one is used for new
> stores.
>
> Nevertheless, each zswap_pool has its own lru and shrinker, which
> is unnecessary and problematic:
>
> 1. Under memory pressure, the shrinkers of all zswap_pools try to
>    shrink their own lrus, with no ordering between them.
>
> 2. When the zswap limit is hit, only the last zswap_pool's
>    shrink_work tries to shrink its lru, which is inefficient.
>
> A global lru and shrinker shared by all zswap_pools is both simpler
> and more efficient.
>
> Signed-off-by: Chengming Zhou

I'll do a careful review later, but IMO this is a good idea :)

Chris pointed out, when he reviewed the zswap shrinker patch series,
that the reclaim algorithm has to decide which pool to reclaim from,
and I have always thought it was a bit weird that we have to do that
at all. We should reclaim stored objects in access order, regardless
of which pool they belong to. Having a shared LRU and the other
associated reclaim structures is sound, and saves a bit of space
too while we're at it.
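
For folks skimming the thread, here is a condensed sketch of the
data-layout change as I read it (illustrative only -- the real
definitions are in the diff below):

    /* Before: every pool carried its own reclaim state. */
    struct zswap_pool {
            /* ... compressor/zpool fields ... */
            struct work_struct shrink_work;
            struct list_lru list_lru;
            struct mem_cgroup *next_shrink;
            struct shrinker *shrinker;
            atomic_t nr_stored;
    };

    /* After: a single global instance shared by all pools. */
    struct {
            struct list_lru list_lru;
            atomic_t nr_stored;
            struct shrinker *shrinker;
            struct work_struct shrink_work;
            struct mem_cgroup *next_shrink;
    } zswap;

so the shrinker callbacks, shrink_worker() and the memcg round-robin
cursor all operate on one LRU, no matter which pool an entry's
compressed data lives in.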

> ---
>  mm/zswap.c | 153 ++++++++++++++++++++++----------------------------------------
>  1 file changed, 55 insertions(+), 98 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 62fe307521c9..7668db8c10e3 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -176,14 +176,17 @@ struct zswap_pool {
>         struct kref kref;
>         struct list_head list;
>         struct work_struct release_work;
> -       struct work_struct shrink_work;
>         struct hlist_node node;
>         char tfm_name[CRYPTO_MAX_ALG_NAME];
> +};
> +
> +struct {
>         struct list_lru list_lru;
> -       struct mem_cgroup *next_shrink;
> -       struct shrinker *shrinker;
>         atomic_t nr_stored;
> -};
> +       struct shrinker *shrinker;
> +       struct work_struct shrink_work;
> +       struct mem_cgroup *next_shrink;
> +} zswap;
>
>  /*
>   * struct zswap_entry
> @@ -301,9 +304,6 @@ static void zswap_update_total_size(void)
>   * pool functions
>   **********************************/
>
> -static void zswap_alloc_shrinker(struct zswap_pool *pool);
> -static void shrink_worker(struct work_struct *w);
> -
>  static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
>  {
>         int i;
> @@ -353,30 +353,16 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
>         if (ret)
>                 goto error;
>
> -       zswap_alloc_shrinker(pool);
> -       if (!pool->shrinker)
> -               goto error;
> -
> -       pr_debug("using %s compressor\n", pool->tfm_name);
> -
>         /* being the current pool takes 1 ref; this func expects the
>          * caller to always add the new pool as the current pool
>          */
>         kref_init(&pool->kref);
>         INIT_LIST_HEAD(&pool->list);
> -       if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
> -               goto lru_fail;
> -       shrinker_register(pool->shrinker);
> -       INIT_WORK(&pool->shrink_work, shrink_worker);
> -       atomic_set(&pool->nr_stored, 0);
>
>         zswap_pool_debug("created", pool);
>
>         return pool;
>
> -lru_fail:
> -       list_lru_destroy(&pool->list_lru);
> -       shrinker_free(pool->shrinker);
>  error:
>         if (pool->acomp_ctx)
>                 free_percpu(pool->acomp_ctx);
> @@ -434,15 +420,8 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
>
>         zswap_pool_debug("destroying", pool);
>
> -       shrinker_free(pool->shrinker);
>         cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
>         free_percpu(pool->acomp_ctx);
> -       list_lru_destroy(&pool->list_lru);
> -
> -       spin_lock(&zswap_pools_lock);
> -       mem_cgroup_iter_break(NULL, pool->next_shrink);
> -       pool->next_shrink = NULL;
> -       spin_unlock(&zswap_pools_lock);
>
>         for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
>                 zpool_destroy_pool(pool->zpools[i]);
> @@ -529,24 +508,6 @@ static struct zswap_pool *zswap_pool_current_get(void)
>         return pool;
>  }
>
> -static struct zswap_pool *zswap_pool_last_get(void)
> -{
> -       struct zswap_pool *pool, *last = NULL;
> -
> -       rcu_read_lock();
> -
> -       list_for_each_entry_rcu(pool, &zswap_pools, list)
> -               last = pool;
> -       WARN_ONCE(!last && zswap_has_pool,
> -                 "%s: no page storage pool!\n", __func__);
> -       if (!zswap_pool_get(last))
> -               last = NULL;
> -
> -       rcu_read_unlock();
> -
> -       return last;
> -}
> -
>  /* type and compressor must be null-terminated */
>  static struct zswap_pool *zswap_pool_find_get(char *type, char *compressor)
>  {
> @@ -816,14 +777,10 @@ void zswap_folio_swapin(struct folio *folio)
>
>  void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
>  {
> -       struct zswap_pool *pool;
> -
> -       /* lock out zswap pools list modification */
> +       /* lock out zswap shrinker walking memcg tree */
>         spin_lock(&zswap_pools_lock);
> -       list_for_each_entry(pool, &zswap_pools, list) {
> -               if (pool->next_shrink == memcg)
> -                       pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> -       }
> +       if (zswap.next_shrink == memcg)
> +               zswap.next_shrink = mem_cgroup_iter(NULL, zswap.next_shrink, NULL);
>         spin_unlock(&zswap_pools_lock);
>  }
>
> @@ -923,9 +880,9 @@ static void zswap_entry_free(struct zswap_entry *entry)
>         if (!entry->length)
>                 atomic_dec(&zswap_same_filled_pages);
>         else {
> -               zswap_lru_del(&entry->pool->list_lru, entry);
> +               zswap_lru_del(&zswap.list_lru, entry);
>                 zpool_free(zswap_find_zpool(entry), entry->handle);
> -               atomic_dec(&entry->pool->nr_stored);
> +               atomic_dec(&zswap.nr_stored);
>                 zswap_pool_put(entry->pool);
>         }
>         if (entry->objcg) {
> @@ -1288,7 +1245,6 @@ static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
>  {
>         struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
>         unsigned long shrink_ret, nr_protected, lru_size;
> -       struct zswap_pool *pool = shrinker->private_data;
>         bool encountered_page_in_swapcache = false;
>
>         if (!zswap_shrinker_enabled ||
> @@ -1299,7 +1255,7 @@ static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
>
>         nr_protected =
>                 atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> -       lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> +       lru_size = list_lru_shrink_count(&zswap.list_lru, sc);
>
>         /*
>          * Abort if we are shrinking into the protected region.
> @@ -1316,7 +1272,7 @@ static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
>                 return SHRINK_STOP;
>         }
>
> -       shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> +       shrink_ret = list_lru_shrink_walk(&zswap.list_lru, sc, &shrink_memcg_cb,
>                                           &encountered_page_in_swapcache);
>
>         if (encountered_page_in_swapcache)
> @@ -1328,7 +1284,6 @@ static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
>  static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
>                                           struct shrink_control *sc)
>  {
> -       struct zswap_pool *pool = shrinker->private_data;
>         struct mem_cgroup *memcg = sc->memcg;
>         struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
>         unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
> @@ -1343,7 +1298,7 @@ static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
>  #else
>         /* use pool stats instead of memcg stats */
>         nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> -       nr_stored = atomic_read(&pool->nr_stored);
> +       nr_stored = atomic_read(&zswap.nr_stored);
>  #endif
>
>         if (!nr_stored)
> @@ -1351,7 +1306,7 @@ static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
>
>         nr_protected =
>                 atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> -       nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> +       nr_freeable = list_lru_shrink_count(&zswap.list_lru, sc);
>         /*
>          * Subtract the lru size by an estimate of the number of pages
>          * that should be protected.
> @@ -1367,23 +1322,24 @@ static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
>         return mult_frac(nr_freeable, nr_backing, nr_stored);
>  }
>
> -static void zswap_alloc_shrinker(struct zswap_pool *pool)
> +static struct shrinker *zswap_alloc_shrinker(void)
>  {
> -       pool->shrinker =
> +       struct shrinker *shrinker;
> +
> +       shrinker =
>                 shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> -       if (!pool->shrinker)
> -               return;
> +       if (!shrinker)
> +               return NULL;
>
> -       pool->shrinker->private_data = pool;
> -       pool->shrinker->scan_objects = zswap_shrinker_scan;
> -       pool->shrinker->count_objects = zswap_shrinker_count;
> -       pool->shrinker->batch = 0;
> -       pool->shrinker->seeks = DEFAULT_SEEKS;
> +       shrinker->scan_objects = zswap_shrinker_scan;
> +       shrinker->count_objects = zswap_shrinker_count;
> +       shrinker->batch = 0;
> +       shrinker->seeks = DEFAULT_SEEKS;
> +       return shrinker;
>  }
>
>  static int shrink_memcg(struct mem_cgroup *memcg)
>  {
> -       struct zswap_pool *pool;
>         int nid, shrunk = 0;
>
>         if (!mem_cgroup_zswap_writeback_enabled(memcg))
> @@ -1396,32 +1352,25 @@ static int shrink_memcg(struct mem_cgroup *memcg)
>         if (memcg && !mem_cgroup_online(memcg))
>                 return -ENOENT;
>
> -       pool = zswap_pool_current_get();
> -       if (!pool)
> -               return -EINVAL;
> -
>         for_each_node_state(nid, N_NORMAL_MEMORY) {
>                 unsigned long nr_to_walk = 1;
>
> -               shrunk += list_lru_walk_one(&pool->list_lru, nid, memcg,
> +               shrunk += list_lru_walk_one(&zswap.list_lru, nid, memcg,
>                                             &shrink_memcg_cb, NULL, &nr_to_walk);
>         }
> -       zswap_pool_put(pool);
>         return shrunk ? 0 : -EAGAIN;
>  }
>
>  static void shrink_worker(struct work_struct *w)
>  {
> -       struct zswap_pool *pool = container_of(w, typeof(*pool),
> -                                               shrink_work);
>         struct mem_cgroup *memcg;
>         int ret, failures = 0;
>
>         /* global reclaim will select cgroup in a round-robin fashion.
>          */
>         do {
>                 spin_lock(&zswap_pools_lock);
> -               pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> -               memcg = pool->next_shrink;
> +               zswap.next_shrink = mem_cgroup_iter(NULL, zswap.next_shrink, NULL);
> +               memcg = zswap.next_shrink;
>
>                 /*
>                  * We need to retry if we have gone through a full round trip, or if we
> @@ -1445,7 +1394,7 @@ static void shrink_worker(struct work_struct *w)
>                 if (!mem_cgroup_tryget_online(memcg)) {
>                         /* drop the reference from mem_cgroup_iter() */
>                         mem_cgroup_iter_break(NULL, memcg);
> -                       pool->next_shrink = NULL;
> +                       zswap.next_shrink = NULL;
>                         spin_unlock(&zswap_pools_lock);
>
>                         if (++failures == MAX_RECLAIM_RETRIES)
> @@ -1467,7 +1416,6 @@ static void shrink_worker(struct work_struct *w)
>  resched:
>                 cond_resched();
>         } while (!zswap_can_accept());
> -       zswap_pool_put(pool);
>  }
>
>  static int zswap_is_page_same_filled(void *ptr, unsigned long *value)
> @@ -1508,7 +1456,6 @@ bool zswap_store(struct folio *folio)
>         struct zswap_entry *entry, *dupentry;
>         struct obj_cgroup *objcg = NULL;
>         struct mem_cgroup *memcg = NULL;
> -       struct zswap_pool *shrink_pool;
>
>         VM_WARN_ON_ONCE(!folio_test_locked(folio));
>         VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
> @@ -1576,7 +1523,7 @@ bool zswap_store(struct folio *folio)
>
>         if (objcg) {
>                 memcg = get_mem_cgroup_from_objcg(objcg);
> -               if (memcg_list_lru_alloc(memcg, &entry->pool->list_lru, GFP_KERNEL)) {
> +               if (memcg_list_lru_alloc(memcg, &zswap.list_lru, GFP_KERNEL)) {
>                         mem_cgroup_put(memcg);
>                         goto put_pool;
>                 }
> @@ -1607,8 +1554,8 @@ bool zswap_store(struct folio *folio)
>         }
>         if (entry->length) {
>                 INIT_LIST_HEAD(&entry->lru);
> -               zswap_lru_add(&entry->pool->list_lru, entry);
> -               atomic_inc(&entry->pool->nr_stored);
> +               zswap_lru_add(&zswap.list_lru, entry);
> +               atomic_inc(&zswap.nr_stored);
>         }
>         spin_unlock(&tree->lock);
>
> @@ -1640,9 +1587,7 @@ bool zswap_store(struct folio *folio)
>         return false;
>
>  shrink:
> -       shrink_pool = zswap_pool_last_get();
> -       if (shrink_pool && !queue_work(shrink_wq, &shrink_pool->shrink_work))
> -               zswap_pool_put(shrink_pool);
> +       queue_work(shrink_wq, &zswap.shrink_work);
>         goto reject;
>  }
>
> @@ -1804,6 +1749,21 @@ static int zswap_setup(void)
>         if (ret)
>                 goto hp_fail;
>
> +       shrink_wq = alloc_workqueue("zswap-shrink",
> +                       WQ_UNBOUND|WQ_MEM_RECLAIM, 1);
> +       if (!shrink_wq)
> +               goto hp_fail;
> +
> +       zswap.shrinker = zswap_alloc_shrinker();
> +       if (!zswap.shrinker)
> +               goto shrinker_fail;
> +       if (list_lru_init_memcg(&zswap.list_lru, zswap.shrinker))
> +               goto lru_fail;
> +       shrinker_register(zswap.shrinker);
> +
> +       INIT_WORK(&zswap.shrink_work, shrink_worker);
> +       atomic_set(&zswap.nr_stored, 0);
> +
>         pool = __zswap_pool_create_fallback();
>         if (pool) {
>                 pr_info("loaded using pool %s/%s\n", pool->tfm_name,
> @@ -1815,19 +1775,16 @@ static int zswap_setup(void)
>                 zswap_enabled = false;
>         }
>
> -       shrink_wq = alloc_workqueue("zswap-shrink",
> -                       WQ_UNBOUND|WQ_MEM_RECLAIM, 1);
> -       if (!shrink_wq)
> -               goto fallback_fail;
> -
>         if (zswap_debugfs_init())
>                 pr_warn("debugfs initialization failed\n");
>         zswap_init_state = ZSWAP_INIT_SUCCEED;
>         return 0;
>
> -fallback_fail:
> -       if (pool)
> -               zswap_pool_destroy(pool);
> +lru_fail:
> +       list_lru_destroy(&zswap.list_lru);
> +       shrinker_free(zswap.shrinker);
> +shrinker_fail:
> +       destroy_workqueue(shrink_wq);
>  hp_fail:
>         kmem_cache_destroy(zswap_entry_cache);
>  cache_fail:
>
> --
> b4 0.10.1