From mboxrd@z Thu Jan  1 00:00:00 1970
From: Yosry Ahmed <yosryahmed@google.com>
Date: Fri, 26 Jul 2024 14:58:14 -0700
Subject: Re: [PATCH 1/2] zswap: implement a second chance algorithm for dynamic zswap shrinker
To: Nhat Pham
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, shakeelb@google.com, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, flintglass@gmail.com
In-Reply-To: <20240725232813.2260665-2-nphamcs@gmail.com>
References: <20240725232813.2260665-1-nphamcs@gmail.com> <20240725232813.2260665-2-nphamcs@gmail.com>
Content-Type: text/plain; charset="UTF-8"
On Thu, Jul 25, 2024 at 4:28 PM Nhat Pham wrote:
>
> The current zswap shrinker's heuristics to prevent overshrinking are
> brittle and inaccurate, specifically in the way we decay the protection
> size (i.e. making pages in the zswap LRU eligible for reclaim).

Thanks for working on this and experimenting with different heuristics.
I was not a huge fan of these, so I am glad we are trying to replace
them with something more intuitive.

> We currently decay protection aggressively in zswap_lru_add() calls.
> This leads to the following unfortunate effect: when a new batch of
> pages enters zswap, the protection size rapidly decays to below 25% of
> the zswap LRU size, which is way too low.
>
> We have observed this effect in production, when experimenting with the
> zswap shrinker: the rate of shrinking shoots up massively right after a
> new batch of zswap stores. This is somewhat the opposite of what we
> originally want - when new pages enter zswap, we want to protect both
> these new pages AND the pages that are already protected in the zswap
> LRU.
>
> Replace the existing heuristics with a second chance algorithm:
>
> 1. When a new zswap entry is stored in the zswap pool, its reference
>    bit is set.
> 2. When the zswap shrinker encounters a zswap entry with the reference
>    bit set, give it a second chance - only flip the reference bit and
>    rotate it in the LRU.
> 3. If the shrinker encounters the entry again, this time with its
>    reference bit unset, then it can reclaim the entry.

At first look, this is similar to the second chance page reclaim
algorithm. A fundamental difference here is that the reference bit is
only set once, when the entry is created, so it is different from the
conventional second chance page reclaim/replacement algorithm.

What this really does is slow down writeback by enforcing that we need
to iterate entries exactly twice before we write them back. This sounds
a little arbitrary and not very intuitive to me.

Taking a step back, what we really want is to write back zswap entries
in order, and to avoid writing back more entries than needed. I think
the key here is "when needed", which is defined by how much memory
pressure we have.

The shrinker framework should already be taking this into account.
Looking at do_shrink_slab(), in the case of zswap (seek = 2),
total_scan should boil down to:

total_scan = (zswap_shrinker_count() * 2 + nr_deferred) >> priority

, and this is bounded by zswap_shrinker_count() * 2.
Ignoring nr_deferred, we start by scanning 1/2048th of
zswap_shrinker_count() at DEF_PRIORITY, then we work our way up to
2 * zswap_shrinker_count() at zero priority (before OOMs). At
NODE_RECLAIM_PRIORITY, we start at 1/8th of zswap_shrinker_count().

Keep in mind that zswap_shrinker_count() does not return the number of
all zswap entries; it subtracts the protected part (or recent swapins)
and scales by the compression ratio. So this looks reasonable at first
sight. Perhaps we want to tune the seek to slow down writeback if we
think it's too much, but that doesn't explain the scenario you are
describing.

Now let's factor in nr_deferred, which looks to me like it could be the
culprit here. I am assuming the intention is that if we counted
freeable slab objects before but didn't get to free them, we should do
it the next time around. This feels like it assumes that the objects
will remain there unless reclaimed by the shrinker. That does not apply
to zswap, because the objects can be swapped in.

Also, in the beginning, before we encounter too many swapins, the
protection will be very low, so zswap_shrinker_count() will return a
relatively high value. Even if we don't scan and write back this
amount, we will keep carrying this value forward in subsequent reclaim
operations, even if the number of existing zswap entries has decreased
due to swapins.

Could this be the problem? The number of deferred objects to be scanned
just keeps getting carried forward as a high value, essentially
rendering the heuristics in zswap_shrinker_count() useless?

If we just need to slow down writeback by making sure we scan entries
twice, could something similar be achieved just by tuning the seek,
without needing any heuristics to begin with? I am just trying to
understand whether the main problem is that zswap does not fit well
into the shrinker framework as it is, and how we can improve this.
Just to be clear, I am in favor of changing those heuristics to
something more intuitive and simpler, but I really want to understand
what is going on. The approach taken by this patch is definitely
simpler, but it doesn't feel more intuitive to me (at least not yet).

> In this manner, the aging of the pages in the zswap LRUs is decoupled
> from zswap stores, and picks up the pace with increasing memory
> pressure (which is what we want).
>
> We will still maintain the count of swapins, which is consumed and
> subtracted from the lru size in zswap_shrinker_count(), to further
> penalize past overshrinking that led to disk swapins. The idea is
> that had we considered this many more pages in the LRU
> active/protected, they would not have been written back and we would
> not have had to swap them in.
>
> To test the new heuristic, I built the kernel under a cgroup with
> memory.max set to 2G, on a host with 36 cores:
>
> With the old shrinker:
>
> real: 263.89s
> user: 4318.11s
> sys: 673.29s
> swapins: 227300.5
>
> With the second chance algorithm:
>
> real: 244.85s
> user: 4327.22s
> sys: 664.39s
> swapins: 94663
>
> (average over 5 runs)
>
> We observe a 1.3% reduction in kernel CPU usage, and around a 7.2%
> reduction in real time. Note that the number of swapped-in pages
> dropped by 58%.
>
> Suggested-by: Johannes Weiner
> Signed-off-by: Nhat Pham
> ---
>  include/linux/zswap.h | 16 ++++-----
>  mm/zswap.c            | 84 +++++++++++++++++++------------------------
>  2 files changed, 44 insertions(+), 56 deletions(-)
>
> diff --git a/include/linux/zswap.h b/include/linux/zswap.h
> index 6cecb4a4f68b..b94b6ae262d5 100644
> --- a/include/linux/zswap.h
> +++ b/include/linux/zswap.h
> @@ -13,17 +13,15 @@ extern atomic_t zswap_stored_pages;
>
>  struct zswap_lruvec_state {
>         /*
> -        * Number of pages in zswap that should be protected from the shrinker.
> -        * This number is an estimate of the following counts:
> +        * Number of swapped in pages, i.e. not found in the zswap pool.
>          *
> -        * a) Recent page faults.
> -        * b) Recent insertion to the zswap LRU. This includes new zswap stores,
> -        * as well as recent zswap LRU rotations.
> -        *
> -        * These pages are likely to be warm, and might incur IO if they are
> -        * written to swap.
> +        * This is consumed and subtracted from the lru size in
> +        * zswap_shrinker_count() to penalize past overshrinking that led to
> +        * disk swapins. The idea is that had we considered this many more
> +        * pages in the LRU active/protected and not written them back, we
> +        * would not have had to swap them in.
>          */
> -       atomic_long_t nr_zswap_protected;
> +       atomic_long_t nr_swapins;
>  };
>
>  unsigned long zswap_total_pages(void);
> diff --git a/mm/zswap.c b/mm/zswap.c
> index adeaf9c97fde..a24ee015d7bc 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -203,6 +203,7 @@ struct zswap_entry {
>         };
>         struct obj_cgroup *objcg;
>         struct list_head lru;
> +       bool referenced;

If we take this approach, this needs to be placed in the hole after the
length, to avoid increasing the size of the zswap_entry.
> };
>
>  static struct xarray *zswap_trees[MAX_SWAPFILES];
> @@ -700,11 +701,10 @@ static inline int entry_to_nid(struct zswap_entry *entry)
>
>  static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
>  {
> -       atomic_long_t *nr_zswap_protected;
> -       unsigned long lru_size, old, new;
>         int nid = entry_to_nid(entry);
>         struct mem_cgroup *memcg;
> -       struct lruvec *lruvec;
> +
> +       entry->referenced = true;
>
>         /*
>          * Note that it is safe to use rcu_read_lock() here, even in the face of
> @@ -722,19 +722,6 @@ static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
>         memcg = mem_cgroup_from_entry(entry);
>         /* will always succeed */
>         list_lru_add(list_lru, &entry->lru, nid, memcg);
> -
> -       /* Update the protection area */
> -       lru_size = list_lru_count_one(list_lru, nid, memcg);
> -       lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
> -       nr_zswap_protected = &lruvec->zswap_lruvec_state.nr_zswap_protected;
> -       old = atomic_long_inc_return(nr_zswap_protected);
> -       /*
> -        * Decay to avoid overflow and adapt to changing workloads.
> -        * This is based on LRU reclaim cost decaying heuristics.
> -        */
> -       do {
> -               new = old > lru_size / 4 ?
>                       old / 2 : old;
> -       } while (!atomic_long_try_cmpxchg(nr_zswap_protected, &old, new));
>         rcu_read_unlock();
>  }
>
> @@ -752,7 +739,7 @@ static void zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
>
>  void zswap_lruvec_state_init(struct lruvec *lruvec)
>  {
> -       atomic_long_set(&lruvec->zswap_lruvec_state.nr_zswap_protected, 0);
> +       atomic_long_set(&lruvec->zswap_lruvec_state.nr_swapins, 0);
>  }
>
>  void zswap_folio_swapin(struct folio *folio)
> @@ -761,7 +748,7 @@ void zswap_folio_swapin(struct folio *folio)
>
>         if (folio) {
>                 lruvec = folio_lruvec(folio);
> -               atomic_long_inc(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> +               atomic_long_inc(&lruvec->zswap_lruvec_state.nr_swapins);
>         }
>  }
>
> @@ -1091,6 +1078,16 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
>         enum lru_status ret = LRU_REMOVED_RETRY;
>         int writeback_result;
>
> +       /*
> +        * Second chance algorithm: if the entry has its reference bit set,
> +        * give it a second chance. Only clear the reference bit and rotate
> +        * it in the zswap LRU list.
> +        */
> +       if (entry->referenced) {
> +               entry->referenced = false;
> +               return LRU_ROTATE;
> +       }
> +
>         /*
>          * As soon as we drop the LRU lock, the entry can be freed by
>          * a concurrent invalidation.
This means the following:
> @@ -1157,8 +1154,7 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
>  static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
>                                          struct shrink_control *sc)
>  {
> -       struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> -       unsigned long shrink_ret, nr_protected, lru_size;
> +       unsigned long shrink_ret;
>         bool encountered_page_in_swapcache = false;
>
>         if (!zswap_shrinker_enabled ||
> @@ -1167,25 +1163,6 @@ static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
>                 return SHRINK_STOP;
>         }
>
> -       nr_protected =
> -               atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> -       lru_size = list_lru_shrink_count(&zswap_list_lru, sc);
> -
> -       /*
> -        * Abort if we are shrinking into the protected region.
> -        *
> -        * This short-circuiting is necessary because if we have too many
> -        * concurrent reclaimers getting the freeable zswap object counts at
> -        * the same time (before any of them made reasonable progress), the
> -        * total number of reclaimed objects might be more than the number of
> -        * unprotected objects (i.e. the reclaimers will reclaim into the
> -        * protected area of the zswap LRU).
> -        */
> -       if (nr_protected >= lru_size - sc->nr_to_scan) {
> -               sc->nr_scanned = 0;
> -               return SHRINK_STOP;
> -       }
> -
>         shrink_ret = list_lru_shrink_walk(&zswap_list_lru, sc, &shrink_memcg_cb,
>                                           &encountered_page_in_swapcache);
>
> @@ -1200,7 +1177,8 @@ static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
>  {
>         struct mem_cgroup *memcg = sc->memcg;
>         struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> -       unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
> +       atomic_long_t *nr_swapins = &lruvec->zswap_lruvec_state.nr_swapins;
> +       unsigned long nr_backing, nr_stored, lru_size, nr_swapins_cur, nr_remain;
>
>         if (!zswap_shrinker_enabled || !mem_cgroup_zswap_writeback_enabled(memcg))
>                 return 0;
> @@ -1233,14 +1211,26 @@ static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
>         if (!nr_stored)
>                 return 0;
>
> -       nr_protected =
> -               atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> -       nr_freeable = list_lru_shrink_count(&zswap_list_lru, sc);
> +       lru_size = list_lru_shrink_count(&zswap_list_lru, sc);
> +       if (!lru_size)
> +               return 0;
> +
>         /*
> -        * Subtract the lru size by an estimate of the number of pages
> -        * that should be protected.
> +        * Subtract the lru size by the number of pages that are recently
> +        * swapped in. The idea is that had we protected the zswap LRU by
> +        * this amount of pages, these swapins would not have happened.
>          */
> -       nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
> +       nr_swapins_cur = atomic_long_read(nr_swapins);
> +       do {
> +               if (lru_size >= nr_swapins_cur)
> +                       nr_remain = 0;
> +               else
> +                       nr_remain = nr_swapins_cur - lru_size;
> +       } while (!atomic_long_try_cmpxchg(nr_swapins, &nr_swapins_cur, nr_remain));
> +
> +       lru_size -= nr_swapins_cur - nr_remain;
> +       if (!lru_size)
> +               return 0;
>
>         /*
>          * Scale the number of freeable pages by the memory saving factor.
> @@ -1253,7 +1243,7 @@ static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
>          * space. Hence, we may scale nr_freeable down a little bit more than we
>          * should if we have a lot of same-filled pages.
>          */
> -       return mult_frac(nr_freeable, nr_backing, nr_stored);
> +       return mult_frac(lru_size, nr_backing, nr_stored);
>  }
>
>  static struct shrinker *zswap_alloc_shrinker(void)
> --
> 2.43.0
>