From: Chengming Zhou
Date: Fri, 14 Jun 2024 20:07:25 +0800
Subject: Re: [PATCH v5 2/2] mm: remove code to handle same filled pages
To: Usama Arif, akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, shakeel.butt@linux.dev, david@redhat.com, ying.huang@intel.com, hughd@google.com, willy@infradead.org, yosryahmed@google.com, nphamcs@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@meta.com
Message-ID: <0e17c634-842b-40f6-b3bd-e4b98ed1dc8f@linux.dev>
In-Reply-To: <20240614100902.3469724-3-usamaarif642@gmail.com>
References: <20240614100902.3469724-1-usamaarif642@gmail.com> <20240614100902.3469724-3-usamaarif642@gmail.com>
On 2024/6/14 18:07, Usama Arif wrote:
> With an earlier commit to handle zero-filled pages in swap directly,
> and with only 1% of the same-filled pages being non-zero, zswap no
> longer needs to handle same-filled pages and can just work on compressed
> pages.
>
> Signed-off-by: Usama Arif

Looks good to me, thanks.

Reviewed-by: Chengming Zhou

> ---
>  mm/zswap.c | 86 +++++-------------------------------------------------
>  1 file changed, 8 insertions(+), 78 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index a546c01602aa..e25a6808c2ed 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -44,8 +44,6 @@
>  **********************************/
>  /* The number of compressed pages currently stored in zswap */
>  atomic_t zswap_stored_pages = ATOMIC_INIT(0);
> -/* The number of same-value filled pages currently stored in zswap */
> -static atomic_t zswap_same_filled_pages = ATOMIC_INIT(0);
>  
>  /*
>   * The statistics below are not protected from concurrent access for
> @@ -188,11 +186,9 @@ static struct shrinker *zswap_shrinker;
>   *
>   * swpentry - associated swap entry, the offset indexes into the red-black tree
>   * length - the length in bytes of the compressed page data. Needed during
> - *          decompression. For a same value filled page length is 0, and both
> - *          pool and lru are invalid and must be ignored.
> + *          decompression.
>   * pool - the zswap_pool the entry's data is in
>   * handle - zpool allocation handle that stores the compressed page data
> - * value - value of the same-value filled pages which have same content
>   * objcg - the obj_cgroup that the compressed memory is charged to
>   * lru - handle to the pool's lru used to evict pages.
>   */
> @@ -200,10 +196,7 @@ struct zswap_entry {
>  	swp_entry_t swpentry;
>  	unsigned int length;
>  	struct zswap_pool *pool;
> -	union {
> -		unsigned long handle;
> -		unsigned long value;
> -	};
> +	unsigned long handle;
>  	struct obj_cgroup *objcg;
>  	struct list_head lru;
>  };
> @@ -820,13 +813,9 @@ static struct zpool *zswap_find_zpool(struct zswap_entry *entry)
>   */
>  static void zswap_entry_free(struct zswap_entry *entry)
>  {
> -	if (!entry->length)
> -		atomic_dec(&zswap_same_filled_pages);
> -	else {
> -		zswap_lru_del(&zswap_list_lru, entry);
> -		zpool_free(zswap_find_zpool(entry), entry->handle);
> -		zswap_pool_put(entry->pool);
> -	}
> +	zswap_lru_del(&zswap_list_lru, entry);
> +	zpool_free(zswap_find_zpool(entry), entry->handle);
> +	zswap_pool_put(entry->pool);
>  	if (entry->objcg) {
>  		obj_cgroup_uncharge_zswap(entry->objcg, entry->length);
>  		obj_cgroup_put(entry->objcg);
> @@ -1268,11 +1257,6 @@ static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
>  	 * This ensures that the better zswap compresses memory, the fewer
>  	 * pages we will evict to swap (as it will otherwise incur IO for
>  	 * relatively small memory saving).
> -	 *
> -	 * The memory saving factor calculated here takes same-filled pages into
> -	 * account, but those are not freeable since they almost occupy no
> -	 * space. Hence, we may scale nr_freeable down a little bit more than we
> -	 * should if we have a lot of same-filled pages.
>  	 */
>  	return mult_frac(nr_freeable, nr_backing, nr_stored);
>  }
> @@ -1376,42 +1360,6 @@ static void shrink_worker(struct work_struct *w)
>  	} while (zswap_total_pages() > thr);
>  }
>  
> -/*********************************
> -* same-filled functions
> -**********************************/
> -static bool zswap_is_folio_same_filled(struct folio *folio, unsigned long *value)
> -{
> -	unsigned long *data;
> -	unsigned long val;
> -	unsigned int pos, last_pos = PAGE_SIZE / sizeof(*data) - 1;
> -	bool ret = false;
> -
> -	data = kmap_local_folio(folio, 0);
> -	val = data[0];
> -
> -	if (val != data[last_pos])
> -		goto out;
> -
> -	for (pos = 1; pos < last_pos; pos++) {
> -		if (val != data[pos])
> -			goto out;
> -	}
> -
> -	*value = val;
> -	ret = true;
> -out:
> -	kunmap_local(data);
> -	return ret;
> -}
> -
> -static void zswap_fill_folio(struct folio *folio, unsigned long value)
> -{
> -	unsigned long *data = kmap_local_folio(folio, 0);
> -
> -	memset_l(data, value, PAGE_SIZE / sizeof(unsigned long));
> -	kunmap_local(data);
> -}
> -
>  /*********************************
>  * main API
>  **********************************/
> @@ -1423,7 +1371,6 @@ bool zswap_store(struct folio *folio)
>  	struct zswap_entry *entry, *old;
>  	struct obj_cgroup *objcg = NULL;
>  	struct mem_cgroup *memcg = NULL;
> -	unsigned long value;
>  
>  	VM_WARN_ON_ONCE(!folio_test_locked(folio));
>  	VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
> @@ -1456,13 +1403,6 @@ bool zswap_store(struct folio *folio)
>  		goto reject;
>  	}
>  
> -	if (zswap_is_folio_same_filled(folio, &value)) {
> -		entry->length = 0;
> -		entry->value = value;
> -		atomic_inc(&zswap_same_filled_pages);
> -		goto store_entry;
> -	}
> -
>  	/* if entry is successfully added, it keeps the reference */
>  	entry->pool = zswap_pool_current_get();
>  	if (!entry->pool)
> @@ -1480,7 +1420,6 @@ bool zswap_store(struct folio *folio)
>  	if (!zswap_compress(folio, entry))
>  		goto put_pool;
>  
> -store_entry:
>  	entry->swpentry = swp;
>  	entry->objcg = objcg;
>  
> @@ -1528,13 +1467,9 @@ bool zswap_store(struct folio *folio)
>  	return true;
>  
>  store_failed:
> -	if (!entry->length)
> -		atomic_dec(&zswap_same_filled_pages);
> -	else {
> -		zpool_free(zswap_find_zpool(entry), entry->handle);
> +	zpool_free(zswap_find_zpool(entry), entry->handle);
>  put_pool:
> -	zswap_pool_put(entry->pool);
> -	}
> +	zswap_pool_put(entry->pool);
>  freepage:
>  	zswap_entry_cache_free(entry);
>  reject:
> @@ -1597,10 +1532,7 @@ bool zswap_load(struct folio *folio)
>  	if (!entry)
>  		return false;
>  
> -	if (entry->length)
> -		zswap_decompress(entry, folio);
> -	else
> -		zswap_fill_folio(folio, entry->value);
> +	zswap_decompress(entry, folio);
>  
>  	count_vm_event(ZSWPIN);
>  	if (entry->objcg)
> @@ -1703,8 +1635,6 @@ static int zswap_debugfs_init(void)
>  				    zswap_debugfs_root, NULL, &total_size_fops);
>  	debugfs_create_atomic_t("stored_pages", 0444,
>  				zswap_debugfs_root, &zswap_stored_pages);
> -	debugfs_create_atomic_t("same_filled_pages", 0444,
> -				zswap_debugfs_root, &zswap_same_filled_pages);
>  
>  	return 0;
>  }