From mboxrd@z Thu Jan 1 00:00:00 1970
From: Takero Funaki <flintglass@gmail.com>
To: Johannes Weiner, Yosry Ahmed, Nhat Pham, Chengming Zhou, Jonathan Corbet, Andrew Morton, Domenico Cerasuolo
Cc: Takero Funaki, linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/6] mm: zswap: proactive shrinking before pool size limit is hit
Date: Sat, 6 Jul 2024 02:25:19 +0000
Message-ID: <20240706022523.1104080-4-flintglass@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240706022523.1104080-1-flintglass@gmail.com>
References: <20240706022523.1104080-1-flintglass@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
This patch implements proactive shrinking of the zswap pool before the max
pool size limit is reached. It also changes zswap to accept new pages while
the shrinker is running.

To prevent zswap from rejecting new pages and incurring latency when zswap
is full, this patch queues the global shrinker based on a pool usage
threshold between 100% and accept_thr_percent, instead of the max pool size.
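The two thresholds can be sketched as follows (a standalone Python model of
the arithmetic described above, not kernel code; the function names mirror
the patch's zswap_shrink_start_pages()/zswap_accept_thr_pages() helpers, and
the page counts are hypothetical):

```python
def shrink_start_pages(max_pages: int, accept_thr_percent: int) -> int:
    # Proactive shrinking is queued once usage crosses
    # accept_thr_percent + 1 (capped at 100%); the extra 1% keeps small
    # swapouts from requeueing the shrinker too frequently.
    return max_pages * min(100, accept_thr_percent + 1) // 100

def accept_thr_pages(max_pages: int, accept_thr_percent: int) -> int:
    # The shrinker stops once usage falls back to accept_thr_percent.
    return max_pages * accept_thr_percent // 100

max_pages = 1000  # hypothetical max pool size in pages
print(shrink_start_pages(max_pages, 90))  # 910: shrinking starts
print(accept_thr_pages(max_pages, 90))    # 900: shrinking stops
```

With the default accept_thr_percent=90, usage therefore oscillates in the
narrow 90%-91% band; setting the tunable to 100 makes the start threshold
equal the max pool size, disabling proactive shrinking.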
The pool size will be controlled between 90% and 91% of the max pool size
for the default accept_thr_percent=90. Since the global shrinker continues
to shrink down to accept_thr_percent, we no longer need the hysteresis
variable tracking the pool limit overage in zswap_store().

Before this patch, zswap rejected pages while the shrinker was running
without incrementing the zswap_pool_limit_hit counter. This may be one
reason why zswap wrote through new pages before writing back old pages.
With this patch, zswap accepts new pages while shrinking, and increments
the counter if and only if a page is rejected because the max pool size
was reached.

Now, reclaims smaller than the proactive shrinking amount finish instantly
and trigger background shrinking. Admins can check whether new pages are
being buffered by zswap by monitoring the pool_limit_hit counter.

The name of the sysfs tunable accept_thr_percent is unchanged, as it is
still the stop condition of the shrinker.

The respective documentation is updated to describe the new behavior.

Signed-off-by: Takero Funaki <flintglass@gmail.com>
---
 Documentation/admin-guide/mm/zswap.rst | 17 ++++----
 mm/zswap.c                             | 54 ++++++++++++++++----------
 2 files changed, 42 insertions(+), 29 deletions(-)

diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
index 3598dcd7dbe7..a1d8f167a27a 100644
--- a/Documentation/admin-guide/mm/zswap.rst
+++ b/Documentation/admin-guide/mm/zswap.rst
@@ -111,18 +111,17 @@ checked if it is a same-value filled page before compressing it. If true, the
 compressed length of the page is set to zero and the pattern or same-filled
 value is stored.
 
-To prevent zswap from shrinking pool when zswap is full and there's a high
-pressure on swap (this will result in flipping pages in and out zswap pool
-without any real benefit but with a performance drop for the system), a
-special parameter has been introduced to implement a sort of hysteresis to
-refuse taking pages into zswap pool until it has sufficient space if the limit
-has been hit. To set the threshold at which zswap would start accepting pages
-again after it became full, use the sysfs ``accept_threshold_percent``
-attribute, e. g.::
+To prevent zswap from rejecting new pages and incurring latency when zswap is
+full, zswap initiates a worker called global shrinker that proactively evicts
+some pages from the pool to swap devices while the pool is reaching the limit.
+The global shrinker continues to evict pages until there is sufficient space to
+accept new pages. To control how many pages should remain in the pool, use the
+sysfs ``accept_threshold_percent`` attribute as a percentage of the max pool
+size, e. g.::
 
 	echo 80 > /sys/module/zswap/parameters/accept_threshold_percent
 
-Setting this parameter to 100 will disable the hysteresis.
+Setting this parameter to 100 will disable the proactive shrinking.
 
 Some users cannot tolerate the swapping that comes with zswap store failures
 and zswap writebacks.
 Swapping can be disabled entirely (without disabling

diff --git a/mm/zswap.c b/mm/zswap.c
index f092932e652b..24acbab44e7a 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -71,8 +71,6 @@ static u64 zswap_reject_kmemcache_fail;
 
 /* Shrinker work queue */
 static struct workqueue_struct *shrink_wq;
-/* Pool limit was hit, we need to calm down */
-static bool zswap_pool_reached_full;
 
 /*********************************
 * tunables
@@ -118,7 +116,10 @@ module_param_cb(zpool, &zswap_zpool_param_ops, &zswap_zpool_type, 0644);
 static unsigned int zswap_max_pool_percent = 20;
 module_param_named(max_pool_percent, zswap_max_pool_percent, uint, 0644);
 
-/* The threshold for accepting new pages after the max_pool_percent was hit */
+/*
+ * The percentage of pool size that the global shrinker keeps in memory.
+ * It does not protect old pages from the dynamic shrinker.
+ */
 static unsigned int zswap_accept_thr_percent = 90; /* of max pool size */
 module_param_named(accept_threshold_percent, zswap_accept_thr_percent,
 		   uint, 0644);
@@ -488,6 +489,20 @@ static unsigned long zswap_accept_thr_pages(void)
 	return zswap_max_pages() * zswap_accept_thr_percent / 100;
 }
 
+/*
+ * Returns threshold to start proactive global shrinking.
+ */
+static inline unsigned long zswap_shrink_start_pages(void)
+{
+	/*
+	 * Shrinker will evict pages to the accept threshold.
+	 * We add 1% to not schedule shrinker too frequently
+	 * for small swapout.
+	 */
+	return zswap_max_pages() *
+		min(100, zswap_accept_thr_percent + 1) / 100;
+}
+
 unsigned long zswap_total_pages(void)
 {
 	struct zswap_pool *pool;
@@ -505,21 +520,6 @@ unsigned long zswap_total_pages(void)
 	return total;
 }
 
-static bool zswap_check_limits(void)
-{
-	unsigned long cur_pages = zswap_total_pages();
-	unsigned long max_pages = zswap_max_pages();
-
-	if (cur_pages >= max_pages) {
-		zswap_pool_limit_hit++;
-		zswap_pool_reached_full = true;
-	} else if (zswap_pool_reached_full &&
-		   cur_pages <= zswap_accept_thr_pages()) {
-		zswap_pool_reached_full = false;
-	}
-	return zswap_pool_reached_full;
-}
-
 /*********************************
 * param callbacks
 **********************************/
@@ -1489,6 +1489,8 @@ bool zswap_store(struct folio *folio)
 	struct obj_cgroup *objcg = NULL;
 	struct mem_cgroup *memcg = NULL;
 	unsigned long value;
+	unsigned long cur_pages;
+	bool need_global_shrink = false;
 
 	VM_WARN_ON_ONCE(!folio_test_locked(folio));
 	VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
@@ -1511,8 +1513,17 @@ bool zswap_store(struct folio *folio)
 		mem_cgroup_put(memcg);
 	}
 
-	if (zswap_check_limits())
+	cur_pages = zswap_total_pages();
+
+	if (cur_pages >= zswap_max_pages()) {
+		zswap_pool_limit_hit++;
+		need_global_shrink = true;
 		goto reject;
+	}
+
+	/* schedule shrink for incoming pages */
+	if (cur_pages >= zswap_shrink_start_pages())
+		queue_work(shrink_wq, &zswap_shrink_work);
 
 	/* allocate entry */
 	entry = zswap_entry_cache_alloc(GFP_KERNEL, folio_nid(folio));
@@ -1555,6 +1566,9 @@ bool zswap_store(struct folio *folio)
 		WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
 		zswap_reject_alloc_fail++;
+
+		/* reduce entry in array */
+		need_global_shrink = true;
 		goto store_failed;
 	}
@@ -1604,7 +1618,7 @@ bool zswap_store(struct folio *folio)
 	zswap_entry_cache_free(entry);
 reject:
 	obj_cgroup_put(objcg);
-	if (zswap_pool_reached_full)
+	if (need_global_shrink)
 		queue_work(shrink_wq, &zswap_shrink_work);
 check_old:
 	/*
-- 
2.43.0