From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Nhat Pham <nphamcs@gmail.com>,
Yosry Ahmed <yosryahmed@google.com>,
Chengming Zhou <zhouchengming@bytedance.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 19/20] mm: zswap: function ordering: writeback
Date: Mon, 29 Jan 2024 20:36:55 -0500
Message-ID: <20240130014208.565554-20-hannes@cmpxchg.org>
In-Reply-To: <20240130014208.565554-1-hannes@cmpxchg.org>

Shrinking needs writeback. Naturally, move the writeback code above
the shrinking code. Delete the forward declaration.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
mm/zswap.c | 183 ++++++++++++++++++++++++++---------------------------
1 file changed, 90 insertions(+), 93 deletions(-)
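
A note for the archive, not part of the commit: the reason ordering
matters here is plain C visibility, since a static function must be
defined (or at least declared) before its callers. Below is a minimal,
self-contained toy sketch (not kernel code; names are illustrative) of
the same principle this patch applies:

	#include <stdio.h>

	/* Defined first, so the caller below needs no forward declaration. */
	static int writeback(int entry)
	{
		printf("writing back entry %d\n", entry);
		return 0;
	}

	/* Shrinking needs writeback: writeback() is already visible here. */
	static int shrink(int entry)
	{
		return writeback(entry);
	}

	int main(void)
	{
		return shrink(42);
	}
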
diff --git a/mm/zswap.c b/mm/zswap.c
index acd7dcd1e0f2..0cb3437d47eb 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -276,9 +276,6 @@ static inline struct zswap_tree *swap_zswap_tree(swp_entry_t swp)
pr_debug("%s pool %s/%s\n", msg, (p)->tfm_name, \
		 zpool_get_type((p)->zpools[0]))

-static int zswap_writeback_entry(struct zswap_entry *entry,
- swp_entry_t swpentry);
-
static bool zswap_is_full(void)
{
return totalram_pages() * zswap_max_pool_percent / 100 <
@@ -1163,6 +1160,96 @@ static void zswap_decompress(struct zswap_entry *entry, struct page *page)
zpool_unmap_handle(zpool, entry->handle);
 }

+/*********************************
+* writeback code
+**********************************/
+/*
+ * Attempts to free an entry by adding a folio to the swap cache,
+ * decompressing the entry data into the folio, and issuing a
+ * bio write to write the folio back to the swap device.
+ *
+ * This can be thought of as a "resumed writeback" of the folio
+ * to the swap device. We are basically resuming the same swap
+ * writeback path that was intercepted with the zswap_store()
+ * in the first place. After the folio has been decompressed into
+ * the swap cache, the compressed version stored by zswap can be
+ * freed.
+ */
+static int zswap_writeback_entry(struct zswap_entry *entry,
+ swp_entry_t swpentry)
+{
+ struct zswap_tree *tree;
+ struct folio *folio;
+ struct mempolicy *mpol;
+ bool folio_was_allocated;
+ struct writeback_control wbc = {
+ .sync_mode = WB_SYNC_NONE,
+ };
+
+ /* try to allocate swap cache folio */
+ mpol = get_task_policy(current);
+ folio = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
+ NO_INTERLEAVE_INDEX, &folio_was_allocated, true);
+ if (!folio)
+ return -ENOMEM;
+
+ /*
+ * Found an existing folio, we raced with swapin or concurrent
+ * shrinker. We generally writeback cold folios from zswap, and
+ * swapin means the folio just became hot, so skip this folio.
+ * For unlikely concurrent shrinker case, it will be unlinked
+ * and freed when invalidated by the concurrent shrinker anyway.
+ */
+ if (!folio_was_allocated) {
+ folio_put(folio);
+ return -EEXIST;
+ }
+
+ /*
+ * folio is locked, and the swapcache is now secured against
+ * concurrent swapping to and from the slot. Verify that the
+ * swap entry hasn't been invalidated and recycled behind our
+ * backs (our zswap_entry reference doesn't prevent that), to
+ * avoid overwriting a new swap folio with old compressed data.
+ */
+ tree = swap_zswap_tree(swpentry);
+ spin_lock(&tree->lock);
+ if (zswap_rb_search(&tree->rbroot, swp_offset(swpentry)) != entry) {
+ spin_unlock(&tree->lock);
+ delete_from_swap_cache(folio);
+ folio_unlock(folio);
+ folio_put(folio);
+ return -ENOMEM;
+ }
+
+ /* Safe to deref entry after the entry is verified above. */
+ zswap_entry_get(entry);
+ spin_unlock(&tree->lock);
+
+ zswap_decompress(entry, &folio->page);
+
+ count_vm_event(ZSWPWB);
+ if (entry->objcg)
+ count_objcg_event(entry->objcg, ZSWPWB);
+
+ spin_lock(&tree->lock);
+ zswap_invalidate_entry(tree, entry);
+ zswap_entry_put(entry);
+ spin_unlock(&tree->lock);
+
+ /* folio is up to date */
+ folio_mark_uptodate(folio);
+
+ /* move it to the tail of the inactive list after end_writeback */
+ folio_set_reclaim(folio);
+
+ /* start writeback */
+ __swap_writepage(folio, &wbc);
+ folio_put(folio);
+
+ return 0;
+}
+
/*********************************
* shrinker functions
**********************************/
@@ -1419,96 +1506,6 @@ static void shrink_worker(struct work_struct *w)
zswap_pool_put(pool);
 }

-/*********************************
-* writeback code
-**********************************/
-/*
- * Attempts to free an entry by adding a folio to the swap cache,
- * decompressing the entry data into the folio, and issuing a
- * bio write to write the folio back to the swap device.
- *
- * This can be thought of as a "resumed writeback" of the folio
- * to the swap device. We are basically resuming the same swap
- * writeback path that was intercepted with the zswap_store()
- * in the first place. After the folio has been decompressed into
- * the swap cache, the compressed version stored by zswap can be
- * freed.
- */
-static int zswap_writeback_entry(struct zswap_entry *entry,
- swp_entry_t swpentry)
-{
- struct zswap_tree *tree;
- struct folio *folio;
- struct mempolicy *mpol;
- bool folio_was_allocated;
- struct writeback_control wbc = {
- .sync_mode = WB_SYNC_NONE,
- };
-
- /* try to allocate swap cache folio */
- mpol = get_task_policy(current);
- folio = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
- NO_INTERLEAVE_INDEX, &folio_was_allocated, true);
- if (!folio)
- return -ENOMEM;
-
- /*
- * Found an existing folio, we raced with swapin or concurrent
- * shrinker. We generally writeback cold folios from zswap, and
- * swapin means the folio just became hot, so skip this folio.
- * For unlikely concurrent shrinker case, it will be unlinked
- * and freed when invalidated by the concurrent shrinker anyway.
- */
- if (!folio_was_allocated) {
- folio_put(folio);
- return -EEXIST;
- }
-
- /*
- * folio is locked, and the swapcache is now secured against
- * concurrent swapping to and from the slot. Verify that the
- * swap entry hasn't been invalidated and recycled behind our
- * backs (our zswap_entry reference doesn't prevent that), to
- * avoid overwriting a new swap folio with old compressed data.
- */
- tree = swap_zswap_tree(swpentry);
- spin_lock(&tree->lock);
- if (zswap_rb_search(&tree->rbroot, swp_offset(swpentry)) != entry) {
- spin_unlock(&tree->lock);
- delete_from_swap_cache(folio);
- folio_unlock(folio);
- folio_put(folio);
- return -ENOMEM;
- }
-
- /* Safe to deref entry after the entry is verified above. */
- zswap_entry_get(entry);
- spin_unlock(&tree->lock);
-
- zswap_decompress(entry, &folio->page);
-
- count_vm_event(ZSWPWB);
- if (entry->objcg)
- count_objcg_event(entry->objcg, ZSWPWB);
-
- spin_lock(&tree->lock);
- zswap_invalidate_entry(tree, entry);
- zswap_entry_put(entry);
- spin_unlock(&tree->lock);
-
- /* folio is up to date */
- folio_mark_uptodate(folio);
-
- /* move it to the tail of the inactive list after end_writeback */
- folio_set_reclaim(folio);
-
- /* start writeback */
- __swap_writepage(folio, &wbc);
- folio_put(folio);
-
- return 0;
-}
-
static int zswap_is_page_same_filled(void *ptr, unsigned long *value)
{
unsigned long *page;
--
2.43.0