From: Shakeel Butt <shakeel.butt@linux.dev>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Omar Sandoval <osandov@osandov.com>, Chris Mason <clm@fb.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Meta kernel team <kernel-team@meta.com>,
linux-fsdevel@vger.kernel.org
Subject: [PATCH 2/2] mm: optimize invalidation of shadow entries
Date: Wed, 11 Sep 2024 10:38:01 -0700 [thread overview]
Message-ID: <20240911173801.4025422-3-shakeel.butt@linux.dev> (raw)
In-Reply-To: <20240911173801.4025422-1-shakeel.butt@linux.dev>
The kernel invalidates the page cache in batches of PAGEVEC_SIZE. For
each batch, it traverses the page cache tree and collects the entries
(folio and shadow entries) in the struct folio_batch. For the shadow
entries present in the folio_batch, it has to traverse the page cache
tree once per individual entry to remove them. This patch optimizes
that by removing them in a single tree traversal.
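In essence, the change replaces one xas_load()/xas_store() descent per
shadow entry with a single range walk of the xarray. A minimal sketch of
the new pattern (simplified from the diff below; the shmem/DAX checks,
inode lock and LRU handling are omitted, and the function name here is
purely illustrative):

#include <linux/xarray.h>

/*
 * Illustrative sketch only, not the patch itself: clear every value
 * (shadow) entry in [start, max] with one xarray traversal instead of
 * re-descending the tree once per index.
 */
static void clear_shadow_range(struct xarray *xa,
			       unsigned long start, unsigned long max)
{
	XA_STATE(xas, xa, start);
	void *entry;

	xas_lock_irq(&xas);
	xas_for_each(&xas, entry, max) {
		if (xa_is_value(entry))		/* skip real folios */
			xas_store(&xas, NULL);	/* drop the shadow entry */
	}
	xas_unlock_irq(&xas);
}

The real code additionally registers workingset_update_node via
xas_set_update() so the node accounting of shadow entries stays correct;
that detail is left out above for brevity.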
To evaluate the changes, we created a 200GiB file on a fuse fs inside a
memcg. We created the shadow entries by triggering reclaim through
memory.reclaim in that specific memcg and measured a simple
fadvise(DONTNEED) operation on the file.
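For reference, a rough sketch of the setup steps (the mount point, file
name and cgroup path below are illustrative, not the exact ones used):

  # 1) create the file on the fuse mount and populate the page cache
  dd if=/dev/zero of=/mnt/fuse/file bs=1M count=204800

  # 2) turn the cached folios into shadow entries via proactive reclaim
  #    of the test memcg (cgroup v2)
  echo 200G > /sys/fs/cgroup/test/memory.reclaim

The invalidation itself was then timed with: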
# time xfs_io -c 'fadvise -d 0 ${file_size}' file
               time (sec)
Without        5.12 +- 0.061
With-patch     4.19 +- 0.086 (18.16% decrease)
Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
---
mm/truncate.c | 46 ++++++++++++++++++----------------------------
1 file changed, 18 insertions(+), 28 deletions(-)
diff --git a/mm/truncate.c b/mm/truncate.c
index c7c19c816c2e..793c0d17d7b4 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -23,42 +23,28 @@
#include <linux/rmap.h>
#include "internal.h"
-/*
- * Regular page slots are stabilized by the page lock even without the tree
- * itself locked. These unlocked entries need verification under the tree
- * lock.
- */
-static inline void __clear_shadow_entry(struct address_space *mapping,
- pgoff_t index, void *entry)
-{
- XA_STATE(xas, &mapping->i_pages, index);
-
- xas_set_update(&xas, workingset_update_node);
- if (xas_load(&xas) != entry)
- return;
- xas_store(&xas, NULL);
-}
-
static void clear_shadow_entries(struct address_space *mapping,
- struct folio_batch *fbatch, pgoff_t *indices)
+ unsigned long start, unsigned long max)
{
- int i;
+ XA_STATE(xas, &mapping->i_pages, start);
+ struct folio *folio;
/* Handled by shmem itself, or for DAX we do nothing. */
if (shmem_mapping(mapping) || dax_mapping(mapping))
return;
- spin_lock(&mapping->host->i_lock);
- xa_lock_irq(&mapping->i_pages);
+ xas_set_update(&xas, workingset_update_node);
- for (i = 0; i < folio_batch_count(fbatch); i++) {
- struct folio *folio = fbatch->folios[i];
+ spin_lock(&mapping->host->i_lock);
+ xas_lock_irq(&xas);
+ /* Clear all shadow entries from start to max */
+ xas_for_each(&xas, folio, max) {
if (xa_is_value(folio))
- __clear_shadow_entry(mapping, indices[i], folio);
+ xas_store(&xas, NULL);
}
- xa_unlock_irq(&mapping->i_pages);
+ xas_unlock_irq(&xas);
if (mapping_shrinkable(mapping))
inode_add_lru(mapping->host);
spin_unlock(&mapping->host->i_lock);
@@ -478,7 +464,9 @@ unsigned long mapping_try_invalidate(struct address_space *mapping,
folio_batch_init(&fbatch);
while (find_lock_entries(mapping, &index, end, &fbatch, indices)) {
- for (i = 0; i < folio_batch_count(&fbatch); i++) {
+ int nr = folio_batch_count(&fbatch);
+
+ for (i = 0; i < nr; i++) {
struct folio *folio = fbatch.folios[i];
/* We rely upon deletion not changing folio->index */
@@ -505,7 +493,7 @@ unsigned long mapping_try_invalidate(struct address_space *mapping,
}
if (xa_has_values)
- clear_shadow_entries(mapping, &fbatch, indices);
+ clear_shadow_entries(mapping, indices[0], indices[nr-1]);
folio_batch_remove_exceptionals(&fbatch);
folio_batch_release(&fbatch);
@@ -609,7 +597,9 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
folio_batch_init(&fbatch);
index = start;
while (find_get_entries(mapping, &index, end, &fbatch, indices)) {
- for (i = 0; i < folio_batch_count(&fbatch); i++) {
+ int nr = folio_batch_count(&fbatch);
+
+ for (i = 0; i < nr; i++) {
struct folio *folio = fbatch.folios[i];
/* We rely upon deletion not changing folio->index */
@@ -655,7 +645,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
}
if (xa_has_values)
- clear_shadow_entries(mapping, &fbatch, indices);
+ clear_shadow_entries(mapping, indices[0], indices[nr-1]);
folio_batch_remove_exceptionals(&fbatch);
folio_batch_release(&fbatch);
--
2.43.5
Thread overview: 5+ messages
2024-09-11 17:37 [PATCH 0/2] mm: optimize shadow entries removal Shakeel Butt
2024-09-11 17:38 ` [PATCH 1/2] mm: optimize truncation of shadow entries Shakeel Butt
2024-09-11 21:08 ` Johannes Weiner
2024-09-11 22:20 ` Shakeel Butt
2024-09-11 17:38 ` Shakeel Butt [this message]