From: Matthew Wilcox <willy@infradead.org>
To: David Hildenbrand <david@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>, linux-mm@kvack.org
Subject: Re: [PATCH v2 01/18] mm: Make folios_put() the basis of release_pages()
Date: Mon, 19 Feb 2024 15:03:17 +0000
Message-ID: <ZdNttZb-PvrlCMka@casper.infradead.org>
In-Reply-To: <15797535-3107-4724-acec-7c006da490f3@redhat.com>
On Mon, Feb 19, 2024 at 10:43:06AM +0100, David Hildenbrand wrote:
> On 17.02.24 03:25, Matthew Wilcox (Oracle) wrote:
> > By making release_pages() call folios_put(), we can get rid of the calls
> > to compound_head() for the callers that already know they have folios.
> > We can also get rid of the lock_batch tracking as we know the size
> > of the batch is limited by folio_batch. This does reduce the maximum
> > number of pages for which the lruvec lock is held, from SWAP_CLUSTER_MAX
> > (32) to PAGEVEC_SIZE (15). I do not expect this to make a significant
> > difference, but if it does, we can increase PAGEVEC_SIZE to 31.
> >
>
> I'm afraid that won't apply to current mm-unstable anymore, where we can now
> put multiple references to a single folio (as part of unmapping
> large PTE-mapped folios).
Argh. I'm not a huge fan of that approach, but let's live with it for
now. How about this as a replacement patch? It compiles ...
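(For reference, the calling convention the new folios_put() expects.
A minimal sketch, assuming a hypothetical put_folio_array() helper
that is not part of this patch:

static void put_folio_array(struct folio **folios, unsigned int nr)
{
	struct folio_batch fbatch;
	unsigned int i;

	folio_batch_init(&fbatch);
	for (i = 0; i < nr; i++) {
		/* folio_batch_add() returns the number of slots left */
		if (folio_batch_add(&fbatch, folios[i]) > 0)
			continue;
		/* Batch full: drop the refs; the batch comes back empty */
		folios_put(&fbatch);
	}
	if (folio_batch_count(&fbatch))
		folios_put(&fbatch);
}

The mlock.c and swap.c hunks below follow the same pattern.)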
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1743bdeab506..42de41e469a1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -36,6 +36,7 @@ struct anon_vma;
struct anon_vma_chain;
struct user_struct;
struct pt_regs;
+struct folio_batch;
extern int sysctl_page_lock_unfairness;
@@ -1519,6 +1520,8 @@ static inline void folio_put_refs(struct folio *folio, int refs)
__folio_put(folio);
}
+void folios_put_refs(struct folio_batch *folios, unsigned int *refs);
+
/*
* union release_pages_arg - an array of pages or folios
*
@@ -1541,18 +1544,19 @@ void release_pages(release_pages_arg, int nr);
/**
* folios_put - Decrement the reference count on an array of folios.
* @folios: The folios.
- * @nr: How many folios there are.
*
- * Like folio_put(), but for an array of folios. This is more efficient
- * than writing the loop yourself as it will optimise the locks which
- * need to be taken if the folios are freed.
+ * Like folio_put(), but for a batch of folios. This is more efficient
+ * than writing the loop yourself as it will optimise the locks which need
+ * to be taken if the folios are freed. The folios batch is returned
+ * empty and ready to be reused for another batch; there is no need to
+ * reinitialise it.
*
* Context: May be called in process or interrupt context, but not in NMI
* context. May be called while holding a spinlock.
*/
-static inline void folios_put(struct folio **folios, unsigned int nr)
+static inline void folios_put(struct folio_batch *folios)
{
- release_pages(folios, nr);
+ folios_put_refs(folios, NULL);
}
static inline void put_page(struct page *page)
diff --git a/mm/mlock.c b/mm/mlock.c
index 086546ac5766..1ed2f2ab37cd 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -206,8 +206,7 @@ static void mlock_folio_batch(struct folio_batch *fbatch)
if (lruvec)
unlock_page_lruvec_irq(lruvec);
- folios_put(fbatch->folios, folio_batch_count(fbatch));
- folio_batch_reinit(fbatch);
+ folios_put(fbatch);
}
void mlock_drain_local(void)
diff --git a/mm/swap.c b/mm/swap.c
index e5380d732c0d..6b736fceccfa 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -89,7 +89,7 @@ static void __page_cache_release(struct folio *folio)
__folio_clear_lru_flags(folio);
unlock_page_lruvec_irqrestore(lruvec, flags);
}
- /* See comment on folio_test_mlocked in release_pages() */
+ /* See comment on folio_test_mlocked in folios_put() */
if (unlikely(folio_test_mlocked(folio))) {
long nr_pages = folio_nr_pages(folio);
@@ -175,7 +175,7 @@ static void lru_add_fn(struct lruvec *lruvec, struct folio *folio)
* while the LRU lock is held.
*
* (That is not true of __page_cache_release(), and not necessarily
- * true of release_pages(): but those only clear the mlocked flag after
+ * true of folios_put(): but those only clear the mlocked flag after
* folio_put_testzero() has excluded any other users of the folio.)
*/
if (folio_evictable(folio)) {
@@ -221,8 +221,7 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
if (lruvec)
unlock_page_lruvec_irqrestore(lruvec, flags);
- folios_put(fbatch->folios, folio_batch_count(fbatch));
- folio_batch_reinit(fbatch);
+ folios_put(fbatch);
}
static void folio_batch_add_and_move(struct folio_batch *fbatch,
@@ -946,47 +945,30 @@ void lru_cache_disable(void)
}
/**
- * release_pages - batched put_page()
- * @arg: array of pages to release
- * @nr: number of pages
+ * folios_put_refs - Reduce the reference count on a batch of folios.
+ * @folios: The folios.
+ * @refs: The number of refs to subtract from each folio.
*
- * Decrement the reference count on all the pages in @arg. If it
- * fell to zero, remove the page from the LRU and free it.
+ * Like folio_put(), but for a batch of folios. This is more efficient
+ * than writing the loop yourself as it will optimise the locks which need
+ * to be taken if the folios are freed. The folios batch is returned
+ * empty and ready to be reused for another batch; there is no need
+ * to reinitialise it. If @refs is NULL, we subtract one from each
+ * folio refcount.
*
- * Note that the argument can be an array of pages, encoded pages,
- * or folio pointers. We ignore any encoded bits, and turn any of
- * them into just a folio that gets free'd.
+ * Context: May be called in process or interrupt context, but not in NMI
+ * context. May be called while holding a spinlock.
*/
-void release_pages(release_pages_arg arg, int nr)
+void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
{
int i;
- struct encoded_page **encoded = arg.encoded_pages;
LIST_HEAD(pages_to_free);
struct lruvec *lruvec = NULL;
unsigned long flags = 0;
- unsigned int lock_batch;
- for (i = 0; i < nr; i++) {
- unsigned int nr_refs = 1;
- struct folio *folio;
-
- /* Turn any of the argument types into a folio */
- folio = page_folio(encoded_page_ptr(encoded[i]));
-
- /* Is our next entry actually "nr_pages" -> "nr_refs" ? */
- if (unlikely(encoded_page_flags(encoded[i]) &
- ENCODED_PAGE_BIT_NR_PAGES_NEXT))
- nr_refs = encoded_nr_pages(encoded[++i]);
-
- /*
- * Make sure the IRQ-safe lock-holding time does not get
- * excessive with a continuous string of pages from the
- * same lruvec. The lock is held only if lruvec != NULL.
- */
- if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
- unlock_page_lruvec_irqrestore(lruvec, flags);
- lruvec = NULL;
- }
+ for (i = 0; i < folios->nr; i++) {
+ struct folio *folio = folios->folios[i];
+ unsigned int nr_refs = refs ? refs[i] : 1;
if (is_huge_zero_page(&folio->page))
continue;
@@ -1016,13 +998,8 @@ void release_pages(release_pages_arg arg, int nr)
}
if (folio_test_lru(folio)) {
- struct lruvec *prev_lruvec = lruvec;
-
lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
&flags);
- if (prev_lruvec != lruvec)
- lock_batch = 0;
-
lruvec_del_folio(lruvec, folio);
__folio_clear_lru_flags(folio);
}
@@ -1046,6 +1023,47 @@ void release_pages(release_pages_arg arg, int nr)
mem_cgroup_uncharge_list(&pages_to_free);
free_unref_page_list(&pages_to_free);
+ folios->nr = 0;
+}
+EXPORT_SYMBOL(folios_put_refs);
+
+/**
+ * release_pages - batched put_page()
+ * @arg: array of pages to release
+ * @nr: number of pages
+ *
+ * Decrement the reference count on all the pages in @arg. If it
+ * fell to zero, remove the page from the LRU and free it.
+ *
+ * Note that the argument can be an array of pages, encoded pages,
+ * or folio pointers. We ignore any encoded bits, and turn any of
+ * them into just a folio that gets free'd.
+ */
+void release_pages(release_pages_arg arg, int nr)
+{
+ struct folio_batch fbatch;
+ int refs[PAGEVEC_SIZE];
+ struct encoded_page **encoded = arg.encoded_pages;
+ int i;
+
+ folio_batch_init(&fbatch);
+ for (i = 0; i < nr; i++) {
+ /* Turn any of the argument types into a folio */
+ struct folio *folio = page_folio(encoded_page_ptr(encoded[i]));
+
+ /* Is our next entry actually "nr_pages" -> "nr_refs" ? */
+ refs[fbatch.nr] = 1;
+ if (unlikely(encoded_page_flags(encoded[i]) &
+ ENCODED_PAGE_BIT_NR_PAGES_NEXT))
+ refs[fbatch.nr] = encoded_nr_pages(encoded[++i]);
+
+ if (folio_batch_add(&fbatch, folio) > 0)
+ continue;
+ folios_put_refs(&fbatch, refs);
+ }
+
+ if (fbatch.nr)
+ folios_put_refs(&fbatch, refs);
}
EXPORT_SYMBOL(release_pages);
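(For context on the refs[] handling above: in mm-unstable, a caller may
follow an encoded page pointer with an encoded count, meaning "drop that
many references on the folio just decoded". A minimal sketch of that
producer side, assuming the encode_page()/encode_nr_pages() helpers from
the mm-unstable mmu_gather batching work; illustrative only, not part of
this patch:

	struct encoded_page *enc[2];

	/* Entry 0: the page, flagged to say a count follows. */
	enc[0] = encode_page(page, ENCODED_PAGE_BIT_NR_PAGES_NEXT);
	/* Entry 1: e.g. 512 refs for a fully PTE-mapped PMD-sized folio. */
	enc[1] = encode_nr_pages(512);

release_pages() consumes both entries in one loop iteration, which is
why the loop above reads encoded_nr_pages(encoded[++i]).)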