* [PATCH v2 00/18] Rearrange batched folio freeing
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm
Other than the obvious "remove calls to compound_head" changes, the
fundamental belief here is that iterating a linked list is much slower
than iterating an array (5-15x slower in my testing). There's also
an associated belief that since we iterate the batch of folios three
times, we do better when the array is small (i.e. 15 entries) than with
a batch that is hundreds of entries long, which only gives the first
pages time to fall out of cache before we reach the end.
It is possible we should increase the size of folio_batch. Hopefully the
bots let us know if this introduces any performance regressions.
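For reference, the shape that callers end up with after this series is
the fill-and-flush pattern below. This is only an illustrative sketch
(the free_folio_array() caller is made up); folios_put() here is the
folio_batch-taking version introduced in patch 1.

	static void free_folio_array(struct folio **folios, unsigned int nr)
	{
		struct folio_batch fbatch;
		unsigned int i;

		folio_batch_init(&fbatch);
		for (i = 0; i < nr; i++) {
			/* folio_batch_add() returns the free slots left; 0 means full */
			if (folio_batch_add(&fbatch, folios[i]) > 0)
				continue;
			folios_put(&fbatch);	/* flush; the batch comes back empty */
		}
		if (folio_batch_count(&fbatch))
			folios_put(&fbatch);	/* flush the final partial batch */
	}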
v2:
- Redo the shrink_folio_list() patch to free the mapped folios at
the end instead of calling try_to_unmap_flush() more often.
- Improve a number of commit messages
- Use pcp_allowed_order() instead of PAGE_ALLOC_COSTLY_ORDER (Ryan)
- Fix move_folios_to_lru() comment (Ryan)
- Add patches 15-18
- Collect R-b tags from Ryan
Matthew Wilcox (Oracle) (18):
mm: Make folios_put() the basis of release_pages()
mm: Convert free_unref_page_list() to use folios
mm: Add free_unref_folios()
mm: Use folios_put() in __folio_batch_release()
memcg: Add mem_cgroup_uncharge_folios()
mm: Remove use of folio list from folios_put()
mm: Use free_unref_folios() in put_pages_list()
mm: use __page_cache_release() in folios_put()
mm: Handle large folios in free_unref_folios()
mm: Allow non-hugetlb large folios to be batch processed
mm: Free folios in a batch in shrink_folio_list()
mm: Free folios directly in move_folios_to_lru()
memcg: Remove mem_cgroup_uncharge_list()
mm: Remove free_unref_page_list()
mm: Remove lru_to_page()
mm: Convert free_pages_and_swap_cache() to use folios_put()
mm: Use a folio in __collapse_huge_page_copy_succeeded()
mm: Convert free_swap_cache() to take a folio
include/linux/memcontrol.h | 26 +++---
include/linux/mm.h | 20 +----
include/linux/swap.h | 8 +-
mm/internal.h | 4 +-
mm/khugepaged.c | 30 +++----
mm/memcontrol.c | 16 ++--
mm/memory.c | 2 +-
mm/mlock.c | 3 +-
mm/page_alloc.c | 76 ++++++++--------
mm/swap.c | 180 ++++++++++++++++++++-----------------
mm/swap_state.c | 25 ++++--
mm/vmscan.c | 52 +++++------
12 files changed, 218 insertions(+), 224 deletions(-)
--
2.43.0
* [PATCH v2 01/18] mm: Make folios_put() the basis of release_pages()
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm
By making release_pages() call folios_put(), we can get rid of the calls
to compound_head() for the callers that already know they have folios.
We can also get rid of the lock_batch tracking as we know the size
of the batch is limited by folio_batch. This does reduce the maximum
number of pages for which the lruvec lock is held, from SWAP_CLUSTER_MAX
(32) to PAGEVEC_SIZE (15). I do not expect this to make a significant
difference, but if it does, we can increase PAGEVEC_SIZE to 31.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
include/linux/mm.h | 19 ++---------
mm/mlock.c | 3 +-
mm/swap.c | 84 +++++++++++++++++++++++++++-------------------
3 files changed, 52 insertions(+), 54 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6095c86aa040..2a1ebda5fb79 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -36,6 +36,7 @@ struct anon_vma;
struct anon_vma_chain;
struct user_struct;
struct pt_regs;
+struct folio_batch;
extern int sysctl_page_lock_unfairness;
@@ -1532,23 +1533,7 @@ typedef union {
} release_pages_arg __attribute__ ((__transparent_union__));
void release_pages(release_pages_arg, int nr);
-
-/**
- * folios_put - Decrement the reference count on an array of folios.
- * @folios: The folios.
- * @nr: How many folios there are.
- *
- * Like folio_put(), but for an array of folios. This is more efficient
- * than writing the loop yourself as it will optimise the locks which
- * need to be taken if the folios are freed.
- *
- * Context: May be called in process or interrupt context, but not in NMI
- * context. May be called while holding a spinlock.
- */
-static inline void folios_put(struct folio **folios, unsigned int nr)
-{
- release_pages(folios, nr);
-}
+void folios_put(struct folio_batch *folios);
static inline void put_page(struct page *page)
{
diff --git a/mm/mlock.c b/mm/mlock.c
index 086546ac5766..1ed2f2ab37cd 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -206,8 +206,7 @@ static void mlock_folio_batch(struct folio_batch *fbatch)
if (lruvec)
unlock_page_lruvec_irq(lruvec);
- folios_put(fbatch->folios, folio_batch_count(fbatch));
- folio_batch_reinit(fbatch);
+ folios_put(fbatch);
}
void mlock_drain_local(void)
diff --git a/mm/swap.c b/mm/swap.c
index cd8f0150ba3a..7bdc63b56859 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -89,7 +89,7 @@ static void __page_cache_release(struct folio *folio)
__folio_clear_lru_flags(folio);
unlock_page_lruvec_irqrestore(lruvec, flags);
}
- /* See comment on folio_test_mlocked in release_pages() */
+ /* See comment on folio_test_mlocked in folios_put() */
if (unlikely(folio_test_mlocked(folio))) {
long nr_pages = folio_nr_pages(folio);
@@ -175,7 +175,7 @@ static void lru_add_fn(struct lruvec *lruvec, struct folio *folio)
* while the LRU lock is held.
*
* (That is not true of __page_cache_release(), and not necessarily
- * true of release_pages(): but those only clear the mlocked flag after
+ * true of folios_put(): but those only clear the mlocked flag after
* folio_put_testzero() has excluded any other users of the folio.)
*/
if (folio_evictable(folio)) {
@@ -221,8 +221,7 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
if (lruvec)
unlock_page_lruvec_irqrestore(lruvec, flags);
- folios_put(fbatch->folios, folio_batch_count(fbatch));
- folio_batch_reinit(fbatch);
+ folios_put(fbatch);
}
static void folio_batch_add_and_move(struct folio_batch *fbatch,
@@ -946,41 +945,27 @@ void lru_cache_disable(void)
}
/**
- * release_pages - batched put_page()
- * @arg: array of pages to release
- * @nr: number of pages
+ * folios_put - Decrement the reference count on a batch of folios.
+ * @folios: The folios.
*
- * Decrement the reference count on all the pages in @arg. If it
- * fell to zero, remove the page from the LRU and free it.
+ * Like folio_put(), but for a batch of folios. This is more efficient
+ * than writing the loop yourself as it will optimise the locks which need
+ * to be taken if the folios are freed. The folios batch is returned
+ * empty and ready to be reused for another batch; there is no need to
+ * reinitialise it.
*
- * Note that the argument can be an array of pages, encoded pages,
- * or folio pointers. We ignore any encoded bits, and turn any of
- * them into just a folio that gets free'd.
+ * Context: May be called in process or interrupt context, but not in NMI
+ * context. May be called while holding a spinlock.
*/
-void release_pages(release_pages_arg arg, int nr)
+void folios_put(struct folio_batch *folios)
{
int i;
- struct encoded_page **encoded = arg.encoded_pages;
LIST_HEAD(pages_to_free);
struct lruvec *lruvec = NULL;
unsigned long flags = 0;
- unsigned int lock_batch;
- for (i = 0; i < nr; i++) {
- struct folio *folio;
-
- /* Turn any of the argument types into a folio */
- folio = page_folio(encoded_page_ptr(encoded[i]));
-
- /*
- * Make sure the IRQ-safe lock-holding time does not get
- * excessive with a continuous string of pages from the
- * same lruvec. The lock is held only if lruvec != NULL.
- */
- if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
- unlock_page_lruvec_irqrestore(lruvec, flags);
- lruvec = NULL;
- }
+ for (i = 0; i < folios->nr; i++) {
+ struct folio *folio = folios->folios[i];
if (is_huge_zero_page(&folio->page))
continue;
@@ -1010,13 +995,8 @@ void release_pages(release_pages_arg arg, int nr)
}
if (folio_test_lru(folio)) {
- struct lruvec *prev_lruvec = lruvec;
-
lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
&flags);
- if (prev_lruvec != lruvec)
- lock_batch = 0;
-
lruvec_del_folio(lruvec, folio);
__folio_clear_lru_flags(folio);
}
@@ -1040,6 +1020,40 @@ void release_pages(release_pages_arg arg, int nr)
mem_cgroup_uncharge_list(&pages_to_free);
free_unref_page_list(&pages_to_free);
+ folios->nr = 0;
+}
+EXPORT_SYMBOL(folios_put);
+
+/**
+ * release_pages - batched put_page()
+ * @arg: array of pages to release
+ * @nr: number of pages
+ *
+ * Decrement the reference count on all the pages in @arg. If it
+ * fell to zero, remove the page from the LRU and free it.
+ *
+ * Note that the argument can be an array of pages, encoded pages,
+ * or folio pointers. We ignore any encoded bits, and turn any of
+ * them into just a folio that gets free'd.
+ */
+void release_pages(release_pages_arg arg, int nr)
+{
+ struct folio_batch fbatch;
+ struct encoded_page **encoded = arg.encoded_pages;
+ int i;
+
+ folio_batch_init(&fbatch);
+ for (i = 0; i < nr; i++) {
+ /* Turn any of the argument types into a folio */
+ struct folio *folio = page_folio(encoded_page_ptr(encoded[i]));
+
+ if (folio_batch_add(&fbatch, folio) > 0)
+ continue;
+ folios_put(&fbatch);
+ }
+
+ if (fbatch.nr)
+ folios_put(&fbatch);
}
EXPORT_SYMBOL(release_pages);
--
2.43.0
* [PATCH v2 02/18] mm: Convert free_unref_page_list() to use folios
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Ryan Roberts
Most of its callees are not yet ready to accept a folio, but we know
all of the pages passed in are actually folios because they're linked
through ->lru.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
mm/page_alloc.c | 38 ++++++++++++++++++++------------------
1 file changed, 20 insertions(+), 18 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7ae4b74c9e5c..a8292c7a0391 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2556,17 +2556,17 @@ void free_unref_page(struct page *page, unsigned int order)
void free_unref_page_list(struct list_head *list)
{
unsigned long __maybe_unused UP_flags;
- struct page *page, *next;
+ struct folio *folio, *next;
struct per_cpu_pages *pcp = NULL;
struct zone *locked_zone = NULL;
int batch_count = 0;
int migratetype;
/* Prepare pages for freeing */
- list_for_each_entry_safe(page, next, list, lru) {
- unsigned long pfn = page_to_pfn(page);
- if (!free_unref_page_prepare(page, pfn, 0)) {
- list_del(&page->lru);
+ list_for_each_entry_safe(folio, next, list, lru) {
+ unsigned long pfn = folio_pfn(folio);
+ if (!free_unref_page_prepare(&folio->page, pfn, 0)) {
+ list_del(&folio->lru);
continue;
}
@@ -2574,24 +2574,25 @@ void free_unref_page_list(struct list_head *list)
* Free isolated pages directly to the allocator, see
* comment in free_unref_page.
*/
- migratetype = get_pcppage_migratetype(page);
+ migratetype = get_pcppage_migratetype(&folio->page);
if (unlikely(is_migrate_isolate(migratetype))) {
- list_del(&page->lru);
- free_one_page(page_zone(page), page, pfn, 0, migratetype, FPI_NONE);
+ list_del(&folio->lru);
+ free_one_page(folio_zone(folio), &folio->page, pfn,
+ 0, migratetype, FPI_NONE);
continue;
}
}
- list_for_each_entry_safe(page, next, list, lru) {
- struct zone *zone = page_zone(page);
+ list_for_each_entry_safe(folio, next, list, lru) {
+ struct zone *zone = folio_zone(folio);
- list_del(&page->lru);
- migratetype = get_pcppage_migratetype(page);
+ list_del(&folio->lru);
+ migratetype = get_pcppage_migratetype(&folio->page);
/*
* Either different zone requiring a different pcp lock or
* excessive lock hold times when freeing a large list of
- * pages.
+ * folios.
*/
if (zone != locked_zone || batch_count == SWAP_CLUSTER_MAX) {
if (pcp) {
@@ -2602,15 +2603,16 @@ void free_unref_page_list(struct list_head *list)
batch_count = 0;
/*
- * trylock is necessary as pages may be getting freed
+ * trylock is necessary as folios may be getting freed
* from IRQ or SoftIRQ context after an IO completion.
*/
pcp_trylock_prepare(UP_flags);
pcp = pcp_spin_trylock(zone->per_cpu_pageset);
if (unlikely(!pcp)) {
pcp_trylock_finish(UP_flags);
- free_one_page(zone, page, page_to_pfn(page),
- 0, migratetype, FPI_NONE);
+ free_one_page(zone, &folio->page,
+ folio_pfn(folio), 0,
+ migratetype, FPI_NONE);
locked_zone = NULL;
continue;
}
@@ -2624,8 +2626,8 @@ void free_unref_page_list(struct list_head *list)
if (unlikely(migratetype >= MIGRATE_PCPTYPES))
migratetype = MIGRATE_MOVABLE;
- trace_mm_page_free_batched(page);
- free_unref_page_commit(zone, pcp, page, migratetype, 0);
+ trace_mm_page_free_batched(&folio->page);
+ free_unref_page_commit(zone, pcp, &folio->page, migratetype, 0);
batch_count++;
}
--
2.43.0
* [PATCH v2 03/18] mm: Add free_unref_folios()
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm
Iterate over a folio_batch rather than a linked list. This is easier for
the CPU to prefetch and has a batch count naturally built in so we don't
need to track it. Again, this lowers the maximum number of folios for
which the lock is held from 32 to 15, but I do not expect this to have
a significant effect.
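For reference, the built-in count comes from the folio_batch helpers in
include/linux/pagevec.h, which at the time of this series look roughly
like this (abridged):

	struct folio_batch {
		unsigned char nr;			/* the built-in count */
		/* ... */
		struct folio *folios[PAGEVEC_SIZE];	/* PAGEVEC_SIZE is 15 */
	};

	static inline unsigned folio_batch_add(struct folio_batch *fbatch,
			struct folio *folio)
	{
		fbatch->folios[fbatch->nr++] = folio;
		return folio_batch_space(fbatch);	/* slots left; 0 means full */
	}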
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/internal.h | 5 +++--
mm/page_alloc.c | 59 ++++++++++++++++++++++++++++++-------------------
2 files changed, 39 insertions(+), 25 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index d6044c684e93..4d45b351e0fd 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -449,8 +449,9 @@ extern void post_alloc_hook(struct page *page, unsigned int order,
gfp_t gfp_flags);
extern int user_min_free_kbytes;
-extern void free_unref_page(struct page *page, unsigned int order);
-extern void free_unref_page_list(struct list_head *list);
+void free_unref_page(struct page *page, unsigned int order);
+void free_unref_folios(struct folio_batch *fbatch);
+void free_unref_page_list(struct list_head *list);
extern void zone_pcp_reset(struct zone *zone);
extern void zone_pcp_disable(struct zone *zone);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a8292c7a0391..8ef1c5c86472 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -32,6 +32,7 @@
#include <linux/sysctl.h>
#include <linux/cpu.h>
#include <linux/cpuset.h>
+#include <linux/pagevec.h>
#include <linux/memory_hotplug.h>
#include <linux/nodemask.h>
#include <linux/vmstat.h>
@@ -2551,57 +2552,51 @@ void free_unref_page(struct page *page, unsigned int order)
}
/*
- * Free a list of 0-order pages
+ * Free a batch of 0-order pages
*/
-void free_unref_page_list(struct list_head *list)
+void free_unref_folios(struct folio_batch *folios)
{
unsigned long __maybe_unused UP_flags;
- struct folio *folio, *next;
struct per_cpu_pages *pcp = NULL;
struct zone *locked_zone = NULL;
- int batch_count = 0;
- int migratetype;
+ int i, j, migratetype;
- /* Prepare pages for freeing */
- list_for_each_entry_safe(folio, next, list, lru) {
+ /* Prepare folios for freeing */
+ for (i = 0, j = 0; i < folios->nr; i++) {
+ struct folio *folio = folios->folios[i];
unsigned long pfn = folio_pfn(folio);
- if (!free_unref_page_prepare(&folio->page, pfn, 0)) {
- list_del(&folio->lru);
+ if (!free_unref_page_prepare(&folio->page, pfn, 0))
continue;
- }
/*
- * Free isolated pages directly to the allocator, see
+ * Free isolated folios directly to the allocator, see
* comment in free_unref_page.
*/
migratetype = get_pcppage_migratetype(&folio->page);
if (unlikely(is_migrate_isolate(migratetype))) {
- list_del(&folio->lru);
free_one_page(folio_zone(folio), &folio->page, pfn,
0, migratetype, FPI_NONE);
continue;
}
+ if (j != i)
+ folios->folios[j] = folio;
+ j++;
}
+ folios->nr = j;
- list_for_each_entry_safe(folio, next, list, lru) {
+ for (i = 0; i < folios->nr; i++) {
+ struct folio *folio = folios->folios[i];
struct zone *zone = folio_zone(folio);
- list_del(&folio->lru);
migratetype = get_pcppage_migratetype(&folio->page);
- /*
- * Either different zone requiring a different pcp lock or
- * excessive lock hold times when freeing a large list of
- * folios.
- */
- if (zone != locked_zone || batch_count == SWAP_CLUSTER_MAX) {
+ /* Different zone requires a different pcp lock */
+ if (zone != locked_zone) {
if (pcp) {
pcp_spin_unlock(pcp);
pcp_trylock_finish(UP_flags);
}
- batch_count = 0;
-
/*
* trylock is necessary as folios may be getting freed
* from IRQ or SoftIRQ context after an IO completion.
@@ -2628,13 +2623,31 @@ void free_unref_page_list(struct list_head *list)
trace_mm_page_free_batched(&folio->page);
free_unref_page_commit(zone, pcp, &folio->page, migratetype, 0);
- batch_count++;
}
if (pcp) {
pcp_spin_unlock(pcp);
pcp_trylock_finish(UP_flags);
}
+ folios->nr = 0;
+}
+
+void free_unref_page_list(struct list_head *list)
+{
+ struct folio_batch fbatch;
+
+ folio_batch_init(&fbatch);
+ while (!list_empty(list)) {
+ struct folio *folio = list_first_entry(list, struct folio, lru);
+
+ list_del(&folio->lru);
+ if (folio_batch_add(&fbatch, folio) > 0)
+ continue;
+ free_unref_folios(&fbatch);
+ }
+
+ if (fbatch.nr)
+ free_unref_folios(&fbatch);
}
/*
--
2.43.0
* [PATCH v2 04/18] mm: Use folios_put() in __folio_batch_release()
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Ryan Roberts
There's no need to indirect through release_pages() and iterate
over this batch of folios an extra time; we can just use the batch
that we have.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
mm/swap.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index 7bdc63b56859..c5ea0c6669e7 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -1073,8 +1073,7 @@ void __folio_batch_release(struct folio_batch *fbatch)
lru_add_drain();
fbatch->percpu_pvec_drained = true;
}
- release_pages(fbatch->folios, folio_batch_count(fbatch));
- folio_batch_reinit(fbatch);
+ folios_put(fbatch);
}
EXPORT_SYMBOL(__folio_batch_release);
--
2.43.0
* [PATCH v2 05/18] memcg: Add mem_cgroup_uncharge_folios()
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Ryan Roberts
Almost identical to mem_cgroup_uncharge_list(), except it takes a
folio_batch instead of a list_head.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
include/linux/memcontrol.h | 14 ++++++++++++--
mm/memcontrol.c | 13 +++++++++++++
2 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4e4caeaea404..46d9abb20761 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -721,10 +721,16 @@ static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
__mem_cgroup_uncharge_list(page_list);
}
-void mem_cgroup_cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages);
+void __mem_cgroup_uncharge_folios(struct folio_batch *folios);
+static inline void mem_cgroup_uncharge_folios(struct folio_batch *folios)
+{
+ if (mem_cgroup_disabled())
+ return;
+ __mem_cgroup_uncharge_folios(folios);
+}
+void mem_cgroup_cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages);
void mem_cgroup_replace_folio(struct folio *old, struct folio *new);
-
void mem_cgroup_migrate(struct folio *old, struct folio *new);
/**
@@ -1299,6 +1305,10 @@ static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
{
}
+static inline void mem_cgroup_uncharge_folios(struct folio_batch *folios)
+{
+}
+
static inline void mem_cgroup_cancel_charge(struct mem_cgroup *memcg,
unsigned int nr_pages)
{
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 94d1b278c458..0499d7838224 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -33,6 +33,7 @@
#include <linux/shmem_fs.h>
#include <linux/hugetlb.h>
#include <linux/pagemap.h>
+#include <linux/pagevec.h>
#include <linux/vm_event_item.h>
#include <linux/smp.h>
#include <linux/page-flags.h>
@@ -7564,6 +7565,18 @@ void __mem_cgroup_uncharge_list(struct list_head *page_list)
uncharge_batch(&ug);
}
+void __mem_cgroup_uncharge_folios(struct folio_batch *folios)
+{
+ struct uncharge_gather ug;
+ unsigned int i;
+
+ uncharge_gather_clear(&ug);
+ for (i = 0; i < folios->nr; i++)
+ uncharge_folio(folios->folios[i], &ug);
+ if (ug.memcg)
+ uncharge_batch(&ug);
+}
+
/**
* mem_cgroup_replace_folio - Charge a folio's replacement.
* @old: Currently circulating folio.
--
2.43.0
* [PATCH v2 06/18] mm: Remove use of folio list from folios_put()
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Ryan Roberts
Instead of putting the interesting folios on a list, delete the
uninteresting ones from the folio_batch.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
mm/swap.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index c5ea0c6669e7..6b2f3f77450c 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -959,12 +959,11 @@ void lru_cache_disable(void)
*/
void folios_put(struct folio_batch *folios)
{
- int i;
- LIST_HEAD(pages_to_free);
+ int i, j;
struct lruvec *lruvec = NULL;
unsigned long flags = 0;
- for (i = 0; i < folios->nr; i++) {
+ for (i = 0, j = 0; i < folios->nr; i++) {
struct folio *folio = folios->folios[i];
if (is_huge_zero_page(&folio->page))
@@ -1013,14 +1012,18 @@ void folios_put(struct folio_batch *folios)
count_vm_event(UNEVICTABLE_PGCLEARED);
}
- list_add(&folio->lru, &pages_to_free);
+ if (j != i)
+ folios->folios[j] = folio;
+ j++;
}
if (lruvec)
unlock_page_lruvec_irqrestore(lruvec, flags);
+ folios->nr = j;
+ if (!j)
+ return;
- mem_cgroup_uncharge_list(&pages_to_free);
- free_unref_page_list(&pages_to_free);
- folios->nr = 0;
+ mem_cgroup_uncharge_folios(folios);
+ free_unref_folios(folios);
}
EXPORT_SYMBOL(folios_put);
--
2.43.0
* [PATCH v2 07/18] mm: Use free_unref_folios() in put_pages_list()
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm
Break up the list of folios into batches here so that the folios are
more likely to be cache hot when doing the rest of the processing.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/swap.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index 6b2f3f77450c..fffdb48cfbf9 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -138,22 +138,25 @@ EXPORT_SYMBOL(__folio_put);
*/
void put_pages_list(struct list_head *pages)
{
- struct folio *folio, *next;
+ struct folio_batch fbatch;
+ struct folio *folio;
- list_for_each_entry_safe(folio, next, pages, lru) {
- if (!folio_put_testzero(folio)) {
- list_del(&folio->lru);
+ folio_batch_init(&fbatch);
+ list_for_each_entry(folio, pages, lru) {
+ if (!folio_put_testzero(folio))
continue;
- }
if (folio_test_large(folio)) {
- list_del(&folio->lru);
__folio_put_large(folio);
continue;
}
/* LRU flag must be clear because it's passed using the lru */
+ if (folio_batch_add(&fbatch, folio) > 0)
+ continue;
+ free_unref_folios(&fbatch);
}
- free_unref_page_list(pages);
+ if (fbatch.nr)
+ free_unref_folios(&fbatch);
INIT_LIST_HEAD(pages);
}
EXPORT_SYMBOL(put_pages_list);
--
2.43.0
* [PATCH v2 08/18] mm: use __page_cache_release() in folios_put()
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm
Pass a pointer to the lruvec so we can take advantage of
folio_lruvec_relock_irqsave(). Adjust the calling convention of
folio_lruvec_relock_irqsave() to suit and add a page_cache_release()
wrapper.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
include/linux/memcontrol.h | 16 +++++-----
mm/swap.c | 62 ++++++++++++++++++--------------------
2 files changed, 37 insertions(+), 41 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 46d9abb20761..8a0e8972a3d3 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1705,18 +1705,18 @@ static inline struct lruvec *folio_lruvec_relock_irq(struct folio *folio,
return folio_lruvec_lock_irq(folio);
}
-/* Don't lock again iff page's lruvec locked */
-static inline struct lruvec *folio_lruvec_relock_irqsave(struct folio *folio,
- struct lruvec *locked_lruvec, unsigned long *flags)
+/* Don't lock again iff folio's lruvec locked */
+static inline void folio_lruvec_relock_irqsave(struct folio *folio,
+ struct lruvec **lruvecp, unsigned long *flags)
{
- if (locked_lruvec) {
- if (folio_matches_lruvec(folio, locked_lruvec))
- return locked_lruvec;
+ if (*lruvecp) {
+ if (folio_matches_lruvec(folio, *lruvecp))
+ return;
- unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
+ unlock_page_lruvec_irqrestore(*lruvecp, *flags);
}
- return folio_lruvec_lock_irqsave(folio, flags);
+ *lruvecp = folio_lruvec_lock_irqsave(folio, flags);
}
#ifdef CONFIG_CGROUP_WRITEBACK
diff --git a/mm/swap.c b/mm/swap.c
index fffdb48cfbf9..21c2df0f7928 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -74,22 +74,21 @@ static DEFINE_PER_CPU(struct cpu_fbatches, cpu_fbatches) = {
.lock = INIT_LOCAL_LOCK(lock),
};
-/*
- * This path almost never happens for VM activity - pages are normally freed
- * in batches. But it gets used by networking - and for compound pages.
- */
-static void __page_cache_release(struct folio *folio)
+static void __page_cache_release(struct folio *folio, struct lruvec **lruvecp,
+ unsigned long *flagsp)
{
if (folio_test_lru(folio)) {
- struct lruvec *lruvec;
- unsigned long flags;
-
- lruvec = folio_lruvec_lock_irqsave(folio, &flags);
- lruvec_del_folio(lruvec, folio);
+ folio_lruvec_relock_irqsave(folio, lruvecp, flagsp);
+ lruvec_del_folio(*lruvecp, folio);
__folio_clear_lru_flags(folio);
- unlock_page_lruvec_irqrestore(lruvec, flags);
}
- /* See comment on folio_test_mlocked in folios_put() */
+
+ /*
+ * In rare cases, when truncation or holepunching raced with
+ * munlock after VM_LOCKED was cleared, Mlocked may still be
+ * found set here. This does not indicate a problem, unless
+ * "unevictable_pgs_cleared" appears worryingly large.
+ */
if (unlikely(folio_test_mlocked(folio))) {
long nr_pages = folio_nr_pages(folio);
@@ -99,9 +98,23 @@ static void __page_cache_release(struct folio *folio)
}
}
+/*
+ * This path almost never happens for VM activity - pages are normally freed
+ * in batches. But it gets used by networking - and for compound pages.
+ */
+static void page_cache_release(struct folio *folio)
+{
+ struct lruvec *lruvec = NULL;
+ unsigned long flags;
+
+ __page_cache_release(folio, &lruvec, &flags);
+ if (lruvec)
+ unlock_page_lruvec_irqrestore(lruvec, flags);
+}
+
static void __folio_put_small(struct folio *folio)
{
- __page_cache_release(folio);
+ page_cache_release(folio);
mem_cgroup_uncharge(folio);
free_unref_page(&folio->page, 0);
}
@@ -115,7 +128,7 @@ static void __folio_put_large(struct folio *folio)
* be called for hugetlb (it has a separate hugetlb_cgroup.)
*/
if (!folio_test_hugetlb(folio))
- __page_cache_release(folio);
+ page_cache_release(folio);
destroy_large_folio(folio);
}
@@ -216,7 +229,7 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
if (move_fn != lru_add_fn && !folio_test_clear_lru(folio))
continue;
- lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
+ folio_lruvec_relock_irqsave(folio, &lruvec, &flags);
move_fn(lruvec, folio);
folio_set_lru(folio);
@@ -996,24 +1009,7 @@ void folios_put(struct folio_batch *folios)
continue;
}
- if (folio_test_lru(folio)) {
- lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
- &flags);
- lruvec_del_folio(lruvec, folio);
- __folio_clear_lru_flags(folio);
- }
-
- /*
- * In rare cases, when truncation or holepunching raced with
- * munlock after VM_LOCKED was cleared, Mlocked may still be
- * found set here. This does not indicate a problem, unless
- * "unevictable_pgs_cleared" appears worryingly large.
- */
- if (unlikely(folio_test_mlocked(folio))) {
- __folio_clear_mlocked(folio);
- zone_stat_sub_folio(folio, NR_MLOCK);
- count_vm_event(UNEVICTABLE_PGCLEARED);
- }
+ __page_cache_release(folio, &lruvec, &flags);
if (j != i)
folios->folios[j] = folio;
--
2.43.0
* [PATCH v2 09/18] mm: Handle large folios in free_unref_folios()
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm
Call folio_undo_large_rmappable() if needed. free_unref_page_prepare()
destroys the ability to call folio_order(), so stash the order in
folio->private for the benefit of the second loop.
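Condensed, the resulting shape of free_unref_folios() is roughly the
two-pass structure below (zone/pcp locking and batch compaction elided;
see the diff for the real thing):

	/* Pass 1: folio_order() is still valid, so read it before prepare. */
	for (i = 0; i < folios->nr; i++) {
		struct folio *folio = folios->folios[i];
		unsigned int order = folio_order(folio);

		if (!free_unref_page_prepare(&folio->page, folio_pfn(folio), order))
			continue;
		/* prepare destroys what folio_order() relies on; stash the order */
		folio->private = (void *)(unsigned long)order;
	}

	/* Pass 2: recover the stashed order instead of calling folio_order(). */
	for (i = 0; i < folios->nr; i++) {
		struct folio *folio = folios->folios[i];
		unsigned int order = (unsigned long)folio->private;

		folio->private = NULL;
		free_unref_page_commit(zone, pcp, &folio->page, migratetype, order);
	}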
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/page_alloc.c | 25 +++++++++++++++++--------
1 file changed, 17 insertions(+), 8 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8ef1c5c86472..eca5b153f732 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2552,7 +2552,7 @@ void free_unref_page(struct page *page, unsigned int order)
}
/*
- * Free a batch of 0-order pages
+ * Free a batch of folios
*/
void free_unref_folios(struct folio_batch *folios)
{
@@ -2565,19 +2565,25 @@ void free_unref_folios(struct folio_batch *folios)
for (i = 0, j = 0; i < folios->nr; i++) {
struct folio *folio = folios->folios[i];
unsigned long pfn = folio_pfn(folio);
- if (!free_unref_page_prepare(&folio->page, pfn, 0))
+ unsigned int order = folio_order(folio);
+
+ if (order > 0 && folio_test_large_rmappable(folio))
+ folio_undo_large_rmappable(folio);
+ if (!free_unref_page_prepare(&folio->page, pfn, order))
continue;
/*
- * Free isolated folios directly to the allocator, see
- * comment in free_unref_page.
+ * Free isolated folios and orders not handled on the PCP
+ * directly to the allocator, see comment in free_unref_page.
*/
migratetype = get_pcppage_migratetype(&folio->page);
- if (unlikely(is_migrate_isolate(migratetype))) {
+ if (!pcp_allowed_order(order) ||
+ is_migrate_isolate(migratetype)) {
free_one_page(folio_zone(folio), &folio->page, pfn,
- 0, migratetype, FPI_NONE);
+ order, migratetype, FPI_NONE);
continue;
}
+ folio->private = (void *)(unsigned long)order;
if (j != i)
folios->folios[j] = folio;
j++;
@@ -2587,7 +2593,9 @@ void free_unref_folios(struct folio_batch *folios)
for (i = 0; i < folios->nr; i++) {
struct folio *folio = folios->folios[i];
struct zone *zone = folio_zone(folio);
+ unsigned int order = (unsigned long)folio->private;
+ folio->private = NULL;
migratetype = get_pcppage_migratetype(&folio->page);
/* Different zone requires a different pcp lock */
@@ -2606,7 +2614,7 @@ void free_unref_folios(struct folio_batch *folios)
if (unlikely(!pcp)) {
pcp_trylock_finish(UP_flags);
free_one_page(zone, &folio->page,
- folio_pfn(folio), 0,
+ folio_pfn(folio), order,
migratetype, FPI_NONE);
locked_zone = NULL;
continue;
@@ -2622,7 +2630,8 @@ void free_unref_folios(struct folio_batch *folios)
migratetype = MIGRATE_MOVABLE;
trace_mm_page_free_batched(&folio->page);
- free_unref_page_commit(zone, pcp, &folio->page, migratetype, 0);
+ free_unref_page_commit(zone, pcp, &folio->page, migratetype,
+ order);
}
if (pcp) {
--
2.43.0
* [PATCH v2 10/18] mm: Allow non-hugetlb large folios to be batch processed
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Ryan Roberts
Hugetlb folios still get special treatment, but normal large folios
can now be freed by free_unref_folios(). This should have a reasonable
performance impact, TBD.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
mm/swap.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index 21c2df0f7928..8bd15402cd8f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -1000,12 +1000,13 @@ void folios_put(struct folio_batch *folios)
if (!folio_put_testzero(folio))
continue;
- if (folio_test_large(folio)) {
+ /* hugetlb has its own memcg */
+ if (folio_test_hugetlb(folio)) {
if (lruvec) {
unlock_page_lruvec_irqrestore(lruvec, flags);
lruvec = NULL;
}
- __folio_put_large(folio);
+ free_huge_folio(folio);
continue;
}
--
2.43.0
* [PATCH v2 11/18] mm: Free folios in a batch in shrink_folio_list()
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Mel Gorman
Use free_unref_folios() to free the folios. This may increase the
number of IPIs from calling try_to_unmap_flush() more often, but that's
going to be very workload-dependent. It may even reduce the number of
IPIs as we now batch-free large folios instead of freeing them one at
a time.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Mel Gorman <mgorman@suse.de>
---
mm/vmscan.c | 20 +++++++++-----------
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 87df3a48bdd7..e2292855f58e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1023,14 +1023,15 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
struct pglist_data *pgdat, struct scan_control *sc,
struct reclaim_stat *stat, bool ignore_references)
{
+ struct folio_batch free_folios;
LIST_HEAD(ret_folios);
- LIST_HEAD(free_folios);
LIST_HEAD(demote_folios);
unsigned int nr_reclaimed = 0;
unsigned int pgactivate = 0;
bool do_demote_pass;
struct swap_iocb *plug = NULL;
+ folio_batch_init(&free_folios);
memset(stat, 0, sizeof(*stat));
cond_resched();
do_demote_pass = can_demote(pgdat->node_id, sc);
@@ -1429,14 +1430,11 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
*/
nr_reclaimed += nr_pages;
- /*
- * Is there need to periodically free_folio_list? It would
- * appear not as the counts should be low
- */
- if (unlikely(folio_test_large(folio)))
- destroy_large_folio(folio);
- else
- list_add(&folio->lru, &free_folios);
+ if (folio_batch_add(&free_folios, folio) == 0) {
+ mem_cgroup_uncharge_folios(&free_folios);
+ try_to_unmap_flush();
+ free_unref_folios(&free_folios);
+ }
continue;
activate_locked_split:
@@ -1500,9 +1498,9 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
- mem_cgroup_uncharge_list(&free_folios);
+ mem_cgroup_uncharge_folios(&free_folios);
try_to_unmap_flush();
- free_unref_page_list(&free_folios);
+ free_unref_folios(&free_folios);
list_splice(&ret_folios, folio_list);
count_vm_events(PGACTIVATE, pgactivate);
--
2.43.0
* [PATCH v2 12/18] mm: Free folios directly in move_folios_to_lru()
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm
The few folios which can't be moved to the LRU list (because their
refcount dropped to zero) used to be returned to the caller to dispose
of. Simplify the callers by freeing those folios directly through
free_unref_folios() instead.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/vmscan.c | 32 ++++++++++++--------------------
1 file changed, 12 insertions(+), 20 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index e2292855f58e..84f838330c3b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1798,7 +1798,6 @@ static bool too_many_isolated(struct pglist_data *pgdat, int file,
/*
* move_folios_to_lru() moves folios from private @list to appropriate LRU list.
- * On return, @list is reused as a list of folios to be freed by the caller.
*
* Returns the number of pages moved to the given lruvec.
*/
@@ -1806,8 +1805,9 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
struct list_head *list)
{
int nr_pages, nr_moved = 0;
- LIST_HEAD(folios_to_free);
+ struct folio_batch free_folios;
+ folio_batch_init(&free_folios);
while (!list_empty(list)) {
struct folio *folio = lru_to_folio(list);
@@ -1836,12 +1836,12 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
if (unlikely(folio_put_testzero(folio))) {
__folio_clear_lru_flags(folio);
- if (unlikely(folio_test_large(folio))) {
+ if (folio_batch_add(&free_folios, folio) == 0) {
spin_unlock_irq(&lruvec->lru_lock);
- destroy_large_folio(folio);
+ mem_cgroup_uncharge_folios(&free_folios);
+ free_unref_folios(&free_folios);
spin_lock_irq(&lruvec->lru_lock);
- } else
- list_add(&folio->lru, &folios_to_free);
+ }
continue;
}
@@ -1858,10 +1858,12 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
workingset_age_nonresident(lruvec, nr_pages);
}
- /*
- * To save our caller's stack, now use input list for pages to free.
- */
- list_splice(&folios_to_free, list);
+ if (free_folios.nr) {
+ spin_unlock_irq(&lruvec->lru_lock);
+ mem_cgroup_uncharge_folios(&free_folios);
+ free_unref_folios(&free_folios);
+ spin_lock_irq(&lruvec->lru_lock);
+ }
return nr_moved;
}
@@ -1940,8 +1942,6 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
spin_unlock_irq(&lruvec->lru_lock);
lru_note_cost(lruvec, file, stat.nr_pageout, nr_scanned - nr_reclaimed);
- mem_cgroup_uncharge_list(&folio_list);
- free_unref_page_list(&folio_list);
/*
* If dirty folios are scanned that are not queued for IO, it
@@ -2082,8 +2082,6 @@ static void shrink_active_list(unsigned long nr_to_scan,
nr_activate = move_folios_to_lru(lruvec, &l_active);
nr_deactivate = move_folios_to_lru(lruvec, &l_inactive);
- /* Keep all free folios in l_active list */
- list_splice(&l_inactive, &l_active);
__count_vm_events(PGDEACTIVATE, nr_deactivate);
__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
@@ -2093,8 +2091,6 @@ static void shrink_active_list(unsigned long nr_to_scan,
if (nr_rotated)
lru_note_cost(lruvec, file, 0, nr_rotated);
- mem_cgroup_uncharge_list(&l_active);
- free_unref_page_list(&l_active);
trace_mm_vmscan_lru_shrink_active(pgdat->node_id, nr_taken, nr_activate,
nr_deactivate, nr_rotated, sc->priority, file);
}
@@ -4594,10 +4590,6 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
spin_unlock_irq(&lruvec->lru_lock);
- mem_cgroup_uncharge_list(&list);
- free_unref_page_list(&list);
-
- INIT_LIST_HEAD(&list);
list_splice_init(&clean, &list);
if (!list_empty(&list)) {
--
2.43.0
* [PATCH v2 13/18] memcg: Remove mem_cgroup_uncharge_list()
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Ryan Roberts
All users have been converted to mem_cgroup_uncharge_folios() so
we can remove this API.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
include/linux/memcontrol.h | 12 ------------
mm/memcontrol.c | 19 -------------------
2 files changed, 31 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 8a0e8972a3d3..6ed0c54a3773 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -713,14 +713,6 @@ static inline void mem_cgroup_uncharge(struct folio *folio)
__mem_cgroup_uncharge(folio);
}
-void __mem_cgroup_uncharge_list(struct list_head *page_list);
-static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
-{
- if (mem_cgroup_disabled())
- return;
- __mem_cgroup_uncharge_list(page_list);
-}
-
void __mem_cgroup_uncharge_folios(struct folio_batch *folios);
static inline void mem_cgroup_uncharge_folios(struct folio_batch *folios)
{
@@ -1301,10 +1293,6 @@ static inline void mem_cgroup_uncharge(struct folio *folio)
{
}
-static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
-{
-}
-
static inline void mem_cgroup_uncharge_folios(struct folio_batch *folios)
{
}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 0499d7838224..d45b9f322a92 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7546,25 +7546,6 @@ void __mem_cgroup_uncharge(struct folio *folio)
uncharge_batch(&ug);
}
-/**
- * __mem_cgroup_uncharge_list - uncharge a list of page
- * @page_list: list of pages to uncharge
- *
- * Uncharge a list of pages previously charged with
- * __mem_cgroup_charge().
- */
-void __mem_cgroup_uncharge_list(struct list_head *page_list)
-{
- struct uncharge_gather ug;
- struct folio *folio;
-
- uncharge_gather_clear(&ug);
- list_for_each_entry(folio, page_list, lru)
- uncharge_folio(folio, &ug);
- if (ug.memcg)
- uncharge_batch(&ug);
-}
-
void __mem_cgroup_uncharge_folios(struct folio_batch *folios)
{
struct uncharge_gather ug;
--
2.43.0
* [PATCH v2 14/18] mm: Remove free_unref_page_list()
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Ryan Roberts
All callers now use free_unref_folios() so we can delete this function.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
mm/internal.h | 1 -
mm/page_alloc.c | 18 ------------------
2 files changed, 19 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index 4d45b351e0fd..3e2b478c610f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -451,7 +451,6 @@ extern int user_min_free_kbytes;
void free_unref_page(struct page *page, unsigned int order);
void free_unref_folios(struct folio_batch *fbatch);
-void free_unref_page_list(struct list_head *list);
extern void zone_pcp_reset(struct zone *zone);
extern void zone_pcp_disable(struct zone *zone);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eca5b153f732..7600344b997e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2641,24 +2641,6 @@ void free_unref_folios(struct folio_batch *folios)
folios->nr = 0;
}
-void free_unref_page_list(struct list_head *list)
-{
- struct folio_batch fbatch;
-
- folio_batch_init(&fbatch);
- while (!list_empty(list)) {
- struct folio *folio = list_first_entry(list, struct folio, lru);
-
- list_del(&folio->lru);
- if (folio_batch_add(&fbatch, folio) > 0)
- continue;
- free_unref_folios(&fbatch);
- }
-
- if (fbatch.nr)
- free_unref_folios(&fbatch);
-}
-
/*
* split_page takes a non-compound higher-order page, and splits it into
* n (1<<order) sub-pages: page[0..n]
--
2.43.0
* [PATCH v2 15/18] mm: Remove lru_to_page()
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm
The last user was removed over a year ago; remove the definition.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
include/linux/mm.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2a1ebda5fb79..bb0ea42c2990 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -227,7 +227,6 @@ int overcommit_policy_handler(struct ctl_table *, int, void *, size_t *,
/* test whether an address (unsigned long or pointer) is aligned to PAGE_SIZE */
#define PAGE_ALIGNED(addr) IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)
-#define lru_to_page(head) (list_entry((head)->prev, struct page, lru))
static inline struct folio *lru_to_folio(struct list_head *head)
{
return list_entry((head)->prev, struct folio, lru);
--
2.43.0
* [PATCH v2 16/18] mm: Convert free_pages_and_swap_cache() to use folios_put()
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm
Process the pages in batch-sized quantities instead of all at once.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/swap_state.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 7255c01a1e4e..f2e07022d763 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -15,6 +15,7 @@
#include <linux/swapops.h>
#include <linux/init.h>
#include <linux/pagemap.h>
+#include <linux/pagevec.h>
#include <linux/backing-dev.h>
#include <linux/blkdev.h>
#include <linux/migrate.h>
@@ -310,10 +311,18 @@ void free_page_and_swap_cache(struct page *page)
*/
void free_pages_and_swap_cache(struct encoded_page **pages, int nr)
{
+ struct folio_batch folios;
+
lru_add_drain();
- for (int i = 0; i < nr; i++)
- free_swap_cache(encoded_page_ptr(pages[i]));
- release_pages(pages, nr);
+ folio_batch_init(&folios);
+ for (int i = 0; i < nr; i++) {
+ struct folio *folio = page_folio(encoded_page_ptr(pages[i]));
+ free_swap_cache(&folio->page);
+ if (folio_batch_add(&folios, folio) == 0)
+ folios_put(&folios);
+ }
+ if (folios.nr)
+ folios_put(&folios);
}
static inline bool swap_use_vma_readahead(void)
--
2.43.0
* [PATCH v2 17/18] mm: Use a folio in __collapse_huge_page_copy_succeeded()
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm
These pages are all chained together through the lru list, so we know
they're folios. Use the folio APIs to save three hidden calls to
compound_head().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/khugepaged.c | 30 ++++++++++++++----------------
1 file changed, 14 insertions(+), 16 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2771fc043b3b..5cc39c3f3847 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -689,9 +689,7 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
spinlock_t *ptl,
struct list_head *compound_pagelist)
{
- struct folio *src_folio;
- struct page *src_page;
- struct page *tmp;
+ struct folio *src, *tmp;
pte_t *_pte;
pte_t pteval;
@@ -710,10 +708,11 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
ksm_might_unmap_zero_page(vma->vm_mm, pteval);
}
} else {
- src_page = pte_page(pteval);
- src_folio = page_folio(src_page);
- if (!folio_test_large(src_folio))
- release_pte_folio(src_folio);
+ struct page *src_page = pte_page(pteval);
+
+ src = page_folio(src_page);
+ if (!folio_test_large(src))
+ release_pte_folio(src);
/*
* ptl mostly unnecessary, but preempt has to
* be disabled to update the per-cpu stats
@@ -721,20 +720,19 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
*/
spin_lock(ptl);
ptep_clear(vma->vm_mm, address, _pte);
- folio_remove_rmap_pte(src_folio, src_page, vma);
+ folio_remove_rmap_pte(src, src_page, vma);
spin_unlock(ptl);
free_page_and_swap_cache(src_page);
}
}
- list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
- list_del(&src_page->lru);
- mod_node_page_state(page_pgdat(src_page),
- NR_ISOLATED_ANON + page_is_file_lru(src_page),
- -compound_nr(src_page));
- unlock_page(src_page);
- free_swap_cache(src_page);
- putback_lru_page(src_page);
+ list_for_each_entry_safe(src, tmp, compound_pagelist, lru) {
+ list_del(&src->lru);
+ node_stat_sub_folio(src, NR_ISOLATED_ANON +
+ folio_is_file_lru(src));
+ folio_unlock(src);
+ free_swap_cache(&src->page);
+ folio_putback_lru(src);
}
}
--
2.43.0
* [PATCH v2 18/18] mm: Convert free_swap_cache() to take a folio
From: Matthew Wilcox (Oracle) @ 2024-02-17 2:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Ryan Roberts
All but one caller already has a folio, so convert free_swap_cache()
to take a folio and remove its call to page_folio(). The one remaining
caller, free_page_and_swap_cache(), now does the conversion itself.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
include/linux/swap.h | 8 ++++----
mm/khugepaged.c | 2 +-
mm/memory.c | 2 +-
mm/swap_state.c | 12 ++++++------
4 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 3e2b038852bb..a211a0383425 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -440,9 +440,9 @@ static inline unsigned long total_swapcache_pages(void)
return global_node_page_state(NR_SWAPCACHE);
}
-extern void free_swap_cache(struct page *page);
-extern void free_page_and_swap_cache(struct page *);
-extern void free_pages_and_swap_cache(struct encoded_page **, int);
+void free_swap_cache(struct folio *folio);
+void free_page_and_swap_cache(struct page *);
+void free_pages_and_swap_cache(struct encoded_page **, int);
/* linux/mm/swapfile.c */
extern atomic_long_t nr_swap_pages;
extern long total_swap_pages;
@@ -524,7 +524,7 @@ static inline void put_swap_device(struct swap_info_struct *si)
/* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
#define free_swap_and_cache(e) is_pfn_swap_entry(e)
-static inline void free_swap_cache(struct page *page)
+static inline void free_swap_cache(struct folio *folio)
{
}
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5cc39c3f3847..d19fba3355a7 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -731,7 +731,7 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
node_stat_sub_folio(src, NR_ISOLATED_ANON +
folio_is_file_lru(src));
folio_unlock(src);
- free_swap_cache(&src->page);
+ free_swap_cache(src);
folio_putback_lru(src);
}
}
diff --git a/mm/memory.c b/mm/memory.c
index e3e32c5b4be1..815312b2dc48 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3376,7 +3376,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
folio_put(new_folio);
if (old_folio) {
if (page_copied)
- free_swap_cache(&old_folio->page);
+ free_swap_cache(old_folio);
folio_put(old_folio);
}
diff --git a/mm/swap_state.c b/mm/swap_state.c
index f2e07022d763..3f58d6fd5b44 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -283,10 +283,8 @@ void clear_shadow_from_swap_cache(int type, unsigned long begin,
* folio_free_swap() _with_ the lock.
* - Marcelo
*/
-void free_swap_cache(struct page *page)
+void free_swap_cache(struct folio *folio)
{
- struct folio *folio = page_folio(page);
-
if (folio_test_swapcache(folio) && !folio_mapped(folio) &&
folio_trylock(folio)) {
folio_free_swap(folio);
@@ -300,9 +298,11 @@ void free_swap_cache(struct page *page)
*/
void free_page_and_swap_cache(struct page *page)
{
- free_swap_cache(page);
+ struct folio *folio = page_folio(page);
+
+ free_swap_cache(folio);
if (!is_huge_zero_page(page))
- put_page(page);
+ folio_put(folio);
}
/*
@@ -317,7 +317,7 @@ void free_pages_and_swap_cache(struct encoded_page **pages, int nr)
folio_batch_init(&folios);
for (int i = 0; i < nr; i++) {
struct folio *folio = page_folio(encoded_page_ptr(pages[i]));
- free_swap_cache(&folio->page);
+ free_swap_cache(folio);
if (folio_batch_add(&folios, folio) == 0)
folios_put(&folios);
}
--
2.43.0
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v2 01/18] mm: Make folios_put() the basis of release_pages()
2024-02-17 2:25 ` [PATCH v2 01/18] mm: Make folios_put() the basis of release_pages() Matthew Wilcox (Oracle)
@ 2024-02-19 9:43 ` David Hildenbrand
2024-02-19 15:03 ` Matthew Wilcox
0 siblings, 1 reply; 24+ messages in thread
From: David Hildenbrand @ 2024-02-19 9:43 UTC (permalink / raw)
To: Matthew Wilcox (Oracle), Andrew Morton; +Cc: linux-mm
On 17.02.24 03:25, Matthew Wilcox (Oracle) wrote:
> By making release_pages() call folios_put(), we can get rid of the calls
> to compound_head() for the callers that already know they have folios.
> We can also get rid of the lock_batch tracking as we know the size
> of the batch is limited by folio_batch. This does reduce the maximum
> number of pages for which the lruvec lock is held, from SWAP_CLUSTER_MAX
> (32) to PAGEVEC_SIZE (15). I do not expect this to make a significant
> difference, but if it does, we can increase PAGEVEC_SIZE to 31.
>
I'm afraid that won't apply to current mm-unstable anymore, where we can
now put multiple references to a single folio (as part of unmapping
large PTE-mapped folios).
[...]
> +/**
> + * release_pages - batched put_page()
> + * @arg: array of pages to release
> + * @nr: number of pages
> + *
> + * Decrement the reference count on all the pages in @arg. If it
> + * fell to zero, remove the page from the LRU and free it.
> + *
> + * Note that the argument can be an array of pages, encoded pages,
> + * or folio pointers. We ignore any encoded bits, and turn any of
> + * them into just a folio that gets free'd.
> + */
> +void release_pages(release_pages_arg arg, int nr)
> +{
> + struct folio_batch fbatch;
> + struct encoded_page **encoded = arg.encoded_pages;
> + int i;
> +
> + folio_batch_init(&fbatch);
> + for (i = 0; i < nr; i++) {
> + /* Turn any of the argument types into a folio */
> + struct folio *folio = page_folio(encoded_page_ptr(encoded[i]));
> +
As an "easy" way forward, we could handle these "multiple-ref" cases
here by putting ref-1 references, and leaving the single remaining
reference to folios_put().
That implies, more atomic operations, though.
Alternatively, "struct folio_batch" would have to be optimized to
understand "put multiple references" as well.
> + if (folio_batch_add(&fbatch, folio) > 0)
> + continue;
> + folios_put(&fbatch);
> + }
> +
> + if (fbatch.nr)
> + folios_put(&fbatch);
> }
> EXPORT_SYMBOL(release_pages);
>
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v2 18/18] mm: Convert free_swap_cache() to take a folio
2024-02-17 2:25 ` [PATCH v2 18/18] mm: Convert free_swap_cache() to take a folio Matthew Wilcox (Oracle)
@ 2024-02-19 9:59 ` David Hildenbrand
0 siblings, 0 replies; 24+ messages in thread
From: David Hildenbrand @ 2024-02-19 9:59 UTC (permalink / raw)
To: Matthew Wilcox (Oracle), Andrew Morton; +Cc: linux-mm, Ryan Roberts
On 17.02.24 03:25, Matthew Wilcox (Oracle) wrote:
> All but one caller already has a folio, so convert free_swap_cache()
> to take a folio and remove its call to page_folio(); the one remaining
> page-based caller, free_page_and_swap_cache(), now does that conversion
> itself.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> include/linux/swap.h | 8 ++++----
> mm/khugepaged.c | 2 +-
> mm/memory.c | 2 +-
> mm/swap_state.c | 12 ++++++------
> 4 files changed, 12 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 3e2b038852bb..a211a0383425 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -440,9 +440,9 @@ static inline unsigned long total_swapcache_pages(void)
> return global_node_page_state(NR_SWAPCACHE);
> }
>
> -extern void free_swap_cache(struct page *page);
> -extern void free_page_and_swap_cache(struct page *);
> -extern void free_pages_and_swap_cache(struct encoded_page **, int);
> +void free_swap_cache(struct folio *folio);
> +void free_page_and_swap_cache(struct page *);
> +void free_pages_and_swap_cache(struct encoded_page **, int);
> /* linux/mm/swapfile.c */
> extern atomic_long_t nr_swap_pages;
> extern long total_swap_pages;
> @@ -524,7 +524,7 @@ static inline void put_swap_device(struct swap_info_struct *si)
> /* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
> #define free_swap_and_cache(e) is_pfn_swap_entry(e)
>
> -static inline void free_swap_cache(struct page *page)
> +static inline void free_swap_cache(struct folio *folio)
> {
> }
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 5cc39c3f3847..d19fba3355a7 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -731,7 +731,7 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
> node_stat_sub_folio(src, NR_ISOLATED_ANON +
> folio_is_file_lru(src));
> folio_unlock(src);
> - free_swap_cache(&src->page);
> + free_swap_cache(src);
> folio_putback_lru(src);
> }
> }
> diff --git a/mm/memory.c b/mm/memory.c
> index e3e32c5b4be1..815312b2dc48 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3376,7 +3376,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
> folio_put(new_folio);
> if (old_folio) {
> if (page_copied)
> - free_swap_cache(&old_folio->page);
> + free_swap_cache(old_folio);
> folio_put(old_folio);
> }
>
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index f2e07022d763..3f58d6fd5b44 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -283,10 +283,8 @@ void clear_shadow_from_swap_cache(int type, unsigned long begin,
> * folio_free_swap() _with_ the lock.
> * - Marcelo
> */
> -void free_swap_cache(struct page *page)
> +void free_swap_cache(struct folio *folio)
> {
I wanted to do the same; great to see that you already have a patch
for it.
I was wondering whether we should call it something like
"folio_try_free_swap_cache" instead.
Anyhow
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v2 01/18] mm: Make folios_put() the basis of release_pages()
2024-02-19 9:43 ` David Hildenbrand
@ 2024-02-19 15:03 ` Matthew Wilcox
2024-02-19 15:31 ` David Hildenbrand
0 siblings, 1 reply; 24+ messages in thread
From: Matthew Wilcox @ 2024-02-19 15:03 UTC (permalink / raw)
To: David Hildenbrand; +Cc: Andrew Morton, linux-mm
On Mon, Feb 19, 2024 at 10:43:06AM +0100, David Hildenbrand wrote:
> On 17.02.24 03:25, Matthew Wilcox (Oracle) wrote:
> > By making release_pages() call folios_put(), we can get rid of the calls
> > to compound_head() for the callers that already know they have folios.
> > We can also get rid of the lock_batch tracking as we know the size
> > of the batch is limited by folio_batch. This does reduce the maximum
> > number of pages for which the lruvec lock is held, from SWAP_CLUSTER_MAX
> > (32) to PAGEVEC_SIZE (15). I do not expect this to make a significant
> > difference, but if it does, we can increase PAGEVEC_SIZE to 31.
> >
>
> I'm afraid that won't apply to current mm-unstable anymore, where we can now
> put multiple references to a single folio (as part of unmapping
> large PTE-mapped folios).
Argh. I'm not a huge fan of that approach, but let's live with it for
now. How about this as a replacement patch? It compiles ...
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1743bdeab506..42de41e469a1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -36,6 +36,7 @@ struct anon_vma;
struct anon_vma_chain;
struct user_struct;
struct pt_regs;
+struct folio_batch;
extern int sysctl_page_lock_unfairness;
@@ -1519,6 +1520,8 @@ static inline void folio_put_refs(struct folio *folio, int refs)
__folio_put(folio);
}
+void folios_put_refs(struct folio_batch *folios, unsigned int *refs);
+
/*
* union release_pages_arg - an array of pages or folios
*
@@ -1541,18 +1544,19 @@ void release_pages(release_pages_arg, int nr);
/**
* folios_put - Decrement the reference count on an array of folios.
* @folios: The folios.
- * @nr: How many folios there are.
*
- * Like folio_put(), but for an array of folios. This is more efficient
- * than writing the loop yourself as it will optimise the locks which
- * need to be taken if the folios are freed.
+ * Like folio_put(), but for a batch of folios. This is more efficient
+ * than writing the loop yourself as it will optimise the locks which need
+ * to be taken if the folios are freed. The folios batch is returned
+ * empty and ready to be reused for another batch; there is no need to
+ * reinitialise it.
*
* Context: May be called in process or interrupt context, but not in NMI
* context. May be called while holding a spinlock.
*/
-static inline void folios_put(struct folio **folios, unsigned int nr)
+static inline void folios_put(struct folio_batch *folios)
{
- release_pages(folios, nr);
+ folios_put_refs(folios, NULL);
}
static inline void put_page(struct page *page)
diff --git a/mm/mlock.c b/mm/mlock.c
index 086546ac5766..1ed2f2ab37cd 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -206,8 +206,7 @@ static void mlock_folio_batch(struct folio_batch *fbatch)
if (lruvec)
unlock_page_lruvec_irq(lruvec);
- folios_put(fbatch->folios, folio_batch_count(fbatch));
- folio_batch_reinit(fbatch);
+ folios_put(fbatch);
}
void mlock_drain_local(void)
diff --git a/mm/swap.c b/mm/swap.c
index e5380d732c0d..6b736fceccfa 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -89,7 +89,7 @@ static void __page_cache_release(struct folio *folio)
__folio_clear_lru_flags(folio);
unlock_page_lruvec_irqrestore(lruvec, flags);
}
- /* See comment on folio_test_mlocked in release_pages() */
+ /* See comment on folio_test_mlocked in folios_put() */
if (unlikely(folio_test_mlocked(folio))) {
long nr_pages = folio_nr_pages(folio);
@@ -175,7 +175,7 @@ static void lru_add_fn(struct lruvec *lruvec, struct folio *folio)
* while the LRU lock is held.
*
* (That is not true of __page_cache_release(), and not necessarily
- * true of release_pages(): but those only clear the mlocked flag after
+ * true of folios_put(): but those only clear the mlocked flag after
* folio_put_testzero() has excluded any other users of the folio.)
*/
if (folio_evictable(folio)) {
@@ -221,8 +221,7 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
if (lruvec)
unlock_page_lruvec_irqrestore(lruvec, flags);
- folios_put(fbatch->folios, folio_batch_count(fbatch));
- folio_batch_reinit(fbatch);
+ folios_put(fbatch);
}
static void folio_batch_add_and_move(struct folio_batch *fbatch,
@@ -946,47 +945,30 @@ void lru_cache_disable(void)
}
/**
- * release_pages - batched put_page()
- * @arg: array of pages to release
- * @nr: number of pages
+ * folios_put_refs - Reduce the reference count on a batch of folios.
+ * @folios: The folios.
+ * @refs: The number of refs to subtract from each folio.
*
- * Decrement the reference count on all the pages in @arg. If it
- * fell to zero, remove the page from the LRU and free it.
+ * Like folio_put(), but for a batch of folios. This is more efficient
+ * than writing the loop yourself as it will optimise the locks which need
+ * to be taken if the folios are freed. The folios batch is returned
+ * empty and ready to be reused for another batch; there is no need
+ * to reinitialise it. If @refs is NULL, we subtract one from each
+ * folio refcount.
*
- * Note that the argument can be an array of pages, encoded pages,
- * or folio pointers. We ignore any encoded bits, and turn any of
- * them into just a folio that gets free'd.
+ * Context: May be called in process or interrupt context, but not in NMI
+ * context. May be called while holding a spinlock.
*/
-void release_pages(release_pages_arg arg, int nr)
+void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
{
int i;
- struct encoded_page **encoded = arg.encoded_pages;
LIST_HEAD(pages_to_free);
struct lruvec *lruvec = NULL;
unsigned long flags = 0;
- unsigned int lock_batch;
- for (i = 0; i < nr; i++) {
- unsigned int nr_refs = 1;
- struct folio *folio;
-
- /* Turn any of the argument types into a folio */
- folio = page_folio(encoded_page_ptr(encoded[i]));
-
- /* Is our next entry actually "nr_pages" -> "nr_refs" ? */
- if (unlikely(encoded_page_flags(encoded[i]) &
- ENCODED_PAGE_BIT_NR_PAGES_NEXT))
- nr_refs = encoded_nr_pages(encoded[++i]);
-
- /*
- * Make sure the IRQ-safe lock-holding time does not get
- * excessive with a continuous string of pages from the
- * same lruvec. The lock is held only if lruvec != NULL.
- */
- if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
- unlock_page_lruvec_irqrestore(lruvec, flags);
- lruvec = NULL;
- }
+ for (i = 0; i < folios->nr; i++) {
+ struct folio *folio = folios->folios[i];
+ unsigned int nr_refs = refs ? refs[i] : 1;
if (is_huge_zero_page(&folio->page))
continue;
@@ -1016,13 +998,8 @@ void release_pages(release_pages_arg arg, int nr)
}
if (folio_test_lru(folio)) {
- struct lruvec *prev_lruvec = lruvec;
-
lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
&flags);
- if (prev_lruvec != lruvec)
- lock_batch = 0;
-
lruvec_del_folio(lruvec, folio);
__folio_clear_lru_flags(folio);
}
@@ -1046,6 +1023,47 @@ void release_pages(release_pages_arg arg, int nr)
mem_cgroup_uncharge_list(&pages_to_free);
free_unref_page_list(&pages_to_free);
+ folios->nr = 0;
+}
+EXPORT_SYMBOL(folios_put);
+
+/**
+ * release_pages - batched put_page()
+ * @arg: array of pages to release
+ * @nr: number of pages
+ *
+ * Decrement the reference count on all the pages in @arg. If it
+ * fell to zero, remove the page from the LRU and free it.
+ *
+ * Note that the argument can be an array of pages, encoded pages,
+ * or folio pointers. We ignore any encoded bits, and turn any of
+ * them into just a folio that gets free'd.
+ */
+void release_pages(release_pages_arg arg, int nr)
+{
+ struct folio_batch fbatch;
+ int refs[PAGEVEC_SIZE];
+ struct encoded_page **encoded = arg.encoded_pages;
+ int i;
+
+ folio_batch_init(&fbatch);
+ for (i = 0; i < nr; i++) {
+ /* Turn any of the argument types into a folio */
+ struct folio *folio = page_folio(encoded_page_ptr(encoded[i]));
+
+ /* Is our next entry actually "nr_pages" -> "nr_refs" ? */
+ refs[fbatch.nr] = 1;
+ if (unlikely(encoded_page_flags(encoded[i]) &
+ ENCODED_PAGE_BIT_NR_PAGES_NEXT))
+ refs[fbatch.nr] = encoded_nr_pages(encoded[++i]);
+
+ if (folio_batch_add(&fbatch, folio) > 0)
+ continue;
+ folios_put_refs(&fbatch, refs);
+ }
+
+ if (fbatch.nr)
+ folios_put_refs(&fbatch, refs);
}
EXPORT_SYMBOL(release_pages);
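Not part of the patch, but for reference, a minimal usage sketch of the
reworked interface for a caller that already collects folios into a
folio_batch (free_pages_and_swap_cache() in patch 16 ends up with the
same shape); put_folio_array() is a made-up name:

	#include <linux/mm.h>
	#include <linux/pagevec.h>

	static void put_folio_array(struct folio **folios, unsigned int nr)
	{
		struct folio_batch fbatch;
		unsigned int i;

		folio_batch_init(&fbatch);
		for (i = 0; i < nr; i++) {
			/* A full batch is freed and comes back empty. */
			if (folio_batch_add(&fbatch, folios[i]) == 0)
				folios_put(&fbatch);
		}
		if (folio_batch_count(&fbatch))
			folios_put(&fbatch);
		/* No folio_batch_reinit() needed afterwards. */
	}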
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v2 01/18] mm: Make folios_put() the basis of release_pages()
2024-02-19 15:03 ` Matthew Wilcox
@ 2024-02-19 15:31 ` David Hildenbrand
2024-02-19 16:07 ` Matthew Wilcox
0 siblings, 1 reply; 24+ messages in thread
From: David Hildenbrand @ 2024-02-19 15:31 UTC (permalink / raw)
To: Matthew Wilcox; +Cc: Andrew Morton, linux-mm
On 19.02.24 16:03, Matthew Wilcox wrote:
> On Mon, Feb 19, 2024 at 10:43:06AM +0100, David Hildenbrand wrote:
>> On 17.02.24 03:25, Matthew Wilcox (Oracle) wrote:
>>> By making release_pages() call folios_put(), we can get rid of the calls
>>> to compound_head() for the callers that already know they have folios.
>>> We can also get rid of the lock_batch tracking as we know the size
>>> of the batch is limited by folio_batch. This does reduce the maximum
>>> number of pages for which the lruvec lock is held, from SWAP_CLUSTER_MAX
>>> (32) to PAGEVEC_SIZE (15). I do not expect this to make a significant
>>> difference, but if it does, we can increase PAGEVEC_SIZE to 31.
>>>
>>
>> I'm afraid that won't apply to current mm-unstable anymore, where we can now
>> put multiple references to a single folio (as part of unmapping
>> large PTE-mapped folios).
>
> Argh. I'm not a huge fan of that approach, but let's live with it for
> now.
I'm hoping we can at least get rid of page ranges at some point (and
just have folio + nr_refs), but for the time being there is no way
around them, because delayed rmap handling needs the exact pages (ugh).
folios_put_refs() does sound reasonable in any case, although likely
"putting multiple references" is limited to zap/munmap/... code paths.
> How about this as a replacement patch? It compiles ...
>
Nothing jumped at me, one comment:
[...]
> +EXPORT_SYMBOL(folios_put);
> +
> +/**
> + * release_pages - batched put_page()
> + * @arg: array of pages to release
> + * @nr: number of pages
> + *
> + * Decrement the reference count on all the pages in @arg. If it
> + * fell to zero, remove the page from the LRU and free it.
> + *
> + * Note that the argument can be an array of pages, encoded pages,
> + * or folio pointers. We ignore any encoded bits, and turn any of
> + * them into just a folio that gets free'd.
> + */
> +void release_pages(release_pages_arg arg, int nr)
> +{
> + struct folio_batch fbatch;
> + int refs[PAGEVEC_SIZE];
> + struct encoded_page **encoded = arg.encoded_pages;
> + int i;
> +
> + folio_batch_init(&fbatch);
> + for (i = 0; i < nr; i++) {
> + /* Turn any of the argument types into a folio */
> + struct folio *folio = page_folio(encoded_page_ptr(encoded[i]));
> +
> + /* Is our next entry actually "nr_pages" -> "nr_refs" ? */
> + refs[fbatch.nr] = 1;
> + if (unlikely(encoded_page_flags(encoded[i]) &
> + ENCODED_PAGE_BIT_NR_PAGES_NEXT))
> + refs[fbatch.nr] = encoded_nr_pages(encoded[++i]);
> +
> + if (folio_batch_add(&fbatch, folio) > 0)
> + continue;
> + folios_put_refs(&fbatch, refs);
> + }
> +
> + if (fbatch.nr)
> + folios_put_refs(&fbatch, refs);
I wonder if it makes sense to remember whether any ref != 1, and simply
call folios_put() if none is.
But I guess the whole point about PAGEVEC_SIZE is that it is very
cache-friendly and traversing it a second time (e.g., when all we are
doing is freeing order-0 folios) is not too expensive.
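For what it's worth, an untested sketch of that idea; it only needs a
new local flag (have_refs below) in release_pages():

	bool have_refs = false;

	folio_batch_init(&fbatch);
	for (i = 0; i < nr; i++) {
		struct folio *folio = page_folio(encoded_page_ptr(encoded[i]));

		refs[fbatch.nr] = 1;
		if (unlikely(encoded_page_flags(encoded[i]) &
			     ENCODED_PAGE_BIT_NR_PAGES_NEXT)) {
			refs[fbatch.nr] = encoded_nr_pages(encoded[++i]);
			have_refs = true;
		}

		if (folio_batch_add(&fbatch, folio) > 0)
			continue;
		/* If every ref was 1, the NULL form is just folios_put(). */
		folios_put_refs(&fbatch, have_refs ? refs : NULL);
		have_refs = false;
	}

	if (fbatch.nr)
		folios_put_refs(&fbatch, have_refs ? refs : NULL);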
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v2 01/18] mm: Make folios_put() the basis of release_pages()
2024-02-19 15:31 ` David Hildenbrand
@ 2024-02-19 16:07 ` Matthew Wilcox
0 siblings, 0 replies; 24+ messages in thread
From: Matthew Wilcox @ 2024-02-19 16:07 UTC (permalink / raw)
To: David Hildenbrand; +Cc: Andrew Morton, linux-mm
On Mon, Feb 19, 2024 at 04:31:14PM +0100, David Hildenbrand wrote:
> I'm hoping we at least can get rid of page ranges at some point (and just
> have folio + nr_refs), but for the time being there is no way around that
> due to delayed rmap handling that needs the exact pages (ugh).
Yup. I've looked at pulling some of that apart, but realistically it's
not going to happen soon.
> folios_put_refs() does sound reasonable in any case, although likely
> "putting multiple references" is limited to zap/munmap/... code paths.
Well ... maybe. We have a few places where we call folio_put_refs(),
and maybe some of them could be batched. unpin_user_pages_dirty_lock()
is a candidate, but I wouldn't be surprised if someone inventive could
find a way to do something similar in the filemap_free_folio() paths.
Although the real solution there is to make the pagecache reference
count once, not N times.
> > +EXPORT_SYMBOL(folios_put);
heh, forgot to change that line. A full compile (as opposed to just mm/)
picked it up.
> > + if (fbatch.nr)
> > + folios_put_refs(&fbatch, refs);
>
> I wonder if it makes sense to remember if any ref !=1, and simply call
> folios_put() if that's the case.
>
> But I guess the whole point about PAGEVEC_SIZE is that it is very
> cache-friendly and traversing it a second time (e.g., when all we are doing
> is freeing order-0 folios) is not too expensive.
I don't think we need to add that; it'd certainly be something we could
look at though.
^ permalink raw reply [flat|nested] 24+ messages in thread