From: Rik van Riel <riel@surriel.com>
To: Frank van der Linden <fvdl@google.com>
Cc: akpm@linux-foundation.org, muchun.song@linux.dev,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
hannes@cmpxchg.org, david@redhat.com, roman.gushchin@linux.dev,
kernel-team@meta.com
Subject: [RFC PATCH 00/12] mm,cma: call CMA balancing from page reclaim code
Date: Thu, 25 Sep 2025 18:11:09 -0400
Message-ID: <20250925181109.11dd36e5@fangorn>
In-Reply-To: <20250915195153.462039-1-fvdl@google.com>
Call CMA balancing from the page reclaim code when page reclaim is
reclaiming pages that are unsuitable for the allocator.

To keep direct reclaim latencies low, kswapd will do CMA balancing
whenever some of the reclaimed pages are unsuitable for the allocator
that woke up kswapd, while direct reclaimers will only do CMA
balancing if most of the reclaimed pages are unsuitable.
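
Condensed, the policy the hunks below implement looks like this (a
sketch, not a literal excerpt; balance_cma_zonelist() and
balance_node_cma() are the CMA balancing helpers introduced by the
preceding patches in this thread, and the "- 2" allows a couple of
suitable pages of slack per reclaim pass):

	/* Direct reclaim (do_try_to_free_pages): only rebalance when
	 * nearly everything reclaimed was unsuitable for the allocation
	 * that triggered reclaim, to keep direct reclaim latency low. */
	if (sc->nr_unsuitable >= sc->nr_reclaimed - 2)
		balance_cma_zonelist(zonelist, SWAP_CLUSTER_MAX);

	/* kswapd (balance_pgdat): rebalance whenever any reclaimed pages
	 * were unsuitable, since kswapd runs asynchronously and does not
	 * add latency to the allocator that woke it. */
	if (sc.nr_unsuitable)
		balance_node_cma(pgdat->node_id, NULL);
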
Signed-off-by: Rik van Riel <riel@surriel.com>
---
mm/vmscan.c | 31 ++++++++++++++++++++++++++++++-
1 file changed, 30 insertions(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a48aec8bfd92..ec6bde5b07d3 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -168,6 +168,9 @@ struct scan_control {
 	/* Number of pages freed so far during a call to shrink_zones() */
 	unsigned long nr_reclaimed;
 
+	/* Number of pages reclaimed, but unsuitable for the allocator */
+	unsigned long nr_unsuitable;
+
 	struct {
 		unsigned int dirty;
 		unsigned int unqueued_dirty;
@@ -1092,6 +1095,19 @@ static bool may_enter_fs(struct folio *folio, gfp_t gfp_mask)
 	return !data_race(folio_swap_flags(folio) & SWP_FS_OPS);
 }
 
+#ifdef CONFIG_CMA
+static bool unsuitable_folio(struct folio *folio, struct scan_control *sc)
+{
+	return gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
+	       folio_migratetype(folio) == MIGRATE_CMA;
+}
+#else
+static bool unsuitable_folio(struct folio *folio, struct scan_control *sc)
+{
+	return false;
+}
+#endif
+
 /*
  * shrink_folio_list() returns the number of reclaimed pages
  */
@@ -1103,7 +1119,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 	struct folio_batch free_folios;
 	LIST_HEAD(ret_folios);
 	LIST_HEAD(demote_folios);
-	unsigned int nr_reclaimed = 0, nr_demoted = 0;
+	unsigned int nr_reclaimed = 0, nr_demoted = 0, nr_unsuitable = 0;
 	unsigned int pgactivate = 0;
 	bool do_demote_pass;
 	struct swap_iocb *plug = NULL;
@@ -1530,6 +1546,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 					 * leave it off the LRU).
 					 */
 					nr_reclaimed += nr_pages;
+					if (unsuitable_folio(folio, sc))
+						nr_unsuitable += nr_pages;
 					continue;
 				}
 			}
@@ -1560,6 +1578,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		 * all pages in it.
 		 */
 		nr_reclaimed += nr_pages;
+		if (unsuitable_folio(folio, sc))
+			nr_unsuitable += nr_pages;
 
 		folio_unqueue_deferred_split(folio);
 		if (folio_batch_add(&free_folios, folio) == 0) {
@@ -1641,6 +1661,9 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 	if (plug)
 		swap_write_unplug(plug);
+
+	sc->nr_unsuitable += nr_unsuitable;
+
 	return nr_reclaimed;
 }
@@ -6431,6 +6454,10 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 	delayacct_freepages_end();
 
+	/* Almost all memory reclaimed was unsuitable? Move data into CMA. */
+	if (sc->nr_unsuitable >= sc->nr_reclaimed - 2)
+		balance_cma_zonelist(zonelist, SWAP_CLUSTER_MAX);
+
 	if (sc->nr_reclaimed)
 		return sc->nr_reclaimed;
@@ -7169,6 +7196,8 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
 	if (!sc.nr_reclaimed)
 		pgdat->kswapd_failures++;
 
+	if (sc.nr_unsuitable)
+		balance_node_cma(pgdat->node_id, NULL);
 out:
 	clear_reclaim_active(pgdat, highest_zoneidx);
--
2.47.3