From: Rik van Riel <riel@surriel.com>
To: Frank van der Linden <fvdl@google.com>
Cc: akpm@linux-foundation.org, muchun.song@linux.dev,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
hannes@cmpxchg.org, david@redhat.com, roman.gushchin@linux.dev,
kernel-team@meta.com
Subject: [RFC PATCH 13/12] mm,cma: add compaction cma balance helper for direct reclaim
Date: Thu, 25 Sep 2025 18:11:06 -0400
Message-ID: <20250925181106.3924a90c@fangorn>
In-Reply-To: <20250915195153.462039-1-fvdl@google.com>
On Mon, 15 Sep 2025 19:51:41 +0000
Frank van der Linden <fvdl@google.com> wrote:
> This is an RFC on a solution to the long standing problem of OOMs
> occurring when the kernel runs out of space for unmovable allocations
> in the face of large amounts of CMA.
In order to make the CMA balancing code useful without hugetlb involvement,
e.g. when simply satisfying a !__GFP_MOVABLE allocation, I added two
patches that invoke CMA balancing from the page reclaim code when needed.
With these changes, we may no longer need to call the CMA balancing
code from the hugetlb free path, and could potentially simplify some
things in that area.
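
The call into this helper from the reclaim path itself is not part of
this patch; purely as an illustration of the intended use, the call
site could look roughly like the sketch below. The function name, the
gfp_mask check, and using sc->nr_to_reclaim as the target are
placeholders, not necessarily what the actual reclaim-side patch does:

/*
 * Illustrative sketch only, not part of this patch: a possible call
 * site in the direct reclaim code (mm/vmscan.c). Names and placement
 * are hypothetical.
 */
static void balance_cma_for_reclaim(struct zonelist *zonelist,
                                    struct scan_control *sc)
{
        /* Movable allocations can be satisfied from CMA pageblocks. */
        if (!IS_ENABLED(CONFIG_CMA) || (sc->gfp_mask & __GFP_MOVABLE))
                return;

        /*
         * Move at most the reclaim target's worth of free pages out of
         * CMA pageblocks, making room for unmovable allocations.
         */
        balance_cma_zonelist(zonelist, sc->nr_to_reclaim);
}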
---8<---
From 99991606760fdf8399255d7fc1f21b58069a4afe Mon Sep 17 00:00:00 2001
From: Rik van Riel <riel@meta.com>
Date: Tue, 23 Sep 2025 10:01:42 -0700
Subject: [PATCH 2/3] mm,cma: add compaction cma balance helper for direct reclaim
Add a CMA balance helper for the direct reclaim code, which does not
balance CMA free memory all the way, but only moves a caller-specified
number of pages.
Signed-off-by: Rik van Riel <riel@surriel.com>
---
mm/compaction.c | 20 ++++++++++++++++++--
mm/internal.h | 7 +++++++
2 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 3200119b8baf..90478c29db60 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2541,7 +2541,7 @@ isolate_free_cma_pages(struct compact_control *cc)
cc->free_pfn = next_pfn;
}
-static void balance_zone_cma(struct zone *zone, struct cma *cma)
+static void balance_zone_cma(struct zone *zone, struct cma *cma, int target)
{
struct compact_control cc = {
.zone = zone,
@@ -2613,6 +2613,13 @@ static void balance_zone_cma(struct zone *zone, struct cma *cma)
nr_pages = min(nr_pages, cma_get_available(cma));
nr_pages = min(allocated_noncma, nr_pages);
+ /*
+ * When invoked from page reclaim, use the provided target rather
+ * than the calculated one.
+ */
+ if (target)
+ nr_pages = target;
+
for (order = 0; order < NR_PAGE_ORDERS; order++)
INIT_LIST_HEAD(&cc.freepages[order]);
INIT_LIST_HEAD(&cc.migratepages);
@@ -2674,10 +2681,19 @@ void balance_node_cma(int nid, struct cma *cma)
if (!populated_zone(zone))
continue;
- balance_zone_cma(zone, cma);
+ balance_zone_cma(zone, cma, 0);
}
}
+void balance_cma_zonelist(struct zonelist *zonelist, int nr_pages)
+{
+ struct zoneref *z;
+ struct zone *zone;
+
+ for_each_zone_zonelist(zone, z, zonelist, MAX_NR_ZONES - 1)
+ balance_zone_cma(zone, NULL, nr_pages);
+}
+
#endif /* CONFIG_CMA */
static enum compact_result
diff --git a/mm/internal.h b/mm/internal.h
index 7dcaf7214683..5340b94683bf 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -942,6 +942,7 @@ struct cma;
void *cma_reserve_early(struct cma *cma, unsigned long size);
void init_cma_pageblock(struct page *page);
void balance_node_cma(int nid, struct cma *cma);
+void balance_cma_zonelist(struct zonelist *zonelist, int nr_pages);
#else
static inline void *cma_reserve_early(struct cma *cma, unsigned long size)
{
@@ -950,6 +951,12 @@ static inline void *cma_reserve_early(struct cma *cma, unsigned long size)
static inline void init_cma_pageblock(struct page *page)
{
}
+static inline void balance_node_cma(int nid, struct cma *cma)
+{
+}
+static inline void balance_cma_zonelist(struct zonelist *zonelist, int nr_pages)
+{
+}
#endif
--
2.47.3