From: Frank van der Linden <fvdl@google.com>
To: akpm@linux-foundation.org, muchun.song@linux.dev,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: hannes@cmpxchg.org, david@redhat.com, roman.gushchin@linux.dev,
Frank van der Linden <fvdl@google.com>
Subject: [RFC PATCH 06/12] mm/cma: define and act on CMA_BALANCE flag
Date: Mon, 15 Sep 2025 19:51:47 +0000
Message-ID: <20250915195153.462039-7-fvdl@google.com>
In-Reply-To: <20250915195153.462039-1-fvdl@google.com>
When the CMA_BALANCE flag is set for a CMA area, the area opts in to
CMA balancing. This means two things:
1) Movable allocations may be migrated into it in the case of a CMA
   imbalance (too much free memory in CMA pageblocks compared to
   other pageblocks).
2) It is allocated top-down, so that compaction will end up migrating
   pages into it. This ensures that compaction does not aggravate a
   CMA imbalance, and that it does not fight with CMA balance
   migration from non-CMA to CMA.
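
As a usage sketch: with the flags argument that patch 03/12 of this
series adds to the CMA init functions, a caller would opt an area in
to balancing at declare time. The caller and variable names below are
hypothetical, for illustration only:

	/*
	 * Hypothetical caller: declare a multi-range CMA area that
	 * opts in to CMA balancing. CMA_BALANCE is accepted at init
	 * time because this patch adds it to CMA_INIT_FLAGS.
	 */
	err = cma_declare_contiguous_multi(total_size, PAGE_SIZE, 0,
					   "balanced-area", &balanced_cma,
					   nid, CMA_BALANCE);
	if (err)
		pr_warn("balanced CMA reservation failed: %d\n", err);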
Signed-off-by: Frank van der Linden <fvdl@google.com>
---
include/linux/cma.h | 4 +++-
mm/cma.c | 33 ++++++++++++++++++++++++++-------
2 files changed, 29 insertions(+), 8 deletions(-)
diff --git a/include/linux/cma.h b/include/linux/cma.h
index 0504580d61d0..6e98a516b336 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -26,6 +26,7 @@ enum cma_flags {
__CMA_ZONES_INVALID,
__CMA_ACTIVATED,
__CMA_FIXED,
+ __CMA_BALANCE,
};
#define CMA_RESERVE_PAGES_ON_ERROR BIT(__CMA_RESERVE_PAGES_ON_ERROR)
@@ -33,8 +34,9 @@ enum cma_flags {
#define CMA_ZONES_INVALID BIT(__CMA_ZONES_INVALID)
#define CMA_ACTIVATED BIT(__CMA_ACTIVATED)
#define CMA_FIXED BIT(__CMA_FIXED)
+#define CMA_BALANCE BIT(__CMA_BALANCE)
-#define CMA_INIT_FLAGS (CMA_FIXED|CMA_RESERVE_PAGES_ON_ERROR)
+#define CMA_INIT_FLAGS (CMA_FIXED|CMA_RESERVE_PAGES_ON_ERROR|CMA_BALANCE)
struct cma;
struct zone;
diff --git a/mm/cma.c b/mm/cma.c
index 53cb1833407b..6050d57f3c2e 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -272,6 +272,9 @@ static bool cma_next_free_range(struct cma_memrange *cmr,
static inline bool cma_should_balance_range(struct zone *zone,
struct cma_memrange *cmr)
{
+ if (!(cmr->cma->flags & CMA_BALANCE))
+ return false;
+
if (page_zone(pfn_to_page(cmr->base_pfn)) != zone)
return false;
@@ -527,6 +530,12 @@ static bool __init basecmp(struct cma_init_memrange *mlp,
return mlp->base < mrp->base;
}
+static bool __init revbasecmp(struct cma_init_memrange *mlp,
+ struct cma_init_memrange *mrp)
+{
+ return mlp->base > mrp->base;
+}
+
/*
* Helper function to create sorted lists.
*/
@@ -575,7 +584,8 @@ static int __init cma_fixed_reserve(phys_addr_t base, phys_addr_t size)
}
static phys_addr_t __init cma_alloc_mem(phys_addr_t base, phys_addr_t size,
- phys_addr_t align, phys_addr_t limit, int nid)
+ phys_addr_t align, phys_addr_t limit, int nid,
+ unsigned long flags)
{
phys_addr_t addr = 0;
@@ -588,7 +598,8 @@ static phys_addr_t __init cma_alloc_mem(phys_addr_t base, phys_addr_t size,
* like DMA/DMA32.
*/
#ifdef CONFIG_PHYS_ADDR_T_64BIT
- if (!memblock_bottom_up() && limit >= SZ_4G + size) {
+ if (!(flags & CMA_BALANCE) && !memblock_bottom_up() &&
+ limit >= SZ_4G + size) {
memblock_set_bottom_up(true);
addr = memblock_alloc_range_nid(size, align, SZ_4G, limit,
nid, true);
@@ -695,7 +706,7 @@ static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
if (ret)
return ret;
} else {
- base = cma_alloc_mem(base, size, alignment, limit, nid);
+ base = cma_alloc_mem(base, size, alignment, limit, nid, flags);
if (!base)
return -ENOMEM;
@@ -851,7 +862,10 @@ int __init cma_declare_contiguous_multi(phys_addr_t total_size,
list_for_each_safe(mp, next, &ranges) {
mlp = list_entry(mp, struct cma_init_memrange, list);
list_del(mp);
- list_insert_sorted(&final_ranges, mlp, basecmp);
+ if (flags & CMA_BALANCE)
+ list_insert_sorted(&final_ranges, mlp, revbasecmp);
+ else
+ list_insert_sorted(&final_ranges, mlp, basecmp);
sizesum += mlp->size;
if (sizesum >= total_size)
break;
@@ -866,7 +880,12 @@ int __init cma_declare_contiguous_multi(phys_addr_t total_size,
list_for_each(mp, &final_ranges) {
mlp = list_entry(mp, struct cma_init_memrange, list);
size = min(sizeleft, mlp->size);
- if (memblock_reserve(mlp->base, size)) {
+ if (flags & CMA_BALANCE)
+ start = (mlp->base + mlp->size - size);
+ else
+ start = mlp->base;
+
+ if (memblock_reserve(start, size)) {
/*
* Unexpected error. Could go on to
* the next one, but just abort to
@@ -877,9 +896,9 @@ int __init cma_declare_contiguous_multi(phys_addr_t total_size,
}
pr_debug("created region %d: %016llx - %016llx\n",
- nr, (u64)mlp->base, (u64)mlp->base + size);
+ nr, (u64)start, (u64)start + size);
cmrp = &cma->ranges[nr++];
- cmrp->base_pfn = PHYS_PFN(mlp->base);
+ cmrp->base_pfn = PHYS_PFN(start);
cmrp->early_pfn = cmrp->base_pfn;
cmrp->count = size >> PAGE_SHIFT;
cmrp->cma = cma;
--
2.51.0.384.g4c02a37b29-goog