* [RFC PATCH v2 1/3] mm/damon/core: split regions for min_nr_regions
2026-02-21 18:03 [RFC PATCH v2 0/3] mm/damon: strictly respect min_nr_regions SeongJae Park
@ 2026-02-21 18:03 ` SeongJae Park
2026-02-21 18:03 ` [RFC PATCH v2 2/3] mm/damon/vaddr: do not " SeongJae Park
2026-02-21 18:03 ` [RFC PATCH v2 3/3] mm/damon/test/core-kunit: add damon_apply_min_nr_regions() test SeongJae Park
2 siblings, 0 replies; 4+ messages in thread
From: SeongJae Park @ 2026-02-21 18:03 UTC
Cc: SeongJae Park, Akinobu Mita, Andrew Morton, damon, linux-kernel,
linux-mm
The DAMON core layer respects the min_nr_regions parameter by setting the
maximum size of each region to the total monitoring region size divided by
the parameter value. The limit is applied by preventing merges of regions
that would result in a region larger than the maximum size. The limit is
updated once per ops update interval, because vaddr updates the monitoring
regions in the ops update callback.
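For reference, a minimal sketch of the size limit computation described
above is shown below. It is an illustration, not the exact in-tree
damon_region_sz_limit(): the function name is hypothetical, the existing
damon_for_each_target()/damon_for_each_region()/damon_sz_region() helpers
are assumed, and the clamp to ctx->min_region_sz is an assumption based on
the "cannot be zero" behavior noted later in this message.

    /* Illustrative sketch of the per-region size limit for min_nr_regions. */
    static unsigned long sketch_region_sz_limit(struct damon_ctx *ctx)
    {
            struct damon_target *t;
            struct damon_region *r;
            unsigned long sz = 0;

            /* total size of all monitoring regions */
            damon_for_each_target(t, ctx)
                    damon_for_each_region(r, t)
                            sz += damon_sz_region(r);

            /* largest per-region size that still allows min_nr_regions */
            if (ctx->attrs.min_nr_regions)
                    sz /= ctx->attrs.min_nr_regions;

            /* never let the limit become zero */
            if (sz < ctx->min_region_sz)
                    sz = ctx->min_region_sz;
            return sz;
    }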
Nothing is done for the initial state, because users can set the initial
monitoring regions as they want. That is, users who really care about
min_nr_regions are expected to set up the initial monitoring regions so
that there are at least min_nr_regions of them.
The virtual address space operation set, vaddr, has an exceptional case.
Users can ask the ops set to configure the initial regions on its own. For
that case, vaddr sets up the initial regions so that min_nr_regions is met.
So vaddr has this exceptional support, but in general users are required to
set the regions on their own if they want min_nr_regions to be respected.
When min_nr_regions is high, such an initial setup is difficult. If the
DAMON sysfs interface is used for it, the memory required to hold the
initial setup is also wasted.
Even if the user skips the setup, DAMON will eventually create more than
min_nr_regions regions via its region splitting operations. But that takes
time. If the aggregation interval is long, the delay can be problematic.
There was in fact a report [1] of such a case. The reporter wanted to do
page-granularity monitoring with a large aggregation interval.
Also, DAMON does nothing for online changes to the monitoring regions and
min_nr_regions. For example, a user can remove a monitoring region or
increase min_nr_regions while DAMON is running.
To fix the initial setup issue, split regions larger than the size limit at
the beginning of the kdamond main loop. Also do the split every aggregation
interval, to handle online changes. This slightly changes the behavior, but
it is difficult to imagine a use case that actually depends on the old
behavior, so the change is arguably fine.
Note that the size limit is aligned to damon_ctx->min_region_sz and cannot
be zero. That is, if min_nr_regions is larger than the total size of the
monitoring regions divided by ->min_region_sz, min_nr_regions cannot be
respected.
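As a hypothetical example, consider 40 KiB of total monitoring regions with
->min_region_sz of 4 KiB. With min_nr_regions of 15, the raw limit of
40 KiB / 15 (about 2.6 KiB) is rounded up to 4 KiB, so at most 10 regions
can be made and min_nr_regions cannot be respected. With min_nr_regions of
3 instead, the raw limit of 40 KiB / 3 (about 13.3 KiB) is aligned up to
16 KiB, and a single 40 KiB region would be split into regions of 16 KiB,
16 KiB, and 8 KiB.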
[1] https://lore.kernel.org/CAC5umyjmJE9SBqjbetZZecpY54bHpn2AvCGNv3aF6J=1cfoPXQ@mail.gmail.com
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/damon/core.c | 45 +++++++++++++++++++++++++++++++++++++++------
1 file changed, 39 insertions(+), 6 deletions(-)
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 8e4cf71e2a3ed..602b85ef23597 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -1316,6 +1316,40 @@ static unsigned long damon_region_sz_limit(struct damon_ctx *ctx)
return sz;
}
+static void damon_split_region_at(struct damon_target *t,
+ struct damon_region *r, unsigned long sz_r);
+
+/*
+ * damon_apply_min_nr_regions() - Apply the min_nr_regions parameter.
+ * @ctx: monitoring context.
+ *
+ * This function implements the min_nr_regions (minimum number of damon_region
+ * objects in the given monitoring context) behavior. It first calculates the
+ * maximum size of each region for enforcing min_nr_regions, as the total size
+ * of the regions divided by min_nr_regions. After that, it splits regions to
+ * ensure all regions are equal to or smaller than the size limit. Finally,
+ * it returns the maximum size limit.
+ *
+ * Returns: maximum size of each region for respecting min_nr_regions.
+ */
+static unsigned long damon_apply_min_nr_regions(struct damon_ctx *ctx)
+{
+ unsigned long max_region_sz = damon_region_sz_limit(ctx);
+ struct damon_target *t;
+ struct damon_region *r, *next;
+
+ max_region_sz = ALIGN(max_region_sz, ctx->min_region_sz);
+ damon_for_each_target(t, ctx) {
+ damon_for_each_region_safe(r, next, t) {
+ while (damon_sz_region(r) > max_region_sz) {
+ damon_split_region_at(t, r, max_region_sz);
+ r = damon_next_region(r);
+ }
+ }
+ }
+ return max_region_sz;
+}
+
static int kdamond_fn(void *data);
/*
@@ -1672,9 +1706,6 @@ static void kdamond_tune_intervals(struct damon_ctx *c)
damon_set_attrs(c, &new_attrs);
}
-static void damon_split_region_at(struct damon_target *t,
- struct damon_region *r, unsigned long sz_r);
-
static bool __damos_valid_target(struct damon_region *r, struct damos *s)
{
unsigned long sz;
@@ -2778,7 +2809,7 @@ static int kdamond_fn(void *data)
if (!ctx->regions_score_histogram)
goto done;
- sz_limit = damon_region_sz_limit(ctx);
+ sz_limit = damon_apply_min_nr_regions(ctx);
while (!kdamond_need_stop(ctx)) {
/*
@@ -2803,10 +2834,13 @@ static int kdamond_fn(void *data)
if (ctx->ops.check_accesses)
max_nr_accesses = ctx->ops.check_accesses(ctx);
- if (ctx->passed_sample_intervals >= next_aggregation_sis)
+ if (ctx->passed_sample_intervals >= next_aggregation_sis) {
kdamond_merge_regions(ctx,
max_nr_accesses / 10,
sz_limit);
+ /* online updates might be made */
+ sz_limit = damon_apply_min_nr_regions(ctx);
+ }
/*
* do kdamond_call() and kdamond_apply_schemes() after
@@ -2863,7 +2897,6 @@ static int kdamond_fn(void *data)
sample_interval;
if (ctx->ops.update)
ctx->ops.update(ctx);
- sz_limit = damon_region_sz_limit(ctx);
}
}
done:
--
2.47.3
* [RFC PATCH v2 2/3] mm/damon/vaddr: do not split regions for min_nr_regions
2026-02-21 18:03 [RFC PATCH v2 0/3] mm/damon: strictly respect min_nr_regions SeongJae Park
2026-02-21 18:03 ` [RFC PATCH v2 1/3] mm/damon/core: split regions for min_nr_regions SeongJae Park
@ 2026-02-21 18:03 ` SeongJae Park
2026-02-21 18:03 ` [RFC PATCH v2 3/3] mm/damon/test/core-kunit: add damon_apply_min_nr_regions() test SeongJae Park
2 siblings, 0 replies; 4+ messages in thread
From: SeongJae Park @ 2026-02-21 18:03 UTC
Cc: SeongJae Park, Akinobu Mita, Andrew Morton, damon, linux-kernel,
linux-mm
The previous commit made the DAMON core split regions for min_nr_regions at
the beginning. The virtual address space operation set (vaddr) does similar
work on its own, for the case where the user delegates the entire initial
monitoring regions setup to vaddr. That work is now unnecessary, as the
DAMON core does it in every case. Remove the duplicated work from vaddr.
Also remove a helper function that was used only for that work, together
with its test code.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/damon/tests/vaddr-kunit.h | 76 ------------------------------------
mm/damon/vaddr.c | 70 +--------------------------------
2 files changed, 2 insertions(+), 144 deletions(-)
diff --git a/mm/damon/tests/vaddr-kunit.h b/mm/damon/tests/vaddr-kunit.h
index cfae870178bfd..98e734d77d517 100644
--- a/mm/damon/tests/vaddr-kunit.h
+++ b/mm/damon/tests/vaddr-kunit.h
@@ -252,88 +252,12 @@ static void damon_test_apply_three_regions4(struct kunit *test)
new_three_regions, expected, ARRAY_SIZE(expected));
}
-static void damon_test_split_evenly_fail(struct kunit *test,
- unsigned long start, unsigned long end, unsigned int nr_pieces)
-{
- struct damon_target *t = damon_new_target();
- struct damon_region *r;
-
- if (!t)
- kunit_skip(test, "target alloc fail");
-
- r = damon_new_region(start, end);
- if (!r) {
- damon_free_target(t);
- kunit_skip(test, "region alloc fail");
- }
-
- damon_add_region(r, t);
- KUNIT_EXPECT_EQ(test,
- damon_va_evenly_split_region(t, r, nr_pieces), -EINVAL);
- KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 1u);
-
- damon_for_each_region(r, t) {
- KUNIT_EXPECT_EQ(test, r->ar.start, start);
- KUNIT_EXPECT_EQ(test, r->ar.end, end);
- }
-
- damon_free_target(t);
-}
-
-static void damon_test_split_evenly_succ(struct kunit *test,
- unsigned long start, unsigned long end, unsigned int nr_pieces)
-{
- struct damon_target *t = damon_new_target();
- struct damon_region *r;
- unsigned long expected_width = (end - start) / nr_pieces;
- unsigned long i = 0;
-
- if (!t)
- kunit_skip(test, "target alloc fail");
- r = damon_new_region(start, end);
- if (!r) {
- damon_free_target(t);
- kunit_skip(test, "region alloc fail");
- }
- damon_add_region(r, t);
- KUNIT_EXPECT_EQ(test,
- damon_va_evenly_split_region(t, r, nr_pieces), 0);
- KUNIT_EXPECT_EQ(test, damon_nr_regions(t), nr_pieces);
-
- damon_for_each_region(r, t) {
- if (i == nr_pieces - 1) {
- KUNIT_EXPECT_EQ(test,
- r->ar.start, start + i * expected_width);
- KUNIT_EXPECT_EQ(test, r->ar.end, end);
- break;
- }
- KUNIT_EXPECT_EQ(test,
- r->ar.start, start + i++ * expected_width);
- KUNIT_EXPECT_EQ(test, r->ar.end, start + i * expected_width);
- }
- damon_free_target(t);
-}
-
-static void damon_test_split_evenly(struct kunit *test)
-{
- KUNIT_EXPECT_EQ(test, damon_va_evenly_split_region(NULL, NULL, 5),
- -EINVAL);
-
- damon_test_split_evenly_fail(test, 0, 100, 0);
- damon_test_split_evenly_succ(test, 0, 100, 10);
- damon_test_split_evenly_succ(test, 5, 59, 5);
- damon_test_split_evenly_succ(test, 4, 6, 1);
- damon_test_split_evenly_succ(test, 0, 3, 2);
- damon_test_split_evenly_fail(test, 5, 6, 2);
-}
-
static struct kunit_case damon_test_cases[] = {
KUNIT_CASE(damon_test_three_regions_in_vmas),
KUNIT_CASE(damon_test_apply_three_regions1),
KUNIT_CASE(damon_test_apply_three_regions2),
KUNIT_CASE(damon_test_apply_three_regions3),
KUNIT_CASE(damon_test_apply_three_regions4),
- KUNIT_CASE(damon_test_split_evenly),
{},
};
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 4e3430d4191d1..400247d96eecc 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -53,52 +53,6 @@ static struct mm_struct *damon_get_mm(struct damon_target *t)
return mm;
}
-/*
- * Functions for the initial monitoring target regions construction
- */
-
-/*
- * Size-evenly split a region into 'nr_pieces' small regions
- *
- * Returns 0 on success, or negative error code otherwise.
- */
-static int damon_va_evenly_split_region(struct damon_target *t,
- struct damon_region *r, unsigned int nr_pieces)
-{
- unsigned long sz_orig, sz_piece, orig_end;
- struct damon_region *n = NULL, *next;
- unsigned long start;
- unsigned int i;
-
- if (!r || !nr_pieces)
- return -EINVAL;
-
- if (nr_pieces == 1)
- return 0;
-
- orig_end = r->ar.end;
- sz_orig = damon_sz_region(r);
- sz_piece = ALIGN_DOWN(sz_orig / nr_pieces, DAMON_MIN_REGION_SZ);
-
- if (!sz_piece)
- return -EINVAL;
-
- r->ar.end = r->ar.start + sz_piece;
- next = damon_next_region(r);
- for (start = r->ar.end, i = 1; i < nr_pieces; start += sz_piece, i++) {
- n = damon_new_region(start, start + sz_piece);
- if (!n)
- return -ENOMEM;
- damon_insert_region(n, r, next, t);
- r = n;
- }
- /* complement last region for possible rounding error */
- if (n)
- n->ar.end = orig_end;
-
- return 0;
-}
-
static unsigned long sz_range(struct damon_addr_range *r)
{
return r->end - r->start;
@@ -240,10 +194,8 @@ static void __damon_va_init_regions(struct damon_ctx *ctx,
struct damon_target *t)
{
struct damon_target *ti;
- struct damon_region *r;
struct damon_addr_range regions[3];
- unsigned long sz = 0, nr_pieces;
- int i, tidx = 0;
+ int tidx = 0;
if (damon_va_three_regions(t, regions)) {
damon_for_each_target(ti, ctx) {
@@ -255,25 +207,7 @@ static void __damon_va_init_regions(struct damon_ctx *ctx,
return;
}
- for (i = 0; i < 3; i++)
- sz += regions[i].end - regions[i].start;
- if (ctx->attrs.min_nr_regions)
- sz /= ctx->attrs.min_nr_regions;
- if (sz < DAMON_MIN_REGION_SZ)
- sz = DAMON_MIN_REGION_SZ;
-
- /* Set the initial three regions of the target */
- for (i = 0; i < 3; i++) {
- r = damon_new_region(regions[i].start, regions[i].end);
- if (!r) {
- pr_err("%d'th init region creation failed\n", i);
- return;
- }
- damon_add_region(r, t);
-
- nr_pieces = (regions[i].end - regions[i].start) / sz;
- damon_va_evenly_split_region(t, r, nr_pieces);
- }
+ damon_set_regions(t, regions, 3, DAMON_MIN_REGION_SZ);
}
/* Initialize '->regions_list' of every target (task) */
--
2.47.3
* [RFC PATCH v2 3/3] mm/damon/test/core-kunit: add damon_apply_min_nr_regions() test
2026-02-21 18:03 [RFC PATCH v2 0/3] mm/damon: strictly respect min_nr_regions SeongJae Park
2026-02-21 18:03 ` [RFC PATCH v2 1/3] mm/damon/core: split regions for min_nr_regions SeongJae Park
2026-02-21 18:03 ` [RFC PATCH v2 2/3] mm/damon/vaddr: do not " SeongJae Park
@ 2026-02-21 18:03 ` SeongJae Park
2 siblings, 0 replies; 4+ messages in thread
From: SeongJae Park @ 2026-02-21 18:03 UTC
Cc: SeongJae Park, Akinobu Mita, Andrew Morton, Brendan Higgins,
David Gow, damon, kunit-dev, linux-kernel, linux-kselftest,
linux-mm
Add a kunit test for the functionality of damon_apply_min_nr_regions().
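As an example, one of the cases below uses a single region of size 10 with
->min_region_sz of 2 and min_nr_regions of 2. The raw limit of 10 / 2 = 5 is
aligned up to 6, so the region is split into regions of size 6 and 4; the
test therefore expects a returned limit of 6 and a resulting region count
of 2.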
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/damon/tests/core-kunit.h | 52 +++++++++++++++++++++++++++++++++++++
1 file changed, 52 insertions(+)
diff --git a/mm/damon/tests/core-kunit.h b/mm/damon/tests/core-kunit.h
index 92ea25e2dc9e3..4c19ccac5a2ee 100644
--- a/mm/damon/tests/core-kunit.h
+++ b/mm/damon/tests/core-kunit.h
@@ -1241,6 +1241,57 @@ static void damon_test_set_filters_default_reject(struct kunit *test)
damos_free_filter(target_filter);
}
+static void damon_test_apply_min_nr_regions_for(struct kunit *test,
+ unsigned long sz_regions, unsigned long min_region_sz,
+ unsigned long min_nr_regions,
+ unsigned long max_region_sz_expect,
+ unsigned long nr_regions_expect)
+{
+ struct damon_ctx *ctx;
+ struct damon_target *t;
+ struct damon_region *r;
+ unsigned long max_region_size;
+
+ ctx = damon_new_ctx();
+ if (!ctx)
+ kunit_skip(test, "ctx alloc fail\n");
+ t = damon_new_target();
+ if (!t) {
+ damon_destroy_ctx(ctx);
+ kunit_skip(test, "target alloc fail\n");
+ }
+ damon_add_target(ctx, t);
+ r = damon_new_region(0, sz_regions);
+ if (!r) {
+ damon_destroy_ctx(ctx);
+ kunit_skip(test, "region alloc fail\n");
+ }
+ damon_add_region(r, t);
+
+ ctx->min_region_sz = min_region_sz;
+ ctx->attrs.min_nr_regions = min_nr_regions;
+ max_region_size = damon_apply_min_nr_regions(ctx);
+
+ KUNIT_EXPECT_EQ(test, max_region_size, max_region_sz_expect);
+ KUNIT_EXPECT_EQ(test, damon_nr_regions(t), nr_regions_expect);
+
+ damon_destroy_ctx(ctx);
+}
+
+static void damon_test_apply_min_nr_regions(struct kunit *test)
+{
+ /* common, expected setup */
+ damon_test_apply_min_nr_regions_for(test, 10, 1, 10, 1, 10);
+ /* no zero size limit */
+ damon_test_apply_min_nr_regions_for(test, 10, 1, 15, 1, 10);
+ /* max size should be aligned by min_region_sz */
+ damon_test_apply_min_nr_regions_for(test, 10, 2, 2, 6, 2);
+ /*
+ * when min_nr_regions and min_region_sz conflict, min_region_sz wins.
+ */
+ damon_test_apply_min_nr_regions_for(test, 10, 2, 10, 2, 5);
+}
+
static struct kunit_case damon_test_cases[] = {
KUNIT_CASE(damon_test_target),
KUNIT_CASE(damon_test_regions),
@@ -1267,6 +1318,7 @@ static struct kunit_case damon_test_cases[] = {
KUNIT_CASE(damos_test_filter_out),
KUNIT_CASE(damon_test_feed_loop_next_input),
KUNIT_CASE(damon_test_set_filters_default_reject),
+ KUNIT_CASE(damon_test_apply_min_nr_regions),
{},
};
--
2.47.3