* [RFC PATCH 0/3] mm/damon: always respect min_nr_regions from the beginning
@ 2026-02-17  0:03 SeongJae Park
  2026-02-17  0:03 ` [RFC PATCH 1/3] mm/damon/core: split regions for min_nr_regions at beginning SeongJae Park
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: SeongJae Park @ 2026-02-17  0:03 UTC (permalink / raw)
  Cc: SeongJae Park, Andrew Morton, Brendan Higgins, David Gow, damon,
	kunit-dev, linux-kernel, linux-kselftest, linux-mm

DAMON core assumes the min_nr_regions parameter is respected by the
initial user setup or the underlying operation set.  That is, users are
supposed to set up the initial monitoring regions to have at least
min_nr_regions regions.  When the virtual address operation set (vaddr)
is used, users can ask vaddr to do the setup entirely.  In that case,
vaddr finds regions covering the current virtual address mappings, and
splits them to meet min_nr_regions.  DAMON core therefore does nothing
for min_nr_regions at the beginning, and only keeps the number from
dropping below min_nr_regions, by preventing region merge operations
when needed.

The user setup requirement is too demanding, however, when the
min_nr_regions value is high.  There was actually a report [1] of such
a case.  Make the below three changes to resolve the issue.

First (patch 1), drop the assumption and make DAMON core split regions
at the beginning to respect min_nr_regions.  Second (patch 2), drop
vaddr's split operations and the related code that are no longer
needed.  Third (patch 3), add a kunit test for the newly introduced
function.

[1] https://lore.kernel.org/CAC5umyjmJE9SBqjbetZZecpY54bHpn2AvCGNv3aF6J=1cfoPXQ@mail.gmail.com

SeongJae Park (3):
  mm/damon/core: split regions for min_nr_regions at beginning
  mm/damon/vaddr: do not split regions for min_nr_regions
  mm/damon/test/core-kunit: add damon_apply_min_nr_regions() test

 mm/damon/core.c              | 39 ++++++++++++++++--
 mm/damon/tests/core-kunit.h  | 52 ++++++++++++++++++++++++
 mm/damon/tests/vaddr-kunit.h | 76 ------------------------------------
 mm/damon/vaddr.c             | 70 +--------------------------------
 4 files changed, 89 insertions(+), 148 deletions(-)


base-commit: b2e07b9d93a9696f78fb21f2260d5798c6040b28
-- 
2.47.3



* [RFC PATCH 1/3] mm/damon/core: split regions for min_nr_regions at beginning
  2026-02-17  0:03 [RFC PATCH 0/3] mm/damon: always respect min_nr_regions from the beginning SeongJae Park
@ 2026-02-17  0:03 ` SeongJae Park
  2026-02-20 15:13   ` SeongJae Park
  2026-02-17  0:03 ` [RFC PATCH 2/3] mm/damon/vaddr: do not split regions for min_nr_regions SeongJae Park
  2026-02-17  0:03 ` [RFC PATCH 3/3] mm/damon/test/core-kunit: add damon_apply_min_nr_regions() test SeongJae Park
  2 siblings, 1 reply; 6+ messages in thread
From: SeongJae Park @ 2026-02-17  0:03 UTC (permalink / raw)
  Cc: SeongJae Park, Andrew Morton, damon, linux-kernel, linux-mm

The DAMON core layer respects the min_nr_regions parameter by setting
the maximum size of each region to the total monitoring region size
divided by the parameter value.  The limit is applied by preventing
merges of regions that would result in a region larger than the maximum
size.

It does nothing for the beginning state.  That's because users can set
the initial monitoring regions as they want.  That is, if users really
care about min_nr_regions, they are supposed to set the initial
monitoring regions to have at least min_nr_regions regions.

When 'min_nr_regions' is high, such an initial setup is difficult.
Even in that case, DAMON will eventually make more than min_nr_regions
regions, but it takes time.  If the aggregation interval is long, the
delay could be problematic.  There was actually a report [1] of such a
case.

Fix the problem by splitting regions larger than the size limit at the
beginning of the kdamond main loop.  This slightly changes the
behavior: the initial monitoring regions that the user set won't be
strictly respected, in terms of the number of the regions.  For
example, a single user-set initial region can now be split into
min_nr_regions regions at kdamond startup.  It is difficult to imagine
a use case that actually depends on the old behavior, though, so this
change should be fine.

Note that the size limit is aligned to damon_ctx->min_region_sz and
cannot be zero.  That is, if min_nr_regions is larger than the total
size of the monitoring regions divided by ->min_region_sz, it cannot be
respected.  For example, with a total size of 10, ->min_region_sz of 2,
and min_nr_regions of 10, the raw limit of 10 / 10 = 1 is raised and
aligned to ->min_region_sz, so the limit becomes 2 and only 5 regions
can be made.

[1] https://lore.kernel.org/CAC5umyjmJE9SBqjbetZZecpY54bHpn2AvCGNv3aF6J=1cfoPXQ@mail.gmail.com

Signed-off-by: SeongJae Park <sj@kernel.org>
---
 mm/damon/core.c | 39 +++++++++++++++++++++++++++++++++++----
 1 file changed, 35 insertions(+), 4 deletions(-)

diff --git a/mm/damon/core.c b/mm/damon/core.c
index 8e4cf71e2a3ed..fd1b2cbfe2c80 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -1316,6 +1316,40 @@ static unsigned long damon_region_sz_limit(struct damon_ctx *ctx)
 	return sz;
 }
 
+static void damon_split_region_at(struct damon_target *t,
+				  struct damon_region *r, unsigned long sz_r);
+
+/*
+ * damon_apply_min_nr_regions() - Apply the min_nr_regions parameter.
+ * @ctx:	monitoring context.
+ *
+ * This function implements the min_nr_regions (minimum number of
+ * damon_region objects in the given monitoring context) behavior.  It first
+ * calculates the maximum size of each region for enforcing min_nr_regions,
+ * as the total size of the regions divided by min_nr_regions.  After that,
+ * it splits regions to ensure all regions are equal to or smaller than the
+ * size limit.  Finally, it returns the maximum size limit.
+ *
+ * Returns: maximum size of each region for enforcing min_nr_regions.
+ */
+static unsigned long damon_apply_min_nr_regions(struct damon_ctx *ctx)
+{
+	unsigned long max_region_sz = damon_region_sz_limit(ctx);
+	struct damon_target *t;
+	struct damon_region *r, *next;
+
+	max_region_sz = ALIGN(max_region_sz, ctx->min_region_sz);
+	damon_for_each_target(t, ctx) {
+		damon_for_each_region_safe(r, next, t) {
+			while (damon_sz_region(r) > max_region_sz) {
+				damon_split_region_at(t, r, max_region_sz);
+				r = damon_next_region(r);
+			}
+		}
+	}
+	return max_region_sz;
+}
+
 static int kdamond_fn(void *data);
 
 /*
@@ -1672,9 +1706,6 @@ static void kdamond_tune_intervals(struct damon_ctx *c)
 	damon_set_attrs(c, &new_attrs);
 }
 
-static void damon_split_region_at(struct damon_target *t,
-				  struct damon_region *r, unsigned long sz_r);
-
 static bool __damos_valid_target(struct damon_region *r, struct damos *s)
 {
 	unsigned long sz;
@@ -2778,7 +2809,7 @@ static int kdamond_fn(void *data)
 	if (!ctx->regions_score_histogram)
 		goto done;
 
-	sz_limit = damon_region_sz_limit(ctx);
+	sz_limit = damon_apply_min_nr_regions(ctx);
 
 	while (!kdamond_need_stop(ctx)) {
 		/*
-- 
2.47.3



* [RFC PATCH 2/3] mm/damon/vaddr: do not split regions for min_nr_regions
  2026-02-17  0:03 [RFC PATCH 0/3] mm/damon: always respect min_nr_regions from the beginning SeongJae Park
  2026-02-17  0:03 ` [RFC PATCH 1/3] mm/damon/core: split regions for min_nr_regions at beginning SeongJae Park
@ 2026-02-17  0:03 ` SeongJae Park
  2026-02-17  0:03 ` [RFC PATCH 3/3] mm/damon/test/core-kunit: add damon_apply_min_nr_regions() test SeongJae Park
  2 siblings, 0 replies; 6+ messages in thread
From: SeongJae Park @ 2026-02-17  0:03 UTC (permalink / raw)
  Cc: SeongJae Park, Andrew Morton, damon, linux-kernel, linux-mm

The previous commit made DAMON core split regions for min_nr_regions at
the beginning.  The virtual address space operation set (vaddr) does
similar work on its own, for the case where the user delegates the
entire initial monitoring regions setup to vaddr.  That is unnecessary
now, as DAMON core does the work in every case.  Remove the duplicated
work in vaddr.

Also remove a helper function that was used only for this work, and the
test code for the helper function.

Signed-off-by: SeongJae Park <sj@kernel.org>
---
 mm/damon/tests/vaddr-kunit.h | 76 ------------------------------------
 mm/damon/vaddr.c             | 70 +--------------------------------
 2 files changed, 2 insertions(+), 144 deletions(-)

diff --git a/mm/damon/tests/vaddr-kunit.h b/mm/damon/tests/vaddr-kunit.h
index cfae870178bfd..98e734d77d517 100644
--- a/mm/damon/tests/vaddr-kunit.h
+++ b/mm/damon/tests/vaddr-kunit.h
@@ -252,88 +252,12 @@ static void damon_test_apply_three_regions4(struct kunit *test)
 			new_three_regions, expected, ARRAY_SIZE(expected));
 }
 
-static void damon_test_split_evenly_fail(struct kunit *test,
-		unsigned long start, unsigned long end, unsigned int nr_pieces)
-{
-	struct damon_target *t = damon_new_target();
-	struct damon_region *r;
-
-	if (!t)
-		kunit_skip(test, "target alloc fail");
-
-	r = damon_new_region(start, end);
-	if (!r) {
-		damon_free_target(t);
-		kunit_skip(test, "region alloc fail");
-	}
-
-	damon_add_region(r, t);
-	KUNIT_EXPECT_EQ(test,
-			damon_va_evenly_split_region(t, r, nr_pieces), -EINVAL);
-	KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 1u);
-
-	damon_for_each_region(r, t) {
-		KUNIT_EXPECT_EQ(test, r->ar.start, start);
-		KUNIT_EXPECT_EQ(test, r->ar.end, end);
-	}
-
-	damon_free_target(t);
-}
-
-static void damon_test_split_evenly_succ(struct kunit *test,
-	unsigned long start, unsigned long end, unsigned int nr_pieces)
-{
-	struct damon_target *t = damon_new_target();
-	struct damon_region *r;
-	unsigned long expected_width = (end - start) / nr_pieces;
-	unsigned long i = 0;
-
-	if (!t)
-		kunit_skip(test, "target alloc fail");
-	r = damon_new_region(start, end);
-	if (!r) {
-		damon_free_target(t);
-		kunit_skip(test, "region alloc fail");
-	}
-	damon_add_region(r, t);
-	KUNIT_EXPECT_EQ(test,
-			damon_va_evenly_split_region(t, r, nr_pieces), 0);
-	KUNIT_EXPECT_EQ(test, damon_nr_regions(t), nr_pieces);
-
-	damon_for_each_region(r, t) {
-		if (i == nr_pieces - 1) {
-			KUNIT_EXPECT_EQ(test,
-				r->ar.start, start + i * expected_width);
-			KUNIT_EXPECT_EQ(test, r->ar.end, end);
-			break;
-		}
-		KUNIT_EXPECT_EQ(test,
-				r->ar.start, start + i++ * expected_width);
-		KUNIT_EXPECT_EQ(test, r->ar.end, start + i * expected_width);
-	}
-	damon_free_target(t);
-}
-
-static void damon_test_split_evenly(struct kunit *test)
-{
-	KUNIT_EXPECT_EQ(test, damon_va_evenly_split_region(NULL, NULL, 5),
-			-EINVAL);
-
-	damon_test_split_evenly_fail(test, 0, 100, 0);
-	damon_test_split_evenly_succ(test, 0, 100, 10);
-	damon_test_split_evenly_succ(test, 5, 59, 5);
-	damon_test_split_evenly_succ(test, 4, 6, 1);
-	damon_test_split_evenly_succ(test, 0, 3, 2);
-	damon_test_split_evenly_fail(test, 5, 6, 2);
-}
-
 static struct kunit_case damon_test_cases[] = {
 	KUNIT_CASE(damon_test_three_regions_in_vmas),
 	KUNIT_CASE(damon_test_apply_three_regions1),
 	KUNIT_CASE(damon_test_apply_three_regions2),
 	KUNIT_CASE(damon_test_apply_three_regions3),
 	KUNIT_CASE(damon_test_apply_three_regions4),
-	KUNIT_CASE(damon_test_split_evenly),
 	{},
 };
 
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 4e3430d4191d1..400247d96eecc 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -53,52 +53,6 @@ static struct mm_struct *damon_get_mm(struct damon_target *t)
 	return mm;
 }
 
-/*
- * Functions for the initial monitoring target regions construction
- */
-
-/*
- * Size-evenly split a region into 'nr_pieces' small regions
- *
- * Returns 0 on success, or negative error code otherwise.
- */
-static int damon_va_evenly_split_region(struct damon_target *t,
-		struct damon_region *r, unsigned int nr_pieces)
-{
-	unsigned long sz_orig, sz_piece, orig_end;
-	struct damon_region *n = NULL, *next;
-	unsigned long start;
-	unsigned int i;
-
-	if (!r || !nr_pieces)
-		return -EINVAL;
-
-	if (nr_pieces == 1)
-		return 0;
-
-	orig_end = r->ar.end;
-	sz_orig = damon_sz_region(r);
-	sz_piece = ALIGN_DOWN(sz_orig / nr_pieces, DAMON_MIN_REGION_SZ);
-
-	if (!sz_piece)
-		return -EINVAL;
-
-	r->ar.end = r->ar.start + sz_piece;
-	next = damon_next_region(r);
-	for (start = r->ar.end, i = 1; i < nr_pieces; start += sz_piece, i++) {
-		n = damon_new_region(start, start + sz_piece);
-		if (!n)
-			return -ENOMEM;
-		damon_insert_region(n, r, next, t);
-		r = n;
-	}
-	/* complement last region for possible rounding error */
-	if (n)
-		n->ar.end = orig_end;
-
-	return 0;
-}
-
 static unsigned long sz_range(struct damon_addr_range *r)
 {
 	return r->end - r->start;
@@ -240,10 +194,8 @@ static void __damon_va_init_regions(struct damon_ctx *ctx,
 				     struct damon_target *t)
 {
 	struct damon_target *ti;
-	struct damon_region *r;
 	struct damon_addr_range regions[3];
-	unsigned long sz = 0, nr_pieces;
-	int i, tidx = 0;
+	int tidx = 0;
 
 	if (damon_va_three_regions(t, regions)) {
 		damon_for_each_target(ti, ctx) {
@@ -255,25 +207,7 @@ static void __damon_va_init_regions(struct damon_ctx *ctx,
 		return;
 	}
 
-	for (i = 0; i < 3; i++)
-		sz += regions[i].end - regions[i].start;
-	if (ctx->attrs.min_nr_regions)
-		sz /= ctx->attrs.min_nr_regions;
-	if (sz < DAMON_MIN_REGION_SZ)
-		sz = DAMON_MIN_REGION_SZ;
-
-	/* Set the initial three regions of the target */
-	for (i = 0; i < 3; i++) {
-		r = damon_new_region(regions[i].start, regions[i].end);
-		if (!r) {
-			pr_err("%d'th init region creation failed\n", i);
-			return;
-		}
-		damon_add_region(r, t);
-
-		nr_pieces = (regions[i].end - regions[i].start) / sz;
-		damon_va_evenly_split_region(t, r, nr_pieces);
-	}
+	damon_set_regions(t, regions, 3, DAMON_MIN_REGION_SZ);
 }
 
 /* Initialize '->regions_list' of every target (task) */
-- 
2.47.3



* [RFC PATCH 3/3] mm/damon/test/core-kunit: add damon_apply_min_nr_regions() test
  2026-02-17  0:03 [RFC PATCH 0/3] mm/damon: always respect min_nr_regions from the beginning SeongJae Park
  2026-02-17  0:03 ` [RFC PATCH 1/3] mm/damon/core: split regions for min_nr_regions at beginning SeongJae Park
  2026-02-17  0:03 ` [RFC PATCH 2/3] mm/damon/vaddr: do not split regions for min_nr_regions SeongJae Park
@ 2026-02-17  0:03 ` SeongJae Park
  2 siblings, 0 replies; 6+ messages in thread
From: SeongJae Park @ 2026-02-17  0:03 UTC (permalink / raw)
  Cc: SeongJae Park, Andrew Morton, Brendan Higgins, David Gow, damon,
	kunit-dev, linux-kernel, linux-kselftest, linux-mm

Add a kunit test for the functionality of damon_apply_min_nr_regions().
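
The test can be run with the kunit tool, for example as below.  This
assumes the in-tree DAMON kunitconfig (mm/damon/tests/.kunitconfig) is
available:

    $ ./tools/testing/kunit/kunit.py run --kunitconfig=mm/damon/tests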

Signed-off-by: SeongJae Park <sj@kernel.org>
---
 mm/damon/tests/core-kunit.h | 52 +++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/mm/damon/tests/core-kunit.h b/mm/damon/tests/core-kunit.h
index 92ea25e2dc9e3..4c19ccac5a2ee 100644
--- a/mm/damon/tests/core-kunit.h
+++ b/mm/damon/tests/core-kunit.h
@@ -1241,6 +1241,57 @@ static void damon_test_set_filters_default_reject(struct kunit *test)
 	damos_free_filter(target_filter);
 }
 
+static void damon_test_apply_min_nr_regions_for(struct kunit *test,
+		unsigned long sz_regions, unsigned long min_region_sz,
+		unsigned long min_nr_regions,
+		unsigned long max_region_sz_expect,
+		unsigned long nr_regions_expect)
+{
+	struct damon_ctx *ctx;
+	struct damon_target *t;
+	struct damon_region *r;
+	unsigned long max_region_size;
+
+	ctx = damon_new_ctx();
+	if (!ctx)
+		kunit_skip(test, "ctx alloc fail\n");
+	t = damon_new_target();
+	if (!t) {
+		damon_destroy_ctx(ctx);
+		kunit_skip(test, "target alloc fail\n");
+	}
+	damon_add_target(ctx, t);
+	r = damon_new_region(0, sz_regions);
+	if (!r) {
+		damon_destroy_ctx(ctx);
+		kunit_skip(test, "region alloc fail\n");
+	}
+	damon_add_region(r, t);
+
+	ctx->min_region_sz = min_region_sz;
+	ctx->attrs.min_nr_regions = min_nr_regions;
+	max_region_size = damon_apply_min_nr_regions(ctx);
+
+	KUNIT_EXPECT_EQ(test, max_region_size, max_region_sz_expect);
+	KUNIT_EXPECT_EQ(test, damon_nr_regions(t), nr_regions_expect);
+
+	damon_destroy_ctx(ctx);
+}
+
+static void damon_test_apply_min_nr_regions(struct kunit *test)
+{
+	/* common, expected setup */
+	damon_test_apply_min_nr_regions_for(test, 10, 1, 10, 1, 10);
+	/* no zero size limit */
+	damon_test_apply_min_nr_regions_for(test, 10, 1, 15, 1, 10);
+	/* max size should be aligned by min_region_sz */
+	damon_test_apply_min_nr_regions_for(test, 10, 2, 2, 6, 2);
+	/*
+	 * when min_nr_regions and min_region_sz conflict, min_region_sz wins.
+	 */
+	damon_test_apply_min_nr_regions_for(test, 10, 2, 10, 2, 5);
+}
+
 static struct kunit_case damon_test_cases[] = {
 	KUNIT_CASE(damon_test_target),
 	KUNIT_CASE(damon_test_regions),
@@ -1267,6 +1318,7 @@ static struct kunit_case damon_test_cases[] = {
 	KUNIT_CASE(damos_test_filter_out),
 	KUNIT_CASE(damon_test_feed_loop_next_input),
 	KUNIT_CASE(damon_test_set_filters_default_reject),
+	KUNIT_CASE(damon_test_apply_min_nr_regions),
 	{},
 };
 
-- 
2.47.3



* Re: [RFC PATCH 1/3] mm/damon/core: split regions for min_nr_regions at beginning
  2026-02-17  0:03 ` [RFC PATCH 1/3] mm/damon/core: split regions for min_nr_regions at beginning SeongJae Park
@ 2026-02-20 15:13   ` SeongJae Park
  2026-02-21 18:07     ` SeongJae Park
  0 siblings, 1 reply; 6+ messages in thread
From: SeongJae Park @ 2026-02-20 15:13 UTC (permalink / raw)
  To: SeongJae Park; +Cc: Andrew Morton, damon, linux-kernel, linux-mm, Akinobu Mita

On Mon, 16 Feb 2026 16:03:56 -0800 SeongJae Park <sj@kernel.org> wrote:

> The DAMON core layer respects the min_nr_regions parameter by setting
> the maximum size of each region to the total monitoring region size
> divided by the parameter value.  The limit is applied by preventing
> merges of regions that would result in a region larger than the maximum
> size.
> 
> It does nothing for the beginning state.  That's because users can set
> the initial monitoring regions as they want.  That is, if users really
> care about min_nr_regions, they are supposed to set the initial
> monitoring regions to have at least min_nr_regions regions.
> 
> When 'min_nr_regions' is high, such an initial setup is difficult.
> Even in that case, DAMON will eventually make more than min_nr_regions
> regions, but it takes time.  If the aggregation interval is long, the
> delay could be problematic.  There was actually a report [1] of such a
> case.
> 
> Fix the problem by splitting regions larger than the size limit at the
> beginning of the kdamond main loop.  This slightly changes the
> behavior: the initial monitoring regions that the user set won't be
> strictly respected, in terms of the number of the regions.  For
> example, a single user-set initial region can now be split into
> min_nr_regions regions at kdamond startup.  It is difficult to imagine
> a use case that actually depends on the old behavior, though, so this
> change should be fine.
> 
> Note that the size limit is aligned to damon_ctx->min_region_sz and
> cannot be zero.  That is, if min_nr_regions is larger than the total
> size of the monitoring regions divided by ->min_region_sz, it cannot be
> respected.  For example, with a total size of 10, ->min_region_sz of 2,
> and min_nr_regions of 10, the raw limit of 10 / 10 = 1 is raised and
> aligned to ->min_region_sz, so the limit becomes 2 and only 5 regions
> can be made.
> 
> [1] https://lore.kernel.org/CAC5umyjmJE9SBqjbetZZecpY54bHpn2AvCGNv3aF6J=1cfoPXQ@mail.gmail.com
> 
> Signed-off-by: SeongJae Park <sj@kernel.org>
> ---
>  mm/damon/core.c | 39 +++++++++++++++++++++++++++++++++++----
>  1 file changed, 35 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/damon/core.c b/mm/damon/core.c
> index 8e4cf71e2a3ed..fd1b2cbfe2c80 100644
> --- a/mm/damon/core.c
> +++ b/mm/damon/core.c
> @@ -1316,6 +1316,40 @@ static unsigned long damon_region_sz_limit(struct damon_ctx *ctx)
>  	return sz;
>  }
>  
> +static void damon_split_region_at(struct damon_target *t,
> +				  struct damon_region *r, unsigned long sz_r);
> +
> +/*
> + * damon_apply_min_nr_regions() - Apply the min_nr_regions parameter.
> + * @ctx:	monitoring context.
> + *
> + * This function implements the min_nr_regions (minimum number of
> + * damon_region objects in the given monitoring context) behavior.  It first
> + * calculates the maximum size of each region for enforcing min_nr_regions,
> + * as the total size of the regions divided by min_nr_regions.  After that,
> + * it splits regions to ensure all regions are equal to or smaller than the
> + * size limit.  Finally, it returns the maximum size limit.
> + *
> + * Returns: maximum size of each region for enforcing min_nr_regions.
> + */
> +static unsigned long damon_apply_min_nr_regions(struct damon_ctx *ctx)
> +{
> +	unsigned long max_region_sz = damon_region_sz_limit(ctx);
> +	struct damon_target *t;
> +	struct damon_region *r, *next;
> +
> +	max_region_sz = ALIGN(max_region_sz, ctx->min_region_sz);
> +	damon_for_each_target(t, ctx) {
> +		damon_for_each_region_safe(r, next, t) {
> +			while (damon_sz_region(r) > max_region_sz) {
> +				damon_split_region_at(t, r, max_region_sz);
> +				r = damon_next_region(r);
> +			}
> +		}
> +	}
> +	return max_region_sz;
> +}
> +
>  static int kdamond_fn(void *data);
>  
>  /*
> @@ -1672,9 +1706,6 @@ static void kdamond_tune_intervals(struct damon_ctx *c)
>  	damon_set_attrs(c, &new_attrs);
>  }
>  
> -static void damon_split_region_at(struct damon_target *t,
> -				  struct damon_region *r, unsigned long sz_r);
> -
>  static bool __damos_valid_target(struct damon_region *r, struct damos *s)
>  {
>  	unsigned long sz;
> @@ -2778,7 +2809,7 @@ static int kdamond_fn(void *data)
>  	if (!ctx->regions_score_histogram)
>  		goto done;
>  
> -	sz_limit = damon_region_sz_limit(ctx);
> +	sz_limit = damon_apply_min_nr_regions(ctx);
>  
>  	while (!kdamond_need_stop(ctx)) {
>  		/*

As the commit message says, and as Akinobu pointed out [1], this patch
is incomplete.  It doesn't cover online updates of the monitoring
regions and min_nr_regions.  I was planning to solve this case first
and continue working on the online updates.  The followup work was
taking more time than I expected, mainly because I wanted to do it in
an efficient way.  That is, damon_apply_min_nr_regions() iterates the
regions twice, and I wanted to reduce that.  But now I realize maybe I
was overthinking it.

Just calling damon_apply_min_nr_regions() after kdamond_merge_regions()
like below should work without being too inefficient.  I believe the
cost is acceptable because it is executed only once per aggregation
interval.  We already iterate the regions multiple times per sampling
interval, which is 1/20 of the aggregation interval by default (5ms vs
100ms), and an even smaller portion of it in experimental setups,
anyway.

'''
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -3371,10 +3371,13 @@ static int kdamond_fn(void *data)
                        max_nr_accesses = ctx->ops.check_accesses(ctx);

                if (time_after_eq(ctx->passed_sample_intervals,
-                                       next_aggregation_sis))
+                                       next_aggregation_sis)) {
                        kdamond_merge_regions(ctx,
                                        max_nr_accesses / 10,
                                        sz_limit);
+                       /* online updates might be made */
+                       sz_limit = damon_apply_min_nr_regions(ctx);
+               }

                /*
                 * do kdamond_call() and kdamond_apply_schemes() after
@@ -3434,7 +3437,6 @@ static int kdamond_fn(void *data)
                                sample_interval;
                        if (ctx->ops.update)
                                ctx->ops.update(ctx);
-                       sz_limit = damon_region_sz_limit(ctx);
                }
        }
 done:
'''

If nobody finds problems with this, I will post RFC v2 after squashing
the above diff into this patch, probably within this weekend.


[1] https://lore.kernel.org/CAC5umygPq8+FQWTG73-QPOKHT1P5=N2+qFkrRfZAkL_7G=gQXQ@mail.gmail.com


Thanks,
SJ

[...]



* Re: [RFC PATCH 1/3] mm/damon/core: split regions for min_nr_regions at beginning
  2026-02-20 15:13   ` SeongJae Park
@ 2026-02-21 18:07     ` SeongJae Park
  0 siblings, 0 replies; 6+ messages in thread
From: SeongJae Park @ 2026-02-21 18:07 UTC (permalink / raw)
  To: SeongJae Park; +Cc: Andrew Morton, damon, linux-kernel, linux-mm, Akinobu Mita

On Fri, 20 Feb 2026 07:13:01 -0800 SeongJae Park <sj@kernel.org> wrote:

[...]
> If nobody finds problems with this, I will post RFC v2 after squashing
> the above diff into this patch, probably within this weekend.

The RFC v2 is posted:
https://lore.kernel.org/20260221180341.10313-1-sj@kernel.org


Thanks,
SJ

[...]


