From: SeongJae Park <sj@kernel.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: SeongJae Park <sj@kernel.org>,
damon@lists.linux.dev, kernel-team@meta.com,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 10/14] mm/damon/vaddr: put pid in cleanup_target()
Date: Sat, 12 Jul 2025 12:50:12 -0700
Message-ID: <20250712195016.151108-11-sj@kernel.org>
In-Reply-To: <20250712195016.151108-1-sj@kernel.org>
Implement the cleanup_target() callback for [f]vaddr, which calls
put_pid() for each target that is about to be destroyed.  Also remove
the now-redundant put_pid() calls from the core, sysfs, and sample
modules; they were only needed because vaddr lacked this self-cleanup.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/damon/core.c | 2 --
mm/damon/sysfs.c | 10 ++--------
mm/damon/vaddr.c | 6 ++++++
samples/damon/prcl.c | 2 --
samples/damon/wsse.c | 2 --
5 files changed, 8 insertions(+), 14 deletions(-)
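Not part of the patch itself: below is a minimal core-side sketch,
assuming the cleanup_target() ops hook introduced in patch 09/14 of
this series, of how damon_destroy_target() is expected to invoke the
callback.  This is what makes the explicit put_pid() calls removed
below redundant.  The function body is illustrative, not the verbatim
upstream code.

	/* sketch: core drops per-target resources via the ops callback */
	void damon_destroy_target(struct damon_target *t, struct damon_ctx *ctx)
	{
		/* let the operations set (e.g. vaddr) put the pid it holds */
		if (ctx && ctx->ops.cleanup_target)
			ctx->ops.cleanup_target(t);
		damon_del_target(t);	/* unlink from the context */
		damon_free_target(t);	/* free regions and the target */
	}

With a hook like this in place, any path that destroys targets,
including the sysfs and sample module paths changed below, no longer
needs to know whether the target holds a pid reference.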
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 678c9b4e038c..9554743dc992 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -1139,8 +1139,6 @@ static int damon_commit_targets(
} else {
struct damos *s;
- if (damon_target_has_pid(dst))
- put_pid(dst_target->pid);
damon_destroy_target(dst_target, dst);
damon_for_each_scheme(s, dst) {
if (s->quota.charge_target_from == dst_target) {
diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
index f2f9f756f5a2..5eba6ac53939 100644
--- a/mm/damon/sysfs.c
+++ b/mm/damon/sysfs.c
@@ -1298,13 +1298,9 @@ static int damon_sysfs_set_attrs(struct damon_ctx *ctx,
static void damon_sysfs_destroy_targets(struct damon_ctx *ctx)
{
struct damon_target *t, *next;
- bool has_pid = damon_target_has_pid(ctx);
- damon_for_each_target_safe(t, next, ctx) {
- if (has_pid)
- put_pid(t->pid);
+ damon_for_each_target_safe(t, next, ctx)
damon_destroy_target(t, ctx);
- }
}
static int damon_sysfs_set_regions(struct damon_target *t,
@@ -1387,10 +1383,8 @@ static void damon_sysfs_before_terminate(struct damon_ctx *ctx)
if (!damon_target_has_pid(ctx))
return;
- damon_for_each_target_safe(t, next, ctx) {
- put_pid(t->pid);
+ damon_for_each_target_safe(t, next, ctx)
damon_destroy_target(t, ctx);
- }
}
/*
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 7f5dc9c221a0..94af19c4dfed 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -805,6 +805,11 @@ static bool damon_va_target_valid(struct damon_target *t)
return false;
}
+static void damon_va_cleanup_target(struct damon_target *t)
+{
+ put_pid(t->pid);
+}
+
#ifndef CONFIG_ADVISE_SYSCALLS
static unsigned long damos_madvise(struct damon_target *target,
struct damon_region *r, int behavior)
@@ -946,6 +951,7 @@ static int __init damon_va_initcall(void)
.prepare_access_checks = damon_va_prepare_access_checks,
.check_accesses = damon_va_check_accesses,
.target_valid = damon_va_target_valid,
+ .cleanup_target = damon_va_cleanup_target,
.cleanup = NULL,
.apply_scheme = damon_va_apply_scheme,
.get_scheme_score = damon_va_scheme_score,
diff --git a/samples/damon/prcl.c b/samples/damon/prcl.c
index 25a751a67b2d..1b839c06a612 100644
--- a/samples/damon/prcl.c
+++ b/samples/damon/prcl.c
@@ -120,8 +120,6 @@ static void damon_sample_prcl_stop(void)
damon_stop(&ctx, 1);
damon_destroy_ctx(ctx);
}
- if (target_pidp)
- put_pid(target_pidp);
}
static bool init_called;
diff --git a/samples/damon/wsse.c b/samples/damon/wsse.c
index a250e86b24a5..da052023b099 100644
--- a/samples/damon/wsse.c
+++ b/samples/damon/wsse.c
@@ -100,8 +100,6 @@ static void damon_sample_wsse_stop(void)
damon_stop(&ctx, 1);
damon_destroy_ctx(ctx);
}
- if (target_pidp)
- put_pid(target_pidp);
}
static bool init_called;
--
2.39.5
Thread overview: 15+ messages
2025-07-12 19:50 [PATCH 00/14] mm/damon: remove damon_callback SeongJae Park
2025-07-12 19:50 ` [PATCH 01/14] mm/damon: accept parallel damon_call() requests SeongJae Park
2025-07-12 19:50 ` [PATCH 02/14] mm/damon/core: introduce repeat mode damon_call() SeongJae Park
2025-07-12 19:50 ` [PATCH 03/14] mm/damon/stat: use damon_call() repeat mode instead of damon_callback SeongJae Park
2025-07-12 19:50 ` [PATCH 04/14] mm/damon/reclaim: " SeongJae Park
2025-07-12 19:50 ` [PATCH 05/14] mm/damon/lru_sort: " SeongJae Park
2025-07-12 19:50 ` [PATCH 06/14] samples/damon/prcl: " SeongJae Park
2025-07-12 19:50 ` [PATCH 07/14] samples/damon/wsse: " SeongJae Park
2025-07-12 19:50 ` [PATCH 08/14] mm/damon/core: do not call ops.cleanup() when destroying targets SeongJae Park
2025-07-12 19:50 ` [PATCH 09/14] mm/damon/core: add cleanup_target() ops callback SeongJae Park
2025-07-12 19:50 ` SeongJae Park [this message]
2025-07-12 19:50 ` [PATCH 11/14] mm/damon/sysfs: remove damon_sysfs_destroy_targets() SeongJae Park
2025-07-12 19:50 ` [PATCH 12/14] mm/damon/core: destroy targets when kdamond_fn() finish SeongJae Park
2025-07-12 19:50 ` [PATCH 13/14] mm/damon/sysfs: remove damon_sysfs_before_terminate() SeongJae Park
2025-07-12 19:50 ` [PATCH 14/14] mm/damon/core: remove damon_callback SeongJae Park