* [PATCH 1/9] mm/damon: rename damos core filter helpers to have word core
2025-11-12 15:41 [PATCH 0/9] mm/damon: misc cleanups SeongJae Park
@ 2025-11-12 15:41 ` SeongJae Park
2025-11-12 15:41 ` [PATCH 2/9] mm/damon: rename damos->filters to damos->core_filters SeongJae Park
` (7 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: SeongJae Park @ 2025-11-12 15:41 UTC (permalink / raw)
To: Andrew Morton
Cc: SeongJae Park, Bill Wendling, Justin Stitt, Miguel Ojeda,
Nathan Chancellor, Nick Desaulniers, damon, linux-kernel,
linux-mm, llvm
DAMOS filters handled by the core layer are called core filters, while
those handled by the ops layer are called ops filters. They share the
same type but are managed on different lists, since core filters are
evaluated before ops filters. They also have different helper
functions, which depend on where the filters are managed.
The helper functions for ops filters have the '_ops_' keyword in their
names, so it is easy to tell they are for ops filters. Meanwhile, the
helper functions for core filters do not have a 'core' keyword in their
names. This makes them easy to mistakenly use for ops filters.
Actually there was such a bug.
To avoid future mistakes from similar confusion, rename the DAMOS core
filter helper functions to have the keyword 'core' in their names.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
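For illustration, a minimal sketch of how the renamed iterators keep
the two filter lists apart, assuming 'scheme' is a valid struct damos
pointer (an example, not part of the diff below):

        struct damos_filter *f;

        /* walks only the core layer handled filters (scheme->filters) */
        damos_for_each_core_filter(f, scheme)
                pr_debug("core filter type %d\n", f->type);

        /* walks only the ops layer handled filters (scheme->ops_filters) */
        damos_for_each_ops_filter(f, scheme)
                pr_debug("ops filter type %d\n", f->type);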
.clang-format | 4 ++--
include/linux/damon.h | 4 ++--
mm/damon/core.c | 14 +++++++-------
3 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/.clang-format b/.clang-format
index f371a13b4d19..748efbe791ad 100644
--- a/.clang-format
+++ b/.clang-format
@@ -140,8 +140,8 @@ ForEachMacros:
- 'damon_for_each_scheme_safe'
- 'damon_for_each_target'
- 'damon_for_each_target_safe'
- - 'damos_for_each_filter'
- - 'damos_for_each_filter_safe'
+ - 'damos_for_each_core_filter'
+ - 'damos_for_each_core_filter_safe'
- 'damos_for_each_ops_filter'
- 'damos_for_each_ops_filter_safe'
- 'damos_for_each_quota_goal'
diff --git a/include/linux/damon.h b/include/linux/damon.h
index f3566b978cdf..6e3db165fe60 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -871,10 +871,10 @@ static inline unsigned long damon_sz_region(struct damon_region *r)
#define damos_for_each_quota_goal_safe(goal, next, quota) \
list_for_each_entry_safe(goal, next, &(quota)->goals, list)
-#define damos_for_each_filter(f, scheme) \
+#define damos_for_each_core_filter(f, scheme) \
list_for_each_entry(f, &(scheme)->filters, list)
-#define damos_for_each_filter_safe(f, next, scheme) \
+#define damos_for_each_core_filter_safe(f, next, scheme) \
list_for_each_entry_safe(f, next, &(scheme)->filters, list)
#define damos_for_each_ops_filter(f, scheme) \
diff --git a/mm/damon/core.c b/mm/damon/core.c
index a14cc73c2cab..d4cb11ced13f 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -450,7 +450,7 @@ void damon_destroy_scheme(struct damos *s)
damos_for_each_quota_goal_safe(g, g_next, &s->quota)
damos_destroy_quota_goal(g);
- damos_for_each_filter_safe(f, next, s)
+ damos_for_each_core_filter_safe(f, next, s)
damos_destroy_filter(f);
damos_for_each_ops_filter_safe(f, next, s)
@@ -864,12 +864,12 @@ static int damos_commit_quota(struct damos_quota *dst, struct damos_quota *src)
return 0;
}
-static struct damos_filter *damos_nth_filter(int n, struct damos *s)
+static struct damos_filter *damos_nth_core_filter(int n, struct damos *s)
{
struct damos_filter *filter;
int i = 0;
- damos_for_each_filter(filter, s) {
+ damos_for_each_core_filter(filter, s) {
if (i++ == n)
return filter;
}
@@ -923,15 +923,15 @@ static int damos_commit_core_filters(struct damos *dst, struct damos *src)
struct damos_filter *dst_filter, *next, *src_filter, *new_filter;
int i = 0, j = 0;
- damos_for_each_filter_safe(dst_filter, next, dst) {
- src_filter = damos_nth_filter(i++, src);
+ damos_for_each_core_filter_safe(dst_filter, next, dst) {
+ src_filter = damos_nth_core_filter(i++, src);
if (src_filter)
damos_commit_filter(dst_filter, src_filter);
else
damos_destroy_filter(dst_filter);
}
- damos_for_each_filter_safe(src_filter, next, src) {
+ damos_for_each_core_filter_safe(src_filter, next, src) {
if (j++ < i)
continue;
@@ -1767,7 +1767,7 @@ static bool damos_filter_out(struct damon_ctx *ctx, struct damon_target *t,
struct damos_filter *filter;
s->core_filters_allowed = false;
- damos_for_each_filter(filter, s) {
+ damos_for_each_core_filter(filter, s) {
if (damos_filter_match(ctx, t, r, filter, ctx->min_sz_region)) {
if (filter->allow)
s->core_filters_allowed = true;
--
2.47.3
* [PATCH 2/9] mm/damon: rename damos->filters to damos->core_filters
2025-11-12 15:41 [PATCH 0/9] mm/damon: misc cleanups SeongJae Park
2025-11-12 15:41 ` [PATCH 1/9] mm/damon: rename damos core filter helpers to have word core SeongJae Park
@ 2025-11-12 15:41 ` SeongJae Park
2025-11-12 15:41 ` [PATCH 3/9] mm/damon/vaddr: cleanup using pmd_trans_huge_lock() SeongJae Park
` (6 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: SeongJae Park @ 2025-11-12 15:41 UTC (permalink / raw)
To: Andrew Morton
Cc: SeongJae Park, Shuah Khan, damon, linux-kernel, linux-kselftest,
linux-mm
DAMOS filters that are handled by the ops layer are linked to
damos->ops_filters. Owing to the ops_ prefix in the name, it is easy
to understand the list is for ops layer handled filters. The other
filters, which are handled by the core layer, are linked to
damos->filters. Because of the name, it is easy to mistake the list as
holding not only the core layer handled filters but all filters. Avoid
such confusion by renaming the field to core_filters.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
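For illustration, a sketch of how a newly allocated filter reaches one
of the two renamed lists, assuming 'scheme' is a valid struct damos
pointer (an example, not part of the diff below):

        struct damos_filter *f = damos_new_filter(DAMOS_FILTER_TYPE_ADDR,
                        true, false);

        /*
         * Address range filters are handled by the core layer, so
         * damos_add_filter() links this one to scheme->core_filters
         * rather than scheme->ops_filters.
         */
        if (f)
                damos_add_filter(scheme, f);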
include/linux/damon.h | 10 +++++-----
mm/damon/core.c | 6 +++---
mm/damon/tests/core-kunit.h | 4 ++--
.../testing/selftests/damon/drgn_dump_damon_status.py | 8 ++++----
tools/testing/selftests/damon/sysfs.py | 2 +-
5 files changed, 15 insertions(+), 15 deletions(-)
diff --git a/include/linux/damon.h b/include/linux/damon.h
index 6e3db165fe60..3813373a9200 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -492,7 +492,7 @@ struct damos_migrate_dests {
* @wmarks: Watermarks for automated (in)activation of this scheme.
* @migrate_dests: Destination nodes if @action is "migrate_{hot,cold}".
* @target_nid: Destination node if @action is "migrate_{hot,cold}".
- * @filters: Additional set of &struct damos_filter for &action.
+ * @core_filters: Additional set of &struct damos_filter for &action.
* @ops_filters: ops layer handling &struct damos_filter objects list.
* @last_applied: Last @action applied ops-managing entity.
* @stat: Statistics of this scheme.
@@ -518,7 +518,7 @@ struct damos_migrate_dests {
*
* Before applying the &action to a memory region, &struct damon_operations
* implementation could check pages of the region and skip &action to respect
- * &filters
+ * &core_filters
*
* The minimum entity that @action can be applied depends on the underlying
* &struct damon_operations. Since it may not be aligned with the core layer
@@ -562,7 +562,7 @@ struct damos {
struct damos_migrate_dests migrate_dests;
};
};
- struct list_head filters;
+ struct list_head core_filters;
struct list_head ops_filters;
void *last_applied;
struct damos_stat stat;
@@ -872,10 +872,10 @@ static inline unsigned long damon_sz_region(struct damon_region *r)
list_for_each_entry_safe(goal, next, &(quota)->goals, list)
#define damos_for_each_core_filter(f, scheme) \
- list_for_each_entry(f, &(scheme)->filters, list)
+ list_for_each_entry(f, &(scheme)->core_filters, list)
#define damos_for_each_core_filter_safe(f, next, scheme) \
- list_for_each_entry_safe(f, next, &(scheme)->filters, list)
+ list_for_each_entry_safe(f, next, &(scheme)->core_filters, list)
#define damos_for_each_ops_filter(f, scheme) \
list_for_each_entry(f, &(scheme)->ops_filters, list)
diff --git a/mm/damon/core.c b/mm/damon/core.c
index d4cb11ced13f..aedb315b075a 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -306,7 +306,7 @@ void damos_add_filter(struct damos *s, struct damos_filter *f)
if (damos_filter_for_ops(f->type))
list_add_tail(&f->list, &s->ops_filters);
else
- list_add_tail(&f->list, &s->filters);
+ list_add_tail(&f->list, &s->core_filters);
}
static void damos_del_filter(struct damos_filter *f)
@@ -397,7 +397,7 @@ struct damos *damon_new_scheme(struct damos_access_pattern *pattern,
*/
scheme->next_apply_sis = 0;
scheme->walk_completed = false;
- INIT_LIST_HEAD(&scheme->filters);
+ INIT_LIST_HEAD(&scheme->core_filters);
INIT_LIST_HEAD(&scheme->ops_filters);
scheme->stat = (struct damos_stat){};
INIT_LIST_HEAD(&scheme->list);
@@ -995,7 +995,7 @@ static void damos_set_filters_default_reject(struct damos *s)
s->core_filters_default_reject = false;
else
s->core_filters_default_reject =
- damos_filters_default_reject(&s->filters);
+ damos_filters_default_reject(&s->core_filters);
s->ops_filters_default_reject =
damos_filters_default_reject(&s->ops_filters);
}
diff --git a/mm/damon/tests/core-kunit.h b/mm/damon/tests/core-kunit.h
index 0d2d8cda8631..4380d0312d24 100644
--- a/mm/damon/tests/core-kunit.h
+++ b/mm/damon/tests/core-kunit.h
@@ -876,7 +876,7 @@ static void damos_test_commit_filter(struct kunit *test)
static void damos_test_help_initailize_scheme(struct damos *scheme)
{
INIT_LIST_HEAD(&scheme->quota.goals);
- INIT_LIST_HEAD(&scheme->filters);
+ INIT_LIST_HEAD(&scheme->core_filters);
INIT_LIST_HEAD(&scheme->ops_filters);
}
@@ -1140,7 +1140,7 @@ static void damon_test_set_filters_default_reject(struct kunit *test)
struct damos scheme;
struct damos_filter *target_filter, *anon_filter;
- INIT_LIST_HEAD(&scheme.filters);
+ INIT_LIST_HEAD(&scheme.core_filters);
INIT_LIST_HEAD(&scheme.ops_filters);
damos_set_filters_default_reject(&scheme);
diff --git a/tools/testing/selftests/damon/drgn_dump_damon_status.py b/tools/testing/selftests/damon/drgn_dump_damon_status.py
index cb4fdbe68acb..5374d18d1fa8 100755
--- a/tools/testing/selftests/damon/drgn_dump_damon_status.py
+++ b/tools/testing/selftests/damon/drgn_dump_damon_status.py
@@ -175,11 +175,11 @@ def scheme_to_dict(scheme):
['target_nid', int],
['migrate_dests', damos_migrate_dests_to_dict],
])
- filters = []
+ core_filters = []
for f in list_for_each_entry(
- 'struct damos_filter', scheme.filters.address_of_(), 'list'):
- filters.append(damos_filter_to_dict(f))
- dict_['filters'] = filters
+ 'struct damos_filter', scheme.core_filters.address_of_(), 'list'):
+ core_filters.append(damos_filter_to_dict(f))
+ dict_['core_filters'] = core_filters
ops_filters = []
for f in list_for_each_entry(
'struct damos_filter', scheme.ops_filters.address_of_(), 'list'):
diff --git a/tools/testing/selftests/damon/sysfs.py b/tools/testing/selftests/damon/sysfs.py
index b34aea0a6775..b4c5ef5c4d69 100755
--- a/tools/testing/selftests/damon/sysfs.py
+++ b/tools/testing/selftests/damon/sysfs.py
@@ -132,7 +132,7 @@ def assert_scheme_committed(scheme, dump):
assert_watermarks_committed(scheme.watermarks, dump['wmarks'])
# TODO: test filters directory
for idx, f in enumerate(scheme.core_filters.filters):
- assert_filter_committed(f, dump['filters'][idx])
+ assert_filter_committed(f, dump['core_filters'][idx])
for idx, f in enumerate(scheme.ops_filters.filters):
assert_filter_committed(f, dump['ops_filters'][idx])
--
2.47.3
* [PATCH 3/9] mm/damon/vaddr: cleanup using pmd_trans_huge_lock()
2025-11-12 15:41 [PATCH 0/9] mm/damon: misc cleanups SeongJae Park
2025-11-12 15:41 ` [PATCH 1/9] mm/damon: rename damos core filter helpers to have word core SeongJae Park
2025-11-12 15:41 ` [PATCH 2/9] mm/damon: rename damos->filters to damos->core_filters SeongJae Park
@ 2025-11-12 15:41 ` SeongJae Park
2025-11-17 15:44 ` SeongJae Park
2025-11-12 15:41 ` [PATCH 4/9] mm/damon/vaddr: use vm_normal_folio{,_pmd}() instead of damon_get_folio() SeongJae Park
` (5 subsequent siblings)
8 siblings, 1 reply; 11+ messages in thread
From: SeongJae Park @ 2025-11-12 15:41 UTC (permalink / raw)
To: Andrew Morton; +Cc: SeongJae Park, damon, linux-kernel, linux-mm, Hugh Dickins
Three pmd walk functions in vaddr.c use pmd_trans_huge() and
pmd_lock() to handle THPs. Simplify the code by replacing the two
function calls with a single pmd_trans_huge_lock() call.
Note that this cleanup not only reduces the lines of code but also
simplifies the code execution flow for the migration entries case, as
kindly explained [1] by Hugh, who suggested this cleanup.
[1] https://lore.kernel.org/296c2b3f-6748-158f-b85d-2952165c0588@google.com
Suggested-by: Hugh Dickins <hughd@google.com>
Signed-off-by: SeongJae Park <sj@kernel.org>
---
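All three conversions end up with the same shape; a sketch of the
resulting pattern, with the actual huge PMD handling elided:

        spinlock_t *ptl = pmd_trans_huge_lock(pmd, walk->vma);

        if (ptl) {
                pmd_t pmde = pmdp_get(pmd);

                /* non-present (e.g., migration) entries are skipped */
                if (pmd_present(pmde))
                        ;       /* handle the huge PMD under the lock */
                spin_unlock(ptl);
                return 0;
        }
        /* NULL means not a huge PMD; fall through to pte-level handling */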
mm/damon/vaddr.c | 48 ++++++++++++------------------------------------
1 file changed, 12 insertions(+), 36 deletions(-)
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 7e834467b2d8..0ad1ce120aa1 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -307,24 +307,14 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
unsigned long next, struct mm_walk *walk)
{
pte_t *pte;
- pmd_t pmde;
spinlock_t *ptl;
- if (pmd_trans_huge(pmdp_get(pmd))) {
- ptl = pmd_lock(walk->mm, pmd);
- pmde = pmdp_get(pmd);
-
- if (!pmd_present(pmde)) {
- spin_unlock(ptl);
- return 0;
- }
-
- if (pmd_trans_huge(pmde)) {
+ ptl = pmd_trans_huge_lock(pmd, walk->vma);
+ if (ptl) {
+ if (pmd_present(pmdp_get(pmd)))
damon_pmdp_mkold(pmd, walk->vma, addr);
- spin_unlock(ptl);
- return 0;
- }
spin_unlock(ptl);
+ return 0;
}
pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
@@ -446,21 +436,12 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
struct damon_young_walk_private *priv = walk->private;
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
- if (pmd_trans_huge(pmdp_get(pmd))) {
- pmd_t pmde;
-
- ptl = pmd_lock(walk->mm, pmd);
- pmde = pmdp_get(pmd);
+ ptl = pmd_trans_huge_lock(pmd, walk->vma);
+ if (ptl) {
+ pmd_t pmde = pmdp_get(pmd);
- if (!pmd_present(pmde)) {
- spin_unlock(ptl);
- return 0;
- }
-
- if (!pmd_trans_huge(pmde)) {
- spin_unlock(ptl);
- goto regular_page;
- }
+ if (!pmd_present(pmde))
+ goto huge_out;
folio = damon_get_folio(pmd_pfn(pmde));
if (!folio)
goto huge_out;
@@ -474,8 +455,6 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
spin_unlock(ptl);
return 0;
}
-
-regular_page:
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
@@ -910,13 +889,10 @@ static int damos_va_stat_pmd_entry(pmd_t *pmd, unsigned long addr,
int nr;
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
- if (pmd_trans_huge(*pmd)) {
- pmd_t pmde;
+ ptl = pmd_trans_huge_lock(pmd, vma);
+ if (ptl) {
+ pmd_t pmde = pmdp_get(pmd);
- ptl = pmd_trans_huge_lock(pmd, vma);
- if (!ptl)
- return 0;
- pmde = pmdp_get(pmd);
if (!pmd_present(pmde))
goto huge_unlock;
--
2.47.3
* Re: [PATCH 3/9] mm/damon/vaddr: cleanup using pmd_trans_huge_lock()
2025-11-12 15:41 ` [PATCH 3/9] mm/damon/vaddr: cleanup using pmd_trans_huge_lock() SeongJae Park
@ 2025-11-17 15:44 ` SeongJae Park
0 siblings, 0 replies; 11+ messages in thread
From: SeongJae Park @ 2025-11-17 15:44 UTC (permalink / raw)
To: SeongJae Park
Cc: Andrew Morton, damon, linux-kernel, linux-mm, Hugh Dickins,
kernel test robot
On Wed, 12 Nov 2025 07:41:06 -0800 SeongJae Park <sj@kernel.org> wrote:
> Three pmd walk functions in vaddr.c use pmd_trans_huge() and
> pmd_lock() to handle THPs. Simplify the code by replacing the two
> function calls with a single pmd_trans_huge_lock() call.
>
> Note that this cleanup not only reduces the lines of code but also
> simplifies the code execution flow for the migration entries case, as
> kindly explained [1] by Hugh, who suggested this cleanup.
>
> [1] https://lore.kernel.org/296c2b3f-6748-158f-b85d-2952165c0588@google.com
>
> Suggested-by: Hugh Dickins <hughd@google.com>
> Signed-off-by: SeongJae Park <sj@kernel.org>
> ---
> mm/damon/vaddr.c | 48 ++++++++++++------------------------------------
> 1 file changed, 12 insertions(+), 36 deletions(-)
>
> diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
> index 7e834467b2d8..0ad1ce120aa1 100644
> --- a/mm/damon/vaddr.c
> +++ b/mm/damon/vaddr.c
> @@ -307,24 +307,14 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
> unsigned long next, struct mm_walk *walk)
> {
> pte_t *pte;
> - pmd_t pmde;
> spinlock_t *ptl;
>
> - if (pmd_trans_huge(pmdp_get(pmd))) {
> - ptl = pmd_lock(walk->mm, pmd);
> - pmde = pmdp_get(pmd);
> -
> - if (!pmd_present(pmde)) {
> - spin_unlock(ptl);
> - return 0;
> - }
> -
> - if (pmd_trans_huge(pmde)) {
> + ptl = pmd_trans_huge_lock(pmd, walk->vma);
> + if (ptl) {
> + if (pmd_present(pmdp_get(pmd)))
> damon_pmdp_mkold(pmd, walk->vma, addr);
> - spin_unlock(ptl);
> - return 0;
> - }
> spin_unlock(ptl);
> + return 0;
> }
>
> pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> @@ -446,21 +436,12 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
> struct damon_young_walk_private *priv = walk->private;
>
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> - if (pmd_trans_huge(pmdp_get(pmd))) {
> - pmd_t pmde;
> -
> - ptl = pmd_lock(walk->mm, pmd);
> - pmde = pmdp_get(pmd);
> + ptl = pmd_trans_huge_lock(pmd, walk->vma);
> + if (ptl) {
> + pmd_t pmde = pmdp_get(pmd);
The kernel test robot reported [1] that this makes the m68k build
fail. Andrew, could you please add the below attached patch as a fix?
[1] https://lore.kernel.org/202511172257.CjElDcRX-lkp@intel.com
Thanks,
SJ
[...]
---- >8 ----
From 0908bba1aec11997107af757a34136a14be619b7 Mon Sep 17 00:00:00 2001
From: SeongJae Park <sj@kernel.org>
Date: Mon, 17 Nov 2025 07:36:43 -0800
Subject: [PATCH] mm/damon/vaddr: provide lvalue to pmd_present()
On m68k, the vaddr.c build fails since pmd_present() requires an
lvalue, while vaddr.c passes the return value of pmdp_get() directly.
Fix it.
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202511172257.CjElDcRX-lkp@intel.com/
Signed-off-by: SeongJae Park <sj@kernel.org>
---
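A sketch of the failure mode, assuming the architecture implements
pmd_val() by taking the address of its argument (which seems to be the
case on m68k), so that pmd_present() compiles only against an lvalue:

        if (pmd_present(pmdp_get(pmd)))         /* rvalue: build error */
                ;

        pmd_t pmde = pmdp_get(pmd);             /* temporary is an lvalue */

        if (pmd_present(pmde))                  /* builds everywhere */
                ;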
mm/damon/vaddr.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index ef57e95eb422..2750c88e7225 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -311,7 +311,9 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
ptl = pmd_trans_huge_lock(pmd, walk->vma);
if (ptl) {
- if (pmd_present(pmdp_get(pmd)))
+ pmd_t pmde = pmdp_get(pmd);
+
+ if (pmd_present(pmde))
damon_pmdp_mkold(pmd, walk->vma, addr);
spin_unlock(ptl);
return 0;
--
2.47.3
* [PATCH 4/9] mm/damon/vaddr: use vm_normal_folio{,_pmd}() instead of damon_get_folio()
2025-11-12 15:41 [PATCH 0/9] mm/damon: misc cleanups SeongJae Park
` (2 preceding siblings ...)
2025-11-12 15:41 ` [PATCH 3/9] mm/damon/vaddr: cleanup using pmd_trans_huge_lock() SeongJae Park
@ 2025-11-12 15:41 ` SeongJae Park
2025-11-12 15:41 ` [PATCH 5/9] mm/damon/vaddr: consistently use only pmd_entry for damos_migrate SeongJae Park
` (4 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: SeongJae Park @ 2025-11-12 15:41 UTC (permalink / raw)
To: Andrew Morton
Cc: SeongJae Park, damon, linux-kernel, linux-mm, David Hildenbrand
A few page table walk entry callback functions in vaddr.c use
damon_get_folio() with p{te,md}_pfn() to get the folio, and then
folio_put() it. Simplify the code and drop the unnecessary folio
get/put by using vm_normal_folio() and its friends instead.
Note that this cleanup was suggested by David Hildenbrand during a
review of another patch series [1], and that series was updated
following the suggestion. This patch further applies the cleanup to
DAMON code that was merged before that series.
[1] https://lore.kernel.org/0cb3d5a5-683b-4dba-90a8-b45ab83eec53@redhat.com
Suggested-by: David Hildenbrand <david@kernel.org>
Signed-off-by: SeongJae Park <sj@kernel.org>
---
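The behavioral difference, sketched: damon_get_folio() takes a folio
reference that the caller must drop, while vm_normal_folio() and
vm_normal_folio_pmd() return the folio without taking one, which is
safe here because the callers hold the page table lock:

        /* before: pfn-based lookup takes a reference */
        folio = damon_get_folio(pte_pfn(ptent));
        if (folio) {
                /* ... use the folio ... */
                folio_put(folio);
        }

        /* after: no reference taken; the held PTL keeps the folio stable */
        folio = vm_normal_folio(walk->vma, addr, ptent);
        if (folio) {
                /* ... use the folio; no folio_put() needed ... */
        }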
mm/damon/vaddr.c | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 0ad1ce120aa1..9c06cfe4526f 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -442,7 +442,7 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
if (!pmd_present(pmde))
goto huge_out;
- folio = damon_get_folio(pmd_pfn(pmde));
+ folio = vm_normal_folio_pmd(walk->vma, addr, pmde);
if (!folio)
goto huge_out;
if (pmd_young(pmde) || !folio_test_idle(folio) ||
@@ -450,7 +450,6 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
addr))
priv->young = true;
*priv->folio_sz = HPAGE_PMD_SIZE;
- folio_put(folio);
huge_out:
spin_unlock(ptl);
return 0;
@@ -463,14 +462,13 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
ptent = ptep_get(pte);
if (!pte_present(ptent))
goto out;
- folio = damon_get_folio(pte_pfn(ptent));
+ folio = vm_normal_folio(walk->vma, addr, ptent);
if (!folio)
goto out;
if (pte_young(ptent) || !folio_test_idle(folio) ||
mmu_notifier_test_young(walk->mm, addr))
priv->young = true;
*priv->folio_sz = folio_size(folio);
- folio_put(folio);
out:
pte_unmap_unlock(pte, ptl);
return 0;
@@ -718,18 +716,16 @@ static int damos_va_migrate_pmd_entry(pmd_t *pmd, unsigned long addr,
/* Tell page walk code to not split the PMD */
walk->action = ACTION_CONTINUE;
- folio = damon_get_folio(pmd_pfn(pmde));
+ folio = vm_normal_folio_pmd(walk->vma, addr, pmde);
if (!folio)
goto unlock;
if (damos_va_filter_out(s, folio, walk->vma, addr, NULL, pmd))
- goto put_folio;
+ goto unlock;
damos_va_migrate_dests_add(folio, walk->vma, addr, dests,
migration_lists);
-put_folio:
- folio_put(folio);
unlock:
spin_unlock(ptl);
return 0;
@@ -752,18 +748,15 @@ static int damos_va_migrate_pte_entry(pte_t *pte, unsigned long addr,
if (pte_none(ptent) || !pte_present(ptent))
return 0;
- folio = damon_get_folio(pte_pfn(ptent));
+ folio = vm_normal_folio(walk->vma, addr, ptent);
if (!folio)
return 0;
if (damos_va_filter_out(s, folio, walk->vma, addr, pte, NULL))
- goto put_folio;
+ return 0;
damos_va_migrate_dests_add(folio, walk->vma, addr, dests,
migration_lists);
-
-put_folio:
- folio_put(folio);
return 0;
}
--
2.47.3
* [PATCH 5/9] mm/damon/vaddr: consistently use only pmd_entry for damos_migrate
2025-11-12 15:41 [PATCH 0/9] mm/damon: misc cleanups SeongJae Park
` (3 preceding siblings ...)
2025-11-12 15:41 ` [PATCH 4/9] mm/damon/vaddr: use vm_normal_folio{,_pmd}() instead of damon_get_folio() SeongJae Park
@ 2025-11-12 15:41 ` SeongJae Park
2025-11-12 15:41 ` [PATCH 6/9] mm/damon/tests/core-kunit: remove DAMON_MIN_REGION redefinition SeongJae Park
` (3 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: SeongJae Park @ 2025-11-12 15:41 UTC (permalink / raw)
To: Andrew Morton
Cc: SeongJae Park, damon, linux-kernel, linux-mm, David Hildenbrand
For page table walks, it is usual [1] to have only one pmd entry
function. The vaddr.c code for DAMOS_MIGRATE_{HOT,COLD} does not
follow the pattern. Instead, it uses both pmd and pte entry functions
without a special reason. Refactor it to use only the pmd entry
function, to make the code under mm/ more consistent.
Suggested-by: David Hildenbrand <david@kernel.org>
Signed-off-by: SeongJae Park <sj@kernel.org>
---
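With the refactoring, the walker registration reduces to a single
callback that covers both the THP and the regular page cases (sketch;
explicitly setting .pte_entry to NULL as the diff below does is
equivalent to leaving it unset):

        struct mm_walk_ops walk_ops = {
                .pmd_entry = damos_va_migrate_pmd_entry,
                .walk_lock = PGWALK_RDLOCK,
        };

The pmd entry function now maps and iterates the PTE table itself,
using folio_nr_pages() to step over large folios in one go.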
mm/damon/vaddr.c | 84 +++++++++++++++++++++---------------------------
1 file changed, 37 insertions(+), 47 deletions(-)
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 9c06cfe4526f..ef57e95eb422 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -695,7 +695,6 @@ static void damos_va_migrate_dests_add(struct folio *folio,
list_add(&folio->lru, &migration_lists[i]);
}
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
static int damos_va_migrate_pmd_entry(pmd_t *pmd, unsigned long addr,
unsigned long next, struct mm_walk *walk)
{
@@ -705,58 +704,49 @@ static int damos_va_migrate_pmd_entry(pmd_t *pmd, unsigned long addr,
struct damos_migrate_dests *dests = &s->migrate_dests;
struct folio *folio;
spinlock_t *ptl;
- pmd_t pmde;
-
- ptl = pmd_lock(walk->mm, pmd);
- pmde = pmdp_get(pmd);
-
- if (!pmd_present(pmde) || !pmd_trans_huge(pmde))
- goto unlock;
-
- /* Tell page walk code to not split the PMD */
- walk->action = ACTION_CONTINUE;
-
- folio = vm_normal_folio_pmd(walk->vma, addr, pmde);
- if (!folio)
- goto unlock;
-
- if (damos_va_filter_out(s, folio, walk->vma, addr, NULL, pmd))
- goto unlock;
-
- damos_va_migrate_dests_add(folio, walk->vma, addr, dests,
- migration_lists);
-
-unlock:
- spin_unlock(ptl);
- return 0;
-}
-#else
-#define damos_va_migrate_pmd_entry NULL
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+ pte_t *start_pte, *pte, ptent;
+ int nr;
-static int damos_va_migrate_pte_entry(pte_t *pte, unsigned long addr,
- unsigned long next, struct mm_walk *walk)
-{
- struct damos_va_migrate_private *priv = walk->private;
- struct list_head *migration_lists = priv->migration_lists;
- struct damos *s = priv->scheme;
- struct damos_migrate_dests *dests = &s->migrate_dests;
- struct folio *folio;
- pte_t ptent;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ ptl = pmd_trans_huge_lock(pmd, walk->vma);
+ if (ptl) {
+ pmd_t pmde = pmdp_get(pmd);
- ptent = ptep_get(pte);
- if (pte_none(ptent) || !pte_present(ptent))
+ if (!pmd_present(pmde))
+ goto huge_out;
+ folio = vm_normal_folio_pmd(walk->vma, addr, pmde);
+ if (!folio)
+ goto huge_out;
+ if (damos_va_filter_out(s, folio, walk->vma, addr, NULL, pmd))
+ goto huge_out;
+ damos_va_migrate_dests_add(folio, walk->vma, addr, dests,
+ migration_lists);
+huge_out:
+ spin_unlock(ptl);
return 0;
+ }
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
- folio = vm_normal_folio(walk->vma, addr, ptent);
- if (!folio)
+ start_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+ if (!pte)
return 0;
- if (damos_va_filter_out(s, folio, walk->vma, addr, pte, NULL))
- return 0;
+ for (; addr < next; pte += nr, addr += nr * PAGE_SIZE) {
+ nr = 1;
+ ptent = ptep_get(pte);
- damos_va_migrate_dests_add(folio, walk->vma, addr, dests,
- migration_lists);
+ if (pte_none(ptent) || !pte_present(ptent))
+ continue;
+ folio = vm_normal_folio(walk->vma, addr, ptent);
+ if (!folio)
+ continue;
+ if (damos_va_filter_out(s, folio, walk->vma, addr, pte, NULL))
+ return 0;
+ damos_va_migrate_dests_add(folio, walk->vma, addr, dests,
+ migration_lists);
+ nr = folio_nr_pages(folio);
+ }
+ pte_unmap_unlock(start_pte, ptl);
return 0;
}
@@ -822,7 +812,7 @@ static unsigned long damos_va_migrate(struct damon_target *target,
struct damos_migrate_dests *dests = &s->migrate_dests;
struct mm_walk_ops walk_ops = {
.pmd_entry = damos_va_migrate_pmd_entry,
- .pte_entry = damos_va_migrate_pte_entry,
+ .pte_entry = NULL,
.walk_lock = PGWALK_RDLOCK,
};
--
2.47.3
* [PATCH 6/9] mm/damon/tests/core-kunit: remove DAMON_MIN_REGION redefinition
2025-11-12 15:41 [PATCH 0/9] mm/damon: misc cleanups SeongJae Park
` (4 preceding siblings ...)
2025-11-12 15:41 ` [PATCH 5/9] mm/damon/vaddr: consistently use only pmd_entry for damos_migrate SeongJae Park
@ 2025-11-12 15:41 ` SeongJae Park
2025-11-12 15:41 ` [PATCH 7/9] selftests/damon/sysfs.py: merge DAMON status dumping into commitment assertion SeongJae Park
` (2 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: SeongJae Park @ 2025-11-12 15:41 UTC (permalink / raw)
To: Andrew Morton
Cc: SeongJae Park, Brendan Higgins, David Gow, damon, kunit-dev,
linux-kernel, linux-kselftest, linux-mm
A few DAMON core functions, including damon_set_regions(), were
hard-coded to use DAMON_MIN_REGION as their region management
granularity. For simple and human-readable unit test expectations,
the DAMON core layer kunit test redefines DAMON_MIN_REGION to '1'.
A previous patch series [1] removed the hard-coded part but kept the
redefinition, and updated related function calls to explicitly use
DAMON_MIN_REGION. Remove the now-unnecessary redefinition and update
the relevant function calls to pass the literal '1' instead of
DAMON_MIN_REGION.
[1] https://lore.kernel.org/20250828171242.59810-1-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
---
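The effect on a test call site, sketched: the region management
granularity becomes an explicit literal instead of relying on a
test-only macro override:

        /* before: correct only with DAMON_MIN_REGION redefined to 1 */
        damon_split_regions_of(t, 2, DAMON_MIN_REGION);

        /* after: the minimum region size is spelled out */
        damon_split_regions_of(t, 2, 1);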
mm/damon/core.c | 5 ----
mm/damon/tests/core-kunit.h | 55 ++++++++++++++++++-------------------
2 files changed, 26 insertions(+), 34 deletions(-)
diff --git a/mm/damon/core.c b/mm/damon/core.c
index aedb315b075a..f9fc0375890a 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -20,11 +20,6 @@
#define CREATE_TRACE_POINTS
#include <trace/events/damon.h>
-#ifdef CONFIG_DAMON_KUNIT_TEST
-#undef DAMON_MIN_REGION
-#define DAMON_MIN_REGION 1
-#endif
-
static DEFINE_MUTEX(damon_lock);
static int nr_running_ctxs;
static bool running_exclusive_ctxs;
diff --git a/mm/damon/tests/core-kunit.h b/mm/damon/tests/core-kunit.h
index 4380d0312d24..a1eff023e928 100644
--- a/mm/damon/tests/core-kunit.h
+++ b/mm/damon/tests/core-kunit.h
@@ -279,7 +279,7 @@ static void damon_test_split_regions_of(struct kunit *test)
kunit_skip(test, "region alloc fail");
}
damon_add_region(r, t);
- damon_split_regions_of(t, 2, DAMON_MIN_REGION);
+ damon_split_regions_of(t, 2, 1);
KUNIT_EXPECT_LE(test, damon_nr_regions(t), 2u);
damon_free_target(t);
@@ -292,7 +292,7 @@ static void damon_test_split_regions_of(struct kunit *test)
kunit_skip(test, "second region alloc fail");
}
damon_add_region(r, t);
- damon_split_regions_of(t, 4, DAMON_MIN_REGION);
+ damon_split_regions_of(t, 4, 1);
KUNIT_EXPECT_LE(test, damon_nr_regions(t), 4u);
damon_free_target(t);
}
@@ -373,7 +373,7 @@ static void damon_test_set_regions(struct kunit *test)
damon_add_region(r1, t);
damon_add_region(r2, t);
- damon_set_regions(t, &range, 1, DAMON_MIN_REGION);
+ damon_set_regions(t, &range, 1, 1);
KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 3);
damon_for_each_region(r, t) {
@@ -1037,15 +1037,14 @@ static void damos_test_filter_out(struct kunit *test)
f = damos_new_filter(DAMOS_FILTER_TYPE_ADDR, true, false);
if (!f)
kunit_skip(test, "filter alloc fail");
- f->addr_range = (struct damon_addr_range){
- .start = DAMON_MIN_REGION * 2, .end = DAMON_MIN_REGION * 6};
+ f->addr_range = (struct damon_addr_range){.start = 2, .end = 6};
t = damon_new_target();
if (!t) {
damos_destroy_filter(f);
kunit_skip(test, "target alloc fail");
}
- r = damon_new_region(DAMON_MIN_REGION * 3, DAMON_MIN_REGION * 5);
+ r = damon_new_region(3, 5);
if (!r) {
damos_destroy_filter(f);
damon_free_target(t);
@@ -1054,50 +1053,48 @@ static void damos_test_filter_out(struct kunit *test)
damon_add_region(r, t);
/* region in the range */
- KUNIT_EXPECT_TRUE(test,
- damos_filter_match(NULL, t, r, f, DAMON_MIN_REGION));
+ KUNIT_EXPECT_TRUE(test, damos_filter_match(NULL, t, r, f, 1));
KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 1);
/* region before the range */
- r->ar.start = DAMON_MIN_REGION * 1;
- r->ar.end = DAMON_MIN_REGION * 2;
+ r->ar.start = 1;
+ r->ar.end = 2;
KUNIT_EXPECT_FALSE(test,
- damos_filter_match(NULL, t, r, f, DAMON_MIN_REGION));
+ damos_filter_match(NULL, t, r, f, 1));
KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 1);
/* region after the range */
- r->ar.start = DAMON_MIN_REGION * 6;
- r->ar.end = DAMON_MIN_REGION * 8;
+ r->ar.start = 6;
+ r->ar.end = 8;
KUNIT_EXPECT_FALSE(test,
- damos_filter_match(NULL, t, r, f, DAMON_MIN_REGION));
+ damos_filter_match(NULL, t, r, f, 1));
KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 1);
/* region started before the range */
- r->ar.start = DAMON_MIN_REGION * 1;
- r->ar.end = DAMON_MIN_REGION * 4;
- KUNIT_EXPECT_FALSE(test,
- damos_filter_match(NULL, t, r, f, DAMON_MIN_REGION));
+ r->ar.start = 1;
+ r->ar.end = 4;
+ KUNIT_EXPECT_FALSE(test, damos_filter_match(NULL, t, r, f, 1));
/* filter should have split the region */
- KUNIT_EXPECT_EQ(test, r->ar.start, DAMON_MIN_REGION * 1);
- KUNIT_EXPECT_EQ(test, r->ar.end, DAMON_MIN_REGION * 2);
+ KUNIT_EXPECT_EQ(test, r->ar.start, 1);
+ KUNIT_EXPECT_EQ(test, r->ar.end, 2);
KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 2);
r2 = damon_next_region(r);
- KUNIT_EXPECT_EQ(test, r2->ar.start, DAMON_MIN_REGION * 2);
- KUNIT_EXPECT_EQ(test, r2->ar.end, DAMON_MIN_REGION * 4);
+ KUNIT_EXPECT_EQ(test, r2->ar.start, 2);
+ KUNIT_EXPECT_EQ(test, r2->ar.end, 4);
damon_destroy_region(r2, t);
/* region started in the range */
- r->ar.start = DAMON_MIN_REGION * 2;
- r->ar.end = DAMON_MIN_REGION * 8;
+ r->ar.start = 2;
+ r->ar.end = 8;
KUNIT_EXPECT_TRUE(test,
- damos_filter_match(NULL, t, r, f, DAMON_MIN_REGION));
+ damos_filter_match(NULL, t, r, f, 1));
/* filter should have split the region */
- KUNIT_EXPECT_EQ(test, r->ar.start, DAMON_MIN_REGION * 2);
- KUNIT_EXPECT_EQ(test, r->ar.end, DAMON_MIN_REGION * 6);
+ KUNIT_EXPECT_EQ(test, r->ar.start, 2);
+ KUNIT_EXPECT_EQ(test, r->ar.end, 6);
KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 2);
r2 = damon_next_region(r);
- KUNIT_EXPECT_EQ(test, r2->ar.start, DAMON_MIN_REGION * 6);
- KUNIT_EXPECT_EQ(test, r2->ar.end, DAMON_MIN_REGION * 8);
+ KUNIT_EXPECT_EQ(test, r2->ar.start, 6);
+ KUNIT_EXPECT_EQ(test, r2->ar.end, 8);
damon_destroy_region(r2, t);
damon_free_target(t);
--
2.47.3
* [PATCH 7/9] selftests/damon/sysfs.py: merge DAMON status dumping into commitment assertion
2025-11-12 15:41 [PATCH 0/9] mm/damon: misc cleanups SeongJae Park
` (5 preceding siblings ...)
2025-11-12 15:41 ` [PATCH 6/9] mm/damon/tests/core-kunit: remove DAMON_MIN_REGION redefinition SeongJae Park
@ 2025-11-12 15:41 ` SeongJae Park
2025-11-12 15:41 ` [PATCH 8/9] Docs/mm/damon/maintainer-profile: fix a typo on mm-unstable link SeongJae Park
2025-11-12 15:41 ` [PATCH 9/9] Docs/mm/damon/maintainer-profile: fix grammatical errors SeongJae Park
8 siblings, 0 replies; 11+ messages in thread
From: SeongJae Park @ 2025-11-12 15:41 UTC (permalink / raw)
To: Andrew Morton
Cc: SeongJae Park, Shuah Khan, damon, linux-kernel, linux-kselftest,
linux-mm
For each test case, sysfs.py makes changes to DAMON, dumps DAMON's
internal status, and asserts the expectation is met. The dumping part
should be the same for all cases, so it is duplicated in each test
case, which makes it easy to make mistakes. In fact, a few of those
duplicates do not turn DAMON off in case of a dumping failure, which
makes following selftests that need to turn DAMON on fail with -EBUSY.
Merge the status dumping into the commitment assertion with proper
handling of dumping failures, to deduplicate the code and avoid the
unnecessary failures of following tests.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
tools/testing/selftests/damon/sysfs.py | 43 ++++++++------------------
1 file changed, 13 insertions(+), 30 deletions(-)
diff --git a/tools/testing/selftests/damon/sysfs.py b/tools/testing/selftests/damon/sysfs.py
index b4c5ef5c4d69..9cca71eb0325 100755
--- a/tools/testing/selftests/damon/sysfs.py
+++ b/tools/testing/selftests/damon/sysfs.py
@@ -185,7 +185,15 @@ def assert_ctx_committed(ctx, dump):
assert_monitoring_targets_committed(ctx.targets, dump['adaptive_targets'])
assert_schemes_committed(ctx.schemes, dump['schemes'])
-def assert_ctxs_committed(ctxs, dump):
+def assert_ctxs_committed(kdamonds):
+ status, err = dump_damon_status_dict(kdamonds.kdamonds[0].pid)
+ if err is not None:
+ print(err)
+ kdamonds.stop()
+ exit(1)
+
+ ctxs = kdamonds.kdamonds[0].contexts
+ dump = status['contexts']
assert_true(len(ctxs) == len(dump), 'ctxs length', dump)
for idx, ctx in enumerate(ctxs):
assert_ctx_committed(ctx, dump[idx])
@@ -202,13 +210,7 @@ def main():
print('kdamond start failed: %s' % err)
exit(1)
- status, err = dump_damon_status_dict(kdamonds.kdamonds[0].pid)
- if err is not None:
- print(err)
- kdamonds.stop()
- exit(1)
-
- assert_ctxs_committed(kdamonds.kdamonds[0].contexts, status['contexts'])
+ assert_ctxs_committed(kdamonds)
context = _damon_sysfs.DamonCtx(
monitoring_attrs=_damon_sysfs.DamonAttrs(
@@ -256,12 +258,7 @@ def main():
kdamonds.kdamonds[0].contexts = [context]
kdamonds.kdamonds[0].commit()
- status, err = dump_damon_status_dict(kdamonds.kdamonds[0].pid)
- if err is not None:
- print(err)
- exit(1)
-
- assert_ctxs_committed(kdamonds.kdamonds[0].contexts, status['contexts'])
+ assert_ctxs_committed(kdamonds)
# test online commitment of minimum context.
context = _damon_sysfs.DamonCtx()
@@ -270,12 +267,7 @@ def main():
kdamonds.kdamonds[0].contexts = [context]
kdamonds.kdamonds[0].commit()
- status, err = dump_damon_status_dict(kdamonds.kdamonds[0].pid)
- if err is not None:
- print(err)
- exit(1)
-
- assert_ctxs_committed(kdamonds.kdamonds[0].contexts, status['contexts'])
+ assert_ctxs_committed(kdamonds)
kdamonds.stop()
@@ -303,17 +295,8 @@ def main():
exit(1)
kdamonds.kdamonds[0].contexts[0].targets[1].obsolete = True
kdamonds.kdamonds[0].commit()
-
- status, err = dump_damon_status_dict(kdamonds.kdamonds[0].pid)
- if err is not None:
- print(err)
- kdamonds.stop()
- exit(1)
-
del kdamonds.kdamonds[0].contexts[0].targets[1]
-
- assert_ctxs_committed(kdamonds.kdamonds[0].contexts, status['contexts'])
-
+ assert_ctxs_committed(kdamonds)
kdamonds.stop()
if __name__ == '__main__':
--
2.47.3
* [PATCH 8/9] Docs/mm/damon/maintainer-profile: fix a typo on mm-unstable link
2025-11-12 15:41 [PATCH 0/9] mm/damon: misc cleanups SeongJae Park
` (6 preceding siblings ...)
2025-11-12 15:41 ` [PATCH 7/9] selftests/damon/sysfs.py: merge DAMON status dumping into commitment assertion SeongJae Park
@ 2025-11-12 15:41 ` SeongJae Park
2025-11-12 15:41 ` [PATCH 9/9] Docs/mm/damon/maintainer-profile: fix grammatical errors SeongJae Park
8 siblings, 0 replies; 11+ messages in thread
From: SeongJae Park @ 2025-11-12 15:41 UTC (permalink / raw)
To: Andrew Morton
Cc: SeongJae Park, Liam R. Howlett, David Hildenbrand,
Jonathan Corbet, Lorenzo Stoakes, Michal Hocko, Mike Rapoport,
Suren Baghdasaryan, Vlastimil Babka, damon, linux-doc,
linux-kernel, linux-mm
Commit 0b473f9e6eac ("Docs/mm/damon/maintainer-profile: update for
mm-new tree") mistakenly omitted a space between a link and the next
word. Fix it.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
Documentation/mm/damon/maintainer-profile.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Documentation/mm/damon/maintainer-profile.rst b/Documentation/mm/damon/maintainer-profile.rst
index 58a3fb3c5762..f1aed6e55d31 100644
--- a/Documentation/mm/damon/maintainer-profile.rst
+++ b/Documentation/mm/damon/maintainer-profile.rst
@@ -57,7 +57,7 @@ Key cycle dates
Patches can be sent anytime. Key cycle dates of the `mm-new
<https://git.kernel.org/akpm/mm/h/mm-new>`_, `mm-unstable
-<https://git.kernel.org/akpm/mm/h/mm-unstable>`_and `mm-stable
+<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ and `mm-stable
<https://git.kernel.org/akpm/mm/h/mm-stable>`_ trees depend on the memory
management subsystem maintainer.
--
2.47.3
* [PATCH 9/9] Docs/mm/damon/maintainer-profile: fix grammatical errors
2025-11-12 15:41 [PATCH 0/9] mm/damon: misc cleanups SeongJae Park
` (7 preceding siblings ...)
2025-11-12 15:41 ` [PATCH 8/9] Docs/mm/damon/maintainer-profile: fix a typo on mm-unstable link SeongJae Park
@ 2025-11-12 15:41 ` SeongJae Park
8 siblings, 0 replies; 11+ messages in thread
From: SeongJae Park @ 2025-11-12 15:41 UTC (permalink / raw)
To: Andrew Morton
Cc: SeongJae Park, Liam R. Howlett, David Hildenbrand,
Jonathan Corbet, Lorenzo Stoakes, Michal Hocko, Mike Rapoport,
Suren Baghdasaryan, Vlastimil Babka, damon, linux-doc,
linux-kernel, linux-mm
Fix a few grammatical errors in the DAMON maintainer-profile document.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
Documentation/mm/damon/maintainer-profile.rst | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/Documentation/mm/damon/maintainer-profile.rst b/Documentation/mm/damon/maintainer-profile.rst
index f1aed6e55d31..e761edada1e9 100644
--- a/Documentation/mm/damon/maintainer-profile.rst
+++ b/Documentation/mm/damon/maintainer-profile.rst
@@ -27,8 +27,8 @@ maintainer.
Note again the patches for `mm-new tree
<https://git.kernel.org/akpm/mm/h/mm-new>`_ are queued by the memory management
-subsystem maintainer. If the patches requires some patches in `damon/next tree
-<https://git.kernel.org/sj/h/damon/next>`_ which not yet merged in mm-new,
+subsystem maintainer. If the patches require some patches in `damon/next tree
+<https://git.kernel.org/sj/h/damon/next>`_ which have not yet merged in mm-new,
please make sure the requirement is clearly specified.
Submit checklist addendum
@@ -99,5 +99,5 @@ Schedules and reservation status are available at the Google `doc
<https://docs.google.com/document/d/1v43Kcj3ly4CYqmAkMaZzLiM2GEnWfgdGbZAH3mi2vpM/edit?usp=sharing>`_.
There is also a public Google `calendar
<https://calendar.google.com/calendar/u/0?cid=ZDIwOTA4YTMxNjc2MDQ3NTIyMmUzYTM5ZmQyM2U4NDA0ZGIwZjBiYmJlZGQxNDM0MmY4ZTRjOTE0NjdhZDRiY0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t>`_
-that has the events. Anyone can subscribe it. DAMON maintainer will also
-provide periodic reminder to the mailing list (damon@lists.linux.dev).
+that has the events. Anyone can subscribe to it. DAMON maintainer will also
+provide periodic reminders to the mailing list (damon@lists.linux.dev).
--
2.47.3