* [PATCH] mm/damon/vaddr: do not repeat pte_offset_map_lock() until success
@ 2025-09-30 0:44 SeongJae Park
2025-09-30 3:18 ` Hugh Dickins
0 siblings, 1 reply; 2+ messages in thread
From: SeongJae Park @ 2025-09-30 0:44 UTC (permalink / raw)
To: Andrew Morton
Cc: SeongJae Park, # 6.5.x, Hugh Dickins, damon, kernel-team,
linux-kernel, linux-mm, Xinyu Zheng
DAMON's virtual address space operation set implementation (vaddr) calls
pte_offset_map_lock() inside the page table walk callback function, to
read and write the page table accessed bits.  If pte_offset_map_lock()
fails, the callback retries by returning with walk->action set to
ACTION_AGAIN.
pte_offset_map_lock() can keep failing, though, if the target is a pmd
migration entry.  If the migration cannot complete until the page table
walk finishes, this results in an infinite page table walk.  This indeed
caused a soft lockup when CPU hotplugging and DAMON were running in
parallel.
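For illustration, here is a minimal sketch of the retry pattern this
patch removes.  The function name and the elided accessed-bit handling
are placeholders, not the upstream code; damon_mkold_pmd_entry() and
damon_young_pmd_entry() in the diff below follow the same shape:

	static int example_pmd_entry(pmd_t *pmd, unsigned long addr,
			unsigned long next, struct mm_walk *walk)
	{
		spinlock_t *ptl;
		pte_t *pte;

		pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
		if (!pte) {
			/*
			 * No pte to map, e.g. the pmd is a migration
			 * entry.  ACTION_AGAIN makes the walker invoke
			 * this callback again, so the walk never ends
			 * while the pmd stays a migration entry.
			 */
			walk->action = ACTION_AGAIN;
			return 0;
		}
		/* ... read/clear the accessed bit here ... */
		pte_unmap_unlock(pte, ptl);
		return 0;
	}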
Avoid the infinite loop by simply not retrying the page table walk.
DAMON promises only best-effort accuracy, so missing accesses to such
pages is not a problem.
Reported-by: Xinyu Zheng <zhengxinyu6@huawei.com>
Closes: https://lore.kernel.org/20250918030029.2652607-1-zhengxinyu6@huawei.com
Fixes: 7780d04046a2 ("mm/pagewalkers: ACTION_AGAIN if pte_offset_map_lock() fails")
Cc: <stable@vger.kernel.org> # 6.5.x
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/damon/vaddr.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 8c048f9b129e..7e834467b2d8 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -328,10 +328,8 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
}
pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
- if (!pte) {
- walk->action = ACTION_AGAIN;
+ if (!pte)
return 0;
- }
if (!pte_present(ptep_get(pte)))
goto out;
damon_ptep_mkold(pte, walk->vma, addr);
@@ -481,10 +479,8 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
- if (!pte) {
- walk->action = ACTION_AGAIN;
+ if (!pte)
return 0;
- }
ptent = ptep_get(pte);
if (!pte_present(ptent))
goto out;
base-commit: 3169a901e935bc1f2d2eec0171abcf524b7747e4
--
2.39.5
* Re: [PATCH] mm/damon/vaddr: do not repeat pte_offset_map_lock() until success
2025-09-30 0:44 [PATCH] mm/damon/vaddr: do not repeat pte_offset_map_lock() until success SeongJae Park
@ 2025-09-30 3:18 ` Hugh Dickins
0 siblings, 0 replies; 2+ messages in thread
From: Hugh Dickins @ 2025-09-30 3:18 UTC (permalink / raw)
To: SeongJae Park
Cc: Andrew Morton, Hugh Dickins, damon, kernel-team, linux-kernel,
linux-mm, Xinyu Zheng
On Mon, 29 Sep 2025, SeongJae Park wrote:
> DAMON's virtual address space operation set implementation (vaddr) calls
> pte_offset_map_lock() inside the page table walk callback function, to
> read and write the page table accessed bits.  If pte_offset_map_lock()
> fails, the callback retries by returning with walk->action set to
> ACTION_AGAIN.
>
> pte_offset_map_lock() can keep failing, though, if the target is a pmd
> migration entry.  If the migration cannot complete until the page table
> walk finishes, this results in an infinite page table walk.  This indeed
> caused a soft lockup when CPU hotplugging and DAMON were running in
> parallel.
>
> Avoid the infinite loop by simply not retrying the page table walk.
> DAMON promises only best-effort accuracy, so missing accesses to such
> pages is not a problem.
>
> Reported-by: Xinyu Zheng <zhengxinyu6@huawei.com>
> Closes: https://lore.kernel.org/20250918030029.2652607-1-zhengxinyu6@huawei.com
> Fixes: 7780d04046a2 ("mm/pagewalkers: ACTION_AGAIN if pte_offset_map_lock() fails")
> Cc: <stable@vger.kernel.org> # 6.5.x
> Cc: Hugh Dickins <hughd@google.com>
> Signed-off-by: SeongJae Park <sj@kernel.org>
Thanks,
Acked-by: Hugh Dickins <hughd@google.com>
> ---
> mm/damon/vaddr.c | 8 ++------
> 1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
> index 8c048f9b129e..7e834467b2d8 100644
> --- a/mm/damon/vaddr.c
> +++ b/mm/damon/vaddr.c
> @@ -328,10 +328,8 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
> }
>
> pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> - if (!pte) {
> - walk->action = ACTION_AGAIN;
> + if (!pte)
> return 0;
> - }
> if (!pte_present(ptep_get(pte)))
> goto out;
> damon_ptep_mkold(pte, walk->vma, addr);
> @@ -481,10 +479,8 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> - if (!pte) {
> - walk->action = ACTION_AGAIN;
> + if (!pte)
> return 0;
> - }
> ptent = ptep_get(pte);
> if (!pte_present(ptent))
> goto out;
>
> base-commit: 3169a901e935bc1f2d2eec0171abcf524b7747e4
> --
> 2.39.5