* [PATCH v7 0/2] Improvements for victim thawing and reaper VMA traversal
@ 2025-09-03 9:27 zhongjinji
2025-09-03 9:27 ` [PATCH v7 1/2] mm/oom_kill: Thaw victim on a per-process basis instead of per-thread zhongjinji
2025-09-03 9:27 ` [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order zhongjinji
0 siblings, 2 replies; 15+ messages in thread
From: zhongjinji @ 2025-09-03 9:27 UTC (permalink / raw)
To: mhocko
Cc: rientjes, shakeel.butt, akpm, linux-mm, linux-kernel, tglx,
liam.howlett, lorenzo.stoakes, surenb, liulu.liu, feng.han,
zhongjinji
This patch series improves victim process thawing and the OOM reaper's
VMA traversal. Even if the oom_reaper is delayed, patch 2 is still
beneficial for reaping processes with a large address space footprint,
and it also greatly improves process_mrelease().
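For illustration, the core idea of patch 2 is that the dying task's exit
path unmaps VMAs from the low end of the address space, so the reaper
starts at the high end and walks the maple tree in reverse. A simplified
sketch of the reaper side (kernel context assumed, not compilable on its
own; reap_in_reverse is a hypothetical name, the real change is inside
__oom_reap_task_mm()):

```
/* Reap from the high end of the address space while exit_mmap()
 * works from the low end, so the two walkers rarely contend on
 * the same page table locks.
 */
static bool reap_in_reverse(struct mm_struct *mm)
{
	struct vm_area_struct *vma;
	MA_STATE(mas, &mm->mm_mt, ULONG_MAX, ULONG_MAX);

	mas_for_each_rev(&mas, vma, 0) {
		if (vma->vm_flags & (VM_HUGETLB | VM_PFNMAP))
			continue;
		/* ... unmap_page_range() for this vma ... */
	}
	return true;
}
```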
---
v6 -> v7:
- Thawing the victim process prevents the OOM killer from being blocked. [10]
- Remove report tags
v5 -> v6:
- Use mas_for_each_rev() for VMA traversal [6]
- Simplify the judgment of whether to delay in queue_oom_reaper() [7]
- Refine changelog to better capture the essence of the changes [8]
- Use READ_ONCE(tsk->frozen) instead of checking mm and additional
checks inside for_each_process(), as it is sufficient [9]
- Add report tags because fengbaopeng and tianxiaobin reported the
high load issue of the reaper
v4 -> v5:
- Detect frozen state directly, avoid special futex handling. [3]
- Use mas_find_rev() for VMA traversal to avoid skipping entries. [4]
- Only check should_delay_oom_reap() in queue_oom_reaper(). [5]
v3 -> v4:
- Renamed functions and parameters for clarity. [2]
- Added should_delay_oom_reap() for OOM reap decisions.
- Traverse maple tree in reverse for improved behavior.
v2 -> v3:
- Fixed Subject prefix error.
v1 -> v2:
- Check robust_list for all threads, not just one. [1]
Reference:
[1] https://lore.kernel.org/linux-mm/u3mepw3oxj7cywezna4v72y2hvyc7bafkmsbirsbfuf34zpa7c@b23sc3rvp2gp/
[2] https://lore.kernel.org/linux-mm/87cy99g3k6.ffs@tglx/
[3] https://lore.kernel.org/linux-mm/aKRWtjRhE_HgFlp2@tiehlicka/
[4] https://lore.kernel.org/linux-mm/26larxehoe3a627s4fxsqghriwctays4opm4hhme3uk7ybjc5r@pmwh4s4yv7lm/
[5] https://lore.kernel.org/linux-mm/d5013a33-c08a-44c5-a67f-9dc8fd73c969@lucifer.local/
[6] https://lore.kernel.org/linux-mm/nwh7gegmvoisbxlsfwslobpbqku376uxdj2z32owkbftvozt3x@4dfet73fh2yy/
[7] https://lore.kernel.org/linux-mm/af4edeaf-d3c9-46a9-a300-dbaf5936e7d6@lucifer.local/
[8] https://lore.kernel.org/linux-mm/aK71W1ITmC_4I_RY@tiehlicka/
[9] https://lore.kernel.org/linux-mm/jzzdeczuyraup2zrspl6b74muf3bly2a3acejfftcldfmz4ekk@s5mcbeim34my/
[10] https://lore.kernel.org/linux-mm/aLWmf6qZHTA0hMpU@tiehlicka/
The earlier post:
v6: https://lore.kernel.org/linux-mm/20250829065550.29571-1-zhongjinji@honor.com/
v5: https://lore.kernel.org/linux-mm/20250825133855.30229-1-zhongjinji@honor.com/
v4: https://lore.kernel.org/linux-mm/20250814135555.17493-1-zhongjinji@honor.com/
v3: https://lore.kernel.org/linux-mm/20250804030341.18619-1-zhongjinji@honor.com/
v2: https://lore.kernel.org/linux-mm/20250801153649.23244-1-zhongjinji@honor.com/
v1: https://lore.kernel.org/linux-mm/20250731102904.8615-1-zhongjinji@honor.com/
zhongjinji (2):
mm/oom_kill: Thaw victim on a per-process basis instead of per-thread
mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse
order
mm/oom_kill.c | 29 ++++++++++++++++++++++++-----
1 file changed, 24 insertions(+), 5 deletions(-)
--
2.17.1
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v7 1/2] mm/oom_kill: Thaw victim on a per-process basis instead of per-thread
2025-09-03 9:27 [PATCH v7 0/2] Improvements for victim thawing and reaper VMA traversal zhongjinji
@ 2025-09-03 9:27 ` zhongjinji
2025-09-03 12:27 ` Michal Hocko
2025-09-03 9:27 ` [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order zhongjinji
1 sibling, 1 reply; 15+ messages in thread
From: zhongjinji @ 2025-09-03 9:27 UTC (permalink / raw)
To: mhocko
Cc: rientjes, shakeel.butt, akpm, linux-mm, linux-kernel, tglx,
liam.howlett, lorenzo.stoakes, surenb, liulu.liu, feng.han,
zhongjinji
The OOM killer is a mechanism that selects and kills processes when the
system runs out of memory, in order to reclaim resources and keep the
system stable. However, the oom victim cannot terminate on its own when
it is frozen, because __thaw_task() only thaws one thread of the victim,
while the other threads remain in the frozen state.
This change thaws the entire victim process when OOM occurs, ensuring
that the oom victim can terminate on its own.
Signed-off-by: zhongjinji <zhongjinji@honor.com>
---
mm/oom_kill.c | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 25923cfec9c6..3caaafc896d4 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -747,6 +747,19 @@ static inline void queue_oom_reaper(struct task_struct *tsk)
}
#endif /* CONFIG_MMU */
+static void thaw_oom_process(struct task_struct *tsk)
+{
+ struct task_struct *t;
+
+ /* protects against __exit_signal() */
+ read_lock(&tasklist_lock);
+ for_each_thread(tsk, t) {
+ set_tsk_thread_flag(t, TIF_MEMDIE);
+ __thaw_task(t);
+ }
+ read_unlock(&tasklist_lock);
+}
+
/**
* mark_oom_victim - mark the given task as OOM victim
* @tsk: task to mark
@@ -772,12 +785,12 @@ static void mark_oom_victim(struct task_struct *tsk)
mmgrab(tsk->signal->oom_mm);
/*
- * Make sure that the task is woken up from uninterruptible sleep
+ * Make sure that the process is woken up from uninterruptible sleep
* if it is frozen because OOM killer wouldn't be able to free
* any memory and livelock. freezing_slow_path will tell the freezer
- * that TIF_MEMDIE tasks should be ignored.
+ * that TIF_MEMDIE threads should be ignored.
*/
- __thaw_task(tsk);
+ thaw_oom_process(tsk);
atomic_inc(&oom_victims);
cred = get_task_cred(tsk);
trace_mark_victim(tsk, cred->uid.val);
--
2.17.1
* [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order
2025-09-03 9:27 [PATCH v7 0/2] Improvements for victim thawing and reaper VMA traversal zhongjinji
2025-09-03 9:27 ` [PATCH v7 1/2] mm/oom_kill: Thaw victim on a per-process basis instead of per-thread zhongjinji
@ 2025-09-03 9:27 ` zhongjinji
2025-09-03 12:58 ` Michal Hocko
2025-09-04 23:50 ` [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order Shakeel Butt
1 sibling, 2 replies; 15+ messages in thread
From: zhongjinji @ 2025-09-03 9:27 UTC (permalink / raw)
To: mhocko
Cc: rientjes, shakeel.butt, akpm, linux-mm, linux-kernel, tglx,
liam.howlett, lorenzo.stoakes, surenb, liulu.liu, feng.han,
zhongjinji
Although the oom_reaper is delayed, which gives the oom victim a chance
to clean up its address space, this might take a while, especially for
processes with a large address space footprint. In those cases the
oom_reaper might start racing with the dying task and compete for shared
resources - e.g. page table lock contention has been observed.
Reduce those races by reaping the oom victim from the other end of the
address space.
It is also a significant improvement for process_mrelease(). When a process
is killed, process_mrelease is used to reap the killed process and often
runs concurrently with the dying task. The test data shows that after
applying the patch, lock contention is greatly reduced during the procedure
of reaping the killed process.
Without the patch:
|--99.74%-- oom_reaper
| |--76.67%-- unmap_page_range
| | |--33.70%-- __pte_offset_map_lock
| | | |--98.46%-- _raw_spin_lock
| | |--27.61%-- free_swap_and_cache_nr
| | |--16.40%-- folio_remove_rmap_ptes
| | |--12.25%-- tlb_flush_mmu
| |--12.61%-- tlb_finish_mmu
With the patch:
|--98.84%-- oom_reaper
| |--53.45%-- unmap_page_range
| | |--24.29%-- [hit in function]
| | |--48.06%-- folio_remove_rmap_ptes
| | |--17.99%-- tlb_flush_mmu
| | |--1.72%-- __pte_offset_map_lock
| |--30.43%-- tlb_finish_mmu
Signed-off-by: zhongjinji <zhongjinji@honor.com>
---
mm/oom_kill.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 3caaafc896d4..540b1e5e0e46 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -516,7 +516,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
{
struct vm_area_struct *vma;
bool ret = true;
- VMA_ITERATOR(vmi, mm, 0);
+ MA_STATE(mas, &mm->mm_mt, ULONG_MAX, ULONG_MAX);
/*
* Tell all users of get_user/copy_from_user etc... that the content
@@ -526,7 +526,13 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
*/
set_bit(MMF_UNSTABLE, &mm->flags);
- for_each_vma(vmi, vma) {
+ /*
+ * It might start racing with the dying task and compete for shared
+ * resources - e.g. page table lock contention has been observed.
+ * Reduce those races by reaping the oom victim from the other end
+ * of the address space.
+ */
+ mas_for_each_rev(&mas, vma, 0) {
if (vma->vm_flags & (VM_HUGETLB|VM_PFNMAP))
continue;
--
2.17.1
* Re: [PATCH v7 1/2] mm/oom_kill: Thaw victim on a per-process basis instead of per-thread
2025-09-03 9:27 ` [PATCH v7 1/2] mm/oom_kill: Thaw victim on a per-process basis instead of per-thread zhongjinji
@ 2025-09-03 12:27 ` Michal Hocko
2025-09-04 13:08 ` zhongjinji
0 siblings, 1 reply; 15+ messages in thread
From: Michal Hocko @ 2025-09-03 12:27 UTC (permalink / raw)
To: zhongjinji
Cc: rientjes, shakeel.butt, akpm, linux-mm, linux-kernel, tglx,
liam.howlett, lorenzo.stoakes, surenb, liulu.liu, feng.han
On Wed 03-09-25 17:27:28, zhongjinji wrote:
> OOM killer is a mechanism that selects and kills processes when the system
> runs out of memory to reclaim resources and keep the system stable.
> However, the oom victim cannot terminate on its own when it is frozen,
> because __thaw_task() only thaws one thread of the victim, while
> the other threads remain in the frozen state.
>
> This change will thaw the entire victim process when OOM occurs,
> ensuring that the oom victim can terminate on its own.
>
> Signed-off-by: zhongjinji <zhongjinji@honor.com>
> ---
> mm/oom_kill.c | 19 ++++++++++++++++---
> 1 file changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 25923cfec9c6..3caaafc896d4 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -747,6 +747,19 @@ static inline void queue_oom_reaper(struct task_struct *tsk)
> }
> #endif /* CONFIG_MMU */
>
> +static void thaw_oom_process(struct task_struct *tsk)
> +{
> + struct task_struct *t;
> +
> + /* protects against __exit_signal() */
> + read_lock(&tasklist_lock);
> + for_each_thread(tsk, t) {
> + set_tsk_thread_flag(t, TIF_MEMDIE);
> + __thaw_task(t);
> + }
> + read_unlock(&tasklist_lock);
> +}
> +
Sorry, I was probably not clear enough. I meant thaw_process should live
in the freezer proper kernel/freezer.c and oom should be just user.
Please make sure that freezer maintainers are involved and approve the
change.
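A rough sketch of that suggestion might look like the following, with
the thaw helper living in kernel/freezer.c and the OOM-specific
TIF_MEMDIE handling kept in the caller (hypothetical placement and
naming, not an actual patch, and not compilable outside a kernel tree):

```
/* kernel/freezer.c */
void thaw_process(struct task_struct *tsk)
{
	struct task_struct *t;

	/* protects against __exit_signal() */
	read_lock(&tasklist_lock);
	for_each_thread(tsk, t)
		__thaw_task(t);
	read_unlock(&tasklist_lock);
}

/* mm/oom_kill.c would then stay a plain user of the freezer API,
 * setting TIF_MEMDIE on each thread itself before (or while)
 * calling thaw_process(tsk).
 */
```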
--
Michal Hocko
SUSE Labs
* Re: [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order
2025-09-03 9:27 ` [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order zhongjinji
@ 2025-09-03 12:58 ` Michal Hocko
2025-09-03 19:02 ` Liam R. Howlett
2025-09-04 12:24 ` zhongjinji
2025-09-04 23:50 ` [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order Shakeel Butt
1 sibling, 2 replies; 15+ messages in thread
From: Michal Hocko @ 2025-09-03 12:58 UTC (permalink / raw)
To: zhongjinji
Cc: rientjes, shakeel.butt, akpm, linux-mm, linux-kernel, tglx,
liam.howlett, lorenzo.stoakes, surenb, liulu.liu, feng.han
On Wed 03-09-25 17:27:29, zhongjinji wrote:
> Although the oom_reaper is delayed and it gives the oom victim chance to
> clean up its address space this might take a while especially for
> processes with a large address space footprint. In those cases
> oom_reaper might start racing with the dying task and compete for shared
> resources - e.g. page table lock contention has been observed.
>
> Reduce those races by reaping the oom victim from the other end of the
> address space.
>
> It is also a significant improvement for process_mrelease(). When a process
> is killed, process_mrelease is used to reap the killed process and often
> runs concurrently with the dying task. The test data shows that after
> applying the patch, lock contention is greatly reduced during the procedure
> of reaping the killed process.
Thank you this is much better!
> Without the patch:
> |--99.74%-- oom_reaper
> | |--76.67%-- unmap_page_range
> | | |--33.70%-- __pte_offset_map_lock
> | | | |--98.46%-- _raw_spin_lock
> | | |--27.61%-- free_swap_and_cache_nr
> | | |--16.40%-- folio_remove_rmap_ptes
> | | |--12.25%-- tlb_flush_mmu
> | |--12.61%-- tlb_finish_mmu
>
> With the patch:
> |--98.84%-- oom_reaper
> | |--53.45%-- unmap_page_range
> | | |--24.29%-- [hit in function]
> | | |--48.06%-- folio_remove_rmap_ptes
> | | |--17.99%-- tlb_flush_mmu
> | | |--1.72%-- __pte_offset_map_lock
> | |--30.43%-- tlb_finish_mmu
Just curious. Do I read this correctly that the overall speedup is
mostly eaten by contention over tlb_finish_mmu?
> Signed-off-by: zhongjinji <zhongjinji@honor.com>
Anyway, the change on its own makes sense to me
Acked-by: Michal Hocko <mhocko@suse.com>
Thanks for working on the changelog improvements.
> ---
> mm/oom_kill.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 3caaafc896d4..540b1e5e0e46 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -516,7 +516,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
> {
> struct vm_area_struct *vma;
> bool ret = true;
> - VMA_ITERATOR(vmi, mm, 0);
> + MA_STATE(mas, &mm->mm_mt, ULONG_MAX, ULONG_MAX);
>
> /*
> * Tell all users of get_user/copy_from_user etc... that the content
> @@ -526,7 +526,13 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
> */
> set_bit(MMF_UNSTABLE, &mm->flags);
>
> - for_each_vma(vmi, vma) {
> + /*
> + * It might start racing with the dying task and compete for shared
> + * resources - e.g. page table lock contention has been observed.
> + * Reduce those races by reaping the oom victim from the other end
> + * of the address space.
> + */
> + mas_for_each_rev(&mas, vma, 0) {
> if (vma->vm_flags & (VM_HUGETLB|VM_PFNMAP))
> continue;
>
> --
> 2.17.1
>
--
Michal Hocko
SUSE Labs
* Re: [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order
2025-09-03 12:58 ` Michal Hocko
@ 2025-09-03 19:02 ` Liam R. Howlett
2025-09-04 12:21 ` Michal Hocko
2025-09-04 12:47 ` zhongjinji
2025-09-04 12:24 ` zhongjinji
1 sibling, 2 replies; 15+ messages in thread
From: Liam R. Howlett @ 2025-09-03 19:02 UTC (permalink / raw)
To: Michal Hocko
Cc: zhongjinji, rientjes, shakeel.butt, akpm, linux-mm, linux-kernel,
tglx, lorenzo.stoakes, surenb, liulu.liu, feng.han
* Michal Hocko <mhocko@suse.com> [250903 08:58]:
> On Wed 03-09-25 17:27:29, zhongjinji wrote:
> > Although the oom_reaper is delayed and it gives the oom victim chance to
> > clean up its address space this might take a while especially for
> > processes with a large address space footprint. In those cases
> > oom_reaper might start racing with the dying task and compete for shared
> > resources - e.g. page table lock contention has been observed.
> >
> > Reduce those races by reaping the oom victim from the other end of the
> > address space.
> >
> > It is also a significant improvement for process_mrelease(). When a process
> > is killed, process_mrelease is used to reap the killed process and often
> > runs concurrently with the dying task. The test data shows that after
> > applying the patch, lock contention is greatly reduced during the procedure
> > of reaping the killed process.
>
> Thank you this is much better!
>
> > Without the patch:
> > |--99.74%-- oom_reaper
> > | |--76.67%-- unmap_page_range
> > | | |--33.70%-- __pte_offset_map_lock
> > | | | |--98.46%-- _raw_spin_lock
> > | | |--27.61%-- free_swap_and_cache_nr
> > | | |--16.40%-- folio_remove_rmap_ptes
> > | | |--12.25%-- tlb_flush_mmu
> > | |--12.61%-- tlb_finish_mmu
> >
> > With the patch:
> > |--98.84%-- oom_reaper
> > | |--53.45%-- unmap_page_range
> > | | |--24.29%-- [hit in function]
> > | | |--48.06%-- folio_remove_rmap_ptes
> > | | |--17.99%-- tlb_flush_mmu
> > | | |--1.72%-- __pte_offset_map_lock
> > | |--30.43%-- tlb_finish_mmu
>
> Just curious. Do I read this correctly that the overall speedup is
> mostly eaten by contention over tlb_finish_mmu?
The tlb_finish_mmu() taking less time indicates that it's probably not
doing much work, afaict. These numbers would be better if exit_mmap()
was also added to show a more complete view of how the system is
affected - I suspect the tlb_finish_mmu time will have disappeared from
that side of things.
The comments in this code contain many arch-specific statements, which
makes me wonder whether this is safe (probably?) and beneficial for
everyone. At the least, it would be worth mentioning which arch was
used for the benchmark - I am guessing arm64 considering the talk of
android; coincidentally, arm64 would benefit the most fwiu.
mmu_notifier_release(mm) being called early in the exit_mmap() path
should cause the mmu notifiers to be non-blocking (according to the
comment in the v6.0 source of exit_mmap [1]).
>
> > Signed-off-by: zhongjinji <zhongjinji@honor.com>
>
> Anyway, the change on its own makes sense to me
> Acked-by: Michal Hocko <mhocko@suse.com>
>
> Thanks for working on the changelog improvements.
[1]. https://elixir.bootlin.com/linux/v6.0.19/source/mm/mmap.c#L3089
...
Thanks,
Liam
* Re: [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order
2025-09-03 19:02 ` Liam R. Howlett
@ 2025-09-04 12:21 ` Michal Hocko
2025-09-05 2:12 ` Liam R. Howlett
2025-09-04 12:47 ` zhongjinji
1 sibling, 1 reply; 15+ messages in thread
From: Michal Hocko @ 2025-09-04 12:21 UTC (permalink / raw)
To: Liam R. Howlett
Cc: zhongjinji, rientjes, shakeel.butt, akpm, linux-mm, linux-kernel,
tglx, lorenzo.stoakes, surenb, liulu.liu, feng.han
On Wed 03-09-25 15:02:34, Liam R. Howlett wrote:
> * Michal Hocko <mhocko@suse.com> [250903 08:58]:
> > On Wed 03-09-25 17:27:29, zhongjinji wrote:
[...]
> mmu_notifier_release(mm) being called early in the exit_mmap() path
> should cause the mmu notifiers to be non-blocking (according to the
> comment in the v6.0 source of exit_mmap [1]).
I am not sure I follow you here. How does this relate to the actual
direction of the address space freeing?
> > > Signed-off-by: zhongjinji <zhongjinji@honor.com>
> >
> > Anyway, the change on its own makes sense to me
> > Acked-by: Michal Hocko <mhocko@suse.com>
> >
> > Thanks for working on the changelog improvements.
>
> [1]. https://elixir.bootlin.com/linux/v6.0.19/source/mm/mmap.c#L3089
>
> ...
>
> Thanks,
> Liam
--
Michal Hocko
SUSE Labs
* Re: [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order
2025-09-03 12:58 ` Michal Hocko
2025-09-03 19:02 ` Liam R. Howlett
@ 2025-09-04 12:24 ` zhongjinji
2025-09-04 14:48 ` Michal Hocko
1 sibling, 1 reply; 15+ messages in thread
From: zhongjinji @ 2025-09-04 12:24 UTC (permalink / raw)
To: mhocko
Cc: akpm, feng.han, liam.howlett, linux-kernel, linux-mm, liulu.liu,
lorenzo.stoakes, rientjes, shakeel.butt, surenb, tglx,
zhongjinji
> On Wed 03-09-25 17:27:29, zhongjinji wrote:
> > Although the oom_reaper is delayed and it gives the oom victim chance to
> > clean up its address space this might take a while especially for
> > processes with a large address space footprint. In those cases
> > oom_reaper might start racing with the dying task and compete for shared
> > resources - e.g. page table lock contention has been observed.
> >
> > Reduce those races by reaping the oom victim from the other end of the
> > address space.
> >
> > It is also a significant improvement for process_mrelease(). When a process
> > is killed, process_mrelease is used to reap the killed process and often
> > runs concurrently with the dying task. The test data shows that after
> > applying the patch, lock contention is greatly reduced during the procedure
> > of reaping the killed process.
>
> Thank you this is much better!
>
> > Without the patch:
> > |--99.74%-- oom_reaper
> > | |--76.67%-- unmap_page_range
> > | | |--33.70%-- __pte_offset_map_lock
> > | | | |--98.46%-- _raw_spin_lock
> > | | |--27.61%-- free_swap_and_cache_nr
> > | | |--16.40%-- folio_remove_rmap_ptes
> > | | |--12.25%-- tlb_flush_mmu
> > | |--12.61%-- tlb_finish_mmu
> >
> > With the patch:
> > |--98.84%-- oom_reaper
> > | |--53.45%-- unmap_page_range
> > | | |--24.29%-- [hit in function]
> > | | |--48.06%-- folio_remove_rmap_ptes
> > | | |--17.99%-- tlb_flush_mmu
> > | | |--1.72%-- __pte_offset_map_lock
> > | |--30.43%-- tlb_finish_mmu
>
> Just curious. Do I read this correctly that the overall speedup is
> mostly eaten by contention over tlb_finish_mmu?
Here is a more detailed perf report, which includes the execution times
of some important functions. I believe it will address your concerns.
tlb_flush_mmu and tlb_finish_mmu perform similar tasks; they both mainly
call free_pages_and_swap_cache, and its execution time is related to the
number of anonymous pages being reclaimed.
In previous tests, the pte spinlock contention was so obvious that I
overlooked other issues.
Without the patch
|--99.50%-- oom_reaper
| |--0.50%-- [hit in function]
| |--71.06%-- unmap_page_range
| | |--41.75%-- __pte_offset_map_lock
| | |--23.23%-- folio_remove_rmap_ptes
| | |--20.34%-- tlb_flush_mmu
| | | free_pages_and_swap_cache
| | |--2.23%-- folio_mark_accessed
| | |--1.19%-- free_swap_and_cache_nr
| | |--1.13%-- __tlb_remove_folio_pages
| | |--0.76%-- _raw_spin_lock
| |--16.02%-- tlb_finish_mmu
| | |--26.08%-- [hit in function]
| | |--72.97%-- free_pages_and_swap_cache
| | |--0.67%-- free_pages
| |--2.27%-- folio_remove_rmap_ptes
| |--1.54%-- __tlb_remove_folio_pages
| | |--83.47%-- [hit in function]
| |--0.51%-- __pte_offset_map_lock
Period (ms) Symbol
79.180156 oom_reaper
56.321241 unmap_page_range
23.891714 __pte_offset_map_lock
20.711614 free_pages_and_swap_cache
12.831778 tlb_finish_mmu
11.443282 tlb_flush_mmu
With the patch
|--99.54%-- oom_reaper
| |--0.29%-- [hit in function]
| |--57.91%-- unmap_page_range
| | |--20.42%-- [hit in function]
| | |--53.35%-- folio_remove_rmap_ptes
| | | |--5.85%-- [hit in function]
| | |--10.49%-- __pte_offset_map_lock
| | | |--5.17%-- [hit in function]
| | |--8.40%-- tlb_flush_mmu
| | |--2.35%-- _raw_spin_lock
| | |--1.89%-- folio_mark_accessed
| | |--1.64%-- __tlb_remove_folio_pages
| | | |--57.95%-- [hit in function]
| |--36.34%-- tlb_finish_mmu
| | |--14.70%-- [hit in function]
| | |--84.85%-- free_pages_and_swap_cache
| | | |--2.32%-- [hit in function]
| | |--0.37%-- free_pages
| | --0.08%-- free_unref_page
| |--1.94%-- folio_remove_rmap_ptes
| |--1.68%-- __tlb_remove_folio_pages
| |--0.93%-- __pte_offset_map_lock
| |--0.43%-- folio_mark_accessed
Period (ms) Symbol
49.580521 oom_reaper
28.781660 unmap_page_range
18.105898 tlb_finish_mmu
17.688397 free_pages_and_swap_cache
3.471721 __pte_offset_map_lock
2.412970 tlb_flush_mmu
> > Signed-off-by: zhongjinji <zhongjinji@honor.com>
>
> Anyway, the change on its own makes sense to me
> Acked-by: Michal Hocko <mhocko@suse.com>
>
> Thanks for working on the changelog improvements.
* Re: [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order
2025-09-03 19:02 ` Liam R. Howlett
2025-09-04 12:21 ` Michal Hocko
@ 2025-09-04 12:47 ` zhongjinji
1 sibling, 0 replies; 15+ messages in thread
From: zhongjinji @ 2025-09-04 12:47 UTC (permalink / raw)
To: liam.howlett
Cc: akpm, feng.han, linux-kernel, linux-mm, liulu.liu,
lorenzo.stoakes, mhocko, rientjes, shakeel.butt, surenb, tglx,
zhongjinji
> > On Wed 03-09-25 17:27:29, zhongjinji wrote:
> > > Although the oom_reaper is delayed and it gives the oom victim chance to
> > > clean up its address space this might take a while especially for
> > > processes with a large address space footprint. In those cases
> > > oom_reaper might start racing with the dying task and compete for shared
> > > resources - e.g. page table lock contention has been observed.
> > >
> > > Reduce those races by reaping the oom victim from the other end of the
> > > address space.
> > >
> > > It is also a significant improvement for process_mrelease(). When a process
> > > is killed, process_mrelease is used to reap the killed process and often
> > > runs concurrently with the dying task. The test data shows that after
> > > applying the patch, lock contention is greatly reduced during the procedure
> > > of reaping the killed process.
> >
> > Thank you this is much better!
> >
> > > Without the patch:
> > > |--99.74%-- oom_reaper
> > > | |--76.67%-- unmap_page_range
> > > | | |--33.70%-- __pte_offset_map_lock
> > > | | | |--98.46%-- _raw_spin_lock
> > > | | |--27.61%-- free_swap_and_cache_nr
> > > | | |--16.40%-- folio_remove_rmap_ptes
> > > | | |--12.25%-- tlb_flush_mmu
> > > | |--12.61%-- tlb_finish_mmu
> > >
> > > With the patch:
> > > |--98.84%-- oom_reaper
> > > | |--53.45%-- unmap_page_range
> > > | | |--24.29%-- [hit in function]
> > > | | |--48.06%-- folio_remove_rmap_ptes
> > > | | |--17.99%-- tlb_flush_mmu
> > > | | |--1.72%-- __pte_offset_map_lock
> > > | |--30.43%-- tlb_finish_mmu
> >
> > Just curious. Do I read this correctly that the overall speedup is
> > mostly eaten by contention over tlb_finish_mmu?
>
> The tlb_finish_mmu() taking less time indicates that it's probably not
> doing much work, afaict. These numbers would be better if exit_mmap()
> was also added to show a more complete view of how the system is
> affected - I suspect the tlb_finish_mmu time will have disappeared from
> that side of things.
Yes, it would indeed be better to have exit_mmap() data, but simpleperf
does not support capturing perf data from multiple processes. I'll try
to find a solution.
> Comments in the code of this stuff has many arch specific statements,
> which makes me wonder if this is safe (probably?) and beneficial for
> everyone? At the least, it would be worth mentioning which arch was
> used for the benchmark - I am guessing arm64 considering the talk of
> android, coincidently arm64 would benefit the most fwiu.
Yes, it's on arm64. Thank you, I will mention it.
> mmu_notifier_release(mm) is called early in the exit_mmap() path should
> cause the mmu notifiers to be non-blocking (according to the comment in
> v6.0 source of exit_mmap [1].
>
> >
> > > Signed-off-by: zhongjinji <zhongjinji@honor.com>
> >
> > Anyway, the change on its own makes sense to me
> > Acked-by: Michal Hocko <mhocko@suse.com>
> >
> > Thanks for working on the changelog improvements.
>
> [1]. https://elixir.bootlin.com/linux/v6.0.19/source/mm/mmap.c#L3089
>
> ...
>
> Thanks,
> Liam
* Re: [PATCH v7 1/2] mm/oom_kill: Thaw victim on a per-process basis instead of per-thread
2025-09-03 12:27 ` Michal Hocko
@ 2025-09-04 13:08 ` zhongjinji
0 siblings, 0 replies; 15+ messages in thread
From: zhongjinji @ 2025-09-04 13:08 UTC (permalink / raw)
To: mhocko
Cc: akpm, feng.han, liam.howlett, linux-kernel, linux-mm, liulu.liu,
lorenzo.stoakes, rientjes, shakeel.butt, surenb, tglx,
zhongjinji, rafael, pavel, linux-pm
> > OOM killer is a mechanism that selects and kills processes when the system
> > runs out of memory to reclaim resources and keep the system stable.
> > However, the oom victim cannot terminate on its own when it is frozen,
> > because __thaw_task() only thaws one thread of the victim, while
> > the other threads remain in the frozen state.
> >
> > This change will thaw the entire victim process when OOM occurs,
> > ensuring that the oom victim can terminate on its own.
> >
> > Signed-off-by: zhongjinji <zhongjinji@honor.com>
> > ---
> > mm/oom_kill.c | 19 ++++++++++++++++---
> > 1 file changed, 16 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> > index 25923cfec9c6..3caaafc896d4 100644
> > --- a/mm/oom_kill.c
> > +++ b/mm/oom_kill.c
> > @@ -747,6 +747,19 @@ static inline void queue_oom_reaper(struct task_struct *tsk)
> > }
> > #endif /* CONFIG_MMU */
> >
> > +static void thaw_oom_process(struct task_struct *tsk)
> > +{
> > + struct task_struct *t;
> > +
> > + /* protects against __exit_signal() */
> > + read_lock(&tasklist_lock);
> > + for_each_thread(tsk, t) {
> > + set_tsk_thread_flag(t, TIF_MEMDIE);
> > + __thaw_task(t);
> > + }
> > + read_unlock(&tasklist_lock);
> > +}
> > +
>
> Sorry, I was probably not clear enough. I meant thaw_process should live
> in the freezer proper kernel/freezer.c and oom should be just user.
> Please make sure that freezer maintainers are involved and approve the
> change.
Thank you, I got it. It would indeed be better to have thaw_process in
kernel/freezer.c. Let me CC the freezer maintainers before updating
with the new changes; maybe they will have other suggestions as well.
>
> --
> Michal Hocko
> SUSE Labs
* Re: [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order
2025-09-04 12:24 ` zhongjinji
@ 2025-09-04 14:48 ` Michal Hocko
2025-09-08 12:15 ` [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA zhongjinji
0 siblings, 1 reply; 15+ messages in thread
From: Michal Hocko @ 2025-09-04 14:48 UTC (permalink / raw)
To: zhongjinji
Cc: akpm, feng.han, liam.howlett, linux-kernel, linux-mm, liulu.liu,
lorenzo.stoakes, rientjes, shakeel.butt, surenb, tglx
On Thu 04-09-25 20:24:38, zhongjinji wrote:
> > On Wed 03-09-25 17:27:29, zhongjinji wrote:
> > > Although the oom_reaper is delayed and it gives the oom victim chance to
> > > clean up its address space this might take a while especially for
> > > processes with a large address space footprint. In those cases
> > > oom_reaper might start racing with the dying task and compete for shared
> > > resources - e.g. page table lock contention has been observed.
> > >
> > > Reduce those races by reaping the oom victim from the other end of the
> > > address space.
> > >
> > > It is also a significant improvement for process_mrelease(). When a process
> > > is killed, process_mrelease is used to reap the killed process and often
> > > runs concurrently with the dying task. The test data shows that after
> > > applying the patch, lock contention is greatly reduced during the procedure
> > > of reaping the killed process.
> >
> > Thank you this is much better!
> >
> > > Without the patch:
> > > |--99.74%-- oom_reaper
> > > | |--76.67%-- unmap_page_range
> > > | | |--33.70%-- __pte_offset_map_lock
> > > | | | |--98.46%-- _raw_spin_lock
> > > | | |--27.61%-- free_swap_and_cache_nr
> > > | | |--16.40%-- folio_remove_rmap_ptes
> > > | | |--12.25%-- tlb_flush_mmu
> > > | |--12.61%-- tlb_finish_mmu
> > >
> > > With the patch:
> > > |--98.84%-- oom_reaper
> > > | |--53.45%-- unmap_page_range
> > > | | |--24.29%-- [hit in function]
> > > | | |--48.06%-- folio_remove_rmap_ptes
> > > | | |--17.99%-- tlb_flush_mmu
> > > | | |--1.72%-- __pte_offset_map_lock
> > > | |--30.43%-- tlb_finish_mmu
> >
> > Just curious. Do I read this correctly that the overall speedup is
> > mostly eaten by contention over tlb_finish_mmu?
>
> Here is a more detailed perf report, which includes the execution times
> of some important functions. I believe it will address your concerns.
>
> tlb_flush_mmu and tlb_finish_mmu perform similar tasks; they both mainly
> call free_pages_and_swap_cache, and its execution time is related to the
> number of anonymous pages being reclaimed.
>
> In previous tests, the pte spinlock contention was so obvious that I
> overlooked other issues.
>
> Without the patch
> |--99.50%-- oom_reaper
> | |--0.50%-- [hit in function]
> | |--71.06%-- unmap_page_range
> | | |--41.75%-- __pte_offset_map_lock
> | | |--23.23%-- folio_remove_rmap_ptes
> | | |--20.34%-- tlb_flush_mmu
> | | | free_pages_and_swap_cache
> | | |--2.23%-- folio_mark_accessed
> | | |--1.19%-- free_swap_and_cache_nr
> | | |--1.13%-- __tlb_remove_folio_pages
> | | |--0.76%-- _raw_spin_lock
> | |--16.02%-- tlb_finish_mmu
> | | |--26.08%-- [hit in function]
> | | |--72.97%-- free_pages_and_swap_cache
> | | |--0.67%-- free_pages
> | |--2.27%-- folio_remove_rmap_ptes
> | |--1.54%-- __tlb_remove_folio_pages
> | | |--83.47%-- [hit in function]
> | |--0.51%-- __pte_offset_map_lock
>
> Period (ms) Symbol
> 79.180156 oom_reaper
> 56.321241 unmap_page_range
> 23.891714 __pte_offset_map_lock
> 20.711614 free_pages_and_swap_cache
> 12.831778 tlb_finish_mmu
> 11.443282 tlb_flush_mmu
>
> With the patch
> |--99.54%-- oom_reaper
> | |--0.29%-- [hit in function]
> | |--57.91%-- unmap_page_range
> | | |--20.42%-- [hit in function]
> | | |--53.35%-- folio_remove_rmap_ptes
> | | | |--5.85%-- [hit in function]
> | | |--10.49%-- __pte_offset_map_lock
> | | | |--5.17%-- [hit in function]
> | | |--8.40%-- tlb_flush_mmu
> | | |--2.35%-- _raw_spin_lock
> | | |--1.89%-- folio_mark_accessed
> | | |--1.64%-- __tlb_remove_folio_pages
> | | | |--57.95%-- [hit in function]
> | |--36.34%-- tlb_finish_mmu
> | | |--14.70%-- [hit in function]
> | | |--84.85%-- free_pages_and_swap_cache
> | | | |--2.32%-- [hit in function]
> | | |--0.37%-- free_pages
> | | --0.08%-- free_unref_page
> | |--1.94%-- folio_remove_rmap_ptes
> | |--1.68%-- __tlb_remove_folio_pages
> | |--0.93%-- __pte_offset_map_lock
> | |--0.43%-- folio_mark_accessed
>
> Period (ms) Symbol
> 49.580521 oom_reaper
> 28.781660 unmap_page_range
> 18.105898 tlb_finish_mmu
> 17.688397 free_pages_and_swap_cache
> 3.471721 __pte_offset_map_lock
> 2.412970 tlb_flush_mmu
Yes, this breakdown gives much more insight. Percentages are quite
misleading as the base is different. Could you also provide the cumulative
oom_reaper + exit_mmap (victim) time in both cases?
--
Michal Hocko
SUSE Labs
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order
2025-09-03 9:27 ` [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order zhongjinji
2025-09-03 12:58 ` Michal Hocko
@ 2025-09-04 23:50 ` Shakeel Butt
1 sibling, 0 replies; 15+ messages in thread
From: Shakeel Butt @ 2025-09-04 23:50 UTC (permalink / raw)
To: zhongjinji
Cc: mhocko, rientjes, akpm, linux-mm, linux-kernel, tglx,
liam.howlett, lorenzo.stoakes, surenb, liulu.liu, feng.han
On Wed, Sep 03, 2025 at 05:27:29PM +0800, zhongjinji wrote:
> Although the oom_reaper is delayed, giving the oom victim a chance to
> clean up its address space, this might take a while, especially for
> processes with a large address space footprint. In those cases
> oom_reaper might start racing with the dying task and compete for shared
> resources - e.g. page table lock contention has been observed.
>
> Reduce those races by reaping the oom victim from the other end of the
> address space.
>
> It is also a significant improvement for process_mrelease(). When a process
> is killed, process_mrelease is used to reap the killed process and often
> runs concurrently with the dying task. The test data shows that, after
> applying the patch, lock contention is greatly reduced while reaping the
> killed process.
>
> Without the patch:
> |--99.74%-- oom_reaper
> | |--76.67%-- unmap_page_range
> | | |--33.70%-- __pte_offset_map_lock
> | | | |--98.46%-- _raw_spin_lock
> | | |--27.61%-- free_swap_and_cache_nr
> | | |--16.40%-- folio_remove_rmap_ptes
> | | |--12.25%-- tlb_flush_mmu
> | |--12.61%-- tlb_finish_mmu
>
> With the patch:
> |--98.84%-- oom_reaper
> | |--53.45%-- unmap_page_range
> | | |--24.29%-- [hit in function]
> | | |--48.06%-- folio_remove_rmap_ptes
> | | |--17.99%-- tlb_flush_mmu
> | | |--1.72%-- __pte_offset_map_lock
> | |--30.43%-- tlb_finish_mmu
>
> Signed-off-by: zhongjinji <zhongjinji@honor.com>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
* Re: [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order
2025-09-04 12:21 ` Michal Hocko
@ 2025-09-05 2:12 ` Liam R. Howlett
2025-09-05 9:20 ` Michal Hocko
0 siblings, 1 reply; 15+ messages in thread
From: Liam R. Howlett @ 2025-09-05 2:12 UTC (permalink / raw)
To: Michal Hocko
Cc: zhongjinji, rientjes, shakeel.butt, akpm, linux-mm, linux-kernel,
tglx, lorenzo.stoakes, surenb, liulu.liu, feng.han
* Michal Hocko <mhocko@suse.com> [250904 08:21]:
> On Wed 03-09-25 15:02:34, Liam R. Howlett wrote:
> > * Michal Hocko <mhocko@suse.com> [250903 08:58]:
> > > On Wed 03-09-25 17:27:29, zhongjinji wrote:
> [...]
> > mmu_notifier_release(mm) being called early in the exit_mmap() path should
> > cause the mmu notifiers to be non-blocking (according to the comment in
> > the v6.0 source of exit_mmap() [1]).
>
> I am not sure I follow you here. How does this relate to the actual
> direction of the address space freeing?
It doesn't relate to the direction of the address space freeing; I think it
explains the perf data a bit.
exit_mmap() would see a decrease in mmu-related work while this
thread would see an increase, I think.
If they race, this thread gets virtually nothing while exit does all of
the work. If they don't race, then the work is split.
Does that make sense?
Thanks,
Liam
* Re: [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order
2025-09-05 2:12 ` Liam R. Howlett
@ 2025-09-05 9:20 ` Michal Hocko
0 siblings, 0 replies; 15+ messages in thread
From: Michal Hocko @ 2025-09-05 9:20 UTC (permalink / raw)
To: Liam R. Howlett
Cc: zhongjinji, rientjes, shakeel.butt, akpm, linux-mm, linux-kernel,
tglx, lorenzo.stoakes, surenb, liulu.liu, feng.han
On Thu 04-09-25 22:12:48, Liam R. Howlett wrote:
> * Michal Hocko <mhocko@suse.com> [250904 08:21]:
> > On Wed 03-09-25 15:02:34, Liam R. Howlett wrote:
> > > * Michal Hocko <mhocko@suse.com> [250903 08:58]:
> > > > On Wed 03-09-25 17:27:29, zhongjinji wrote:
> > [...]
> > > mmu_notifier_release(mm) being called early in the exit_mmap() path should
> > > cause the mmu notifiers to be non-blocking (according to the comment in
> > > the v6.0 source of exit_mmap() [1]).
> >
> > I am not sure I follow you here. How does this relate to the actual
> > direction of the address space freeing?
>
> It doesn't relate to the direction of the address space freeing; I think it
> explains the perf data a bit.
>
> exit_mmap() would see a decrease in mmu-related work while this
> thread would see an increase, I think.
>
> If they race, this thread gets virtually nothing while exit does all of
> the work. If they don't race, then the work is split.
>
> Does that make sense?
OK, I can see what you mean now. Thanks! I believe we need more data to
understand what is actually going on here. Having the overall duration
should help with that.
--
Michal Hocko
SUSE Labs
* Re: [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA
2025-09-04 14:48 ` Michal Hocko
@ 2025-09-08 12:15 ` zhongjinji
0 siblings, 0 replies; 15+ messages in thread
From: zhongjinji @ 2025-09-08 12:15 UTC (permalink / raw)
To: mhocko
Cc: akpm, feng.han, liam.howlett, linux-kernel, linux-mm, liulu.liu,
lorenzo.stoakes, rientjes, shakeel.butt, surenb, tglx,
zhongjinji
Since the perf report is quite complicated, let me summarize the key points
from it.
Conclusion:
Compared to the version without the patch, the reduction in total time
(exit_mmap plus reaper work) is roughly equal to the reduction in total
pte spinlock waiting time.
With the patch applied, the reaper performs certain functions more
often, such as folio_remove_rmap_ptes, but the time exit_mmap spends
in folio_remove_rmap_ptes decreases accordingly.
Summary of measurements (ms):
+-------------------------------+----------------+---------------+
| Category                      | With patch     | Without patch |
+-------------------------------+----------------+---------------+
| Total running time            | 132.6          | 167.1         |
| (exit_mmap + reaper work)     | 72.4 + 60.2    | 90.7 + 76.4   |
+-------------------------------+----------------+---------------+
| Time waiting for pte spinlock | 1.0            | 33.1          |
| (exit_mmap + reaper work)     | 0.4 + 0.6      | 10.0 + 23.1   |
+-------------------------------+----------------+---------------+
| folio_remove_rmap_ptes time   | 42.0           | 41.3          |
| (exit_mmap + reaper work)     | 18.4 + 23.6    | 22.4 + 18.9   |
+-------------------------------+----------------+---------------+
Report without patch:
Arch: arm64
Event: cpu-clock (type 1, config 0)
Samples: 6355
Event count: 90781175
do_exit
|--93.81%-- mmput
| |--99.46%-- exit_mmap
| | |--76.74%-- unmap_vmas
| | | |--9.14%-- [hit in function]
| | | |--34.25%-- tlb_flush_mmu
| | | |--31.13%-- folio_remove_rmap_ptes
| | | |--15.04%-- __pte_offset_map_lock
| | | |--5.43%-- free_swap_and_cache_nr
| | | |--1.80%-- _raw_spin_lock
| | | |--1.19%-- folio_mark_accessed
| | | |--0.84%-- __tlb_remove_folio_pages
| | | |--0.37%-- mas_find
| | | |--0.37%-- percpu_counter_add_batch
| | | |--0.20%-- __mod_lruvec_page_state
| | | |--0.13%-- f2fs_dirty_data_folio
| | | |--0.04%-- __rcu_read_unlock
| | | |--0.04%-- tlb_flush_rmaps
| | | | folio_remove_rmap_ptes
| | | --0.02%-- folio_mark_dirty
| | |--12.72%-- free_pgtables
| | | |--0.53%-- [hit in function]
| | | |--35.81%-- unlink_file_vma
| | | |--28.38%-- down_write
| | | |--17.11%-- unlink_anon_vmas
| | | |--5.70%-- anon_vma_interval_tree_remove
| | | |--3.71%-- mas_find
| | | |--3.18%-- up_write
| | | |--2.25%-- free_pgd_range
| | | |--1.19%-- vma_interval_tree_remove
| | | |--1.06%-- kmem_cache_free
| | |--2.65%-- folio_remove_rmap_ptes
| | |--2.50%-- __vm_area_free
| | | |--11.49%-- [hit in function]
| | | |--81.08%-- kmem_cache_free
| | | |--4.05%-- _raw_spin_unlock_irqrestore
| | | --3.38%-- anon_vma_name_free
| | |--1.03%-- folio_mark_accessed
| | |--0.96%-- __tlb_remove_folio_pages
| | |--0.54%-- mas_find
| | |--0.46%-- tlb_finish_mmu
| | | |--96.30%-- free_pages_and_swap_cache
| | | | |--80.77%-- release_pages
| | |--0.44%-- kmem_cache_free
| | |--0.39%-- __pte_offset_map_lock
| | |--0.30%-- task_work_add
| | |--0.19%-- __rcu_read_unlock
| | |--0.17%-- fput
| | |--0.13%-- __mt_destroy
| | | mt_destroy_walk
| | |--0.10%-- down_write
| | |--0.07%-- unlink_file_vma
| | |--0.05%-- percpu_counter_add_batch
| | |--0.02%-- free_swap_and_cache_nr
| | |--0.02%-- flush_tlb_batched_pending
| | |--0.02%-- uprobe_munmap
| | |--0.02%-- _raw_spin_unlock
| | |--0.02%-- unlink_anon_vmas
| | --0.02%-- up_write
| |--0.40%-- fput
| |--0.10%-- mas_find
| |--0.02%-- kgsl_gpumem_vm_close
| --0.02%-- __vm_area_free
|--5.19%-- task_work_run
|--0.42%-- exit_files
| put_files_struct
|--0.35%-- exit_task_namespaces
Children Self Command Symbol
90752605 0 #APM_light-weig do_exit
90752605 0 #APM_light-weig get_signal
85138600 0 #APM_light-weig __mmput
84681480 399980 #APM_light-weig exit_mmap
64982465 5942560 #APM_light-weig unmap_vmas
22598870 1599920 #APM_light-weig free_pages_and_swap_cache
22498875 3314120 #APM_light-weig folio_remove_rmap_ptes
10985165 1442785 #APM_light-weig _raw_spin_lock
10770890 57140 #APM_light-weig free_pgtables
10099495 399980 #APM_light-weig __pte_offset_map_lock
8199590 1285650 #APM_light-weig folios_put_refs
4756905 685680 #APM_light-weig free_unref_page_list
4714050 14285 #APM_light-weig task_work_run
4671195 199990 #APM_light-weig ____fput
4085510 214275 #APM_light-weig __fput
3914090 57140 #APM_light-weig unlink_file_vma
3542680 28570 #APM_light-weig free_swap_and_cache_nr
3214125 2114180 #APM_light-weig free_unref_folios
3142700 14285 #APM_light-weig swap_entry_range_free
2828430 2828430 #APM_light-weig kmem_cache_free
2714150 528545 #APM_light-weig zram_free_page
2528445 114280 #APM_light-weig zram_slot_free_notify
Arch: arm64
Event: cpu-clock (type 1, config 0)
Samples: 5353
Event count: 76467605
kthread
|--99.57%-- oom_reaper
| |--0.28%-- [hit in function]
| |--73.58%-- unmap_page_range
| | |--8.67%-- [hit in function]
| | |--41.59%-- __pte_offset_map_lock
| | |--29.47%-- folio_remove_rmap_ptes
| | |--16.11%-- tlb_flush_mmu
| | | free_pages_and_swap_cache
| | | |--9.49%-- [hit in function]
| | |--1.66%-- folio_mark_accessed
| | |--0.74%-- free_swap_and_cache_nr
| | |--0.69%-- __tlb_remove_folio_pages
| | |--0.41%-- __mod_lruvec_page_state
| | |--0.33%-- _raw_spin_lock
| | |--0.28%-- percpu_counter_add_batch
| | |--0.03%-- tlb_flush_mmu_tlbonly
| | --0.03%-- __rcu_read_unlock
| |--19.94%-- tlb_finish_mmu
| | |--23.24%-- [hit in function]
| | |--76.39%-- free_pages_and_swap_cache
| | |--0.28%-- free_pages
| | --0.09%-- release_pages
| |--3.21%-- folio_remove_rmap_ptes
| |--1.16%-- __tlb_remove_folio_pages
| |--1.16%-- folio_mark_accessed
| |--0.36%-- __pte_offset_map_lock
| |--0.28%-- mas_find
| --0.02%-- __rcu_read_unlock
|--0.17%-- tlb_finish_mmu
|--0.15%-- mas_find
|--0.06%-- memset
|--0.04%-- unmap_page_range
--0.02%-- tlb_gather_mmu
Children Self Command Symbol
76467605 0 unknown kthread
76139050 214275 unknown oom_reaper
56054340 4885470 unknown unmap_page_range
23570250 385695 unknown __pte_offset_map_lock
23341690 257130 unknown _raw_spin_lock
23113130 23113130 unknown queued_spin_lock_slowpath
20627540 1371360 unknown free_pages_and_swap_cache
19027620 614255 unknown release_pages
18956195 3399830 unknown folio_remove_rmap_ptes
15313520 3656960 unknown tlb_finish_mmu
11799410 11785125 unknown cgroup_rstat_updated
11285150 11256580 unknown _raw_spin_unlock_irqrestore
9028120 0 unknown tlb_flush_mmu
8613855 1342790 unknown folios_put_refs
5442585 485690 unknown free_unref_page_list
4299785 1614205 unknown free_unref_folios
3385545 1299935 unknown free_unref_page_commit
Report with patch:
Arch: arm64
Event: cpu-clock (type 1, config 0)
Samples: 5075
Event count: 72496375
|--99.98%-- do_notify_resume
| |--92.63%-- mmput
| | __mmput
| | |--99.57%-- exit_mmap
| | | |--0.79%-- [hit in function]
| | | |--76.43%-- unmap_vmas
| | | | |--8.39%-- [hit in function]
| | | | |--42.80%-- tlb_flush_mmu
| | | | | free_pages_and_swap_cache
| | | | |--34.08%-- folio_remove_rmap_ptes
| | | | |--9.51%-- free_swap_and_cache_nr
| | | | |--2.40%-- _raw_spin_lock
| | | | |--0.75%-- __tlb_remove_folio_pages
| | | | |--0.48%-- mas_find
| | | | |--0.36%-- __pte_offset_map_lock
| | | | |--0.34%-- percpu_counter_add_batch
| | | | |--0.34%-- folio_mark_accessed
| | | | |--0.20%-- __mod_lruvec_page_state
| | | | |--0.17%-- f2fs_dirty_data_folio
| | | | |--0.11%-- __rcu_read_unlock
| | | | |--0.03%-- _raw_spin_unlock
| | | | |--0.03%-- tlb_flush_rmaps
| | | | --0.03%-- uprobe_munmap
| | | |--14.19%-- free_pgtables
| | | |--2.52%-- __vm_area_free
| | | |--1.52%-- folio_remove_rmap_ptes
| | | |--0.83%-- mas_find
| | | |--0.81%-- __tlb_remove_folio_pages
| | | |--0.77%-- folio_mark_accessed
| | | |--0.41%-- kmem_cache_free
| | | |--0.36%-- task_work_add
| | | |--0.34%-- fput
| | | |--0.32%-- __pte_offset_map_lock
| | | |--0.15%-- __rcu_read_unlock
| | | |--0.15%-- __mt_destroy
| | | |--0.09%-- unlink_file_vma
| | | |--0.06%-- down_write
| | | |--0.04%-- lookup_swap_cgroup_id
| | | |--0.04%-- uprobe_munmap
| | | |--0.04%-- percpu_counter_add_batch
| | | |--0.04%-- up_write
| | | |--0.02%-- flush_tlb_batched_pending
| | | |--0.02%-- _raw_spin_unlock
| | | |--0.02%-- unlink_anon_vmas
| | | --0.02%-- tlb_finish_mmu
| | | free_unref_page
| | |--0.38%-- fput
| | --0.04%-- mas_find
| |--6.21%-- task_work_run
| |--0.47%-- exit_task_namespaces
| |--0.16%-- ____fput
| --0.04%-- mm_update_next_owner
Children Self Command Symbol
72482090 0 #APM6-IO get_signal
67139500 0 #APM6-IO __mmput
67139500 0 #APM6-IO mmput
66853800 528545 #APM6-IO exit_mmap
51097445 4285500 #APM6-IO unmap_vmas
21870335 0 #APM6-IO tlb_flush_mmu
21870335 1371360 #APM6-IO free_pages_and_swap_cache
20384695 485690 #APM6-IO release_pages
18427650 1814195 #APM6-IO folio_remove_rmap_ptes
13799310 13785025 #APM6-IO cgroup_rstat_updated
12842215 12842215 #APM6-IO _raw_spin_unlock_irqrestore
9485240 14285 #APM6-IO free_pgtables
7785325 428550 #APM6-IO folios_put_refs
4899755 642825 #APM6-IO free_unref_page_list
4856900 42855 #APM6-IO free_swap_and_cache_nr
4499775 14285 #APM6-IO task_work_run
4385495 114280 #APM6-IO ____fput
3971230 714250 #APM6-IO zram_free_page
3899805 14285 #APM6-IO swap_entry_range_free
3785525 185705 #APM6-IO zram_slot_free_notify
399980 399980 #APM6-IO __pte_offset_map_lock
Arch: arm64
Event: cpu-clock (type 1, config 0)
Samples: 4221
Event count: 60296985
Children Self Command Pid Tid Shared Object Symbol
kthread
|--99.53%-- oom_reaper
| |--0.17%-- [hit in function]
| |--55.77%-- unmap_page_range
| | |--20.49%-- [hit in function]
| | |--58.30%-- folio_remove_rmap_ptes
| | |--11.48%-- tlb_flush_mmu
| | |--3.33%-- folio_mark_accessed
| | |--2.65%-- __tlb_remove_folio_pages
| | |--1.37%-- _raw_spin_lock
| | |--0.68%-- __mod_lruvec_page_state
| | |--0.51%-- __pte_offset_map_lock
| | |--0.43%-- percpu_counter_add_batch
| | |--0.30%-- __rcu_read_unlock
| | |--0.13%-- free_swap_and_cache_nr
| | |--0.09%-- tlb_flush_mmu_tlbonly
| | --0.04%-- __rcu_read_lock
| |--32.21%-- tlb_finish_mmu
| | |--88.69%-- free_pages_and_swap_cache
| |--6.93%-- folio_remove_rmap_ptes
| |--1.90%-- __tlb_remove_folio_pages
| |--1.55%-- folio_mark_accessed
| |--0.69%-- __pte_offset_map_lock
| |--0.45%-- mas_find_rev
| | |--21.05%-- [hit in function]
| | --78.95%-- mas_prev_slot
| |--0.12%-- mas_prev_slot
| |--0.10%-- free_pages_and_swap_cache
| |--0.07%-- __rcu_read_unlock
| |--0.02%-- percpu_counter_add_batch
| --0.02%-- lookup_swap_cgroup_id
|--0.12%-- mas_find_rev
|--0.12%-- unmap_page_range
|--0.12%-- tlb_finish_mmu
|--0.09%-- tlb_gather_mmu
--0.02%-- memset
Children Self Command Symbol
60296985 0 unknown kthread
60011285 99995 unknown oom_reaper
33541180 6928225 unknown unmap_page_range
23670245 5414015 unknown folio_remove_rmap_ptes
21027520 1757055 unknown free_pages_and_swap_cache
19399030 2171320 unknown tlb_finish_mmu
18970480 885670 unknown release_pages
13785025 13785025 unknown cgroup_rstat_updated
11442285 11442285 unknown _raw_spin_unlock_irqrestore
7928175 1871335 unknown folios_put_refs
4742620 371410 unknown free_unref_page_list
3928375 942810 unknown free_unref_folios
3842665 14285 unknown tlb_flush_mmu
3385545 728535 unknown free_unref_page_commit
585685 571400 unknown __pte_offset_map_lock
end of thread, other threads:[~2025-09-08 12:15 UTC | newest]
Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-09-03 9:27 [PATCH v7 0/2] Improvements for victim thawing and reaper VMA traversal zhongjinji
2025-09-03 9:27 ` [PATCH v7 1/2] mm/oom_kill: Thaw victim on a per-process basis instead of per-thread zhongjinji
2025-09-03 12:27 ` Michal Hocko
2025-09-04 13:08 ` zhongjinji
2025-09-03 9:27 ` [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order zhongjinji
2025-09-03 12:58 ` Michal Hocko
2025-09-03 19:02 ` Liam R. Howlett
2025-09-04 12:21 ` Michal Hocko
2025-09-05 2:12 ` Liam R. Howlett
2025-09-05 9:20 ` Michal Hocko
2025-09-04 12:47 ` zhongjinji
2025-09-04 12:24 ` zhongjinji
2025-09-04 14:48 ` Michal Hocko
2025-09-08 12:15 ` [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA zhongjinji
2025-09-04 23:50 ` [PATCH v7 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order Shakeel Butt