* [PATCH v9 0/2] Improvements to Victim Process Thawing and OOM Reaper Traversal Order
From: zhongjinji @ 2025-09-10 14:37 UTC
To: mhocko
Cc: rientjes, shakeel.butt, akpm, tglx, liam.howlett,
lorenzo.stoakes, surenb, lenb, rafael, pavel, linux-mm, linux-pm,
linux-kernel, liulu.liu, feng.han, zhongjinji
This patch series focuses on optimizing victim process thawing and refining
the traversal order of the OOM reaper. Since __thaw_task() only thaws a
single thread of the victim, it cannot guarantee that a frozen OOM victim
exits. Patch 1 thaws the entire process of the OOM victim to ensure that
OOM victims are able to terminate themselves.
Even if the oom_reaper is delayed, patch 2 is still beneficial for reaping
processes with a large address space footprint, and it also greatly improves
process_mrelease().
---
v8 -> v9:
- Replace thaw_oom_process with thaw_process. [13]
- Use tsk_is_oom_victim() to check whether a task is an OOM victim in
freezing_slow_path(). [14]
v7 -> v8:
- Introduce thaw_oom_process() for thawing OOM victims. [12]
- Use RCU protection for thread traversal in thaw_oom_process.
v6 -> v7:
- Thawing the victim process to ensure that it can terminate on its own. [10]
- Since the delayed reaper is no longer skipped, I'm not sure whether patch 2
will still be accepted. Revise the changelog for patch 2. [11]
- Remove Reported-by tags
v5 -> v6:
- Use mas_for_each_rev() for VMA traversal [6]
- Simplify the judgment of whether to delay in queue_oom_reaper() [7]
- Refine changelog to better capture the essence of the changes [8]
- Use READ_ONCE(tsk->frozen) instead of checking mm and doing additional
checks inside for_each_process(), as it is sufficient [9]
- Add Reported-by tags because fengbaopeng and tianxiaobin reported the
high load issue of the reaper
v4 -> v5:
- Detect frozen state directly, avoid special futex handling. [3]
- Use mas_find_rev() for VMA traversal to avoid skipping entries. [4]
- Only check should_delay_oom_reap() in queue_oom_reaper(). [5]
v3 -> v4:
- Renamed functions and parameters for clarity. [2]
- Added should_delay_oom_reap() for OOM reap decisions.
- Traverse maple tree in reverse for improved behavior.
v2 -> v3:
- Fixed Subject prefix error.
v1 -> v2:
- Check robust_list for all threads, not just one. [1]
References:
[1] https://lore.kernel.org/linux-mm/u3mepw3oxj7cywezna4v72y2hvyc7bafkmsbirsbfuf34zpa7c@b23sc3rvp2gp/
[2] https://lore.kernel.org/linux-mm/87cy99g3k6.ffs@tglx/
[3] https://lore.kernel.org/linux-mm/aKRWtjRhE_HgFlp2@tiehlicka/
[4] https://lore.kernel.org/linux-mm/26larxehoe3a627s4fxsqghriwctays4opm4hhme3uk7ybjc5r@pmwh4s4yv7lm/
[5] https://lore.kernel.org/linux-mm/d5013a33-c08a-44c5-a67f-9dc8fd73c969@lucifer.local/
[6] https://lore.kernel.org/linux-mm/nwh7gegmvoisbxlsfwslobpbqku376uxdj2z32owkbftvozt3x@4dfet73fh2yy/
[7] https://lore.kernel.org/linux-mm/af4edeaf-d3c9-46a9-a300-dbaf5936e7d6@lucifer.local/
[8] https://lore.kernel.org/linux-mm/aK71W1ITmC_4I_RY@tiehlicka/
[9] https://lore.kernel.org/linux-mm/jzzdeczuyraup2zrspl6b74muf3bly2a3acejfftcldfmz4ekk@s5mcbeim34my/
[10] https://lore.kernel.org/linux-mm/aLWmf6qZHTA0hMpU@tiehlicka/
[11] https://lore.kernel.org/linux-mm/aLVOICSkyvVRKD94@tiehlicka/
[12] https://lore.kernel.org/linux-mm/aLg0QZQ5kXNJgDMF@tiehlicka/
[13] https://lore.kernel.org/linux-mm/aL_wLqsy7nzP_bRF@tiehlicka/
[14] https://lore.kernel.org/linux-mm/aMAzkQQ4XAFh9xlm@tiehlicka/
Earlier posts:
v8: https://lore.kernel.org/linux-mm/20250909090659.26400-1-zhongjinji@honor.com/
v7: https://lore.kernel.org/linux-mm/20250903092729.10611-1-zhongjinji@honor.com/
v6: https://lore.kernel.org/linux-mm/20250829065550.29571-1-zhongjinji@honor.com/
v5: https://lore.kernel.org/linux-mm/20250825133855.30229-1-zhongjinji@honor.com/
v4: https://lore.kernel.org/linux-mm/20250814135555.17493-1-zhongjinji@honor.com/
v3: https://lore.kernel.org/linux-mm/20250804030341.18619-1-zhongjinji@honor.com/
v2: https://lore.kernel.org/linux-mm/20250801153649.23244-1-zhongjinji@honor.com/
v1: https://lore.kernel.org/linux-mm/20250731102904.8615-1-zhongjinji@honor.com/
zhongjinji (2):
mm/oom_kill: Thaw the entire OOM victim process
mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse
order
include/linux/freezer.h | 2 ++
kernel/freezer.c | 20 +++++++++++++++++++-
mm/oom_kill.c | 20 +++++++++++++-------
3 files changed, 34 insertions(+), 8 deletions(-)
--
2.17.1
* [PATCH v9 1/2] mm/oom_kill: Thaw the entire OOM victim process
From: zhongjinji @ 2025-09-10 14:37 UTC
To: mhocko
Cc: rientjes, shakeel.butt, akpm, tglx, liam.howlett,
lorenzo.stoakes, surenb, lenb, rafael, pavel, linux-mm, linux-pm,
linux-kernel, liulu.liu, feng.han, zhongjinji
The OOM killer is a mechanism that selects and kills processes when the
system runs out of memory, in order to reclaim resources and keep the
system stable. But a frozen oom victim cannot terminate on its own, even
if it is thawed through __thaw_task(), because __thaw_task() only thaws
a single thread of the victim, not the entire victim process.
Also, freezing_slow_path() decides whether a task is an OOM victim by
checking the task's TIF_MEMDIE flag; for such tasks the freezer bypasses
the PM freezing and cgroup freezing states. But TIF_MEMDIE is a
per-thread flag, not shared by the thread group, and only one thread is
marked with it. Even if the other threads are woken, they may remain
frozen due to the PM freezing and cgroup freezing states.
To solve this, thaw_process() is introduced to thaw every thread of the
victim process. The freezer now uses tsk_is_oom_victim() to determine
whether a task is an OOM victim, because tsk->signal->oom_mm is shared
by all threads, so every victim thread can rely on it to be thawed.
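For reference, tsk_is_oom_victim() amounts to a one-line check of that
shared state (a sketch based on include/linux/oom.h; see the header for
the exact definition):

	/* sketch of the helper in include/linux/oom.h */
	static inline bool tsk_is_oom_victim(struct task_struct *tsk)
	{
		return tsk->signal->oom_mm;
	}

Because signal_struct is shared by the whole thread group, this check
gives the same answer for every thread of the victim, unlike the
per-thread TIF_MEMDIE flag.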
This change thaws the entire victim process when an OOM kill occurs,
ensuring that the oom victim can terminate on its own.
Signed-off-by: zhongjinji <zhongjinji@honor.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
include/linux/freezer.h | 2 ++
kernel/freezer.c | 20 +++++++++++++++++++-
mm/oom_kill.c | 10 +++++-----
3 files changed, 26 insertions(+), 6 deletions(-)
diff --git a/include/linux/freezer.h b/include/linux/freezer.h
index b303472255be..32884c9721e5 100644
--- a/include/linux/freezer.h
+++ b/include/linux/freezer.h
@@ -47,6 +47,7 @@ extern int freeze_processes(void);
extern int freeze_kernel_threads(void);
extern void thaw_processes(void);
extern void thaw_kernel_threads(void);
+extern void thaw_process(struct task_struct *p);
static inline bool try_to_freeze(void)
{
@@ -80,6 +81,7 @@ static inline int freeze_processes(void) { return -ENOSYS; }
static inline int freeze_kernel_threads(void) { return -ENOSYS; }
static inline void thaw_processes(void) {}
static inline void thaw_kernel_threads(void) {}
+static inline void thaw_process(struct task_struct *p) {}
static inline bool try_to_freeze(void) { return false; }
diff --git a/kernel/freezer.c b/kernel/freezer.c
index 6a96149aede9..ddc11a8bd2ea 100644
--- a/kernel/freezer.c
+++ b/kernel/freezer.c
@@ -10,6 +10,7 @@
#include <linux/export.h>
#include <linux/syscalls.h>
#include <linux/freezer.h>
+#include <linux/oom.h>
#include <linux/kthread.h>
/* total number of freezing conditions in effect */
@@ -40,7 +41,7 @@ bool freezing_slow_path(struct task_struct *p)
if (p->flags & (PF_NOFREEZE | PF_SUSPEND_TASK))
return false;
- if (test_tsk_thread_flag(p, TIF_MEMDIE))
+ if (tsk_is_oom_victim(p))
return false;
if (pm_nosig_freezing || cgroup_freezing(p))
@@ -206,6 +207,23 @@ void __thaw_task(struct task_struct *p)
wake_up_state(p, TASK_FROZEN);
}
+/*
+ * thaw_process - Thaw a frozen process
+ * @p: the process to be thawed
+ *
+ * Iterate over all threads of @p and call __thaw_task() on each.
+ */
+void thaw_process(struct task_struct *p)
+{
+ struct task_struct *t;
+
+ rcu_read_lock();
+ for_each_thread(p, t) {
+ __thaw_task(t);
+ }
+ rcu_read_unlock();
+}
+
/**
* set_freezable - make %current freezable
*
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 25923cfec9c6..88356b66cc35 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -772,12 +772,12 @@ static void mark_oom_victim(struct task_struct *tsk)
mmgrab(tsk->signal->oom_mm);
/*
- * Make sure that the task is woken up from uninterruptible sleep
- * if it is frozen because OOM killer wouldn't be able to free
- * any memory and livelock. freezing_slow_path will tell the freezer
- * that TIF_MEMDIE tasks should be ignored.
+ * Make sure that the process is woken up from uninterruptible sleep
+ * if it is frozen because OOM killer wouldn't be able to free any
+ * memory and livelock. The freezer will thaw the tasks that are OOM
+ * victims regardless of the PM freezing and cgroup freezing states.
*/
- __thaw_task(tsk);
+ thaw_process(tsk);
atomic_inc(&oom_victims);
cred = get_task_cred(tsk);
trace_mark_victim(tsk, cred->uid.val);
--
2.17.1
* [PATCH v9 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order
From: zhongjinji @ 2025-09-10 14:37 UTC
To: mhocko
Cc: rientjes, shakeel.butt, akpm, tglx, liam.howlett,
lorenzo.stoakes, surenb, lenb, rafael, pavel, linux-mm, linux-pm,
linux-kernel, liulu.liu, feng.han, zhongjinji
Although the oom_reaper is delayed, giving the oom victim a chance to
clean up its address space, this might take a while, especially for
processes with a large address space footprint. In those cases the
oom_reaper might start racing with the dying task and compete for shared
resources - e.g. page table lock contention has been observed.
Reduce those races by reaping the oom victim from the other end of the
address space.
It is also a significant improvement for process_mrelease(). When a
process is killed, process_mrelease() is used to reap it and often runs
concurrently with the dying task. The test data shows that after
applying the patch, lock contention is greatly reduced during the
reaping of the killed process.
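As a rough illustration (not part of this patch), a userspace killer
such as LMKD reaps a killed process along the following lines, assuming
a libc that exposes SYS_pidfd_open and SYS_process_mrelease; the
hypothetical kill_and_reap() helper then runs concurrently with the
victim's own exit_mmap(), which is where the lock contention shown
below comes from:

	#include <signal.h>
	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* hypothetical helper, for illustration only */
	static int kill_and_reap(pid_t pid)
	{
		int pidfd = syscall(SYS_pidfd_open, pid, 0);

		if (pidfd < 0)
			return -1;
		kill(pid, SIGKILL);	/* make @pid a dying task */
		/* reap the victim's address space while it exits */
		if (syscall(SYS_process_mrelease, pidfd, 0) < 0)
			perror("process_mrelease");
		close(pidfd);
		return 0;
	}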
The test is based on arm64.
Without the patch:
|--99.57%-- oom_reaper
| |--0.28%-- [hit in function]
| |--73.58%-- unmap_page_range
| | |--8.67%-- [hit in function]
| | |--41.59%-- __pte_offset_map_lock
| | |--29.47%-- folio_remove_rmap_ptes
| | |--16.11%-- tlb_flush_mmu
| | |--1.66%-- folio_mark_accessed
| | |--0.74%-- free_swap_and_cache_nr
| | |--0.69%-- __tlb_remove_folio_pages
| |--19.94%-- tlb_finish_mmu
| |--3.21%-- folio_remove_rmap_ptes
| |--1.16%-- __tlb_remove_folio_pages
| |--1.16%-- folio_mark_accessed
| |--0.36%-- __pte_offset_map_lock
With the patch:
|--99.53%-- oom_reaper
| |--55.77%-- unmap_page_range
| | |--20.49%-- [hit in function]
| | |--58.30%-- folio_remove_rmap_ptes
| | |--11.48%-- tlb_flush_mmu
| | |--3.33%-- folio_mark_accessed
| | |--2.65%-- __tlb_remove_folio_pages
| | |--1.37%-- _raw_spin_lock
| | |--0.68%-- __mod_lruvec_page_state
| | |--0.51%-- __pte_offset_map_lock
| |--32.21%-- tlb_finish_mmu
| |--6.93%-- folio_remove_rmap_ptes
| |--1.90%-- __tlb_remove_folio_pages
| |--1.55%-- folio_mark_accessed
| |--0.69%-- __pte_offset_map_lock
Signed-off-by: zhongjinji <zhongjinji@honor.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
Acked-by: Michal Hocko <mhocko@suse.com>
---
mm/oom_kill.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 88356b66cc35..28fb36be332b 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -516,7 +516,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
{
struct vm_area_struct *vma;
bool ret = true;
- VMA_ITERATOR(vmi, mm, 0);
+ MA_STATE(mas, &mm->mm_mt, ULONG_MAX, ULONG_MAX);
/*
* Tell all users of get_user/copy_from_user etc... that the content
@@ -526,7 +526,13 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
*/
set_bit(MMF_UNSTABLE, &mm->flags);
- for_each_vma(vmi, vma) {
+ /*
+ * It might start racing with the dying task and compete for shared
+ * resources - e.g. page table lock contention has been observed.
+ * Reduce those races by reaping the oom victim from the other end
+ * of the address space.
+ */
+ mas_for_each_rev(&mas, vma, 0) {
if (vma->vm_flags & (VM_HUGETLB|VM_PFNMAP))
continue;
--
2.17.1
* Re: [PATCH v9 1/2] mm/oom_kill: Thaw the entire OOM victim process
From: Michal Hocko @ 2025-09-10 15:15 UTC
To: zhongjinji
Cc: rientjes, shakeel.butt, akpm, tglx, liam.howlett,
lorenzo.stoakes, surenb, lenb, rafael, pavel, linux-mm, linux-pm,
linux-kernel, liulu.liu, feng.han
On Wed 10-09-25 22:37:25, zhongjinji wrote:
> The OOM killer is a mechanism that selects and kills processes when the
> system runs out of memory, in order to reclaim resources and keep the
> system stable. But a frozen oom victim cannot terminate on its own, even
> if it is thawed through __thaw_task(), because __thaw_task() only thaws
> a single thread of the victim, not the entire victim process.
>
> Also, freezing_slow_path() decides whether a task is an OOM victim by
> checking the task's TIF_MEMDIE flag; for such tasks the freezer bypasses
> the PM freezing and cgroup freezing states. But TIF_MEMDIE is a
> per-thread flag, not shared by the thread group, and only one thread is
> marked with it. Even if the other threads are woken, they may remain
> frozen due to the PM freezing and cgroup freezing states.
>
> To solve this, thaw_process() is introduced to thaw every thread of the
> victim process. The freezer now uses tsk_is_oom_victim() to determine
> whether a task is an OOM victim, because tsk->signal->oom_mm is shared
> by all threads, so every victim thread can rely on it to be thawed.
A history detour for future reference.
TIF_MEMDIE was a "this is the oom victim & it has access to memory
reserves" flag in the past. It had the thread vs. process problems, and
tsk_is_oom_victim() was introduced later to get rid of them and other
issues, as well as to guarantee that we can identify the oom victim's mm
reliably for the oom_reaper. I recommend reading the git log of
mm/oom_kill.c to get the hairy history of that area and how tricky it is
due to all the subtle interactions with process exit paths etc.
>
> This change thaws the entire victim process when an OOM kill occurs,
> ensuring that the oom victim can terminate on its own.
>
> Signed-off-by: zhongjinji <zhongjinji@honor.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Thanks!
>
> Acked-by: Michal Hocko <mhocko@suse.com>
> ---
> include/linux/freezer.h | 2 ++
> kernel/freezer.c | 20 +++++++++++++++++++-
> mm/oom_kill.c | 10 +++++-----
> 3 files changed, 26 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/freezer.h b/include/linux/freezer.h
> index b303472255be..32884c9721e5 100644
> --- a/include/linux/freezer.h
> +++ b/include/linux/freezer.h
> @@ -47,6 +47,7 @@ extern int freeze_processes(void);
> extern int freeze_kernel_threads(void);
> extern void thaw_processes(void);
> extern void thaw_kernel_threads(void);
> +extern void thaw_process(struct task_struct *p);
>
> static inline bool try_to_freeze(void)
> {
> @@ -80,6 +81,7 @@ static inline int freeze_processes(void) { return -ENOSYS; }
> static inline int freeze_kernel_threads(void) { return -ENOSYS; }
> static inline void thaw_processes(void) {}
> static inline void thaw_kernel_threads(void) {}
> +static inline void thaw_process(struct task_struct *p) {}
>
> static inline bool try_to_freeze(void) { return false; }
>
> diff --git a/kernel/freezer.c b/kernel/freezer.c
> index 6a96149aede9..ddc11a8bd2ea 100644
> --- a/kernel/freezer.c
> +++ b/kernel/freezer.c
> @@ -10,6 +10,7 @@
> #include <linux/export.h>
> #include <linux/syscalls.h>
> #include <linux/freezer.h>
> +#include <linux/oom.h>
> #include <linux/kthread.h>
>
> /* total number of freezing conditions in effect */
> @@ -40,7 +41,7 @@ bool freezing_slow_path(struct task_struct *p)
> if (p->flags & (PF_NOFREEZE | PF_SUSPEND_TASK))
> return false;
>
> - if (test_tsk_thread_flag(p, TIF_MEMDIE))
> + if (tsk_is_oom_victim(p))
> return false;
>
> if (pm_nosig_freezing || cgroup_freezing(p))
> @@ -206,6 +207,23 @@ void __thaw_task(struct task_struct *p)
> wake_up_state(p, TASK_FROZEN);
> }
>
> +/*
> + * thaw_process - Thaw a frozen process
> + * @p: the process to be thawed
> + *
> + * Iterate over all threads of @p and call __thaw_task() on each.
> + */
> +void thaw_process(struct task_struct *p)
> +{
> + struct task_struct *t;
> +
> + rcu_read_lock();
> + for_each_thread(p, t) {
> + __thaw_task(t);
> + }
> + rcu_read_unlock();
> +}
> +
> /**
> * set_freezable - make %current freezable
> *
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 25923cfec9c6..88356b66cc35 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -772,12 +772,12 @@ static void mark_oom_victim(struct task_struct *tsk)
> mmgrab(tsk->signal->oom_mm);
>
> /*
> - * Make sure that the task is woken up from uninterruptible sleep
> - * if it is frozen because OOM killer wouldn't be able to free
> - * any memory and livelock. freezing_slow_path will tell the freezer
> - * that TIF_MEMDIE tasks should be ignored.
> + * Make sure that the process is woken up from uninterruptible sleep
> + * if it is frozen because OOM killer wouldn't be able to free any
> + * memory and livelock. The freezer will thaw the tasks that are OOM
> + * victims regardless of the PM freezing and cgroup freezing states.
> */
> - __thaw_task(tsk);
> + thaw_process(tsk);
> atomic_inc(&oom_victims);
> cred = get_task_cred(tsk);
> trace_mark_victim(tsk, cred->uid.val);
> --
> 2.17.1
--
Michal Hocko
SUSE Labs
* Re: [PATCH v9 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order
From: Michal Hocko @ 2025-09-10 15:22 UTC
To: zhongjinji
Cc: rientjes, shakeel.butt, akpm, tglx, liam.howlett,
lorenzo.stoakes, surenb, lenb, rafael, pavel, linux-mm, linux-pm,
linux-kernel, liulu.liu, feng.han
On Wed 10-09-25 22:37:26, zhongjinji wrote:
> Although the oom_reaper is delayed, giving the oom victim a chance to
> clean up its address space, this might take a while, especially for
> processes with a large address space footprint. In those cases the
> oom_reaper might start racing with the dying task and compete for shared
> resources - e.g. page table lock contention has been observed.
>
> Reduce those races by reaping the oom victim from the other end of the
> address space.
>
> It is also a significant improvement for process_mrelease(). When a
> process is killed, process_mrelease() is used to reap it and often runs
> concurrently with the dying task. The test data shows that after
> applying the patch, lock contention is greatly reduced during the
> reaping of the killed process.
>
> The test is based on arm64.
>
> Without the patch:
> |--99.57%-- oom_reaper
> | |--0.28%-- [hit in function]
> | |--73.58%-- unmap_page_range
> | | |--8.67%-- [hit in function]
> | | |--41.59%-- __pte_offset_map_lock
> | | |--29.47%-- folio_remove_rmap_ptes
> | | |--16.11%-- tlb_flush_mmu
> | | |--1.66%-- folio_mark_accessed
> | | |--0.74%-- free_swap_and_cache_nr
> | | |--0.69%-- __tlb_remove_folio_pages
> | |--19.94%-- tlb_finish_mmu
> | |--3.21%-- folio_remove_rmap_ptes
> | |--1.16%-- __tlb_remove_folio_pages
> | |--1.16%-- folio_mark_accessed
> | |--0.36%-- __pte_offset_map_lock
>
> With the patch:
> |--99.53%-- oom_reaper
> | |--55.77%-- unmap_page_range
> | | |--20.49%-- [hit in function]
> | | |--58.30%-- folio_remove_rmap_ptes
> | | |--11.48%-- tlb_flush_mmu
> | | |--3.33%-- folio_mark_accessed
> | | |--2.65%-- __tlb_remove_folio_pages
> | | |--1.37%-- _raw_spin_lock
> | | |--0.68%-- __mod_lruvec_page_state
> | | |--0.51%-- __pte_offset_map_lock
> | |--32.21%-- tlb_finish_mmu
> | |--6.93%-- folio_remove_rmap_ptes
> | |--1.90%-- __tlb_remove_folio_pages
> | |--1.55%-- folio_mark_accessed
> | |--0.69%-- __pte_offset_map_lock
I do not object to the patch but this profile is not telling much
really, as already pointed out in prior versions, because we do not
know the base those percentages are from. It would be much more helpful
to measure the elapsed time of the oom_reaper and exit_mmap to see
those gains.
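For instance, a throwaway instrumentation sketch along these lines (not
something to merge; it wraps the existing oom_reap_task_mm() helper in
mm/oom_kill.c, and exit_mmap() can be bracketed the same way) would give
directly comparable numbers:

	/* illustrative instrumentation, not part of this series */
	static bool oom_reap_task_mm_timed(struct task_struct *tsk,
					   struct mm_struct *mm)
	{
		ktime_t start = ktime_get();
		bool ret = oom_reap_task_mm(tsk, mm);

		pr_info("oom_reaper: pid %d reaped in %lld us\n",
			task_pid_nr(tsk),
			ktime_us_delta(ktime_get(), start));
		return ret;
	}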
> Signed-off-by: zhongjinji <zhongjinji@honor.com>
> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Reviewed-by: Suren Baghdasaryan <surenb@google.com>
> Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
> Acked-by: Michal Hocko <mhocko@suse.com>
> ---
> mm/oom_kill.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 88356b66cc35..28fb36be332b 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -516,7 +516,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
> {
> struct vm_area_struct *vma;
> bool ret = true;
> - VMA_ITERATOR(vmi, mm, 0);
> + MA_STATE(mas, &mm->mm_mt, ULONG_MAX, ULONG_MAX);
>
> /*
> * Tell all users of get_user/copy_from_user etc... that the content
> @@ -526,7 +526,13 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
> */
> set_bit(MMF_UNSTABLE, &mm->flags);
>
> - for_each_vma(vmi, vma) {
> + /*
> + * It might start racing with the dying task and compete for shared
> + * resources - e.g. page table lock contention has been observed.
> + * Reduce those races by reaping the oom victim from the other end
> + * of the address space.
> + */
> + mas_for_each_rev(&mas, vma, 0) {
> if (vma->vm_flags & (VM_HUGETLB|VM_PFNMAP))
> continue;
>
> --
> 2.17.1
--
Michal Hocko
SUSE Labs
* Re: [PATCH v9 1/2] mm/oom_kill: Thaw the entire OOM victim process
From: Suren Baghdasaryan @ 2025-09-10 15:23 UTC
To: Michal Hocko
Cc: zhongjinji, rientjes, shakeel.butt, akpm, tglx, liam.howlett,
lorenzo.stoakes, lenb, rafael, pavel, linux-mm, linux-pm,
linux-kernel, liulu.liu, feng.han
On Wed, Sep 10, 2025 at 8:15 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Wed 10-09-25 22:37:25, zhongjinji wrote:
> > The OOM killer is a mechanism that selects and kills processes when the
> > system runs out of memory, in order to reclaim resources and keep the
> > system stable. But a frozen oom victim cannot terminate on its own, even
> > if it is thawed through __thaw_task(), because __thaw_task() only thaws
> > a single thread of the victim, not the entire victim process.
> >
> > Also, freezing_slow_path() decides whether a task is an OOM victim by
> > checking the task's TIF_MEMDIE flag; for such tasks the freezer bypasses
> > the PM freezing and cgroup freezing states. But TIF_MEMDIE is a
> > per-thread flag, not shared by the thread group, and only one thread is
> > marked with it. Even if the other threads are woken, they may remain
> > frozen due to the PM freezing and cgroup freezing states.
> >
> > To solve this, thaw_process() is introduced to thaw every thread of the
> > victim process. The freezer now uses tsk_is_oom_victim() to determine
> > whether a task is an OOM victim, because tsk->signal->oom_mm is shared
> > by all threads, so every victim thread can rely on it to be thawed.
>
> A history detour for future reference.
> TIF_MEMDIE was a "this is the oom victim & it has access to memory
> reserves" flag in the past. It had the thread vs. process problems, and
> tsk_is_oom_victim() was introduced later to get rid of them and other
> issues, as well as to guarantee that we can identify the oom victim's mm
> reliably for the oom_reaper. I recommend reading the git log of
> mm/oom_kill.c to get the hairy history of that area and how tricky it is
> due to all the subtle interactions with process exit paths etc.
>
> >
> > This change thaws the entire victim process when an OOM kill occurs,
> > ensuring that the oom victim can terminate on its own.
> >
> > Signed-off-by: zhongjinji <zhongjinji@honor.com>
>
> Acked-by: Michal Hocko <mhocko@suse.com>
> Thanks!
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
> >
> > Acked-by: Michal Hocko <mhocko@suse.com>
> > ---
> > include/linux/freezer.h | 2 ++
> > kernel/freezer.c | 20 +++++++++++++++++++-
> > mm/oom_kill.c | 10 +++++-----
> > 3 files changed, 26 insertions(+), 6 deletions(-)
> >
> > diff --git a/include/linux/freezer.h b/include/linux/freezer.h
> > index b303472255be..32884c9721e5 100644
> > --- a/include/linux/freezer.h
> > +++ b/include/linux/freezer.h
> > @@ -47,6 +47,7 @@ extern int freeze_processes(void);
> > extern int freeze_kernel_threads(void);
> > extern void thaw_processes(void);
> > extern void thaw_kernel_threads(void);
> > +extern void thaw_process(struct task_struct *p);
> >
> > static inline bool try_to_freeze(void)
> > {
> > @@ -80,6 +81,7 @@ static inline int freeze_processes(void) { return -ENOSYS; }
> > static inline int freeze_kernel_threads(void) { return -ENOSYS; }
> > static inline void thaw_processes(void) {}
> > static inline void thaw_kernel_threads(void) {}
> > +static inline void thaw_process(struct task_struct *p) {}
> >
> > static inline bool try_to_freeze(void) { return false; }
> >
> > diff --git a/kernel/freezer.c b/kernel/freezer.c
> > index 6a96149aede9..ddc11a8bd2ea 100644
> > --- a/kernel/freezer.c
> > +++ b/kernel/freezer.c
> > @@ -10,6 +10,7 @@
> > #include <linux/export.h>
> > #include <linux/syscalls.h>
> > #include <linux/freezer.h>
> > +#include <linux/oom.h>
> > #include <linux/kthread.h>
> >
> > /* total number of freezing conditions in effect */
> > @@ -40,7 +41,7 @@ bool freezing_slow_path(struct task_struct *p)
> > if (p->flags & (PF_NOFREEZE | PF_SUSPEND_TASK))
> > return false;
> >
> > - if (test_tsk_thread_flag(p, TIF_MEMDIE))
> > + if (tsk_is_oom_victim(p))
> > return false;
> >
> > if (pm_nosig_freezing || cgroup_freezing(p))
> > @@ -206,6 +207,23 @@ void __thaw_task(struct task_struct *p)
> > wake_up_state(p, TASK_FROZEN);
> > }
> >
> > +/*
> > + * thaw_process - Thaw a frozen process
> > + * @p: the process to be thawed
> > + *
> > + * Iterate over all threads of @p and call __thaw_task() on each.
> > + */
> > +void thaw_process(struct task_struct *p)
> > +{
> > + struct task_struct *t;
> > +
> > + rcu_read_lock();
> > + for_each_thread(p, t) {
> > + __thaw_task(t);
> > + }
> > + rcu_read_unlock();
> > +}
> > +
> > /**
> > * set_freezable - make %current freezable
> > *
> > diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> > index 25923cfec9c6..88356b66cc35 100644
> > --- a/mm/oom_kill.c
> > +++ b/mm/oom_kill.c
> > @@ -772,12 +772,12 @@ static void mark_oom_victim(struct task_struct *tsk)
> > mmgrab(tsk->signal->oom_mm);
> >
> > /*
> > - * Make sure that the task is woken up from uninterruptible sleep
> > - * if it is frozen because OOM killer wouldn't be able to free
> > - * any memory and livelock. freezing_slow_path will tell the freezer
> > - * that TIF_MEMDIE tasks should be ignored.
> > + * Make sure that the process is woken up from uninterruptible sleep
> > + * if it is frozen because OOM killer wouldn't be able to free any
> > + * memory and livelock. The freezer will thaw the tasks that are OOM
> > + * victims regardless of the PM freezing and cgroup freezing states.
> > */
> > - __thaw_task(tsk);
> > + thaw_process(tsk);
> > atomic_inc(&oom_victims);
> > cred = get_task_cred(tsk);
> > trace_mark_victim(tsk, cred->uid.val);
> > --
> > 2.17.1
>
> --
> Michal Hocko
> SUSE Labs
>
* Re: [PATCH v9 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order
From: zhongjinji @ 2025-09-11 4:06 UTC
To: mhocko
Cc: akpm, feng.han, lenb, liam.howlett, linux-kernel, linux-mm,
linux-pm, liulu.liu, lorenzo.stoakes, pavel, rafael, rientjes,
shakeel.butt, surenb, tglx, zhongjinji
> On Wed 10-09-25 22:37:26, zhongjinji wrote:
> > Although the oom_reaper is delayed, giving the oom victim a chance to
> > clean up its address space, this might take a while, especially for
> > processes with a large address space footprint. In those cases the
> > oom_reaper might start racing with the dying task and compete for shared
> > resources - e.g. page table lock contention has been observed.
> >
> > Reduce those races by reaping the oom victim from the other end of the
> > address space.
> >
> > It is also a significant improvement for process_mrelease(). When a
> > process is killed, process_mrelease() is used to reap it and often runs
> > concurrently with the dying task. The test data shows that after
> > applying the patch, lock contention is greatly reduced during the
> > reaping of the killed process.
> >
> > The test is based on arm64.
> >
> > Without the patch:
> > |--99.57%-- oom_reaper
> > | |--0.28%-- [hit in function]
> > | |--73.58%-- unmap_page_range
> > | | |--8.67%-- [hit in function]
> > | | |--41.59%-- __pte_offset_map_lock
> > | | |--29.47%-- folio_remove_rmap_ptes
> > | | |--16.11%-- tlb_flush_mmu
> > | | |--1.66%-- folio_mark_accessed
> > | | |--0.74%-- free_swap_and_cache_nr
> > | | |--0.69%-- __tlb_remove_folio_pages
> > | |--19.94%-- tlb_finish_mmu
> > | |--3.21%-- folio_remove_rmap_ptes
> > | |--1.16%-- __tlb_remove_folio_pages
> > | |--1.16%-- folio_mark_accessed
> > | |--0.36%-- __pte_offset_map_lock
> >
> > With the patch:
> > |--99.53%-- oom_reaper
> > | |--55.77%-- unmap_page_range
> > | | |--20.49%-- [hit in function]
> > | | |--58.30%-- folio_remove_rmap_ptes
> > | | |--11.48%-- tlb_flush_mmu
> > | | |--3.33%-- folio_mark_accessed
> > | | |--2.65%-- __tlb_remove_folio_pages
> > | | |--1.37%-- _raw_spin_lock
> > | | |--0.68%-- __mod_lruvec_page_state
> > | | |--0.51%-- __pte_offset_map_lock
> > | |--32.21%-- tlb_finish_mmu
> > | |--6.93%-- folio_remove_rmap_ptes
> > | |--1.90%-- __tlb_remove_folio_pages
> > | |--1.55%-- folio_mark_accessed
> > | |--0.69%-- __pte_offset_map_lock
>
> I do not object to the patch but this profile is not telling much really
> as already pointed out in prior versions as we do not know the base
> those percentages are from. It would be really much more helpful to
> measure the elapse time for the oom_repaer and exit_mmap to see those
> gains.
I got it. I will reference a perf report like this [1] in the changelog.
[1] https://lore.kernel.org/all/20250908121503.20960-1-zhongjinji@honor.com/
* Re: [PATCH v9 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order
From: Michal Hocko @ 2025-09-11 7:31 UTC
To: zhongjinji
Cc: akpm, feng.han, lenb, liam.howlett, linux-kernel, linux-mm,
linux-pm, liulu.liu, lorenzo.stoakes, pavel, rafael, rientjes,
shakeel.butt, surenb, tglx
On Thu 11-09-25 12:06:09, zhongjinji wrote:
> > On Wed 10-09-25 22:37:26, zhongjinji wrote:
> > > Although the oom_reaper is delayed, giving the oom victim a chance to
> > > clean up its address space, this might take a while, especially for
> > > processes with a large address space footprint. In those cases the
> > > oom_reaper might start racing with the dying task and compete for shared
> > > resources - e.g. page table lock contention has been observed.
> > >
> > > Reduce those races by reaping the oom victim from the other end of the
> > > address space.
> > >
> > > It is also a significant improvement for process_mrelease(). When a
> > > process is killed, process_mrelease() is used to reap it and often runs
> > > concurrently with the dying task. The test data shows that after
> > > applying the patch, lock contention is greatly reduced during the
> > > reaping of the killed process.
> > >
> > > The test is based on arm64.
> > >
> > > Without the patch:
> > > |--99.57%-- oom_reaper
> > > | |--0.28%-- [hit in function]
> > > | |--73.58%-- unmap_page_range
> > > | | |--8.67%-- [hit in function]
> > > | | |--41.59%-- __pte_offset_map_lock
> > > | | |--29.47%-- folio_remove_rmap_ptes
> > > | | |--16.11%-- tlb_flush_mmu
> > > | | |--1.66%-- folio_mark_accessed
> > > | | |--0.74%-- free_swap_and_cache_nr
> > > | | |--0.69%-- __tlb_remove_folio_pages
> > > | |--19.94%-- tlb_finish_mmu
> > > | |--3.21%-- folio_remove_rmap_ptes
> > > | |--1.16%-- __tlb_remove_folio_pages
> > > | |--1.16%-- folio_mark_accessed
> > > | |--0.36%-- __pte_offset_map_lock
> > >
> > > With the patch:
> > > |--99.53%-- oom_reaper
> > > | |--55.77%-- unmap_page_range
> > > | | |--20.49%-- [hit in function]
> > > | | |--58.30%-- folio_remove_rmap_ptes
> > > | | |--11.48%-- tlb_flush_mmu
> > > | | |--3.33%-- folio_mark_accessed
> > > | | |--2.65%-- __tlb_remove_folio_pages
> > > | | |--1.37%-- _raw_spin_lock
> > > | | |--0.68%-- __mod_lruvec_page_state
> > > | | |--0.51%-- __pte_offset_map_lock
> > > | |--32.21%-- tlb_finish_mmu
> > > | |--6.93%-- folio_remove_rmap_ptes
> > > | |--1.90%-- __tlb_remove_folio_pages
> > > | |--1.55%-- folio_mark_accessed
> > > | |--0.69%-- __pte_offset_map_lock
> >
> > I do not object to the patch but this profile is not telling much
> > really, as already pointed out in prior versions, because we do not
> > know the base those percentages are from. It would be much more
> > helpful to measure the elapsed time of the oom_reaper and exit_mmap
> > to see those gains.
>
> I got it. I will reference a perf report like this [1] in the changelog.
> [1] https://lore.kernel.org/all/20250908121503.20960-1-zhongjinji@honor.com/
Yes, this is much more informative. I do not think we need the full
report in the changelog though. I would just add your summary
Summary of measurements (ms):
+-------------------------------+----------------+---------------+
| Category                      | Applying patch | Without patch |
+-------------------------------+----------------+---------------+
| Total running time            | 132.6          | 167.1         |
| (exit_mmap + reaper work)     | 72.4 + 60.2    | 90.7 + 76.4   |
+-------------------------------+----------------+---------------+
| Time waiting for pte spinlock | 1.0            | 33.1          |
| (exit_mmap + reaper work)     | 0.4 + 0.6      | 10.0 + 23.1   |
+-------------------------------+----------------+---------------+
| folio_remove_rmap_ptes time   | 42.0           | 41.3          |
| (exit_mmap + reaper work)     | 18.4 + 23.6    | 22.4 + 18.9   |
+-------------------------------+----------------+---------------+
and reference the full report via the link.
Thanks!
--
Michal Hocko
SUSE Labs
* Re: [PATCH v9 1/2] mm/oom_kill: Thaw the entire OOM victim process
From: Shakeel Butt @ 2025-09-11 23:55 UTC
To: zhongjinji
Cc: mhocko, rientjes, akpm, tglx, liam.howlett, lorenzo.stoakes,
surenb, lenb, rafael, pavel, linux-mm, linux-pm, linux-kernel,
liulu.liu, feng.han
On Wed, Sep 10, 2025 at 10:37:25PM +0800, zhongjinji wrote:
> The OOM killer is a mechanism that selects and kills processes when the
> system runs out of memory, in order to reclaim resources and keep the
> system stable. But a frozen oom victim cannot terminate on its own, even
> if it is thawed through __thaw_task(), because __thaw_task() only thaws
> a single thread of the victim, not the entire victim process.
>
> Also, freezing_slow_path() decides whether a task is an OOM victim by
> checking the task's TIF_MEMDIE flag; for such tasks the freezer bypasses
> the PM freezing and cgroup freezing states. But TIF_MEMDIE is a
> per-thread flag, not shared by the thread group, and only one thread is
> marked with it. Even if the other threads are woken, they may remain
> frozen due to the PM freezing and cgroup freezing states.
>
> To solve this, thaw_process() is introduced to thaw every thread of the
> victim process. The freezer now uses tsk_is_oom_victim() to determine
> whether a task is an OOM victim, because tsk->signal->oom_mm is shared
> by all threads, so every victim thread can rely on it to be thawed.
>
> This change thaws the entire victim process when an OOM kill occurs,
> ensuring that the oom victim can terminate on its own.
>
> Signed-off-by: zhongjinji <zhongjinji@honor.com>
>
> Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
* Re: [PATCH v9 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order
From: zhongjinji @ 2025-09-15 16:26 UTC
To: mhocko
Cc: akpm, feng.han, lenb, liam.howlett, linux-kernel, linux-mm,
linux-pm, liulu.liu, lorenzo.stoakes, pavel, rafael, rientjes,
shakeel.butt, surenb, tglx, zhongjinji
This perf report evaluates the benefit that process_mrelease() gains
after applying the patch. However, in this test process_mrelease() is
not called directly. Instead, the kill signal is intercepted and the
killed process is added to the oom_reaper queue to trigger the reaper
worker. This simulates the way LMKD calls process_mrelease() and helps
simplify the testing process.
Since the full perf report is rather long, let us focus on the key
points from it.
Key points:
1. Compared to the version without the patch, the reduction in total
running time (exit_mmap plus reaper work) is roughly equal to the
reduction in total pte spinlock waiting time.
2. With the patch applied, the reaper executes certain functions more
often, such as folio_remove_rmap_ptes, while the time spent by
exit_mmap on folio_remove_rmap_ptes decreases accordingly.
Summary of measurements (ms):
+-------------------------------+----------------+---------------+
| Category                      | Applying patch | Without patch |
+-------------------------------+----------------+---------------+
| Total running time            | 132.6          | 167.1         |
| (exit_mmap + reaper work)     | 72.4 + 60.2    | 90.7 + 76.4   |
+-------------------------------+----------------+---------------+
| Time waiting for pte spinlock | 1.0            | 33.1          |
| (exit_mmap + reaper work)     | 0.4 + 0.6      | 10.0 + 23.1   |
+-------------------------------+----------------+---------------+
| folio_remove_rmap_ptes time   | 42.0           | 41.3          |
| (exit_mmap + reaper work)     | 18.4 + 23.6    | 22.4 + 18.9   |
+-------------------------------+----------------+---------------+
Report without patch:
Arch: arm64
Event: cpu-clock (type 1, config 0)
Samples: 6355
Event count: 90781175
do_exit
|--93.81%-- mmput
| |--99.46%-- exit_mmap
| | |--76.74%-- unmap_vmas
| | | |--9.14%-- [hit in function]
| | | |--34.25%-- tlb_flush_mmu
| | | |--31.13%-- folio_remove_rmap_ptes
| | | |--15.04%-- __pte_offset_map_lock
| | | |--5.43%-- free_swap_and_cache_nr
| | | |--1.80%-- _raw_spin_lock
| | | |--1.19%-- folio_mark_accessed
| | | |--0.84%-- __tlb_remove_folio_pages
| | | |--0.37%-- mas_find
| | | |--0.37%-- percpu_counter_add_batch
| | | |--0.20%-- __mod_lruvec_page_state
| | | |--0.13%-- f2fs_dirty_data_folio
| | | |--0.04%-- __rcu_read_unlock
| | | |--0.04%-- tlb_flush_rmaps
| | | | folio_remove_rmap_ptes
| | | --0.02%-- folio_mark_dirty
| | |--12.72%-- free_pgtables
| | |--2.65%-- folio_remove_rmap_ptes
| | |--2.50%-- __vm_area_free
| | | |--11.49%-- [hit in function]
| | | |--81.08%-- kmem_cache_free
| | | |--4.05%-- _raw_spin_unlock_irqrestore
| | | --3.38%-- anon_vma_name_free
| | |--1.03%-- folio_mark_accessed
| | |--0.96%-- __tlb_remove_folio_pages
| | |--0.54%-- mas_find
| | |--0.46%-- tlb_finish_mmu
| | | |--96.30%-- free_pages_and_swap_cache
| | | | |--80.77%-- release_pages
| | |--0.44%-- kmem_cache_free
| | |--0.39%-- __pte_offset_map_lock
| | |--0.30%-- task_work_add
| | |--0.19%-- __rcu_read_unlock
| | |--0.17%-- fput
| | |--0.13%-- __mt_destroy
| | |--0.10%-- down_write
| | |--0.07%-- unlink_file_vma
| | |--0.05%-- percpu_counter_add_batch
| | |--0.02%-- free_swap_and_cache_nr
| | |--0.02%-- flush_tlb_batched_pending
| | |--0.02%-- uprobe_munmap
| | |--0.02%-- _raw_spin_unlock
| | |--0.02%-- unlink_anon_vmas
| | --0.02%-- up_write
| |--0.40%-- fput
| |--0.10%-- mas_find
| --0.02%-- __vm_area_free
|--5.19%-- task_work_run
|--0.42%-- exit_files
| put_files_struct
|--0.35%-- exit_task_namespaces
Children Self Command Symbol
90752605 0 TEST_PROCESS do_exit
90752605 0 TEST_PROCESS get_signal
85138600 0 TEST_PROCESS __mmput
84681480 399980 TEST_PROCESS exit_mmap
64982465 5942560 TEST_PROCESS unmap_vmas
22598870 1599920 TEST_PROCESS free_pages_and_swap_cache
22498875 3314120 TEST_PROCESS folio_remove_rmap_ptes
10985165 1442785 TEST_PROCESS _raw_spin_lock
10770890 57140 TEST_PROCESS free_pgtables
10099495 399980 TEST_PROCESS __pte_offset_map_lock
8199590 1285650 TEST_PROCESS folios_put_refs
4756905 685680 TEST_PROCESS free_unref_page_list
4714050 14285 TEST_PROCESS task_work_run
4671195 199990 TEST_PROCESS ____fput
4085510 214275 TEST_PROCESS __fput
3914090 57140 TEST_PROCESS unlink_file_vma
3542680 28570 TEST_PROCESS free_swap_and_cache_nr
3214125 2114180 TEST_PROCESS free_unref_folios
3142700 14285 TEST_PROCESS swap_entry_range_free
2828430 2828430 TEST_PROCESS kmem_cache_free
2714150 528545 TEST_PROCESS zram_free_page
2528445 114280 TEST_PROCESS zram_slot_free_notify
Arch: arm64
Event: cpu-clock (type 1, config 0)
Samples: 5353
Event count: 76467605
kthread
|--99.57%-- oom_reaper
| |--0.28%-- [hit in function]
| |--73.58%-- unmap_page_range
| | |--8.67%-- [hit in function]
| | |--41.59%-- __pte_offset_map_lock
| | |--29.47%-- folio_remove_rmap_ptes
| | |--16.11%-- tlb_flush_mmu
| | | free_pages_and_swap_cache
| | | |--9.49%-- [hit in function]
| | |--1.66%-- folio_mark_accessed
| | |--0.74%-- free_swap_and_cache_nr
| | |--0.69%-- __tlb_remove_folio_pages
| | |--0.41%-- __mod_lruvec_page_state
| | |--0.33%-- _raw_spin_lock
| | |--0.28%-- percpu_counter_add_batch
| | |--0.03%-- tlb_flush_mmu_tlbonly
| | --0.03%-- __rcu_read_unlock
| |--19.94%-- tlb_finish_mmu
| | |--23.24%-- [hit in function]
| | |--76.39%-- free_pages_and_swap_cache
| | |--0.28%-- free_pages
| | --0.09%-- release_pages
| |--3.21%-- folio_remove_rmap_ptes
| |--1.16%-- __tlb_remove_folio_pages
| |--1.16%-- folio_mark_accessed
| |--0.36%-- __pte_offset_map_lock
| |--0.28%-- mas_find
| --0.02%-- __rcu_read_unlock
|--0.17%-- tlb_finish_mmu
|--0.15%-- mas_find
|--0.06%-- memset
|--0.04%-- unmap_page_range
--0.02%-- tlb_gather_mmu
Children Self Command Symbol
76467605 0 oom_reaper kthread
76139050 214275 oom_reaper oom_reaper
56054340 4885470 oom_reaper unmap_page_range
23570250 385695 oom_reaper __pte_offset_map_lock
23341690 257130 oom_reaper _raw_spin_lock
23113130 23113130 oom_reaper queued_spin_lock_slowpath
20627540 1371360 oom_reaper free_pages_and_swap_cache
19027620 614255 oom_reaper release_pages
18956195 3399830 oom_reaper folio_remove_rmap_ptes
15313520 3656960 oom_reaper tlb_finish_mmu
11799410 11785125 oom_reaper cgroup_rstat_updated
11285150 11256580 oom_reaper _raw_spin_unlock_irqrestore
9028120 0 oom_reaper tlb_flush_mmu
8613855 1342790 oom_reaper folios_put_refs
5442585 485690 oom_reaper free_unref_page_list
4299785 1614205 oom_reaper free_unref_folios
3385545 1299935 oom_reaper free_unref_page_commit
Report with patch:
Arch: arm64
Event: cpu-clock (type 1, config 0)
Samples: 5075
Event count: 72496375
|--99.98%-- do_notify_resume
| |--92.63%-- mmput
| | |--99.57%-- exit_mmap
| | | |--0.79%-- [hit in function]
| | | |--76.43%-- unmap_vmas
| | | | |--8.39%-- [hit in function]
| | | | |--42.80%-- tlb_flush_mmu
| | | | | free_pages_and_swap_cache
| | | | |--34.08%-- folio_remove_rmap_ptes
| | | | |--9.51%-- free_swap_and_cache_nr
| | | | |--2.40%-- _raw_spin_lock
| | | | |--0.75%-- __tlb_remove_folio_pages
| | | | |--0.48%-- mas_find
| | | | |--0.36%-- __pte_offset_map_lock
| | | | |--0.34%-- percpu_counter_add_batch
| | | | |--0.34%-- folio_mark_accessed
| | | | |--0.20%-- __mod_lruvec_page_state
| | | | |--0.17%-- f2fs_dirty_data_folio
| | | | |--0.11%-- __rcu_read_unlock
| | | | |--0.03%-- _raw_spin_unlock
| | | | |--0.03%-- tlb_flush_rmaps
| | | | --0.03%-- uprobe_munmap
| | | |--14.19%-- free_pgtables
| | | |--2.52%-- __vm_area_free
| | | |--1.52%-- folio_remove_rmap_ptes
| | | |--0.83%-- mas_find
| | | |--0.81%-- __tlb_remove_folio_pages
| | | |--0.77%-- folio_mark_accessed
| | | |--0.41%-- kmem_cache_free
| | | |--0.36%-- task_work_add
| | | |--0.34%-- fput
| | | |--0.32%-- __pte_offset_map_lock
| | | |--0.15%-- __rcu_read_unlock
| | | |--0.15%-- __mt_destroy
| | | |--0.09%-- unlink_file_vma
| | | |--0.06%-- down_write
| | | |--0.04%-- lookup_swap_cgroup_id
| | | |--0.04%-- uprobe_munmap
| | | |--0.04%-- percpu_counter_add_batch
| | | |--0.04%-- up_write
| | | |--0.02%-- flush_tlb_batched_pending
| | | |--0.02%-- _raw_spin_unlock
| | | |--0.02%-- unlink_anon_vmas
| | | --0.02%-- tlb_finish_mmu
| | | free_unref_page
| | |--0.38%-- fput
| | --0.04%-- mas_find
| |--6.21%-- task_work_run
| |--0.47%-- exit_task_namespaces
| |--0.16%-- ____fput
| --0.04%-- mm_update_next_owner
Children Self Command Symbol
72482090 0 TEST_PROCESS get_signal
67139500 0 TEST_PROCESS __mmput
67139500 0 TEST_PROCESS mmput
66853800 528545 TEST_PROCESS exit_mmap
51097445 4285500 TEST_PROCESS unmap_vmas
21870335 0 TEST_PROCESS tlb_flush_mmu
21870335 1371360 TEST_PROCESS free_pages_and_swap_cache
20384695 485690 TEST_PROCESS release_pages
18427650 1814195 TEST_PROCESS folio_remove_rmap_ptes
13799310 13785025 TEST_PROCESS cgroup_rstat_updated
12842215 12842215 TEST_PROCESS _raw_spin_unlock_irqrestore
9485240 14285 TEST_PROCESS free_pgtables
7785325 428550 TEST_PROCESS folios_put_refs
4899755 642825 TEST_PROCESS free_unref_page_list
4856900 42855 TEST_PROCESS free_swap_and_cache_nr
4499775 14285 TEST_PROCESS task_work_run
4385495 114280 TEST_PROCESS ____fput
3971230 714250 TEST_PROCESS zram_free_page
3899805 14285 TEST_PROCESS swap_entry_range_free
3785525 185705 TEST_PROCESS zram_slot_free_notify
399980 399980 TEST_PROCESS __pte_offset_map_lock
Arch: arm64
Event: cpu-clock (type 1, config 0)
Samples: 4221
Event count: 60296985
kthread
|--99.53%-- oom_reaper
| |--0.17%-- [hit in function]
| |--55.77%-- unmap_page_range
| | |--20.49%-- [hit in function]
| | |--58.30%-- folio_remove_rmap_ptes
| | |--11.48%-- tlb_flush_mmu
| | |--3.33%-- folio_mark_accessed
| | |--2.65%-- __tlb_remove_folio_pages
| | |--1.37%-- _raw_spin_lock
| | |--0.68%-- __mod_lruvec_page_state
| | |--0.51%-- __pte_offset_map_lock
| | |--0.43%-- percpu_counter_add_batch
| | |--0.30%-- __rcu_read_unlock
| | |--0.13%-- free_swap_and_cache_nr
| | |--0.09%-- tlb_flush_mmu_tlbonly
| | --0.04%-- __rcu_read_lock
| |--32.21%-- tlb_finish_mmu
| | |--88.69%-- free_pages_and_swap_cache
| |--6.93%-- folio_remove_rmap_ptes
| |--1.90%-- __tlb_remove_folio_pages
| |--1.55%-- folio_mark_accessed
| |--0.69%-- __pte_offset_map_lock
| |--0.45%-- mas_find_rev
| | |--21.05%-- [hit in function]
| | --78.95%-- mas_prev_slot
| |--0.12%-- mas_prev_slot
| |--0.10%-- free_pages_and_swap_cache
| |--0.07%-- __rcu_read_unlock
| |--0.02%-- percpu_counter_add_batch
| --0.02%-- lookup_swap_cgroup_id
|--0.12%-- mas_find_rev
|--0.12%-- unmap_page_range
|--0.12%-- tlb_finish_mmu
|--0.09%-- tlb_gather_mmu
--0.02%-- memset
Children Self Command Symbol
60296985 0 oom_reaper kthread
60011285 99995 oom_reaper oom_reaper
33541180 6928225 oom_reaper unmap_page_range
23670245 5414015 oom_reaper folio_remove_rmap_ptes
21027520 1757055 oom_reaper free_pages_and_swap_cache
19399030 2171320 oom_reaper tlb_finish_mmu
18970480 885670 oom_reaper release_pages
13785025 13785025 oom_reaper cgroup_rstat_updated
11442285 11442285 oom_reaper _raw_spin_unlock_irqrestore
7928175 1871335 oom_reaper folios_put_refs
4742620 371410 oom_reaper free_unref_page_list
3928375 942810 oom_reaper free_unref_folios
3842665 14285 oom_reaper tlb_flush_mmu
3385545 728535 oom_reaper free_unref_page_commit
585685 571400 oom_reaper __pte_offset_map_lock