From: Michal Hocko <mhocko@suse.com>
To: zhongjinji <zhongjinji@honor.com>
Cc: rientjes@google.com, shakeel.butt@linux.dev,
akpm@linux-foundation.org, tglx@linutronix.de,
liam.howlett@oracle.com, lorenzo.stoakes@oracle.com,
surenb@google.com, lenb@kernel.org, rafael@kernel.org,
pavel@kernel.org, linux-mm@kvack.org, linux-pm@vger.kernel.org,
linux-kernel@vger.kernel.org, liulu.liu@honor.com,
feng.han@honor.com
Subject: Re: [PATCH v9 2/2] mm/oom_kill: The OOM reaper traverses the VMA maple tree in reverse order
Date: Wed, 10 Sep 2025 17:22:25 +0200
Message-ID: <aMGXsenuvA682-Dc@tiehlicka>
In-Reply-To: <20250910143726.19905-3-zhongjinji@honor.com>
On Wed 10-09-25 22:37:26, zhongjinji wrote:
> Although the oom_reaper is delayed, giving the oom victim a chance to
> clean up its address space, this might take a while, especially for
> processes with a large address space footprint. In those cases the
> oom_reaper might start racing with the dying task and compete for shared
> resources - e.g. page table lock contention has been observed.
>
> Reduce those races by reaping the oom victim from the other end of the
> address space.
>
> It is also a significant improvement for process_mrelease(). When a process
> is killed, process_mrelease() is used to reap the killed process and often
> runs concurrently with the dying task. The test data shows that after
> applying the patch, lock contention is greatly reduced while reaping
> the killed process.
>
> The test was run on arm64.
>
> Without the patch:
> |--99.57%-- oom_reaper
> | |--0.28%-- [hit in function]
> | |--73.58%-- unmap_page_range
> | | |--8.67%-- [hit in function]
> | | |--41.59%-- __pte_offset_map_lock
> | | |--29.47%-- folio_remove_rmap_ptes
> | | |--16.11%-- tlb_flush_mmu
> | | |--1.66%-- folio_mark_accessed
> | | |--0.74%-- free_swap_and_cache_nr
> | | |--0.69%-- __tlb_remove_folio_pages
> | |--19.94%-- tlb_finish_mmu
> | |--3.21%-- folio_remove_rmap_ptes
> | |--1.16%-- __tlb_remove_folio_pages
> | |--1.16%-- folio_mark_accessed
> | |--0.36%-- __pte_offset_map_lock
>
> With the patch:
> |--99.53%-- oom_reaper
> | |--55.77%-- unmap_page_range
> | | |--20.49%-- [hit in function]
> | | |--58.30%-- folio_remove_rmap_ptes
> | | |--11.48%-- tlb_flush_mmu
> | | |--3.33%-- folio_mark_accessed
> | | |--2.65%-- __tlb_remove_folio_pages
> | | |--1.37%-- _raw_spin_lock
> | | |--0.68%-- __mod_lruvec_page_state
> | | |--0.51%-- __pte_offset_map_lock
> | |--32.21%-- tlb_finish_mmu
> | |--6.93%-- folio_remove_rmap_ptes
> | |--1.90%-- __tlb_remove_folio_pages
> | |--1.55%-- folio_mark_accessed
> | |--0.69%-- __pte_offset_map_lock
I do not object to the patch, but this profile is not telling much really,
as already pointed out in prior versions, because we do not know the base
those percentages are taken from. It would be much more helpful to
measure the elapsed time of the oom_reaper and of exit_mmap to see those
gains.
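
Something along these lines would be enough (an untested sketch just to
illustrate the kind of measurement I mean; the wrapper name and the
message format are made up):

	/*
	 * Untested sketch: bracket the reap with ktime_get() so the
	 * before/after profiles get an absolute time base.
	 */
	static bool timed_oom_reap_task_mm(struct mm_struct *mm)
	{
		ktime_t start = ktime_get();
		bool ret = __oom_reap_task_mm(mm);

		pr_info("oom_reaper: reap took %lld us\n",
			ktime_us_delta(ktime_get(), start));
		return ret;
	}

The same bracketing around the unmap loop in exit_mmap() would tell
whether the dying task finishes faster as well.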
> Signed-off-by: zhongjinji <zhongjinji@honor.com>
> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Reviewed-by: Suren Baghdasaryan <surenb@google.com>
> Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
> Acked-by: Michal Hocko <mhocko@suse.com>
> ---
> mm/oom_kill.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 88356b66cc35..28fb36be332b 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -516,7 +516,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
>  {
>  	struct vm_area_struct *vma;
>  	bool ret = true;
> -	VMA_ITERATOR(vmi, mm, 0);
> +	MA_STATE(mas, &mm->mm_mt, ULONG_MAX, ULONG_MAX);
> 
>  	/*
>  	 * Tell all users of get_user/copy_from_user etc... that the content
> @@ -526,7 +526,13 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
>  	 */
>  	set_bit(MMF_UNSTABLE, &mm->flags);
> 
> -	for_each_vma(vmi, vma) {
> +	/*
> +	 * It might start racing with the dying task and compete for shared
> +	 * resources - e.g. page table lock contention has been observed.
> +	 * Reduce those races by reaping the oom victim from the other end
> +	 * of the address space.
> +	 */
> +	mas_for_each_rev(&mas, vma, 0) {
>  		if (vma->vm_flags & (VM_HUGETLB|VM_PFNMAP))
>  			continue;
> 
> --
> 2.17.1
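
For reference, mas_for_each_rev() just wraps mas_find_rev() and walks
entries from the maple state's index down to the given minimum, so
initializing the state at ULONG_MAX visits the highest-mapped VMA first.
A minimal standalone sketch of the iterator used above (assuming an
mm_struct *mm in scope; the pr_debug is illustrative only):

	/* Walk the VMA maple tree top-down, highest address first. */
	struct vm_area_struct *vma;
	MA_STATE(mas, &mm->mm_mt, ULONG_MAX, ULONG_MAX);

	mas_for_each_rev(&mas, vma, 0)
		pr_debug("vma %lx-%lx\n", vma->vm_start, vma->vm_end);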
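
As an aside, for anyone wanting to reproduce the process_mrelease()
numbers from the changelog, the userspace side is roughly the following
(hypothetical sketch; there is no glibc wrapper, so raw syscalls are
used, and kill_and_reap() is a made-up helper name):

	#include <signal.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/*
	 * Kill a process and reap its address space concurrently with
	 * its exit path, which is where the lock contention shows up.
	 */
	static int kill_and_reap(pid_t pid)
	{
		int ret, pidfd = syscall(SYS_pidfd_open, pid, 0);

		if (pidfd < 0)
			return -1;
		syscall(SYS_pidfd_send_signal, pidfd, SIGKILL, NULL, 0);
		ret = syscall(SYS_process_mrelease, pidfd, 0);
		close(pidfd);
		return ret;
	}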
--
Michal Hocko
SUSE Labs