From: Jiexun Wang <wangjiexun@tinylab.org>
To: akpm@linux-foundation.org, brauner@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
falcon@tinylab.org, Jiexun Wang <wangjiexun@tinylab.org>
Subject: [PATCH 0/1] mm/madvise: add cond_resched() in madvise_cold_or_pageout_pte_range()
Date: Sat, 9 Sep 2023 13:33:07 +0800
Message-ID: <cover.1694219361.git.wangjiexun@tinylab.org>
I conducted real-time testing on the LicheePi 4A board using
cyclictest and used ftrace for latency tracing. I observed
that madvise_cold_or_pageout_pte_range() causes significant
wakeup latency under memory pressure, and that this latency can be
effectively reduced by adding a cond_resched() call within its
PTE walk loop.
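The idea is to give the PTE walk in madvise_cold_or_pageout_pte_range()
a rescheduling point. A minimal sketch of the shape of the change is
shown below; the exact change is in the follow-up patch, and the
batch_count local and the SWAP_CLUSTER_MAX batching granularity here are
illustrative (details such as lazy-MMU handling are omitted):

	for (; addr < end; pte++, addr += PAGE_SIZE) {
		/*
		 * Illustrative: every SWAP_CLUSTER_MAX entries, drop the
		 * PTE lock so the walk can reschedule, then re-map and
		 * continue from the current address.
		 */
		if (++batch_count == SWAP_CLUSTER_MAX) {
			batch_count = 0;
			if (need_resched()) {
				pte_unmap_unlock(start_pte, ptl);
				cond_resched();
				start_pte = pte = pte_offset_map_lock(mm, pmd,
								      addr, &ptl);
				if (!start_pte)
					break;
			}
		}

		ptent = ptep_get(pte);
		...
	}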
The script I used for testing is as follows:
echo wakeup_rt > /sys/kernel/tracing/current_tracer
echo 1 > /sys/kernel/tracing/tracing_on
echo 0 > /sys/kernel/tracing/tracing_max_latency
stress-ng --cpu 4 --vm 4 --vm-bytes 2G &
cyclictest --mlockall --smp --priority=99 --distance=0 --duration=30m
echo 0 > /sys/kernel/tracing/tracing_on
cat /sys/kernel/tracing/trace
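For reference, the maximum latency recorded by the wakeup_rt tracer
should also be readable directly after the run:

cat /sys/kernel/tracing/tracing_max_latency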
The tracing results before modification are as follows:
# tracer: wakeup_rt
#
# wakeup_rt latency trace v1.1.5 on 6.5.0-rt6-r1208-00003-g999d221864bf
# --------------------------------------------------------------------
# latency: 1969 us, #6/6, CPU#1 | (M:preempt_rt VP:0, KP:0, SP:0 HP:0 #P:4)
# -----------------
# | task: cyclictest-250 (uid:0 nice:0 policy:1 rt_prio:99)
# -----------------
#
# _--------=> CPU#
# / _-------=> irqs-off/BH-disabled
# | / _------=> need-resched
# || / _-----=> need-resched-lazy
# ||| / _----=> hardirq/softirq
# |||| / _---=> preempt-depth
# ||||| / _--=> preempt-lazy-depth
# |||||| / _-=> migrate-disable
# ||||||| / delay
# cmd pid |||||||| time | caller
# \ / |||||||| \ | /
stress-n-264 1dn.h511 1us : 264:120:R + [001] 250: 0:R cyclictest
stress-n-264 1dn.h511 6us : <stack trace>
=> __ftrace_trace_stack
=> __trace_stack
=> probe_wakeup
=> ttwu_do_activate
=> try_to_wake_up
=> wake_up_process
=> hrtimer_wakeup
=> __hrtimer_run_queues
=> hrtimer_interrupt
=> riscv_timer_interrupt
=> handle_percpu_devid_irq
=> generic_handle_domain_irq
=> riscv_intc_irq
=> handle_riscv_irq
=> do_irq
stress-n-264 1dn.h511 8us#: 0
stress-n-264 1d...3.. 1953us : __schedule
stress-n-264 1d...3.. 1956us+: 264:120:R ==> [001] 250: 0:R cyclictest
stress-n-264 1d...3.. 1968us : <stack trace>
=> __ftrace_trace_stack
=> __trace_stack
=> probe_wakeup_sched_switch
=> __schedule
=> preempt_schedule
=> migrate_enable
=> rt_spin_unlock
=> madvise_cold_or_pageout_pte_range
=> walk_pgd_range
=> __walk_page_range
=> walk_page_range
=> madvise_pageout
=> madvise_vma_behavior
=> do_madvise
=> sys_madvise
=> do_trap_ecall_u
=> ret_from_exception
The tracing results after modification are as follows:
# tracer: wakeup_rt
#
# wakeup_rt latency trace v1.1.5 on 6.5.0-rt6-r1208-00003-g999d221864bf-dirty
# --------------------------------------------------------------------
# latency: 879 us, #6/6, CPU#2 | (M:preempt_rt VP:0, KP:0, SP:0 HP:0 #P:4)
# -----------------
# | task: cyclictest-212 (uid:0 nice:0 policy:1 rt_prio:99)
# -----------------
#
# _--------=> CPU#
# / _-------=> irqs-off/BH-disabled
# | / _------=> need-resched
# || / _-----=> need-resched-lazy
# ||| / _----=> hardirq/softirq
# |||| / _---=> preempt-depth
# ||||| / _--=> preempt-lazy-depth
# |||||| / _-=> migrate-disable
# ||||||| / delay
# cmd pid |||||||| time | caller
# \ / |||||||| \ | /
stress-n-223 2dn.h413 1us : 223:120:R + [002] 212: 0:R cyclictest
stress-n-223 2dn.h413 6us : <stack trace>
=> __ftrace_trace_stack
=> __trace_stack
=> probe_wakeup
=> ttwu_do_activate
=> try_to_wake_up
=> wake_up_process
=> hrtimer_wakeup
=> __hrtimer_run_queues
=> hrtimer_interrupt
=> riscv_timer_interrupt
=> handle_percpu_devid_irq
=> generic_handle_domain_irq
=> riscv_intc_irq
=> handle_riscv_irq
=> do_irq
stress-n-223 2dn.h413 9us!: 0
stress-n-223 2d...3.. 850us : __schedule
stress-n-223 2d...3.. 856us+: 223:120:R ==> [002] 212: 0:R cyclictest
stress-n-223 2d...3.. 875us : <stack trace>
=> __ftrace_trace_stack
=> __trace_stack
=> probe_wakeup_sched_switch
=> __schedule
=> preempt_schedule
=> migrate_enable
=> free_unref_page_list
=> release_pages
=> free_pages_and_swap_cache
=> tlb_batch_pages_flush
=> tlb_flush_mmu
=> unmap_page_range
=> unmap_vmas
=> unmap_region
=> do_vmi_align_munmap.constprop.0
=> do_vmi_munmap
=> __vm_munmap
=> sys_munmap
=> do_trap_ecall_u
=> ret_from_exception
After the modification, the source of the maximum latency is no longer
madvise_cold_or_pageout_pte_range() (the worst-case latency in this run
dropped from 1969 us to 879 us), so the change effectively reduces the
latency caused by madvise_cold_or_pageout_pte_range().
Jiexun Wang (1):
  add cond_resched() in madvise_cold_or_pageout_pte_range()

 mm/madvise.c | 9 +++++++++
 1 file changed, 9 insertions(+)
--
2.34.1