* [PATCH 0/1] mm/madvise: add cond_resched() in madvise_cold_or_pageout_pte_range()
@ 2023-09-09 5:33 Jiexun Wang
2023-09-09 5:33 ` [PATCH 1/1] " Jiexun Wang
0 siblings, 1 reply; 4+ messages in thread
From: Jiexun Wang @ 2023-09-09 5:33 UTC (permalink / raw)
To: akpm, brauner; +Cc: linux-mm, linux-kernel, falcon, Jiexun Wang
I conducted real-time testing on the LicheePi 4A board using
cyclictest and employed ftrace for latency tracing. I observed
that madvise_cold_or_pageout_pte_range() causes significant
latency under memory pressure, which can be effectively reduced
by adding cond_resched() within the loop.
The test script is as follows:
echo wakeup_rt > /sys/kernel/tracing/current_tracer
echo 1 > /sys/kernel/tracing/tracing_on
echo 0 > /sys/kernel/tracing/tracing_max_latency
stress-ng --cpu 4 --vm 4 --vm-bytes 2G &
cyclictest --mlockall --smp --priority=99 --distance=0 --duration=30m
echo 0 > /sys/kernel/tracing/tracing_on
cat /sys/kernel/tracing/trace
The tracing results before modification are as follows:
# tracer: wakeup_rt
#
# wakeup_rt latency trace v1.1.5 on 6.5.0-rt6-r1208-00003-g999d221864bf
# --------------------------------------------------------------------
# latency: 1969 us, #6/6, CPU#1 | (M:preempt_rt VP:0, KP:0, SP:0 HP:0 #P:4)
# -----------------
# | task: cyclictest-250 (uid:0 nice:0 policy:1 rt_prio:99)
# -----------------
#
# _--------=> CPU#
# / _-------=> irqs-off/BH-disabled
# | / _------=> need-resched
# || / _-----=> need-resched-lazy
# ||| / _----=> hardirq/softirq
# |||| / _---=> preempt-depth
# ||||| / _--=> preempt-lazy-depth
# |||||| / _-=> migrate-disable
# ||||||| / delay
# cmd pid |||||||| time | caller
# \ / |||||||| \ | /
stress-n-264 1dn.h511 1us : 264:120:R + [001] 250: 0:R cyclictest
stress-n-264 1dn.h511 6us : <stack trace>
=> __ftrace_trace_stack
=> __trace_stack
=> probe_wakeup
=> ttwu_do_activate
=> try_to_wake_up
=> wake_up_process
=> hrtimer_wakeup
=> __hrtimer_run_queues
=> hrtimer_interrupt
=> riscv_timer_interrupt
=> handle_percpu_devid_irq
=> generic_handle_domain_irq
=> riscv_intc_irq
=> handle_riscv_irq
=> do_irq
stress-n-264 1dn.h511 8us#: 0
stress-n-264 1d...3.. 1953us : __schedule
stress-n-264 1d...3.. 1956us+: 264:120:R ==> [001] 250: 0:R cyclictest
stress-n-264 1d...3.. 1968us : <stack trace>
=> __ftrace_trace_stack
=> __trace_stack
=> probe_wakeup_sched_switch
=> __schedule
=> preempt_schedule
=> migrate_enable
=> rt_spin_unlock
=> madvise_cold_or_pageout_pte_range
=> walk_pgd_range
=> __walk_page_range
=> walk_page_range
=> madvise_pageout
=> madvise_vma_behavior
=> do_madvise
=> sys_madvise
=> do_trap_ecall_u
=> ret_from_exception
The tracing results after modification are as follows:
# tracer: wakeup_rt
#
# wakeup_rt latency trace v1.1.5 on 6.5.0-rt6-r1208-00003-g999d221864bf-dirty
# --------------------------------------------------------------------
# latency: 879 us, #6/6, CPU#2 | (M:preempt_rt VP:0, KP:0, SP:0 HP:0 #P:4)
# -----------------
# | task: cyclictest-212 (uid:0 nice:0 policy:1 rt_prio:99)
# -----------------
#
# _--------=> CPU#
# / _-------=> irqs-off/BH-disabled
# | / _------=> need-resched
# || / _-----=> need-resched-lazy
# ||| / _----=> hardirq/softirq
# |||| / _---=> preempt-depth
# ||||| / _--=> preempt-lazy-depth
# |||||| / _-=> migrate-disable
# ||||||| / delay
# cmd pid |||||||| time | caller
# \ / |||||||| \ | /
stress-n-223 2dn.h413 1us : 223:120:R + [002] 212: 0:R cyclictest
stress-n-223 2dn.h413 6us : <stack trace>
=> __ftrace_trace_stack
=> __trace_stack
=> probe_wakeup
=> ttwu_do_activate
=> try_to_wake_up
=> wake_up_process
=> hrtimer_wakeup
=> __hrtimer_run_queues
=> hrtimer_interrupt
=> riscv_timer_interrupt
=> handle_percpu_devid_irq
=> generic_handle_domain_irq
=> riscv_intc_irq
=> handle_riscv_irq
=> do_irq
stress-n-223 2dn.h413 9us!: 0
stress-n-223 2d...3.. 850us : __schedule
stress-n-223 2d...3.. 856us+: 223:120:R ==> [002] 212: 0:R cyclictest
stress-n-223 2d...3.. 875us : <stack trace>
=> __ftrace_trace_stack
=> __trace_stack
=> probe_wakeup_sched_switch
=> __schedule
=> preempt_schedule
=> migrate_enable
=> free_unref_page_list
=> release_pages
=> free_pages_and_swap_cache
=> tlb_batch_pages_flush
=> tlb_flush_mmu
=> unmap_page_range
=> unmap_vmas
=> unmap_region
=> do_vmi_align_munmap.constprop.0
=> do_vmi_munmap
=> __vm_munmap
=> sys_munmap
=> do_trap_ecall_u
=> ret_from_exception
After the modification, madvise_cold_or_pageout_pte_range() is no
longer the source of the maximum latency, so this change effectively
reduces the latency caused by madvise_cold_or_pageout_pte_range().
Jiexun Wang (1):
add cond_resched() in madvise_cold_or_pageout_pte_range()
mm/madvise.c | 9 +++++++++
1 file changed, 9 insertions(+)
--
2.34.1
* [PATCH 1/1] mm/madvise: add cond_resched() in madvise_cold_or_pageout_pte_range()
2023-09-09 5:33 [PATCH 0/1] mm/madvise: add cond_resched() in madvise_cold_or_pageout_pte_range() Jiexun Wang
@ 2023-09-09 5:33 ` Jiexun Wang
2023-09-10 19:33 ` Andrew Morton
2023-09-12 17:58 ` kernel test robot
0 siblings, 2 replies; 4+ messages in thread
From: Jiexun Wang @ 2023-09-09 5:33 UTC (permalink / raw)
To: akpm, brauner; +Cc: linux-mm, linux-kernel, falcon, Jiexun Wang
Currently the madvise_cold_or_pageout_pte_range() function exhibits
significant latency under memory pressure, which can be effectively
reduced by adding cond_resched() within the loop.
When the batch_count reaches SWAP_CLUSTER_MAX, we reschedule
the task to ensure fairness and avoid long lock holding times.
Signed-off-by: Jiexun Wang <wangjiexun@tinylab.org>
---
mm/madvise.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/mm/madvise.c b/mm/madvise.c
index 4dded5d27e7e..df760096ea85 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -31,6 +31,7 @@
#include <linux/swapops.h>
#include <linux/shmem_fs.h>
#include <linux/mmu_notifier.h>
+#include <linux/swap.h>
#include <asm/tlb.h>
@@ -353,6 +354,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
struct folio *folio = NULL;
LIST_HEAD(folio_list);
bool pageout_anon_only_filter;
+ unsigned int batch_count = 0;
if (fatal_signal_pending(current))
return -EINTR;
@@ -441,6 +443,13 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
arch_enter_lazy_mmu_mode();
for (; addr < end; pte++, addr += PAGE_SIZE) {
ptent = ptep_get(pte);
+
+ if (++batch_count == SWAP_CLUSTER_MAX) {
+ pte_unmap_unlock(start_pte, ptl);
+ cond_resched();
+ start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
+ batch_count = 0;
+ }
if (pte_none(ptent))
continue;
--
2.34.1
* Re: [PATCH 1/1] mm/madvise: add cond_resched() in madvise_cold_or_pageout_pte_range()
2023-09-09 5:33 ` [PATCH 1/1] " Jiexun Wang
@ 2023-09-10 19:33 ` Andrew Morton
2023-09-12 17:58 ` kernel test robot
1 sibling, 0 replies; 4+ messages in thread
From: Andrew Morton @ 2023-09-10 19:33 UTC (permalink / raw)
To: Jiexun Wang; +Cc: brauner, linux-mm, linux-kernel, falcon
On Sat, 9 Sep 2023 13:33:08 +0800 Jiexun Wang <wangjiexun@tinylab.org> wrote:
> Currently the madvise_cold_or_pageout_pte_range() function exhibits
> significant latency under memory pressure, which can be effectively
> reduced by adding cond_resched() within the loop.
>
> When the batch_count reaches SWAP_CLUSTER_MAX, we reschedule
> the task to ensure fairness and avoid long lock holding times.
>
> ...
>
> @@ -441,6 +443,13 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> arch_enter_lazy_mmu_mode();
> for (; addr < end; pte++, addr += PAGE_SIZE) {
> ptent = ptep_get(pte);
> +
> + if (++batch_count == SWAP_CLUSTER_MAX) {
> + pte_unmap_unlock(start_pte, ptl);
> + cond_resched();
> + start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
> + batch_count = 0;
> + }
>
> if (pte_none(ptent))
> continue;
I doubt we can simply drop the lock like this and then proceed as if
nothing has changed while the lock was released.
Could be that something along these lines:
@@ -434,6 +436,7 @@ huge_unlock:
regular_folio:
#endif
tlb_change_page_size(tlb, PAGE_SIZE);
+restart:
start_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
if (!start_pte)
return 0;
@@ -441,6 +444,15 @@ regular_folio:
arch_enter_lazy_mmu_mode();
for (; addr < end; pte++, addr += PAGE_SIZE) {
ptent = ptep_get(pte);
+
+ if (++batch_count == SWAP_CLUSTER_MAX) {
+ batch_count = 0;
+ if (need_resched()) {
+ pte_unmap_unlock(start_pte, ptl);
+ cond_resched();
+ goto restart;
+ }
+ }
if (pte_none(ptent))
continue;
would work, but more analysis would be needed.
* Re: [PATCH 1/1] mm/madvise: add cond_resched() in madvise_cold_or_pageout_pte_range()
2023-09-09 5:33 ` [PATCH 1/1] " Jiexun Wang
2023-09-10 19:33 ` Andrew Morton
@ 2023-09-12 17:58 ` kernel test robot
1 sibling, 0 replies; 4+ messages in thread
From: kernel test robot @ 2023-09-12 17:58 UTC (permalink / raw)
To: Jiexun Wang
Cc: oe-kbuild-all, akpm, brauner, linux-mm, linux-kernel, falcon,
Jiexun Wang
Hi Jiexun,
kernel test robot noticed the following build warnings:
[auto build test WARNING on akpm-mm/mm-everything]
url: https://github.com/intel-lab-lkp/linux/commits/Jiexun-Wang/mm-madvise-add-cond_resched-in-madvise_cold_or_pageout_pte_range/20230909-133707
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/95d610623363009a71024c7a473d6895f39f3caf.1694219361.git.wangjiexun%40tinylab.org
patch subject: [PATCH 1/1] mm/madvise: add cond_resched() in madvise_cold_or_pageout_pte_range()
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <yujie.liu@intel.com>
| Closes: https://lore.kernel.org/r/202309111456.SBqOkY0h-lkp@intel.com/
includecheck warnings: (new ones prefixed by >>)
>> mm/madvise.c: linux/swap.h is included more than once.
vim +30 mm/madvise.c
> 30 #include <linux/swap.h>
31 #include <linux/swapops.h>
32 #include <linux/shmem_fs.h>
33 #include <linux/mmu_notifier.h>
> 34 #include <linux/swap.h>
35
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki