* [PATCH 1/4] vmstat: allow_direct_reclaim should use zone_page_state_snapshot
2023-05-30 14:52 [PATCH 0/4] vmstat bug fixes for nohz_full CPUs Marcelo Tosatti
@ 2023-05-30 14:52 ` Marcelo Tosatti
2023-05-30 14:52 ` [PATCH 2/4] vmstat: skip periodic vmstat update for nohz full CPUs Marcelo Tosatti
` (2 subsequent siblings)
3 siblings, 0 replies; 6+ messages in thread
From: Marcelo Tosatti @ 2023-05-30 14:52 UTC
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel,
linux-mm, Vlastimil Babka, Michal Hocko, Marcelo Tosatti
A customer provided evidence indicating that a process
was stalled in direct reclaim:
- The process was trapped in throttle_direct_reclaim().
  It called wait_event_killable() to wait for the condition
  allow_direct_reclaim(pgdat) to become true for the current node.
  allow_direct_reclaim(pgdat) examined the number of free pages
  on the node via zone_page_state(), which just returns the value
  in zone->vm_stat[NR_FREE_PAGES].
- On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
  However, the freelist on this node was not empty.
- This inconsistency in the vmstat value was caused by the percpu
  vmstat counters on nohz_full CPUs. Every vmstat increment/decrement
  is first performed on a percpu vmstat counter, and the pooled diffs
  are then periodically folded into the zone's vmstat counter. However,
  on nohz_full CPUs (48 of 52 CPUs on this customer's system) these
  pooled diffs were never folded in once a CPU had no more events on
  it, because the CPU went to sleep indefinitely.
  Inspection of the percpu vmstat counters showed a total of 69 counts
  not yet folded into the zone's vmstat counter.
- In this situation, kswapd did not help the trapped process.
  In pgdat_balanced(), zone_watermark_ok_safe() examined the number
  of free pages on the node via zone_page_state_snapshot(), which
  does account for the counts still pending in the percpu vmstat.
  kswapd therefore correctly saw 69 free pages, and since
  zone->_watermark = {8, 20, 32}, it did not run: 69 was above the
  high watermark of 32.
Change allow_direct_reclaim to use zone_page_state_snapshot, which
reads a more precise version of the vmstat counters.
allow_direct_reclaim is only called from try_to_free_pages,
which is not a hot path, so the extra cost is acceptable.
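For reference, zone_page_state_snapshot() additionally folds the
pending per-CPU diffs into the global counter. A minimal sketch of
what it does, closely following include/linux/vmstat.h (exact field
names vary between kernel versions):

static unsigned long zone_page_state_snapshot(struct zone *zone,
					enum zone_stat_item item)
{
	/* Start from the (possibly stale) global counter... */
	long x = atomic_long_read(&zone->vm_stat[item]);
#ifdef CONFIG_SMP
	int cpu;

	/* ...then fold in the diffs still pending on each CPU. */
	for_each_online_cpu(cpu)
		x += per_cpu_ptr(zone->per_cpu_zonestats, cpu)->vm_stat_diff[item];

	if (x < 0)
		x = 0;
#endif
	return x;
}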
Testing: due to difficulties accessing the system, it has not been
possible for the reporter to test the patch (however, it is clear
from the available data and analysis that it should fix the problem).
Reviewed-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Aaron Tomlin <atomlin@atomlin.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
---
Index: linux-vmstat-remote/mm/vmscan.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmscan.c
+++ linux-vmstat-remote/mm/vmscan.c
@@ -6887,7 +6887,7 @@ static bool allow_direct_reclaim(pg_data
continue;
pfmemalloc_reserve += min_wmark_pages(zone);
- free_pages += zone_page_state(zone, NR_FREE_PAGES);
+ free_pages += zone_page_state_snapshot(zone, NR_FREE_PAGES);
}
/* If there are no reserves (unexpected config) then do not throttle */
* [PATCH 2/4] vmstat: skip periodic vmstat update for nohz full CPUs
2023-05-30 14:52 [PATCH 0/4] vmstat bug fixes for nohz_full CPUs Marcelo Tosatti
2023-05-30 14:52 ` [PATCH 1/4] vmstat: allow_direct_reclaim should use zone_page_state_snapshot Marcelo Tosatti
@ 2023-05-30 14:52 ` Marcelo Tosatti
2023-05-30 14:52 ` [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper Marcelo Tosatti
2023-05-30 14:52 ` [PATCH 4/4] mm/vmstat: do not refresh stats for nohz_full CPUs Marcelo Tosatti
3 siblings, 0 replies; 6+ messages in thread
From: Marcelo Tosatti @ 2023-05-30 14:52 UTC
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel,
linux-mm, Vlastimil Babka, Michal Hocko, Marcelo Tosatti
The interruption caused by vmstat_update is undesirable
for certain applications:
oslat 1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
oslat 1094.456971: workqueue_queue_work: ... function=vmstat_update ...
oslat 1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...
The trace above shows an additional 7us consumed by the
oslat -> kworker -> oslat
switches. When the CPU is virtualized and the vmstat_update
interruption occurs on the host (against a qemu-kvm vCPU), the latency
penalty observed in the guest is higher than 50us, violating the
acceptable latency threshold for such applications.
Skip periodic updates for nohz full CPUs. Any callers who
need precise values should use a snapshot of the per-CPU
counters, or use the global counters with measures to
handle errors up to thresholds (see calculate_normal_threshold).
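For context, tick_nohz_full_cpu() is a cheap test against the
boot-time nohz_full= cpumask; roughly (a sketch based on
include/linux/tick.h):

static inline bool tick_nohz_full_cpu(int cpu)
{
	/* False unless nohz_full= was given on the kernel command line. */
	if (!tick_nohz_full_enabled())
		return false;

	return cpumask_test_cpu(cpu, tick_nohz_full_mask);
}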
Suggested by Michal Hocko.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
---
Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -28,6 +28,7 @@
#include <linux/mm_inline.h>
#include <linux/page_ext.h>
#include <linux/page_owner.h>
+#include <linux/tick.h>
#include "internal.h"
@@ -2022,6 +2023,16 @@ static void vmstat_shepherd(struct work_
for_each_online_cpu(cpu) {
struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
+ /*
+ * Skip periodic updates for nohz full CPUs.
+ * Any callers who need precise values should use
+ * a snapshot of the per-CPU counters, or use the global
+ * counters with measures to handle errors up to
+ * thresholds (see calculate_normal_threshold).
+ */
+ if (tick_nohz_full_cpu(cpu))
+ continue;
+
if (!delayed_work_pending(dw) && need_update(cpu))
queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
* [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper
2023-05-30 14:52 [PATCH 0/4] vmstat bug fixes for nohz_full CPUs Marcelo Tosatti
2023-05-30 14:52 ` [PATCH 1/4] vmstat: allow_direct_reclaim should use zone_page_state_snapshot Marcelo Tosatti
2023-05-30 14:52 ` [PATCH 2/4] vmstat: skip periodic vmstat update for nohz full CPUs Marcelo Tosatti
@ 2023-05-30 14:52 ` Marcelo Tosatti
2023-05-30 20:09 ` Andrew Morton
2023-05-30 14:52 ` [PATCH 4/4] mm/vmstat: do not refresh stats for nohz_full CPUs Marcelo Tosatti
3 siblings, 1 reply; 6+ messages in thread
From: Marcelo Tosatti @ 2023-05-30 14:52 UTC
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel,
linux-mm, Vlastimil Babka, Michal Hocko, Marcelo Tosatti
Add a schedule_on_each_cpumask function, equivalent to
schedule_on_each_cpu but accepting a cpumask to operate on.
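A hypothetical caller, to illustrate the intended use (the work
function and CPU choice below are made up for the example):

static void touch_cpu(struct work_struct *work)
{
	pr_info("ran on CPU %d\n", smp_processor_id());
}

static int run_on_first_two_cpus(void)
{
	cpumask_var_t mask;
	int err;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;

	cpumask_clear(mask);
	cpumask_set_cpu(0, mask);
	cpumask_set_cpu(1, mask);

	/* Blocks until touch_cpu has run on every online CPU in mask. */
	err = schedule_on_each_cpumask(touch_cpu, mask);

	free_cpumask_var(mask);
	return err;
}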
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
---
Index: linux-vmstat-remote/kernel/workqueue.c
===================================================================
--- linux-vmstat-remote.orig/kernel/workqueue.c
+++ linux-vmstat-remote/kernel/workqueue.c
@@ -3455,6 +3455,56 @@ int schedule_on_each_cpu(work_func_t fun
return 0;
}
+
+/**
+ * schedule_on_each_cpumask - execute a function synchronously on each
+ * CPU in @cpumask, for those which are online.
+ *
+ * @func: the function to call
+ * @cpumask: the CPUs on which to call the function
+ *
+ * schedule_on_each_cpumask() executes @func on each CPU in @cpumask that
+ * is online, using the system workqueue, and blocks until all such CPUs
+ * have completed.
+ * schedule_on_each_cpumask() is very slow.
+ *
+ * Return:
+ * 0 on success, -errno on failure.
+ */
+int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask)
+{
+ int cpu;
+ struct work_struct __percpu *works;
+ cpumask_var_t effmask;
+
+ works = alloc_percpu(struct work_struct);
+ if (!works)
+ return -ENOMEM;
+
+ if (!alloc_cpumask_var(&effmask, GFP_KERNEL)) {
+ free_percpu(works);
+ return -ENOMEM;
+ }
+
+ cpumask_and(effmask, cpumask, cpu_online_mask);
+
+ cpus_read_lock();
+
+ for_each_cpu(cpu, effmask) {
+ struct work_struct *work = per_cpu_ptr(works, cpu);
+
+ INIT_WORK(work, func);
+ schedule_work_on(cpu, work);
+ }
+
+ for_each_cpu(cpu, effmask)
+ flush_work(per_cpu_ptr(works, cpu));
+
+ cpus_read_unlock();
+ free_percpu(works);
+ free_cpumask_var(effmask);
+ return 0;
+}
+
/**
* execute_in_process_context - reliably execute the routine with user context
* @fn: the function to execute
Index: linux-vmstat-remote/include/linux/workqueue.h
===================================================================
--- linux-vmstat-remote.orig/include/linux/workqueue.h
+++ linux-vmstat-remote/include/linux/workqueue.h
@@ -450,6 +450,7 @@ extern void __flush_workqueue(struct wor
extern void drain_workqueue(struct workqueue_struct *wq);
extern int schedule_on_each_cpu(work_func_t func);
+extern int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask);
int execute_in_process_context(work_func_t fn, struct execute_work *);
* Re: [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper
2023-05-30 14:52 ` [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper Marcelo Tosatti
@ 2023-05-30 20:09 ` Andrew Morton
0 siblings, 0 replies; 6+ messages in thread
From: Andrew Morton @ 2023-05-30 20:09 UTC
To: Marcelo Tosatti
Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
linux-kernel, linux-mm, Vlastimil Babka, Michal Hocko
On Tue, 30 May 2023 11:52:37 -0300 Marcelo Tosatti <mtosatti@redhat.com> wrote:
> Add a schedule_on_each_cpumask function, equivalent to
> schedule_on_each_cpu but accepting a cpumask to operate on.
>
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
>
> ---
>
> Index: linux-vmstat-remote/kernel/workqueue.c
> ===================================================================
> --- linux-vmstat-remote.orig/kernel/workqueue.c
> +++ linux-vmstat-remote/kernel/workqueue.c
> @@ -3455,6 +3455,56 @@ int schedule_on_each_cpu(work_func_t fun
> return 0;
> }
>
> +
> +/**
> + * schedule_on_each_cpumask - execute a function synchronously on each
> + * CPU in @cpumask, for those which are online.
> + *
> + * @func: the function to call
> + * @cpumask: the CPUs on which to call the function
> + *
> + * schedule_on_each_cpumask() executes @func on each CPU in @cpumask that
> + * is online, using the system workqueue, and blocks until all such CPUs
> + * have completed.
> + * schedule_on_each_cpumask() is very slow.
> + *
> + * Return:
> + * 0 on success, -errno on failure.
> + */
> +int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask)
> +{
> + int cpu;
> + struct work_struct __percpu *works;
> + cpumask_var_t effmask;
> +
> + works = alloc_percpu(struct work_struct);
> + if (!works)
> + return -ENOMEM;
> +
> + if (!alloc_cpumask_var(&effmask, GFP_KERNEL)) {
> + free_percpu(works);
> + return -ENOMEM;
> + }
> +
> + cpumask_and(effmask, cpumask, cpu_online_mask);
> +
> + cpus_read_lock();
> +
> + for_each_cpu(cpu, effmask) {
Should we check here that the cpu is still online?
> + struct work_struct *work = per_cpu_ptr(works, cpu);
> +
> + INIT_WORK(work, func);
> + schedule_work_on(cpu, work);
> + }
> +
> + for_each_cpu(cpu, effmask)
> + flush_work(per_cpu_ptr(works, cpu));
> +
> + cpus_read_unlock();
> + free_percpu(works);
> + free_cpumask_var(effmask);
> + return 0;
> +}
> +
> /**
> * execute_in_process_context - reliably execute the routine with user context
> * @fn: the function to execute
> --- linux-vmstat-remote.orig/include/linux/workqueue.h
> +++ linux-vmstat-remote/include/linux/workqueue.h
> @@ -450,6 +450,7 @@ extern void __flush_workqueue(struct wor
> extern void drain_workqueue(struct workqueue_struct *wq);
>
> extern int schedule_on_each_cpu(work_func_t func);
> +extern int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask);
May as well make schedule_on_each_cpu() call
schedule_on_each_cpumask()? Save a bit of text, and they're hardly
performance-critical to that extent.
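(For illustration, that refactor could look like the sketch below,
assuming the helper's parameter is made const cpumask_t * so that
cpu_possible_mask can be passed directly.)

int schedule_on_each_cpu(work_func_t func)
{
	/* The cpumask_and() with cpu_online_mask inside the helper keeps
	 * the "each online CPU" behaviour of the original function. */
	return schedule_on_each_cpumask(func, cpu_possible_mask);
}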
* [PATCH 4/4] mm/vmstat: do not refresh stats for nohz_full CPUs
2023-05-30 14:52 [PATCH 0/4] vmstat bug fixes for nohz_full CPUs Marcelo Tosatti
` (2 preceding siblings ...)
2023-05-30 14:52 ` [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper Marcelo Tosatti
@ 2023-05-30 14:52 ` Marcelo Tosatti
3 siblings, 0 replies; 6+ messages in thread
From: Marcelo Tosatti @ 2023-05-30 14:52 UTC
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel,
linux-mm, Vlastimil Babka, Michal Hocko, Marcelo Tosatti
The interruption caused by queueing work on nohz_full CPUs
is undesirable for certain applications.
Fix this by not refreshing the per-CPU stats of nohz_full CPUs
when vmstat_refresh runs.
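To illustrate the effect under a concrete (hypothetical)
configuration: on an 8-CPU machine booted with nohz_full=2-7, the
mask built in the hunk below reduces to CPUs 0-1, so writing to
/proc/sys/vm/stat_refresh no longer queues work on the isolated CPUs
(fragment paraphrasing the change):

	cpumask_var_t dstmask;
	int err;

	if (!alloc_cpumask_var(&dstmask, GFP_KERNEL))
		return -ENOMEM;

	/* dstmask = cpu_possible_mask & ~tick_nohz_full_mask, i.e. CPUs 0-1
	 * in this example; only those CPUs get refresh_vm_stats queued. */
	cpumask_andnot(dstmask, cpu_possible_mask, tickless_cpumask());
	err = schedule_on_each_cpumask(refresh_vm_stats, dstmask);
	free_cpumask_var(dstmask);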
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
---
Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -1877,12 +1877,31 @@ static void refresh_vm_stats(struct work
refresh_cpu_vm_stats(true);
}
+#ifdef CONFIG_NO_HZ_FULL
+static inline const cpumask_t *tickless_cpumask(void)
+{
+ return tick_nohz_full_mask;
+}
+#else
+static cpumask_t empty_cpumask;
+static inline const cpumask_t *tickless_cpumask(void)
+{
+ return &empty_cpumask;
+}
+#endif
+
int vmstat_refresh(struct ctl_table *table, int write,
void *buffer, size_t *lenp, loff_t *ppos)
{
long val;
int err;
int i;
+ cpumask_var_t dstmask;
+
+ if (!alloc_cpumask_var(&dstmask, GFP_KERNEL))
+ return -ENOMEM;
+
+ cpumask_andnot(dstmask, cpu_possible_mask, tickless_cpumask());
/*
* The regular update, every sysctl_stat_interval, may come later
@@ -1896,7 +1915,9 @@ int vmstat_refresh(struct ctl_table *tab
* transiently negative values, report an error here if any of
* the stats is negative, so we know to go looking for imbalance.
*/
- err = schedule_on_each_cpu(refresh_vm_stats);
+ err = schedule_on_each_cpumask(refresh_vm_stats, dstmask);
+ free_cpumask_var(dstmask);
+
if (err)
return err;
for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {