linux-mm.kvack.org archive mirror
* [PATCH v2] mm/vmstat: fix vmstat_shepherd double-scheduling vmstat_update
@ 2026-04-09 12:26 Breno Leitao
  2026-04-09 15:47 ` Andrew Morton
  2026-04-09 17:01 ` Vlastimil Babka (SUSE)
  0 siblings, 2 replies; 3+ messages in thread
From: Breno Leitao @ 2026-04-09 12:26 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Christoph Lameter
  Cc: linux-mm, linux-kernel, kas, shakeel.butt, usama.arif,
	kernel-team, Breno Leitao

vmstat_shepherd uses delayed_work_pending() to check whether
vmstat_update is already scheduled for a given CPU before queuing it.
However, delayed_work_pending() only tests WORK_STRUCT_PENDING_BIT,
which is cleared the moment a worker thread picks up the work to
execute it.

This means that while vmstat_update is actively running on a CPU,
delayed_work_pending() returns false. If need_update() also returns
true at that point (per-cpu counters not yet zeroed mid-flush), the
shepherd queues a second invocation with delay=0, causing vmstat_update
to run again immediately after finishing.

On a 72-CPU system this race is readily observable: before the fix,
many CPUs show invocation gaps well below 500 jiffies (the minimum
round_jiffies_relative() can produce), with the most extreme cases
reaching 0 jiffies—vmstat_update called twice within the same jiffy.

Fix this by replacing delayed_work_pending() with work_busy(), which
returns non-zero for both WORK_BUSY_PENDING (timer armed or work
queued) and WORK_BUSY_RUNNING (work currently executing). The shepherd
now correctly skips a CPU in all busy states.

After the fix, all sub-jiffy and most sub-100-jiffy gaps disappear.
The remaining early invocations have gaps in the 700–999 jiffy range,
attributable to round_jiffies_relative() rounding to a nearby
whole-second boundary rather than to this race.

Each spurious vmstat_update invocation has a measurable side effect:
refresh_cpu_vm_stats() calls decay_pcp_high() for every zone, which
drains idle per-CPU pages back to the buddy allocator via
free_pcppages_bulk(), taking the zone spinlock each time. Eliminating
the double-scheduling therefore reduces zone lock contention directly.
On a 72-CPU stress-ng workload measured with perf lock contention:

  free_pcppages_bulk contention count:  ~55% reduction
  free_pcppages_bulk total wait time:   ~57% reduction
  free_pcppages_bulk max wait time:     ~47% reduction

Note: work_busy() is inherently racy—between the check and the
subsequent queue_delayed_work_on() call, vmstat_update can finish
execution, leaving the work neither pending nor running. In that
narrow window the shepherd can still queue a second invocation.
After the fix, this residual race is rare and produces only occasional
small gaps, a significant improvement over the systematic
double-scheduling seen with delayed_work_pending().

Fixes: 7b8da4c7f07774 ("vmstat: get rid of the ugly cpu_stat_off variable")
Signed-off-by: Breno Leitao <leitao@debian.org>
Reviewed-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
---
Changes in v2:
- Instead of changing the timings, do not double-schedule.
- Link to v1: https://patch.msgid.link/20260401-vmstat-v1-1-b68ce4a35055@debian.org
---
 mm/vmstat.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmstat.c b/mm/vmstat.c
index 2370c6fb1fcd6..cc5fdc0d0f298 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -2139,7 +2139,7 @@ static void vmstat_shepherd(struct work_struct *w)
 			if (cpu_is_isolated(cpu))
 				continue;
 
-			if (!delayed_work_pending(dw) && need_update(cpu))
+			if (!work_busy(&dw->work) && need_update(cpu))
 				queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
 		}
 

---
base-commit: cf7c3c02fdd0dfccf4d6611714273dcb538af2cb
change-id: 20260401-vmstat-048e0feaf344

Best regards,
--  
Breno Leitao <leitao@debian.org>




* Re: [PATCH v2] mm/vmstat: fix vmstat_shepherd double-scheduling vmstat_update
  2026-04-09 12:26 [PATCH v2] mm/vmstat: fix vmstat_shepherd double-scheduling vmstat_update Breno Leitao
@ 2026-04-09 15:47 ` Andrew Morton
  2026-04-09 17:01 ` Vlastimil Babka (SUSE)
  1 sibling, 0 replies; 3+ messages in thread
From: Andrew Morton @ 2026-04-09 15:47 UTC (permalink / raw)
  To: Breno Leitao
  Cc: David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
	Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
	Christoph Lameter, linux-mm, linux-kernel, kas, shakeel.butt,
	usama.arif, kernel-team

On Thu, 09 Apr 2026 05:26:36 -0700 Breno Leitao <leitao@debian.org> wrote:

> vmstat_shepherd uses delayed_work_pending() to check whether
> vmstat_update is already scheduled for a given CPU before queuing it.
> However, delayed_work_pending() only tests WORK_STRUCT_PENDING_BIT,
> which is cleared the moment a worker thread picks up the work to
> execute it.

Thanks, I tentatively added this to the 7.1-rc1 queue, to upstream ~2
weeks hence. 



* Re: [PATCH v2] mm/vmstat: fix vmstat_shepherd double-scheduling vmstat_update
  2026-04-09 12:26 [PATCH v2] mm/vmstat: fix vmstat_shepherd double-scheduling vmstat_update Breno Leitao
  2026-04-09 15:47 ` Andrew Morton
@ 2026-04-09 17:01 ` Vlastimil Babka (SUSE)
  1 sibling, 0 replies; 3+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-04-09 17:01 UTC (permalink / raw)
  To: Breno Leitao, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
	Christoph Lameter, Dmitry Ilvokhin
  Cc: linux-mm, linux-kernel, kas, shakeel.butt, usama.arif, kernel-team

On 4/9/26 14:26, Breno Leitao wrote:
> vmstat_shepherd uses delayed_work_pending() to check whether
> vmstat_update is already scheduled for a given CPU before queuing it.
> However, delayed_work_pending() only tests WORK_STRUCT_PENDING_BIT,
> which is cleared the moment a worker thread picks up the work to
> execute it.
> 
> This means that while vmstat_update is actively running on a CPU,
> delayed_work_pending() returns false. If need_update() also returns
> true at that point (per-cpu counters not yet zeroed mid-flush), the
> shepherd queues a second invocation with delay=0, causing vmstat_update
> to run again immediately after finishing.
> 
> On a 72-CPU system this race is readily observable: before the fix,
> many CPUs show invocation gaps well below 500 jiffies (the minimum
> round_jiffies_relative() can produce), with the most extreme cases
> reaching 0 jiffies—vmstat_update called twice within the same jiffy.
> 
> Fix this by replacing delayed_work_pending() with work_busy(), which
> returns non-zero for both WORK_BUSY_PENDING (timer armed or work
> queued) and WORK_BUSY_RUNNING (work currently executing). The shepherd
> now correctly skips a CPU in all busy states.
> 
> After the fix, all sub-jiffy and most sub-100-jiffy gaps disappear.
> The remaining early invocations have gaps in the 700–999 jiffy range,
> attributable to round_jiffies_relative() rounding to a nearby
> whole-second boundary rather than to this race.
> 
> Each spurious vmstat_update invocation has a measurable side effect:
> refresh_cpu_vm_stats() calls decay_pcp_high() for every zone, which
> drains idle per-CPU pages back to the buddy allocator via
> free_pcppages_bulk(), taking the zone spinlock each time. Eliminating
> the double-scheduling therefore reduces zone lock contention directly.
> On a 72-CPU stress-ng workload measured with perf lock contention:
> 
>   free_pcppages_bulk contention count:  ~55% reduction
>   free_pcppages_bulk total wait time:   ~57% reduction
>   free_pcppages_bulk max wait time:     ~47% reduction
> 
> Note: work_busy() is inherently racy—between the check and the
> subsequent queue_delayed_work_on() call, vmstat_update can finish
> execution, leaving the work neither pending nor running. In that
> narrow window the shepherd can still queue a second invocation.
> After the fix, this residual race is rare and produces only occasional
> small gaps, a significant improvement over the systematic
> double-scheduling seen with delayed_work_pending().
> 
> Fixes: 7b8da4c7f07774 ("vmstat: get rid of the ugly cpu_stat_off variable")
> Signed-off-by: Breno Leitao <leitao@debian.org>
> Reviewed-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>

Looks like v2 posting raced with Dmitry adding R-b tag to the posting in v1
thread:

https://lore.kernel.org/all/adebKQaiAP4G3liv@shell.ilvokhin.com/

> ---
> Changes in v2:
> - Instead of changing the timings, do not double-schedule.
> - Link to v1: https://patch.msgid.link/20260401-vmstat-v1-1-b68ce4a35055@debian.org
> ---
>  mm/vmstat.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 2370c6fb1fcd6..cc5fdc0d0f298 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -2139,7 +2139,7 @@ static void vmstat_shepherd(struct work_struct *w)
>  			if (cpu_is_isolated(cpu))
>  				continue;
>  
> -			if (!delayed_work_pending(dw) && need_update(cpu))
> +			if (!work_busy(&dw->work) && need_update(cpu))
>  				queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
>  		}
>  
> 
> ---
> base-commit: cf7c3c02fdd0dfccf4d6611714273dcb538af2cb
> change-id: 20260401-vmstat-048e0feaf344
> 
> Best regards,
> --  
> Breno Leitao <leitao@debian.org>
> 



