linux-mm.kvack.org archive mirror
* [REGRESSION] workqueue/writeback: Severe CPU hang due to kworker proliferation during I/O flush and cgroup cleanup
@ 2025-09-25  0:24 Chenglong Tang
  2025-09-25  0:52 ` Tejun Heo
  0 siblings, 1 reply; 8+ messages in thread
From: Chenglong Tang @ 2025-09-25  0:24 UTC (permalink / raw)
  To: stable; +Cc: regressions, tj, roman.gushchin, linux-mm, lakitu-dev

[-- Attachment #1: Type: text/plain, Size: 4121 bytes --]

Hello,

This is Chenglong from Google Container Optimized OS. I'm reporting a
severe CPU hang regression that occurs after a high volume of file creation
and subsequent cgroup cleanup.

Through bisection, the issue appears to be caused by a chain reaction
between three commits related to writeback, unbound workqueues, and
CPU-hogging detection. The issue is greatly alleviated on the latest
mainline kernel but is not fully resolved, still occurring intermittently
(~1 in 10 runs).

How to reproduce

The kernel v6.1 is good. The hang is reliably triggered (over 80% chance)
on kernels v6.6 and v6.12, and intermittently on mainline (6.17-rc7), with
the following steps:

- Environment: A machine with a fast SSD and a high core count (e.g.,
  Google Cloud's N2-standard-128).

- Workload: Concurrently generate a large number of files (e.g., 2 million)
  using multiple services managed by systemd-run. This creates significant
  I/O and cgroup churn. (A rough sketch of this workload follows the list.)

- Trigger: After the file generation completes, terminate the systemd-run
  services.

- Result: Shortly after the services are killed, the system's CPU load
  spikes, leading to a massive number of kworker/+inode_switch_wbs threads
  and a system-wide hang/livelock during which the machine is unresponsive
  for 20s to 300s.
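
To make the workload half concrete, here is a minimal sketch of the kind of
reproducer we run; it is not our exact harness. The unit names (repro-N),
the file counts, and the scratch directory are arbitrary choices for this
sketch:

#!/usr/bin/env python3
"""Illustrative reproducer sketch; not our exact test harness.

Starts a number of transient systemd services that each create many small
files, then stops them all, which tears down their cgroups. Unit names,
counts, and the scratch directory are arbitrary.
"""
import subprocess
import time

NUM_SERVICES = 64             # assumed; scale with the core count
FILES_PER_SERVICE = 31250     # 64 * 31250 = 2 million files in total
WORKDIR = "/var/tmp/wb-repro" # arbitrary scratch directory

def start_services():
    for i in range(NUM_SERVICES):
        # Each transient unit runs in its own cgroup; the shell loop just
        # churns out small files to generate dirty inodes and writeback.
        shell = (
            f"mkdir -p {WORKDIR}/{i} && "
            f"for j in $(seq {FILES_PER_SERVICE}); do "
            f"echo data > {WORKDIR}/{i}/f$j; done; "
            f"sleep infinity"
        )
        subprocess.run(
            ["systemd-run", f"--unit=repro-{i}", "/bin/sh", "-c", shell],
            check=True,
        )

def stop_services():
    # Stopping the units removes their cgroups; this is the point at which
    # the kworker explosion starts on affected kernels.
    for i in range(NUM_SERVICES):
        subprocess.run(["systemctl", "stop", f"repro-{i}.service"],
                       check=False)

if __name__ == "__main__":
    start_services()
    time.sleep(600)  # crude: wait for the file generation to finish
    stop_services()
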
Analysis and Problematic Commits

1. The initial commit: The process begins with a worker that can get stuck
busy-waiting on a spinlock.

   - Commit: ("writeback, cgroup: release dying cgwbs by switching attached
     inodes") <https://lkml.org/lkml/2021/6/3/1424>

   - Effect: This introduced the inode_switch_wbs_work_fn worker to clean
     up cgroup writeback structures. Under our test load, this worker
     appears to hit a highly contended wb->list_lock spinlock, causing it
     to burn 100% CPU without sleeping.

2. The Kworker Explosion: A subsequent change misinterprets the spinning
worker from Stage 1, leading to a runaway feedback loop of worker creation.

   - Commit: 616db8779b1e ("workqueue: Automatically mark CPU-hogging work
     items CPU_INTENSIVE")
     <https://git.zx2c4.com/linux-rng/commit/?id=616db8779b1e3f93075df691432cccc5ef3c3ba0>

   - Effect: This logic sees the spinning worker, marks it as CPU_INTENSIVE,
     and excludes it from concurrency management. To handle the work
     backlog, the pool spawns a new kworker, which then also gets stuck on
     the same lock, repeating the cycle. This directly causes the kworker
     count to explode from <50 to 100-2000+. (A toy model of this loop
     follows.)
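
To illustrate the loop, here is a toy, userspace-only model; nothing in it
is kernel code or kernel API. It assumes every queued work item spins
forever on the same lock and that a worker flagged CPU_INTENSIVE stops
counting toward the pool's concurrency budget, so the pool keeps starting
workers for the remaining backlog. The threshold and backlog values are
made up:

# Toy model of the suspected feedback loop; not kernel code.
# Assumptions: every work item busy-waits forever on the same lock, and a
# worker flagged CPU_INTENSIVE no longer counts toward the pool's
# "one running worker" concurrency budget.

THRESH_MS = 10      # stands in for wq_cpu_intensive_thresh_us (10ms default)
BACKLOG = 200       # made-up number of queued inode-switch work items
SIM_MS = 400        # simulated wall-clock time

pending = BACKLOG
worker_cpu_ms = []  # CPU time burned so far by each spawned worker
flagged = 0         # workers already marked CPU_INTENSIVE

for now in range(1, SIM_MS + 1):
    # Concurrency management only sees unflagged workers; when none of
    # those is running, the pool wakes or creates another worker for the
    # backlog.
    if pending and len(worker_cpu_ms) - flagged == 0:
        worker_cpu_ms.append(0)
        pending -= 1
    # Every worker is stuck spinning on the contended lock.
    worker_cpu_ms = [ms + 1 for ms in worker_cpu_ms]
    # The detector flags workers that crossed the CPU-time threshold.
    newly_flagged = sum(1 for ms in worker_cpu_ms if ms == THRESH_MS)
    if newly_flagged:
        flagged += newly_flagged
        print(f"t={now:3d}ms  workers={len(worker_cpu_ms):3d}  "
              f"cpu_intensive={flagged:3d}  backlog={pending:3d}")

In this model the worker count only ever grows while the backlog lasts,
which is the shape of the explosion we observe; in the real system the
backlog is presumably refilled by further dying cgwbs, so the count climbs
into the hundreds or thousands.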

3. The System-Wide Lockdown: The final piece allows this localized worker
explosion to saturate the entire system.

   - Commit: 8639ecebc9b1 ("workqueue: Implement non-strict affinity scope
     for unbound workqueues")
     <https://git.zx2c4.com/linux-rng/commit/tools/workqueue?id=8639ecebc9b1796d7074751a350462f5e1c61cd4>

   - Effect: This change introduced non-strict affinity as the default. It
     allows the hundreds of kworkers created in Stage 2 to be spread by the
     scheduler across all available CPU cores, turning the problem into a
     system-wide hang. (The snippet below shows how we count the affected
     kworkers.)
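
For anyone trying to confirm the same behavior, this is roughly how we
watch the kworker count; it simply counts kernel threads whose name
mentions inode_switch_wbs. It assumes /proc/<pid>/comm reports the extended
kworker name (as recent kernels do); the one-second sampling interval is
arbitrary:

#!/usr/bin/env python3
"""Count kworkers attached to the inode_switch_wbs pool, once per second.

Baseline is small in our runs; during the hang the count climbs into the
hundreds or thousands.
"""
import glob
import time

def count_isw_kworkers():
    count = 0
    for comm_path in glob.glob("/proc/[0-9]*/comm"):
        try:
            with open(comm_path) as f:
                comm = f.read().strip()
        except OSError:
            continue  # the task exited between listing and reading
        if comm.startswith("kworker") and "inode_switch_wbs" in comm:
            count += 1
    return count

if __name__ == "__main__":
    while True:
        print(time.strftime("%H:%M:%S"), count_isw_kworkers(), flush=True)
        time.sleep(1)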

Current Status and Mitigation

   - Mainline Status: On the latest mainline kernel, the hang is far less
     frequent and the kworker counts are reduced back to normal (<50),
     suggesting other changes have partially mitigated the issue. However,
     the hang still occurs, and when it does, the kworker count still
     explodes (e.g., 300+), indicating the underlying feedback loop remains.

   - Workaround: A reliable mitigation is to revert to the old workqueue
     behavior by setting affinity_strict to 1. This confines the kworker
     proliferation to a single CPU pod, preventing the system-wide hang.
     (A sketch of how we apply this follows.)
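
For reference, this is roughly how we flip the knob from userspace. It is
only a sketch: it assumes the affected workqueues are exported under
/sys/devices/virtual/workqueue/ (i.e., created with WQ_SYSFS); workqueues
without a sysfs node cannot be adjusted this way and would need a
kernel-side change:

#!/usr/bin/env python3
"""Set affinity_strict=1 on every unbound workqueue that exposes the knob.

Sketch only: it assumes the workqueues we care about are visible under
/sys/devices/virtual/workqueue/ (WQ_SYSFS). Must run as root.
"""
import glob

for path in glob.glob("/sys/devices/virtual/workqueue/*/affinity_strict"):
    try:
        with open(path, "w") as f:
            f.write("1")
        print("set", path)
    except OSError as err:
        print("skipped", path, err)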

Questions

Given that the issue is not fully resolved, could you please provide some
guidance?

1. Is this a known issue, and are there patches in development that might
   fully address the underlying spinlock contention or the kworker feedback
   loop?

2. Is there a better long-term mitigation we can apply other than forcing
   strict affinity?

Thank you for your time and help.

Best regards,

Chenglong


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2025-09-26 20:08 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-09-25  0:24 [REGRESSION] workqueue/writeback: Severe CPU hang due to kworker proliferation during I/O flush and cgroup cleanup Chenglong Tang
2025-09-25  0:52 ` Tejun Heo
2025-09-25 16:58   ` Chenglong Tang
2025-09-26 19:50   ` Chenglong Tang
2025-09-25  0:29 Chenglong Tang
2025-09-26 19:54 ` Chenglong Tang
2025-09-26 19:59   ` Tejun Heo
2025-09-26 20:07     ` Chenglong Tang
