Hello,

This is Chenglong from the Google Container-Optimized OS team. I'm reporting a severe CPU hang regression that occurs after a high volume of file creation and subsequent cgroup cleanup.

Through bisection, the issue appears to be caused by a chain reaction between three commits related to writeback, unbound workqueues, and CPU-hogging detection. The issue is greatly alleviated on the latest mainline kernel but is not fully resolved, still occurring intermittently (~1 in 10 runs).

How to reproduce

Kernel v6.1 is good. The hang is reliably triggered (over 80% of runs) on v6.6 and v6.12, and intermittently on mainline (6.17-rc7), with the following steps:
  • Environment: A machine with a fast SSD and a high core count (e.g., Google Cloud's N2-standard-128).

  • Workload: Concurrently generate a large number of files (e.g., 2 million) using multiple services managed by systemd-run. This creates significant I/O and cgroup churn (a sketch of the per-service payload follows this list).

  • Trigger: After the file generation completes, terminate the systemd-run services.

  • Result: Shortly after the services are killed, the system's CPU load spikes, leading to a massive number of kworker/+inode_switch_wbs threads and a system-wide hang/livelock during which the machine is unresponsive for 20-300 seconds.
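
For concreteness, each transient service runs a payload along the lines of the sketch below. The directory layout, file size, unit names, and the 64 x 31250 split of the 2 million files are illustrative, not our exact reproducer.

#!/usr/bin/env python3
# filegen.py: per-service file-generation payload (sketch). We start the
# copies as transient services, e.g.
#   systemd-run --unit=filegen-$i python3 filegen.py /mnt/ssd/dir-$i 31250
# wait for the writes to finish, then stop/kill the units; the hang shows
# up shortly after the units are killed.
import os
import sys

def generate(target_dir, count):
    os.makedirs(target_dir, exist_ok=True)
    for i in range(count):
        # Many small files -> many dirty inodes for (cgroup) writeback to
        # deal with once the services' cgroups are torn down.
        with open(os.path.join(target_dir, "f%07d" % i), "w") as f:
            f.write("x" * 4096)

if __name__ == "__main__":
    generate(sys.argv[1], int(sys.argv[2]))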

Analysis and Problematic Commits

  1. The initial commit. The process begins with a worker that can get stuck busy-waiting on a spinlock.

  2. The Kworker Explosion. A subsequent change misinterprets the spinning worker from Stage 1, leading to a runaway feedback loop of worker creation.

    • Commit: 616db8779b1e ("workqueue: Automatically mark CPU-hogging work items CPU_INTENSIVE")

    • Effect: This logic sees the spinning worker, marks it CPU_INTENSIVE, and excludes it from concurrency management. To handle the work backlog it spawns a new kworker, which also gets stuck on the same lock, repeating the cycle. This directly causes the kworker count to explode from fewer than 50 to 100-2000+ (a toy illustration of this loop follows the list).

  3. The System-Wide Lockdown. The final piece allows this localized worker explosion to saturate the entire system.
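
To make the Stage 2 feedback loop concrete, here is a toy userspace model in Python (an illustration of the dynamic described above, not kernel code; the spin threshold and timings are arbitrary).

import threading
import time

# Toy model (NOT kernel code): the lock stays held, each worker busy-waits
# on it, and once a worker has spun past the threshold it stops being
# counted ("CPU_INTENSIVE") and a replacement worker is spawned, which gets
# stuck on the same lock.

CPU_INTENSIVE_THRESH = 0.01   # seconds of spinning before a worker is written off
lock_held = True              # stands in for the contended spinlock
workers = []

def spawn_worker():
    t = threading.Thread(target=kworker, daemon=True)
    workers.append(t)
    t.start()

def kworker():
    start = time.monotonic()
    replaced = False
    while lock_held:                       # "spin" on the lock
        if not replaced and time.monotonic() - start > CPU_INTENSIVE_THRESH:
            spawn_worker()                 # concurrency management gives up on us
            replaced = True
    # ...the queued work would run here once the lock is finally released...

spawn_worker()                             # the first queued work item
time.sleep(0.5)                            # the lock stays held for a while
print("workers created while the lock was held:", len(workers))
lock_held = False                          # release; the spinners exit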

Current Status and Mitigation
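
As noted above, the latest mainline still hangs in roughly 1 in 10 runs. The mitigation we currently apply is the strict-affinity forcing referred to in question 2 below; for reference, it is roughly of the following shape (a sketch, not our production tooling; it assumes the affinity_scope/affinity_strict sysfs knobs that unbound workqueues have exposed since v6.6, and the "cpu" scope value is only an example).

#!/usr/bin/env python3
# Force every unbound workqueue into a strict, narrow affinity scope so a
# worker explosion stays confined instead of spreading across all CPUs.
import glob

for wq in glob.glob("/sys/devices/virtual/workqueue/*"):
    try:
        with open(wq + "/affinity_scope", "w") as f:
            f.write("cpu")          # confine workers to their own CPU group
        with open(wq + "/affinity_strict", "w") as f:
            f.write("1")            # and never let them migrate outside it
    except OSError:
        pass                        # per-CPU workqueues lack these knobs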

Questions

Given that the issue is not fully resolved, could you please provide some guidance?

  1. Is this a known issue, and are there patches in development that might fully address the underlying spinlock contention or the kworker feedback loop?

  2. Is there a better long-term mitigation we can apply other than forcing strict affinity?

Thank you for your time and help.

Best regards,
Chenglong