From: Chenglong Tang <chenglongtang@google.com>
To: stable@vger.kernel.org
Cc: regressions@lists.linux.dev, tj@kernel.org,
roman.gushchin@linux.dev, linux-mm@kvack.org,
lakitu-dev@google.com
Subject: [REGRESSION] workqueue/writeback: Severe CPU hang due to kworker proliferation during I/O flush and cgroup cleanup
Date: Wed, 24 Sep 2025 17:24:15 -0700
Message-ID: <CAOdxtTZJqgDNMtqsq51hQ0azanFPLXHMAJ-mRhRS6yjzYhMf_A@mail.gmail.com>

Hello,
This is Chenglong from Google Container Optimized OS. I'm reporting a
severe CPU hang regression that occurs after a high volume of file creation
and subsequent cgroup cleanup.
Through bisection, the issue appears to be caused by a chain reaction
between three commits related to writeback, unbound workqueues, and
CPU-hogging detection. The issue is greatly alleviated on the latest
mainline kernel but is not fully resolved, still occurring intermittently
(~1 in 10 runs).
How to reproduce
Kernel v6.1 is good. The hang is reliably triggered (over an 80% chance)
on kernels v6.6 and v6.12, and intermittently on mainline (v6.17-rc7),
with the following steps:
- *Environment:* A machine with a fast SSD and a high core count (e.g.,
  Google Cloud's n2-standard-128).
- *Workload:* Concurrently generate a large number of files (e.g., 2
  million) using multiple services managed by systemd-run. This creates
  significant I/O and cgroup churn.
- *Trigger:* After the file generation completes, terminate the
  systemd-run services.
- *Result:* Shortly after the services are killed, the system's CPU load
  spikes, a massive number of kworker/+inode_switch_wbs threads appear,
  and the machine becomes unresponsive in a system-wide hang/livelock
  lasting 20-300 seconds. (A rough sketch of the workload follows this
  list.)
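
For reference, a minimal sketch of the kind of workload described above;
the unit names, file counts, and paths are illustrative, not the exact
reproducer:

  # Spawn transient services that each create many small files
  # (values are illustrative; the real runs produced ~2M files total).
  for i in $(seq 1 8); do
      systemd-run --unit=filegen-$i bash -c \
          'mkdir -p /var/tmp/gen-$$ && for f in $(seq 1 250000); do
               echo data > /var/tmp/gen-$$/f$f; done'
  done

  # Once generation completes, stop the services to trigger cgroup
  # cleanup, then watch the kworker population:
  for i in $(seq 1 8); do systemctl stop filegen-$i; done
  watch "ps -e | grep -c 'kworker.*inode_switch_wbs'"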
Analysis and Problematic Commits
*1. The initial commit* The process begins with a worker that can get
stuck busy-waiting on a spinlock.
- *Commit:* ("writeback, cgroup: release dying cgwbs by switching
  attached inodes") <https://lkml.org/lkml/2021/6/3/1424>
- *Effect:* This introduced the inode_switch_wbs_work_fn worker to clean
  up cgroup writeback structures. Under our test load, this worker
  appears to hit a highly contended wb->list_lock spinlock, causing it
  to burn 100% CPU without sleeping. (One way to observe this is
  sketched below.)
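
A hedged sketch of how the spin can be observed; this assumes root
access and that sysrq is enabled on the box:

  # Dump backtraces of all active CPUs (sysrq-l); spinning kworkers
  # show up in the spinlock slowpath under inode_switch_wbs_work_fn:
  echo l > /proc/sysrq-trigger
  dmesg | tail -n 100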
*2. The Kworker Explosion* A subsequent change misinterprets the
spinning worker from Stage 1, leading to a runaway feedback loop of
worker creation.
- *Commit:* 616db8779b1e ("workqueue: Automatically mark CPU-hogging
  work items CPU_INTENSIVE")
  <https://git.zx2c4.com/linux-rng/commit/?id=616db8779b1e3f93075df691432cccc5ef3c3ba0>
- *Effect:* This logic sees the spinning worker, marks it CPU_INTENSIVE,
  and excludes it from concurrency management. To handle the work
  backlog, the workqueue spawns a *new* kworker, which then also gets
  stuck on the same lock, repeating the cycle. This directly causes the
  kworker count to explode from <50 to 100-2000+. (The related knobs are
  sketched below.)
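
The detection this commit added is tunable; assuming a kernel built with
CONFIG_WQ_CPU_INTENSIVE_REPORT, offenders can also be logged. The
parameter and option names are from the 616db8779b1e series; treat this
as a sketch:

  # Raise the auto-detection threshold at boot (default 10000 us), so a
  # briefly-spinning worker is not marked CPU_INTENSIVE as eagerly:
  #     workqueue.cpu_intensive_thresh_us=100000
  # With CONFIG_WQ_CPU_INTENSIVE_REPORT=y, repeat offenders are reported
  # in the kernel log:
  dmesg | grep -i 'hogged cpu'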
*3. The System-Wide Lockdown* The final piece allows this localized
worker explosion to saturate the entire system.
- *Commit:* 8639ecebc9b1 ("workqueue: Implement non-strict affinity
  scope for unbound workqueues")
  <https://git.zx2c4.com/linux-rng/commit/tools/workqueue?id=8639ecebc9b1796d7074751a350462f5e1c61cd4>
- *Effect:* This change made non-strict affinity the default. It allows
  the hundreds of kworkers created in Stage 2 to be spread by the
  scheduler across all available CPU cores, turning the problem into a
  system-wide hang. (The scope in effect can be inspected as sketched
  below.)
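
The affinity scope in effect is visible for workqueues that register
sysfs attributes (WQ_SYSFS); "writeback" below is only an example name:

  # Show the affinity scope of an unbound workqueue; the output names
  # the scope currently in effect:
  cat /sys/devices/virtual/workqueue/writeback/affinity_scope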
Current Status and Mitigation
- *Mainline Status:* On the latest mainline kernel, the hang is far less
  frequent and kworker counts are back to normal (<50), suggesting that
  other changes have partially mitigated the issue. However, the hang
  still occurs, and when it does, the kworker count still explodes
  (e.g., 300+), indicating the underlying feedback loop remains.
- *Workaround:* A reliable mitigation is to revert to the old workqueue
  behavior by setting affinity_strict to 1. This confines the kworker
  proliferation to a single CPU pod, preventing the system-wide hang.
  (One way to set it is sketched below.)
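
A hedged sketch of applying this at runtime; only WQ_SYSFS workqueues
appear under this path, and "writeback" again stands in for the
workqueue being tuned:

  # List the unbound workqueues that expose sysfs knobs:
  ls /sys/devices/virtual/workqueue/
  # Pin workers inside their affinity scope (the pre-8639ecebc9b1
  # behavior):
  echo 1 > /sys/devices/virtual/workqueue/writeback/affinity_strict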
Questions

Given that the issue is not fully resolved, could you please provide
some guidance?

1. Is this a known issue, and are there patches in development that
   might fully address the underlying spinlock contention or the
   kworker feedback loop?
2. Is there a better long-term mitigation we can apply other than
   forcing strict affinity?
Thank you for your time and help.
Best regards,
Chenglong