* [REGRESSION] workqueue/writeback: Severe CPU hang due to kworker proliferation during I/O flush and cgroup cleanup
@ 2025-09-25 0:29 Chenglong Tang
2025-09-26 19:54 ` Chenglong Tang
0 siblings, 1 reply; 8+ messages in thread
From: Chenglong Tang @ 2025-09-25 0:29 UTC (permalink / raw)
To: stable; +Cc: regressions, tj, roman.gushchin, linux-mm, lakitu-dev
Hello,
This is Chenglong from Google Container Optimized OS. I'm reporting a
severe CPU hang regression that occurs after a high volume of file
creation and subsequent cgroup cleanup.
Through bisection, the issue appears to be caused by a chain reaction
between three commits related to writeback, unbound workqueues, and
CPU-hogging detection. The issue is greatly alleviated on the latest
mainline kernel but is not fully resolved, still occurring
intermittently (~1 in 10 runs).
How to reproduce
Kernel v6.1 is good. The hang is reliably triggered (over 80% chance)
on kernels v6.6 and v6.12, and intermittently on mainline (v6.17-rc7),
with the following steps:
Environment: A machine with a fast SSD and a high core count (e.g.,
Google Cloud's N2-standard-128).
Workload: Concurrently generate a large number of files (e.g., 2
million) using multiple services managed by systemd-run. This creates
significant I/O and cgroup churn.
Trigger: After the file generation completes, terminate the
systemd-run services.
Result: Shortly after the services are killed, the system's CPU load
spikes, leading to a massive number of kworker/+inode_switch_wbs
threads and a system-wide hang/livelock where the machine becomes
unresponsive (20s - 300s).
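For reference, a rough load-generator sketch for the file-creation half of
this workload (plain C with pthreads). It does not reproduce the systemd-run
service/cgroup churn, and the target directory below is made up:
/*
 * Hypothetical load generator: many threads each create small files to
 * produce heavy writeback traffic. Pair it with systemd-run services and
 * their teardown to get the cgroup churn described above.
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NR_THREADS 64
#define FILES_PER_THREAD 31250	/* 64 * 31250 = 2,000,000 files */

static void *writer(void *arg)
{
	long id = (long)arg;
	char path[256];
	long i;

	for (i = 0; i < FILES_PER_THREAD; i++) {
		/* assumed scratch directory on the fast SSD */
		snprintf(path, sizeof(path), "/mnt/ssd/load/t%ld-f%ld", id, i);
		int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
		if (fd < 0)
			continue;
		(void)write(fd, "x", 1);	/* one dirty page per file */
		close(fd);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_THREADS];
	long t;

	for (t = 0; t < NR_THREADS; t++)
		pthread_create(&tid[t], NULL, writer, (void *)t);
	for (t = 0; t < NR_THREADS; t++)
		pthread_join(tid[t], NULL);
	return 0;
}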
Analysis and Problematic Commits
1. The initial commit: The process begins with a worker that can get
stuck busy-waiting on a spinlock.
Commit: ("writeback, cgroup: release dying cgwbs by switching attached inodes")
Effect: This introduced the inode_switch_wbs_work_fn worker to clean
up cgroup writeback structures. Under our test load, this worker
appears to hit a highly contended wb->list_lock spinlock, causing it
to burn 100% CPU without sleeping.
2. The Kworker Explosion: A subsequent change misinterprets the
spinning worker from Stage 1, leading to a runaway feedback loop of
worker creation.
Commit: 616db8779b1e ("workqueue: Automatically mark CPU-hogging work
items CPU_INTENSIVE")
Effect: This logic sees the spinning worker, marks it as
CPU_INTENSIVE, and excludes it from concurrency management. To handle
the work backlog, it spawns a new kworker, which then also gets stuck
on the same lock, repeating the cycle. This directly causes the
kworker count to explode from <50 to 100-2000+.
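To make stages 1 and 2 concrete, here is a hypothetical stress-module sketch
(not the writeback code; the module name, item counts and the 20ms hold time
are made up) showing the same pattern: work items contending on one spinlock
run past the CPU-hogging threshold, get flagged CPU_INTENSIVE, and each
per-CPU pool keeps bringing up replacement workers that spin on the same lock:
// SPDX-License-Identifier: GPL-2.0
/*
 * Hypothetical stress module: queue work items on every online CPU that all
 * contend on one spinlock. Waiters busy-spin past the CPU-intensive
 * threshold, get excluded from concurrency management, and their pools spawn
 * more workers. Watch `ps -e | grep -c kworker` while it runs.
 */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include <linux/cpumask.h>
#include <linux/delay.h>
#include <linux/slab.h>

#define ITEMS_PER_CPU	8

static DEFINE_SPINLOCK(shared_lock);
static struct work_struct *works;
static int nr_works;

static void contended_work_fn(struct work_struct *work)
{
	spin_lock(&shared_lock);
	mdelay(20);		/* deliberately hold long enough that waiters spin */
	spin_unlock(&shared_lock);
}

static int __init wq_storm_init(void)
{
	int cpu, i, n = 0;

	nr_works = num_online_cpus() * ITEMS_PER_CPU;
	works = kcalloc(nr_works, sizeof(*works), GFP_KERNEL);
	if (!works)
		return -ENOMEM;

	for_each_online_cpu(cpu) {
		for (i = 0; i < ITEMS_PER_CPU; i++) {
			INIT_WORK(&works[n], contended_work_fn);
			queue_work_on(cpu, system_wq, &works[n]);
			n++;
		}
	}
	return 0;
}

static void __exit wq_storm_exit(void)
{
	int i;

	for (i = 0; i < nr_works; i++)
		flush_work(&works[i]);
	kfree(works);
}

module_init(wq_storm_init);
module_exit(wq_storm_exit);
MODULE_DESCRIPTION("sketch: workqueue worker proliferation under lock contention");
MODULE_LICENSE("GPL");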
3. The System-Wide Lockdown: The final piece allows this localized
worker explosion to saturate the entire system.
Commit: 8639ecebc9b1 ("workqueue: Implement non-strict affinity scope
for unbound workqueues")
Effect: This change introduced non-strict affinity as the default. It
allows the hundreds of kworkers created in Stage 2 to be spread by the
scheduler across all available CPU cores, turning the problem into a
system-wide hang.
Current Status and Mitigation
Mainline Status: On the latest mainline kernel, the hang is far less
frequent and the kworker counts are reduced back to normal (<50),
suggesting other changes have partially mitigated the issue. However,
the hang still occurs, and when it does, the kworker count still
explodes (e.g., 300+), indicating the underlying feedback loop
remains.
Workaround: A reliable mitigation is to revert to the old workqueue
behavior by setting affinity_strict to 1. This contains the kworker
proliferation to a single CPU pod, preventing the system-wide hang.
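For reference, a minimal userspace sketch of that workaround, assuming the
workqueue being adjusted is a WQ_SYSFS unbound one such as "writeback" (the
per-workqueue affinity_strict knob only exists for WQ_SYSFS workqueues;
others would need a kernel-side change):
/*
 * Sketch: write 1 to the affinity_strict attribute of a WQ_SYSFS unbound
 * workqueue. "writeback" is used as an example name; adjust for the
 * workqueue actually being pinned down.
 */
#include <stdio.h>

int main(void)
{
	const char *path =
		"/sys/devices/virtual/workqueue/writeback/affinity_strict";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	fputs("1\n", f);
	return fclose(f) ? 1 : 0;
}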
Questions
Given that the issue is not fully resolved, could you please provide
some guidance?
1. Is this a known issue, and are there patches in development that
might fully address the underlying spinlock contention or the kworker
feedback loop?
2. Is there a better long-term mitigation we can apply other than
forcing strict affinity?
Thank you for your time and help.
Best regards,
Chenglong
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [REGRESSION] workqueue/writeback: Severe CPU hang due to kworker proliferation during I/O flush and cgroup cleanup
2025-09-25 0:29 [REGRESSION] workqueue/writeback: Severe CPU hang due to kworker proliferation during I/O flush and cgroup cleanup Chenglong Tang
@ 2025-09-26 19:54 ` Chenglong Tang
2025-09-26 19:59 ` Tejun Heo
0 siblings, 1 reply; 8+ messages in thread
From: Chenglong Tang @ 2025-09-26 19:54 UTC (permalink / raw)
To: stable; +Cc: regressions, tj, roman.gushchin, linux-mm, lakitu-dev
Just did more testing here. Confirmed that the system hang is still
there, but less frequent (6/40 runs), with the patches from
http://lkml.kernel.org/r/20250912103522.2935-1-jack@suse.cz applied to
v6.17-rc7. In the bad instances, the kworker count climbed to over 600
and the hang lasted more than 80 seconds.
So I think the patches didn't fully solve the issue.
On Wed, Sep 24, 2025 at 5:29 PM Chenglong Tang <chenglongtang@google.com> wrote:
>
> Hello,
>
> This is Chenglong from Google Container Optimized OS. I'm reporting a
> severe CPU hang regression that occurs after a high volume of file
> creation and subsequent cgroup cleanup.
>
> Through bisection, the issue appears to be caused by a chain reaction
> between three commits related to writeback, unbound workqueues, and
> CPU-hogging detection. The issue is greatly alleviated on the latest
> mainline kernel but is not fully resolved, still occurring
> intermittently (~1 in 10 runs).
>
> How to reproduce
>
> Kernel v6.1 is good. The hang is reliably triggered (over 80% chance)
> on kernels v6.6 and v6.12, and intermittently on mainline (v6.17-rc7),
> with the following steps:
>
> Environment: A machine with a fast SSD and a high core count (e.g.,
> Google Cloud's N2-standard-128).
>
> Workload: Concurrently generate a large number of files (e.g., 2
> million) using multiple services managed by systemd-run. This creates
> significant I/O and cgroup churn.
>
> Trigger: After the file generation completes, terminate the
> systemd-run services.
>
> Result: Shortly after the services are killed, the system's CPU load
> spikes, leading to a massive number of kworker/+inode_switch_wbs
> threads and a system-wide hang/livelock where the machine becomes
> unresponsive (20s - 300s).
>
> Analysis and Problematic Commits
>
> 1. The initial commit: The process begins with a worker that can get
> stuck busy-waiting on a spinlock.
>
> Commit: ("writeback, cgroup: release dying cgwbs by switching attached inodes")
>
> Effect: This introduced the inode_switch_wbs_work_fn worker to clean
> up cgroup writeback structures. Under our test load, this worker
> appears to hit a highly contended wb->list_lock spinlock, causing it
> to burn 100% CPU without sleeping.
>
> 2. The Kworker Explosion: A subsequent change misinterprets the
> spinning worker from Stage 1, leading to a runaway feedback loop of
> worker creation.
>
> Commit: 616db8779b1e ("workqueue: Automatically mark CPU-hogging work
> items CPU_INTENSIVE")
>
> Effect: This logic sees the spinning worker, marks it as
> CPU_INTENSIVE, and excludes it from concurrency management. To handle
> the work backlog, it spawns a new kworker, which then also gets stuck
> on the same lock, repeating the cycle. This directly causes the
> kworker count to explode from <50 to 100-2000+.
>
> 3. The System-Wide Lockdown: The final piece allows this localized
> worker explosion to saturate the entire system.
>
> Commit: 8639ecebc9b1 ("workqueue: Implement non-strict affinity scope
> for unbound workqueues")
>
> Effect: This change introduced non-strict affinity as the default. It
> allows the hundreds of kworkers created in Stage 2 to be spread by the
> scheduler across all available CPU cores, turning the problem into a
> system-wide hang.
>
> Current Status and Mitigation
>
> Mainline Status: On the latest mainline kernel, the hang is far less
> frequent and the kworker counts are reduced back to normal (<50),
> suggesting other changes have partially mitigated the issue. However,
> the hang still occurs, and when it does, the kworker count still
> explodes (e.g., 300+), indicating the underlying feedback loop
> remains.
>
> Workaround: A reliable mitigation is to revert to the old workqueue
> behavior by setting affinity_strict to 1. This contains the kworker
> proliferation to a single CPU pod, preventing the system-wide hang.
>
> Questions
>
> Given that the issue is not fully resolved, could you please provide
> some guidance?
>
> 1. Is this a known issue, and are there patches in development that
> might fully address the underlying spinlock contention or the kworker
> feedback loop?
>
> 2. Is there a better long-term mitigation we can apply other than
> forcing strict affinity?
>
> Thank you for your time and help.
>
> Best regards,
>
> Chenglong
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [REGRESSION] workqueue/writeback: Severe CPU hang due to kworker proliferation during I/O flush and cgroup cleanup
2025-09-26 19:54 ` Chenglong Tang
@ 2025-09-26 19:59 ` Tejun Heo
2025-09-26 20:07 ` Chenglong Tang
0 siblings, 1 reply; 8+ messages in thread
From: Tejun Heo @ 2025-09-26 19:59 UTC (permalink / raw)
To: Chenglong Tang
Cc: stable, regressions, roman.gushchin, linux-mm, lakitu-dev, Jan Kara
cc'ing Jan.
On Fri, Sep 26, 2025 at 12:54:29PM -0700, Chenglong Tang wrote:
> Just did more testing here. Confirmed that the system hang is still
> there, but less frequent (6/40 runs), with the patches from
> http://lkml.kernel.org/r/20250912103522.2935-1-jack@suse.cz applied to
> v6.17-rc7. In the bad instances, the kworker count climbed to over 600
> and the hang lasted more than 80 seconds.
>
> So I think the patches didn't fully solve the issue.
I wonder how the number of workers still exploded to 600+. Are there that
many cgroups being shut down? Does clamping down @max_active resolve the
problem? There's no reason to have really high concurrency for this.
Thanks.
--
tejun
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [REGRESSION] workqueue/writeback: Severe CPU hang due to kworker proliferation during I/O flush and cgroup cleanup
2025-09-26 19:59 ` Tejun Heo
@ 2025-09-26 20:07 ` Chenglong Tang
0 siblings, 0 replies; 8+ messages in thread
From: Chenglong Tang @ 2025-09-26 20:07 UTC (permalink / raw)
To: Tejun Heo
Cc: stable, regressions, roman.gushchin, linux-mm, lakitu-dev,
Jan Kara, Shedrack Okpara, Calvin D'Mello, Jeff Schnurr
cc'ing GKE folks
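For reference, the @max_active clamp suggested above would look roughly like
the sketch below (assuming the switch work items are queued on a dedicated
workqueue set up by a cgroup_writeback_init()-style initcall in
fs/fs-writeback.c; the exact allocation site and flags may differ per kernel
version):
/* Sketch only: cap concurrent inode-switch work items by giving their
 * workqueue a small max_active (third argument to alloc_workqueue()).
 */
#include <linux/init.h>
#include <linux/workqueue.h>

static struct workqueue_struct *isw_wq;

static int __init cgroup_writeback_init(void)
{
	isw_wq = alloc_workqueue("inode_switch_wbs", 0, 2);
	if (!isw_wq)
		return -ENOMEM;
	return 0;
}
fs_initcall(cgroup_writeback_init);
An existing workqueue could also be adjusted at runtime with
workqueue_set_max_active().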
On Fri, Sep 26, 2025 at 12:59 PM Tejun Heo <tj@kernel.org> wrote:
>
> cc'ing Jan.
>
> On Fri, Sep 26, 2025 at 12:54:29PM -0700, Chenglong Tang wrote:
> > Just did more testing here. Confirmed that the system hang is still
> > there, but less frequent (6/40 runs), with the patches from
> > http://lkml.kernel.org/r/20250912103522.2935-1-jack@suse.cz applied to
> > v6.17-rc7. In the bad instances, the kworker count climbed to over 600
> > and the hang lasted more than 80 seconds.
> >
> > So I think the patches didn't fully solve the issue.
>
> I wonder how the number of workers still exploded to 600+. Are there that
> many cgroups being shut down? Does clamping down @max_active resolve the
> problem? There's no reason to have really high concurrency for this.
>
> Thanks.
>
> --
> tejun
^ permalink raw reply [flat|nested] 8+ messages in thread
* [REGRESSION] workqueue/writeback: Severe CPU hang due to kworker proliferation during I/O flush and cgroup cleanup
@ 2025-09-25 0:24 Chenglong Tang
2025-09-25 0:52 ` Tejun Heo
0 siblings, 1 reply; 8+ messages in thread
From: Chenglong Tang @ 2025-09-25 0:24 UTC (permalink / raw)
To: stable; +Cc: regressions, tj, roman.gushchin, linux-mm, lakitu-dev
[-- Attachment #1: Type: text/plain, Size: 4121 bytes --]
Hello,
This is Chenglong from Google Container Optimized OS. I'm reporting a
severe CPU hang regression that occurs after a high volume of file creation
and subsequent cgroup cleanup.
Through bisection, the issue appears to be caused by a chain reaction
between three commits related to writeback, unbound workqueues, and
CPU-hogging detection. The issue is greatly alleviated on the latest
mainline kernel but is not fully resolved, still occurring intermittently
(~1 in 10 runs).
How to reproduce
Kernel v6.1 is good. The hang is reliably triggered (over 80% chance) on
kernels v6.6 and v6.12, and intermittently on mainline (v6.17-rc7), with the
following steps:
Environment: A machine with a fast SSD and a high core count (e.g.,
Google Cloud's N2-standard-128).
Workload: Concurrently generate a large number of files (e.g., 2 million)
using multiple services managed by systemd-run. This creates significant
I/O and cgroup churn.
Trigger: After the file generation completes, terminate the systemd-run
services.
Result: Shortly after the services are killed, the system's CPU load
spikes, leading to a massive number of kworker/+inode_switch_wbs threads
and a system-wide hang/livelock where the machine becomes unresponsive
(20s - 300s).
Analysis and Problematic Commits
1. The initial commit: The process begins with a worker that can get stuck
busy-waiting on a spinlock.
Commit: ("writeback, cgroup: release dying cgwbs by switching attached
inodes") <https://lkml.org/lkml/2021/6/3/1424>
Effect: This introduced the inode_switch_wbs_work_fn worker to clean up
cgroup writeback structures. Under our test load, this worker appears to
hit a highly contended wb->list_lock spinlock, causing it to burn 100%
CPU without sleeping.
2. The Kworker Explosion: A subsequent change misinterprets the spinning
worker from Stage 1, leading to a runaway feedback loop of worker creation.
Commit: 616db8779b1e ("workqueue: Automatically mark CPU-hogging work
items CPU_INTENSIVE")
<https://git.zx2c4.com/linux-rng/commit/?id=616db8779b1e3f93075df691432cccc5ef3c3ba0>
Effect: This logic sees the spinning worker, marks it as CPU_INTENSIVE,
and excludes it from concurrency management. To handle the work backlog, it
spawns a new kworker, which then also gets stuck on the same lock,
repeating the cycle. This directly causes the kworker count to explode from
<50 to 100-2000+.
3. The System-Wide Lockdown: The final piece allows this localized worker
explosion to saturate the entire system.
Commit: 8639ecebc9b1 ("workqueue: Implement non-strict affinity scope
for unbound workqueues")
<https://git.zx2c4.com/linux-rng/commit/tools/workqueue?id=8639ecebc9b1796d7074751a350462f5e1c61cd4>
Effect: This change introduced non-strict affinity as the default. It
allows the hundreds of kworkers created in Stage 2 to be spread by the
scheduler across all available CPU cores, turning the problem into a
system-wide hang.
Current Status and Mitigation
Mainline Status: On the latest mainline kernel, the hang is far less
frequent and the kworker counts are reduced back to normal (<50),
suggesting other changes have partially mitigated the issue. However, the
hang still occurs, and when it does, the kworker count still explodes
(e.g., 300+), indicating the underlying feedback loop remains.
Workaround: A reliable mitigation is to revert to the old workqueue
behavior by setting affinity_strict to 1. This contains the kworker
proliferation to a single CPU pod, preventing the system-wide hang.
Questions
Given that the issue is not fully resolved, could you please provide some
guidance?
1. Is this a known issue, and are there patches in development that might
fully address the underlying spinlock contention or the kworker feedback
loop?
2. Is there a better long-term mitigation we can apply other than forcing
strict affinity?
Thank you for your time and help.
Best regards,
Chenglong
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [REGRESSION] workqueue/writeback: Severe CPU hang due to kworker proliferation during I/O flush and cgroup cleanup
2025-09-25 0:24 Chenglong Tang
@ 2025-09-25 0:52 ` Tejun Heo
2025-09-25 16:58 ` Chenglong Tang
2025-09-26 19:50 ` Chenglong Tang
0 siblings, 2 replies; 8+ messages in thread
From: Tejun Heo @ 2025-09-25 0:52 UTC (permalink / raw)
To: Chenglong Tang; +Cc: stable, regressions, roman.gushchin, linux-mm, lakitu-dev
On Wed, Sep 24, 2025 at 05:24:15PM -0700, Chenglong Tang wrote:
> Kernel v6.1 is good. The hang is reliably triggered (over 80% chance) on
> kernels v6.6 and v6.12, and intermittently on mainline (v6.17-rc7), with the
> following steps:
> -
>
> *Environment:* A machine with a fast SSD and a high core count (e.g.,
> Google Cloud's N2-standard-128).
> -
>
> *Workload:* Concurrently generate a large number of files (e.g., 2 million)
> using multiple services managed by systemd-run. This creates significant
> I/O and cgroup churn.
> -
>
> *Trigger:* After the file generation completes, terminate the systemd-run
> services.
> -
>
> *Result:* Shortly after the services are killed, the system's CPU load
> spikes, leading to a massive number of kworker/+inode_switch_wbs threads
> and a system-wide hang/livelock where the machine becomes unresponsive (20s
> - 300s).
Sounds like:
http://lkml.kernel.org/r/20250912103522.2935-1-jack@suse.cz
Can you see whether those patches resolve the problem?
Thanks.
--
tejun
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [REGRESSION] workqueue/writeback: Severe CPU hang due to kworker proliferation during I/O flush and cgroup cleanup
2025-09-25 0:52 ` Tejun Heo
@ 2025-09-25 16:58 ` Chenglong Tang
2025-09-26 19:50 ` Chenglong Tang
1 sibling, 0 replies; 8+ messages in thread
From: Chenglong Tang @ 2025-09-25 16:58 UTC (permalink / raw)
To: Tejun Heo; +Cc: stable, regressions, roman.gushchin, linux-mm, lakitu-dev
Confirmed the patches worked for mainline (v6.17-rc7). But it's still
flaky (1/13 runs) if I simply apply the patches to v6.12.46.
I think there are some intermediate commits that I should apply as well.
Here is the diff between the two kernel versions after the patches are applied:
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index fad8ddfa622bb..62d85c5086ba1 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -65,7 +65,7 @@ struct wb_writeback_work {
* timestamps written to disk after 12 hours, but in the worst case a
* few inodes might not their timestamps updated for 24 hours.
*/
-static unsigned int dirtytime_expire_interval = 12 * 60 * 60;
+unsigned int dirtytime_expire_interval = 12 * 60 * 60;
static inline struct inode *wb_inode(struct list_head *head)
{
@@ -290,6 +290,7 @@ void __inode_attach_wb(struct inode *inode, struct folio *folio)
if (unlikely(cmpxchg(&inode->i_wb, NULL, wb)))
wb_put(wb);
}
+EXPORT_SYMBOL_GPL(__inode_attach_wb);
/**
* inode_cgwb_move_to_attached - put the inode onto wb->b_attached list
@@ -770,9 +771,8 @@ bool cleanup_offline_cgwb(struct bdi_writeback *wb)
* writeback completion, wbc_detach_inode() should be called. This is used
* to track the cgroup writeback context.
*/
-static void wbc_attach_and_unlock_inode(struct writeback_control *wbc,
- struct inode *inode)
- __releases(&inode->i_lock)
+void wbc_attach_and_unlock_inode(struct writeback_control *wbc,
+ struct inode *inode)
{
if (!inode_cgwb_enabled(inode)) {
spin_unlock(&inode->i_lock);
@@ -802,24 +802,7 @@ static void wbc_attach_and_unlock_inode(struct writeback_control *wbc,
if (unlikely(wb_dying(wbc->wb) && !css_is_dying(wbc->wb->memcg_css)))
inode_switch_wbs(inode, wbc->wb_id);
}
-
-/**
- * wbc_attach_fdatawrite_inode - associate wbc and inode for fdatawrite
- * @wbc: writeback_control of interest
- * @inode: target inode
- *
- * This function is to be used by __filemap_fdatawrite_range(), which is an
- * alternative entry point into writeback code, and first ensures @inode is
- * associated with a bdi_writeback and attaches it to @wbc.
- */
-void wbc_attach_fdatawrite_inode(struct writeback_control *wbc,
- struct inode *inode)
-{
- spin_lock(&inode->i_lock);
- inode_attach_wb(inode, NULL);
- wbc_attach_and_unlock_inode(wbc, inode);
-}
-EXPORT_SYMBOL_GPL(wbc_attach_fdatawrite_inode);
+EXPORT_SYMBOL_GPL(wbc_attach_and_unlock_inode);
/**
* wbc_detach_inode - disassociate wbc from inode and perform foreign detection
@@ -1282,13 +1265,6 @@ static void bdi_split_work_to_wbs(struct backing_dev_info *bdi,
}
}
-static inline void wbc_attach_and_unlock_inode(struct writeback_control *wbc,
- struct inode *inode)
- __releases(&inode->i_lock)
-{
- spin_unlock(&inode->i_lock);
-}
-
#endif /* CONFIG_CGROUP_WRITEBACK */
/*
@@ -2475,7 +2451,14 @@ static void wakeup_dirtytime_writeback(struct work_struct *w)
schedule_delayed_work(&dirtytime_work, dirtytime_expire_interval * HZ);
}
-static int dirtytime_interval_handler(const struct ctl_table *table, int write,
+static int __init start_dirtytime_writeback(void)
+{
+ schedule_delayed_work(&dirtytime_work, dirtytime_expire_interval * HZ);
+ return 0;
+}
+__initcall(start_dirtytime_writeback);
+
+int dirtytime_interval_handler(const struct ctl_table *table, int write,
void *buffer, size_t *lenp, loff_t *ppos)
{
int ret;
@@ -2486,25 +2469,6 @@ static int dirtytime_interval_handler(const struct ctl_table *table, int write,
return ret;
}
-static const struct ctl_table vm_fs_writeback_table[] = {
- {
- .procname = "dirtytime_expire_seconds",
- .data = &dirtytime_expire_interval,
- .maxlen = sizeof(dirtytime_expire_interval),
- .mode = 0644,
- .proc_handler = dirtytime_interval_handler,
- .extra1 = SYSCTL_ZERO,
- },
-};
-
-static int __init start_dirtytime_writeback(void)
-{
- schedule_delayed_work(&dirtytime_work, dirtytime_expire_interval * HZ);
- register_sysctl_init("vm", vm_fs_writeback_table);
- return 0;
-}
-__initcall(start_dirtytime_writeback);
-
/**
* __mark_inode_dirty - internal function to mark an inode dirty
*
On Wed, Sep 24, 2025 at 5:52 PM Tejun Heo <tj@kernel.org> wrote:
>
> On Wed, Sep 24, 2025 at 05:24:15PM -0700, Chenglong Tang wrote:
> > Kernel v6.1 is good. The hang is reliably triggered (over 80% chance) on
> > kernels v6.6 and v6.12, and intermittently on mainline (v6.17-rc7), with the
> > following steps:
> > -
> >
> > *Environment:* A machine with a fast SSD and a high core count (e.g.,
> > Google Cloud's N2-standard-128).
> > -
> >
> > *Workload:* Concurrently generate a large number of files (e.g., 2 million)
> > using multiple services managed by systemd-run. This creates significant
> > I/O and cgroup churn.
> > -
> >
> > *Trigger:* After the file generation completes, terminate the systemd-run
> > services.
> > -
> >
> > *Result:* Shortly after the services are killed, the system's CPU load
> > spikes, leading to a massive number of kworker/+inode_switch_wbs threads
> > and a system-wide hang/livelock where the machine becomes unresponsive (20s
> > - 300s).
>
> Sounds like:
>
> http://lkml.kernel.org/r/20250912103522.2935-1-jack@suse.cz
>
> Can you see whether those patches resolve the problem?
>
> Thanks.
>
> --
> tejun
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [REGRESSION] workqueue/writeback: Severe CPU hang due to kworker proliferation during I/O flush and cgroup cleanup
2025-09-25 0:52 ` Tejun Heo
2025-09-25 16:58 ` Chenglong Tang
@ 2025-09-26 19:50 ` Chenglong Tang
1 sibling, 0 replies; 8+ messages in thread
From: Chenglong Tang @ 2025-09-26 19:50 UTC (permalink / raw)
To: Tejun Heo; +Cc: stable, regressions, roman.gushchin, linux-mm, lakitu-dev
Just did more testing here. Confirmed that the system hang is still
there, but less frequent (6/40 runs), with the patches from
http://lkml.kernel.org/r/20250912103522.2935-1-jack@suse.cz applied to
v6.17-rc7. In the bad instances, the kworker count climbed to over 600
and the hang lasted more than 80 seconds.
So I think the patches didn't fully solve the issue.
On Wed, Sep 24, 2025 at 5:52 PM Tejun Heo <tj@kernel.org> wrote:
>
> On Wed, Sep 24, 2025 at 05:24:15PM -0700, Chenglong Tang wrote:
> > Kernel v6.1 is good. The hang is reliably triggered (over 80% chance) on
> > kernels v6.6 and v6.12, and intermittently on mainline (v6.17-rc7), with the
> > following steps:
> > -
> >
> > *Environment:* A machine with a fast SSD and a high core count (e.g.,
> > Google Cloud's N2-standard-128).
> > -
> >
> > *Workload:* Concurrently generate a large number of files (e.g., 2 million)
> > using multiple services managed by systemd-run. This creates significant
> > I/O and cgroup churn.
> > -
> >
> > *Trigger:* After the file generation completes, terminate the systemd-run
> > services.
> > -
> >
> > *Result:* Shortly after the services are killed, the system's CPU load
> > spikes, leading to a massive number of kworker/+inode_switch_wbs threads
> > and a system-wide hang/livelock where the machine becomes unresponsive (20s
> > - 300s).
>
> Sounds like:
>
> http://lkml.kernel.org/r/20250912103522.2935-1-jack@suse.cz
>
> Can you see whether those patches resolve the problem?
>
> Thanks.
>
> --
> tejun
^ permalink raw reply [flat|nested] 8+ messages in thread
end of thread, other threads:[~2025-09-26 20:08 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-09-25 0:29 [REGRESSION] workqueue/writeback: Severe CPU hang due to kworker proliferation during I/O flush and cgroup cleanup Chenglong Tang
2025-09-26 19:54 ` Chenglong Tang
2025-09-26 19:59 ` Tejun Heo
2025-09-26 20:07 ` Chenglong Tang
-- strict thread matches above, loose matches on Subject: below --
2025-09-25 0:24 Chenglong Tang
2025-09-25 0:52 ` Tejun Heo
2025-09-25 16:58 ` Chenglong Tang
2025-09-26 19:50 ` Chenglong Tang