linux-mm.kvack.org archive mirror
* [PATCH v2 0/3] Replace wq users and add WQ_PERCPU to alloc_workqueue() users
@ 2026-01-13 11:46 Marco Crivellari
From: Marco Crivellari @ 2026-01-13 11:46 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Tejun Heo, Lai Jiangshan, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
	Andrew Morton

Hi,

This series continues the effort to refactor the Workqueue API.
No behavior changes are introduced by this series.

=== Recent changes to the WQ API ===

The following commits address the recent changes in the Workqueue API:

- commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
- commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")

The old workqueues will be removed in a future release cycle and
unbound will become the implicit default.

=== Changes introduced by this series ===

1) [P 1-2] Replace use of system_wq and system_unbound_wq

    Workqueue users are converted to the better named new workqueues:

        system_wq -> system_percpu_wq
        system_unbound_wq -> system_dfl_wq

    This way the old obsolete workqueues (system_wq, system_unbound_wq) can be
    removed in the future.

2) [P 3] add WQ_PERCPU to remaining alloc_workqueue() users

    With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
    any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
    must now use WQ_PERCPU.

    WQ_UNBOUND will be removed in the future.


For more information:
    https://lore.kernel.org/all/20250221112003.1dSuoGyc@linutronix.de/


---
Changes in v2:
- commit logs upgraded with a better description
- rebased on 6.19-rc5

Marco Crivellari (3):
  mm: Replace use of system_unbound_wq with system_dfl_wq
  mm: Replace use of system_wq with system_percpu_wq
  mm: add WQ_PERCPU to alloc_workqueue users

 mm/backing-dev.c | 6 +++---
 mm/kfence/core.c | 6 +++---
 mm/memcontrol.c  | 4 ++--
 mm/slub.c        | 3 ++-
 mm/vmstat.c      | 3 ++-
 5 files changed, 12 insertions(+), 10 deletions(-)

-- 
2.52.0




* [PATCH v2 1/3] mm: Replace use of system_unbound_wq with system_dfl_wq
From: Marco Crivellari @ 2026-01-13 11:46 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Tejun Heo, Lai Jiangshan, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
	Andrew Morton

This patch continues the effort to refactor the workqueue API, which began
with the changes that introduced new workqueues and a new alloc_workqueue() flag:

   commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
   commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")

The point of the refactoring is to eventually make workqueues unbound by
default, so that their workload placement is optimized by the scheduler.

Before that can happen, workqueue users must be converted to the better named
new workqueues, with no intended behaviour changes:

   system_wq -> system_percpu_wq
   system_unbound_wq -> system_dfl_wq

This way the old obsolete workqueues (system_wq, system_unbound_wq) can be
removed in the future.

Link: https://lore.kernel.org/all/20250221112003.1dSuoGyc@linutronix.de/
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
 mm/backing-dev.c | 2 +-
 mm/kfence/core.c | 6 +++---
 mm/memcontrol.c  | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index c5740c6d37a2..2f65b5416228 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -939,7 +939,7 @@ void wb_memcg_offline(struct mem_cgroup *memcg)
 	memcg_cgwb_list->next = NULL;	/* prevent new wb's */
 	spin_unlock_irq(&cgwb_lock);
 
-	queue_work(system_unbound_wq, &cleanup_offline_cgwbs_work);
+	queue_work(system_dfl_wq, &cleanup_offline_cgwbs_work);
 }
 
 /**
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 577a1699c553..36cc78ede411 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -878,7 +878,7 @@ static void toggle_allocation_gate(struct work_struct *work)
 	/* Disable static key and reset timer. */
 	static_branch_disable(&kfence_allocation_key);
 #endif
-	queue_delayed_work(system_unbound_wq, &kfence_timer,
+	queue_delayed_work(system_dfl_wq, &kfence_timer,
 			   msecs_to_jiffies(kfence_sample_interval));
 }
 
@@ -928,7 +928,7 @@ static void kfence_init_enable(void)
 #endif
 
 	WRITE_ONCE(kfence_enabled, true);
-	queue_delayed_work(system_unbound_wq, &kfence_timer, 0);
+	queue_delayed_work(system_dfl_wq, &kfence_timer, 0);
 
 	pr_info("initialized - using %lu bytes for %d objects at 0x%p-0x%p\n", KFENCE_POOL_SIZE,
 		CONFIG_KFENCE_NUM_OBJECTS, (void *)__kfence_pool,
@@ -1024,7 +1024,7 @@ static int kfence_enable_late(void)
 		return kfence_init_late();
 
 	WRITE_ONCE(kfence_enabled, true);
-	queue_delayed_work(system_unbound_wq, &kfence_timer, 0);
+	queue_delayed_work(system_dfl_wq, &kfence_timer, 0);
 	pr_info("re-enabled\n");
 	return 0;
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 86f43b7e5f71..6b69b8ee023b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -644,7 +644,7 @@ static void flush_memcg_stats_dwork(struct work_struct *w)
 	 * in latency-sensitive paths is as cheap as possible.
 	 */
 	__mem_cgroup_flush_stats(root_mem_cgroup, true);
-	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
+	queue_delayed_work(system_dfl_wq, &stats_flush_dwork, FLUSH_TIME);
 }
 
 unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx)
@@ -3872,7 +3872,7 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 		goto offline_kmem;
 
 	if (unlikely(mem_cgroup_is_root(memcg)) && !mem_cgroup_disabled())
-		queue_delayed_work(system_unbound_wq, &stats_flush_dwork,
+		queue_delayed_work(system_dfl_wq, &stats_flush_dwork,
 				   FLUSH_TIME);
 	lru_gen_online_memcg(memcg);
 
-- 
2.52.0




* [PATCH v2 2/3] mm: Replace use of system_wq with system_percpu_wq
From: Marco Crivellari @ 2026-01-13 11:46 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Tejun Heo, Lai Jiangshan, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
	Andrew Morton

This patch continues the effort to refactor the workqueue API, which began
with the changes that introduced new workqueues and a new alloc_workqueue() flag:

   commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
   commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")

The point of the refactoring is to eventually make workqueues unbound by
default, so that their workload placement is optimized by the scheduler.

Before that can happen, workqueue users must be converted to the better named
new workqueues, with no intended behaviour changes:

   system_wq -> system_percpu_wq
   system_unbound_wq -> system_dfl_wq

This way the old obsolete workqueues (system_wq, system_unbound_wq) can be
removed in the future.

Link: https://lore.kernel.org/all/20250221112003.1dSuoGyc@linutronix.de/
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
 mm/backing-dev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 2f65b5416228..4c6f0b85a24e 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -971,7 +971,7 @@ static int __init cgwb_init(void)
 {
 	/*
 	 * There can be many concurrent release work items overwhelming
-	 * system_wq.  Put them in a separate wq and limit concurrency.
+	 * system_percpu_wq.  Put them in a separate wq and limit concurrency.
 	 * There's no point in executing many of these in parallel.
 	 */
 	cgwb_release_wq = alloc_workqueue("cgwb_release", 0, 1);
-- 
2.52.0




* [PATCH v2 3/3] mm: add WQ_PERCPU to alloc_workqueue users
From: Marco Crivellari @ 2026-01-13 11:46 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Tejun Heo, Lai Jiangshan, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
	Andrew Morton

This continues the effort to refactor workqueue APIs, which began with
the introduction of new workqueues and a new alloc_workqueue flag in:

   commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
   commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")

The refactoring is going to alter the default behavior of
alloc_workqueue() to be unbound by default.

With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU. For more details see the Link tag below.

In order to keep alloc_workqueue() behavior identical, explicitly request
WQ_PERCPU.

Link: https://lore.kernel.org/all/20250221112003.1dSuoGyc@linutronix.de/
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
 mm/backing-dev.c | 2 +-
 mm/slub.c        | 3 ++-
 mm/vmstat.c      | 3 ++-
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 4c6f0b85a24e..861fee5e48b7 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -974,7 +974,7 @@ static int __init cgwb_init(void)
 	 * system_percpu_wq.  Put them in a separate wq and limit concurrency.
 	 * There's no point in executing many of these in parallel.
 	 */
-	cgwb_release_wq = alloc_workqueue("cgwb_release", 0, 1);
+	cgwb_release_wq = alloc_workqueue("cgwb_release", WQ_PERCPU, 1);
 	if (!cgwb_release_wq)
 		return -ENOMEM;
 
diff --git a/mm/slub.c b/mm/slub.c
index 861592ac5425..bbaa247dce2a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -8542,8 +8542,9 @@ void __init kmem_cache_init(void)
 
 void __init kmem_cache_init_late(void)
 {
 #ifndef CONFIG_SLUB_TINY
-	flushwq = alloc_workqueue("slub_flushwq", WQ_MEM_RECLAIM, 0);
+	flushwq = alloc_workqueue("slub_flushwq", WQ_MEM_RECLAIM | WQ_PERCPU,
+				  0);
 	WARN_ON(!flushwq);
 #endif
 }
 
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 65de88cdf40e..580b5ad293d6 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -2274,7 +2274,8 @@ void __init init_mm_internals(void)
 {
 	int ret __maybe_unused;
 
-	mm_percpu_wq = alloc_workqueue("mm_percpu_wq", WQ_MEM_RECLAIM, 0);
+	mm_percpu_wq = alloc_workqueue("mm_percpu_wq",
+				       WQ_MEM_RECLAIM | WQ_PERCPU, 0);
 
 #ifdef CONFIG_SMP
 	ret = cpuhp_setup_state_nocalls(CPUHP_MM_VMSTAT_DEAD, "mm/vmstat:dead",
-- 
2.52.0



