* [PATCH v16 0/3] Improve proc RSS accuracy and OOM killer latency
@ 2026-01-14 14:59 Mathieu Desnoyers
2026-01-14 14:59 ` [PATCH v16 1/3] lib: Introduce hierarchical per-cpu counters Mathieu Desnoyers
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: Mathieu Desnoyers @ 2026-01-14 14:59 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, Mathieu Desnoyers, Paul E. McKenney,
Steven Rostedt, Masami Hiramatsu, Dennis Zhou, Tejun Heo,
Christoph Lameter, Martin Liu, David Rientjes, christian.koenig,
Shakeel Butt, SeongJae Park, Michal Hocko, Johannes Weiner,
Sweet Tea Dorminy, Lorenzo Stoakes, Liam R . Howlett,
Mike Rapoport, Suren Baghdasaryan, Vlastimil Babka,
Christian Brauner, Wei Yang, David Hildenbrand, Miaohe Lin,
Al Viro, linux-mm, linux-trace-kernel, Yu Zhao, Roman Gushchin,
Mateusz Guzik, Matthew Wilcox, Baolin Wang, Aboorva Devarajan
This series uses the hierarchical per-cpu counters (hpcc) and their sum
approximation to:
* Increase the accuracy of the approximated RSS counters exposed through
proc interfaces:
With a test program hopping across CPUs doing frequent mmap/munmap
operations, the upstream implementation approximation reaches a 1GB
delta from the precise value after a few minutes, compared to an 80MB
delta with the hierarchical counter. The hierarchical counter provides a
guaranteed maximum approximation inaccuracy of 192MB on that hardware
topology.
* Reduce the latency of OOM killer task selection by implementing it as
a 2-pass algorithm (a conceptual sketch of the idea follows the table
below):
Testing the execution time of select_bad_process() with a single
tail -f /dev/zero:
AMD EPYC 9654 96-Core (2 sockets)
Within a KVM, configured with 256 logical cpus.
                                  | precise sum | hpcc     |
----------------------------------|-------------|----------|
nr_processes=40                   |      0.5 ms |   0.3 ms |
nr_processes=10000                |     80.0 ms |   7.9 ms |
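As a rough illustration of the 2-pass idea (a conceptual sketch only, not
the code in patch 3/3, which operates on the OOM badness heuristic; the
get_task_rss_counter() helper is hypothetical and locking is omitted):

    #include <linux/sched/signal.h>
    #include <linux/percpu_counter_tree.h>
    #include <linux/limits.h>

    /* get_task_rss_counter() is a hypothetical accessor for a task's RSS counter tree. */
    static struct task_struct *pick_max_rss(void)
    {
            struct task_struct *p, *chosen = NULL;
            long best_min = LONG_MIN, best_precise = LONG_MIN;

            /* Pass 1: O(1) approximate sums, keep the best guaranteed minimum. */
            for_each_process(p) {
                    long approx, min, max;

                    percpu_counter_tree_approximate_min_max(get_task_rss_counter(p),
                                                            &approx, &min, &max);
                    if (min > best_min)
                            best_min = min;
            }

            /* Pass 2: precise sums only for tasks which may still reach best_min. */
            for_each_process(p) {
                    long approx, min, max, precise;

                    percpu_counter_tree_approximate_min_max(get_task_rss_counter(p),
                                                            &approx, &min, &max);
                    if (max < best_min)
                            continue;
                    precise = percpu_counter_tree_precise_sum(get_task_rss_counter(p));
                    if (precise > best_precise) {
                            best_precise = precise;
                            chosen = p;
                    }
            }
            return chosen;
    }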
I'm sending this series to gather feedback. I plan to re-submit it for
inclusion into mm-new _after_ the next merge window closes, so the bug
fix "mm: Fix OOM killer inaccuracy on large many-core systems" can be
tested in the current release cycle.
Andrew, if you have a prior version of this specific series in mm-new,
please drop it for now.
This series is based on v6.19-rc4, on top of the following three
preparation series:
https://lore.kernel.org/linux-mm/20251224173358.647691-1-mathieu.desnoyers@efficios.com/T/#t
https://lore.kernel.org/linux-mm/20251224173810.648699-1-mathieu.desnoyers@efficios.com/T/#t
https://lore.kernel.org/linux-mm/20260114143642.47333-1-mathieu.desnoyers@efficios.com/
Thanks,
Mathieu
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Martin Liu <liumartin@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: christian.koenig@amd.com
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: SeongJae Park <sj@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Sweet Tea Dorminy <sweettea-kernel@dorminy.me>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: "Liam R . Howlett" <liam.howlett@oracle.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-mm@kvack.org
Cc: linux-trace-kernel@vger.kernel.org
Cc: Yu Zhao <yuzhao@google.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Aboorva Devarajan <aboorvad@linux.ibm.com>
Mathieu Desnoyers (3):
lib: Introduce hierarchical per-cpu counters
mm: Improve RSS counter approximation accuracy for proc interfaces
mm: Reduce latency of OOM killer task selection with 2-pass algorithm
.../core-api/percpu-counter-tree.rst | 75 ++
fs/proc/base.c | 2 +-
include/linux/mm.h | 49 +-
include/linux/mm_types.h | 54 +-
include/linux/oom.h | 11 +-
include/linux/percpu_counter_tree.h | 367 ++++++++++
include/trace/events/kmem.h | 2 +-
init/main.c | 2 +
kernel/fork.c | 22 +-
lib/Makefile | 1 +
lib/percpu_counter_tree.c | 679 ++++++++++++++++++
mm/oom_kill.c | 84 ++-
12 files changed, 1295 insertions(+), 53 deletions(-)
create mode 100644 Documentation/core-api/percpu-counter-tree.rst
create mode 100644 include/linux/percpu_counter_tree.h
create mode 100644 lib/percpu_counter_tree.c
--
2.39.5
* [PATCH v16 1/3] lib: Introduce hierarchical per-cpu counters
2026-01-14 14:59 [PATCH v16 0/3] Improve proc RSS accuracy and OOM killer latency Mathieu Desnoyers
@ 2026-01-14 14:59 ` Mathieu Desnoyers
2026-01-14 16:41 ` Michal Hocko
2026-01-14 14:59 ` [PATCH v16 2/3] mm: Improve RSS counter approximation accuracy for proc interfaces Mathieu Desnoyers
2026-01-14 14:59 ` [PATCH v16 3/3] mm: Reduce latency of OOM killer task selection with 2-pass algorithm Mathieu Desnoyers
2 siblings, 1 reply; 10+ messages in thread
From: Mathieu Desnoyers @ 2026-01-14 14:59 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, Mathieu Desnoyers, Paul E. McKenney,
Steven Rostedt, Masami Hiramatsu, Dennis Zhou, Tejun Heo,
Christoph Lameter, Martin Liu, David Rientjes, christian.koenig,
Shakeel Butt, SeongJae Park, Michal Hocko, Johannes Weiner,
Sweet Tea Dorminy, Lorenzo Stoakes, Liam R . Howlett,
Mike Rapoport, Suren Baghdasaryan, Vlastimil Babka,
Christian Brauner, Wei Yang, David Hildenbrand, Miaohe Lin,
Al Viro, linux-mm, linux-trace-kernel, Yu Zhao, Roman Gushchin,
Mateusz Guzik, Matthew Wilcox, Baolin Wang, Aboorva Devarajan
* Motivation
The purpose of this hierarchical split-counter scheme is to:
- Minimize contention when incrementing and decrementing counters,
- Provide fast access to a sum approximation,
- Provide a sum approximation with an acceptable accuracy level when
scaling to many-core systems.
- Provide approximate and precise comparison of two counters, and
between a counter and a value.
- Provide possible precise sum ranges for a given sum approximation.
Its goals are twofold:
- Improve the accuracy of the approximated RSS counter values returned
by proc interfaces [1],
- Reduce the latency of the OOM killer on large many-core systems.
* Design
The hierarchical per-CPU counters propagate a sum approximation through
an N-way tree. When a counter reaches its batch size, the carry is
propagated down the tree toward the root; the tree consists of
logN(nr_cpu_ids) levels. The batch size for each level is N times the
batch size of the prior level (twice in the binary example below).
Example propagation diagram with 8 cpus through a binary tree:
Level 0: 0 1 2 3 4 5 6 7
| / | / | / | /
| / | / | / | /
| / | / | / | /
Level 1: 0 1 2 3
| / | /
| / | /
| / | /
Level 2: 0 1
| /
| /
| /
Level 3: 0
For a binary tree, the maximum inaccuracy is bounded by:
batch_size * log2(nr_cpus) * nr_cpus
which evolves with O(n*log(n)) as the number of CPUs increases.
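For instance, for the 8-CPU binary tree above with a level-0 batch size
of 32 (the example values used in the lib/percpu_counter_tree.c header
comment), the bound works out to:

    32 * log2(8) * 8 = 32 * 3 * 8 = 768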
For an N-way tree, the maximum inaccuracy can be pre-calculated
based on the N-arity of each level and the batch size.
Link: https://lore.kernel.org/lkml/20250331223516.7810-2-sweettea-kernel@dorminy.me/ # [1]
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Martin Liu <liumartin@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: christian.koenig@amd.com
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: SeongJae Park <sj@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Sweet Tea Dorminy <sweettea-kernel@dorminy.me>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: "Liam R . Howlett" <liam.howlett@oracle.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-mm@kvack.org
Cc: linux-trace-kernel@vger.kernel.org
Cc: Yu Zhao <yuzhao@google.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Aboorva Devarajan <aboorvad@linux.ibm.com>
---
Changes since v15:
- Rebase on v2 of "mm: Fix OOM killer inaccuracy on large many-core systems".
- Update the commit message to explain the two goals of hpcc:
provide a more accurate approximation, and reduce OOM killer latency.
- Change percpu_counter_tree_approximate_accuracy_range to increment the
under/over parameters rather than set them. This requires callers to
initialize them to 0, but facilitates scenarios where accuracy needs to
be summed over many counters.
- Clarify comments above percpu_counter_tree_approximate_accuracy_range.
Changes since v14:
- Add Documentation/core-api/percpu-counter-tree.rst.
- Make percpu_counter_tree_approximate_accuracy_range static inline.
Changes since v12:
- Use atomic_long_set for percpu_counter_tree_set in UP build.
- percpu_counter_tree_precise_sum returns long type.
Changes since v11:
- Reduce level0 per-cpu memory allocation to the size required by those
percpu counters.
- Use unsigned long type rather than unsigned int.
- Introduce functions to return the min/max range of possible
precise sums.
Changes since v10:
- Fold "mm: Take into account hierarchical percpu tree items for static
mm_struct definitions".
Changes since v9:
- Introduce kerneldoc documentation.
- Document structure fields.
- Remove inline from percpu_counter_tree_add().
- Reject batch_size==1 which makes no sense.
- Fix copy-paste bug in percpu_counter_tree_precise_compare()
(wrong inequality comparison).
- Rename "inaccuracy" to "accuracy", which makes it easier to
document.
- Track accuracy limits more precisely. In two's complement
signed integers, the range before a n-bit underflow is one
unit larger than the range before a n-bit overflow. This sums
for all the counters within the tree. Therefore the "under"
vs "over" accuracy is not symmetrical.
Changes since v8:
- Remove migrate guard from the fast path. It does not
matter through which path the carry is propagated up
the tree.
- Rebase on top of v6.18-rc6.
- Introduce percpu_counter_tree_init_many and
percpu_counter_tree_destroy_many APIs.
- Move tree items allocation to the caller.
- Introduce percpu_counter_tree_items_size().
- Move percpu_counter_tree_subsystem_init() call before mm_core_init()
so percpu_counter_tree_items_size() is initialized before it is used.
Changes since v7:
- Explicitly initialize the subsystem from start_kernel() right
after mm_core_init() so it is up and running before the creation of
the first mm at boot.
- Remove the module.h include which is not needed with the explicit
initialization.
- Only consider levels>0 items for order={0,1} nr_items. No
functional change except to allocate only the amount of memory
which is strictly needed.
- Introduce positive precise sum API to handle a scenario where an
unlucky precise sum iteration would hit negative counter values
concurrently with counter updates.
Changes since v5:
- Introduce percpu_counter_tree_approximate_sum_positive.
- Introduce !CONFIG_SMP static inlines for UP build.
- Remove percpu_counter_tree_set_bias from the public API and make it
static.
Changes since v3:
- Add gfp flags to init function.
Changes since v2:
- Introduce N-way tree to reduce tree depth on larger systems.
Changes since v1:
- Remove percpu_counter_tree_precise_sum_unbiased from public header,
make this function static,
- Introduce precise and approximate comparisons between two counters,
- Reorder the struct percpu_counter_tree fields,
- Introduce approx_sum field, which points to the approximate sum
for the percpu_counter_tree_approximate_sum() fast path.
---
.../core-api/percpu-counter-tree.rst | 75 ++
include/linux/mm_types.h | 6 +-
include/linux/percpu_counter_tree.h | 367 ++++++++++
init/main.c | 2 +
lib/Makefile | 1 +
lib/percpu_counter_tree.c | 679 ++++++++++++++++++
6 files changed, 1127 insertions(+), 3 deletions(-)
create mode 100644 Documentation/core-api/percpu-counter-tree.rst
create mode 100644 include/linux/percpu_counter_tree.h
create mode 100644 lib/percpu_counter_tree.c
diff --git a/Documentation/core-api/percpu-counter-tree.rst b/Documentation/core-api/percpu-counter-tree.rst
new file mode 100644
index 000000000000..f47ce44a539d
--- /dev/null
+++ b/Documentation/core-api/percpu-counter-tree.rst
@@ -0,0 +1,75 @@
+========================================
+The Hierarchical Per-CPU Counters (HPCC)
+========================================
+
+:Author: Mathieu Desnoyers
+
+Introduction
+============
+
+Counters come in many varieties, each with its own trade-offs:
+
+ * A global atomic counter provides fast read access to the current
+ sum, at the expense of cache-line bouncing on updates. This leads to
+ poor performance for frequent updates from many cores on large SMP
+ systems.
+
+ * A per-cpu split counter provides fast updates to per-cpu counters,
+ at the expense of a slower aggregation (sum). The sum operation needs
+ to iterate over all per-cpu counters to calculate the current total.
+
+The hierarchical per-cpu counters attempt to provide the best of both
+worlds (fast updates and fast sums) by relaxing the requirements on sum
+accuracy. They allow quickly querying an approximated sum value, along
+with the possible min/max range of the associated precise sum. The
+exact precise sum can still be calculated with an iteration over all
+per-cpu counters, but the availability of an approximated sum value with
+a possible precise sum min/max range allows eliminating candidates which
+are certainly outside of a known target range without the overhead of
+precise sums.
+
+Overview
+========
+
+The hierarchical per-cpu counters are organized as a tree with the tree
+root at the bottom (last level) and the first level of the tree
+consisting of per-cpu counters.
+
+The intermediate tree levels contain carry propagation counters. When
+a counter reaches its threshold (the batch size), the carry is
+propagated down the tree.
+
+This allows reading an approximated value at the root, which has a
+bounded accuracy (minimum/maximum possible precise sum range) determined
+by the tree topology.
+
+Use Cases
+=========
+
+The use cases HPCC is meant to handle involve tracking resources used
+across many CPUs, where a quick sum serves as feedback for decisions
+such as throttling, enforcing quota limits, sorting tasks, or making
+memory or task migration choices. When an approximated sum is within
+the accuracy range of the decision threshold, the user can either:
+
+ * Be conservative and fast: Consider that the sum has reached the
+ limit as soon as the given limit is within the approximation range.
+
+ * Be aggressive and fast: Consider that the sum is over the
+ limit only when the approximation range is over the given limit.
+
+ * Be precise and slow: Do a precise comparison with the limit, which
+ requires a precise sum when the limit is within the approximated
+ range.
+
+One primary example use-case for this is per-task RSS tracking for OOM
+killer task selection. The task selection can use the approximated value
+in a first pass over all processes, and then a second pass only needs
+the RSS precise sum for tasks which are within the possible precise sum
+range of the task chosen by the first pass.
+
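+For example, a counter tree can be created and queried as follows
+(minimal usage sketch; error handling is abbreviated and the batch size
+of 32 is arbitrary)::
+
+   struct percpu_counter_tree counter;
+   struct percpu_counter_tree_level_item *items;
+   long approx, precise_min, precise_max;
+
+   items = kzalloc(percpu_counter_tree_items_size(), GFP_KERNEL);
+   if (!items)
+           return -ENOMEM;
+   if (percpu_counter_tree_init(&counter, items, 32, GFP_KERNEL)) {
+           kfree(items);
+           return -ENOMEM;
+   }
+
+   /* Fast update. */
+   percpu_counter_tree_add(&counter, 1);
+
+   /* Fast query with bounded accuracy. */
+   percpu_counter_tree_approximate_min_max(&counter, &approx,
+                                           &precise_min, &precise_max);
+
+   /* Slower, precise query. */
+   pr_info("precise sum: %ld\n", percpu_counter_tree_precise_sum(&counter));
+
+   percpu_counter_tree_destroy(&counter);
+   kfree(items);
+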
+Functions and structures
+========================
+
+.. kernel-doc:: include/linux/percpu_counter_tree.h
+.. kernel-doc:: lib/percpu_counter_tree.c
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index aa4639888f89..9f861ceabe61 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1366,9 +1366,9 @@ static inline void __mm_flags_set_mask_bits_word(struct mm_struct *mm,
MT_FLAGS_USE_RCU)
extern struct mm_struct init_mm;
-#define MM_STRUCT_FLEXIBLE_ARRAY_INIT \
-{ \
- [0 ... sizeof(cpumask_t) + MM_CID_STATIC_SIZE - 1] = 0 \
+#define MM_STRUCT_FLEXIBLE_ARRAY_INIT \
+{ \
+ [0 ... sizeof(cpumask_t) + MM_CID_STATIC_SIZE + PERCPU_COUNTER_TREE_ITEMS_STATIC_SIZE - 1] = 0 \
}
/* Pointer magic because the dynamic array size confuses some compilers. */
diff --git a/include/linux/percpu_counter_tree.h b/include/linux/percpu_counter_tree.h
new file mode 100644
index 000000000000..828c763edd4a
--- /dev/null
+++ b/include/linux/percpu_counter_tree.h
@@ -0,0 +1,367 @@
+/* SPDX-License-Identifier: GPL-2.0+ OR MIT */
+/* SPDX-FileCopyrightText: 2025 Mathieu Desnoyers <mathieu.desnoyers@efficios.com> */
+
+#ifndef _PERCPU_COUNTER_TREE_H
+#define _PERCPU_COUNTER_TREE_H
+
+#include <linux/preempt.h>
+#include <linux/atomic.h>
+#include <linux/percpu.h>
+
+#ifdef CONFIG_SMP
+
+#if NR_CPUS == (1U << 0)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 0
+#elif NR_CPUS <= (1U << 1)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 1
+#elif NR_CPUS <= (1U << 2)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 3
+#elif NR_CPUS <= (1U << 3)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 7
+#elif NR_CPUS <= (1U << 4)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 7
+#elif NR_CPUS <= (1U << 5)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 11
+#elif NR_CPUS <= (1U << 6)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 21
+#elif NR_CPUS <= (1U << 7)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 21
+#elif NR_CPUS <= (1U << 8)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 37
+#elif NR_CPUS <= (1U << 9)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 73
+#elif NR_CPUS <= (1U << 10)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 149
+#elif NR_CPUS <= (1U << 11)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 293
+#elif NR_CPUS <= (1U << 12)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 585
+#elif NR_CPUS <= (1U << 13)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 1173
+#elif NR_CPUS <= (1U << 14)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 2341
+#elif NR_CPUS <= (1U << 15)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 4681
+#elif NR_CPUS <= (1U << 16)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 4681
+#elif NR_CPUS <= (1U << 17)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 8777
+#elif NR_CPUS <= (1U << 18)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 17481
+#elif NR_CPUS <= (1U << 19)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 34953
+#elif NR_CPUS <= (1U << 20)
+# define PERCPU_COUNTER_TREE_STATIC_NR_ITEMS 69905
+#else
+# error "Unsupported number of CPUs."
+#endif
+
+struct percpu_counter_tree_level_item {
+ atomic_long_t count; /*
+ * Count the number of carries for this tree item.
+ * The carry counter is kept at the order of the
+ * carry accounted for at this tree level.
+ */
+} ____cacheline_aligned_in_smp;
+
+#define PERCPU_COUNTER_TREE_ITEMS_STATIC_SIZE \
+ (PERCPU_COUNTER_TREE_STATIC_NR_ITEMS * sizeof(struct percpu_counter_tree_level_item))
+
+struct percpu_counter_tree {
+ /* Fast-path fields. */
+ unsigned long __percpu *level0; /* Pointer to per-CPU split counters (tree level 0). */
+ unsigned long level0_bit_mask; /* Bit mask to apply to detect carry propagation from tree level 0. */
+ union {
+ unsigned long *i; /* Approximate sum for single-CPU topology. */
+ atomic_long_t *a; /* Approximate sum for SMP topology. */
+ } approx_sum;
+ long bias; /* Bias to apply to counter precise and approximate values. */
+
+ /* Slow-path fields. */
+ struct percpu_counter_tree_level_item *items; /* Array of tree items for levels 1 to N. */
+ unsigned long batch_size; /*
+ * The batch size is the increment step at level 0 which
+ * triggers a carry propagation. The batch size is required
+ * to be greater than 1, and a power of 2.
+ */
+ /*
+ * The tree approximate sum is guaranteed to be within this accuracy range:
+ * (precise_sum - approx_accuracy_range.under) <= approx_sum <= (precise_sum + approx_accuracy_range.over).
+ * This accuracy is derived from the hardware topology and the tree batch_size.
+ * The "under" accuracy is larger than the "over" accuracy because the negative range of a
+ * two's complement signed integer is one unit larger than the positive range. This delta
+ * is summed for each tree item, which leads to a significantly larger "under" accuracy range
+ * compared to the "over" accuracy range.
+ */
+ struct {
+ unsigned long under;
+ unsigned long over;
+ } approx_accuracy_range;
+};
+
+size_t percpu_counter_tree_items_size(void);
+int percpu_counter_tree_init_many(struct percpu_counter_tree *counters, struct percpu_counter_tree_level_item *items,
+ unsigned int nr_counters, unsigned long batch_size, gfp_t gfp_flags);
+int percpu_counter_tree_init(struct percpu_counter_tree *counter, struct percpu_counter_tree_level_item *items,
+ unsigned long batch_size, gfp_t gfp_flags);
+void percpu_counter_tree_destroy_many(struct percpu_counter_tree *counter, unsigned int nr_counters);
+void percpu_counter_tree_destroy(struct percpu_counter_tree *counter);
+void percpu_counter_tree_add(struct percpu_counter_tree *counter, long inc);
+long percpu_counter_tree_precise_sum(struct percpu_counter_tree *counter);
+int percpu_counter_tree_approximate_compare(struct percpu_counter_tree *a, struct percpu_counter_tree *b);
+int percpu_counter_tree_approximate_compare_value(struct percpu_counter_tree *counter, long v);
+int percpu_counter_tree_precise_compare(struct percpu_counter_tree *a, struct percpu_counter_tree *b);
+int percpu_counter_tree_precise_compare_value(struct percpu_counter_tree *counter, long v);
+void percpu_counter_tree_set(struct percpu_counter_tree *counter, long v);
+int percpu_counter_tree_subsystem_init(void);
+
+/**
+ * percpu_counter_tree_approximate_sum() - Return approximate counter sum.
+ * @counter: The counter to sum.
+ *
+ * Querying the approximate sum is fast, but it is only accurate within
+ * the bounds delimited by percpu_counter_tree_approximate_accuracy_range().
+ * This is meant to be used when speed is preferred over accuracy.
+ *
+ * Return: The current approximate counter sum.
+ */
+static inline
+long percpu_counter_tree_approximate_sum(struct percpu_counter_tree *counter)
+{
+ unsigned long v;
+
+ if (!counter->level0_bit_mask)
+ v = READ_ONCE(*counter->approx_sum.i);
+ else
+ v = atomic_long_read(counter->approx_sum.a);
+ return (long) (v + (unsigned long)READ_ONCE(counter->bias));
+}
+
+/**
+ * percpu_counter_tree_approximate_accuracy_range() - Query the accuracy range for a counter tree.
+ * @counter: Counter to query.
+ * @under: Pointer to a variable incremented by the approximation
+ * accuracy range below the precise sum.
+ * @over: Pointer to a variable incremented by the approximation
+ * accuracy range above the precise sum.
+ *
+ * Query the accuracy range limits for the counter. The ranges are added
+ * to @under and @over, so callers should initialize them (typically to 0);
+ * this allows summing the accuracy over many counters.
+ * Because of two's complement binary representation, the "under" range is typically
+ * slightly larger than the "over" range.
+ * Those values are derived from the hardware topology and the counter tree batch size.
+ * They are invariant for a given counter tree.
+ * Using this function should typically not be required; see the following functions instead:
+ * * percpu_counter_tree_approximate_compare(),
+ * * percpu_counter_tree_approximate_compare_value(),
+ * * percpu_counter_tree_precise_compare(),
+ * * percpu_counter_tree_precise_compare_value().
+ */
+static inline
+void percpu_counter_tree_approximate_accuracy_range(struct percpu_counter_tree *counter,
+ unsigned long *under, unsigned long *over)
+{
+ *under += counter->approx_accuracy_range.under;
+ *over += counter->approx_accuracy_range.over;
+}
+
+#else /* !CONFIG_SMP */
+
+#define PERCPU_COUNTER_TREE_ITEMS_STATIC_SIZE 0
+
+struct percpu_counter_tree_level_item;
+
+struct percpu_counter_tree {
+ atomic_long_t count;
+};
+
+static inline
+size_t percpu_counter_tree_items_size(void)
+{
+ return 0;
+}
+
+static inline
+int percpu_counter_tree_init_many(struct percpu_counter_tree *counters, struct percpu_counter_tree_level_item *items,
+ unsigned int nr_counters, unsigned long batch_size, gfp_t gfp_flags)
+{
+ for (unsigned int i = 0; i < nr_counters; i++)
+ atomic_long_set(&counters[i].count, 0);
+ return 0;
+}
+
+static inline
+int percpu_counter_tree_init(struct percpu_counter_tree *counter, struct percpu_counter_tree_level_item *items,
+ unsigned long batch_size, gfp_t gfp_flags)
+{
+ return percpu_counter_tree_init_many(counter, items, 1, batch_size, gfp_flags);
+}
+
+static inline
+void percpu_counter_tree_destroy_many(struct percpu_counter_tree *counter, unsigned int nr_counters)
+{
+}
+
+static inline
+void percpu_counter_tree_destroy(struct percpu_counter_tree *counter)
+{
+}
+
+static inline
+long percpu_counter_tree_precise_sum(struct percpu_counter_tree *counter)
+{
+ return atomic_long_read(&counter->count);
+}
+
+static inline
+int percpu_counter_tree_precise_compare(struct percpu_counter_tree *a, struct percpu_counter_tree *b)
+{
+ long count_a = percpu_counter_tree_precise_sum(a),
+ count_b = percpu_counter_tree_precise_sum(b);
+
+ if (count_a == count_b)
+ return 0;
+ if (count_a < count_b)
+ return -1;
+ return 1;
+}
+
+static inline
+int percpu_counter_tree_precise_compare_value(struct percpu_counter_tree *counter, long v)
+{
+ long count = percpu_counter_tree_precise_sum(counter);
+
+ if (count == v)
+ return 0;
+ if (count < v)
+ return -1;
+ return 1;
+}
+
+static inline
+int percpu_counter_tree_approximate_compare(struct percpu_counter_tree *a, struct percpu_counter_tree *b)
+{
+ return percpu_counter_tree_precise_compare(a, b);
+}
+
+static inline
+int percpu_counter_tree_approximate_compare_value(struct percpu_counter_tree *counter, long v)
+{
+ return percpu_counter_tree_precise_compare_value(counter, v);
+}
+
+static inline
+void percpu_counter_tree_set(struct percpu_counter_tree *counter, long v)
+{
+ atomic_long_set(&counter->count, v);
+}
+
+static inline
+void percpu_counter_tree_approximate_accuracy_range(struct percpu_counter_tree *counter,
+ unsigned long *under, unsigned long *over)
+{
+}
+
+static inline
+void percpu_counter_tree_add(struct percpu_counter_tree *counter, long inc)
+{
+ atomic_long_add(inc, &counter->count);
+}
+
+static inline
+long percpu_counter_tree_approximate_sum(struct percpu_counter_tree *counter)
+{
+ return percpu_counter_tree_precise_sum(counter);
+}
+
+static inline
+int percpu_counter_tree_subsystem_init(void)
+{
+ return 0;
+}
+
+#endif /* CONFIG_SMP */
+
+/**
+ * percpu_counter_tree_approximate_sum_positive() - Return a positive approximate counter sum.
+ * @counter: The counter to sum.
+ *
+ * Return an approximate counter sum which is guaranteed to be greater
+ * or equal to 0.
+ *
+ * Return: The current positive approximate counter sum.
+ */
+static inline
+long percpu_counter_tree_approximate_sum_positive(struct percpu_counter_tree *counter)
+{
+ long v = percpu_counter_tree_approximate_sum(counter);
+ return v > 0 ? v : 0;
+}
+
+/**
+ * percpu_counter_tree_precise_sum_positive() - Return a positive precise counter sum.
+ * @counter: The counter to sum.
+ *
+ * Return a precise counter sum which is guaranteed to be greater
+ * or equal to 0.
+ *
+ * Return: The current positive precise counter sum.
+ */
+static inline
+long percpu_counter_tree_precise_sum_positive(struct percpu_counter_tree *counter)
+{
+ long v = percpu_counter_tree_precise_sum(counter);
+ return v > 0 ? v : 0;
+}
+
+/**
+ * percpu_counter_tree_approximate_min_max_range() - Return the approximation min and max precise values.
+ * @approx_sum: Approximated sum.
+ * @under: Tree accuracy range (under).
+ * @over: Tree accuracy range (over).
+ * @precise_min: Minimum possible value for precise sum (output).
+ * @precise_max: Maximum possible value for precise sum (output).
+ *
+ * Calculate the minimum and maximum precise values for a given
+ * approximation and (under, over) accuracy range.
+ *
+ * The range of the approximation as a function of the precise sum is expressed as:
+ *
+ * approx_sum >= precise_sum - approx_accuracy_range.under
+ * approx_sum <= precise_sum + approx_accuracy_range.over
+ *
+ * Therefore, the range of the precise sum as a function of the approximation is expressed as:
+ *
+ * precise_sum <= approx_sum + approx_accuracy_range.under
+ * precise_sum >= approx_sum - approx_accuracy_range.over
+ */
+static inline
+void percpu_counter_tree_approximate_min_max_range(long approx_sum, unsigned long under, unsigned long over,
+ long *precise_min, long *precise_max)
+{
+ *precise_min = approx_sum - over;
+ *precise_max = approx_sum + under;
+}
+
+/**
+ * percpu_counter_tree_approximate_min_max() - Return the tree approximation, min and max possible precise values.
+ * @counter: The counter to sum.
+ * @approx_sum: Approximate sum (output).
+ * @precise_min: Minimum possible value for precise sum (output).
+ * @precise_max: Maximum possible value for precise sum (output).
+ *
+ * Return the approximate sum, minimum and maximum precise values for
+ * a counter.
+ */
+static inline
+void percpu_counter_tree_approximate_min_max(struct percpu_counter_tree *counter,
+ long *approx_sum, long *precise_min, long *precise_max)
+{
+ unsigned long under = 0, over = 0;
+ long v = percpu_counter_tree_approximate_sum(counter);
+
+ percpu_counter_tree_approximate_accuracy_range(counter, &under, &over);
+ percpu_counter_tree_approximate_min_max_range(v, under, over, precise_min, precise_max);
+ *approx_sum = v;
+}
+
+#endif /* _PERCPU_COUNTER_TREE_H */
diff --git a/init/main.c b/init/main.c
index b84818ad9685..85bcc7c1ccd0 100644
--- a/init/main.c
+++ b/init/main.c
@@ -104,6 +104,7 @@
#include <linux/pidfs.h>
#include <linux/ptdump.h>
#include <linux/time_namespace.h>
+#include <linux/percpu_counter_tree.h>
#include <net/net_namespace.h>
#include <asm/io.h>
@@ -1063,6 +1064,7 @@ void start_kernel(void)
vfs_caches_init_early();
sort_main_extable();
trap_init();
+ percpu_counter_tree_subsystem_init();
mm_core_init();
maple_tree_init();
poking_init();
diff --git a/lib/Makefile b/lib/Makefile
index aaf677cf4527..bff117556424 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -181,6 +181,7 @@ obj-$(CONFIG_TEXTSEARCH_KMP) += ts_kmp.o
obj-$(CONFIG_TEXTSEARCH_BM) += ts_bm.o
obj-$(CONFIG_TEXTSEARCH_FSM) += ts_fsm.o
obj-$(CONFIG_SMP) += percpu_counter.o
+obj-$(CONFIG_SMP) += percpu_counter_tree.o
obj-$(CONFIG_AUDIT_GENERIC) += audit.o
obj-$(CONFIG_AUDIT_COMPAT_GENERIC) += compat_audit.o
diff --git a/lib/percpu_counter_tree.c b/lib/percpu_counter_tree.c
new file mode 100644
index 000000000000..0c956aa8b8de
--- /dev/null
+++ b/lib/percpu_counter_tree.c
@@ -0,0 +1,679 @@
+// SPDX-License-Identifier: GPL-2.0+ OR MIT
+// SPDX-FileCopyrightText: 2025 Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+
+/*
+ * Split Counters With Tree Approximation Propagation
+ *
+ * * Propagation diagram when reaching batch size thresholds (± batch size):
+ *
+ * Example diagram for 8 CPUs:
+ *
+ * log2(8) = 3 levels
+ *
+ * At each level, each pair propagates its values to the next level when
+ * reaching the batch size thresholds.
+ *
+ * Counters at levels 0, 1, 2 can be kept on a single byte ([-128 .. +127] range),
+ * although it may be relevant to keep them on "long" counters for
+ * simplicity. (complexity vs memory footprint tradeoff)
+ *
+ * Counter at level 3 can be kept on a "long" counter.
+ *
+ * Level 0: 0 1 2 3 4 5 6 7
+ * | / | / | / | /
+ * | / | / | / | /
+ * | / | / | / | /
+ * Level 1: 0 1 2 3
+ * | / | /
+ * | / | /
+ * | / | /
+ * Level 2: 0 1
+ * | /
+ * | /
+ * | /
+ * Level 3: 0
+ *
+ * * Approximation accuracy:
+ *
+ * BATCH(level N): Level N batch size.
+ *
+ * Example for BATCH(level 0) = 32.
+ *
+ * BATCH(level 0) = 32
+ * BATCH(level 1) = 64
+ * BATCH(level 2) = 128
+ * BATCH(level N) = BATCH(level 0) * 2^N
+ *
+ * per-counter global
+ * accuracy accuracy
+ * Level 0: [ -32 .. +31] ±256 (8 * 32)
+ * Level 1: [ -64 .. +63] ±256 (4 * 64)
+ * Level 2: [-128 .. +127] ±256 (2 * 128)
+ * Total: ------ ±768 (log2(nr_cpu_ids) * BATCH(level 0) * nr_cpu_ids)
+ *
+ * Note that the global accuracy can be calculated more precisely
+ * by taking into account that the positive accuracy range is
+ * 31 rather than 32.
+ *
+ * -----
+ *
+ * Approximate Sum Carry Propagation
+ *
+ * Let's define a number of counter bits for each level, e.g.:
+ *
+ * log2(BATCH(level 0)) = log2(32) = 5
+ * Let's assume, for this example, a 32-bit architecture (sizeof(long) == 4).
+ *
+ * nr_bit value_mask range
+ * Level 0: 5 bits v 0 .. +31
+ * Level 1: 1 bit (v & ~((1UL << 5) - 1)) 0 .. +63
+ * Level 2: 1 bit (v & ~((1UL << 6) - 1)) 0 .. +127
+ * Level 3: 25 bits (v & ~((1UL << 7) - 1)) 0 .. 2^32-1
+ *
+ * Note: Use a "long" per-cpu counter at level 0 to allow precise sum.
+ *
+ * Note: Use cacheline aligned counters at levels above 0 to prevent false sharing.
+ * If memory footprint is an issue, a specialized allocator could be used
+ * to eliminate padding.
+ *
+ * Example with expanded values:
+ *
+ * counter_add(counter, inc):
+ *
+ * if (!inc)
+ * return;
+ *
+ * res = percpu_add_return(counter @ Level 0, inc);
+ * orig = res - inc;
+ * if (inc < 0) {
+ * inc = -(-inc & ~0b00011111); // Clear used bits
+ * // xor bit 5: underflow
+ * if ((inc ^ orig ^ res) & 0b00100000)
+ * inc -= 0b00100000;
+ * } else {
+ * inc &= ~0b00011111; // Clear used bits
+ * // xor bit 5: overflow
+ * if ((inc ^ orig ^ res) & 0b00100000)
+ * inc += 0b00100000;
+ * }
+ * if (!inc)
+ * return;
+ *
+ * res = atomic_long_add_return(counter @ Level 1, inc);
+ * orig = res - inc;
+ * if (inc < 0) {
+ * inc = -(-inc & ~0b00111111); // Clear used bits
+ * // xor bit 6: underflow
+ * if ((inc ^ orig ^ res) & 0b01000000)
+ * inc -= 0b01000000;
+ * } else {
+ * inc &= ~0b00111111; // Clear used bits
+ * // xor bit 6: overflow
+ * if ((inc ^ orig ^ res) & 0b01000000)
+ * inc += 0b01000000;
+ * }
+ * if (!inc)
+ * return;
+ *
+ * res = atomic_long_add_return(counter @ Level 2, inc);
+ * orig = res - inc;
+ * if (inc < 0) {
+ * inc = -(-inc & ~0b01111111); // Clear used bits
+ * // xor bit 7: underflow
+ * if ((inc ^ orig ^ res) & 0b10000000)
+ * inc -= 0b10000000;
+ * } else {
+ * inc &= ~0b01111111; // Clear used bits
+ * // xor bit 7: overflow
+ * if ((inc ^ orig ^ res) & 0b10000000)
+ * inc += 0b10000000;
+ * }
+ * if (!inc)
+ * return;
+ *
+ * atomic_long_add(counter @ Level 3, inc);
+ */
+
+#include <linux/percpu_counter_tree.h>
+#include <linux/cpumask.h>
+#include <linux/percpu.h>
+#include <linux/atomic.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/math.h>
+
+#define MAX_NR_LEVELS 5
+
+/*
+ * The counter configuration is selected at boot time based on the
+ * hardware topology.
+ */
+struct counter_config {
+ unsigned int nr_items; /*
+ * nr_items is the number of items in the tree for levels 1
+ * up to and including the final level (approximate sum).
+ * It excludes the level 0 per-CPU counters.
+ */
+ unsigned char nr_levels; /*
+ * nr_levels is the number of hierarchical counter tree levels.
+ * It excludes the final level (approximate sum).
+ */
+ unsigned char n_arity_order[MAX_NR_LEVELS]; /*
+ * n-arity of tree nodes for each level from
+ * 0 to (nr_levels - 1).
+ */
+};
+
+static const struct counter_config per_nr_cpu_order_config[] = {
+ [0] = { .nr_items = 0, .nr_levels = 0, .n_arity_order = { 0 } },
+ [1] = { .nr_items = 1, .nr_levels = 1, .n_arity_order = { 1 } },
+ [2] = { .nr_items = 3, .nr_levels = 2, .n_arity_order = { 1, 1 } },
+ [3] = { .nr_items = 7, .nr_levels = 3, .n_arity_order = { 1, 1, 1 } },
+ [4] = { .nr_items = 7, .nr_levels = 3, .n_arity_order = { 2, 1, 1 } },
+ [5] = { .nr_items = 11, .nr_levels = 3, .n_arity_order = { 2, 2, 1 } },
+ [6] = { .nr_items = 21, .nr_levels = 3, .n_arity_order = { 2, 2, 2 } },
+ [7] = { .nr_items = 21, .nr_levels = 3, .n_arity_order = { 3, 2, 2 } },
+ [8] = { .nr_items = 37, .nr_levels = 3, .n_arity_order = { 3, 3, 2 } },
+ [9] = { .nr_items = 73, .nr_levels = 3, .n_arity_order = { 3, 3, 3 } },
+ [10] = { .nr_items = 149, .nr_levels = 4, .n_arity_order = { 3, 3, 2, 2 } },
+ [11] = { .nr_items = 293, .nr_levels = 4, .n_arity_order = { 3, 3, 3, 2 } },
+ [12] = { .nr_items = 585, .nr_levels = 4, .n_arity_order = { 3, 3, 3, 3 } },
+ [13] = { .nr_items = 1173, .nr_levels = 5, .n_arity_order = { 3, 3, 3, 2, 2 } },
+ [14] = { .nr_items = 2341, .nr_levels = 5, .n_arity_order = { 3, 3, 3, 3, 2 } },
+ [15] = { .nr_items = 4681, .nr_levels = 5, .n_arity_order = { 3, 3, 3, 3, 3 } },
+ [16] = { .nr_items = 4681, .nr_levels = 5, .n_arity_order = { 4, 3, 3, 3, 3 } },
+ [17] = { .nr_items = 8777, .nr_levels = 5, .n_arity_order = { 4, 4, 3, 3, 3 } },
+ [18] = { .nr_items = 17481, .nr_levels = 5, .n_arity_order = { 4, 4, 4, 3, 3 } },
+ [19] = { .nr_items = 34953, .nr_levels = 5, .n_arity_order = { 4, 4, 4, 4, 3 } },
+ [20] = { .nr_items = 69905, .nr_levels = 5, .n_arity_order = { 4, 4, 4, 4, 4 } },
+};
+
+static const struct counter_config *counter_config; /* Hierarchical counter configuration for the hardware topology. */
+static unsigned int nr_cpus_order; /* Order of nr_cpu_ids. */
+static unsigned long accuracy_multiplier; /* Calculate accuracy for a given batch size (multiplication factor). */
+
+static
+int __percpu_counter_tree_init(struct percpu_counter_tree *counter,
+ unsigned long batch_size, gfp_t gfp_flags,
+ unsigned long __percpu *level0,
+ struct percpu_counter_tree_level_item *items)
+{
+ /* Batch size must be greater than 1, and a power of 2. */
+ if (WARN_ON(batch_size <= 1 || (batch_size & (batch_size - 1))))
+ return -EINVAL;
+ counter->batch_size = batch_size;
+ counter->bias = 0;
+ counter->level0 = level0;
+ counter->items = items;
+ if (!nr_cpus_order) {
+ counter->approx_sum.i = per_cpu_ptr(counter->level0, 0);
+ counter->level0_bit_mask = 0;
+ } else {
+ counter->approx_sum.a = &counter->items[counter_config->nr_items - 1].count;
+ counter->level0_bit_mask = 1UL << get_count_order(batch_size);
+ }
+ /*
+ * Each tree item signed integer has a negative range which is
+ * one unit greater than the positive range.
+ */
+ counter->approx_accuracy_range.under = batch_size * accuracy_multiplier;
+ counter->approx_accuracy_range.over = (batch_size - 1) * accuracy_multiplier;
+ return 0;
+}
+
+/**
+ * percpu_counter_tree_init_many() - Initialize many per-CPU counter trees.
+ * @counters: An array of @nr_counters counters to initialize.
+ * Their memory is provided by the caller.
+ * @items: Pointer to memory area where to store tree items.
+ * This memory is provided by the caller.
+ * Its size needs to be at least @nr_counters * percpu_counter_tree_items_size().
+ * @nr_counters: The number of counter trees to initialize
+ * @batch_size: The batch size is the increment step at level 0 which triggers a
+ * carry propagation.
+ * The batch size is required to be greater than 1, and a power of 2.
+ * @gfp_flags: gfp flags to pass to the per-CPU allocator.
+ *
+ * Initialize many per-CPU counter trees using a single per-CPU
+ * allocator invocation for @nr_counters counters.
+ *
+ * Return:
+ * * %0: Success
+ * * %-EINVAL: - Invalid @batch_size argument
+ * * %-ENOMEM: - Out of memory
+ */
+int percpu_counter_tree_init_many(struct percpu_counter_tree *counters, struct percpu_counter_tree_level_item *items,
+ unsigned int nr_counters, unsigned long batch_size, gfp_t gfp_flags)
+{
+ void __percpu *level0, *level0_iter;
+ size_t counter_size = sizeof(*counters->level0),
+ items_size = percpu_counter_tree_items_size();
+ void *items_iter;
+ unsigned int i;
+ int ret;
+
+ memset(items, 0, items_size * nr_counters);
+ level0 = __alloc_percpu_gfp(nr_counters * counter_size,
+ __alignof__(*counters->level0), gfp_flags);
+ if (!level0)
+ return -ENOMEM;
+ level0_iter = level0;
+ items_iter = items;
+ for (i = 0; i < nr_counters; i++) {
+ ret = __percpu_counter_tree_init(&counters[i], batch_size, gfp_flags, level0_iter, items_iter);
+ if (ret)
+ goto free_level0;
+ level0_iter += counter_size;
+ items_iter += items_size;
+ }
+ return 0;
+
+free_level0:
+ free_percpu(level0);
+ return ret;
+}
+
+/**
+ * percpu_counter_tree_init() - Initialize one per-CPU counter tree.
+ * @counter: Counter to initialize.
+ * Its memory is provided by the caller.
+ * @items: Pointer to memory area where to store tree items.
+ * This memory is provided by the caller.
+ * Its size needs to be at least percpu_counter_tree_items_size().
+ * @batch_size: The batch size is the increment step at level 0 which triggers a
+ * carry propagation.
+ * The batch size is required to be greater than 1, and a power of 2.
+ * @gfp_flags: gfp flags to pass to the per-CPU allocator.
+ *
+ * Initialize one per-CPU counter tree.
+ *
+ * Return:
+ * * %0: Success
+ * * %-EINVAL: - Invalid @batch_size argument
+ * * %-ENOMEM: - Out of memory
+ */
+int percpu_counter_tree_init(struct percpu_counter_tree *counter, struct percpu_counter_tree_level_item *items,
+ unsigned long batch_size, gfp_t gfp_flags)
+{
+ return percpu_counter_tree_init_many(counter, items, 1, batch_size, gfp_flags);
+}
+
+/**
+ * percpu_counter_tree_destroy_many() - Destroy many per-CPU counter trees.
+ * @counters: Array of counter trees to destroy.
+ * @nr_counters: The number of counter trees to destroy.
+ *
+ * Release internal resources allocated for @nr_counters per-CPU counter trees.
+ */
+void percpu_counter_tree_destroy_many(struct percpu_counter_tree *counters, unsigned int nr_counters)
+{
+ free_percpu(counters->level0);
+}
+
+/**
+ * percpu_counter_tree_destroy() - Destroy one per-CPU counter tree.
+ * @counter: Counter to destroy.
+ *
+ * Release internal resources allocated for one per-CPU counter tree.
+ */
+void percpu_counter_tree_destroy(struct percpu_counter_tree *counter)
+{
+ return percpu_counter_tree_destroy_many(counter, 1);
+}
+
+static
+long percpu_counter_tree_carry(long orig, long res, long inc, unsigned long bit_mask)
+{
+ if (inc < 0) {
+ inc = -(-inc & ~(bit_mask - 1));
+ /*
+ * xor bit_mask: underflow.
+ *
+ * If inc has bit set, decrement an additional bit if
+ * there is _no_ bit transition between orig and res.
+ * Else, inc has bit cleared, decrement an additional
+ * bit if there is a bit transition between orig and
+ * res.
+ */
+ if ((inc ^ orig ^ res) & bit_mask)
+ inc -= bit_mask;
+ } else {
+ inc &= ~(bit_mask - 1);
+ /*
+ * xor bit_mask: overflow.
+ *
+ * If inc has bit set, increment an additional bit if
+ * there is _no_ bit transition between orig and res.
+ * Else, inc has bit cleared, increment an additional
+ * bit if there is a bit transition between orig and
+ * res.
+ */
+ if ((inc ^ orig ^ res) & bit_mask)
+ inc += bit_mask;
+ }
+ return inc;
+}
+
+/*
+ * It does not matter through which path the carry propagates up the
+ * tree, therefore there is no need to disable preemption because the
+ * cpu number is only used to favor cache locality.
+ */
+static
+void percpu_counter_tree_add_slowpath(struct percpu_counter_tree *counter, long inc)
+{
+ unsigned int level_items, nr_levels = counter_config->nr_levels,
+ level, n_arity_order;
+ unsigned long bit_mask;
+ struct percpu_counter_tree_level_item *item = counter->items;
+ unsigned int cpu = raw_smp_processor_id();
+
+ WARN_ON_ONCE(!nr_cpus_order); /* Should never be called for 1 cpu. */
+
+ n_arity_order = counter_config->n_arity_order[0];
+ bit_mask = counter->level0_bit_mask << n_arity_order;
+ level_items = 1U << (nr_cpus_order - n_arity_order);
+
+ for (level = 1; level < nr_levels; level++) {
+ atomic_long_t *count = &item[cpu & (level_items - 1)].count;
+ unsigned long orig, res;
+
+ res = atomic_long_add_return_relaxed(inc, count);
+ orig = res - inc;
+ inc = percpu_counter_tree_carry(orig, res, inc, bit_mask);
+ if (likely(!inc))
+ return;
+ item += level_items;
+ n_arity_order = counter_config->n_arity_order[level];
+ level_items >>= n_arity_order;
+ bit_mask <<= n_arity_order;
+ }
+ atomic_long_add(inc, counter->approx_sum.a);
+}
+
+/**
+ * percpu_counter_tree_add() - Add to a per-CPU counter tree.
+ * @counter: Counter added to.
+ * @inc: Increment value (either positive or negative).
+ *
+ * Add @inc to a per-CPU counter tree. This is a fast-path which will
+ * typically increment per-CPU counters as long as there is no carry
+ * greater or equal to the counter tree batch size.
+ */
+void percpu_counter_tree_add(struct percpu_counter_tree *counter, long inc)
+{
+ unsigned long bit_mask = counter->level0_bit_mask, orig, res;
+
+ res = this_cpu_add_return(*counter->level0, inc);
+ orig = res - inc;
+ inc = percpu_counter_tree_carry(orig, res, inc, bit_mask);
+ if (likely(!inc))
+ return;
+ percpu_counter_tree_add_slowpath(counter, inc);
+}
+
+
+static
+long percpu_counter_tree_precise_sum_unbiased(struct percpu_counter_tree *counter)
+{
+ unsigned long sum = 0;
+ int cpu;
+
+ for_each_possible_cpu(cpu)
+ sum += *per_cpu_ptr(counter->level0, cpu);
+ return (long) sum;
+}
+
+/**
+ * percpu_counter_tree_precise_sum() - Return precise counter sum.
+ * @counter: The counter to sum.
+ *
+ * Querying the precise sum is relatively expensive because it needs to
+ * iterate over all CPUs.
+ * This is meant to be used when accuracy is preferred over speed.
+ *
+ * Return: The current precise counter sum.
+ */
+long percpu_counter_tree_precise_sum(struct percpu_counter_tree *counter)
+{
+ return percpu_counter_tree_precise_sum_unbiased(counter) + READ_ONCE(counter->bias);
+}
+
+static
+int compare_delta(long delta, unsigned long accuracy_neg, unsigned long accuracy_pos)
+{
+ if (delta >= 0) {
+ if (delta <= accuracy_pos)
+ return 0;
+ else
+ return 1;
+ } else {
+ if (-delta <= accuracy_neg)
+ return 0;
+ else
+ return -1;
+ }
+}
+
+/**
+ * percpu_counter_tree_approximate_compare() - Approximated comparison of two counter trees.
+ * @a: First counter to compare.
+ * @b: Second counter to compare.
+ *
+ * Evaluate an approximate comparison of two counter trees.
+ * This approximate comparison is fast, and provides an accurate
+ * answer if one counter is found to be either less than or greater
+ * than the other. However, if the approximated comparison returns
+ * 0, the counters' respective sums are within the combined
+ * accuracy range of the two counters.
+ *
+ * Return:
+ * * %0 - Counters @a and @b do not differ by more than the sum of their respective
+ * accuracy ranges.
+ * * %-1 - Counter @a is less than counter @b.
+ * * %1 - Counter @a is greater than counter @b.
+ */
+int percpu_counter_tree_approximate_compare(struct percpu_counter_tree *a, struct percpu_counter_tree *b)
+{
+ return compare_delta(percpu_counter_tree_approximate_sum(a) - percpu_counter_tree_approximate_sum(b),
+ a->approx_accuracy_range.over + b->approx_accuracy_range.under,
+ a->approx_accuracy_range.under + b->approx_accuracy_range.over);
+}
+
+/**
+ * percpu_counter_tree_approximate_compare_value() - Approximated comparison of a counter tree against a given value.
+ * @counter: Counter to compare.
+ * @v: Value to compare.
+ *
+ * Evaluate an approximate comparison of a counter tree against a given value.
+ * This approximate comparison is fast, and provides an accurate
+ * answer if the counter is found to be either less than or greater
+ * than the value. However, if the approximated comparison returns
+ * 0, the value is within the counter accuracy range.
+ *
+ * Return:
+ * * %0 - The value @v is within the accuracy range of the counter.
+ * * %-1 - The value @v is less than the counter.
+ * * %1 - The value @v is greater than the counter.
+ */
+int percpu_counter_tree_approximate_compare_value(struct percpu_counter_tree *counter, long v)
+{
+ return compare_delta(v - percpu_counter_tree_approximate_sum(counter),
+ counter->approx_accuracy_range.under,
+ counter->approx_accuracy_range.over);
+}
+
+/**
+ * percpu_counter_tree_precise_compare() - Precise comparison of two counter trees.
+ * @a: First counter to compare.
+ * @b: Second counter to compare.
+ *
+ * Evaluate a precise comparison of two counter trees.
+ * As an optimization, it uses the approximate counter comparison
+ * to quickly compare counters which are far apart. Only cases where
+ * counter sums are within the accuracy range require precise counter
+ * sums.
+ *
+ * Return:
+ * * %0 - Counters are equal.
+ * * %-1 - Counter @a is less than counter @b.
+ * * %1 - Counter @a is greater than counter @b.
+ */
+int percpu_counter_tree_precise_compare(struct percpu_counter_tree *a, struct percpu_counter_tree *b)
+{
+ long count_a = percpu_counter_tree_approximate_sum(a),
+ count_b = percpu_counter_tree_approximate_sum(b);
+ unsigned long accuracy_a, accuracy_b;
+ long delta = count_a - count_b;
+ int res;
+
+ res = compare_delta(delta,
+ a->approx_accuracy_range.over + b->approx_accuracy_range.under,
+ a->approx_accuracy_range.under + b->approx_accuracy_range.over);
+ /* The values are distanced enough for an accurate approximated comparison. */
+ if (res)
+ return res;
+
+ /*
+ * The approximated comparison is within the accuracy range, therefore at least one
+ * precise sum is needed. Sum the counter which has the largest accuracy first.
+ */
+ if (delta >= 0) {
+ accuracy_a = a->approx_accuracy_range.under;
+ accuracy_b = b->approx_accuracy_range.over;
+ } else {
+ accuracy_a = a->approx_accuracy_range.over;
+ accuracy_b = b->approx_accuracy_range.under;
+ }
+ if (accuracy_b < accuracy_a) {
+ count_a = percpu_counter_tree_precise_sum(a);
+ res = compare_delta(count_a - count_b,
+ b->approx_accuracy_range.under,
+ b->approx_accuracy_range.over);
+ if (res)
+ return res;
+ /* Precise sum of second counter is required. */
+ count_b = percpu_counter_tree_precise_sum(b);
+ } else {
+ count_b = percpu_counter_tree_precise_sum(b);
+ res = compare_delta(count_a - count_b,
+ a->approx_accuracy_range.over,
+ a->approx_accuracy_range.under);
+ if (res)
+ return res;
+ /* Precise sum of second counter is required. */
+ count_a = percpu_counter_tree_precise_sum(a);
+ }
+ if (count_a - count_b < 0)
+ return -1;
+ if (count_a - count_b > 0)
+ return 1;
+ return 0;
+}
+
+/**
+ * percpu_counter_tree_precise_compare_value() - Precise comparison of a counter tree against a given value.
+ * @counter: Counter to compare.
+ * @v: Value to compare.
+ *
+ * Evaluate a precise comparison of a counter tree against a given value.
+ * As an optimization, it uses the approximate counter comparison
+ * to quickly identify whether the counter and value are far apart.
+ * Only cases where the value is within the counter accuracy range
+ * require a precise counter sum.
+ *
+ * Return:
+ * * %0 - The value @v is equal to the counter.
+ * * %-1 - The value @v is less than the counter.
+ * * %1 - The value @v is greater than the counter.
+ */
+int percpu_counter_tree_precise_compare_value(struct percpu_counter_tree *counter, long v)
+{
+ long count = percpu_counter_tree_approximate_sum(counter);
+ int res;
+
+ res = compare_delta(v - count,
+ counter->approx_accuracy_range.under,
+ counter->approx_accuracy_range.over);
+ /* The values are distanced enough for an accurate approximated comparison. */
+ if (res)
+ return res;
+
+ /* Precise sum is required. */
+ count = percpu_counter_tree_precise_sum(counter);
+ if (v - count < 0)
+ return -1;
+ if (v - count > 0)
+ return 1;
+ return 0;
+}
+
+static
+void percpu_counter_tree_set_bias(struct percpu_counter_tree *counter, long bias)
+{
+ WRITE_ONCE(counter->bias, bias);
+}
+
+/**
+ * percpu_counter_tree_set() - Set the counter tree sum to a given value.
+ * @counter: Counter to set.
+ * @v: Value to set.
+ *
+ * Set the counter sum to a given value. It can be useful for instance
+ * to reset the counter sum to 0. Note that even after setting the
+ * counter sum to a given value, the counter sum approximation can
+ * return any value within the accuracy range around that value.
+ */
+void percpu_counter_tree_set(struct percpu_counter_tree *counter, long v)
+{
+ percpu_counter_tree_set_bias(counter,
+ v - percpu_counter_tree_precise_sum_unbiased(counter));
+}
+
+/**
+ * percpu_counter_tree_items_size() - Query the size required for counter tree items.
+ *
+ * Query the size of the memory area required to hold the counter tree
+ * items. This depends on the hardware topology and is invariant after
+ * boot.
+ *
+ * Return: Size required to hold tree items.
+ */
+size_t percpu_counter_tree_items_size(void)
+{
+ if (!nr_cpus_order)
+ return 0;
+ return counter_config->nr_items * sizeof(struct percpu_counter_tree_level_item);
+}
+
+static void __init calculate_accuracy_topology(void)
+{
+ unsigned int nr_levels = counter_config->nr_levels, level;
+ unsigned int level_items = 1U << nr_cpus_order;
+ unsigned long batch_size = 1;
+
+ for (level = 0; level < nr_levels; level++) {
+ unsigned int n_arity_order = counter_config->n_arity_order[level];
+
+ /*
+ * The accuracy multiplier is derived from a batch size of 1
+ * to speed up calculating the accuracy at tree initialization.
+ */
+ accuracy_multiplier += batch_size * level_items;
+ batch_size <<= n_arity_order;
+ level_items >>= n_arity_order;
+ }
+}
+
+int __init percpu_counter_tree_subsystem_init(void)
+{
+ nr_cpus_order = get_count_order(nr_cpu_ids);
+ if (WARN_ON_ONCE(nr_cpus_order >= ARRAY_SIZE(per_nr_cpu_order_config))) {
+ printk(KERN_ERR "Unsupported number of CPUs (%u)\n", nr_cpu_ids);
+ return -1;
+ }
+ counter_config = &per_nr_cpu_order_config[nr_cpus_order];
+ calculate_accuracy_topology();
+ return 0;
+}
--
2.39.5
* [PATCH v16 2/3] mm: Improve RSS counter approximation accuracy for proc interfaces
2026-01-14 14:59 [PATCH v16 0/3] Improve proc RSS accuracy and OOM killer latency Mathieu Desnoyers
2026-01-14 14:59 ` [PATCH v16 1/3] lib: Introduce hierarchical per-cpu counters Mathieu Desnoyers
@ 2026-01-14 14:59 ` Mathieu Desnoyers
2026-01-14 16:48 ` Michal Hocko
2026-01-14 14:59 ` [PATCH v16 3/3] mm: Reduce latency of OOM killer task selection with 2-pass algorithm Mathieu Desnoyers
2 siblings, 1 reply; 10+ messages in thread
From: Mathieu Desnoyers @ 2026-01-14 14:59 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, Mathieu Desnoyers, Paul E. McKenney,
Steven Rostedt, Masami Hiramatsu, Dennis Zhou, Tejun Heo,
Christoph Lameter, Martin Liu, David Rientjes, christian.koenig,
Shakeel Butt, SeongJae Park, Michal Hocko, Johannes Weiner,
Sweet Tea Dorminy, Lorenzo Stoakes, Liam R . Howlett,
Mike Rapoport, Suren Baghdasaryan, Vlastimil Babka,
Christian Brauner, Wei Yang, David Hildenbrand, Miaohe Lin,
Al Viro, linux-mm, linux-trace-kernel, Yu Zhao, Roman Gushchin,
Mateusz Guzik, Matthew Wilcox, Baolin Wang, Aboorva Devarajan
Use hierarchical per-cpu counters for RSS tracking to improve the
accuracy of per-mm RSS sum approximation on large many-core systems [1].
This improves the accuracy of the RSS values returned by proc
interfaces.
This is also a preparation step to introduce a 2-pass OOM killer task
selection which leverages the approximation and accuracy ranges to
quickly eliminate tasks which are outside of the range of the current
selection, and thus reduce the latency introduced by execution of the
OOM killer.
Here is a (possibly incomplete) list of the prior approaches that were
used or proposed, along with their downsides:
1) Per-thread RSS tracking: large error on many-thread processes.
2) Per-CPU counters: up to 12% slower for short-lived processes and 9%
increased system time in make test workloads [1]. Moreover, the
inaccuracy increases as O(n^2) with the number of CPUs.
3) Per-NUMA-node counters: require atomics on the fast path (overhead);
the error is high on systems with many NUMA nodes (32 times
the number of NUMA nodes).
4) Use a precise per-cpu counter sum for each counter value query:
requires iterating over all possible CPUs for each sum, which
adds overhead (and thus increases OOM killer latency) on large
many-core systems running many processes.
The approach proposed here is to replace the per-cpu counters with the
hierarchical per-cpu counters, which bound the inaccuracy, based on the
system topology, to O(N*logN).
* Testing results:
Test hardware: 2 sockets AMD EPYC 9654 96-Core Processor (384 logical CPUs total)
Methodology:
Comparing the current upstream implementation with the hierarchical
counters is done by keeping both implementations wired up in parallel,
and running a single-process, single-threaded program which hops
randomly across CPUs in the system, calling mmap(2) and munmap(2) on
random CPUs, keeping track of an array of allocated mappings, randomly
choosing entries to either map or unmap.
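For reference, here is a minimal user-space sketch of this kind of test
program. The map size, number of slots, and use of MAP_POPULATE are
illustrative choices, not the exact program used to produce the numbers
below:
#define _GNU_SOURCE
#include <sched.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define NR_MAPS         1024
#define MAP_LEN         (2UL * 1024 * 1024)

int main(void)
{
        void *maps[NR_MAPS] = { NULL };
        long nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);

        srandom(getpid());
        for (;;) {      /* run until interrupted */
                cpu_set_t set;
                int slot = random() % NR_MAPS;

                /* Hop to a random CPU; errors are ignored for brevity. */
                CPU_ZERO(&set);
                CPU_SET(random() % nr_cpus, &set);
                sched_setaffinity(0, sizeof(set), &set);

                if (maps[slot]) {
                        munmap(maps[slot], MAP_LEN);
                        maps[slot] = NULL;
                } else {
                        maps[slot] = mmap(NULL, MAP_LEN,
                                          PROT_READ | PROT_WRITE,
                                          MAP_PRIVATE | MAP_ANONYMOUS |
                                          MAP_POPULATE, -1, 0);
                        if (maps[slot] == MAP_FAILED)
                                maps[slot] = NULL;
                }
        }
        return 0;
}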
get_mm_counter() is instrumented to compare the upstream counter
approximation to the precise value, and print the delta when going over
a given threshold. The delta of the hierarchical counter approximation
to the precise value is also printed for comparison.
After a few minutes running this test, the upstream implementation's
counter approximation reaches a 1GB delta from the precise value,
compared to an 80MB delta with the hierarchical counter.
The hierarchical counter provides a guaranteed maximum approximation
inaccuracy of 192MB on that hardware topology.
* Fast path implementation comparison
The new inline percpu_counter_tree_add() uses this_cpu_add_return()
for the fast path (under a certain allocation size threshold). Above
that, it calls a slow path which "trickles up" the carry to upper-level
counters with atomic_add_return().
In comparison, the upstream counters implementation calls
percpu_counter_add_batch(), which uses this_cpu_try_cmpxchg() on the
fast path, and does a raw_spin_lock_irqsave() above a certain
threshold.
The hierarchical implementation is therefore expected to have less
contention on mid-sized allocations than the upstream counters because
the atomic counters tracking those bits are only shared across nearby
CPUs. In comparison, the upstream counters immediately use a global
spinlock when reaching the threshold.
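To make the fast path / slow path split more concrete, here is a
simplified user-space model. It is not the kernel implementation: the
real code keeps a remainder at each level and forwards exact carries
with atomic_add_return(), and its level 0 is genuinely per-CPU storage
updated with this_cpu_add_return(). The tree shape, batch doubling and
names below are illustrative only:
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS         8
#define BATCH           32      /* level-0 batch size (illustrative) */

/* level[0] stands in for the per-cpu counters; level[3][0] is the root. */
static _Atomic long level[4][NR_CPUS];

static void counter_add(int cpu, long inc)
{
        long batch = BATCH;
        int idx = cpu, lvl;
        long v;

        /* Fast path: only this CPU's slot is touched. */
        v = atomic_fetch_add_explicit(&level[0][cpu], inc,
                                      memory_order_relaxed) + inc;
        if (labs(v) < batch)
                return;

        /* Slow path: trickle the accumulated value up the tree. */
        for (lvl = 0; lvl < 3; lvl++) {
                long flushed = atomic_exchange(&level[lvl][idx], 0);

                if (!flushed)
                        return; /* a concurrent add already flushed it */
                idx /= 2;       /* binary tree: two children per node */
                batch *= 2;     /* each level batches twice as much */
                v = atomic_fetch_add(&level[lvl + 1][idx], flushed) + flushed;
                if (labs(v) < batch)
                        return; /* carry absorbed at this level */
        }
}

/* Fast approximate read: only the root is consulted. */
static long approximate_sum(void)
{
        return atomic_load(&level[3][0]);
}

/* Precise read: also sum what is still held below the batch thresholds. */
static long precise_sum(void)
{
        long sum = 0;

        for (int l = 0; l < 4; l++)
                for (int i = 0; i < NR_CPUS >> l; i++)
                        sum += atomic_load(&level[l][i]);
        return sum;
}

int main(void)
{
        for (int cpu = 0; cpu < NR_CPUS; cpu++)
                counter_add(cpu, 40);   /* 40 >= BATCH, so each add trickles up */
        printf("approx=%ld precise=%ld\n", approximate_sum(), precise_sum());
        return 0;
}
The point the model illustrates is that crossing a batch threshold only
involves atomics shared by nearby CPUs at the lower levels, and only
rarely propagates all the way to the root.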
* Benchmarks
The will-it-scale page_fault1 benchmarks are used to compare the
upstream counters to the hierarchical counters, with hyperthreading
disabled. The speedup is within the standard deviation of the upstream
runs, so the overhead is not significant.
                                      upstream  hierarchical  speedup
page_fault1_processes -s 100 -t 1       614783        615558    +0.1%
page_fault1_threads -s 100 -t 1         612788        612447    -0.1%
page_fault1_processes -s 100 -t 96    37994977      37932035    -0.2%
page_fault1_threads -s 100 -t 96       2484130       2504860    +0.8%
page_fault1_processes -s 100 -t 192   71262917      71118830    -0.2%
page_fault1_threads -s 100 -t 192      2446437       2469296    +0.1%
This change depends on the following patch:
"mm: Fix OOM killer inaccuracy on large many-core systems" [2]
Link: https://lore.kernel.org/lkml/20250331223516.7810-2-sweettea-kernel@dorminy.me/ # [1]
Link: https://lore.kernel.org/lkml/20260114143642.47333-1-mathieu.desnoyers@efficios.com/ # [2]
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Martin Liu <liumartin@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: christian.koenig@amd.com
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: SeongJae Park <sj@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Sweet Tea Dorminy <sweettea-kernel@dorminy.me>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: "Liam R . Howlett" <liam.howlett@oracle.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-mm@kvack.org
Cc: linux-trace-kernel@vger.kernel.org
Cc: Yu Zhao <yuzhao@google.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Aboorva Devarajan <aboorvad@linux.ibm.com>
---
Changes since v15:
- Update the commit message to explain that this change is an
improvement to the accuracy of proc interfaces approximated RSS
values, and a preparation step for reducing OOM killer latency.
- Rebase on v2 of "mm: Fix OOM killer and proc stats inaccuracy on
large many-core systems".
Changes since v14:
- This change becomes the preparation for reducing the OOM killer latency.
Changes since v13:
- Change check_mm print format from %d to %ld.
Changes since v10:
- Rebase on top of mm_struct static init fixes.
- Change the alignment of mm_struct flexible array to the alignment of
the rss counters items (which are cacheline aligned on SMP).
- Move the rss counters items to first position within the flexible
array at the end of the mm_struct to place content in decreasing
alignment requirement order.
Changes since v8:
- Use percpu_counter_tree_init_many and
percpu_counter_tree_destroy_many APIs.
- Remove percpu tree items allocation. Extend mm_struct size to include
rss items. Those are handled through the new helpers
get_rss_stat_items() and get_rss_stat_items_size() and passed
as parameter to percpu_counter_tree_init_many().
Changes since v7:
- Use precise sum positive API to handle a scenario where an unlucky
precise sum iteration would observe negative counter values due to
concurrent updates.
Changes since v6:
- Rebased on v6.18-rc3.
- Implement get_mm_counter_sum as percpu_counter_tree_precise_sum for
/proc virtual files memory state queries.
Changes since v5:
- Use percpu_counter_tree_approximate_sum_positive.
Changes since v4:
- get_mm_counter needs to return 0 or a positive value.
---
include/linux/mm.h | 19 ++++++++++----
include/linux/mm_types.h | 50 +++++++++++++++++++++++++++----------
include/trace/events/kmem.h | 2 +-
kernel/fork.c | 22 +++++++++-------
4 files changed, 65 insertions(+), 28 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index bfa1307264df..10d62e8f35e7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2843,38 +2843,47 @@ static inline bool get_user_page_fast_only(unsigned long addr,
{
return get_user_pages_fast_only(addr, 1, gup_flags, pagep) == 1;
}
+
+static inline struct percpu_counter_tree_level_item *get_rss_stat_items(struct mm_struct *mm)
+{
+ unsigned long ptr = (unsigned long)mm;
+
+ ptr += offsetof(struct mm_struct, flexible_array);
+ return (struct percpu_counter_tree_level_item *)ptr;
+}
+
/*
* per-process(per-mm_struct) statistics.
*/
static inline unsigned long get_mm_counter(struct mm_struct *mm, int member)
{
- return percpu_counter_read_positive(&mm->rss_stat[member]);
+ return percpu_counter_tree_approximate_sum_positive(&mm->rss_stat[member]);
}
static inline unsigned long get_mm_counter_sum(struct mm_struct *mm, int member)
{
- return percpu_counter_sum_positive(&mm->rss_stat[member]);
+ return percpu_counter_tree_precise_sum_positive(&mm->rss_stat[member]);
}
void mm_trace_rss_stat(struct mm_struct *mm, int member);
static inline void add_mm_counter(struct mm_struct *mm, int member, long value)
{
- percpu_counter_add(&mm->rss_stat[member], value);
+ percpu_counter_tree_add(&mm->rss_stat[member], value);
mm_trace_rss_stat(mm, member);
}
static inline void inc_mm_counter(struct mm_struct *mm, int member)
{
- percpu_counter_inc(&mm->rss_stat[member]);
+ percpu_counter_tree_add(&mm->rss_stat[member], 1);
mm_trace_rss_stat(mm, member);
}
static inline void dec_mm_counter(struct mm_struct *mm, int member)
{
- percpu_counter_dec(&mm->rss_stat[member]);
+ percpu_counter_tree_add(&mm->rss_stat[member], -1);
mm_trace_rss_stat(mm, member);
}
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 9f861ceabe61..c3e8f0ce3112 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -18,7 +18,7 @@
#include <linux/page-flags-layout.h>
#include <linux/workqueue.h>
#include <linux/seqlock.h>
-#include <linux/percpu_counter.h>
+#include <linux/percpu_counter_tree.h>
#include <linux/types.h>
#include <linux/rseq_types.h>
#include <linux/bitmap.h>
@@ -1070,6 +1070,19 @@ typedef struct {
DECLARE_BITMAP(__mm_flags, NUM_MM_FLAG_BITS);
} __private mm_flags_t;
+/*
+ * The alignment of the mm_struct flexible array is based on the largest
+ * alignment of its content:
+ * __alignof__(struct percpu_counter_tree_level_item) provides a
+ * cacheline aligned alignment on SMP systems, else alignment on
+ * unsigned long on UP systems.
+ */
+#ifdef CONFIG_SMP
+# define __mm_struct_flexible_array_aligned __aligned(__alignof__(struct percpu_counter_tree_level_item))
+#else
+# define __mm_struct_flexible_array_aligned __aligned(__alignof__(unsigned long))
+#endif
+
struct kioctx_table;
struct iommu_mm_data;
struct mm_struct {
@@ -1215,7 +1228,7 @@ struct mm_struct {
unsigned long saved_e_flags;
#endif
- struct percpu_counter rss_stat[NR_MM_COUNTERS];
+ struct percpu_counter_tree rss_stat[NR_MM_COUNTERS];
struct linux_binfmt *binfmt;
@@ -1326,10 +1339,13 @@ struct mm_struct {
} __randomize_layout;
/*
- * The mm_cpumask needs to be at the end of mm_struct, because it
- * is dynamically sized based on nr_cpu_ids.
+ * The rss hierarchical counter items, mm_cpumask, and mm_cid
+ * masks need to be at the end of mm_struct, because they are
+ * dynamically sized based on nr_cpu_ids.
+ * The content of the flexible array needs to be placed in
+ * decreasing alignment requirement order.
*/
- char flexible_array[] __aligned(__alignof__(unsigned long));
+ char flexible_array[] __mm_struct_flexible_array_aligned;
};
/* Copy value to the first system word of mm flags, non-atomically. */
@@ -1368,22 +1384,28 @@ extern struct mm_struct init_mm;
#define MM_STRUCT_FLEXIBLE_ARRAY_INIT \
{ \
- [0 ... sizeof(cpumask_t) + MM_CID_STATIC_SIZE + PERCPU_COUNTER_TREE_ITEMS_STATIC_SIZE - 1] = 0 \
+ [0 ... PERCPU_COUNTER_TREE_ITEMS_STATIC_SIZE + sizeof(cpumask_t) + MM_CID_STATIC_SIZE - 1] = 0 \
}
-/* Pointer magic because the dynamic array size confuses some compilers. */
-static inline void mm_init_cpumask(struct mm_struct *mm)
+static inline size_t get_rss_stat_items_size(void)
{
- unsigned long cpu_bitmap = (unsigned long)mm;
-
- cpu_bitmap += offsetof(struct mm_struct, flexible_array);
- cpumask_clear((struct cpumask *)cpu_bitmap);
+ return percpu_counter_tree_items_size() * NR_MM_COUNTERS;
}
/* Future-safe accessor for struct mm_struct's cpu_vm_mask. */
static inline cpumask_t *mm_cpumask(struct mm_struct *mm)
{
- return (struct cpumask *)&mm->flexible_array;
+ unsigned long ptr = (unsigned long)mm;
+
+ ptr += offsetof(struct mm_struct, flexible_array);
+ /* Skip RSS stats counters. */
+ ptr += get_rss_stat_items_size();
+ return (struct cpumask *)ptr;
+}
+
+static inline void mm_init_cpumask(struct mm_struct *mm)
+{
+ cpumask_clear((struct cpumask *)mm_cpumask(mm));
}
#ifdef CONFIG_LRU_GEN
@@ -1475,6 +1497,8 @@ static inline cpumask_t *mm_cpus_allowed(struct mm_struct *mm)
unsigned long bitmap = (unsigned long)mm;
bitmap += offsetof(struct mm_struct, flexible_array);
+ /* Skip RSS stats counters. */
+ bitmap += get_rss_stat_items_size();
/* Skip cpu_bitmap */
bitmap += cpumask_size();
return (struct cpumask *)bitmap;
diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index 7f93e754da5c..91c81c44f884 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -442,7 +442,7 @@ TRACE_EVENT(rss_stat,
__entry->mm_id = mm_ptr_to_hash(mm);
__entry->curr = !!(current->mm == mm);
__entry->member = member;
- __entry->size = (percpu_counter_sum_positive(&mm->rss_stat[member])
+ __entry->size = (percpu_counter_tree_approximate_sum_positive(&mm->rss_stat[member])
<< PAGE_SHIFT);
),
diff --git a/kernel/fork.c b/kernel/fork.c
index b1f3915d5f8e..8b56d81af734 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -133,6 +133,11 @@
*/
#define MAX_THREADS FUTEX_TID_MASK
+/*
+ * Batch size of rss stat approximation
+ */
+#define RSS_STAT_BATCH_SIZE 32
+
/*
* Protected counters by write_lock_irq(&tasklist_lock)
*/
@@ -626,14 +631,12 @@ static void check_mm(struct mm_struct *mm)
"Please make sure 'struct resident_page_types[]' is updated as well");
for (i = 0; i < NR_MM_COUNTERS; i++) {
- long x = percpu_counter_sum(&mm->rss_stat[i]);
-
- if (unlikely(x)) {
+ if (unlikely(percpu_counter_tree_precise_compare_value(&mm->rss_stat[i], 0) != 0))
pr_alert("BUG: Bad rss-counter state mm:%p type:%s val:%ld Comm:%s Pid:%d\n",
- mm, resident_page_types[i], x,
+ mm, resident_page_types[i],
+ percpu_counter_tree_precise_sum(&mm->rss_stat[i]),
current->comm,
task_pid_nr(current));
- }
}
if (mm_pgtables_bytes(mm))
@@ -731,7 +734,7 @@ void __mmdrop(struct mm_struct *mm)
put_user_ns(mm->user_ns);
mm_pasid_drop(mm);
mm_destroy_cid(mm);
- percpu_counter_destroy_many(mm->rss_stat, NR_MM_COUNTERS);
+ percpu_counter_tree_destroy_many(mm->rss_stat, NR_MM_COUNTERS);
free_mm(mm);
}
@@ -1123,8 +1126,9 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
if (mm_alloc_cid(mm, p))
goto fail_cid;
- if (percpu_counter_init_many(mm->rss_stat, 0, GFP_KERNEL_ACCOUNT,
- NR_MM_COUNTERS))
+ if (percpu_counter_tree_init_many(mm->rss_stat, get_rss_stat_items(mm),
+ NR_MM_COUNTERS, RSS_STAT_BATCH_SIZE,
+ GFP_KERNEL_ACCOUNT))
goto fail_pcpu;
mm->user_ns = get_user_ns(user_ns);
@@ -3006,7 +3010,7 @@ void __init mm_cache_init(void)
* dynamically sized based on the maximum CPU number this system
* can have, taking hotplug into account (nr_cpu_ids).
*/
- mm_size = sizeof(struct mm_struct) + cpumask_size() + mm_cid_size();
+ mm_size = sizeof(struct mm_struct) + cpumask_size() + mm_cid_size() + get_rss_stat_items_size();
mm_cachep = kmem_cache_create_usercopy("mm_struct",
mm_size, ARCH_MIN_MMSTRUCT_ALIGN,
--
2.39.5
* [PATCH v16 3/3] mm: Reduce latency of OOM killer task selection with 2-pass algorithm
2026-01-14 14:59 [PATCH v16 0/3] Improve proc RSS accuracy and OOM killer latency Mathieu Desnoyers
2026-01-14 14:59 ` [PATCH v16 1/3] lib: Introduce hierarchical per-cpu counters Mathieu Desnoyers
2026-01-14 14:59 ` [PATCH v16 2/3] mm: Improve RSS counter approximation accuracy for proc interfaces Mathieu Desnoyers
@ 2026-01-14 14:59 ` Mathieu Desnoyers
2026-01-14 17:06 ` Michal Hocko
2 siblings, 1 reply; 10+ messages in thread
From: Mathieu Desnoyers @ 2026-01-14 14:59 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, Mathieu Desnoyers, Paul E. McKenney,
Steven Rostedt, Masami Hiramatsu, Dennis Zhou, Tejun Heo,
Christoph Lameter, Martin Liu, David Rientjes, christian.koenig,
Shakeel Butt, SeongJae Park, Michal Hocko, Johannes Weiner,
Sweet Tea Dorminy, Lorenzo Stoakes, Liam R . Howlett,
Mike Rapoport, Suren Baghdasaryan, Vlastimil Babka,
Christian Brauner, Wei Yang, David Hildenbrand, Miaohe Lin,
Al Viro, linux-mm, linux-trace-kernel, Yu Zhao, Roman Gushchin,
Mateusz Guzik, Matthew Wilcox, Baolin Wang, Aboorva Devarajan
Use the hierarchical tree counter approximation (hpcc) to implement the
OOM killer task selection with a 2-pass algorithm. The first pass
selects the process that has the highest badness points approximation,
and the second pass compares each process using the current max badness
points approximation.
The second pass uses an approximate comparison to eliminate all processes
which are below the current max badness points approximation accuracy
range.
Summing the per-CPU counters to calculate the precise badness of tasks
is only required for tasks with an approximate badness within the
accuracy range of the current max points value.
Limit to 16 the maximum number of badness sums allowed for an OOM killer
task selection before falling back to the approximated comparison. This
ensures bounded execution time for scenarios where many tasks have
badness within the accuracy of the maximum badness approximation.
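The selection logic can be sketched as follows (an illustrative model
over an array of candidates, not the kernel code; the actual
implementation iterates tasks through oom_evaluate_task() as shown in
the diff below):
#include <limits.h>
#include <stdio.h>

struct candidate {
        long approx;            /* approximate badness */
        long under, over;       /* accuracy range under/over the approximation */
        long (*precise)(struct candidate *c);   /* expensive precise sum */
};

#define MAX_PRECISE_SUMS        16

static int pick_victim(struct candidate *c, int nr)
{
        long chosen_min = LONG_MIN;
        int chosen = -1, nr_precise = 0, i;

        /* Pass 1: keep the candidate with the highest minimum possible badness. */
        for (i = 0; i < nr; i++) {
                long min = c[i].approx - c[i].under;

                if (min > chosen_min) {
                        chosen_min = min;
                        chosen = i;
                }
        }

        /*
         * Pass 2: skip candidates which are certainly below the current
         * choice; do a precise sum only for the few that may be in range,
         * falling back to the approximation after MAX_PRECISE_SUMS sums.
         */
        for (i = 0; i < nr; i++) {
                long min = c[i].approx - c[i].under;
                long max = c[i].approx + c[i].over;

                if (max < chosen_min)
                        continue;
                if (nr_precise < MAX_PRECISE_SUMS) {
                        nr_precise++;
                        min = c[i].precise(&c[i]);
                        if (min < chosen_min)
                                continue;
                }
                chosen = i;
                chosen_min = min;
        }
        return chosen;
}

static long demo_precise(struct candidate *c)
{
        return c->approx;       /* pretend the precise value equals the approximation */
}

int main(void)
{
        struct candidate tasks[] = {
                { .approx = 100, .under = 10, .over = 10, .precise = demo_precise },
                { .approx = 500, .under = 10, .over = 10, .precise = demo_precise },
                { .approx = 495, .under = 10, .over = 10, .precise = demo_precise },
        };

        printf("victim index: %d\n", pick_victim(tasks, 3));
        return 0;
}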
Testing the execution time of select_bad_process() with a single
tail -f /dev/zero:
AMD EPYC 9654 96-Core (2 sockets)
Within a KVM, configured with 256 logical cpus.
| precise sum | hpcc |
----------------------------------|-------------|----------|
nr_processes=40 | 0.5 ms | 0.3 ms |
nr_processes=10000 | 80.0 ms | 7.9 ms |
Tested with the following script:
#!/bin/sh
for a in $(seq 1 10); do (tail /dev/zero &); done
sleep 5
for a in $(seq 1 10); do (tail /dev/zero &); done
sleep 2
for a in $(seq 1 10); do (tail /dev/zero &); done
echo "Waiting for tasks to finish"
wait
Results: OOM kill order on a 128GB memory system
================================================
* systemd and sd-pam are chosen first due to their oom_score_adj:100:
Out of memory: Killed process 3502 (systemd) total-vm:20096kB, anon-rss:0kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:72kB oom_score_adj:100
Out of memory: Killed process 3503 ((sd-pam)) total-vm:21432kB, anon-rss:0kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:76kB oom_score_adj:100
* The first batch of 10 processes is gradually killed, consecutively
picking the one that uses the most memory. Freeing the memory of the
previously killed processes raises the threshold at which the
remaining processes of that group are killed. Processes from the
second and third batches of 10 processes have time to start before we
complete killing the first 10 processes:
Out of memory: Killed process 3703 (tail) total-vm:6591280kB, anon-rss:6578176kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:12936kB oom_score_adj:0
Out of memory: Killed process 3705 (tail) total-vm:6731716kB, anon-rss:6709248kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:13212kB oom_score_adj:0
Out of memory: Killed process 3707 (tail) total-vm:6977216kB, anon-rss:6946816kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:13692kB oom_score_adj:0
Out of memory: Killed process 3699 (tail) total-vm:7205640kB, anon-rss:7184384kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:14136kB oom_score_adj:0
Out of memory: Killed process 3713 (tail) total-vm:7463204kB, anon-rss:7438336kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:14644kB oom_score_adj:0
Out of memory: Killed process 3701 (tail) total-vm:7739204kB, anon-rss:7716864kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:15180kB oom_score_adj:0
Out of memory: Killed process 3709 (tail) total-vm:8050176kB, anon-rss:8028160kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:15792kB oom_score_adj:0
Out of memory: Killed process 3711 (tail) total-vm:8362236kB, anon-rss:8339456kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:16404kB oom_score_adj:0
Out of memory: Killed process 3715 (tail) total-vm:8649360kB, anon-rss:8634368kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:16972kB oom_score_adj:0
Out of memory: Killed process 3697 (tail) total-vm:8951788kB, anon-rss:8929280kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:17560kB oom_score_adj:0
* Even though there is a 2-second delay between the 2nd and 3rd batches,
those appear to execute in mixed order, so let's consider them as a
single batch of 20 processes. We hit OOM at a lower memory threshold
because at this point 20 remaining processes are running rather than
the previous 10. The process with the highest memory usage is selected
for OOM, making room for the remaining processes to use more memory
before the available memory fills up again. This explains why the
memory use of the selected processes gradually increases, until all
system memory is used by the last one:
Out of memory: Killed process 3731 (tail) total-vm:7089868kB, anon-rss:7077888kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:13912kB oom_score_adj:0
Out of memory: Killed process 3721 (tail) total-vm:7417248kB, anon-rss:7405568kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:14556kB oom_score_adj:0
Out of memory: Killed process 3729 (tail) total-vm:7795864kB, anon-rss:7766016kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:15300kB oom_score_adj:0
Out of memory: Killed process 3723 (tail) total-vm:8259620kB, anon-rss:8224768kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:16208kB oom_score_adj:0
Out of memory: Killed process 3737 (tail) total-vm:8695984kB, anon-rss:8667136kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:17060kB oom_score_adj:0
Out of memory: Killed process 3735 (tail) total-vm:9295980kB, anon-rss:9265152kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:18240kB oom_score_adj:0
Out of memory: Killed process 3727 (tail) total-vm:9907900kB, anon-rss:9895936kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:19428kB oom_score_adj:0
Out of memory: Killed process 3719 (tail) total-vm:10631248kB, anon-rss:10600448kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:20844kB oom_score_adj:0
Out of memory: Killed process 3733 (tail) total-vm:11341720kB, anon-rss:11321344kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:22232kB oom_score_adj:0
Out of memory: Killed process 3725 (tail) total-vm:12348124kB, anon-rss:12320768kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:24204kB oom_score_adj:0
Out of memory: Killed process 3759 (tail) total-vm:12978888kB, anon-rss:12967936kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:25440kB oom_score_adj:0
Out of memory: Killed process 3751 (tail) total-vm:14386412kB, anon-rss:14352384kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:28196kB oom_score_adj:0
Out of memory: Killed process 3741 (tail) total-vm:16153168kB, anon-rss:16130048kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:31652kB oom_score_adj:0
Out of memory: Killed process 3753 (tail) total-vm:18414856kB, anon-rss:18391040kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:36076kB oom_score_adj:0
Out of memory: Killed process 3745 (tail) total-vm:21389456kB, anon-rss:21356544kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:41904kB oom_score_adj:0
Out of memory: Killed process 3747 (tail) total-vm:25659348kB, anon-rss:25632768kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:50260kB oom_score_adj:0
Out of memory: Killed process 3755 (tail) total-vm:32030820kB, anon-rss:32006144kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:62720kB oom_score_adj:0
Out of memory: Killed process 3743 (tail) total-vm:42648456kB, anon-rss:42614784kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:83504kB oom_score_adj:0
Out of memory: Killed process 3757 (tail) total-vm:63971028kB, anon-rss:63938560kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:125228kB oom_score_adj:0
Out of memory: Killed process 3749 (tail) total-vm:127799660kB, anon-rss:127778816kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:250140kB oom_score_adj:0
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Martin Liu <liumartin@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: christian.koenig@amd.com
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: SeongJae Park <sj@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Sweet Tea Dorminy <sweettea-kernel@dorminy.me>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: "Liam R . Howlett" <liam.howlett@oracle.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-mm@kvack.org
Cc: linux-trace-kernel@vger.kernel.org
Cc: Yu Zhao <yuzhao@google.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Aboorva Devarajan <aboorvad@linux.ibm.com>
---
Changes since v14:
- This change becomes an OOM killer latency reduction (performance
improvement).
Changes since v12:
- The oom_task_origin case needs to set points_min rather than points
variable now, otherwise points_min is used without being initialized.
Changes since v11:
- get_mm_counter_sum() returns a precise sum.
- Use unsigned long type rather than unsigned int for accuracy.
- Use precise sum min/max calculation to compare the chosen vs
current points.
- The first pass finds the maximum task's min points. The
second pass eliminates all tasks for which the max points
are below the currently chosen min points, and uses a precise
sum to validate the candidates which are possibly in range.
---
fs/proc/base.c | 2 +-
include/linux/mm.h | 34 +++++++++++++-----
include/linux/oom.h | 11 +++++-
mm/oom_kill.c | 84 +++++++++++++++++++++++++++++++++++++--------
4 files changed, 106 insertions(+), 25 deletions(-)
diff --git a/fs/proc/base.c b/fs/proc/base.c
index 4eec684baca9..d75d0ce97032 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -589,7 +589,7 @@ static int proc_oom_score(struct seq_file *m, struct pid_namespace *ns,
unsigned long points = 0;
long badness;
- badness = oom_badness(task, totalpages);
+ badness = oom_badness(task, totalpages, false, NULL, NULL);
/*
* Special case OOM_SCORE_ADJ_MIN for all others scale the
* badness value into [0, 2000] range which we have been
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 10d62e8f35e7..298b8223fbfb 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2855,14 +2855,28 @@ static inline struct percpu_counter_tree_level_item *get_rss_stat_items(struct m
/*
* per-process(per-mm_struct) statistics.
*/
+static inline unsigned long __get_mm_counter(struct mm_struct *mm, int member, bool approximate,
+ unsigned long *accuracy_under, unsigned long *accuracy_over)
+{
+ if (approximate) {
+ if (accuracy_under && accuracy_over) {
+ percpu_counter_tree_approximate_accuracy_range(&mm->rss_stat[member],
+ accuracy_under, accuracy_over);
+ }
+ return percpu_counter_tree_approximate_sum_positive(&mm->rss_stat[member]);
+ } else {
+ return percpu_counter_tree_precise_sum_positive(&mm->rss_stat[member]);
+ }
+}
+
static inline unsigned long get_mm_counter(struct mm_struct *mm, int member)
{
- return percpu_counter_tree_approximate_sum_positive(&mm->rss_stat[member]);
+ return __get_mm_counter(mm, member, true, NULL, NULL);
}
static inline unsigned long get_mm_counter_sum(struct mm_struct *mm, int member)
{
- return percpu_counter_tree_precise_sum_positive(&mm->rss_stat[member]);
+ return __get_mm_counter(mm, member, false, NULL, NULL);
}
void mm_trace_rss_stat(struct mm_struct *mm, int member);
@@ -2903,18 +2917,22 @@ static inline int mm_counter(struct folio *folio)
return mm_counter_file(folio);
}
+static inline unsigned long __get_mm_rss(struct mm_struct *mm, bool approximate,
+ unsigned long *accuracy_under, unsigned long *accuracy_over)
+{
+ return __get_mm_counter(mm, MM_FILEPAGES, approximate, accuracy_under, accuracy_over) +
+ __get_mm_counter(mm, MM_ANONPAGES, approximate, accuracy_under, accuracy_over) +
+ __get_mm_counter(mm, MM_SHMEMPAGES, approximate, accuracy_under, accuracy_over);
+}
+
static inline unsigned long get_mm_rss(struct mm_struct *mm)
{
- return get_mm_counter(mm, MM_FILEPAGES) +
- get_mm_counter(mm, MM_ANONPAGES) +
- get_mm_counter(mm, MM_SHMEMPAGES);
+ return __get_mm_rss(mm, true, NULL, NULL);
}
static inline unsigned long get_mm_rss_sum(struct mm_struct *mm)
{
- return get_mm_counter_sum(mm, MM_FILEPAGES) +
- get_mm_counter_sum(mm, MM_ANONPAGES) +
- get_mm_counter_sum(mm, MM_SHMEMPAGES);
+ return __get_mm_rss(mm, false, NULL, NULL);
}
static inline unsigned long get_mm_hiwater_rss(struct mm_struct *mm)
diff --git a/include/linux/oom.h b/include/linux/oom.h
index 7b02bc1d0a7e..f8e5bfaf7b39 100644
--- a/include/linux/oom.h
+++ b/include/linux/oom.h
@@ -48,6 +48,12 @@ struct oom_control {
unsigned long totalpages;
struct task_struct *chosen;
long chosen_points;
+ bool approximate;
+ /*
+ * Number of precise badness points sums performed by this task
+ * selection.
+ */
+ int nr_precise;
/* Used to print the constraint info. */
enum oom_constraint constraint;
@@ -97,7 +103,10 @@ static inline vm_fault_t check_stable_address_space(struct mm_struct *mm)
}
long oom_badness(struct task_struct *p,
- unsigned long totalpages);
+ unsigned long totalpages,
+ bool approximate,
+ unsigned long *accuracy_under,
+ unsigned long *accuracy_over);
extern bool out_of_memory(struct oom_control *oc);
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 214cb8cb939b..93ae14c278e9 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -53,6 +53,14 @@
#define CREATE_TRACE_POINTS
#include <trace/events/oom.h>
+/*
+ * Maximum number of badness sums allowed before using an approximated
+ * comparison. This ensures bounded execution time for scenarios where
+ * many tasks have badness within the accuracy of the maximum badness
+ * approximation.
+ */
+static int max_precise_badness_sums = 16;
+
static int sysctl_panic_on_oom;
static int sysctl_oom_kill_allocating_task;
static int sysctl_oom_dump_tasks = 1;
@@ -194,12 +202,16 @@ static bool should_dump_unreclaim_slab(void)
* oom_badness - heuristic function to determine which candidate task to kill
* @p: task struct of which task we should calculate
* @totalpages: total present RAM allowed for page allocation
+ * @approximate: whether the value can be approximated
+ * @accuracy_under: accuracy of the badness value approximation (under value)
+ * @accuracy_over: accuracy of the badness value approximation (over value)
*
* The heuristic for determining which task to kill is made to be as simple and
* predictable as possible. The goal is to return the highest value for the
* task consuming the most memory to avoid subsequent oom failures.
*/
-long oom_badness(struct task_struct *p, unsigned long totalpages)
+long oom_badness(struct task_struct *p, unsigned long totalpages, bool approximate,
+ unsigned long *accuracy_under, unsigned long *accuracy_over)
{
long points;
long adj;
@@ -228,7 +240,8 @@ long oom_badness(struct task_struct *p, unsigned long totalpages)
* The baseline for the badness score is the proportion of RAM that each
* task's rss, pagetable and swap space use.
*/
- points = get_mm_rss_sum(p->mm) + get_mm_counter_sum(p->mm, MM_SWAPENTS) +
+ points = __get_mm_rss(p->mm, approximate, accuracy_under, accuracy_over) +
+ __get_mm_counter(p->mm, MM_SWAPENTS, approximate, accuracy_under, accuracy_over) +
mm_pgtables_bytes(p->mm) / PAGE_SIZE;
task_unlock(p);
@@ -309,7 +322,8 @@ static enum oom_constraint constrained_alloc(struct oom_control *oc)
static int oom_evaluate_task(struct task_struct *task, void *arg)
{
struct oom_control *oc = arg;
- long points;
+ unsigned long accuracy_under = 0, accuracy_over = 0;
+ long points, points_min, points_max;
if (oom_unkillable_task(task))
goto next;
@@ -335,20 +349,47 @@ static int oom_evaluate_task(struct task_struct *task, void *arg)
* killed first if it triggers an oom, then select it.
*/
if (oom_task_origin(task)) {
- points = LONG_MAX;
+ points_min = LONG_MAX;
goto select;
}
- points = oom_badness(task, oc->totalpages);
- if (points == LONG_MIN || points < oc->chosen_points)
- goto next;
+ points = oom_badness(task, oc->totalpages, true, &accuracy_under, &accuracy_over);
+ if (points != LONG_MIN) {
+ percpu_counter_tree_approximate_min_max_range(points,
+ accuracy_under, accuracy_over,
+ &points_min, &points_max);
+ }
+ if (oc->approximate) {
+ /*
+ * Keep the process which has the highest minimum
+ * possible points value based on approximation.
+ */
+ if (points == LONG_MIN || points_min < oc->chosen_points)
+ goto next;
+ } else {
+ /*
+ * Eliminate processes which are certainly below the
+ * chosen points minimum possible value with an
+ * approximation.
+ */
+ if (points == LONG_MIN || (long)(points_max - oc->chosen_points) < 0)
+ goto next;
+
+ if (oc->nr_precise < max_precise_badness_sums) {
+ oc->nr_precise++;
+ /* Precise evaluation. */
+ points_min = points_max = points = oom_badness(task, oc->totalpages, false, NULL, NULL);
+ if (points == LONG_MIN || (long)(points - oc->chosen_points) < 0)
+ goto next;
+ }
+ }
select:
if (oc->chosen)
put_task_struct(oc->chosen);
get_task_struct(task);
oc->chosen = task;
- oc->chosen_points = points;
+ oc->chosen_points = points_min;
next:
return 0;
abort:
@@ -358,14 +399,8 @@ static int oom_evaluate_task(struct task_struct *task, void *arg)
return 1;
}
-/*
- * Simple selection loop. We choose the process with the highest number of
- * 'points'. In case scan was aborted, oc->chosen is set to -1.
- */
-static void select_bad_process(struct oom_control *oc)
+static void select_bad_process_iter(struct oom_control *oc)
{
- oc->chosen_points = LONG_MIN;
-
if (is_memcg_oom(oc))
mem_cgroup_scan_tasks(oc->memcg, oom_evaluate_task, oc);
else {
@@ -379,6 +414,25 @@ static void select_bad_process(struct oom_control *oc)
}
}
+/*
+ * Simple selection loop. We choose the process with the highest number of
+ * 'points'. In case scan was aborted, oc->chosen is set to -1.
+ */
+static void select_bad_process(struct oom_control *oc)
+{
+ oc->chosen_points = LONG_MIN;
+ oc->nr_precise = 0;
+
+ /* Approximate scan. */
+ oc->approximate = true;
+ select_bad_process_iter(oc);
+ if (oc->chosen == (void *)-1UL)
+ return;
+ /* Precise scan. */
+ oc->approximate = false;
+ select_bad_process_iter(oc);
+}
+
static int dump_task(struct task_struct *p, void *arg)
{
struct oom_control *oc = arg;
--
2.39.5
* Re: [PATCH v16 1/3] lib: Introduce hierarchical per-cpu counters
2026-01-14 14:59 ` [PATCH v16 1/3] lib: Introduce hierarchical per-cpu counters Mathieu Desnoyers
@ 2026-01-14 16:41 ` Michal Hocko
2026-01-14 19:19 ` Mathieu Desnoyers
0 siblings, 1 reply; 10+ messages in thread
From: Michal Hocko @ 2026-01-14 16:41 UTC (permalink / raw)
To: Mathieu Desnoyers
Cc: Andrew Morton, linux-kernel, Paul E. McKenney, Steven Rostedt,
Masami Hiramatsu, Dennis Zhou, Tejun Heo, Christoph Lameter,
Martin Liu, David Rientjes, christian.koenig, Shakeel Butt,
SeongJae Park, Johannes Weiner, Sweet Tea Dorminy,
Lorenzo Stoakes, Liam R . Howlett, Mike Rapoport,
Suren Baghdasaryan, Vlastimil Babka, Christian Brauner, Wei Yang,
David Hildenbrand, Miaohe Lin, Al Viro, linux-mm,
linux-trace-kernel, Yu Zhao, Roman Gushchin, Mateusz Guzik,
Matthew Wilcox, Baolin Wang, Aboorva Devarajan
On Wed 14-01-26 09:59:13, Mathieu Desnoyers wrote:
> * Motivation
>
> The purpose of this hierarchical split-counter scheme is to:
>
> - Minimize contention when incrementing and decrementing counters,
> - Provide fast access to a sum approximation,
> - Provide a sum approximation with an acceptable accuracy level when
> scaling to many-core systems.
> - Provide approximate and precise comparison of two counters, and
> between a counter and a value.
> - Provide possible precise sum ranges for a given sum approximation.
>
> Its goals are twofold:
>
> - Improve the accuracy of the approximated RSS counter values returned
> by proc interfaces [1],
> - Reduce the latency of the OOM killer on large many-core systems.
>
> * Design
>
> The hierarchical per-CPU counters propagate a sum approximation through
> a N-way tree. When reaching the batch size, the carry is propagated
> through a binary tree which consists of logN(nr_cpu_ids) levels. The
> batch size for each level is twice the batch size of the prior level.
>
> Example propagation diagram with 8 cpus through a binary tree:
>
> Level 0: 0 1 2 3 4 5 6 7
> | / | / | / | /
> | / | / | / | /
> | / | / | / | /
> Level 1: 0 1 2 3
> | / | /
> | / | /
> | / | /
> Level 2: 0 1
> | /
> | /
> | /
> Level 3: 0
>
> For a binary tree, the maximum inaccuracy is bound by:
> batch_size * log2(nr_cpus) * nr_cpus
> which evolves with O(n*log(n)) as the number of CPUs increases.
>
> For an N-way tree, the maximum inaccuracy can be pre-calculated
> based on the N-arity of each level and the batch size.
One thing you should probably mention here is the memory consumption of
the structure.
I have briefly looked at the implementation and concluded that I do not
have enough time to make a thorough review. Sorry about that. As I've
said on a previous version, the overall idea is sound, especially if the
additional memory consumption is not a factor. I will let others judge
the implementation details.
Thanks!
--
Michal Hocko
SUSE Labs
* Re: [PATCH v16 2/3] mm: Improve RSS counter approximation accuracy for proc interfaces
2026-01-14 14:59 ` [PATCH v16 2/3] mm: Improve RSS counter approximation accuracy for proc interfaces Mathieu Desnoyers
@ 2026-01-14 16:48 ` Michal Hocko
2026-01-14 19:21 ` Mathieu Desnoyers
0 siblings, 1 reply; 10+ messages in thread
From: Michal Hocko @ 2026-01-14 16:48 UTC (permalink / raw)
To: Mathieu Desnoyers
Cc: Andrew Morton, linux-kernel, Paul E. McKenney, Steven Rostedt,
Masami Hiramatsu, Dennis Zhou, Tejun Heo, Christoph Lameter,
Martin Liu, David Rientjes, christian.koenig, Shakeel Butt,
SeongJae Park, Johannes Weiner, Sweet Tea Dorminy,
Lorenzo Stoakes, Liam R . Howlett, Mike Rapoport,
Suren Baghdasaryan, Vlastimil Babka, Christian Brauner, Wei Yang,
David Hildenbrand, Miaohe Lin, Al Viro, linux-mm,
linux-trace-kernel, Yu Zhao, Roman Gushchin, Mateusz Guzik,
Matthew Wilcox, Baolin Wang, Aboorva Devarajan
On Wed 14-01-26 09:59:14, Mathieu Desnoyers wrote:
> Use hierarchical per-cpu counters for RSS tracking to improve the
> accuracy of per-mm RSS sum approximation on large many-core systems [1].
> This improves the accuracy of the RSS values returned by proc
> interfaces.
>
> This is also a preparation step to introduce a 2-pass OOM killer task
> selection which leverages the approximation and accuracy ranges to
> quickly eliminate tasks which are outside of the range of the current
> selection, and thus reduce the latency introduced by execution of the
> OOM killer.
>
> Here is a (possibly incomplete) list of the prior approaches that were
> used or proposed, along with their downside:
>
> 1) Per-thread rss tracking: large error on many-thread processes.
>
> 2) Per-CPU counters: up to 12% slower for short-lived processes and 9%
> increased system time in make test workloads [1]. Moreover, the
> inaccuracy increases with O(n^2) with the number of CPUs.
>
> 3) Per-NUMA-node counters: requires atomics on fast-path (overhead),
> error is high with systems that have lots of NUMA nodes (32 times
> the number of NUMA nodes).
>
> 4) Use a precise per-cpu counter sum for each counter value query:
> Requires iteration on each possible CPUs for each sum, which
> adds overhead (and thus increases OOM killer latency) on large
> many-core systems running many processes.
>
> The approach proposed here is to replace the per-cpu counters by the
> hierarchical per-cpu counters, which bounds the inaccuracy based on the
> system topology with O(N*logN).
>
> * Testing results:
>
> Test hardware: 2 sockets AMD EPYC 9654 96-Core Processor (384 logical CPUs total)
>
> Methodology:
>
> Comparing the current upstream implementation with the hierarchical
> counters is done by keeping both implementations wired up in parallel,
> and running a single-process, single-threaded program which hops
> randomly across CPUs in the system, calling mmap(2) and munmap(2) on
> random CPUs, keeping track of an array of allocated mappings, randomly
> choosing entries to either map or unmap.
>
> get_mm_counter() is instrumented to compare the upstream counter
> approximation to the precise value, and print the delta when going over
> a given threshold. The delta of the hierarchical counter approximation
> to the precise value is also printed for comparison.
>
> After a few minutes running this test, the upstream implementation
> counter approximation reaches a 1GB delta from the
> precise value, compared to 80MB delta with the hierarchical counter.
> The hierarchical counter provides a guaranteed maximum approximation
> inaccuracy of 192MB on that hardware topology.
>
> * Fast path implementation comparison
>
> The new inline percpu_counter_tree_add() uses a this_cpu_add_return()
> for the fast path (under a certain allocation size threshold). Above
> that, it calls a slow path which "trickles up" the carry to upper level
> counters with atomic_add_return.
>
> In comparison, the upstream counters implementation calls
> percpu_counter_add_batch which uses this_cpu_try_cmpxchg() on the fast
> path, and does a raw_spin_lock_irqsave above a certain threshold.
>
> The hierarchical implementation is therefore expected to have less
> contention on mid-sized allocations than the upstream counters because
> the atomic counters tracking those bits are only shared across nearby
> CPUs. In comparison, the upstream counters immediately use a global
> spinlock when reaching the threshold.
>
> * Benchmarks
>
> Using will-it-scale page_fault1 benchmarks to compare the upstream
> counters to the hierarchical counters. This is done with hyperthreading
> disabled. The speedup is within the standard deviation of the upstream
> runs, so the overhead is not significant.
>
> upstream hierarchical speedup
> page_fault1_processes -s 100 -t 1 614783 615558 +0.1%
> page_fault1_threads -s 100 -t 1 612788 612447 -0.1%
> page_fault1_processes -s 100 -t 96 37994977 37932035 -0.2%
> page_fault1_threads -s 100 -t 96 2484130 2504860 +0.8%
> page_fault1_processes -s 100 -t 192 71262917 71118830 -0.2%
> page_fault1_threads -s 100 -t 192 2446437 2469296 +0.1%
>
> This change depends on the following patch:
> "mm: Fix OOM killer inaccuracy on large many-core systems" [2]
As mentioned in the previous patch, it would be great to explicitly
mention what is the memory price for the new tracking data structure.
Other than that this seems like a generally useful improvement for
larger systems and it is my understanding that it adds almost no
overhead on small-end systems, correct?
--
Michal Hocko
SUSE Labs
* Re: [PATCH v16 3/3] mm: Reduce latency of OOM killer task selection with 2-pass algorithm
2026-01-14 14:59 ` [PATCH v16 3/3] mm: Reduce latency of OOM killer task selection with 2-pass algorithm Mathieu Desnoyers
@ 2026-01-14 17:06 ` Michal Hocko
2026-01-14 19:36 ` Mathieu Desnoyers
0 siblings, 1 reply; 10+ messages in thread
From: Michal Hocko @ 2026-01-14 17:06 UTC (permalink / raw)
To: Mathieu Desnoyers
Cc: Andrew Morton, linux-kernel, Paul E. McKenney, Steven Rostedt,
Masami Hiramatsu, Dennis Zhou, Tejun Heo, Christoph Lameter,
Martin Liu, David Rientjes, christian.koenig, Shakeel Butt,
SeongJae Park, Johannes Weiner, Sweet Tea Dorminy,
Lorenzo Stoakes, Liam R . Howlett, Mike Rapoport,
Suren Baghdasaryan, Vlastimil Babka, Christian Brauner, Wei Yang,
David Hildenbrand, Miaohe Lin, Al Viro, linux-mm,
linux-trace-kernel, Yu Zhao, Roman Gushchin, Mateusz Guzik,
Matthew Wilcox, Baolin Wang, Aboorva Devarajan
On Wed 14-01-26 09:59:15, Mathieu Desnoyers wrote:
> Use the hierarchical tree counter approximation (hpcc) to implement the
> OOM killer task selection with a 2-pass algorithm. The first pass
> selects the process that has the highest badness points approximation,
> and the second pass compares each process using the current max badness
> points approximation.
>
> The second pass uses an approximate comparison to eliminate all processes
> which are below the current max badness points approximation accuracy
> range.
>
> Summing the per-CPU counters to calculate the precise badness of tasks
> is only required for tasks with an approximate badness within the
> accuracy range of the current max points value.
>
> Limit to 16 the maximum number of badness sums allowed for an OOM killer
> task selection before falling back to the approximated comparison. This
> ensures bounded execution time for scenarios where many tasks have
> badness within the accuracy of the maximum badness approximation.
>
> Testing the execution time of select_bad_process() with a single
> tail -f /dev/zero:
>
> AMD EPYC 9654 96-Core (2 sockets)
> Within a KVM, configured with 256 logical cpus.
>
> | precise sum | hpcc |
> ----------------------------------|-------------|----------|
> nr_processes=40 | 0.5 ms | 0.3 ms |
> nr_processes=10000 | 80.0 ms | 7.9 ms |
>
> Tested with the following script:
I am confused by these numbers. Are you saying that 2 passes over all
tasks, evaluating all of them, are 10 times faster than a single pass
with an exact sum of the pcp counters?
>
> #!/bin/sh
>
> for a in $(seq 1 10); do (tail /dev/zero &); done
> sleep 5
> for a in $(seq 1 10); do (tail /dev/zero &); done
> sleep 2
> for a in $(seq 1 10); do (tail /dev/zero &); done
> echo "Waiting for tasks to finish"
> wait
>
> Results: OOM kill order on a 128GB memory system
> ================================================
I find this section confusing as well. Is that a before/after comparison?
If so, it would be great to call out the explicit behavior before and after.
My overall impression is that the implementation is really involved and
at this moment I do not really see a big benefit of all the complexity.
It would help to explicitly mention what the overall imprecision of
the oom victim selection is with the new data structure (maybe this is
good enough [*]). What if we go with exact precision with the new data
structure compared to the original pcp counters?
[*] please keep in mind that oom victim selection is by no means an
exact science; we try to pick a task that is likely to free up some
memory to unlock the system from memory depletion. We want that to be a
big memory consumer, to reduce the number of tasks to kill, and we want
to roughly apply oom_score_adj.
--
Michal Hocko
SUSE Labs
* Re: [PATCH v16 1/3] lib: Introduce hierarchical per-cpu counters
2026-01-14 16:41 ` Michal Hocko
@ 2026-01-14 19:19 ` Mathieu Desnoyers
0 siblings, 0 replies; 10+ messages in thread
From: Mathieu Desnoyers @ 2026-01-14 19:19 UTC (permalink / raw)
To: Michal Hocko
Cc: Andrew Morton, linux-kernel, Paul E. McKenney, Steven Rostedt,
Masami Hiramatsu, Dennis Zhou, Tejun Heo, Christoph Lameter,
Martin Liu, David Rientjes, christian.koenig, Shakeel Butt,
SeongJae Park, Johannes Weiner, Sweet Tea Dorminy,
Lorenzo Stoakes, Liam R . Howlett, Mike Rapoport,
Suren Baghdasaryan, Vlastimil Babka, Christian Brauner, Wei Yang,
David Hildenbrand, Miaohe Lin, Al Viro, linux-mm,
linux-trace-kernel, Yu Zhao, Roman Gushchin, Mateusz Guzik,
Matthew Wilcox, Baolin Wang, Aboorva Devarajan
On 2026-01-14 11:41, Michal Hocko wrote:
>
> One thing you should probably mention here is the memory consumption of
> the structure.
Good point.
The most important parts are the per-cpu counters and the tree items
which propagate the carry.
In the proposed implementation, the per-cpu counters are allocated
within per-cpu data structures, so they end up using:
nr_possible_cpus * sizeof(unsigned long)
In addition, the tree items are appended at the end of the mm_struct.
The size of those items is defined by the per_nr_cpu_order_config
table "nr_items" field.
Each item is aligned on cacheline size (typically 64 bytes) to minimize
false sharing.
Here is the footprint for a few nr_cpus on a 64-bit arch:
nr_cpus  percpu counters (bytes)  nr_items  items size (bytes)  total (bytes)
      2                       16         1                  64             80
      4                       32         3                 192            224
      8                       64         7                 448            512
     64                      512        21                1344           1856
    128                     1024        21                1344           2368
    256                     2048        37                2368           4416
    512                     4096        73                4672           8768
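As rough, back-of-the-envelope arithmetic (not measured numbers): on the
384-logical-CPU test machine used in this series the 512 entry applies
(nr_cpus_order is 9), so assuming NR_MM_COUNTERS is 4 this adds about
4 * 4672 = 18688 bytes of tree items to each mm_struct, plus
4 * 384 * 8 = 12288 bytes of per-cpu counter memory per mm.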
There are of course various trade offs we can make here. We can:
* Increase the n-arity of the intermediate items to shrink the nr_items
required for a given nr_cpus. This will increase contention of carry
propagation across more cores.
* Remove cacheline alignment of intermediate tree items. This will
shrink the memory needed for tree items, but will increase false
sharing.
* Represent intermediate tree items on a byte rather than long.
This further reduces the memory required for intermediate tree
items, but further increases false sharing.
* Represent per-cpu counters on bytes rather than long. This makes
the "sum" operation trickier, because it needs to iterate on the
intermediate carry propagation nodes as well and synchronize with
ongoing "tree add" operations. It further reduces memory use.
* Implement a custom strided allocator for intermediate items carry
propagation bytes. This shares cachelines across different tree
instances, keeping good locality. This ensures that all accesses
from a given location in the machine topology touch the same
cacheline for the various tree instances. This adds complexity,
but provides compactness as well as minimal false-sharing.
Compared to this, the upstream percpu counters use a 32-bit integer per-cpu
(4 bytes), and accumulate within a 64-bit global value.
So yes, there is an extra memory footprint added by the current hpcc
implementation, but if it's an issue we have various options to consider
to reduce its footprint.
Is it OK if I add this discussion to the commit message, or should it
also be added to the high-level design doc within
Documentation/core-api/percpu-counter-tree.rst?
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
* Re: [PATCH v16 2/3] mm: Improve RSS counter approximation accuracy for proc interfaces
2026-01-14 16:48 ` Michal Hocko
@ 2026-01-14 19:21 ` Mathieu Desnoyers
0 siblings, 0 replies; 10+ messages in thread
From: Mathieu Desnoyers @ 2026-01-14 19:21 UTC (permalink / raw)
To: Michal Hocko
Cc: Andrew Morton, linux-kernel, Paul E. McKenney, Steven Rostedt,
Masami Hiramatsu, Dennis Zhou, Tejun Heo, Christoph Lameter,
Martin Liu, David Rientjes, christian.koenig, Shakeel Butt,
SeongJae Park, Johannes Weiner, Sweet Tea Dorminy,
Lorenzo Stoakes, Liam R . Howlett, Mike Rapoport,
Suren Baghdasaryan, Vlastimil Babka, Christian Brauner, Wei Yang,
David Hildenbrand, Miaohe Lin, Al Viro, linux-mm,
linux-trace-kernel, Yu Zhao, Roman Gushchin, Mateusz Guzik,
Matthew Wilcox, Baolin Wang, Aboorva Devarajan
On 2026-01-14 11:48, Michal Hocko wrote:
> On Wed 14-01-26 09:59:14, Mathieu Desnoyers wrote:
>> Use hierarchical per-cpu counters for RSS tracking to improve the
>> accuracy of per-mm RSS sum approximation on large many-core systems [1].
>> This improves the accuracy of the RSS values returned by proc
>> interfaces.
>>
>> This is also a preparation step to introduce a 2-pass OOM killer task
>> selection which leverages the approximation and accuracy ranges to
>> quickly eliminate tasks which are outside of the range of the current
>> selection, and thus reduce the latency introduced by execution of the
>> OOM killer.
>>
>> Here is a (possibly incomplete) list of the prior approaches that were
>> used or proposed, along with their downside:
>>
>> 1) Per-thread rss tracking: large error on many-thread processes.
>>
>> 2) Per-CPU counters: up to 12% slower for short-lived processes and 9%
>> increased system time in make test workloads [1]. Moreover, the
>> inaccuracy increases with O(n^2) with the number of CPUs.
>>
>> 3) Per-NUMA-node counters: requires atomics on fast-path (overhead),
>> error is high with systems that have lots of NUMA nodes (32 times
>> the number of NUMA nodes).
>>
>> 4) Use a precise per-cpu counter sum for each counter value query:
>> Requires iteration on each possible CPUs for each sum, which
>> adds overhead (and thus increases OOM killer latency) on large
>> many-core systems running many processes.
>>
>> The approach proposed here is to replace the per-cpu counters by the
>> hierarchical per-cpu counters, which bounds the inaccuracy based on the
>> system topology with O(N*logN).
>>
>> * Testing results:
>>
>> Test hardware: 2 sockets AMD EPYC 9654 96-Core Processor (384 logical CPUs total)
>>
>> Methodology:
>>
>> Comparing the current upstream implementation with the hierarchical
>> counters is done by keeping both implementations wired up in parallel,
>> and running a single-process, single-threaded program which hops
>> randomly across CPUs in the system, calling mmap(2) and munmap(2) on
>> random CPUs, keeping track of an array of allocated mappings, randomly
>> choosing entries to either map or unmap.
>>
>> get_mm_counter() is instrumented to compare the upstream counter
>> approximation to the precise value, and print the delta when going over
>> a given threshold. The delta of the hierarchical counter approximation
>> to the precise value is also printed for comparison.
>>
>> After a few minutes running this test, the upstream implementation
>> counter approximation reaches a 1GB delta from the
>> precise value, compared to 80MB delta with the hierarchical counter.
>> The hierarchical counter provides a guaranteed maximum approximation
>> inaccuracy of 192MB on that hardware topology.
>>
>> * Fast path implementation comparison
>>
>> The new inline percpu_counter_tree_add() uses a this_cpu_add_return()
>> for the fast path (under a certain allocation size threshold). Above
>> that, it calls a slow path which "trickles up" the carry to upper level
>> counters with atomic_add_return.
>>
>> In comparison, the upstream counters implementation calls
>> percpu_counter_add_batch which uses this_cpu_try_cmpxchg() on the fast
>> path, and does a raw_spin_lock_irqsave above a certain threshold.
>>
>> The hierarchical implementation is therefore expected to have less
>> contention on mid-sized allocations than the upstream counters because
>> the atomic counters tracking those bits are only shared across nearby
>> CPUs. In comparison, the upstream counters immediately use a global
>> spinlock when reaching the threshold.
>>
>> * Benchmarks
>>
>> Using will-it-scale page_fault1 benchmarks to compare the upstream
>> counters to the hierarchical counters. This is done with hyperthreading
>> disabled. The speedup is within the standard deviation of the upstream
>> runs, so the overhead is not significant.
>>
>> upstream hierarchical speedup
>> page_fault1_processes -s 100 -t 1 614783 615558 +0.1%
>> page_fault1_threads -s 100 -t 1 612788 612447 -0.1%
>> page_fault1_processes -s 100 -t 96 37994977 37932035 -0.2%
>> page_fault1_threads -s 100 -t 96 2484130 2504860 +0.8%
>> page_fault1_processes -s 100 -t 192 71262917 71118830 -0.2%
>> page_fault1_threads -s 100 -t 192 2446437 2469296 +0.1%
>>
>> This change depends on the following patch:
>> "mm: Fix OOM killer inaccuracy on large many-core systems" [2]
>
> As mentioned in the previous patch, it would be great to explicitly
> mention what is the memory price for the new tracking data structure.
Yes, I can add the explanation here as well.
>
> Other than that this seems like a generally useful improvement for
> larger systems and it is my understanding that it adds almost no
> overhead on small-end systems, correct?
Indeed, the impact is mostly on large many-core systems, not so much on
smaller systems.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
* Re: [PATCH v16 3/3] mm: Reduce latency of OOM killer task selection with 2-pass algorithm
2026-01-14 17:06 ` Michal Hocko
@ 2026-01-14 19:36 ` Mathieu Desnoyers
0 siblings, 0 replies; 10+ messages in thread
From: Mathieu Desnoyers @ 2026-01-14 19:36 UTC (permalink / raw)
To: Michal Hocko
Cc: Andrew Morton, linux-kernel, Paul E. McKenney, Steven Rostedt,
Masami Hiramatsu, Dennis Zhou, Tejun Heo, Christoph Lameter,
Martin Liu, David Rientjes, christian.koenig, Shakeel Butt,
SeongJae Park, Johannes Weiner, Sweet Tea Dorminy,
Lorenzo Stoakes, Liam R . Howlett, Mike Rapoport,
Suren Baghdasaryan, Vlastimil Babka, Christian Brauner, Wei Yang,
David Hildenbrand, Miaohe Lin, Al Viro, linux-mm,
linux-trace-kernel, Yu Zhao, Roman Gushchin, Mateusz Guzik,
Matthew Wilcox, Baolin Wang, Aboorva Devarajan
On 2026-01-14 12:06, Michal Hocko wrote:
> On Wed 14-01-26 09:59:15, Mathieu Desnoyers wrote:
>> Use the hierarchical tree counter approximation (hpcc) to implement the
>> OOM killer task selection with a 2-pass algorithm. The first pass
>> selects the process that has the highest badness points approximation,
>> and the second pass compares each process using the current max badness
>> points approximation.
>>
>> The second pass uses an approximate comparison to eliminate all processes
>> which are below the current max badness points approximation accuracy
>> range.
>>
>> Summing the per-CPU counters to calculate the precise badness of tasks
>> is only required for tasks with an approximate badness within the
>> accuracy range of the current max points value.
>>
>> Limit to 16 the maximum number of badness sums allowed for an OOM killer
>> task selection before falling back to the approximated comparison. This
>> ensures bounded execution time for scenarios where many tasks have
>> badness within the accuracy of the maximum badness approximation.
>>
>> Testing the execution time of select_bad_process() with a single
>> tail -f /dev/zero:
>>
>> AMD EPYC 9654 96-Core (2 sockets)
>> Within a KVM, configured with 256 logical cpus.
>>
>> | precise sum | hpcc |
>> ----------------------------------|-------------|----------|
>> nr_processes=40 | 0.5 ms | 0.3 ms |
>> nr_processes=10000 | 80.0 ms | 7.9 ms |
>>
>> Tested with the following script:
>
> I am confused by these numbers. Are you saying that 2 passes over all
> tasks, evaluating all of them, are 10 times faster than a single pass
> with an exact sum of the pcp counters?
Yes, that's correct.
This is because we can use the approximate value together with the
min/max range of the possible precise sum to eliminate most tasks, so
we only have to do a precise sum for very few of them.
We basically go from a single pass doing a precise sum for every task
to two passes doing an approximate sum for every task, plus precise
sums for the few tasks whose approximation falls within the accuracy
range of the maximum found by the first pass.
We could probably be more clever and do all of this within a single
pass by keeping the possible candidates in a separate list and then
iterating only over those candidates to do a precise sum at the end of
the pass, but I chose to keep things simple, hence the 2-pass
algorithm.
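For readers who want to picture the control flow, here is a minimal
userspace sketch of the two passes described above. It is not the
kernel implementation: the struct, the accuracy field and the
MAX_PRECISE_SUMS name are made up for the illustration (the value 16
matches the cap mentioned in the changelog).

#include <stddef.h>

#define MAX_PRECISE_SUMS	16	/* cap on expensive exact sums per selection */

struct candidate {
	long approx_badness;	/* cheap read of the tree approximation */
	long accuracy;		/* guaranteed max error of that approximation */
	long (*precise_badness)(const struct candidate *); /* sums every per-CPU counter */
};

/* Pass 1: pick the task with the highest approximate badness. */
static struct candidate *pass1(struct candidate *tasks, size_t n)
{
	struct candidate *best = NULL;

	for (size_t i = 0; i < n; i++)
		if (!best || tasks[i].approx_badness > best->approx_badness)
			best = &tasks[i];
	return best;
}

/*
 * Pass 2: only tasks whose possible precise value overlaps the possible
 * precise value of the current winner can beat it, so only those few pay
 * for an exact per-CPU sum.  After MAX_PRECISE_SUMS exact sums, fall back
 * to comparing approximations so the selection time stays bounded.
 */
static struct candidate *pass2(struct candidate *tasks, size_t n,
			       struct candidate *best)
{
	long best_precise = best->precise_badness(best);
	int nr_sums = 1;

	for (size_t i = 0; i < n; i++) {
		struct candidate *t = &tasks[i];

		if (t == best)
			continue;
		/* Upper bound below the winner's lower bound: cannot win. */
		if (t->approx_badness + t->accuracy <
		    best->approx_badness - best->accuracy)
			continue;
		if (nr_sums < MAX_PRECISE_SUMS) {
			long precise = t->precise_badness(t);

			nr_sums++;
			if (precise > best_precise) {
				best = t;
				best_precise = precise;
			}
		} else if (t->approx_badness > best->approx_badness) {
			/* Fallback: approximate comparison only. */
			best = t;
		}
	}
	return best;
}

With this shape, most tasks are rejected by the cheap range check in
pass2(), which is why two passes can beat a single pass that does an
exact per-CPU sum for every task.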
>
>>
>> #!/bin/sh
>>
>> # Each "tail /dev/zero" keeps buffering zeros while looking for line
>> # boundaries that never come, so its RSS grows until the OOM killer
>> # selects it.
>> for a in $(seq 1 10); do (tail /dev/zero &); done
>> sleep 5
>> for a in $(seq 1 10); do (tail /dev/zero &); done
>> sleep 2
>> for a in $(seq 1 10); do (tail /dev/zero &); done
>> echo "Waiting for tasks to finish"
>> wait
>>
>> Results: OOM kill order on a 128GB memory system
>> ================================================
>
> I find this section confusing as well. Is that a before/after
> comparison? If yes, it would be great to explicitly call out the
> behavior before and after.
Not really. I'm just testing the "after" to make sure we get the
expected killing order.
>
> My overall impression is that the implementation is really involved and
> at this moment I do not really see a big benefit of all the complexity.
Note that we can get the proc ABI RSS accuracy improvements with the
previous 2 patches without this 2-pass algo. Do you see more value in
the RSS accuracy improvements than in the OOM killer latency reduction?
>
> It would help to explicitly mention the overall imprecision of the
> oom victim selection with the new data structure (maybe this is good
> enough[*]). What if we go with exact precision with the new data
> structure, compared to the original pcp counters?
Do you mean comparing approximate sums with the new data structure
(which has a bounded inaccuracy of O(nr_cpus*log(nr_cpus))) against the
old data structure, which had an inaccuracy of O(nr_cpus^2)? If the
inaccuracy provided by the new data structure is good enough for OOM
task selection, we could go from a precise sum back to an approximation
and just use that with the new data structure.
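For a rough sense of scale with purely hypothetical numbers: take
nr_cpus = 256 and call the per-CPU batch unit B pages. Instantiating
the two bounds above with the same constant gives on the order of
nr_cpus^2 * B = 65536 * B pages of worst-case drift for the old scheme
versus nr_cpus * log2(nr_cpus) * B = 2048 * B pages for the new one,
i.e. about a 32x tighter bound on a 256-CPU topology. The exact
constants depend on the hpcc batch sizes, so this only illustrates the
asymptotics.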
Thanks,
Mathieu
>
>
> [*] please keep in mind that oom victim selection is by no means an
> exact science; we try to pick a task that is likely to free up some
> memory to unlock the system from memory depletion. We want that to be
> a big memory consumer to reduce the number of tasks to kill, and we
> want to roughly apply oom_score_adj.
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
Thread overview: 10+ messages
2026-01-14 14:59 [PATCH v16 0/3] Improve proc RSS accuracy and OOM killer latency Mathieu Desnoyers
2026-01-14 14:59 ` [PATCH v16 1/3] lib: Introduce hierarchical per-cpu counters Mathieu Desnoyers
2026-01-14 16:41 ` Michal Hocko
2026-01-14 19:19 ` Mathieu Desnoyers
2026-01-14 14:59 ` [PATCH v16 2/3] mm: Improve RSS counter approximation accuracy for proc interfaces Mathieu Desnoyers
2026-01-14 16:48 ` Michal Hocko
2026-01-14 19:21 ` Mathieu Desnoyers
2026-01-14 14:59 ` [PATCH v16 3/3] mm: Reduce latency of OOM killer task selection with 2-pass algorithm Mathieu Desnoyers
2026-01-14 17:06 ` Michal Hocko
2026-01-14 19:36 ` Mathieu Desnoyers