* [PATCH 6.6 01/28] maple_tree: add mt_free_one() and mt_attr() helpers
In reply to: [PATCH 6.6 00/28] fix CVE-2024-46701
From: Yu Kuai @ 2024-10-24 13:19 UTC
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: Peng Zhang <zhangpeng.00@bytedance.com>
commit 4f2267b58a22d972be98edef8e6b3c7a67c9fb91 upstream.
Patch series "Introduce __mt_dup() to improve the performance of fork()", v7.
This series introduces __mt_dup() to improve the performance of fork().
During the duplication process of mmap, all VMAs are traversed and
inserted one by one into the new maple tree, causing the maple tree to be
rebalanced multiple times. Balancing the maple tree is a costly
operation. To duplicate VMAs more efficiently, mtree_dup() and __mt_dup()
are introduced for the maple tree. They can efficiently duplicate a maple
tree.
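
A rough sketch of how fork() benefits (the real dup_mmap() conversion is a
later patch in this series; the helper names used in the fix-up loop here
are assumptions for illustration):

	/* Duplicate the whole tree in one pass instead of inserting each
	 * VMA and rebalancing the tree as we go. */
	retval = __mt_dup(&oldmm->mm_mt, &mm->mm_mt, GFP_KERNEL);
	if (unlikely(retval))
		goto out;

	for_each_vma(vmi, mpnt) {
		tmp = vm_area_dup(mpnt);
		/* ... fix up the duplicated VMA ... */
		/* Overwrite the pre-copied pointer in place; no
		 * rebalancing is needed. */
		vma_iter_bulk_store(&vmi, tmp);
	}

Since every node is copied wholesale, only the stored VMA pointers need to
be replaced afterwards.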
Here are some algorithmic details about {mtree,__mt}_dup(). We perform a
DFS pre-order traversal of all nodes in the source maple tree. During
this process, we fully copy the nodes from the source tree to the new
tree. This involves memory allocation, and when encountering a new node,
if it is a non-leaf node, all its child nodes are allocated at once.
This idea was originally from Liam R. Howlett's Maple Tree Work email,
and I added some of my own ideas to implement it. Some previous
discussions can be found in [1]. For a more detailed analysis of the
algorithm, please refer to the logs for patch [3/10] and patch [10/10].
There is a "spawn" in byte-unixbench[2], which can be used to test the
performance of fork(). I modified it slightly to make it work with
different number of VMAs.
Below are the test results. The first row shows the number of VMAs. The
second and third rows show the number of fork() calls per ten seconds,
corresponding to next-20231006 and this patchset, respectively. The fourth
row shows the resulting improvement. The test results were obtained with
CPU binding to avoid scheduler load balancing that could cause unstable
results. There are still some fluctuations in the test results, but at
least they are better than the original performance.

VMAs        21    121    221    421    821   1621   3221   6421  12821  25621  51221
before  112100  76261  54227  34035  20195  11112   6017   3161   1606    802    393
after   114558  83067  65008  45824  28751  16072   8922   4747   2436   1233    599
gain     2.19%  8.92% 19.88% 34.64% 42.37% 44.64% 48.28% 50.17% 51.68% 53.74% 52.42%
Thanks to Liam and Matthew for the review.
This patch (of 10):

Add two helpers:
1. mt_free_one(), used to free a single maple node.
2. mt_attr(), used to obtain the attribute flags of a maple tree.
Link: https://lkml.kernel.org/r/20231027033845.90608-1-zhangpeng.00@bytedance.com
Link: https://lkml.kernel.org/r/20231027033845.90608-2-zhangpeng.00@bytedance.com
Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Mike Christie <michael.christie@oracle.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
lib/maple_tree.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 4e05511c8d1e..e7228bb86ef6 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -165,6 +165,11 @@ static inline int mt_alloc_bulk(gfp_t gfp, size_t size, void **nodes)
return kmem_cache_alloc_bulk(maple_node_cache, gfp, size, nodes);
}
+static inline void mt_free_one(struct maple_node *node)
+{
+ kmem_cache_free(maple_node_cache, node);
+}
+
static inline void mt_free_bulk(size_t size, void __rcu **nodes)
{
kmem_cache_free_bulk(maple_node_cache, size, (void **)nodes);
@@ -205,6 +210,11 @@ static unsigned int mas_mt_height(struct ma_state *mas)
return mt_height(mas->tree);
}
+static inline unsigned int mt_attr(struct maple_tree *mt)
+{
+ return mt->ma_flags & ~MT_FLAGS_HEIGHT_MASK;
+}
+
static inline enum maple_type mte_node_type(const struct maple_enode *entry)
{
return ((unsigned long)entry >> MAPLE_NODE_TYPE_SHIFT) &
@@ -5584,7 +5594,7 @@ void mas_destroy(struct ma_state *mas)
mt_free_bulk(count, (void __rcu **)&node->slot[1]);
total -= count;
}
- kmem_cache_free(maple_node_cache, node);
+ mt_free_one(ma_mnode_ptr(node));
total--;
}
--
2.39.2
* [PATCH 6.6 02/28] maple_tree: introduce {mtree,mas}_lock_nested()
From: Yu Kuai @ 2024-10-24 13:19 UTC
From: Peng Zhang <zhangpeng.00@bytedance.com>
commit b2472efe4316b2687c153919c1513a098bd82c17 upstream.
In some cases, nested locks may be needed, so {mtree,mas}_lock_nested() are
introduced. For example, when duplicating a maple tree, we need to hold
the locks of two trees, in which case nested lock annotations are needed.
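
A minimal usage sketch (mirroring what mtree_dup() does later in this
series): when the locks of two trees must be held at once, the second
acquisition uses the _nested() variant so lockdep does not report a false
recursive-locking splat on the second ma_lock:

	mtree_lock(new);
	mtree_lock_nested(mt, SINGLE_DEPTH_NESTING);
	/* ... duplicate mt into new ... */
	mtree_unlock(mt);
	mtree_unlock(new);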
At the same time, add the definition of spin_lock_nested() in tools for
testing.
Link: https://lkml.kernel.org/r/20231027033845.90608-3-zhangpeng.00@bytedance.com
Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Mike Christie <michael.christie@oracle.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
include/linux/maple_tree.h | 4 ++++
tools/include/linux/spinlock.h | 1 +
2 files changed, 5 insertions(+)
diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index d01e850b570f..f91dbc7fe091 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -256,6 +256,8 @@ struct maple_tree {
struct maple_tree name = MTREE_INIT(name, 0)
#define mtree_lock(mt) spin_lock((&(mt)->ma_lock))
+#define mtree_lock_nested(mt, subclass) \
+ spin_lock_nested((&(mt)->ma_lock), subclass)
#define mtree_unlock(mt) spin_unlock((&(mt)->ma_lock))
/*
@@ -406,6 +408,8 @@ struct ma_wr_state {
};
#define mas_lock(mas) spin_lock(&((mas)->tree->ma_lock))
+#define mas_lock_nested(mas, subclass) \
+ spin_lock_nested(&((mas)->tree->ma_lock), subclass)
#define mas_unlock(mas) spin_unlock(&((mas)->tree->ma_lock))
diff --git a/tools/include/linux/spinlock.h b/tools/include/linux/spinlock.h
index 622266b197d0..a6cdf25b6b9d 100644
--- a/tools/include/linux/spinlock.h
+++ b/tools/include/linux/spinlock.h
@@ -11,6 +11,7 @@
#define spin_lock_init(x) pthread_mutex_init(x, NULL)
#define spin_lock(x) pthread_mutex_lock(x)
+#define spin_lock_nested(x, subclass) pthread_mutex_lock(x)
#define spin_unlock(x) pthread_mutex_unlock(x)
#define spin_lock_bh(x) pthread_mutex_lock(x)
#define spin_unlock_bh(x) pthread_mutex_unlock(x)
--
2.39.2
* [PATCH 6.6 03/28] maple_tree: introduce interfaces __mt_dup() and mtree_dup()
From: Yu Kuai @ 2024-10-24 13:19 UTC
From: Peng Zhang <zhangpeng.00@bytedance.com>
commit fd32e4e9b7646510ee9010e0d5f8b8857d48a6f7 upstream.
Introduce interfaces __mt_dup() and mtree_dup(), which are used to
duplicate a maple tree. They duplicate a maple tree in Depth-First Search
(DFS) pre-order traversal, using memcpy() to copy nodes from the source
tree and allocating new child nodes for non-leaf nodes. The new node is
exactly the same as the source node except for all the addresses stored in
it. This is faster than traversing all elements in the source tree and
inserting them one by one into the new tree. The time complexity of these
two functions is O(n).
The difference between __mt_dup() and mtree_dup() is that mtree_dup()
handles locks internally.
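
In caller terms (a short sketch; "src" and "dst" are illustrative names
for two struct maple_tree instances):

	int ret;

	/* mtree_dup() takes and releases both tree locks itself: */
	ret = mtree_dup(&src, &dst, GFP_KERNEL);

	/* __mt_dup() expects the caller to already hold both tree locks,
	 * e.g. when the trees are protected externally as in fork(): */
	ret = __mt_dup(&src, &dst, GFP_KERNEL);

	/* Both return 0 on success, -ENOMEM on allocation failure, or
	 * -EINVAL if the trees' attributes differ or @dst is not empty. */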
Analysis of the average time complexity of this algorithm:
For simplicity, let's assume that the maximum branching factor of all
non-leaf nodes is 16 (in allocation mode, it is 10), and the tree is a
full tree.
Under the given conditions, if there is a maple tree with n elements, the
number of its leaves is n/16. From bottom to top, the number of nodes in
each level is 1/16 of the number of nodes in the level below. So the
total number of nodes in the entire tree is given by the sum of n/16 +
n/16^2 + n/16^3 + ... + 1. This is a geometric series, and it has log(n)
terms with base 16. According to the formula for the sum of a geometric
series, the sum of this series can be calculated as (n-1)/15. Each node
has only one parent node pointer, which can be considered as an edge. In
total, there are (n-1)/15-1 edges.
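
As a quick sanity check of the node count: with n = 16^4 = 65536 elements,
the tree has 4096 leaves plus 256 + 16 + 1 internal nodes, 4369 nodes in
total, and (65536 - 1)/15 = 4369 indeed.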
This algorithm consists of two operations:
1. Traversing all nodes in DFS order.
2. For each node, making a copy and performing necessary modifications
to create a new node.
For the first part, DFS traversal will visit each edge twice. Let
T(ascend) represent the cost of taking one step upwards, and T(descend)
represent the cost of taking one step downwards. Both of them are
constants (mas_ascend() is not strictly constant since it contains a loop,
but we ignore that and treat it as one here). So the time spent on the
first part can be represented as ((n-1)/15-1) * (T(ascend) + T(descend)).
For the second part, each node will be copied, and the cost of copying a
node is denoted as T(copy_node). For each non-leaf node, it is necessary
to reallocate all child nodes, and the cost of this operation is denoted
as T(dup_alloc). The behavior behind memory allocation is complex and not
specific to the maple tree operation. Here, we assume that the time
required for a single allocation is constant. Since the size of a node is
fixed, both of these symbols are also constants. We can calculate that
the time spent on the second part is ((n-1)/15) * T(copy_node) + ((n-1)/15
- n/16) * T(dup_alloc).
Adding both parts together, the total time spent by the algorithm can be
represented as:
((n-1)/15) * (T(ascend) + T(descend) + T(copy_node) + T(dup_alloc)) -
n/16 * T(dup_alloc) - (T(ascend) + T(descend))
Let C1 = T(ascend) + T(descend) + T(copy_node) + T(dup_alloc)
Let C2 = T(dup_alloc)
Let C3 = T(ascend) + T(descend)
Finally, the expression can be simplified as:
((16 * C1 - 15 * C2) / (15 * 16)) * n - (C1 / 15 + C3).
This is a linear function, so the average time complexity is O(n).
Link: https://lkml.kernel.org/r/20231027033845.90608-4-zhangpeng.00@bytedance.com
Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
Suggested-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Mike Christie <michael.christie@oracle.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
include/linux/maple_tree.h | 3 +
lib/maple_tree.c | 274 +++++++++++++++++++++++++++++++++++++
2 files changed, 277 insertions(+)
diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index f91dbc7fe091..a452dd8a1e5c 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -329,6 +329,9 @@ int mtree_store(struct maple_tree *mt, unsigned long index,
void *entry, gfp_t gfp);
void *mtree_erase(struct maple_tree *mt, unsigned long index);
+int mtree_dup(struct maple_tree *mt, struct maple_tree *new, gfp_t gfp);
+int __mt_dup(struct maple_tree *mt, struct maple_tree *new, gfp_t gfp);
+
void mtree_destroy(struct maple_tree *mt);
void __mt_destroy(struct maple_tree *mt);
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index e7228bb86ef6..6f1addbbc820 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4,6 +4,8 @@
* Copyright (c) 2018-2022 Oracle Corporation
* Authors: Liam R. Howlett <Liam.Howlett@oracle.com>
* Matthew Wilcox <willy@infradead.org>
+ * Copyright (c) 2023 ByteDance
+ * Author: Peng Zhang <zhangpeng.00@bytedance.com>
*/
/*
@@ -6486,6 +6488,278 @@ void *mtree_erase(struct maple_tree *mt, unsigned long index)
}
EXPORT_SYMBOL(mtree_erase);
+/*
+ * mas_dup_free() - Free an incomplete duplication of a tree.
+ * @mas: The maple state of an incomplete tree.
+ *
+ * The parameter @mas->node passed in indicates that the allocation failed on
+ * this node. This function frees all nodes starting from @mas->node in the
+ * reverse order of mas_dup_build(). There is no need to hold the source tree
+ * lock at this time.
+ */
+static void mas_dup_free(struct ma_state *mas)
+{
+ struct maple_node *node;
+ enum maple_type type;
+ void __rcu **slots;
+ unsigned char count, i;
+
+ /* Maybe the first node allocation failed. */
+ if (mas_is_none(mas))
+ return;
+
+ while (!mte_is_root(mas->node)) {
+ mas_ascend(mas);
+ if (mas->offset) {
+ mas->offset--;
+ do {
+ mas_descend(mas);
+ mas->offset = mas_data_end(mas);
+ } while (!mte_is_leaf(mas->node));
+
+ mas_ascend(mas);
+ }
+
+ node = mte_to_node(mas->node);
+ type = mte_node_type(mas->node);
+ slots = ma_slots(node, type);
+ count = mas_data_end(mas) + 1;
+ for (i = 0; i < count; i++)
+ ((unsigned long *)slots)[i] &= ~MAPLE_NODE_MASK;
+ mt_free_bulk(count, slots);
+ }
+
+ node = mte_to_node(mas->node);
+ mt_free_one(node);
+}
+
+/*
+ * mas_copy_node() - Copy a maple node and replace the parent.
+ * @mas: The maple state of source tree.
+ * @new_mas: The maple state of new tree.
+ * @parent: The parent of the new node.
+ *
+ * Copy @mas->node to @new_mas->node, set @parent to be the parent of
+ * @new_mas->node. If memory allocation fails, @mas is set to -ENOMEM.
+ */
+static inline void mas_copy_node(struct ma_state *mas, struct ma_state *new_mas,
+ struct maple_pnode *parent)
+{
+ struct maple_node *node = mte_to_node(mas->node);
+ struct maple_node *new_node = mte_to_node(new_mas->node);
+ unsigned long val;
+
+ /* Copy the node completely. */
+ memcpy(new_node, node, sizeof(struct maple_node));
+ /* Update the parent node pointer. */
+ val = (unsigned long)node->parent & MAPLE_NODE_MASK;
+ new_node->parent = ma_parent_ptr(val | (unsigned long)parent);
+}
+
+/*
+ * mas_dup_alloc() - Allocate child nodes for a maple node.
+ * @mas: The maple state of source tree.
+ * @new_mas: The maple state of new tree.
+ * @gfp: The GFP_FLAGS to use for allocations.
+ *
+ * This function allocates child nodes for @new_mas->node during the duplication
+ * process. If memory allocation fails, @mas is set to -ENOMEM.
+ */
+static inline void mas_dup_alloc(struct ma_state *mas, struct ma_state *new_mas,
+ gfp_t gfp)
+{
+ struct maple_node *node = mte_to_node(mas->node);
+ struct maple_node *new_node = mte_to_node(new_mas->node);
+ enum maple_type type;
+ unsigned char request, count, i;
+ void __rcu **slots;
+ void __rcu **new_slots;
+ unsigned long val;
+
+ /* Allocate memory for child nodes. */
+ type = mte_node_type(mas->node);
+ new_slots = ma_slots(new_node, type);
+ request = mas_data_end(mas) + 1;
+ count = mt_alloc_bulk(gfp, request, (void **)new_slots);
+ if (unlikely(count < request)) {
+ memset(new_slots, 0, request * sizeof(void *));
+ mas_set_err(mas, -ENOMEM);
+ return;
+ }
+
+ /* Restore node type information in slots. */
+ slots = ma_slots(node, type);
+ for (i = 0; i < count; i++) {
+ val = (unsigned long)mt_slot_locked(mas->tree, slots, i);
+ val &= MAPLE_NODE_MASK;
+ ((unsigned long *)new_slots)[i] |= val;
+ }
+}
+
+/*
+ * mas_dup_build() - Build a new maple tree from a source tree
+ * @mas: The maple state of source tree, need to be in MAS_START state.
+ * @new_mas: The maple state of new tree, need to be in MAS_START state.
+ * @gfp: The GFP_FLAGS to use for allocations.
+ *
+ * This function builds a new tree in DFS preorder. If the memory allocation
+ * fails, the error code -ENOMEM will be set in @mas, and @new_mas points to the
+ * last node. mas_dup_free() will free the incomplete duplication of a tree.
+ *
+ * Note that the attributes of the two trees need to be exactly the same, and the
+ * new tree needs to be empty, otherwise -EINVAL will be set in @mas.
+ */
+static inline void mas_dup_build(struct ma_state *mas, struct ma_state *new_mas,
+ gfp_t gfp)
+{
+ struct maple_node *node;
+ struct maple_pnode *parent = NULL;
+ struct maple_enode *root;
+ enum maple_type type;
+
+ if (unlikely(mt_attr(mas->tree) != mt_attr(new_mas->tree)) ||
+ unlikely(!mtree_empty(new_mas->tree))) {
+ mas_set_err(mas, -EINVAL);
+ return;
+ }
+
+ root = mas_start(mas);
+ if (mas_is_ptr(mas) || mas_is_none(mas))
+ goto set_new_tree;
+
+ node = mt_alloc_one(gfp);
+ if (!node) {
+ new_mas->node = MAS_NONE;
+ mas_set_err(mas, -ENOMEM);
+ return;
+ }
+
+ type = mte_node_type(mas->node);
+ root = mt_mk_node(node, type);
+ new_mas->node = root;
+ new_mas->min = 0;
+ new_mas->max = ULONG_MAX;
+ root = mte_mk_root(root);
+ while (1) {
+ mas_copy_node(mas, new_mas, parent);
+ if (!mte_is_leaf(mas->node)) {
+ /* Only allocate child nodes for non-leaf nodes. */
+ mas_dup_alloc(mas, new_mas, gfp);
+ if (unlikely(mas_is_err(mas)))
+ return;
+ } else {
+ /*
+ * This is the last leaf node and duplication is
+ * completed.
+ */
+ if (mas->max == ULONG_MAX)
+ goto done;
+
+ /* This is not the last leaf node and needs to go up. */
+ do {
+ mas_ascend(mas);
+ mas_ascend(new_mas);
+ } while (mas->offset == mas_data_end(mas));
+
+ /* Move to the next subtree. */
+ mas->offset++;
+ new_mas->offset++;
+ }
+
+ mas_descend(mas);
+ parent = ma_parent_ptr(mte_to_node(new_mas->node));
+ mas_descend(new_mas);
+ mas->offset = 0;
+ new_mas->offset = 0;
+ }
+done:
+ /* Specially handle the parent of the root node. */
+ mte_to_node(root)->parent = ma_parent_ptr(mas_tree_parent(new_mas));
+set_new_tree:
+ /* Make them the same height */
+ new_mas->tree->ma_flags = mas->tree->ma_flags;
+ rcu_assign_pointer(new_mas->tree->ma_root, root);
+}
+
+/**
+ * __mt_dup(): Duplicate an entire maple tree
+ * @mt: The source maple tree
+ * @new: The new maple tree
+ * @gfp: The GFP_FLAGS to use for allocations
+ *
+ * This function duplicates a maple tree in Depth-First Search (DFS) pre-order
+ * traversal. It uses memcpy() to copy nodes in the source tree and allocate
+ * new child nodes in non-leaf nodes. The new node is exactly the same as the
+ * source node except for all the addresses stored in it. It will be faster than
+ * traversing all elements in the source tree and inserting them one by one into
+ * the new tree.
+ * The user needs to ensure that the attributes of the source tree and the new
+ * tree are the same, and the new tree needs to be an empty tree, otherwise
+ * -EINVAL will be returned.
+ * Note that the user needs to manually lock the source tree and the new tree.
+ *
+ * Return: 0 on success, -ENOMEM if memory could not be allocated, -EINVAL if
+ * the attributes of the two trees are different or the new tree is not an empty
+ * tree.
+ */
+int __mt_dup(struct maple_tree *mt, struct maple_tree *new, gfp_t gfp)
+{
+ int ret = 0;
+ MA_STATE(mas, mt, 0, 0);
+ MA_STATE(new_mas, new, 0, 0);
+
+ mas_dup_build(&mas, &new_mas, gfp);
+ if (unlikely(mas_is_err(&mas))) {
+ ret = xa_err(mas.node);
+ if (ret == -ENOMEM)
+ mas_dup_free(&new_mas);
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL(__mt_dup);
+
+/**
+ * mtree_dup(): Duplicate an entire maple tree
+ * @mt: The source maple tree
+ * @new: The new maple tree
+ * @gfp: The GFP_FLAGS to use for allocations
+ *
+ * This function duplicates a maple tree in Depth-First Search (DFS) pre-order
+ * traversal. It uses memcpy() to copy nodes in the source tree and allocate
+ * new child nodes in non-leaf nodes. The new node is exactly the same as the
+ * source node except for all the addresses stored in it. It will be faster than
+ * traversing all elements in the source tree and inserting them one by one into
+ * the new tree.
+ * The user needs to ensure that the attributes of the source tree and the new
+ * tree are the same, and the new tree needs to be an empty tree, otherwise
+ * -EINVAL will be returned.
+ *
+ * Return: 0 on success, -ENOMEM if memory could not be allocated, -EINVAL if
+ * the attributes of the two trees are different or the new tree is not an empty
+ * tree.
+ */
+int mtree_dup(struct maple_tree *mt, struct maple_tree *new, gfp_t gfp)
+{
+ int ret = 0;
+ MA_STATE(mas, mt, 0, 0);
+ MA_STATE(new_mas, new, 0, 0);
+
+ mas_lock(&new_mas);
+ mas_lock_nested(&mas, SINGLE_DEPTH_NESTING);
+ mas_dup_build(&mas, &new_mas, gfp);
+ mas_unlock(&mas);
+ if (unlikely(mas_is_err(&mas))) {
+ ret = xa_err(mas.node);
+ if (ret == -ENOMEM)
+ mas_dup_free(&new_mas);
+ }
+
+ mas_unlock(&new_mas);
+ return ret;
+}
+EXPORT_SYMBOL(mtree_dup);
+
/**
* __mt_destroy() - Walk and free all nodes of a locked maple tree.
* @mt: The maple tree
--
2.39.2
* [PATCH 6.6 04/28] maple_tree: skip other tests when BENCH is enabled
From: Yu Kuai @ 2024-10-24 13:19 UTC
From: Peng Zhang <zhangpeng.00@bytedance.com>
commit f670fa1caadb4ea532a89012c5451e4c6789bfcc upstream.
Skip other tests when BENCH is enabled so that performance can be measured
in user space.
Link: https://lkml.kernel.org/r/20231027033845.90608-8-zhangpeng.00@bytedance.com
Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Mike Christie <michael.christie@oracle.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
lib/test_maple_tree.c | 8 ++++----
tools/testing/radix-tree/maple.c | 2 ++
2 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index 464eeb90d5ad..de470950714f 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -3585,10 +3585,6 @@ static int __init maple_tree_seed(void)
pr_info("\nTEST STARTING\n\n");
- mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
- check_root_expand(&tree);
- mtree_destroy(&tree);
-
#if defined(BENCH_SLOT_STORE)
#define BENCH
mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
@@ -3646,6 +3642,10 @@ static int __init maple_tree_seed(void)
goto skip;
#endif
+ mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
+ check_root_expand(&tree);
+ mtree_destroy(&tree);
+
mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
check_iteration(&tree);
mtree_destroy(&tree);
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index 76a8990bb14e..576b825d6bb1 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -35938,7 +35938,9 @@ void farmer_tests(void)
void maple_tree_tests(void)
{
+#if !defined(BENCH)
farmer_tests();
+#endif
maple_tree_seed();
maple_tree_harvest();
}
--
2.39.2
* [PATCH 6.6 05/28] maple_tree: preserve the tree attributes when destroying maple tree
From: Yu Kuai @ 2024-10-24 13:19 UTC
From: Peng Zhang <zhangpeng.00@bytedance.com>
commit 8e50d32c7a89bde896945e4e572ef28ccd87bbf8 upstream.
When destroying a maple tree, preserve its attributes and then turn it
into an empty tree. This allows it to be reused without needing to be
reinitialized.
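
A small sketch of the behaviour this enables (the calls are the normal
maple tree API; the sequence and the "entry" payload are illustrative):

	struct maple_tree tree;

	mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
	/* ... populate and use the tree ... */
	mtree_destroy(&tree);
	/* The tree is now empty, but MT_FLAGS_ALLOC_RANGE survives, so it
	 * can be refilled without another mt_init_flags(): */
	mtree_store_range(&tree, 0, 7, entry, GFP_KERNEL);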
Link: https://lkml.kernel.org/r/20231027033845.90608-10-zhangpeng.00@bytedance.com
Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Mike Christie <michael.christie@oracle.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
lib/maple_tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 6f1addbbc820..97a610307d38 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -6774,7 +6774,7 @@ void __mt_destroy(struct maple_tree *mt)
if (xa_is_node(root))
mte_destroy_walk(root, mt);
- mt->ma_flags = 0;
+ mt->ma_flags = mt_attr(mt);
}
EXPORT_SYMBOL_GPL(__mt_destroy);
--
2.39.2
* [PATCH 6.6 06/28] maple_tree: remove unnecessary default labels from switch statements
From: Yu Kuai @ 2024-10-24 13:19 UTC
From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
commit 37a8ab24d3d4c465b070bd704e2ad2fa277df9d7 upstream.
Patch series "maple_tree: iterator state changes".
These patches have some general cleanup and a change to separate the maple
state status tracking from the maple state node.
The maple state status change allows for walks to continue from previous
places when the status needs to be recorded to make logical sense for the
next call to the maple state. For instance, it allows for prev/next to
function in a way that better resembles the linked list. It also allows
switch statements to be used to detect missed states at compile time, and
the addition of a fast-path "active" state is cleaner as an enum.
While making the status change, perf showed some very small (one line)
functions that were not inlined even with the inline keyword. Making
these small functions __always_inline is less expensive according to perf.
As part of that change, some inlines have been dropped from larger
functions.
Perf also showed that the commonly used mas_for_each() iterator was
spending a lot of time finding the end of the node. This series
introduces caching of the end of the node in the maple state (and updating
it during writes). This caching, along with the inline changes, yielded a
23.25% improvement on the BENCH_MAS_FOR_EACH maple tree test framework
benchmark.
I've also included a change to mtree_range_walk and mtree_lookup_walk to
take advantage of Peng's change [1] to the initial pivot setup.
mmtests did not produce any significant gains.
[1] https://lore.kernel.org/all/20230711035444.526-1-zhangpeng.00@bytedance.com/T/#u
This patch (of 12):
Removing the default labels from the switch statements will cause compile
warnings on missing cases.
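
A compact standalone illustration of the idea (not kernel code, and the
slot counts are illustrative): with every enum value handled and no
default label, -Wswitch (enabled by -Wall) warns as soon as a value is
added to the enum without a matching case:

	enum example_type { type_dense, type_leaf_64, type_range_64,
			    type_arange_64 };

	static int example_slots(enum example_type t)
	{
		switch (t) {
		case type_dense:
		case type_leaf_64:
		case type_range_64:
			return 16;
		case type_arange_64:
			return 10;
		}
		/* Only reached if a case is missing; a default label here
		 * would have silenced the compiler's warning instead. */
		return 0;
	}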
Link: https://lkml.kernel.org/r/20231101171629.3612299-2-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
lib/maple_tree.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 97a610307d38..9de2e3dfdfcc 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -771,7 +771,6 @@ static inline void mte_set_pivot(struct maple_enode *mn, unsigned char piv,
BUG_ON(piv >= mt_pivots[type]);
switch (type) {
- default:
case maple_range_64:
case maple_leaf_64:
node->mr64.pivot[piv] = val;
@@ -795,7 +794,6 @@ static inline void mte_set_pivot(struct maple_enode *mn, unsigned char piv,
static inline void __rcu **ma_slots(struct maple_node *mn, enum maple_type mt)
{
switch (mt) {
- default:
case maple_arange_64:
return mn->ma64.slot;
case maple_range_64:
@@ -804,6 +802,8 @@ static inline void __rcu **ma_slots(struct maple_node *mn, enum maple_type mt)
case maple_dense:
return mn->slot;
}
+
+ return NULL;
}
static inline bool mt_write_locked(const struct maple_tree *mt)
@@ -7013,7 +7013,6 @@ static void mt_dump_range(unsigned long min, unsigned long max,
else
pr_info("%.*s%lx-%lx: ", depth * 2, spaces, min, max);
break;
- default:
case mt_dump_dec:
if (min == max)
pr_info("%.*s%lu: ", depth * 2, spaces, min);
@@ -7053,7 +7052,6 @@ static void mt_dump_range64(const struct maple_tree *mt, void *entry,
case mt_dump_hex:
pr_cont("%p %lX ", node->slot[i], node->pivot[i]);
break;
- default:
case mt_dump_dec:
pr_cont("%p %lu ", node->slot[i], node->pivot[i]);
}
@@ -7083,7 +7081,6 @@ static void mt_dump_range64(const struct maple_tree *mt, void *entry,
pr_err("node %p last (%lx) > max (%lx) at pivot %d!\n",
node, last, max, i);
break;
- default:
case mt_dump_dec:
pr_err("node %p last (%lu) > max (%lu) at pivot %d!\n",
node, last, max, i);
@@ -7108,7 +7105,6 @@ static void mt_dump_arange64(const struct maple_tree *mt, void *entry,
case mt_dump_hex:
pr_cont("%lx ", node->gap[i]);
break;
- default:
case mt_dump_dec:
pr_cont("%lu ", node->gap[i]);
}
@@ -7119,7 +7115,6 @@ static void mt_dump_arange64(const struct maple_tree *mt, void *entry,
case mt_dump_hex:
pr_cont("%p %lX ", node->slot[i], node->pivot[i]);
break;
- default:
case mt_dump_dec:
pr_cont("%p %lu ", node->slot[i], node->pivot[i]);
}
--
2.39.2
* [PATCH 6.6 07/28] maple_tree: make mas_erase() more robust
From: Yu Kuai @ 2024-10-24 13:19 UTC
From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
commit f7a59018953910032231c0a019208c4b0a4a8bc3 upstream.
mas_erase() may not deal correctly with all maple states. Make the
function more robust by ensuring the state is in one of the two acceptable
states.
Link: https://lkml.kernel.org/r/20231101171629.3612299-3-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Peng Zhang <zhangpeng.00@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
lib/maple_tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 9de2e3dfdfcc..e4d0df3980e0 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -6184,7 +6184,7 @@ void *mas_erase(struct ma_state *mas)
void *entry;
MA_WR_STATE(wr_mas, mas, NULL);
- if (mas_is_none(mas) || mas_is_paused(mas))
+ if (!mas_is_active(mas) && !mas_is_start(mas))
mas->node = MAS_START;
/* Retry unnecessary when holding the write lock. */
--
2.39.2
* [PATCH 6.6 08/28] maple_tree: move debug check to __mas_set_range()
From: Yu Kuai @ 2024-10-24 13:19 UTC
From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
commit bf857ddd21d0bffc1edafc317e8e2ce0d6d5950c upstream.
__mas_set_range() was created to shortcut resetting the maple state and a
debug check was added to the caller (the vma iterator) to ensure the
internal maple state remains safe to use. Move the debug check from the
vma iterator into the maple tree itself so other users do not incorrectly
use the advanced maple state modification.
Fallout from this change includes a large amount of debug setup that
needed to be moved earlier in the header, and the radix-tree test code
needed to move its inclusion of maple_tree.h to after the atomic define.
None of these changes are functional.
Link: https://lkml.kernel.org/r/20231101171629.3612299-4-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Peng Zhang <zhangpeng.00@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
include/linux/maple_tree.h | 255 ++++++++++----------
mm/internal.h | 2 -
tools/testing/radix-tree/linux/maple_tree.h | 2 +-
3 files changed, 130 insertions(+), 129 deletions(-)
diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index a452dd8a1e5c..b5d5992578c9 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -557,6 +557,131 @@ static inline void mas_reset(struct ma_state *mas)
*/
#define mas_for_each(__mas, __entry, __max) \
while (((__entry) = mas_find((__mas), (__max))) != NULL)
+
+#ifdef CONFIG_DEBUG_MAPLE_TREE
+enum mt_dump_format {
+ mt_dump_dec,
+ mt_dump_hex,
+};
+
+extern atomic_t maple_tree_tests_run;
+extern atomic_t maple_tree_tests_passed;
+
+void mt_dump(const struct maple_tree *mt, enum mt_dump_format format);
+void mas_dump(const struct ma_state *mas);
+void mas_wr_dump(const struct ma_wr_state *wr_mas);
+void mt_validate(struct maple_tree *mt);
+void mt_cache_shrink(void);
+#define MT_BUG_ON(__tree, __x) do { \
+ atomic_inc(&maple_tree_tests_run); \
+ if (__x) { \
+ pr_info("BUG at %s:%d (%u)\n", \
+ __func__, __LINE__, __x); \
+ mt_dump(__tree, mt_dump_hex); \
+ pr_info("Pass: %u Run:%u\n", \
+ atomic_read(&maple_tree_tests_passed), \
+ atomic_read(&maple_tree_tests_run)); \
+ dump_stack(); \
+ } else { \
+ atomic_inc(&maple_tree_tests_passed); \
+ } \
+} while (0)
+
+#define MAS_BUG_ON(__mas, __x) do { \
+ atomic_inc(&maple_tree_tests_run); \
+ if (__x) { \
+ pr_info("BUG at %s:%d (%u)\n", \
+ __func__, __LINE__, __x); \
+ mas_dump(__mas); \
+ mt_dump((__mas)->tree, mt_dump_hex); \
+ pr_info("Pass: %u Run:%u\n", \
+ atomic_read(&maple_tree_tests_passed), \
+ atomic_read(&maple_tree_tests_run)); \
+ dump_stack(); \
+ } else { \
+ atomic_inc(&maple_tree_tests_passed); \
+ } \
+} while (0)
+
+#define MAS_WR_BUG_ON(__wrmas, __x) do { \
+ atomic_inc(&maple_tree_tests_run); \
+ if (__x) { \
+ pr_info("BUG at %s:%d (%u)\n", \
+ __func__, __LINE__, __x); \
+ mas_wr_dump(__wrmas); \
+ mas_dump((__wrmas)->mas); \
+ mt_dump((__wrmas)->mas->tree, mt_dump_hex); \
+ pr_info("Pass: %u Run:%u\n", \
+ atomic_read(&maple_tree_tests_passed), \
+ atomic_read(&maple_tree_tests_run)); \
+ dump_stack(); \
+ } else { \
+ atomic_inc(&maple_tree_tests_passed); \
+ } \
+} while (0)
+
+#define MT_WARN_ON(__tree, __x) ({ \
+ int ret = !!(__x); \
+ atomic_inc(&maple_tree_tests_run); \
+ if (ret) { \
+ pr_info("WARN at %s:%d (%u)\n", \
+ __func__, __LINE__, __x); \
+ mt_dump(__tree, mt_dump_hex); \
+ pr_info("Pass: %u Run:%u\n", \
+ atomic_read(&maple_tree_tests_passed), \
+ atomic_read(&maple_tree_tests_run)); \
+ dump_stack(); \
+ } else { \
+ atomic_inc(&maple_tree_tests_passed); \
+ } \
+ unlikely(ret); \
+})
+
+#define MAS_WARN_ON(__mas, __x) ({ \
+ int ret = !!(__x); \
+ atomic_inc(&maple_tree_tests_run); \
+ if (ret) { \
+ pr_info("WARN at %s:%d (%u)\n", \
+ __func__, __LINE__, __x); \
+ mas_dump(__mas); \
+ mt_dump((__mas)->tree, mt_dump_hex); \
+ pr_info("Pass: %u Run:%u\n", \
+ atomic_read(&maple_tree_tests_passed), \
+ atomic_read(&maple_tree_tests_run)); \
+ dump_stack(); \
+ } else { \
+ atomic_inc(&maple_tree_tests_passed); \
+ } \
+ unlikely(ret); \
+})
+
+#define MAS_WR_WARN_ON(__wrmas, __x) ({ \
+ int ret = !!(__x); \
+ atomic_inc(&maple_tree_tests_run); \
+ if (ret) { \
+ pr_info("WARN at %s:%d (%u)\n", \
+ __func__, __LINE__, __x); \
+ mas_wr_dump(__wrmas); \
+ mas_dump((__wrmas)->mas); \
+ mt_dump((__wrmas)->mas->tree, mt_dump_hex); \
+ pr_info("Pass: %u Run:%u\n", \
+ atomic_read(&maple_tree_tests_passed), \
+ atomic_read(&maple_tree_tests_run)); \
+ dump_stack(); \
+ } else { \
+ atomic_inc(&maple_tree_tests_passed); \
+ } \
+ unlikely(ret); \
+})
+#else
+#define MT_BUG_ON(__tree, __x) BUG_ON(__x)
+#define MAS_BUG_ON(__mas, __x) BUG_ON(__x)
+#define MAS_WR_BUG_ON(__mas, __x) BUG_ON(__x)
+#define MT_WARN_ON(__tree, __x) WARN_ON(__x)
+#define MAS_WARN_ON(__mas, __x) WARN_ON(__x)
+#define MAS_WR_WARN_ON(__mas, __x) WARN_ON(__x)
+#endif /* CONFIG_DEBUG_MAPLE_TREE */
+
/**
* __mas_set_range() - Set up Maple Tree operation state to a sub-range of the
* current location.
@@ -570,6 +695,9 @@ static inline void mas_reset(struct ma_state *mas)
static inline void __mas_set_range(struct ma_state *mas, unsigned long start,
unsigned long last)
{
+ /* Ensure the range starts within the current slot */
+ MAS_WARN_ON(mas, mas_is_active(mas) &&
+ (mas->index > start || mas->last < start));
mas->index = start;
mas->last = last;
}
@@ -587,8 +715,8 @@ static inline void __mas_set_range(struct ma_state *mas, unsigned long start,
static inline
void mas_set_range(struct ma_state *mas, unsigned long start, unsigned long last)
{
- __mas_set_range(mas, start, last);
mas->node = MAS_START;
+ __mas_set_range(mas, start, last);
}
/**
@@ -713,129 +841,4 @@ void *mt_next(struct maple_tree *mt, unsigned long index, unsigned long max);
for (__entry = mt_find(__tree, &(__index), __max); \
__entry; __entry = mt_find_after(__tree, &(__index), __max))
-
-#ifdef CONFIG_DEBUG_MAPLE_TREE
-enum mt_dump_format {
- mt_dump_dec,
- mt_dump_hex,
-};
-
-extern atomic_t maple_tree_tests_run;
-extern atomic_t maple_tree_tests_passed;
-
-void mt_dump(const struct maple_tree *mt, enum mt_dump_format format);
-void mas_dump(const struct ma_state *mas);
-void mas_wr_dump(const struct ma_wr_state *wr_mas);
-void mt_validate(struct maple_tree *mt);
-void mt_cache_shrink(void);
-#define MT_BUG_ON(__tree, __x) do { \
- atomic_inc(&maple_tree_tests_run); \
- if (__x) { \
- pr_info("BUG at %s:%d (%u)\n", \
- __func__, __LINE__, __x); \
- mt_dump(__tree, mt_dump_hex); \
- pr_info("Pass: %u Run:%u\n", \
- atomic_read(&maple_tree_tests_passed), \
- atomic_read(&maple_tree_tests_run)); \
- dump_stack(); \
- } else { \
- atomic_inc(&maple_tree_tests_passed); \
- } \
-} while (0)
-
-#define MAS_BUG_ON(__mas, __x) do { \
- atomic_inc(&maple_tree_tests_run); \
- if (__x) { \
- pr_info("BUG at %s:%d (%u)\n", \
- __func__, __LINE__, __x); \
- mas_dump(__mas); \
- mt_dump((__mas)->tree, mt_dump_hex); \
- pr_info("Pass: %u Run:%u\n", \
- atomic_read(&maple_tree_tests_passed), \
- atomic_read(&maple_tree_tests_run)); \
- dump_stack(); \
- } else { \
- atomic_inc(&maple_tree_tests_passed); \
- } \
-} while (0)
-
-#define MAS_WR_BUG_ON(__wrmas, __x) do { \
- atomic_inc(&maple_tree_tests_run); \
- if (__x) { \
- pr_info("BUG at %s:%d (%u)\n", \
- __func__, __LINE__, __x); \
- mas_wr_dump(__wrmas); \
- mas_dump((__wrmas)->mas); \
- mt_dump((__wrmas)->mas->tree, mt_dump_hex); \
- pr_info("Pass: %u Run:%u\n", \
- atomic_read(&maple_tree_tests_passed), \
- atomic_read(&maple_tree_tests_run)); \
- dump_stack(); \
- } else { \
- atomic_inc(&maple_tree_tests_passed); \
- } \
-} while (0)
-
-#define MT_WARN_ON(__tree, __x) ({ \
- int ret = !!(__x); \
- atomic_inc(&maple_tree_tests_run); \
- if (ret) { \
- pr_info("WARN at %s:%d (%u)\n", \
- __func__, __LINE__, __x); \
- mt_dump(__tree, mt_dump_hex); \
- pr_info("Pass: %u Run:%u\n", \
- atomic_read(&maple_tree_tests_passed), \
- atomic_read(&maple_tree_tests_run)); \
- dump_stack(); \
- } else { \
- atomic_inc(&maple_tree_tests_passed); \
- } \
- unlikely(ret); \
-})
-
-#define MAS_WARN_ON(__mas, __x) ({ \
- int ret = !!(__x); \
- atomic_inc(&maple_tree_tests_run); \
- if (ret) { \
- pr_info("WARN at %s:%d (%u)\n", \
- __func__, __LINE__, __x); \
- mas_dump(__mas); \
- mt_dump((__mas)->tree, mt_dump_hex); \
- pr_info("Pass: %u Run:%u\n", \
- atomic_read(&maple_tree_tests_passed), \
- atomic_read(&maple_tree_tests_run)); \
- dump_stack(); \
- } else { \
- atomic_inc(&maple_tree_tests_passed); \
- } \
- unlikely(ret); \
-})
-
-#define MAS_WR_WARN_ON(__wrmas, __x) ({ \
- int ret = !!(__x); \
- atomic_inc(&maple_tree_tests_run); \
- if (ret) { \
- pr_info("WARN at %s:%d (%u)\n", \
- __func__, __LINE__, __x); \
- mas_wr_dump(__wrmas); \
- mas_dump((__wrmas)->mas); \
- mt_dump((__wrmas)->mas->tree, mt_dump_hex); \
- pr_info("Pass: %u Run:%u\n", \
- atomic_read(&maple_tree_tests_passed), \
- atomic_read(&maple_tree_tests_run)); \
- dump_stack(); \
- } else { \
- atomic_inc(&maple_tree_tests_passed); \
- } \
- unlikely(ret); \
-})
-#else
-#define MT_BUG_ON(__tree, __x) BUG_ON(__x)
-#define MAS_BUG_ON(__mas, __x) BUG_ON(__x)
-#define MAS_WR_BUG_ON(__mas, __x) BUG_ON(__x)
-#define MT_WARN_ON(__tree, __x) WARN_ON(__x)
-#define MAS_WARN_ON(__mas, __x) WARN_ON(__x)
-#define MAS_WR_WARN_ON(__mas, __x) WARN_ON(__x)
-#endif /* CONFIG_DEBUG_MAPLE_TREE */
-
#endif /*_LINUX_MAPLE_TREE_H */
diff --git a/mm/internal.h b/mm/internal.h
index ef8d787a510c..8212179b8566 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1068,8 +1068,6 @@ static inline bool vma_soft_dirty_enabled(struct vm_area_struct *vma)
static inline void vma_iter_config(struct vma_iterator *vmi,
unsigned long index, unsigned long last)
{
- MAS_BUG_ON(&vmi->mas, vmi->mas.node != MAS_START &&
- (vmi->mas.index > index || vmi->mas.last < index));
__mas_set_range(&vmi->mas, index, last - 1);
}
diff --git a/tools/testing/radix-tree/linux/maple_tree.h b/tools/testing/radix-tree/linux/maple_tree.h
index 7d8d1f445b89..06c89bdcc515 100644
--- a/tools/testing/radix-tree/linux/maple_tree.h
+++ b/tools/testing/radix-tree/linux/maple_tree.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0+ */
#define atomic_t int32_t
-#include "../../../../include/linux/maple_tree.h"
#define atomic_inc(x) uatomic_inc(x)
#define atomic_read(x) uatomic_read(x)
#define atomic_set(x, y) do {} while (0)
#define U8_MAX UCHAR_MAX
+#include "../../../../include/linux/maple_tree.h"
--
2.39.2
* [PATCH 6.6 09/28] maple_tree: add end of node tracking to the maple state
From: Yu Kuai @ 2024-10-24 13:19 UTC
From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
commit 31c532a8af57513228c2b12d281104198ff412b8 upstream.
Analysis of the mas_for_each() iteration showed that significant time is
spent finding the end of a node. This time can be greatly reduced if the
end of the node is cached in the maple state. Care must be taken to
update and invalidate it as necessary.
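
To make the cost concrete, here is a conceptual sketch (simplified; the
kernel's ma_data_end() also consults node metadata) of the recomputation
that each iteration step previously paid:

	/* Find the last offset in use by probing the pivot array: a zero
	 * pivot or one equal to the node's maximum marks the end. */
	static unsigned char sketch_data_end(const unsigned long *pivots,
					     unsigned char max_piv,
					     unsigned long node_max)
	{
		unsigned char offset = 0;

		while (offset < max_piv && pivots[offset] &&
		       pivots[offset] != node_max)
			offset++;
		return offset;
	}

With mas->end kept current by writers, iteration reads a single cached
byte instead.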
Link: https://lkml.kernel.org/r/20231101171629.3612299-5-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Peng Zhang <zhangpeng.00@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
include/linux/maple_tree.h | 1 +
lib/maple_tree.c | 7 +++++++
tools/testing/radix-tree/maple.c | 1 +
3 files changed, 9 insertions(+)
diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index b5d5992578c9..0b82efe0cf1e 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -393,6 +393,7 @@ struct ma_state {
unsigned char depth; /* depth of tree descent during write */
unsigned char offset;
unsigned char mas_flags;
+ unsigned char end; /* The end of the node */
};
struct ma_wr_state {
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index e4d0df3980e0..d19fb14a9635 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -2843,6 +2843,7 @@ static inline void *mtree_range_walk(struct ma_state *mas)
goto dead_node;
} while (!ma_is_leaf(type));
+ mas->end = end;
mas->offset = offset;
mas->index = min;
mas->last = max;
@@ -3509,6 +3510,7 @@ static noinline_for_kasan int mas_commit_b_node(struct ma_wr_state *wr_mas,
mas_replace_node(wr_mas->mas, old_enode);
reuse_node:
mas_update_gap(wr_mas->mas);
+ wr_mas->mas->end = b_end;
return 1;
}
@@ -4010,6 +4012,7 @@ static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas,
}
trace_ma_write(__func__, mas, 0, wr_mas->entry);
mas_update_gap(mas);
+ mas->end = new_end;
return true;
}
@@ -4190,6 +4193,7 @@ static inline bool mas_wr_append(struct ma_wr_state *wr_mas,
if (!wr_mas->content || !wr_mas->entry)
mas_update_gap(mas);
+ mas->end = new_end;
trace_ma_write(__func__, mas, new_end, wr_mas->entry);
return true;
}
@@ -4428,6 +4432,7 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
if (unlikely(mte_dead_node(mas->node)))
return 1;
+ mas->end = mas->offset;
return 0;
no_entry:
@@ -5074,6 +5079,7 @@ int mas_empty_area(struct ma_state *mas, unsigned long min,
if (mas->index < min)
mas->index = min;
mas->last = mas->index + size - 1;
+ mas->end = mas_data_end(mas);
return 0;
}
EXPORT_SYMBOL_GPL(mas_empty_area);
@@ -5134,6 +5140,7 @@ int mas_empty_area_rev(struct ma_state *mas, unsigned long min,
mas->last = max;
mas->index = mas->last - size + 1;
+ mas->end = mas_data_end(mas);
return 0;
}
EXPORT_SYMBOL_GPL(mas_empty_area_rev);
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index 576b825d6bb1..27a3a31ba662 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -945,6 +945,7 @@ static inline bool mas_tree_walk(struct ma_state *mas, unsigned long *range_min,
goto retry;
}
+ mas->end = mas_data_end(mas);
return ret;
not_found:
--
2.39.2
* [PATCH 6.6 10/28] maple_tree: use cached node end in mas_next()
From: Yu Kuai @ 2024-10-24 13:19 UTC
From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
commit e9c52d8940cbfd94b36035bbebce7f55954e7728 upstream.
When looking for the next entry, don't recalculate the node end as it is
now tracked in the maple state.
Link: https://lkml.kernel.org/r/20231101171629.3612299-6-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Peng Zhang <zhangpeng.00@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
lib/maple_tree.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index d19fb14a9635..e0dcc8412da0 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4539,6 +4539,7 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
unsigned long min;
unsigned long *pivots;
struct maple_enode *enode;
+ struct maple_node *tmp;
int level = 0;
unsigned char node_end;
enum maple_type mt;
@@ -4591,6 +4592,10 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
pivots = ma_pivots(node, mt);
mas->max = mas_safe_pivot(mas, pivots, mas->offset, mt);
+ tmp = mte_to_node(enode);
+ mt = mte_node_type(enode);
+ pivots = ma_pivots(tmp, mt);
+ mas->end = ma_data_end(tmp, mt, pivots, mas->max);
if (unlikely(ma_dead_node(node)))
return 1;
@@ -4625,7 +4630,6 @@ static void *mas_next_slot(struct ma_state *mas, unsigned long max, bool empty,
unsigned long pivot;
enum maple_type type;
struct maple_node *node;
- unsigned char data_end;
unsigned long save_point = mas->last;
void *entry;
@@ -4633,12 +4637,11 @@ static void *mas_next_slot(struct ma_state *mas, unsigned long max, bool empty,
node = mas_mn(mas);
type = mte_node_type(mas->node);
pivots = ma_pivots(node, type);
- data_end = ma_data_end(node, type, pivots, mas->max);
if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
goto retry;
if (mas->max >= max) {
- if (likely(mas->offset < data_end))
+ if (likely(mas->offset < mas->end))
pivot = pivots[mas->offset];
else
goto overflow;
@@ -4650,11 +4653,11 @@ static void *mas_next_slot(struct ma_state *mas, unsigned long max, bool empty,
goto overflow;
}
- if (likely(mas->offset < data_end)) {
+ if (likely(mas->offset < mas->end)) {
mas->index = pivots[mas->offset] + 1;
again:
mas->offset++;
- if (likely(mas->offset < data_end))
+ if (likely(mas->offset < mas->end))
mas->last = pivots[mas->offset];
else
mas->last = mas->max;
@@ -4691,7 +4694,6 @@ static void *mas_next_slot(struct ma_state *mas, unsigned long max, bool empty,
goto overflow;
mas->index = mas->last + 1;
- /* Node cannot end on NULL, so it's safe to short-cut here */
goto again;
}
--
2.39.2
* [PATCH 6.6 11/28] maple_tree: use cached node end in mas_destroy()
From: Yu Kuai @ 2024-10-24 13:19 UTC
From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
commit 1f41ef12abf8538b3d82cdae14c06aa171cb71ce upstream.
The node end is set during the walk, so use the resulting end instead of
re-fetching it.
Link: https://lkml.kernel.org/r/20231101171629.3612299-7-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Peng Zhang <zhangpeng.00@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
lib/maple_tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index e0dcc8412da0..3df7e3456205 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5587,7 +5587,7 @@ void mas_destroy(struct ma_state *mas)
mas_start(mas);
mtree_range_walk(mas);
- end = mas_data_end(mas) + 1;
+ end = mas->end + 1;
if (end < mt_min_slot_count(mas->node) - 1)
mas_destroy_rebalance(mas, end);
--
2.39.2
^ permalink raw reply [flat|nested] 47+ messages in thread

* [PATCH 6.6 12/28] maple_tree: clean up inlines for some functions
2024-10-24 13:19 [PATCH 6.6 00/28] fix CVE-2024-46701 Yu Kuai
` (10 preceding siblings ...)
2024-10-24 13:19 ` [PATCH 6.6 11/28] maple_tree: use cached node end in mas_destroy() Yu Kuai
@ 2024-10-24 13:19 ` Yu Kuai
2024-10-24 13:19 ` [PATCH 6.6 13/28] maple_tree: add test for mtree_dup() Yu Kuai
` (5 subsequent siblings)
17 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:19 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
commit 271f61a8b41dcd86e1ecc2e0455bcc071bc7dde4 upstream.
There are a few functions which were inlined but are somewhat too large to
inline, so remove the inline keyword.
There are also several very small functions which are used in critical
code sections which gcc was not inlining, so be more strict and use
__always_inline for these functions.
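To make the distinction concrete, here is a small standalone C sketch
(illustrative names, not from the patch; the __always_inline fallback mirrors
the kernel's own definition in compiler_types.h):

	#include <stdbool.h>

	#ifndef __always_inline
	#define __always_inline inline __attribute__((__always_inline__))
	#endif

	/* Somewhat large helper: drop "inline" and let the compiler decide. */
	static bool range_overlaps(unsigned long a_min, unsigned long a_max,
				   unsigned long b_min, unsigned long b_max)
	{
		return a_min <= b_max && b_min <= a_max;
	}

	/* Tiny hot-path predicate: force inlining even when gcc declines. */
	static __always_inline bool is_odd(unsigned long v)
	{
		return v & 1UL;
	}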
Link: https://lkml.kernel.org/r/20231101171629.3612299-8-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Peng Zhang <zhangpeng.00@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
lib/maple_tree.c | 78 ++++++++++++++++++++++++------------------------
1 file changed, 39 insertions(+), 39 deletions(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 3df7e3456205..d1416276f1ef 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -217,23 +217,24 @@ static inline unsigned int mt_attr(struct maple_tree *mt)
return mt->ma_flags & ~MT_FLAGS_HEIGHT_MASK;
}
-static inline enum maple_type mte_node_type(const struct maple_enode *entry)
+static __always_inline enum maple_type mte_node_type(
+ const struct maple_enode *entry)
{
return ((unsigned long)entry >> MAPLE_NODE_TYPE_SHIFT) &
MAPLE_NODE_TYPE_MASK;
}
-static inline bool ma_is_dense(const enum maple_type type)
+static __always_inline bool ma_is_dense(const enum maple_type type)
{
return type < maple_leaf_64;
}
-static inline bool ma_is_leaf(const enum maple_type type)
+static __always_inline bool ma_is_leaf(const enum maple_type type)
{
return type < maple_range_64;
}
-static inline bool mte_is_leaf(const struct maple_enode *entry)
+static __always_inline bool mte_is_leaf(const struct maple_enode *entry)
{
return ma_is_leaf(mte_node_type(entry));
}
@@ -242,7 +243,7 @@ static inline bool mte_is_leaf(const struct maple_enode *entry)
* We also reserve values with the bottom two bits set to '10' which are
* below 4096
*/
-static inline bool mt_is_reserved(const void *entry)
+static __always_inline bool mt_is_reserved(const void *entry)
{
return ((unsigned long)entry < MAPLE_RESERVED_RANGE) &&
xa_is_internal(entry);
@@ -295,7 +296,8 @@ static inline bool mas_searchable(struct ma_state *mas)
return true;
}
-static inline struct maple_node *mte_to_node(const struct maple_enode *entry)
+static __always_inline struct maple_node *mte_to_node(
+ const struct maple_enode *entry)
{
return (struct maple_node *)((unsigned long)entry & ~MAPLE_NODE_MASK);
}
@@ -372,12 +374,12 @@ static inline bool mte_has_null(const struct maple_enode *node)
return (unsigned long)node & MAPLE_ENODE_NULL;
}
-static inline bool ma_is_root(struct maple_node *node)
+static __always_inline bool ma_is_root(struct maple_node *node)
{
return ((unsigned long)node->parent & MA_ROOT_PARENT);
}
-static inline bool mte_is_root(const struct maple_enode *node)
+static __always_inline bool mte_is_root(const struct maple_enode *node)
{
return ma_is_root(mte_to_node(node));
}
@@ -387,7 +389,7 @@ static inline bool mas_is_root_limits(const struct ma_state *mas)
return !mas->min && mas->max == ULONG_MAX;
}
-static inline bool mt_is_alloc(struct maple_tree *mt)
+static __always_inline bool mt_is_alloc(struct maple_tree *mt)
{
return (mt->ma_flags & MT_FLAGS_ALLOC_RANGE);
}
@@ -526,11 +528,12 @@ void mas_set_parent(struct ma_state *mas, struct maple_enode *enode,
*
* Return: The slot in the parent node where @enode resides.
*/
-static inline unsigned int mte_parent_slot(const struct maple_enode *enode)
+static __always_inline
+unsigned int mte_parent_slot(const struct maple_enode *enode)
{
unsigned long val = (unsigned long)mte_to_node(enode)->parent;
- if (val & MA_ROOT_PARENT)
+ if (unlikely(val & MA_ROOT_PARENT))
return 0;
/*
@@ -546,7 +549,8 @@ static inline unsigned int mte_parent_slot(const struct maple_enode *enode)
*
* Return: The parent maple node.
*/
-static inline struct maple_node *mte_parent(const struct maple_enode *enode)
+static __always_inline
+struct maple_node *mte_parent(const struct maple_enode *enode)
{
return (void *)((unsigned long)
(mte_to_node(enode)->parent) & ~MAPLE_NODE_MASK);
@@ -558,7 +562,7 @@ static inline struct maple_node *mte_parent(const struct maple_enode *enode)
*
* Return: true if dead, false otherwise.
*/
-static inline bool ma_dead_node(const struct maple_node *node)
+static __always_inline bool ma_dead_node(const struct maple_node *node)
{
struct maple_node *parent;
@@ -574,7 +578,7 @@ static inline bool ma_dead_node(const struct maple_node *node)
*
* Return: true if dead, false otherwise.
*/
-static inline bool mte_dead_node(const struct maple_enode *enode)
+static __always_inline bool mte_dead_node(const struct maple_enode *enode)
{
struct maple_node *parent, *node;
@@ -730,7 +734,7 @@ static inline unsigned long mas_pivot(struct ma_state *mas, unsigned char piv)
* Return: The pivot at @piv within the limit of the @pivots array, @mas->max
* otherwise.
*/
-static inline unsigned long
+static __always_inline unsigned long
mas_safe_pivot(const struct ma_state *mas, unsigned long *pivots,
unsigned char piv, enum maple_type type)
{
@@ -812,20 +816,20 @@ static inline bool mt_write_locked(const struct maple_tree *mt)
lockdep_is_held(&mt->ma_lock);
}
-static inline bool mt_locked(const struct maple_tree *mt)
+static __always_inline bool mt_locked(const struct maple_tree *mt)
{
return mt_external_lock(mt) ? mt_lock_is_held(mt) :
lockdep_is_held(&mt->ma_lock);
}
-static inline void *mt_slot(const struct maple_tree *mt,
+static __always_inline void *mt_slot(const struct maple_tree *mt,
void __rcu **slots, unsigned char offset)
{
return rcu_dereference_check(slots[offset], mt_locked(mt));
}
-static inline void *mt_slot_locked(struct maple_tree *mt, void __rcu **slots,
- unsigned char offset)
+static __always_inline void *mt_slot_locked(struct maple_tree *mt,
+ void __rcu **slots, unsigned char offset)
{
return rcu_dereference_protected(slots[offset], mt_write_locked(mt));
}
@@ -837,8 +841,8 @@ static inline void *mt_slot_locked(struct maple_tree *mt, void __rcu **slots,
*
* Return: The entry stored in @slots at the @offset.
*/
-static inline void *mas_slot_locked(struct ma_state *mas, void __rcu **slots,
- unsigned char offset)
+static __always_inline void *mas_slot_locked(struct ma_state *mas,
+ void __rcu **slots, unsigned char offset)
{
return mt_slot_locked(mas->tree, slots, offset);
}
@@ -851,8 +855,8 @@ static inline void *mas_slot_locked(struct ma_state *mas, void __rcu **slots,
*
* Return: The entry stored in @slots at the @offset
*/
-static inline void *mas_slot(struct ma_state *mas, void __rcu **slots,
- unsigned char offset)
+static __always_inline void *mas_slot(struct ma_state *mas, void __rcu **slots,
+ unsigned char offset)
{
return mt_slot(mas->tree, slots, offset);
}
@@ -863,7 +867,7 @@ static inline void *mas_slot(struct ma_state *mas, void __rcu **slots,
*
* Return: The pointer to the root of the tree
*/
-static inline void *mas_root(struct ma_state *mas)
+static __always_inline void *mas_root(struct ma_state *mas)
{
return rcu_dereference_check(mas->tree->ma_root, mt_locked(mas->tree));
}
@@ -1437,10 +1441,8 @@ static inline struct maple_enode *mas_start(struct ma_state *mas)
* Uses metadata to find the end of the data when possible.
* Return: The zero indexed last slot with data (may be null).
*/
-static inline unsigned char ma_data_end(struct maple_node *node,
- enum maple_type type,
- unsigned long *pivots,
- unsigned long max)
+static __always_inline unsigned char ma_data_end(struct maple_node *node,
+ enum maple_type type, unsigned long *pivots, unsigned long max)
{
unsigned char offset;
@@ -4344,7 +4346,7 @@ static inline void *mas_insert(struct ma_state *mas, void *entry)
}
-static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
+static __always_inline void mas_rewalk(struct ma_state *mas, unsigned long index)
{
retry:
mas_set(mas, index);
@@ -4353,7 +4355,7 @@ static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
goto retry;
}
-static inline bool mas_rewalk_if_dead(struct ma_state *mas,
+static __always_inline bool mas_rewalk_if_dead(struct ma_state *mas,
struct maple_node *node, const unsigned long index)
{
if (unlikely(ma_dead_node(node))) {
@@ -4372,7 +4374,7 @@ static inline bool mas_rewalk_if_dead(struct ma_state *mas,
* The prev node value will be mas->node[mas->offset] or MAS_NONE.
* Return: 1 if the node is dead, 0 otherwise.
*/
-static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
+static int mas_prev_node(struct ma_state *mas, unsigned long min)
{
enum maple_type mt;
int offset, level;
@@ -4533,8 +4535,8 @@ static void *mas_prev_slot(struct ma_state *mas, unsigned long min, bool empty,
* The next value will be mas->node[mas->offset] or MAS_NONE.
* Return: 1 on dead node, 0 otherwise.
*/
-static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
- unsigned long max)
+static int mas_next_node(struct ma_state *mas, struct maple_node *node,
+ unsigned long max)
{
unsigned long min;
unsigned long *pivots;
@@ -5675,7 +5677,7 @@ int mas_expected_entries(struct ma_state *mas, unsigned long nr_entries)
}
EXPORT_SYMBOL_GPL(mas_expected_entries);
-static inline bool mas_next_setup(struct ma_state *mas, unsigned long max,
+static bool mas_next_setup(struct ma_state *mas, unsigned long max,
void **entry)
{
bool was_none = mas_is_none(mas);
@@ -5791,8 +5793,7 @@ void *mt_next(struct maple_tree *mt, unsigned long index, unsigned long max)
}
EXPORT_SYMBOL_GPL(mt_next);
-static inline bool mas_prev_setup(struct ma_state *mas, unsigned long min,
- void **entry)
+static bool mas_prev_setup(struct ma_state *mas, unsigned long min, void **entry)
{
if (unlikely(mas->index <= min)) {
mas->node = MAS_UNDERFLOW;
@@ -5941,8 +5942,7 @@ EXPORT_SYMBOL_GPL(mas_pause);
*
* Returns: True if entry is the answer, false otherwise.
*/
-static inline bool mas_find_setup(struct ma_state *mas, unsigned long max,
- void **entry)
+static __always_inline bool mas_find_setup(struct ma_state *mas, unsigned long max, void **entry)
{
if (mas_is_active(mas)) {
if (mas->last < max)
@@ -6058,7 +6058,7 @@ EXPORT_SYMBOL_GPL(mas_find_range);
*
* Returns: True if entry is the answer, false otherwise.
*/
-static inline bool mas_find_rev_setup(struct ma_state *mas, unsigned long min,
+static bool mas_find_rev_setup(struct ma_state *mas, unsigned long min,
void **entry)
{
if (mas_is_active(mas)) {
--
2.39.2
^ permalink raw reply [flat|nested] 47+ messages in thread

* [PATCH 6.6 13/28] maple_tree: add test for mtree_dup()
2024-10-24 13:19 [PATCH 6.6 00/28] fix CVE-2024-46701 Yu Kuai
` (11 preceding siblings ...)
2024-10-24 13:19 ` [PATCH 6.6 12/28] maple_tree: clean up inlines for some functions Yu Kuai
@ 2024-10-24 13:19 ` Yu Kuai
2024-10-24 13:19 ` [PATCH 6.6 14/28] maple_tree: separate ma_state node from status Yu Kuai
` (4 subsequent siblings)
17 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:19 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: Peng Zhang <zhangpeng.00@bytedance.com>
commit a2587a7e8d37885dc063255f5400a66299b42e48 upstream.
Add test for mtree_dup().
Test by duplicating different maple trees and then comparing the two
trees. Includes tests for duplicating full trees and memory allocation
failures on different nodes.
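Every case below follows the same pattern: build a source tree, duplicate it,
validate the copy, and compare the two structurally. A condensed sketch of
that flow, using the compare_tree() helper this patch adds alongside the
existing test harness:

	DEFINE_MTREE(new);
	int ret;

	mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
	mtree_store_range(mt, 0, 5, xa_mk_value(0), GFP_KERNEL);

	ret = mtree_dup(mt, &new, GFP_KERNEL);
	MT_BUG_ON(&new, ret);		/* duplication must succeed */
	mt_validate(&new);		/* the copy must be a valid tree */
	if (compare_tree(mt, &new))	/* and structurally identical */
		MT_BUG_ON(&new, 1);

	mtree_destroy(mt);
	mtree_destroy(&new);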
Link: https://lkml.kernel.org/r/20231027033845.90608-6-zhangpeng.00@bytedance.com
Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Mike Christie <michael.christie@oracle.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
tools/testing/radix-tree/maple.c | 361 +++++++++++++++++++++++++++++++
1 file changed, 361 insertions(+)
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index 27a3a31ba662..1c86ae3f8186 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -35858,6 +35858,363 @@ static noinline void __init check_locky(struct maple_tree *mt)
mt_clear_in_rcu(mt);
}
+/*
+ * Compares two nodes except for the addresses stored in the nodes.
+ * Returns zero if they are the same, otherwise returns non-zero.
+ */
+static int __init compare_node(struct maple_enode *enode_a,
+ struct maple_enode *enode_b)
+{
+ struct maple_node *node_a, *node_b;
+ struct maple_node a, b;
+ void **slots_a, **slots_b; /* Do not use the rcu tag. */
+ enum maple_type type;
+ int i;
+
+ if (((unsigned long)enode_a & MAPLE_NODE_MASK) !=
+ ((unsigned long)enode_b & MAPLE_NODE_MASK)) {
+ pr_err("The lower 8 bits of enode are different.\n");
+ return -1;
+ }
+
+ type = mte_node_type(enode_a);
+ node_a = mte_to_node(enode_a);
+ node_b = mte_to_node(enode_b);
+ a = *node_a;
+ b = *node_b;
+
+ /* Do not compare addresses. */
+ if (ma_is_root(node_a) || ma_is_root(node_b)) {
+ a.parent = (struct maple_pnode *)((unsigned long)a.parent &
+ MA_ROOT_PARENT);
+ b.parent = (struct maple_pnode *)((unsigned long)b.parent &
+ MA_ROOT_PARENT);
+ } else {
+ a.parent = (struct maple_pnode *)((unsigned long)a.parent &
+ MAPLE_NODE_MASK);
+ b.parent = (struct maple_pnode *)((unsigned long)b.parent &
+ MAPLE_NODE_MASK);
+ }
+
+ if (a.parent != b.parent) {
+ pr_err("The lower 8 bits of parents are different. %p %p\n",
+ a.parent, b.parent);
+ return -1;
+ }
+
+ /*
+ * If it is a leaf node, the slots do not contain the node address, and
+ * no special processing of slots is required.
+ */
+ if (ma_is_leaf(type))
+ goto cmp;
+
+ slots_a = ma_slots(&a, type);
+ slots_b = ma_slots(&b, type);
+
+ for (i = 0; i < mt_slots[type]; i++) {
+ if (!slots_a[i] && !slots_b[i])
+ break;
+
+ if (!slots_a[i] || !slots_b[i]) {
+ pr_err("The number of slots is different.\n");
+ return -1;
+ }
+
+ /* Do not compare addresses in slots. */
+ ((unsigned long *)slots_a)[i] &= MAPLE_NODE_MASK;
+ ((unsigned long *)slots_b)[i] &= MAPLE_NODE_MASK;
+ }
+
+cmp:
+ /*
+ * Compare all contents of two nodes, including parent (except address),
+ * slots (except address), pivots, gaps and metadata.
+ */
+ return memcmp(&a, &b, sizeof(struct maple_node));
+}
+
+/*
+ * Compare two trees and return 0 if they are the same, non-zero otherwise.
+ */
+static int __init compare_tree(struct maple_tree *mt_a, struct maple_tree *mt_b)
+{
+ MA_STATE(mas_a, mt_a, 0, 0);
+ MA_STATE(mas_b, mt_b, 0, 0);
+
+ if (mt_a->ma_flags != mt_b->ma_flags) {
+ pr_err("The flags of the two trees are different.\n");
+ return -1;
+ }
+
+ mas_dfs_preorder(&mas_a);
+ mas_dfs_preorder(&mas_b);
+
+ if (mas_is_ptr(&mas_a) || mas_is_ptr(&mas_b)) {
+ if (!(mas_is_ptr(&mas_a) && mas_is_ptr(&mas_b))) {
+ pr_err("One is MAS_ROOT and the other is not.\n");
+ return -1;
+ }
+ return 0;
+ }
+
+ while (!mas_is_none(&mas_a) || !mas_is_none(&mas_b)) {
+
+ if (mas_is_none(&mas_a) || mas_is_none(&mas_b)) {
+ pr_err("One is MAS_NONE and the other is not.\n");
+ return -1;
+ }
+
+ if (mas_a.min != mas_b.min ||
+ mas_a.max != mas_b.max) {
+ pr_err("mas->min, mas->max do not match.\n");
+ return -1;
+ }
+
+ if (compare_node(mas_a.node, mas_b.node)) {
+ pr_err("The contents of nodes %p and %p are different.\n",
+ mas_a.node, mas_b.node);
+ mt_dump(mt_a, mt_dump_dec);
+ mt_dump(mt_b, mt_dump_dec);
+ return -1;
+ }
+
+ mas_dfs_preorder(&mas_a);
+ mas_dfs_preorder(&mas_b);
+ }
+
+ return 0;
+}
+
+static __init void mas_subtree_max_range(struct ma_state *mas)
+{
+ unsigned long limit = mas->max;
+ MA_STATE(newmas, mas->tree, 0, 0);
+ void *entry;
+
+ mas_for_each(mas, entry, limit) {
+ if (mas->last - mas->index >=
+ newmas.last - newmas.index) {
+ newmas = *mas;
+ }
+ }
+
+ *mas = newmas;
+}
+
+/*
+ * build_full_tree() - Build a full tree.
+ * @mt: The tree to build.
+ * @flags: Use @flags to build the tree.
+ * @height: The height of the tree to build.
+ *
+ * Build a tree with full leaf nodes and internal nodes. Note that the height
+ * should not exceed 3, otherwise it will take a long time to build.
+ * Return: zero if the build is successful, non-zero if it fails.
+ */
+static __init int build_full_tree(struct maple_tree *mt, unsigned int flags,
+ int height)
+{
+ MA_STATE(mas, mt, 0, 0);
+ unsigned long step;
+ int ret = 0, cnt = 1;
+ enum maple_type type;
+
+ mt_init_flags(mt, flags);
+ mtree_insert_range(mt, 0, ULONG_MAX, xa_mk_value(5), GFP_KERNEL);
+
+ mtree_lock(mt);
+
+ while (1) {
+ mas_set(&mas, 0);
+ if (mt_height(mt) < height) {
+ mas.max = ULONG_MAX;
+ goto store;
+ }
+
+ while (1) {
+ mas_dfs_preorder(&mas);
+ if (mas_is_none(&mas))
+ goto unlock;
+
+ type = mte_node_type(mas.node);
+ if (mas_data_end(&mas) + 1 < mt_slots[type]) {
+ mas_set(&mas, mas.min);
+ goto store;
+ }
+ }
+store:
+ mas_subtree_max_range(&mas);
+ step = mas.last - mas.index;
+ if (step < 1) {
+ ret = -1;
+ goto unlock;
+ }
+
+ step /= 2;
+ mas.last = mas.index + step;
+ mas_store_gfp(&mas, xa_mk_value(5),
+ GFP_KERNEL);
+ ++cnt;
+ }
+unlock:
+ mtree_unlock(mt);
+
+ MT_BUG_ON(mt, mt_height(mt) != height);
+ /* pr_info("height:%u number of elements:%d\n", mt_height(mt), cnt); */
+ return ret;
+}
+
+static noinline void __init check_mtree_dup(struct maple_tree *mt)
+{
+ DEFINE_MTREE(new);
+ int i, j, ret, count = 0;
+ unsigned int rand_seed = 17, rand;
+
+ /* store a value at [0, 0] */
+ mt_init_flags(mt, 0);
+ mtree_store_range(mt, 0, 0, xa_mk_value(0), GFP_KERNEL);
+ ret = mtree_dup(mt, &new, GFP_KERNEL);
+ MT_BUG_ON(&new, ret);
+ mt_validate(&new);
+ if (compare_tree(mt, &new))
+ MT_BUG_ON(&new, 1);
+
+ mtree_destroy(mt);
+ mtree_destroy(&new);
+
+ /* The two trees have different attributes. */
+ mt_init_flags(mt, 0);
+ mt_init_flags(&new, MT_FLAGS_ALLOC_RANGE);
+ ret = mtree_dup(mt, &new, GFP_KERNEL);
+ MT_BUG_ON(&new, ret != -EINVAL);
+ mtree_destroy(mt);
+ mtree_destroy(&new);
+
+ /* The new tree is not empty */
+ mt_init_flags(mt, 0);
+ mt_init_flags(&new, 0);
+ mtree_store(&new, 5, xa_mk_value(5), GFP_KERNEL);
+ ret = mtree_dup(mt, &new, GFP_KERNEL);
+ MT_BUG_ON(&new, ret != -EINVAL);
+ mtree_destroy(mt);
+ mtree_destroy(&new);
+
+ /* Test for duplicating full trees. */
+ for (i = 1; i <= 3; i++) {
+ ret = build_full_tree(mt, 0, i);
+ MT_BUG_ON(mt, ret);
+ mt_init_flags(&new, 0);
+
+ ret = mtree_dup(mt, &new, GFP_KERNEL);
+ MT_BUG_ON(&new, ret);
+ mt_validate(&new);
+ if (compare_tree(mt, &new))
+ MT_BUG_ON(&new, 1);
+
+ mtree_destroy(mt);
+ mtree_destroy(&new);
+ }
+
+ for (i = 1; i <= 3; i++) {
+ ret = build_full_tree(mt, MT_FLAGS_ALLOC_RANGE, i);
+ MT_BUG_ON(mt, ret);
+ mt_init_flags(&new, MT_FLAGS_ALLOC_RANGE);
+
+ ret = mtree_dup(mt, &new, GFP_KERNEL);
+ MT_BUG_ON(&new, ret);
+ mt_validate(&new);
+ if (compare_tree(mt, &new))
+ MT_BUG_ON(&new, 1);
+
+ mtree_destroy(mt);
+ mtree_destroy(&new);
+ }
+
+ /* Test for normal duplicating. */
+ for (i = 0; i < 1000; i += 3) {
+ if (i & 1) {
+ mt_init_flags(mt, 0);
+ mt_init_flags(&new, 0);
+ } else {
+ mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
+ mt_init_flags(&new, MT_FLAGS_ALLOC_RANGE);
+ }
+
+ for (j = 0; j < i; j++) {
+ mtree_store_range(mt, j * 10, j * 10 + 5,
+ xa_mk_value(j), GFP_KERNEL);
+ }
+
+ ret = mtree_dup(mt, &new, GFP_KERNEL);
+ MT_BUG_ON(&new, ret);
+ mt_validate(&new);
+ if (compare_tree(mt, &new))
+ MT_BUG_ON(&new, 1);
+
+ mtree_destroy(mt);
+ mtree_destroy(&new);
+ }
+
+ /* Test memory allocation failed. */
+ mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
+ for (i = 0; i < 30; i += 3) {
+ mtree_store_range(mt, j * 10, j * 10 + 5,
+ xa_mk_value(j), GFP_KERNEL);
+ }
+
+ /* Failed at the first node. */
+ mt_init_flags(&new, MT_FLAGS_ALLOC_RANGE);
+ mt_set_non_kernel(0);
+ ret = mtree_dup(mt, &new, GFP_NOWAIT);
+ mt_set_non_kernel(0);
+ MT_BUG_ON(&new, ret != -ENOMEM);
+ mtree_destroy(mt);
+ mtree_destroy(&new);
+
+ /* Random maple tree fails at a random node. */
+ for (i = 0; i < 1000; i += 3) {
+ if (i & 1) {
+ mt_init_flags(mt, 0);
+ mt_init_flags(&new, 0);
+ } else {
+ mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
+ mt_init_flags(&new, MT_FLAGS_ALLOC_RANGE);
+ }
+
+ for (j = 0; j < i; j++) {
+ mtree_store_range(mt, j * 10, j * 10 + 5,
+ xa_mk_value(j), GFP_KERNEL);
+ }
+ /*
+ * The rand() library function is not used, so we can generate
+ * the same random numbers on any platform.
+ */
+ rand_seed = rand_seed * 1103515245 + 12345;
+ rand = rand_seed / 65536 % 128;
+ mt_set_non_kernel(rand);
+
+ ret = mtree_dup(mt, &new, GFP_NOWAIT);
+ mt_set_non_kernel(0);
+ if (ret != 0) {
+ MT_BUG_ON(&new, ret != -ENOMEM);
+ count++;
+ mtree_destroy(mt);
+ continue;
+ }
+
+ mt_validate(&new);
+ if (compare_tree(mt, &new))
+ MT_BUG_ON(&new, 1);
+
+ mtree_destroy(mt);
+ mtree_destroy(&new);
+ }
+
+ /* pr_info("mtree_dup() fail %d times\n", count); */
+ BUG_ON(!count);
+}
+
extern void test_kmem_cache_bulk(void);
void farmer_tests(void)
@@ -35905,6 +36262,10 @@ void farmer_tests(void)
check_null_expand(&tree);
mtree_destroy(&tree);
+ mt_init_flags(&tree, 0);
+ check_mtree_dup(&tree);
+ mtree_destroy(&tree);
+
/* RCU testing */
mt_init_flags(&tree, 0);
check_erase_testset(&tree);
--
2.39.2
^ permalink raw reply [flat|nested] 47+ messages in thread

* [PATCH 6.6 14/28] maple_tree: separate ma_state node from status
2024-10-24 13:19 [PATCH 6.6 00/28] fix CVE-2024-46701 Yu Kuai
` (12 preceding siblings ...)
2024-10-24 13:19 ` [PATCH 6.6 13/28] maple_tree: add test for mtree_dup() Yu Kuai
@ 2024-10-24 13:19 ` Yu Kuai
2024-10-24 13:19 ` [PATCH 6.6 15/28] maple_tree: remove mas_searchable() Yu Kuai
` (3 subsequent siblings)
17 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:19 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
commit 067311d33e650adfe7ae23765959ddcc1ba18510 upstream.
The maple tree node is overloaded to keep status as well as the active
node. This, unfortunately, results in a re-walk on underflow or overflow.
Since the maple state has room, the status can be placed in its own enum
in the structure. Once an underflow/overflow is detected, certain modes
can restore the status to active and others may need to re-walk just that
one node to see the entry.
The status being an enum has the benefit of letting the compiler detect an
unhandled status in switch statements.
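As an aside (illustrative, not part of the patch text): because maple_status
is a plain enum, gcc and clang's -Wswitch will flag a switch that forgets a
case, something the old pointer-encoded MAS_* constants could never provide.
A minimal sketch using the enum exactly as this patch defines it:

	enum maple_status {
		ma_active,
		ma_start,
		ma_root,
		ma_none,
		ma_pause,
		ma_overflow,
		ma_underflow,
		ma_error,
	};

	/* -Wswitch warns here if any enumerator above is left unhandled. */
	static const char *status_name(enum maple_status s)
	{
		switch (s) {
		case ma_active:		return "active";
		case ma_start:		return "start";
		case ma_root:		return "root";
		case ma_none:		return "none";
		case ma_pause:		return "pause";
		case ma_overflow:	return "overflow";
		case ma_underflow:	return "underflow";
		case ma_error:		return "error";
		}
		return "unknown";
	}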
[Liam.Howlett@oracle.com: fix comments about MAS_*]
Link: https://lkml.kernel.org/r/20231106154124.614247-1-Liam.Howlett@oracle.com
[Liam.Howlett@oracle.com: update forking to separate maple state and node]
Link: https://lkml.kernel.org/r/20231106154551.615042-1-Liam.Howlett@oracle.com
[Liam.Howlett@oracle.com: fix mas_prev() state separation code]
Link: https://lkml.kernel.org/r/20231207193319.4025462-1-Liam.Howlett@oracle.com
Link: https://lkml.kernel.org/r/20231101171629.3612299-9-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Peng Zhang <zhangpeng.00@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
include/linux/maple_tree.h | 87 +++---
include/linux/mm_types.h | 3 +-
lib/maple_tree.c | 459 +++++++++++++++++++------------
lib/test_maple_tree.c | 189 +++++++------
mm/internal.h | 8 +-
tools/testing/radix-tree/maple.c | 26 +-
6 files changed, 445 insertions(+), 327 deletions(-)
diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index 0b82efe0cf1e..4dd668f7b111 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -349,6 +349,36 @@ static inline bool mtree_empty(const struct maple_tree *mt)
/* Advanced API */
+/*
+ * Maple State Status
+ * ma_active means the maple state is pointing to a node and offset and can
+ * continue operating on the tree.
+ * ma_start means we have not searched the tree.
+ * ma_root means we have searched the tree and the entry we found lives in
+ * the root of the tree (ie it has index 0, length 1 and is the only entry in
+ * the tree).
+ * ma_none means we have searched the tree and there is no node in the
+ * tree for this entry. For example, we searched for index 1 in an empty
+ * tree. Or we have a tree which points to a full leaf node and we
+ * searched for an entry which is larger than can be contained in that
+ * leaf node.
+ * ma_pause means the data within the maple state may be stale, restart the
+ * operation
+ * ma_overflow means the search has reached the upper limit of the search
+ * ma_underflow means the search has reached the lower limit of the search
+ * ma_error means there was an error, check the node for the error number.
+ */
+enum maple_status {
+ ma_active,
+ ma_start,
+ ma_root,
+ ma_none,
+ ma_pause,
+ ma_overflow,
+ ma_underflow,
+ ma_error,
+};
+
/*
* The maple state is defined in the struct ma_state and is used to keep track
* of information during operations, and even between operations when using the
@@ -381,6 +411,13 @@ static inline bool mtree_empty(const struct maple_tree *mt)
* When returning a value the maple state index and last respectively contain
* the start and end of the range for the entry. Ranges are inclusive in the
* Maple Tree.
+ *
+ * The status of the state is used to determine how the next action should treat
+ * the state. For instance, if the status is ma_start then the next action
+ * should start at the root of the tree and walk down. If the status is
+ * ma_pause then the node may be stale data and should be discarded. If the
+ * status is ma_overflow, then the last action hit the upper limit.
+ *
*/
struct ma_state {
struct maple_tree *tree; /* The tree we're operating in */
@@ -390,6 +427,7 @@ struct ma_state {
unsigned long min; /* The minimum index of this node - implied pivot min */
unsigned long max; /* The maximum index of this node - implied pivot max */
struct maple_alloc *alloc; /* Allocated nodes for this operation */
+ enum maple_status status; /* The status of the state (active, start, none, etc) */
unsigned char depth; /* depth of tree descent during write */
unsigned char offset;
unsigned char mas_flags;
@@ -416,28 +454,12 @@ struct ma_wr_state {
spin_lock_nested(&((mas)->tree->ma_lock), subclass)
#define mas_unlock(mas) spin_unlock(&((mas)->tree->ma_lock))
-
/*
* Special values for ma_state.node.
- * MAS_START means we have not searched the tree.
- * MAS_ROOT means we have searched the tree and the entry we found lives in
- * the root of the tree (ie it has index 0, length 1 and is the only entry in
- * the tree).
- * MAS_NONE means we have searched the tree and there is no node in the
- * tree for this entry. For example, we searched for index 1 in an empty
- * tree. Or we have a tree which points to a full leaf node and we
- * searched for an entry which is larger than can be contained in that
- * leaf node.
* MA_ERROR represents an errno. After dropping the lock and attempting
* to resolve the error, the walk would have to be restarted from the
* top of the tree as the tree may have been modified.
*/
-#define MAS_START ((struct maple_enode *)1UL)
-#define MAS_ROOT ((struct maple_enode *)5UL)
-#define MAS_NONE ((struct maple_enode *)9UL)
-#define MAS_PAUSE ((struct maple_enode *)17UL)
-#define MAS_OVERFLOW ((struct maple_enode *)33UL)
-#define MAS_UNDERFLOW ((struct maple_enode *)65UL)
#define MA_ERROR(err) \
((struct maple_enode *)(((unsigned long)err << 2) | 2UL))
@@ -446,7 +468,8 @@ struct ma_wr_state {
.tree = mt, \
.index = first, \
.last = end, \
- .node = MAS_START, \
+ .node = NULL, \
+ .status = ma_start, \
.min = 0, \
.max = ULONG_MAX, \
.alloc = NULL, \
@@ -477,7 +500,6 @@ void *mas_find_range(struct ma_state *mas, unsigned long max);
void *mas_find_rev(struct ma_state *mas, unsigned long min);
void *mas_find_range_rev(struct ma_state *mas, unsigned long max);
int mas_preallocate(struct ma_state *mas, void *entry, gfp_t gfp);
-bool mas_is_err(struct ma_state *mas);
bool mas_nomem(struct ma_state *mas, gfp_t gfp);
void mas_pause(struct ma_state *mas);
@@ -506,28 +528,18 @@ static inline void mas_init(struct ma_state *mas, struct maple_tree *tree,
mas->tree = tree;
mas->index = mas->last = addr;
mas->max = ULONG_MAX;
- mas->node = MAS_START;
+ mas->status = ma_start;
+ mas->node = NULL;
}
-/* Checks if a mas has not found anything */
-static inline bool mas_is_none(const struct ma_state *mas)
-{
- return mas->node == MAS_NONE;
-}
-
-/* Checks if a mas has been paused */
-static inline bool mas_is_paused(const struct ma_state *mas)
+static inline bool mas_is_active(struct ma_state *mas)
{
- return mas->node == MAS_PAUSE;
+ return mas->status == ma_active;
}
-/* Check if the mas is pointing to a node or not */
-static inline bool mas_is_active(struct ma_state *mas)
+static inline bool mas_is_err(struct ma_state *mas)
{
- if ((unsigned long)mas->node >= MAPLE_RESERVED_RANGE)
- return true;
-
- return false;
+ return mas->status == ma_error;
}
/**
@@ -540,9 +552,10 @@ static inline bool mas_is_active(struct ma_state *mas)
*
* Context: Any context.
*/
-static inline void mas_reset(struct ma_state *mas)
+static __always_inline void mas_reset(struct ma_state *mas)
{
- mas->node = MAS_START;
+ mas->status = ma_start;
+ mas->node = NULL;
}
/**
@@ -716,7 +729,7 @@ static inline void __mas_set_range(struct ma_state *mas, unsigned long start,
static inline
void mas_set_range(struct ma_state *mas, unsigned long start, unsigned long last)
{
- mas->node = MAS_START;
+ mas_reset(mas);
__mas_set_range(mas, start, last);
}
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 43c19d85dfe7..e38abf389943 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1041,7 +1041,8 @@ struct vma_iterator {
.mas = { \
.tree = &(__mm)->mm_mt, \
.index = __addr, \
- .node = MAS_START, \
+ .node = NULL, \
+ .status = ma_start, \
}, \
}
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index d1416276f1ef..f7a1c1cc18eb 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -249,40 +249,40 @@ static __always_inline bool mt_is_reserved(const void *entry)
xa_is_internal(entry);
}
-static inline void mas_set_err(struct ma_state *mas, long err)
+static __always_inline void mas_set_err(struct ma_state *mas, long err)
{
mas->node = MA_ERROR(err);
+ mas->status = ma_error;
}
-static inline bool mas_is_ptr(const struct ma_state *mas)
+static __always_inline bool mas_is_ptr(const struct ma_state *mas)
{
- return mas->node == MAS_ROOT;
+ return mas->status == ma_root;
}
-static inline bool mas_is_start(const struct ma_state *mas)
+static __always_inline bool mas_is_start(const struct ma_state *mas)
{
- return mas->node == MAS_START;
+ return mas->status == ma_start;
}
-bool mas_is_err(struct ma_state *mas)
+static __always_inline bool mas_is_none(const struct ma_state *mas)
{
- return xa_is_err(mas->node);
+ return mas->status == ma_none;
}
-static __always_inline bool mas_is_overflow(struct ma_state *mas)
+static __always_inline bool mas_is_paused(const struct ma_state *mas)
{
- if (unlikely(mas->node == MAS_OVERFLOW))
- return true;
-
- return false;
+ return mas->status == ma_pause;
}
-static __always_inline bool mas_is_underflow(struct ma_state *mas)
+static __always_inline bool mas_is_overflow(struct ma_state *mas)
{
- if (unlikely(mas->node == MAS_UNDERFLOW))
- return true;
+ return mas->status == ma_overflow;
+}
- return false;
+static inline bool mas_is_underflow(struct ma_state *mas)
+{
+ return mas->status == ma_underflow;
}
static inline bool mas_searchable(struct ma_state *mas)
@@ -1274,6 +1274,7 @@ static inline void mas_alloc_nodes(struct ma_state *mas, gfp_t gfp)
if (mas->mas_flags & MA_STATE_PREALLOC) {
if (allocated)
return;
+ BUG_ON(!allocated);
WARN_ON(!allocated);
}
@@ -1379,14 +1380,14 @@ static void mas_node_count(struct ma_state *mas, int count)
* mas_start() - Sets up maple state for operations.
* @mas: The maple state.
*
- * If mas->node == MAS_START, then set the min, max and depth to
+ * If mas->status == mas_start, then set the min, max and depth to
* defaults.
*
* Return:
- * - If mas->node is an error or not MAS_START, return NULL.
- * - If it's an empty tree: NULL & mas->node == MAS_NONE
- * - If it's a single entry: The entry & mas->node == MAS_ROOT
- * - If it's a tree: NULL & mas->node == safe root node.
+ * - If mas->node is an error or not mas_start, return NULL.
+ * - If it's an empty tree: NULL & mas->status == ma_none
+ * - If it's a single entry: The entry & mas->status == mas_root
+ * - If it's a tree: NULL & mas->status == safe root node.
*/
static inline struct maple_enode *mas_start(struct ma_state *mas)
{
@@ -1402,6 +1403,7 @@ static inline struct maple_enode *mas_start(struct ma_state *mas)
/* Tree with nodes */
if (likely(xa_is_node(root))) {
mas->depth = 1;
+ mas->status = ma_active;
mas->node = mte_safe_root(root);
mas->offset = 0;
if (mte_dead_node(mas->node))
@@ -1412,13 +1414,14 @@ static inline struct maple_enode *mas_start(struct ma_state *mas)
/* empty tree */
if (unlikely(!root)) {
- mas->node = MAS_NONE;
+ mas->node = NULL;
+ mas->status = ma_none;
mas->offset = MAPLE_NODE_SLOTS;
return NULL;
}
/* Single entry tree */
- mas->node = MAS_ROOT;
+ mas->status = ma_root;
mas->offset = MAPLE_NODE_SLOTS;
/* Single entry tree. */
@@ -2225,19 +2228,21 @@ static inline bool mas_next_sibling(struct ma_state *mas)
}
/*
- * mte_node_or_node() - Return the encoded node or MAS_NONE.
+ * mte_node_or_none() - Set the enode and state.
* @enode: The encoded maple node.
*
- * Shorthand to avoid setting %NULLs in the tree or maple_subtree_state.
- *
- * Return: @enode or MAS_NONE
+ * Set the node to the enode and the status.
*/
-static inline struct maple_enode *mte_node_or_none(struct maple_enode *enode)
+static inline void mas_node_or_none(struct ma_state *mas,
+ struct maple_enode *enode)
{
- if (enode)
- return enode;
-
- return ma_enode_ptr(MAS_NONE);
+ if (enode) {
+ mas->node = enode;
+ mas->status = ma_active;
+ } else {
+ mas->node = NULL;
+ mas->status = ma_none;
+ }
}
/*
@@ -2559,13 +2564,15 @@ static inline void mast_set_split_parents(struct maple_subtree_state *mast,
* The node will either be RCU freed or pushed back on the maple state.
*/
static inline void mas_topiary_node(struct ma_state *mas,
- struct maple_enode *enode, bool in_rcu)
+ struct ma_state *tmp_mas, bool in_rcu)
{
struct maple_node *tmp;
+ struct maple_enode *enode;
- if (enode == MAS_NONE)
+ if (mas_is_none(tmp_mas))
return;
+ enode = tmp_mas->node;
tmp = mte_to_node(enode);
mte_set_node_dead(enode);
if (in_rcu)
@@ -2605,8 +2612,8 @@ static inline void mas_topiary_replace(struct ma_state *mas,
/* Update the parent pointers in the tree */
tmp[0] = *mas;
tmp[0].offset = 0;
- tmp[1].node = MAS_NONE;
- tmp[2].node = MAS_NONE;
+ tmp[1].status = ma_none;
+ tmp[2].status = ma_none;
while (!mte_is_leaf(tmp[0].node)) {
n = 0;
for (i = 0; i < 3; i++) {
@@ -2626,7 +2633,7 @@ static inline void mas_topiary_replace(struct ma_state *mas,
break;
while (n < 3)
- tmp_next[n++].node = MAS_NONE;
+ tmp_next[n++].status = ma_none;
for (i = 0; i < 3; i++)
tmp[i] = tmp_next[i];
@@ -2639,8 +2646,8 @@ static inline void mas_topiary_replace(struct ma_state *mas,
tmp[0] = *mas;
tmp[0].offset = 0;
tmp[0].node = old_enode;
- tmp[1].node = MAS_NONE;
- tmp[2].node = MAS_NONE;
+ tmp[1].status = ma_none;
+ tmp[2].status = ma_none;
in_rcu = mt_in_rcu(mas->tree);
do {
n = 0;
@@ -2655,7 +2662,7 @@ static inline void mas_topiary_replace(struct ma_state *mas,
if ((tmp_next[n].min >= tmp_next->index) &&
(tmp_next[n].max <= tmp_next->last)) {
mat_add(&subtrees, tmp_next[n].node);
- tmp_next[n].node = MAS_NONE;
+ tmp_next[n].status = ma_none;
} else {
n++;
}
@@ -2666,16 +2673,16 @@ static inline void mas_topiary_replace(struct ma_state *mas,
break;
while (n < 3)
- tmp_next[n++].node = MAS_NONE;
+ tmp_next[n++].status = ma_none;
for (i = 0; i < 3; i++) {
- mas_topiary_node(mas, tmp[i].node, in_rcu);
+ mas_topiary_node(mas, &tmp[i], in_rcu);
tmp[i] = tmp_next[i];
}
} while (!mte_is_leaf(tmp[0].node));
for (i = 0; i < 3; i++)
- mas_topiary_node(mas, tmp[i].node, in_rcu);
+ mas_topiary_node(mas, &tmp[i], in_rcu);
mas_mat_destroy(mas, &subtrees);
}
@@ -2714,9 +2721,9 @@ static inline void mast_cp_to_nodes(struct maple_subtree_state *mast,
{
bool new_lmax = true;
- mast->l->node = mte_node_or_none(left);
- mast->m->node = mte_node_or_none(middle);
- mast->r->node = mte_node_or_none(right);
+ mas_node_or_none(mast->l, left);
+ mas_node_or_none(mast->m, middle);
+ mas_node_or_none(mast->r, right);
mast->l->min = mast->orig_l->min;
if (split == mast->bn->b_end) {
@@ -2896,7 +2903,7 @@ static int mas_spanning_rebalance(struct ma_state *mas,
mast->l = &l_mas;
mast->m = &m_mas;
mast->r = &r_mas;
- l_mas.node = r_mas.node = m_mas.node = MAS_NONE;
+ l_mas.status = r_mas.status = m_mas.status = ma_none;
/* Check if this is not root and has sufficient data. */
if (((mast->orig_l->min != 0) || (mast->orig_r->max != ULONG_MAX)) &&
@@ -3423,7 +3430,6 @@ static int mas_split(struct ma_state *mas, struct maple_big_node *b_node)
/* Try to push left. */
if (mas_push_data(mas, height, &mast, true))
break;
-
/* Try to push right. */
if (mas_push_data(mas, height, &mast, false))
break;
@@ -3539,6 +3545,7 @@ static inline int mas_root_expand(struct ma_state *mas, void *entry)
slots = ma_slots(node, type);
node->parent = ma_parent_ptr(mas_tree_parent(mas));
mas->node = mt_mk_node(node, type);
+ mas->status = ma_active;
if (mas->index) {
if (contents) {
@@ -3571,7 +3578,7 @@ static inline void mas_store_root(struct ma_state *mas, void *entry)
mas_root_expand(mas, entry);
else {
rcu_assign_pointer(mas->tree->ma_root, entry);
- mas->node = MAS_START;
+ mas->status = ma_start;
}
}
@@ -3801,7 +3808,7 @@ static inline int mas_new_root(struct ma_state *mas, void *entry)
mas->depth = 0;
mas_set_height(mas);
rcu_assign_pointer(mas->tree->ma_root, entry);
- mas->node = MAS_START;
+ mas->status = ma_start;
goto done;
}
@@ -3814,6 +3821,7 @@ static inline int mas_new_root(struct ma_state *mas, void *entry)
slots = ma_slots(node, type);
node->parent = ma_parent_ptr(mas_tree_parent(mas));
mas->node = mt_mk_node(node, type);
+ mas->status = ma_active;
rcu_assign_pointer(slots[0], entry);
pivots[0] = mas->last;
mas->depth = 1;
@@ -4367,11 +4375,13 @@ static __always_inline bool mas_rewalk_if_dead(struct ma_state *mas,
/*
* mas_prev_node() - Find the prev non-null entry at the same level in the
- * tree. The prev value will be mas->node[mas->offset] or MAS_NONE.
+ * tree. The prev value will be mas->node[mas->offset] or the status will be
+ * ma_none.
* @mas: The maple state
* @min: The lower limit to search
*
- * The prev node value will be mas->node[mas->offset] or MAS_NONE.
+ * The prev node value will be mas->node[mas->offset] or the status will be
+ * ma_none.
* Return: 1 if the node is dead, 0 otherwise.
*/
static int mas_prev_node(struct ma_state *mas, unsigned long min)
@@ -4441,7 +4451,7 @@ static int mas_prev_node(struct ma_state *mas, unsigned long min)
if (unlikely(ma_dead_node(node)))
return 1;
- mas->node = MAS_NONE;
+ mas->status = ma_underflow;
return 0;
}
@@ -4455,8 +4465,7 @@ static int mas_prev_node(struct ma_state *mas, unsigned long min)
*
* Return: The entry in the previous slot which is possibly NULL
*/
-static void *mas_prev_slot(struct ma_state *mas, unsigned long min, bool empty,
- bool set_underflow)
+static void *mas_prev_slot(struct ma_state *mas, unsigned long min, bool empty)
{
void *entry;
void __rcu **slots;
@@ -4489,13 +4498,16 @@ static void *mas_prev_slot(struct ma_state *mas, unsigned long min, bool empty,
mas->last = mas->index - 1;
mas->index = mas_safe_min(mas, pivots, mas->offset);
} else {
+ if (mas->index <= min)
+ goto underflow;
+
if (mas_prev_node(mas, min)) {
mas_rewalk(mas, save_point);
goto retry;
}
- if (mas_is_none(mas))
- goto underflow;
+ if (WARN_ON_ONCE(mas_is_underflow(mas)))
+ return NULL;
mas->last = mas->max;
node = mas_mn(mas);
@@ -4509,12 +4521,15 @@ static void *mas_prev_slot(struct ma_state *mas, unsigned long min, bool empty,
if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
goto retry;
+
if (likely(entry))
return entry;
if (!empty) {
- if (mas->index <= min)
- goto underflow;
+ if (mas->index <= min) {
+ mas->status = ma_underflow;
+ return NULL;
+ }
goto again;
}
@@ -4522,8 +4537,7 @@ static void *mas_prev_slot(struct ma_state *mas, unsigned long min, bool empty,
return entry;
underflow:
- if (set_underflow)
- mas->node = MAS_UNDERFLOW;
+ mas->status = ma_underflow;
return NULL;
}
@@ -4532,7 +4546,8 @@ static void *mas_prev_slot(struct ma_state *mas, unsigned long min, bool empty,
* @mas: The maple state
* @max: The maximum pivot value to check.
*
- * The next value will be mas->node[mas->offset] or MAS_NONE.
+ * The next value will be mas->node[mas->offset] or the status will have
+ * overflowed.
* Return: 1 on dead node, 0 otherwise.
*/
static int mas_next_node(struct ma_state *mas, struct maple_node *node,
@@ -4548,13 +4563,13 @@ static int mas_next_node(struct ma_state *mas, struct maple_node *node,
void __rcu **slots;
if (mas->max >= max)
- goto no_entry;
+ goto overflow;
min = mas->max + 1;
level = 0;
do {
if (ma_is_root(node))
- goto no_entry;
+ goto overflow;
/* Walk up. */
if (unlikely(mas_ascend(mas)))
@@ -4605,11 +4620,11 @@ static int mas_next_node(struct ma_state *mas, struct maple_node *node,
mas->min = min;
return 0;
-no_entry:
+overflow:
if (unlikely(ma_dead_node(node)))
return 1;
- mas->node = MAS_NONE;
+ mas->status = ma_overflow;
return 0;
}
@@ -4624,8 +4639,7 @@ static int mas_next_node(struct ma_state *mas, struct maple_node *node,
*
* Return: The entry in the next slot which is possibly NULL
*/
-static void *mas_next_slot(struct ma_state *mas, unsigned long max, bool empty,
- bool set_overflow)
+static void *mas_next_slot(struct ma_state *mas, unsigned long max, bool empty)
{
void __rcu **slots;
unsigned long *pivots;
@@ -4646,13 +4660,15 @@ static void *mas_next_slot(struct ma_state *mas, unsigned long max, bool empty,
if (likely(mas->offset < mas->end))
pivot = pivots[mas->offset];
else
- goto overflow;
+ pivot = mas->max;
if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
goto retry;
- if (pivot >= max)
- goto overflow;
+ if (pivot >= max) { /* Was at the limit, next will extend beyond */
+ mas->status = ma_overflow;
+ return NULL;
+ }
}
if (likely(mas->offset < mas->end)) {
@@ -4664,16 +4680,18 @@ static void *mas_next_slot(struct ma_state *mas, unsigned long max, bool empty,
else
mas->last = mas->max;
} else {
+ if (mas->last >= max) {
+ mas->status = ma_overflow;
+ return NULL;
+ }
+
if (mas_next_node(mas, node, max)) {
mas_rewalk(mas, save_point);
goto retry;
}
- if (WARN_ON_ONCE(mas_is_none(mas))) {
- mas->node = MAS_OVERFLOW;
+ if (WARN_ON_ONCE(mas_is_overflow(mas)))
return NULL;
- goto overflow;
- }
mas->offset = 0;
mas->index = mas->min;
@@ -4691,20 +4709,18 @@ static void *mas_next_slot(struct ma_state *mas, unsigned long max, bool empty,
if (entry)
return entry;
+
if (!empty) {
- if (mas->last >= max)
- goto overflow;
+ if (mas->last >= max) {
+ mas->status = ma_overflow;
+ return NULL;
+ }
mas->index = mas->last + 1;
goto again;
}
return entry;
-
-overflow:
- if (set_overflow)
- mas->node = MAS_OVERFLOW;
- return NULL;
}
/*
@@ -4723,11 +4739,11 @@ static void *mas_next_slot(struct ma_state *mas, unsigned long max, bool empty,
static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
{
if (mas->last >= limit) {
- mas->node = MAS_OVERFLOW;
+ mas->status = ma_overflow;
return NULL;
}
- return mas_next_slot(mas, limit, false, true);
+ return mas_next_slot(mas, limit, false);
}
/*
@@ -4895,7 +4911,7 @@ static inline bool mas_anode_descend(struct ma_state *mas, unsigned long size)
* @mas: The maple state.
*
* mas->index and mas->last will be set to the range if there is a value. If
- * mas->node is MAS_NONE, reset to MAS_START.
+ * mas->status is ma_none, reset to ma_start
*
* Return: the entry at the location or %NULL.
*/
@@ -4904,7 +4920,7 @@ void *mas_walk(struct ma_state *mas)
void *entry;
if (!mas_is_active(mas) || !mas_is_start(mas))
- mas->node = MAS_START;
+ mas->status = ma_start;
retry:
entry = mas_state_walk(mas);
if (mas_is_start(mas)) {
@@ -4920,7 +4936,7 @@ void *mas_walk(struct ma_state *mas)
mas->index = 1;
mas->last = ULONG_MAX;
- mas->node = MAS_NONE;
+ mas->status = ma_none;
return NULL;
}
@@ -5683,27 +5699,40 @@ static bool mas_next_setup(struct ma_state *mas, unsigned long max,
bool was_none = mas_is_none(mas);
if (unlikely(mas->last >= max)) {
- mas->node = MAS_OVERFLOW;
+ mas->status = ma_overflow;
return true;
}
- if (mas_is_active(mas))
+ switch (mas->status) {
+ case ma_active:
return false;
-
- if (mas_is_none(mas) || mas_is_paused(mas)) {
- mas->node = MAS_START;
- } else if (mas_is_overflow(mas)) {
+ case ma_none:
+ fallthrough;
+ case ma_pause:
+ mas->status = ma_start;
+ fallthrough;
+ case ma_start:
+ mas_walk(mas); /* Retries on dead nodes handled by mas_walk */
+ break;
+ case ma_overflow:
/* Overflowed before, but the max changed */
- mas->node = MAS_START;
- } else if (mas_is_underflow(mas)) {
- mas->node = MAS_START;
+ mas->status = ma_active;
+ break;
+ case ma_underflow:
+ /* The user expects the mas to be one before where it is */
+ mas->status = ma_active;
*entry = mas_walk(mas);
if (*entry)
return true;
+ break;
+ case ma_root:
+ break;
+ case ma_error:
+ return true;
}
- if (mas_is_start(mas))
- *entry = mas_walk(mas); /* Retries on dead nodes handled by mas_walk */
+ if (likely(mas_is_active(mas))) /* Fast path */
+ return false;
if (mas_is_ptr(mas)) {
*entry = NULL;
@@ -5713,7 +5742,7 @@ static bool mas_next_setup(struct ma_state *mas, unsigned long max,
}
mas->index = 1;
mas->last = ULONG_MAX;
- mas->node = MAS_NONE;
+ mas->status = ma_none;
return true;
}
@@ -5742,7 +5771,7 @@ void *mas_next(struct ma_state *mas, unsigned long max)
return entry;
/* Retries on dead nodes handled by mas_next_slot */
- return mas_next_slot(mas, max, false, true);
+ return mas_next_slot(mas, max, false);
}
EXPORT_SYMBOL_GPL(mas_next);
@@ -5765,7 +5794,7 @@ void *mas_next_range(struct ma_state *mas, unsigned long max)
return entry;
/* Retries on dead nodes handled by mas_next_slot */
- return mas_next_slot(mas, max, true, true);
+ return mas_next_slot(mas, max, true);
}
EXPORT_SYMBOL_GPL(mas_next_range);
@@ -5796,33 +5825,45 @@ EXPORT_SYMBOL_GPL(mt_next);
static bool mas_prev_setup(struct ma_state *mas, unsigned long min, void **entry)
{
if (unlikely(mas->index <= min)) {
- mas->node = MAS_UNDERFLOW;
+ mas->status = ma_underflow;
return true;
}
- if (mas_is_active(mas))
+ switch (mas->status) {
+ case ma_active:
return false;
-
- if (mas_is_overflow(mas)) {
- mas->node = MAS_START;
+ case ma_start:
+ break;
+ case ma_none:
+ fallthrough;
+ case ma_pause:
+ mas->status = ma_start;
+ break;
+ case ma_underflow:
+ /* underflowed before but the min changed */
+ mas->status = ma_active;
+ break;
+ case ma_overflow:
+ /* User expects mas to be one after where it is */
+ mas->status = ma_active;
*entry = mas_walk(mas);
if (*entry)
return true;
- }
-
- if (mas_is_none(mas) || mas_is_paused(mas)) {
- mas->node = MAS_START;
- } else if (mas_is_underflow(mas)) {
- /* underflowed before but the min changed */
- mas->node = MAS_START;
+ break;
+ case ma_root:
+ break;
+ case ma_error:
+ return true;
}
if (mas_is_start(mas))
mas_walk(mas);
if (unlikely(mas_is_ptr(mas))) {
- if (!mas->index)
- goto none;
+ if (!mas->index) {
+ mas->status = ma_none;
+ return true;
+ }
mas->index = mas->last = 0;
*entry = mas_root(mas);
return true;
@@ -5832,7 +5873,7 @@ static bool mas_prev_setup(struct ma_state *mas, unsigned long min, void **entry
if (mas->index) {
/* Walked to out-of-range pointer? */
mas->index = mas->last = 0;
- mas->node = MAS_ROOT;
+ mas->status = ma_root;
*entry = mas_root(mas);
return true;
}
@@ -5840,10 +5881,6 @@ static bool mas_prev_setup(struct ma_state *mas, unsigned long min, void **entry
}
return false;
-
-none:
- mas->node = MAS_NONE;
- return true;
}
/**
@@ -5852,7 +5889,7 @@ static bool mas_prev_setup(struct ma_state *mas, unsigned long min, void **entry
* @min: The minimum value to check.
*
* Must hold rcu_read_lock or the write lock.
- * Will reset mas to MAS_START if the node is MAS_NONE. Will stop on not
+ * Will reset mas to ma_start if the status is ma_none. Will stop on not
* searchable nodes.
*
* Return: the previous value or %NULL.
@@ -5864,7 +5901,7 @@ void *mas_prev(struct ma_state *mas, unsigned long min)
if (mas_prev_setup(mas, min, &entry))
return entry;
- return mas_prev_slot(mas, min, false, true);
+ return mas_prev_slot(mas, min, false);
}
EXPORT_SYMBOL_GPL(mas_prev);
@@ -5875,7 +5912,7 @@ EXPORT_SYMBOL_GPL(mas_prev);
*
* Sets @mas->index and @mas->last to the range.
* Must hold rcu_read_lock or the write lock.
- * Will reset mas to MAS_START if the node is MAS_NONE. Will stop on not
+ * Will reset mas to ma_start if the node is ma_none. Will stop on not
* searchable nodes.
*
* Return: the previous value or %NULL.
@@ -5887,7 +5924,7 @@ void *mas_prev_range(struct ma_state *mas, unsigned long min)
if (mas_prev_setup(mas, min, &entry))
return entry;
- return mas_prev_slot(mas, min, true, true);
+ return mas_prev_slot(mas, min, true);
}
EXPORT_SYMBOL_GPL(mas_prev_range);
@@ -5930,7 +5967,8 @@ EXPORT_SYMBOL_GPL(mt_prev);
*/
void mas_pause(struct ma_state *mas)
{
- mas->node = MAS_PAUSE;
+ mas->status = ma_pause;
+ mas->node = NULL;
}
EXPORT_SYMBOL_GPL(mas_pause);
@@ -5944,32 +5982,52 @@ EXPORT_SYMBOL_GPL(mas_pause);
*/
static __always_inline bool mas_find_setup(struct ma_state *mas, unsigned long max, void **entry)
{
- if (mas_is_active(mas)) {
+ switch (mas->status) {
+ case ma_active:
if (mas->last < max)
return false;
-
return true;
- }
-
- if (mas_is_paused(mas)) {
+ case ma_start:
+ break;
+ case ma_pause:
if (unlikely(mas->last >= max))
return true;
mas->index = ++mas->last;
- mas->node = MAS_START;
- } else if (mas_is_none(mas)) {
+ mas->status = ma_start;
+ break;
+ case ma_none:
if (unlikely(mas->last >= max))
return true;
mas->index = mas->last;
- mas->node = MAS_START;
- } else if (mas_is_overflow(mas) || mas_is_underflow(mas)) {
- if (mas->index > max) {
- mas->node = MAS_OVERFLOW;
+ mas->status = ma_start;
+ break;
+ case ma_underflow:
+ /* mas is pointing at entry before unable to go lower */
+ if (unlikely(mas->index >= max)) {
+ mas->status = ma_overflow;
return true;
}
- mas->node = MAS_START;
+ mas->status = ma_active;
+ *entry = mas_walk(mas);
+ if (*entry)
+ return true;
+ break;
+ case ma_overflow:
+ if (unlikely(mas->last >= max))
+ return true;
+
+ mas->status = ma_active;
+ *entry = mas_walk(mas);
+ if (*entry)
+ return true;
+ break;
+ case ma_root:
+ break;
+ case ma_error:
+ return true;
}
if (mas_is_start(mas)) {
@@ -5996,7 +6054,7 @@ static __always_inline bool mas_find_setup(struct ma_state *mas, unsigned long m
return false;
ptr_out_of_range:
- mas->node = MAS_NONE;
+ mas->status = ma_none;
mas->index = 1;
mas->last = ULONG_MAX;
return true;
@@ -6010,7 +6068,7 @@ static __always_inline bool mas_find_setup(struct ma_state *mas, unsigned long m
*
* Must hold rcu_read_lock or the write lock.
* If an entry exists, last and index are updated accordingly.
- * May set @mas->node to MAS_NONE.
+ * May set @mas->status to ma_overflow.
*
* Return: The entry or %NULL.
*/
@@ -6022,7 +6080,10 @@ void *mas_find(struct ma_state *mas, unsigned long max)
return entry;
/* Retries on dead nodes handled by mas_next_slot */
- return mas_next_slot(mas, max, false, false);
+ entry = mas_next_slot(mas, max, false);
+ /* Ignore overflow */
+ mas->status = ma_active;
+ return entry;
}
EXPORT_SYMBOL_GPL(mas_find);
@@ -6034,7 +6095,7 @@ EXPORT_SYMBOL_GPL(mas_find);
*
* Must hold rcu_read_lock or the write lock.
* If an entry exists, last and index are updated accordingly.
- * May set @mas->node to MAS_NONE.
+ * May set @mas->status to ma_overflow.
*
* Return: The entry or %NULL.
*/
@@ -6046,7 +6107,7 @@ void *mas_find_range(struct ma_state *mas, unsigned long max)
return entry;
/* Retries on dead nodes handled by mas_next_slot */
- return mas_next_slot(mas, max, true, false);
+ return mas_next_slot(mas, max, true);
}
EXPORT_SYMBOL_GPL(mas_find_range);
@@ -6061,33 +6122,45 @@ EXPORT_SYMBOL_GPL(mas_find_range);
static bool mas_find_rev_setup(struct ma_state *mas, unsigned long min,
void **entry)
{
- if (mas_is_active(mas)) {
- if (mas->index > min)
- return false;
-
- return true;
- }
- if (mas_is_paused(mas)) {
+ switch (mas->status) {
+ case ma_active:
+ goto active;
+ case ma_start:
+ break;
+ case ma_pause:
if (unlikely(mas->index <= min)) {
- mas->node = MAS_NONE;
+ mas->status = ma_underflow;
return true;
}
- mas->node = MAS_START;
mas->last = --mas->index;
- } else if (mas_is_none(mas)) {
+ mas->status = ma_start;
+ break;
+ case ma_none:
if (mas->index <= min)
goto none;
mas->last = mas->index;
- mas->node = MAS_START;
- } else if (mas_is_underflow(mas) || mas_is_overflow(mas)) {
- if (mas->last <= min) {
- mas->node = MAS_UNDERFLOW;
+ mas->status = ma_start;
+ break;
+ case ma_overflow: /* user expects the mas to be one after where it is */
+ if (unlikely(mas->index <= min)) {
+ mas->status = ma_underflow;
return true;
}
- mas->node = MAS_START;
+ mas->status = ma_active;
+ break;
+ case ma_underflow: /* user expects the mas to be one before where it is */
+ if (unlikely(mas->index <= min))
+ return true;
+
+ mas->status = ma_active;
+ break;
+ case ma_root:
+ break;
+ case ma_error:
+ return true;
}
if (mas_is_start(mas)) {
@@ -6110,19 +6183,20 @@ static bool mas_find_rev_setup(struct ma_state *mas, unsigned long min,
* previous location is 0.
*/
mas->last = mas->index = 0;
- mas->node = MAS_ROOT;
+ mas->status = ma_root;
*entry = mas_root(mas);
return true;
}
}
+active:
if (mas->index < min)
return true;
return false;
none:
- mas->node = MAS_NONE;
+ mas->status = ma_none;
return true;
}
@@ -6135,7 +6209,7 @@ static bool mas_find_rev_setup(struct ma_state *mas, unsigned long min,
*
* Must hold rcu_read_lock or the write lock.
* If an entry exists, last and index are updated accordingly.
- * May set @mas->node to MAS_NONE.
+ * May set @mas->status to ma_underflow.
*
* Return: The entry or %NULL.
*/
@@ -6147,7 +6221,7 @@ void *mas_find_rev(struct ma_state *mas, unsigned long min)
return entry;
/* Retries on dead nodes handled by mas_prev_slot */
- return mas_prev_slot(mas, min, false, false);
+ return mas_prev_slot(mas, min, false);
}
EXPORT_SYMBOL_GPL(mas_find_rev);
@@ -6161,7 +6235,7 @@ EXPORT_SYMBOL_GPL(mas_find_rev);
*
* Must hold rcu_read_lock or the write lock.
* If an entry exists, last and index are updated accordingly.
- * May set @mas->node to MAS_NONE.
+ * May set @mas->status to ma_underflow.
*
* Return: The entry or %NULL.
*/
@@ -6173,7 +6247,7 @@ void *mas_find_range_rev(struct ma_state *mas, unsigned long min)
return entry;
/* Retries on dead nodes handled by mas_prev_slot */
- return mas_prev_slot(mas, min, true, false);
+ return mas_prev_slot(mas, min, true);
}
EXPORT_SYMBOL_GPL(mas_find_range_rev);
@@ -6194,7 +6268,7 @@ void *mas_erase(struct ma_state *mas)
MA_WR_STATE(wr_mas, mas, NULL);
if (!mas_is_active(mas) || !mas_is_start(mas))
- mas->node = MAS_START;
+ mas->status = ma_start;
/* Retry unnecessary when holding the write lock. */
entry = mas_state_walk(mas);
@@ -6239,7 +6313,7 @@ bool mas_nomem(struct ma_state *mas, gfp_t gfp)
if (!mas_allocated(mas))
return false;
- mas->node = MAS_START;
+ mas->status = ma_start;
return true;
}
@@ -6638,7 +6712,7 @@ static inline void mas_dup_build(struct ma_state *mas, struct ma_state *new_mas,
node = mt_alloc_one(gfp);
if (!node) {
- new_mas->node = MAS_NONE;
+ new_mas->status = ma_none;
mas_set_err(mas, -ENOMEM);
return;
}
@@ -6982,11 +7056,11 @@ static inline struct maple_enode *mas_get_slot(struct ma_state *mas,
static void mas_dfs_postorder(struct ma_state *mas, unsigned long max)
{
- struct maple_enode *p = MAS_NONE, *mn = mas->node;
+ struct maple_enode *p, *mn = mas->node;
unsigned long p_min, p_max;
mas_next_node(mas, mas_mn(mas), max);
- if (!mas_is_none(mas))
+ if (!mas_is_overflow(mas))
return;
if (mte_is_root(mn))
@@ -6999,7 +7073,7 @@ static void mas_dfs_postorder(struct ma_state *mas, unsigned long max)
p_min = mas->min;
p_max = mas->max;
mas_prev_node(mas, 0);
- } while (!mas_is_none(mas));
+ } while (!mas_is_underflow(mas));
mas->node = p;
mas->max = p_max;
@@ -7454,7 +7528,7 @@ static void mt_validate_nulls(struct maple_tree *mt)
MA_STATE(mas, mt, 0, 0);
mas_start(&mas);
- if (mas_is_none(&mas) || (mas.node == MAS_ROOT))
+ if (mas_is_none(&mas) || (mas_is_ptr(&mas)))
return;
while (!mte_is_leaf(mas.node))
@@ -7471,7 +7545,7 @@ static void mt_validate_nulls(struct maple_tree *mt)
last = entry;
if (offset == mas_data_end(&mas)) {
mas_next_node(&mas, mas_mn(&mas), ULONG_MAX);
- if (mas_is_none(&mas))
+ if (mas_is_overflow(&mas))
return;
offset = 0;
slots = ma_slots(mte_to_node(mas.node),
@@ -7480,7 +7554,7 @@ static void mt_validate_nulls(struct maple_tree *mt)
offset++;
}
- } while (!mas_is_none(&mas));
+ } while (!mas_is_overflow(&mas));
}
/*
@@ -7501,7 +7575,7 @@ void mt_validate(struct maple_tree *mt)
while (!mte_is_leaf(mas.node))
mas_descend(&mas);
- while (!mas_is_none(&mas)) {
+ while (!mas_is_overflow(&mas)) {
MAS_WARN_ON(&mas, mte_dead_node(mas.node));
end = mas_data_end(&mas);
if (MAS_WARN_ON(&mas, (end < mt_min_slot_count(mas.node)) &&
@@ -7526,16 +7600,35 @@ EXPORT_SYMBOL_GPL(mt_validate);
void mas_dump(const struct ma_state *mas)
{
pr_err("MAS: tree=%p enode=%p ", mas->tree, mas->node);
- if (mas_is_none(mas))
- pr_err("(MAS_NONE) ");
- else if (mas_is_ptr(mas))
- pr_err("(MAS_ROOT) ");
- else if (mas_is_start(mas))
- pr_err("(MAS_START) ");
- else if (mas_is_paused(mas))
- pr_err("(MAS_PAUSED) ");
-
- pr_err("[%u] index=%lx last=%lx\n", mas->offset, mas->index, mas->last);
+ switch (mas->status) {
+ case ma_active:
+ pr_err("(ma_active)");
+ break;
+ case ma_none:
+ pr_err("(ma_none)");
+ break;
+ case ma_root:
+ pr_err("(ma_root)");
+ break;
+ case ma_start:
+ pr_err("(ma_start) ");
+ break;
+ case ma_pause:
+ pr_err("(ma_pause) ");
+ break;
+ case ma_overflow:
+ pr_err("(ma_overflow) ");
+ break;
+ case ma_underflow:
+ pr_err("(ma_underflow) ");
+ break;
+ case ma_error:
+ pr_err("(ma_error) ");
+ break;
+ }
+
+ pr_err("[%u/%u] index=%lx last=%lx\n", mas->offset, mas->end,
+ mas->index, mas->last);
pr_err(" min=%lx max=%lx alloc=%p, depth=%u, flags=%x\n",
mas->min, mas->max, mas->alloc, mas->depth, mas->mas_flags);
if (mas->index > mas->last)
diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index de470950714f..f9acc6ef0728 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -54,6 +54,11 @@ atomic_t maple_tree_tests_passed;
#else
#define cond_resched() do {} while (0)
#endif
+
+#define mas_is_none(x) ((x)->status == ma_none)
+#define mas_is_overflow(x) ((x)->status == ma_overflow)
+#define mas_is_underflow(x) ((x)->status == ma_underflow)
+
static int __init mtree_insert_index(struct maple_tree *mt,
unsigned long index, gfp_t gfp)
{
@@ -582,7 +587,7 @@ static noinline void __init check_find(struct maple_tree *mt)
MT_BUG_ON(mt, last != mas.last);
- mas.node = MAS_NONE;
+ mas.status = ma_none;
mas.index = ULONG_MAX;
mas.last = ULONG_MAX;
entry2 = mas_prev(&mas, 0);
@@ -2175,7 +2180,7 @@ static noinline void __init next_prev_test(struct maple_tree *mt)
MT_BUG_ON(mt, val != NULL);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 5);
- MT_BUG_ON(mt, mas.node != MAS_UNDERFLOW);
+ MT_BUG_ON(mt, !mas_is_underflow(&mas));
mas.index = 0;
mas.last = 5;
@@ -3039,10 +3044,6 @@ static noinline void __init check_empty_area_fill(struct maple_tree *mt)
* DNE active active range of NULL
*/
-#define mas_active(x) (((x).node != MAS_ROOT) && \
- ((x).node != MAS_START) && \
- ((x).node != MAS_PAUSE) && \
- ((x).node != MAS_NONE))
static noinline void __init check_state_handling(struct maple_tree *mt)
{
MA_STATE(mas, mt, 0, 0);
@@ -3057,7 +3058,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
/* prev: Start -> underflow*/
entry = mas_prev(&mas, 0);
MT_BUG_ON(mt, entry != NULL);
- MT_BUG_ON(mt, mas.node != MAS_UNDERFLOW);
+ MT_BUG_ON(mt, mas.status != ma_underflow);
/* prev: Start -> root */
mas_set(&mas, 10);
@@ -3065,7 +3066,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0);
- MT_BUG_ON(mt, mas.node != MAS_ROOT);
+ MT_BUG_ON(mt, mas.status != ma_root);
/* prev: pause -> root */
mas_set(&mas, 10);
@@ -3074,7 +3075,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0);
- MT_BUG_ON(mt, mas.node != MAS_ROOT);
+ MT_BUG_ON(mt, mas.status != ma_root);
/* next: start -> none */
mas_set(&mas, 0);
@@ -3082,7 +3083,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, mas.index != 1);
MT_BUG_ON(mt, mas.last != ULONG_MAX);
MT_BUG_ON(mt, entry != NULL);
- MT_BUG_ON(mt, mas.node != MAS_NONE);
+ MT_BUG_ON(mt, mas.status != ma_none);
/* next: start -> none*/
mas_set(&mas, 10);
@@ -3090,7 +3091,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, mas.index != 1);
MT_BUG_ON(mt, mas.last != ULONG_MAX);
MT_BUG_ON(mt, entry != NULL);
- MT_BUG_ON(mt, mas.node != MAS_NONE);
+ MT_BUG_ON(mt, mas.status != ma_none);
/* find: start -> root */
mas_set(&mas, 0);
@@ -3098,21 +3099,21 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0);
- MT_BUG_ON(mt, mas.node != MAS_ROOT);
+ MT_BUG_ON(mt, mas.status != ma_root);
/* find: root -> none */
entry = mas_find(&mas, ULONG_MAX);
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 1);
MT_BUG_ON(mt, mas.last != ULONG_MAX);
- MT_BUG_ON(mt, mas.node != MAS_NONE);
+ MT_BUG_ON(mt, mas.status != ma_none);
/* find: none -> none */
entry = mas_find(&mas, ULONG_MAX);
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 1);
MT_BUG_ON(mt, mas.last != ULONG_MAX);
- MT_BUG_ON(mt, mas.node != MAS_NONE);
+ MT_BUG_ON(mt, mas.status != ma_none);
/* find: start -> none */
mas_set(&mas, 10);
@@ -3120,14 +3121,14 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 1);
MT_BUG_ON(mt, mas.last != ULONG_MAX);
- MT_BUG_ON(mt, mas.node != MAS_NONE);
+ MT_BUG_ON(mt, mas.status != ma_none);
/* find_rev: none -> root */
entry = mas_find_rev(&mas, 0);
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0);
- MT_BUG_ON(mt, mas.node != MAS_ROOT);
+ MT_BUG_ON(mt, mas.status != ma_root);
/* find_rev: start -> root */
mas_set(&mas, 0);
@@ -3135,21 +3136,21 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0);
- MT_BUG_ON(mt, mas.node != MAS_ROOT);
+ MT_BUG_ON(mt, mas.status != ma_root);
/* find_rev: root -> none */
entry = mas_find_rev(&mas, 0);
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0);
- MT_BUG_ON(mt, mas.node != MAS_NONE);
+ MT_BUG_ON(mt, mas.status != ma_none);
/* find_rev: none -> none */
entry = mas_find_rev(&mas, 0);
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0);
- MT_BUG_ON(mt, mas.node != MAS_NONE);
+ MT_BUG_ON(mt, mas.status != ma_none);
/* find_rev: start -> root */
mas_set(&mas, 10);
@@ -3157,7 +3158,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0);
- MT_BUG_ON(mt, mas.node != MAS_ROOT);
+ MT_BUG_ON(mt, mas.status != ma_root);
/* walk: start -> none */
mas_set(&mas, 10);
@@ -3165,7 +3166,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 1);
MT_BUG_ON(mt, mas.last != ULONG_MAX);
- MT_BUG_ON(mt, mas.node != MAS_NONE);
+ MT_BUG_ON(mt, mas.status != ma_none);
/* walk: pause -> none*/
mas_set(&mas, 10);
@@ -3174,7 +3175,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 1);
MT_BUG_ON(mt, mas.last != ULONG_MAX);
- MT_BUG_ON(mt, mas.node != MAS_NONE);
+ MT_BUG_ON(mt, mas.status != ma_none);
/* walk: none -> none */
mas.index = mas.last = 10;
@@ -3182,14 +3183,14 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 1);
MT_BUG_ON(mt, mas.last != ULONG_MAX);
- MT_BUG_ON(mt, mas.node != MAS_NONE);
+ MT_BUG_ON(mt, mas.status != ma_none);
/* walk: none -> none */
entry = mas_walk(&mas);
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 1);
MT_BUG_ON(mt, mas.last != ULONG_MAX);
- MT_BUG_ON(mt, mas.node != MAS_NONE);
+ MT_BUG_ON(mt, mas.status != ma_none);
/* walk: start -> root */
mas_set(&mas, 0);
@@ -3197,7 +3198,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0);
- MT_BUG_ON(mt, mas.node != MAS_ROOT);
+ MT_BUG_ON(mt, mas.status != ma_root);
/* walk: pause -> root */
mas_set(&mas, 0);
@@ -3206,22 +3207,22 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0);
- MT_BUG_ON(mt, mas.node != MAS_ROOT);
+ MT_BUG_ON(mt, mas.status != ma_root);
/* walk: none -> root */
- mas.node = MAS_NONE;
+ mas.status = ma_none;
entry = mas_walk(&mas);
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0);
- MT_BUG_ON(mt, mas.node != MAS_ROOT);
+ MT_BUG_ON(mt, mas.status != ma_root);
/* walk: root -> root */
entry = mas_walk(&mas);
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0);
- MT_BUG_ON(mt, mas.node != MAS_ROOT);
+ MT_BUG_ON(mt, mas.status != ma_root);
/* walk: root -> none */
mas_set(&mas, 10);
@@ -3229,7 +3230,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 1);
MT_BUG_ON(mt, mas.last != ULONG_MAX);
- MT_BUG_ON(mt, mas.node != MAS_NONE);
+ MT_BUG_ON(mt, mas.status != ma_none);
/* walk: none -> root */
mas.index = mas.last = 0;
@@ -3237,7 +3238,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0);
- MT_BUG_ON(mt, mas.node != MAS_ROOT);
+ MT_BUG_ON(mt, mas.status != ma_root);
mas_unlock(&mas);
@@ -3255,7 +3256,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* next: pause ->active */
mas_set(&mas, 0);
@@ -3264,126 +3265,132 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* next: none ->active */
mas.index = mas.last = 0;
mas.offset = 0;
- mas.node = MAS_NONE;
+ mas.status = ma_none;
entry = mas_next(&mas, ULONG_MAX);
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
- /* next:active ->active */
- entry = mas_next(&mas, ULONG_MAX);
+ /* next:active ->active (spanning limit) */
+ entry = mas_next(&mas, 0x2100);
MT_BUG_ON(mt, entry != ptr2);
MT_BUG_ON(mt, mas.index != 0x2000);
MT_BUG_ON(mt, mas.last != 0x2500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
- /* next:active -> active beyond data */
+ /* next:active -> overflow (limit reached) beyond data */
entry = mas_next(&mas, 0x2999);
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0x2501);
MT_BUG_ON(mt, mas.last != 0x2fff);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_overflow(&mas));
- /* Continue after last range ends after max */
+ /* next:overflow -> active (limit changed) */
entry = mas_next(&mas, ULONG_MAX);
MT_BUG_ON(mt, entry != ptr3);
MT_BUG_ON(mt, mas.index != 0x3000);
MT_BUG_ON(mt, mas.last != 0x3500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
- /* next:active -> active continued */
+ /* next:active -> overflow (limit reached) */
entry = mas_next(&mas, ULONG_MAX);
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0x3501);
MT_BUG_ON(mt, mas.last != ULONG_MAX);
- MT_BUG_ON(mt, !mas_active(mas));
-
- /* next:active -> overflow */
- entry = mas_next(&mas, ULONG_MAX);
- MT_BUG_ON(mt, entry != NULL);
- MT_BUG_ON(mt, mas.index != 0x3501);
- MT_BUG_ON(mt, mas.last != ULONG_MAX);
- MT_BUG_ON(mt, mas.node != MAS_OVERFLOW);
+ MT_BUG_ON(mt, !mas_is_overflow(&mas));
/* next:overflow -> overflow */
entry = mas_next(&mas, ULONG_MAX);
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0x3501);
MT_BUG_ON(mt, mas.last != ULONG_MAX);
- MT_BUG_ON(mt, mas.node != MAS_OVERFLOW);
+ MT_BUG_ON(mt, !mas_is_overflow(&mas));
/* prev:overflow -> active */
entry = mas_prev(&mas, 0);
MT_BUG_ON(mt, entry != ptr3);
MT_BUG_ON(mt, mas.index != 0x3000);
MT_BUG_ON(mt, mas.last != 0x3500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* next: none -> active, skip value at location */
mas_set(&mas, 0);
entry = mas_next(&mas, ULONG_MAX);
- mas.node = MAS_NONE;
+ mas.status = ma_none;
mas.offset = 0;
entry = mas_next(&mas, ULONG_MAX);
MT_BUG_ON(mt, entry != ptr2);
MT_BUG_ON(mt, mas.index != 0x2000);
MT_BUG_ON(mt, mas.last != 0x2500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* prev:active ->active */
entry = mas_prev(&mas, 0);
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
- /* prev:active -> active spanning end range */
+ /* prev:active -> underflow (span limit) */
+ mas_next(&mas, ULONG_MAX);
+ entry = mas_prev(&mas, 0x1200);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0x1000);
+ MT_BUG_ON(mt, mas.last != 0x1500);
+ MT_BUG_ON(mt, !mas_is_active(&mas)); /* spanning limit */
+ entry = mas_prev(&mas, 0x1200); /* underflow */
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 0x1000);
+ MT_BUG_ON(mt, mas.last != 0x1500);
+ MT_BUG_ON(mt, !mas_is_underflow(&mas));
+
+ /* prev:underflow -> underflow (lower limit) spanning end range */
entry = mas_prev(&mas, 0x0100);
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0x0FFF);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_underflow(&mas));
- /* prev:active -> underflow */
+ /* prev:underflow -> underflow */
entry = mas_prev(&mas, 0);
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0x0FFF);
- MT_BUG_ON(mt, mas.node != MAS_UNDERFLOW);
+ MT_BUG_ON(mt, !mas_is_underflow(&mas));
/* prev:underflow -> underflow */
entry = mas_prev(&mas, 0);
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0x0FFF);
- MT_BUG_ON(mt, mas.node != MAS_UNDERFLOW);
+ MT_BUG_ON(mt, !mas_is_underflow(&mas));
/* next:underflow -> active */
entry = mas_next(&mas, ULONG_MAX);
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* prev:first value -> underflow */
entry = mas_prev(&mas, 0x1000);
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, mas.node != MAS_UNDERFLOW);
+ MT_BUG_ON(mt, !mas_is_underflow(&mas));
/* find:underflow -> first value */
entry = mas_find(&mas, ULONG_MAX);
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* prev: pause ->active */
mas_set(&mas, 0x3600);
@@ -3394,21 +3401,21 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr2);
MT_BUG_ON(mt, mas.index != 0x2000);
MT_BUG_ON(mt, mas.last != 0x2500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
- /* prev:active -> active spanning min */
+ /* prev:active -> underflow spanning min */
entry = mas_prev(&mas, 0x1600);
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0x1501);
MT_BUG_ON(mt, mas.last != 0x1FFF);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_underflow(&mas));
/* prev: active ->active, continue */
entry = mas_prev(&mas, 0);
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* find: start ->active */
mas_set(&mas, 0);
@@ -3416,7 +3423,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* find: pause ->active */
mas_set(&mas, 0);
@@ -3425,7 +3432,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* find: start ->active on value */;
mas_set(&mas, 1200);
@@ -3433,14 +3440,14 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* find:active ->active */
entry = mas_find(&mas, ULONG_MAX);
MT_BUG_ON(mt, entry != ptr2);
MT_BUG_ON(mt, mas.index != 0x2000);
MT_BUG_ON(mt, mas.last != 0x2500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* find:active -> active (NULL)*/
@@ -3448,35 +3455,35 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0x2501);
MT_BUG_ON(mt, mas.last != 0x2FFF);
- MT_BUG_ON(mt, !mas_active(mas));
+ MAS_BUG_ON(&mas, !mas_is_active(&mas));
/* find: overflow ->active */
entry = mas_find(&mas, 0x5000);
MT_BUG_ON(mt, entry != ptr3);
MT_BUG_ON(mt, mas.index != 0x3000);
MT_BUG_ON(mt, mas.last != 0x3500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* find:active -> active (NULL) end*/
entry = mas_find(&mas, ULONG_MAX);
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0x3501);
MT_BUG_ON(mt, mas.last != ULONG_MAX);
- MT_BUG_ON(mt, !mas_active(mas));
+ MAS_BUG_ON(&mas, !mas_is_active(&mas));
/* find_rev: active (END) ->active */
entry = mas_find_rev(&mas, 0);
MT_BUG_ON(mt, entry != ptr3);
MT_BUG_ON(mt, mas.index != 0x3000);
MT_BUG_ON(mt, mas.last != 0x3500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* find_rev:active ->active */
entry = mas_find_rev(&mas, 0);
MT_BUG_ON(mt, entry != ptr2);
MT_BUG_ON(mt, mas.index != 0x2000);
MT_BUG_ON(mt, mas.last != 0x2500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* find_rev: pause ->active */
mas_pause(&mas);
@@ -3484,14 +3491,14 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
- /* find_rev:active -> active */
+ /* find_rev:active -> underflow */
entry = mas_find_rev(&mas, 0);
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != 0x0FFF);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_underflow(&mas));
/* find_rev: start ->active */
mas_set(&mas, 0x1200);
@@ -3499,7 +3506,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* mas_walk start ->active */
mas_set(&mas, 0x1200);
@@ -3507,7 +3514,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* mas_walk start ->active */
mas_set(&mas, 0x1600);
@@ -3515,7 +3522,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0x1501);
MT_BUG_ON(mt, mas.last != 0x1fff);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* mas_walk pause ->active */
mas_set(&mas, 0x1200);
@@ -3524,7 +3531,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* mas_walk pause -> active */
mas_set(&mas, 0x1600);
@@ -3533,25 +3540,25 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0x1501);
MT_BUG_ON(mt, mas.last != 0x1fff);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* mas_walk none -> active */
mas_set(&mas, 0x1200);
- mas.node = MAS_NONE;
+ mas.status = ma_none;
entry = mas_walk(&mas);
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* mas_walk none -> active */
mas_set(&mas, 0x1600);
- mas.node = MAS_NONE;
+ mas.status = ma_none;
entry = mas_walk(&mas);
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0x1501);
MT_BUG_ON(mt, mas.last != 0x1fff);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* mas_walk active -> active */
mas.index = 0x1200;
@@ -3561,7 +3568,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != ptr);
MT_BUG_ON(mt, mas.index != 0x1000);
MT_BUG_ON(mt, mas.last != 0x1500);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
/* mas_walk active -> active */
mas.index = 0x1600;
@@ -3570,7 +3577,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, entry != NULL);
MT_BUG_ON(mt, mas.index != 0x1501);
MT_BUG_ON(mt, mas.last != 0x1fff);
- MT_BUG_ON(mt, !mas_active(mas));
+ MT_BUG_ON(mt, !mas_is_active(&mas));
mas_unlock(&mas);
}
diff --git a/mm/internal.h b/mm/internal.h
index 8212179b8566..b29f9693b0f2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1107,13 +1107,13 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
{
#if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
- if (MAS_WARN_ON(&vmi->mas, vmi->mas.node != MAS_START &&
+ if (MAS_WARN_ON(&vmi->mas, vmi->mas.status != ma_start &&
vmi->mas.index > vma->vm_start)) {
pr_warn("%lx > %lx\n store vma %lx-%lx\n into slot %lx-%lx\n",
vmi->mas.index, vma->vm_start, vma->vm_start,
vma->vm_end, vmi->mas.index, vmi->mas.last);
}
- if (MAS_WARN_ON(&vmi->mas, vmi->mas.node != MAS_START &&
+ if (MAS_WARN_ON(&vmi->mas, vmi->mas.status != ma_start &&
vmi->mas.last < vma->vm_start)) {
pr_warn("%lx < %lx\nstore vma %lx-%lx\ninto slot %lx-%lx\n",
vmi->mas.last, vma->vm_start, vma->vm_start, vma->vm_end,
@@ -1121,7 +1121,7 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
}
#endif
- if (vmi->mas.node != MAS_START &&
+ if (vmi->mas.status != ma_start &&
((vmi->mas.index > vma->vm_start) || (vmi->mas.last < vma->vm_start)))
vma_iter_invalidate(vmi);
@@ -1132,7 +1132,7 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
struct vm_area_struct *vma, gfp_t gfp)
{
- if (vmi->mas.node != MAS_START &&
+ if (vmi->mas.status != ma_start &&
((vmi->mas.index > vma->vm_start) || (vmi->mas.last < vma->vm_start)))
vma_iter_invalidate(vmi);
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index 1c86ae3f8186..d630e86052f9 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -118,6 +118,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
MT_BUG_ON(mt, mas.alloc == NULL);
MT_BUG_ON(mt, mas.alloc->slot[0] == NULL);
mas_push_node(&mas, mn);
+ mas_reset(&mas);
mas_nomem(&mas, GFP_KERNEL); /* free */
mtree_unlock(mt);
@@ -141,7 +142,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
mn->parent = ma_parent_ptr(mn);
ma_free_rcu(mn);
- mas.node = MAS_START;
+ mas.status = ma_start;
mas_nomem(&mas, GFP_KERNEL);
/* Allocate 3 nodes, will fail. */
mas_node_count(&mas, 3);
@@ -158,6 +159,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
/* Ensure we counted 3. */
MT_BUG_ON(mt, mas_allocated(&mas) != 3);
/* Free. */
+ mas_reset(&mas);
mas_nomem(&mas, GFP_KERNEL);
/* Set allocation request to 1. */
@@ -272,6 +274,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
ma_free_rcu(mn);
MT_BUG_ON(mt, mas_allocated(&mas) != i - j - 1);
}
+ mas_reset(&mas);
MT_BUG_ON(mt, mas_nomem(&mas, GFP_KERNEL));
}
@@ -294,6 +297,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
smn = smn->slot[0]; /* next. */
}
MT_BUG_ON(mt, mas_allocated(&mas) != total);
+ mas_reset(&mas);
mas_nomem(&mas, GFP_KERNEL); /* Free. */
MT_BUG_ON(mt, mas_allocated(&mas) != 0);
@@ -441,7 +445,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
mas.node = MA_ERROR(-ENOMEM);
mas_node_count(&mas, 10); /* Request */
mas_nomem(&mas, GFP_KERNEL); /* Fill request */
- mas.node = MAS_START;
+ mas.status = ma_start;
MT_BUG_ON(mt, mas_allocated(&mas) != 10);
mas_destroy(&mas);
@@ -452,7 +456,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
mas.node = MA_ERROR(-ENOMEM);
mas_node_count(&mas, 10 + MAPLE_ALLOC_SLOTS - 1); /* Request */
mas_nomem(&mas, GFP_KERNEL); /* Fill request */
- mas.node = MAS_START;
+ mas.status = ma_start;
MT_BUG_ON(mt, mas_allocated(&mas) != 10 + MAPLE_ALLOC_SLOTS - 1);
mas_destroy(&mas);
@@ -941,7 +945,7 @@ static inline bool mas_tree_walk(struct ma_state *mas, unsigned long *range_min,
ret = mas_descend_walk(mas, range_min, range_max);
if (unlikely(mte_dead_node(mas->node))) {
- mas->node = MAS_START;
+ mas->status = ma_start;
goto retry;
}
@@ -961,10 +965,10 @@ static inline void *mas_range_load(struct ma_state *mas,
unsigned long index = mas->index;
if (mas_is_none(mas) || mas_is_paused(mas))
- mas->node = MAS_START;
+ mas->status = ma_start;
retry:
if (mas_tree_walk(mas, range_min, range_max))
- if (unlikely(mas->node == MAS_ROOT))
+ if (unlikely(mas->status == ma_root))
return mas_root(mas);
if (likely(mas->offset != MAPLE_NODE_SLOTS))
@@ -35337,7 +35341,7 @@ static void mas_dfs_preorder(struct ma_state *mas)
unsigned char end, slot = 0;
unsigned long *pivots;
- if (mas->node == MAS_START) {
+ if (mas->status == ma_start) {
mas_start(mas);
return;
}
@@ -35374,7 +35378,7 @@ static void mas_dfs_preorder(struct ma_state *mas)
return;
done:
- mas->node = MAS_NONE;
+ mas->status = ma_none;
}
@@ -35833,7 +35837,7 @@ static noinline void __init check_nomem(struct maple_tree *mt)
mas_store(&ms, &ms); /* insert 1 -> &ms, fails. */
MT_BUG_ON(mt, ms.node != MA_ERROR(-ENOMEM));
mas_nomem(&ms, GFP_KERNEL); /* Node allocated in here. */
- MT_BUG_ON(mt, ms.node != MAS_START);
+ MT_BUG_ON(mt, ms.status != ma_start);
mtree_unlock(mt);
MT_BUG_ON(mt, mtree_insert(mt, 2, mt, GFP_KERNEL) != 0);
mtree_lock(mt);
@@ -35952,7 +35956,7 @@ static int __init compare_tree(struct maple_tree *mt_a, struct maple_tree *mt_b)
if (mas_is_ptr(&mas_a) || mas_is_ptr(&mas_b)) {
if (!(mas_is_ptr(&mas_a) && mas_is_ptr(&mas_b))) {
- pr_err("One is MAS_ROOT and the other is not.\n");
+ pr_err("One is ma_root and the other is not.\n");
return -1;
}
return 0;
@@ -35961,7 +35965,7 @@ static int __init compare_tree(struct maple_tree *mt_a, struct maple_tree *mt_b)
while (!mas_is_none(&mas_a) || !mas_is_none(&mas_b)) {
if (mas_is_none(&mas_a) || mas_is_none(&mas_b)) {
- pr_err("One is MAS_NONE and the other is not.\n");
+ pr_err("One is ma_none and the other is not.\n");
return -1;
}
--
2.39.2
* [PATCH 6.6 15/28] maple_tree: remove mas_searchable()
2024-10-24 13:19 [PATCH 6.6 00/28] fix CVE-2024-46701 Yu Kuai
` (13 preceding siblings ...)
2024-10-24 13:19 ` [PATCH 6.6 14/28] maple_tree: separate ma_state node from status Yu Kuai
@ 2024-10-24 13:19 ` Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 16/28] Revert "maple_tree: correct tree corruption on spanning store" Yu Kuai
` (2 subsequent siblings)
17 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:19 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
commit 9a40d45c1f2c49273c04938ec3d7849f685eb3c1 upstream.
Now that the status of the maple state is outside of the node, the
mas_searchable() function can be dropped for easier open-coding of what is
going on.
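As a rough sketch, the open-coding at a typical call site looks like
this (condensed from the hunks below, not a literal excerpt):

	/* Before: one opaque predicate hid which state ended the search. */
	if (unlikely(!mas_searchable(mas)))
		return true;

	/* After: the two terminal states are handled explicitly. */
	if (unlikely(mas_is_ptr(mas)))
		goto ptr_out_of_range;	/* tree holds a single root pointer */

	if (unlikely(mas_is_none(mas)))
		return true;		/* nothing left to search */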
Link: https://lkml.kernel.org/r/20231101171629.3612299-10-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Peng Zhang <zhangpeng.00@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
lib/maple_tree.c | 66 ++++++++------------------------
tools/testing/radix-tree/maple.c | 4 +-
2 files changed, 19 insertions(+), 51 deletions(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index f7a1c1cc18eb..f73e3772c883 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -285,17 +285,6 @@ static inline bool mas_is_underflow(struct ma_state *mas)
return mas->status == ma_underflow;
}
-static inline bool mas_searchable(struct ma_state *mas)
-{
- if (mas_is_none(mas))
- return false;
-
- if (mas_is_ptr(mas))
- return false;
-
- return true;
-}
-
static __always_inline struct maple_node *mte_to_node(
const struct maple_enode *entry)
{
@@ -6041,12 +6030,11 @@ static __always_inline bool mas_find_setup(struct ma_state *mas, unsigned long m
}
- if (unlikely(!mas_searchable(mas))) {
- if (unlikely(mas_is_ptr(mas)))
- goto ptr_out_of_range;
+ if (unlikely(mas_is_ptr(mas)))
+ goto ptr_out_of_range;
+ if (unlikely(mas_is_none(mas)))
return true;
- }
if (mas->index == max)
return true;
@@ -6173,20 +6161,18 @@ static bool mas_find_rev_setup(struct ma_state *mas, unsigned long min,
return true;
}
- if (unlikely(!mas_searchable(mas))) {
- if (mas_is_ptr(mas))
- goto none;
+ if (unlikely(mas_is_ptr(mas)))
+ goto none;
- if (mas_is_none(mas)) {
- /*
- * Walked to the location, and there was nothing so the
- * previous location is 0.
- */
- mas->last = mas->index = 0;
- mas->status = ma_root;
- *entry = mas_root(mas);
- return true;
- }
+ if (unlikely(mas_is_none(mas))) {
+ /*
+ * Walked to the location, and there was nothing so the previous
+ * location is 0.
+ */
+ mas->last = mas->index = 0;
+ mas->status = ma_root;
+ *entry = mas_root(mas);
+ return true;
}
active:
@@ -6916,7 +6902,7 @@ void *mt_find(struct maple_tree *mt, unsigned long *index, unsigned long max)
if (entry)
goto unlock;
- while (mas_searchable(&mas) && (mas.last < max)) {
+ while (mas_is_active(&mas) && (mas.last < max)) {
entry = mas_next_entry(&mas, max);
if (likely(entry && !xa_is_zero(entry)))
break;
@@ -6998,26 +6984,6 @@ unsigned int mt_nr_allocated(void)
return kmem_cache_nr_allocated(maple_node_cache);
}
-/*
- * mas_dead_node() - Check if the maple state is pointing to a dead node.
- * @mas: The maple state
- * @index: The index to restore in @mas.
- *
- * Used in test code.
- * Return: 1 if @mas has been reset to MAS_START, 0 otherwise.
- */
-static inline int mas_dead_node(struct ma_state *mas, unsigned long index)
-{
- if (unlikely(!mas_searchable(mas) || mas_is_start(mas)))
- return 0;
-
- if (likely(!mte_dead_node(mas->node)))
- return 0;
-
- mas_rewalk(mas, index);
- return 1;
-}
-
void mt_cache_shrink(void)
{
}
@@ -7569,7 +7535,7 @@ void mt_validate(struct maple_tree *mt)
MA_STATE(mas, mt, 0, 0);
rcu_read_lock();
mas_start(&mas);
- if (!mas_searchable(&mas))
+ if (!mas_is_active(&mas))
goto done;
while (!mte_is_leaf(mas.node))
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index d630e86052f9..35cc8c2a10f4 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -974,8 +974,10 @@ static inline void *mas_range_load(struct ma_state *mas,
if (likely(mas->offset != MAPLE_NODE_SLOTS))
entry = mas_get_slot(mas, mas->offset);
- if (mas_dead_node(mas, index))
+ if (mas_is_active(mas) && mte_dead_node(mas->node)) {
+ mas_set(mas, index);
goto retry;
+ }
return entry;
}
--
2.39.2
* [PATCH 6.6 16/28] Revert "maple_tree: correct tree corruption on spanning store"
2024-10-24 13:19 [PATCH 6.6 00/28] fix CVE-2024-46701 Yu Kuai
` (14 preceding siblings ...)
2024-10-24 13:19 ` [PATCH 6.6 15/28] maple_tree: remove mas_searchable() Yu Kuai
@ 2024-10-24 13:22 ` Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 17/28] maple_tree: use maple state end for write operations Yu Kuai
` (11 more replies)
2024-11-06 6:16 ` [PATCH 6.6 00/28] fix CVE-2024-46701 Greg KH
2024-11-06 14:43 ` Lorenzo Stoakes
17 siblings, 12 replies; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:22 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: Yu Kuai <yukuai3@huawei.com>
This reverts commit 677f1df179cb68c12ddf7707ec325eb50e99c7d9.
The above commit contains manual changes and will cause conflicts for the
following patches. The commit will be backported from mainline later,
without conflicts.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
lib/maple_tree.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index f73e3772c883..291412b91047 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -2236,8 +2236,6 @@ static inline void mas_node_or_none(struct ma_state *mas,
/*
* mas_wr_node_walk() - Find the correct offset for the index in the @mas.
- * If @mas->index cannot be found within the containing
- * node, we traverse to the last entry in the node.
* @wr_mas: The maple write state
*
* Uses mas_slot_locked() and does not need to worry about dead nodes.
@@ -3657,7 +3655,7 @@ static bool mas_wr_walk(struct ma_wr_state *wr_mas)
return true;
}
-static void mas_wr_walk_index(struct ma_wr_state *wr_mas)
+static bool mas_wr_walk_index(struct ma_wr_state *wr_mas)
{
struct ma_state *mas = wr_mas->mas;
@@ -3666,9 +3664,11 @@ static void mas_wr_walk_index(struct ma_wr_state *wr_mas)
wr_mas->content = mas_slot_locked(mas, wr_mas->slots,
mas->offset);
if (ma_is_leaf(wr_mas->type))
- return;
+ return true;
mas_wr_walk_traverse(wr_mas);
+
}
+ return true;
}
/*
* mas_extend_spanning_null() - Extend a store of a %NULL to include surrounding %NULLs.
@@ -3905,8 +3905,8 @@ static inline int mas_wr_spanning_store(struct ma_wr_state *wr_mas)
memset(&b_node, 0, sizeof(struct maple_big_node));
/* Copy l_mas and store the value in b_node. */
mas_store_b_node(&l_wr_mas, &b_node, l_wr_mas.node_end);
- /* Copy r_mas into b_node if there is anything to copy. */
- if (r_mas.max > r_mas.last)
+ /* Copy r_mas into b_node. */
+ if (r_mas.offset <= r_wr_mas.node_end)
mas_mab_cp(&r_mas, r_mas.offset, r_wr_mas.node_end,
&b_node, b_node.b_end + 1);
else
--
2.39.2
* [PATCH 6.6 17/28] maple_tree: use maple state end for write operations
2024-10-24 13:22 ` [PATCH 6.6 16/28] Revert "maple_tree: correct tree corruption on spanning store" Yu Kuai
@ 2024-10-24 13:22 ` Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 18/28] maple_tree: don't find node end in mtree_lookup_walk() Yu Kuai
` (10 subsequent siblings)
11 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:22 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
commit 0de56e38b307b0cb2ac825e8e7cb371a28daf844 upstream.
ma_wr_state was previously tracking the end of the node for writing.
Since the implementation of the ma_state end tracking, this is duplicated
work. This patch removes the maple write state tracking of the end of the
node and uses the maple state end instead.
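In short, the duplicated bookkeeping collapses as follows (condensed
from the hunks below, not a literal excerpt):

	/* Before: the write state kept a private copy of the node end. */
	wr_mas->node_end = ma_data_end(wr_mas->node, wr_mas->type,
				       wr_mas->pivots, mas->max);
	if (slot > wr_mas->node_end)
		goto b_end;

	/* After: mas->end is the single source of truth. */
	mas->end = ma_data_end(wr_mas->node, wr_mas->type,
			       wr_mas->pivots, mas->max);
	if (slot > mas->end)
		goto b_end;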
Link: https://lkml.kernel.org/r/20231101171629.3612299-11-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Peng Zhang <zhangpeng.00@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
include/linux/maple_tree.h | 1 -
lib/maple_tree.c | 46 ++++++++++++++++++++------------------
2 files changed, 24 insertions(+), 23 deletions(-)
diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index 4dd668f7b111..b3d63123b945 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -441,7 +441,6 @@ struct ma_wr_state {
unsigned long r_max; /* range max */
enum maple_type type; /* mas->node type */
unsigned char offset_end; /* The offset where the write ends */
- unsigned char node_end; /* mas->node end */
unsigned long *pivots; /* mas->node->pivots pointer */
unsigned long end_piv; /* The pivot at the offset end */
void __rcu **slots; /* mas->node->slots pointer */
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 291412b91047..472aef7a3d5c 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -2158,11 +2158,11 @@ static noinline_for_kasan void mas_store_b_node(struct ma_wr_state *wr_mas,
}
slot = offset_end + 1;
- if (slot > wr_mas->node_end)
+ if (slot > mas->end)
goto b_end;
/* Copy end data to the end of the node. */
- mas_mab_cp(mas, slot, wr_mas->node_end + 1, b_node, ++b_end);
+ mas_mab_cp(mas, slot, mas->end + 1, b_node, ++b_end);
b_node->b_end--;
return;
@@ -2253,8 +2253,8 @@ static inline void mas_wr_node_walk(struct ma_wr_state *wr_mas)
wr_mas->node = mas_mn(wr_mas->mas);
wr_mas->pivots = ma_pivots(wr_mas->node, wr_mas->type);
- count = wr_mas->node_end = ma_data_end(wr_mas->node, wr_mas->type,
- wr_mas->pivots, mas->max);
+ count = mas->end = ma_data_end(wr_mas->node, wr_mas->type,
+ wr_mas->pivots, mas->max);
offset = mas->offset;
while (offset < count && mas->index > wr_mas->pivots[offset])
@@ -3904,10 +3904,10 @@ static inline int mas_wr_spanning_store(struct ma_wr_state *wr_mas)
memset(&b_node, 0, sizeof(struct maple_big_node));
/* Copy l_mas and store the value in b_node. */
- mas_store_b_node(&l_wr_mas, &b_node, l_wr_mas.node_end);
+ mas_store_b_node(&l_wr_mas, &b_node, l_mas.end);
/* Copy r_mas into b_node. */
- if (r_mas.offset <= r_wr_mas.node_end)
- mas_mab_cp(&r_mas, r_mas.offset, r_wr_mas.node_end,
+ if (r_mas.offset <= r_mas.end)
+ mas_mab_cp(&r_mas, r_mas.offset, r_mas.end,
&b_node, b_node.b_end + 1);
else
b_node.b_end++;
@@ -3949,7 +3949,7 @@ static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas,
if (mas->last == wr_mas->end_piv)
offset_end++; /* don't copy this offset */
else if (unlikely(wr_mas->r_max == ULONG_MAX))
- mas_bulk_rebalance(mas, wr_mas->node_end, wr_mas->type);
+ mas_bulk_rebalance(mas, mas->end, wr_mas->type);
/* set up node. */
if (in_rcu) {
@@ -3985,12 +3985,12 @@ static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas,
* this range wrote to the end of the node or it overwrote the rest of
* the data
*/
- if (offset_end > wr_mas->node_end)
+ if (offset_end > mas->end)
goto done;
dst_offset = mas->offset + 1;
/* Copy to the end of node if necessary. */
- copy_size = wr_mas->node_end - offset_end + 1;
+ copy_size = mas->end - offset_end + 1;
memcpy(dst_slots + dst_offset, wr_mas->slots + offset_end,
sizeof(void *) * copy_size);
memcpy(dst_pivots + dst_offset, wr_mas->pivots + offset_end,
@@ -4077,10 +4077,10 @@ static inline void mas_wr_extend_null(struct ma_wr_state *wr_mas)
} else {
/* Check next slot(s) if we are overwriting the end */
if ((mas->last == wr_mas->end_piv) &&
- (wr_mas->node_end != wr_mas->offset_end) &&
+ (mas->end != wr_mas->offset_end) &&
!wr_mas->slots[wr_mas->offset_end + 1]) {
wr_mas->offset_end++;
- if (wr_mas->offset_end == wr_mas->node_end)
+ if (wr_mas->offset_end == mas->end)
mas->last = mas->max;
else
mas->last = wr_mas->pivots[wr_mas->offset_end];
@@ -4105,11 +4105,11 @@ static inline void mas_wr_extend_null(struct ma_wr_state *wr_mas)
static inline void mas_wr_end_piv(struct ma_wr_state *wr_mas)
{
- while ((wr_mas->offset_end < wr_mas->node_end) &&
+ while ((wr_mas->offset_end < wr_mas->mas->end) &&
(wr_mas->mas->last > wr_mas->pivots[wr_mas->offset_end]))
wr_mas->offset_end++;
- if (wr_mas->offset_end < wr_mas->node_end)
+ if (wr_mas->offset_end < wr_mas->mas->end)
wr_mas->end_piv = wr_mas->pivots[wr_mas->offset_end];
else
wr_mas->end_piv = wr_mas->mas->max;
@@ -4121,7 +4121,7 @@ static inline void mas_wr_end_piv(struct ma_wr_state *wr_mas)
static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
{
struct ma_state *mas = wr_mas->mas;
- unsigned char new_end = wr_mas->node_end + 2;
+ unsigned char new_end = mas->end + 2;
new_end -= wr_mas->offset_end - mas->offset;
if (wr_mas->r_min == mas->index)
@@ -4155,10 +4155,10 @@ static inline bool mas_wr_append(struct ma_wr_state *wr_mas,
if (mt_in_rcu(mas->tree))
return false;
- if (mas->offset != wr_mas->node_end)
+ if (mas->offset != mas->end)
return false;
- end = wr_mas->node_end;
+ end = mas->end;
if (mas->offset != end)
return false;
@@ -4210,7 +4210,7 @@ static void mas_wr_bnode(struct ma_wr_state *wr_mas)
trace_ma_write(__func__, wr_mas->mas, 0, wr_mas->entry);
memset(&b_node, 0, sizeof(struct maple_big_node));
mas_store_b_node(wr_mas, &b_node, wr_mas->offset_end);
- mas_commit_b_node(wr_mas, &b_node, wr_mas->node_end);
+ mas_commit_b_node(wr_mas, &b_node, wr_mas->mas->end);
}
static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
@@ -4238,7 +4238,7 @@ static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
if (mas_wr_append(wr_mas, new_end))
return;
- if (new_end == wr_mas->node_end && mas_wr_slot_store(wr_mas))
+ if (new_end == mas->end && mas_wr_slot_store(wr_mas))
return;
if (mas_wr_node_store(wr_mas, new_end))
@@ -5052,6 +5052,7 @@ int mas_empty_area(struct ma_state *mas, unsigned long min,
unsigned char offset;
unsigned long *pivots;
enum maple_type mt;
+ struct maple_node *node;
if (min > max)
return -EINVAL;
@@ -5082,13 +5083,14 @@ int mas_empty_area(struct ma_state *mas, unsigned long min,
if (unlikely(offset == MAPLE_NODE_SLOTS))
return -EBUSY;
+ node = mas_mn(mas);
mt = mte_node_type(mas->node);
- pivots = ma_pivots(mas_mn(mas), mt);
+ pivots = ma_pivots(node, mt);
min = mas_safe_min(mas, pivots, offset);
if (mas->index < min)
mas->index = min;
mas->last = mas->index + size - 1;
- mas->end = mas_data_end(mas);
+ mas->end = ma_data_end(node, mt, pivots, mas->max);
return 0;
}
EXPORT_SYMBOL_GPL(mas_empty_area);
@@ -7607,7 +7609,7 @@ void mas_wr_dump(const struct ma_wr_state *wr_mas)
pr_err("WR_MAS: node=%p r_min=%lx r_max=%lx\n",
wr_mas->node, wr_mas->r_min, wr_mas->r_max);
pr_err(" type=%u off_end=%u, node_end=%u, end_piv=%lx\n",
- wr_mas->type, wr_mas->offset_end, wr_mas->node_end,
+ wr_mas->type, wr_mas->offset_end, wr_mas->mas->end,
wr_mas->end_piv);
}
EXPORT_SYMBOL_GPL(mas_wr_dump);
--
2.39.2
* [PATCH 6.6 18/28] maple_tree: don't find node end in mtree_lookup_walk()
2024-10-24 13:22 ` [PATCH 6.6 16/28] Revert "maple_tree: correct tree corruption on spanning store" Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 17/28] maple_tree: use maple state end for write operations Yu Kuai
@ 2024-10-24 13:22 ` Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 19/28] maple_tree: mtree_range_walk() clean up Yu Kuai
` (9 subsequent siblings)
11 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:22 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
commit 24662decdd44645e8f027d7912be962dd461d1aa upstream.
Since the pivot being set is now reliable, the optimized loop no longer
needs to find the node end. The redundant check for a dead node can also
be avoided as there is no danger of using the wrong pivot since the
results will be thrown out in the case of a dead node by the later check.
This patch also adds a benchmark test for the function to the maple tree
test framework. The benchmark shows an average performance increase of
5.98% over 3 runs with this commit.
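The resulting inner loop, with the reasoning spelled out as comments
(condensed from the hunk below, not a literal excerpt):

	/* mt_pivots[type] is a per-type constant, so the walk neither
	 * computes the live node end nor re-checks for a dead node here;
	 * a stale pivot is harmless because the result is validated, and
	 * thrown out if need be, by the existing dead-node check that
	 * follows the walk.
	 */
	end = mt_pivots[type];
	offset = 0;
	do {
		if (pivots[offset] >= mas->index)
			break;
	} while (++offset < end);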
Link: https://lkml.kernel.org/r/20231101171629.3612299-12-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Peng Zhang <zhangpeng.00@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
lib/maple_tree.c | 12 +++---------
lib/test_maple_tree.c | 21 +++++++++++++++++++++
2 files changed, 24 insertions(+), 9 deletions(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 472aef7a3d5c..ad8bf3413889 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -3742,23 +3742,17 @@ static inline void *mtree_lookup_walk(struct ma_state *mas)
enum maple_type type;
void __rcu **slots;
unsigned char end;
- unsigned long max;
next = mas->node;
- max = ULONG_MAX;
do {
- offset = 0;
node = mte_to_node(next);
type = mte_node_type(next);
pivots = ma_pivots(node, type);
- end = ma_data_end(node, type, pivots, max);
- if (unlikely(ma_dead_node(node)))
- goto dead_node;
+ end = mt_pivots[type];
+ offset = 0;
do {
- if (pivots[offset] >= mas->index) {
- max = pivots[offset];
+ if (pivots[offset] >= mas->index)
break;
- }
} while (++offset < end);
slots = ma_slots(node, type);
diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index f9acc6ef0728..26991888da14 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -43,6 +43,7 @@ atomic_t maple_tree_tests_passed;
/* #define BENCH_NODE_STORE */
/* #define BENCH_AWALK */
/* #define BENCH_WALK */
+/* #define BENCH_LOAD */
/* #define BENCH_MT_FOR_EACH */
/* #define BENCH_FORK */
/* #define BENCH_MAS_FOR_EACH */
@@ -1754,6 +1755,19 @@ static noinline void __init bench_walk(struct maple_tree *mt)
}
#endif
+#if defined(BENCH_LOAD)
+static noinline void __init bench_load(struct maple_tree *mt)
+{
+ int i, max = 2500, count = 550000000;
+
+ for (i = 0; i < max; i += 10)
+ mtree_store_range(mt, i, i + 5, xa_mk_value(i), GFP_KERNEL);
+
+ for (i = 0; i < count; i++)
+ mtree_load(mt, 1470);
+}
+#endif
+
#if defined(BENCH_MT_FOR_EACH)
static noinline void __init bench_mt_for_each(struct maple_tree *mt)
{
@@ -3620,6 +3634,13 @@ static int __init maple_tree_seed(void)
mtree_destroy(&tree);
goto skip;
#endif
+#if defined(BENCH_LOAD)
+#define BENCH
+ mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
+ bench_load(&tree);
+ mtree_destroy(&tree);
+ goto skip;
+#endif
#if defined(BENCH_FORK)
#define BENCH
mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
--
2.39.2
* [PATCH 6.6 19/28] maple_tree: mtree_range_walk() clean up
2024-10-24 13:22 ` [PATCH 6.6 16/28] Revert "maple_tree: correct tree corruption on spanning store" Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 17/28] maple_tree: use maple state end for write operations Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 18/28] maple_tree: don't find node end in mtree_lookup_walk() Yu Kuai
@ 2024-10-24 13:22 ` Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 20/28] lib/maple_tree.c: fix build error due to hotfix alteration Yu Kuai
` (8 subsequent siblings)
11 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:22 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
commit a3c63c8c5df6406e79490456a1fc41a287676070 upstream.
mtree_range_walk() needed to be updated to avoid checking if there was a
pivot value. On closer examination, the code could avoid setting min or
max in certain scenarios. The commit removes the extra check for
pivot[offset] before setting max and only sets max when necessary. It
also only sets min if it is necessary by checking offset 0 prior to the
loop (as it has always done).
The commit also drops a dead node check since the end of the node will
return the array size when the last slot is occupied (by a potential reuse
in a dead node). The data will be discarded later if the node is marked
dead.
Benchmarking these changes results in an increase in performance of 5.45%
using the BENCH_WALK in the maple tree test code.
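The reworked walk then reads roughly as below, with the changelog's
reasoning added as comments (condensed from the hunk that follows, not
a literal excerpt):

	prev_min = min;
	prev_max = max;
	if (pivots[0] >= mas->index) {	/* fast path: slot 0 matches */
		offset = 0;
		max = pivots[0];
		goto next;
	}

	offset = 1;
	while (offset < end) {		/* max only set on an actual hit */
		if (pivots[offset] >= mas->index) {
			max = pivots[offset];
			break;
		}
		offset++;
	}

	/* No dead-node re-check: if the last slot of a dead node is
	 * occupied by a reuse, ma_data_end() returns the array size, and
	 * any stale data is discarded later anyway.
	 */
	min = pivots[offset - 1] + 1;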
Link: https://lkml.kernel.org/r/20231101171629.3612299-13-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Peng Zhang <zhangpeng.00@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
lib/maple_tree.c | 27 ++++++++++++---------------
1 file changed, 12 insertions(+), 15 deletions(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index ad8bf3413889..d90f4b7e7511 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -2806,32 +2806,29 @@ static inline void *mtree_range_walk(struct ma_state *mas)
min = mas->min;
max = mas->max;
do {
- offset = 0;
last = next;
node = mte_to_node(next);
type = mte_node_type(next);
pivots = ma_pivots(node, type);
end = ma_data_end(node, type, pivots, max);
- if (unlikely(ma_dead_node(node)))
- goto dead_node;
-
- if (pivots[offset] >= mas->index) {
- prev_max = max;
- prev_min = min;
- max = pivots[offset];
+ prev_min = min;
+ prev_max = max;
+ if (pivots[0] >= mas->index) {
+ offset = 0;
+ max = pivots[0];
goto next;
}
- do {
+ offset = 1;
+ while (offset < end) {
+ if (pivots[offset] >= mas->index) {
+ max = pivots[offset];
+ break;
+ }
offset++;
- } while ((offset < end) && (pivots[offset] < mas->index));
+ }
- prev_min = min;
min = pivots[offset - 1] + 1;
- prev_max = max;
- if (likely(offset < end && pivots[offset]))
- max = pivots[offset];
-
next:
slots = ma_slots(node, type);
next = mt_slot(mas->tree, slots, offset);
--
2.39.2
* [PATCH 6.6 20/28] lib/maple_tree.c: fix build error due to hotfix alteration
2024-10-24 13:22 ` [PATCH 6.6 16/28] Revert "maple_tree: correct tree corruption on spanning store" Yu Kuai
` (2 preceding siblings ...)
2024-10-24 13:22 ` [PATCH 6.6 19/28] maple_tree: mtree_range_walk() clean up Yu Kuai
@ 2024-10-24 13:22 ` Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 21/28] maple_tree: avoid checking other gaps after getting the largest gap Yu Kuai
` (7 subsequent siblings)
11 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:22 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: Andrew Morton <akpm@linux-foundation.org>
commit 5143eecd2af2b5424f7b96d53f17bb4718e46bd3 upstream.
Commit 0de56e38b307 ("maple_tree: use maple state end for write
operations") was broken by a later patch "maple_tree: do not preallocate
nodes for slot stores". But the later patch was scheduled ahead of
0de56e38b307, for 6.7-rc.
This fixlet undoes the damage.
Fixes: 0de56e38b307 ("maple_tree: use maple state end for write operations")
Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
lib/maple_tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index d90f4b7e7511..905fa1143f8d 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5524,7 +5524,7 @@ int mas_preallocate(struct ma_state *mas, void *entry, gfp_t gfp)
node_size = mas_wr_new_end(&wr_mas);
/* Slot store, does not require additional nodes */
- if (node_size == wr_mas.node_end) {
+ if (node_size == mas->end) {
/* reuse node */
if (!mt_in_rcu(mas->tree))
return 0;
--
2.39.2
* [PATCH 6.6 21/28] maple_tree: avoid checking other gaps after getting the largest gap
2024-10-24 13:22 ` [PATCH 6.6 16/28] Revert "maple_tree: correct tree corruption on spanning store" Yu Kuai
` (3 preceding siblings ...)
2024-10-24 13:22 ` [PATCH 6.6 20/28] lib/maple_tree.c: fix build error due to hotfix alteration Yu Kuai
@ 2024-10-24 13:22 ` Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 22/28] libfs: Re-arrange locking in offset_iterate_dir() Yu Kuai
` (6 subsequent siblings)
11 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:22 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: Peng Zhang <zhangpeng.00@bytedance.com>
commit 7e552dcd803f4ff60165271c573ab2e38d15769f upstream.
The last range stored in a maple tree node is typically quite large. If the
gap it leaves exceeds the combined span of the remaining ranges in that node,
no gap inside that span can possibly be larger, so all the other gap checks
can be skipped.
Running the maple tree test suite in user mode almost always results in a
near-100% hit rate for this optimization.
Link: https://lkml.kernel.org/r/20231215074632.82045-1-zhangpeng.00@bytedance.com
Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
lib/maple_tree.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 905fa1143f8d..1af83414877a 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -1547,6 +1547,9 @@ static unsigned long mas_leaf_max_gap(struct ma_state *mas)
gap = ULONG_MAX - pivots[max_piv];
if (gap > max_gap)
max_gap = gap;
+
+ if (max_gap > pivots[max_piv] - mas->min)
+ return max_gap;
}
for (; i <= max_piv; i++) {
--
2.39.2
* [PATCH 6.6 22/28] libfs: Re-arrange locking in offset_iterate_dir()
2024-10-24 13:22 ` [PATCH 6.6 16/28] Revert "maple_tree: correct tree corruption on spanning store" Yu Kuai
` (4 preceding siblings ...)
2024-10-24 13:22 ` [PATCH 6.6 21/28] maple_tree: avoid checking other gaps after getting the largest gap Yu Kuai
@ 2024-10-24 13:22 ` Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 23/28] libfs: Define a minimum directory offset Yu Kuai
` (5 subsequent siblings)
11 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:22 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: Chuck Lever <chuck.lever@oracle.com>
commit 3f6d810665dfde0d33785420618ceb03fba0619d upstream.
Liam and Matthew say that once the RCU read lock is released,
xa_state is not safe to re-use for the next xas_find() call. But the
RCU read lock must be released on each loop iteration so that
dput(), which might_sleep(), can be called safely.
Thus we are forced to walk the offset tree with fresh state for each
directory entry. xa_find() can do this for us, though it might be a
little less efficient than maintaining xa_state locally.
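To make that rule concrete, here is a minimal sketch of the unsafe
pattern (the function and xarray are hypothetical, not code from this
patch):

static void unsafe_walk(struct xarray *xa)
{
	XA_STATE(xas, xa, 0);
	void *entry;

	rcu_read_lock();
	entry = xas_next_entry(&xas, U32_MAX);	/* xas caches a node pointer */
	rcu_read_unlock();			/* ...that node may now be freed */

	might_sleep();				/* e.g. dput() on the found dentry */

	rcu_read_lock();
	entry = xas_next_entry(&xas, U32_MAX);	/* UNSAFE: reuses the stale node */
	rcu_read_unlock();
}

Rebuilding the walk state after every unlock - as this patch does by
constructing a fresh XA_STATE on each call - avoids the stale cached
node.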
We believe that in the current code base, inode->i_rwsem provides
protection for the xa_state maintained in
offset_iterate_dir(). However, there is no guarantee that will
continue to be the case in the future.
Since offset_iterate_dir() doesn't build xa_state locally any more,
there's no longer a strong need for offset_find_next(). Clean up by
rolling these two helpers together.
Suggested-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Message-ID: <170785993027.11135.8830043889278631735.stgit@91.116.238.104.host.secureserver.net>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Link: https://lore.kernel.org/r/170820142021.6328.15047865406275957018.stgit@91.116.238.104.host.secureserver.net
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
fs/libfs.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/fs/libfs.c b/fs/libfs.c
index dc0f7519045f..430f7c95336c 100644
--- a/fs/libfs.c
+++ b/fs/libfs.c
@@ -401,12 +401,13 @@ static loff_t offset_dir_llseek(struct file *file, loff_t offset, int whence)
return vfs_setpos(file, offset, U32_MAX);
}
-static struct dentry *offset_find_next(struct xa_state *xas)
+static struct dentry *offset_find_next(struct offset_ctx *octx, loff_t offset)
{
struct dentry *child, *found = NULL;
+ XA_STATE(xas, &octx->xa, offset);
rcu_read_lock();
- child = xas_next_entry(xas, U32_MAX);
+ child = xas_next_entry(&xas, U32_MAX);
if (!child)
goto out;
spin_lock(&child->d_lock);
@@ -429,12 +430,11 @@ static bool offset_dir_emit(struct dir_context *ctx, struct dentry *dentry)
static void *offset_iterate_dir(struct inode *inode, struct dir_context *ctx)
{
- struct offset_ctx *so_ctx = inode->i_op->get_offset_ctx(inode);
- XA_STATE(xas, &so_ctx->xa, ctx->pos);
+ struct offset_ctx *octx = inode->i_op->get_offset_ctx(inode);
struct dentry *dentry;
while (true) {
- dentry = offset_find_next(&xas);
+ dentry = offset_find_next(octx, ctx->pos);
if (!dentry)
return ERR_PTR(-ENOENT);
@@ -443,8 +443,8 @@ static void *offset_iterate_dir(struct inode *inode, struct dir_context *ctx)
break;
}
+ ctx->pos = dentry2offset(dentry) + 1;
dput(dentry);
- ctx->pos = xas.xa_index + 1;
}
return NULL;
}
--
2.39.2
* [PATCH 6.6 23/28] libfs: Define a minimum directory offset
2024-10-24 13:22 ` [PATCH 6.6 16/28] Revert "maple_tree: correct tree corruption on spanning store" Yu Kuai
` (5 preceding siblings ...)
2024-10-24 13:22 ` [PATCH 6.6 22/28] libfs: Re-arrange locking in offset_iterate_dir() Yu Kuai
@ 2024-10-24 13:22 ` Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 24/28] libfs: Add simple_offset_empty() Yu Kuai
` (4 subsequent siblings)
11 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:22 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: Chuck Lever <chuck.lever@oracle.com>
commit 7beea725a8ca412c6190090ce7c3a13b169592a1 upstream.
This value is used in several places, so make it a symbolic
constant.
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Link: https://lore.kernel.org/r/170820142741.6328.12428356024575347885.stgit@91.116.238.104.host.secureserver.net
Signed-off-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
fs/libfs.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/fs/libfs.c b/fs/libfs.c
index 430f7c95336c..c3dc58e776f9 100644
--- a/fs/libfs.c
+++ b/fs/libfs.c
@@ -239,6 +239,11 @@ const struct inode_operations simple_dir_inode_operations = {
};
EXPORT_SYMBOL(simple_dir_inode_operations);
+/* 0 is '.', 1 is '..', so always start with offset 2 or more */
+enum {
+ DIR_OFFSET_MIN = 2,
+};
+
static void offset_set(struct dentry *dentry, u32 offset)
{
dentry->d_fsdata = (void *)((uintptr_t)(offset));
@@ -260,9 +265,7 @@ void simple_offset_init(struct offset_ctx *octx)
{
xa_init_flags(&octx->xa, XA_FLAGS_ALLOC1);
lockdep_set_class(&octx->xa.xa_lock, &simple_offset_xa_lock);
-
- /* 0 is '.', 1 is '..', so always start with offset 2 */
- octx->next_offset = 2;
+ octx->next_offset = DIR_OFFSET_MIN;
}
/**
@@ -275,7 +278,7 @@ void simple_offset_init(struct offset_ctx *octx)
*/
int simple_offset_add(struct offset_ctx *octx, struct dentry *dentry)
{
- static const struct xa_limit limit = XA_LIMIT(2, U32_MAX);
+ static const struct xa_limit limit = XA_LIMIT(DIR_OFFSET_MIN, U32_MAX);
u32 offset;
int ret;
@@ -480,7 +483,7 @@ static int offset_readdir(struct file *file, struct dir_context *ctx)
return 0;
/* In this case, ->private_data is protected by f_pos_lock */
- if (ctx->pos == 2)
+ if (ctx->pos == DIR_OFFSET_MIN)
file->private_data = NULL;
else if (file->private_data == ERR_PTR(-ENOENT))
return 0;
--
2.39.2
* [PATCH 6.6 24/28] libfs: Add simple_offset_empty()
2024-10-24 13:22 ` [PATCH 6.6 16/28] Revert "maple_tree: correct tree corruption on spanning store" Yu Kuai
` (6 preceding siblings ...)
2024-10-24 13:22 ` [PATCH 6.6 23/28] libfs: Define a minimum directory offset Yu Kuai
@ 2024-10-24 13:22 ` Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 25/28] maple_tree: Add mtree_alloc_cyclic() Yu Kuai
` (3 subsequent siblings)
11 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:22 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: Chuck Lever <chuck.lever@oracle.com>
commit ecba88a3b32d733d41e27973e25b2bc580f64281 upstream.
For simple filesystems that use directory offset mapping, rely
strictly on the directory offset map to tell when a directory has
no children.
After this patch is applied, the emptiness test holds only the RCU
read lock when the directory being tested has no children.
In addition, this adds another layer of confirmation that
simple_offset_add/remove() are working as expected.
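For illustration only (a hypothetical filesystem; shmem's actual
conversion appears in the mm/shmem.c hunk below), an ->rmdir would gate
on the new helper like so:

static int example_rmdir(struct inode *dir, struct dentry *dentry)
{
	/* Emptiness is decided by the offset map, not by scanning children */
	if (!simple_offset_empty(dentry))
		return -ENOTEMPTY;

	/* ... drop link counts and unlink (elided) ... */
	return 0;
}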
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Link: https://lore.kernel.org/r/170820143463.6328.7872919188371286951.stgit@91.116.238.104.host.secureserver.net
Signed-off-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
fs/libfs.c | 32 ++++++++++++++++++++++++++++++++
include/linux/fs.h | 1 +
mm/shmem.c | 4 ++--
3 files changed, 35 insertions(+), 2 deletions(-)
diff --git a/fs/libfs.c b/fs/libfs.c
index c3dc58e776f9..d7b901cb9af4 100644
--- a/fs/libfs.c
+++ b/fs/libfs.c
@@ -312,6 +312,38 @@ void simple_offset_remove(struct offset_ctx *octx, struct dentry *dentry)
offset_set(dentry, 0);
}
+/**
+ * simple_offset_empty - Check if a dentry can be unlinked
+ * @dentry: dentry to be tested
+ *
+ * Returns 0 if @dentry is a non-empty directory; otherwise returns 1.
+ */
+int simple_offset_empty(struct dentry *dentry)
+{
+ struct inode *inode = d_inode(dentry);
+ struct offset_ctx *octx;
+ struct dentry *child;
+ unsigned long index;
+ int ret = 1;
+
+ if (!inode || !S_ISDIR(inode->i_mode))
+ return ret;
+
+ index = DIR_OFFSET_MIN;
+ octx = inode->i_op->get_offset_ctx(inode);
+ xa_for_each(&octx->xa, index, child) {
+ spin_lock(&child->d_lock);
+ if (simple_positive(child)) {
+ spin_unlock(&child->d_lock);
+ ret = 0;
+ break;
+ }
+ spin_unlock(&child->d_lock);
+ }
+
+ return ret;
+}
+
/**
* simple_offset_rename_exchange - exchange rename with directory offsets
* @old_dir: parent of dentry being moved
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 6c3d86532e3f..5104405ce3e6 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -3197,6 +3197,7 @@ struct offset_ctx {
void simple_offset_init(struct offset_ctx *octx);
int simple_offset_add(struct offset_ctx *octx, struct dentry *dentry);
void simple_offset_remove(struct offset_ctx *octx, struct dentry *dentry);
+int simple_offset_empty(struct dentry *dentry);
int simple_offset_rename_exchange(struct inode *old_dir,
struct dentry *old_dentry,
struct inode *new_dir,
diff --git a/mm/shmem.c b/mm/shmem.c
index 3d721d5591dd..4cae2807806e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -3371,7 +3371,7 @@ static int shmem_unlink(struct inode *dir, struct dentry *dentry)
static int shmem_rmdir(struct inode *dir, struct dentry *dentry)
{
- if (!simple_empty(dentry))
+ if (!simple_offset_empty(dentry))
return -ENOTEMPTY;
drop_nlink(d_inode(dentry));
@@ -3428,7 +3428,7 @@ static int shmem_rename2(struct mnt_idmap *idmap,
return simple_offset_rename_exchange(old_dir, old_dentry,
new_dir, new_dentry);
- if (!simple_empty(new_dentry))
+ if (!simple_offset_empty(new_dentry))
return -ENOTEMPTY;
if (flags & RENAME_WHITEOUT) {
--
2.39.2
* [PATCH 6.6 25/28] maple_tree: Add mtree_alloc_cyclic()
2024-10-24 13:22 ` [PATCH 6.6 16/28] Revert "maple_tree: correct tree corruption on spanning store" Yu Kuai
` (7 preceding siblings ...)
2024-10-24 13:22 ` [PATCH 6.6 24/28] libfs: Add simple_offset_empty() Yu Kuai
@ 2024-10-24 13:22 ` Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 26/28] libfs: Convert simple directory offsets to use a Maple Tree Yu Kuai
` (2 subsequent siblings)
11 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:22 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: Chuck Lever <chuck.lever@oracle.com>
commit 9b6713cc75229f25552c643083cbdbfb771e5bca upstream.
I need a cyclic allocator for the simple_offset implementation in
fs/libfs.c.
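As a usage illustration only (the tree, ID policy and caller here are
hypothetical; the signature and return semantics are those documented
in the hunk below):

#include <linux/maple_tree.h>

static struct maple_tree id_tree = MTREE_INIT(id_tree, MT_FLAGS_ALLOC_RANGE);
static unsigned long next_id = 1;

/* Hand out IDs that keep increasing, wrapping back to 1 when exhausted. */
static int assign_id(void *object, unsigned long *out_id)
{
	/* 0: allocated; 1: allocated after wrapping back to the range start;
	 * negative: -EBUSY (range full), -ENOMEM, or -EINVAL.
	 */
	int ret = mtree_alloc_cyclic(&id_tree, out_id, object,
				     1, ULONG_MAX, &next_id, GFP_KERNEL);

	return ret < 0 ? ret : 0;
}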
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Link: https://lore.kernel.org/r/170820144179.6328.12838600511394432325.stgit@91.116.238.104.host.secureserver.net
Signed-off-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
include/linux/maple_tree.h | 7 +++
lib/maple_tree.c | 93 ++++++++++++++++++++++++++++++++++++++
2 files changed, 100 insertions(+)
diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index b3d63123b945..a53ad4dabd7e 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -171,6 +171,7 @@ enum maple_type {
#define MT_FLAGS_LOCK_IRQ 0x100
#define MT_FLAGS_LOCK_BH 0x200
#define MT_FLAGS_LOCK_EXTERN 0x300
+#define MT_FLAGS_ALLOC_WRAPPED 0x0800
#define MAPLE_HEIGHT_MAX 31
@@ -319,6 +320,9 @@ int mtree_insert_range(struct maple_tree *mt, unsigned long first,
int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
void *entry, unsigned long size, unsigned long min,
unsigned long max, gfp_t gfp);
+int mtree_alloc_cyclic(struct maple_tree *mt, unsigned long *startp,
+ void *entry, unsigned long range_lo, unsigned long range_hi,
+ unsigned long *next, gfp_t gfp);
int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,
void *entry, unsigned long size, unsigned long min,
unsigned long max, gfp_t gfp);
@@ -499,6 +503,9 @@ void *mas_find_range(struct ma_state *mas, unsigned long max);
void *mas_find_rev(struct ma_state *mas, unsigned long min);
void *mas_find_range_rev(struct ma_state *mas, unsigned long max);
int mas_preallocate(struct ma_state *mas, void *entry, gfp_t gfp);
+int mas_alloc_cyclic(struct ma_state *mas, unsigned long *startp,
+ void *entry, unsigned long range_lo, unsigned long range_hi,
+ unsigned long *next, gfp_t gfp);
bool mas_nomem(struct ma_state *mas, gfp_t gfp);
void mas_pause(struct ma_state *mas);
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 1af83414877a..5328e08723d7 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4337,6 +4337,56 @@ static inline void *mas_insert(struct ma_state *mas, void *entry)
}
+/**
+ * mas_alloc_cyclic() - Internal call to find somewhere to store an entry
+ * @mas: The maple state.
+ * @startp: Pointer to ID.
+ * @range_lo: Lower bound of range to search.
+ * @range_hi: Upper bound of range to search.
+ * @entry: The entry to store.
+ * @next: Pointer to next ID to allocate.
+ * @gfp: The GFP_FLAGS to use for allocations.
+ *
+ * Return: 0 if the allocation succeeded without wrapping, 1 if the
+ * allocation succeeded after wrapping, or -EBUSY if there are no
+ * free entries.
+ */
+int mas_alloc_cyclic(struct ma_state *mas, unsigned long *startp,
+ void *entry, unsigned long range_lo, unsigned long range_hi,
+ unsigned long *next, gfp_t gfp)
+{
+ unsigned long min = range_lo;
+ int ret = 0;
+
+ range_lo = max(min, *next);
+ ret = mas_empty_area(mas, range_lo, range_hi, 1);
+ if ((mas->tree->ma_flags & MT_FLAGS_ALLOC_WRAPPED) && ret == 0) {
+ mas->tree->ma_flags &= ~MT_FLAGS_ALLOC_WRAPPED;
+ ret = 1;
+ }
+ if (ret < 0 && range_lo > min) {
+ ret = mas_empty_area(mas, min, range_hi, 1);
+ if (ret == 0)
+ ret = 1;
+ }
+ if (ret < 0)
+ return ret;
+
+ do {
+ mas_insert(mas, entry);
+ } while (mas_nomem(mas, gfp));
+ if (mas_is_err(mas))
+ return xa_err(mas->node);
+
+ *startp = mas->index;
+ *next = *startp + 1;
+ if (*next == 0)
+ mas->tree->ma_flags |= MT_FLAGS_ALLOC_WRAPPED;
+
+ return ret;
+}
+EXPORT_SYMBOL(mas_alloc_cyclic);
+
static __always_inline void mas_rewalk(struct ma_state *mas, unsigned long index)
{
retry:
@@ -6490,6 +6540,49 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
}
EXPORT_SYMBOL(mtree_alloc_range);
+/**
+ * mtree_alloc_cyclic() - Find somewhere to store this entry in the tree.
+ * @mt: The maple tree.
+ * @startp: Pointer to ID.
+ * @range_lo: Lower bound of range to search.
+ * @range_hi: Upper bound of range to search.
+ * @entry: The entry to store.
+ * @next: Pointer to next ID to allocate.
+ * @gfp: The GFP_FLAGS to use for allocations.
+ *
+ * Finds an empty entry in @mt after @next, stores the new index into
+ * the @id pointer, stores the entry at that index, then updates @next.
+ *
+ * @mt must be initialized with the MT_FLAGS_ALLOC_RANGE flag.
+ *
+ * Context: Any context. Takes and releases the mt.lock. May sleep if
+ * the @gfp flags permit.
+ *
+ * Return: 0 if the allocation succeeded without wrapping, 1 if the
+ * allocation succeeded after wrapping, -ENOMEM if memory could not be
+ * allocated, -EINVAL if @mt cannot be used, or -EBUSY if there are no
+ * free entries.
+ */
+int mtree_alloc_cyclic(struct maple_tree *mt, unsigned long *startp,
+ void *entry, unsigned long range_lo, unsigned long range_hi,
+ unsigned long *next, gfp_t gfp)
+{
+ int ret;
+
+ MA_STATE(mas, mt, 0, 0);
+
+ if (!mt_is_alloc(mt))
+ return -EINVAL;
+ if (WARN_ON_ONCE(mt_is_reserved(entry)))
+ return -EINVAL;
+ mtree_lock(mt);
+ ret = mas_alloc_cyclic(&mas, startp, entry, range_lo, range_hi,
+ next, gfp);
+ mtree_unlock(mt);
+ return ret;
+}
+EXPORT_SYMBOL(mtree_alloc_cyclic);
+
int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,
void *entry, unsigned long size, unsigned long min,
unsigned long max, gfp_t gfp)
--
2.39.2
* [PATCH 6.6 26/28] libfs: Convert simple directory offsets to use a Maple Tree
2024-10-24 13:22 ` [PATCH 6.6 16/28] Revert "maple_tree: correct tree corruption on spanning store" Yu Kuai
` (8 preceding siblings ...)
2024-10-24 13:22 ` [PATCH 6.6 25/28] maple_tree: Add mtree_alloc_cyclic() Yu Kuai
@ 2024-10-24 13:22 ` Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 27/28] libfs: fix infinite directory reads for offset dir Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 28/28] maple_tree: correct tree corruption on spanning store Yu Kuai
11 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:22 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: Chuck Lever <chuck.lever@oracle.com>
commit 0e4a862174f2a8d1653a8a9cf0815020e1d3af24 upstream.
Test robot reports:
> kernel test robot noticed a -19.0% regression of aim9.disk_src.ops_per_sec on:
>
> commit: a2e459555c5f9da3e619b7e47a63f98574dc75f1 ("shmem: stable directory offsets")
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
Feng Tang further clarifies that:
> ... the new simple_offset_add()
> called by shmem_mknod() brings extra cost related with slab,
> specifically the 'radix_tree_node', which cause the regression.
Willy's analysis is that, over time, the test workload causes
xa_alloc_cyclic() to fragment the underlying SLAB cache.
This patch replaces the offset_ctx's xarray with a Maple Tree in the
hope that Maple Tree's dense node mode will handle this scenario
more scalably.
In addition, we can widen the simple directory offset maximum to
signed long (as loff_t is also signed).
Suggested-by: Matthew Wilcox <willy@infradead.org>
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202309081306.3ecb3734-oliver.sang@intel.com
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Link: https://lore.kernel.org/r/170820145616.6328.12620992971699079156.stgit@91.116.238.104.host.secureserver.net
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
fs/libfs.c | 47 +++++++++++++++++++++++-----------------------
include/linux/fs.h | 5 +++--
2 files changed, 26 insertions(+), 26 deletions(-)
diff --git a/fs/libfs.c b/fs/libfs.c
index d7b901cb9af4..98731178a3c1 100644
--- a/fs/libfs.c
+++ b/fs/libfs.c
@@ -244,17 +244,17 @@ enum {
DIR_OFFSET_MIN = 2,
};
-static void offset_set(struct dentry *dentry, u32 offset)
+static void offset_set(struct dentry *dentry, long offset)
{
- dentry->d_fsdata = (void *)((uintptr_t)(offset));
+ dentry->d_fsdata = (void *)offset;
}
-static u32 dentry2offset(struct dentry *dentry)
+static long dentry2offset(struct dentry *dentry)
{
- return (u32)((uintptr_t)(dentry->d_fsdata));
+ return (long)dentry->d_fsdata;
}
-static struct lock_class_key simple_offset_xa_lock;
+static struct lock_class_key simple_offset_lock_class;
/**
* simple_offset_init - initialize an offset_ctx
@@ -263,8 +263,8 @@ static struct lock_class_key simple_offset_xa_lock;
*/
void simple_offset_init(struct offset_ctx *octx)
{
- xa_init_flags(&octx->xa, XA_FLAGS_ALLOC1);
- lockdep_set_class(&octx->xa.xa_lock, &simple_offset_xa_lock);
+ mt_init_flags(&octx->mt, MT_FLAGS_ALLOC_RANGE);
+ lockdep_set_class(&octx->mt.ma_lock, &simple_offset_lock_class);
octx->next_offset = DIR_OFFSET_MIN;
}
@@ -273,20 +273,19 @@ void simple_offset_init(struct offset_ctx *octx)
* @octx: directory offset ctx to be updated
* @dentry: new dentry being added
*
- * Returns zero on success. @so_ctx and the dentry offset are updated.
+ * Returns zero on success. @octx and the dentry's offset are updated.
* Otherwise, a negative errno value is returned.
*/
int simple_offset_add(struct offset_ctx *octx, struct dentry *dentry)
{
- static const struct xa_limit limit = XA_LIMIT(DIR_OFFSET_MIN, U32_MAX);
- u32 offset;
+ unsigned long offset;
int ret;
if (dentry2offset(dentry) != 0)
return -EBUSY;
- ret = xa_alloc_cyclic(&octx->xa, &offset, dentry, limit,
- &octx->next_offset, GFP_KERNEL);
+ ret = mtree_alloc_cyclic(&octx->mt, &offset, dentry, DIR_OFFSET_MIN,
+ LONG_MAX, &octx->next_offset, GFP_KERNEL);
if (ret < 0)
return ret;
@@ -302,13 +301,13 @@ int simple_offset_add(struct offset_ctx *octx, struct dentry *dentry)
*/
void simple_offset_remove(struct offset_ctx *octx, struct dentry *dentry)
{
- u32 offset;
+ long offset;
offset = dentry2offset(dentry);
if (offset == 0)
return;
- xa_erase(&octx->xa, offset);
+ mtree_erase(&octx->mt, offset);
offset_set(dentry, 0);
}
@@ -331,7 +330,7 @@ int simple_offset_empty(struct dentry *dentry)
index = DIR_OFFSET_MIN;
octx = inode->i_op->get_offset_ctx(inode);
- xa_for_each(&octx->xa, index, child) {
+ mt_for_each(&octx->mt, child, index, LONG_MAX) {
spin_lock(&child->d_lock);
if (simple_positive(child)) {
spin_unlock(&child->d_lock);
@@ -361,8 +360,8 @@ int simple_offset_rename_exchange(struct inode *old_dir,
{
struct offset_ctx *old_ctx = old_dir->i_op->get_offset_ctx(old_dir);
struct offset_ctx *new_ctx = new_dir->i_op->get_offset_ctx(new_dir);
- u32 old_index = dentry2offset(old_dentry);
- u32 new_index = dentry2offset(new_dentry);
+ long old_index = dentry2offset(old_dentry);
+ long new_index = dentry2offset(new_dentry);
int ret;
simple_offset_remove(old_ctx, old_dentry);
@@ -388,9 +387,9 @@ int simple_offset_rename_exchange(struct inode *old_dir,
out_restore:
offset_set(old_dentry, old_index);
- xa_store(&old_ctx->xa, old_index, old_dentry, GFP_KERNEL);
+ mtree_store(&old_ctx->mt, old_index, old_dentry, GFP_KERNEL);
offset_set(new_dentry, new_index);
- xa_store(&new_ctx->xa, new_index, new_dentry, GFP_KERNEL);
+ mtree_store(&new_ctx->mt, new_index, new_dentry, GFP_KERNEL);
return ret;
}
@@ -403,7 +402,7 @@ int simple_offset_rename_exchange(struct inode *old_dir,
*/
void simple_offset_destroy(struct offset_ctx *octx)
{
- xa_destroy(&octx->xa);
+ mtree_destroy(&octx->mt);
}
/**
@@ -433,16 +432,16 @@ static loff_t offset_dir_llseek(struct file *file, loff_t offset, int whence)
/* In this case, ->private_data is protected by f_pos_lock */
file->private_data = NULL;
- return vfs_setpos(file, offset, U32_MAX);
+ return vfs_setpos(file, offset, LONG_MAX);
}
static struct dentry *offset_find_next(struct offset_ctx *octx, loff_t offset)
{
+ MA_STATE(mas, &octx->mt, offset, offset);
struct dentry *child, *found = NULL;
- XA_STATE(xas, &octx->xa, offset);
rcu_read_lock();
- child = xas_next_entry(&xas, U32_MAX);
+ child = mas_find(&mas, LONG_MAX);
if (!child)
goto out;
spin_lock(&child->d_lock);
@@ -456,8 +455,8 @@ static struct dentry *offset_find_next(struct offset_ctx *octx, loff_t offset)
static bool offset_dir_emit(struct dir_context *ctx, struct dentry *dentry)
{
- u32 offset = dentry2offset(dentry);
struct inode *inode = d_inode(dentry);
+ long offset = dentry2offset(dentry);
return ctx->actor(ctx, dentry->d_name.name, dentry->d_name.len, offset,
inode->i_ino, fs_umode_to_dtype(inode->i_mode));
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 5104405ce3e6..b9edab0ba46c 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -43,6 +43,7 @@
#include <linux/cred.h>
#include <linux/mnt_idmapping.h>
#include <linux/slab.h>
+#include <linux/maple_tree.h>
#include <asm/byteorder.h>
#include <uapi/linux/fs.h>
@@ -3190,8 +3191,8 @@ extern ssize_t simple_write_to_buffer(void *to, size_t available, loff_t *ppos,
const void __user *from, size_t count);
struct offset_ctx {
- struct xarray xa;
- u32 next_offset;
+ struct maple_tree mt;
+ unsigned long next_offset;
};
void simple_offset_init(struct offset_ctx *octx);
--
2.39.2
* [PATCH 6.6 27/28] libfs: fix infinite directory reads for offset dir
2024-10-24 13:22 ` [PATCH 6.6 16/28] Revert "maple_tree: correct tree corruption on spanning store" Yu Kuai
` (9 preceding siblings ...)
2024-10-24 13:22 ` [PATCH 6.6 26/28] libfs: Convert simple directory offsets to use a Maple Tree Yu Kuai
@ 2024-10-24 13:22 ` Yu Kuai
2024-10-24 13:22 ` [PATCH 6.6 28/28] maple_tree: correct tree corruption on spanning store Yu Kuai
11 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:22 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: yangerkun <yangerkun@huawei.com>
commit 64a7ce76fb901bf9f9c36cf5d681328fc0fd4b5a upstream.
After we switch tmpfs dir operations from simple_dir_operations to
simple_offset_dir_operations, every rename fills the new dentry into the
destination dir's maple tree (&SHMEM_I(inode)->dir_offsets->mt) with a free
key starting at octx->next_offset, and then sets next_offset to that free
key + 1. Combined with renames happening at the same time, this leads to an
infinite readdir, which makes generic/736 in xfstests fail (details below,
with a rough sketch after the steps):
1. create 5000 files (1 2 3 ...) under one dir
2. call readdir (man 3 readdir) once, and get one entry
3. rename(entry, "TEMPFILE"), then rename("TEMPFILE", entry)
4. loop steps 2~3 until readdir returns nothing or we loop too many times
   (tmpfs fails the test via the second condition)
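As a rough userspace sketch of this loop (illustrative only: the mount
path, iteration cap, and error handling are made up, and this is not
the actual generic/736 test code):

#include <dirent.h>
#include <stdio.h>

int main(void)
{
	DIR *dir = opendir("/mnt/tmpfs/testdir");	/* prepared as in step 1 */
	struct dirent *d;
	long n = 0;

	if (!dir)
		return 1;

	while ((d = readdir(dir)) != NULL) {
		if (d->d_name[0] == '.')
			continue;
		if (++n > 100000) {	/* far more than the 5000 entries created */
			fprintf(stderr, "infinite readdir\n");
			break;
		}
		/* Each rename pair re-inserts the dentry at a fresh, higher
		 * offset, so a broken readdir keeps encountering it again.
		 */
		renameat(dirfd(dir), d->d_name, dirfd(dir), "TEMPFILE");
		renameat(dirfd(dir), "TEMPFILE", dirfd(dir), d->d_name);
	}
	closedir(dir);
	return 0;
}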
We fix it with the same logic as commit 9b378f6ad48cf ("btrfs: fix infinite
directory reads"): record last_index when the dir is opened, and do not emit
any entry whose index >= last_index. The file->private_data already used by
the offset dir code can hold this directly, and we also update last_index
when the dir file is llseek'd.
Fixes: a2e459555c5f ("shmem: stable directory offsets")
Signed-off-by: yangerkun <yangerkun@huawei.com>
Link: https://lore.kernel.org/r/20240731043835.1828697-1-yangerkun@huawei.com
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
[brauner: only update last_index after seek when offset is zero like Jan suggested]
Signed-off-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
fs/libfs.c | 35 ++++++++++++++++++++++++-----------
1 file changed, 24 insertions(+), 11 deletions(-)
diff --git a/fs/libfs.c b/fs/libfs.c
index 98731178a3c1..fd5d30c798de 100644
--- a/fs/libfs.c
+++ b/fs/libfs.c
@@ -405,6 +405,14 @@ void simple_offset_destroy(struct offset_ctx *octx)
mtree_destroy(&octx->mt);
}
+static int offset_dir_open(struct inode *inode, struct file *file)
+{
+ struct offset_ctx *ctx = inode->i_op->get_offset_ctx(inode);
+
+ file->private_data = (void *)ctx->next_offset;
+ return 0;
+}
+
/**
* offset_dir_llseek - Advance the read position of a directory descriptor
* @file: an open directory whose position is to be updated
@@ -418,6 +426,9 @@ void simple_offset_destroy(struct offset_ctx *octx)
*/
static loff_t offset_dir_llseek(struct file *file, loff_t offset, int whence)
{
+ struct inode *inode = file->f_inode;
+ struct offset_ctx *ctx = inode->i_op->get_offset_ctx(inode);
+
switch (whence) {
case SEEK_CUR:
offset += file->f_pos;
@@ -431,7 +442,8 @@ static loff_t offset_dir_llseek(struct file *file, loff_t offset, int whence)
}
/* In this case, ->private_data is protected by f_pos_lock */
- file->private_data = NULL;
+ if (!offset)
+ file->private_data = (void *)ctx->next_offset;
return vfs_setpos(file, offset, LONG_MAX);
}
@@ -462,7 +474,7 @@ static bool offset_dir_emit(struct dir_context *ctx, struct dentry *dentry)
inode->i_ino, fs_umode_to_dtype(inode->i_mode));
}
-static void *offset_iterate_dir(struct inode *inode, struct dir_context *ctx)
+static void offset_iterate_dir(struct inode *inode, struct dir_context *ctx, long last_index)
{
struct offset_ctx *octx = inode->i_op->get_offset_ctx(inode);
struct dentry *dentry;
@@ -470,17 +482,21 @@ static void *offset_iterate_dir(struct inode *inode, struct dir_context *ctx)
while (true) {
dentry = offset_find_next(octx, ctx->pos);
if (!dentry)
- return ERR_PTR(-ENOENT);
+ return;
+
+ if (dentry2offset(dentry) >= last_index) {
+ dput(dentry);
+ return;
+ }
if (!offset_dir_emit(ctx, dentry)) {
dput(dentry);
- break;
+ return;
}
ctx->pos = dentry2offset(dentry) + 1;
dput(dentry);
}
- return NULL;
}
/**
@@ -507,22 +523,19 @@ static void *offset_iterate_dir(struct inode *inode, struct dir_context *ctx)
static int offset_readdir(struct file *file, struct dir_context *ctx)
{
struct dentry *dir = file->f_path.dentry;
+ long last_index = (long)file->private_data;
lockdep_assert_held(&d_inode(dir)->i_rwsem);
if (!dir_emit_dots(file, ctx))
return 0;
- /* In this case, ->private_data is protected by f_pos_lock */
- if (ctx->pos == DIR_OFFSET_MIN)
- file->private_data = NULL;
- else if (file->private_data == ERR_PTR(-ENOENT))
- return 0;
- file->private_data = offset_iterate_dir(d_inode(dir), ctx);
+ offset_iterate_dir(d_inode(dir), ctx, last_index);
return 0;
}
const struct file_operations simple_offset_dir_operations = {
+ .open = offset_dir_open,
.llseek = offset_dir_llseek,
.iterate_shared = offset_readdir,
.read = generic_read_dir,
--
2.39.2
* [PATCH 6.6 28/28] maple_tree: correct tree corruption on spanning store
2024-10-24 13:22 ` [PATCH 6.6 16/28] Revert "maple_tree: correct tree corruption on spanning store" Yu Kuai
` (10 preceding siblings ...)
2024-10-24 13:22 ` [PATCH 6.6 27/28] libfs: fix infinite directory reads for offset dir Yu Kuai
@ 2024-10-24 13:22 ` Yu Kuai
2024-11-06 15:02 ` Lorenzo Stoakes
11 siblings, 1 reply; 47+ messages in thread
From: Yu Kuai @ 2024-10-24 13:22 UTC (permalink / raw)
To: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever
Cc: amd-gfx, dri-devel, linux-kernel, linux-fsdevel, maple-tree,
linux-mm, yukuai1, yi.zhang, yangerkun
From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
commit bea07fd63192b61209d48cbb81ef474cc3ee4c62 upstream.
Patch series "maple_tree: correct tree corruption on spanning store", v3.
There has been a nasty yet subtle maple tree corruption bug that appears
to have been in existence since the inception of the algorithm.
This bug seems far more likely to happen since commit f8d112a4e657
("mm/mmap: avoid zeroing vma tree in mmap_region()"), which is the point
at which reports started to be submitted concerning this bug.
We were made definitely aware of the bug thanks to the kind efforts of
Bert Karwatzki who helped enormously in my being able to track this down
and identify the cause of it.
The bug arises when an attempt is made to perform a spanning store across
two leaf nodes, where the right leaf node is the rightmost child of the
shared parent, AND the store completely consumes the rightmost node.
This results in mas_wr_spanning_store() mistakenly duplicating the new and
existing entries at the maximum pivot within the range, and thus maple
tree corruption.
The fix patch corrects this by detecting this scenario and disallowing the
mistaken duplicate copy.
The fix patch commit message goes into great detail as to how this occurs.
This series also includes a test which reliably reproduces the issue, and
asserts that the fix works correctly.
Bert has kindly tested the fix and confirmed it resolved his issues. Also
Mikhail Gavrilov kindly reported what appears to be precisely the same
bug, which this fix should also resolve.
This patch (of 2):
There has been a subtle bug present in the maple tree implementation from
its inception.
This arises from how stores are performed - when a store occurs, it will
overwrite overlapping ranges and adjust the tree as necessary to
accommodate this.
A range may always ultimately span two leaf nodes. In this instance we
walk the two leaf nodes, determine which elements are not overwritten to
the left and to the right of the start and end of the ranges respectively
and then rebalance the tree to contain these entries and the newly
inserted one.
This kind of store is dubbed a 'spanning store' and is implemented by
mas_wr_spanning_store().
In order to reach this stage, mas_store_gfp() invokes
mas_wr_preallocate(), mas_wr_store_type() and mas_wr_walk() in turn to
walk the tree and update the object (mas) to traverse to the location
where the write should be performed, determining its store type.
When a spanning store is required, this function returns false, stopping at
the parent node which contains the target range, and mas_wr_store_type()
marks mas->store_type as wr_spanning_store to denote this fact.
When we go to perform the store in mas_wr_spanning_store(), we first
determine the elements AFTER the END of the range we wish to store (that
is, to the right of the entry to be inserted) - we do this by walking to
the NEXT pivot in the tree (i.e. r_mas.last + 1), starting at the node we
have just determined contains the range over which we intend to write.
We then turn our attention to the entries to the left of the entry we are
inserting, whose state is represented by l_mas, and copy these into a 'big
node', which is a special node which contains enough slots to contain two
leaf node's worth of data.
We then copy the entry we wish to store immediately after this - the copy
and the insertion of the new entry is performed by mas_store_b_node().
After this we copy the elements to the right of the end of the range which
we are inserting, if we have not exceeded the length of the node (i.e.
r_mas.offset <= r_mas.end).
Herein lies the bug - under very specific circumstances, this logic can
break and corrupt the maple tree.
Consider the following tree:
                               Height
 0                           Root Node
                              /      \
            pivot = 0xffff   /        \   pivot = ULONG_MAX
                            /          \
 1                         A [-----]   ...
                            /    \
            pivot = 0x4fff /      \ pivot = 0xffff
                          /        \
 2 (LEAVES)         B [-----]    [-----] C
                                       ^--- Last pivot 0xffff.
Now imagine we wish to store an entry in the range [0x4000, 0xffff] (note
that all ranges expressed in maple tree code are inclusive):
1. mas_store_gfp() descends the tree, finds node A at <=0xffff, then
determines that this is a spanning store across nodes B and C. The mas
state is set such that the current node from which we traverse further
is node A.
2. In mas_wr_spanning_store() we try to find elements to the right of pivot
0xffff by searching for an index of 0x10000:
- mas_wr_walk_index() invokes mas_wr_walk_descend() and
mas_wr_node_walk() in turn.
- mas_wr_node_walk() loops over entries in node A until EITHER it
finds an entry whose pivot equals or exceeds 0x10000 OR it
reaches the final entry.
- Since no entry has a pivot equal to or exceeding 0x10000, pivot
0xffff is selected, leading to node C.
- mas_wr_walk_traverse() resets the mas state to traverse node C. We
loop around and invoke mas_wr_walk_descend() and mas_wr_node_walk()
in turn once again.
- Again, we reach the last entry in node C, which has a pivot of
0xffff.
3. We then copy the elements to the left of 0x4000 in node B to the big
node via mas_store_b_node(), and insert the new [0x4000, 0xffff] entry
too.
4. We determine whether we have any entries to copy from the right of the
end of the range - and with r_mas set up at the entry at pivot 0xffff,
the check r_mas.offset <= r_mas.end passes, so we DUPLICATE the entry at
pivot 0xffff.
5. BUG! The maple tree is corrupted with a duplicate entry.
This requires a very specific set of circumstances - we must be spanning
the last element in a leaf node, which is the last element in the parent
node: that is, a spanning store across two leaf nodes with a range that
ends at that shared pivot.
A potential solution to this problem would simply be to reset the walk
each time we traverse r_mas; however, given the rarity of this situation,
that seems rather inefficient.
Instead, this patch detects whether the right-hand node is populated, i.e.
has anything we need to copy.
We do so by only copying elements from the right of the entry being
inserted when the maximum value present exceeds the last, rather than
basing this on offset position. In the scenario above, r_mas ends at the
entry with pivot 0xffff, so r_mas.max > r_mas.last is false and nothing is
copied, whereas the old r_mas.offset <= r_mas.end check passed and
duplicated that entry.
The patch also updates some comments and eliminates the unused bool return
value in mas_wr_walk_index().
The work performed in commit f8d112a4e657 ("mm/mmap: avoid zeroing vma
tree in mmap_region()") seems to have made the probability of this event
much more likely, which is the point at which reports started to be
submitted concerning this bug.
The motivation for this change arose from Bert Karwatzki's report of
encountering mm instability after the release of kernel v6.12-rc1 which,
after the use of CONFIG_DEBUG_VM_MAPLE_TREE and similar configuration
options, was identified as maple tree corruption.
After Bert very generously provided his time and ability to reproduce this
event consistently, I was able to finally identify that the issue
discussed in this commit message was occurring for him.
Link: https://lkml.kernel.org/r/cover.1728314402.git.lorenzo.stoakes@oracle.com
Link: https://lkml.kernel.org/r/48b349a2a0f7c76e18772712d0997a5e12ab0a3b.1728314403.git.lorenzo.stoakes@oracle.com
Fixes: 54a611b60590 ("Maple Tree: add new data structure")
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reported-by: Bert Karwatzki <spasswolf@web.de>
Closes: https://lore.kernel.org/all/20241001023402.3374-1-spasswolf@web.de/
Tested-by: Bert Karwatzki <spasswolf@web.de>
Reported-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Closes: https://lore.kernel.org/all/CABXGCsOPwuoNOqSMmAvWO2Fz4TEmPnjFj-b7iF+XFRu1h7-+Dg@mail.gmail.com/
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Tested-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
lib/maple_tree.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 5328e08723d7..c57b6fc4db2e 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -2239,6 +2239,8 @@ static inline void mas_node_or_none(struct ma_state *mas,
/*
* mas_wr_node_walk() - Find the correct offset for the index in the @mas.
+ * If @mas->index cannot be found within the containing
+ * node, we traverse to the last entry in the node.
* @wr_mas: The maple write state
*
* Uses mas_slot_locked() and does not need to worry about dead nodes.
@@ -3655,7 +3657,7 @@ static bool mas_wr_walk(struct ma_wr_state *wr_mas)
return true;
}
-static bool mas_wr_walk_index(struct ma_wr_state *wr_mas)
+static void mas_wr_walk_index(struct ma_wr_state *wr_mas)
{
struct ma_state *mas = wr_mas->mas;
@@ -3664,11 +3666,9 @@ static bool mas_wr_walk_index(struct ma_wr_state *wr_mas)
wr_mas->content = mas_slot_locked(mas, wr_mas->slots,
mas->offset);
if (ma_is_leaf(wr_mas->type))
- return true;
+ return;
mas_wr_walk_traverse(wr_mas);
-
}
- return true;
}
/*
* mas_extend_spanning_null() - Extend a store of a %NULL to include surrounding %NULLs.
@@ -3899,8 +3899,8 @@ static inline int mas_wr_spanning_store(struct ma_wr_state *wr_mas)
memset(&b_node, 0, sizeof(struct maple_big_node));
/* Copy l_mas and store the value in b_node. */
mas_store_b_node(&l_wr_mas, &b_node, l_mas.end);
- /* Copy r_mas into b_node. */
- if (r_mas.offset <= r_mas.end)
+ /* Copy r_mas into b_node if there is anything to copy. */
+ if (r_mas.max > r_mas.last)
mas_mab_cp(&r_mas, r_mas.offset, r_mas.end,
&b_node, b_node.b_end + 1);
else
--
2.39.2
* Re: [PATCH 6.6 28/28] maple_tree: correct tree corruption on spanning store
2024-10-24 13:22 ` [PATCH 6.6 28/28] maple_tree: correct tree corruption on spanning store Yu Kuai
@ 2024-11-06 15:02 ` Lorenzo Stoakes
2024-11-07 1:22 ` Yu Kuai
0 siblings, 1 reply; 47+ messages in thread
From: Lorenzo Stoakes @ 2024-11-06 15:02 UTC (permalink / raw)
To: Yu Kuai
Cc: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever, amd-gfx, dri-devel,
linux-kernel, linux-fsdevel, maple-tree, linux-mm, yi.zhang,
yangerkun
On Thu, Oct 24, 2024 at 09:22:25PM +0800, Yu Kuai wrote:
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 5328e08723d7..c57b6fc4db2e 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -2239,6 +2239,8 @@ static inline void mas_node_or_none(struct ma_state *mas,
>
> /*
> * mas_wr_node_walk() - Find the correct offset for the index in the @mas.
> + * If @mas->index cannot be found within the containing
> + * node, we traverse to the last entry in the node.
> * @wr_mas: The maple write state
> *
> * Uses mas_slot_locked() and does not need to worry about dead nodes.
> @@ -3655,7 +3657,7 @@ static bool mas_wr_walk(struct ma_wr_state *wr_mas)
> return true;
> }
>
> -static bool mas_wr_walk_index(struct ma_wr_state *wr_mas)
> +static void mas_wr_walk_index(struct ma_wr_state *wr_mas)
> {
> struct ma_state *mas = wr_mas->mas;
>
> @@ -3664,11 +3666,9 @@ static bool mas_wr_walk_index(struct ma_wr_state *wr_mas)
> wr_mas->content = mas_slot_locked(mas, wr_mas->slots,
> mas->offset);
> if (ma_is_leaf(wr_mas->type))
> - return true;
> + return;
> mas_wr_walk_traverse(wr_mas);
> -
> }
> - return true;
> }
> /*
> * mas_extend_spanning_null() - Extend a store of a %NULL to include surrounding %NULLs.
> @@ -3899,8 +3899,8 @@ static inline int mas_wr_spanning_store(struct ma_wr_state *wr_mas)
> memset(&b_node, 0, sizeof(struct maple_big_node));
> /* Copy l_mas and store the value in b_node. */
> mas_store_b_node(&l_wr_mas, &b_node, l_mas.end);
> - /* Copy r_mas into b_node. */
> - if (r_mas.offset <= r_mas.end)
> + /* Copy r_mas into b_node if there is anything to copy. */
> + if (r_mas.max > r_mas.last)
> mas_mab_cp(&r_mas, r_mas.offset, r_mas.end,
> &b_node, b_node.b_end + 1);
> else
> --
> 2.39.2
>
This is a good example of where you've gone horribly wrong: this relies on
31c532a8af57 ("maple_tree: add end of node tracking to the maple state"), which
is not in 6.6.
You reverted (!!) my backported patch for this that _does not require this_
only to pull in 31c532a8af57 in order to apply the upstream version of my
fix over that.
This is totally unnecessary and I can't see why _on earth_ you would need
31c532a8af57.
You need to correctly identify what patches need to be backported and _fix
merge conflicts_ accordingly, like I did with the patch that you decided to
revert.
In the kernel it is absolutely unacceptable to arbitrarily backport huge
amounts of patches you don't understand in order to avoid merge conflicts;
you may be breaking all kinds of things without realising.
You have to find the _minimal_ change and _fix merge conflicts_.
Stable is not a playground, it's what millions (billions?) of kernels rely
upon.
In any case, I think Liam's reply suggests that we should be looking at
maybe 1 thing to backport? If we even need to?
Please in future be more cautious, and if you are unsure how to proceed,
cc- the relevant maintainers (+ all authors of patches you intend to
backport/revert) in an RFC. Thanks.
* Re: [PATCH 6.6 28/28] maple_tree: correct tree corruption on spanning store
2024-11-06 15:02 ` Lorenzo Stoakes
@ 2024-11-07 1:22 ` Yu Kuai
0 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-11-07 1:22 UTC (permalink / raw)
To: Lorenzo Stoakes, Yu Kuai
Cc: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman,
chengming.zhou, zhangpeng.00, chuck.lever, amd-gfx, dri-devel,
linux-kernel, linux-fsdevel, maple-tree, linux-mm, yi.zhang,
yangerkun, yukuai (C)
Hi,
On 2024/11/06 23:02, Lorenzo Stoakes wrote:
> On Thu, Oct 24, 2024 at 09:22:25PM +0800, Yu Kuai wrote:
>
>> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
>> index 5328e08723d7..c57b6fc4db2e 100644
>> --- a/lib/maple_tree.c
>> +++ b/lib/maple_tree.c
>> @@ -2239,6 +2239,8 @@ static inline void mas_node_or_none(struct ma_state *mas,
>>
>> /*
>> * mas_wr_node_walk() - Find the correct offset for the index in the @mas.
>> + * If @mas->index cannot be found within the containing
>> + * node, we traverse to the last entry in the node.
>> * @wr_mas: The maple write state
>> *
>> * Uses mas_slot_locked() and does not need to worry about dead nodes.
>> @@ -3655,7 +3657,7 @@ static bool mas_wr_walk(struct ma_wr_state *wr_mas)
>> return true;
>> }
>>
>> -static bool mas_wr_walk_index(struct ma_wr_state *wr_mas)
>> +static void mas_wr_walk_index(struct ma_wr_state *wr_mas)
>> {
>> struct ma_state *mas = wr_mas->mas;
>>
>> @@ -3664,11 +3666,9 @@ static bool mas_wr_walk_index(struct ma_wr_state *wr_mas)
>> wr_mas->content = mas_slot_locked(mas, wr_mas->slots,
>> mas->offset);
>> if (ma_is_leaf(wr_mas->type))
>> - return true;
>> + return;
>> mas_wr_walk_traverse(wr_mas);
>> -
>> }
>> - return true;
>> }
>> /*
>> * mas_extend_spanning_null() - Extend a store of a %NULL to include surrounding %NULLs.
>> @@ -3899,8 +3899,8 @@ static inline int mas_wr_spanning_store(struct ma_wr_state *wr_mas)
>> memset(&b_node, 0, sizeof(struct maple_big_node));
>> /* Copy l_mas and store the value in b_node. */
>> mas_store_b_node(&l_wr_mas, &b_node, l_mas.end);
>> - /* Copy r_mas into b_node. */
>> - if (r_mas.offset <= r_mas.end)
>> + /* Copy r_mas into b_node if there is anything to copy. */
>> + if (r_mas.max > r_mas.last)
>> mas_mab_cp(&r_mas, r_mas.offset, r_mas.end,
>> &b_node, b_node.b_end + 1);
>> else
>> --
>> 2.39.2
>>
>
> This is a good example of where you've gone horribly wrong, this relies on
> 31c532a8af57 ("maple_tree: add end of node tracking to the maple state") which
> is not in 6.6.
>
> You reverted (!!) my backported patch for this that _does not require this_
> only to pull in 31c532a8af57 in order to apply the upstream version of my
> fix over that.
>
> This is totally unnecessary and I can't see why _on earth_ you would need
> 31c532a8af57.
>
> You need to correctly identify what patches need to be backported and _fix
> merge conflicts_ accordingly, like I did with the patch that you decided to
> revert.
>
> In the kernel it is absolutely unacceptable to arbitrarily backport huge
> amounts of patches you don't understand in order to avoid merge conflicts,
> you may be breaking all kinds of things without realising.
>
> You have to find the _minimal_ change and _fix merge conflicts_.
Thanks for the suggestions, I do understand. However, I'll just give up
on this because I'm not confident fixing conflicts for the maple tree.
Other folks will have to do this if they care about this CVE for v6.6.
>
> Stable is not a playground, it's what millions (billions?) of kernels rely
> upon.
>
> In any case, I think Liam's reply suggests that we should be looking at
> maybe 1 thing to backport? If we even need to?
Keeping the xarray for patch 27 is wrong, I think. The xarray offset is
32-bit, and if the offset overflows, readdir will find nothing; this is
more severe than the original CVE.
>
> Please in future be more cautious, and if you are unsure how to proceed,
> cc- the relevant maintainers (+ all authors of patches you intend to
> backport/revert) in an RFC. Thanks.
Of course.
Thanks,
Kuai
>
> .
>
* Re: [PATCH 6.6 00/28] fix CVE-2024-46701
2024-10-24 13:19 [PATCH 6.6 00/28] fix CVE-2024-46701 Yu Kuai
` (15 preceding siblings ...)
2024-10-24 13:22 ` [PATCH 6.6 16/28] Revert "maple_tree: correct tree corruption on spanning store" Yu Kuai
@ 2024-11-06 6:16 ` Greg KH
2024-11-06 14:44 ` Liam R. Howlett
2024-11-06 15:19 ` Chuck Lever III
2024-11-06 14:43 ` Lorenzo Stoakes
17 siblings, 2 replies; 47+ messages in thread
From: Greg KH @ 2024-11-06 6:16 UTC (permalink / raw)
To: Yu Kuai
Cc: stable, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever, amd-gfx, dri-devel,
linux-kernel, linux-fsdevel, maple-tree, linux-mm, yi.zhang,
yangerkun
On Thu, Oct 24, 2024 at 09:19:41PM +0800, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@huawei.com>
>
> Fix patch is patch 27, relied patches are from:
>
> - patches from set [1] to add helpers to maple_tree, the last patch to
> improve fork() performance is not backported;
So things slowed down?
> - patches from set [2] to change maple_tree, and follow up fixes;
> - patches from set [3] to convert offset_ctx from xarray to maple_tree;
>
> Please notice that I'm not an expert in this area, and I'm afraid to
> make manual changes. That's why patch 16 revert the commit that is
> different from mainline and will cause conflict backporting new patches.
> patch 28 pick the original mainline patch again.
>
> (And this is what we did to fix the CVE in downstream kernels).
>
> [1] https://lore.kernel.org/all/20231027033845.90608-1-zhangpeng.00@bytedance.com/
> [2] https://lore.kernel.org/all/20231101171629.3612299-2-Liam.Howlett@oracle.com/T/
> [3] https://lore.kernel.org/all/170820083431.6328.16233178852085891453.stgit@91.116.238.104.host.secureserver.net/
This series looks rough. I want the maintainers of these
files/subsystems to ack this before I can take them.
thanks,
greg k-h
* Re: [PATCH 6.6 00/28] fix CVE-2024-46701
2024-11-06 6:16 ` [PATCH 6.6 00/28] fix CVE-2024-46701 Greg KH
@ 2024-11-06 14:44 ` Liam R. Howlett
2024-11-06 15:19 ` Chuck Lever III
1 sibling, 0 replies; 47+ messages in thread
From: Liam R. Howlett @ 2024-11-06 14:44 UTC (permalink / raw)
To: Greg KH
Cc: Yu Kuai, stable, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, akpm, hughd, willy, sashal, srinivasan.shanmugam,
chiahsuan.chung, mingo, mgorman, yukuai3, chengming.zhou,
zhangpeng.00, chuck.lever, amd-gfx, dri-devel, linux-kernel,
linux-fsdevel, maple-tree, linux-mm, yi.zhang, yangerkun,
lorenzo.stoakes
* Greg KH <gregkh@linuxfoundation.org> [241106 01:16]:
> On Thu, Oct 24, 2024 at 09:19:41PM +0800, Yu Kuai wrote:
> > From: Yu Kuai <yukuai3@huawei.com>
> >
> > Fix patch is patch 27, relied patches are from:
> >
> > - patches from set [1] to add helpers to maple_tree, the last patch to
> > improve fork() performance is not backported;
>
> So things slowed down?
Fork got faster in modern kernels. The backport contains helpers as
they are dependencies for later patches.
>
> > - patches from set [2] to change maple_tree, and follow up fixes;
> > - patches from set [3] to convert offset_ctx from xarray to maple_tree;
> >
> > Please notice that I'm not an expert in this area, and I'm afraid to
> > make manual changes. That's why patch 16 revert the commit that is
> > different from mainline and will cause conflict backporting new patches.
> > patch 28 pick the original mainline patch again.
You reverted and forward ported a patch but didn't Cc the author of the
patch you changed. That is probably one of the most important Cc's to
have on this list.
By the way, that fix is already in 6.6.
> >
> > (And this is what we did to fix the CVE in downstream kernels).
> >
> > [1] https://lore.kernel.org/all/20231027033845.90608-1-zhangpeng.00@bytedance.com/
> > [2] https://lore.kernel.org/all/20231101171629.3612299-2-Liam.Howlett@oracle.com/T/
> > [3] https://lore.kernel.org/all/170820083431.6328.16233178852085891453.stgit@91.116.238.104.host.secureserver.net/
>
> This series looks rough. I want to have the maintainers of these
> files/subsystems to ack this before being able to take them.
The entire backporting of all of this to fix an issue is extreme, and
although it will solve the issue, you end up running something very
different than 6.6 for a single fix.
Looking at the details of the cve, it seems very odd. This is an issue
in libfs and the affected kernel is 6.6 to 6.10.7. It then goes into
details of how the maple tree allows this - but 6.6 doesn't use the
maple tree in libfs so either the patch needs to be backported to an
older stable (6.6) or the CVE is wrong.
Almost all of these patches are to backport using the maple tree in
libfs and that should not be done.
I don't know if the CVE is incorrectly labeled or if the patch wasn't
backported far enough because I was not involved in the discussion of
this CVE - which seems like an oversight if this is specifically caused
by the maple tree?
The patch in question is 64a7ce76fb90 ("libfs: fix infinite directory
reads for offset dir"). I think we just need the one?
To be clear:
- Do not take this series
- Someone in libfs land should respond stating if the fix above needs
to be backported.
Thanks,
Liam
* Re: [PATCH 6.6 00/28] fix CVE-2024-46701
2024-11-06 6:16 ` [PATCH 6.6 00/28] fix CVE-2024-46701 Greg KH
2024-11-06 14:44 ` Liam R. Howlett
@ 2024-11-06 15:19 ` Chuck Lever III
2024-11-06 16:21 ` James Bottomley
2024-11-07 0:57 ` Yu Kuai
1 sibling, 2 replies; 47+ messages in thread
From: Chuck Lever III @ 2024-11-06 15:19 UTC (permalink / raw)
To: Yu Kuai
Cc: Greg KH, linux-stable, harry.wentland, sunpeng.li,
Rodrigo.Siqueira, alexander.deucher, christian.koenig,
Xinhui.Pan, airlied, Daniel Vetter, Al Viro, Christian Brauner,
Liam Howlett, Andrew Morton, Hugh Dickins,
Matthew Wilcox (Oracle),
Sasha Levin, srinivasan.shanmugam, chiahsuan.chung, mingo,
mgorman, yukuai3, chengming.zhou, zhangpeng.00, amd-gfx,
dri-devel, Linux Kernel Mailing List, Linux FS Devel, maple-tree,
linux-mm, yi.zhang, yangerkun
> On Nov 6, 2024, at 1:16 AM, Greg KH <gregkh@linuxfoundation.org> wrote:
>
> On Thu, Oct 24, 2024 at 09:19:41PM +0800, Yu Kuai wrote:
>> From: Yu Kuai <yukuai3@huawei.com>
>>
>> Fix patch is patch 27, relied patches are from:
I assume patch 27 is:
libfs: fix infinite directory reads for offset dir
https://lore.kernel.org/stable/20241024132225.2271667-12-yukuai1@huaweicloud.com/
I don't think the Maple tree patches are a hard
requirement for this fix. And note that libfs did
not use Maple tree originally because I was told
at that time that Maple tree was not yet mature.
So, a better approach might be to fit the fix
onto linux-6.6.y while sticking with xarray.
This is the first I've heard of this CVE. It
would help if the patch authors got some
notification when these are filed.
>> - patches from set [1] to add helpers to maple_tree, the last patch to
>> improve fork() performance is not backported;
>
> So things slowed down?
>
>> - patches from set [2] to change maple_tree, and follow up fixes;
>> - patches from set [3] to convert offset_ctx from xarray to maple_tree;
>>
>> Please notice that I'm not an expert in this area, and I'm afraid to
>> make manual changes. That's why patch 16 revert the commit that is
>> different from mainline and will cause conflict backporting new patches.
>> patch 28 pick the original mainline patch again.
>>
>> (And this is what we did to fix the CVE in downstream kernels).
>>
>> [1] https://lore.kernel.org/all/20231027033845.90608-1-zhangpeng.00@bytedance.com/
>> [2] https://lore.kernel.org/all/20231101171629.3612299-2-Liam.Howlett@oracle.com/T/
>> [3] https://lore.kernel.org/all/170820083431.6328.16233178852085891453.stgit@91.116.238.104.host.secureserver.net/
>
> This series looks rough. I want to have the maintainers of these
> files/subsystems to ack this before being able to take them.
>
> thanks,
>
> greg k-h
--
Chuck Lever
* Re: [PATCH 6.6 00/28] fix CVE-2024-46701
2024-11-06 15:19 ` Chuck Lever III
@ 2024-11-06 16:21 ` James Bottomley
2024-11-07 0:57 ` Yu Kuai
1 sibling, 0 replies; 47+ messages in thread
From: James Bottomley @ 2024-11-06 16:21 UTC (permalink / raw)
To: Chuck Lever III, Yu Kuai
Cc: Greg KH, linux-stable, harry.wentland, sunpeng.li,
Rodrigo.Siqueira, alexander.deucher, christian.koenig,
Xinhui.Pan, airlied, Daniel Vetter, Al Viro, Christian Brauner,
Liam Howlett, Andrew Morton, Hugh Dickins,
Matthew Wilcox (Oracle),
Sasha Levin, srinivasan.shanmugam, chiahsuan.chung, mingo,
mgorman, yukuai3, chengming.zhou, zhangpeng.00, amd-gfx,
dri-devel, Linux Kernel Mailing List, Linux FS Devel, maple-tree,
linux-mm, yi.zhang, yangerkun
On Wed, 2024-11-06 at 15:19 +0000, Chuck Lever III wrote:
> This is the first I've heard of this CVE. It
> would help if the patch authors got some
> notification when these are filed.
Greg did it; it came from the kernel CNA:
https://www.cve.org/CVERecord?id=CVE-2024-46701
The way it seems to work is that this is simply a wrapper for the
upstream commit:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=64a7ce76fb901bf9f9c36cf5d681328fc0fd4b5a
That is what appears as the last stable reference. I assume someone
investigated and added the vulnerable kernel details. I think the
theory is that since you reviewed the original upstream patch, stable
just takes care of the backports and CVE management of the existing fix
through the normal stable process.
James
* Re: [PATCH 6.6 00/28] fix CVE-2024-46701
2024-11-06 15:19 ` Chuck Lever III
2024-11-06 16:21 ` James Bottomley
@ 2024-11-07 0:57 ` Yu Kuai
2024-11-07 14:41 ` Chuck Lever
2024-11-07 14:44 ` Liam R. Howlett
1 sibling, 2 replies; 47+ messages in thread
From: Yu Kuai @ 2024-11-07 0:57 UTC (permalink / raw)
To: Chuck Lever III, Yu Kuai
Cc: Greg KH, linux-stable, harry.wentland, sunpeng.li,
Rodrigo.Siqueira, alexander.deucher, christian.koenig,
Xinhui.Pan, airlied, Daniel Vetter, Al Viro, Christian Brauner,
Liam Howlett, Andrew Morton, Hugh Dickins,
Matthew Wilcox (Oracle),
Sasha Levin, srinivasan.shanmugam, chiahsuan.chung, mingo,
mgorman, chengming.zhou, zhangpeng.00, amd-gfx, dri-devel,
Linux Kernel Mailing List, Linux FS Devel, maple-tree, linux-mm,
yi.zhang, yangerkun, yukuai (C)
Hi,
On 2024/11/06 23:19, Chuck Lever III wrote:
>
>
>> On Nov 6, 2024, at 1:16 AM, Greg KH <gregkh@linuxfoundation.org> wrote:
>>
>> On Thu, Oct 24, 2024 at 09:19:41PM +0800, Yu Kuai wrote:
>>> From: Yu Kuai <yukuai3@huawei.com>
>>>
>>> Fix patch is patch 27, relied patches are from:
>
> I assume patch 27 is:
>
> libfs: fix infinite directory reads for offset dir
>
> https://lore.kernel.org/stable/20241024132225.2271667-12-yukuai1@huaweicloud.com/
>
> I don't think the Maple tree patches are a hard
> requirement for this fix. And note that libfs did
> not use Maple tree originally because I was told
> at that time that Maple tree was not yet mature.
>
> So, a better approach might be to fit the fix
> onto linux-6.6.y while sticking with xarray.
The painful part is that using xarray is not acceptable: the offset
is just 32 bits and if it overflows, readdir will read nothing. That's
why maple_tree has to be used.
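To make the width difference concrete, compare the two allocators'
prototypes (the mtree_alloc_cyclic() one is from my reading of the
mainline patch that added it, so treat it as illustrative):

	/* xarray: the id and the cyclic cursor are u32, so a
	 * directory only ever has a 32-bit offset space */
	int xa_alloc_cyclic(struct xarray *xa, u32 *id, void *entry,
			    struct xa_limit limit, u32 *next, gfp_t gfp);

	/* maple tree: start and cursor are unsigned long, which is
	 * 64 bits on 64-bit kernels */
	int mtree_alloc_cyclic(struct maple_tree *mt, unsigned long *startp,
			       void *entry, unsigned long range_lo,
			       unsigned long range_hi, unsigned long *next,
			       gfp_t gfp);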
Thanks,
Kuai
>
> This is the first I've heard of this CVE. It
> would help if the patch authors got some
> notification when these are filed.
>
>
>>> - patches from set [1] to add helpers to maple_tree, the last patch to
>>> improve fork() performance is not backported;
>>
>> So things slowed down?
>>
>>> - patches from set [2] to change maple_tree, and follow up fixes;
>>> - patches from set [3] to convert offset_ctx from xarray to maple_tree;
>>>
>>> Please notice that I'm not an expert in this area, and I'm afraid to
>>> make manual changes. That's why patch 16 revert the commit that is
>>> different from mainline and will cause conflict backporting new patches.
>>> patch 28 pick the original mainline patch again.
>>>
>>> (And this is what we did to fix the CVE in downstream kernels).
>>>
>>> [1] https://lore.kernel.org/all/20231027033845.90608-1-zhangpeng.00@bytedance.com/
>>> [2] https://lore.kernel.org/all/20231101171629.3612299-2-Liam.Howlett@oracle.com/T/
>>> [3] https://lore.kernel.org/all/170820083431.6328.16233178852085891453.stgit@91.116.238.104.host.secureserver.net/
>>
>> This series looks rough. I want to have the maintainers of these
>> files/subsystems to ack this before being able to take them.
>>
>> thanks,
>>
>> greg k-h
>
> --
> Chuck Lever
>
>
* Re: [PATCH 6.6 00/28] fix CVE-2024-46701
2024-11-07 0:57 ` Yu Kuai
@ 2024-11-07 14:41 ` Chuck Lever
2024-11-08 1:19 ` Yu Kuai
2024-11-07 14:44 ` Liam R. Howlett
1 sibling, 1 reply; 47+ messages in thread
From: Chuck Lever @ 2024-11-07 14:41 UTC (permalink / raw)
To: Yu Kuai
Cc: Greg KH, linux-stable, harry.wentland, sunpeng.li,
Rodrigo.Siqueira, alexander.deucher, christian.koenig,
Xinhui.Pan, airlied, Daniel Vetter, Al Viro, Christian Brauner,
Liam Howlett, Andrew Morton, Hugh Dickins,
Matthew Wilcox (Oracle),
Sasha Levin, srinivasan.shanmugam, chiahsuan.chung, mingo,
mgorman, chengming.zhou, zhangpeng.00, amd-gfx, dri-devel,
Linux Kernel Mailing List, Linux FS Devel, maple-tree, linux-mm,
yi.zhang, yangerkun, yukuai (C)
On Thu, Nov 07, 2024 at 08:57:23AM +0800, Yu Kuai wrote:
> Hi,
>
> On 2024/11/06 23:19, Chuck Lever III wrote:
> >
> >
> > > On Nov 6, 2024, at 1:16 AM, Greg KH <gregkh@linuxfoundation.org> wrote:
> > >
> > > On Thu, Oct 24, 2024 at 09:19:41PM +0800, Yu Kuai wrote:
> > > > From: Yu Kuai <yukuai3@huawei.com>
> > > >
> > > > Fix patch is patch 27, relied patches are from:
> >
> > I assume patch 27 is:
> >
> > libfs: fix infinite directory reads for offset dir
> >
> > https://lore.kernel.org/stable/20241024132225.2271667-12-yukuai1@huaweicloud.com/
> >
> > I don't think the Maple tree patches are a hard
> > requirement for this fix. And note that libfs did
> > not use Maple tree originally because I was told
> > at that time that Maple tree was not yet mature.
> >
> > So, a better approach might be to fit the fix
> > onto linux-6.6.y while sticking with xarray.
>
> The painful part is that using xarray is not acceptable: the offset
> is just 32 bits and if it overflows, readdir will read nothing. That's
> why maple_tree has to be used.
A 32-bit range should be entirely adequate for this usage.
- The offset allocator wraps when it reaches the maximum, it
doesn't overflow unless there are actually billions of extant
entries in the directory, which IMO is not likely.
- The offset values are dense, so the directory can use all 2- or
4- billion in the 32-bit integer range before wrapping.
- No-one complained about this limitation when offset_readdir() was
first merged. The xarray was replaced for performance reasons,
not because of the 32-bit range limit.
It is always possible that I have misunderstood your concern!
> Thanks,
> Kuai
>
> >
> > This is the first I've heard of this CVE. It
> > would help if the patch authors got some
> > notification when these are filed.
> >
> >
> > > > - patches from set [1] to add helpers to maple_tree, the last patch to
> > > > improve fork() performance is not backported;
> > >
> > > So things slowed down?
> > >
> > > > - patches from set [2] to change maple_tree, and follow up fixes;
> > > > - patches from set [3] to convert offset_ctx from xarray to maple_tree;
> > > >
> > > > Please notice that I'm not an expert in this area, and I'm afraid to
> > > > make manual changes. That's why patch 16 revert the commit that is
> > > > different from mainline and will cause conflict backporting new patches.
> > > > patch 28 pick the original mainline patch again.
> > > >
> > > > (And this is what we did to fix the CVE in downstream kernels).
> > > >
> > > > [1] https://lore.kernel.org/all/20231027033845.90608-1-zhangpeng.00@bytedance.com/
> > > > [2] https://lore.kernel.org/all/20231101171629.3612299-2-Liam.Howlett@oracle.com/T/
> > > > [3] https://lore.kernel.org/all/170820083431.6328.16233178852085891453.stgit@91.116.238.104.host.secureserver.net/
> > >
> > > This series looks rough. I want to have the maintainers of these
> > > files/subsystems to ack this before being able to take them.
> > >
> > > thanks,
> > >
> > > greg k-h
> >
> > --
> > Chuck Lever
> >
> >
>
--
Chuck Lever
* Re: [PATCH 6.6 00/28] fix CVE-2024-46701
2024-11-07 14:41 ` Chuck Lever
@ 2024-11-08 1:19 ` Yu Kuai
2024-11-08 13:23 ` Chuck Lever III
0 siblings, 1 reply; 47+ messages in thread
From: Yu Kuai @ 2024-11-08 1:19 UTC (permalink / raw)
To: Chuck Lever, Yu Kuai
Cc: Greg KH, linux-stable, harry.wentland, sunpeng.li,
Rodrigo.Siqueira, alexander.deucher, christian.koenig,
Xinhui.Pan, airlied, Daniel Vetter, Al Viro, Christian Brauner,
Liam Howlett, Andrew Morton, Hugh Dickins,
Matthew Wilcox (Oracle),
Sasha Levin, srinivasan.shanmugam, chiahsuan.chung, mingo,
mgorman, chengming.zhou, zhangpeng.00, amd-gfx, dri-devel,
Linux Kernel Mailing List, Linux FS Devel, maple-tree, linux-mm,
yi.zhang, yangerkun, yukuai (C)
Hi,
On 2024/11/07 22:41, Chuck Lever wrote:
> On Thu, Nov 07, 2024 at 08:57:23AM +0800, Yu Kuai wrote:
>> Hi,
>>
>> On 2024/11/06 23:19, Chuck Lever III wrote:
>>>
>>>
>>>> On Nov 6, 2024, at 1:16 AM, Greg KH <gregkh@linuxfoundation.org> wrote:
>>>>
>>>> On Thu, Oct 24, 2024 at 09:19:41PM +0800, Yu Kuai wrote:
>>>>> From: Yu Kuai <yukuai3@huawei.com>
>>>>>
>>>>> Fix patch is patch 27, relied patches are from:
>>>
>>> I assume patch 27 is:
>>>
>>> libfs: fix infinite directory reads for offset dir
>>>
>>> https://lore.kernel.org/stable/20241024132225.2271667-12-yukuai1@huaweicloud.com/
>>>
>>> I don't think the Maple tree patches are a hard
>>> requirement for this fix. And note that libfs did
>>> not use Maple tree originally because I was told
>>> at that time that Maple tree was not yet mature.
>>>
>>> So, a better approach might be to fit the fix
>>> onto linux-6.6.y while sticking with xarray.
>>
>> The painful part is that using xarray is not acceptable: the offset
>> is just 32 bits and if it overflows, readdir will read nothing. That's
>> why maple_tree has to be used.
>
> A 32-bit range should be entirely adequate for this usage.
>
> - The offset allocator wraps when it reaches the maximum, it
> doesn't overflow unless there are actually billions of extant
> entries in the directory, which IMO is not likely.
Yes, it's not likely, but it's possible, and not hard to trigger in a
test. And please notice that the offset will increase for each new file,
and a file can be removed while the offset stays the same.
>
> - The offset values are dense, so the directory can use all 2- or
> 4- billion in the 32-bit integer range before wrapping.
Simple math: if a user creates and removes 1 file each second, it will
take about 130 years to overflow. And if a user creates and removes 1000
files each second, it will take about 50 days to overflow.
The maple tree uses a 64-bit value for the offset, which is impossible
to overflow for the rest of our lives.
>
> - No-one complained about this limitation when offset_readdir() was
> first merged. The xarray was replaced for performance reasons,
> not because of the 32-bit range limit.
>
> It is always possible that I have misunderstood your concern!
The problem is that if the next_offset overflows to 0, then after patch
27, offset_dir_open() will record the 0, and later offset_readdir will
return directly, while there can be many files.
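Roughly, the interaction after patch 27 looks like this (a simplified
sketch of the fix, not the literal backport code):

	/* offset_dir_open() remembers where the directory ends at
	 * open time, so entries created during a readdir can no
	 * longer make it loop forever: */
	static int offset_dir_open(struct inode *inode, struct file *file)
	{
		struct offset_ctx *ctx = inode->i_op->get_offset_ctx(inode);

		/* if next_offset has wrapped to 0, we remember 0 here */
		file->private_data = (void *)(long)ctx->next_offset;
		return 0;
	}

	/* ...and offset_readdir() stops at that remembered end, so
	 * with last_index == 0 every entry is "past the end": */
	if (dentry2offset(dentry) >= last_index)
		return;		/* the whole directory appears empty */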
Thanks,
Kuai
* Re: [PATCH 6.6 00/28] fix CVE-2024-46701
2024-11-08 1:19 ` Yu Kuai
@ 2024-11-08 13:23 ` Chuck Lever III
2024-11-08 17:03 ` Liam R. Howlett
2024-11-09 1:30 ` Yu Kuai
0 siblings, 2 replies; 47+ messages in thread
From: Chuck Lever III @ 2024-11-08 13:23 UTC (permalink / raw)
To: Yu Kuai
Cc: Greg KH, linux-stable, harry.wentland, sunpeng.li,
Rodrigo.Siqueira, alexander.deucher, christian.koenig,
Xinhui.Pan, airlied, Daniel Vetter, Al Viro, Christian Brauner,
Liam Howlett, Andrew Morton, Hugh Dickins,
Matthew Wilcox (Oracle),
Sasha Levin, srinivasan.shanmugam, chiahsuan.chung, mingo,
mgorman, chengming.zhou, zhangpeng.00, amd-gfx, dri-devel,
Linux Kernel Mailing List, Linux FS Devel, maple-tree, linux-mm,
yi.zhang, yangerkun, yukuai (C)
> On Nov 7, 2024, at 8:19 PM, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> Hi,
>
> On 2024/11/07 22:41, Chuck Lever wrote:
>> On Thu, Nov 07, 2024 at 08:57:23AM +0800, Yu Kuai wrote:
>>> Hi,
>>>
>>> On 2024/11/06 23:19, Chuck Lever III wrote:
>>>>
>>>>
>>>>> On Nov 6, 2024, at 1:16 AM, Greg KH <gregkh@linuxfoundation.org> wrote:
>>>>>
>>>>> On Thu, Oct 24, 2024 at 09:19:41PM +0800, Yu Kuai wrote:
>>>>>> From: Yu Kuai <yukuai3@huawei.com>
>>>>>>
>>>>>> Fix patch is patch 27, relied patches are from:
>>>>
>>>> I assume patch 27 is:
>>>>
>>>> libfs: fix infinite directory reads for offset dir
>>>>
>>>> https://lore.kernel.org/stable/20241024132225.2271667-12-yukuai1@huaweicloud.com/
>>>>
>>>> I don't think the Maple tree patches are a hard
>>>> requirement for this fix. And note that libfs did
>>>> not use Maple tree originally because I was told
>>>> at that time that Maple tree was not yet mature.
>>>>
>>>> So, a better approach might be to fit the fix
>>>> onto linux-6.6.y while sticking with xarray.
>>>
>>> The painful part is that using xarray is not acceptable: the offset
>>> is just 32 bits and if it overflows, readdir will read nothing. That's
>>> why maple_tree has to be used.
>> A 32-bit range should be entirely adequate for this usage.
>> - The offset allocator wraps when it reaches the maximum, it
>> doesn't overflow unless there are actually billions of extant
>> entries in the directory, which IMO is not likely.
>
> Yes, it's not likely, but it's possible, and not hard to trigger in a
> test.
I question whether such a test reflects any real-world
workload.
Besides, there are a number of other limits that will impact
the ability to create that many entries in one directory.
The number of inodes in one tmpfs instance is limited, for
instance.
> And please notice that the offset will increase for each new file,
> and a file can be removed while the offset stays the same.
>> - The offset values are dense, so the directory can use all 2- or
>> 4- billion in the 32-bit integer range before wrapping.
>
> Simple math: if a user creates and removes 1 file each second, it will
> take about 130 years to overflow. And if a user creates and removes 1000
> files each second, it will take about 50 days to overflow.
The question is what happens when there are no more offset
values available. xa_alloc_cyclic should fail, and file
creation is supposed to fail at that point. If it doesn't,
that's a bug that is outside of the use of xarray or Maple.
> The maple tree uses a 64-bit value for the offset, which is impossible
> to overflow for the rest of our lives.
>> - No-one complained about this limitation when offset_readdir() was
>> first merged. The xarray was replaced for performance reasons,
>> not because of the 32-bit range limit.
>> It is always possible that I have misunderstood your concern!
>
> The problem is that if the next_offset overflows to 0, then after patch
> 27, offset_dir_open() will record the 0, and later offset_readdir will
> return directly, while there can be many files.
That's a separate bug that has nothing to do with the maximum
number of entries one directory can have. Again, you don't
need Maple tree to address that.
My understanding from Liam is that backporting Maple into
v6.6 is just not practical to do. We must explore alternate
ways to address these concerns.
--
Chuck Lever
* Re: [PATCH 6.6 00/28] fix CVE-2024-46701
2024-11-08 13:23 ` Chuck Lever III
@ 2024-11-08 17:03 ` Liam R. Howlett
2024-11-09 1:38 ` Yu Kuai
2024-11-09 1:30 ` Yu Kuai
1 sibling, 1 reply; 47+ messages in thread
From: Liam R. Howlett @ 2024-11-08 17:03 UTC (permalink / raw)
To: Chuck Lever III
Cc: Yu Kuai, Greg KH, linux-stable, harry.wentland, sunpeng.li,
Rodrigo.Siqueira, alexander.deucher, christian.koenig,
Xinhui.Pan, airlied, Daniel Vetter, Al Viro, Christian Brauner,
Andrew Morton, Hugh Dickins, Matthew Wilcox (Oracle),
Sasha Levin, srinivasan.shanmugam, chiahsuan.chung, mingo,
mgorman, chengming.zhou, zhangpeng.00, amd-gfx, dri-devel,
Linux Kernel Mailing List, Linux FS Devel, maple-tree, linux-mm,
yi.zhang, yangerkun, yukuai (C)
* Chuck Lever III <chuck.lever@oracle.com> [241108 08:23]:
>
>
> > On Nov 7, 2024, at 8:19 PM, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> >
> > Hi,
> >
> > On 2024/11/07 22:41, Chuck Lever wrote:
> >> On Thu, Nov 07, 2024 at 08:57:23AM +0800, Yu Kuai wrote:
> >>> Hi,
> >>>
> >>> On 2024/11/06 23:19, Chuck Lever III wrote:
> >>>>
> >>>>
> >>>>> On Nov 6, 2024, at 1:16 AM, Greg KH <gregkh@linuxfoundation.org> wrote:
> >>>>>
> >>>>> On Thu, Oct 24, 2024 at 09:19:41PM +0800, Yu Kuai wrote:
> >>>>>> From: Yu Kuai <yukuai3@huawei.com>
> >>>>>>
> >>>>>> Fix patch is patch 27, relied patches are from:
> >>>>
> >>>> I assume patch 27 is:
> >>>>
> >>>> libfs: fix infinite directory reads for offset dir
> >>>>
> >>>> https://lore.kernel.org/stable/20241024132225.2271667-12-yukuai1@huaweicloud.com/
> >>>>
> >>>> I don't think the Maple tree patches are a hard
> >>>> requirement for this fix. And note that libfs did
> >>>> not use Maple tree originally because I was told
> >>>> at that time that Maple tree was not yet mature.
> >>>>
> >>>> So, a better approach might be to fit the fix
> >>>> onto linux-6.6.y while sticking with xarray.
> >>>
> >>> The painful part is that using xarray is not acceptable: the offset
> >>> is just 32 bits and if it overflows, readdir will read nothing. That's
> >>> why maple_tree has to be used.
> >> A 32-bit range should be entirely adequate for this usage.
> >> - The offset allocator wraps when it reaches the maximum, it
> >> doesn't overflow unless there are actually billions of extant
> >> entries in the directory, which IMO is not likely.
> >
> > Yes, it's not likely, but it's possible, and not hard to trigger in a
> > test.
>
> I question whether such a test reflects any real-world
> workload.
>
> Besides, there are a number of other limits that will impact
> the ability to create that many entries in one directory.
> The number of inodes in one tmpfs instance is limited, for
> instance.
>
>
> > And please notice that the offset will increase for each new file,
> > and a file can be removed while the offset stays the same.
> >> - The offset values are dense, so the directory can use all 2- or
> >> 4- billion in the 32-bit integer range before wrapping.
> >
> > Simple math: if a user creates and removes 1 file each second, it will
> > take about 130 years to overflow. And if a user creates and removes 1000
> > files each second, it will take about 50 days to overflow.
>
> The question is what happens when there are no more offset
> values available. xa_alloc_cyclic should fail, and file
> creation is supposed to fail at that point. If it doesn't,
> that's a bug that is outside of the use of xarray or Maple.
>
>
> > The maple tree uses a 64-bit value for the offset, which is impossible
> > to overflow for the rest of our lives.
> >> - No-one complained about this limitation when offset_readdir() was
> >> first merged. The xarray was replaced for performance reasons,
> >> not because of the 32-bit range limit.
> >> It is always possible that I have misunderstood your concern!
> >
> > The problem is that if the next_offset overflows to 0, then after patch
> > 27, offset_dir_open() will record the 0, and later offset_readdir will
> > return directly, while there can be many files.
>
> That's a separate bug that has nothing to do with the maximum
> number of entries one directory can have. Again, you don't
> need Maple tree to address that.
>
> My understanding from Liam is that backporting Maple into
> v6.6 is just not practical to do. We must explore alternate
> ways to address these concerns.
>
The tree itself is in v6.6, but the evolution of the tree to fit the
needs of this and other subsystems isn't something that would be well
tested. This is really backporting features and that's not the point of
stable.
I think this is what Lorenzo was saying about changing your approach:
we can't backport 28 patches to fix this when it isn't needed.
Thanks,
Liam
* Re: [PATCH 6.6 00/28] fix CVE-2024-46701
2024-11-08 17:03 ` Liam R. Howlett
@ 2024-11-09 1:38 ` Yu Kuai
0 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-11-09 1:38 UTC (permalink / raw)
To: Liam R. Howlett, Chuck Lever III, Yu Kuai, Greg KH, linux-stable,
harry.wentland, sunpeng.li, Rodrigo.Siqueira, alexander.deucher,
christian.koenig, Xinhui.Pan, airlied, Daniel Vetter, Al Viro,
Christian Brauner, Andrew Morton, Hugh Dickins,
Matthew Wilcox (Oracle),
Sasha Levin, srinivasan.shanmugam, chiahsuan.chung, mingo,
mgorman, chengming.zhou, zhangpeng.00, amd-gfx, dri-devel,
Linux Kernel Mailing List, Linux FS Devel, maple-tree, linux-mm,
yi.zhang, yangerkun, yukuai (C)
Hi,
在 2024/11/09 1:03, Liam R. Howlett 写道:
> * Chuck Lever III <chuck.lever@oracle.com> [241108 08:23]:
>>
>>
>>> On Nov 7, 2024, at 8:19 PM, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>
>>> Hi,
>>>
>>> On 2024/11/07 22:41, Chuck Lever wrote:
>>>> On Thu, Nov 07, 2024 at 08:57:23AM +0800, Yu Kuai wrote:
>>>>> Hi,
>>>>>
>>>>> On 2024/11/06 23:19, Chuck Lever III wrote:
>>>>>>
>>>>>>
>>>>>>> On Nov 6, 2024, at 1:16 AM, Greg KH <gregkh@linuxfoundation.org> wrote:
>>>>>>>
>>>>>>> On Thu, Oct 24, 2024 at 09:19:41PM +0800, Yu Kuai wrote:
>>>>>>>> From: Yu Kuai <yukuai3@huawei.com>
>>>>>>>>
>>>>>>>> Fix patch is patch 27, relied patches are from:
>>>>>>
>>>>>> I assume patch 27 is:
>>>>>>
>>>>>> libfs: fix infinite directory reads for offset dir
>>>>>>
>>>>>> https://lore.kernel.org/stable/20241024132225.2271667-12-yukuai1@huaweicloud.com/
>>>>>>
>>>>>> I don't think the Maple tree patches are a hard
>>>>>> requirement for this fix. And note that libfs did
>>>>>> not use Maple tree originally because I was told
>>>>>> at that time that Maple tree was not yet mature.
>>>>>>
>>>>>> So, a better approach might be to fit the fix
>>>>>> onto linux-6.6.y while sticking with xarray.
>>>>>
>>>>> The painful part is that using xarray is not acceptable: the offset
>>>>> is just 32 bits and if it overflows, readdir will read nothing. That's
>>>>> why maple_tree has to be used.
>>>> A 32-bit range should be entirely adequate for this usage.
>>>> - The offset allocator wraps when it reaches the maximum, it
>>>> doesn't overflow unless there are actually billions of extant
>>>> entries in the directory, which IMO is not likely.
>>>
>>> Yes, it's not likely, but it's possible, and not hard to trigger in a
>>> test.
>>
>> I question whether such a test reflects any real-world
>> workload.
>>
>> Besides, there are a number of other limits that will impact
>> the ability to create that many entries in one directory.
>> The number of inodes in one tmpfs instance is limited, for
>> instance.
>>
>>
>>> And please notice that the offset will increase for each new file,
>>> and a file can be removed while the offset stays the same.
>>>> - The offset values are dense, so the directory can use all 2- or
>>>> 4- billion in the 32-bit integer range before wrapping.
>>>
>>> Simple math: if a user creates and removes 1 file each second, it will
>>> take about 130 years to overflow. And if a user creates and removes 1000
>>> files each second, it will take about 50 days to overflow.
>>
>> The question is what happens when there are no more offset
>> values available. xa_alloc_cyclic should fail, and file
>> creation is supposed to fail at that point. If it doesn't,
>> that's a bug that is outside of the use of xarray or Maple.
>>
>>
>>> The maple tree uses a 64-bit value for the offset, which is impossible
>>> to overflow for the rest of our lives.
>>>> - No-one complained about this limitation when offset_readdir() was
>>>> first merged. The xarray was replaced for performance reasons,
>>>> not because of the 32-bit range limit.
>>>> It is always possible that I have misunderstood your concern!
>>>
>>> The problem is that if the next_offset overflows to 0, then after patch
>>> 27, offset_dir_open() will record the 0, and later offset_readdir will
>>> return directly, while there can be many files.
>>
>> That's a separate bug that has nothing to do with the maximum
>> number of entries one directory can have. Again, you don't
>> need Maple tree to address that.
>>
>> My understanding from Liam is that backporting Maple into
>> v6.6 is just not practical to do. We must explore alternate
>> ways to address these concerns.
>>
>
> The tree itself is in v6.6, but the evolution of the tree to fit the
> needs of this and other subsystems isn't something that would be well
> tested. This is really backporting features and that's not the point of
> stable.
Of course.
>
> I think this is what Lorenzo was saying about changing your approach, we
> can't backport 28 patches to fix this when it isn't needed.
I don't have another approach now, so I'll not follow up on fixing this
CVE. It'll be great if someone has a better approach. :)
Thanks,
Kuai
>
> Thanks,
> Liam
>
> .
>
* Re: [PATCH 6.6 00/28] fix CVE-2024-46701
2024-11-08 13:23 ` Chuck Lever III
2024-11-08 17:03 ` Liam R. Howlett
@ 2024-11-09 1:30 ` Yu Kuai
2024-11-09 16:58 ` Chuck Lever III
1 sibling, 1 reply; 47+ messages in thread
From: Yu Kuai @ 2024-11-09 1:30 UTC (permalink / raw)
To: Chuck Lever III, Yu Kuai
Cc: Greg KH, linux-stable, harry.wentland, sunpeng.li,
Rodrigo.Siqueira, alexander.deucher, christian.koenig,
Xinhui.Pan, airlied, Daniel Vetter, Al Viro, Christian Brauner,
Liam Howlett, Andrew Morton, Hugh Dickins,
Matthew Wilcox (Oracle),
Sasha Levin, srinivasan.shanmugam, chiahsuan.chung, mingo,
mgorman, chengming.zhou, zhangpeng.00, amd-gfx, dri-devel,
Linux Kernel Mailing List, Linux FS Devel, maple-tree, linux-mm,
yi.zhang, yangerkun, yukuai (C)
Hi,
On 2024/11/08 21:23, Chuck Lever III wrote:
>
>
>> On Nov 7, 2024, at 8:19 PM, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> Hi,
>>
>> On 2024/11/07 22:41, Chuck Lever wrote:
>>> On Thu, Nov 07, 2024 at 08:57:23AM +0800, Yu Kuai wrote:
>>>> Hi,
>>>>
>>>> On 2024/11/06 23:19, Chuck Lever III wrote:
>>>>>
>>>>>
>>>>>> On Nov 6, 2024, at 1:16 AM, Greg KH <gregkh@linuxfoundation.org> wrote:
>>>>>>
>>>>>> On Thu, Oct 24, 2024 at 09:19:41PM +0800, Yu Kuai wrote:
>>>>>>> From: Yu Kuai <yukuai3@huawei.com>
>>>>>>>
>>>>>>> Fix patch is patch 27, relied patches are from:
>>>>>
>>>>> I assume patch 27 is:
>>>>>
>>>>> libfs: fix infinite directory reads for offset dir
>>>>>
>>>>> https://lore.kernel.org/stable/20241024132225.2271667-12-yukuai1@huaweicloud.com/
>>>>>
>>>>> I don't think the Maple tree patches are a hard
>>>>> requirement for this fix. And note that libfs did
>>>>> not use Maple tree originally because I was told
>>>>> at that time that Maple tree was not yet mature.
>>>>>
>>>>> So, a better approach might be to fit the fix
>>>>> onto linux-6.6.y while sticking with xarray.
>>>>
>>>> The painful part is that using xarray is not acceptable: the offset
>>>> is just 32 bits and if it overflows, readdir will read nothing. That's
>>>> why maple_tree has to be used.
>>> A 32-bit range should be entirely adequate for this usage.
>>> - The offset allocator wraps when it reaches the maximum, it
>>> doesn't overflow unless there are actually billions of extant
>>> entries in the directory, which IMO is not likely.
>>
>> Yes, it's not likely, but it's possible, and not hard to trigger in a
>> test.
>
> I question whether such a test reflects any real-world
> workload.
>
> Besides, there are a number of other limits that will impact
> the ability to create that many entries in one directory.
> The number of inodes in one tmpfs instance is limited, for
> instance.
>
>
>> And please notice that the offset will increase for each new file,
>> and a file can be removed while the offset stays the same.
Did you see the above explanation? Files can be removed, so you don't
have to store that many files to trigger the offset to overflow.
>>> - The offset values are dense, so the directory can use all 2- or
>>> 4- billion in the 32-bit integer range before wrapping.
>>
>> Simple math: if a user creates and removes 1 file each second, it will
>> take about 130 years to overflow. And if a user creates and removes 1000
>> files each second, it will take about 50 days to overflow.
>
> The question is what happens when there are no more offset
> values available. xa_alloc_cyclic should fail, and file
> creation is supposed to fail at that point. If it doesn't,
> that's a bug that is outside of the use of xarray or Maple.
Can you show me the code where xa_alloc_cyclic() should fail? At least
according to the comments, it will return 1 if the allocation succeeded
after wrapping.
* Context: Any context. Takes and releases the xa_lock. May sleep if
* the @gfp flags permit.
* Return: 0 if the allocation succeeded without wrapping. 1 if the
* allocation succeeded after wrapping, -ENOMEM if memory could not be
* allocated or -EBUSY if there are no free entries in @limit.
*/
static inline int xa_alloc_cyclic(struct xarray *xa, u32 *id, void *entry,
struct xa_limit limit, u32 *next, gfp_t gfp)
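And from my reading of __xa_alloc_cyclic() in lib/xarray.c (worth
double-checking me), the wrap shows up in the cursor itself:

	/* sketch of the tail of __xa_alloc_cyclic(): */
	if (ret >= 0) {
		*next = *id + 1;	/* u32: U32_MAX + 1 == 0 */
		if (*next == 0)
			xa->xa_flags |= XA_FLAGS_ALLOC_WRAPPED;
	}

So once an id of U32_MAX has been handed out, a next_offset of 0 really
is stored, and that is exactly the value offset_dir_open() would record.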
>
>
>> The maple tree uses a 64-bit value for the offset, which is impossible
>> to overflow for the rest of our lives.
>>> - No-one complained about this limitation when offset_readdir() was
>>> first merged. The xarray was replaced for performance reasons,
>>> not because of the 32-bit range limit.
>>> It is always possible that I have misunderstood your concern!
>>
>> The problem is that if the next_offset overflows to 0, then after patch
>> 27, offset_dir_open() will record the 0, and later offset_readdir will
>> return directly, while there can be many files.
>
> That's a separate bug that has nothing to do with the maximum
> number of entries one directory can have. Again, you don't
> need Maple tree to address that.
>
> My understanding from Liam is that backporting Maple into
> v6.6 is just not practical to do. We must explore alternate
> ways to address these concerns.
Like I said, I'll just give up on this CVE for v6.6.
>
>
> --
> Chuck Lever
>
>
* Re: [PATCH 6.6 00/28] fix CVE-2024-46701
2024-11-09 1:30 ` Yu Kuai
@ 2024-11-09 16:58 ` Chuck Lever III
2024-11-11 0:56 ` Yu Kuai
0 siblings, 1 reply; 47+ messages in thread
From: Chuck Lever III @ 2024-11-09 16:58 UTC (permalink / raw)
To: Yu Kuai, Al Viro, Christian Brauner
Cc: Greg KH, linux-stable, harry.wentland, sunpeng.li,
Rodrigo.Siqueira, alexander.deucher, christian.koenig,
Xinhui.Pan, airlied, Daniel Vetter, Liam Howlett, Andrew Morton,
Hugh Dickins, Matthew Wilcox (Oracle),
Sasha Levin, srinivasan.shanmugam, chiahsuan.chung, mingo,
mgorman, chengming.zhou, zhangpeng.00, amd-gfx, dri-devel,
Linux Kernel Mailing List, Linux FS Devel, maple-tree, linux-mm,
yi.zhang, yangerkun, yukuai (C)
> On Nov 8, 2024, at 8:30 PM, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> Hi,
>
> On 2024/11/08 21:23, Chuck Lever III wrote:
>>> On Nov 7, 2024, at 8:19 PM, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>
>>> Hi,
>>>
>>> On 2024/11/07 22:41, Chuck Lever wrote:
>>>> On Thu, Nov 07, 2024 at 08:57:23AM +0800, Yu Kuai wrote:
>>>>> Hi,
>>>>>
>>>>> On 2024/11/06 23:19, Chuck Lever III wrote:
>>>>>>
>>>>>>
>>>>>>> On Nov 6, 2024, at 1:16 AM, Greg KH <gregkh@linuxfoundation.org> wrote:
>>>>>>>
>>>>>>> On Thu, Oct 24, 2024 at 09:19:41PM +0800, Yu Kuai wrote:
>>>>>>>> From: Yu Kuai <yukuai3@huawei.com>
>>>>>>>>
>>>>>>>> Fix patch is patch 27, relied patches are from:
>>>>>>
>>>>>> I assume patch 27 is:
>>>>>>
>>>>>> libfs: fix infinite directory reads for offset dir
>>>>>>
>>>>>> https://lore.kernel.org/stable/20241024132225.2271667-12-yukuai1@huaweicloud.com/
>>>>>>
>>>>>> I don't think the Maple tree patches are a hard
>>>>>> requirement for this fix. And note that libfs did
>>>>>> not use Maple tree originally because I was told
>>>>>> at that time that Maple tree was not yet mature.
>>>>>>
>>>>>> So, a better approach might be to fit the fix
>>>>>> onto linux-6.6.y while sticking with xarray.
>>>>>
>>>>> The painful part is that using xarray is not acceptable: the offset
>>>>> is just 32 bits and if it overflows, readdir will read nothing. That's
>>>>> why maple_tree has to be used.
>>>> A 32-bit range should be entirely adequate for this usage.
>>>> - The offset allocator wraps when it reaches the maximum, it
>>>> doesn't overflow unless there are actually billions of extant
>>>> entries in the directory, which IMO is not likely.
>>>
>>> Yes, it's not likely, but it's possible, and not hard to trigger in a
>>> test.
>> I question whether such a test reflects any real-world
>> workload.
>> Besides, there are a number of other limits that will impact
>> the ability to create that many entries in one directory.
>> The number of inodes in one tmpfs instance is limited, for
>> instance.
>>> And please notice that the offset will increase for each new file,
>>> and a file can be removed while the offset stays the same.
>
> Did you see the above explanation? Files can be removed, so you don't
> have to store that many files to trigger the offset to overflow.
>>>> - The offset values are dense, so the directory can use all 2- or
>>>> 4- billion in the 32-bit integer range before wrapping.
>>>
>>> Simple math: if a user creates and removes 1 file each second, it will
>>> take about 130 years to overflow. And if a user creates and removes 1000
>>> files each second, it will take about 50 days to overflow.
> The problem is that if the next_offset overflows to 0, then after patch
> 27, offset_dir_open() will record the 0, and later offset_readdir will
> return directly, while there can be many files.
Let me revisit this for a moment. The xa_alloc_cyclic() call
in simple_offset_add() has a range limit argument of 2 - U32_MAX.
So I'm not clear how an overflow (or, more precisely, the
reuse of an offset value) would result in a "0" offset being
recorded. The range limit prevents the use of 0 and 1.
A "0" offset value would be a bug, I agree, but I don't see
how that can happen.
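For reference, the allocation site looks roughly like this (a sketch
from memory, not the literal v6.6 code):

	/* in simple_offset_add(): offsets 0 and 1 are reserved, so an
	 * offset returned by the allocator is never 0, even after the
	 * range wraps */
	static const struct xa_limit limit = XA_LIMIT(2, U32_MAX);
	u32 offset;
	int ret;

	ret = xa_alloc_cyclic(&octx->xa, &offset, dentry, limit,
			      &octx->next_offset, GFP_KERNEL);
	if (ret < 0)	/* -EBUSY (range full) or -ENOMEM */
		return ret;
	offset_set(dentry, offset);	/* ret == 1 (wrapped) is success */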
>> The question is what happens when there are no more offset
>> values available. xa_alloc_cyclic should fail, and file
>> creation is supposed to fail at that point. If it doesn't,
>> that's a bug that is outside of the use of xarray or Maple.
>
> Can you show me the code that xa_alloc_cyclic should fail? At least
> according to the commets, it will return 1 if the allocation succeeded
> after wrapping.
>
> * Context: Any context. Takes and releases the xa_lock. May sleep if
> * the @gfp flags permit.
> * Return: 0 if the allocation succeeded without wrapping. 1 if the
> * allocation succeeded after wrapping, -ENOMEM if memory could not be
> * allocated or -EBUSY if there are no free entries in @limit.
> */
> static inline int xa_alloc_cyclic(struct xarray *xa, u32 *id, void *entry,
> struct xa_limit limit, u32 *next, gfp_t gfp)
I recall (dimly) that directory entry offset value re-use
is acceptable and preferred, so I think ignoring a "1"
return value from xa_alloc_cyclic() is OK. If there are
no unused offset values available, it will return -EBUSY,
and file creation will fail.
Perhaps Christian or Al can chime in here on whether
directory entry offset value re-use is indeed expected
to be acceptable.
Further, my understanding is that:
https://lore.kernel.org/stable/20241024132225.2271667-12-yukuai1@huaweicloud.com/
fixes a rename issue that results in an infinite loop,
and that's the (only) issue that underlies CVE-2024-46701.
You are suggesting that there are other overflow problems
with the xarray-based simple_offset implementation. If I
can confirm them, then I can get these fixed in v6.6. But
so far, I'm not sure I completely understand these other
failure modes.
Are you suggesting that the above fix /introduces/ the
0 offset problem?
--
Chuck Lever
* Re: [PATCH 6.6 00/28] fix CVE-2024-46701
2024-11-09 16:58 ` Chuck Lever III
@ 2024-11-11 0:56 ` Yu Kuai
0 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-11-11 0:56 UTC (permalink / raw)
To: Chuck Lever III, Yu Kuai, Al Viro, Christian Brauner
Cc: Greg KH, linux-stable, harry.wentland, sunpeng.li,
Rodrigo.Siqueira, alexander.deucher, christian.koenig,
Xinhui.Pan, airlied, Daniel Vetter, Liam Howlett, Andrew Morton,
Hugh Dickins, Matthew Wilcox (Oracle),
Sasha Levin, srinivasan.shanmugam, chiahsuan.chung, mingo,
mgorman, chengming.zhou, zhangpeng.00, amd-gfx, dri-devel,
Linux Kernel Mailing List, Linux FS Devel, maple-tree, linux-mm,
yi.zhang, yangerkun, yukuai (C)
Hi,
On 2024/11/10 0:58, Chuck Lever III wrote:
>
>
>> On Nov 8, 2024, at 8:30 PM, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> Hi,
>>
>> On 2024/11/08 21:23, Chuck Lever III wrote:
>>>> On Nov 7, 2024, at 8:19 PM, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>>
>>>> Hi,
>>>>
>>>> On 2024/11/07 22:41, Chuck Lever wrote:
>>>>> On Thu, Nov 07, 2024 at 08:57:23AM +0800, Yu Kuai wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On 2024/11/06 23:19, Chuck Lever III wrote:
>>>>>>>
>>>>>>>
>>>>>>>> On Nov 6, 2024, at 1:16 AM, Greg KH <gregkh@linuxfoundation.org> wrote:
>>>>>>>>
>>>>>>>> On Thu, Oct 24, 2024 at 09:19:41PM +0800, Yu Kuai wrote:
>>>>>>>>> From: Yu Kuai <yukuai3@huawei.com>
>>>>>>>>>
>>>>>>>>> Fix patch is patch 27, relied patches are from:
>>>>>>>
>>>>>>> I assume patch 27 is:
>>>>>>>
>>>>>>> libfs: fix infinite directory reads for offset dir
>>>>>>>
>>>>>>> https://lore.kernel.org/stable/20241024132225.2271667-12-yukuai1@huaweicloud.com/
>>>>>>>
>>>>>>> I don't think the Maple tree patches are a hard
>>>>>>> requirement for this fix. And note that libfs did
>>>>>>> not use Maple tree originally because I was told
>>>>>>> at that time that Maple tree was not yet mature.
>>>>>>>
>>>>>>> So, a better approach might be to fit the fix
>>>>>>> onto linux-6.6.y while sticking with xarray.
>>>>>>
>>>>>> The painful part is that using xarray is not acceptable: the offset
>>>>>> is just 32 bits and if it overflows, readdir will read nothing. That's
>>>>>> why maple_tree has to be used.
>>>>> A 32-bit range should be entirely adequate for this usage.
>>>>> - The offset allocator wraps when it reaches the maximum, it
>>>>> doesn't overflow unless there are actually billions of extant
>>>>> entries in the directory, which IMO is not likely.
>>>>
>>>> Yes, it's not likely, but it's possible, and not hard to trigger in a
>>>> test.
>>> I question whether such a test reflects any real-world
>>> workload.
>>> Besides, there are a number of other limits that will impact
>>> the ability to create that many entries in one directory.
>>> The number of inodes in one tmpfs instance is limited, for
>>> instance.
>>>> And please notice that the offset will increase for each new file,
>>>> and a file can be removed while the offset stays the same.
>>
>> Did you see the above explanation? Files can be removed, so you don't
>> have to store that many files to trigger the offset to overflow.
>>>>> - The offset values are dense, so the directory can use all 2- or
>>>>> 4- billion in the 32-bit integer range before wrapping.
>>>>
>>>> Simple math: if a user creates and removes 1 file each second, it will
>>>> take about 130 years to overflow. And if a user creates and removes 1000
>>>> files each second, it will take about 50 days to overflow.
>
>> The problem is that if the next_offset overflows to 0, then after patch
>> 27, offset_dir_open() will record the 0, and later offset_readdir will
>> return directly, while there can be many files.
>
>
> Let me revisit this for a moment. The xa_alloc_cyclic() call
> in simple_offset_add() has a range limit argument of 2 - U32_MAX.
>
> So I'm not clear how an overflow (or, more precisely, the
> reuse of an offset value) would result in a "0" offset being
> recorded. The range limit prevents the use of 0 and 1.
>
> A "0" offset value would be a bug, I agree, but I don't see
> how that can happen.
>
>
>>> The question is what happens when there are no more offset
>>> values available. xa_alloc_cyclic should fail, and file
>>> creation is supposed to fail at that point. If it doesn't,
>>> that's a bug that is outside of the use of xarray or Maple.
>>
>> Can you show me the code where xa_alloc_cyclic() should fail? At least
>> according to the comments, it will return 1 if the allocation succeeded
>> after wrapping.
>>
>> * Context: Any context. Takes and releases the xa_lock. May sleep if
>> * the @gfp flags permit.
>> * Return: 0 if the allocation succeeded without wrapping. 1 if the
>> * allocation succeeded after wrapping, -ENOMEM if memory could not be
>> * allocated or -EBUSY if there are no free entries in @limit.
>> */
>> static inline int xa_alloc_cyclic(struct xarray *xa, u32 *id, void *entry,
>> struct xa_limit limit, u32 *next, gfp_t gfp)
>
> I recall (dimly) that directory entry offset value re-use
> is acceptable and preferred, so I think ignoring a "1"
> return value from xa_alloc_cyclic() is OK. If there are
> no unused offset values available, it will return -EBUSY,
> and file creation will fail.
>
> Perhaps Christian or Al can chime in here on whether
> directory entry offset value re-use is indeed expected
> to be acceptable.
This can't be acceptable in this case. The reason is straightforward:
it will mess up readdir, and that is much more serious than the CVE
itself.
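Concretely (hypothetical numbers): suppose the cursor has wrapped and
next_offset is now 50, while long-lived files still hold offsets in the
millions. offset_dir_open() then records 50 as the end-of-directory
marker, the "stop at the recorded end" check discards nearly every real
entry, and the directory looks almost empty even though nothing was
deleted. The recorded 0 above is just the most extreme case of this.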
Thanks,
Kuai
>
> Further, my understanding is that:
>
> https://lore.kernel.org/stable/20241024132225.2271667-12-yukuai1@huaweicloud.com/
>
> fixes a rename issue that results in an infinite loop,
> and that's the (only) issue that underlies CVE-2024-46701.
>
> You are suggesting that there are other overflow problems
> with the xarray-based simple_offset implementation. If I
> can confirm them, then I can get these fixed in v6.6. But
> so far, I'm not sure I completely understand these other
> failure modes.
>
> Are you suggesting that the above fix /introduces/ the
> 0 offset problem?
>
> --
> Chuck Lever
>
>
* Re: [PATCH 6.6 00/28] fix CVE-2024-46701
2024-11-07 0:57 ` Yu Kuai
2024-11-07 14:41 ` Chuck Lever
@ 2024-11-07 14:44 ` Liam R. Howlett
1 sibling, 0 replies; 47+ messages in thread
From: Liam R. Howlett @ 2024-11-07 14:44 UTC (permalink / raw)
To: Yu Kuai
Cc: Chuck Lever III, Greg KH, linux-stable, harry.wentland,
sunpeng.li, Rodrigo.Siqueira, alexander.deucher,
christian.koenig, Xinhui.Pan, airlied, Daniel Vetter, Al Viro,
Christian Brauner, Andrew Morton, Hugh Dickins,
Matthew Wilcox (Oracle),
Sasha Levin, srinivasan.shanmugam, chiahsuan.chung, mingo,
mgorman, chengming.zhou, zhangpeng.00, amd-gfx, dri-devel,
Linux Kernel Mailing List, Linux FS Devel, maple-tree, linux-mm,
yi.zhang, yangerkun, yukuai (C)
* Yu Kuai <yukuai1@huaweicloud.com> [241106 19:57]:
> Hi,
>
> 在 2024/11/06 23:19, Chuck Lever III 写道:
> >
> >
> > > On Nov 6, 2024, at 1:16 AM, Greg KH <gregkh@linuxfoundation.org> wrote:
> > >
> > > On Thu, Oct 24, 2024 at 09:19:41PM +0800, Yu Kuai wrote:
> > > > From: Yu Kuai <yukuai3@huawei.com>
> > > >
> > > > Fix patch is patch 27, relied patches are from:
> >
> > I assume patch 27 is:
> >
> > libfs: fix infinite directory reads for offset dir
> >
> > https://lore.kernel.org/stable/20241024132225.2271667-12-yukuai1@huaweicloud.com/
> >
> > I don't think the Maple tree patches are a hard
> > requirement for this fix. And note that libfs did
> > not use Maple tree originally because I was told
> > at that time that Maple tree was not yet mature.
> >
> > So, a better approach might be to fit the fix
> > onto linux-6.6.y while sticking with xarray.
>
> The painful part is that using xarray is not acceptable: the offset
> is just 32 bits and if it overflows, readdir will read nothing. That's
> why maple_tree has to be used.
Why does the xarray cause it to overflow vs the maple tree? The maple
tree conversion was for performance reasons, as far as I know [1].
Thanks,
Liam
[1]. https://lore.kernel.org/all/170820083431.6328.16233178852085891453.stgit@91.116.238.104.host.secureserver.net/
>
> Thanks,
> Kuai
>
> >
> > This is the first I've heard of this CVE. It
> > would help if the patch authors got some
> > notification when these are filed.
> >
> >
> > > > - patches from set [1] to add helpers to maple_tree, the last patch to
> > > > improve fork() performance is not backported;
> > >
> > > So things slowed down?
> > >
> > > > - patches from set [2] to change maple_tree, and follow up fixes;
> > > > - patches from set [3] to convert offset_ctx from xarray to maple_tree;
> > > >
> > > > Please notice that I'm not an expert in this area, and I'm afraid to
> > > > make manual changes. That's why patch 16 revert the commit that is
> > > > different from mainline and will cause conflict backporting new patches.
> > > > patch 28 pick the original mainline patch again.
> > > >
> > > > (And this is what we did to fix the CVE in downstream kernels).
> > > >
> > > > [1] https://lore.kernel.org/all/20231027033845.90608-1-zhangpeng.00@bytedance.com/
> > > > [2] https://lore.kernel.org/all/20231101171629.3612299-2-Liam.Howlett@oracle.com/T/
> > > > [3] https://lore.kernel.org/all/170820083431.6328.16233178852085891453.stgit@91.116.238.104.host.secureserver.net/
> > >
> > > This series looks rough. I want to have the maintainers of these
> > > files/subsystems to ack this before being able to take them.
> > >
> > > thanks,
> > >
> > > greg k-h
> >
> > --
> > Chuck Lever
> >
> >
>
* Re: [PATCH 6.6 00/28] fix CVE-2024-46701
2024-10-24 13:19 [PATCH 6.6 00/28] fix CVE-2024-46701 Yu Kuai
` (16 preceding siblings ...)
2024-11-06 6:16 ` [PATCH 6.6 00/28] fix CVE-2024-46701 Greg KH
@ 2024-11-06 14:43 ` Lorenzo Stoakes
2024-11-07 1:43 ` Yu Kuai
17 siblings, 1 reply; 47+ messages in thread
From: Lorenzo Stoakes @ 2024-11-06 14:43 UTC (permalink / raw)
To: Yu Kuai
Cc: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman, yukuai3,
chengming.zhou, zhangpeng.00, chuck.lever, amd-gfx, dri-devel,
linux-kernel, linux-fsdevel, maple-tree, linux-mm, yi.zhang,
yangerkun
NACK.
Do this some other way that isn't a terrible mess.
You've reverted my CRITICAL fix, then didn't cc- me so I'm grumpy.
Even if you bizarrely brought it back later.
Don't fail to cc- people you revert in future, please, especially in
stable. It's not only discourteous, it's also an actual security risk.
Thanks.
Also this commit log is ridiculous, you don't even explain WHAT ON EARTH
YOU ARE DOING HERE. It's just not good enough to reference a CVE and expect
us to go research this for you, especially one you've 'addressed' in this
totally bizarre fashion.
On Thu, Oct 24, 2024 at 09:19:41PM +0800, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@huawei.com>
>
> Fix patch is patch 27, relied patches are from:
>
> - patches from set [1] to add helpers to maple_tree, the last patch to
> improve fork() performance is not backported;
> - patches from set [2] to change maple_tree, and follow up fixes;
> - patches from set [3] to convert offset_ctx from xarray to maple_tree;
>
> Please notice that I'm not an expert in this area, and I'm afraid to
> make manual changes. That's why patch 16 revert the commit that is
> different from mainline and will cause conflict backporting new patches.
> patch 28 pick the original mainline patch again.
This is... what? :/
You have to fix conflicts, that's part of what backporting involves.
Yeah, rethink your whole approach, thanks.
>
> (And this is what we did to fix the CVE in downstream kernels).
>
> [1] https://lore.kernel.org/all/20231027033845.90608-1-zhangpeng.00@bytedance.com/
> [2] https://lore.kernel.org/all/20231101171629.3612299-2-Liam.Howlett@oracle.com/T/
> [3] https://lore.kernel.org/all/170820083431.6328.16233178852085891453.stgit@91.116.238.104.host.secureserver.net/
>
> Andrew Morton (1):
> lib/maple_tree.c: fix build error due to hotfix alteration
>
> Chuck Lever (5):
> libfs: Re-arrange locking in offset_iterate_dir()
> libfs: Define a minimum directory offset
> libfs: Add simple_offset_empty()
> maple_tree: Add mtree_alloc_cyclic()
> libfs: Convert simple directory offsets to use a Maple Tree
>
> Liam R. Howlett (12):
> maple_tree: remove unnecessary default labels from switch statements
> maple_tree: make mas_erase() more robust
> maple_tree: move debug check to __mas_set_range()
> maple_tree: add end of node tracking to the maple state
> maple_tree: use cached node end in mas_next()
> maple_tree: use cached node end in mas_destroy()
> maple_tree: clean up inlines for some functions
> maple_tree: separate ma_state node from status
> maple_tree: remove mas_searchable()
> maple_tree: use maple state end for write operations
> maple_tree: don't find node end in mtree_lookup_walk()
> maple_tree: mtree_range_walk() clean up
>
> Lorenzo Stoakes (1):
> maple_tree: correct tree corruption on spanning store
>
> Peng Zhang (7):
> maple_tree: add mt_free_one() and mt_attr() helpers
> maple_tree: introduce {mtree,mas}_lock_nested()
> maple_tree: introduce interfaces __mt_dup() and mtree_dup()
> maple_tree: skip other tests when BENCH is enabled
> maple_tree: preserve the tree attributes when destroying maple tree
> maple_tree: add test for mtree_dup()
> maple_tree: avoid checking other gaps after getting the largest gap
>
> Yu Kuai (1):
> Revert "maple_tree: correct tree corruption on spanning store"
>
> yangerkun (1):
> libfs: fix infinite directory reads for offset dir
>
> fs/libfs.c | 129 ++-
> include/linux/fs.h | 6 +-
> include/linux/maple_tree.h | 356 +++---
> include/linux/mm_types.h | 3 +-
> lib/maple_tree.c | 1096 +++++++++++++------
> lib/test_maple_tree.c | 218 ++--
> mm/internal.h | 10 +-
> mm/shmem.c | 4 +-
> tools/include/linux/spinlock.h | 1 +
> tools/testing/radix-tree/linux/maple_tree.h | 2 +-
> tools/testing/radix-tree/maple.c | 390 ++++++-
> 11 files changed, 1564 insertions(+), 651 deletions(-)
>
> --
> 2.39.2
>
* Re: [PATCH 6.6 00/28] fix CVE-2024-46701
2024-11-06 14:43 ` Lorenzo Stoakes
@ 2024-11-07 1:43 ` Yu Kuai
0 siblings, 0 replies; 47+ messages in thread
From: Yu Kuai @ 2024-11-07 1:43 UTC (permalink / raw)
To: Lorenzo Stoakes, Yu Kuai
Cc: stable, gregkh, harry.wentland, sunpeng.li, Rodrigo.Siqueira,
alexander.deucher, christian.koenig, Xinhui.Pan, airlied, daniel,
viro, brauner, Liam.Howlett, akpm, hughd, willy, sashal,
srinivasan.shanmugam, chiahsuan.chung, mingo, mgorman,
chengming.zhou, zhangpeng.00, chuck.lever, amd-gfx, dri-devel,
linux-kernel, linux-fsdevel, maple-tree, linux-mm, yi.zhang,
yangerkun, yukuai (C)
Hi,
On 2024/11/06 22:43, Lorenzo Stoakes wrote:
> NACK.
>
> Do this some other way that isn't a terrible mess.
>
> You've reverted my CRITICAL fix, then didn't cc- me so I'm grumpy.
>
> Even if you bizarrely brought it back later.
>
> Don't fail to cc- people you revert in future, please, especially in
> stable. It's not only discourteous it's also an actual security risk.
ok, that's my fault.
>
> Thanks.
>
> Also this commit log is ridiculous, you don't even explain WHAT ON EARTH
> YOU ARE DOING HERE. It's not just good enough to reference a CVE and expect
> us to go research this for you, especially one you've 'addressed' in this
> totally bizarre fashion.
>
> On Thu, Oct 24, 2024 at 09:19:41PM +0800, Yu Kuai wrote:
>> From: Yu Kuai <yukuai3@huawei.com>
>>
>> Fix patch is patch 27, relied patches are from:
>>
>> - patches from set [1] to add helpers to maple_tree, the last patch to
>> improve fork() performance is not backported;
>> - patches from set [2] to change maple_tree, and follow up fixes;
>> - patches from set [3] to convert offset_ctx from xarray to maple_tree;
>>
>> Please notice that I'm not an expert in this area, and I'm afraid to
>> make manual changes. That's why patch 16 revert the commit that is
>> different from mainline and will cause conflict backporting new patches.
>> patch 28 pick the original mainline patch again.
>
> This is... what? :/
>
> You have to fix conflicts, that's part of what backporting involves.
So, that's the best I can do in this area. I agree that this is
unacceptable now. So I'll just ignore this CVE for v6.6, unless
some expert in this area tries to fix the conflicts for patch 27 in
a better way.
Thanks,
Kuai
>
> Yeah, rethink your whole approach, thanks.
>
>>
>> (And this is what we did to fix the CVE in downstream kernels).
>>
>> [1] https://lore.kernel.org/all/20231027033845.90608-1-zhangpeng.00@bytedance.com/
>> [2] https://lore.kernel.org/all/20231101171629.3612299-2-Liam.Howlett@oracle.com/T/
>> [3] https://lore.kernel.org/all/170820083431.6328.16233178852085891453.stgit@91.116.238.104.host.secureserver.net/
>>
>> Andrew Morton (1):
>> lib/maple_tree.c: fix build error due to hotfix alteration
>>
>> Chuck Lever (5):
>> libfs: Re-arrange locking in offset_iterate_dir()
>> libfs: Define a minimum directory offset
>> libfs: Add simple_offset_empty()
>> maple_tree: Add mtree_alloc_cyclic()
>> libfs: Convert simple directory offsets to use a Maple Tree
>>
>> Liam R. Howlett (12):
>> maple_tree: remove unnecessary default labels from switch statements
>> maple_tree: make mas_erase() more robust
>> maple_tree: move debug check to __mas_set_range()
>> maple_tree: add end of node tracking to the maple state
>> maple_tree: use cached node end in mas_next()
>> maple_tree: use cached node end in mas_destroy()
>> maple_tree: clean up inlines for some functions
>> maple_tree: separate ma_state node from status
>> maple_tree: remove mas_searchable()
>> maple_tree: use maple state end for write operations
>> maple_tree: don't find node end in mtree_lookup_walk()
>> maple_tree: mtree_range_walk() clean up
>>
>> Lorenzo Stoakes (1):
>> maple_tree: correct tree corruption on spanning store
>>
>> Peng Zhang (7):
>> maple_tree: add mt_free_one() and mt_attr() helpers
>> maple_tree: introduce {mtree,mas}_lock_nested()
>> maple_tree: introduce interfaces __mt_dup() and mtree_dup()
>> maple_tree: skip other tests when BENCH is enabled
>> maple_tree: preserve the tree attributes when destroying maple tree
>> maple_tree: add test for mtree_dup()
>> maple_tree: avoid checking other gaps after getting the largest gap
>>
>> Yu Kuai (1):
>> Revert "maple_tree: correct tree corruption on spanning store"
>>
>> yangerkun (1):
>> libfs: fix infinite directory reads for offset dir
>>
>> fs/libfs.c | 129 ++-
>> include/linux/fs.h | 6 +-
>> include/linux/maple_tree.h | 356 +++---
>> include/linux/mm_types.h | 3 +-
>> lib/maple_tree.c | 1096 +++++++++++++------
>> lib/test_maple_tree.c | 218 ++--
>> mm/internal.h | 10 +-
>> mm/shmem.c | 4 +-
>> tools/include/linux/spinlock.h | 1 +
>> tools/testing/radix-tree/linux/maple_tree.h | 2 +-
>> tools/testing/radix-tree/maple.c | 390 ++++++-
>> 11 files changed, 1564 insertions(+), 651 deletions(-)
>>
>> --
>> 2.39.2
>>
>
> .
>