* [PATCH v5 0/6] Track node vacancy to reduce worst case allocation counts
@ 2025-04-10 19:14 Sidhartha Kumar
2025-04-10 19:14 ` [PATCH v5 1/6] maple_tree: convert mas_prealloc_calc() to take in a maple write state Sidhartha Kumar
` (5 more replies)
0 siblings, 6 replies; 9+ messages in thread
From: Sidhartha Kumar @ 2025-04-10 19:14 UTC (permalink / raw)
To: linux-kernel, maple-tree
Cc: linux-mm, akpm, liam.howlett, willy, Sidhartha Kumar
v4[4] -> v5:
- remove l_mas.depth usage in mas_spanning_store()
- fix comment for vacant height and sufficient height to use "height"
rather than depth.
- add r-b to patch 3 by Liam
v3[3] -> v4:
- fix bug reported by Vasily Gorbik by fixing condition for
mast_overflow() and add test case for this condition. This fix
was also tested on s390 by Vasily.
- cleanup new_height variable usage in mas_spanning_rebalance()
and add an additional test to make sure mt_height() is correct
after a rebalance causes a tree's height to decrease per Liam.
v2[2] -> v3:
- add r-b to patches 1,4, and 6
- update function parameter comments in patch 2
- remove line that sets mas->depth in patch 2
- remove redundant code for checking for a spanning write in patch 3
- rewrite commit message of patch 5 for additional context and clarity
v1[1] -> v2:
- fix comment for vacant_height which refers to depth per Wei
- add a patch to reorder switch case statements in mas_prealloc_calc and
mas_wr_store_entry
- use sufficient height in spanning stores
- modify patch 2 to use a counter to track ascending the tree rather
than overloading mas->depth to serve this function.
================ overview ========================
Currently, the maple tree preallocates the worst case number of nodes for a
given store type by taking into account the whole height of the tree. This
comes from a worst case scenario of every node in the tree being full and
having to propagate node allocations upwards until we reach the root of the
tree. This can be optimized if there are vacancies in nodes below the root
node. This series implements tracking the level at which there is a vacant
node so we only need to allocate until this level is reached, rather than
always using the full height of the tree.
The ma_wr_state struct is modified to add a field which keeps track of the
vacant height and is updated during walks of the tree. This value is then
read in mas_prealloc_calc() when we decide how many nodes to allocate.
For rebalancing and spanning stores, we also need to track the lowest
height at which a node has 1 more entry than the minimum sufficient number
of entries. This is because rebalancing can cause a parent node to become
insufficient, which results in further node allocations. In this case, we
need to use the sufficient height as the worst case rather than the vacant
height.
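To sketch the idea (illustrative only, with made-up names rather than the
actual kernel code): the existing worst case charges up to three new nodes
per level plus one for a possible new root, so tracking the lowest level
with free space lets the count scale with the distance to that level
instead of the full tree height.

	/*
	 * Illustrative sketch only (simplified names, not the kernel
	 * implementation): worst case node count for a store, given the
	 * tree height and the height of the lowest node with free space
	 * on the walked path. Each level that must be rebuilt is charged
	 * up to three new nodes, plus one for a possible new root.
	 */
	static int prealloc_worst_case(unsigned char height,
				       unsigned char vacant_height)
	{
		/* Only the levels below the vacant node need new nodes. */
		return (height - vacant_height) * 3 + 1;
	}

With a vacant_height of 0 (no vacancy found), this degenerates to the
current worst case of height * 3 + 1.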
patch 1-2: preparatory patches
patch 3: implement vacant height tracking + update the tests
patch 4: support vacant height tracking for rebalancing writes
patch 5: implement sufficient height tracking
patch 6: reorder switch case statements
================ results =========================
Bpftrace was used to profile the allocation path for requesting new maple
nodes while running stress-ng --mmap for 120s. The histograms below represent
requests to kmem_cache_alloc_bulk() and show the count argument, i.e. how
many maple nodes the caller is requesting from kmem_cache_alloc_bulk().
command: stress-ng --mmap 4 --timeout 120
mm-unstable
@bulk_alloc_req:
[3, 4) 4 | |
[4, 5) 54170 |@ |
[5, 6) 0 | |
[6, 7) 893057 |@@@@@@@@@@@@@@@@@@@@ |
[7, 8) 4 | |
[8, 9) 2230287 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[9, 10) 55811 |@ |
[10, 11) 77834 |@ |
[11, 12) 0 | |
[12, 13) 1368684 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[13, 14) 0 | |
[14, 15) 0 | |
[15, 16) 367197 |@@@@@@@@ |
@maple_node_total: 46,630,160
@total_vmas: 46184591
mm-unstable + this series
@bulk_alloc_req:
[2, 3) 198 | |
[3, 4) 4 | |
[4, 5) 43 | |
[5, 6) 0 | |
[6, 7) 1069503 |@@@@@@@@@@@@@@@@@@@@@ |
[7, 8) 4 | |
[8, 9) 2597268 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[9, 10) 472191 |@@@@@@@@@ |
[10, 11) 191904 |@@@ |
[11, 12) 0 | |
[12, 13) 247316 |@@@@ |
[13, 14) 0 | |
[14, 15) 0 | |
[15, 16) 98769 |@ |
@maple_node_total: 37,813,856
@total_vmas: 43493287
This represents a ~19% reduction in the number of bulk maple nodes allocated.
For more reproducible results, a histogram of the return value of
mas_prealloc_calc() is displayed while running the maple_tree_tests,
which have a deterministic store pattern:
mas_prealloc_calc() return value mm-unstable
1 : (12068)
3 : (11836)
5 : ***** (271192)
7 : ************************************************** (2329329)
9 : *********** (534186)
10 : (435)
11 : *************** (704306)
13 : ******** (409781)
mas_prealloc_calc() return value mm-unstable + this series
1 : (12070)
3 : ************************************************** (3548777)
5 : ******** (633458)
7 : (65081)
9 : (11224)
10 : (341)
11 : (2973)
13 : (68)
do_mmap latency was also measured for regressions:
command: stress-ng --mmap 4 --timeout 120
mm-unstable:
avg = 7162 nsecs, total: 16101821292 nsecs, count: 2248034
mm-unstable + this series:
avg = 6689 nsecs, total: 15135391764 nsecs, count: 2262726
stress-ng --mmap 4 --timeout 120
with vacant_height:
stress-ng: info: [257] 21526312 Maple Tree Read 0.176 M/sec
stress-ng: info: [257] 339979348 Maple Tree Write 2.774 M/sec
without vacant_height:
stress-ng: info: [8228] 20968900 Maple Tree Read 0.171 M/sec
stress-ng: info: [8228] 312214648 Maple Tree Write 2.547 M/sec
This represents a ~3% increase in read throughput and a ~9% increase in write throughput.
[1]: https://lore.kernel.org/lkml/20241114170524.64391-1-sidhartha.kumar@oracle.com/T/
[2]: https://lore.kernel.org/lkml/20250221163610.578409-1-sidhartha.kumar@oracle.com/
[3]: https://lore.kernel.org/lkml/20250227204823.758784-1-sidhartha.kumar@oracle.com/
[4]: https://lore.kernel.org/lkml/20250407184102.2155415-1-sidhartha.kumar@oracle.com/
Sidhartha Kumar (6):
maple_tree: convert mas_prealloc_calc() to take in a maple write state
maple_tree: use height and depth consistently
maple_tree: use vacant nodes to reduce worst case allocations
maple_tree: break on convergence in mas_spanning_rebalance()
maple_tree: add sufficient height
maple_tree: reorder mas->store_type case statements
include/linux/maple_tree.h | 4 +
lib/maple_tree.c | 191 ++++++++++++++++++-------------
tools/testing/radix-tree/maple.c | 126 ++++++++++++++++++--
3 files changed, 233 insertions(+), 88 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH v5 1/6] maple_tree: convert mas_prealloc_calc() to take in a maple write state
2025-04-10 19:14 [PATCH v5 0/6] Track node vacancy to reduce worst case allocation counts Sidhartha Kumar
@ 2025-04-10 19:14 ` Sidhartha Kumar
2025-04-10 19:14 ` [PATCH v5 2/6] maple_tree: use height and depth consistently Sidhartha Kumar
` (4 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Sidhartha Kumar @ 2025-04-10 19:14 UTC (permalink / raw)
To: linux-kernel, maple-tree
Cc: linux-mm, akpm, liam.howlett, willy, Sidhartha Kumar, Liam R . Howlett
In a subsequent patch, mas_prealloc_calc() will need to access fields that
are only present in the ma_wr_state. Convert the function to take in a
ma_wr_state and modify all callers. There is no functional change.
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
lib/maple_tree.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index d0bea23fa4bc..f25ee210d495 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4140,13 +4140,14 @@ static inline void mas_wr_prealloc_setup(struct ma_wr_state *wr_mas)
/**
* mas_prealloc_calc() - Calculate number of nodes needed for a
* given store oepration
- * @mas: The maple state
+ * @wr_mas: The maple write state
* @entry: The entry to store into the tree
*
* Return: Number of nodes required for preallocation.
*/
-static inline int mas_prealloc_calc(struct ma_state *mas, void *entry)
+static inline int mas_prealloc_calc(struct ma_wr_state *wr_mas, void *entry)
{
+ struct ma_state *mas = wr_mas->mas;
int ret = mas_mt_height(mas) * 3 + 1;
switch (mas->store_type) {
@@ -4243,16 +4244,15 @@ static inline enum store_type mas_wr_store_type(struct ma_wr_state *wr_mas)
*/
static inline void mas_wr_preallocate(struct ma_wr_state *wr_mas, void *entry)
{
- struct ma_state *mas = wr_mas->mas;
int request;
mas_wr_prealloc_setup(wr_mas);
- mas->store_type = mas_wr_store_type(wr_mas);
- request = mas_prealloc_calc(mas, entry);
+ wr_mas->mas->store_type = mas_wr_store_type(wr_mas);
+ request = mas_prealloc_calc(wr_mas, entry);
if (!request)
return;
- mas_node_count(mas, request);
+ mas_node_count(wr_mas->mas, request);
}
/**
@@ -5397,7 +5397,7 @@ void *mas_store(struct ma_state *mas, void *entry)
return wr_mas.content;
}
- request = mas_prealloc_calc(mas, entry);
+ request = mas_prealloc_calc(&wr_mas, entry);
if (!request)
goto store;
@@ -5494,7 +5494,7 @@ int mas_preallocate(struct ma_state *mas, void *entry, gfp_t gfp)
mas_wr_prealloc_setup(&wr_mas);
mas->store_type = mas_wr_store_type(&wr_mas);
- request = mas_prealloc_calc(mas, entry);
+ request = mas_prealloc_calc(&wr_mas, entry);
if (!request)
return ret;
--
2.43.0
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH v5 2/6] maple_tree: use height and depth consistently
2025-04-10 19:14 [PATCH v5 0/6] Track node vacancy to reduce worst case allocation counts Sidhartha Kumar
2025-04-10 19:14 ` [PATCH v5 1/6] maple_tree: convert mas_prealloc_calc() to take in a maple write state Sidhartha Kumar
@ 2025-04-10 19:14 ` Sidhartha Kumar
2025-04-10 19:17 ` Liam R. Howlett
2025-04-10 19:14 ` [PATCH v5 3/6] maple_tree: use vacant nodes to reduce worst case allocations Sidhartha Kumar
` (3 subsequent siblings)
5 siblings, 1 reply; 9+ messages in thread
From: Sidhartha Kumar @ 2025-04-10 19:14 UTC (permalink / raw)
To: linux-kernel, maple-tree
Cc: linux-mm, akpm, liam.howlett, willy, Sidhartha Kumar
For the maple tree, the root node is defined to have a depth of 0 with a
height of 1. Each level down from the root, these values are incremented
by 1. Various code paths define a root with depth 1, which is inconsistent
with the definition. Modify the code to be consistent with this
definition.
In mas_spanning_rebalance(), l_mas.depth was being used to track the
height based on the number of iterations done in the main loop. This
information was then used in mas_put_in_tree() to set the height. Rather
than overload the l_mas.depth field to track height, simply keep track of
the height in the local variable new_height and pass it directly to
mas_wmb_replace(), which passes it on to mas_put_in_tree(). This
allows us to remove the writes to l_mas.depth.
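As a concrete illustration of this definition (values shown for a
three-level tree, illustrative only):

	level                  depth    height
	root                     0        1
	internal node            1        2
	leaf                     2        3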
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
lib/maple_tree.c | 84 +++++++++++++++++---------------
tools/testing/radix-tree/maple.c | 19 ++++++++
2 files changed, 63 insertions(+), 40 deletions(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index f25ee210d495..195b19505b39 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -211,14 +211,14 @@ static void ma_free_rcu(struct maple_node *node)
call_rcu(&node->rcu, mt_free_rcu);
}
-static void mas_set_height(struct ma_state *mas)
+static void mt_set_height(struct maple_tree *mt, unsigned char height)
{
- unsigned int new_flags = mas->tree->ma_flags;
+ unsigned int new_flags = mt->ma_flags;
new_flags &= ~MT_FLAGS_HEIGHT_MASK;
- MAS_BUG_ON(mas, mas->depth > MAPLE_HEIGHT_MAX);
- new_flags |= mas->depth << MT_FLAGS_HEIGHT_OFFSET;
- mas->tree->ma_flags = new_flags;
+ MT_BUG_ON(mt, height > MAPLE_HEIGHT_MAX);
+ new_flags |= height << MT_FLAGS_HEIGHT_OFFSET;
+ mt->ma_flags = new_flags;
}
static unsigned int mas_mt_height(struct ma_state *mas)
@@ -1371,7 +1371,7 @@ static inline struct maple_enode *mas_start(struct ma_state *mas)
root = mas_root(mas);
/* Tree with nodes */
if (likely(xa_is_node(root))) {
- mas->depth = 1;
+ mas->depth = 0;
mas->status = ma_active;
mas->node = mte_safe_root(root);
mas->offset = 0;
@@ -1712,9 +1712,10 @@ static inline void mas_adopt_children(struct ma_state *mas,
* node as dead.
* @mas: the maple state with the new node
* @old_enode: The old maple encoded node to replace.
+ * @new_height: if we are inserting a root node, update the height of the tree
*/
static inline void mas_put_in_tree(struct ma_state *mas,
- struct maple_enode *old_enode)
+ struct maple_enode *old_enode, char new_height)
__must_hold(mas->tree->ma_lock)
{
unsigned char offset;
@@ -1723,7 +1724,7 @@ static inline void mas_put_in_tree(struct ma_state *mas,
if (mte_is_root(mas->node)) {
mas_mn(mas)->parent = ma_parent_ptr(mas_tree_parent(mas));
rcu_assign_pointer(mas->tree->ma_root, mte_mk_root(mas->node));
- mas_set_height(mas);
+ mt_set_height(mas->tree, new_height);
} else {
offset = mte_parent_slot(mas->node);
@@ -1741,12 +1742,13 @@ static inline void mas_put_in_tree(struct ma_state *mas,
* the parent encoding to locate the maple node in the tree.
* @mas: the ma_state with @mas->node pointing to the new node.
* @old_enode: The old maple encoded node.
+ * @new_height: The new height of the tree as a result of the operation
*/
static inline void mas_replace_node(struct ma_state *mas,
- struct maple_enode *old_enode)
+ struct maple_enode *old_enode, unsigned char new_height)
__must_hold(mas->tree->ma_lock)
{
- mas_put_in_tree(mas, old_enode);
+ mas_put_in_tree(mas, old_enode, new_height);
mas_free(mas, old_enode);
}
@@ -2536,10 +2538,11 @@ static inline void mas_topiary_node(struct ma_state *mas,
*
* @mas: The maple state pointing at the new data
* @old_enode: The maple encoded node being replaced
+ * @new_height: The new height of the tree as a result of the operation
*
*/
static inline void mas_topiary_replace(struct ma_state *mas,
- struct maple_enode *old_enode)
+ struct maple_enode *old_enode, unsigned char new_height)
{
struct ma_state tmp[3], tmp_next[3];
MA_TOPIARY(subtrees, mas->tree);
@@ -2547,7 +2550,7 @@ static inline void mas_topiary_replace(struct ma_state *mas,
int i, n;
/* Place data in tree & then mark node as old */
- mas_put_in_tree(mas, old_enode);
+ mas_put_in_tree(mas, old_enode, new_height);
/* Update the parent pointers in the tree */
tmp[0] = *mas;
@@ -2631,14 +2634,15 @@ static inline void mas_topiary_replace(struct ma_state *mas,
* mas_wmb_replace() - Write memory barrier and replace
* @mas: The maple state
* @old_enode: The old maple encoded node that is being replaced.
+ * @new_height: The new height of the tree as a result of the operation
*
* Updates gap as necessary.
*/
static inline void mas_wmb_replace(struct ma_state *mas,
- struct maple_enode *old_enode)
+ struct maple_enode *old_enode, unsigned char new_height)
{
/* Insert the new data in the tree */
- mas_topiary_replace(mas, old_enode);
+ mas_topiary_replace(mas, old_enode, new_height);
if (mte_is_leaf(mas->node))
return;
@@ -2824,6 +2828,7 @@ static void mas_spanning_rebalance(struct ma_state *mas,
{
unsigned char split, mid_split;
unsigned char slot = 0;
+ unsigned char new_height = 0; /* used if node is a new root */
struct maple_enode *left = NULL, *middle = NULL, *right = NULL;
struct maple_enode *old_enode;
@@ -2845,8 +2850,6 @@ static void mas_spanning_rebalance(struct ma_state *mas,
unlikely(mast->bn->b_end <= mt_min_slots[mast->bn->type]))
mast_spanning_rebalance(mast);
- l_mas.depth = 0;
-
/*
* Each level of the tree is examined and balanced, pushing data to the left or
* right, or rebalancing against left or right nodes is employed to avoid
@@ -2866,6 +2869,7 @@ static void mas_spanning_rebalance(struct ma_state *mas,
mast_set_split_parents(mast, left, middle, right, split,
mid_split);
mast_cp_to_nodes(mast, left, middle, right, split, mid_split);
+ new_height++;
/*
* Copy data from next level in the tree to mast->bn from next
@@ -2873,7 +2877,6 @@ static void mas_spanning_rebalance(struct ma_state *mas,
*/
memset(mast->bn, 0, sizeof(struct maple_big_node));
mast->bn->type = mte_node_type(left);
- l_mas.depth++;
/* Root already stored in l->node. */
if (mas_is_root_limits(mast->l))
@@ -2909,8 +2912,9 @@ static void mas_spanning_rebalance(struct ma_state *mas,
l_mas.node = mt_mk_node(ma_mnode_ptr(mas_pop_node(mas)),
mte_node_type(mast->orig_l->node));
- l_mas.depth++;
+
mab_mas_cp(mast->bn, 0, mt_slots[mast->bn->type] - 1, &l_mas, true);
+ new_height++;
mas_set_parent(mas, left, l_mas.node, slot);
if (middle)
mas_set_parent(mas, middle, l_mas.node, ++slot);
@@ -2933,7 +2937,7 @@ static void mas_spanning_rebalance(struct ma_state *mas,
mas->min = l_mas.min;
mas->max = l_mas.max;
mas->offset = l_mas.offset;
- mas_wmb_replace(mas, old_enode);
+ mas_wmb_replace(mas, old_enode, new_height);
mtree_range_walk(mas);
return;
}
@@ -3009,6 +3013,7 @@ static inline void mas_destroy_rebalance(struct ma_state *mas, unsigned char end
void __rcu **l_slots, **slots;
unsigned long *l_pivs, *pivs, gap;
bool in_rcu = mt_in_rcu(mas->tree);
+ unsigned char new_height = mas_mt_height(mas);
MA_STATE(l_mas, mas->tree, mas->index, mas->last);
@@ -3103,7 +3108,7 @@ static inline void mas_destroy_rebalance(struct ma_state *mas, unsigned char end
mas_ascend(mas);
if (in_rcu) {
- mas_replace_node(mas, old_eparent);
+ mas_replace_node(mas, old_eparent, new_height);
mas_adopt_children(mas, mas->node);
}
@@ -3114,10 +3119,9 @@ static inline void mas_destroy_rebalance(struct ma_state *mas, unsigned char end
* mas_split_final_node() - Split the final node in a subtree operation.
* @mast: the maple subtree state
* @mas: The maple state
- * @height: The height of the tree in case it's a new root.
*/
static inline void mas_split_final_node(struct maple_subtree_state *mast,
- struct ma_state *mas, int height)
+ struct ma_state *mas)
{
struct maple_enode *ancestor;
@@ -3126,7 +3130,6 @@ static inline void mas_split_final_node(struct maple_subtree_state *mast,
mast->bn->type = maple_arange_64;
else
mast->bn->type = maple_range_64;
- mas->depth = height;
}
/*
* Only a single node is used here, could be root.
@@ -3214,7 +3217,6 @@ static inline void mast_split_data(struct maple_subtree_state *mast,
* mas_push_data() - Instead of splitting a node, it is beneficial to push the
* data to the right or left node if there is room.
* @mas: The maple state
- * @height: The current height of the maple state
* @mast: The maple subtree state
* @left: Push left or not.
*
@@ -3222,8 +3224,8 @@ static inline void mast_split_data(struct maple_subtree_state *mast,
*
* Return: True if pushed, false otherwise.
*/
-static inline bool mas_push_data(struct ma_state *mas, int height,
- struct maple_subtree_state *mast, bool left)
+static inline bool mas_push_data(struct ma_state *mas,
+ struct maple_subtree_state *mast, bool left)
{
unsigned char slot_total = mast->bn->b_end;
unsigned char end, space, split;
@@ -3280,7 +3282,7 @@ static inline bool mas_push_data(struct ma_state *mas, int height,
mast_split_data(mast, mas, split);
mast_fill_bnode(mast, mas, 2);
- mas_split_final_node(mast, mas, height + 1);
+ mas_split_final_node(mast, mas);
return true;
}
@@ -3293,6 +3295,7 @@ static void mas_split(struct ma_state *mas, struct maple_big_node *b_node)
{
struct maple_subtree_state mast;
int height = 0;
+ unsigned int orig_height = mas_mt_height(mas);
unsigned char mid_split, split = 0;
struct maple_enode *old;
@@ -3319,7 +3322,6 @@ static void mas_split(struct ma_state *mas, struct maple_big_node *b_node)
MA_STATE(prev_r_mas, mas->tree, mas->index, mas->last);
trace_ma_op(__func__, mas);
- mas->depth = mas_mt_height(mas);
mast.l = &l_mas;
mast.r = &r_mas;
@@ -3327,9 +3329,9 @@ static void mas_split(struct ma_state *mas, struct maple_big_node *b_node)
mast.orig_r = &prev_r_mas;
mast.bn = b_node;
- while (height++ <= mas->depth) {
+ while (height++ <= orig_height) {
if (mt_slots[b_node->type] > b_node->b_end) {
- mas_split_final_node(&mast, mas, height);
+ mas_split_final_node(&mast, mas);
break;
}
@@ -3344,11 +3346,15 @@ static void mas_split(struct ma_state *mas, struct maple_big_node *b_node)
* is a significant savings.
*/
/* Try to push left. */
- if (mas_push_data(mas, height, &mast, true))
+ if (mas_push_data(mas, &mast, true)) {
+ height++;
break;
+ }
/* Try to push right. */
- if (mas_push_data(mas, height, &mast, false))
+ if (mas_push_data(mas, &mast, false)) {
+ height++;
break;
+ }
split = mab_calc_split(mas, b_node, &mid_split);
mast_split_data(&mast, mas, split);
@@ -3365,7 +3371,7 @@ static void mas_split(struct ma_state *mas, struct maple_big_node *b_node)
/* Set the original node as dead */
old = mas->node;
mas->node = l_mas.node;
- mas_wmb_replace(mas, old);
+ mas_wmb_replace(mas, old, height);
mtree_range_walk(mas);
return;
}
@@ -3424,8 +3430,7 @@ static inline void mas_root_expand(struct ma_state *mas, void *entry)
if (mas->last != ULONG_MAX)
pivots[++slot] = ULONG_MAX;
- mas->depth = 1;
- mas_set_height(mas);
+ mt_set_height(mas->tree, 1);
ma_set_meta(node, maple_leaf_64, 0, slot);
/* swap the new root into the tree */
rcu_assign_pointer(mas->tree->ma_root, mte_mk_root(mas->node));
@@ -3669,8 +3674,7 @@ static inline void mas_new_root(struct ma_state *mas, void *entry)
WARN_ON_ONCE(mas->index || mas->last != ULONG_MAX);
if (!entry) {
- mas->depth = 0;
- mas_set_height(mas);
+ mt_set_height(mas->tree, 0);
rcu_assign_pointer(mas->tree->ma_root, entry);
mas->status = ma_start;
goto done;
@@ -3684,8 +3688,7 @@ static inline void mas_new_root(struct ma_state *mas, void *entry)
mas->status = ma_active;
rcu_assign_pointer(slots[0], entry);
pivots[0] = mas->last;
- mas->depth = 1;
- mas_set_height(mas);
+ mt_set_height(mas->tree, 1);
rcu_assign_pointer(mas->tree->ma_root, mte_mk_root(mas->node));
done:
@@ -3804,6 +3807,7 @@ static inline void mas_wr_node_store(struct ma_wr_state *wr_mas,
struct maple_node reuse, *newnode;
unsigned char copy_size, node_pivots = mt_pivots[wr_mas->type];
bool in_rcu = mt_in_rcu(mas->tree);
+ unsigned char height = mas_mt_height(mas);
if (mas->last == wr_mas->end_piv)
offset_end++; /* don't copy this offset */
@@ -3860,7 +3864,7 @@ static inline void mas_wr_node_store(struct ma_wr_state *wr_mas,
struct maple_enode *old_enode = mas->node;
mas->node = mt_mk_node(newnode, wr_mas->type);
- mas_replace_node(mas, old_enode);
+ mas_replace_node(mas, old_enode, height);
} else {
memcpy(wr_mas->node, newnode, sizeof(struct maple_node));
}
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index bc30050227fd..e0f8fabe8821 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -36248,6 +36248,21 @@ static noinline void __init check_mtree_dup(struct maple_tree *mt)
extern void test_kmem_cache_bulk(void);
+static inline void check_spanning_store_height(struct maple_tree *mt)
+{
+ int index = 0;
+ MA_STATE(mas, mt, 0, 0);
+ mas_lock(&mas);
+ while (mt_height(mt) != 3) {
+ mas_store_gfp(&mas, xa_mk_value(index), GFP_KERNEL);
+ mas_set(&mas, ++index);
+ }
+ mas_set_range(&mas, 90, 140);
+ mas_store_gfp(&mas, xa_mk_value(index), GFP_KERNEL);
+ MT_BUG_ON(mt, mas_mt_height(&mas) != 2);
+ mas_unlock(&mas);
+}
+
/* callback function used for check_nomem_writer_race() */
static void writer2(void *maple_tree)
{
@@ -36414,6 +36429,10 @@ void farmer_tests(void)
check_spanning_write(&tree);
mtree_destroy(&tree);
+ mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
+ check_spanning_store_height(&tree);
+ mtree_destroy(&tree);
+
mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
check_null_expand(&tree);
mtree_destroy(&tree);
--
2.43.0
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH v5 3/6] maple_tree: use vacant nodes to reduce worst case allocations
2025-04-10 19:14 [PATCH v5 0/6] Track node vacancy to reduce worst case allocation counts Sidhartha Kumar
2025-04-10 19:14 ` [PATCH v5 1/6] maple_tree: convert mas_prealloc_calc() to take in a maple write state Sidhartha Kumar
2025-04-10 19:14 ` [PATCH v5 2/6] maple_tree: use height and depth consistently Sidhartha Kumar
@ 2025-04-10 19:14 ` Sidhartha Kumar
2025-04-10 19:14 ` [PATCH v5 4/6] maple_tree: break on convergence in mas_spanning_rebalance() Sidhartha Kumar
` (2 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Sidhartha Kumar @ 2025-04-10 19:14 UTC (permalink / raw)
To: linux-kernel, maple-tree
Cc: linux-mm, akpm, liam.howlett, willy, Sidhartha Kumar, Liam R . Howlett
In order to determine the store type for a maple tree operation, a walk
of the tree is done through mas_wr_walk(). This function descends the
tree until a spanning write is detected or we reach a leaf node. While
descending, keep track of the height at which we encounter a node with
available space. This is done by checking if mas->end is less than the
number of slots a given node type can fit.
Now that the height of the vacant node is tracked, we can use the
difference between the height of the tree and the height of the vacant
node to know how many levels the creation of new nodes will have to
propagate upwards. Update mas_prealloc_calc() to consider the vacant
height and reduce the number of worst-case allocations.
Rebalancing and spanning stores are not supported and fall back to using
the full height of the tree for allocations.
Update preallocation testing assertions to take into account vacant
height.
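As a worked example (numbers chosen for illustration only): in a tree of
height 4 where the lowest node with free space on the walked path has a
height of 2, only the 4 - 2 = 2 levels below it can require new nodes.
A split store therefore preallocates 2 * 2 + 1 = 5 nodes instead of the
previous 4 * 2 + 1 = 9.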
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
include/linux/maple_tree.h | 2 +
lib/maple_tree.c | 13 ++++--
tools/testing/radix-tree/maple.c | 79 ++++++++++++++++++++++++++++----
3 files changed, 82 insertions(+), 12 deletions(-)
diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index cbbcd18d4186..657adb33e61e 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -463,6 +463,7 @@ struct ma_wr_state {
void __rcu **slots; /* mas->node->slots pointer */
void *entry; /* The entry to write */
void *content; /* The existing entry that is being overwritten */
+ unsigned char vacant_height; /* Height of lowest node with free space */
};
#define mas_lock(mas) spin_lock(&((mas)->tree->ma_lock))
@@ -498,6 +499,7 @@ struct ma_wr_state {
.mas = ma_state, \
.content = NULL, \
.entry = wr_entry, \
+ .vacant_height = 0 \
}
#define MA_TOPIARY(name, tree) \
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 195b19505b39..3f794ef072f4 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -3537,6 +3537,9 @@ static bool mas_wr_walk(struct ma_wr_state *wr_mas)
if (ma_is_leaf(wr_mas->type))
return true;
+ if (mas->end < mt_slots[wr_mas->type] - 1)
+ wr_mas->vacant_height = mas->depth + 1;
+
mas_wr_walk_traverse(wr_mas);
}
@@ -4152,7 +4155,9 @@ static inline void mas_wr_prealloc_setup(struct ma_wr_state *wr_mas)
static inline int mas_prealloc_calc(struct ma_wr_state *wr_mas, void *entry)
{
struct ma_state *mas = wr_mas->mas;
- int ret = mas_mt_height(mas) * 3 + 1;
+ unsigned char height = mas_mt_height(mas);
+ int ret = height * 3 + 1;
+ unsigned char delta = height - wr_mas->vacant_height;
switch (mas->store_type) {
case wr_invalid:
@@ -4170,13 +4175,13 @@ static inline int mas_prealloc_calc(struct ma_wr_state *wr_mas, void *entry)
ret = 0;
break;
case wr_spanning_store:
- ret = mas_mt_height(mas) * 3 + 1;
+ WARN_ON_ONCE(ret != height * 3 + 1);
break;
case wr_split_store:
- ret = mas_mt_height(mas) * 2 + 1;
+ ret = delta * 2 + 1;
break;
case wr_rebalance:
- ret = mas_mt_height(mas) * 2 - 1;
+ ret = height * 2 + 1;
break;
case wr_node_store:
ret = mt_in_rcu(mas->tree) ? 1 : 0;
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index e0f8fabe8821..e37a3ab2e921 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -35475,15 +35475,65 @@ static void check_dfs_preorder(struct maple_tree *mt)
}
/* End of depth first search tests */
+/* get height of the lowest non-leaf node with free space */
+static unsigned char get_vacant_height(struct ma_wr_state *wr_mas, void *entry)
+{
+ struct ma_state *mas = wr_mas->mas;
+ char vacant_height = 0;
+ enum maple_type type;
+ unsigned long *pivots;
+ unsigned long min = 0;
+ unsigned long max = ULONG_MAX;
+ unsigned char offset;
+
+ /* start traversal */
+ mas_reset(mas);
+ mas_start(mas);
+ if (!xa_is_node(mas_root(mas)))
+ return 0;
+
+ type = mte_node_type(mas->node);
+ wr_mas->type = type;
+ while (!ma_is_leaf(type)) {
+ mas_node_walk(mas, mte_to_node(mas->node), type, &min, &max);
+ offset = mas->offset;
+ mas->end = mas_data_end(mas);
+ pivots = ma_pivots(mte_to_node(mas->node), type);
+
+ if (pivots) {
+ if (offset)
+ min = pivots[mas->offset - 1];
+ if (offset < mas->end)
+ max = pivots[mas->offset];
+ }
+ wr_mas->r_max = offset < mas->end ? pivots[offset] : mas->max;
+
+ /* detect spanning write */
+ if (mas_is_span_wr(wr_mas))
+ break;
+
+ if (mas->end < mt_slot_count(mas->node) - 1)
+ vacant_height = mas->depth + 1;
+
+ mas_descend(mas);
+ type = mte_node_type(mas->node);
+ mas->depth++;
+ }
+
+ return vacant_height;
+}
+
/* Preallocation testing */
static noinline void __init check_prealloc(struct maple_tree *mt)
{
unsigned long i, max = 100;
unsigned long allocated;
unsigned char height;
+ unsigned char vacant_height;
struct maple_node *mn;
void *ptr = check_prealloc;
MA_STATE(mas, mt, 10, 20);
+ MA_WR_STATE(wr_mas, &mas, ptr);
mt_set_non_kernel(1000);
for (i = 0; i <= max; i++)
@@ -35494,8 +35544,9 @@ static noinline void __init check_prealloc(struct maple_tree *mt)
MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) != 0);
allocated = mas_allocated(&mas);
height = mas_mt_height(&mas);
+ vacant_height = get_vacant_height(&wr_mas, ptr);
MT_BUG_ON(mt, allocated == 0);
- MT_BUG_ON(mt, allocated != 1 + height * 3);
+ MT_BUG_ON(mt, allocated != 1 + (height - vacant_height) * 3);
mas_destroy(&mas);
allocated = mas_allocated(&mas);
MT_BUG_ON(mt, allocated != 0);
@@ -35503,8 +35554,9 @@ static noinline void __init check_prealloc(struct maple_tree *mt)
MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) != 0);
allocated = mas_allocated(&mas);
height = mas_mt_height(&mas);
+ vacant_height = get_vacant_height(&wr_mas, ptr);
MT_BUG_ON(mt, allocated == 0);
- MT_BUG_ON(mt, allocated != 1 + height * 3);
+ MT_BUG_ON(mt, allocated != 1 + (height - vacant_height) * 3);
MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) != 0);
mas_destroy(&mas);
allocated = mas_allocated(&mas);
@@ -35514,7 +35566,8 @@ static noinline void __init check_prealloc(struct maple_tree *mt)
MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) != 0);
allocated = mas_allocated(&mas);
height = mas_mt_height(&mas);
- MT_BUG_ON(mt, allocated != 1 + height * 3);
+ vacant_height = get_vacant_height(&wr_mas, ptr);
+ MT_BUG_ON(mt, allocated != 1 + (height - vacant_height) * 3);
mn = mas_pop_node(&mas);
MT_BUG_ON(mt, mas_allocated(&mas) != allocated - 1);
mn->parent = ma_parent_ptr(mn);
@@ -35527,7 +35580,8 @@ static noinline void __init check_prealloc(struct maple_tree *mt)
MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) != 0);
allocated = mas_allocated(&mas);
height = mas_mt_height(&mas);
- MT_BUG_ON(mt, allocated != 1 + height * 3);
+ vacant_height = get_vacant_height(&wr_mas, ptr);
+ MT_BUG_ON(mt, allocated != 1 + (height - vacant_height) * 3);
mn = mas_pop_node(&mas);
MT_BUG_ON(mt, mas_allocated(&mas) != allocated - 1);
MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) != 0);
@@ -35540,7 +35594,8 @@ static noinline void __init check_prealloc(struct maple_tree *mt)
MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) != 0);
allocated = mas_allocated(&mas);
height = mas_mt_height(&mas);
- MT_BUG_ON(mt, allocated != 1 + height * 3);
+ vacant_height = get_vacant_height(&wr_mas, ptr);
+ MT_BUG_ON(mt, allocated != 1 + (height - vacant_height) * 3);
mn = mas_pop_node(&mas);
MT_BUG_ON(mt, mas_allocated(&mas) != allocated - 1);
mas_push_node(&mas, mn);
@@ -35553,7 +35608,8 @@ static noinline void __init check_prealloc(struct maple_tree *mt)
MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) != 0);
allocated = mas_allocated(&mas);
height = mas_mt_height(&mas);
- MT_BUG_ON(mt, allocated != 1 + height * 3);
+ vacant_height = get_vacant_height(&wr_mas, ptr);
+ MT_BUG_ON(mt, allocated != 1 + (height - vacant_height) * 3);
mas_store_prealloc(&mas, ptr);
MT_BUG_ON(mt, mas_allocated(&mas) != 0);
@@ -35578,7 +35634,8 @@ static noinline void __init check_prealloc(struct maple_tree *mt)
MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) != 0);
allocated = mas_allocated(&mas);
height = mas_mt_height(&mas);
- MT_BUG_ON(mt, allocated != 1 + height * 2);
+ vacant_height = get_vacant_height(&wr_mas, ptr);
+ MT_BUG_ON(mt, allocated != 1 + (height - vacant_height) * 2);
mas_store_prealloc(&mas, ptr);
MT_BUG_ON(mt, mas_allocated(&mas) != 0);
mt_set_non_kernel(1);
@@ -35595,8 +35652,14 @@ static noinline void __init check_prealloc(struct maple_tree *mt)
MT_BUG_ON(mt, mas_preallocate(&mas, ptr, GFP_KERNEL) != 0);
allocated = mas_allocated(&mas);
height = mas_mt_height(&mas);
+ vacant_height = get_vacant_height(&wr_mas, ptr);
MT_BUG_ON(mt, allocated == 0);
- MT_BUG_ON(mt, allocated != 1 + height * 3);
+ /*
+ * vacant height cannot be used to compute the number of nodes needed
+ * as the root contains two entries which means it is on the verge of
+ * insufficiency. The worst case full height of the tree is needed.
+ */
+ MT_BUG_ON(mt, allocated != height * 3 + 1);
mas_store_prealloc(&mas, ptr);
MT_BUG_ON(mt, mas_allocated(&mas) != 0);
mas_set_range(&mas, 0, 200);
--
2.43.0
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH v5 4/6] maple_tree: break on convergence in mas_spanning_rebalance()
2025-04-10 19:14 [PATCH v5 0/6] Track node vacancy to reduce worst case allocation counts Sidhartha Kumar
` (2 preceding siblings ...)
2025-04-10 19:14 ` [PATCH v5 3/6] maple_tree: use vacant nodes to reduce worst case allocations Sidhartha Kumar
@ 2025-04-10 19:14 ` Sidhartha Kumar
2025-04-10 19:14 ` [PATCH v5 5/6] maple_tree: add sufficient height Sidhartha Kumar
2025-04-10 19:14 ` [PATCH v5 6/6] maple_tree: reorder mas->store_type case statements Sidhartha Kumar
5 siblings, 0 replies; 9+ messages in thread
From: Sidhartha Kumar @ 2025-04-10 19:14 UTC (permalink / raw)
To: linux-kernel, maple-tree
Cc: linux-mm, akpm, liam.howlett, willy, Sidhartha Kumar, Wei Yang,
Liam R . Howlett
mas_spanning_rebalance() was seen to perform unnecessary node allocations.
We can reduce allocations by breaking out of the rebalancing loop early
once we realize that we have ascended to a common ancestor. This also
allows the vacant height to be used to calculate the worst case number of
nodes needed for a wr_rebalance operation.
Suggested-by: Liam Howlett <liam.howlett@oracle.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
lib/maple_tree.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 3f794ef072f4..5610b3742a79 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -2893,11 +2893,21 @@ static void mas_spanning_rebalance(struct ma_state *mas,
mast_combine_cp_right(mast);
mast->orig_l->last = mast->orig_l->max;
- if (mast_sufficient(mast))
- continue;
+ if (mast_sufficient(mast)) {
+ if (mast_overflow(mast))
+ continue;
+
+ if (mast->orig_l->node == mast->orig_r->node) {
+ /*
+ * The data in b_node should be stored in one
+ * node and in the tree
+ */
+ slot = mast->l->offset;
+ break;
+ }
- if (mast_overflow(mast))
continue;
+ }
/* May be a new root stored in mast->bn */
if (mas_is_root_limits(mast->orig_l))
--
2.43.0
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH v5 5/6] maple_tree: add sufficient height
2025-04-10 19:14 [PATCH v5 0/6] Track node vacancy to reduce worst case allocation counts Sidhartha Kumar
` (3 preceding siblings ...)
2025-04-10 19:14 ` [PATCH v5 4/6] maple_tree: break on convergence in mas_spanning_rebalance() Sidhartha Kumar
@ 2025-04-10 19:14 ` Sidhartha Kumar
2025-04-10 19:18 ` Liam R. Howlett
2025-04-10 19:14 ` [PATCH v5 6/6] maple_tree: reorder mas->store_type case statements Sidhartha Kumar
5 siblings, 1 reply; 9+ messages in thread
From: Sidhartha Kumar @ 2025-04-10 19:14 UTC (permalink / raw)
To: linux-kernel, maple-tree
Cc: linux-mm, akpm, liam.howlett, willy, Sidhartha Kumar
In order to support rebalancing and spanning stores using less than the
worst case number of nodes, we need to track more than just the vacant
height. Using only the vacant height to reduce the worst case maple node
allocation count can lead to a shortage of nodes in the following
scenarios.
For rebalancing writes, when a leaf node becomes insufficient, it may be
combined with a sibling into a single node. This means that the parent node,
which has entries for these children, will lose one entry. If this parent
node was just meeting the minimum number of entries, losing one entry will
now cause it to become insufficient. This leads to cascading rebalance
operations at different levels and can require more node allocations than
using the vacant height alone would account for.
For spanning writes, a similar situation occurs. At the location at
which a spanning write is detected, the ancestor nodes may similarly need
to be rebalanced into a smaller number of nodes, and the same cascading
situation can occur.
To use less than the full height of the tree for the number of
allocations, we also need to track the height at which a non-leaf node
cannot become insufficient. This means that even if a rebalance occurs in a
child of this node, it currently has enough entries that it can lose one
without any further action. This field is stored in the maple write state
as the sufficient height. In mas_prealloc_calc(), when figuring out how many
nodes to allocate, we check if the vacant node is lower in the tree
than the sufficient node (i.e. has a larger height value). If it is, we
cannot use the vacant height and must instead use the difference between
the tree height and the sufficient height as the basis for the number of
nodes needed.
An off-by-one bug was also discovered in mast_overflow(), where it was
using >= rather than >. This caused extra iterations of the
mas_spanning_rebalance() loop and led to unneeded allocations. A test
is also added to check that the number of allocations is correct.
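As a worked example (numbers chosen for illustration only, using the
wr_rebalance case below): in a tree of height 5 with vacant_height = 4 and
sufficient_height = 2, the vacant node is lower in the tree than the
sufficient node (4 > 2), so a rebalance may cascade past the vacant node.
We therefore preallocate (5 - 2) * 2 + 1 = 7 nodes rather than the
(5 - 4) * 2 + 1 = 3 nodes the vacant height alone would suggest.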
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
include/linux/maple_tree.h | 4 +++-
lib/maple_tree.c | 19 ++++++++++++++++---
tools/testing/radix-tree/maple.c | 28 ++++++++++++++++++++++++++++
3 files changed, 47 insertions(+), 4 deletions(-)
diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index 657adb33e61e..9ef129038224 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -464,6 +464,7 @@ struct ma_wr_state {
void *entry; /* The entry to write */
void *content; /* The existing entry that is being overwritten */
unsigned char vacant_height; /* Height of lowest node with free space */
+ unsigned char sufficient_height;/* Height of lowest node with min sufficiency + 1 nodes */
};
#define mas_lock(mas) spin_lock(&((mas)->tree->ma_lock))
@@ -499,7 +500,8 @@ struct ma_wr_state {
.mas = ma_state, \
.content = NULL, \
.entry = wr_entry, \
- .vacant_height = 0 \
+ .vacant_height = 0, \
+ .sufficient_height = 0 \
}
#define MA_TOPIARY(name, tree) \
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 5610b3742a79..aa139668bcae 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -2741,7 +2741,7 @@ static inline bool mast_sufficient(struct maple_subtree_state *mast)
*/
static inline bool mast_overflow(struct maple_subtree_state *mast)
{
- if (mast->bn->b_end >= mt_slot_count(mast->orig_l->node))
+ if (mast->bn->b_end > mt_slot_count(mast->orig_l->node))
return true;
return false;
@@ -3550,6 +3550,13 @@ static bool mas_wr_walk(struct ma_wr_state *wr_mas)
if (mas->end < mt_slots[wr_mas->type] - 1)
wr_mas->vacant_height = mas->depth + 1;
+ if (ma_is_root(mas_mn(mas))) {
+ /* root needs more than 2 entries to be sufficient + 1 */
+ if (mas->end > 2)
+ wr_mas->sufficient_height = 1;
+ } else if (mas->end > mt_min_slots[wr_mas->type] + 1)
+ wr_mas->sufficient_height = mas->depth + 1;
+
mas_wr_walk_traverse(wr_mas);
}
@@ -4185,13 +4192,19 @@ static inline int mas_prealloc_calc(struct ma_wr_state *wr_mas, void *entry)
ret = 0;
break;
case wr_spanning_store:
- WARN_ON_ONCE(ret != height * 3 + 1);
+ if (wr_mas->sufficient_height < wr_mas->vacant_height)
+ ret = (height - wr_mas->sufficient_height) * 3 + 1;
+ else
+ ret = delta * 3 + 1;
break;
case wr_split_store:
ret = delta * 2 + 1;
break;
case wr_rebalance:
- ret = height * 2 + 1;
+ if (wr_mas->sufficient_height < wr_mas->vacant_height)
+ ret = (height - wr_mas->sufficient_height) * 2 + 1;
+ else
+ ret = delta * 2 + 1;
break;
case wr_node_store:
ret = mt_in_rcu(mas->tree) ? 1 : 0;
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index e37a3ab2e921..2c0b38301253 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -36326,6 +36326,30 @@ static inline void check_spanning_store_height(struct maple_tree *mt)
mas_unlock(&mas);
}
+/*
+ * Test to check the path of a spanning rebalance which results in
+ * a collapse where the rebalancing of the child node leads to
+ * insufficiency in the parent node.
+ */
+static void check_collapsing_rebalance(struct maple_tree *mt)
+{
+ int i = 0;
+ MA_STATE(mas, mt, ULONG_MAX, ULONG_MAX);
+
+ /* create a height 6 tree */
+ while (mt_height(mt) < 6) {
+ mtree_store_range(mt, i, i + 10, xa_mk_value(i), GFP_KERNEL);
+ i += 9;
+ }
+
+ /* delete all entries one at a time, starting from the right */
+ do {
+ mas_erase(&mas);
+ } while (mas_prev(&mas, 0) != NULL);
+
+ mtree_unlock(mt);
+}
+
/* callback function used for check_nomem_writer_race() */
static void writer2(void *maple_tree)
{
@@ -36496,6 +36520,10 @@ void farmer_tests(void)
check_spanning_store_height(&tree);
mtree_destroy(&tree);
+ mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
+ check_collapsing_rebalance(&tree);
+ mtree_destroy(&tree);
+
mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
check_null_expand(&tree);
mtree_destroy(&tree);
--
2.43.0
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH v5 6/6] maple_tree: reorder mas->store_type case statements
2025-04-10 19:14 [PATCH v5 0/6] Track node vacancy to reduce worst case allocation counts Sidhartha Kumar
` (4 preceding siblings ...)
2025-04-10 19:14 ` [PATCH v5 5/6] maple_tree: add sufficient height Sidhartha Kumar
@ 2025-04-10 19:14 ` Sidhartha Kumar
5 siblings, 0 replies; 9+ messages in thread
From: Sidhartha Kumar @ 2025-04-10 19:14 UTC (permalink / raw)
To: linux-kernel, maple-tree
Cc: linux-mm, akpm, liam.howlett, willy, Sidhartha Kumar, Liam R . Howlett
Move the unlikely case that mas->store_type is invalid to be the last
evaluated case and put likelier cases higher up.
Suggested-by: Liam R. Howlett <liam.howlett@oracle.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
lib/maple_tree.c | 51 ++++++++++++++++++++++++------------------------
1 file changed, 25 insertions(+), 26 deletions(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index aa139668bcae..affe979bd14d 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4083,15 +4083,6 @@ static inline void mas_wr_store_entry(struct ma_wr_state *wr_mas)
unsigned char new_end = mas_wr_new_end(wr_mas);
switch (mas->store_type) {
- case wr_invalid:
- MT_BUG_ON(mas->tree, 1);
- return;
- case wr_new_root:
- mas_new_root(mas, wr_mas->entry);
- break;
- case wr_store_root:
- mas_store_root(mas, wr_mas->entry);
- break;
case wr_exact_fit:
rcu_assign_pointer(wr_mas->slots[mas->offset], wr_mas->entry);
if (!!wr_mas->entry ^ !!wr_mas->content)
@@ -4113,6 +4104,14 @@ static inline void mas_wr_store_entry(struct ma_wr_state *wr_mas)
case wr_rebalance:
mas_wr_bnode(wr_mas);
break;
+ case wr_new_root:
+ mas_new_root(mas, wr_mas->entry);
+ break;
+ case wr_store_root:
+ mas_store_root(mas, wr_mas->entry);
+ break;
+ case wr_invalid:
+ MT_BUG_ON(mas->tree, 1);
}
return;
@@ -4177,19 +4176,10 @@ static inline int mas_prealloc_calc(struct ma_wr_state *wr_mas, void *entry)
unsigned char delta = height - wr_mas->vacant_height;
switch (mas->store_type) {
- case wr_invalid:
- WARN_ON_ONCE(1);
- break;
- case wr_new_root:
- ret = 1;
- break;
- case wr_store_root:
- if (likely((mas->last != 0) || (mas->index != 0)))
- ret = 1;
- else if (((unsigned long) (entry) & 3) == 2)
- ret = 1;
- else
- ret = 0;
+ case wr_exact_fit:
+ case wr_append:
+ case wr_slot_store:
+ ret = 0;
break;
case wr_spanning_store:
if (wr_mas->sufficient_height < wr_mas->vacant_height)
@@ -4209,10 +4199,19 @@ static inline int mas_prealloc_calc(struct ma_wr_state *wr_mas, void *entry)
case wr_node_store:
ret = mt_in_rcu(mas->tree) ? 1 : 0;
break;
- case wr_append:
- case wr_exact_fit:
- case wr_slot_store:
- ret = 0;
+ case wr_new_root:
+ ret = 1;
+ break;
+ case wr_store_root:
+ if (likely((mas->last != 0) || (mas->index != 0)))
+ ret = 1;
+ else if (((unsigned long) (entry) & 3) == 2)
+ ret = 1;
+ else
+ ret = 0;
+ break;
+ case wr_invalid:
+ WARN_ON_ONCE(1);
}
return ret;
--
2.43.0
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v5 2/6] maple_tree: use height and depth consistently
2025-04-10 19:14 ` [PATCH v5 2/6] maple_tree: use height and depth consistently Sidhartha Kumar
@ 2025-04-10 19:17 ` Liam R. Howlett
0 siblings, 0 replies; 9+ messages in thread
From: Liam R. Howlett @ 2025-04-10 19:17 UTC (permalink / raw)
To: Sidhartha Kumar; +Cc: linux-kernel, maple-tree, linux-mm, akpm, willy
* Sidhartha Kumar <sidhartha.kumar@oracle.com> [250410 15:14]:
> For the maple tree, the root node is defined to have a depth of 0 with a
> height of 1. Each level down from the root, these values are incremented
> by 1. Various code paths define a root with depth 1, which is inconsistent
> with the definition. Modify the code to be consistent with this
> definition.
>
> In mas_spanning_rebalance(), l_mas.depth was being used to track the
> height based on the number of iterations done in the main loop. This
> information was then used in mas_put_in_tree() to set the height. Rather
> than overload the l_mas.depth field to track height, simply keep track of
> the height in the local variable new_height and pass it directly to
> mas_wmb_replace(), which passes it on to mas_put_in_tree(). This
> allows us to remove the writes to l_mas.depth.
>
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> ---
> lib/maple_tree.c | 84 +++++++++++++++++---------------
> tools/testing/radix-tree/maple.c | 19 ++++++++
> 2 files changed, 63 insertions(+), 40 deletions(-)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index f25ee210d495..195b19505b39 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -211,14 +211,14 @@ static void ma_free_rcu(struct maple_node *node)
> call_rcu(&node->rcu, mt_free_rcu);
> }
>
> -static void mas_set_height(struct ma_state *mas)
> +static void mt_set_height(struct maple_tree *mt, unsigned char height)
> {
> - unsigned int new_flags = mas->tree->ma_flags;
> + unsigned int new_flags = mt->ma_flags;
>
> new_flags &= ~MT_FLAGS_HEIGHT_MASK;
> - MAS_BUG_ON(mas, mas->depth > MAPLE_HEIGHT_MAX);
> - new_flags |= mas->depth << MT_FLAGS_HEIGHT_OFFSET;
> - mas->tree->ma_flags = new_flags;
> + MT_BUG_ON(mt, height > MAPLE_HEIGHT_MAX);
> + new_flags |= height << MT_FLAGS_HEIGHT_OFFSET;
> + mt->ma_flags = new_flags;
> }
>
> static unsigned int mas_mt_height(struct ma_state *mas)
> @@ -1371,7 +1371,7 @@ static inline struct maple_enode *mas_start(struct ma_state *mas)
> root = mas_root(mas);
> /* Tree with nodes */
> if (likely(xa_is_node(root))) {
> - mas->depth = 1;
> + mas->depth = 0;
> mas->status = ma_active;
> mas->node = mte_safe_root(root);
> mas->offset = 0;
> @@ -1712,9 +1712,10 @@ static inline void mas_adopt_children(struct ma_state *mas,
> * node as dead.
> * @mas: the maple state with the new node
> * @old_enode: The old maple encoded node to replace.
> + * @new_height: if we are inserting a root node, update the height of the tree
> */
> static inline void mas_put_in_tree(struct ma_state *mas,
> - struct maple_enode *old_enode)
> + struct maple_enode *old_enode, char new_height)
> __must_hold(mas->tree->ma_lock)
> {
> unsigned char offset;
> @@ -1723,7 +1724,7 @@ static inline void mas_put_in_tree(struct ma_state *mas,
> if (mte_is_root(mas->node)) {
> mas_mn(mas)->parent = ma_parent_ptr(mas_tree_parent(mas));
> rcu_assign_pointer(mas->tree->ma_root, mte_mk_root(mas->node));
> - mas_set_height(mas);
> + mt_set_height(mas->tree, new_height);
> } else {
>
> offset = mte_parent_slot(mas->node);
> @@ -1741,12 +1742,13 @@ static inline void mas_put_in_tree(struct ma_state *mas,
> * the parent encoding to locate the maple node in the tree.
> * @mas: the ma_state with @mas->node pointing to the new node.
> * @old_enode: The old maple encoded node.
> + * @new_height: The new height of the tree as a result of the operation
> */
> static inline void mas_replace_node(struct ma_state *mas,
> - struct maple_enode *old_enode)
> + struct maple_enode *old_enode, unsigned char new_height)
> __must_hold(mas->tree->ma_lock)
> {
> - mas_put_in_tree(mas, old_enode);
> + mas_put_in_tree(mas, old_enode, new_height);
> mas_free(mas, old_enode);
> }
>
> @@ -2536,10 +2538,11 @@ static inline void mas_topiary_node(struct ma_state *mas,
> *
> * @mas: The maple state pointing at the new data
> * @old_enode: The maple encoded node being replaced
> + * @new_height: The new height of the tree as a result of the operation
> *
> */
> static inline void mas_topiary_replace(struct ma_state *mas,
> - struct maple_enode *old_enode)
> + struct maple_enode *old_enode, unsigned char new_height)
> {
> struct ma_state tmp[3], tmp_next[3];
> MA_TOPIARY(subtrees, mas->tree);
> @@ -2547,7 +2550,7 @@ static inline void mas_topiary_replace(struct ma_state *mas,
> int i, n;
>
> /* Place data in tree & then mark node as old */
> - mas_put_in_tree(mas, old_enode);
> + mas_put_in_tree(mas, old_enode, new_height);
>
> /* Update the parent pointers in the tree */
> tmp[0] = *mas;
> @@ -2631,14 +2634,15 @@ static inline void mas_topiary_replace(struct ma_state *mas,
> * mas_wmb_replace() - Write memory barrier and replace
> * @mas: The maple state
> * @old_enode: The old maple encoded node that is being replaced.
> + * @new_height: The new height of the tree as a result of the operation
> *
> * Updates gap as necessary.
> */
> static inline void mas_wmb_replace(struct ma_state *mas,
> - struct maple_enode *old_enode)
> + struct maple_enode *old_enode, unsigned char new_height)
> {
> /* Insert the new data in the tree */
> - mas_topiary_replace(mas, old_enode);
> + mas_topiary_replace(mas, old_enode, new_height);
>
> if (mte_is_leaf(mas->node))
> return;
> @@ -2824,6 +2828,7 @@ static void mas_spanning_rebalance(struct ma_state *mas,
> {
> unsigned char split, mid_split;
> unsigned char slot = 0;
> + unsigned char new_height = 0; /* used if node is a new root */
> struct maple_enode *left = NULL, *middle = NULL, *right = NULL;
> struct maple_enode *old_enode;
>
> @@ -2845,8 +2850,6 @@ static void mas_spanning_rebalance(struct ma_state *mas,
> unlikely(mast->bn->b_end <= mt_min_slots[mast->bn->type]))
> mast_spanning_rebalance(mast);
>
> - l_mas.depth = 0;
> -
> /*
> * Each level of the tree is examined and balanced, pushing data to the left or
> * right, or rebalancing against left or right nodes is employed to avoid
> @@ -2866,6 +2869,7 @@ static void mas_spanning_rebalance(struct ma_state *mas,
> mast_set_split_parents(mast, left, middle, right, split,
> mid_split);
> mast_cp_to_nodes(mast, left, middle, right, split, mid_split);
> + new_height++;
>
> /*
> * Copy data from next level in the tree to mast->bn from next
> @@ -2873,7 +2877,6 @@ static void mas_spanning_rebalance(struct ma_state *mas,
> */
> memset(mast->bn, 0, sizeof(struct maple_big_node));
> mast->bn->type = mte_node_type(left);
> - l_mas.depth++;
>
> /* Root already stored in l->node. */
> if (mas_is_root_limits(mast->l))
> @@ -2909,8 +2912,9 @@ static void mas_spanning_rebalance(struct ma_state *mas,
>
> l_mas.node = mt_mk_node(ma_mnode_ptr(mas_pop_node(mas)),
> mte_node_type(mast->orig_l->node));
> - l_mas.depth++;
> +
> mab_mas_cp(mast->bn, 0, mt_slots[mast->bn->type] - 1, &l_mas, true);
> + new_height++;
> mas_set_parent(mas, left, l_mas.node, slot);
> if (middle)
> mas_set_parent(mas, middle, l_mas.node, ++slot);
> @@ -2933,7 +2937,7 @@ static void mas_spanning_rebalance(struct ma_state *mas,
> mas->min = l_mas.min;
> mas->max = l_mas.max;
> mas->offset = l_mas.offset;
> - mas_wmb_replace(mas, old_enode);
> + mas_wmb_replace(mas, old_enode, new_height);
> mtree_range_walk(mas);
> return;
> }
> @@ -3009,6 +3013,7 @@ static inline void mas_destroy_rebalance(struct ma_state *mas, unsigned char end
> void __rcu **l_slots, **slots;
> unsigned long *l_pivs, *pivs, gap;
> bool in_rcu = mt_in_rcu(mas->tree);
> + unsigned char new_height = mas_mt_height(mas);
>
> MA_STATE(l_mas, mas->tree, mas->index, mas->last);
>
> @@ -3103,7 +3108,7 @@ static inline void mas_destroy_rebalance(struct ma_state *mas, unsigned char end
> mas_ascend(mas);
>
> if (in_rcu) {
> - mas_replace_node(mas, old_eparent);
> + mas_replace_node(mas, old_eparent, new_height);
> mas_adopt_children(mas, mas->node);
> }
>
> @@ -3114,10 +3119,9 @@ static inline void mas_destroy_rebalance(struct ma_state *mas, unsigned char end
> * mas_split_final_node() - Split the final node in a subtree operation.
> * @mast: the maple subtree state
> * @mas: The maple state
> - * @height: The height of the tree in case it's a new root.
> */
> static inline void mas_split_final_node(struct maple_subtree_state *mast,
> - struct ma_state *mas, int height)
> + struct ma_state *mas)
> {
> struct maple_enode *ancestor;
>
> @@ -3126,7 +3130,6 @@ static inline void mas_split_final_node(struct maple_subtree_state *mast,
> mast->bn->type = maple_arange_64;
> else
> mast->bn->type = maple_range_64;
> - mas->depth = height;
> }
> /*
> * Only a single node is used here, could be root.
> @@ -3214,7 +3217,6 @@ static inline void mast_split_data(struct maple_subtree_state *mast,
> * mas_push_data() - Instead of splitting a node, it is beneficial to push the
> * data to the right or left node if there is room.
> * @mas: The maple state
> - * @height: The current height of the maple state
> * @mast: The maple subtree state
> * @left: Push left or not.
> *
> @@ -3222,8 +3224,8 @@ static inline void mast_split_data(struct maple_subtree_state *mast,
> *
> * Return: True if pushed, false otherwise.
> */
> -static inline bool mas_push_data(struct ma_state *mas, int height,
> - struct maple_subtree_state *mast, bool left)
> +static inline bool mas_push_data(struct ma_state *mas,
> + struct maple_subtree_state *mast, bool left)
> {
> unsigned char slot_total = mast->bn->b_end;
> unsigned char end, space, split;
> @@ -3280,7 +3282,7 @@ static inline bool mas_push_data(struct ma_state *mas, int height,
>
> mast_split_data(mast, mas, split);
> mast_fill_bnode(mast, mas, 2);
> - mas_split_final_node(mast, mas, height + 1);
> + mas_split_final_node(mast, mas);
> return true;
> }
>
> @@ -3293,6 +3295,7 @@ static void mas_split(struct ma_state *mas, struct maple_big_node *b_node)
> {
> struct maple_subtree_state mast;
> int height = 0;
> + unsigned int orig_height = mas_mt_height(mas);
> unsigned char mid_split, split = 0;
> struct maple_enode *old;
>
> @@ -3319,7 +3322,6 @@ static void mas_split(struct ma_state *mas, struct maple_big_node *b_node)
> MA_STATE(prev_r_mas, mas->tree, mas->index, mas->last);
>
> trace_ma_op(__func__, mas);
> - mas->depth = mas_mt_height(mas);
>
> mast.l = &l_mas;
> mast.r = &r_mas;
> @@ -3327,9 +3329,9 @@ static void mas_split(struct ma_state *mas, struct maple_big_node *b_node)
> mast.orig_r = &prev_r_mas;
> mast.bn = b_node;
>
> - while (height++ <= mas->depth) {
> + while (height++ <= orig_height) {
> if (mt_slots[b_node->type] > b_node->b_end) {
> - mas_split_final_node(&mast, mas, height);
> + mas_split_final_node(&mast, mas);
> break;
> }
>
> @@ -3344,11 +3346,15 @@ static void mas_split(struct ma_state *mas, struct maple_big_node *b_node)
> * is a significant savings.
> */
> /* Try to push left. */
> - if (mas_push_data(mas, height, &mast, true))
> + if (mas_push_data(mas, &mast, true)) {
> + height++;
> break;
> + }
> /* Try to push right. */
> - if (mas_push_data(mas, height, &mast, false))
> + if (mas_push_data(mas, &mast, false)) {
> + height++;
> break;
> + }
>
> split = mab_calc_split(mas, b_node, &mid_split);
> mast_split_data(&mast, mas, split);
> @@ -3365,7 +3371,7 @@ static void mas_split(struct ma_state *mas, struct maple_big_node *b_node)
> /* Set the original node as dead */
> old = mas->node;
> mas->node = l_mas.node;
> - mas_wmb_replace(mas, old);
> + mas_wmb_replace(mas, old, height);
> mtree_range_walk(mas);
> return;
> }
> @@ -3424,8 +3430,7 @@ static inline void mas_root_expand(struct ma_state *mas, void *entry)
> if (mas->last != ULONG_MAX)
> pivots[++slot] = ULONG_MAX;
>
> - mas->depth = 1;
> - mas_set_height(mas);
> + mt_set_height(mas->tree, 1);
> ma_set_meta(node, maple_leaf_64, 0, slot);
> /* swap the new root into the tree */
> rcu_assign_pointer(mas->tree->ma_root, mte_mk_root(mas->node));
> @@ -3669,8 +3674,7 @@ static inline void mas_new_root(struct ma_state *mas, void *entry)
> WARN_ON_ONCE(mas->index || mas->last != ULONG_MAX);
>
> if (!entry) {
> - mas->depth = 0;
> - mas_set_height(mas);
> + mt_set_height(mas->tree, 0);
> rcu_assign_pointer(mas->tree->ma_root, entry);
> mas->status = ma_start;
> goto done;
> @@ -3684,8 +3688,7 @@ static inline void mas_new_root(struct ma_state *mas, void *entry)
> mas->status = ma_active;
> rcu_assign_pointer(slots[0], entry);
> pivots[0] = mas->last;
> - mas->depth = 1;
> - mas_set_height(mas);
> + mt_set_height(mas->tree, 1);
> rcu_assign_pointer(mas->tree->ma_root, mte_mk_root(mas->node));
>
> done:
> @@ -3804,6 +3807,7 @@ static inline void mas_wr_node_store(struct ma_wr_state *wr_mas,
> struct maple_node reuse, *newnode;
> unsigned char copy_size, node_pivots = mt_pivots[wr_mas->type];
> bool in_rcu = mt_in_rcu(mas->tree);
> + unsigned char height = mas_mt_height(mas);
>
> if (mas->last == wr_mas->end_piv)
> offset_end++; /* don't copy this offset */
> @@ -3860,7 +3864,7 @@ static inline void mas_wr_node_store(struct ma_wr_state *wr_mas,
> struct maple_enode *old_enode = mas->node;
>
> mas->node = mt_mk_node(newnode, wr_mas->type);
> - mas_replace_node(mas, old_enode);
> + mas_replace_node(mas, old_enode, height);
> } else {
> memcpy(wr_mas->node, newnode, sizeof(struct maple_node));
> }
> diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
> index bc30050227fd..e0f8fabe8821 100644
> --- a/tools/testing/radix-tree/maple.c
> +++ b/tools/testing/radix-tree/maple.c
> @@ -36248,6 +36248,21 @@ static noinline void __init check_mtree_dup(struct maple_tree *mt)
>
> extern void test_kmem_cache_bulk(void);
>
> +static inline void check_spanning_store_height(struct maple_tree *mt)
> +{
> + int index = 0;
> + MA_STATE(mas, mt, 0, 0);
> + mas_lock(&mas);
> + while (mt_height(mt) != 3) {
> + mas_store_gfp(&mas, xa_mk_value(index), GFP_KERNEL);
> + mas_set(&mas, ++index);
> + }
> + mas_set_range(&mas, 90, 140);
> + mas_store_gfp(&mas, xa_mk_value(index), GFP_KERNEL);
> + MT_BUG_ON(mt, mas_mt_height(&mas) != 2);
> + mas_unlock(&mas);
> +}
> +
> /* callback function used for check_nomem_writer_race() */
> static void writer2(void *maple_tree)
> {
> @@ -36414,6 +36429,10 @@ void farmer_tests(void)
> check_spanning_write(&tree);
> mtree_destroy(&tree);
>
> + mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
> + check_spanning_store_height(&tree);
> + mtree_destroy(&tree);
> +
> mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
> check_null_expand(&tree);
> mtree_destroy(&tree);
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v5 5/6] maple_tree: add sufficient height
2025-04-10 19:14 ` [PATCH v5 5/6] maple_tree: add sufficient height Sidhartha Kumar
@ 2025-04-10 19:18 ` Liam R. Howlett
0 siblings, 0 replies; 9+ messages in thread
From: Liam R. Howlett @ 2025-04-10 19:18 UTC (permalink / raw)
To: Sidhartha Kumar; +Cc: linux-kernel, maple-tree, linux-mm, akpm, willy
* Sidhartha Kumar <sidhartha.kumar@oracle.com> [250410 15:14]:
> In order to support rebalancing and spanning stores using less than the
> worst case number of nodes, we need to track more than just the vacant
> height. Using only the vacant height to reduce the worst case maple
> node allocation count can leave us short of nodes in the following
> scenarios.
>
> For rebalancing writes, when a leaf node becomes insufficient it may be
> combined with a sibling into a single node. This means that the parent
> node, which has entries for these children, will lose one entry. If
> this parent node was just meeting the minimum number of entries, losing
> one entry now makes it insufficient as well. This cascades the
> rebalancing operation up through the levels of the tree and can require
> more node allocations than the vacant height alone would account for.
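>
> As an illustration (the shape and entry counts are invented for this
> example, they are not taken from a real workload): in a height 4 tree
> where every internal node on the walked path holds exactly the minimum
> number of entries,
>
>	root -> A(min) -> B(min) -> leaf(insufficient)
>
> merging the leaf with a sibling drops B to one entry below the minimum,
> so B must also be rebalanced, which in turn drops A below the minimum,
> and so on up the tree. The write can therefore consume replacement
> nodes at every level, even though a node with a free slot may exist
> near the bottom of the tree.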
>
> For spanning writes, a similar situation occurs. At the location where
> a spanning write is detected, the ancestor nodes may similarly need to
> be rebalanced into a smaller number of nodes, and the same cascading
> situation can occur.
>
> To use less than the full height of the tree for the number of
> allocations, we also need to track the height at which a non-leaf node
> cannot become insufficient. This means that even if a rebalance occurs
> in one of its children, the node currently has enough entries that it
> can lose one without any further action. This is stored in the maple
> write state as the sufficient height. In mas_prealloc_calc(), when
> figuring out how many nodes to allocate, we check whether the vacant
> node is lower in the tree than a sufficient node (i.e. has a larger
> height value). If it is, we cannot use the vacant height and must
> instead base the number of nodes needed on the difference between the
> tree height and the sufficient height.
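>
> As a concrete example (the heights are made up for illustration): for a
> spanning store in a height 4 tree where the lowest node with a free
> slot is at vacant_height = 3 but the lowest node that can safely lose
> an entry is at sufficient_height = 1, the calculation below
> preallocates
>
>	(height - sufficient_height) * 3 + 1 = (4 - 1) * 3 + 1 = 10
>
> nodes rather than the (height - vacant_height) * 3 + 1 = 4 nodes the
> vacant height alone would suggest, while still improving on the old
> worst case of height * 3 + 1 = 13.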
>
> An off-by-one bug was also discovered in mast_overflow(), which was
> using >= rather than >. This caused extra iterations of the
> mas_spanning_rebalance() loop and led to unneeded allocations. A test
> is also added to check that the number of allocations is correct.
>
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> ---
> include/linux/maple_tree.h | 4 +++-
> lib/maple_tree.c | 19 ++++++++++++++++---
> tools/testing/radix-tree/maple.c | 28 ++++++++++++++++++++++++++++
> 3 files changed, 47 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
> index 657adb33e61e..9ef129038224 100644
> --- a/include/linux/maple_tree.h
> +++ b/include/linux/maple_tree.h
> @@ -464,6 +464,7 @@ struct ma_wr_state {
> void *entry; /* The entry to write */
> void *content; /* The existing entry that is being overwritten */
> unsigned char vacant_height; /* Height of lowest node with free space */
> + unsigned char sufficient_height;/* Height of lowest node with min sufficiency + 1 nodes */
> };
>
> #define mas_lock(mas) spin_lock(&((mas)->tree->ma_lock))
> @@ -499,7 +500,8 @@ struct ma_wr_state {
> .mas = ma_state, \
> .content = NULL, \
> .entry = wr_entry, \
> - .vacant_height = 0 \
> + .vacant_height = 0, \
> + .sufficient_height = 0 \
> }
>
> #define MA_TOPIARY(name, tree) \
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 5610b3742a79..aa139668bcae 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -2741,7 +2741,7 @@ static inline bool mast_sufficient(struct maple_subtree_state *mast)
> */
> static inline bool mast_overflow(struct maple_subtree_state *mast)
> {
> - if (mast->bn->b_end >= mt_slot_count(mast->orig_l->node))
> + if (mast->bn->b_end > mt_slot_count(mast->orig_l->node))
> return true;
>
> return false;
> @@ -3550,6 +3550,13 @@ static bool mas_wr_walk(struct ma_wr_state *wr_mas)
> if (mas->end < mt_slots[wr_mas->type] - 1)
> wr_mas->vacant_height = mas->depth + 1;
>
> + if (ma_is_root(mas_mn(mas))) {
> + /* root needs more than 2 entries to be sufficient + 1 */
> + if (mas->end > 2)
> + wr_mas->sufficient_height = 1;
> + } else if (mas->end > mt_min_slots[wr_mas->type] + 1)
> + wr_mas->sufficient_height = mas->depth + 1;
> +
> mas_wr_walk_traverse(wr_mas);
> }
>
> @@ -4185,13 +4192,19 @@ static inline int mas_prealloc_calc(struct ma_wr_state *wr_mas, void *entry)
> ret = 0;
> break;
> case wr_spanning_store:
> - WARN_ON_ONCE(ret != height * 3 + 1);
> + if (wr_mas->sufficient_height < wr_mas->vacant_height)
> + ret = (height - wr_mas->sufficient_height) * 3 + 1;
> + else
> + ret = delta * 3 + 1;
> break;
> case wr_split_store:
> ret = delta * 2 + 1;
> break;
> case wr_rebalance:
> - ret = height * 2 + 1;
> + if (wr_mas->sufficient_height < wr_mas->vacant_height)
> + ret = (height - wr_mas->sufficient_height) * 2 + 1;
> + else
> + ret = delta * 2 + 1;
> break;
> case wr_node_store:
> ret = mt_in_rcu(mas->tree) ? 1 : 0;
> diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
> index e37a3ab2e921..2c0b38301253 100644
> --- a/tools/testing/radix-tree/maple.c
> +++ b/tools/testing/radix-tree/maple.c
> @@ -36326,6 +36326,30 @@ static inline void check_spanning_store_height(struct maple_tree *mt)
> mas_unlock(&mas);
> }
>
> +/*
> + * Test to check the path of a spanning rebalance which results in
> + * a collapse where the rebalancing of the child node leads to
> + * insufficiency in the parent node.
> + */
> +static void check_collapsing_rebalance(struct maple_tree *mt)
> +{
> + int i = 0;
> + MA_STATE(mas, mt, ULONG_MAX, ULONG_MAX);
> +
> + /* create a height 6 tree */
> + while (mt_height(mt) < 6) {
> + mtree_store_range(mt, i, i + 10, xa_mk_value(i), GFP_KERNEL);
> + i += 9;
> + }
> +
> +	/* delete all entries one at a time, starting from the right */
> +	mtree_lock(mt);
> +	do {
> + mas_erase(&mas);
> + } while (mas_prev(&mas, 0) != NULL);
> +
> + mtree_unlock(mt);
> +}
> +
> /* callback function used for check_nomem_writer_race() */
> static void writer2(void *maple_tree)
> {
> @@ -36496,6 +36520,10 @@ void farmer_tests(void)
> check_spanning_store_height(&tree);
> mtree_destroy(&tree);
>
> + mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
> + check_collapsing_rebalance(&tree);
> + mtree_destroy(&tree);
> +
> mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
> check_null_expand(&tree);
> mtree_destroy(&tree);
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 9+ messages in thread
end of thread, other threads:[~2025-04-10 19:18 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-04-10 19:14 [PATCH v5 0/6] Track node vacancy to reduce worst case allocation counts Sidhartha Kumar
2025-04-10 19:14 ` [PATCH v5 1/6] maple_tree: convert mas_prealloc_calc() to take in a maple write state Sidhartha Kumar
2025-04-10 19:14 ` [PATCH v5 2/6] maple_tree: use height and depth consistently Sidhartha Kumar
2025-04-10 19:17 ` Liam R. Howlett
2025-04-10 19:14 ` [PATCH v5 3/6] maple_tree: use vacant nodes to reduce worst case allocations Sidhartha Kumar
2025-04-10 19:14 ` [PATCH v5 4/6] maple_tree: break on convergence in mas_spanning_rebalance() Sidhartha Kumar
2025-04-10 19:14 ` [PATCH v5 5/6] maple_tree: add sufficient height Sidhartha Kumar
2025-04-10 19:18 ` Liam R. Howlett
2025-04-10 19:14 ` [PATCH v5 6/6] maple_tree: reorder mas->store_type case statements Sidhartha Kumar