linux-mm.kvack.org archive mirror
* [PATCH v4 00/17] Introduce a store type enum for the Maple tree
@ 2024-08-14 16:19 Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 01/17] maple_tree: introduce store_type enum Sidhartha Kumar
                   ` (16 more replies)
  0 siblings, 17 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar

v3[1] -> v4:
    - move up mas->node = NULL in mas_start() so mas->node is cleared in all cases.
    - don't pass gfp into mas_wr_preallocate(). This fixes the libfs error reported in v3[2].
    - change the node store condition so that more writes are eligible to be node stores.
    - more cleanup of mas_commit_b_node() as most of the function becomes dead code.
    - remove wr_bnode as a store type as it is no longer needed.
    - add a patch to change many write helper functions to be void.

================================ OVERVIEW ================================

This series implements two work items[3]: "aligning mas_store_gfp() with
mas_preallocate()" and "enum for store type".

mas_store_gfp() is modified to preallocate nodes. This simplifies many of
the write helper functions by allowing them to use mas_store_gfp() rather
than open coding node allocation and error handling.
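
As a rough illustration (not code from this series; the tree, range, and
value below are made up), a caller can now let mas_store_gfp() handle both
the node allocation and the error handling:

void example_store(struct maple_tree *mt)
{
	MA_STATE(mas, mt, 10, 20);	/* write over the range [10, 20] */
	int err;

	mtree_lock(mt);
	/* node allocation and error handling happen inside mas_store_gfp() */
	err = mas_store_gfp(&mas, xa_mk_value(42), GFP_KERNEL);
	mtree_unlock(mt);

	if (err)
		pr_err("maple tree store failed: %d\n", err);
}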

The enum defines the following store types:

enum store_type {
    wr_invalid,
    wr_new_root,
    wr_store_root,
    wr_exact_fit,
    wr_spanning_store,
    wr_split_store,
    wr_rebalance,
    wr_append,
    wr_node_store,
    wr_slot_store,
};

In the current maple tree code, a walk down the tree is done in
mas_preallocate() to determine the number of nodes needed for this write.
After node allocation, mas_wr_store_entry() will perform another walk to
determine which write helper function to use to complete the write.

Rather than performing the second walk, we can store the type of write
in the maple write state during node allocation and read this field to
complete the write.
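
Condensing the helpers added by this series (mas_wr_prealloc_setup(),
mas_wr_store_type(), mas_prealloc_calc() and mas_wr_store_entry()), the new
flow looks roughly like the sketch below. It is illustrative only, not a
function from the series, and omits error handling and the
MA_STATE_PREALLOC fast path:

static void sketch_single_walk_store(struct ma_wr_state *wr_mas, gfp_t gfp)
{
	struct ma_state *mas = wr_mas->mas;
	int request;

	/* One walk down the tree: classify the write and record it. */
	mas_wr_prealloc_setup(wr_mas);
	mas_wr_store_type(wr_mas);

	/* Allocate only what that store type needs. */
	request = mas_prealloc_calc(mas, wr_mas->entry);
	if (request)
		mas_node_count_gfp(mas, request, gfp);

	/* Commit by dispatching on mas->store_type; no second walk. */
	if (!mas_is_err(mas))
		mas_wr_store_entry(wr_mas);
}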

Patches 1-16 implement this store type feature.
Patch 17 is a cleanup patch to change functions that have unused return
types to be void.

================================ RESULTS =================================

Phoronix t-test-1 (Seconds < Lower Is Better)
    v6.10-rc6
        Threads: 1
            33.15

        Threads: 2
            10.81

    v6.10-rc6 + this series
        Threads: 1
            32.69

        Threads: 2
            10.45

Stress-ng mmap (duration in seconds)
                    6.10_base  store_type_v4
Duration User        2744.65     2769.40
Duration System     10862.69    10817.59
Duration Elapsed     1477.58     1478.35



================================ TESTING =================================

Testing was done with the maple tree test suite. A new test case is also
added to validate the order in which we test for and assign the store type.

[1]: https://lore.kernel.org/linux-mm/80926b22-a8d2-9992-eb5e-27e2c99cf460@google.com/T/#m81044feb66765265f8ca7f21e4b4b3725b18780a
[2]: https://lore.kernel.org/linux-mm/80926b22-a8d2-9992-eb5e-27e2c99cf460@google.com/T/#mb36c6526486638e82518c0f37a428fb279c84d8a
[3]: https://lists.infradead.org/pipermail/maple-tree/2023-December/003098.html

Sidhartha Kumar (17):
  maple_tree: introduce store_type enum
  maple_tree: introduce mas_wr_prealloc_setup()
  maple_tree: move up mas_wr_store_setup() and mas_wr_prealloc_setup()
  maple_tree: introduce mas_wr_store_type()
  maple_tree: remove mas_destroy() from mas_nomem()
  maple_tree: preallocate nodes in mas_erase()
  maple_tree: use mas_store_gfp() in mtree_store_range()
  maple_tree: print store type in mas_dump()
  maple_tree: use store type in mas_wr_store_entry()
  maple_tree: convert mas_insert() to preallocate nodes
  maple_tree: simplify mas_commit_b_node()
  maple_tree: remove mas_wr_modify()
  maple_tree: have mas_store() allocate nodes if needed
  maple_tree: remove node allocations from various write helper
    functions
  maple_tree: remove repeated sanity checks from write helper functions
  maple_tree: remove unneeded mas_wr_walk() in mas_store_prealloc()
  maple_tree: make write helper functions void

 include/linux/maple_tree.h       |  14 +
 lib/maple_tree.c                 | 653 ++++++++++++++++---------------
 tools/testing/radix-tree/maple.c |  46 ++-
 3 files changed, 400 insertions(+), 313 deletions(-)

-- 
2.46.0




* [PATCH v4 01/17] maple_tree: introduce store_type enum
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 02/17] maple_tree: introduce mas_wr_prealloc_setup() Sidhartha Kumar
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar

Add a store_type enum that is stored in ma_state. This will be used to
keep track of partial walks of the tree so that subsequent walks can
pick up where a previous walk left off.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 include/linux/maple_tree.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index a53ad4dabd7e..8e1504a81cd2 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -148,6 +148,18 @@ enum maple_type {
 	maple_arange_64,
 };
 
+enum store_type {
+	wr_invalid,
+	wr_new_root,
+	wr_store_root,
+	wr_exact_fit,
+	wr_spanning_store,
+	wr_split_store,
+	wr_rebalance,
+	wr_append,
+	wr_node_store,
+	wr_slot_store,
+};
 
 /**
  * DOC: Maple tree flags
@@ -436,6 +448,7 @@ struct ma_state {
 	unsigned char offset;
 	unsigned char mas_flags;
 	unsigned char end;		/* The end of the node */
+	enum store_type store_type;	/* The type of store needed for this operation */
 };
 
 struct ma_wr_state {
@@ -477,6 +490,7 @@ struct ma_wr_state {
 		.max = ULONG_MAX,					\
 		.alloc = NULL,						\
 		.mas_flags = 0,						\
+		.store_type = wr_invalid,				\
 	}
 
 #define MA_WR_STATE(name, ma_state, wr_entry)				\
-- 
2.46.0




* [PATCH v4 02/17] maple_tree: introduce mas_wr_prealloc_setup()
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 01/17] maple_tree: introduce store_type enum Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 03/17] maple_tree: move up mas_wr_store_setup() and mas_wr_prealloc_setup() Sidhartha Kumar
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar

Introduce a helper function, mas_wr_prealloc_setup(), that will set up a
maple write state in order to start a walk of a maple tree.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 lib/maple_tree.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 884e2d130876..407c0be6e42f 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5399,6 +5399,13 @@ static void mas_wr_store_setup(struct ma_wr_state *wr_mas)
 	mas_reset(wr_mas->mas);
 }
 
+static inline void mas_wr_prealloc_setup(struct ma_wr_state *wr_mas)
+{
+	struct ma_state *mas = wr_mas->mas;
+
+	mas_wr_store_setup(wr_mas);
+	wr_mas->content = mas_start(mas);
+}
 /* Interface */
 
 /**
@@ -5509,8 +5516,7 @@ int mas_preallocate(struct ma_state *mas, void *entry, gfp_t gfp)
 	if (unlikely(!mas->index && mas->last == ULONG_MAX))
 		goto ask_now;
 
-	mas_wr_store_setup(&wr_mas);
-	wr_mas.content = mas_start(mas);
+	mas_wr_prealloc_setup(&wr_mas);
 	/* Root expand */
 	if (unlikely(mas_is_none(mas) || mas_is_ptr(mas)))
 		goto ask_now;
-- 
2.46.0




* [PATCH v4 03/17] maple_tree: move up mas_wr_store_setup() and mas_wr_prealloc_setup()
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 01/17] maple_tree: introduce store_type enum Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 02/17] maple_tree: introduce mas_wr_prealloc_setup() Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 04/17] maple_tree: introduce mas_wr_store_type() Sidhartha Kumar
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar,
	Liam R. Howlett

Subsequent patches require these definitions to appear earlier in the file;
no functional changes intended.

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 lib/maple_tree.c | 96 ++++++++++++++++++++++++------------------------
 1 file changed, 48 insertions(+), 48 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 407c0be6e42f..de4a91ced8ca 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4227,6 +4227,54 @@ static inline void mas_wr_store_entry(struct ma_wr_state *wr_mas)
 		mas_wr_modify(wr_mas);
 }
 
+static void mas_wr_store_setup(struct ma_wr_state *wr_mas)
+{
+	if (!mas_is_active(wr_mas->mas)) {
+		if (mas_is_start(wr_mas->mas))
+			return;
+
+		if (unlikely(mas_is_paused(wr_mas->mas)))
+			goto reset;
+
+		if (unlikely(mas_is_none(wr_mas->mas)))
+			goto reset;
+
+		if (unlikely(mas_is_overflow(wr_mas->mas)))
+			goto reset;
+
+		if (unlikely(mas_is_underflow(wr_mas->mas)))
+			goto reset;
+	}
+
+	/*
+	 * A less strict version of mas_is_span_wr() where we allow spanning
+	 * writes within this node.  This is to stop partial walks in
+	 * mas_prealloc() from being reset.
+	 */
+	if (wr_mas->mas->last > wr_mas->mas->max)
+		goto reset;
+
+	if (wr_mas->entry)
+		return;
+
+	if (mte_is_leaf(wr_mas->mas->node) &&
+	    wr_mas->mas->last == wr_mas->mas->max)
+		goto reset;
+
+	return;
+
+reset:
+	mas_reset(wr_mas->mas);
+}
+
+static inline void mas_wr_prealloc_setup(struct ma_wr_state *wr_mas)
+{
+	struct ma_state *mas = wr_mas->mas;
+
+	mas_wr_store_setup(wr_mas);
+	wr_mas->content = mas_start(mas);
+}
+
 /**
  * mas_insert() - Internal call to insert a value
  * @mas: The maple state
@@ -5358,54 +5406,6 @@ static inline void mte_destroy_walk(struct maple_enode *enode,
 		mt_destroy_walk(enode, mt, true);
 	}
 }
-
-static void mas_wr_store_setup(struct ma_wr_state *wr_mas)
-{
-	if (!mas_is_active(wr_mas->mas)) {
-		if (mas_is_start(wr_mas->mas))
-			return;
-
-		if (unlikely(mas_is_paused(wr_mas->mas)))
-			goto reset;
-
-		if (unlikely(mas_is_none(wr_mas->mas)))
-			goto reset;
-
-		if (unlikely(mas_is_overflow(wr_mas->mas)))
-			goto reset;
-
-		if (unlikely(mas_is_underflow(wr_mas->mas)))
-			goto reset;
-	}
-
-	/*
-	 * A less strict version of mas_is_span_wr() where we allow spanning
-	 * writes within this node.  This is to stop partial walks in
-	 * mas_prealloc() from being reset.
-	 */
-	if (wr_mas->mas->last > wr_mas->mas->max)
-		goto reset;
-
-	if (wr_mas->entry)
-		return;
-
-	if (mte_is_leaf(wr_mas->mas->node) &&
-	    wr_mas->mas->last == wr_mas->mas->max)
-		goto reset;
-
-	return;
-
-reset:
-	mas_reset(wr_mas->mas);
-}
-
-static inline void mas_wr_prealloc_setup(struct ma_wr_state *wr_mas)
-{
-	struct ma_state *mas = wr_mas->mas;
-
-	mas_wr_store_setup(wr_mas);
-	wr_mas->content = mas_start(mas);
-}
 /* Interface */
 
 /**
-- 
2.46.0




* [PATCH v4 04/17] maple_tree: introduce mas_wr_store_type()
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
                   ` (2 preceding siblings ...)
  2024-08-14 16:19 ` [PATCH v4 03/17] maple_tree: move up mas_wr_store_setup() and mas_wr_prealloc_setup() Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  2024-09-25  2:04   ` Wei Yang
  2024-08-14 16:19 ` [PATCH v4 05/17] maple_tree: remove mas_destroy() from mas_nomem() Sidhartha Kumar
                   ` (12 subsequent siblings)
  16 siblings, 1 reply; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar

Introduce mas_wr_store_type() which will set the correct store type
based on a walk of the tree. In mas_wr_node_store(), the <= mt_min_slots
condition is changed to < because if new_end is equal to mt_min_slots
there is not enough room.

mas_prealloc_calc() is also introduced to abstract the calculation used
to determine the number of nodes needed for a store operation.

In this change, a call to mas_reset() is removed from the error case of
mas_preallocate(). It is only needed in the MA_STATE_REBALANCE case of
mas_destroy(), so the mas_reset() call is moved directly into
mas_destroy().

Also, add a test case to validate that the order in which we check for
and assign the store type is correct. This test models a vma expanding
and then shrinking, which is part of the boot process.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 lib/maple_tree.c                 | 217 ++++++++++++++++++++++---------
 tools/testing/radix-tree/maple.c |  36 +++++
 2 files changed, 193 insertions(+), 60 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index de4a91ced8ca..d0b9b3795b96 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -1372,9 +1372,9 @@ static inline struct maple_enode *mas_start(struct ma_state *mas)
 			return NULL;
 		}
 
+		mas->node = NULL;
 		/* empty tree */
 		if (unlikely(!root)) {
-			mas->node = NULL;
 			mas->status = ma_none;
 			mas->offset = MAPLE_NODE_SLOTS;
 			return NULL;
@@ -3890,7 +3890,7 @@ static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas,
 	bool in_rcu = mt_in_rcu(mas->tree);
 
 	/* Check if there is enough data. The room is enough. */
-	if (!mte_is_root(mas->node) && (new_end <= mt_min_slots[wr_mas->type]) &&
+	if (!mte_is_root(mas->node) && (new_end < mt_min_slots[wr_mas->type]) &&
 	    !(mas->mas_flags & MA_STATE_BULK))
 		return false;
 
@@ -4275,6 +4275,146 @@ static inline void mas_wr_prealloc_setup(struct ma_wr_state *wr_mas)
 	wr_mas->content = mas_start(mas);
 }
 
+/**
+ * mas_prealloc_calc() - Calculate number of nodes needed for a
+ * given store operation
+ * @mas: The maple state
+ * @entry: The entry to store into the tree
+ *
+ * Return: Number of nodes required for preallocation.
+ */
+static inline int mas_prealloc_calc(struct ma_state *mas, void *entry)
+{
+	int ret = mas_mt_height(mas) * 3 + 1;
+
+	switch (mas->store_type) {
+	case wr_invalid:
+		WARN_ON_ONCE(1);
+		break;
+	case wr_new_root:
+		ret = 1;
+		break;
+	case wr_store_root:
+		if (likely((mas->last != 0) || (mas->index != 0)))
+			ret = 1;
+		else if (((unsigned long) (entry) & 3) == 2)
+			ret = 1;
+		else
+			ret = 0;
+		break;
+	case wr_spanning_store:
+		ret =  mas_mt_height(mas) * 3 + 1;
+		break;
+	case wr_split_store:
+		ret =  mas_mt_height(mas) * 2 + 1;
+		break;
+	case wr_rebalance:
+		ret =  mas_mt_height(mas) * 2 - 1;
+		break;
+	case wr_node_store:
+		ret = mt_in_rcu(mas->tree) ? 1 : 0;
+		break;
+	case wr_append:
+	case wr_exact_fit:
+	case wr_slot_store:
+		ret = 0;
+	}
+
+	return ret;
+}
+
+/*
+ * mas_wr_store_type() - Set the store type for a given
+ * store operation.
+ * @wr_mas: The maple write state
+ */
+static inline void mas_wr_store_type(struct ma_wr_state *wr_mas)
+{
+	struct ma_state *mas = wr_mas->mas;
+	unsigned char new_end;
+
+	if (unlikely(mas_is_none(mas) || mas_is_ptr(mas))) {
+		mas->store_type = wr_store_root;
+		return;
+	}
+
+	if (unlikely(!mas_wr_walk(wr_mas))) {
+		mas->store_type = wr_spanning_store;
+		return;
+	}
+
+	/* At this point, we are at the leaf node that needs to be altered. */
+	mas_wr_end_piv(wr_mas);
+	if (!wr_mas->entry)
+		mas_wr_extend_null(wr_mas);
+
+	new_end = mas_wr_new_end(wr_mas);
+	if ((wr_mas->r_min == mas->index) && (wr_mas->r_max == mas->last)) {
+		mas->store_type = wr_exact_fit;
+		return;
+	}
+
+	if (unlikely(!mas->index && mas->last == ULONG_MAX)) {
+		mas->store_type = wr_new_root;
+		return;
+	}
+
+	/* Potential spanning rebalance collapsing a node */
+	if (new_end < mt_min_slots[wr_mas->type]) {
+		if (!mte_is_root(mas->node)) {
+			mas->store_type = wr_rebalance;
+			return;
+		}
+		mas->store_type = wr_node_store;
+		return;
+	}
+
+	if (new_end >= mt_slots[wr_mas->type]) {
+		mas->store_type = wr_split_store;
+		return;
+	}
+
+	if (!mt_in_rcu(mas->tree) && (mas->offset == mas->end)) {
+		mas->store_type = wr_append;
+		return;
+	}
+
+	if ((new_end == mas->end) && (!mt_in_rcu(mas->tree) ||
+		(wr_mas->offset_end - mas->offset == 1))) {
+		mas->store_type = wr_slot_store;
+		return;
+	}
+
+	if (mte_is_root(mas->node) || (new_end >= mt_min_slots[wr_mas->type]) ||
+		(mas->mas_flags & MA_STATE_BULK)) {
+		mas->store_type = wr_node_store;
+		return;
+	}
+
+	mas->store_type = wr_invalid;
+	MAS_WARN_ON(mas, 1);
+}
+
+/**
+ * mas_wr_preallocate() - Preallocate enough nodes for a store operation
+ * @wr_mas: The maple write state
+ * @entry: The entry that will be stored
+ *
+ */
+static inline void mas_wr_preallocate(struct ma_wr_state *wr_mas, void *entry)
+{
+	struct ma_state *mas = wr_mas->mas;
+	int request;
+
+	mas_wr_prealloc_setup(wr_mas);
+	mas_wr_store_type(wr_mas);
+	request = mas_prealloc_calc(mas, entry);
+	if (!request)
+		return;
+
+	mas_node_count(mas, request);
+}
+
 /**
  * mas_insert() - Internal call to insert a value
  * @mas: The maple state
@@ -5508,69 +5648,25 @@ EXPORT_SYMBOL_GPL(mas_store_prealloc);
 int mas_preallocate(struct ma_state *mas, void *entry, gfp_t gfp)
 {
 	MA_WR_STATE(wr_mas, mas, entry);
-	unsigned char node_size;
-	int request = 1;
-	int ret;
-
-
-	if (unlikely(!mas->index && mas->last == ULONG_MAX))
-		goto ask_now;
+	int ret = 0;
+	int request;
 
 	mas_wr_prealloc_setup(&wr_mas);
-	/* Root expand */
-	if (unlikely(mas_is_none(mas) || mas_is_ptr(mas)))
-		goto ask_now;
-
-	if (unlikely(!mas_wr_walk(&wr_mas))) {
-		/* Spanning store, use worst case for now */
-		request = 1 + mas_mt_height(mas) * 3;
-		goto ask_now;
-	}
-
-	/* At this point, we are at the leaf node that needs to be altered. */
-	/* Exact fit, no nodes needed. */
-	if (wr_mas.r_min == mas->index && wr_mas.r_max == mas->last)
-		return 0;
-
-	mas_wr_end_piv(&wr_mas);
-	node_size = mas_wr_new_end(&wr_mas);
-
-	/* Slot store, does not require additional nodes */
-	if (node_size == mas->end) {
-		/* reuse node */
-		if (!mt_in_rcu(mas->tree))
-			return 0;
-		/* shifting boundary */
-		if (wr_mas.offset_end - mas->offset == 1)
-			return 0;
-	}
+	mas_wr_store_type(&wr_mas);
+	request = mas_prealloc_calc(mas, entry);
+	if (!request)
+		return ret;
 
-	if (node_size >= mt_slots[wr_mas.type]) {
-		/* Split, worst case for now. */
-		request = 1 + mas_mt_height(mas) * 2;
-		goto ask_now;
+	mas_node_count_gfp(mas, request, gfp);
+	if (mas_is_err(mas)) {
+		mas_set_alloc_req(mas, 0);
+		ret = xa_err(mas->node);
+		mas_destroy(mas);
+		mas_reset(mas);
+		return ret;
 	}
 
-	/* New root needs a single node */
-	if (unlikely(mte_is_root(mas->node)))
-		goto ask_now;
-
-	/* Potential spanning rebalance collapsing a node, use worst-case */
-	if (node_size  - 1 <= mt_min_slots[wr_mas.type])
-		request = mas_mt_height(mas) * 2 - 1;
-
-	/* node store, slot store needs one node */
-ask_now:
-	mas_node_count_gfp(mas, request, gfp);
 	mas->mas_flags |= MA_STATE_PREALLOC;
-	if (likely(!mas_is_err(mas)))
-		return 0;
-
-	mas_set_alloc_req(mas, 0);
-	ret = xa_err(mas->node);
-	mas_reset(mas);
-	mas_destroy(mas);
-	mas_reset(mas);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(mas_preallocate);
@@ -5596,7 +5692,8 @@ void mas_destroy(struct ma_state *mas)
 	 */
 	if (mas->mas_flags & MA_STATE_REBALANCE) {
 		unsigned char end;
-
+		if (mas_is_err(mas))
+			mas_reset(mas);
 		mas_start(mas);
 		mtree_range_walk(mas);
 		end = mas->end + 1;
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index ef5b83cf94ea..ad42a36231fb 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -36283,6 +36283,38 @@ static void check_nomem_writer_race(struct maple_tree *mt)
 	mtree_unlock(mt);
 }
 
+ /* test to simulate expanding a vma from [0x7fffffffe000, 0x7ffffffff000)
+  * to [0x7ffde4ca1000, 0x7ffffffff000) and then shrinking the vma to
+  * [0x7ffde4ca1000, 0x7ffde4ca2000)
+  */
+static inline int check_vma_modification(struct maple_tree *mt)
+{
+	MA_STATE(mas, mt, 0, 0);
+
+	mtree_lock(mt);
+	/* vma with old start and old end */
+	__mas_set_range(&mas, 0x7fffffffe000, 0x7ffffffff000 - 1);
+	mas_preallocate(&mas, xa_mk_value(1), GFP_KERNEL);
+	mas_store_prealloc(&mas, xa_mk_value(1));
+
+	/* next write occurs partly in previous range [0, 0x7fffffffe000)*/
+	mas_prev_range(&mas, 0);
+	/* expand vma to [0x7ffde4ca1000, 0x7ffffffff000) */
+	__mas_set_range(&mas, 0x7ffde4ca1000, 0x7ffffffff000 - 1);
+	mas_preallocate(&mas, xa_mk_value(1), GFP_KERNEL);
+	mas_store_prealloc(&mas, xa_mk_value(1));
+
+	/* shrink vma to [0x7ffde4ca1000, 0x7ffde4ca2000) */
+	__mas_set_range(&mas, 0x7ffde4ca2000, 0x7ffffffff000 - 1);
+	mas_preallocate(&mas, NULL, GFP_KERNEL);
+	mas_store_prealloc(&mas, NULL);
+	mt_dump(mt, mt_dump_hex);
+
+	mas_destroy(&mas);
+	mtree_unlock(mt);
+	return 0;
+}
+
 void farmer_tests(void)
 {
 	struct maple_node *node;
@@ -36290,6 +36322,10 @@ void farmer_tests(void)
 
 	mt_dump(&tree, mt_dump_dec);
 
+	mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN | MT_FLAGS_USE_RCU);
+	check_vma_modification(&tree);
+	mtree_destroy(&tree);
+
 	tree.ma_root = xa_mk_value(0);
 	mt_dump(&tree, mt_dump_dec);
 
-- 
2.46.0




* [PATCH v4 05/17] maple_tree: remove mas_destroy() from mas_nomem()
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
                   ` (3 preceding siblings ...)
  2024-08-14 16:19 ` [PATCH v4 04/17] maple_tree: introduce mas_wr_store_type() Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 06/17] maple_tree: preallocate nodes in mas_erase() Sidhartha Kumar
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar,
	Liam R. Howlett

Separate the call to mas_destroy() from mas_nomem() so we can check for no
memory errors without destroying the current maple state in
mas_store_gfp(). We then add calls to mas_destroy() to callers of
mas_nomem().

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 lib/maple_tree.c                 | 37 +++++++++++++++++++++-----------
 tools/testing/radix-tree/maple.c | 10 +++++----
 2 files changed, 30 insertions(+), 17 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index d0b9b3795b96..58985107cf00 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4519,6 +4519,7 @@ int mas_alloc_cyclic(struct ma_state *mas, unsigned long *startp,
 	if (*next == 0)
 		mas->tree->ma_flags |= MT_FLAGS_ALLOC_WRAPPED;
 
+	mas_destroy(mas);
 	return ret;
 }
 EXPORT_SYMBOL(mas_alloc_cyclic);
@@ -5601,21 +5602,25 @@ int mas_store_gfp(struct ma_state *mas, void *entry, gfp_t gfp)
 	unsigned long index = mas->index;
 	unsigned long last = mas->last;
 	MA_WR_STATE(wr_mas, mas, entry);
+	int ret = 0;
 
-	mas_wr_store_setup(&wr_mas);
-	trace_ma_write(__func__, mas, 0, entry);
 retry:
-	mas_wr_store_entry(&wr_mas);
+	mas_wr_preallocate(&wr_mas, entry);
 	if (unlikely(mas_nomem(mas, gfp))) {
 		if (!entry)
 			__mas_set_range(mas, index, last);
 		goto retry;
 	}
 
-	if (unlikely(mas_is_err(mas)))
-		return xa_err(mas->node);
+	if (mas_is_err(mas)) {
+		ret = xa_err(mas->node);
+		goto out;
+	}
 
-	return 0;
+	mas_wr_store_entry(&wr_mas);
+out:
+	mas_destroy(mas);
+	return ret;
 }
 EXPORT_SYMBOL_GPL(mas_store_gfp);
 
@@ -6374,6 +6379,7 @@ void *mas_erase(struct ma_state *mas)
 		goto write_retry;
 	}
 
+	mas_destroy(mas);
 	return entry;
 }
 EXPORT_SYMBOL_GPL(mas_erase);
@@ -6388,10 +6394,8 @@ EXPORT_SYMBOL_GPL(mas_erase);
 bool mas_nomem(struct ma_state *mas, gfp_t gfp)
 	__must_hold(mas->tree->ma_lock)
 {
-	if (likely(mas->node != MA_ERROR(-ENOMEM))) {
-		mas_destroy(mas);
+	if (likely(mas->node != MA_ERROR(-ENOMEM)))
 		return false;
-	}
 
 	if (gfpflags_allow_blocking(gfp) && !mt_external_lock(mas->tree)) {
 		mtree_unlock(mas->tree);
@@ -6469,6 +6473,7 @@ int mtree_store_range(struct maple_tree *mt, unsigned long index,
 {
 	MA_STATE(mas, mt, index, last);
 	MA_WR_STATE(wr_mas, &mas, entry);
+	int ret = 0;
 
 	trace_ma_write(__func__, &mas, 0, entry);
 	if (WARN_ON_ONCE(xa_is_advanced(entry)))
@@ -6484,10 +6489,12 @@ int mtree_store_range(struct maple_tree *mt, unsigned long index,
 		goto retry;
 
 	mtree_unlock(mt);
+
 	if (mas_is_err(&mas))
-		return xa_err(mas.node);
+		ret = xa_err(mas.node);
 
-	return 0;
+	mas_destroy(&mas);
+	return ret;
 }
 EXPORT_SYMBOL(mtree_store_range);
 
@@ -6523,6 +6530,7 @@ int mtree_insert_range(struct maple_tree *mt, unsigned long first,
 		unsigned long last, void *entry, gfp_t gfp)
 {
 	MA_STATE(ms, mt, first, last);
+	int ret = 0;
 
 	if (WARN_ON_ONCE(xa_is_advanced(entry)))
 		return -EINVAL;
@@ -6538,9 +6546,10 @@ int mtree_insert_range(struct maple_tree *mt, unsigned long first,
 
 	mtree_unlock(mt);
 	if (mas_is_err(&ms))
-		return xa_err(ms.node);
+		ret = xa_err(ms.node);
 
-	return 0;
+	mas_destroy(&ms);
+	return ret;
 }
 EXPORT_SYMBOL(mtree_insert_range);
 
@@ -6595,6 +6604,7 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
 
 unlock:
 	mtree_unlock(mt);
+	mas_destroy(&mas);
 	return ret;
 }
 EXPORT_SYMBOL(mtree_alloc_range);
@@ -6676,6 +6686,7 @@ int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,
 
 unlock:
 	mtree_unlock(mt);
+	mas_destroy(&mas);
 	return ret;
 }
 EXPORT_SYMBOL(mtree_alloc_rrange);
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index ad42a36231fb..c5b00aca9def 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -120,7 +120,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
 	MT_BUG_ON(mt, mas.alloc->slot[0] == NULL);
 	mas_push_node(&mas, mn);
 	mas_reset(&mas);
-	mas_nomem(&mas, GFP_KERNEL); /* free */
+	mas_destroy(&mas);
 	mtree_unlock(mt);
 
 
@@ -144,7 +144,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
 	mn->parent = ma_parent_ptr(mn);
 	ma_free_rcu(mn);
 	mas.status = ma_start;
-	mas_nomem(&mas, GFP_KERNEL);
+	mas_destroy(&mas);
 	/* Allocate 3 nodes, will fail. */
 	mas_node_count(&mas, 3);
 	/* Drop the lock and allocate 3 nodes. */
@@ -161,7 +161,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
 	MT_BUG_ON(mt, mas_allocated(&mas) != 3);
 	/* Free. */
 	mas_reset(&mas);
-	mas_nomem(&mas, GFP_KERNEL);
+	mas_destroy(&mas);
 
 	/* Set allocation request to 1. */
 	mas_set_alloc_req(&mas, 1);
@@ -277,6 +277,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
 		}
 		mas_reset(&mas);
 		MT_BUG_ON(mt, mas_nomem(&mas, GFP_KERNEL));
+		mas_destroy(&mas);
 
 	}
 
@@ -299,7 +300,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
 	}
 	MT_BUG_ON(mt, mas_allocated(&mas) != total);
 	mas_reset(&mas);
-	mas_nomem(&mas, GFP_KERNEL); /* Free. */
+	mas_destroy(&mas); /* Free. */
 
 	MT_BUG_ON(mt, mas_allocated(&mas) != 0);
 	for (i = 1; i < 128; i++) {
@@ -35847,6 +35848,7 @@ static noinline void __init check_nomem(struct maple_tree *mt)
 	mas_store(&ms, &ms); /* insert 1 -> &ms */
 	mas_nomem(&ms, GFP_KERNEL); /* Node allocated in here. */
 	mtree_unlock(mt);
+	mas_destroy(&ms);
 	mtree_destroy(mt);
 }
 
-- 
2.46.0




* [PATCH v4 06/17] maple_tree: preallocate nodes in mas_erase()
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
                   ` (4 preceding siblings ...)
  2024-08-14 16:19 ` [PATCH v4 05/17] maple_tree: remove mas_destroy() from mas_nomem() Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 07/17] maple_tree: use mas_store_gfp() in mtree_store_range() Sidhartha Kumar
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar

Use mas_wr_preallocate() in mas_erase() to preallocate enough nodes to
complete the erase. Add error handling by skipping the store if the
preallocation led to an error other than a lack of memory.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 lib/maple_tree.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 58985107cf00..8ba52ffa778e 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -6371,14 +6371,18 @@ void *mas_erase(struct ma_state *mas)
 
 	/* Must reset to ensure spanning writes of last slot are detected */
 	mas_reset(mas);
-	mas_wr_store_setup(&wr_mas);
-	mas_wr_store_entry(&wr_mas);
+	mas_wr_preallocate(&wr_mas, NULL);
 	if (mas_nomem(mas, GFP_KERNEL)) {
 		/* in case the range of entry changed when unlocked */
 		mas->index = mas->last = index;
 		goto write_retry;
 	}
 
+	if (mas_is_err(mas))
+		goto out;
+
+	mas_wr_store_entry(&wr_mas);
+out:
 	mas_destroy(mas);
 	return entry;
 }
-- 
2.46.0




* [PATCH v4 07/17] maple_tree: use mas_store_gfp() in mtree_store_range()
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
                   ` (5 preceding siblings ...)
  2024-08-14 16:19 ` [PATCH v4 06/17] maple_tree: preallocate nodes in mas_erase() Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 08/17] maple_tree: print store type in mas_dump() Sidhartha Kumar
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar,
	Liam R. Howlett

Refactor mtree_store_range() to use mas_store_gfp() which will abstract
the store, memory allocation, and error handling.

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 lib/maple_tree.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 8ba52ffa778e..e01e05be6301 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -6476,7 +6476,6 @@ int mtree_store_range(struct maple_tree *mt, unsigned long index,
 		unsigned long last, void *entry, gfp_t gfp)
 {
 	MA_STATE(mas, mt, index, last);
-	MA_WR_STATE(wr_mas, &mas, entry);
 	int ret = 0;
 
 	trace_ma_write(__func__, &mas, 0, entry);
@@ -6487,17 +6486,9 @@ int mtree_store_range(struct maple_tree *mt, unsigned long index,
 		return -EINVAL;
 
 	mtree_lock(mt);
-retry:
-	mas_wr_store_entry(&wr_mas);
-	if (mas_nomem(&mas, gfp))
-		goto retry;
-
+	ret = mas_store_gfp(&mas, entry, gfp);
 	mtree_unlock(mt);
 
-	if (mas_is_err(&mas))
-		ret = xa_err(mas.node);
-
-	mas_destroy(&mas);
 	return ret;
 }
 EXPORT_SYMBOL(mtree_store_range);
-- 
2.46.0




* [PATCH v4 08/17] maple_tree: print store type in mas_dump()
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
                   ` (6 preceding siblings ...)
  2024-08-14 16:19 ` [PATCH v4 07/17] maple_tree: use mas_store_gfp() in mtree_store_range() Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 09/17] maple_tree: use store type in mas_wr_store_entry() Sidhartha Kumar
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar,
	Liam R. Howlett

Knowing the store type of the maple state could be helpful for debugging.
Have mas_dump() print mas->store_type.

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 lib/maple_tree.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index e01e05be6301..a1689fc6227b 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -7760,6 +7760,40 @@ void mas_dump(const struct ma_state *mas)
 		break;
 	}
 
+	pr_err("Store Type: ");
+	switch (mas->store_type) {
+	case wr_invalid:
+		pr_err("invalid store type\n");
+		break;
+	case wr_new_root:
+		pr_err("new_root\n");
+		break;
+	case wr_store_root:
+		pr_err("store_root\n");
+		break;
+	case wr_exact_fit:
+		pr_err("exact_fit\n");
+		break;
+	case wr_split_store:
+		pr_err("split_store\n");
+		break;
+	case wr_slot_store:
+		pr_err("slot_store\n");
+		break;
+	case wr_append:
+		pr_err("append\n");
+		break;
+	case wr_node_store:
+		pr_err("node_store\n");
+		break;
+	case wr_spanning_store:
+		pr_err("spanning_store\n");
+		break;
+	case wr_rebalance:
+		pr_err("rebalance\n");
+		break;
+	}
+
 	pr_err("[%u/%u] index=%lx last=%lx\n", mas->offset, mas->end,
 	       mas->index, mas->last);
 	pr_err("     min=%lx max=%lx alloc=%p, depth=%u, flags=%x\n",
-- 
2.46.0




* [PATCH v4 09/17] maple_tree: use store type in mas_wr_store_entry()
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
                   ` (7 preceding siblings ...)
  2024-08-14 16:19 ` [PATCH v4 08/17] maple_tree: print store type in mas_dump() Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 10/17] maple_tree: convert mas_insert() to preallocate nodes Sidhartha Kumar
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar,
	Liam R. Howlett

When storing an entry, we can read the store type that was set from a
previous partial walk of the tree. Now that the type of store is known,
select the correct write helper function to use to complete the store.

Also noinline mas_wr_spanning_store() to limit stack frame usage in
mas_wr_store_entry() as it allocates a maple_big_node on the stack.

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 lib/maple_tree.c | 90 ++++++++++++++++++++++++++++--------------------
 1 file changed, 52 insertions(+), 38 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index a1689fc6227b..2242e07a46dc 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -3780,7 +3780,7 @@ static inline int mas_new_root(struct ma_state *mas, void *entry)
  *
  * Return: 0 on error, positive on success.
  */
-static inline int mas_wr_spanning_store(struct ma_wr_state *wr_mas)
+static noinline int mas_wr_spanning_store(struct ma_wr_state *wr_mas)
 {
 	struct maple_subtree_state mast;
 	struct maple_big_node b_node;
@@ -4206,43 +4206,62 @@ static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
 static inline void mas_wr_store_entry(struct ma_wr_state *wr_mas)
 {
 	struct ma_state *mas = wr_mas->mas;
+	unsigned char new_end = mas_wr_new_end(wr_mas);
 
-	wr_mas->content = mas_start(mas);
-	if (mas_is_none(mas) || mas_is_ptr(mas)) {
-		mas_store_root(mas, wr_mas->entry);
+	switch (mas->store_type) {
+	case wr_invalid:
+		MT_BUG_ON(mas->tree, 1);
 		return;
-	}
-
-	if (unlikely(!mas_wr_walk(wr_mas))) {
+	case wr_new_root:
+		mas_new_root(mas, wr_mas->entry);
+		break;
+	case wr_store_root:
+		mas_store_root(mas, wr_mas->entry);
+		break;
+	case wr_exact_fit:
+		rcu_assign_pointer(wr_mas->slots[mas->offset], wr_mas->entry);
+		if (!!wr_mas->entry ^ !!wr_mas->content)
+			mas_update_gap(mas);
+		break;
+	case wr_append:
+		mas_wr_append(wr_mas, new_end);
+		break;
+	case wr_slot_store:
+		mas_wr_slot_store(wr_mas);
+		break;
+	case wr_node_store:
+		mas_wr_node_store(wr_mas, new_end);
+		break;
+	case wr_spanning_store:
 		mas_wr_spanning_store(wr_mas);
-		return;
+		break;
+	case wr_split_store:
+	case wr_rebalance:
+		mas_wr_bnode(wr_mas);
+		break;
 	}
 
-	/* At this point, we are at the leaf node that needs to be altered. */
-	mas_wr_end_piv(wr_mas);
-	/* New root for a single pointer */
-	if (unlikely(!mas->index && mas->last == ULONG_MAX))
-		mas_new_root(mas, wr_mas->entry);
-	else
-		mas_wr_modify(wr_mas);
+	return;
 }
 
-static void mas_wr_store_setup(struct ma_wr_state *wr_mas)
+static inline void mas_wr_prealloc_setup(struct ma_wr_state *wr_mas)
 {
-	if (!mas_is_active(wr_mas->mas)) {
-		if (mas_is_start(wr_mas->mas))
-			return;
+	struct ma_state *mas = wr_mas->mas;
+
+	if (!mas_is_active(mas)) {
+		if (mas_is_start(mas))
+			goto set_content;
 
-		if (unlikely(mas_is_paused(wr_mas->mas)))
+		if (unlikely(mas_is_paused(mas)))
 			goto reset;
 
-		if (unlikely(mas_is_none(wr_mas->mas)))
+		if (unlikely(mas_is_none(mas)))
 			goto reset;
 
-		if (unlikely(mas_is_overflow(wr_mas->mas)))
+		if (unlikely(mas_is_overflow(mas)))
 			goto reset;
 
-		if (unlikely(mas_is_underflow(wr_mas->mas)))
+		if (unlikely(mas_is_underflow(mas)))
 			goto reset;
 	}
 
@@ -4251,27 +4270,20 @@ static void mas_wr_store_setup(struct ma_wr_state *wr_mas)
 	 * writes within this node.  This is to stop partial walks in
 	 * mas_prealloc() from being reset.
 	 */
-	if (wr_mas->mas->last > wr_mas->mas->max)
+	if (mas->last > mas->max)
 		goto reset;
 
 	if (wr_mas->entry)
-		return;
+		goto set_content;
 
-	if (mte_is_leaf(wr_mas->mas->node) &&
-	    wr_mas->mas->last == wr_mas->mas->max)
+	if (mte_is_leaf(mas->node) && mas->last == mas->max)
 		goto reset;
 
-	return;
+	goto set_content;
 
 reset:
-	mas_reset(wr_mas->mas);
-}
-
-static inline void mas_wr_prealloc_setup(struct ma_wr_state *wr_mas)
-{
-	struct ma_state *mas = wr_mas->mas;
-
-	mas_wr_store_setup(wr_mas);
+	mas_reset(mas);
+set_content:
 	wr_mas->content = mas_start(mas);
 }
 
@@ -5582,7 +5594,8 @@ void *mas_store(struct ma_state *mas, void *entry)
 	 * want to examine what happens if a single store operation was to
 	 * overwrite multiple entries within a self-balancing B-Tree.
 	 */
-	mas_wr_store_setup(&wr_mas);
+	mas_wr_prealloc_setup(&wr_mas);
+	mas_wr_store_type(&wr_mas);
 	mas_wr_store_entry(&wr_mas);
 	return wr_mas.content;
 }
@@ -5634,7 +5647,8 @@ void mas_store_prealloc(struct ma_state *mas, void *entry)
 {
 	MA_WR_STATE(wr_mas, mas, entry);
 
-	mas_wr_store_setup(&wr_mas);
+	mas_wr_prealloc_setup(&wr_mas);
+	mas_wr_store_type(&wr_mas);
 	trace_ma_write(__func__, mas, 0, entry);
 	mas_wr_store_entry(&wr_mas);
 	MAS_WR_BUG_ON(&wr_mas, mas_is_err(mas));
-- 
2.46.0




* [PATCH v4 10/17] maple_tree: convert mas_insert() to preallocate nodes
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
                   ` (8 preceding siblings ...)
  2024-08-14 16:19 ` [PATCH v4 09/17] maple_tree: use store type in mas_wr_store_entry() Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 11/17] maple_tree: simplify mas_commit_b_node() Sidhartha Kumar
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar,
	Liam R. Howlett

By setting the store type in mas_insert(), we no longer need to use
mas_wr_modify() to determine the correct store function to use. Instead,
set the store type and call mas_wr_store_entry().

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 lib/maple_tree.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 2242e07a46dc..bb940f61d713 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4457,26 +4457,24 @@ static inline void *mas_insert(struct ma_state *mas, void *entry)
 	if (wr_mas.content)
 		goto exists;
 
-	if (mas_is_none(mas) || mas_is_ptr(mas)) {
-		mas_store_root(mas, entry);
+	mas_wr_preallocate(&wr_mas, entry);
+	if (mas_is_err(mas))
 		return NULL;
-	}
 
 	/* spanning writes always overwrite something */
-	if (!mas_wr_walk(&wr_mas))
+	if (mas->store_type == wr_spanning_store)
 		goto exists;
 
 	/* At this point, we are at the leaf node that needs to be altered. */
-	wr_mas.offset_end = mas->offset;
-	wr_mas.end_piv = wr_mas.r_max;
-
-	if (wr_mas.content || (mas->last > wr_mas.r_max))
-		goto exists;
+	if (mas->store_type != wr_new_root && mas->store_type != wr_store_root) {
+		wr_mas.offset_end = mas->offset;
+		wr_mas.end_piv = wr_mas.r_max;
 
-	if (!entry)
-		return NULL;
+		if (wr_mas.content || (mas->last > wr_mas.r_max))
+			goto exists;
+	}
 
-	mas_wr_modify(&wr_mas);
+	mas_wr_store_entry(&wr_mas);
 	return wr_mas.content;
 
 exists:
-- 
2.46.0




* [PATCH v4 11/17] maple_tree: simplify mas_commit_b_node()
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
                   ` (9 preceding siblings ...)
  2024-08-14 16:19 ` [PATCH v4 10/17] maple_tree: convert mas_insert() to preallocate nodes Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 12/17] maple_tree: remove mas_wr_modify() Sidhartha Kumar
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar

The only callers of mas_commit_b_node() are those with store type of
wr_rebalance and wr_split_store. Use mas->store_type to dispatch to the
correct helper function. This allows the removal of mas_reuse_node() as
it is no longer used.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 lib/maple_tree.c | 64 ++++++------------------------------------------
 1 file changed, 7 insertions(+), 57 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index bb940f61d713..0314e9b52621 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -3395,72 +3395,22 @@ static int mas_split(struct ma_state *mas, struct maple_big_node *b_node)
 	return 1;
 }
 
-/*
- * mas_reuse_node() - Reuse the node to store the data.
- * @wr_mas: The maple write state
- * @bn: The maple big node
- * @end: The end of the data.
- *
- * Will always return false in RCU mode.
- *
- * Return: True if node was reused, false otherwise.
- */
-static inline bool mas_reuse_node(struct ma_wr_state *wr_mas,
-			  struct maple_big_node *bn, unsigned char end)
-{
-	/* Need to be rcu safe. */
-	if (mt_in_rcu(wr_mas->mas->tree))
-		return false;
-
-	if (end > bn->b_end) {
-		int clear = mt_slots[wr_mas->type] - bn->b_end;
-
-		memset(wr_mas->slots + bn->b_end, 0, sizeof(void *) * clear--);
-		memset(wr_mas->pivots + bn->b_end, 0, sizeof(void *) * clear);
-	}
-	mab_mas_cp(bn, 0, bn->b_end, wr_mas->mas, false);
-	return true;
-}
-
 /*
  * mas_commit_b_node() - Commit the big node into the tree.
  * @wr_mas: The maple write state
  * @b_node: The maple big node
- * @end: The end of the data.
  */
 static noinline_for_kasan int mas_commit_b_node(struct ma_wr_state *wr_mas,
-			    struct maple_big_node *b_node, unsigned char end)
+			    struct maple_big_node *b_node)
 {
-	struct maple_node *node;
-	struct maple_enode *old_enode;
-	unsigned char b_end = b_node->b_end;
-	enum maple_type b_type = b_node->type;
-
-	old_enode = wr_mas->mas->node;
-	if ((b_end < mt_min_slots[b_type]) &&
-	    (!mte_is_root(old_enode)) &&
-	    (mas_mt_height(wr_mas->mas) > 1))
-		return mas_rebalance(wr_mas->mas, b_node);
-
-	if (b_end >= mt_slots[b_type])
-		return mas_split(wr_mas->mas, b_node);
+	enum store_type type = wr_mas->mas->store_type;
 
-	if (mas_reuse_node(wr_mas, b_node, end))
-		goto reuse_node;
+	WARN_ON_ONCE(type != wr_rebalance && type != wr_split_store);
 
-	mas_node_count(wr_mas->mas, 1);
-	if (mas_is_err(wr_mas->mas))
-		return 0;
+	if (type == wr_rebalance)
+		return mas_rebalance(wr_mas->mas, b_node);
 
-	node = mas_pop_node(wr_mas->mas);
-	node->parent = mas_mn(wr_mas->mas)->parent;
-	wr_mas->mas->node = mt_mk_node(node, b_type);
-	mab_mas_cp(b_node, 0, b_end, wr_mas->mas, false);
-	mas_replace_node(wr_mas->mas, old_enode);
-reuse_node:
-	mas_update_gap(wr_mas->mas);
-	wr_mas->mas->end = b_end;
-	return 1;
+	return mas_split(wr_mas->mas, b_node);
 }
 
 /*
@@ -4155,7 +4105,7 @@ static void mas_wr_bnode(struct ma_wr_state *wr_mas)
 	trace_ma_write(__func__, wr_mas->mas, 0, wr_mas->entry);
 	memset(&b_node, 0, sizeof(struct maple_big_node));
 	mas_store_b_node(wr_mas, &b_node, wr_mas->offset_end);
-	mas_commit_b_node(wr_mas, &b_node, wr_mas->mas->end);
+	mas_commit_b_node(wr_mas, &b_node);
 }
 
 static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
-- 
2.46.0




* [PATCH v4 12/17] maple_tree: remove mas_wr_modify()
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
                   ` (10 preceding siblings ...)
  2024-08-14 16:19 ` [PATCH v4 11/17] maple_tree: simplify mas_commit_b_node() Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 13/17] maple_tree: have mas_store() allocate nodes if needed Sidhartha Kumar
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar,
	Liam R. Howlett

There are no more users of the function, so safely remove it.

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 lib/maple_tree.c | 38 --------------------------------------
 1 file changed, 38 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 0314e9b52621..6640ca775808 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4108,44 +4108,6 @@ static void mas_wr_bnode(struct ma_wr_state *wr_mas)
 	mas_commit_b_node(wr_mas, &b_node);
 }
 
-static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
-{
-	struct ma_state *mas = wr_mas->mas;
-	unsigned char new_end;
-
-	/* Direct replacement */
-	if (wr_mas->r_min == mas->index && wr_mas->r_max == mas->last) {
-		rcu_assign_pointer(wr_mas->slots[mas->offset], wr_mas->entry);
-		if (!!wr_mas->entry ^ !!wr_mas->content)
-			mas_update_gap(mas);
-		return;
-	}
-
-	/*
-	 * new_end exceeds the size of the maple node and cannot enter the fast
-	 * path.
-	 */
-	new_end = mas_wr_new_end(wr_mas);
-	if (new_end >= mt_slots[wr_mas->type])
-		goto slow_path;
-
-	/* Attempt to append */
-	if (mas_wr_append(wr_mas, new_end))
-		return;
-
-	if (new_end == mas->end && mas_wr_slot_store(wr_mas))
-		return;
-
-	if (mas_wr_node_store(wr_mas, new_end))
-		return;
-
-	if (mas_is_err(mas))
-		return;
-
-slow_path:
-	mas_wr_bnode(wr_mas);
-}
-
 /*
  * mas_wr_store_entry() - Internal call to store a value
  * @mas: The maple state
-- 
2.46.0




* [PATCH v4 13/17] maple_tree: have mas_store() allocate nodes if needed
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
                   ` (11 preceding siblings ...)
  2024-08-14 16:19 ` [PATCH v4 12/17] maple_tree: remove mas_wr_modify() Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 14/17] maple_tree: remove node allocations from various write helper functions Sidhartha Kumar
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar,
	Liam R. Howlett

Not all users of mas_store() enter with nodes already preallocated.
Check for the MA_STATE_PREALLOC flag to decide whether to preallocate nodes
within mas_store() rather than relying on future write helper functions
to perform the allocations. This allows the write helper functions to be
simplified as they do not have to do checks to make sure there are
enough allocated nodes to perform the write.
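
As a hedged illustration (the tree, ranges and values below are made up),
both caller patterns now work:

void example_callers(struct maple_tree *mt)
{
	MA_STATE(mas_a, mt, 0, 5);
	MA_STATE(mas_b, mt, 10, 15);

	mtree_lock(mt);

	/* Preallocating caller: nodes are reserved up front. */
	if (!mas_preallocate(&mas_a, xa_mk_value(1), GFP_KERNEL))
		mas_store_prealloc(&mas_a, xa_mk_value(1));

	/* Non-preallocating caller: mas_store() now allocates as needed. */
	mas_store(&mas_b, xa_mk_value(2));

	mtree_unlock(mt);
}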

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 lib/maple_tree.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 6640ca775808..d5e020dd93fa 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5477,13 +5477,12 @@ static inline void mte_destroy_walk(struct maple_enode *enode,
  * @entry: The entry to store.
  *
  * The @mas->index and @mas->last is used to set the range for the @entry.
- * Note: The @mas should have pre-allocated entries to ensure there is memory to
- * store the entry.  Please see mas_expected_entries()/mas_destroy() for more details.
  *
  * Return: the first entry between mas->index and mas->last or %NULL.
  */
 void *mas_store(struct ma_state *mas, void *entry)
 {
+	int request;
 	MA_WR_STATE(wr_mas, mas, entry);
 
 	trace_ma_write(__func__, mas, 0, entry);
@@ -5506,7 +5505,23 @@ void *mas_store(struct ma_state *mas, void *entry)
 	 */
 	mas_wr_prealloc_setup(&wr_mas);
 	mas_wr_store_type(&wr_mas);
+	if (mas->mas_flags & MA_STATE_PREALLOC) {
+		mas_wr_store_entry(&wr_mas);
+		MAS_WR_BUG_ON(&wr_mas, mas_is_err(mas));
+		return wr_mas.content;
+	}
+
+	request = mas_prealloc_calc(mas, entry);
+	if (!request)
+		goto store;
+
+	mas_node_count(mas, request);
+	if (mas_is_err(mas))
+		return NULL;
+
+store:
 	mas_wr_store_entry(&wr_mas);
+	mas_destroy(mas);
 	return wr_mas.content;
 }
 EXPORT_SYMBOL_GPL(mas_store);
-- 
2.46.0




* [PATCH v4 14/17] maple_tree: remove node allocations from various write helper functions
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
                   ` (12 preceding siblings ...)
  2024-08-14 16:19 ` [PATCH v4 13/17] maple_tree: have mas_store() allocate nodes if needed Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 15/17] maple_tree: remove repeated sanity checks from " Sidhartha Kumar
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar,
	Liam R. Howlett

These write helper functions are all called from store paths which
preallocate the nodes that will be needed for the write. There is no
longer any need to allocate within the functions themselves.

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 lib/maple_tree.c | 27 ---------------------------
 1 file changed, 27 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index d5e020dd93fa..5f79be184377 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -2976,9 +2976,6 @@ static inline int mas_rebalance(struct ma_state *mas,
 	 * tries to combine the data in the same way.  If one node contains the
 	 * entire range of the tree, then that node is used as a new root node.
 	 */
-	mas_node_count(mas, empty_count * 2 - 1);
-	if (mas_is_err(mas))
-		return 0;
 
 	mast.orig_l = &l_mas;
 	mast.orig_r = &r_mas;
@@ -3029,11 +3026,6 @@ static inline void mas_destroy_rebalance(struct ma_state *mas, unsigned char end
 
 	/* set up node. */
 	if (in_rcu) {
-		/* Allocate for both left and right as well as parent. */
-		mas_node_count(mas, 3);
-		if (mas_is_err(mas))
-			return;
-
 		newnode = mas_pop_node(mas);
 	} else {
 		newnode = &reuse;
@@ -3341,10 +3333,6 @@ static int mas_split(struct ma_state *mas, struct maple_big_node *b_node)
 
 	trace_ma_op(__func__, mas);
 	mas->depth = mas_mt_height(mas);
-	/* Allocation failures will happen early. */
-	mas_node_count(mas, 1 + mas->depth * 2);
-	if (mas_is_err(mas))
-		return 0;
 
 	mast.l = &l_mas;
 	mast.r = &r_mas;
@@ -3427,10 +3415,6 @@ static inline int mas_root_expand(struct ma_state *mas, void *entry)
 	unsigned long *pivots;
 	int slot = 0;
 
-	mas_node_count(mas, 1);
-	if (unlikely(mas_is_err(mas)))
-		return 0;
-
 	node = mas_pop_node(mas);
 	pivots = ma_pivots(node, type);
 	slots = ma_slots(node, type);
@@ -3699,10 +3683,6 @@ static inline int mas_new_root(struct ma_state *mas, void *entry)
 		goto done;
 	}
 
-	mas_node_count(mas, 1);
-	if (mas_is_err(mas))
-		return 0;
-
 	node = mas_pop_node(mas);
 	pivots = ma_pivots(node, type);
 	slots = ma_slots(node, type);
@@ -3765,9 +3745,6 @@ static noinline int mas_wr_spanning_store(struct ma_wr_state *wr_mas)
 	 * entries per level plus a new root.
 	 */
 	height = mas_mt_height(mas);
-	mas_node_count(mas, 1 + height * 3);
-	if (mas_is_err(mas))
-		return 0;
 
 	/*
 	 * Set up right side.  Need to get to the next offset after the spanning
@@ -3851,10 +3828,6 @@ static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas,
 
 	/* set up node. */
 	if (in_rcu) {
-		mas_node_count(mas, 1);
-		if (mas_is_err(mas))
-			return false;
-
 		newnode = mas_pop_node(mas);
 	} else {
 		memset(&reuse, 0, sizeof(struct maple_node));
-- 
2.46.0




* [PATCH v4 15/17] maple_tree: remove repeated sanity checks from write helper functions
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
                   ` (13 preceding siblings ...)
  2024-08-14 16:19 ` [PATCH v4 14/17] maple_tree: remove node allocations from various write helper functions Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 16/17] maple_tree: remove unneeded mas_wr_walk() in mas_store_prealloc() Sidhartha Kumar
  2024-08-14 16:19 ` [PATCH v4 17/17] maple_tree: make write helper functions void Sidhartha Kumar
  16 siblings, 0 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar

These sanity checks are now redundant, as the same conditions are already
checked in mas_wr_store_type(). We can remove them from mas_wr_append() and
mas_wr_node_store().

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 lib/maple_tree.c | 17 ++---------------
 1 file changed, 2 insertions(+), 15 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 5f79be184377..8c1a1a483395 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -3816,11 +3816,6 @@ static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas,
 	unsigned char copy_size, node_pivots = mt_pivots[wr_mas->type];
 	bool in_rcu = mt_in_rcu(mas->tree);
 
-	/* Check if there is enough data. The room is enough. */
-	if (!mte_is_root(mas->node) && (new_end < mt_min_slots[wr_mas->type]) &&
-	    !(mas->mas_flags & MA_STATE_BULK))
-		return false;
-
 	if (mas->last == wr_mas->end_piv)
 		offset_end++; /* don't copy this offset */
 	else if (unlikely(wr_mas->r_max == ULONG_MAX))
@@ -4018,17 +4013,9 @@ static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
 static inline bool mas_wr_append(struct ma_wr_state *wr_mas,
 		unsigned char new_end)
 {
-	struct ma_state *mas;
+	struct ma_state *mas = wr_mas->mas;
 	void __rcu **slots;
-	unsigned char end;
-
-	mas = wr_mas->mas;
-	if (mt_in_rcu(mas->tree))
-		return false;
-
-	end = mas->end;
-	if (mas->offset != end)
-		return false;
+	unsigned char end = mas->end;
 
 	if (new_end < mt_pivots[wr_mas->type]) {
 		wr_mas->pivots[new_end] = wr_mas->pivots[end];
-- 
2.46.0



^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH v4 16/17] maple_tree: remove unneeded mas_wr_walk() in mas_store_prealloc()
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
                   ` (14 preceding siblings ...)
  2024-08-14 16:19 ` [PATCH v4 15/17] maple_tree: remove repeated sanity checks from " Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  2024-10-24  1:20   ` Wei Yang
  2024-08-14 16:19 ` [PATCH v4 17/17] maple_tree: make write helper functions void Sidhartha Kumar
  16 siblings, 1 reply; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar,
	Liam R. Howlett

Users of mas_store_prealloc() enter this function with nodes already
preallocated. This means the store type must already be set. We can then
remove the call to mas_wr_store_type() and initialize the write state to
continue the partial walk that was done when determining the store type.
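
As an illustration (not part of this patch, the function and variable names
are made up), the calling sequence this relies on looks roughly like:

    static int example_store(struct maple_tree *mt, unsigned long index,
                             unsigned long last, void *entry)
    {
        MA_STATE(mas, mt, index, last);
        int ret;

        /* walks the tree, records mas.store_type, allocates nodes */
        ret = mas_preallocate(&mas, entry, GFP_KERNEL);
        if (ret)
            return ret;    /* -ENOMEM, nothing has been stored */

        /* commits using the recorded store type; no second full walk */
        mas_store_prealloc(&mas, entry);
        return 0;
    }

(Tree locking is omitted for brevity.)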

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 lib/maple_tree.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 8c1a1a483395..73ce63d9c3a0 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -3979,9 +3979,6 @@ static inline void mas_wr_end_piv(struct ma_wr_state *wr_mas)
 		wr_mas->end_piv = wr_mas->pivots[wr_mas->offset_end];
 	else
 		wr_mas->end_piv = wr_mas->mas->max;
-
-	if (!wr_mas->entry)
-		mas_wr_extend_null(wr_mas);
 }
 
 static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
@@ -5532,8 +5529,19 @@ void mas_store_prealloc(struct ma_state *mas, void *entry)
 {
 	MA_WR_STATE(wr_mas, mas, entry);
 
-	mas_wr_prealloc_setup(&wr_mas);
-	mas_wr_store_type(&wr_mas);
+	if (mas->store_type == wr_store_root) {
+		mas_wr_prealloc_setup(&wr_mas);
+		goto store;
+	}
+
+	mas_wr_walk_descend(&wr_mas);
+	if (mas->store_type != wr_spanning_store) {
+		/* set wr_mas->content to current slot */
+		wr_mas.content = mas_slot_locked(mas, wr_mas.slots, mas->offset);
+		mas_wr_end_piv(&wr_mas);
+	}
+
+store:
 	trace_ma_write(__func__, mas, 0, entry);
 	mas_wr_store_entry(&wr_mas);
 	MAS_WR_BUG_ON(&wr_mas, mas_is_err(mas));
-- 
2.46.0



^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH v4 17/17] maple_tree: make write helper functions void
  2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
                   ` (15 preceding siblings ...)
  2024-08-14 16:19 ` [PATCH v4 16/17] maple_tree: remove unneeded mas_wr_walk() in mas_store_prealloc() Sidhartha Kumar
@ 2024-08-14 16:19 ` Sidhartha Kumar
  16 siblings, 0 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2024-08-14 16:19 UTC (permalink / raw)
  To: linux-kernel, maple-tree
  Cc: linux-mm, akpm, liam.howlett, willy, surenb, Sidhartha Kumar

The return values of various write helper functions are not checked. We
can safely change the return type of these functions to void.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 lib/maple_tree.c | 47 ++++++++++++++++-------------------------------
 1 file changed, 16 insertions(+), 31 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 73ce63d9c3a0..755ba8b18e14 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -2823,10 +2823,8 @@ static inline void *mtree_range_walk(struct ma_state *mas)
  * orig_l_mas->last is used in mas_consume to find the slots that will need to
  * be either freed or destroyed.  orig_l_mas->depth keeps track of the height of
  * the new sub-tree in case the sub-tree becomes the full tree.
- *
- * Return: the number of elements in b_node during the last loop.
  */
-static int mas_spanning_rebalance(struct ma_state *mas,
+static void mas_spanning_rebalance(struct ma_state *mas,
 		struct maple_subtree_state *mast, unsigned char count)
 {
 	unsigned char split, mid_split;
@@ -2942,7 +2940,7 @@ static int mas_spanning_rebalance(struct ma_state *mas,
 	mas->offset = l_mas.offset;
 	mas_wmb_replace(mas, old_enode);
 	mtree_range_walk(mas);
-	return mast->bn->b_end;
+	return;
 }
 
 /*
@@ -2952,10 +2950,8 @@ static int mas_spanning_rebalance(struct ma_state *mas,
  *
  * Rebalance two nodes into a single node or two new nodes that are sufficient.
  * Continue upwards until tree is sufficient.
- *
- * Return: the number of elements in b_node during the last loop.
  */
-static inline int mas_rebalance(struct ma_state *mas,
+static inline void mas_rebalance(struct ma_state *mas,
 				struct maple_big_node *b_node)
 {
 	char empty_count = mas_mt_height(mas);
@@ -3300,9 +3296,8 @@ static inline bool mas_push_data(struct ma_state *mas, int height,
  * mas_split() - Split data that is too big for one node into two.
  * @mas: The maple state
  * @b_node: The maple big node
- * Return: 1 on success, 0 on failure.
  */
-static int mas_split(struct ma_state *mas, struct maple_big_node *b_node)
+static void mas_split(struct ma_state *mas, struct maple_big_node *b_node)
 {
 	struct maple_subtree_state mast;
 	int height = 0;
@@ -3380,7 +3375,7 @@ static int mas_split(struct ma_state *mas, struct maple_big_node *b_node)
 	mas->node = l_mas.node;
 	mas_wmb_replace(mas, old);
 	mtree_range_walk(mas);
-	return 1;
+	return;
 }
 
 /*
@@ -3388,7 +3383,7 @@ static int mas_split(struct ma_state *mas, struct maple_big_node *b_node)
  * @wr_mas: The maple write state
  * @b_node: The maple big node
  */
-static noinline_for_kasan int mas_commit_b_node(struct ma_wr_state *wr_mas,
+static noinline_for_kasan void mas_commit_b_node(struct ma_wr_state *wr_mas,
 			    struct maple_big_node *b_node)
 {
 	enum store_type type = wr_mas->mas->store_type;
@@ -3664,10 +3659,8 @@ static void mte_destroy_walk(struct maple_enode *, struct maple_tree *);
  * @entry: The entry to store.
  *
  * Only valid when the index == 0 and the last == ULONG_MAX
- *
- * Return 0 on error, 1 on success.
  */
-static inline int mas_new_root(struct ma_state *mas, void *entry)
+static inline void mas_new_root(struct ma_state *mas, void *entry)
 {
 	struct maple_enode *root = mas_root_locked(mas);
 	enum maple_type type = maple_leaf_64;
@@ -3699,7 +3692,7 @@ static inline int mas_new_root(struct ma_state *mas, void *entry)
 	if (xa_is_node(root))
 		mte_destroy_walk(root, mas->tree);
 
-	return 1;
+	return;
 }
 /*
  * mas_wr_spanning_store() - Create a subtree with the store operation completed
@@ -3707,10 +3700,8 @@ static inline int mas_new_root(struct ma_state *mas, void *entry)
  * Note that mas is expected to point to the node which caused the store to
  * span.
  * @wr_mas: The maple write state
- *
- * Return: 0 on error, positive on success.
  */
-static noinline int mas_wr_spanning_store(struct ma_wr_state *wr_mas)
+static noinline void mas_wr_spanning_store(struct ma_wr_state *wr_mas)
 {
 	struct maple_subtree_state mast;
 	struct maple_big_node b_node;
@@ -3802,10 +3793,8 @@ static noinline int mas_wr_spanning_store(struct ma_wr_state *wr_mas)
  * @wr_mas: The maple write state
  *
  * Attempts to reuse the node, but may allocate.
- *
- * Return: True if stored, false otherwise
  */
-static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas,
+static inline void mas_wr_node_store(struct ma_wr_state *wr_mas,
 				     unsigned char new_end)
 {
 	struct ma_state *mas = wr_mas->mas;
@@ -3878,16 +3867,14 @@ static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas,
 	trace_ma_write(__func__, mas, 0, wr_mas->entry);
 	mas_update_gap(mas);
 	mas->end = new_end;
-	return true;
+	return;
 }
 
 /*
  * mas_wr_slot_store: Attempt to store a value in a slot.
  * @wr_mas: the maple write state
- *
- * Return: True if stored, false otherwise
  */
-static inline bool mas_wr_slot_store(struct ma_wr_state *wr_mas)
+static inline void mas_wr_slot_store(struct ma_wr_state *wr_mas)
 {
 	struct ma_state *mas = wr_mas->mas;
 	unsigned char offset = mas->offset;
@@ -3919,7 +3906,7 @@ static inline bool mas_wr_slot_store(struct ma_wr_state *wr_mas)
 		wr_mas->pivots[offset + 1] = mas->last;
 		mas->offset++; /* Keep mas accurate. */
 	} else {
-		return false;
+		return;
 	}
 
 	trace_ma_write(__func__, mas, 0, wr_mas->entry);
@@ -3930,7 +3917,7 @@ static inline bool mas_wr_slot_store(struct ma_wr_state *wr_mas)
 	if (!wr_mas->entry || gap)
 		mas_update_gap(mas);
 
-	return true;
+	return;
 }
 
 static inline void mas_wr_extend_null(struct ma_wr_state *wr_mas)
@@ -4004,10 +3991,8 @@ static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
  * This is currently unsafe in rcu mode since the end of the node may be cached
  * by readers while the node contents may be updated which could result in
  * inaccurate information.
- *
- * Return: True if appended, false otherwise
  */
-static inline bool mas_wr_append(struct ma_wr_state *wr_mas,
+static inline void mas_wr_append(struct ma_wr_state *wr_mas,
 		unsigned char new_end)
 {
 	struct ma_state *mas = wr_mas->mas;
@@ -4046,7 +4031,7 @@ static inline bool mas_wr_append(struct ma_wr_state *wr_mas,
 
 	mas->end = new_end;
 	trace_ma_write(__func__, mas, new_end, wr_mas->entry);
-	return  true;
+	return;
 }
 
 /*
-- 
2.46.0



^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v4 04/17] maple_tree: introduce mas_wr_store_type()
  2024-08-14 16:19 ` [PATCH v4 04/17] maple_tree: introduce mas_wr_store_type() Sidhartha Kumar
@ 2024-09-25  2:04   ` Wei Yang
  2024-09-25 19:33     ` Sid Kumar
  0 siblings, 1 reply; 27+ messages in thread
From: Wei Yang @ 2024-09-25  2:04 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: linux-kernel, maple-tree, linux-mm, akpm, liam.howlett, willy, surenb

On Wed, Aug 14, 2024 at 12:19:31PM -0400, Sidhartha Kumar wrote:

Sorry for the late reply, I just saw this change.

>+
>+/*
>+ * mas_wr_store_type() - Set the store type for a given
>+ * store operation.
>+ * @wr_mas: The maple write state
>+ */
>+static inline void mas_wr_store_type(struct ma_wr_state *wr_mas)
>+{
>+	struct ma_state *mas = wr_mas->mas;
>+	unsigned char new_end;
>+
>+	if (unlikely(mas_is_none(mas) || mas_is_ptr(mas))) {
>+		mas->store_type = wr_store_root;
>+		return;
>+	}
>+
>+	if (unlikely(!mas_wr_walk(wr_mas))) {
>+		mas->store_type = wr_spanning_store;
>+		return;
>+	}
>+
>+	/* At this point, we are at the leaf node that needs to be altered. */
>+	mas_wr_end_piv(wr_mas);
>+	if (!wr_mas->entry)
>+		mas_wr_extend_null(wr_mas);
>+
>+	new_end = mas_wr_new_end(wr_mas);
>+	if ((wr_mas->r_min == mas->index) && (wr_mas->r_max == mas->last)) {
>+		mas->store_type = wr_exact_fit;
>+		return;
>+	}
>+
>+	if (unlikely(!mas->index && mas->last == ULONG_MAX)) {
>+		mas->store_type = wr_new_root;
>+		return;
>+	}
>+
>+	/* Potential spanning rebalance collapsing a node */
>+	if (new_end < mt_min_slots[wr_mas->type]) {
>+		if (!mte_is_root(mas->node)) {
>+			mas->store_type = wr_rebalance;
>+			return;
>+		}
>+		mas->store_type = wr_node_store;
>+		return;
>+	}

After this check, we are sure new_end >= mt_min_slots[wr_mas->type].

>+
>+	if (new_end >= mt_slots[wr_mas->type]) {
>+		mas->store_type = wr_split_store;
>+		return;
>+	}
>+
>+	if (!mt_in_rcu(mas->tree) && (mas->offset == mas->end)) {
>+		mas->store_type = wr_append;
>+		return;
>+	}
>+
>+	if ((new_end == mas->end) && (!mt_in_rcu(mas->tree) ||
>+		(wr_mas->offset_end - mas->offset == 1))) {
>+		mas->store_type = wr_slot_store;
>+		return;
>+	}
>+
>+	if (mte_is_root(mas->node) || (new_end >= mt_min_slots[wr_mas->type]) ||
>+		(mas->mas_flags & MA_STATE_BULK)) {

The check (new_end >= mt_min_slots[wr_mas->type]) here seems to always be true.

So the if here seems unnecessary. Am I missing something?

>+		mas->store_type = wr_node_store;
>+		return;
>+	}
>+
>+	mas->store_type = wr_invalid;
>+	MAS_WARN_ON(mas, 1);
>+}
>+

-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v4 04/17] maple_tree: introduce mas_wr_store_type()
  2024-09-25  2:04   ` Wei Yang
@ 2024-09-25 19:33     ` Sid Kumar
  2024-09-25 19:36       ` Sid Kumar
  0 siblings, 1 reply; 27+ messages in thread
From: Sid Kumar @ 2024-09-25 19:33 UTC (permalink / raw)
  To: Wei Yang
  Cc: linux-kernel, maple-tree, linux-mm, akpm, liam.howlett, willy, surenb


On 9/24/24 9:04 PM, Wei Yang wrote:
> On Wed, Aug 14, 2024 at 12:19:31PM -0400, Sidhartha Kumar wrote:
>
> Sorry for a late reply, I just see this change.
>
>> +
>> +/*
>> + * mas_wr_store_type() - Set the store type for a given
>> + * store operation.
>> + * @wr_mas: The maple write state
>> + */
>> +static inline void mas_wr_store_type(struct ma_wr_state *wr_mas)
>> +{
>> +	struct ma_state *mas = wr_mas->mas;
>> +	unsigned char new_end;
>> +
>> +	if (unlikely(mas_is_none(mas) || mas_is_ptr(mas))) {
>> +		mas->store_type = wr_store_root;
>> +		return;
>> +	}
>> +
>> +	if (unlikely(!mas_wr_walk(wr_mas))) {
>> +		mas->store_type = wr_spanning_store;
>> +		return;
>> +	}
>> +
>> +	/* At this point, we are at the leaf node that needs to be altered. */
>> +	mas_wr_end_piv(wr_mas);
>> +	if (!wr_mas->entry)
>> +		mas_wr_extend_null(wr_mas);
>> +
>> +	new_end = mas_wr_new_end(wr_mas);
>> +	if ((wr_mas->r_min == mas->index) && (wr_mas->r_max == mas->last)) {
>> +		mas->store_type = wr_exact_fit;
>> +		return;
>> +	}
>> +
>> +	if (unlikely(!mas->index && mas->last == ULONG_MAX)) {
>> +		mas->store_type = wr_new_root;
>> +		return;
>> +	}
>> +
>> +	/* Potential spanning rebalance collapsing a node */
>> +	if (new_end < mt_min_slots[wr_mas->type]) {
>> +		if (!mte_is_root(mas->node)) {
>> +			mas->store_type = wr_rebalance;
>> +			return;
>> +		}
>> +		mas->store_type = wr_node_store;
>> +		return;
>> +	}
> After this check, we are sure new_end >= mt_min_slots[wr_mas->type].
>
>> +
>> +	if (new_end >= mt_slots[wr_mas->type]) {
>> +		mas->store_type = wr_split_store;
>> +		return;
>> +	}
>> +
>> +	if (!mt_in_rcu(mas->tree) && (mas->offset == mas->end)) {
>> +		mas->store_type = wr_append;
>> +		return;
>> +	}
>> +
>> +	if ((new_end == mas->end) && (!mt_in_rcu(mas->tree) ||
>> +		(wr_mas->offset_end - mas->offset == 1))) {
>> +		mas->store_type = wr_slot_store;
>> +		return;
>> +	}
>> +
>> +	if (mte_is_root(mas->node) || (new_end >= mt_min_slots[wr_mas->type]) ||
>> +		(mas->mas_flags & MA_STATE_BULK)) {
> The check (new_end >= mt_min_slots[wr_mas->type]) here seems always be true.
>
> So the if here seems not necessary. Do I miss something?

It is true that at this point new_end >= mt_min_slots[wr_mas->type] must 
hold, but if we remove that check we won't catch the wr_node_store case 
when !mte_is_root() and !(mas->mas_flags & MA_STATE_BULK).

We could change the default store type to be wr_node_store and get rid 
of that whole if statement entirely.

This diff passes the tests:

diff --git a/lib/maple_tree.c b/lib/maple_tree.c index 
4f34e50c92b5..2ae0c4da9d74 100644 --- a/lib/maple_tree.c +++ 
b/lib/maple_tree.c @@ -4242,14 +4242,7 @@ static inline void 
mas_wr_store_type(struct ma_wr_state *wr_mas) return; } - if 
(mte_is_root(mas->node) || (new_end >= mt_min_slots[wr_mas->type]) || - 
(mas->mas_flags & MA_STATE_BULK)) { - mas->store_type = wr_node_store; - 
return; - } - - mas->store_type = wr_invalid; - MAS_WARN_ON(mas, 1); + 
mas->store_type = wr_node_store; }

Do you think this makes sense?

Thanks,

Sid

>> +		mas->store_type = wr_node_store;
>> +		return;
>> +	}
>> +
>> +	mas->store_type = wr_invalid;
>> +	MAS_WARN_ON(mas, 1);
>> +}
>> +


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v4 04/17] maple_tree: introduce mas_wr_store_type()
  2024-09-25 19:33     ` Sid Kumar
@ 2024-09-25 19:36       ` Sid Kumar
  2024-09-25 22:59         ` Wei Yang
  0 siblings, 1 reply; 27+ messages in thread
From: Sid Kumar @ 2024-09-25 19:36 UTC (permalink / raw)
  To: Wei Yang
  Cc: linux-kernel, maple-tree, linux-mm, akpm, liam.howlett, willy, surenb


On 9/25/24 2:33 PM, Sid Kumar wrote:
>
> On 9/24/24 9:04 PM, Wei Yang wrote:
>> On Wed, Aug 14, 2024 at 12:19:31PM -0400, Sidhartha Kumar wrote:
>>
>> Sorry for a late reply, I just see this change.
>>
>>> +
>>> +/*
>>> + * mas_wr_store_type() - Set the store type for a given
>>> + * store operation.
>>> + * @wr_mas: The maple write state
>>> + */
>>> +static inline void mas_wr_store_type(struct ma_wr_state *wr_mas)
>>> +{
>>> +    struct ma_state *mas = wr_mas->mas;
>>> +    unsigned char new_end;
>>> +
>>> +    if (unlikely(mas_is_none(mas) || mas_is_ptr(mas))) {
>>> +        mas->store_type = wr_store_root;
>>> +        return;
>>> +    }
>>> +
>>> +    if (unlikely(!mas_wr_walk(wr_mas))) {
>>> +        mas->store_type = wr_spanning_store;
>>> +        return;
>>> +    }
>>> +
>>> +    /* At this point, we are at the leaf node that needs to be 
>>> altered. */
>>> +    mas_wr_end_piv(wr_mas);
>>> +    if (!wr_mas->entry)
>>> +        mas_wr_extend_null(wr_mas);
>>> +
>>> +    new_end = mas_wr_new_end(wr_mas);
>>> +    if ((wr_mas->r_min == mas->index) && (wr_mas->r_max == 
>>> mas->last)) {
>>> +        mas->store_type = wr_exact_fit;
>>> +        return;
>>> +    }
>>> +
>>> +    if (unlikely(!mas->index && mas->last == ULONG_MAX)) {
>>> +        mas->store_type = wr_new_root;
>>> +        return;
>>> +    }
>>> +
>>> +    /* Potential spanning rebalance collapsing a node */
>>> +    if (new_end < mt_min_slots[wr_mas->type]) {
>>> +        if (!mte_is_root(mas->node)) {
>>> +            mas->store_type = wr_rebalance;
>>> +            return;
>>> +        }
>>> +        mas->store_type = wr_node_store;
>>> +        return;
>>> +    }
>> After this check, we are sure new_end >= mt_min_slots[wr_mas->type].
>>
>>> +
>>> +    if (new_end >= mt_slots[wr_mas->type]) {
>>> +        mas->store_type = wr_split_store;
>>> +        return;
>>> +    }
>>> +
>>> +    if (!mt_in_rcu(mas->tree) && (mas->offset == mas->end)) {
>>> +        mas->store_type = wr_append;
>>> +        return;
>>> +    }
>>> +
>>> +    if ((new_end == mas->end) && (!mt_in_rcu(mas->tree) ||
>>> +        (wr_mas->offset_end - mas->offset == 1))) {
>>> +        mas->store_type = wr_slot_store;
>>> +        return;
>>> +    }
>>> +
>>> +    if (mte_is_root(mas->node) || (new_end >= 
>>> mt_min_slots[wr_mas->type]) ||
>>> +        (mas->mas_flags & MA_STATE_BULK)) {
>> The check (new_end >= mt_min_slots[wr_mas->type]) here seems always 
>> be true.
>>
>> So the if here seems not necessary. Do I miss something?
>
> It is true that at this point new_end >= mt_min_slots[wr_mas->type] 
> must be true but if we remove that check we won't catch this 
> wr_node_store case if !mte_is_root() and !(mas->mas_flags & 
> MA_STATE_BULK).
>
> We could change the default store type to be wr_node_store and get rid 
> of that whole if statement entirely.
>
> This diff passes the tests:
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c index 
> 4f34e50c92b5..2ae0c4da9d74 100644 --- a/lib/maple_tree.c +++ 
> b/lib/maple_tree.c @@ -4242,14 +4242,7 @@ static inline void 
> mas_wr_store_type(struct ma_wr_state *wr_mas) return; } - if 
> (mte_is_root(mas->node) || (new_end >= mt_min_slots[wr_mas->type]) || 
> - (mas->mas_flags & MA_STATE_BULK)) { - mas->store_type = 
> wr_node_store; - return; - } - - mas->store_type = wr_invalid; - 
> MAS_WARN_ON(mas, 1); + mas->store_type = wr_node_store; }
>
> do you think this makes sense?
>
Sorry, this diff wasn't formatted correctly. It should look normal now:

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 4f34e50c92b5..2ae0c4da9d74 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4242,14 +4242,7 @@ static inline void mas_wr_store_type(struct 
ma_wr_state *wr_mas)
                 return;
         }

-       if (mte_is_root(mas->node) || (new_end >= 
mt_min_slots[wr_mas->type]) ||
-               (mas->mas_flags & MA_STATE_BULK)) {
-               mas->store_type = wr_node_store;
-               return;
-       }
-
-       mas->store_type = wr_invalid;
-       MAS_WARN_ON(mas, 1);
+       mas->store_type = wr_node_store;
  }

> Thanks,
>
> Sid
>
>>> +        mas->store_type = wr_node_store;
>>> +        return;
>>> +    }
>>> +
>>> +    mas->store_type = wr_invalid;
>>> +    MAS_WARN_ON(mas, 1);
>>> +}
>>> +


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v4 04/17] maple_tree: introduce mas_wr_store_type()
  2024-09-25 19:36       ` Sid Kumar
@ 2024-09-25 22:59         ` Wei Yang
  0 siblings, 0 replies; 27+ messages in thread
From: Wei Yang @ 2024-09-25 22:59 UTC (permalink / raw)
  To: Sid Kumar
  Cc: Wei Yang, linux-kernel, maple-tree, linux-mm, akpm, liam.howlett,
	willy, surenb

On Wed, Sep 25, 2024 at 02:36:21PM -0500, Sid Kumar wrote:
>
>On 9/25/24 2:33 PM, Sid Kumar wrote:
>> 
>> On 9/24/24 9:04 PM, Wei Yang wrote:
>> > On Wed, Aug 14, 2024 at 12:19:31PM -0400, Sidhartha Kumar wrote:
>> > 
>> > Sorry for a late reply, I just see this change.
>> > 
>> > > +
>> > > +/*
>> > > + * mas_wr_store_type() - Set the store type for a given
>> > > + * store operation.
>> > > + * @wr_mas: The maple write state
>> > > + */
>> > > +static inline void mas_wr_store_type(struct ma_wr_state *wr_mas)
>> > > +{
>> > > +    struct ma_state *mas = wr_mas->mas;
>> > > +    unsigned char new_end;
>> > > +
>> > > +    if (unlikely(mas_is_none(mas) || mas_is_ptr(mas))) {
>> > > +        mas->store_type = wr_store_root;
>> > > +        return;
>> > > +    }
>> > > +
>> > > +    if (unlikely(!mas_wr_walk(wr_mas))) {
>> > > +        mas->store_type = wr_spanning_store;
>> > > +        return;
>> > > +    }
>> > > +
>> > > +    /* At this point, we are at the leaf node that needs to be
>> > > altered. */
>> > > +    mas_wr_end_piv(wr_mas);
>> > > +    if (!wr_mas->entry)
>> > > +        mas_wr_extend_null(wr_mas);
>> > > +
>> > > +    new_end = mas_wr_new_end(wr_mas);
>> > > +    if ((wr_mas->r_min == mas->index) && (wr_mas->r_max ==
>> > > mas->last)) {
>> > > +        mas->store_type = wr_exact_fit;
>> > > +        return;
>> > > +    }
>> > > +
>> > > +    if (unlikely(!mas->index && mas->last == ULONG_MAX)) {
>> > > +        mas->store_type = wr_new_root;
>> > > +        return;
>> > > +    }
>> > > +
>> > > +    /* Potential spanning rebalance collapsing a node */
>> > > +    if (new_end < mt_min_slots[wr_mas->type]) {
>> > > +        if (!mte_is_root(mas->node)) {
>> > > +            mas->store_type = wr_rebalance;
>> > > +            return;
>> > > +        }
>> > > +        mas->store_type = wr_node_store;
>> > > +        return;
>> > > +    }
>> > After this check, we are sure new_end >= mt_min_slots[wr_mas->type].
>> > 
>> > > +
>> > > +    if (new_end >= mt_slots[wr_mas->type]) {
>> > > +        mas->store_type = wr_split_store;
>> > > +        return;
>> > > +    }
>> > > +
>> > > +    if (!mt_in_rcu(mas->tree) && (mas->offset == mas->end)) {
>> > > +        mas->store_type = wr_append;
>> > > +        return;
>> > > +    }
>> > > +
>> > > +    if ((new_end == mas->end) && (!mt_in_rcu(mas->tree) ||
>> > > +        (wr_mas->offset_end - mas->offset == 1))) {
>> > > +        mas->store_type = wr_slot_store;
>> > > +        return;
>> > > +    }
>> > > +
>> > > +    if (mte_is_root(mas->node) || (new_end >=
>> > > mt_min_slots[wr_mas->type]) ||
>> > > +        (mas->mas_flags & MA_STATE_BULK)) {
>> > The check (new_end >= mt_min_slots[wr_mas->type]) here seems always
>> > be true.
>> > 
>> > So the if here seems not necessary. Do I miss something?
>> 
>> It is true that at this point new_end >= mt_min_slots[wr_mas->type] must
>> be true but if we remove that check we won't catch this wr_node_store
>> case if !mte_is_root() and !(mas->mas_flags & MA_STATE_BULK).
>> 
>> We could change the default store type to be wr_node_store and get rid of
>> that whole if statement entirely.
>> 
>> This diff passes the tests:
>> 
>> diff --git a/lib/maple_tree.c b/lib/maple_tree.c index
>> 4f34e50c92b5..2ae0c4da9d74 100644 --- a/lib/maple_tree.c +++
>> b/lib/maple_tree.c @@ -4242,14 +4242,7 @@ static inline void
>> mas_wr_store_type(struct ma_wr_state *wr_mas) return; } - if
>> (mte_is_root(mas->node) || (new_end >= mt_min_slots[wr_mas->type]) || -
>> (mas->mas_flags & MA_STATE_BULK)) { - mas->store_type = wr_node_store; -
>> return; - } - - mas->store_type = wr_invalid; - MAS_WARN_ON(mas, 1); +
>> mas->store_type = wr_node_store; }
>> 
>> do you think this makes sense?
>> 
>Sorry this diff wasn't formatted correctly, it should look normal now:
>
>diff --git a/lib/maple_tree.c b/lib/maple_tree.c
>index 4f34e50c92b5..2ae0c4da9d74 100644
>--- a/lib/maple_tree.c
>+++ b/lib/maple_tree.c
>@@ -4242,14 +4242,7 @@ static inline void mas_wr_store_type(struct
>ma_wr_state *wr_mas)
>                return;
>        }
>
>-       if (mte_is_root(mas->node) || (new_end >= mt_min_slots[wr_mas->type])
>||
>-               (mas->mas_flags & MA_STATE_BULK)) {
>-               mas->store_type = wr_node_store;
>-               return;
>-       }
>-
>-       mas->store_type = wr_invalid;
>-       MAS_WARN_ON(mas, 1);
>+       mas->store_type = wr_node_store;
> }
>

I am ok for this one.

>> Thanks,
>> 
>> Sid
>> 
>> > > +        mas->store_type = wr_node_store;
>> > > +        return;
>> > > +    }
>> > > +
>> > > +    mas->store_type = wr_invalid;
>> > > +    MAS_WARN_ON(mas, 1);
>> > > +}
>> > > +

-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v4 16/17] maple_tree: remove unneeded mas_wr_walk() in mas_store_prealloc()
  2024-08-14 16:19 ` [PATCH v4 16/17] maple_tree: remove unneeded mas_wr_walk() in mas_store_prealloc() Sidhartha Kumar
@ 2024-10-24  1:20   ` Wei Yang
  2024-10-25 19:54     ` Sid Kumar
  0 siblings, 1 reply; 27+ messages in thread
From: Wei Yang @ 2024-10-24  1:20 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: linux-kernel, maple-tree, linux-mm, akpm, liam.howlett, willy, surenb

On Wed, Aug 14, 2024 at 12:19:43PM -0400, Sidhartha Kumar wrote:
>Users of mas_store_prealloc() enter this function with nodes already
>preallocated. This means the store type must be already set. We can then
>remove the call to mas_wr_store_type() and initialize the write state to
>continue the partial walk that was done when determining the store type.
>

May I ask what the partial walk here means?

Is it the mas_wr_walk() in mas_wr_store_type(), which is stopped because it
is a spanning write?

I may have missed some background, so the assumption here is that
mas_wr_store_type() has already been invoked and the store type has been
decided, right?

>Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
>Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
>---
> lib/maple_tree.c | 18 +++++++++++++-----
> 1 file changed, 13 insertions(+), 5 deletions(-)
>
>diff --git a/lib/maple_tree.c b/lib/maple_tree.c
>index 8c1a1a483395..73ce63d9c3a0 100644
>--- a/lib/maple_tree.c
>+++ b/lib/maple_tree.c
>@@ -3979,9 +3979,6 @@ static inline void mas_wr_end_piv(struct ma_wr_state *wr_mas)
> 		wr_mas->end_piv = wr_mas->pivots[wr_mas->offset_end];
> 	else
> 		wr_mas->end_piv = wr_mas->mas->max;
>-
>-	if (!wr_mas->entry)
>-		mas_wr_extend_null(wr_mas);
> }
> 
> static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
>@@ -5532,8 +5529,19 @@ void mas_store_prealloc(struct ma_state *mas, void *entry)
> {
> 	MA_WR_STATE(wr_mas, mas, entry);
> 
>-	mas_wr_prealloc_setup(&wr_mas);
>-	mas_wr_store_type(&wr_mas);
>+	if (mas->store_type == wr_store_root) {
>+		mas_wr_prealloc_setup(&wr_mas);
>+		goto store;
>+	}
>+
>+	mas_wr_walk_descend(&wr_mas);

This one does not descend the tree, it just locates the offset in a node and
adjusts min/max. So it does not look like continuing the partial walk to me.

>+	if (mas->store_type != wr_spanning_store) {
>+		/* set wr_mas->content to current slot */
>+		wr_mas.content = mas_slot_locked(mas, wr_mas.slots, mas->offset);
>+		mas_wr_end_piv(&wr_mas);

If it is not a spanning write, the previous walk should have reached a leaf
node, right?

I am not sure why we don't need the extend null check here. Is it because we
have already done it?

>+	}
>+
>+store:
> 	trace_ma_write(__func__, mas, 0, entry);
> 	mas_wr_store_entry(&wr_mas);
> 	MAS_WR_BUG_ON(&wr_mas, mas_is_err(mas));
>-- 
>2.46.0
>

-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v4 16/17] maple_tree: remove unneeded mas_wr_walk() in mas_store_prealloc()
  2024-10-24  1:20   ` Wei Yang
@ 2024-10-25 19:54     ` Sid Kumar
  2024-10-25 23:58       ` Wei Yang
  0 siblings, 1 reply; 27+ messages in thread
From: Sid Kumar @ 2024-10-25 19:54 UTC (permalink / raw)
  To: Wei Yang
  Cc: linux-kernel, maple-tree, linux-mm, akpm, liam.howlett, willy, surenb


On 10/23/24 9:20 PM, Wei Yang wrote:
> On Wed, Aug 14, 2024 at 12:19:43PM -0400, Sidhartha Kumar wrote:
>> Users of mas_store_prealloc() enter this function with nodes already
>> preallocated. This means the store type must be already set. We can then
>> remove the call to mas_wr_store_type() and initialize the write state to
>> continue the partial walk that was done when determining the store type.
>>
> May I ask what is the partial walk here means?
>
> It is the mas_wr_walk() in mas_wr_store_type() which is stopped because of it
> is spanning write?

Yes, this is what I meant by the partial walk that's already been 
started. It's the walk done by mas_wr_store_type().

> I may lost some background, so the assumption here is mas_wr_store_type() has
> already been invoked and the store type has been decided, right?

Ya users of mas_store_prealloc() should have already called 
mas_preallocate() which does:

     mas->store_type = mas_wr_store_type(&wr_mas);
     request = mas_prealloc_calc(&wr_mas, entry);

to set the store type and allocate the nodes.


>> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
>> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
>> ---
>> lib/maple_tree.c | 18 +++++++++++++-----
>> 1 file changed, 13 insertions(+), 5 deletions(-)
>>
>> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
>> index 8c1a1a483395..73ce63d9c3a0 100644
>> --- a/lib/maple_tree.c
>> +++ b/lib/maple_tree.c
>> @@ -3979,9 +3979,6 @@ static inline void mas_wr_end_piv(struct ma_wr_state *wr_mas)
>> 		wr_mas->end_piv = wr_mas->pivots[wr_mas->offset_end];
>> 	else
>> 		wr_mas->end_piv = wr_mas->mas->max;
>> -
>> -	if (!wr_mas->entry)
>> -		mas_wr_extend_null(wr_mas);
>> }
>>
>> static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
>> @@ -5532,8 +5529,19 @@ void mas_store_prealloc(struct ma_state *mas, void *entry)
>> {
>> 	MA_WR_STATE(wr_mas, mas, entry);
>>
>> -	mas_wr_prealloc_setup(&wr_mas);
>> -	mas_wr_store_type(&wr_mas);
>> +	if (mas->store_type == wr_store_root) {
>> +		mas_wr_prealloc_setup(&wr_mas);
>> +		goto store;
>> +	}
>> +
>> +	mas_wr_walk_descend(&wr_mas);
> This one does not descend the tree, just locate the offset in a node and
> adjust min/max. So not look like to continue the partial walk to me.
>
>> +	if (mas->store_type != wr_spanning_store) {
>> +		/* set wr_mas->content to current slot */
>> +		wr_mas.content = mas_slot_locked(mas, wr_mas.slots, mas->offset);
>> +		mas_wr_end_piv(&wr_mas);
> If not a spanning write, the previous walk should reach a leaf node, right?

Ya that's true.

> I am not sure why we don't need to check extend null here. Because we have
> already done it?


Ya we extend null in mas_wr_store_type() which has already been called 
at this point.


     /* At this point, we are at the leaf node that needs to be altered. */
     mas_wr_end_piv(wr_mas);
     if (!wr_mas->entry)
         mas_wr_extend_null(wr_mas);

Thanks,

Sid

>> +	}
>> +
>> +store:
>> 	trace_ma_write(__func__, mas, 0, entry);
>> 	mas_wr_store_entry(&wr_mas);
>> 	MAS_WR_BUG_ON(&wr_mas, mas_is_err(mas));
>> -- 
>> 2.46.0
>>


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v4 16/17] maple_tree: remove unneeded mas_wr_walk() in mas_store_prealloc()
  2024-10-25 19:54     ` Sid Kumar
@ 2024-10-25 23:58       ` Wei Yang
  2024-10-29 15:46         ` Sid Kumar
  0 siblings, 1 reply; 27+ messages in thread
From: Wei Yang @ 2024-10-25 23:58 UTC (permalink / raw)
  To: Sid Kumar
  Cc: Wei Yang, linux-kernel, maple-tree, linux-mm, akpm, liam.howlett,
	willy, surenb

On Fri, Oct 25, 2024 at 03:54:04PM -0400, Sid Kumar wrote:
>
>On 10/23/24 9:20 PM, Wei Yang wrote:
>> On Wed, Aug 14, 2024 at 12:19:43PM -0400, Sidhartha Kumar wrote:
>> > Users of mas_store_prealloc() enter this function with nodes already
>> > preallocated. This means the store type must be already set. We can then
>> > remove the call to mas_wr_store_type() and initialize the write state to
>> > continue the partial walk that was done when determining the store type.
>> > 
>> May I ask what is the partial walk here means?
>> 
>> It is the mas_wr_walk() in mas_wr_store_type() which is stopped because of it
>> is spanning write?
>
>Yes, this is what I meant by the partial walk that's already been started.
>It's the walk done by mas_wr_store_type().
>
>> I may lost some background, so the assumption here is mas_wr_store_type() has
>> already been invoked and the store type has been decided, right?
>
>Ya users of mas_store_prealloc() should have already called mas_preallocate()
>which does:
>
>    mas->store_type = mas_wr_store_type(&wr_mas);
>    request = mas_prealloc_calc(&wr_mas, entry);
>
>to set the store type and allocate the nodes.
>
>
>> > Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
>> > Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
>> > ---
>> > lib/maple_tree.c | 18 +++++++++++++-----
>> > 1 file changed, 13 insertions(+), 5 deletions(-)
>> > 
>> > diff --git a/lib/maple_tree.c b/lib/maple_tree.c
>> > index 8c1a1a483395..73ce63d9c3a0 100644
>> > --- a/lib/maple_tree.c
>> > +++ b/lib/maple_tree.c
>> > @@ -3979,9 +3979,6 @@ static inline void mas_wr_end_piv(struct ma_wr_state *wr_mas)
>> > 		wr_mas->end_piv = wr_mas->pivots[wr_mas->offset_end];
>> > 	else
>> > 		wr_mas->end_piv = wr_mas->mas->max;
>> > -
>> > -	if (!wr_mas->entry)
>> > -		mas_wr_extend_null(wr_mas);
>> > }
>> > 
>> > static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
>> > @@ -5532,8 +5529,19 @@ void mas_store_prealloc(struct ma_state *mas, void *entry)
>> > {
>> > 	MA_WR_STATE(wr_mas, mas, entry);
>> > 
>> > -	mas_wr_prealloc_setup(&wr_mas);
>> > -	mas_wr_store_type(&wr_mas);
>> > +	if (mas->store_type == wr_store_root) {
>> > +		mas_wr_prealloc_setup(&wr_mas);
>> > +		goto store;
>> > +	}
>> > +
>> > +	mas_wr_walk_descend(&wr_mas);
>> This one does not descend the tree, just locate the offset in a node and
>> adjust min/max. So not look like to continue the partial walk to me.
>> 
>> > +	if (mas->store_type != wr_spanning_store) {
>> > +		/* set wr_mas->content to current slot */
>> > +		wr_mas.content = mas_slot_locked(mas, wr_mas.slots, mas->offset);
>> > +		mas_wr_end_piv(&wr_mas);
>> If not a spanning write, the previous walk should reach a leaf node, right?
>
>Ya that's true.
>
>> I am not sure why we don't need to check extend null here. Because we have
>> already done it?
>
>
>Ya we extend null in mas_wr_store_type() which has already been called at
>this point.
>
>
>    /* At this point, we are at the leaf node that needs to be altered. */
>    mas_wr_end_piv(wr_mas);
>    if (!wr_mas->entry)
>        mas_wr_extend_null(wr_mas);
>
>Thanks,

Hmm... if we have already done this, why do we need to do mas_wr_end_piv() again?

>
>Sid
>
>> > +	}
>> > +
>> > +store:
>> > 	trace_ma_write(__func__, mas, 0, entry);
>> > 	mas_wr_store_entry(&wr_mas);
>> > 	MAS_WR_BUG_ON(&wr_mas, mas_is_err(mas));
>> > -- 
>> > 2.46.0
>> > 

-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v4 16/17] maple_tree: remove unneeded mas_wr_walk() in mas_store_prealloc()
  2024-10-25 23:58       ` Wei Yang
@ 2024-10-29 15:46         ` Sid Kumar
  2024-10-31 23:18           ` Wei Yang
  0 siblings, 1 reply; 27+ messages in thread
From: Sid Kumar @ 2024-10-29 15:46 UTC (permalink / raw)
  To: Wei Yang
  Cc: linux-kernel, maple-tree, linux-mm, akpm, liam.howlett, willy, surenb


On 10/25/24 7:58 PM, Wei Yang wrote:
> On Fri, Oct 25, 2024 at 03:54:04PM -0400, Sid Kumar wrote:
>> On 10/23/24 9:20 PM, Wei Yang wrote:
>>> On Wed, Aug 14, 2024 at 12:19:43PM -0400, Sidhartha Kumar wrote:
>>>> Users of mas_store_prealloc() enter this function with nodes already
>>>> preallocated. This means the store type must be already set. We can then
>>>> remove the call to mas_wr_store_type() and initialize the write state to
>>>> continue the partial walk that was done when determining the store type.
>>>>
>>> May I ask what is the partial walk here means?
>>>
>>> It is the mas_wr_walk() in mas_wr_store_type() which is stopped because of it
>>> is spanning write?
>> Yes, this is what I meant by the partial walk that's already been started.
>> It's the walk done by mas_wr_store_type().
>>
>>> I may lost some background, so the assumption here is mas_wr_store_type() has
>>> already been invoked and the store type has been decided, right?
>> Ya users of mas_store_prealloc() should have already called mas_preallocate()
>> which does:
>>
>>      mas->store_type = mas_wr_store_type(&wr_mas);
>>      request = mas_prealloc_calc(&wr_mas, entry);
>>
>> to set the store type and allocate the nodes.
>>
>>
>>>> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
>>>> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
>>>> ---
>>>> lib/maple_tree.c | 18 +++++++++++++-----
>>>> 1 file changed, 13 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
>>>> index 8c1a1a483395..73ce63d9c3a0 100644
>>>> --- a/lib/maple_tree.c
>>>> +++ b/lib/maple_tree.c
>>>> @@ -3979,9 +3979,6 @@ static inline void mas_wr_end_piv(struct ma_wr_state *wr_mas)
>>>> 		wr_mas->end_piv = wr_mas->pivots[wr_mas->offset_end];
>>>> 	else
>>>> 		wr_mas->end_piv = wr_mas->mas->max;
>>>> -
>>>> -	if (!wr_mas->entry)
>>>> -		mas_wr_extend_null(wr_mas);
>>>> }
>>>>
>>>> static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
>>>> @@ -5532,8 +5529,19 @@ void mas_store_prealloc(struct ma_state *mas, void *entry)
>>>> {
>>>> 	MA_WR_STATE(wr_mas, mas, entry);
>>>>
>>>> -	mas_wr_prealloc_setup(&wr_mas);
>>>> -	mas_wr_store_type(&wr_mas);
>>>> +	if (mas->store_type == wr_store_root) {
>>>> +		mas_wr_prealloc_setup(&wr_mas);
>>>> +		goto store;
>>>> +	}
>>>> +
>>>> +	mas_wr_walk_descend(&wr_mas);
>>> This one does not descend the tree, just locate the offset in a node and
>>> adjust min/max. So not look like to continue the partial walk to me.
>>>
>>>> +	if (mas->store_type != wr_spanning_store) {
>>>> +		/* set wr_mas->content to current slot */
>>>> +		wr_mas.content = mas_slot_locked(mas, wr_mas.slots, mas->offset);
>>>> +		mas_wr_end_piv(&wr_mas);
>>> If not a spanning write, the previous walk should reach a leaf node, right?
>> Ya that's true.
>>
>>> I am not sure why we don't need to check extend null here. Because we have
>>> already done it?
>>
>> Ya we extend null in mas_wr_store_type() which has already been called at
>> this point.
>>
>>
>>      /* At this point, we are at the leaf node that needs to be altered. */
>>      mas_wr_end_piv(wr_mas);
>>      if (!wr_mas->entry)
>>          mas_wr_extend_null(wr_mas);
>>
>> Thanks,
> Hmm... if we have already done this, why we need to do mas_wr_end_piv() again?

The maple write state here is local to this function:

void mas_store_prealloc(struct ma_state *mas, void *entry)
{
     MA_WR_STATE(wr_mas, mas, entry);

so we don't retain the end pivot information from the previous call to 
mas_preallocate() and have to recompute it here. The write state is not 
currently exposed, so we have to call mas_wr_end_piv() again.
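
For what it's worth, MA_WR_STATE() only seeds the mas, content and entry
fields of the write state, roughly (paraphrased from maple_tree.h):

    #define MA_WR_STATE(name, ma_state, wr_entry)    \
        struct ma_wr_state name = {                  \
            .mas = ma_state,                         \
            .content = NULL,                         \
            .entry = wr_entry,                       \
        }

so fields such as end_piv and offset_end start out zero in the local
wr_mas and need to be recomputed before the store.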

Thanks,

Sid


>
>> Sid
>>
>>>> +	}
>>>> +
>>>> +store:
>>>> 	trace_ma_write(__func__, mas, 0, entry);
>>>> 	mas_wr_store_entry(&wr_mas);
>>>> 	MAS_WR_BUG_ON(&wr_mas, mas_is_err(mas));
>>>> -- 
>>>> 2.46.0
>>>>


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v4 16/17] maple_tree: remove unneeded mas_wr_walk() in mas_store_prealloc()
  2024-10-29 15:46         ` Sid Kumar
@ 2024-10-31 23:18           ` Wei Yang
  0 siblings, 0 replies; 27+ messages in thread
From: Wei Yang @ 2024-10-31 23:18 UTC (permalink / raw)
  To: Sid Kumar
  Cc: Wei Yang, linux-kernel, maple-tree, linux-mm, akpm, liam.howlett,
	willy, surenb

On Tue, Oct 29, 2024 at 11:46:58AM -0400, Sid Kumar wrote:
>
>On 10/25/24 7:58 PM, Wei Yang wrote:
>> On Fri, Oct 25, 2024 at 03:54:04PM -0400, Sid Kumar wrote:
>> > On 10/23/24 9:20 PM, Wei Yang wrote:
>> > > On Wed, Aug 14, 2024 at 12:19:43PM -0400, Sidhartha Kumar wrote:
>> > > > Users of mas_store_prealloc() enter this function with nodes already
>> > > > preallocated. This means the store type must be already set. We can then
>> > > > remove the call to mas_wr_store_type() and initialize the write state to
>> > > > continue the partial walk that was done when determining the store type.
>> > > > 
>> > > May I ask what is the partial walk here means?
>> > > 
>> > > It is the mas_wr_walk() in mas_wr_store_type() which is stopped because of it
>> > > is spanning write?
>> > Yes, this is what I meant by the partial walk that's already been started.
>> > It's the walk done by mas_wr_store_type().
>> > 
>> > > I may lost some background, so the assumption here is mas_wr_store_type() has
>> > > already been invoked and the store type has been decided, right?
>> > Ya users of mas_store_prealloc() should have already called mas_preallocate()
>> > which does:
>> > 
>> >      mas->store_type = mas_wr_store_type(&wr_mas);
>> >      request = mas_prealloc_calc(&wr_mas, entry);
>> > 
>> > to set the store type and allocate the nodes.
>> > 
>> > 
>> > > > Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
>> > > > Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
>> > > > ---
>> > > > lib/maple_tree.c | 18 +++++++++++++-----
>> > > > 1 file changed, 13 insertions(+), 5 deletions(-)
>> > > > 
>> > > > diff --git a/lib/maple_tree.c b/lib/maple_tree.c
>> > > > index 8c1a1a483395..73ce63d9c3a0 100644
>> > > > --- a/lib/maple_tree.c
>> > > > +++ b/lib/maple_tree.c
>> > > > @@ -3979,9 +3979,6 @@ static inline void mas_wr_end_piv(struct ma_wr_state *wr_mas)
>> > > > 		wr_mas->end_piv = wr_mas->pivots[wr_mas->offset_end];
>> > > > 	else
>> > > > 		wr_mas->end_piv = wr_mas->mas->max;
>> > > > -
>> > > > -	if (!wr_mas->entry)
>> > > > -		mas_wr_extend_null(wr_mas);
>> > > > }
>> > > > 
>> > > > static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
>> > > > @@ -5532,8 +5529,19 @@ void mas_store_prealloc(struct ma_state *mas, void *entry)
>> > > > {
>> > > > 	MA_WR_STATE(wr_mas, mas, entry);
>> > > > 
>> > > > -	mas_wr_prealloc_setup(&wr_mas);
>> > > > -	mas_wr_store_type(&wr_mas);
>> > > > +	if (mas->store_type == wr_store_root) {
>> > > > +		mas_wr_prealloc_setup(&wr_mas);
>> > > > +		goto store;
>> > > > +	}
>> > > > +
>> > > > +	mas_wr_walk_descend(&wr_mas);
>> > > This one does not descend the tree, just locate the offset in a node and
>> > > adjust min/max. So not look like to continue the partial walk to me.
>> > > 
>> > > > +	if (mas->store_type != wr_spanning_store) {
>> > > > +		/* set wr_mas->content to current slot */
>> > > > +		wr_mas.content = mas_slot_locked(mas, wr_mas.slots, mas->offset);
>> > > > +		mas_wr_end_piv(&wr_mas);
>> > > If not a spanning write, the previous walk should reach a leaf node, right?
>> > Ya that's true.
>> > 
>> > > I am not sure why we don't need to check extend null here. Because we have
>> > > already done it?
>> > 
>> > Ya we extend null in mas_wr_store_type() which has already been called at
>> > this point.
>> > 
>> > 
>> >      /* At this point, we are at the leaf node that needs to be altered. */
>> >      mas_wr_end_piv(wr_mas);
>> >      if (!wr_mas->entry)
>> >          mas_wr_extend_null(wr_mas);
>> > 
>> > Thanks,
>> Hmm... if we have already done this, why we need to do mas_wr_end_piv() again?
>
>The maple write state here is local to this function:
>
>void mas_store_prealloc(struct ma_state *mas, void *entry)
>{
>    MA_WR_STATE(wr_mas, mas, entry);
>
>so we don't retain the wr_end information from the previous call to
>mas_preallocate() and have to repeat it here. The write state is not
>currently exposed so have to call mas_wr_end_piv() again.
>

Thanks, I missed this point.

>Thanks,
>
>Sid
>
>
>> 
>> > Sid
>> > 
>> > > > +	}
>> > > > +
>> > > > +store:
>> > > > 	trace_ma_write(__func__, mas, 0, entry);
>> > > > 	mas_wr_store_entry(&wr_mas);
>> > > > 	MAS_WR_BUG_ON(&wr_mas, mas_is_err(mas));
>> > > > -- 
>> > > > 2.46.0
>> > > > 

-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2024-10-31 23:18 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-08-14 16:19 [PATCH v4 00/17] Introduce a store type enum for the Maple tree Sidhartha Kumar
2024-08-14 16:19 ` [PATCH v4 01/17] maple_tree: introduce store_type enum Sidhartha Kumar
2024-08-14 16:19 ` [PATCH v4 02/17] maple_tree: introduce mas_wr_prealloc_setup() Sidhartha Kumar
2024-08-14 16:19 ` [PATCH v4 03/17] maple_tree: move up mas_wr_store_setup() and mas_wr_prealloc_setup() Sidhartha Kumar
2024-08-14 16:19 ` [PATCH v4 04/17] maple_tree: introduce mas_wr_store_type() Sidhartha Kumar
2024-09-25  2:04   ` Wei Yang
2024-09-25 19:33     ` Sid Kumar
2024-09-25 19:36       ` Sid Kumar
2024-09-25 22:59         ` Wei Yang
2024-08-14 16:19 ` [PATCH v4 05/17] maple_tree: remove mas_destroy() from mas_nomem() Sidhartha Kumar
2024-08-14 16:19 ` [PATCH v4 06/17] maple_tree: preallocate nodes in mas_erase() Sidhartha Kumar
2024-08-14 16:19 ` [PATCH v4 07/17] maple_tree: use mas_store_gfp() in mtree_store_range() Sidhartha Kumar
2024-08-14 16:19 ` [PATCH v4 08/17] maple_tree: print store type in mas_dump() Sidhartha Kumar
2024-08-14 16:19 ` [PATCH v4 09/17] maple_tree: use store type in mas_wr_store_entry() Sidhartha Kumar
2024-08-14 16:19 ` [PATCH v4 10/17] maple_tree: convert mas_insert() to preallocate nodes Sidhartha Kumar
2024-08-14 16:19 ` [PATCH v4 11/17] maple_tree: simplify mas_commit_b_node() Sidhartha Kumar
2024-08-14 16:19 ` [PATCH v4 12/17] maple_tree: remove mas_wr_modify() Sidhartha Kumar
2024-08-14 16:19 ` [PATCH v4 13/17] maple_tree: have mas_store() allocate nodes if needed Sidhartha Kumar
2024-08-14 16:19 ` [PATCH v4 14/17] maple_tree: remove node allocations from various write helper functions Sidhartha Kumar
2024-08-14 16:19 ` [PATCH v4 15/17] maple_tree: remove repeated sanity checks from " Sidhartha Kumar
2024-08-14 16:19 ` [PATCH v4 16/17] maple_tree: remove unneeded mas_wr_walk() in mas_store_prealloc() Sidhartha Kumar
2024-10-24  1:20   ` Wei Yang
2024-10-25 19:54     ` Sid Kumar
2024-10-25 23:58       ` Wei Yang
2024-10-29 15:46         ` Sid Kumar
2024-10-31 23:18           ` Wei Yang
2024-08-14 16:19 ` [PATCH v4 17/17] maple_tree: make write helper functions void Sidhartha Kumar

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox