From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Patch "maple_tree: add RCU lock checking to rcu callback functions" has been added to the 6.1-stable tree
To: Liam.Howlett@Oracle.com, Liam.Howlett@oracle.com, gregkh@linuxfoundation.org, linux-mm@kvack.org, maple-tree@lists.infradead.org, surenb@google.com
Cc:
From:
Date: Wed, 12 Apr 2023 10:13:15 +0200
In-Reply-To: <20230411151055.2910579-14-Liam.Howlett@oracle.com>
Message-ID:
<2023041215-juniper-unbiased-db63@gregkh>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
X-stable: commit
X-Patchwork-Hint: ignore

This is a note to let you know that I've just added the patch titled

    maple_tree: add RCU lock checking to rcu callback functions

to the 6.1-stable tree which can be found at:
	http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     maple_tree-add-rcu-lock-checking-to-rcu-callback-functions.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From stable-owner@vger.kernel.org Tue Apr 11 17:13:37 2023
From: "Liam R. Howlett"
Date: Tue, 11 Apr 2023 11:10:54 -0400
Subject: maple_tree: add RCU lock checking to rcu callback functions
To: Greg Kroah-Hartman , stable@vger.kernel.org
Cc: maple-tree@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Liam R. Howlett" , Suren Baghdasaryan , "Liam R . Howlett"
Message-ID: <20230411151055.2910579-14-Liam.Howlett@oracle.com>

From: "Liam R. Howlett"

commit 790e1fa86b340c2bd4a327e01c161f7a1ad885f6 upstream.

Dereferencing RCU objects within the RCU callback without the RCU check has
caused lockdep to complain.  Fix the RCU dereferencing by using the RCU
callback lock to ensure the operation is safe.

Also stop creating a new lock to use for dereferencing during destruction of
the tree or subtree.  Instead, pass through a pointer to the tree that has the
lock that is held for RCU dereferencing checking.  It also does not make sense
to use the maple state in the freeing scenario as the tree walk is a special
case where the tree no longer has the normal encodings and parent pointers.

Link: https://lkml.kernel.org/r/20230227173632.3292573-8-surenb@google.com
Fixes: 54a611b60590 ("Maple Tree: add new data structure")
Cc: stable@vger.kernel.org
Reported-by: Suren Baghdasaryan
Signed-off-by: Liam R. Howlett
Signed-off-by: Greg Kroah-Hartman
---
 lib/maple_tree.c |  188 ++++++++++++++++++++++++++++---------------------------
 1 file changed, 96 insertions(+), 92 deletions(-)

--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -814,6 +814,11 @@ static inline void *mt_slot(const struct
 	return rcu_dereference_check(slots[offset], mt_locked(mt));
 }
 
+static inline void *mt_slot_locked(struct maple_tree *mt, void __rcu **slots,
+				   unsigned char offset)
+{
+	return rcu_dereference_protected(slots[offset], mt_locked(mt));
+}
 
 /*
  * mas_slot_locked() - Get the slot value when holding the maple tree lock.
  * @mas: The maple state
@@ -825,7 +830,7 @@ static inline void *mt_slot(const struct
 static inline void *mas_slot_locked(struct ma_state *mas, void __rcu **slots,
 				    unsigned char offset)
 {
-	return rcu_dereference_protected(slots[offset], mt_locked(mas->tree));
+	return mt_slot_locked(mas->tree, slots, offset);
 }
 
 /*
@@ -897,34 +902,35 @@ static inline void ma_set_meta(struct ma
 }
 
 /*
- * mas_clear_meta() - clear the metadata information of a node, if it exists
- * @mas: The maple state
+ * mt_clear_meta() - clear the metadata information of a node, if it exists
+ * @mt: The maple tree
  * @mn: The maple node
- * @mt: The maple node type
+ * @type: The maple node type
  * @offset: The offset of the highest sub-gap in this node.
  * @end: The end of the data in this node.
  */
-static inline void mas_clear_meta(struct ma_state *mas, struct maple_node *mn,
-				  enum maple_type mt)
+static inline void mt_clear_meta(struct maple_tree *mt, struct maple_node *mn,
+				 enum maple_type type)
 {
 	struct maple_metadata *meta;
 	unsigned long *pivots;
 	void __rcu **slots;
 	void *next;
 
-	switch (mt) {
+	switch (type) {
 	case maple_range_64:
 		pivots = mn->mr64.pivot;
 		if (unlikely(pivots[MAPLE_RANGE64_SLOTS - 2])) {
 			slots = mn->mr64.slot;
-			next = mas_slot_locked(mas, slots,
-					       MAPLE_RANGE64_SLOTS - 1);
-			if (unlikely((mte_to_node(next) && mte_node_type(next))))
-				return; /* The last slot is a node, no metadata */
+			next = mt_slot_locked(mt, slots,
+					      MAPLE_RANGE64_SLOTS - 1);
+			if (unlikely((mte_to_node(next) &&
+				      mte_node_type(next))))
+				return; /* no metadata, could be node */
 		}
 		fallthrough;
 	case maple_arange_64:
-		meta = ma_meta(mn, mt);
+		meta = ma_meta(mn, type);
 		break;
 	default:
 		return;
@@ -5472,7 +5478,7 @@ no_gap:
 }
 
 /*
- * mas_dead_leaves() - Mark all leaves of a node as dead.
+ * mte_dead_leaves() - Mark all leaves of a node as dead.
  * @mas: The maple state
  * @slots: Pointer to the slot array
  * @type: The maple node type
@@ -5482,16 +5488,16 @@ no_gap:
  * Return: The number of leaves marked as dead.
  */
 static inline
-unsigned char mas_dead_leaves(struct ma_state *mas, void __rcu **slots,
-			      enum maple_type mt)
+unsigned char mte_dead_leaves(struct maple_enode *enode, struct maple_tree *mt,
+			      void __rcu **slots)
 {
 	struct maple_node *node;
 	enum maple_type type;
 	void *entry;
 	int offset;
 
-	for (offset = 0; offset < mt_slots[mt]; offset++) {
-		entry = mas_slot_locked(mas, slots, offset);
+	for (offset = 0; offset < mt_slot_count(enode); offset++) {
+		entry = mt_slot(mt, slots, offset);
 		type = mte_node_type(entry);
 		node = mte_to_node(entry);
 		/* Use both node and type to catch LE & BE metadata */
@@ -5506,162 +5512,160 @@ unsigned char mas_dead_leaves(struct ma_
 	return offset;
 }
 
-static void __rcu **mas_dead_walk(struct ma_state *mas, unsigned char offset)
+/**
+ * mte_dead_walk() - Walk down a dead tree to just before the leaves
+ * @enode: The maple encoded node
+ * @offset: The starting offset
+ *
+ * Note: This can only be used from the RCU callback context.
+ */
+static void __rcu **mte_dead_walk(struct maple_enode **enode, unsigned char offset)
 {
-	struct maple_node *next;
+	struct maple_node *node, *next;
 	void __rcu **slots = NULL;
 
-	next = mas_mn(mas);
+	next = mte_to_node(*enode);
 	do {
-		mas->node = mt_mk_node(next, next->type);
-		slots = ma_slots(next, next->type);
-		next = mas_slot_locked(mas, slots, offset);
+		*enode = ma_enode_ptr(next);
+		node = mte_to_node(*enode);
+		slots = ma_slots(node, node->type);
+		next = rcu_dereference_protected(slots[offset],
+					lock_is_held(&rcu_callback_map));
 		offset = 0;
 	} while (!ma_is_leaf(next->type));
 
 	return slots;
 }
 
+/**
+ * mt_free_walk() - Walk & free a tree in the RCU callback context
+ * @head: The RCU head that's within the node.
+ *
+ * Note: This can only be used from the RCU callback context.
+ */
 static void mt_free_walk(struct rcu_head *head)
 {
 	void __rcu **slots;
 	struct maple_node *node, *start;
-	struct maple_tree mt;
+	struct maple_enode *enode;
 	unsigned char offset;
 	enum maple_type type;
-	MA_STATE(mas, &mt, 0, 0);
 
 	node = container_of(head, struct maple_node, rcu);
 
 	if (ma_is_leaf(node->type))
 		goto free_leaf;
 
-	mt_init_flags(&mt, node->ma_flags);
-	mas_lock(&mas);
 	start = node;
-	mas.node = mt_mk_node(node, node->type);
-	slots = mas_dead_walk(&mas, 0);
-	node = mas_mn(&mas);
+	enode = mt_mk_node(node, node->type);
+	slots = mte_dead_walk(&enode, 0);
+	node = mte_to_node(enode);
 	do {
 		mt_free_bulk(node->slot_len, slots);
 		offset = node->parent_slot + 1;
-		mas.node = node->piv_parent;
-		if (mas_mn(&mas) == node)
-			goto start_slots_free;
-
-		type = mte_node_type(mas.node);
-		slots = ma_slots(mte_to_node(mas.node), type);
-		if ((offset < mt_slots[type]) && (slots[offset]))
-			slots = mas_dead_walk(&mas, offset);
-
-		node = mas_mn(&mas);
+		enode = node->piv_parent;
+		if (mte_to_node(enode) == node)
+			goto free_leaf;
+
+		type = mte_node_type(enode);
+		slots = ma_slots(mte_to_node(enode), type);
+		if ((offset < mt_slots[type]) &&
+		    rcu_dereference_protected(slots[offset],
+					      lock_is_held(&rcu_callback_map)))
+			slots = mte_dead_walk(&enode, offset);
+		node = mte_to_node(enode);
 	} while ((node != start) || (node->slot_len < offset));
 
 	slots = ma_slots(node, node->type);
 	mt_free_bulk(node->slot_len, slots);
-start_slots_free:
-	mas_unlock(&mas);
 
 free_leaf:
 	mt_free_rcu(&node->rcu);
 }
 
-static inline void __rcu **mas_destroy_descend(struct ma_state *mas,
-		struct maple_enode *prev, unsigned char offset)
+static inline void __rcu **mte_destroy_descend(struct maple_enode **enode,
+	struct maple_tree *mt, struct maple_enode *prev, unsigned char offset)
 {
 	struct maple_node *node;
-	struct maple_enode *next = mas->node;
+	struct maple_enode *next = *enode;
 	void __rcu **slots = NULL;
+	enum maple_type type;
+	unsigned char next_offset = 0;
 
 	do {
-		mas->node = next;
-		node = mas_mn(mas);
-		slots = ma_slots(node, mte_node_type(mas->node));
-		next = mas_slot_locked(mas, slots, 0);
-		if ((mte_dead_node(next))) {
-			mte_to_node(next)->type = mte_node_type(next);
-			next = mas_slot_locked(mas, slots, 1);
-		}
+		*enode = next;
+		node = mte_to_node(*enode);
+		type = mte_node_type(*enode);
+		slots = ma_slots(node, type);
+		next = mt_slot_locked(mt, slots, next_offset);
+		if ((mte_dead_node(next)))
+			next = mt_slot_locked(mt, slots, ++next_offset);
 
-		mte_set_node_dead(mas->node);
-		node->type = mte_node_type(mas->node);
-		mas_clear_meta(mas, node, node->type);
+		mte_set_node_dead(*enode);
+		node->type = type;
 		node->piv_parent = prev;
 		node->parent_slot = offset;
-		offset = 0;
-		prev = mas->node;
+		offset = next_offset;
+		next_offset = 0;
+		prev = *enode;
 	} while (!mte_is_leaf(next));
 
 	return slots;
 }
 
-static void mt_destroy_walk(struct maple_enode *enode, unsigned char ma_flags,
+static void mt_destroy_walk(struct maple_enode *enode, struct maple_tree *mt,
 			    bool free)
 {
 	void __rcu **slots;
 	struct maple_node *node = mte_to_node(enode);
 	struct maple_enode *start;
-	struct maple_tree mt;
-
-	MA_STATE(mas, &mt, 0, 0);
-
-	mas.node = enode;
 	if (mte_is_leaf(enode)) {
 		node->type = mte_node_type(enode);
 		goto free_leaf;
 	}
 
-	ma_flags &= ~MT_FLAGS_LOCK_MASK;
-	mt_init_flags(&mt, ma_flags);
-	mas_lock(&mas);
-
-	mte_to_node(enode)->ma_flags = ma_flags;
 	start = enode;
-	slots = mas_destroy_descend(&mas, start, 0);
-	node = mas_mn(&mas);
+	slots = mte_destroy_descend(&enode, mt, start, 0);
+	node = mte_to_node(enode); // Updated in the above call.
 	do {
 		enum maple_type type;
 		unsigned char offset;
 		struct maple_enode *parent, *tmp;
 
-		node->type = mte_node_type(mas.node);
-		node->slot_len = mas_dead_leaves(&mas, slots, node->type);
+		node->slot_len = mte_dead_leaves(enode, mt, slots);
 		if (free)
 			mt_free_bulk(node->slot_len, slots);
 		offset = node->parent_slot + 1;
-		mas.node = node->piv_parent;
-		if (mas_mn(&mas) == node)
-			goto start_slots_free;
+		enode = node->piv_parent;
+		if (mte_to_node(enode) == node)
+			goto free_leaf;
 
-		type = mte_node_type(mas.node);
-		slots = ma_slots(mte_to_node(mas.node), type);
+		type = mte_node_type(enode);
+		slots = ma_slots(mte_to_node(enode), type);
 		if (offset >= mt_slots[type])
 			goto next;
 
-		tmp = mas_slot_locked(&mas, slots, offset);
+		tmp = mt_slot_locked(mt, slots, offset);
 		if (mte_node_type(tmp) && mte_to_node(tmp)) {
-			parent = mas.node;
-			mas.node = tmp;
-			slots = mas_destroy_descend(&mas, parent, offset);
+			parent = enode;
+			enode = tmp;
+			slots = mte_destroy_descend(&enode, mt, parent, offset);
 		}
 next:
-		node = mas_mn(&mas);
-	} while (start != mas.node);
+		node = mte_to_node(enode);
+	} while (start != enode);
 
-	node = mas_mn(&mas);
-	node->type = mte_node_type(mas.node);
-	node->slot_len = mas_dead_leaves(&mas, slots, node->type);
+	node = mte_to_node(enode);
+	node->slot_len = mte_dead_leaves(enode, mt, slots);
 	if (free)
 		mt_free_bulk(node->slot_len, slots);
 
-start_slots_free:
-	mas_unlock(&mas);
-
 free_leaf:
 	if (free)
 		mt_free_rcu(&node->rcu);
 	else
-		mas_clear_meta(&mas, node, node->type);
+		mt_clear_meta(mt, node, node->type);
 }
 
 /*
@@ -5677,10 +5681,10 @@ static inline void mte_destroy_walk(stru
 	struct maple_node *node = mte_to_node(enode);
 
 	if (mt_in_rcu(mt)) {
-		mt_destroy_walk(enode, mt->ma_flags, false);
+		mt_destroy_walk(enode, mt, false);
 		call_rcu(&node->rcu, mt_free_walk);
 	} else {
-		mt_destroy_walk(enode, mt->ma_flags, true);
+		mt_destroy_walk(enode, mt, true);
 	}
 }


Patches currently in stable-queue which might be from stable-owner@vger.kernel.org are
queue-6.1/maple_tree-fix-potential-rcu-issue.patch
queue-6.1/maple_tree-add-smp_rmb-to-dead-node-detection.patch
queue-6.1/maple_tree-add-rcu-lock-checking-to-rcu-callback-functions.patch
queue-6.1/maple_tree-fix-handle-of-invalidated-state-in-mas_wr_store_setup.patch
queue-6.1/maple_tree-reduce-user-error-potential.patch
queue-6.1/maple_tree-fix-mas_prev-and-mas_find-state-handling.patch
queue-6.1/maple_tree-remove-gfp_zero-from-kmem_cache_alloc-and-kmem_cache_alloc_bulk.patch
queue-6.1/maple_tree-be-more-cautious-about-dead-nodes.patch
queue-6.1/mm-enable-maple-tree-rcu-mode-by-default.patch
queue-6.1/maple_tree-detect-dead-nodes-in-mas_start.patch
queue-6.1/maple_tree-fix-freeing-of-nodes-in-rcu-mode.patch
queue-6.1/maple_tree-remove-extra-smp_wmb-from-mas_dead_leaves.patch
queue-6.1/maple_tree-refine-ma_state-init-from-mas_start.patch
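
[Editor's note, not part of the patch mail: the core idiom the patch applies — citing RCU's own callback lockdep map instead of taking a lock inside a call_rcu() callback — can be sketched as follows. This is an illustrative kernel-C fragment with a hypothetical `demo_node` type; it is not from lib/maple_tree.c and is not compilable outside a kernel tree. `rcu_dereference_protected()`, `lock_is_held()`, and `rcu_callback_map` are real kernel APIs.]

/* Hypothetical node type, for illustration only. */
struct demo_node {
	struct demo_node __rcu *child;
	struct rcu_head rcu;
};

static void demo_free_walk(struct rcu_head *head)
{
	struct demo_node *node = container_of(head, struct demo_node, rcu);
	struct demo_node *child;

	/*
	 * Inside a call_rcu() callback no spinlock is held, so a plain
	 * rcu_dereference() without a check would make lockdep complain.
	 * RCU holds rcu_callback_map across callback invocation, so citing
	 * it both documents and verifies that this dereference is safe
	 * without creating a new lock just for the teardown walk.
	 */
	child = rcu_dereference_protected(node->child,
					  lock_is_held(&rcu_callback_map));
	if (child)
		kfree(child);
	kfree(node);
}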