From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <3f4e73cc-1a98-95a8-9ab2-47797d236585@bytedance.com>
Date: Wed, 16 Aug 2023 21:41:46 +0800
Subject: Re: [PATCH 04/11] maple_tree: Introduce interfaces __mt_dup() and mt_dup()
From: Peng Zhang <zhangpeng.00@bytedance.com>
To: "Liam R. Howlett", Peng Zhang, willy@infradead.org,
 michael.christie@oracle.com, surenb@google.com, npiggin@gmail.com,
 corbet@lwn.net, mathieu.desnoyers@efficios.com, avagin@gmail.com,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org,
 brauner@kernel.org, peterz@infradead.org
References: <20230726080916.17454-1-zhangpeng.00@bytedance.com>
 <20230726080916.17454-5-zhangpeng.00@bytedance.com>
 <20230726160354.konsgq6hidj7gr5u@revolver>
 <20230731162714.4x3lzymuyvu2mter@revolver>
In-Reply-To: <20230731162714.4x3lzymuyvu2mter@revolver>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 2023/8/1 00:27, Liam R. Howlett wrote:
> * Peng Zhang [230731 08:24]:
>>
>>
>> On 2023/7/27 00:03, Liam R. Howlett wrote:
>>> * Peng Zhang [230726 04:10]:
>>>> Introduce interfaces __mt_dup() and mt_dup(), which are used to
>>>> duplicate a maple tree. Compared with traversing the source tree and
>>>> reinserting entry by entry in the new tree, it has better performance.
>>>> The difference between __mt_dup() and mt_dup() is that mt_dup() holds
>>>> an internal lock.
>>>>
>>>> Signed-off-by: Peng Zhang
>>>> ---
>>>>  include/linux/maple_tree.h |   3 +
>>>>  lib/maple_tree.c           | 211 +++++++++++++++++++++++++++++++++++++
>>>>  2 files changed, 214 insertions(+)
>>>>
>>>> diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
>>>> index c962af188681..229fe78e4c89 100644
>>>> --- a/include/linux/maple_tree.h
>>>> +++ b/include/linux/maple_tree.h
>>>> @@ -327,6 +327,9 @@ int mtree_store(struct maple_tree *mt, unsigned long index,
>>>>  		void *entry, gfp_t gfp);
>>>>  void *mtree_erase(struct maple_tree *mt, unsigned long index);
>>>> +int mt_dup(struct maple_tree *mt, struct maple_tree *new, gfp_t gfp);
>>>> +int __mt_dup(struct maple_tree *mt, struct maple_tree *new, gfp_t gfp);
>>>> +
>>>>  void mtree_destroy(struct maple_tree *mt);
>>>>  void __mt_destroy(struct maple_tree *mt);
>>>> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
>>>> index da3a2fb405c0..efac6761ae37 100644
>>>> --- a/lib/maple_tree.c
>>>> +++ b/lib/maple_tree.c
>>>> @@ -6595,6 +6595,217 @@ void *mtree_erase(struct maple_tree *mt, unsigned long index)
>>>>  }
>>>>  EXPORT_SYMBOL(mtree_erase);
>>>> +/*
>>>> + * mt_dup_free() - Free the nodes of an incomplete maple tree.
>>>> + * @mt: The incomplete maple tree
>>>> + * @node: Free nodes from @node
>>>> + *
>>>> + * This function frees all nodes starting from @node in the reverse order of
>>>> + * mt_dup_build(). At this point we don't need to hold the source tree lock.
>>>> + */
>>>> +static void mt_dup_free(struct maple_tree *mt, struct maple_node *node)
>>>> +{
>>>> +	void **slots;
>>>> +	unsigned char offset;
>>>> +	struct maple_enode *enode;
>>>> +	enum maple_type type;
>>>> +	unsigned char count = 0, i;
>>>> +
>>>
>>> Can we make these labels inline functions and try to make this a loop?
>> I did this just to make things easier. Refer to the implementation of
>> walk_tg_tree_from() in sched/core.c. Using some loops and inline
>> functions probably doesn't simplify things.
>> I'll try to do that and give
>> up if it complicates things.
>
> Thanks, I'd like to try and simplify the code instead of adding goto
> label loops.  The code you are referencing is from 2008 and goto loops
> are not common.
>
>>>
>>>> +try_ascend:
>>>> +	if (ma_is_root(node)) {
>>>> +		mt_free_one(node);
>>>> +		return;
>>>> +	}
>>>> +
>>>> +	offset = ma_parent_slot(node);
>>>> +	type = ma_parent_type(mt, node);
>>>> +	node = ma_parent(node);
>>>> +	if (!offset)
>>>> +		goto free;
>>>> +
>>>> +	offset--;
>>>> +
>>>> +descend:
>>>> +	slots = (void **)ma_slots(node, type);
>>>> +	enode = slots[offset];
>>>> +	if (mte_is_leaf(enode))
>>>> +		goto free;
>>>> +
>>>> +	type = mte_node_type(enode);
>>>> +	node = mte_to_node(enode);
>>>> +	offset = ma_nonleaf_data_end_nocheck(node, type);
>>>> +	goto descend;
>>>> +
>>>> +free:
>>>> +	slots = (void **)ma_slots(node, type);
>>>> +	count = ma_nonleaf_data_end_nocheck(node, type) + 1;
>>>> +	for (i = 0; i < count; i++)
>>>> +		((unsigned long *)slots)[i] &= ~MAPLE_NODE_MASK;
>>>> +
>>>> +	/* Cast to __rcu to avoid sparse checker complaining. */
>>>> +	mt_free_bulk(count, (void __rcu **)slots);
>>>> +	goto try_ascend;
>>>> +}
>>>> +
>>>> +/*
>>>> + * mt_dup_build() - Build a new maple tree from a source tree
>>>> + * @mt: The source maple tree to copy from
>>>> + * @new: The new maple tree
>>>> + * @gfp: The GFP_FLAGS to use for allocations
>>>> + * @to_free: Free nodes starting from @to_free if the build fails
>>>> + *
>>>> + * This function builds a new tree in DFS preorder. If it fails due to memory
>>>> + * allocation, @to_free will store the last failed node to free the incomplete
>>>> + * tree. Use mt_dup_free() to free nodes.
>>>> + *
>>>> + * Return: 0 on success, -ENOMEM if memory could not be allocated.
>>>> + */
>>>> +static inline int mt_dup_build(struct maple_tree *mt, struct maple_tree *new,
>>>> +			       gfp_t gfp, struct maple_node **to_free)
>>>
>>> I am trying to change the functions to be two tabs of indent for
>>> arguments from now on.  It allows for more to fit on a single line and
>>> still maintains a clear separation between code and argument lists.
>> I'm not too concerned about code formatting... At least not in this
>> patchset.
>
> I have a mess of it in the tree and wanted to communicate my desire to
> shift to using two tabs for extra arguments in the future.
>
>>>
>>>> +{
>>>> +	struct maple_enode *enode;
>>>> +	struct maple_node *new_node, *new_parent = NULL, *node;
>>>> +	enum maple_type type;
>>>> +	void __rcu **slots;
>>>> +	void **new_slots;
>>>> +	unsigned char count, request, i, offset;
>>>> +	unsigned long *set_parent;
>>>> +	unsigned long new_root;
>>>> +
>>>> +	mt_init_flags(new, mt->ma_flags);
>>>> +	enode = mt_root_locked(mt);
>>>> +	if (unlikely(!xa_is_node(enode))) {
>>>> +		rcu_assign_pointer(new->ma_root, enode);
>>>> +		return 0;
>>>> +	}
>>>> +
>>>> +	new_node = mt_alloc_one(gfp);
>>>> +	if (!new_node)
>>>> +		return -ENOMEM;
>>>> +
>>>> +	new_root = (unsigned long)new_node;
>>>> +	new_root |= (unsigned long)enode & MAPLE_NODE_MASK;
>>>> +
>>>> +copy_node:
>>>
>>> Can you make copy_node, descend, ascend inline functions instead of the
>>> goto jumping please?  It's better to have loops over jumping around a
>>> lot.  Gotos are good for undoing things and retry, but constructing
>>> loops with them makes it difficult to follow.
>> Same as above.
>>>
>>>> +	node = mte_to_node(enode);
>>>> +	type = mte_node_type(enode);
>>>> +	memcpy(new_node, node, sizeof(struct maple_node));
>>>> +
>>>> +	set_parent = (unsigned long *)&(new_node->parent);
>>>> +	*set_parent &= MAPLE_NODE_MASK;
>>>> +	*set_parent |= (unsigned long)new_parent;
>>>
>>> Maybe make a small inline to set the parent instead of this?
>>>
>>> There are some defined helpers for setting the types like
>>> ma_parent_ptr() and ma_enode_ptr() to make casting more type-safe.
>> Ok, I'll try to do that.
>>>
>>>> +	if (ma_is_leaf(type))
>>>> +		goto ascend;
>>>> +
>>>> +	new_slots = (void **)ma_slots(new_node, type);
>>>> +	slots = ma_slots(node, type);
>>>> +	request = ma_nonleaf_data_end(mt, node, type) + 1;
>>>> +	count = mt_alloc_bulk(gfp, request, new_slots);
>>>> +	if (!count) {
>>>> +		*to_free = new_node;
>>>> +		return -ENOMEM;
>>>> +	}
>>>> +
>>>> +	for (i = 0; i < count; i++)
>>>> +		((unsigned long *)new_slots)[i] |=
>>>> +			((unsigned long)mt_slot_locked(mt, slots, i) &
>>>> +			 MAPLE_NODE_MASK);
>>>> +	offset = 0;
>>>> +
>>>> +descend:
>>>> +	new_parent = new_node;
>>>> +	enode = mt_slot_locked(mt, slots, offset);
>>>> +	new_node = mte_to_node(new_slots[offset]);
>>>> +	goto copy_node;
>>>> +
>>>> +ascend:
>>>> +	if (ma_is_root(node)) {
>>>> +		new_node = mte_to_node((void *)new_root);
>>>> +		new_node->parent = ma_parent_ptr((unsigned long)new |
>>>> +						 MA_ROOT_PARENT);
>>>> +		rcu_assign_pointer(new->ma_root, (void *)new_root);
>>>> +		return 0;
>>>> +	}
>>>> +
>>>> +	offset = ma_parent_slot(node);
>>>> +	type = ma_parent_type(mt, node);
>>>> +	node = ma_parent(node);
>>>> +	new_node = ma_parent(new_node);
>>>> +	if (offset < ma_nonleaf_data_end(mt, node, type)) {
>>>> +		offset++;
>>>> +		new_slots = (void **)ma_slots(new_node, type);
>>>> +		slots = ma_slots(node, type);
>>>> +		goto descend;
>>>> +	}
>>>> +
>>>> +	goto ascend;
>>>> +}
>>>> +
>>>> +/**
>>>> + * __mt_dup(): Duplicate a maple tree
>>>> + * @mt: The source maple tree
>>>> + * @new: The new maple tree
>>>> + * @gfp: The GFP_FLAGS to use for allocations
>>>> + *
>>>> + * This function duplicates a maple tree using a faster method than traversing
>>>> + * the source tree and inserting entries into the new tree one by one. The user
>>>> + * needs to lock the source tree manually.
>>>> + * Before calling this function, @new
>>>> + * must be an empty tree or an uninitialized tree. If @mt uses an external
>>>> + * lock, we may also need to manually set @new's external lock using
>>>> + * mt_set_external_lock().
>>>> + *
>>>> + * Return: 0 on success, -ENOMEM if memory could not be allocated.
>>>> + */
>>>> +int __mt_dup(struct maple_tree *mt, struct maple_tree *new, gfp_t gfp)
>>>
>>> We use mas_ for things that won't handle the locking and pass in a maple
>>> state.  Considering the leaves need to be altered once this is returned,
>>> I would expect passing in a maple state should be feasible?
>> But we don't really need mas here. What do you think the state of mas
>> should be when this function returns? Make it point to the first entry,
>> or the last entry?
>
> I would write it to point to the first element so that the call to
> replace the first element can just do that without an extra walk and
> document the maple state end point.

Unfortunately, this does not seem to be convenient. Users usually use
mas_for_each() to replace elements, and if we set mas to the first
element, the first call to mas_find() in mas_for_each() will skip it and
return the next element. There may also be scenarios where the user does
not need to replace every element. Finally, recording the first element
in __mt_dup() requires an additional check of whether it has already
been recorded, and that check would run at every leaf node, which is
unnecessary overhead. The first reason is the main one, since it
prevents users from relying on mas_for_each(), so I don't want to record
the first element.

>
>>>
>>>> +{
>>>> +	int ret;
>>>> +	struct maple_node *to_free = NULL;
>>>> +
>>>> +	ret = mt_dup_build(mt, new, gfp, &to_free);
>>>> +
>>>> +	if (unlikely(ret == -ENOMEM)) {
>>>
>>> On other errors, will the half constructed tree be returned?  Is this
>>> safe?
>> Of course, mt_dup_free() is carefully designed to handle this.
>>>
>>>> +		if (to_free)
>>>> +			mt_dup_free(new, to_free);
>>>> +	}
>>>> +
>>>> +	return ret;
>>>> +}
>>>> +EXPORT_SYMBOL(__mt_dup);
>>>> +
>>>> +/**
>>>> + * mt_dup(): Duplicate a maple tree
>>>> + * @mt: The source maple tree
>>>> + * @new: The new maple tree
>>>> + * @gfp: The GFP_FLAGS to use for allocations
>>>> + *
>>>> + * This function duplicates a maple tree using a faster method than traversing
>>>> + * the source tree and inserting entries into the new tree one by one. The
>>>> + * function will lock the source tree with an internal lock, and the user does
>>>> + * not need to manually handle the lock. Before calling this function, @new must
>>>> + * be an empty tree or an uninitialized tree. If @mt uses an external lock, we
>>>> + * may also need to manually set @new's external lock using
>>>> + * mt_set_external_lock().
>>>> + *
>>>> + * Return: 0 on success, -ENOMEM if memory could not be allocated.
>>>> + */
>>>> +int mt_dup(struct maple_tree *mt, struct maple_tree *new, gfp_t gfp)
>>>
>>> mtree_ is usually used to indicate locking is handled.
>> Before unifying mtree_* and mt_*, I don't think I can see any difference
>> between them. At least mt_set_in_rcu() and mt_clear_in_rcu() will hold
>> the lock.
>
> Fair enough.  I was thinking this closely matches __mt_destroy() and
> mtree_destroy().  We could be consistent in our inconsistency, at least.
>
>>>
>>>> +{
>>>> +	int ret;
>>>> +	struct maple_node *to_free = NULL;
>>>> +
>>>> +	mtree_lock(mt);
>>>> +	ret = mt_dup_build(mt, new, gfp, &to_free);
>>>> +	mtree_unlock(mt);
>>>> +
>>>> +	if (unlikely(ret == -ENOMEM)) {
>>>> +		if (to_free)
>>>> +			mt_dup_free(new, to_free);
>>>
>>> Again, is a half constructed tree safe to return?  Since each caller
>>> checks to_free is NULL, could that be in mt_dup_free() instead?
>> Yes, this check can be put in mt_dup_free().
>>>
>>>> +	}
>>>> +
>>>> +	return ret;
>>>> +}
>>>> +EXPORT_SYMBOL(mt_dup);
>>>> +
>>>>  /**
>>>>   * __mt_destroy() - Walk and free all nodes of a locked maple tree.
>>>>   * @mt: The maple tree
>>>> --
>>>> 2.20.1
>>>>
>>>>