From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Patch "maple_tree: remove GFP_ZERO from kmem_cache_alloc() and kmem_cache_alloc_bulk()" has been added to the 6.1-stable tree
To: Liam.Howlett@Oracle.com, Liam.Howlett@oracle.com, gregkh@linuxfoundation.org, jhladky@redhat.com, linux-mm@kvack.org, maple-tree@lists.infradead.org, willy@infradead.org
Cc:
From:
Date: Wed, 12 Apr 2023 10:13:30 +0200
In-Reply-To: <20230411151055.2910579-2-Liam.Howlett@oracle.com>
Message-ID: <2023041230-pendant-heaviness-7215@gregkh>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
X-stable: commit
X-Patchwork-Hint: ignore

This is a note to let you know that I've just added the patch titled

    maple_tree: remove GFP_ZERO from kmem_cache_alloc() and kmem_cache_alloc_bulk()

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
    maple_tree-remove-gfp_zero-from-kmem_cache_alloc-and-kmem_cache_alloc_bulk.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From stable-owner@vger.kernel.org Tue Apr 11 17:13:02 2023
From: "Liam R. Howlett"
Date: Tue, 11 Apr 2023 11:10:42 -0400
Subject: maple_tree: remove GFP_ZERO from kmem_cache_alloc() and kmem_cache_alloc_bulk()
To: Greg Kroah-Hartman, stable@vger.kernel.org
Cc: maple-tree@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Howlett" , Liam Howlett , Jirka Hladky , Matthew Wilcox Message-ID: <20230411151055.2910579-2-Liam.Howlett@oracle.com> From: "Liam R. Howlett" commit 541e06b772c1aaffb3b6a245ccface36d7107af2 upstream. Preallocations are common in the VMA code to avoid allocating under certain locking conditions. The preallocations must also cover the worst-case scenario. Removing the GFP_ZERO flag from the kmem_cache_alloc() (and bulk variant) calls will reduce the amount of time spent zeroing memory that may not be used. Only zero out the necessary area to keep track of the allocations in the maple state. Zero the entire node prior to using it in the tree. This required internal changes to node counting on allocation, so the test code is also updated. This restores some micro-benchmark performance: up to +9% in mmtests mmap1 by my testing +10% to +20% in mmap, mmapaddr, mmapmany tests reported by Red Hat Link: https://bugzilla.redhat.com/show_bug.cgi?id=2149636 Link: https://lkml.kernel.org/r/20230105160427.2988454-1-Liam.Howlett@oracle.com Cc: stable@vger.kernel.org Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Liam Howlett Reported-by: Jirka Hladky Suggested-by: Matthew Wilcox (Oracle) Signed-off-by: Greg Kroah-Hartman --- lib/maple_tree.c | 80 ++++++++++++++++++++------------------- tools/testing/radix-tree/maple.c | 18 ++++---- 2 files changed, 52 insertions(+), 46 deletions(-) --- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -149,13 +149,12 @@ struct maple_subtree_state { /* Functions */ static inline struct maple_node *mt_alloc_one(gfp_t gfp) { - return kmem_cache_alloc(maple_node_cache, gfp | __GFP_ZERO); + return kmem_cache_alloc(maple_node_cache, gfp); } static inline int mt_alloc_bulk(gfp_t gfp, size_t size, void **nodes) { - return kmem_cache_alloc_bulk(maple_node_cache, gfp | __GFP_ZERO, size, - nodes); + return kmem_cache_alloc_bulk(maple_node_cache, gfp, size, nodes); } static inline void mt_free_bulk(size_t size, void __rcu **nodes) @@ -1123,9 +1122,10 @@ static inline struct maple_node *mas_pop { struct maple_alloc *ret, *node = mas->alloc; unsigned long total = mas_allocated(mas); + unsigned int req = mas_alloc_req(mas); /* nothing or a request pending. */ - if (unlikely(!total)) + if (WARN_ON(!total)) return NULL; if (total == 1) { @@ -1135,27 +1135,25 @@ static inline struct maple_node *mas_pop goto single_node; } - if (!node->node_count) { + if (node->node_count == 1) { /* Single allocation in this node. 
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -149,13 +149,12 @@ struct maple_subtree_state {
 /* Functions */
 static inline struct maple_node *mt_alloc_one(gfp_t gfp)
 {
-	return kmem_cache_alloc(maple_node_cache, gfp | __GFP_ZERO);
+	return kmem_cache_alloc(maple_node_cache, gfp);
 }
 
 static inline int mt_alloc_bulk(gfp_t gfp, size_t size, void **nodes)
 {
-	return kmem_cache_alloc_bulk(maple_node_cache, gfp | __GFP_ZERO, size,
-				     nodes);
+	return kmem_cache_alloc_bulk(maple_node_cache, gfp, size, nodes);
 }
 
 static inline void mt_free_bulk(size_t size, void __rcu **nodes)
@@ -1123,9 +1122,10 @@ static inline struct maple_node *mas_pop
 {
 	struct maple_alloc *ret, *node = mas->alloc;
 	unsigned long total = mas_allocated(mas);
+	unsigned int req = mas_alloc_req(mas);
 
 	/* nothing or a request pending. */
-	if (unlikely(!total))
+	if (WARN_ON(!total))
 		return NULL;
 
 	if (total == 1) {
@@ -1135,27 +1135,25 @@ static inline struct maple_node *mas_pop
 		goto single_node;
 	}
 
-	if (!node->node_count) {
+	if (node->node_count == 1) {
 		/* Single allocation in this node. */
 		mas->alloc = node->slot[0];
-		node->slot[0] = NULL;
 		mas->alloc->total = node->total - 1;
 		ret = node;
 		goto new_head;
 	}
-
 	node->total--;
-	ret = node->slot[node->node_count];
-	node->slot[node->node_count--] = NULL;
+	ret = node->slot[--node->node_count];
+	node->slot[node->node_count] = NULL;
 
 single_node:
 new_head:
-	ret->total = 0;
-	ret->node_count = 0;
-	if (ret->request_count) {
-		mas_set_alloc_req(mas, ret->request_count + 1);
-		ret->request_count = 0;
+	if (req) {
+		req++;
+		mas_set_alloc_req(mas, req);
 	}
+
+	memset(ret, 0, sizeof(*ret));
 	return (struct maple_node *)ret;
 }
 
@@ -1174,21 +1172,20 @@ static inline void mas_push_node(struct
 	unsigned long count;
 	unsigned int requested = mas_alloc_req(mas);
 
-	memset(reuse, 0, sizeof(*reuse));
 	count = mas_allocated(mas);
 
-	if (count && (head->node_count < MAPLE_ALLOC_SLOTS - 1)) {
-		if (head->slot[0])
-			head->node_count++;
-		head->slot[head->node_count] = reuse;
+	reuse->request_count = 0;
+	reuse->node_count = 0;
+	if (count && (head->node_count < MAPLE_ALLOC_SLOTS)) {
+		head->slot[head->node_count++] = reuse;
 		head->total++;
 		goto done;
 	}
 
 	reuse->total = 1;
 	if ((head) && !((unsigned long)head & 0x1)) {
-		head->request_count = 0;
 		reuse->slot[0] = head;
+		reuse->node_count = 1;
 		reuse->total += head->total;
 	}
 
@@ -1207,7 +1204,6 @@ static inline void mas_alloc_nodes(struc
 {
 	struct maple_alloc *node;
 	unsigned long allocated = mas_allocated(mas);
-	unsigned long success = allocated;
 	unsigned int requested = mas_alloc_req(mas);
 	unsigned int count;
 	void **slots = NULL;
@@ -1223,24 +1219,29 @@ static inline void mas_alloc_nodes(struc
 		WARN_ON(!allocated);
 	}
 
-	if (!allocated || mas->alloc->node_count == MAPLE_ALLOC_SLOTS - 1) {
+	if (!allocated || mas->alloc->node_count == MAPLE_ALLOC_SLOTS) {
 		node = (struct maple_alloc *)mt_alloc_one(gfp);
 		if (!node)
 			goto nomem_one;
 
-		if (allocated)
+		if (allocated) {
 			node->slot[0] = mas->alloc;
+			node->node_count = 1;
+		} else {
+			node->node_count = 0;
+		}
 
-		success++;
 		mas->alloc = node;
+		node->total = ++allocated;
 		requested--;
 	}
 
 	node = mas->alloc;
+	node->request_count = 0;
 	while (requested) {
 		max_req = MAPLE_ALLOC_SLOTS;
-		if (node->slot[0]) {
-			unsigned int offset = node->node_count + 1;
+		if (node->node_count) {
+			unsigned int offset = node->node_count;
 
 			slots = (void **)&node->slot[offset];
 			max_req -= offset;
@@ -1254,15 +1255,13 @@ static inline void mas_alloc_nodes(struc
 			goto nomem_bulk;
 
 		node->node_count += count;
-		/* zero indexed. */
-		if (slots == (void **)&node->slot)
-			node->node_count--;
-
-		success += count;
+		allocated += count;
 		node = node->slot[0];
+		node->node_count = 0;
+		node->request_count = 0;
 		requested -= count;
 	}
-	mas->alloc->total = success;
+	mas->alloc->total = allocated;
 	return;
 
 nomem_bulk:
@@ -1271,7 +1270,7 @@ nomem_bulk:
 nomem_one:
 	mas_set_alloc_req(mas, requested);
 	if (mas->alloc && !(((unsigned long)mas->alloc & 0x1)))
-		mas->alloc->total = success;
+		mas->alloc->total = allocated;
 	mas_set_err(mas, -ENOMEM);
 	return;
 
@@ -5720,6 +5719,7 @@ int mas_preallocate(struct ma_state *mas
 void mas_destroy(struct ma_state *mas)
 {
 	struct maple_alloc *node;
+	unsigned long total;
 
 	/*
 	 * When using mas_for_each() to insert an expected number of elements,
@@ -5742,14 +5742,20 @@ void mas_destroy(struct ma_state *mas)
 	}
 	mas->mas_flags &= ~(MA_STATE_BULK|MA_STATE_PREALLOC);
 
-	while (mas->alloc && !((unsigned long)mas->alloc & 0x1)) {
+	total = mas_allocated(mas);
+	while (total) {
 		node = mas->alloc;
 		mas->alloc = node->slot[0];
-		if (node->node_count > 0)
-			mt_free_bulk(node->node_count,
-				     (void __rcu **)&node->slot[1]);
+		if (node->node_count > 1) {
+			size_t count = node->node_count - 1;
+
+			mt_free_bulk(count, (void __rcu **)&node->slot[1]);
+			total -= count;
+		}
 		kmem_cache_free(maple_node_cache, node);
+		total--;
 	}
+	mas->alloc = NULL;
 }
 EXPORT_SYMBOL_GPL(mas_destroy);
 
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -172,11 +172,11 @@ static noinline void check_new_node(stru
 
 		if (!MAPLE_32BIT) {
 			if (i >= 35)
-				e = i - 35;
+				e = i - 34;
 			else if (i >= 5)
-				e = i - 5;
+				e = i - 4;
 			else if (i >= 2)
-				e = i - 2;
+				e = i - 1;
 		} else {
 			if (i >= 4)
 				e = i - 4;
@@ -304,17 +304,17 @@ static noinline void check_new_node(stru
 	MT_BUG_ON(mt, mas.node != MA_ERROR(-ENOMEM));
 	MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL));
 	MT_BUG_ON(mt, mas_allocated(&mas) != MAPLE_ALLOC_SLOTS + 1);
-	MT_BUG_ON(mt, mas.alloc->node_count != MAPLE_ALLOC_SLOTS - 1);
+	MT_BUG_ON(mt, mas.alloc->node_count != MAPLE_ALLOC_SLOTS);
 
 	mn = mas_pop_node(&mas); /* get the next node. */
 	MT_BUG_ON(mt, mn == NULL);
 	MT_BUG_ON(mt, not_empty(mn));
 	MT_BUG_ON(mt, mas_allocated(&mas) != MAPLE_ALLOC_SLOTS);
-	MT_BUG_ON(mt, mas.alloc->node_count != MAPLE_ALLOC_SLOTS - 2);
+	MT_BUG_ON(mt, mas.alloc->node_count != MAPLE_ALLOC_SLOTS - 1);
 
 	mas_push_node(&mas, mn);
 	MT_BUG_ON(mt, mas_allocated(&mas) != MAPLE_ALLOC_SLOTS + 1);
-	MT_BUG_ON(mt, mas.alloc->node_count != MAPLE_ALLOC_SLOTS - 1);
+	MT_BUG_ON(mt, mas.alloc->node_count != MAPLE_ALLOC_SLOTS);
 
 	/* Check the limit of pop/push/pop */
 	mas_node_count(&mas, MAPLE_ALLOC_SLOTS + 2); /* Request */
@@ -322,14 +322,14 @@ static noinline void check_new_node(stru
 	MT_BUG_ON(mt, mas.node != MA_ERROR(-ENOMEM));
 	MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL));
 	MT_BUG_ON(mt, mas_alloc_req(&mas));
-	MT_BUG_ON(mt, mas.alloc->node_count);
+	MT_BUG_ON(mt, mas.alloc->node_count != 1);
 	MT_BUG_ON(mt, mas_allocated(&mas) != MAPLE_ALLOC_SLOTS + 2);
 	mn = mas_pop_node(&mas);
 	MT_BUG_ON(mt, not_empty(mn));
 	MT_BUG_ON(mt, mas_allocated(&mas) != MAPLE_ALLOC_SLOTS + 1);
-	MT_BUG_ON(mt, mas.alloc->node_count != MAPLE_ALLOC_SLOTS - 1);
+	MT_BUG_ON(mt, mas.alloc->node_count != MAPLE_ALLOC_SLOTS);
 	mas_push_node(&mas, mn);
-	MT_BUG_ON(mt, mas.alloc->node_count);
+	MT_BUG_ON(mt, mas.alloc->node_count != 1);
 	MT_BUG_ON(mt, mas_allocated(&mas) != MAPLE_ALLOC_SLOTS + 2);
 	mn = mas_pop_node(&mas);
 	MT_BUG_ON(mt, not_empty(mn));


Patches currently in stable-queue which might be from stable-owner@vger.kernel.org are

queue-6.1/maple_tree-fix-potential-rcu-issue.patch
queue-6.1/maple_tree-add-smp_rmb-to-dead-node-detection.patch
queue-6.1/maple_tree-add-rcu-lock-checking-to-rcu-callback-functions.patch
queue-6.1/maple_tree-fix-handle-of-invalidated-state-in-mas_wr_store_setup.patch
queue-6.1/maple_tree-reduce-user-error-potential.patch
queue-6.1/maple_tree-fix-mas_prev-and-mas_find-state-handling.patch
queue-6.1/maple_tree-remove-gfp_zero-from-kmem_cache_alloc-and-kmem_cache_alloc_bulk.patch
queue-6.1/maple_tree-be-more-cautious-about-dead-nodes.patch
queue-6.1/mm-enable-maple-tree-rcu-mode-by-default.patch
queue-6.1/maple_tree-detect-dead-nodes-in-mas_start.patch
queue-6.1/maple_tree-fix-freeing-of-nodes-in-rcu-mode.patch
queue-6.1/maple_tree-remove-extra-smp_wmb-from-mas_dead_leaves.patch
queue-6.1/maple_tree-refine-ma_state-init-from-mas_start.patch
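
[Editor's aside: for readers tracing the reworked mas_destroy() teardown loop
in the patch above, here is a small userspace sketch of the bookkeeping it
relies on. The structure and names (spare_head, destroy_chain, SLOTS) are
hypothetical stand-ins, not kernel API: slot[0] of a head links to the next
head, slots 1..node_count-1 hold plain spare nodes, and a running total of
reachable nodes is drained to zero as the chain is freed. Allocation error
handling is elided for brevity.]

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

#define SLOTS 4				/* small stand-in for the real slot count */

struct spare_head {
	void *slot[SLOTS];		/* slot[0]: next head (or NULL); 1..: spares */
	unsigned int node_count;	/* occupied slots, the slot[0] link included */
};

/* Walk the chain the way the reworked loop does: bulk-free the plain spares
 * in slots 1..node_count-1, free the head itself, and keep draining the
 * total until every reachable node has been accounted for. */
static unsigned long destroy_chain(struct spare_head *head, unsigned long total)
{
	while (total) {
		struct spare_head *node = head;

		head = node->slot[0];
		if (node->node_count > 1) {
			unsigned int count = node->node_count - 1;

			for (unsigned int i = 1; i <= count; i++)
				free(node->slot[i]);
			total -= count;
		}
		free(node);		/* the head is itself a reusable node */
		total--;
	}
	return total;			/* always 0 on return */
}

int main(void)
{
	struct spare_head *tail = calloc(1, sizeof(*tail));	/* lone last node */
	struct spare_head *head = calloc(1, sizeof(*head));
	unsigned long total = 2;	/* the two heads themselves */

	if (!tail || !head)
		return 1;
	head->slot[0] = tail;		/* link to the next head */
	head->slot[1] = malloc(64);	/* two plain spare nodes */
	head->slot[2] = malloc(64);
	head->node_count = 3;		/* slot[0] link plus two spares */
	total += 2;

	total = destroy_chain(head, total);
	assert(total == 0);
	printf("chain drained, total = %lu\n", total);
	return 0;
}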