From: Sidhartha Kumar <sidhartha.kumar@oracle.com>
To: Wei Yang <richard.weiyang@gmail.com>
Cc: linux-kernel@vger.kernel.org, maple-tree@lists.infradead.org,
linux-mm@kvack.org, akpm@linux-foundation.org,
liam.howlett@oracle.com
Subject: Re: [PATCH 0/5] Track node vacancy to reduce worst case allocation counts
Date: Tue, 26 Nov 2024 13:32:31 -0600 [thread overview]
Message-ID: <95b2cf29-78be-454e-b538-14cb7f5fe964@oracle.com> (raw)
In-Reply-To: <20241119095951.a46jgpbkz7suaahk@master>
On 11/19/24 3:59 AM, Wei Yang wrote:
> On Thu, Nov 14, 2024 at 04:39:00PM -0500, Sid Kumar wrote:
>>
>> On 11/14/24 12:05 PM, Sidhartha Kumar wrote:
> [...]
>>> ================ results =========================
>>> Bpftrace was used to profile the allocation path for requesting new maple
>>> nodes while running the ./mmap1_processes test from mmtests. The two paths
>>> for allocation are requests for a single node and the bulk allocation path.
>>> The histogram represents the number of calls to these paths and shows the
>>> distribution of the number of nodes requested for the bulk allocation path.
>>>
>>>
>>> mm-unstable 11/13/24
>>> @bulk_alloc_req:
>>> [2, 4) 10 |@@@@@@@@@@@@@ |
>>> [4, 8) 38 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>>> [8, 16) 19 |@@@@@@@@@@@@@@@@@@@@@@@@@@ |
>>>
>>>
>>> mm-unstable 11/13/24 + this series
>>> @bulk_alloc_req:
>>> [2, 4) 9 |@@@@@@@@@@ |
>>> [4, 8) 43 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>>> [8, 16) 15 |@@@@@@@@@@@@@@@@@@ |
>>>
>>> We can see the worst case bulk allocations of [8,16) nodes are reduced after
>>> this series.
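
For reference, a histogram of this shape can be collected with a bpftrace
one-liner along the lines of the one below. This is only a sketch, not
necessarily the script used for the numbers above: it probes the generic
slab bulk allocator, so it counts bulk allocations system-wide rather than
only maple-node requests.

  # arg2 of kmem_cache_alloc_bulk() is the number of objects requested
  bpftrace -e 'kprobe:kmem_cache_alloc_bulk { @bulk_alloc_req = hist(arg2); }'
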
>>
>> From running the ./malloc1_threads test case, we eliminate almost all bulk
>> allocation requests that fall between 8 and 16 nodes:
>>
>> ./malloc1_threads -t 8 -s 100
>>
>> mm-unstable + this series
>> @bulk_alloc_req:
>> [2, 4)        2 |                                                    |
>> [4, 8)     3381 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>> [8, 16)       2 |                                                    |
>>
>
> This is impressive. But one thing is still not clear to me.
>
> For mmap related code, we usually have the following usage:
>
> vma_iter_prealloc(vmi, vma);
>     mas_preallocate(vmi->mas, vma);
>         MA_WR_STATE(wr_mas, );
>         mas_wr_store_type(&wr_mas);       --- (1)
> vma_iter_store(vmi, vma);
>
> Location (1) is where we try to get a better estimate of the allocation
> count. The estimate is based on walking down the tree until we reach a
> suitable node.
>
> In mmap-related code, we usually have already walked down the tree to a
> leaf, via vma_find() or a related iteration, and mas.status is set to
> ma_active. So I would not expect mas_preallocate() to traverse the tree
> again.
>
> But from your results, it seems that in most cases we do traverse the tree
> again and get a more precise height.
>
Hello,
Looking at mas_wr_prealloc_setup(), when mas_is_active() we reset in two
scenarios:
        if (mas->last > mas->max)               /* write extends past this node */
                goto reset;

        if (wr_mas->entry)                      /* storing a non-NULL entry */
                goto set_content;

        if (mte_is_leaf(mas->node) && mas->last == mas->max)
                goto reset;                     /* write ends exactly at node max */
It could be that this test case specifically hits these two reset cases. In
testing brk() I did not see the same gains that this malloc test showed, so
in that case we are probably not traversing the tree again, as you say.
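
For reference, the pattern in question corresponds roughly to the fragment
below, written against the maple tree API. This is only a sketch: mm, addr,
len, vma and new_vma are placeholders rather than code from mm/, and the
vma_iter_*() helpers essentially wrap mas_preallocate() and
mas_store_prealloc().

        MA_STATE(mas, &mm->mm_mt, addr, addr + len - 1);

        /* A prior lookup leaves mas active at the leaf covering addr. */
        vma = mas_walk(&mas);

        /*
         * The preallocation setup either reuses that position or, in the
         * reset cases above, clears it and sizes the request from a fresh
         * walk.
         */
        if (mas_preallocate(&mas, new_vma, GFP_KERNEL))
                return -ENOMEM;

        mas_store_prealloc(&mas, new_vma);      /* caller holds the tree lock */
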
Thanks,
Sid
> Which part do you think I have missed?
>
>>
>> mm-unstable
>> @bulk_alloc_req:
>> [2, 4)        1 |                                                    |
>> [4, 8)     1427 |@@@@@@@@@@@@@@@@@@@@@@@@@@                          |
>> [8, 16)    2790 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>>
>>
>>>
>>> Sidhartha Kumar (5):
>>> maple_tree: convert mas_prealloc_calc() to take in a maple write state
>>> maple_tree: use height and depth consistently
>>> maple_tree: use vacant nodes to reduce worst case allocations
>>> maple_tree: break on convergence in mas_spanning_rebalance()
>>> maple_tree: add sufficient height
>>>
>>> include/linux/maple_tree.h | 4 +
>>> lib/maple_tree.c | 89 +++++++++++++---------
>>> tools/testing/radix-tree/maple.c | 125 +++++++++++++++++++++++++++++--
>>> 3 files changed, 176 insertions(+), 42 deletions(-)
>>>
>
Thread overview: 16+ messages
2024-11-14 17:05 [PATCH 0/5] Track node vacancy to reduce worst case allocation counts Sidhartha Kumar
2024-11-14 17:05 ` [PATCH 1/5] maple_tree: convert mas_prealloc_calc() to take in a maple write state Sidhartha Kumar
2024-11-14 17:05 ` [PATCH 2/5] maple_tree: use height and depth consistently Sidhartha Kumar
2024-11-14 17:05 ` [PATCH 3/5] maple_tree: use vacant nodes to reduce worst case allocations Sidhartha Kumar
2024-11-15 7:52 ` Wei Yang
2024-11-15 20:34 ` Sidhartha Kumar
2024-11-16 1:41 ` Wei Yang
2024-11-18 16:36 ` Sidhartha Kumar
2024-11-19 2:30 ` Wei Yang
2024-11-19 14:15 ` Liam R. Howlett
2024-11-14 17:05 ` [PATCH 4/5] maple_tree: break on convergence in mas_spanning_rebalance() Sidhartha Kumar
2024-11-15 7:14 ` Wei Yang
2024-11-14 17:05 ` [PATCH 5/5] maple_tree: add sufficient height Sidhartha Kumar
2024-11-14 21:39 ` [PATCH 0/5] Track node vacancy to reduce worst case allocation counts Sid Kumar
2024-11-19 9:59 ` Wei Yang
2024-11-26 19:32 ` Sidhartha Kumar [this message]