From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <6d78d109-d027-0358-b1a8-2eaaa63e39af@redhat.com>
Date: Mon, 26 Jun 2023 16:30:49 +0200
From: Danilo Krummrich <dakr@redhat.com>
Organization: RedHat
Subject: Re: [PATCH v2 14/16] maple_tree: Refine mas_preallocate() node calculations
To: Peng Zhang
Cc: maple-tree@lists.infradead.org, linux-mm@kvack.org, Andrew Morton,
 "Liam R. Howlett", linux-kernel@vger.kernel.org, David Airlie,
 Boris Brezillon, Matthew Wilcox
References: <20230612203953.2093911-1-Liam.Howlett@oracle.com>
 <20230612203953.2093911-15-Liam.Howlett@oracle.com>
 <26d8fbcf-d34f-0a79-9d91-8c60e66f7341@redhat.com>
 <43ce08db-210a-fec8-51b4-351625b3cdfb@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed

On 6/26/23 16:08, Peng Zhang wrote:
>
>
> On 2023/6/26 08:38, Danilo Krummrich wrote:
>> Hi Peng,
>>
>> On 6/25/23 05:28, Peng Zhang wrote:
>>>
>>>
>>> On 2023/6/23 00:41, Danilo Krummrich wrote:
>>>> On 6/12/23 22:39, Liam R. Howlett wrote:
>>>>> Calculate the number of nodes based on the pending write action
>>>>> instead of assuming the worst case.
>>>>
>>>> Liam already gave me a heads-up on this patch, which I already
>>>> replied to [1].
>>>>
>>>> However, I think it might make sense to also reply to this patch
>>>> directly.
>>>>
>>>> For a mas_preallocate() calculating the actual required nodes to be
>>>> allocated, instead of assuming the worst case, to work, it is
>>>> required to ensure that the tree does not change between calling
>>>> mas_preallocate() and mas_store_prealloc(), if my understanding is
>>>> correct.
>>>>
>>>> In DRM however, more specifically in the DRM GPUVA Manager [2], we
>>>> do have the case that we are not able to ensure this:
>>>>
>>>> Jobs to create GPU mappings can be submitted by userspace, are
>>>> queued up by the kernel and are processed asynchronously in
>>>> dma-fence signalling critical paths, e.g. by using the
>>>> drm_gpu_scheduler.
>>>> Hence, we must be able to allocate the worst-case
>>>> amount of nodes, since at the time a job is submitted we can't
>>>> predict the state the maple tree keeping track of mappings has once
>>>> a mapping is inserted in the (asynchronous) dma-fence signalling
>>>> critical path.
>>>>
>>>> A more detailed explanation can be found in [1].
>>>>
>>>> Could we keep a separate function for allocating the worst-case
>>>> amount of nodes, in addition to this optimization? E.g. something
>>>> like mas_preallocate_worst_case() or mas_preallocate_unlocked()
>>>> (since I guess the new one requires the maple tree to be kept
>>>> locked in order not to change)?
>>> Hi Danilo,
>>>
>>> Your understanding seems incorrect. Even with the previously
>>> unoptimized mas_preallocate(), the maple tree cannot be modified
>>> between calls to mas_preallocate() and mas_store_prealloc(). The
>>> calculation of the number of pre-allocated nodes depends on the
>>> structure of the maple tree. In the unoptimized mas_preallocate(),
>>> it depends on the height of the tree. If the maple tree is modified
>>> before mas_store_prealloc() and the height of the tree changes, the
>>> number of pre-allocated nodes is inaccurate.
>>
>> Thanks for pointing this out!
>>
>> First of all, it's probably fair to say "naive me" - it totally makes
>> sense that the tree height is needed; it's a b-tree.
>>
>> On the other hand, unless I miss something (and if so, please let me
>> know), something is bogus with the API then.
>>
>> While the documentation of the Advanced API of the maple tree
>> explicitly claims that the user of the API is responsible for
>> locking, this should be limited to the bounds set by the maple tree
>> implementation. Which means the user must decide on either the
>> internal (spin-) lock or an external lock (which possibly goes away
>> in the future) and acquire and release it according to the rules the
>> maple tree enforces through lockdep checks.
>>
>> Let's say one picks the internal lock.
>> How is one supposed to ensure
>> the tree isn't modified using the internal lock with
>> mas_preallocate()?
>>
>> Besides that, I think the documentation should definitely mention
>> this limitation and give some guidance for the locking.
> Yes, the documentation of the maple tree is not detailed and complete.
>>
>> Currently, from an API perspective, I can't see how anyone not
>> familiar with the implementation details would be able to recognize
>> this limitation.
>>
>> In terms of the GPUVA manager, unfortunately, it seems like I need to
>> drop the maple tree and go back to using an rb-tree, since it seems
>> there is no sane way of doing a worst-case pre-allocation that does
>> not suffer from this limitation.
> I also think preallocation may not be necessary, and I agree with what
> Matthew said. Preallocation should only be used in cases where it has
> to be used. If preallocation is used, but the number of preallocated
> nodes is insufficient because the tree is modified midway, GFP_NOWAIT
> will be used for memory allocation during the tree modification
> process, and the user may not notice that some nodes are not from the
> preallocation.

Please see my reply to Matthew. :)

- Danilo

>
>>
>> - Danilo
>>
>>>
>>> Regards,
>>> Peng
>>>
>>>>
>>>> [1]
>>>> https://lore.kernel.org/nouveau/68cd25de-e767-725e-2e7b-703217230bb0@redhat.com/T/#ma326e200b1de1e3c9df4e9fcb3bf243061fee8b5
>>>>
>>>> [2]
>>>> https://lore.kernel.org/linux-mm/20230620004217.4700-8-dakr@redhat.com/T/#m47ab82310f87793d0f0cc1825a316eb30ad5b653
>>>>
>>>> - Danilo
>>>>
>>>>>
>>>>> This addresses a performance regression introduced in platforms
>>>>> that have longer allocation timing.
>>>>>
>>>>> Signed-off-by: Liam R. Howlett
>>>>> ---
>>>>>   lib/maple_tree.c | 48 +++++++++++++++++++++++++++++++++++++++++++++++-
>>>>>   1 file changed, 47 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
>>>>> index 048d6413a114..7ac5b5457603 100644
>>>>> --- a/lib/maple_tree.c
>>>>> +++ b/lib/maple_tree.c
>>>>> @@ -5541,9 +5541,55 @@ EXPORT_SYMBOL_GPL(mas_store_prealloc);
>>>>>    */
>>>>>   int mas_preallocate(struct ma_state *mas, void *entry, gfp_t gfp)
>>>>>   {
>>>>> +    MA_WR_STATE(wr_mas, mas, entry);
>>>>> +    unsigned char node_size;
>>>>> +    int request = 1;
>>>>>       int ret;
>>>>> -    mas_node_count_gfp(mas, 1 + mas_mt_height(mas) * 3, gfp);
>>>>> +
>>>>> +    if (unlikely(!mas->index && mas->last == ULONG_MAX))
>>>>> +        goto ask_now;
>>>>> +
>>>>> +    mas_wr_store_setup(&wr_mas);
>>>>> +    wr_mas.content = mas_start(mas);
>>>>> +    /* Root expand */
>>>>> +    if (unlikely(mas_is_none(mas) || mas_is_ptr(mas)))
>>>>> +        goto ask_now;
>>>>> +
>>>>> +    if (unlikely(!mas_wr_walk(&wr_mas))) {
>>>>> +        /* Spanning store, use worst case for now */
>>>>> +        request = 1 + mas_mt_height(mas) * 3;
>>>>> +        goto ask_now;
>>>>> +    }
>>>>> +
>>>>> +    /* At this point, we are at the leaf node that needs to be altered. */
>>>>> +    /* Exact fit, no nodes needed. */
>>>>> +    if (wr_mas.r_min == mas->index && wr_mas.r_max == mas->last)
>>>>> +        return 0;
>>>>> +
>>>>> +    mas_wr_end_piv(&wr_mas);
>>>>> +    node_size = mas_wr_new_end(&wr_mas);
>>>>> +    /* Slot store can avoid using any nodes */
>>>>> +    if (node_size == wr_mas.node_end && wr_mas.offset_end - mas->offset == 1)
>>>>> +        return 0;
>>>>> +
>>>>> +    if (node_size >= mt_slots[wr_mas.type]) {
>>>>> +        /* Split, worst case for now. */
>>>>> +        request = 1 + mas_mt_height(mas) * 2;
>>>>> +        goto ask_now;
>>>>> +    }
>>>>> +
>>>>> +    /* Appending does not need any nodes */
>>>>> +    if (node_size == wr_mas.node_end + 1 && mas->offset == wr_mas.node_end)
>>>>> +        return 0;
>>>>> +
>>>>> +    /* Potential spanning rebalance collapsing a node, use worst-case */
>>>>> +    if (node_size - 1 <= mt_min_slots[wr_mas.type])
>>>>> +        request = mas_mt_height(mas) * 2 - 1;
>>>>> +
>>>>> +    /* node store needs one node */
>>>>> +ask_now:
>>>>> +    mas_node_count_gfp(mas, request, gfp);
>>>>>       mas->mas_flags |= MA_STATE_PREALLOC;
>>>>>       if (likely(!mas_is_err(mas)))
>>>>>           return 0;
>>>>
>>>>
>>>
>>
>