Message-ID: <2f502150-c1f8-615c-66d9-c3fb59b8c409@redhat.com>
Date: Thu, 22 Jun 2023 17:07:11 +0200
Subject: Re: [PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA mappings
From: Danilo Krummrich <dakr@redhat.com>
To: Christian König, airlied@gmail.com, daniel@ffwll.ch, tzimmermann@suse.de, mripard@kernel.org, corbet@lwn.net, bskeggs@redhat.com, Liam.Howlett@oracle.com, matthew.brost@intel.com, boris.brezillon@collabora.com, alexdeucher@gmail.com, ogabbay@kernel.org, bagasdotme@gmail.com, willy@infradead.org, jason@jlekstrand.net
Cc: dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Donald Robson, Dave Airlie
References: <20230620004217.4700-1-dakr@redhat.com> <20230620004217.4700-4-dakr@redhat.com> <41aecd10-9956-0752-2838-34c97834f0c8@amd.com> <86ef9898-c4b6-f4c0-7ad3-3ffe5358365a@amd.com>

On 6/22/23 17:04, Danilo Krummrich wrote:
> On 6/22/23 16:42, Christian König wrote:
>> On 6/22/23 16:22, Danilo Krummrich wrote:
>>> On 6/22/23 15:54, Christian König wrote:
>>>> On 6/20/23 14:23, Danilo Krummrich wrote:
>>>>> Hi Christian,
>>>>>
>>>>> On 6/20/23 08:45, Christian König wrote:
>>>>>> Hi Danilo,
>>>>>>
>>>>>> sorry for the delayed reply. I've been trying to dig myself out of
>>>>>> a hole at the moment.
>>>>>
>>>>> No worries, thank you for taking a look anyway!
>>>>>
>>>>>>
>>>>>> On 6/20/23 02:42, Danilo Krummrich wrote:
>>>>>>> [SNIP]
>>>>>>> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
>>>>>>> index bbc721870c13..5ec8148a30ee 100644
>>>>>>> --- a/include/drm/drm_gem.h
>>>>>>> +++ b/include/drm/drm_gem.h
>>>>>>> @@ -36,6 +36,8 @@
>>>>>>>   #include
>>>>>>>   #include
>>>>>>> +#include
>>>>>>> +#include
>>>>>>>   #include
>>>>>>> @@ -379,6 +381,18 @@ struct drm_gem_object {
>>>>>>>        */
>>>>>>>       struct dma_resv _resv;
>>>>>>> +    /**
>>>>>>> +     * @gpuva:
>>>>>>> +     *
>>>>>>> +     * Provides the list of GPU VAs attached to this GEM object.
>>>>>>> +     *
>>>>>>> +     * Drivers should lock list accesses with the GEMs &dma_resv lock
>>>>>>> +     * (&drm_gem_object.resv).
>>>>>>> +     */
>>>>>>> +    struct {
>>>>>>> +        struct list_head list;
>>>>>>> +    } gpuva;
>>>>>>> +
>>>>>>>       /**
>>>>>>>        * @funcs:
>>>>>>>        *
>>>>>>
>>>>>> I'm pretty sure that it's not a good idea to attach this directly
>>>>>> to the GEM object.
>>>>>
>>>>> Why do you think so? IMHO having a common way to connect mappings
>>>>> to their backing buffers is a good thing, since every driver needs
>>>>> this connection anyway.
>>>>>
>>>>> E.g. when a BO gets evicted, drivers can just iterate the list of
>>>>> mappings and, as the circumstances require, invalidate the
>>>>> corresponding mappings or unmap all existing mappings of a given
>>>>> buffer.
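A minimal sketch of the eviction case described in the paragraph above,
under stated assumptions: struct drm_gpuva links into the GEM's
gpuva.list through a gem.entry list node (the exact node name may differ
in v5 of the series), and driver_gpuva_invalidate() is a made-up driver
helper, not an API from the series:

#include <linux/dma-resv.h>
#include <linux/list.h>

#include <drm/drm_gem.h>
#include <drm/drm_gpuva_mgr.h>

/* Invalidate all mappings backed by a given BO, e.g. on eviction. */
static void driver_gem_evict_mappings(struct drm_gem_object *obj)
{
        struct drm_gpuva *va;

        /* The series documents the list as protected by the GEM's dma_resv. */
        dma_resv_assert_held(obj->resv);

        /* Walk every mapping backed by this BO, whatever VM it lives in. */
        list_for_each_entry(va, &obj->gpuva.list, gem.entry)
                driver_gpuva_invalidate(va); /* hypothetical driver helper */
}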
>>>>>
>>>>> What would be the advantage of letting every driver implement a
>>>>> driver-specific way of keeping this connection?
>>>>
>>>> Flexibility. For example, on amdgpu the mappings of a BO are grouped
>>>> by VM address space.
>>>>
>>>> E.g. the BO points to multiple bo_vm structures which in turn have
>>>> lists of their mappings.
>>>
>>> Isn't this (almost) the same relationship I introduce with the GPUVA
>>> manager?
>>>
>>> If you switched over to the GPUVA manager right now, every GEM would
>>> have a list of its mappings (the gpuva list). The mapping is
>>> represented by struct drm_gpuva (of course embedded in driver-specific
>>> structure(s)), which has a pointer to the VM address space it is part
>>> of, namely the GPUVA manager instance. And the GPUVA manager keeps a
>>> maple tree of its mappings as well.
>>>
>>> If you would still like to *directly* (indirectly you already have
>>> that relationship) keep a list of GPUVA managers (VM address spaces)
>>> per GEM, you could still do that in a driver-specific way.
>>>
>>> Am I missing something?
>>
>> How do you efficiently find only the mappings of a BO in one VM?
>
> Actually, I think this case should even be more efficient than with a
> BO having a list of GPUVAs (or mappings):

*than with a BO having a list of VMs:

> Having a list of GPUVAs per GEM, each GPUVA has a pointer to its VM.
> Hence, you'd only need to iterate the list of mappings for a given BO
> and check each mapping's VM pointer.
>
> Having a list of VMs per BO, you'd have to iterate the whole VM to find
> the mappings having a pointer to the given BO, right?
>
> I'd think that a single VM potentially has more mapping entries than
> the number of VMs a single BO is mapped into.
>
> Another case to consider is the one I originally had in mind when
> choosing this relationship: finding all mappings for a given BO, which
> I guess all drivers need to do in order to invalidate mappings on BO
> eviction.
>
> Having a list of VMs per BO, wouldn't you need to iterate all of the
> VMs entirely?
>
>> Keep in mind that we have cases where one BO is shared with hundreds
>> of different VMs, and the number of mappings can potentially be >10k.
>>
>>>
>>>>
>>>> In addition to that, there is a state machine associated with the
>>>> mappings, e.g. whether the page tables are up to date or need to be
>>>> updated, etc.
>>>>
>>>>> Do you see cases where this kind of connection between mappings and
>>>>> backing buffers wouldn't be good enough? If so, which cases do you
>>>>> have in mind? Maybe we can cover them in a common way as well?
>>>>
>>>> Yeah, we have tons of cases like that. But I have no idea how to
>>>> generalize them.
>>>
>>> They could still remain driver-specific then, right?
>>
>> Well, does the mapping have a back pointer to the BO? And can that
>> optionally be NULL if there is no BO?
>
> Yes to both.
>
> - Danilo
>
>>
>> Regards,
>> Christian.
>>
>>>
>>>>
>>>>>
>>>>>> As you wrote in the commit message, it's highly driver-specific
>>>>>> what to map and where to map it.
>>>>>
>>>>> In the end the common case should be that in a VA space at least
>>>>> every mapping being backed by a BO is represented by a struct
>>>>> drm_gpuva.
>>>>
>>>> Oh, no! We also have mappings not backed by a BO at all! For example
>>>> for partially resident textures or data routing to internal hw,
>>>> etc.
>>>>
>>>> You can't be sure that a mapping is backed by a BO at all.
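To make the cost argument above concrete, a sketch of finding only the
mappings of a BO in one VM with the per-GEM list. It assumes the
mapping's VM back pointer is va->mgr and the list node is gem.entry, as
in the series; driver_gpuva_unmap() is a hypothetical driver helper:

static void driver_gem_unmap_in_vm(struct drm_gem_object *obj,
                                   struct drm_gpuva_manager *mgr)
{
        struct drm_gpuva *va;

        dma_resv_assert_held(obj->resv);

        /* Only the BO's own mappings are walked, never the whole VM. */
        list_for_each_entry(va, &obj->gpuva.list, gem.entry) {
                if (va->mgr != mgr)
                        continue; /* mapping belongs to a different VM */

                driver_gpuva_unmap(va);
        }
}

This walks O(mappings of the BO) entries rather than O(mappings in the
VM), which is exactly what the comparison of mapping counts above hinges
on.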
>>>
>>> I fully agree, that's why I wrote "the common case should be that in
>>> a VA space at least every mapping *being backed by a BO* is
>>> represented by a struct drm_gpuva".
>>>
>>> Mappings not being backed by an actual BO would not be linked to a
>>> GEM, of course.
>>>
>>>>
>>>>>
>>>>>>
>>>>>> Instead, I suggest having a separate structure for mappings in a
>>>>>> VA space which drivers can then add to their GEM objects or
>>>>>> whatever they want to map into their VMs.
>>>>>
>>>>> Which kind of separate structure for mappings? Another one
>>>>> analogous to struct drm_gpuva?
>>>>
>>>> Well, similar to what amdgpu uses: BO -> one structure for each
>>>> combination of BO and VM -> mappings inside that VM.
>>>
>>> As explained above, I think that's exactly what the GPUVA manager
>>> does, just in another order:
>>>
>>> BO has a list of mappings, mappings have a pointer to the VM, and the
>>> VM has a list (or actually a maple tree) of mappings.
>>>
>>> Do you see any advantages or disadvantages of either order of
>>> relationships? For me it looks like it doesn't really matter which
>>> one to pick.
>>>
>>> - Danilo
>>>
>>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>>
>>>>> - Danilo
>>>>>
>>>>>>
>>>>>> Regards,
>>>>>> Christian.
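For reference, a compressed sketch of the two orders of relationship
discussed in this thread. The first struct only roughly follows the
fields of struct drm_gpuva from this series (names are not verbatim);
the second is a hypothetical analogue of amdgpu's per-(BO, VM)
structure, not actual amdgpu code:

#include <linux/list.h>
#include <linux/types.h>

struct drm_gem_object;
struct drm_gpuva_manager;
struct driver_vm;

/* GPUVA manager order: BO -> list of mappings, each mapping -> its VM. */
struct gpuva_sketch {
        struct drm_gpuva_manager *mgr;      /* VM this mapping lives in */
        struct {
                struct drm_gem_object *obj; /* NULL if not backed by a BO */
                u64 offset;
                struct list_head entry;     /* links into obj->gpuva.list */
        } gem;
        struct {
                u64 addr;
                u64 range;
        } va;
};

/* amdgpu-style order: BO -> one structure per (BO, VM) -> its mappings. */
struct driver_bo_vm {
        struct drm_gem_object *obj;
        struct driver_vm *vm;
        struct list_head mappings;          /* mappings of obj inside vm */
        struct list_head bo_entry;          /* links into the BO's VM list */
};

Both connect the same three entities (BO, VM, mapping); the orders only
differ in which link is traversed first, which is what the efficiency
discussion above turns on.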