From: Danilo Krummrich <dakr@redhat.com>
To: "Christian König" <christian.koenig@amd.com>
Cc: airlied@gmail.com, daniel@ffwll.ch, tzimmermann@suse.de,
	mripard@kernel.org, corbet@lwn.net, bskeggs@redhat.com,
	Liam.Howlett@oracle.com, matthew.brost@intel.com,
	boris.brezillon@collabora.com, alexdeucher@gmail.com,
	ogabbay@kernel.org, bagasdotme@gmail.com, willy@infradead.org,
	jason@jlekstrand.net, linux-doc@vger.kernel.org,
	nouveau@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	Dave Airlie <airlied@redhat.com>
Subject: Re: [PATCH drm-next v2 05/16] drm: manager to keep track of GPUs VA mappings
Date: Wed, 22 Feb 2023 17:40:14 +0100	[thread overview]
Message-ID: <83755119-083d-7d66-fca0-ca306c841d9c@redhat.com> (raw)
In-Reply-To: <29ea3705-5634-c204-c1da-d356b6dfbafc@amd.com>

On 2/22/23 16:14, Christian König wrote:
> Am 22.02.23 um 16:07 schrieb Danilo Krummrich:
>> On 2/22/23 11:25, Christian König wrote:
>>> Am 17.02.23 um 14:44 schrieb Danilo Krummrich:
>>
>> <snip>
>>
>>>> +/**
>>>> + * DOC: Overview
>>>> + *
>>>> + * The DRM GPU VA Manager, represented by struct drm_gpuva_manager
>>>> + * keeps track of a GPU's virtual address (VA) space and manages the
>>>> + * corresponding virtual mappings represented by &drm_gpuva objects.
>>>> + * It also keeps track of the mapping's backing &drm_gem_object
>>>> + * buffers.
>>>> + *
>>>> + * &drm_gem_object buffers maintain a list (and a corresponding list
>>>> + * lock) of &drm_gpuva objects representing all existent GPU VA
>>>> + * mappings using this &drm_gem_object as backing buffer.
>>>> + *
>>>> + * If the &DRM_GPUVA_MANAGER_REGIONS feature is enabled, a GPU VA
>>>> + * mapping can only be created within a previously allocated
>>>> + * &drm_gpuva_region, which represents a reserved portion of the GPU
>>>> + * VA space. GPU VA mappings are not allowed to span over a
>>>> + * &drm_gpuva_region's boundary.
>>>> + *
>>>> + * GPU VA regions can also be flagged as sparse, which allows
>>>> + * drivers to create sparse mappings for a whole GPU VA region in
>>>> + * order to support Vulkan 'Sparse Resources'.
>>>
>>> Well since we have now found that there is absolutely no technical 
>>> reason for having those regions could we please drop them?
>>
>> I disagree that this was the outcome of our previous discussion.
>>
>> In nouveau I still need them to track the separate sparse page tables 
>> and, as you confirmed previously, Nvidia cards are not the only cards 
>> supporting this feature.
>>
>> The second reason is that with regions we can avoid merging between 
>> buffers, which saves some effort. However, I agree that this argument 
>> by itself probably doesn't carry too much weight, since you've pointed 
>> out in a previous mail that:
>>
>> <cite>
>> 1) If we merge and decide to only do that inside certain boundaries 
>> then those boundaries need to be provided and checked against. This 
>> burns quite some CPU cycles
>>
>> 2) If we just merge what we can we might have extra page table updates 
>> which cost time and could result in undesired side effects.
>>
>> 3) If we don't merge at all we have additional housekeeping for the 
>> mappings and maybe hw restrictions.
>> </cite>
>>
>> However, if a driver uses regions to track its separate sparse page 
>> tables anyway it gets 1) for free, which is a nice synergy.
>>
>> I totally agree that regions aren't for everyone though. Hence, I made 
>> them an optional feature and by default regions are disabled. In order 
>> to use them drm_gpuva_manager_init() must be called with the 
>> DRM_GPUVA_MANAGER_REGIONS feature flag.
>>
>> I really would not want to open code regions or have two GPUVA manager 
>> instances in nouveau to track sparse page tables. That would be really 
>> messy, hence I hope we can agree on this being an optional feature.
> 
> I absolutely don't think that this is a good idea then. This separate 
> handling of sparse page tables is completely Nouveau specific.

Actually, when I say it's potentially not specific to nouveau, I'm 
relying on what you said in a previous mail.

<cite>
This sounds similar to what AMD hw used to have up until gfx8 (I think), 
basically sparse resources were defined through a separate mechanism to 
the address resolution of the page tables. I won't rule out that other 
hardware has similar approaches.
</cite>

> 
> Even when it's an optional feature, mixing this into the common 
> handling is exactly what I pointed out as not properly separating between 
> hardware specific and hardware agnostic functionality.

Optionally having regions is *not* a hardware specific concept; drivers 
might use it for a hardware specific purpose, though. That is potentially 
the case for almost every DRM helper.

Drivers can also use regions purely for the sake of not merging across 
buffer boundaries, without needing to track separate sparse page tables. 
Some drivers might prefer this over "never merge" or "always merge", 
depending on the cost of re-organizing page tables for unnecessary 
splits/merges.
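
To illustrate, this is roughly how a driver would opt in. Just a sketch: 
the init arguments, the region field names and the 
drm_gpuva_region_insert() helper below are made up for illustration and 
don't necessarily match the patch.

	struct drm_gpuva_manager mgr;
	struct drm_gpuva_region reg = {};
	/* Example VA window of one buffer. */
	u64 buf_va_addr = 0x100000, buf_va_range = 0x200000;

	/* Opt into region tracking; without the flag the manager behaves
	 * as if regions didn't exist. */
	drm_gpuva_manager_init(&mgr, "example", 0, 1ULL << 47,
			       DRM_GPUVA_MANAGER_REGIONS);

	/* Reserve a region covering the buffer's VA range; the manager
	 * then never merges mappings across this region's boundary. */
	reg.va.addr = buf_va_addr;
	reg.va.range = buf_va_range;
	drm_gpuva_region_insert(&mgr, &reg);

Whether that's worth it really depends on how expensive page table 
re-organization is on the particular hardware.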

It's just that I think *if* a driver needs to track separate sparse page 
tables anyway, it's a nice synergy, since then there is no extra cost for 
getting this optimization.
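
E.g. (purely hypothetical, not actual nouveau code), a driver that has to 
track sparse page table state per reserved VA range anyway could simply 
embed the region:

	/* Hypothetical driver-side wrapper around a GPUVA region. */
	struct my_driver_region {
		struct drm_gpuva_region base;	/* tracked by the GPUVA manager */
		void *sparse_pt;		/* driver's separate sparse page table state */
	};

Since the manager already knows the region boundaries for such a driver, 
the "don't merge across boundaries" check doesn't add any extra 
bookkeeping on top.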

> 
> This is exactly the problem we ran into with TTM as well, and I've 
> spent a massive amount of time cleaning that up again.

Admittedly, I don't know which problems you are referring to. However, I 
don't see what kind of trouble allowing drivers to optionally track 
regions could cause.

> Regards,
> Christian.
> 
>>
>>>
>>> I don't really see a need for them any more.
>>>
>>> Regards,
>>> Christian.
>>>
>>
> 


