From: Danilo Krummrich <dakr@redhat.com>
Organization: RedHat
Date: Thu, 22 Jun 2023 16:22:36 +0200
Subject: Re: [PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA mappings
To: Christian König, airlied@gmail.com, daniel@ffwll.ch, tzimmermann@suse.de, mripard@kernel.org, corbet@lwn.net, bskeggs@redhat.com, Liam.Howlett@oracle.com, matthew.brost@intel.com, boris.brezillon@collabora.com, alexdeucher@gmail.com, ogabbay@kernel.org, bagasdotme@gmail.com, willy@infradead.org, jason@jlekstrand.net
Cc: dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Donald Robson, Dave Airlie
References: <20230620004217.4700-1-dakr@redhat.com> <20230620004217.4700-4-dakr@redhat.com> <41aecd10-9956-0752-2838-34c97834f0c8@amd.com>
In-Reply-To: <41aecd10-9956-0752-2838-34c97834f0c8@amd.com>

On 6/22/23 15:54, Christian König wrote:
> Am 20.06.23 um 14:23 schrieb Danilo Krummrich:
>> Hi Christian,
>>
>> On 6/20/23 08:45, Christian König wrote:
>>> Hi Danilo,
>>>
>>> sorry for the delayed reply. I've been trying to dig myself out of a
>>> hole at the moment.
>>
>> No worries, thank you for taking a look anyway!
>>
>>>
>>> Am 20.06.23 um 02:42 schrieb Danilo Krummrich:
>>>> [SNIP]
>>>> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
>>>> index bbc721870c13..5ec8148a30ee 100644
>>>> --- a/include/drm/drm_gem.h
>>>> +++ b/include/drm/drm_gem.h
>>>> @@ -36,6 +36,8 @@
>>>>   #include
>>>>   #include
>>>> +#include
>>>> +#include
>>>>   #include
>>>> @@ -379,6 +381,18 @@ struct drm_gem_object {
>>>>        */
>>>>       struct dma_resv _resv;
>>>> +    /**
>>>> +     * @gpuva:
>>>> +     *
>>>> +     * Provides the list of GPU VAs attached to this GEM object.
>>>> +     *
>>>> +     * Drivers should lock list accesses with the GEMs &dma_resv lock
>>>> +     * (&drm_gem_object.resv).
>>>> +     */
>>>> +    struct {
>>>> +        struct list_head list;
>>>> +    } gpuva;
>>>> +
>>>>       /**
>>>>        * @funcs:
>>>>        *
>>>
>>> I'm pretty sure that it's not a good idea to attach this directly to
>>> the GEM object.
>>
>> Why do you think so? IMHO having a common way to connect mappings to
>> their backing buffers is a good thing, since every driver needs this
>> connection anyway.
>>
>> E.g. when a BO gets evicted, drivers can just iterate the list of
>> mappings and, as the circumstances require, invalidate the
>> corresponding mappings or unmap all existing mappings of a given
>> buffer.
>>
>> What would be the advantage of letting every driver implement a
>> driver-specific way of keeping this connection?
>
> Flexibility. For example on amdgpu the mappings of a BO are grouped by
> VM address spaces.
>
> E.g. the BO points to multiple bo_vm structures which in turn have
> lists of their mappings.
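
(For illustration only, the layering described above could be sketched
roughly as follows. All struct and field names in this sketch are invented
for the example and are not amdgpu's actual types; it only shows the
BO -> per-(BO, VM) structure -> mappings relationship.)

	#include <linux/list.h>
	#include <linux/types.h>
	#include <drm/drm_gem.h>

	struct example_vm;	/* the driver's VM / address space object */

	/* One per buffer object. */
	struct example_bo {
		struct drm_gem_object gem;
		struct list_head vm_bos;	/* list of struct example_bo_vm */
	};

	/* One per (BO, VM) combination; groups the BO's mappings in that VM. */
	struct example_bo_vm {
		struct example_bo *bo;
		struct example_vm *vm;
		struct list_head bo_entry;	/* links into example_bo.vm_bos */
		struct list_head mappings;	/* list of struct example_mapping */
	};

	/* One per mapping of the BO inside that VM. */
	struct example_mapping {
		struct example_bo_vm *bo_vm;
		u64 addr;			/* GPU virtual address */
		u64 range;			/* size of the mapping */
		bool ptes_valid;		/* driver-tracked page table state */
		struct list_head entry;		/* links into example_bo_vm.mappings */
	};

In such a layering the per-mapping page table state mentioned below would
naturally live in the per-(BO, VM) structure or in the mapping itself.
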
Isn't this (almost) the same relationship I introduce with the GPUVA
manager?

If you switched over to the GPUVA manager right now, every GEM would have
a list of its mappings (the gpuva list). A mapping is represented by
struct drm_gpuva (of course embedded in driver-specific structure(s)),
which has a pointer to the VM address space it is part of, namely the
GPUVA manager instance. And the GPUVA manager keeps a maple tree of its
mappings as well.

If you would still like to *directly* (indirectly you already have that
relationship) keep a list of GPUVA managers (VM address spaces) per GEM,
you could still do that in a driver-specific way.

Am I missing something?

>
> Additional to that there is a state machine associated with the
> mappings, e.g. whether the page tables are up to date or need to be
> updated etc....
>
>> Do you see cases where this kind of connection between mappings and
>> backing buffers wouldn't be good enough? If so, which cases do you
>> have in mind? Maybe we can cover them in a common way as well?
>
> Yeah, we have tons of cases like that. But I have no idea how to
> generalize them.

They could still remain driver specific then, right?

>
>>
>>> As you wrote in the commit message it's highly driver specific what
>>> to map and where to map it.
>>
>> In the end the common case should be that in a VA space at least every
>> mapping being backed by a BO is represented by a struct drm_gpuva.
>
> Oh, no! We also have mappings not backed by a BO at all! For example for
> partially resident textures or data routing to internal hw etc...
>
> You can't be sure that a mapping is backed by a BO at all.

I fully agree, that's why I wrote "the common case should be that in a VA
space at least every mapping *being backed by a BO* is represented by a
struct drm_gpuva".

Mappings not being backed by an actual BO would not be linked to a GEM of
course.

>
>>
>>> Instead I suggest to have a separate structure for mappings in a VA
>>> space which drivers can then add to their GEM objects or whatever they
>>> want to map into their VMs.
>>
>> Which kind of separate structure for mappings? Another one analogous
>> to struct drm_gpuva?
>
> Well, similar to what amdgpu uses: BO -> one structure for each
> combination of BO and VM -> mappings inside that VM

As explained above, I think that's exactly what the GPUVA manager does,
just in another order: the BO has a list of mappings, mappings have a
pointer to the VM, and the VM has a list (or actually a maple tree) of
mappings.

Do you see any advantages or disadvantages of either order of
relationships? To me it looks like it doesn't really matter which one to
pick.

- Danilo

>
> Regards,
> Christian.
>
>>
>> - Danilo
>>
>>>
>>> Regards,
>>> Christian.
>>>
>>
>
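
(Again purely illustrative: a minimal sketch of the ordering discussed
above, GEM -> list of mappings -> pointer to the VM, with the VM tracking
its mappings in a maple tree. Only drm_gem_object.gpuva.list is taken from
the quoted patch; every other name, field and helper below is a simplified
assumption for the example, not the actual drm_gpuva API.)

	#include <linux/dma-resv.h>
	#include <linux/list.h>
	#include <linux/maple_tree.h>
	#include <linux/types.h>
	#include <drm/drm_gem.h>

	struct example_va_manager;

	/* Stand-in for a struct drm_gpuva-like mapping object. */
	struct example_va {
		struct example_va_manager *mgr;	/* the VM this mapping belongs to */
		struct drm_gem_object *obj;	/* backing GEM, NULL if not BO-backed */
		u64 addr;			/* GPU virtual address */
		u64 range;			/* size of the mapping */
		struct list_head gem_entry;	/* links into obj->gpuva.list */
	};

	/* Stand-in for the GPUVA manager, i.e. the VM address space. */
	struct example_va_manager {
		struct maple_tree va_mt;	/* all mappings of the VM, keyed by address */
	};

	/* Driver-specific invalidation, e.g. zapping page table entries. */
	static void example_invalidate_mapping(struct example_va *va)
	{
		/* ... */
	}

	/*
	 * On eviction, walk the GEM's gpuva list and invalidate every mapping
	 * backed by this BO, holding the GEM's dma_resv lock as the patch's
	 * documentation asks for.
	 */
	static void example_bo_evict(struct drm_gem_object *obj)
	{
		struct example_va *va;

		dma_resv_assert_held(obj->resv);
		list_for_each_entry(va, &obj->gpuva.list, gem_entry)
			example_invalidate_mapping(va);
	}

Whether the BO points at per-VM groupings (as in the earlier sketch) or
directly at its mappings (as here), the same information stays reachable;
only the order of the relationships differs.
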