linux-mm.kvack.org archive mirror
 help / color / mirror / Atom feed
From: Tvrtko Ursulin <tursulin@ursulin.net>
To: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
	Maxime Ripard <mripard@kernel.org>
Cc: intel-xe@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, Tejun Heo <tj@kernel.org>,
	Zefan Li <lizefan.x@bytedance.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Jonathan Corbet <corbet@lwn.net>,
	David Airlie <airlied@gmail.com>, Daniel Vetter <daniel@ffwll.ch>,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Friedrich Vock <friedrich.vock@gmx.de>,
	cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-doc@vger.kernel.org
Subject: Re: [RFC PATCH 2/6] drm/cgroup: Add memory accounting DRM cgroup
Date: Mon, 1 Jul 2024 18:01:41 +0100	[thread overview]
Message-ID: <40ef0eed-c514-4ec1-9486-2967f23824be@ursulin.net> (raw)
In-Reply-To: <70289c58-7947-4347-8600-658821a730b0@linux.intel.com>


On 01/07/2024 10:25, Maarten Lankhorst wrote:
> Den 2024-06-28 kl. 16:04, skrev Maxime Ripard:
>> Hi,
>>
>> On Thu, Jun 27, 2024 at 09:22:56PM GMT, Maarten Lankhorst wrote:
>>> Den 2024-06-27 kl. 19:16, skrev Maxime Ripard:
>>>> Hi,
>>>>
>>>> Thanks for working on this!
>>>>
>>>> On Thu, Jun 27, 2024 at 05:47:21PM GMT, Maarten Lankhorst wrote:
>>>>> The initial version was based roughly on the rdma and misc cgroup
>>>>> controllers, with a lot of the accounting code borrowed from rdma.
>>>>>
>>>>> The current version is a complete rewrite with page counter; it uses
>>>>> the same min/low/max semantics as the memory cgroup as a result.
>>>>>
>>>>> There's a small mismatch as TTM uses u64, and page_counter long pages.
>>>>> In practice it's not a problem. 32-bits systems don't really come with
>>>>>> =4GB cards and as long as we're consistently wrong with units, it's
>>>>> fine. The device page size may not be in the same units as kernel page
>>>>> size, and each region might also have a different page size (VRAM vs GART
>>>>> for example).
>>>>>
>>>>> The interface is simple:
>>>>> - populate drmcgroup_device->regions[..] name and size for each active
>>>>>     region, set num_regions accordingly.
>>>>> - Call drm(m)cg_register_device()
>>>>> - Use drmcg_try_charge to check if you can allocate a chunk of memory,
>>>>>     use drmcg_uncharge when freeing it. This may return an error code,
>>>>>     or -EAGAIN when the cgroup limit is reached. In that case a reference
>>>>>     to the limiting pool is returned.
>>>>> - The limiting cs can be used as compare function for
>>>>>     drmcs_evict_valuable.
>>>>> - After having evicted enough, drop reference to limiting cs with
>>>>>     drmcs_pool_put.
>>>>>
>>>>> This API allows you to limit device resources with cgroups.
>>>>> You can see the supported cards in /sys/fs/cgroup/drm.capacity
>>>>> You need to echo +drm to cgroup.subtree_control, and then you can
>>>>> partition memory.
>>>>>
>>>>> Signed-off-by: Maarten Lankhorst<maarten.lankhorst@linux.intel.com>
>>>>> Co-developed-by: Friedrich Vock<friedrich.vock@gmx.de>
>>>> I'm sorry, I should have wrote minutes on the discussion we had with TJ
>>>> and Tvrtko the other day.
>>>>
>>>> We're all very interested in making this happen, but doing a "DRM"
>>>> cgroup doesn't look like the right path to us.
>>>>
>>>> Indeed, we have a significant number of drivers that won't have a
>>>> dedicated memory but will depend on DMA allocations one way or the
>>>> other, and those pools are shared between multiple frameworks (DRM,
>>>> V4L2, DMA-Buf Heaps, at least).
>>>>
>>>> This was also pointed out by Sima some time ago here:
>>>> https://lore.kernel.org/amd-gfx/YCVOl8%2F87bqRSQei@phenom.ffwll.local/
>>>>
>>>> So we'll want that cgroup subsystem to be cross-framework. We settled on
>>>> a "device" cgroup during the discussion, but I'm sure we'll have plenty
>>>> of bikeshedding.
>>>>
>>>> The other thing we agreed on, based on the feedback TJ got on the last
>>>> iterations of his series was to go for memcg for drivers not using DMA
>>>> allocations.
>>>>
>>>> It's the part where I expect some discussion there too :)
>>>>
>>>> So we went back to a previous version of TJ's work, and I've started to
>>>> work on:
>>>>
>>>>     - Integration of the cgroup in the GEM DMA and GEM VRAM helpers (this
>>>>       works on tidss right now)
>>>>
>>>>     - Integration of all heaps into that cgroup but the system one
>>>>       (working on this at the moment)
>>>
>>> Should be similar to what I have then. I think you could use my work to
>>> continue it.
>>>
>>> I made nothing DRM specific except the name, if you renamed it the device
>>> resource management cgroup and changed the init function signature to take a
>>> name instead of a drm pointer, nothing would change. This is exactly what
>>> I'm hoping to accomplish, including reserving memory.
>>
>> I've started to work on rebasing my current work onto your series today,
>> and I'm not entirely sure how what I described would best fit. Let's
>> assume we have two KMS device, one using shmem, one using DMA
>> allocations, two heaps, one using the page allocator, the other using
>> CMA, and one v4l2 device using dma allocations.
>>
>> So we would have one KMS device and one heap using the page allocator,
>> and one KMS device, one heap, and one v4l2 driver using the DMA
>> allocator.
>>
>> Would these make different cgroup devices, or different cgroup regions?
> 
> Each driver would register a device, whatever feels most logical for that device I suppose.
> 
> My guess is that a prefix would also be nice here, so register a device with name of drm/$name or v4l2/$name, heap/$name. I didn't give it much thought and we're still experimenting, so just try something. :)
> 
> There's no limit to amount of devices, I only fixed amount of pools to match TTM, but even that could be increased arbitrarily. I just don't think there is a point in doing so.

Do we need a plan for top level controls which do not include region 
names? If the latter will be driver specific then I am thinking of ease 
of configuring it all from the outside. Especially considering that one 
cgroup can have multiple devices in it.

Second question is about double accounting for shmem backed objects. I 
think they will be seen, for drivers which allocate backing store at 
buffer objects creation time, under the cgroup of process doing the 
creation, in the existing memory controller. Right?

Is there a chance to exclude those from there and only have them in this 
new controller? Or would the opposite be a better choice? That is, not 
see those in the device memory controller but only in the existing one.

Regards,

Tvrtko

>>> The nice thing is that it should be similar to the memory cgroup controller
>>> in semantics, so you would have the same memory behavior whether you use the
>>> device cgroup or memory cgroup.
>>>
>>> I'm sad I missed the discussion, but hopefully we can coordinate more now
>>> that we know we're both working on it. :)
>>
>> Yeah, definitely :)
>>
>> Maxime
> Cheers,
> ~Maarten


  reply	other threads:[~2024-07-01 17:01 UTC|newest]

Thread overview: 20+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2024-06-27 15:47 [RFC PATCH 0/6] DRM resource management cgroup, try 2 Maarten Lankhorst
2024-06-27 15:47 ` [RFC PATCH 1/6] mm/page_counter: Move calculating protection values to page_counter Maarten Lankhorst
2024-06-27 17:33   ` Roman Gushchin
2024-06-27 18:48   ` Shakeel Butt
2024-06-27 15:47 ` [RFC PATCH 2/6] drm/cgroup: Add memory accounting DRM cgroup Maarten Lankhorst
2024-06-27 17:16   ` Maxime Ripard
2024-06-27 19:22     ` Maarten Lankhorst
2024-06-28 14:04       ` Maxime Ripard
2024-07-01  9:25         ` Maarten Lankhorst
2024-07-01 17:01           ` Tvrtko Ursulin [this message]
2024-08-06 13:01             ` Daniel Vetter
2024-08-06 14:09               ` Maxime Ripard
2024-08-06 15:26                 ` Daniel Vetter
2024-09-03  8:53                   ` Maxime Ripard
2024-09-03 11:26                     ` Simona Vetter
2024-08-06  8:19           ` Maxime Ripard
2024-06-27 15:47 ` [RFC PATCH 3/6] drm/ttm: Handle cgroup based eviction in TTM Maarten Lankhorst
2024-06-27 15:47 ` [RFC PATCH 4/6] drm/xe: Implement cgroup for vram Maarten Lankhorst
2024-06-27 15:47 ` [RFC PATCH 5/6] drm/amdgpu: Add cgroups implementation Maarten Lankhorst
2024-06-27 15:47 ` [RFC PATCH 6/6] drm/xe: Hack to test with mapped pages instead of vram Maarten Lankhorst

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=40ef0eed-c514-4ec1-9486-2967f23824be@ursulin.net \
    --to=tursulin@ursulin.net \
    --cc=airlied@gmail.com \
    --cc=akpm@linux-foundation.org \
    --cc=cgroups@vger.kernel.org \
    --cc=corbet@lwn.net \
    --cc=daniel@ffwll.ch \
    --cc=dri-devel@lists.freedesktop.org \
    --cc=friedrich.vock@gmx.de \
    --cc=hannes@cmpxchg.org \
    --cc=intel-xe@lists.freedesktop.org \
    --cc=linux-doc@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=lizefan.x@bytedance.com \
    --cc=maarten.lankhorst@linux.intel.com \
    --cc=mripard@kernel.org \
    --cc=tj@kernel.org \
    --cc=tzimmermann@suse.de \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox