Date: Tue, 6 Aug 2024 15:01:44 +0200
From: Daniel Vetter <daniel.vetter@ffwll.ch>
To: Tvrtko Ursulin
Cc: Maarten Lankhorst, Maxime Ripard, intel-xe@lists.freedesktop.org,
	linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	Tejun Heo, Zefan Li, Johannes Weiner, Andrew Morton,
	Jonathan Corbet, David Airlie, Daniel Vetter, Thomas Zimmermann,
	Friedrich Vock, cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-doc@vger.kernel.org
Subject: Re: [RFC PATCH 2/6] drm/cgroup: Add memory accounting DRM cgroup
References: <20240627154754.74828-1-maarten.lankhorst@linux.intel.com>
 <20240627154754.74828-3-maarten.lankhorst@linux.intel.com>
 <20240627-paper-vicugna-of-fantasy-c549ed@houat>
 <6cb7c074-55cb-4825-9f80-5cf07bbd6745@linux.intel.com>
 <20240628-romantic-emerald-snake-7b26ca@houat>
 <70289c58-7947-4347-8600-658821a730b0@linux.intel.com>
 <40ef0eed-c514-4ec1-9486-2967f23824be@ursulin.net>
In-Reply-To: <40ef0eed-c514-4ec1-9486-2967f23824be@ursulin.net>

On Mon, Jul 01, 2024 at 06:01:41PM +0100, Tvrtko Ursulin wrote:
> 
> On 01/07/2024 10:25, Maarten Lankhorst wrote:
> > On 2024-06-28 at 16:04, Maxime Ripard wrote:
> > > Hi,
> > > 
> > > On Thu, Jun 27, 2024 at 09:22:56PM GMT, Maarten Lankhorst wrote:
> > > > On 2024-06-27 at 19:16, Maxime Ripard wrote:
> > > > > Hi,
> > > > > 
> > > > > Thanks for working on this!
> > > > > 
> > > > > On Thu, Jun 27, 2024 at 05:47:21PM GMT, Maarten Lankhorst wrote:
> > > > > > The initial version was based roughly on the rdma and misc cgroup
> > > > > > controllers, with a lot of the accounting code borrowed from rdma.
> > > > > > 
> > > > > > The current version is a complete rewrite using page_counter; as a
> > > > > > result it uses the same min/low/max semantics as the memory cgroup.
> > > > > > 
> > > > > > There's a small mismatch, as TTM uses u64 and page_counter counts
> > > > > > long pages. In practice it's not a problem: 32-bit systems don't
> > > > > > really come with >=4GB cards, and as long as we're consistently
> > > > > > wrong with units, it's fine. The device page size may not be in the
> > > > > > same units as the kernel page size, and each region might also have
> > > > > > a different page size (VRAM vs GART for example).
> > > > > > 
> > > > > > The interface is simple:
> > > > > > - Populate drmcgroup_device->regions[..] name and size for each
> > > > > >   active region, and set num_regions accordingly.
> > > > > > - Call drm(m)cg_register_device().
> > > > > > - Use drmcg_try_charge to check if you can allocate a chunk of
> > > > > >   memory, and drmcg_uncharge when freeing it. Charging may return an
> > > > > >   error code, or -EAGAIN when the cgroup limit is reached; in that
> > > > > >   case a reference to the limiting pool is returned.
> > > > > > - The limiting cs can be used as a compare function for
> > > > > >   drmcs_evict_valuable.
> > > > > > - After having evicted enough, drop the reference to the limiting cs
> > > > > >   with drmcs_pool_put.
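
[For illustration, a rough sketch of how a driver allocation path might use
the charging interface described above. The exact argument order of
drmcg_try_charge/drmcg_uncharge and the hypothetical_evict() and
hypothetical_backend_alloc() helpers are assumptions for this sketch, not
taken from the patch itself:]

static int hypothetical_alloc_in_region(struct drmcgroup_device *cgdev,
					u32 region, u64 size)
{
	struct drmcgroup_pool_state *limit_pool;
	int ret;

	/* Charge the allocation against the caller's cgroup first. */
	ret = drmcg_try_charge(&limit_pool, cgdev, region, size);
	if (ret == -EAGAIN) {
		/*
		 * Over the cgroup limit: the failed charge handed back a
		 * reference to the limiting pool, which can drive eviction
		 * decisions (e.g. via drmcs_evict_valuable()) before the
		 * reference is dropped and the charge retried.
		 */
		hypothetical_evict(cgdev, region, limit_pool);
		drmcs_pool_put(limit_pool);
		ret = drmcg_try_charge(&limit_pool, cgdev, region, size);
	}
	if (ret)
		return ret;

	/* Do the actual allocation; undo the charge on failure. */
	ret = hypothetical_backend_alloc(cgdev, region, size);
	if (ret)
		drmcg_uncharge(cgdev, region, size);

	return ret;
}
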
> > > > > > 
> > > > > > This API allows you to limit device resources with cgroups.
> > > > > > You can see the supported cards in /sys/fs/cgroup/drm.capacity.
> > > > > > You need to echo +drm to cgroup.subtree_control, and then you can
> > > > > > partition memory.
> > > > > > 
> > > > > > Signed-off-by: Maarten Lankhorst
> > > > > > Co-developed-by: Friedrich Vock
> > > > > 
> > > > > I'm sorry, I should have written minutes on the discussion we had
> > > > > with TJ and Tvrtko the other day.
> > > > > 
> > > > > We're all very interested in making this happen, but doing a "DRM"
> > > > > cgroup doesn't look like the right path to us.
> > > > > 
> > > > > Indeed, we have a significant number of drivers that won't have
> > > > > dedicated memory but will depend on DMA allocations one way or the
> > > > > other, and those pools are shared between multiple frameworks (DRM,
> > > > > V4L2, DMA-Buf Heaps, at least).
> > > > > 
> > > > > This was also pointed out by Sima some time ago here:
> > > > > https://lore.kernel.org/amd-gfx/YCVOl8%2F87bqRSQei@phenom.ffwll.local/
> > > > > 
> > > > > So we'll want that cgroup subsystem to be cross-framework. We settled
> > > > > on a "device" cgroup during the discussion, but I'm sure we'll have
> > > > > plenty of bikeshedding.
> > > > > 
> > > > > The other thing we agreed on, based on the feedback TJ got on the
> > > > > last iterations of his series, was to go for memcg for drivers not
> > > > > using DMA allocations.
> > > > > 
> > > > > It's the part where I expect some discussion too :)
> > > > > 
> > > > > So we went back to a previous version of TJ's work, and I've started
> > > > > to work on:
> > > > > 
> > > > > - Integration of the cgroup in the GEM DMA and GEM VRAM helpers (this
> > > > >   works on tidss right now)
> > > > > 
> > > > > - Integration of all heaps into that cgroup but the system one
> > > > >   (working on this at the moment)
> > > > 
> > > > Should be similar to what I have then. I think you could use my work
> > > > to continue it.
> > > > 
> > > > I made nothing DRM specific except the name; if you renamed it the
> > > > device resource management cgroup and changed the init function
> > > > signature to take a name instead of a drm pointer, nothing would
> > > > change. This is exactly what I'm hoping to accomplish, including
> > > > reserving memory.
> > > 
> > > I've started to work on rebasing my current work onto your series
> > > today, and I'm not entirely sure how what I described would best fit.
> > > Let's assume we have two KMS devices, one using shmem, one using DMA
> > > allocations, two heaps, one using the page allocator, the other using
> > > CMA, and one v4l2 device using DMA allocations.
> > > 
> > > So we would have one KMS device and one heap using the page allocator,
> > > and one KMS device, one heap, and one v4l2 driver using the DMA
> > > allocator.
> > > 
> > > Would these make different cgroup devices, or different cgroup regions?
> > 
> > Each driver would register a device, whatever feels most logical for
> > that device I suppose.
> > 
> > My guess is that a prefix would also be nice here, so register a device
> > with a name of drm/$name, v4l2/$name, or heap/$name. I didn't give it
> > much thought and we're still experimenting, so just try something. :)
> > 
> > There's no limit to the number of devices; I only fixed the number of
> > pools to match TTM, but even that could be increased arbitrarily. I just
> > don't think there is a point in doing so.
> 
> Do we need a plan for top level controls which do not include region names?
> If the latter will be driver specific then I am thinking of the ease of
> configuring it all from the outside. Especially considering that one
> cgroup can have multiple devices in it.
> 
> Second question is about double accounting for shmem backed objects. I
> think they will be seen, for drivers which allocate backing store at
> buffer object creation time, under the cgroup of the process doing the
> creation, in the existing memory controller. Right?

We currently don't set __GFP_ACCOUNT (respectively use GFP_KERNEL_ACCOUNT),
so no. Unless someone allocates them with GFP_USER ...

> Is there a chance to exclude those from there and only have them in this
> new controller? Or would the opposite be a better choice? That is, not
> see those in the device memory controller but only in the existing one.

I missed this, so jumping in super late. I think the guidance from Tejun
was to go the other way around: exclude allocations from normal system
memory from device cgroups and instead make sure they're tracked in the
existing memcg. Which might mean we need memcg shrinkers and the assorted
pain ...

Also, I don't think we ever reached agreement on where things like CMA
allocations should be accounted for in this case.
-Sima
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
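
[Editorial note on the flags mentioned above: GFP_KERNEL_ACCOUNT is simply
GFP_KERNEL | __GFP_ACCOUNT, and for kernel allocations only those carrying
__GFP_ACCOUNT are charged to the allocating task's memory cgroup as kernel
memory. A minimal sketch; the page-array allocation is a made-up example,
not code from the thread:]

#include <linux/slab.h>

/*
 * Made-up example: a driver-internal page array for a buffer object.
 * With plain GFP_KERNEL the allocation is invisible to memcg; with
 * GFP_KERNEL_ACCOUNT (== GFP_KERNEL | __GFP_ACCOUNT) it is charged to
 * the allocating task's memory cgroup as kernel memory.
 */
static struct page **alloc_page_array(unsigned long npages)
{
	return kmalloc_array(npages, sizeof(struct page *),
			     GFP_KERNEL_ACCOUNT | __GFP_ZERO);
}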