From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 13 Mar 2025 19:16:58 +0100
From: Maxime Ripard <mripard@kernel.org>
To: Robin Murphy
Cc: Andrew Morton, Marek Szyprowski, Sumit Semwal, Christian König,
 Benjamin Gaignard, Brian Starkey, John Stultz, "T.J. Mercier",
 Maarten Lankhorst, Thomas Zimmermann, David Airlie, Simona Vetter,
 Tomasz Figa, Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev,
 linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
 linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH RFC 06/12] dma: direct: Provide accessor to dmem region
Message-ID: <20250313-wooden-violet-quokka-001ef5@houat>
References: <20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.org>
 <20250310-dmem-cgroups-v1-6-2984c1bc9312@kernel.org>
 <2af9ea85-b31d-49c9-b574-38c33cc89cef@arm.com>
 <20250310-expert-piculet-of-fascination-3813cd@houat>
 <0b057c55-fe02-4c83-af69-37770dc83eb8@arm.com>
In-Reply-To: <0b057c55-fe02-4c83-af69-37770dc83eb8@arm.com>

On Mon, Mar 10, 2025 at 06:44:51PM +0000, Robin Murphy wrote:
> On 2025-03-10 4:28 pm, Maxime Ripard wrote:
> > On Mon, Mar 10, 2025 at 02:56:37PM +0000, Robin Murphy wrote:
> > > On 2025-03-10 12:06 pm, Maxime Ripard wrote:
> > > > Consumers of the direct DMA API will have to know which region their
> > > > device allocate from in order for them to charge the memory allocation
> > > > in the right one.
> > > 
> > > This doesn't seem to make much sense - dma-direct is not an allocator
> > > itself, it just provides the high-level dma_alloc_attrs/dma_alloc_pages/etc.
> > > interfaces wherein the underlying allocations _could_ come from CMA, but
> > > also a per-device coherent/restricted pool, or a global coherent/atomic
> > > pool, or the regular page allocator, or in one weird corner case the SWIOTLB
> > > buffer, or...
> > 
> > I guess it wasn't super clear, but what I meant is that it's an
> > allocator to the consumer: it gets called, and returns a buffer. How it
> > does so is transparent to the device, and on the other side of the
> > abstraction.
> > 
> > I do agree that the logic is complicated to follow, and that's what I
> > was getting at in the cover letter.
> 
> Right, but ultimately my point is that when we later end up with:
> 
> struct dmem_cgroup_region *
> dma_get_dmem_cgroup_region(struct device *dev)
> {
> 	if (dma_alloc_direct(dev, get_dma_ops(dev)))
> 		return dma_direct_get_dmem_cgroup_region(dev);
> 
> 		= dma_contiguous_get_dmem_cgroup_region(dev);
> 
> it's objectively wrong given what dma_alloc_direct() means in context:
> 
> void *dma_alloc_attrs(...)
> {
> 	if (dma_alloc_direct(dev, ops))
> 		cpu_addr = dma_direct_alloc(...);
> 
> where dma_direct_alloc() may then use at least 5 different allocation
> methods, only one of which is CMA. Accounting things which are not CMA to
> CMA seems to thoroughly defeat the purpose of having such fine-grained
> accounting at all.
> 
> This is why the very notion of "consumers of dma-direct" should
> fundamentally not be a thing IMO. Drivers consume the DMA API interfaces,
> and the DMA API ultimately consumes various memory allocators, but what
> happens in between is nobody else's business; dma-direct happens to
> represent *some* paths between the two, but there are plenty more paths to
> the same (and different) allocators through other DMA API implementations as
> well. Which route a particular call takes to end up at a particular
> allocator is not meaningful unless you are the DMA ops dispatch code.
> 
> Or to put it another way, to even go for the "dumbest possible correct
> solution", the plumbing of dma_get_dmem_cgroup_region() would need to be
> about as complex and widespread as the plumbing of dma_alloc_attrs() itself
> ;)

I largely agree with the sentiment, and I think the very idea of
dma_get_dmem_cgroup_region() is a bad one for that reason. But since I
wasn't too sure what a good one might look like, I figured it would be a
good way to start the discussion still :)

> I think I see why a simple DMA attribute couldn't be made to work, as
> dmem_cgroup_uncharge() can't simply look up the pool the same way
> dmem_cgroup_try_charge() found it, since we still need a cg for that and
> get_current_dmemcs() can't be assumed to be stable over time, right?
> At the point I'm probably starting to lean towards a whole new DMA op with a
> properly encapsulated return type (and maybe a long-term goal of
> consolidating the 3 or 4 different allocation type we already have)

It felt like a good solution to me too, and what I alluded to with
struct page or folio. My feeling was that the best way to do it would be
to encapsulate it into the structure returned by the dma_alloc_* API.

That's a pretty large rework though, so I wanted to make sure I was on
the right path before doing so.
> or just have a single dmem region for "DMA API memory" and don't care
> where it came from (although I do see the issues with that too - you
> probably wouldn't want to ration a device-private pool the same way as
> global system memory, for example)

Yeah, the CMA pool is probably something you want to limit differently
as well.

Maxime