From: Christoph Hellwig <hch@lst.de>
To: Robin Murphy <robin.murphy@arm.com>
Cc: "Leon Romanovsky" <leon@kernel.org>,
"Jens Axboe" <axboe@kernel.dk>, "Jason Gunthorpe" <jgg@ziepe.ca>,
"Joerg Roedel" <joro@8bytes.org>, "Will Deacon" <will@kernel.org>,
"Christoph Hellwig" <hch@lst.de>,
"Sagi Grimberg" <sagi@grimberg.me>,
"Leon Romanovsky" <leonro@nvidia.com>,
"Keith Busch" <kbusch@kernel.org>,
"Bjorn Helgaas" <bhelgaas@google.com>,
"Logan Gunthorpe" <logang@deltatee.com>,
"Yishai Hadas" <yishaih@nvidia.com>,
"Shameer Kolothum" <shameerali.kolothum.thodi@huawei.com>,
"Kevin Tian" <kevin.tian@intel.com>,
"Alex Williamson" <alex.williamson@redhat.com>,
"Marek Szyprowski" <m.szyprowski@samsung.com>,
"Jérôme Glisse" <jglisse@redhat.com>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Jonathan Corbet" <corbet@lwn.net>,
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
linux-pci@vger.kernel.org, kvm@vger.kernel.org,
linux-mm@kvack.org, "Randy Dunlap" <rdunlap@infradead.org>
Subject: Re: [PATCH v5 05/17] dma-mapping: Provide an interface to allow allocate IOVA
Date: Wed, 15 Jan 2025 07:13:26 +0100 [thread overview]
Message-ID: <20250115061326.GA29643@lst.de> (raw)
In-Reply-To: <ecb59036-b279-4412-9a09-40e05af3b9ea@arm.com>
On Tue, Jan 14, 2025 at 08:50:28PM +0000, Robin Murphy wrote:
>> +bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
>> + phys_addr_t phys, size_t size)
>> +{
>> + struct iommu_domain *domain = iommu_get_dma_domain(dev);
>> + struct iommu_dma_cookie *cookie = domain->iova_cookie;
>> + struct iova_domain *iovad = &cookie->iovad;
>> + size_t iova_off = iova_offset(iovad, phys);
>> + dma_addr_t addr;
>> +
>> + memset(state, 0, sizeof(*state));
>> + if (!use_dma_iommu(dev))
>> + return false;
>
> Can you guess why that return won't ever be taken?
It is regularly taken. Now that it's quoted this way it would probably
be good to split the thing up so we don't do the dereferences above, as
they might cause problems if the compiler isn't smart enough to only
perform them after the check.
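Something like this untested sketch, which only reorders the existing
code so the IOMMU-specific dereferences happen after the
use_dma_iommu() check:

bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
		phys_addr_t phys, size_t size)
{
	struct iommu_domain *domain;
	struct iommu_dma_cookie *cookie;
	struct iova_domain *iovad;
	size_t iova_off;
	dma_addr_t addr;

	memset(state, 0, sizeof(*state));
	if (!use_dma_iommu(dev))
		return false;

	/* only touch the IOMMU state once we know it is in use */
	domain = iommu_get_dma_domain(dev);
	cookie = domain->iova_cookie;
	iovad = &cookie->iovad;
	iova_off = iova_offset(iovad, phys);
	...
}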
>> + if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
>> + iommu_deferred_attach(dev, iommu_get_domain_for_dev(dev)))
>> + return false;
>> +
>> + if (WARN_ON_ONCE(!size))
>> + return false;
>> + if (WARN_ON_ONCE(size & DMA_IOVA_USE_SWIOTLB))
>
> This looks weird. Why would a caller ever set an effectively-private flag
> in the first place? If it's actually supposed to be a maximum size check,
> please make it look like a maximum size check.
As the person who added it - this is to catch a caller passing in a
size that would set that bit. To me this looks obvious, but should we
add a comment?
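E.g. something along these lines, just as a rough idea of what the
comment could say:

	/*
	 * The high bit of the size is reserved for the internal
	 * DMA_IOVA_USE_SWIOTLB flag, so reject any size that would
	 * collide with it.
	 */
	if (WARN_ON_ONCE(size & DMA_IOVA_USE_SWIOTLB))
		return false;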
> (Which also makes me consider iommu_dma_max_mapping_size() returning
> SIZE_MAX isn't strictly accurate, ho hum...)
You can still map SIZE_MAX, just not using this interface. Assuming
no other real life limitations get in the way, which I bet they will.
>> @@ -72,6 +74,21 @@
>> #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1))
>> +struct dma_iova_state {
>> + dma_addr_t addr;
>> + size_t __size;
>> +};
>> +
>> +/*
>> + * Use the high bit to mark if we used swiotlb for one or more ranges.
>> + */
>> +#define DMA_IOVA_USE_SWIOTLB (1ULL << 63)
>
> This will give surprising results for 32-bit size_t (in fact I guess it
> might fire some build warnings already).
Good point. I guess __size should simply become a u64.
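I.e. something like this untested sketch, keeping the flag in the high
bit but making the field width independent of size_t:

struct dma_iova_state {
	dma_addr_t addr;
	u64 __size;
};

/*
 * Use the high bit to mark if we used swiotlb for one or more ranges.
 * With __size as a u64 the flag bit also fits on 32-bit builds.
 */
#define DMA_IOVA_USE_SWIOTLB	(1ULL << 63)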