From: Leon Romanovsky <leon@kernel.org>
To: Baolu Lu <baolu.lu@linux.intel.com>
Cc: "Marek Szyprowski" <m.szyprowski@samsung.com>,
"Jens Axboe" <axboe@kernel.dk>, "Christoph Hellwig" <hch@lst.de>,
"Keith Busch" <kbusch@kernel.org>, "Jake Edge" <jake@lwn.net>,
"Jonathan Corbet" <corbet@lwn.net>,
"Jason Gunthorpe" <jgg@ziepe.ca>,
"Zhu Yanjun" <zyjzyj2000@gmail.com>,
"Robin Murphy" <robin.murphy@arm.com>,
"Joerg Roedel" <joro@8bytes.org>, "Will Deacon" <will@kernel.org>,
"Sagi Grimberg" <sagi@grimberg.me>,
"Bjorn Helgaas" <bhelgaas@google.com>,
"Logan Gunthorpe" <logang@deltatee.com>,
"Yishai Hadas" <yishaih@nvidia.com>,
"Shameer Kolothum" <shameerali.kolothum.thodi@huawei.com>,
"Kevin Tian" <kevin.tian@intel.com>,
"Alex Williamson" <alex.williamson@redhat.com>,
"Jérôme Glisse" <jglisse@redhat.com>,
"Andrew Morton" <akpm@linux-foundation.org>,
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
linux-pci@vger.kernel.org, kvm@vger.kernel.org,
linux-mm@kvack.org, "Niklas Schnelle" <schnelle@linux.ibm.com>,
"Chuck Lever" <chuck.lever@oracle.com>,
"Luis Chamberlain" <mcgrof@kernel.org>,
"Matthew Wilcox" <willy@infradead.org>,
"Dan Williams" <dan.j.williams@intel.com>,
"Kanchan Joshi" <joshi.k@samsung.com>,
"Chaitanya Kulkarni" <kch@nvidia.com>,
"Jason Gunthorpe" <jgg@nvidia.com>
Subject: Re: [PATCH v10 03/24] iommu: generalize the batched sync after map interface
Date: Tue, 29 Apr 2025 09:09:18 +0300
Message-ID: <20250429060918.GK5848@unreal>
In-Reply-To: <f8d86cde-d485-4e5a-a693-e9323679474f@linux.intel.com>
On Tue, Apr 29, 2025 at 10:19:46AM +0800, Baolu Lu wrote:
> On 4/28/25 17:22, Leon Romanovsky wrote:
> > From: Christoph Hellwig <hch@lst.de>
> >
> > For the upcoming IOVA-based DMA API we want to batch the
> > ops->iotlb_sync_map() call after mapping multiple IOVAs from
> > dma-iommu without having a scatterlist. Improve the API.
> >
> > Add a wrapper for the map_sync as iommu_sync_map() so that callers
> > don't need to poke into the methods directly.
> >
> > Formalize __iommu_map() into iommu_map_nosync() which requires the
> > caller to call iommu_sync_map() after all maps are completed.
> >
> > Refactor the existing sanity checks from all the different layers
> > into iommu_map_nosync().
> >
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
> > Acked-by: Will Deacon <will@kernel.org>
> > Tested-by: Jens Axboe <axboe@kernel.dk>
> > Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
> > Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
> > Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> > ---
> > drivers/iommu/iommu.c | 65 +++++++++++++++++++------------------------
> > include/linux/iommu.h | 4 +++
> > 2 files changed, 33 insertions(+), 36 deletions(-)
> >
> > diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> > index 4f91a740c15f..02960585b8d4 100644
> > --- a/drivers/iommu/iommu.c
> > +++ b/drivers/iommu/iommu.c
> > @@ -2443,8 +2443,8 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
> > return pgsize;
> > }
> > -static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
> > - phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> > +int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
> > + phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> > {
> > const struct iommu_domain_ops *ops = domain->ops;
> > unsigned long orig_iova = iova;
> > @@ -2453,12 +2453,19 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
> > phys_addr_t orig_paddr = paddr;
> > int ret = 0;
> > + might_sleep_if(gfpflags_allow_blocking(gfp));
> > +
> > if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
> > return -EINVAL;
> > if (WARN_ON(!ops->map_pages || domain->pgsize_bitmap == 0UL))
> > return -ENODEV;
> > + /* Discourage passing strange GFP flags */
> > + if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
> > + __GFP_HIGHMEM)))
> > + return -EINVAL;
> > +
> > /* find out the minimum page size supported */
> > min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
> > @@ -2506,31 +2513,27 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
> > return ret;
> > }
> > -int iommu_map(struct iommu_domain *domain, unsigned long iova,
> > - phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> > +int iommu_sync_map(struct iommu_domain *domain, unsigned long iova, size_t size)
> > {
> > const struct iommu_domain_ops *ops = domain->ops;
> > - int ret;
> > -
> > - might_sleep_if(gfpflags_allow_blocking(gfp));
> > - /* Discourage passing strange GFP flags */
> > - if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
> > - __GFP_HIGHMEM)))
> > - return -EINVAL;
> > + if (!ops->iotlb_sync_map)
> > + return 0;
> > + return ops->iotlb_sync_map(domain, iova, size);
> > +}
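
To make the calling convention concrete, a batched caller is expected to
look roughly like the sketch below (the function name and error handling
are invented for illustration, this is not code from the series): each
page is mapped with iommu_map_nosync() and the IOTLB is flushed once at
the end with iommu_sync_map().

static int example_map_pages(struct iommu_domain *domain, unsigned long iova,
			     phys_addr_t *pages, unsigned int nr_pages,
			     size_t pgsize, int prot)
{
	size_t mapped = 0;
	unsigned int i;
	int ret;

	for (i = 0; i < nr_pages; i++) {
		/* No per-page IOTLB flush here. */
		ret = iommu_map_nosync(domain, iova + mapped, pages[i],
				       pgsize, prot, GFP_KERNEL);
		if (ret)
			goto err_unmap;
		mapped += pgsize;
	}

	/* One flush for the whole batch instead of one per page. */
	ret = iommu_sync_map(domain, iova, mapped);
	if (ret)
		goto err_unmap;

	return 0;

err_unmap:
	if (mapped)
		iommu_unmap(domain, iova, mapped);
	return ret;
}
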
>
> I am wondering whether iommu_sync_map() needs a return value. The
> purpose of this callback is just to sync the TLB cache after new
> mappings are created, which should effectively be a no-fail operation.
>
> The int return type in the iotlb_sync_map definition in struct
> iommu_domain_ops seems unnecessary:
>
> struct iommu_domain_ops {
> ...
> 	int (*iotlb_sync_map)(struct iommu_domain *domain,
> 			      unsigned long iova, size_t size);
> ...
> };
>
> Furthermore, currently no iommu driver implements this callback in a way
> that returns a failure. We could clean up the iommu definition in a
> subsequent patch series, but for this driver-facing interface, it's
> better to get it right from the beginning.
I see that s390 relies on the return value:
569 static int s390_iommu_iotlb_sync_map(struct iommu_domain *domain,
570 unsigned long iova, size_t size)
571 {
<...>
581 ret = zpci_refresh_trans((u64)zdev->fh << 32,
582 iova, size);
583 /*
584 * let the hypervisor discover invalidated entries
585 * allowing it to free IOVAs and unpin pages
586 */
587 if (ret == -ENOMEM) {
588 ret = zpci_refresh_all(zdev);
589 if (ret)
590 break;
591 }
<...>
595 return ret;
596 }
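
If iommu_sync_map() swallowed that error, the caller would keep using a
mapping whose translation state was never refreshed. With the return
value kept, the single-shot wrapper (and the batched dma-iommu path) can
unwind a failed flush, roughly along these lines (a sketch of the
pattern, not necessarily the exact code later in the series):

int iommu_map(struct iommu_domain *domain, unsigned long iova,
	      phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
{
	int ret;

	ret = iommu_map_nosync(domain, iova, paddr, size, prot, gfp);
	if (ret)
		return ret;

	/* If the flush fails the mapping cannot be trusted; tear it down. */
	ret = iommu_sync_map(domain, iova, size);
	if (ret)
		iommu_unmap(domain, iova, size);

	return ret;
}
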
>
> Thanks,
> baolu
Thread overview: 41+ messages
2025-04-28 9:22 [PATCH v10 00/24] Provide a new two step DMA mapping API Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 01/24] PCI/P2PDMA: Refactor the p2pdma mapping helpers Leon Romanovsky
2025-04-29 2:08 ` Baolu Lu
2025-04-28 9:22 ` [PATCH v10 02/24] dma-mapping: move the PCI P2PDMA mapping helpers to pci-p2pdma.h Leon Romanovsky
2025-04-29 2:09 ` Baolu Lu
2025-04-28 9:22 ` [PATCH v10 03/24] iommu: generalize the batched sync after map interface Leon Romanovsky
2025-04-29 2:19 ` Baolu Lu
2025-04-29 6:09 ` Leon Romanovsky [this message]
2025-04-29 11:53 ` Jason Gunthorpe
2025-04-28 9:22 ` [PATCH v10 04/24] iommu: add kernel-doc for iommu_unmap_fast Leon Romanovsky
2025-04-29 2:37 ` Baolu Lu
2025-04-28 9:22 ` [PATCH v10 05/24] dma-mapping: Provide an interface to allow allocate IOVA Leon Romanovsky
2025-04-29 3:10 ` Baolu Lu
2025-04-29 5:46 ` Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 06/24] iommu/dma: Factor out a iommu_dma_map_swiotlb helper Leon Romanovsky
2025-04-29 4:58 ` Baolu Lu
2025-04-29 5:53 ` Leon Romanovsky
2025-04-29 5:58 ` Baolu Lu
2025-04-29 6:18 ` Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 07/24] dma-mapping: Implement link/unlink ranges API Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 08/24] dma-mapping: add a dma_need_unmap helper Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 09/24] docs: core-api: document the IOVA-based API Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 10/24] mm/hmm: let users to tag specific PFN with DMA mapped bit Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 11/24] mm/hmm: provide generic DMA managing logic Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 12/24] RDMA/umem: Store ODP access mask information in PFN Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 13/24] RDMA/core: Convert UMEM ODP DMA mapping to caching IOVA and page linkage Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 14/24] RDMA/umem: Separate implicit ODP initialization from explicit ODP Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 15/24] vfio/mlx5: Explicitly use number of pages instead of allocated length Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 16/24] vfio/mlx5: Rewrite create mkey flow to allow better code reuse Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 17/24] vfio/mlx5: Enable the DMA link API Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 18/24] block: share more code for bio addition helper Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 19/24] block: don't merge different kinds of P2P transfers in a single bio Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 20/24] blk-mq: add scatterlist-less DMA mapping helpers Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 21/24] nvme-pci: remove struct nvme_descriptor Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 22/24] nvme-pci: use a better encoding for small prp pool allocations Leon Romanovsky
2025-04-28 9:22 ` [PATCH v10 23/24] nvme-pci: convert to blk_rq_dma_map Leon Romanovsky
2025-04-28 16:46 ` Keith Busch
2025-04-28 17:22 ` Leon Romanovsky
2025-04-28 17:30 ` Keith Busch
2025-04-28 9:22 ` [PATCH v10 24/24] nvme-pci: store aborted state in flags variable Leon Romanovsky
2025-05-12 10:07 ` (subset) [PATCH v10 00/24] Provide a new two step DMA mapping API Leon Romanovsky