From: Catalin Marinas <catalin.marinas@arm.com>
To: Robin Murphy <robin.murphy@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
Arnd Bergmann <arnd@arndb.de>, Christoph Hellwig <hch@lst.de>,
Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
Will Deacon <will@kernel.org>, Marc Zyngier <maz@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Herbert Xu <herbert@gondor.apana.org.au>,
Ard Biesheuvel <ardb@kernel.org>,
Isaac Manjarres <isaacmanjarres@google.com>,
Saravana Kannan <saravanak@google.com>,
Alasdair Kergon <agk@redhat.com>, Daniel Vetter <daniel@ffwll.ch>,
Joerg Roedel <joro@8bytes.org>, Mark Brown <broonie@kernel.org>,
Mike Snitzer <snitzer@kernel.org>,
"Rafael J. Wysocki" <rafael@kernel.org>,
linux-mm@kvack.org, iommu@lists.linux.dev,
linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v4 13/15] iommu/dma: Force bouncing if the size is not cacheline-aligned
Date: Fri, 19 May 2023 15:02:24 +0100
Message-ID: <ZGeBcLVFKpLdTveZ@arm.com>
In-Reply-To: <9dc3e036-75cd-debf-7093-177ef6c7a3ae@arm.com>
On Fri, May 19, 2023 at 01:29:38PM +0100, Robin Murphy wrote:
> On 2023-05-18 18:34, Catalin Marinas wrote:
> > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > index 7a9f0b0bddbd..ab1c1681c06e 100644
> > --- a/drivers/iommu/dma-iommu.c
> > +++ b/drivers/iommu/dma-iommu.c
> > @@ -956,7 +956,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
> > struct scatterlist *sg;
> > int i;
> > - if (dev_use_swiotlb(dev))
> > + if (dev_use_swiotlb(dev) || sg_is_dma_bounced(sgl))
> > for_each_sg(sgl, sg, nelems, i)
> > iommu_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
> > sg->length, dir);
> > @@ -972,7 +972,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
> > struct scatterlist *sg;
> > int i;
> > - if (dev_use_swiotlb(dev))
> > + if (dev_use_swiotlb(dev) || sg_is_dma_bounced(sgl))
> > for_each_sg(sgl, sg, nelems, i)
> > iommu_dma_sync_single_for_device(dev,
> > sg_dma_address(sg),
> > @@ -998,7 +998,8 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
> > * If both the physical buffer start address and size are
> > * page aligned, we don't need to use a bounce page.
> > */
> > - if (dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) {
> > + if ((dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) ||
> > + dma_kmalloc_needs_bounce(dev, size, dir)) {
> > void *padding_start;
> > size_t padding_size, aligned_size;
> > @@ -1210,7 +1211,21 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
> > goto out;
> > }
> > - if (dev_use_swiotlb(dev))
> > + /*
> > + * If kmalloc() buffers are not DMA-safe for this device and
> > + * direction, check the individual lengths in the sg list. If one of
> > + * the buffers is deemed unsafe, follow the iommu_dma_map_sg_swiotlb()
> > + * path for potential bouncing.
> > + */
> > + if (!dma_kmalloc_safe(dev, dir)) {
> > + for_each_sg(sg, s, nents, i)
> > + if (!dma_kmalloc_size_aligned(s->length)) {
>
> Just to remind myself, we're not checking s->offset on the grounds that if
> anyone wants to DMA into an unaligned part of a larger allocation that
> remains at their own risk, is that right?
Right. That's the case currently as well, and the users that were
relying on ARCH_KMALLOC_MINALIGN for this have either been migrated to
ARCH_DMA_MINALIGN in this series or had their logic rewritten (as in
the crypto code).
> Do we care about the (probably theoretical) case where someone might build a
> scatterlist for multiple small allocations such that ones which happen to be
> adjacent might get combined into a single segment of apparently "safe"
> length but still at "unsafe" alignment?
I'd say that's theoretical only. One could write such code, but normally
you'd go for an array rather than relying on the randomness of kmalloc
pointers to end up with adjacent objects. It also only works if the
individual struct size is exactly one of the kmalloc cache sizes, so
it's not generic enough to worry about.
> > + sg_dma_mark_bounced(sg);
>
> I'd prefer to have iommu_dma_map_sg_swiotlb() mark the segments, since
> that's in charge of the actual bouncing. Then we can fold the alignment
> check into dev_use_swiotlb() (with the dev_is_untrusted() condition taking
> priority), and sync/unmap can simply rely on sg_is_dma_bounced() alone.
With this patch we only set SG_DMA_BOUNCED on the first element of the
sglist. Do you want the flag set only on the individual elements being
bounced? That makes some sense in principle, but the
iommu_dma_unmap_sg() path would then need to scan the list again to
decide whether to take the swiotlb path.

If we keep the SG_DMA_BOUNCED flag on the first element only, I can
change the code to match your suggestion, assuming I understood it.
--
Catalin
Thread overview: 35+ messages
2023-05-18 17:33 [PATCH v4 00/15] mm, dma, arm64: Reduce ARCH_KMALLOC_MINALIGN to 8 Catalin Marinas
2023-05-18 17:33 ` [PATCH v4 01/15] mm/slab: Decouple ARCH_KMALLOC_MINALIGN from ARCH_DMA_MINALIGN Catalin Marinas
2023-05-19 15:49 ` Catalin Marinas
2023-05-18 17:33 ` [PATCH v4 02/15] dma: Allow dma_get_cache_alignment() to return the smaller cache_line_size() Catalin Marinas
2023-05-20 5:42 ` Christoph Hellwig
2023-05-20 6:14 ` Christoph Hellwig
2023-05-20 10:34 ` Catalin Marinas
2023-05-18 17:33 ` [PATCH v4 03/15] mm/slab: Simplify create_kmalloc_cache() args and make it static Catalin Marinas
2023-05-18 17:33 ` [PATCH v4 04/15] mm/slab: Limit kmalloc() minimum alignment to dma_get_cache_alignment() Catalin Marinas
2023-05-18 17:33 ` [PATCH v4 05/15] drivers/base: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN Catalin Marinas
2023-05-19 9:41 ` Greg Kroah-Hartman
2023-05-18 17:33 ` [PATCH v4 06/15] drivers/gpu: " Catalin Marinas
2023-05-18 17:33 ` [PATCH v4 07/15] drivers/usb: " Catalin Marinas
2023-05-19 9:41 ` Greg Kroah-Hartman
2023-05-18 17:33 ` [PATCH v4 08/15] drivers/spi: " Catalin Marinas
2023-05-18 17:33 ` [PATCH v4 09/15] drivers/md: " Catalin Marinas
2023-05-18 17:33 ` [PATCH v4 10/15] arm64: Allow kmalloc() caches aligned to the smaller cache_line_size() Catalin Marinas
2023-05-18 17:33 ` [PATCH v4 11/15] scatterlist: Add dedicated config for DMA flags Catalin Marinas
2023-05-20 5:42 ` Christoph Hellwig
2023-05-18 17:34 ` [PATCH v4 12/15] dma-mapping: Force bouncing if the kmalloc() size is not cache-line-aligned Catalin Marinas
2023-05-20 5:44 ` Christoph Hellwig
2023-05-18 17:34 ` [PATCH v4 13/15] iommu/dma: Force bouncing if the size is not cacheline-aligned Catalin Marinas
2023-05-19 12:29 ` Robin Murphy
2023-05-19 14:02 ` Catalin Marinas [this message]
2023-05-19 15:46 ` Catalin Marinas
2023-05-19 17:09 ` Robin Murphy
2023-05-22 7:27 ` Catalin Marinas
2023-05-23 15:47 ` Robin Murphy
2023-05-18 17:34 ` [PATCH v4 14/15] mm: slab: Reduce the kmalloc() minimum alignment if DMA bouncing possible Catalin Marinas
2023-05-19 11:00 ` Catalin Marinas
2023-05-18 17:34 ` [PATCH v4 15/15] arm64: Enable ARCH_WANT_KMALLOC_DMA_BOUNCE for arm64 Catalin Marinas
2023-05-18 17:56 ` [PATCH v4 00/15] mm, dma, arm64: Reduce ARCH_KMALLOC_MINALIGN to 8 Linus Torvalds
2023-05-18 18:13 ` Ard Biesheuvel
2023-05-18 18:50 ` Catalin Marinas
2023-05-18 18:46 ` Catalin Marinas