From: Robin Murphy <robin.murphy@arm.com>
To: Catalin Marinas <catalin.marinas@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
	Arnd Bergmann <arnd@arndb.de>, Christoph Hellwig <hch@lst.de>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Will Deacon <will@kernel.org>, Marc Zyngier <maz@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Herbert Xu <herbert@gondor.apana.org.au>,
	Ard Biesheuvel <ardb@kernel.org>,
	Isaac Manjarres <isaacmanjarres@google.com>,
	Saravana Kannan <saravanak@google.com>,
	Alasdair Kergon <agk@redhat.com>, Daniel Vetter <daniel@ffwll.ch>,
	Joerg Roedel <joro@8bytes.org>, Mark Brown <broonie@kernel.org>,
	Mike Snitzer <snitzer@kernel.org>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-mm@kvack.org, iommu@lists.linux.dev,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v4 13/15] iommu/dma: Force bouncing if the size is not cacheline-aligned
Date: Fri, 19 May 2023 18:09:45 +0100
Message-ID: <30a91384-157c-0192-443c-12c835ad3b35@arm.com>
In-Reply-To: <ZGeBcLVFKpLdTveZ@arm.com>

On 19/05/2023 3:02 pm, Catalin Marinas wrote:
> On Fri, May 19, 2023 at 01:29:38PM +0100, Robin Murphy wrote:
>> On 2023-05-18 18:34, Catalin Marinas wrote:
>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>>> index 7a9f0b0bddbd..ab1c1681c06e 100644
>>> --- a/drivers/iommu/dma-iommu.c
>>> +++ b/drivers/iommu/dma-iommu.c
>>> @@ -956,7 +956,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
>>>    	struct scatterlist *sg;
>>>    	int i;
>>> -	if (dev_use_swiotlb(dev))
>>> +	if (dev_use_swiotlb(dev) || sg_is_dma_bounced(sgl))
>>>    		for_each_sg(sgl, sg, nelems, i)
>>>    			iommu_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
>>>    						      sg->length, dir);
>>> @@ -972,7 +972,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
>>>    	struct scatterlist *sg;
>>>    	int i;
>>> -	if (dev_use_swiotlb(dev))
>>> +	if (dev_use_swiotlb(dev) || sg_is_dma_bounced(sgl))
>>>    		for_each_sg(sgl, sg, nelems, i)
>>>    			iommu_dma_sync_single_for_device(dev,
>>>    							 sg_dma_address(sg),
>>> @@ -998,7 +998,8 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
>>>    	 * If both the physical buffer start address and size are
>>>    	 * page aligned, we don't need to use a bounce page.
>>>    	 */
>>> -	if (dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) {
>>> +	if ((dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) ||
>>> +	    dma_kmalloc_needs_bounce(dev, size, dir)) {
>>>    		void *padding_start;
>>>    		size_t padding_size, aligned_size;
>>> @@ -1210,7 +1211,21 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>>>    			goto out;
>>>    	}
>>> -	if (dev_use_swiotlb(dev))
>>> +	/*
>>> +	 * If kmalloc() buffers are not DMA-safe for this device and
>>> +	 * direction, check the individual lengths in the sg list. If one of
>>> +	 * the buffers is deemed unsafe, follow the iommu_dma_map_sg_swiotlb()
>>> +	 * path for potential bouncing.
>>> +	 */
>>> +	if (!dma_kmalloc_safe(dev, dir)) {
>>> +		for_each_sg(sg, s, nents, i)
>>> +			if (!dma_kmalloc_size_aligned(s->length)) {
>>
>> Just to remind myself, we're not checking s->offset on the grounds that
>> if anyone wants to DMA into an unaligned part of a larger allocation,
>> that remains at their own risk - is that right?
> 
> Right. That's the case currently as well and those users that were
> relying on ARCH_KMALLOC_MINALIGN for this have either been migrated to
> ARCH_DMA_MINALIGN in this series or the logic rewritten (as in the
> crypto code).

OK, I did manage to summon a vague memory of this being discussed 
before, which at least stopped me asking "should we be checking..." 
all over again - perhaps a comment on dma_kmalloc_safe() to record 
that reasoning might not go amiss?
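
Something like this, say - purely a sketch, and the function body below 
is only my recollection of the patch 12 definition, so treat it as 
illustrative rather than authoritative:

/*
 * dma_kmalloc_safe - whether kmalloc() buffers are DMA-safe without
 * bouncing, based on the device's coherency and the transfer direction.
 *
 * Note that only allocation sizes are policed here, not the offset of
 * the DMA within the buffer: anyone who wants to DMA into an unaligned
 * part of a larger (safely sized) allocation does so at their own
 * risk, as has always been the case.
 */
static inline bool dma_kmalloc_safe(struct device *dev,
				    enum dma_data_direction dir)
{
	/*
	 * No cache invalidation is needed if the device is coherent or
	 * if the transfer only cleans the cache (DMA_TO_DEVICE), so
	 * sub-cacheline buffers cannot be corrupted by the maintenance.
	 */
	if (dev_is_dma_coherent(dev) || dir == DMA_TO_DEVICE)
		return true;

	return false;
}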

>> Do we care about the (probably theoretical) case where someone might build a
>> scatterlist for multiple small allocations such that ones which happen to be
>> adjacent might get combined into a single segment of apparently "safe"
>> length but still at "unsafe" alignment?
> 
> I'd say that's theoretical only. One could write such code but normally
> you'd go for an array rather than relying on the randomness of the
> kmalloc pointers to figure out adjacent objects. It also only works if
> the individual struct size is exactly one of the kmalloc cache sizes, so
> not generic enough.

FWIW I was imagining something like sg_alloc_table_from_pages() but at a 
smaller scale, queueing up some list/array of, say, 32-byte buffers into 
a scatterlist to submit as a single DMA job. I'm not aware that such a 
thing exists though, and I'm inclined to agree that it probably is 
sufficiently unrealistic to be concerned about. As usual I just want to 
feel comfortable that we've explored all the possibilities :)
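
For concreteness, a concocted illustration of the pattern - every name 
here is invented for the example, and AFAIK no such helper actually 
exists in-tree:

/*
 * Hypothetical helper queueing equally-sized small buffers into a
 * scatterlist, merging physically adjacent ones in the style of
 * sg_alloc_table_from_pages() (page-crossing concerns ignored for
 * simplicity). Two adjacent 32-byte kmalloc() objects would coalesce
 * into a 64-byte segment - a "safe" length at an "unsafe" offset.
 */
static int sg_fill_from_bufs(struct scatterlist *sgl, int max_ents,
			     void **bufs, unsigned int len, int nbufs)
{
	struct scatterlist *sg = sgl;
	int i, nents = 0;

	sg_init_table(sgl, max_ents);
	for (i = 0; i < nbufs; i++) {
		if (nents && sg_virt(sg) + sg->length == bufs[i]) {
			/* adjacent object: extend the previous segment */
			sg->length += len;
			continue;
		}
		if (nents)
			sg++;
		sg_set_buf(sg, bufs[i], len);
		nents++;
	}
	sg_mark_end(sg);
	return nents;
}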

>>> +				sg_dma_mark_bounced(sg);
>>
>> I'd prefer to have iommu_dma_map_sg_swiotlb() mark the segments, since
>> that's in charge of the actual bouncing. Then we can fold the alignment
>> check into dev_use_swiotlb() (with the dev_is_untrusted() condition taking
>> priority), and sync/unmap can simply rely on sg_is_dma_bounced() alone.
> 
> With this patch we only set the SG_DMA_BOUNCED on the first element of
> the sglist. Do you want to set this flag only on individual elements
> being bounced? It makes some sense in principle but the
> iommu_dma_unmap_sg() path would need to scan the list again to decide
> whether to go the swiotlb path.
> 
> If we keep the SG_DMA_BOUNCED flag only on the first element, I can
> change it to your suggestion, assuming I understood it.

Indeed that should be fine - sync_sg/unmap_sg always have to be given 
the same arguments that were passed to map_sg (and note that in the 
normal case, the DMA address/length will often end up concatenated 
entirely into the first element), so while we still have the two 
distinct flows internally, I don't see any issue with only tagging the 
head of the list to steer between them. Of course, if it then works 
out to be trivial enough to tag *all* the segments for good measure, 
there should be no harm in that either - at the moment the flag is 
destined to mean "this might be bounced, so needs checking" rather 
than "this definitely is bounced" either way.
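
i.e. something roughly like this (untested, and dev_use_sg_swiotlb() is 
just a name I'm inventing here for the folded-together check):

/* Untrusted devices take priority over the kmalloc() alignment check */
static bool dev_use_sg_swiotlb(struct device *dev, struct scatterlist *sg,
			       int nents, enum dma_data_direction dir)
{
	struct scatterlist *s;
	int i;

	if (dev_use_swiotlb(dev))
		return true;

	if (dma_kmalloc_safe(dev, dir))
		return false;

	/* Any "unsafe" segment length sends the whole list for bouncing */
	for_each_sg(sg, s, nents, i)
		if (!dma_kmalloc_size_aligned(s->length))
			return true;

	return false;
}

with iommu_dma_map_sg_swiotlb() then calling sg_dma_mark_bounced() on 
the head of the list itself, so that sync/unmap need nothing more than 
the sg_is_dma_bounced() checks as in the hunks above.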

Cheers,
Robin.

