Message-ID: <378d2261-81ec-a68a-7ba4-7602f7a335f9@arm.com>
Date: Tue, 23 May 2023 16:47:52 +0100
From: Robin Murphy <robin.murphy@arm.com>
Subject: Re: [PATCH v4 13/15] iommu/dma: Force bouncing if the size is not
 cacheline-aligned
To: Catalin Marinas
Cc: Linus Torvalds, Arnd Bergmann, Christoph Hellwig, Greg Kroah-Hartman,
 Will Deacon, Marc Zyngier, Andrew Morton, Herbert Xu, Ard Biesheuvel,
 Isaac Manjarres, Saravana Kannan, Alasdair Kergon,
 Daniel Vetter, Joerg Roedel, Mark Brown, Mike Snitzer,
 "Rafael J. Wysocki", linux-mm@kvack.org, iommu@lists.linux.dev,
 linux-arm-kernel@lists.infradead.org
References: <20230518173403.1150549-1-catalin.marinas@arm.com>
 <20230518173403.1150549-14-catalin.marinas@arm.com>
 <9dc3e036-75cd-debf-7093-177ef6c7a3ae@arm.com>
 <30a91384-157c-0192-443c-12c835ad3b35@arm.com>

On 22/05/2023 8:27 am, Catalin Marinas wrote:
> On Fri, May 19, 2023 at 06:09:45PM +0100, Robin Murphy wrote:
>> On 19/05/2023 3:02 pm, Catalin Marinas wrote:
>>> On Fri, May 19, 2023 at 01:29:38PM +0100, Robin Murphy wrote:
>>>> On 2023-05-18 18:34, Catalin Marinas wrote:
>>>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>>>>> index 7a9f0b0bddbd..ab1c1681c06e 100644
>>>>> --- a/drivers/iommu/dma-iommu.c
>>>>> +++ b/drivers/iommu/dma-iommu.c
> [...]
>>>>> @@ -1210,7 +1211,21 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>>>>>  		goto out;
>>>>>  	}
>>>>> -	if (dev_use_swiotlb(dev))
>>>>> +	/*
>>>>> +	 * If kmalloc() buffers are not DMA-safe for this device and
>>>>> +	 * direction, check the individual lengths in the sg list. If one of
>>>>> +	 * the buffers is deemed unsafe, follow the iommu_dma_map_sg_swiotlb()
>>>>> +	 * path for potential bouncing.
>>>>> +	 */
>>>>> +	if (!dma_kmalloc_safe(dev, dir)) {
>>>>> +		for_each_sg(sg, s, nents, i)
>>>>> +			if (!dma_kmalloc_size_aligned(s->length)) {
>>>>
>>>> Just to remind myself, we're not checking s->offset on the grounds
>>>> that if anyone wants to DMA into an unaligned part of a larger
>>>> allocation that remains at their own risk, is that right?
>>>
>>> Right. That's the case currently as well and those users that were
>>> relying on ARCH_KMALLOC_MINALIGN for this have either been migrated to
>>> ARCH_DMA_MINALIGN in this series or the logic rewritten (as in the
>>> crypto code).
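
For reference, the shape of the size-only check being discussed is
roughly the following; the helper name matches this series, but treat
the body as an illustrative sketch rather than the exact implementation:

	/*
	 * Sketch: a kmalloc() length is considered DMA-safe when the
	 * size alone guarantees cacheline alignment of the allocation.
	 * s->offset is deliberately not consulted, so DMA into an
	 * unaligned part of a larger allocation stays at the caller's
	 * own risk.
	 */
	static inline bool dma_kmalloc_size_aligned(size_t size)
	{
		/* larger sizes come from caches aligned to ARCH_DMA_MINALIGN */
		if (size >= 2 * ARCH_DMA_MINALIGN ||
		    IS_ALIGNED(size, dma_get_cache_alignment()))
			return true;
		return false;
	}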
>>
>> OK, I did manage to summon a vague memory of this being discussed
>> before, which at least stopped me asking "Should we be checking..." -
>> perhaps a comment on dma_kmalloc_safe() to help remember that
>> reasoning might not go amiss?
> 
> I'll add some notes in the comment.
> 
>>>> Do we care about the (probably theoretical) case where someone might
>>>> build a scatterlist for multiple small allocations such that ones
>>>> which happen to be adjacent might get combined into a single segment
>>>> of apparently "safe" length but still at "unsafe" alignment?
>>>
>>> I'd say that's theoretical only. One could write such code but
>>> normally you'd go for an array rather than relying on the randomness
>>> of the kmalloc pointers to figure out adjacent objects. It also only
>>> works if the individual struct size is exactly one of the kmalloc
>>> cache sizes, so not generic enough.
>>
>> FWIW I was imagining something like sg_alloc_table_from_pages() but at
>> a smaller scale, queueing up some list/array of, say, 32-byte buffers
>> into a scatterlist to submit as a single DMA job. I'm not aware that
>> such a thing exists though, and I'm inclined to agree that it probably
>> is sufficiently unrealistic to be concerned about. As usual I just
>> want to feel comfortable that we've explored all the possibilities :)
> 
> The strict approach would be to check each pointer and size (not just
> small ones) and, if unaligned, test whether it comes from a slab
> allocation and what its actual alignment is, something similar to
> ksize(). But this adds too many checks for (I think) a theoretical
> issue. We discussed this in previous iterations of this series and
> concluded to only check the size and bounce accordingly (even if we may
> bounce fully aligned slabs or miss cases like the one you mentioned).
> Anyway, we have a backup plan if we trip over something like this, just
> slightly more expensive.
> 
>>>>> +			sg_dma_mark_bounced(sg);
>>>>
>>>> I'd prefer to have iommu_dma_map_sg_swiotlb() mark the segments,
>>>> since that's in charge of the actual bouncing. Then we can fold the
>>>> alignment check into dev_use_swiotlb() (with the dev_is_untrusted()
>>>> condition taking priority), and sync/unmap can simply rely on
>>>> sg_is_dma_bounced() alone.
>>>
>>> With this patch we only set the SG_DMA_BOUNCED on the first element of
>>> the sglist. Do you want to set this flag only on individual elements
>>> being bounced? It makes some sense in principle but the
>>> iommu_dma_unmap_sg() path would need to scan the list again to decide
>>> whether to go the swiotlb path.
>>>
>>> If we keep the SG_DMA_BOUNCED flag only on the first element, I can
>>> change it to your suggestion, assuming I understood it.
>>
>> Indeed that should be fine - sync_sg/unmap_sg always have to be given
>> the same arguments which were passed to map_sg (and note that in the
>> normal case, the DMA address/length will often end up concatenated
>> entirely into the first element), so while we still have the two
>> distinct flows internally, I don't think there's any issue with only
>> tagging the head of the list to steer between them. Of course if it
>> then works out to be trivial enough to tag *all* the segments for good
>> measure, there should be no harm in that either - at the moment the
>> flag is destined to have more of a "this might be bounced, so needs
>> checking" meaning than "this definitely is bounced" either way.
> 
> I renamed SG_DMA_BOUNCED to SG_DMA_USE_SWIOTLB (to match
> dev_use_swiotlb()).
> The past participle of bounce does make you think that it was
> definitely bounced.
> 
> Before I post a v5, does this resemble what you suggested:

Indeed; I hadn't got as far as considering optimising checks for the sg
case, but the overall shape looks like what I was imagining. Possibly
some naming nitpicks, but I'm not sure how much I can be bothered :)

Thanks,
Robin.

> ------8<------------------------------
> From 6558c2bc242ea8598d16b842c8cc77105ce1d5fa Mon Sep 17 00:00:00 2001
> From: Catalin Marinas
> Date: Tue, 8 Nov 2022 11:19:31 +0000
> Subject: [PATCH] iommu/dma: Force bouncing if the size is not
>  cacheline-aligned
> 
> Similarly to the direct DMA, bounce small allocations as they may have
> originated from a kmalloc() cache not safe for DMA. Unlike the direct
> DMA, iommu_dma_map_sg() cannot call iommu_dma_map_sg_swiotlb() for all
> non-coherent devices as this would break some cases where the iova is
> expected to be contiguous (dmabuf). Instead, scan the scatterlist for
> any small sizes and only go the swiotlb path if any element of the list
> needs bouncing (note that iommu_dma_map_page() would still only bounce
> those buffers which are not DMA-aligned).
> 
> To avoid scanning the scatterlist on the 'sync' operations, introduce a
> SG_DMA_USE_SWIOTLB flag set during the iommu_dma_map_sg_swiotlb() call
> (suggested by Robin Murphy).
> 
> Signed-off-by: Catalin Marinas
> Cc: Joerg Roedel
> Cc: Christoph Hellwig
> Cc: Robin Murphy
> ---
>  drivers/iommu/dma-iommu.c   | 50 ++++++++++++++++++++++++++++++-------
>  include/linux/scatterlist.h | 25 +++++++++++++++++--
>  2 files changed, 64 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 7a9f0b0bddbd..24a8b8c2368c 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -520,9 +520,38 @@ static bool dev_is_untrusted(struct device *dev)
>  	return dev_is_pci(dev) && to_pci_dev(dev)->untrusted;
>  }
>  
> -static bool dev_use_swiotlb(struct device *dev)
> +static bool dev_use_swiotlb(struct device *dev, size_t size,
> +			    enum dma_data_direction dir)
>  {
> -	return IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev);
> +	return IS_ENABLED(CONFIG_SWIOTLB) &&
> +		(dev_is_untrusted(dev) ||
> +		 dma_kmalloc_needs_bounce(dev, size, dir));
> +}
> +
> +static bool dev_use_sg_swiotlb(struct device *dev, struct scatterlist *sg,
> +			       int nents, enum dma_data_direction dir)
> +{
> +	struct scatterlist *s;
> +	int i;
> +
> +	if (!IS_ENABLED(CONFIG_SWIOTLB))
> +		return false;
> +
> +	if (dev_is_untrusted(dev))
> +		return true;
> +
> +	/*
> +	 * If kmalloc() buffers are not DMA-safe for this device and
> +	 * direction, check the individual lengths in the sg list. If any
> +	 * element is deemed unsafe, use the swiotlb for bouncing.
> +	 */
> +	if (!dma_kmalloc_safe(dev, dir)) {
> +		for_each_sg(sg, s, nents, i)
> +			if (!dma_kmalloc_size_aligned(s->length))
> +				return true;
> +	}
> +
> +	return false;
>  }
>  
>  /**
> @@ -922,7 +951,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
>  {
>  	phys_addr_t phys;
>  
> -	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev))
> +	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev, size, dir))
>  		return;
>  
>  	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
> @@ -938,7 +967,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
>  {
>  	phys_addr_t phys;
>  
> -	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev))
> +	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev, size, dir))
>  		return;
>  
>  	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
> @@ -956,7 +985,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
>  	struct scatterlist *sg;
>  	int i;
>  
> -	if (dev_use_swiotlb(dev))
> +	if (sg_is_dma_use_swiotlb(sgl))
>  		for_each_sg(sgl, sg, nelems, i)
>  			iommu_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
>  						      sg->length, dir);
> @@ -972,7 +1001,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
>  	struct scatterlist *sg;
>  	int i;
>  
> -	if (dev_use_swiotlb(dev))
> +	if (sg_is_dma_use_swiotlb(sgl))
>  		for_each_sg(sgl, sg, nelems, i)
>  			iommu_dma_sync_single_for_device(dev,
>  							 sg_dma_address(sg),
> @@ -998,7 +1027,8 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
>  	 * If both the physical buffer start address and size are
>  	 * page aligned, we don't need to use a bounce page.
>  	 */
> -	if (dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) {
> +	if (dev_use_swiotlb(dev, size, dir) &&
> +	    iova_offset(iovad, phys | size)) {
>  		void *padding_start;
>  		size_t padding_size, aligned_size;
>  
> @@ -1166,6 +1196,8 @@ static int iommu_dma_map_sg_swiotlb(struct device *dev, struct scatterlist *sg,
>  	struct scatterlist *s;
>  	int i;
>  
> +	sg_dma_mark_use_swiotlb(sg);
> +
>  	for_each_sg(sg, s, nents, i) {
>  		sg_dma_address(s) = iommu_dma_map_page(dev, sg_page(s),
>  				s->offset, s->length, dir, attrs);
> @@ -1210,7 +1242,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>  		goto out;
>  	}
>  
> -	if (dev_use_swiotlb(dev))
> +	if (dev_use_sg_swiotlb(dev, sg, nents, dir))
>  		return iommu_dma_map_sg_swiotlb(dev, sg, nents, dir, attrs);
>  
>  	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> @@ -1315,7 +1347,7 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
>  	struct scatterlist *tmp;
>  	int i;
>  
> -	if (dev_use_swiotlb(dev)) {
> +	if (sg_is_dma_use_swiotlb(sg)) {
>  		iommu_dma_unmap_sg_swiotlb(dev, sg, nents, dir, attrs);
>  		return;
>  	}
> diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> index 87aaf8b5cdb4..e0f9fea456c1 100644
> --- a/include/linux/scatterlist.h
> +++ b/include/linux/scatterlist.h
> @@ -248,6 +248,29 @@ static inline void sg_unmark_end(struct scatterlist *sg)
>  	sg->page_link &= ~SG_END;
>  }
>  
> +#define SG_DMA_BUS_ADDRESS	(1 << 0)
> +#define SG_DMA_USE_SWIOTLB	(1 << 1)
> +
> +#ifdef CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC
> +static inline bool sg_is_dma_use_swiotlb(struct scatterlist *sg)
> +{
> +	return sg->dma_flags & SG_DMA_USE_SWIOTLB;
> +}
> +
> +static inline void sg_dma_mark_use_swiotlb(struct scatterlist *sg)
> +{
> +	sg->dma_flags |= SG_DMA_USE_SWIOTLB;
> +}
> +#else
> +static inline bool sg_is_dma_use_swiotlb(struct scatterlist *sg)
> +{
> +	return false;
> +}
> +static inline void sg_dma_mark_use_swiotlb(struct scatterlist *sg)
> +{
> +}
> +#endif
> +
>  /*
>   * CONFIG_PCI_P2PDMA depends on CONFIG_64BIT which means there is 4 bytes
>   * in struct scatterlist (assuming also CONFIG_NEED_SG_DMA_LENGTH is set).
> @@ -256,8 +279,6 @@ static inline void sg_unmark_end(struct scatterlist *sg)
>   */
>  #ifdef CONFIG_PCI_P2PDMA
>  
> -#define SG_DMA_BUS_ADDRESS	(1 << 0)
> -
>  /**
>   * sg_dma_is_bus address - Return whether a given segment was marked
>   *			   as a bus address
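
For completeness, a rough consumer-side view of why tagging only the
head of the list is sufficient: the same sgl passed to dma_map_sg()
must also be passed to the sync/unmap calls, so a flag on the first
segment is always visible on those paths. A hypothetical driver
snippet (the DMA API calls are real; the flow itself is illustrative):

	/* map may mark the head segment with SG_DMA_USE_SWIOTLB */
	count = dma_map_sg(dev, sgl, nents, DMA_FROM_DEVICE);
	if (!count)
		return -ENOMEM;

	/* ... device DMA into the buffers ... */

	/* with this patch, the iommu-dma backends of both calls
	 * branch on sg_is_dma_use_swiotlb(sgl), i.e. the head flag */
	dma_sync_sg_for_cpu(dev, sgl, nents, DMA_FROM_DEVICE);
	dma_unmap_sg(dev, sgl, nents, DMA_FROM_DEVICE);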