From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <9dc3e036-75cd-debf-7093-177ef6c7a3ae@arm.com>
Date: Fri, 19 May 2023 13:29:38 +0100
From: Robin Murphy <robin.murphy@arm.com>
Subject: Re: [PATCH v4 13/15] iommu/dma: Force bouncing if the size is not
 cacheline-aligned
To: Catalin Marinas, Linus Torvalds, Arnd Bergmann, Christoph Hellwig,
 Greg Kroah-Hartman
Cc: Will Deacon, Marc Zyngier, Andrew Morton, Herbert Xu, Ard Biesheuvel,
 Isaac Manjarres, Saravana Kannan, Alasdair Kergon, Daniel Vetter,
 Joerg Roedel, Mark Brown, Mike Snitzer, "Rafael J. Wysocki",
 linux-mm@kvack.org, iommu@lists.linux.dev,
 linux-arm-kernel@lists.infradead.org
References: <20230518173403.1150549-1-catalin.marinas@arm.com>
 <20230518173403.1150549-14-catalin.marinas@arm.com>
In-Reply-To: <20230518173403.1150549-14-catalin.marinas@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 2023-05-18 18:34, Catalin Marinas wrote:
> Similarly to the direct DMA, bounce
> small allocations as they may have
> originated from a kmalloc() cache not safe for DMA. Unlike the direct
> DMA, iommu_dma_map_sg() cannot call iommu_dma_map_sg_swiotlb() for all
> non-coherent devices as this would break some cases where the iova is
> expected to be contiguous (dmabuf). Instead, scan the scatterlist for
> any small sizes and only go the swiotlb path if any element of the list
> needs bouncing (note that iommu_dma_map_page() would still only bounce
> those buffers which are not DMA-aligned).
>
> To avoid scanning the scatterlist on the 'sync' operations, introduce a
> SG_DMA_BOUNCED flag set during the iommu_dma_map_sg() call (suggested by
> Robin Murphy).
>
> Signed-off-by: Catalin Marinas
> Cc: Joerg Roedel
> Cc: Christoph Hellwig
> Cc: Robin Murphy
> ---
>  drivers/iommu/dma-iommu.c   | 25 ++++++++++++++++++++-----
>  include/linux/scatterlist.h | 25 +++++++++++++++++++++++--
>  2 files changed, 43 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 7a9f0b0bddbd..ab1c1681c06e 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -956,7 +956,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
>  	struct scatterlist *sg;
>  	int i;
>
> -	if (dev_use_swiotlb(dev))
> +	if (dev_use_swiotlb(dev) || sg_is_dma_bounced(sgl))
>  		for_each_sg(sgl, sg, nelems, i)
>  			iommu_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
>  						      sg->length, dir);
> @@ -972,7 +972,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
>  	struct scatterlist *sg;
>  	int i;
>
> -	if (dev_use_swiotlb(dev))
> +	if (dev_use_swiotlb(dev) || sg_is_dma_bounced(sgl))
>  		for_each_sg(sgl, sg, nelems, i)
>  			iommu_dma_sync_single_for_device(dev,
>  							 sg_dma_address(sg),
> @@ -998,7 +998,8 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
>  	 * If both the physical buffer start address and size are
>  	 * page aligned, we don't need to use a bounce page.
>  	 */
> -	if (dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) {
> +	if ((dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) ||
> +	    dma_kmalloc_needs_bounce(dev, size, dir)) {
>  		void *padding_start;
>  		size_t padding_size, aligned_size;
>
> @@ -1210,7 +1211,21 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>  		goto out;
>  	}
>
> -	if (dev_use_swiotlb(dev))
> +	/*
> +	 * If kmalloc() buffers are not DMA-safe for this device and
> +	 * direction, check the individual lengths in the sg list. If one of
> +	 * the buffers is deemed unsafe, follow the iommu_dma_map_sg_swiotlb()
> +	 * path for potential bouncing.
> +	 */
> +	if (!dma_kmalloc_safe(dev, dir)) {
> +		for_each_sg(sg, s, nents, i)
> +			if (!dma_kmalloc_size_aligned(s->length)) {

Just to remind myself, we're not checking s->offset on the grounds that 
if anyone wants to DMA into an unaligned part of a larger allocation, 
that remains at their own risk, is that right?

Do we care about the (probably theoretical) case where someone might 
build a scatterlist for multiple small allocations such that ones which 
happen to be adjacent might get combined into a single segment of 
apparently "safe" length but still at "unsafe" alignment?

> +				sg_dma_mark_bounced(sg);

I'd prefer to have iommu_dma_map_sg_swiotlb() mark the segments, since 
that's in charge of the actual bouncing. Then we can fold the alignment 
check into dev_use_swiotlb() (with the dev_is_untrusted() condition 
taking priority), and sync/unmap can simply rely on sg_is_dma_bounced() 
alone.

(Ultimately I'd like to merge the two separate paths back together and 
handle bouncing per-segment, but that can wait.)

Thanks,
Robin.
> +				break;
> +			}
> +	}
> +
> +	if (dev_use_swiotlb(dev) || sg_is_dma_bounced(sg))
>  		return iommu_dma_map_sg_swiotlb(dev, sg, nents, dir, attrs);
>
>  	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> @@ -1315,7 +1330,7 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
>  	struct scatterlist *tmp;
>  	int i;
>
> -	if (dev_use_swiotlb(dev)) {
> +	if (dev_use_swiotlb(dev) || sg_is_dma_bounced(sg)) {
>  		iommu_dma_unmap_sg_swiotlb(dev, sg, nents, dir, attrs);
>  		return;
>  	}
> diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> index 87aaf8b5cdb4..9306880cae1c 100644
> --- a/include/linux/scatterlist.h
> +++ b/include/linux/scatterlist.h
> @@ -248,6 +248,29 @@ static inline void sg_unmark_end(struct scatterlist *sg)
>  	sg->page_link &= ~SG_END;
>  }
>
> +#define SG_DMA_BUS_ADDRESS	(1 << 0)
> +#define SG_DMA_BOUNCED		(1 << 1)
> +
> +#ifdef CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC
> +static inline bool sg_is_dma_bounced(struct scatterlist *sg)
> +{
> +	return sg->dma_flags & SG_DMA_BOUNCED;
> +}
> +
> +static inline void sg_dma_mark_bounced(struct scatterlist *sg)
> +{
> +	sg->dma_flags |= SG_DMA_BOUNCED;
> +}
> +#else
> +static inline bool sg_is_dma_bounced(struct scatterlist *sg)
> +{
> +	return false;
> +}
> +static inline void sg_dma_mark_bounced(struct scatterlist *sg)
> +{
> +}
> +#endif
> +
>  /*
>   * CONFIG_PCI_P2PDMA depends on CONFIG_64BIT which means there is 4 bytes
>   * in struct scatterlist (assuming also CONFIG_NEED_SG_DMA_LENGTH is set).
> @@ -256,8 +279,6 @@ static inline void sg_unmark_end(struct scatterlist *sg)
>   */
>  #ifdef CONFIG_PCI_P2PDMA
>
> -#define SG_DMA_BUS_ADDRESS	(1 << 0)
> -
>  /**
>   * sg_dma_is_bus address - Return whether a given segment was marked
>   * as a bus address