Date: Sat, 27 May 2023 00:36:30 +0800
From: Jisheng Zhang <jszhang@kernel.org>
To: Catalin Marinas <catalin.marinas@arm.com>
Cc: Linus Torvalds, Christoph Hellwig, Robin Murphy, Arnd Bergmann,
	Greg Kroah-Hartman, Will Deacon, Marc Zyngier, Andrew Morton,
	Herbert Xu, Ard Biesheuvel, Isaac Manjarres, Saravana Kannan,
	Alasdair Kergon, Daniel Vetter,
	Joerg Roedel, Mark Brown, Mike Snitzer, "Rafael J. Wysocki",
	linux-mm@kvack.org, iommu@lists.linux.dev,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v5 13/15] iommu/dma: Force bouncing if the size is not cacheline-aligned
References: <20230524171904.3967031-1-catalin.marinas@arm.com>
	<20230524171904.3967031-14-catalin.marinas@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20230524171904.3967031-14-catalin.marinas@arm.com>
On Wed, May 24, 2023 at 06:19:02PM +0100, Catalin Marinas wrote:
> Similarly to the direct DMA, bounce small allocations as they may have
> originated from a kmalloc() cache not safe for DMA. Unlike the direct
> DMA, iommu_dma_map_sg() cannot call iommu_dma_map_sg_swiotlb() for all
> non-coherent devices as this would break some cases where the iova is
> expected to be contiguous (dmabuf). Instead, scan the scatterlist for
> any small sizes and only go the swiotlb path if any element of the list
> needs bouncing (note that iommu_dma_map_page() would still only bounce
> those buffers which are not DMA-aligned).
>
> To avoid scanning the scatterlist on the 'sync' operations, introduce an
> SG_DMA_USE_SWIOTLB flag set by iommu_dma_map_sg_swiotlb(). The
> dev_use_swiotlb() function together with the newly added
> dev_use_sg_swiotlb() now check for both untrusted devices and unaligned
> kmalloc() buffers (suggested by Robin Murphy).
>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Joerg Roedel
> Cc: Christoph Hellwig
> Cc: Robin Murphy
> ---
>  drivers/iommu/Kconfig       |  1 +
>  drivers/iommu/dma-iommu.c   | 50 ++++++++++++++++++++++++++++++-------
>  include/linux/scatterlist.h | 25 +++++++++++++++++--
>  3 files changed, 65 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index db98c3f86e8c..670eff7a8e11 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -152,6 +152,7 @@ config IOMMU_DMA
>  	select IOMMU_IOVA
>  	select IRQ_MSI_IOMMU
>  	select NEED_SG_DMA_LENGTH
> +	select NEED_SG_DMA_FLAGS if SWIOTLB
>
>  # Shared Virtual Addressing
>  config IOMMU_SVA
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 7a9f0b0bddbd..24a8b8c2368c 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -520,9 +520,38 @@ static bool dev_is_untrusted(struct device *dev)
>  	return dev_is_pci(dev) && to_pci_dev(dev)->untrusted;
>  }
>
> -static bool dev_use_swiotlb(struct device *dev)
> +static bool dev_use_swiotlb(struct device *dev, size_t size,
> +			    enum dma_data_direction dir)
>  {
> -	return IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev);
> +	return IS_ENABLED(CONFIG_SWIOTLB) &&
> +	       (dev_is_untrusted(dev) ||
> +		dma_kmalloc_needs_bounce(dev, size, dir));
> +}
> +
> +static bool dev_use_sg_swiotlb(struct device *dev, struct scatterlist *sg,
> +			       int nents, enum dma_data_direction dir)
> +{
> +	struct scatterlist *s;
> +	int i;
> +
> +	if (!IS_ENABLED(CONFIG_SWIOTLB))
> +		return false;
> +
> +	if (dev_is_untrusted(dev))
> +		return true;
> +
> +	/*
> +	 * If kmalloc() buffers are not DMA-safe for this device and
> +	 * direction, check the individual lengths in the sg list. If any
> +	 * element is deemed unsafe, use the swiotlb for bouncing.
> +	 */
> +	if (!dma_kmalloc_safe(dev, dir)) {
> +		for_each_sg(sg, s, nents, i)
> +			if (!dma_kmalloc_size_aligned(s->length))
> +				return true;
> +	}
> +
> +	return false;
>  }
>
>  /**
> @@ -922,7 +951,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
>  {
>  	phys_addr_t phys;
>
> -	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev))
> +	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev, size, dir))
>  		return;
>
>  	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
> @@ -938,7 +967,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
>  {
>  	phys_addr_t phys;
>
> -	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev))
> +	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev, size, dir))
>  		return;
>
>  	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
> @@ -956,7 +985,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
>  	struct scatterlist *sg;
>  	int i;
>
> -	if (dev_use_swiotlb(dev))
> +	if (sg_is_dma_use_swiotlb(sgl))
>  		for_each_sg(sgl, sg, nelems, i)
>  			iommu_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
>  						      sg->length, dir);
> @@ -972,7 +1001,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
>  	struct scatterlist *sg;
>  	int i;
>
> -	if (dev_use_swiotlb(dev))
> +	if (sg_is_dma_use_swiotlb(sgl))
>  		for_each_sg(sgl, sg, nelems, i)
>  			iommu_dma_sync_single_for_device(dev,
>  							 sg_dma_address(sg),
> @@ -998,7 +1027,8 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
>  	 * If both the physical buffer start address and size are
>  	 * page aligned, we don't need to use a bounce page.
>  	 */
> -	if (dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) {
> +	if (dev_use_swiotlb(dev, size, dir) &&
> +	    iova_offset(iovad, phys | size)) {
>  		void *padding_start;
>  		size_t padding_size, aligned_size;
>
> @@ -1166,6 +1196,8 @@ static int iommu_dma_map_sg_swiotlb(struct device *dev, struct scatterlist *sg,
>  	struct scatterlist *s;
>  	int i;
>
> +	sg_dma_mark_use_swiotlb(sg);
> +
>  	for_each_sg(sg, s, nents, i) {
>  		sg_dma_address(s) = iommu_dma_map_page(dev, sg_page(s),
>  				s->offset, s->length, dir, attrs);
> @@ -1210,7 +1242,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>  		goto out;
>  	}
>
> -	if (dev_use_swiotlb(dev))
> +	if (dev_use_sg_swiotlb(dev, sg, nents, dir))
>  		return iommu_dma_map_sg_swiotlb(dev, sg, nents, dir, attrs);
>
>  	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> @@ -1315,7 +1347,7 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
>  	struct scatterlist *tmp;
>  	int i;
>
> -	if (dev_use_swiotlb(dev)) {
> +	if (sg_is_dma_use_swiotlb(sg)) {
>  		iommu_dma_unmap_sg_swiotlb(dev, sg, nents, dir, attrs);
>  		return;
>  	}
> diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> index 87aaf8b5cdb4..330a157c5501 100644
> --- a/include/linux/scatterlist.h
> +++ b/include/linux/scatterlist.h
> @@ -248,6 +248,29 @@ static inline void sg_unmark_end(struct scatterlist *sg)
>  	sg->page_link &= ~SG_END;
>  }
>
> +#define SG_DMA_BUS_ADDRESS	(1 << 0)
> +#define SG_DMA_USE_SWIOTLB	(1 << 1)
> +
> +#ifdef CONFIG_SWIOTLB

s/CONFIG_SWIOTLB/CONFIG_NEED_SG_DMA_FLAGS/ ?
Otherwise, there's a compile error if SWIOTLB=y but IOMMU=n.

Thanks

> +static inline bool sg_is_dma_use_swiotlb(struct scatterlist *sg)
> +{
> +	return sg->dma_flags & SG_DMA_USE_SWIOTLB;
> +}
> +
> +static inline void sg_dma_mark_use_swiotlb(struct scatterlist *sg)
> +{
> +	sg->dma_flags |= SG_DMA_USE_SWIOTLB;
> +}
> +#else
> +static inline bool sg_is_dma_use_swiotlb(struct scatterlist *sg)
> +{
> +	return false;
> +}
> +static inline void sg_dma_mark_use_swiotlb(struct scatterlist *sg)
> +{
> +}
> +#endif
> +
>  /*
>   * CONFIG_PCI_P2PDMA depends on CONFIG_64BIT which means there is 4 bytes
>   * in struct scatterlist (assuming also CONFIG_NEED_SG_DMA_LENGTH is set).
> @@ -256,8 +279,6 @@ static inline void sg_unmark_end(struct scatterlist *sg)
>   */
>  #ifdef CONFIG_PCI_P2PDMA
>
> -#define SG_DMA_BUS_ADDRESS	(1 << 0)
> -
>  /**
>   * sg_dma_is_bus address - Return whether a given segment was marked
>   * as a bus address
>
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel