From: Leon Romanovsky
To:
Cc: Christoph Hellwig, Jens Axboe, Keith Busch, Jake Edge, Jonathan Corbet,
	Jason Gunthorpe, Zhu Yanjun, Robin Murphy, Joerg Roedel, Will Deacon,
	Sagi Grimberg, Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas,
	Shameer Kolothum, Kevin Tian, Alex Williamson, Jérôme Glisse,
	Andrew Morton, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
	linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
	Niklas Schnelle, Chuck Lever, Luis Chamberlain, Matthew Wilcox,
	Dan Williams, Kanchan Joshi, Chaitanya Kulkarni, Lu Baolu,
	Leon Romanovsky
Subject: [PATCH v11 6/9] iommu/dma: Factor out a iommu_dma_map_swiotlb helper
Date: Mon, 5 May 2025 10:01:43 +0300
Message-ID: <6e45705027d0a90014dc253aedaee92db7f4be1f.1746424934.git.leon@kernel.org>
X-Mailer: git-send-email 2.49.0
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Christoph Hellwig

Split the iommu logic from iommu_dma_map_page into a separate helper.
This not only keeps the code neatly separated, but will also allow for
reuse in another caller.
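As an editor's illustration of that intended reuse (this sketch is not part
of the patch, and the function name example_map_phys below is hypothetical),
a mapping path that starts from a phys_addr_t rather than a struct page
could call the new helper along these lines, using only helpers that already
appear in this patch:

	/*
	 * Hypothetical caller, for illustration only; not part of this patch.
	 */
	static dma_addr_t example_map_phys(struct device *dev, phys_addr_t phys,
			size_t size, enum dma_data_direction dir, unsigned long attrs)
	{
		struct iommu_domain *domain = iommu_get_dma_domain(dev);
		struct iova_domain *iovad = &domain->iova_cookie->iovad;

		/* Bounce only if the buffer is misaligned w.r.t. the IOVA granule. */
		if (dev_use_swiotlb(dev, size, dir) &&
		    iova_offset(iovad, phys | size)) {
			phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs);
			if (phys == (phys_addr_t)DMA_MAPPING_ERROR)
				return DMA_MAPPING_ERROR;
		}

		/* ... IOVA allocation and IOMMU mapping would follow here ... */
		return DMA_MAPPING_ERROR;	/* placeholder in this sketch */
	}
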
Signed-off-by: Christoph Hellwig
Tested-by: Jens Axboe
Reviewed-by: Luis Chamberlain
Reviewed-by: Lu Baolu
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c | 73 ++++++++++++++++++++++-----------------
 1 file changed, 41 insertions(+), 32 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index d3211a8d755e..d7684024c439 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1138,6 +1138,43 @@ void iommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
 		arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
 }
 
+static phys_addr_t iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
+		size_t size, enum dma_data_direction dir, unsigned long attrs)
+{
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
+	struct iova_domain *iovad = &domain->iova_cookie->iovad;
+
+	if (!is_swiotlb_active(dev)) {
+		dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n");
+		return (phys_addr_t)DMA_MAPPING_ERROR;
+	}
+
+	trace_swiotlb_bounced(dev, phys, size);
+
+	phys = swiotlb_tbl_map_single(dev, phys, size, iova_mask(iovad), dir,
+			attrs);
+
+	/*
+	 * Untrusted devices should not see padding areas with random leftover
+	 * kernel data, so zero the pre- and post-padding.
+	 * swiotlb_tbl_map_single() has initialized the bounce buffer proper to
+	 * the contents of the original memory buffer.
+	 */
+	if (phys != (phys_addr_t)DMA_MAPPING_ERROR && dev_is_untrusted(dev)) {
+		size_t start, virt = (size_t)phys_to_virt(phys);
+
+		/* Pre-padding */
+		start = iova_align_down(iovad, virt);
+		memset((void *)start, 0, virt - start);
+
+		/* Post-padding */
+		start = virt + size;
+		memset((void *)start, 0, iova_align(iovad, start) - start);
+	}
+
+	return phys;
+}
+
 dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 		unsigned long offset, size_t size, enum dma_data_direction dir,
 		unsigned long attrs)
@@ -1151,42 +1188,14 @@ dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	dma_addr_t iova, dma_mask = dma_get_mask(dev);
 
 	/*
-	 * If both the physical buffer start address and size are
-	 * page aligned, we don't need to use a bounce page.
+	 * If both the physical buffer start address and size are page aligned,
+	 * we don't need to use a bounce page.
 	 */
 	if (dev_use_swiotlb(dev, size, dir) &&
 	    iova_offset(iovad, phys | size)) {
-		if (!is_swiotlb_active(dev)) {
-			dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n");
-			return DMA_MAPPING_ERROR;
-		}
-
-		trace_swiotlb_bounced(dev, phys, size);
-
-		phys = swiotlb_tbl_map_single(dev, phys, size,
-				iova_mask(iovad), dir, attrs);
-
-		if (phys == DMA_MAPPING_ERROR)
+		phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs);
+		if (phys == (phys_addr_t)DMA_MAPPING_ERROR)
 			return DMA_MAPPING_ERROR;
-
-		/*
-		 * Untrusted devices should not see padding areas with random
-		 * leftover kernel data, so zero the pre- and post-padding.
-		 * swiotlb_tbl_map_single() has initialized the bounce buffer
-		 * proper to the contents of the original memory buffer.
-		 */
-		if (dev_is_untrusted(dev)) {
-			size_t start, virt = (size_t)phys_to_virt(phys);
-
-			/* Pre-padding */
-			start = iova_align_down(iovad, virt);
-			memset((void *)start, 0, virt - start);
-
-			/* Post-padding */
-			start = virt + size;
-			memset((void *)start, 0,
-				iova_align(iovad, start) - start);
-		}
 	}
 
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-- 
2.49.0