Date: Tue, 29 Apr 2025 08:53:39 +0300
From: Leon Romanovsky <leon@kernel.org>
To: Baolu Lu
Cc: Marek Szyprowski, Jens Axboe, Christoph Hellwig, Keith Busch,
	Jake Edge, Jonathan Corbet, Jason Gunthorpe, Zhu Yanjun,
	Robin Murphy, Joerg Roedel, Will Deacon, Sagi Grimberg,
	Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas, Shameer Kolothum,
	Kevin Tian, Alex Williamson, Jérôme Glisse, Andrew Morton,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
	linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
	Niklas Schnelle, Chuck Lever, Luis Chamberlain, Matthew Wilcox,
	Dan Williams, Kanchan Joshi, Chaitanya Kulkarni
Subject: Re: [PATCH v10 06/24] iommu/dma: Factor out a iommu_dma_map_swiotlb helper
Message-ID: <20250429055339.GJ5848@unreal>
References: <8416e94f-171e-4956-b8fe-246ed12a2314@linux.intel.com>
In-Reply-To: <8416e94f-171e-4956-b8fe-246ed12a2314@linux.intel.com>

On Tue, Apr 29, 2025 at 12:58:18PM +0800, Baolu Lu wrote:
> On 4/28/25 17:22, Leon Romanovsky wrote:
> > From: Christoph Hellwig
> > 
> > Split the iommu logic from iommu_dma_map_page into a separate helper.
> > This not only keeps the code neatly separated, but will also allow for
> > reuse in another caller.
> > 
> > Signed-off-by: Christoph Hellwig
> > Tested-by: Jens Axboe
> > Reviewed-by: Luis Chamberlain
> > Signed-off-by: Leon Romanovsky
> 
> Reviewed-by: Lu Baolu
> 
> with a nit below ...
> 
> > ---
> >  drivers/iommu/dma-iommu.c | 73 ++++++++++++++++++++++-----------------
> >  1 file changed, 41 insertions(+), 32 deletions(-)
> > 
> > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > index d3211a8d755e..d7684024c439 100644
> > --- a/drivers/iommu/dma-iommu.c
> > +++ b/drivers/iommu/dma-iommu.c
> > @@ -1138,6 +1138,43 @@ void iommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
> >  			arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
> >  }
> >  
> > +static phys_addr_t iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
> > +		size_t size, enum dma_data_direction dir, unsigned long attrs)
> > +{
> > +	struct iommu_domain *domain = iommu_get_dma_domain(dev);
> > +	struct iova_domain *iovad = &domain->iova_cookie->iovad;
> > +
> > +	if (!is_swiotlb_active(dev)) {
> > +		dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n");
> > +		return (phys_addr_t)DMA_MAPPING_ERROR;
> > +	}
> > +
> > +	trace_swiotlb_bounced(dev, phys, size);
> > +
> > +	phys = swiotlb_tbl_map_single(dev, phys, size, iova_mask(iovad), dir,
> > +			attrs);
> > +
> > +	/*
> > +	 * Untrusted devices should not see padding areas with random leftover
> > +	 * kernel data, so zero the pre- and post-padding.
> > +	 * swiotlb_tbl_map_single() has initialized the bounce buffer proper to
> > +	 * the contents of the original memory buffer.
> > +	 */
> > +	if (phys != (phys_addr_t)DMA_MAPPING_ERROR && dev_is_untrusted(dev)) {
> > +		size_t start, virt = (size_t)phys_to_virt(phys);
> > +
> > +		/* Pre-padding */
> > +		start = iova_align_down(iovad, virt);
> > +		memset((void *)start, 0, virt - start);
> > +
> > +		/* Post-padding */
> > +		start = virt + size;
> > +		memset((void *)start, 0, iova_align(iovad, start) - start);
> > +	}
> > +
> > +	return phys;
> > +}
> > +
> >  dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
> >  		unsigned long offset, size_t size, enum dma_data_direction dir,
> >  		unsigned long attrs)
> > @@ -1151,42 +1188,14 @@ dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
> >  	dma_addr_t iova, dma_mask = dma_get_mask(dev);
> >  
> >  	/*
> > -	 * If both the physical buffer start address and size are
> > -	 * page aligned, we don't need to use a bounce page.
> > +	 * If both the physical buffer start address and size are page aligned,
> > +	 * we don't need to use a bounce page.
> >  	 */
> >  	if (dev_use_swiotlb(dev, size, dir) &&
> >  	    iova_offset(iovad, phys | size)) {
> > -		if (!is_swiotlb_active(dev)) {
> 
> ... Is it better to move this check into the helper? Simply no-op if a
> bounce page is not needed:
> 
> 	if (!dev_use_swiotlb(dev, size, dir) ||
> 	    !iova_offset(iovad, phys | size))
> 		return phys;

Am I missing something? iommu_dma_map_page() has more code after this
check, so it is not correct to return immediately:

1189 dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
1190 		unsigned long offset, size_t size, enum dma_data_direction dir,
1191 		unsigned long attrs)
1192 {

<...>

1201 	/*
1202 	 * If both the physical buffer start address and size are page aligned,
1203 	 * we don't need to use a bounce page.
1204 	 */
1205 	if (dev_use_swiotlb(dev, size, dir) &&
1206 	    iova_unaligned(iovad, phys, size)) {
1207 		phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs);
1208 		if (phys == (phys_addr_t)DMA_MAPPING_ERROR)
1209 			return DMA_MAPPING_ERROR;
1210 	}
1211 
1212 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
1213 		arch_sync_dma_for_device(phys, size, dir);
1214 
1215 	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
1216 	if (iova == DMA_MAPPING_ERROR)
1217 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
1218 	return iova;
1219 }

> 
> Thanks,
> baolu
> 
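
For concreteness, here is a minimal sketch of the helper shape Baolu
appears to be suggesting, with the alignment check absorbed so the
function degrades to a pass-through for aligned buffers. This is a
hypothetical variant, not part of the posted series; all functions
referenced are the ones already used in the patch above:

static phys_addr_t iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
		size_t size, enum dma_data_direction dir, unsigned long attrs)
{
	struct iommu_domain *domain = iommu_get_dma_domain(dev);
	struct iova_domain *iovad = &domain->iova_cookie->iovad;

	/* Aligned buffer: no bounce page needed, hand back the original phys. */
	if (!dev_use_swiotlb(dev, size, dir) ||
	    !iova_offset(iovad, phys | size))
		return phys;

	if (!is_swiotlb_active(dev)) {
		dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n");
		return (phys_addr_t)DMA_MAPPING_ERROR;
	}

	/*
	 * ... bounce via swiotlb_tbl_map_single() and zero the pre- and
	 * post-padding exactly as in the posted patch ...
	 */
	return phys;
}

In this shape the caller invokes the helper unconditionally and, on
success, still falls through to the arch_sync_dma_for_device() and
__iommu_dma_map() calls at lines 1212-1217 above; that trailing code is
what Leon is pointing at, since it must run whether or not the buffer
was bounced.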