a="72891432" X-IronPort-AV: E=Sophos;i="6.15,248,1739865600"; d="scan'208";a="72891432" Received: from orviesa008.jf.intel.com ([10.64.159.148]) by fmvoesa101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Apr 2025 23:02:25 -0700 X-CSE-ConnectionGUID: B6mYu5WYR6y4hHVzuSOAgA== X-CSE-MsgGUID: KHRkdKHCSYO/GfkXnTeDdQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,248,1739865600"; d="scan'208";a="134708735" Received: from allen-sbox.sh.intel.com (HELO [10.239.159.30]) ([10.239.159.30]) by orviesa008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Apr 2025 23:02:18 -0700 Message-ID: <9d1abdbc-4b21-47e2-bcaf-6bc8ca365b01@linux.intel.com> Date: Tue, 29 Apr 2025 13:58:06 +0800 MIME-Version: 1.0 User-Agent: Mozilla Thunderbird Subject: Re: [PATCH v10 06/24] iommu/dma: Factor out a iommu_dma_map_swiotlb helper To: Leon Romanovsky Cc: Marek Szyprowski , Jens Axboe , Christoph Hellwig , Keith Busch , Jake Edge , Jonathan Corbet , Jason Gunthorpe , Zhu Yanjun , Robin Murphy , Joerg Roedel , Will Deacon , Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , =?UTF-8?B?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, Niklas Schnelle , Chuck Lever , Luis Chamberlain , Matthew Wilcox , Dan Williams , Kanchan Joshi , Chaitanya Kulkarni References: <8416e94f-171e-4956-b8fe-246ed12a2314@linux.intel.com> <20250429055339.GJ5848@unreal> Content-Language: en-US From: Baolu Lu In-Reply-To: <20250429055339.GJ5848@unreal> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-Rspamd-Queue-Id: 590D28000D X-Rspam-User: X-Rspamd-Server: rspam07 X-Stat-Signature: 1gqk7u8nhaubmsuonyzuieope8abmpp8 X-HE-Tag: 1745906547-331767 X-HE-Meta: U2FsdGVkX197DwBwgFUPZHDYjvrL37Y7UtqZYnJVSLbWqQom5rvizGmQjXphbQyTiV/spuD1/HFM4wMO1oqXUXGraHR1NOXzAVXNLJgtkWobZbbcHID8boHSnOXaJYWyCHKjWmOCVN3DLSzktUxxOov3spo8IpuNOTy45/lgEoFDsCCnGKWobVQLzQTXww55VmXkM2Me3gZgrzaoGjPfTGHhLF6/u/fqYHlXfPwAHXwY2/NME88ynJyPG9IyrLMWVrpy/LljbiIkvulirz9qwsccR0bSKROo8+jxI7fMChgFKShxt040P7IpJ5qfiSGIsgZKi9HndibM5cUewrnXm0NGWmVJiOHI5eylnVJNoAcRyi2LjVCnqdAe0CkJzaPNsD7KkNvh+h8OKBXHPxXxPGN7QRWUPTf4Sm7tIHquQNAaWVrCL/asXaAaeEEGBFCo04KUR9HIgdn67K242udiDUNkk72tbsbFH3G9TVRoHt3yuaDj/wFMM42rHfsl+CvuYNzVwIMPUKdx57HcCu6ObvKcH1+CkmUE3+wi4L9mGd5va3CwvC2QcqB4fmEB3L+c2JjuupgtpVRIiDTV1uy/LE/XfvYJs05kdBwXj60yVBLu0sQqwKTRJ09edA5dqS55Fm35ezo2IO3z2gsGkVSywfyRlhlzzGInwf1g1Poe5+KpG4DbeoxewJZ5IhMRZ0Mj5bI/9r74YGVIb2dY3m0qxY0/ZyyB7O8pz+VUqTjPQPQnI236EVY7ramGeqceWZbixWe7k0phDAZ7DfIk+jhKwl3VWWQdURTTTW10SINMTC270TrbgKnJM24ZJoYvWYBAO/72g3Sjo/wMlIiLWOiXB6dCTO87YZvJkrm2o+Pwu5Btvxjg4K6lZiKQBzPU1XkGNn7JnOzkv/R+Ntrl45El7CjWA5zqNcRBWemBqP3J6UbawCJEByBsDYFT+2arvubCpyYSNqXuHD9mZPtDbZ9 7T2OIFKI 0S6auSQnB9UEOac6WjaWr+tNz6vODxHkRK3PThUx9HWD67Y8NOI8SqcEzd/2LpLACKyHrMaV3r2lut0ahEXrWG52RyYoj6MpuI0tX/fyGFoYGsF5zefsvGouPihO2Y1z76GRdFx/YCTL+4FWzzdFTLyWRKxeH3IQnroY4DasAJPIz4w62XsCFBn6X31tXMC86SjjehyOqBgO0OLlOa24DPPtgC4sZm6W5gltvSNa5TkTJQkd5EcnaZ4NQWsL28oYLHpdWod+4fYn06lHEmr5ylj+grfsR8eoCj5WYvkQ+guSdNCmzcRI5BuISuQ== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On 4/29/25 13:53, Leon Romanovsky wrote: > On Tue, Apr 29, 2025 at 
> On Tue, Apr 29, 2025 at 12:58:18PM +0800, Baolu Lu wrote:
>> On 4/28/25 17:22, Leon Romanovsky wrote:
>>> From: Christoph Hellwig
>>>
>>> Split the iommu logic from iommu_dma_map_page into a separate helper.
>>> This not only keeps the code neatly separated, but will also allow for
>>> reuse in another caller.
>>>
>>> Signed-off-by: Christoph Hellwig
>>> Tested-by: Jens Axboe
>>> Reviewed-by: Luis Chamberlain
>>> Signed-off-by: Leon Romanovsky
>>
>> Reviewed-by: Lu Baolu
>>
>> with a nit below ...
>>
>>> ---
>>>   drivers/iommu/dma-iommu.c | 73 ++++++++++++++++++++++-----------------
>>>   1 file changed, 41 insertions(+), 32 deletions(-)
>>>
>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>>> index d3211a8d755e..d7684024c439 100644
>>> --- a/drivers/iommu/dma-iommu.c
>>> +++ b/drivers/iommu/dma-iommu.c
>>> @@ -1138,6 +1138,43 @@ void iommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
>>>  		arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
>>>  }
>>> +static phys_addr_t iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
>>> +		size_t size, enum dma_data_direction dir, unsigned long attrs)
>>> +{
>>> +	struct iommu_domain *domain = iommu_get_dma_domain(dev);
>>> +	struct iova_domain *iovad = &domain->iova_cookie->iovad;
>>> +
>>> +	if (!is_swiotlb_active(dev)) {
>>> +		dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n");
>>> +		return (phys_addr_t)DMA_MAPPING_ERROR;
>>> +	}
>>> +
>>> +	trace_swiotlb_bounced(dev, phys, size);
>>> +
>>> +	phys = swiotlb_tbl_map_single(dev, phys, size, iova_mask(iovad), dir,
>>> +			attrs);
>>> +
>>> +	/*
>>> +	 * Untrusted devices should not see padding areas with random leftover
>>> +	 * kernel data, so zero the pre- and post-padding.
>>> +	 * swiotlb_tbl_map_single() has initialized the bounce buffer proper to
>>> +	 * the contents of the original memory buffer.
>>> +	 */
>>> +	if (phys != (phys_addr_t)DMA_MAPPING_ERROR && dev_is_untrusted(dev)) {
>>> +		size_t start, virt = (size_t)phys_to_virt(phys);
>>> +
>>> +		/* Pre-padding */
>>> +		start = iova_align_down(iovad, virt);
>>> +		memset((void *)start, 0, virt - start);
>>> +
>>> +		/* Post-padding */
>>> +		start = virt + size;
>>> +		memset((void *)start, 0, iova_align(iovad, start) - start);
>>> +	}
>>> +
>>> +	return phys;
>>> +}
>>> +
>>>  dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
>>>  		unsigned long offset, size_t size, enum dma_data_direction dir,
>>>  		unsigned long attrs)
>>> @@ -1151,42 +1188,14 @@ dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
>>>  	dma_addr_t iova, dma_mask = dma_get_mask(dev);
>>>  	/*
>>> -	 * If both the physical buffer start address and size are
>>> -	 * page aligned, we don't need to use a bounce page.
>>> +	 * If both the physical buffer start address and size are page aligned,
>>> +	 * we don't need to use a bounce page.
>>>  	 */
>>>  	if (dev_use_swiotlb(dev, size, dir) &&
>>>  	    iova_offset(iovad, phys | size)) {
>>> -		if (!is_swiotlb_active(dev)) {
>>
>> ... Is it better to move this check into the helper? Simply no-op if a
>> bounce page is not needed:
>>
>> 	if (!dev_use_swiotlb(dev, size, dir) ||
>> 	    !iova_offset(iovad, phys | size))
>> 		return phys;
>
> Am I missing something?
> iommu_dma_map_page() has more code after this check, so it is not
> correct to return immediately:
>
> 1189 dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
> 1190 		unsigned long offset, size_t size, enum dma_data_direction dir,
> 1191 		unsigned long attrs)
> 1192 {
>
> <...>
>
> 1201 	/*
> 1202 	 * If both the physical buffer start address and size are page aligned,
> 1203 	 * we don't need to use a bounce page.
> 1204 	 */
> 1205 	if (dev_use_swiotlb(dev, size, dir) &&
> 1206 	    iova_unaligned(iovad, phys, size)) {
> 1207 		phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs);
> 1208 		if (phys == (phys_addr_t)DMA_MAPPING_ERROR)
> 1209 			return DMA_MAPPING_ERROR;
> 1210 	}
> 1211
> 1212 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> 1213 		arch_sync_dma_for_device(phys, size, dir);
> 1214
> 1215 	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
> 1216 	if (iova == DMA_MAPPING_ERROR)
> 1217 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
> 1218 	return iova;
> 1219 }

Something like below; the helper simply returns the original physical
address when no bounce buffer is needed:

static phys_addr_t iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
		size_t size, enum dma_data_direction dir, unsigned long attrs)
{
	<...>

	/*
	 * If both the physical buffer start address and size are page aligned,
	 * we don't need to use a bounce page.
	 */
	if (!dev_use_swiotlb(dev, size, dir) ||
	    !iova_offset(iovad, phys | size))
		return phys;

	<...>
}

Then,

dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
		unsigned long offset, size_t size, enum dma_data_direction dir,
		unsigned long attrs)
{
	<...>

	phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs);
	if (phys == (phys_addr_t)DMA_MAPPING_ERROR)
		return DMA_MAPPING_ERROR;

	<...>
}

Thanks,
baolu
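For reference, the suggestion above can be assembled into a complete helper
using the hunk quoted earlier in this thread. This is only an illustrative
sketch of the idea (the alignment check folded into iommu_dma_map_swiotlb()
so callers can invoke it unconditionally), not necessarily the code that was
ultimately merged:

/*
 * Illustrative sketch only, assembled from the patch hunk quoted above:
 * iommu_dma_map_swiotlb() with the alignment check folded in, so it is a
 * no-op (returns the original physical address) when no bouncing is needed.
 */
static phys_addr_t iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
		size_t size, enum dma_data_direction dir, unsigned long attrs)
{
	struct iommu_domain *domain = iommu_get_dma_domain(dev);
	struct iova_domain *iovad = &domain->iova_cookie->iovad;

	/*
	 * If both the physical buffer start address and size are page aligned,
	 * we don't need to use a bounce page.
	 */
	if (!dev_use_swiotlb(dev, size, dir) ||
	    !iova_offset(iovad, phys | size))
		return phys;

	if (!is_swiotlb_active(dev)) {
		dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n");
		return (phys_addr_t)DMA_MAPPING_ERROR;
	}

	trace_swiotlb_bounced(dev, phys, size);

	phys = swiotlb_tbl_map_single(dev, phys, size, iova_mask(iovad), dir,
			attrs);

	/*
	 * Untrusted devices should not see padding areas with random leftover
	 * kernel data, so zero the pre- and post-padding around the bounce
	 * buffer (as in the patch above).
	 */
	if (phys != (phys_addr_t)DMA_MAPPING_ERROR && dev_is_untrusted(dev)) {
		size_t start, virt = (size_t)phys_to_virt(phys);

		/* Pre-padding */
		start = iova_align_down(iovad, virt);
		memset((void *)start, 0, virt - start);

		/* Post-padding */
		start = virt + size;
		memset((void *)start, 0, iova_align(iovad, start) - start);
	}

	return phys;
}

With this shape, iommu_dma_map_page() would not need to know about swiotlb
alignment at all: the helper either bounces, fails, or passes the original
physical address straight through.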