Date: Mon, 18 Nov 2024 14:59:30 +0000
From: Will Deacon <will@kernel.org>
To: Leon Romanovsky
Cc: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel,
	Christoph Hellwig, Sagi Grimberg, Leon Romanovsky, Keith Busch,
	Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas, Shameer Kolothum,
	Kevin Tian, Alex Williamson, Marek Szyprowski, Jérôme Glisse,
	Andrew Morton, Jonathan Corbet, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
	linux-rdma@vger.kernel.org, iommu@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
	kvm@vger.kernel.org, linux-mm@kvack.org, Randy Dunlap
Subject: Re: [PATCH v3 07/17] dma-mapping: Implement link/unlink ranges API
Message-ID: <20241118145929.GB27795@willie-the-truck>
On Sun, Nov 10, 2024 at 03:46:54PM +0200, Leon Romanovsky wrote:
> From: Leon Romanovsky
> 
> Introduce new DMA APIs to perform DMA linkage of buffers
> in layers higher than DMA.
> 
> In the proposed API, the callers will perform the following steps.
> In the map path:
> 
> 	if (dma_can_use_iova(...))
> 		dma_iova_alloc()
> 		for (page in range)
> 			dma_iova_link_next(...)
> 		dma_iova_sync(...)
> 	else
> 		/* Fallback to legacy map pages */
> 		for (all pages)
> 			dma_map_page(...)
> 
> In the unmap path:
> 
> 	if (dma_can_use_iova(...))
> 		dma_iova_destroy()
> 	else
> 		for (all pages)
> 			dma_unmap_page(...)
> 
> Signed-off-by: Leon Romanovsky
> ---
>  drivers/iommu/dma-iommu.c   | 259 ++++++++++++++++++++++++++++++++++++
>  include/linux/dma-mapping.h |  32 +++++
>  2 files changed, 291 insertions(+)

[...]

> +/**
> + * dma_iova_link - Link a range of IOVA space
> + * @dev: DMA device
> + * @state: IOVA state
> + * @phys: physical address to link
> + * @offset: offset into the IOVA state to map into
> + * @size: size of the buffer
> + * @dir: DMA direction
> + * @attrs: attributes of mapping properties
> + *
> + * Link a range of IOVA space for the given IOVA state without an IOTLB sync.
> + * This function is used to link multiple physical addresses in contiguous
> + * IOVA space without performing a costly IOTLB sync.
> + *
> + * The caller is responsible for calling dma_iova_sync() to sync the IOTLB at
> + * the end of linkage.
> + */
> +int dma_iova_link(struct device *dev, struct dma_iova_state *state,
> +		phys_addr_t phys, size_t offset, size_t size,
> +		enum dma_data_direction dir, unsigned long attrs)
> +{
> +	struct iommu_domain *domain = iommu_get_dma_domain(dev);
> +	struct iommu_dma_cookie *cookie = domain->iova_cookie;
> +	struct iova_domain *iovad = &cookie->iovad;
> +	size_t iova_start_pad = iova_offset(iovad, phys);
> +
> +	if (WARN_ON_ONCE(iova_start_pad && offset > 0))
> +		return -EIO;
> +
> +	if (dev_use_swiotlb(dev, size, dir) && iova_offset(iovad, phys | size))
> +		return iommu_dma_iova_link_swiotlb(dev, state, phys, offset,
> +				size, dir, attrs);
> +
> +	return __dma_iova_link(dev, state->addr + offset - iova_start_pad,
> +			phys - iova_start_pad,
> +			iova_align(iovad, size + iova_start_pad), dir, attrs);
> +}
> +EXPORT_SYMBOL_GPL(dma_iova_link);
> +
> +/**
> + * dma_iova_sync - Sync IOTLB
> + * @dev: DMA device
> + * @state: IOVA state
> + * @offset: offset into the IOVA state to sync
> + * @size: size of the buffer
> + *
> + * Sync the IOTLB for the given IOVA state. This function should be called on
> + * the IOVA-contiguous range created by one or more dma_iova_link() calls
> + * to sync the IOTLB.
> + */
> +int dma_iova_sync(struct device *dev, struct dma_iova_state *state,
> +		size_t offset, size_t size)
> +{
> +	struct iommu_domain *domain = iommu_get_dma_domain(dev);
> +	struct iommu_dma_cookie *cookie = domain->iova_cookie;
> +	struct iova_domain *iovad = &cookie->iovad;
> +	dma_addr_t addr = state->addr + offset;
> +	size_t iova_start_pad = iova_offset(iovad, addr);
> +
> +	return iommu_sync_map(domain, addr - iova_start_pad,
> +			iova_align(iovad, size + iova_start_pad));
> +}
> +EXPORT_SYMBOL_GPL(dma_iova_sync);
> +
> +static void iommu_dma_iova_unlink_range_slow(struct device *dev,
> +		dma_addr_t addr, size_t size, enum dma_data_direction dir,
> +		unsigned long attrs)
> +{
> +	struct iommu_domain *domain = iommu_get_dma_domain(dev);
> +	struct iommu_dma_cookie *cookie = domain->iova_cookie;
> +	struct iova_domain *iovad = &cookie->iovad;
> +	size_t iova_start_pad = iova_offset(iovad, addr);
> +	dma_addr_t end = addr + size;
> +
> +	do {
> +		phys_addr_t phys;
> +		size_t len;
> +
> +		phys = iommu_iova_to_phys(domain, addr);
> +		if (WARN_ON(!phys))
> +			continue;
> +		len = min_t(size_t,
> +			end - addr, iovad->granule - iova_start_pad);
> +
> +		if (!dev_is_dma_coherent(dev) &&
> +		    !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> +			arch_sync_dma_for_cpu(phys, len, dir);
> +
> +		swiotlb_tbl_unmap_single(dev, phys, len, dir, attrs);
> +
> +		addr += len;
> +		iova_start_pad = 0;
> +	} while (addr < end);
> +}
> +
> +static void __iommu_dma_iova_unlink(struct device *dev,
> +		struct dma_iova_state *state, size_t offset, size_t size,
> +		enum dma_data_direction dir, unsigned long attrs,
> +		bool free_iova)
> +{
> +	struct iommu_domain *domain = iommu_get_dma_domain(dev);
> +	struct iommu_dma_cookie *cookie = domain->iova_cookie;
> +	struct iova_domain *iovad = &cookie->iovad;
> +	dma_addr_t addr = state->addr + offset;
> +	size_t iova_start_pad = iova_offset(iovad, addr);
> +	struct iommu_iotlb_gather iotlb_gather;
> +	size_t unmapped;
> +
> +	if ((state->__size & DMA_IOVA_USE_SWIOTLB) ||
> +	    (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)))
> +		iommu_dma_iova_unlink_range_slow(dev, addr, size, dir, attrs);
> +
> +	iommu_iotlb_gather_init(&iotlb_gather);
> +	iotlb_gather.queued = free_iova && READ_ONCE(cookie->fq_domain);
> +
> +	size = iova_align(iovad, size + iova_start_pad);
> +	addr -= iova_start_pad;
> +	unmapped = iommu_unmap_fast(domain, addr, size, &iotlb_gather);
> +	WARN_ON(unmapped != size);

Does the new API require that the 'size' passed to dma_iova_unlink()
exactly match the 'size' passed to the corresponding call to
dma_iova_link()? I ask because the IOMMU page-table code is built around
the assumption that partial unmap() operations never occur (i.e.
operations which could require splitting a huge mapping).

We just removed [1] that code from the Arm IO page-table implementations,
so it would be good to avoid adding it back for this.

Will

[1] https://git.kernel.org/pub/scm/linux/kernel/git/iommu/linux.git/commit/?h=arm/smmu&id=33729a5fc0caf7a97d20507acbeee6b012e7e519