From mboxrd@z Thu Jan  1 00:00:00 1970
From: Leon Romanovsky <leon@kernel.org>
To: Marek Szyprowski, Jens Axboe, Christoph Hellwig, Keith Busch
Cc: Jake Edge, Jonathan Corbet, Jason Gunthorpe, Zhu Yanjun, Robin Murphy,
	Joerg Roedel, Will Deacon, Sagi Grimberg, Bjorn Helgaas,
	Logan Gunthorpe, Yishai Hadas, Shameer Kolothum, Kevin Tian,
	Alex Williamson, Jérôme Glisse, Andrew Morton,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
	linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
	Niklas Schnelle, Chuck Lever, Luis Chamberlain, Matthew Wilcox,
	Dan Williams, Kanchan Joshi, Chaitanya Kulkarni, Jason Gunthorpe,
	Leon Romanovsky
Subject: [PATCH v10 03/24] iommu: generalize the batched sync after map interface
Date: Mon, 28 Apr 2025 12:22:09 +0300
Message-ID: <69da19d2cc5df0be5112f0cf2365a0337b00d873.1745831017.git.leon@kernel.org>
X-Mailer: git-send-email 2.49.0
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Christoph Hellwig

For the upcoming IOVA-based DMA API we want to batch the
ops->iotlb_sync_map() call after mapping multiple IOVAs from dma-iommu
without having a scatterlist. Improve the API.

Add a wrapper for the map_sync as iommu_sync_map() so that callers
don't need to poke into the methods directly.

Formalize __iommu_map() into iommu_map_nosync() which requires the
caller to call iommu_sync_map() after all maps are completed.

Refactor the existing sanity checks from all the different layers into
iommu_map_nosync().

Signed-off-by: Christoph Hellwig
Acked-by: Will Deacon
Tested-by: Jens Axboe
Reviewed-by: Jason Gunthorpe
Reviewed-by: Luis Chamberlain
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/iommu.c | 65 +++++++++++++++++++------------------------
 include/linux/iommu.h |  4 +++
 2 files changed, 33 insertions(+), 36 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 4f91a740c15f..02960585b8d4 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2443,8 +2443,8 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
 	return pgsize;
 }
 
-static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
-		       phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
+		phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
 	const struct iommu_domain_ops *ops = domain->ops;
 	unsigned long orig_iova = iova;
@@ -2453,12 +2453,19 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
 	phys_addr_t orig_paddr = paddr;
 	int ret = 0;
 
+	might_sleep_if(gfpflags_allow_blocking(gfp));
+
 	if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
 		return -EINVAL;
 
 	if (WARN_ON(!ops->map_pages || domain->pgsize_bitmap == 0UL))
 		return -ENODEV;
 
+	/* Discourage passing strange GFP flags */
+	if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
+				__GFP_HIGHMEM)))
+		return -EINVAL;
+
 	/* find out the minimum page size supported */
 	min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
 
@@ -2506,31 +2513,27 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
 	return ret;
 }
 
-int iommu_map(struct iommu_domain *domain, unsigned long iova,
-	      phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+int iommu_sync_map(struct iommu_domain *domain, unsigned long iova, size_t size)
 {
 	const struct iommu_domain_ops *ops = domain->ops;
-	int ret;
-
-	might_sleep_if(gfpflags_allow_blocking(gfp));
 
-	/* Discourage passing strange GFP flags */
-	if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
-				__GFP_HIGHMEM)))
-		return -EINVAL;
+	if (!ops->iotlb_sync_map)
+		return 0;
+	return ops->iotlb_sync_map(domain, iova, size);
+}
 
-	ret = __iommu_map(domain, iova, paddr, size, prot, gfp);
-	if (ret == 0 && ops->iotlb_sync_map) {
-		ret = ops->iotlb_sync_map(domain, iova, size);
-		if (ret)
-			goto out_err;
-	}
+int iommu_map(struct iommu_domain *domain, unsigned long iova,
+	      phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+{
+	int ret;
 
-	return ret;
+	ret = iommu_map_nosync(domain, iova, paddr, size, prot, gfp);
+	if (ret)
+		return ret;
 
-out_err:
-	/* undo mappings already done */
-	iommu_unmap(domain, iova, size);
+	ret = iommu_sync_map(domain, iova, size);
+	if (ret)
+		iommu_unmap(domain, iova, size);
 
 	return ret;
 }
@@ -2630,26 +2633,17 @@ ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 		struct scatterlist *sg, unsigned int nents, int prot,
 		gfp_t gfp)
 {
-	const struct iommu_domain_ops *ops = domain->ops;
 	size_t len = 0, mapped = 0;
 	phys_addr_t start;
 	unsigned int i = 0;
 	int ret;
 
-	might_sleep_if(gfpflags_allow_blocking(gfp));
-
-	/* Discourage passing strange GFP flags */
-	if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
-				__GFP_HIGHMEM)))
-		return -EINVAL;
-
 	while (i <= nents) {
 		phys_addr_t s_phys = sg_phys(sg);
 
 		if (len && s_phys != start + len) {
-			ret = __iommu_map(domain, iova + mapped, start,
+			ret = iommu_map_nosync(domain, iova + mapped, start,
 					len, prot, gfp);
-
 			if (ret)
 				goto out_err;
 
@@ -2672,11 +2666,10 @@ ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 		sg = sg_next(sg);
 	}
 
-	if (ops->iotlb_sync_map) {
-		ret = ops->iotlb_sync_map(domain, iova, mapped);
-		if (ret)
-			goto out_err;
-	}
+	ret = iommu_sync_map(domain, iova, mapped);
+	if (ret)
+		goto out_err;
+
 	return mapped;
 
 out_err:
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index ccce8a751e2a..ce472af8e9c3 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -872,6 +872,10 @@ extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
 extern struct iommu_domain *iommu_get_dma_domain(struct device *dev);
 extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
 		     phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
+int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
+	phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
+int iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
+	size_t size);
 extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 			  size_t size);
 extern size_t iommu_unmap_fast(struct iommu_domain *domain,
-- 
2.49.0