From mboxrd@z Thu Jan 1 00:00:00 1970
From: Leon Romanovsky <leon@kernel.org>
To:
Cc: Christoph Hellwig, Jens Axboe, Keith Busch, Jake Edge,
	Jonathan Corbet, Jason Gunthorpe, Zhu Yanjun, Robin Murphy,
	Joerg Roedel, Will Deacon, Sagi Grimberg, Bjorn Helgaas,
	Logan Gunthorpe, Yishai Hadas, Shameer Kolothum, Kevin Tian,
	Alex Williamson, Jérôme Glisse, Andrew Morton,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
	linux-rdma@vger.kernel.org, iommu@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
	kvm@vger.kernel.org, linux-mm@kvack.org, Niklas Schnelle,
	Chuck Lever, Luis Chamberlain, Matthew Wilcox, Dan Williams,
	Kanchan Joshi, Chaitanya Kulkarni, Jason Gunthorpe,
	Leon Romanovsky
Subject: [PATCH v11 3/9] iommu: generalize the batched sync after map interface
Date: Mon, 5 May 2025 10:01:40 +0300
Message-ID:
X-Mailer: git-send-email 2.49.0
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: Christoph Hellwig

For the upcoming IOVA-based DMA API we want to batch the
ops->iotlb_sync_map() call after mapping multiple IOVAs from dma-iommu
without having a scatterlist. Improve the API for that:

Add iommu_sync_map() as a wrapper around ops->iotlb_sync_map() so that
callers don't need to poke into the domain methods directly.

Formalize __iommu_map() into iommu_map_nosync(), which requires the
caller to call iommu_sync_map() after all maps are completed.

Refactor the existing sanity checks from all the different layers into
iommu_map_nosync().

Signed-off-by: Christoph Hellwig
Acked-by: Will Deacon
Tested-by: Jens Axboe
Reviewed-by: Jason Gunthorpe
Reviewed-by: Luis Chamberlain
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/iommu.c | 65 +++++++++++++++++++------------------------
 include/linux/iommu.h |  4 +++
 2 files changed, 33 insertions(+), 36 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 4f91a740c15f..02960585b8d4 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2443,8 +2443,8 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
 	return pgsize;
 }
 
-static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
-		       phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
+		phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
 	const struct iommu_domain_ops *ops = domain->ops;
 	unsigned long orig_iova = iova;
@@ -2453,12 +2453,19 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
 	phys_addr_t orig_paddr = paddr;
 	int ret = 0;
 
+	might_sleep_if(gfpflags_allow_blocking(gfp));
+
 	if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
 		return -EINVAL;
 
 	if (WARN_ON(!ops->map_pages || domain->pgsize_bitmap == 0UL))
 		return -ENODEV;
 
+	/* Discourage passing strange GFP flags */
+	if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
+				__GFP_HIGHMEM)))
+		return -EINVAL;
+
 	/* find out the minimum page size supported */
 	min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
 
@@ -2506,31 +2513,27 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
 	return ret;
 }
 
-int iommu_map(struct iommu_domain *domain, unsigned long iova,
-	      phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+int iommu_sync_map(struct iommu_domain *domain, unsigned long iova, size_t size)
 {
 	const struct iommu_domain_ops *ops = domain->ops;
-	int ret;
-
-	might_sleep_if(gfpflags_allow_blocking(gfp));
-	/* Discourage passing strange GFP flags */
-	if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
-				__GFP_HIGHMEM)))
-		return -EINVAL;
 
+	if (!ops->iotlb_sync_map)
+		return 0;
+	return ops->iotlb_sync_map(domain, iova, size);
+}
 
-	ret = __iommu_map(domain, iova, paddr, size, prot, gfp);
-	if (ret == 0 && ops->iotlb_sync_map) {
-		ret = ops->iotlb_sync_map(domain, iova, size);
-		if (ret)
-			goto out_err;
-	}
+int iommu_map(struct iommu_domain *domain, unsigned long iova,
+	phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+{
+	int ret;
 
-	return ret;
+	ret = iommu_map_nosync(domain, iova, paddr, size, prot, gfp);
+	if (ret)
+		return ret;
 
-out_err:
-	/* undo mappings already done */
-	iommu_unmap(domain, iova, size);
+	ret = iommu_sync_map(domain, iova, size);
+	if (ret)
+		iommu_unmap(domain, iova, size);
 
 	return ret;
 }
@@ -2630,26 +2633,17 @@ ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 		struct scatterlist *sg, unsigned int nents, int prot,
 		gfp_t gfp)
 {
-	const struct iommu_domain_ops *ops = domain->ops;
 	size_t len = 0, mapped = 0;
 	phys_addr_t start;
 	unsigned int i = 0;
 	int ret;
 
-	might_sleep_if(gfpflags_allow_blocking(gfp));
-
-	/* Discourage passing strange GFP flags */
-	if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
-				__GFP_HIGHMEM)))
-		return -EINVAL;
-
 	while (i <= nents) {
 		phys_addr_t s_phys = sg_phys(sg);
 
 		if (len && s_phys != start + len) {
-			ret = __iommu_map(domain, iova + mapped, start,
+			ret = iommu_map_nosync(domain, iova + mapped, start,
 					len, prot, gfp);
-
 			if (ret)
 				goto out_err;
@@ -2672,11 +2666,10 @@ ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 		sg = sg_next(sg);
 	}
 
-	if (ops->iotlb_sync_map) {
-		ret = ops->iotlb_sync_map(domain, iova, mapped);
-		if (ret)
-			goto out_err;
-	}
+	ret = iommu_sync_map(domain, iova, mapped);
+	if (ret)
+		goto out_err;
+
 	return mapped;
 
 out_err:
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index ccce8a751e2a..ce472af8e9c3 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -872,6 +872,10 @@ extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
 extern struct iommu_domain *iommu_get_dma_domain(struct device *dev);
 extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
 		     phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
+int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
+	phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
+int iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
+	size_t size);
 extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 			  size_t size);
 extern size_t iommu_unmap_fast(struct iommu_domain *domain,
-- 
2.49.0