From mboxrd@z Thu Jan 1 00:00:00 1970
From: Leon Romanovsky
To: Christoph Hellwig, Robin Murphy, Marek Szyprowski, Joerg Roedel,
	Will Deacon, Jason Gunthorpe, Chaitanya Kulkarni
Cc: Leon Romanovsky, Jonathan Corbet, Jens Axboe, Keith Busch,
	Sagi Grimberg, Yishai Hadas, Shameer Kolothum, Kevin Tian,
	Alex Williamson, Jérôme Glisse, Andrew Morton,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
	kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche,
	Damien Le Moal, Amir Goldstein, josef@toxicpanda.com,
	"Martin K. Petersen", daniel@iogearbox.net, Dan Williams,
	jack@suse.com, Zhu Yanjun
Subject: [RFC 04/16] iommu/dma: Provide an interface to allow preallocate IOVA
Date: Tue, 5 Mar 2024 12:22:05 +0200
X-Mailer: git-send-email 2.44.0

From: Leon Romanovsky

Separate IOVA allocation into a dedicated callback. This allows an IOVA
to be cached and reused in fast paths for devices which support the ODP
(on-demand-paging) mechanism.

Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c | 50 +++++++++++++++++++++++++++++----------
 1 file changed, 38 insertions(+), 12 deletions(-)
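To illustrate the intended use (a sketch only, not part of this patch):
an ODP-capable driver preallocates an IOVA range once at setup time and
releases it at teardown, so its page-fault fast path can map pages into
the cached range without going back to the IOVA allocator. The odp_*
helper names below are hypothetical, and dispatching through
get_dma_ops() assumes the .alloc_iova/.free_iova dma_map_ops members
introduced by this series:

#include <linux/dma-map-ops.h>

/* Hypothetical setup-time helper: reserve an IOVA range for later reuse. */
static dma_addr_t odp_prealloc_iova(struct device *dev, size_t size)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	if (!ops || !ops->alloc_iova)
		return 0;	/* as with __iommu_dma_alloc_iova(), 0 means failure */

	/* size is aligned to the IOVA granularity internally */
	return ops->alloc_iova(dev, size);
}

/* Hypothetical teardown helper: return the cached range to the allocator. */
static void odp_release_iova(struct device *dev, dma_addr_t iova, size_t size)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	if (ops && ops->free_iova)
		ops->free_iova(dev, iova, size);
}

Existing callers are unaffected: the renamed __iommu_dma_alloc_iova()
and __iommu_dma_free_iova() keep the current behaviour, and only the
new wrappers are exposed through dma_map_ops.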
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 50ccc4f1ef81..e55726783501 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -356,7 +356,7 @@ int iommu_dma_init_fq(struct iommu_domain *domain)
 	atomic_set(&cookie->fq_timer_on, 0);
 	/*
 	 * Prevent incomplete fq state being observable. Pairs with path from
-	 * __iommu_dma_unmap() through iommu_dma_free_iova() to queue_iova()
+	 * __iommu_dma_unmap() through __iommu_dma_free_iova() to queue_iova()
 	 */
 	smp_wmb();
 	WRITE_ONCE(cookie->fq_domain, domain);
@@ -760,7 +760,7 @@ static int dma_info_to_prot(enum dma_data_direction dir, bool coherent,
 	}
 }
 
-static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
+static dma_addr_t __iommu_dma_alloc_iova(struct iommu_domain *domain,
 		size_t size, u64 dma_limit, struct device *dev)
 {
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
@@ -806,7 +806,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
 	return (dma_addr_t)iova << shift;
 }
 
-static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
+static void __iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
 		dma_addr_t iova, size_t size, struct iommu_iotlb_gather *gather)
 {
 	struct iova_domain *iovad = &cookie->iovad;
@@ -843,7 +843,7 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
 
 	if (!iotlb_gather.queued)
 		iommu_iotlb_sync(domain, &iotlb_gather);
-	iommu_dma_free_iova(cookie, dma_addr, size, &iotlb_gather);
+	__iommu_dma_free_iova(cookie, dma_addr, size, &iotlb_gather);
 }
 
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
@@ -861,12 +861,12 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
 
 	size = iova_align(iovad, size + iova_off);
 
-	iova = iommu_dma_alloc_iova(domain, size, dma_mask, dev);
+	iova = __iommu_dma_alloc_iova(domain, size, dma_mask, dev);
 	if (!iova)
 		return DMA_MAPPING_ERROR;
 
 	if (iommu_map(domain, iova, phys - iova_off, size, prot, GFP_ATOMIC)) {
-		iommu_dma_free_iova(cookie, iova, size, NULL);
+		__iommu_dma_free_iova(cookie, iova, size, NULL);
 		return DMA_MAPPING_ERROR;
 	}
 	return iova + iova_off;
@@ -970,7 +970,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
 		return NULL;
 
 	size = iova_align(iovad, size);
-	iova = iommu_dma_alloc_iova(domain, size, dev->coherent_dma_mask, dev);
+	iova = __iommu_dma_alloc_iova(domain, size, dev->coherent_dma_mask, dev);
 	if (!iova)
 		goto out_free_pages;
 
@@ -1004,7 +1004,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
 out_free_sg:
 	sg_free_table(sgt);
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, size, NULL);
+	__iommu_dma_free_iova(cookie, iova, size, NULL);
 out_free_pages:
 	__iommu_dma_free_pages(pages, count);
 	return NULL;
@@ -1436,7 +1436,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	if (!iova_len)
 		return __finalise_sg(dev, sg, nents, 0);
 
-	iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
+	iova = __iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
 	if (!iova) {
 		ret = -ENOMEM;
 		goto out_restore_sg;
@@ -1453,7 +1453,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	return __finalise_sg(dev, sg, nents, iova);
 
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, iova_len, NULL);
+	__iommu_dma_free_iova(cookie, iova, iova_len, NULL);
 out_restore_sg:
 	__invalidate_sg(sg, nents);
 out:
@@ -1706,6 +1706,30 @@ static size_t iommu_dma_opt_mapping_size(void)
 	return iova_rcache_range();
 }
 
+static dma_addr_t iommu_dma_alloc_iova(struct device *dev, size_t size)
+{
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
+	struct iommu_dma_cookie *cookie = domain->iova_cookie;
+	struct iova_domain *iovad = &cookie->iovad;
+	dma_addr_t dma_mask = dma_get_mask(dev);
+
+	size = iova_align(iovad, size);
+	return __iommu_dma_alloc_iova(domain, size, dma_mask, dev);
+}
+
+static void iommu_dma_free_iova(struct device *dev, dma_addr_t iova,
+		size_t size)
+{
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
+	struct iommu_dma_cookie *cookie = domain->iova_cookie;
+	struct iova_domain *iovad = &cookie->iovad;
+	struct iommu_iotlb_gather iotlb_gather;
+
+	size = iova_align(iovad, size);
+	iommu_iotlb_gather_init(&iotlb_gather);
+	__iommu_dma_free_iova(cookie, iova, size, &iotlb_gather);
+}
+
 static const struct dma_map_ops iommu_dma_ops = {
 	.flags			= DMA_F_PCI_P2PDMA_SUPPORTED,
 	.alloc			= iommu_dma_alloc,
@@ -1728,6 +1752,8 @@ static const struct dma_map_ops iommu_dma_ops = {
 	.unmap_resource		= iommu_dma_unmap_resource,
 	.get_merge_boundary	= iommu_dma_get_merge_boundary,
 	.opt_mapping_size	= iommu_dma_opt_mapping_size,
+	.alloc_iova		= iommu_dma_alloc_iova,
+	.free_iova		= iommu_dma_free_iova,
 };
 
 /*
@@ -1776,7 +1802,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 	if (!msi_page)
 		return NULL;
 
-	iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
+	iova = __iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
 	if (!iova)
 		goto out_free_page;
 
@@ -1790,7 +1816,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 	return msi_page;
 
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, size, NULL);
+	__iommu_dma_free_iova(cookie, iova, size, NULL);
 out_free_page:
 	kfree(msi_page);
 	return NULL;
-- 
2.44.0