Date: Thu, 7 Nov 2024 16:50:34 +0200
From: Leon Romanovsky <leon@kernel.org>
To: Christoph Hellwig
Cc: Jason Gunthorpe, Robin Murphy, Jens Axboe, Joerg Roedel, Will Deacon,
	Sagi Grimberg, Keith Busch, Bjorn Helgaas, Logan Gunthorpe,
	Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson,
	Marek Szyprowski, Jérôme Glisse, Andrew Morton, Jonathan Corbet,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
	linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v1 07/17] dma-mapping: Implement link/unlink ranges API
Message-ID: <20241107145034.GO5006@unreal>
References: <51c5a5d5-6f90-4c42-b0ef-b87791e00f20@arm.com>
	<20241104091048.GA25041@lst.de>
	<20241104121924.GC35848@ziepe.ca>
	<20241104125302.GA11168@lst.de>
In-Reply-To: <20241104125302.GA11168@lst.de>

On Mon, Nov 04, 2024 at 01:53:02PM +0100, Christoph Hellwig wrote:
> On Mon, Nov 04, 2024 at 08:19:24AM -0400, Jason Gunthorpe wrote:
> > > That's a good point. Only mapped through host bridge P2P can even
> > > end up here, so the address is a perfectly valid physical address
> > > in the host. But I'm not sure if all arch_sync_dma_for_device
> > > implementations handle IOMMU memory fine.
> >
> > I was told on x86 if you do a cache flush operation on MMIO there is a
> > chance it will MCE. Recently had some similar discussions about ARM
> > where it was asserted some platforms may have similar.
>
> On x86 we never flush caches for DMA operations anyway, so x86 isn't
> really the concern here, but architectures that do cache-incoherent DMA
> to PCIe devices. Which isn't a whole lot, as most SoCs try to avoid that
> for PCIe even if they lack DMA coherence for lesser peripherals, but I bet
> there are some on arm/arm64 and maybe riscv or mips.
>
> > It would be safest to only call arch flushing calls on memory that is
> > mapped cacheable. We can assume that a P2P target is never CPU
> > mapped cacheable, regardless of how the DMA is routed.
>
> Yes.

I.e. force DMA_ATTR_SKIP_CPU_SYNC for P2P. What do you think?
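Something along these lines, as a minimal sketch (dma_iova_link() and the
attrs plumbing are from this series; "transfer_is_p2p" and the surrounding
variables are made-up placeholders):

	unsigned long attrs = 0;
	int ret;

	/*
	 * transfer_is_p2p is a stand-in for however the caller knows the
	 * target is a P2P/MMIO page; such memory is never CPU mapped
	 * cacheable, so arch cache maintenance must be skipped on it.
	 */
	if (transfer_is_p2p)
		attrs |= DMA_ATTR_SKIP_CPU_SYNC;

	ret = dma_iova_link(dev, state, paddr, offset, len,
			    DMA_BIDIRECTIONAL, attrs);

Applied to the actual callers: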
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 38bcb3ecceeb..065bdace3344 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -559,14 +559,19 @@ static bool blk_rq_dma_map_iova(struct request *req, struct device *dma_dev,
 {
 	enum dma_data_direction dir = rq_dma_dir(req);
 	unsigned int mapped = 0;
+	unsigned long attrs = 0;
 	int error = 0;
 
 	iter->addr = state->addr;
 	iter->len = dma_iova_size(state);
 
+	if (req->cmd_flags & REQ_P2PDMA) {
+		attrs |= DMA_ATTR_SKIP_CPU_SYNC;
+		req->cmd_flags &= ~REQ_P2PDMA;
+	}
 	do {
 		error = dma_iova_link(dma_dev, state, vec->paddr, mapped,
-				vec->len, dir, 0);
+				vec->len, dir, attrs);
 		if (error)
 			goto error_unmap;
 		mapped += vec->len;
@@ -578,7 +583,7 @@ static bool blk_rq_dma_map_iova(struct request *req, struct device *dma_dev,
 	return true;
 
 error_unmap:
-	dma_iova_destroy(dma_dev, state, mapped, rq_dma_dir(req), 0);
+	dma_iova_destroy(dma_dev, state, mapped, rq_dma_dir(req), attrs);
 	iter->status = errno_to_blk_status(error);
 	return false;
 }
@@ -633,7 +638,6 @@ bool blk_rq_dma_map_iter_start(struct request *req, struct device *dma_dev,
 		 * P2P transfers through the host bridge are treated the
 		 * same as non-P2P transfers below and during unmap.
 		 */
-		req->cmd_flags &= ~REQ_P2PDMA;
 		break;
 	default:
 		iter->status = BLK_STS_INVAL;
@@ -644,6 +648,8 @@ bool blk_rq_dma_map_iter_start(struct request *req, struct device *dma_dev,
 	if (blk_can_dma_map_iova(req, dma_dev) &&
 	    dma_iova_try_alloc(dma_dev, state, vec.paddr, total_len))
 		return blk_rq_dma_map_iova(req, dma_dev, state, iter, &vec);
+
+	req->cmd_flags &= ~REQ_P2PDMA;
 	return blk_dma_map_direct(req, dma_dev, iter, &vec);
 }
 EXPORT_SYMBOL_GPL(blk_rq_dma_map_iter_start);
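The HMM side needs the same per-pfn decision, but it also has to record it,
since hmm_dma_unmap_pfn() must replay the exact attrs that were used at
link time. Roughly (a sketch; pci_p2pdma_state() and enum
pci_p2pdma_map_type are from this series, the helper itself is made up):

	static unsigned long p2p_map_type_to_attrs(enum pci_p2pdma_map_type type)
	{
		/*
		 * Host-bridge P2P is linked like regular memory, but the
		 * target is MMIO, so CPU cache sync must be skipped.
		 * Bus-address P2P bypasses the IOMMU entirely and needs
		 * no attrs at all.
		 */
		if (type == PCI_P2PDMA_MAP_THRU_HOST_BRIDGE)
			return DMA_ATTR_SKIP_CPU_SYNC;
		return 0;
	}

which is what the new sticky HMM_PFN_P2PDMA flag below keeps track of: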
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 62980ca8f3c5..5fe30fbc42b0 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -23,6 +23,7 @@ struct mmu_interval_notifier;
  * HMM_PFN_WRITE - if the page memory can be written to (requires HMM_PFN_VALID)
  * HMM_PFN_ERROR - accessing the pfn is impossible and the device should
  *		fail. ie poisoned memory, special pages, no vma, etc
+ * HMM_PFN_P2PDMA - P2P page, not bus mapped
  * HMM_PFN_P2PDMA_BUS - Bus mapped P2P transfer
  * HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation
  *	to mark that page is already DMA mapped
@@ -41,6 +42,7 @@ enum hmm_pfn_flags {
 	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
 
 	/* Sticky flag, carried from Input to Output */
+	HMM_PFN_P2PDMA = 1UL << (BITS_PER_LONG - 5),
 	HMM_PFN_P2PDMA_BUS = 1UL << (BITS_PER_LONG - 6),
 	HMM_PFN_DMA_MAPPED = 1UL << (BITS_PER_LONG - 7),
diff --git a/mm/hmm.c b/mm/hmm.c
index 4ef2b3815212..b2ec199c2ea8 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -710,6 +710,7 @@ dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map,
 	struct page *page = hmm_pfn_to_page(pfns[idx]);
 	phys_addr_t paddr = hmm_pfn_to_phys(pfns[idx]);
 	size_t offset = idx * map->dma_entry_size;
+	unsigned long attrs = 0;
 	dma_addr_t dma_addr;
 	int ret;
 
@@ -740,6 +741,9 @@ dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map,
 	switch (pci_p2pdma_state(p2pdma_state, dev, page)) {
 	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
+		attrs |= DMA_ATTR_SKIP_CPU_SYNC;
+		pfns[idx] |= HMM_PFN_P2PDMA;
+		fallthrough;
 	case PCI_P2PDMA_MAP_NONE:
 		break;
 	case PCI_P2PDMA_MAP_BUS_ADDR:
@@ -752,7 +756,8 @@ dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map,
 
 	if (dma_use_iova(state)) {
 		ret = dma_iova_link(dev, state, paddr, offset,
-				map->dma_entry_size, DMA_BIDIRECTIONAL, 0);
+				map->dma_entry_size, DMA_BIDIRECTIONAL,
+				attrs);
 		if (ret)
 			return DMA_MAPPING_ERROR;
 
@@ -793,6 +798,7 @@ bool hmm_dma_unmap_pfn(struct device *dev, struct hmm_dma_map *map, size_t idx)
 	struct dma_iova_state *state = &map->state;
 	dma_addr_t *dma_addrs = map->dma_list;
 	unsigned long *pfns = map->pfn_list;
+	unsigned long attrs = 0;
 
 #define HMM_PFN_VALID_DMA (HMM_PFN_VALID | HMM_PFN_DMA_MAPPED)
 	if ((pfns[idx] & HMM_PFN_VALID_DMA) != HMM_PFN_VALID_DMA)
@@ -801,14 +807,16 @@ bool hmm_dma_unmap_pfn(struct device *dev, struct hmm_dma_map *map, size_t idx)
 
 	if (pfns[idx] & HMM_PFN_P2PDMA_BUS)
 		; /* no need to unmap bus address P2P mappings */
-	else if (dma_use_iova(state))
+	else if (dma_use_iova(state)) {
+		if (pfns[idx] & HMM_PFN_P2PDMA)
+			attrs |= DMA_ATTR_SKIP_CPU_SYNC;
 		dma_iova_unlink(dev, state, idx * map->dma_entry_size,
-			       map->dma_entry_size, DMA_BIDIRECTIONAL, 0);
-	else if (dma_need_unmap(dev))
+			       map->dma_entry_size, DMA_BIDIRECTIONAL, attrs);
+	} else if (dma_need_unmap(dev))
 		dma_unmap_page(dev, dma_addrs[idx], map->dma_entry_size,
 			       DMA_BIDIRECTIONAL);
 
-	pfns[idx] &= ~(HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA_BUS);
+	pfns[idx] &= ~(HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA | HMM_PFN_P2PDMA_BUS);
 	return true;
 }
 EXPORT_SYMBOL_GPL(hmm_dma_unmap_pfn);
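With the sticky flag in place each pfn entry is self-describing at unmap
time: host-bridge P2P entries get unlinked with DMA_ATTR_SKIP_CPU_SYNC,
bus-address entries are not unlinked at all, and everything else takes the
normal sync path. As an illustrative predicate (not part of the patch):

	/* True iff unmapping this entry may perform CPU cache maintenance. */
	static bool hmm_pfn_allows_cpu_sync(unsigned long pfn_flags)
	{
		return !(pfn_flags & (HMM_PFN_P2PDMA | HMM_PFN_P2PDMA_BUS));
	}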