From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon, Christoph Hellwig, Sagi Grimberg
Cc: Leon Romanovsky, Keith Busch, Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson, Marek Szyprowski, Jérôme Glisse, Andrew Morton, Jonathan Corbet, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 10/18] mm/hmm: let users tag specific PFNs with a DMA-mapped bit
Date: Sun, 27 Oct 2024 16:21:10 +0200
Message-ID: <6c79710ccc5d9fec36172fea13498e30132a0600.1730037276.git.leon@kernel.org>
X-Mailer: git-send-email 2.46.2
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: Leon Romanovsky

Introduce a new sticky flag, HMM_PFN_DMA_MAPPED, which hmm_range_fault() does not
overwrite. The flag lets users tag individual PFNs in order to record that a
given PFN has already been DMA mapped.

Signed-off-by: Leon Romanovsky
---
 include/linux/hmm.h | 14 ++++++++++++++
 mm/hmm.c            | 34 +++++++++++++++++++++-------------
 2 files changed, 35 insertions(+), 13 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 126a36571667..5dd655f6766b 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -23,6 +23,8 @@ struct mmu_interval_notifier;
  * HMM_PFN_WRITE - if the page memory can be written to (requires HMM_PFN_VALID)
  * HMM_PFN_ERROR - accessing the pfn is impossible and the device should
  *		fail. ie poisoned memory, special pages, no vma, etc
+ * HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation
+ *		to mark that page is already DMA mapped
  *
  * On input:
  * 0 - Return the current state of the page, do not fault it.
@@ -36,6 +38,10 @@ enum hmm_pfn_flags {
 	HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
 	HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
 	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
+
+	/* Sticky flag, carried from Input to Output */
+	HMM_PFN_DMA_MAPPED = 1UL << (BITS_PER_LONG - 7),
+
 	HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 8),
 
 	/* Input flags */
@@ -57,6 +63,14 @@ static inline struct page *hmm_pfn_to_page(unsigned long hmm_pfn)
 	return pfn_to_page(hmm_pfn & ~HMM_PFN_FLAGS);
 }
 
+/*
+ * hmm_pfn_to_phys() - return physical address pointed to by a device entry
+ */
+static inline phys_addr_t hmm_pfn_to_phys(unsigned long hmm_pfn)
+{
+	return __pfn_to_phys(hmm_pfn & ~HMM_PFN_FLAGS);
+}
+
 /*
  * hmm_pfn_to_map_order() - return the CPU mapping size order
  *
diff --git a/mm/hmm.c b/mm/hmm.c
index 7e0229ae4a5a..2a0c34d7cb2b 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -44,8 +44,10 @@ static int hmm_pfns_fill(unsigned long addr, unsigned long end,
 {
 	unsigned long i = (addr - range->start) >> PAGE_SHIFT;
 
-	for (; addr < end; addr += PAGE_SIZE, i++)
-		range->hmm_pfns[i] = cpu_flags;
+	for (; addr < end; addr += PAGE_SIZE, i++) {
+		range->hmm_pfns[i] &= HMM_PFN_DMA_MAPPED;
+		range->hmm_pfns[i] |= cpu_flags;
+	}
 
 	return 0;
 }
@@ -202,8 +204,10 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
 		return hmm_vma_fault(addr, end, required_fault, walk);
 
 	pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
-	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++)
-		hmm_pfns[i] = pfn | cpu_flags;
+	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) {
+		hmm_pfns[i] &= HMM_PFN_DMA_MAPPED;
+		hmm_pfns[i] |= pfn | cpu_flags;
+	}
 	return 0;
 }
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -236,7 +240,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0);
 		if (required_fault)
 			goto fault;
-		*hmm_pfn = 0;
+		*hmm_pfn = *hmm_pfn & HMM_PFN_DMA_MAPPED;
 		return 0;
 	}
 
@@ -253,14 +257,14 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			cpu_flags = HMM_PFN_VALID;
 			if (is_writable_device_private_entry(entry))
 				cpu_flags |= HMM_PFN_WRITE;
-			*hmm_pfn = swp_offset_pfn(entry) | cpu_flags;
+			*hmm_pfn = (*hmm_pfn & HMM_PFN_DMA_MAPPED) | swp_offset_pfn(entry) | cpu_flags;
 			return 0;
 		}
 
 		required_fault = hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0);
 		if (!required_fault) {
-			*hmm_pfn = 0;
+			*hmm_pfn = *hmm_pfn & HMM_PFN_DMA_MAPPED;
 			return 0;
 		}
 
@@ -304,11 +308,11 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			pte_unmap(ptep);
 			return -EFAULT;
 		}
-		*hmm_pfn = HMM_PFN_ERROR;
+		*hmm_pfn = (*hmm_pfn & HMM_PFN_DMA_MAPPED) | HMM_PFN_ERROR;
 		return 0;
 	}
 
-	*hmm_pfn = pte_pfn(pte) | cpu_flags;
+	*hmm_pfn = (*hmm_pfn & HMM_PFN_DMA_MAPPED) | pte_pfn(pte) | cpu_flags;
 	return 0;
 
 fault:
@@ -448,8 +452,10 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 		}
 
 		pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
-		for (i = 0; i < npages; ++i, ++pfn)
-			hmm_pfns[i] = pfn | cpu_flags;
+		for (i = 0; i < npages; ++i, ++pfn) {
+			hmm_pfns[i] &= HMM_PFN_DMA_MAPPED;
+			hmm_pfns[i] |= pfn | cpu_flags;
+		}
 		goto out_unlock;
 	}
 
@@ -507,8 +513,10 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	}
 
 	pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT);
-	for (; addr < end; addr += PAGE_SIZE, i++, pfn++)
-		range->hmm_pfns[i] = pfn | cpu_flags;
+	for (; addr < end; addr += PAGE_SIZE, i++, pfn++) {
+		range->hmm_pfns[i] &= HMM_PFN_DMA_MAPPED;
+		range->hmm_pfns[i] |= pfn | cpu_flags;
+	}
 
 	spin_unlock(ptl);
 	return 0;
-- 
2.46.2