From mboxrd@z Thu Jan  1 00:00:00 1970
From: Leon Romanovsky <leon@kernel.org>
To: Marek Szyprowski, Jens Axboe, Christoph Hellwig, Keith Busch
Cc: Leon Romanovsky, Jake Edge, Jonathan Corbet, Jason Gunthorpe,
	Zhu Yanjun, Robin Murphy, Joerg Roedel, Will Deacon, Sagi Grimberg,
	Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas, Shameer Kolothum,
	Kevin Tian, Alex Williamson,
	Jérôme Glisse, Andrew Morton, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
	linux-rdma@vger.kernel.org, iommu@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
	kvm@vger.kernel.org, linux-mm@kvack.org, Niklas Schnelle,
	Chuck Lever, Luis Chamberlain, Matthew Wilcox, Dan Williams,
	Kanchan Joshi, Chaitanya Kulkarni
Subject: [PATCH v8 10/24] mm/hmm: let users tag specific PFNs with a DMA-mapped bit
Date: Fri, 18 Apr 2025 09:47:40 +0300
Message-ID: <4530281aa6f9846a54dd232ce5071f281cf0b570.1744825142.git.leon@kernel.org>
X-Mailer: git-send-email 2.49.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: Leon Romanovsky

Introduce a new sticky flag, HMM_PFN_DMA_MAPPED, which hmm_range_fault()
does not overwrite. The flag allows users to tag specific PFNs, recording
whether a given PFN has already been DMA mapped.

Signed-off-by: Leon Romanovsky
---
 include/linux/hmm.h | 17 +++++++++++++++
 mm/hmm.c            | 51 ++++++++++++++++++++++++++++-----------------
 2 files changed, 49 insertions(+), 19 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 126a36571667..a1ddbedc19c0 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -23,6 +23,8 @@ struct mmu_interval_notifier;
  * HMM_PFN_WRITE - if the page memory can be written to (requires HMM_PFN_VALID)
  * HMM_PFN_ERROR - accessing the pfn is impossible and the device should
  *                 fail. ie poisoned memory, special pages, no vma, etc
+ * HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation
+ *                      to mark that page is already DMA mapped
  *
  * On input:
  * 0                 - Return the current state of the page, do not fault it.
@@ -36,6 +38,13 @@ enum hmm_pfn_flags {
 	HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
 	HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
 	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
+
+	/*
+	 * Sticky flags, carried from input to output,
+	 * don't forget to update HMM_PFN_INOUT_FLAGS
+	 */
+	HMM_PFN_DMA_MAPPED = 1UL << (BITS_PER_LONG - 7),
+
 	HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 8),
 
 	/* Input flags */
@@ -57,6 +66,14 @@ static inline struct page *hmm_pfn_to_page(unsigned long hmm_pfn)
 	return pfn_to_page(hmm_pfn & ~HMM_PFN_FLAGS);
 }
 
+/*
+ * hmm_pfn_to_phys() - return physical address pointed to by a device entry
+ */
+static inline phys_addr_t hmm_pfn_to_phys(unsigned long hmm_pfn)
+{
+	return __pfn_to_phys(hmm_pfn & ~HMM_PFN_FLAGS);
+}
+
 /*
  * hmm_pfn_to_map_order() - return the CPU mapping size order
  *
diff --git a/mm/hmm.c b/mm/hmm.c
index 082f7b7c0b9e..51fe8b011cc7 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -39,13 +39,20 @@ enum {
 	HMM_NEED_ALL_BITS = HMM_NEED_FAULT | HMM_NEED_WRITE_FAULT,
 };
 
+enum {
+	/* These flags are carried from input-to-output */
+	HMM_PFN_INOUT_FLAGS = HMM_PFN_DMA_MAPPED,
+};
+
 static int hmm_pfns_fill(unsigned long addr, unsigned long end,
 			 struct hmm_range *range, unsigned long cpu_flags)
 {
 	unsigned long i = (addr - range->start) >> PAGE_SHIFT;
 
-	for (; addr < end; addr += PAGE_SIZE, i++)
-		range->hmm_pfns[i] = cpu_flags;
+	for (; addr < end; addr += PAGE_SIZE, i++) {
+		range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+		range->hmm_pfns[i] |= cpu_flags;
+	}
 	return 0;
 }
 
@@ -202,8 +209,10 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
 		return hmm_vma_fault(addr, end, required_fault, walk);
 
 	pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
-	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++)
-		hmm_pfns[i] = pfn | cpu_flags;
+	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) {
+		hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+		hmm_pfns[i] |= pfn | cpu_flags;
+	}
 	return 0;
 }
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -230,14 +239,14 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	unsigned long cpu_flags;
 	pte_t pte = ptep_get(ptep);
 	uint64_t pfn_req_flags = *hmm_pfn;
+	uint64_t new_pfn_flags = 0;
 
 	if (pte_none_mostly(pte)) {
 		required_fault =
 			hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0);
 		if (required_fault)
 			goto fault;
-		*hmm_pfn = 0;
-		return 0;
+		goto out;
 	}
 
 	if (!pte_present(pte)) {
@@ -253,16 +262,14 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			cpu_flags = HMM_PFN_VALID;
 			if (is_writable_device_private_entry(entry))
 				cpu_flags |= HMM_PFN_WRITE;
-			*hmm_pfn = swp_offset_pfn(entry) | cpu_flags;
-			return 0;
+			new_pfn_flags = swp_offset_pfn(entry) | cpu_flags;
+			goto out;
 		}
 
 		required_fault =
 			hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0);
-		if (!required_fault) {
-			*hmm_pfn = 0;
-			return 0;
-		}
+		if (!required_fault)
+			goto out;
 
 		if (!non_swap_entry(entry))
 			goto fault;
@@ -304,11 +311,13 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			pte_unmap(ptep);
 			return -EFAULT;
 		}
-		*hmm_pfn = HMM_PFN_ERROR;
-		return 0;
+		new_pfn_flags = HMM_PFN_ERROR;
+		goto out;
 	}
 
-	*hmm_pfn = pte_pfn(pte) | cpu_flags;
+	new_pfn_flags = pte_pfn(pte) | cpu_flags;
+out:
+	*hmm_pfn = (*hmm_pfn & HMM_PFN_INOUT_FLAGS) | new_pfn_flags;
 	return 0;
 
 fault:
@@ -448,8 +457,10 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 	}
 
 	pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
-	for (i = 0; i < npages; ++i, ++pfn)
-		hmm_pfns[i] = pfn | cpu_flags;
+	for (i = 0; i < npages; ++i, ++pfn) {
+		hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+		hmm_pfns[i] |= pfn | cpu_flags;
+	}
 	goto out_unlock;
 }
 
@@ -507,8 +518,10 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	}
 
 	pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT);
-	for (; addr < end; addr += PAGE_SIZE, i++, pfn++)
-		range->hmm_pfns[i] = pfn | cpu_flags;
+	for (; addr < end; addr += PAGE_SIZE, i++, pfn++) {
+		range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+		range->hmm_pfns[i] |= pfn | cpu_flags;
+	}
 
 	spin_unlock(ptl);
 	return 0;
-- 
2.49.0