From: Leon Romanovsky
To: Marek Szyprowski, Jens Axboe, Christoph Hellwig, Keith Busch
Cc: Leon Romanovsky, Jake Edge, Jonathan Corbet, Jason Gunthorpe, Zhu Yanjun,
    Robin Murphy, Joerg Roedel, Will Deacon, Sagi Grimberg, Bjorn Helgaas,
    Logan Gunthorpe, Yishai Hadas, Shameer Kolothum, Kevin Tian,
    Alex Williamson, Jérôme Glisse, Andrew Morton, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-rdma@vger.kernel.org, iommu@lists.linux.dev,
    linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
    kvm@vger.kernel.org, linux-mm@kvack.org, Niklas Schnelle, Chuck Lever,
    Luis Chamberlain, Matthew Wilcox, Dan Williams, Kanchan Joshi,
    Chaitanya Kulkarni, Jason Gunthorpe
Subject: [PATCH v10 10/24] mm/hmm: let users tag specific PFNs with a DMA-mapped bit
Date: Mon, 28 Apr 2025 12:22:16 +0300
Message-ID:
X-Mailer: git-send-email 2.49.0
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Leon Romanovsky

Introduce a new sticky flag, HMM_PFN_DMA_MAPPED, which hmm_range_fault()
does not overwrite. The flag lets users tag specific PFNs to record
whether a given PFN has already been DMA mapped.

Tested-by: Jens Axboe
Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 include/linux/hmm.h | 20 ++++++++++++++++--
 mm/hmm.c            | 51 ++++++++++++++++++++++++++++-----------------
 2 files changed, 50 insertions(+), 21 deletions(-)
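For context, a minimal sketch of the usage pattern this enables, assuming
a driver that DMA maps pages after hmm_range_fault(); demo_fault_and_map()
and the dma_addrs array are made up for illustration, and the locking and
-EBUSY retry loop around hmm_range_fault() are elided:

static int demo_fault_and_map(struct device *dev, struct hmm_range *range,
                              dma_addr_t *dma_addrs)
{
        unsigned long i, npages = (range->end - range->start) >> PAGE_SHIFT;
        int ret;

        ret = hmm_range_fault(range);
        if (ret)
                return ret;

        for (i = 0; i < npages; i++) {
                if (!(range->hmm_pfns[i] & HMM_PFN_VALID))
                        continue;
                /* The sticky bit survived hmm_range_fault(): already mapped */
                if (range->hmm_pfns[i] & HMM_PFN_DMA_MAPPED)
                        continue;

                dma_addrs[i] = dma_map_page(dev,
                                hmm_pfn_to_page(range->hmm_pfns[i]),
                                0, PAGE_SIZE, DMA_BIDIRECTIONAL);
                if (dma_mapping_error(dev, dma_addrs[i]))
                        return -EIO;

                /* Tag the PFN so the next fault cycle skips the remap */
                range->hmm_pfns[i] |= HMM_PFN_DMA_MAPPED;
        }
        return 0;
}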
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 126a36571667..a43e56f273a1 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -23,6 +23,8 @@ struct mmu_interval_notifier;
  * HMM_PFN_WRITE - if the page memory can be written to (requires HMM_PFN_VALID)
  * HMM_PFN_ERROR - accessing the pfn is impossible and the device should
  *                 fail. ie poisoned memory, special pages, no vma, etc
+ * HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation
+ *                      to mark that page is already DMA mapped
  *
  * On input:
  * 0                 - Return the current state of the page, do not fault it.
@@ -36,13 +38,19 @@ enum hmm_pfn_flags {
 	HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
 	HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
 	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
-	HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 8),
+	/*
+	 * Sticky flags, carried from input to output,
+	 * don't forget to update HMM_PFN_INOUT_FLAGS
+	 */
+	HMM_PFN_DMA_MAPPED = 1UL << (BITS_PER_LONG - 4),
+
+	HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 9),
 
 	/* Input flags */
 	HMM_PFN_REQ_FAULT = HMM_PFN_VALID,
 	HMM_PFN_REQ_WRITE = HMM_PFN_WRITE,
 
-	HMM_PFN_FLAGS = 0xFFUL << HMM_PFN_ORDER_SHIFT,
+	HMM_PFN_FLAGS = ~((1UL << HMM_PFN_ORDER_SHIFT) - 1),
 };
 
 /*
@@ -57,6 +65,14 @@ static inline struct page *hmm_pfn_to_page(unsigned long hmm_pfn)
 	return pfn_to_page(hmm_pfn & ~HMM_PFN_FLAGS);
 }
 
+/*
+ * hmm_pfn_to_phys() - return physical address pointed to by a device entry
+ */
+static inline phys_addr_t hmm_pfn_to_phys(unsigned long hmm_pfn)
+{
+	return __pfn_to_phys(hmm_pfn & ~HMM_PFN_FLAGS);
+}
+
 /*
  * hmm_pfn_to_map_order() - return the CPU mapping size order
  *
diff --git a/mm/hmm.c b/mm/hmm.c
index 082f7b7c0b9e..51fe8b011cc7 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -39,13 +39,20 @@ enum {
 	HMM_NEED_ALL_BITS = HMM_NEED_FAULT | HMM_NEED_WRITE_FAULT,
 };
 
+enum {
+	/* These flags are carried from input-to-output */
+	HMM_PFN_INOUT_FLAGS = HMM_PFN_DMA_MAPPED,
+};
+
 static int hmm_pfns_fill(unsigned long addr, unsigned long end,
 			 struct hmm_range *range, unsigned long cpu_flags)
 {
 	unsigned long i = (addr - range->start) >> PAGE_SHIFT;
 
-	for (; addr < end; addr += PAGE_SIZE, i++)
-		range->hmm_pfns[i] = cpu_flags;
+	for (; addr < end; addr += PAGE_SIZE, i++) {
+		range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+		range->hmm_pfns[i] |= cpu_flags;
+	}
 	return 0;
 }
 
@@ -202,8 +209,10 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
 		return hmm_vma_fault(addr, end, required_fault, walk);
 
 	pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
-	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++)
-		hmm_pfns[i] = pfn | cpu_flags;
+	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) {
+		hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+		hmm_pfns[i] |= pfn | cpu_flags;
+	}
 	return 0;
 }
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -230,14 +239,14 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	unsigned long cpu_flags;
 	pte_t pte = ptep_get(ptep);
 	uint64_t pfn_req_flags = *hmm_pfn;
+	uint64_t new_pfn_flags = 0;
 
 	if (pte_none_mostly(pte)) {
 		required_fault =
 			hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0);
 		if (required_fault)
 			goto fault;
-		*hmm_pfn = 0;
-		return 0;
+		goto out;
 	}
 
 	if (!pte_present(pte)) {
@@ -253,16 +262,14 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			cpu_flags = HMM_PFN_VALID;
 			if (is_writable_device_private_entry(entry))
 				cpu_flags |= HMM_PFN_WRITE;
-			*hmm_pfn = swp_offset_pfn(entry) | cpu_flags;
-			return 0;
+			new_pfn_flags = swp_offset_pfn(entry) | cpu_flags;
+			goto out;
 		}
 
 		required_fault =
 			hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0);
-		if (!required_fault) {
-			*hmm_pfn = 0;
-			return 0;
-		}
+		if (!required_fault)
+			goto out;
 
 		if (!non_swap_entry(entry))
 			goto fault;
@@ -304,11 +311,13 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			pte_unmap(ptep);
 			return -EFAULT;
 		}
-		*hmm_pfn = HMM_PFN_ERROR;
-		return 0;
+		new_pfn_flags = HMM_PFN_ERROR;
+		goto out;
 	}
 
-	*hmm_pfn = pte_pfn(pte) | cpu_flags;
+	new_pfn_flags = pte_pfn(pte) | cpu_flags;
+out:
+	*hmm_pfn = (*hmm_pfn & HMM_PFN_INOUT_FLAGS) | new_pfn_flags;
 	return 0;
 
 fault:
@@ -448,8 +457,10 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 		}
 
 		pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
-		for (i = 0; i < npages; ++i, ++pfn)
-			hmm_pfns[i] = pfn | cpu_flags;
+		for (i = 0; i < npages; ++i, ++pfn) {
+			hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+			hmm_pfns[i] |= pfn | cpu_flags;
+		}
 		goto out_unlock;
 	}
 
@@ -507,8 +518,10 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	}
 
 	pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT);
-	for (; addr < end; addr += PAGE_SIZE, i++, pfn++)
-		range->hmm_pfns[i] = pfn | cpu_flags;
+	for (; addr < end; addr += PAGE_SIZE, i++, pfn++) {
+		range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+		range->hmm_pfns[i] |= pfn | cpu_flags;
+	}
 	spin_unlock(ptl);
 
 	return 0;
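As a sanity check of the bit layout this produces on a 64-bit build
(an illustrative snippet, not part of the patch): bit 63 is VALID, 62
WRITE, 61 ERROR, 60 the new sticky DMA_MAPPED bit, bits 59..55 the map
order, and bits 54..0 the PFN itself, which the new hmm_pfn_to_phys()
recovers by masking off HMM_PFN_FLAGS:

#if BITS_PER_LONG == 64
static_assert(HMM_PFN_DMA_MAPPED == 1UL << 60);
static_assert(HMM_PFN_ORDER_SHIFT == 55);
/* No flag or order bit may overlap the PFN bits */
static_assert((HMM_PFN_FLAGS & ((1UL << HMM_PFN_ORDER_SHIFT) - 1)) == 0);
#endif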
-- 
2.49.0
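A final hedged sketch of the other half of the contract: because
hmm_range_fault() deliberately leaves HMM_PFN_DMA_MAPPED alone, whoever
sets it must also clear it when the mapping is torn down, e.g. from an
mmu notifier invalidation (demo_unmap() and dma_addrs are again
hypothetical):

static void demo_unmap(struct device *dev, struct hmm_range *range,
                       dma_addr_t *dma_addrs)
{
        unsigned long i, npages = (range->end - range->start) >> PAGE_SHIFT;

        for (i = 0; i < npages; i++) {
                if (!(range->hmm_pfns[i] & HMM_PFN_DMA_MAPPED))
                        continue;
                dma_unmap_page(dev, dma_addrs[i], PAGE_SIZE,
                               DMA_BIDIRECTIONAL);
                /* Drop the sticky bit so a later fault cycle remaps */
                range->hmm_pfns[i] &= ~HMM_PFN_DMA_MAPPED;
        }
}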