From: "Mika Penttilä" <mpenttil@redhat.com>
To: Leon Romanovsky <leon@kernel.org>,
Marek Szyprowski <m.szyprowski@samsung.com>,
Jens Axboe <axboe@kernel.dk>, Christoph Hellwig <hch@lst.de>,
Keith Busch <kbusch@kernel.org>
Cc: "Leon Romanovsky" <leonro@nvidia.com>, "Jake Edge" <jake@lwn.net>,
"Jonathan Corbet" <corbet@lwn.net>,
"Jason Gunthorpe" <jgg@ziepe.ca>,
"Zhu Yanjun" <zyjzyj2000@gmail.com>,
"Robin Murphy" <robin.murphy@arm.com>,
"Joerg Roedel" <joro@8bytes.org>, "Will Deacon" <will@kernel.org>,
"Sagi Grimberg" <sagi@grimberg.me>,
"Bjorn Helgaas" <bhelgaas@google.com>,
"Logan Gunthorpe" <logang@deltatee.com>,
"Yishai Hadas" <yishaih@nvidia.com>,
"Shameer Kolothum" <shameerali.kolothum.thodi@huawei.com>,
"Kevin Tian" <kevin.tian@intel.com>,
"Alex Williamson" <alex.williamson@redhat.com>,
"Jérôme Glisse" <jglisse@redhat.com>,
"Andrew Morton" <akpm@linux-foundation.org>,
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
linux-pci@vger.kernel.org, kvm@vger.kernel.org,
linux-mm@kvack.org, "Niklas Schnelle" <schnelle@linux.ibm.com>,
"Chuck Lever" <chuck.lever@oracle.com>,
"Luis Chamberlain" <mcgrof@kernel.org>,
"Matthew Wilcox" <willy@infradead.org>,
"Dan Williams" <dan.j.williams@intel.com>,
"Kanchan Joshi" <joshi.k@samsung.com>,
"Chaitanya Kulkarni" <kch@nvidia.com>
Subject: Re: [PATCH v9 10/24] mm/hmm: let users to tag specific PFN with DMA mapped bit
Date: Wed, 23 Apr 2025 20:54:05 +0300
Message-ID: <7185c055-fc9e-4510-a9bf-6245673f2f92@redhat.com>
In-Reply-To: <0a7c1e06269eee12ff8912fe0da4b7692081fcde.1745394536.git.leon@kernel.org>
Hi,

On 4/23/25 11:13, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
>
> Introduce a new sticky flag (HMM_PFN_DMA_MAPPED) which isn't overwritten
> by hmm_range_fault(). The flag lets users tag specific PFNs to record
> whether a given PFN has already been DMA mapped.
>
> Tested-by: Jens Axboe <axboe@kernel.dk>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
> include/linux/hmm.h | 17 +++++++++++++++
> mm/hmm.c | 51 ++++++++++++++++++++++++++++-----------------
> 2 files changed, 49 insertions(+), 19 deletions(-)
>
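
Just to restate the intended flow as I understand it (the generic DMA
helpers only arrive later in this series, so the code below is a sketch
with illustrative names, not the final API): the caller faults the
range, maps and tags each page, and because the bit is sticky a later
hmm_range_fault() keeps it, so already-mapped pages can be skipped:

/*
 * Sketch only: shows the sticky-bit flow. Locking and notifier
 * handling around hmm_range_fault() are omitted for brevity.
 */
static int demo_fault_and_map(struct device *dev, struct hmm_range *range,
			      dma_addr_t *dma_addrs, unsigned long npages)
{
	unsigned long *pfns = range->hmm_pfns;
	unsigned long i;
	int ret;

	ret = hmm_range_fault(range);	/* HMM_PFN_DMA_MAPPED survives this */
	if (ret)
		return ret;

	for (i = 0; i < npages; i++) {
		if (pfns[i] & HMM_PFN_DMA_MAPPED)
			continue;	/* mapped on an earlier pass */

		dma_addrs[i] = dma_map_page(dev, hmm_pfn_to_page(pfns[i]),
					    0, PAGE_SIZE, DMA_BIDIRECTIONAL);
		if (dma_mapping_error(dev, dma_addrs[i]))
			return -ENOMEM;

		pfns[i] |= HMM_PFN_DMA_MAPPED;
	}
	return 0;
}
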
> diff --git a/include/linux/hmm.h b/include/linux/hmm.h
> index 126a36571667..a1ddbedc19c0 100644
> --- a/include/linux/hmm.h
> +++ b/include/linux/hmm.h
> @@ -23,6 +23,8 @@ struct mmu_interval_notifier;
> * HMM_PFN_WRITE - if the page memory can be written to (requires HMM_PFN_VALID)
> * HMM_PFN_ERROR - accessing the pfn is impossible and the device should
> * fail. ie poisoned memory, special pages, no vma, etc
> + * HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation
> + * to mark that page is already DMA mapped
> *
> * On input:
> * 0 - Return the current state of the page, do not fault it.
> @@ -36,6 +38,13 @@ enum hmm_pfn_flags {
> HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
> HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
> HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
> +
> + /*
> + * Sticky flags, carried from input to output,
> + * don't forget to update HMM_PFN_INOUT_FLAGS
> + */
> + HMM_PFN_DMA_MAPPED = 1UL << (BITS_PER_LONG - 7),
> +
How does this play together with the mapped order usage?
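
If I read the existing helper right, hmm_pfn_to_map_order() extracts
five bits starting at HMM_PFN_ORDER_SHIFT (== BITS_PER_LONG - 8), so
the new bit at (BITS_PER_LONG - 7) lands inside the order field:

/* existing helper in include/linux/hmm.h */
static inline unsigned int hmm_pfn_to_map_order(unsigned long hmm_pfn)
{
	return (hmm_pfn >> HMM_PFN_ORDER_SHIFT) & 0x1F;
}

/*
 * The order field spans bits (BITS_PER_LONG - 8)..(BITS_PER_LONG - 4),
 * so an entry tagged with HMM_PFN_DMA_MAPPED at (BITS_PER_LONG - 7)
 * reads back with order bit 1 set:
 */
unsigned long hmm_pfn = pfn | HMM_PFN_VALID | HMM_PFN_DMA_MAPPED;
/* hmm_pfn_to_map_order(hmm_pfn) now returns 2 instead of 0 */

Or is there something that guarantees the two are never set at the
same time?
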
> HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 8),
>
> /* Input flags */
> @@ -57,6 +66,14 @@ static inline struct page *hmm_pfn_to_page(unsigned long hmm_pfn)
> return pfn_to_page(hmm_pfn & ~HMM_PFN_FLAGS);
> }
>
> +/*
> + * hmm_pfn_to_phys() - return physical address pointed to by a device entry
> + */
> +static inline phys_addr_t hmm_pfn_to_phys(unsigned long hmm_pfn)
> +{
> + return __pfn_to_phys(hmm_pfn & ~HMM_PFN_FLAGS);
> +}
> +
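
A small aside: I assume this new helper is what the generic DMA logic
in the next patch uses to feed physical addresses into the IOVA link
API from earlier in this series, roughly (sketch, from my reading of
patches 7 and 11; names and offsets are illustrative):

	/* resolve the CPU physical address without touching struct page */
	phys_addr_t paddr = hmm_pfn_to_phys(range->hmm_pfns[idx]);

	/* ... and link it into the device's IOVA allocation */
	ret = dma_iova_link(dev, &state, paddr, idx * PAGE_SIZE,
			    PAGE_SIZE, DMA_BIDIRECTIONAL, 0);
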
> /*
> * hmm_pfn_to_map_order() - return the CPU mapping size order
> *
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 082f7b7c0b9e..51fe8b011cc7 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -39,13 +39,20 @@ enum {
> HMM_NEED_ALL_BITS = HMM_NEED_FAULT | HMM_NEED_WRITE_FAULT,
> };
>
> +enum {
> + /* These flags are carried from input-to-output */
> + HMM_PFN_INOUT_FLAGS = HMM_PFN_DMA_MAPPED,
> +};
> +
> static int hmm_pfns_fill(unsigned long addr, unsigned long end,
> struct hmm_range *range, unsigned long cpu_flags)
> {
> unsigned long i = (addr - range->start) >> PAGE_SHIFT;
>
> - for (; addr < end; addr += PAGE_SIZE, i++)
> - range->hmm_pfns[i] = cpu_flags;
> + for (; addr < end; addr += PAGE_SIZE, i++) {
> + range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
> + range->hmm_pfns[i] |= cpu_flags;
> + }
> return 0;
> }
>
> @@ -202,8 +209,10 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
> return hmm_vma_fault(addr, end, required_fault, walk);
>
> pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
> - for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++)
> - hmm_pfns[i] = pfn | cpu_flags;
> + for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) {
> + hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
> + hmm_pfns[i] |= pfn | cpu_flags;
> + }
> return 0;
> }
> #else /* CONFIG_TRANSPARENT_HUGEPAGE */
> @@ -230,14 +239,14 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
> unsigned long cpu_flags;
> pte_t pte = ptep_get(ptep);
> uint64_t pfn_req_flags = *hmm_pfn;
> + uint64_t new_pfn_flags = 0;
>
> if (pte_none_mostly(pte)) {
> required_fault =
> hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0);
> if (required_fault)
> goto fault;
> - *hmm_pfn = 0;
> - return 0;
> + goto out;
> }
>
> if (!pte_present(pte)) {
> @@ -253,16 +262,14 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
> cpu_flags = HMM_PFN_VALID;
> if (is_writable_device_private_entry(entry))
> cpu_flags |= HMM_PFN_WRITE;
> - *hmm_pfn = swp_offset_pfn(entry) | cpu_flags;
> - return 0;
> + new_pfn_flags = swp_offset_pfn(entry) | cpu_flags;
> + goto out;
> }
>
> required_fault =
> hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0);
> - if (!required_fault) {
> - *hmm_pfn = 0;
> - return 0;
> - }
> + if (!required_fault)
> + goto out;
>
> if (!non_swap_entry(entry))
> goto fault;
> @@ -304,11 +311,13 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
> pte_unmap(ptep);
> return -EFAULT;
> }
> - *hmm_pfn = HMM_PFN_ERROR;
> - return 0;
> + new_pfn_flags = HMM_PFN_ERROR;
> + goto out;
> }
>
> - *hmm_pfn = pte_pfn(pte) | cpu_flags;
> + new_pfn_flags = pte_pfn(pte) | cpu_flags;
> +out:
> + *hmm_pfn = (*hmm_pfn & HMM_PFN_INOUT_FLAGS) | new_pfn_flags;
> return 0;
>
> fault:
> @@ -448,8 +457,10 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
> }
>
> pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
> - for (i = 0; i < npages; ++i, ++pfn)
> - hmm_pfns[i] = pfn | cpu_flags;
> + for (i = 0; i < npages; ++i, ++pfn) {
> + hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
> + hmm_pfns[i] |= pfn | cpu_flags;
> + }
> goto out_unlock;
> }
>
> @@ -507,8 +518,10 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
> }
>
> pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT);
> - for (; addr < end; addr += PAGE_SIZE, i++, pfn++)
> - range->hmm_pfns[i] = pfn | cpu_flags;
> + for (; addr < end; addr += PAGE_SIZE, i++, pfn++) {
> + range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
> + range->hmm_pfns[i] |= pfn | cpu_flags;
> + }
>
> spin_unlock(ptl);
> return 0;
Thread overview: 73+ messages
2025-04-23 8:12 [PATCH v9 00/24] Provide a new two step DMA mapping API Leon Romanovsky
2025-04-23 8:12 ` [PATCH v9 01/24] PCI/P2PDMA: Refactor the p2pdma mapping helpers Leon Romanovsky
2025-04-26 0:21 ` Luis Chamberlain
2025-04-27 7:25 ` Leon Romanovsky
2025-04-23 8:12 ` [PATCH v9 02/24] dma-mapping: move the PCI P2PDMA mapping helpers to pci-p2pdma.h Leon Romanovsky
2025-04-26 0:34 ` Luis Chamberlain
2025-04-27 7:53 ` Leon Romanovsky
2025-04-23 8:12 ` [PATCH v9 03/24] iommu: generalize the batched sync after map interface Leon Romanovsky
2025-04-23 17:15 ` Jason Gunthorpe
2025-04-24 6:55 ` Leon Romanovsky
2025-04-26 0:52 ` Luis Chamberlain
2025-04-27 7:54 ` Leon Romanovsky
2025-04-23 8:12 ` [PATCH v9 04/24] iommu: add kernel-doc for iommu_unmap_fast Leon Romanovsky
2025-04-23 17:15 ` Jason Gunthorpe
2025-04-26 0:55 ` Luis Chamberlain
2025-04-23 8:12 ` [PATCH v9 05/24] dma-mapping: Provide an interface to allow allocate IOVA Leon Romanovsky
2025-04-26 1:10 ` Luis Chamberlain
2025-04-23 8:12 ` [PATCH v9 06/24] iommu/dma: Factor out a iommu_dma_map_swiotlb helper Leon Romanovsky
2025-04-26 1:14 ` Luis Chamberlain
2025-04-23 8:12 ` [PATCH v9 07/24] dma-mapping: Implement link/unlink ranges API Leon Romanovsky
2025-04-26 22:46 ` Luis Chamberlain
2025-04-27 8:13 ` Leon Romanovsky
2025-04-28 13:16 ` Jason Gunthorpe
2025-04-28 13:20 ` Christoph Hellwig
2025-04-23 8:12 ` [PATCH v9 08/24] dma-mapping: add a dma_need_unmap helper Leon Romanovsky
2025-04-26 22:49 ` Luis Chamberlain
2025-04-23 8:13 ` [PATCH v9 09/24] docs: core-api: document the IOVA-based API Leon Romanovsky
2025-04-23 8:13 ` [PATCH v9 10/24] mm/hmm: let users to tag specific PFN with DMA mapped bit Leon Romanovsky
2025-04-23 17:17 ` Jason Gunthorpe
2025-04-23 17:54 ` Mika Penttilä [this message]
2025-04-23 18:17 ` Jason Gunthorpe
2025-04-23 18:37 ` Mika Penttilä
2025-04-23 23:33 ` Jason Gunthorpe
2025-04-24 8:07 ` Leon Romanovsky
2025-04-24 8:11 ` Christoph Hellwig
2025-04-24 8:46 ` Leon Romanovsky
2025-04-24 12:07 ` Jason Gunthorpe
2025-04-24 12:50 ` Leon Romanovsky
2025-04-24 16:01 ` Leon Romanovsky
2025-04-23 8:13 ` [PATCH v9 11/24] mm/hmm: provide generic DMA managing logic Leon Romanovsky
2025-04-23 17:28 ` Jason Gunthorpe
2025-04-24 7:15 ` Leon Romanovsky
2025-04-24 7:22 ` Leon Romanovsky
2025-04-23 8:13 ` [PATCH v9 12/24] RDMA/umem: Store ODP access mask information in PFN Leon Romanovsky
2025-04-23 17:34 ` Jason Gunthorpe
2025-04-23 8:13 ` [PATCH v9 13/24] RDMA/core: Convert UMEM ODP DMA mapping to caching IOVA and page linkage Leon Romanovsky
2025-04-23 17:36 ` Jason Gunthorpe
2025-04-23 8:13 ` [PATCH v9 14/24] RDMA/umem: Separate implicit ODP initialization from explicit ODP Leon Romanovsky
2025-04-23 17:38 ` Jason Gunthorpe
2025-04-23 8:13 ` [PATCH v9 15/24] vfio/mlx5: Explicitly use number of pages instead of allocated length Leon Romanovsky
2025-04-23 17:39 ` Jason Gunthorpe
2025-04-23 8:13 ` [PATCH v9 16/24] vfio/mlx5: Rewrite create mkey flow to allow better code reuse Leon Romanovsky
2025-04-23 18:02 ` Jason Gunthorpe
2025-04-23 8:13 ` [PATCH v9 17/24] vfio/mlx5: Enable the DMA link API Leon Romanovsky
2025-04-23 18:09 ` Jason Gunthorpe
2025-04-24 7:55 ` Leon Romanovsky
2025-04-23 8:13 ` [PATCH v9 18/24] block: share more code for bio addition helper Leon Romanovsky
2025-04-23 8:13 ` [PATCH v9 19/24] block: don't merge different kinds of P2P transfers in a single bio Leon Romanovsky
2025-04-23 8:13 ` [PATCH v9 20/24] blk-mq: add scatterlist-less DMA mapping helpers Leon Romanovsky
2025-04-23 8:13 ` [PATCH v9 21/24] nvme-pci: remove struct nvme_descriptor Leon Romanovsky
2025-04-23 8:13 ` [PATCH v9 22/24] nvme-pci: use a better encoding for small prp pool allocations Leon Romanovsky
2025-04-23 9:05 ` Christoph Hellwig
2025-04-23 13:39 ` Leon Romanovsky
2025-04-23 8:13 ` [PATCH v9 23/24] nvme-pci: convert to blk_rq_dma_map Leon Romanovsky
2025-04-23 9:24 ` Christoph Hellwig
2025-04-23 10:03 ` Leon Romanovsky
2025-04-23 15:47 ` Christoph Hellwig
2025-04-23 17:00 ` Jason Gunthorpe
2025-04-23 15:05 ` Keith Busch
2025-04-27 7:10 ` Leon Romanovsky
2025-04-23 14:58 ` Keith Busch
2025-04-23 17:11 ` Leon Romanovsky
2025-04-23 8:13 ` [PATCH v9 24/24] nvme-pci: store aborted state in flags variable Leon Romanovsky