linux-mm.kvack.org archive mirror
From: Christoph Hellwig <hch@infradead.org>
To: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Christoph Hellwig <hch@infradead.org>,
	Yonatan Maman <ymaman@nvidia.com>,
	nouveau@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	linux-rdma@vger.kernel.org, linux-mm@kvack.org,
	herbst@redhat.com, lyude@redhat.com, dakr@redhat.com,
	airlied@gmail.com, simona@ffwll.ch, leon@kernel.org,
	jglisse@redhat.com, akpm@linux-foundation.org,
	dri-devel@lists.freedesktop.org, apopple@nvidia.com,
	bskeggs@nvidia.com, Gal Shalom <GalShalom@nvidia.com>
Subject: Re: [PATCH v1 1/4] mm/hmm: HMM API for P2P DMA to device zone pages
Date: Thu, 17 Oct 2024 04:58:12 -0700	[thread overview]
Message-ID: <ZxD71D66qLI0qHpW@infradead.org> (raw)
In-Reply-To: <20241016174445.GF4020792@ziepe.ca>

On Wed, Oct 16, 2024 at 02:44:45PM -0300, Jason Gunthorpe wrote:
> > > FWIW, I've been expecting this series to be rebased on top of Leon's
> > > new DMA API series so it doesn't have this issue..
> > 
> > That's not going to make a difference at this level.
> 
> I'm not sure what you are asking then.
> 
> Patch 2 does pci_p2pdma_add_resource() and so a valid struct page with
> a P2P ZONE_DEVICE type exists, and that gets returned back to the
> hmm/odp code.
> 
> Today odp calls dma_map_page() which only works by chance in limited
> cases. With Leon's revision it will call hmm_dma_map_pfn() ->
> dma_iova_link() which does call pci_p2pdma_map_type() and should do
> the right thing.

Again, none of this affects the code posted here.  It reshuffles the
callers but has no direct effect on the patches themselves.

(And the current DMA series lacks P2P support; I'm still trying to
figure out how to handle it properly.)
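
[Editorial note: for readers following the thread, here is a minimal
sketch of the fault-and-classify flow under discussion, assuming a pfn
obtained from hmm_range_fault().  hmm_pfn_to_page(), is_pci_p2pdma_page()
and dma_map_page() are existing kernel APIs; sketch_map_one() is a
hypothetical helper, and the hmm_dma_map_pfn()/dma_iova_link() path from
the proposed series is only mentioned in a comment because its final
form was not settled at the time of this exchange.]

  #include <linux/dma-mapping.h>
  #include <linux/hmm.h>
  #include <linux/memremap.h>
  #include <linux/mm.h>

  /* Hypothetical helper, not from the patch series. */
  static int sketch_map_one(struct device *dev, unsigned long hmm_pfn,
                            dma_addr_t *dma)
  {
          struct page *page;

          if (!(hmm_pfn & HMM_PFN_VALID))
                  return -EFAULT;

          page = hmm_pfn_to_page(hmm_pfn);

          if (is_pci_p2pdma_page(page)) {
                  /*
                   * P2P ZONE_DEVICE page (e.g. created by
                   * pci_p2pdma_add_resource() as in patch 2).  Plain
                   * dma_map_page() knows nothing about the host-bridge
                   * vs. switch distinction; the series being discussed
                   * would route this through hmm_dma_map_pfn() /
                   * dma_iova_link() instead (proposed names, not final).
                   */
                  return -EOPNOTSUPP;
          }

          /* Ordinary host memory: the streaming DMA API applies. */
          *dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
          return dma_mapping_error(dev, *dma) ? -EIO : 0;
  }

The only point the sketch tries to make is the one made in the thread:
the P2P branch cannot be handled by dma_map_page() alone.
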

> > IOMMU or not doesn't matter much for P2P.  The important difference is
> > whether the transfer goes through the host bridge or through a switch.
> > dma_map_page will work for P2P through the host bridge (assuming the
> > host bridge even supports it, as there is also no error handling for
> > when it does not), but it lacks the handling for P2P through a switch.
> 
> On most x86 systems the BAR/bus address of the P2P memory is the same
> as the CPU address, so without an IOMMU translation dma_map_page()
> will return the CPU/host physical address, which is the same as the
> BAR/bus address, and that will take the P2P switch path for testing.

Maybe.  Either way the use of dma_map_page is incorrect.
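
[Editorial note: for context on the host-bridge vs. switch distinction,
here is a hedged sketch modelled on what the existing scatterlist
mapping paths (kernel/dma/direct.c, drivers/iommu/dma-iommu.c) already
do for P2P pages.  The wrapper name sketch_classify_p2p_segment() and
its return-value convention are illustrative assumptions, not code from
the series.]

  #include <linux/pci-p2pdma.h>
  #include <linux/scatterlist.h>

  /* Hypothetical wrapper; the switch mirrors the in-tree dma_map_sg()
   * implementations' handling of P2P segments. */
  static int sketch_classify_p2p_segment(struct device *dev,
                                         struct scatterlist *sg,
                                         struct pci_p2pdma_map_state *state)
  {
          switch (pci_p2pdma_map_segment(state, dev, sg)) {
          case PCI_P2PDMA_MAP_BUS_ADDR:
                  /*
                   * Traffic stays below a PCIe switch: the segment is
                   * given the peer's bus address and bypasses the
                   * IOMMU / host bridge entirely.
                   */
                  return 0;
          case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
                  /*
                   * Traffic goes up through the host bridge, so from
                   * here on it is mapped like ordinary host memory.
                   */
                  return 1;
          default:
                  /* P2P between these two devices is not routable. */
                  return -EREMOTEIO;
          }
  }

dma_map_page() has no equivalent branch; on systems where the CPU
physical address happens to equal the bus address it appears to handle
the switch case by accident, which is the "only works by chance in
limited cases" point made earlier in the thread.
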


> 
> Jason
---end quoted text---



Thread overview: 28+ messages
2024-10-15 15:23 [PATCH v1 0/4] GPU Direct RDMA (P2P DMA) for Device Private Pages Yonatan Maman
2024-10-15 15:23 ` [PATCH v1 1/4] mm/hmm: HMM API for P2P DMA to device zone pages Yonatan Maman
2024-10-16  4:49   ` Christoph Hellwig
2024-10-16 15:04     ` Yonatan Maman
2024-10-16 15:44     ` Jason Gunthorpe
2024-10-16 16:41       ` Christoph Hellwig
2024-10-16 17:44         ` Jason Gunthorpe
2024-10-17 11:58           ` Christoph Hellwig [this message]
2024-10-17 13:05             ` Jason Gunthorpe
2024-10-17 13:12               ` Christoph Hellwig
2024-10-17 13:46                 ` Jason Gunthorpe
2024-10-17 13:49                   ` Christoph Hellwig
2024-10-17 14:05                     ` Jason Gunthorpe
2024-10-17 14:19                       ` Christoph Hellwig
2024-10-16  5:10   ` Alistair Popple
2024-10-16 15:45     ` Jason Gunthorpe
2024-10-17  1:58       ` Alistair Popple
2024-10-17 11:53         ` Jason Gunthorpe
2024-10-15 15:23 ` [PATCH v1 2/4] nouveau/dmem: HMM P2P DMA for private dev pages Yonatan Maman
2024-10-16  5:12   ` Alistair Popple
2024-10-16 15:18     ` Yonatan Maman
2024-10-15 15:23 ` [PATCH v1 3/4] IB/core: P2P DMA for device private pages Yonatan Maman
2024-10-15 15:23 ` [PATCH v1 4/4] RDMA/mlx5: Enabling ATS for ODP memory Yonatan Maman
2024-10-16  4:23 ` [PATCH v1 0/4] GPU Direct RDMA (P2P DMA) for Device Private Pages Christoph Hellwig
2024-10-16 15:16   ` Yonatan Maman
2024-10-16 22:22     ` Alistair Popple
2024-10-18  7:26     ` Zhu Yanjun
2024-10-20 15:26       ` Yonatan Maman
