From: Jason Gunthorpe <jgg@ziepe.ca>
To: Christoph Hellwig <hch@infradead.org>
Cc: Yonatan Maman <ymaman@nvidia.com>,
nouveau@lists.freedesktop.org, linux-kernel@vger.kernel.org,
linux-rdma@vger.kernel.org, linux-mm@kvack.org,
herbst@redhat.com, lyude@redhat.com, dakr@redhat.com,
airlied@gmail.com, simona@ffwll.ch, leon@kernel.org,
jglisse@redhat.com, akpm@linux-foundation.org,
dri-devel@lists.freedesktop.org, apopple@nvidia.com,
bskeggs@nvidia.com, Gal Shalom <GalShalom@nvidia.com>
Subject: Re: [PATCH v1 1/4] mm/hmm: HMM API for P2P DMA to device zone pages
Date: Wed, 16 Oct 2024 14:44:45 -0300
Message-ID: <20241016174445.GF4020792@ziepe.ca>
In-Reply-To: <Zw_sn_DdZRUw5oxq@infradead.org>
On Wed, Oct 16, 2024 at 09:41:03AM -0700, Christoph Hellwig wrote:
> On Wed, Oct 16, 2024 at 12:44:28PM -0300, Jason Gunthorpe wrote:
> > > We are talking about P2P memory here. How do you manage to get a page
> > > that dma_map_page can be used on? All P2P memory needs to use the P2P
> > > aware dma_map_sg as the pages for P2P memory are just fake zone device
> > > pages.
> >
> > FWIW, I've been expecting this series to be rebased on top of Leon's
> > new DMA API series so it doesn't have this issue..
>
> That's not going to make a difference at this level.
I'm not sure what you are asking then.
Patch 2 calls pci_p2pdma_add_resource(), so a valid struct page with a
P2P ZONE_DEVICE type exists, and that is what gets handed back to the
hmm/odp code.
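Roughly, the provider side looks like this (an illustrative sketch, not
the actual patch 2 code; the BAR index and the function name are made
up and error handling is trimmed):

#include <linux/pci.h>
#include <linux/pci-p2pdma.h>
#include <linux/memremap.h>
#include <linux/sizes.h>

static int example_publish_bar_as_p2p(struct pci_dev *pdev)
{
	void *addr;
	int rc;

	/* Creates ZONE_DEVICE (MEMORY_DEVICE_PCI_P2PDMA) pages covering BAR 4 */
	rc = pci_p2pdma_add_resource(pdev, 4, pci_resource_len(pdev, 4), 0);
	if (rc)
		return rc;

	/* Anything allocated from that pool is backed by a P2P struct page */
	addr = pci_alloc_p2pmem(pdev, SZ_4K);
	if (!addr)
		return -ENOMEM;

	WARN_ON(!is_pci_p2pdma_page(virt_to_page(addr)));

	pci_free_p2pmem(pdev, addr, SZ_4K);
	return 0;
}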
Today odp calls dma_map_page(), which only works by chance in limited
cases. With Leon's revision it will call hmm_dma_map_pfn() ->
dma_iova_link(), which does call pci_p2pdma_map_type() and should do
the right thing.
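The difference matters because a P2P-aware mapping path has to pick a
strategy per peer, which plain dma_map_page() never does. Roughly (a
sketch against the in-tree P2PDMA helpers as I remember them, declared
in <linux/dma-map-ops.h>; treat the exact signatures as assumptions,
not as what Leon's series ends up exporting):

#include <linux/dma-map-ops.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static dma_addr_t example_map_one_p2p_page(struct device *dma_dev,
					    struct page *page)
{
	struct pci_p2pdma_map_state state = {};
	struct scatterlist sg;

	sg_init_table(&sg, 1);
	sg_set_page(&sg, page, PAGE_SIZE, 0);

	switch (pci_p2pdma_map_segment(&state, dma_dev, &sg)) {
	case PCI_P2PDMA_MAP_BUS_ADDR:
		/* Peer sits below the same switch: sg now holds the bus address */
		return sg_dma_address(&sg);
	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
		/* Routed up through the host bridge: map it like normal memory */
		return dma_map_page(dma_dev, page, 0, PAGE_SIZE,
				    DMA_BIDIRECTIONAL);
	default:
		/* PCI_P2PDMA_MAP_NOT_SUPPORTED / _UNKNOWN */
		return DMA_MAPPING_ERROR;
	}
}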
> > I'm guessing they got their testing done so far on a system without an
> > iommu translation?
>
> IOMMU or not doesn't matter much for P2P. The important difference is
> whether the traffic goes through the host bridge or through a switch.
> dma_map_page() will work for P2P through the host bridge (assuming the
> host bridge even supports it; there is also no error handling for when
> it doesn't), but it lacks the handling for P2P through a switch.
On most x86 systems the BAR/bus address of the P2P memory is the same
as the CPU physical address, so without an IOMMU translation
dma_map_page() returns the CPU/host physical address, which is also the
BAR/bus address, and the transfer takes the P2P switch path for testing.
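In other words the "works by chance" part is just that the direct
mapping path degenerates to an identity transform there. Very roughly
(simplified from what kernel/dma/direct.c ends up doing; no swiotlb,
no bounce buffering, no dma_range_map offset shown):

#include <linux/dma-direct.h>
#include <linux/mm.h>

static dma_addr_t rough_no_iommu_map(struct device *dev, struct page *page)
{
	/* CPU physical address of the ZONE_DEVICE page covering the BAR */
	phys_addr_t phys = page_to_phys(page);

	/*
	 * With no IOMMU and no DMA offset phys_to_dma() is effectively an
	 * identity transform, so the returned handle happens to equal the
	 * PCI bus address of the BAR -- exactly what a peer below the same
	 * switch needs.  Nothing here checks whether P2P is actually
	 * possible, which is why it only works by chance.
	 */
	return phys_to_dma(dev, phys);
}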
Jason
Thread overview: 28+ messages
2024-10-15 15:23 [PATCH v1 0/4] GPU Direct RDMA (P2P DMA) for Device Private Pages Yonatan Maman
2024-10-15 15:23 ` [PATCH v1 1/4] mm/hmm: HMM API for P2P DMA to device zone pages Yonatan Maman
2024-10-16 4:49 ` Christoph Hellwig
2024-10-16 15:04 ` Yonatan Maman
2024-10-16 15:44 ` Jason Gunthorpe
2024-10-16 16:41 ` Christoph Hellwig
2024-10-16 17:44 ` Jason Gunthorpe [this message]
2024-10-17 11:58 ` Christoph Hellwig
2024-10-17 13:05 ` Jason Gunthorpe
2024-10-17 13:12 ` Christoph Hellwig
2024-10-17 13:46 ` Jason Gunthorpe
2024-10-17 13:49 ` Christoph Hellwig
2024-10-17 14:05 ` Jason Gunthorpe
2024-10-17 14:19 ` Christoph Hellwig
2024-10-16 5:10 ` Alistair Popple
2024-10-16 15:45 ` Jason Gunthorpe
2024-10-17 1:58 ` Alistair Popple
2024-10-17 11:53 ` Jason Gunthorpe
2024-10-15 15:23 ` [PATCH v1 2/4] nouveau/dmem: HMM P2P DMA for private dev pages Yonatan Maman
2024-10-16 5:12 ` Alistair Popple
2024-10-16 15:18 ` Yonatan Maman
2024-10-15 15:23 ` [PATCH v1 3/4] IB/core: P2P DMA for device private pages Yonatan Maman
2024-10-15 15:23 ` [PATCH v1 4/4] RDMA/mlx5: Enabling ATS for ODP memory Yonatan Maman
2024-10-16 4:23 ` [PATCH v1 0/4] GPU Direct RDMA (P2P DMA) for Device Private Pages Christoph Hellwig
2024-10-16 15:16 ` Yonatan Maman
2024-10-16 22:22 ` Alistair Popple
2024-10-18 7:26 ` Zhu Yanjun
2024-10-20 15:26 ` Yonatan Maman