From: Jason Gunthorpe <jgg@ziepe.ca>
To: Christoph Hellwig <hch@infradead.org>
Cc: Yonatan Maman <ymaman@nvidia.com>,
nouveau@lists.freedesktop.org, linux-kernel@vger.kernel.org,
linux-rdma@vger.kernel.org, linux-mm@kvack.org,
herbst@redhat.com, lyude@redhat.com, dakr@redhat.com,
airlied@gmail.com, simona@ffwll.ch, leon@kernel.org,
jglisse@redhat.com, akpm@linux-foundation.org,
dri-devel@lists.freedesktop.org, apopple@nvidia.com,
bskeggs@nvidia.com, Gal Shalom <GalShalom@nvidia.com>
Subject: Re: [PATCH v1 1/4] mm/hmm: HMM API for P2P DMA to device zone pages
Date: Thu, 17 Oct 2024 10:05:39 -0300
Message-ID: <20241017130539.GA897978@ziepe.ca>
In-Reply-To: <ZxD71D66qLI0qHpW@infradead.org>
On Thu, Oct 17, 2024 at 04:58:12AM -0700, Christoph Hellwig wrote:
> On Wed, Oct 16, 2024 at 02:44:45PM -0300, Jason Gunthorpe wrote:
> > > > FWIW, I've been expecting this series to be rebased on top of Leon's
> > > > new DMA API series so it doesn't have this issue..
> > >
> > > That's not going to make a difference at this level.
> >
> > I'm not sure what you are asking then.
> >
> > Patch 2 does pci_p2pdma_add_resource() and so a valid struct page with
> > a P2P ZONE_DEVICE type exists, and that gets returned back to the
> > hmm/odp code.
> >
> > Today odp calls dma_map_page() which only works by chance in limited
> > cases. With Leon's revision it will call hmm_dma_map_pfn() ->
> > dma_iova_link() which does call pci_p2pdma_map_type() and should do
> > the right thing.
>
> Again none of this affects the code posted here. It reshuffles the
> callers but has no direct affect on the patches posted here.
I didn't realize till last night that Leon's series did not have P2P
support.
What I'm trying to say is that this is a multi-series project.
A follow-up series based on Leon's initial work will teach the ODP DMA
mapping path to support ZONE_DEVICE P2P pages.
Once that is done, this series sits on top. This series is only about
hmm and effectively allows hmm_range_fault() to return a ZONE_DEVICE
P2P page.
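Roughly, the consumer side I have in mind looks like this - just a
sketch to show where the P2P page becomes visible, not what the final
ODP code will be (the pr_debug() stands in for the real DMA mapping
step, and the usual mmu notifier locking/retry around
hmm_range_fault() is elided):

/* needs linux/hmm.h and linux/memremap.h */
static int odp_fault_range(struct hmm_range *range)
{
	unsigned long i, npages;
	int ret;

	npages = (range->end - range->start) >> PAGE_SHIFT;

	ret = hmm_range_fault(range);
	if (ret)
		return ret;

	for (i = 0; i < npages; i++) {
		struct page *page;

		if (!(range->hmm_pfns[i] & HMM_PFN_VALID))
			continue;
		page = hmm_pfn_to_page(range->hmm_pfns[i]);

		/*
		 * After this series a ZONE_DEVICE P2P page can show up
		 * here instead of the data being migrated back to
		 * system memory. The DMA mapping step has to notice
		 * that and use the P2P rules.
		 */
		if (is_pci_p2pdma_page(page))
			pr_debug("pfn index %lu is a P2P page\n", i);
	}
	return 0;
}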
Yonatan should explain this better in the cover letter and mark it as
an RFC series.
So, I know we are still figuring out the P2P support on the DMA API
side, but my expectation for hmm is that hmm_range_fault() returning a
ZONE_DEVICE P2P page is going to be what we want.
> (and the current DMA series lacks P2P support, I'm trying to figure
> out how to properly handle it at the moment).
Yes, I see; I looked through those patches last night and there is a
gap there.
Broadly I think whatever flow NVMe uses for P2P will apply to ODP as
well.
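Per page that ends up being the same three-way decision NVMe makes
today - roughly the below, where p2pdma_classify() is a stand-in for
however the pci_p2pdma_map_type() result gets exposed to drivers,
since that interface is exactly what is still being sorted out on the
DMA API side (pci_bus_addr_of() is likewise made up for illustration):

	switch (p2pdma_classify(page, dma_dev)) {
	case PCI_P2PDMA_MAP_BUS_ADDR:
		/*
		 * Peers behind the same switch: no IOMMU translation,
		 * program the PCI bus address of the BAR directly.
		 */
		dma_addr = pci_bus_addr_of(page);
		break;
	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
		/* Routed through the host bridge, map like normal memory */
		dma_addr = dma_map_page(dma_dev, page, 0, PAGE_SIZE,
					DMA_BIDIRECTIONAL);
		break;
	default:
		/* PCI_P2PDMA_MAP_NOT_SUPPORTED etc, fail the mapping */
		return -EREMOTEIO;
	}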
Thanks,
Jason