From: Simona Vetter <simona.vetter@ffwll.ch>
To: Jason Gunthorpe <jgg@ziepe.ca>
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
"Yonatan Maman" <ymaman@nvidia.com>,
kherbst@redhat.com, lyude@redhat.com, dakr@redhat.com,
airlied@gmail.com, simona@ffwll.ch, leon@kernel.org,
jglisse@redhat.com, akpm@linux-foundation.org,
GalShalom@nvidia.com, dri-devel@lists.freedesktop.org,
nouveau@lists.freedesktop.org, linux-kernel@vger.kernel.org,
linux-rdma@vger.kernel.org, linux-mm@kvack.org,
linux-tegra@vger.kernel.org
Subject: Re: [RFC 1/5] mm/hmm: HMM API to enable P2P DMA for device private pages
Date: Wed, 29 Jan 2025 14:38:58 +0100 [thread overview]
Message-ID: <Z5ovcnX2zVoqdomA@phenom.ffwll.local> (raw)
In-Reply-To: <20250128172123.GD1524382@ziepe.ca>

On Tue, Jan 28, 2025 at 01:21:23PM -0400, Jason Gunthorpe wrote:
> On Tue, Jan 28, 2025 at 05:32:23PM +0100, Thomas Hellström wrote:
> > > This series supports three cases:
> > >
> > > 1) pgmap->owner == range->dev_private_owner
> > > This is "driver private fast interconnect" in this case HMM
> > > should
> > > immediately return the page. The calling driver understands the
> > > private parts of the pgmap and computes the private interconnect
> > > address.
> > >
> > > This requires organizing your driver so that all private
> > > interconnect has the same pgmap->owner.
> >
> > Yes, although that makes this map static, since pgmap->owner has to be
> > set at pgmap creation time, and during initial discussions we were
> > looking at something dynamic here. However, I think we can probably do
> > with a per-driver owner for now and get back to this if that's not
> > sufficient.
>
> The pgmap->owner doesn't *have* to be fixed; certainly during early
> boot, before you hand out any page references, it can be changed. I
> wouldn't be surprised if this is useful for some requirements when
> building up the private interconnect topology?
The trouble I'm seeing is device probe and the fundamental issue that you
never know when you're done probing. So if we rely entirely on
pgmap->owner to figure out the driver-private interconnect topology,
that's going to be messy. That's why I'm also leaning towards both
comparing owners and having an additional check for whether the
interconnect is actually there yet or not.
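To sketch roughly what I mean (a purely illustrative userspace stub, none
of these names are real kernel API; only the owner field loosely mirrors
struct dev_pagemap):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for the kernel's struct dev_pagemap. */
struct dev_pagemap {
	void *owner;	/* set once at pgmap creation time */
	/* hypothetical hook: is the private interconnect up right now? */
	bool (*link_is_up)(struct dev_pagemap *pgmap);
};

static bool link_always_up(struct dev_pagemap *pgmap)
{
	(void)pgmap;
	return true;
}

static bool link_always_down(struct dev_pagemap *pgmap)
{
	(void)pgmap;
	return false;
}

/*
 * A private page is only handed straight back to the caller if the
 * owners match (static topology) AND the interconnect is actually
 * there already (dynamic state).
 */
static bool can_use_private_page(struct dev_pagemap *pgmap,
				 void *dev_private_owner)
{
	if (pgmap->owner != dev_private_owner)
		return false;
	return pgmap->link_is_up && pgmap->link_is_up(pgmap);
}
```

The point is just that the owner comparison alone can't express "the link
isn't trained yet", which is exactly the probe-ordering problem above.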

You can fake that by doing these checks after hmm_range_fault() has
returned, and if you get a bunch of unsuitable pages, toss them back to
hmm_range_fault() asking for an unconditional migration to system memory
for those. But that's kinda not great, and I think it at least goes
against the spirit of how you want to handle PCI P2P in case 2 below?
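That two-pass workaround, spelled out (again a userspace simulation with
made-up stubs, not the real hmm_range_fault() signature):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NPAGES 4

/* Made-up flag loosely modelled on requesting migration to system memory. */
#define FLAG_MIGRATE_TO_SYSTEM 0x1

/* Simulated page: "suitable" means the caller can actually DMA to it. */
struct fake_page {
	bool suitable;
};

/*
 * Stub for hmm_range_fault(): the first pass returns some pages the
 * caller can't use (e.g. device-private without a usable interconnect);
 * when migration is requested, everything lands in system memory.
 */
static void fake_range_fault(struct fake_page pages[NPAGES], unsigned int flags)
{
	for (size_t i = 0; i < NPAGES; i++) {
		if (flags & FLAG_MIGRATE_TO_SYSTEM)
			pages[i].suitable = true;	  /* now in system memory */
		else
			pages[i].suitable = (i % 2 == 0); /* odd pages unusable */
	}
}

/* Fault once, and on any unsuitable page re-fault asking for migration. */
static bool fault_with_fallback(struct fake_page pages[NPAGES])
{
	fake_range_fault(pages, 0);
	for (size_t i = 0; i < NPAGES; i++) {
		if (!pages[i].suitable) {
			fake_range_fault(pages, FLAG_MIGRATE_TO_SYSTEM);
			break;
		}
	}
	for (size_t i = 0; i < NPAGES; i++)
		if (!pages[i].suitable)
			return false;
	return true;
}
```

It works, but every miss costs a second full pass plus a migration, which
is the "kinda not great" part.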

Cheers, Sima

> > > 2) The page is DEVICE_PRIVATE and get_dma_pfn_for_device() exists.
> > > The exporting driver has the option to return a P2P struct page
> > > that can be used for PCI P2P without any migration. In a PCI GPU
> > > context this means the GPU has mapped its local memory to a PCI
> > > address. The assumption is that P2P always works and so this
> > > address can be DMA'd from.
> >
> > So do I understand it correctly, that the driver then needs to set up
> > one device_private struct page and one pcie_p2p struct page for each
> > page of device memory participating in this way?
>
> Yes, for now. I hope to remove the p2p page eventually.
>
> > > If you are just talking about your private multi-path, then that is
> > > already handled..
> >
> > No, the issue I'm having with this is really why would
> > hmm_range_fault() need the new pfn when it could easily be obtained
> > from the device-private pfn by the hmm_range_fault() caller?
>
> That isn't the API of HMM, the caller uses hmm to get PFNs it can use.
>
> Deliberately returning PFNs the caller cannot use is nonsensical to
> its purpose :)
>
> > So anyway, what we'll do is try to use an interconnect-common owner
> > for now, and revisit the problem if that's not sufficient so we can
> > come up with an acceptable solution.
>
> That is the intention for sure. The idea was that the drivers under
> the private pages would somehow generate unique owners for shared
> private interconnect segments.
>
> I wouldn't say this is the be-all and end-all of the idea; if there are
> better ways to handle accepting private pages they can certainly be
> explored.
>
> Jason
--
Simona Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch