From: Daniel Vetter <daniel@ffwll.ch>
To: Jerome Glisse <jglisse@redhat.com>
Cc: christian.koenig@amd.com, dri-devel@lists.freedesktop.org,
nouveau@lists.freedesktop.org,
Evgeny Baskakov <ebaskakov@nvidia.com>,
linux-mm@kvack.org, Ralph Campbell <rcampbell@nvidia.com>,
John Hubbard <jhubbard@nvidia.com>,
Felix Kuehling <felix.kuehling@amd.com>,
"Bridgman, John" <John.Bridgman@amd.com>
Subject: Re: [RFC PATCH 00/13] SVM (share virtual memory) with HMM in nouveau
Date: Tue, 13 Mar 2018 11:46:44 +0100 [thread overview]
Message-ID: <20180313104644.GB4788@phenom.ffwll.local> (raw)
In-Reply-To: <20180312175057.GC4214@redhat.com>
On Mon, Mar 12, 2018 at 01:50:58PM -0400, Jerome Glisse wrote:
> On Mon, Mar 12, 2018 at 06:30:09PM +0100, Daniel Vetter wrote:
> > On Sat, Mar 10, 2018 at 04:01:58PM +0100, Christian König wrote:
>
> [...]
>
> > > > There is work underway to revamp nouveau channel creation with a new
> > > > userspace API, so we might want to delay upstreaming until that lands.
> > > > We can still discuss one aspect specific to HMM here, namely the issue
> > > > around GEM objects used for some specific parts of the GPU. Some
> > > > engines inside the GPU (an engine is a GPU block, like the display
> > > > block, which is responsible for scanning memory to send out a picture
> > > > through some connector, for instance HDMI or DisplayPort) can only
> > > > access memory with virtual addresses below (1 << 40). To accommodate
> > > > those we need to create a "hole" inside the process address space.
> > > > This patchset has a hack for that (patch 13, HACK FOR HMM AREA): it
> > > > reserves a range of device file offsets so that a process can mmap
> > > > this range with PROT_NONE to create a hole (the process must make
> > > > sure the hole is below 1 << 40). I feel uneasy doing it this way,
> > > > but maybe it is ok with other folks.
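
(If I'm reading patch 13 right, the userspace side of that hack would be
roughly the sketch below. Everything here is made up for illustration:
NOUVEAU_HMM_AREA_OFFSET just stands in for whatever file-offset range the
patch actually reserves, so treat it as the idea, not the real uapi.)

#include <stdint.h>
#include <sys/types.h>
#include <sys/mman.h>

/* Hypothetical stand-in for the device-file offset range that patch 13
 * ("HACK FOR HMM AREA") reserves; the real value comes from the uapi. */
#define NOUVEAU_HMM_AREA_OFFSET ((off_t)1 << 44)

/*
 * Carve a PROT_NONE hole into the process address space so that no
 * regular CPU mapping can ever land in the range reserved for the
 * GPU-only objects.  The caller still has to make sure the hole sits
 * below the 40-bit limit of the affected engines.
 */
static void *reserve_gpu_hole(int nouveau_fd, size_t size)
{
	void *hole = mmap(NULL, size, PROT_NONE, MAP_SHARED,
			  nouveau_fd, NOUVEAU_HMM_AREA_OFFSET);

	if (hole == MAP_FAILED)
		return NULL;
	if ((uintptr_t)hole + size > (1ULL << 40)) {
		/* The kernel placed us too high for the 40-bit engines;
		 * a real implementation would retry with a hint address. */
		munmap(hole, size);
		return NULL;
	}
	return hole;
}
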
> > >
> > > Well, we have essentially the same problem with pre-gfx9 AMD hardware.
> > > Felix might have some advice on how it was solved for HSA.
> >
> > Couldn't we do an in-kernel address space for those special GPU blocks? As
> > long as it's display, the kernel needs to manage it anyway, and adding a
> > second mapping when you pin/unpin for scanout usage shouldn't really matter
> > (as long as you cache the mapping until the buffer gets thrown out of
> > VRAM). That's more or less what we do for i915 (where we have an entirely
> > separate address space for these things, which is 4G on the latest chips).
> > -Daniel
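
(For reference, the scheme I had in mind looks roughly like the sketch
below. This is not the actual i915 code, just the shape of it, and the
display_vm_* helpers and struct names are invented for illustration.)

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

struct buffer {
	uint64_t size;
};

struct display_vma {
	uint64_t addr;    /* address in the display-only address space */
	bool     mapped;  /* cached mapping still valid? */
};

/* Toy allocator for the kernel-managed display address space; a real
 * driver would manage (and evict from) a fixed range, e.g. 4G. */
static uint64_t display_vm_cursor = 1ULL << 20;

static uint64_t display_vm_alloc(uint64_t size)
{
	uint64_t addr = display_vm_cursor;

	display_vm_cursor += (size + 0xfff) & ~0xfffULL; /* page align */
	return addr;
}

static void display_vm_bind(uint64_t addr, struct buffer *bo)
{
	/* Real code would write PTEs here; the stub only marks the flow. */
	(void)addr;
	(void)bo;
}

/* Map at pin time and cache the mapping until the buffer leaves VRAM,
 * so repeated pin/unpin for scanout stays cheap. */
static int pin_for_scanout(struct display_vma *vma, struct buffer *bo)
{
	if (vma->mapped)
		return 0; /* reuse the mapping cached by an earlier pin */

	vma->addr = display_vm_alloc(bo->size);
	if (!vma->addr)
		return -ENOSPC;

	display_vm_bind(vma->addr, bo);
	vma->mapped = true;
	return 0;
}
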
>
> We cannot do an in-kernel address space for those. We already have an
> in-kernel address space, but it does not apply to the objects considered
> here.
>
> For NVIDIA (and I believe the same holds for AMD), the objects we are
> talking about must be in the same address space as the one against
> which the process's shader/dma/... work gets executed.
>
> For instance, a command buffer submitted by userspace must be inside a
> GEM object mapped into the GPU address space of the process against
> which the commands are executed. My understanding is that the PFIFO
> (the engine on NVIDIA GPUs that fetches commands) first context-switches
> to the address space associated with the channel and then starts
> fetching commands, with all addresses interpreted against the channel's
> address space.
>
> Hence we need to reserve a range in the process virtual address space
> if we want to do SVM in a sane way. I mean, we could just map the
> buffer into the GPU page table and then cross our fingers and toes,
> hoping that the process never gets any of its mmaps overlapping those
> mappings :)
Ah, from the example I got the impression it's just the display engine
that has this restriction. CS/PFIFO having the same restriction is indeed
more fun.
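
To restate the constraint in code form, with a made-up command encoding
(purely illustrative, not the real method stream): once PFIFO has
context-switched to a channel, the pushbuffer itself and every address
written into it are resolved through that channel's page tables, so
with SVM those have to be the same virtual addresses the CPU uses, and
a kernel-private address space can't help.

#include <stdint.h>

struct pushbuf {
	uint32_t *cpu_map; /* CPU mapping of the pushbuffer GEM object */
	unsigned  cur;     /* write cursor, in dwords */
};

static void push_va(struct pushbuf *pb, uint64_t va)
{
	/* The GPU interprets these dwords against the channel address
	 * space; under SVM, va is simply the CPU pointer value. */
	pb->cpu_map[pb->cur++] = (uint32_t)(va >> 32);
	pb->cpu_map[pb->cur++] = (uint32_t)(va & 0xffffffff);
}
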
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
Thread overview: 26+ messages
2018-03-10 3:21 jglisse
2018-03-10 3:21 ` [RFC PATCH 01/13] drm/nouveau/vmm: enable page table iterator over non populated range jglisse
2018-03-10 3:21 ` [RFC PATCH 02/13] drm/nouveau/core/memory: add some useful accessor macros jglisse
2018-03-10 3:21 ` [RFC PATCH 03/13] drm/nouveau/core: define engine for handling replayable faults jglisse
2018-03-10 3:21 ` [RFC PATCH 04/13] drm/nouveau/mmu/gp100: allow gcc/tex to generate " jglisse
2018-03-10 3:21 ` [RFC PATCH 05/13] drm/nouveau/mc/gp100-: handle replayable fault interrupt jglisse
2018-03-10 3:21 ` [RFC PATCH 06/13] drm/nouveau/fault/gp100: initial implementation of MaxwellFaultBufferA jglisse
2018-03-10 3:21 ` [RFC PATCH 07/13] drm/nouveau: special mapping method for HMM jglisse
2018-03-10 3:21 ` [RFC PATCH 08/13] drm/nouveau: special mapping method for HMM (user interface) jglisse
2018-03-10 3:21 ` [RFC PATCH 09/13] drm/nouveau: add SVM through HMM support to nouveau client jglisse
2018-03-10 3:21 ` [RFC PATCH 10/13] drm/nouveau: add HMM area creation jglisse
2018-03-10 3:21 ` [RFC PATCH 11/13] drm/nouveau: add HMM area creation user interface jglisse
2018-03-10 3:21 ` [RFC PATCH 12/13] drm/nouveau: HMM area creation helpers for nouveau client jglisse
2018-03-10 3:21 ` [RFC PATCH 13/13] drm/nouveau: HACK FOR HMM AREA jglisse
2018-03-10 15:01 ` [RFC PATCH 00/13] SVM (share virtual memory) with HMM in nouveau Christian König
2018-03-10 17:55 ` Jerome Glisse
2018-03-12 17:30 ` Daniel Vetter
2018-03-12 17:50 ` Jerome Glisse
2018-03-13 6:14 ` John Hubbard
2018-03-13 13:29 ` Matthew Wilcox
2018-03-13 14:31 ` Jerome Glisse
2018-03-13 15:56 ` Jerome Glisse
2018-03-13 10:46 ` Daniel Vetter [this message]
2018-03-12 18:28 ` Felix Kuehling
2018-03-13 14:28 ` Jerome Glisse
2018-03-13 15:32 ` Felix Kuehling