From: Jerome Glisse <jglisse@redhat.com>
To: John Hubbard <jhubbard@nvidia.com>
Cc: christian.koenig@amd.com, dri-devel@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	Evgeny Baskakov <ebaskakov@nvidia.com>,
	linux-mm@kvack.org, Ralph Campbell <rcampbell@nvidia.com>,
	Felix Kuehling <felix.kuehling@amd.com>,
	"Bridgman, John" <John.Bridgman@amd.com>
Subject: Re: [RFC PATCH 00/13] SVM (share virtual memory) with HMM in nouveau
Date: Tue, 13 Mar 2018 11:56:46 -0400	[thread overview]
Message-ID: <20180313155645.GD3828@redhat.com> (raw)
In-Reply-To: <39139ff7-76ad-960c-53f6-46b57525b733@nvidia.com>

On Mon, Mar 12, 2018 at 11:14:47PM -0700, John Hubbard wrote:
> On 03/12/2018 10:50 AM, Jerome Glisse wrote:

[...]

> Yes, on NVIDIA GPUs, the Host/FIFO unit is limited to 40-bit addresses, so
> things such as the following need to be below (1 << 40), and also accessible 
> to both CPU (user space) and GPU hardware. 
>     -- command buffers (CPU user space driver fills them, GPU consumes them), 
>     -- semaphores (here, a GPU-centric term, rather than OS-type: these are
>        memory locations that, for example, the GPU hardware might write to, in
>        order to indicate work completion; there are other uses as well), 
>     -- a few other things most likely (this is not a complete list).
> 
> So what I'd tentatively expect that to translate into in the driver stack is, 
> approximately:
> 
>     -- User space driver code mmap's an area below (1 << 40). It's hard to avoid this,
>        given that user space needs access to the area (for filling out command
>        buffers and monitoring semaphores, that sort of thing). Then suballocate
>        from there using mmap's MAP_FIXED or (future-ish) MAP_FIXED_SAFE flags.
> 
>        ...glancing at the other fork of this thread, I think that is exactly what
>        Felix is saying, too. So that's good.
> 
>     -- The user space program sits above the user space driver, and although the
>        program could, in theory, interfere with this mmap'd area, that would be
>        wrong in the same way that mucking around with malloc'd areas (outside of
>        malloc() itself) is wrong. So I don't see any particular need to do much
>        more than the above.
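
To make that layout concrete, here is a minimal user-space sketch of the
scheme described in the quote above. It is only a sketch: anonymous memory
stands in for the real device mapping, and GPU_VA_LIMIT, ARENA_SIZE, and
the suballocation offsets are illustrative, not part of any nouveau UAPI.

    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>

    #define GPU_VA_LIMIT (1ULL << 40)  /* Host/FIFO 40-bit address limit */
    #define ARENA_SIZE   (16UL << 20)  /* 16 MiB arena; the size is arbitrary */

    int main(void)
    {
        /* Place the arena just below the 40-bit boundary. Plain MAP_FIXED
         * silently replaces anything already mapped at the hint, which is
         * exactly why the proposed MAP_FIXED_SAFE flag (fail instead of
         * replace) is attractive for this use. */
        void *hint = (void *)(uintptr_t)(GPU_VA_LIMIT - ARENA_SIZE);
        void *arena = mmap(hint, ARENA_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
        if (arena == MAP_FAILED)
            return 1;

        /* Suballocate: command buffers in the first half, semaphores (the
         * GPU-written completion words mentioned above) in the second. */
        void *cmdbuf = arena;
        volatile uint32_t *sema =
            (volatile uint32_t *)((char *)arena + ARENA_SIZE / 2);

        printf("arena=%p cmdbuf=%p sema=%p (all below 1 << 40)\n",
               arena, cmdbuf, (void *)sema);
        return 0;
    }

A real driver would of course mmap the device file rather than anonymous
memory; the point is only where the region sits and how it is carved up.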

I am worried that a rogue program (I am not worried about buggy programs;
if people shoot themselves in the foot they should feel the pain) could
use that to abuse the channel and do something harmful. I am not familiar
enough with the hardware to completely rule out such a scenario.

I do believe hardware with userspace queue support has the necessary
boundaries to keep things secure, as I would assume that for those parts
the hardware engineers had to take security into consideration.

Note that in my patchset the code that monitors the special vma is small,
something like 20 lines of code that only get called if something happens
to the reserved area. So I believe it is worth having; the cost is low
for a little extra peace of mind :)

Cheers,
Jerome


Thread overview: 26+ messages
2018-03-10  3:21 [RFC PATCH 00/13] SVM (share virtual memory) with HMM in nouveau jglisse
2018-03-10  3:21 ` [RFC PATCH 01/13] drm/nouveau/vmm: enable page table iterator over non populated range jglisse
2018-03-10  3:21 ` [RFC PATCH 02/13] drm/nouveau/core/memory: add some useful accessor macros jglisse
2018-03-10  3:21 ` [RFC PATCH 03/13] drm/nouveau/core: define engine for handling replayable faults jglisse
2018-03-10  3:21 ` [RFC PATCH 04/13] drm/nouveau/mmu/gp100: allow gcc/tex to generate " jglisse
2018-03-10  3:21 ` [RFC PATCH 05/13] drm/nouveau/mc/gp100-: handle replayable fault interrupt jglisse
2018-03-10  3:21 ` [RFC PATCH 06/13] drm/nouveau/fault/gp100: initial implementation of MaxwellFaultBufferA jglisse
2018-03-10  3:21 ` [RFC PATCH 07/13] drm/nouveau: special mapping method for HMM jglisse
2018-03-10  3:21 ` [RFC PATCH 08/13] drm/nouveau: special mapping method for HMM (user interface) jglisse
2018-03-10  3:21 ` [RFC PATCH 09/13] drm/nouveau: add SVM through HMM support to nouveau client jglisse
2018-03-10  3:21 ` [RFC PATCH 10/13] drm/nouveau: add HMM area creation jglisse
2018-03-10  3:21 ` [RFC PATCH 11/13] drm/nouveau: add HMM area creation user interface jglisse
2018-03-10  3:21 ` [RFC PATCH 12/13] drm/nouveau: HMM area creation helpers for nouveau client jglisse
2018-03-10  3:21 ` [RFC PATCH 13/13] drm/nouveau: HACK FOR HMM AREA jglisse
2018-03-10 15:01 ` [RFC PATCH 00/13] SVM (share virtual memory) with HMM in nouveau Christian König
2018-03-10 17:55   ` Jerome Glisse
2018-03-12 17:30   ` Daniel Vetter
2018-03-12 17:50     ` Jerome Glisse
2018-03-13  6:14       ` John Hubbard
2018-03-13 13:29         ` Matthew Wilcox
2018-03-13 14:31           ` Jerome Glisse
2018-03-13 15:56         ` Jerome Glisse [this message]
2018-03-13 10:46       ` Daniel Vetter
2018-03-12 18:28   ` Felix Kuehling
2018-03-13 14:28     ` Jerome Glisse
2018-03-13 15:32       ` Felix Kuehling
