From: Jerome Glisse <j.glisse@gmail.com>
To: Joerg Roedel <joro@8bytes.org>
Cc: ksummit-discuss@lists.linuxfoundation.org
Subject: Re: [Ksummit-discuss] [CORE TOPIC] Core Kernel support for Compute-Offload Devices
Date: Mon, 3 Aug 2015 14:28:54 -0400
Message-ID: <20150803182853.GB2981@gmail.com>
In-Reply-To: <20150803160203.GJ14980@8bytes.org>

On Mon, Aug 03, 2015 at 06:02:03PM +0200, Joerg Roedel wrote:
> Hi Jerome,
> 
> On Sat, Aug 01, 2015 at 03:08:48PM -0400, Jerome Glisse wrote:
> > It is definitely worth a discussion, but I fear right now there is little
> > room for anything in the kernel. Hardware scheduling is done almost 100%
> > in hardware. The idea of a GPU is that you have 1000 compute units but
> > the hardware keeps track of 10000 threads, and at any point in time there
> > is a high probability that 1000 of those 10000 threads are ready to compute
> > something. So if a job is only using 60% of the GPU, then the remaining
> > 40% is automatically used by the next batch of threads. This is a
> > simplification, as the number of threads the hardware can keep track of
> > depends on several factors and varies from one model to the next, even
> > within the same family from the same manufacturer.
> 
> So the hardware schedules individual threads, that is right. But still,
> as you say, there are limits on how many threads the hardware can handle,
> which the device driver needs to take care of when deciding which job will
> be sent to the offload device next. Same with the priorities for the
> queues.

What I was pointing to is that right now you do not have that granularity
of choice from the device driver's point of view. Right now it is binary:
either let a command queue spawn threads or not, i.e. either stop a command
queue or let it run. How and when you can stop a queue varies, though. On
some hardware you can only stop it at an execution boundary, i.e. if a
packet in a command queue requests 500k threads to be launched, you can
only stop that queue once all 500k threads have been launched; you cannot
stop it in the middle.

Given that some of those queues are programmed directly from userspace, you
cannot even force a queue to only schedule small batches of threads (say,
no more than 1000 threads per command packet in the queue).

But newer hardware is becoming more capable on that front.

> 
> > > Some devices might provide that information, see the extended-access bit
> > > of Intel VT-d.
> > 
> > This would be limited to integrated GPUs, and so far only on one platform.
> > My point was more that userspace has far more information to make a good
> > decision here. The userspace program is more likely to know which part of
> > the dataset is going to be repeatedly accessed by the GPU threads.
> 
> Hmm, so what is the point of HMM then? If userspace is going to decide
> which part of the address space the device needs, it could just copy the
> data over (keeping the address space layout, and thus the pointers,
> stable) and you would basically achieve the same thing without adding a
> lot of code to memory management, no?

Well, no: you cannot be "transparent" if you do it in userspace. Say
userspace decides to migrate; that means the CPU will no longer be able
to access that memory, so you have to either PROT_NONE the range or
unmap it. This is not what we want. If the CPU accesses memory that has
been migrated to device memory, then we want to migrate it back (at the
very least one page of it) so the CPU can access it. We want this
migration back to be transparent from the process's point of view, as if
the memory had been swapped out to disk.

Even on hardware where the CPU can access device memory properly
(maintaining CPU atomic operations, for instance, which is not the case
over PCIe), such as CAPI on powerpc, you either have to have struct pages
for the device memory or the kernel must know how to handle those special
ranges of memory.

So HMM never makes any decision itself; it leaves that to the device
driver, which can gather more information from hardware and userspace to
make the best decision. But the driver might get things wrong, or the
userspace program might do something unwise, like trying to access a
dataset with the CPU while the GPU is churning on it. Still, we do not
want such CPU accesses to be handled as faults or forbidden; when this
happens, HMM will force a migration back in order to service the CPU
page fault.


HMM also intends to provide features that are not doable from userspace,
like exclusive write access on a range for the device, so that the device
can perform atomic operations. Again, PCIe offers only limited atomic
capabilities, so the only way to provide more advanced atomic operations
is to map the range read-only for the CPU and other devices while the
atomic operations on one device are in progress.

Another feature is sharing device memory between different devices. Some
devices (not necessarily from the same manufacturer) can communicate with
one another and access one another's device memory. When a range is
migrated to one device of such a pair, there must be a way for the other
device to find out about it. Having userspace device drivers try to
exchange that kind of information is racy in many ways, so it is easier
and better to have it in the kernel.

Cheers,
Jérôme
