From: Oded Gabbay <oded.gabbay@amd.com>
To: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: "Jerome Glisse" <j.glisse@gmail.com>,
"Christian König" <deathsimple@vodafone.de>,
"David Airlie" <airlied@linux.ie>,
"Alex Deucher" <alexdeucher@gmail.com>,
"Andrew Morton" <akpm@linux-foundation.org>,
"John Bridgman" <John.Bridgman@amd.com>,
"Joerg Roedel" <joro@8bytes.org>,
"Andrew Lewycky" <Andrew.Lewycky@amd.com>,
"Michel Dänzer" <michel.daenzer@amd.com>,
"Ben Goz" <Ben.Goz@amd.com>,
"Alexey Skidanov" <Alexey.Skidanov@amd.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"dri-devel@lists.freedesktop.org"
<dri-devel@lists.freedesktop.org>, linux-mm <linux-mm@kvack.org>,
"Sellek, Tom" <Tom.Sellek@amd.com>
Subject: Re: [PATCH v2 00/25] AMDKFD kernel driver
Date: Wed, 23 Jul 2014 11:35:59 +0300
Message-ID: <53CF73EF.5000506@amd.com>
In-Reply-To: <CAKMK7uFtSStEewVivbXAT1VC4t2Y+suTaEmQA4=UptK1UBLSmg@mail.gmail.com>
On 23/07/14 10:05, Daniel Vetter wrote:
> On Wed, Jul 23, 2014 at 8:50 AM, Oded Gabbay <oded.gabbay@amd.com> wrote:
>> On 22/07/14 14:15, Daniel Vetter wrote:
>>>
>>> On Tue, Jul 22, 2014 at 12:52:43PM +0300, Oded Gabbay wrote:
>>>>
>>>> On 22/07/14 12:21, Daniel Vetter wrote:
>>>>>
>>>>> On Tue, Jul 22, 2014 at 10:19 AM, Oded Gabbay <oded.gabbay@amd.com>
>>>>> wrote:
>>>>>>>
>>>>>>> Exactly, just prevent userspace from submitting more. And if you have
>>>>>>> misbehaving userspace that submits too much, reset the GPU and tell it
>>>>>>> that you're sorry but you won't schedule any more work.
>>>>>>
>>>>>>
>>>>>> I'm not sure how you intend to detect whether userspace misbehaves.
>>>>>> Can you elaborate?
>>>>>
>>>>>
>>>>> Well, that's mostly policy. Currently in i915 we only have a check for
>>>>> hangs, and if userspace hangs a bit too often, we stop it. I guess
>>>>> you can do that with the queue unmapping you've described in reply to
>>>>> Jerome's mail.
>>>>> -Daniel
>>>>>
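(For concreteness, the kind of policy Daniel describes could look roughly
like the sketch below. This is a minimal illustration only; the structure,
names and ban threshold are hypothetical, not actual i915 code.)

#include <linux/types.h>

#define HANG_BAN_THRESHOLD 5	/* illustrative: hangs tolerated before ban */

struct ctx_hang_state {
	unsigned int hang_count;	/* hangs attributed to this context */
	bool banned;			/* no further submissions accepted */
};

/* Called by a periodic hang check when a job from @ctx is found stuck. */
static void ctx_note_hang(struct ctx_hang_state *ctx)
{
	if (++ctx->hang_count >= HANG_BAN_THRESHOLD)
		ctx->banned = true;	/* stop scheduling its work */
}

/* The submission path refuses new work from banned contexts. */
static bool ctx_submit_allowed(const struct ctx_hang_state *ctx)
{
	return !ctx->banned;
}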
>>>> What do you mean by a hang? Like the TDR mechanism in Windows (which
>>>> checks whether a gpu job takes more than 2 seconds, I think, and if so,
>>>> terminates the job)?
>>>
>>>
>>> Essentially yes. But we also have some hw features to kill jobs quicker,
>>> e.g. for media workloads.
>>> -Daniel
>>>
>>
>> Yeah, so this is what I'm talking about when I say that you and Jerome come
>> from a graphics POV while amdkfd comes from a compute POV, no offense intended.
>>
>> For compute jobs, we simply can't use this logic to terminate jobs. Graphics
>> workloads are mostly real-time, while compute jobs can take anywhere from a
>> few ms to a few hours! And I'm not talking about an entire application's
>> runtime but about a single submission of jobs by the userspace app. We have
>> tests with jobs that take 20-30 minutes to complete. In theory, we can even
>> imagine a compute job which takes 1 or 2 days (on larger APUs).
>>
>> Now, I understand the question of how we prevent a compute job from
>> monopolizing the GPU, and internally here we have some ideas that we will
>> probably share in the next few days, but my point is that I don't think we
>> can terminate a compute job just because it has been running for more than
>> x seconds. That would be like terminating a CPU process that runs for more
>> than x seconds.
>>
>> I think this is a *very* important discussion (detecting a misbehaving
>> compute process) and I would like to continue it, but I don't think moving
>> job submission from userspace control to kernel control will solve this
>> core problem.
>
> Well, graphics gets away with cooperative scheduling since people usually
> want to see stuff within a few frames, so we can legitimately kill jobs
> after a fairly short timeout. Imo, if you want to allow userspace to submit
> compute jobs that are atomic, take minutes to hours with no break-up in
> between, and have no hw means of preemption, then that design is screwed
> up. We really can't tell the core vm "sorry, we will hold onto these
> gobloads of memory you really need now for another few hours". Pinning
> memory like that, essentially without a time limit, is restricted to root.
> -Daniel
>
First of all, I don't see how memory pinning is relevant here. I already said on
this thread that amdkfd does NOT pin local memory. The only memory we allocate
is system memory, which we map to the GART, and we can bound that memory by
limiting the max number of queues and the max number of processes through
kernel module parameters. Most of the memory used is allocated via regular
means by userspace, and is usually pageable.
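To make the "limit through kernel module parameters" point concrete, here is a
minimal sketch of the kind of knobs I mean (the parameter names and defaults
are illustrative, not necessarily the actual amdkfd ones):

#include <linux/module.h>

/* Illustrative caps: bounding the number of processes and of queues also
 * bounds how much GART-mapped system memory the driver can consume. */
static int max_num_of_processes = 32;
module_param(max_num_of_processes, int, 0444);
MODULE_PARM_DESC(max_num_of_processes,
		 "Max number of HSA processes (bounds GART-mapped memory)");

static int max_num_of_queues_per_process = 128;
module_param(max_num_of_queues_per_process, int, 0444);
MODULE_PARM_DESC(max_num_of_queues_per_process,
		 "Max number of queues per HSA process");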
Second, it is important to remember that this problem only exists on KV
(Kaveri). On CZ (Carrizo), the GPU can context switch between waves (mid-wave
preemption), so even long-running waves are constantly switched in and out,
and no single job can monopolize GPU resources.
Third, even on KV we can kill waves. The question is when to do it and how to
recognize a misbehaving process. I think it would be sufficient for now if we
exposed this ability to the kernel, along the lines of the sketch below.
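(A rough illustration only; none of these names are a real amdkfd interface.
The point is that the driver provides the mechanism and leaves the policy to
the caller.)

#include <stdbool.h>

/* Hypothetical process state; real bookkeeping would live in the driver. */
struct compute_process {
	bool queues_mapped;	/* process currently has queues on the HW */
	bool kill_requested;	/* set by the kernel, e.g. under memory pressure */
};

/* Mechanism: unmap the process's queues and kill its in-flight waves.
 * Here we only flip a flag; a real driver would program the HW scheduler. */
static void kill_process_waves(struct compute_process *p)
{
	p->queues_mapped = false;
}

/* Policy: act only when the kernel explicitly asks; note there is
 * deliberately no "ran longer than x seconds" test here. */
static void maybe_kill(struct compute_process *p)
{
	if (p->queues_mapped && p->kill_requested)
		kill_process_waves(p);
}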
Oded