From: ebiederm+eric@ccr.net (Eric W. Biederman)
To: Rik Faith <faith@precisioninsight.com>
Cc: James Simmons <jsimmons@edgeglobal.com>, Linux MM <linux-mm@kvack.org>
Subject: Re: MMIO regions
Date: 10 Oct 1999 09:03:11 -0500
Message-ID: <m1emf3wbxc.fsf@alogconduit1ai.ccr.net>
In-Reply-To: Rik Faith's message of "Sun, 10 Oct 1999 07:24:58 -0400"
Rik Faith <faith@precisioninsight.com> writes:
> On 7 Oct 99 19:40:32 GMT, James Simmons <jsimmons@edgeglobal.com> wrote:
> > On Mon, 4 Oct 1999, Stephen C. Tweedie wrote:
> > > On Mon, 4 Oct 1999 14:29:14 -0400 (EDT), James Simmons
> > > <jsimmons@edgeglobal.com> said:
[snip]
> If I understand what you are saying, there are serious performance
> implications for direct-rendering clients (in addition to the added
> scheduler overhead, which will negatively impact overall system
> performance).
>
> I believe you are saying:
> 1) There are n processes, each of which has the MMIO region mmap'd.
> 2) The scheduler will only schedule one of these processes at a time,
> even on an SMP system. [I'm assuming this is what you mean by "in
> use", since the scheduler can't know about actual MMIO writes -- it
> has to assume that a mapped region is a region that is "in use",
> even if it isn't (e.g., a threaded program may have the MMIO region
> mapped in n-1 threads, but may only direct render in 1 thread).]
>
That was one idea.
There is the other side here. Software is buggy and hardware is buggy.
If some buggy piece of software forgets to take the lock (or mishandles it)
and two apps hit the MMIO region at the same time: BOOOMM!!!! Your
computer is toast.
The DRI approach looks good if your hardware is good enough that a
cooperation failure won't bring down the box, and hopefully good enough
that after it gets scrambled by a cooperation failure you can reset it.
>
> The cooperative locking system used by the DRI (see
> http://precisioninsight.com/dr/locking.html) allows direct-rendering
> clients to perform fine-grain locking only when the MMIO region is actually
> being written. The overhead for this system is extremely low (about 2
> instructions to lock, and 1 instruction to unlock). Cooperative locking
> like this allows several threads that all map the same MMIO region to run
> simultaneously on an SMP system.
The difficulty is that all threads need to be run as root.
Ouch!!!
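
For concreteness, the fast path Rik describes amounts to roughly the
sketch below. This is a minimal illustration in C using GCC's atomic
builtins; the names and bit layout are mine, not the actual DRI code,
which keeps the lock word in shared memory and traps to the kernel on
contention.

    #include <stdint.h>

    #define LOCK_HELD 0x80000000u       /* high bit: lock is held */

    /* Fast path: a single compare-and-swap claims the lock.  On
     * failure the client would fall back to an ioctl and sleep. */
    static int try_lock(volatile uint32_t *lock, uint32_t context)
    {
        uint32_t expected = 0;
        return __atomic_compare_exchange_n(lock, &expected,
                                           LOCK_HELD | context,
                                           0, __ATOMIC_ACQUIRE,
                                           __ATOMIC_RELAXED);
    }

    /* Fast path unlock: one atomic store.  If another context had
     * marked the lock contended, the holder would ioctl to wake it. */
    static void unlock(volatile uint32_t *lock)
    {
        __atomic_store_n(lock, 0, __ATOMIC_RELEASE);
    }

The cheapness is real, but the fast path only helps if every client
plays by the rules -- which is exactly the failure mode above.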
Personally I see 3 functional ways of making this work on buggy
single-threaded hardware.
1) Allow only one process to have the MMIO/frame buffer regions faulted in
at a time, since simultaneous frame buffer and MMIO writes are reported to
have hardware-crashing side effects.
2) Convince user space to have dedicated drawing/rendering threads that
are created with fork rather than clone (see the sketch after this list).
Then these threads can be cautiously scheduled to work around the buggy
hardware.
3) Have a set of very low overhead syscalls that manipulate MMIO, etc.
This might work in conjunction with 2 and have a fast path that just
makes sure nothing else is running that could touch the frame buffer.
(With Linux's cheap syscalls this may be possible.)
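
To make option 2 concrete, the idea is roughly the sketch below. The
names are mine, and /dev/fb0 plus the 4K mapping size are assumptions
for illustration. The point is that the rendering loop lives in its own
process, so the kernel can schedule MMIO users one at a time without
stalling the app's other threads.

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Spawn a dedicated rendering process with fork() rather than
     * clone(CLONE_VM), so only this process maps the MMIO region. */
    pid_t spawn_renderer(void (*render_loop)(volatile void *))
    {
        pid_t pid = fork();
        if (pid == 0) {
            int fd = open("/dev/fb0", O_RDWR);
            volatile void *mmio = mmap(NULL, 4096,
                                       PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0);
            render_loop(mmio);          /* all MMIO writes happen here */
            _exit(0);
        }
        return pid;                     /* parent never maps MMIO */
    }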
The fundamental problems that make this hard are:
1) It is very desirable for this to work in a windowed environment with
many apps running simultaneously (the X server wants to hand off some of the work).
2) The hardware is buggy so you must either:
a) Have many trusted (SUID) clients.
b) Have very clever workarounds that give high performance.
c) Lose some performance.
Either just the X server is trusted and you must tell it what to do,
or you find some other way.
What someone (not me) needs to do is code up a multithreaded test
application that shoots pictures to the screen and needs these features,
then run multiple copies of said test application on various kernel
configurations to see whether it works and gives acceptable performance.
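
Something along these lines, say. This is a rough sketch only:
/dev/fb0, the 1 MB size, and the process count are assumptions, and
all error checking is omitted.

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NPROC   4                   /* competing "clients" */
    #define FB_SIZE (1024 * 1024)       /* assumed frame buffer size */

    int main(void)
    {
        int i, frame;
        for (i = 0; i < NPROC; i++) {
            if (fork() == 0) {
                int fd = open("/dev/fb0", O_RDWR);
                char *fb = mmap(NULL, FB_SIZE, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
                for (frame = 0; frame < 1000; frame++)
                    memset(fb, i * 0x40, FB_SIZE);  /* solid-color "frame" */
                _exit(0);
            }
        }
        while (wait(NULL) > 0)          /* reap children; time this run */
            ;
        return 0;
    }

Timing that under the different schemes above would tell us whether
the performance loss is acceptable.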
Extending the current architecture with just the X server needing to be
trusted doesn't much worry me. But we really need to find
an alternative to encouraging SUID binary-only games (and other
intensive clients).
Eric
--