linux-mm.kvack.org archive mirror
* MMIO regions
@ 1999-10-04 14:38 James Simmons
  1999-10-04 15:31 ` Stephen C. Tweedie
  1999-10-04 16:58 ` Marcus Sundberg
  0 siblings, 2 replies; 27+ messages in thread
From: James Simmons @ 1999-10-04 14:38 UTC (permalink / raw)
  To: Linux MM

Howdy again!!

   I noticed something for SMP machines with all the discussion about
concurrent access to memory regions. What happens when you have two
processes that have both mmapped the same MMIO region for some card?
It doesn't have to be a video card. On an SMP machine it is possible that
both processes could access the same region at the same time. This could
cause the card to go into an indeterminate state, or even lock the machine.
Does there exist a way to handle this? Also, some cards have multiple MMIO
regions. What if one process mmaps one MMIO region of the card and another
process mmaps a different MMIO region of the same card? Now process one could
alter the card in such a way that it affects the results that process two
is expecting. How is this dealt with? Is it dealt with? If not, what would
be a good way to handle this?


* Re: MMIO regions
  1999-10-04 14:38 MMIO regions James Simmons
@ 1999-10-04 15:31 ` Stephen C. Tweedie
  1999-10-04 15:52   ` James Simmons
  1999-10-04 16:58 ` Marcus Sundberg
  1 sibling, 1 reply; 27+ messages in thread
From: Stephen C. Tweedie @ 1999-10-04 15:31 UTC (permalink / raw)
  To: James Simmons; +Cc: Linux MM

Hi,

On Mon, 4 Oct 1999 10:38:13 -0400 (EDT), James Simmons
<jsimmons@edgeglobal.com> said:

>    I noticed something for SMP machines with all the discussion about
> concurrent access to memory regions. What happens when you have two
> processes that have both mmapped the same MMIO region for some card?

The kernel doesn't impose any limits against this.  If you want to make
this impossible, then you need to add locking to the driver itself to
prevent multiple processes from conflicting.

--Stephen

* Re: MMIO regions
  1999-10-04 15:31 ` Stephen C. Tweedie
@ 1999-10-04 15:52   ` James Simmons
  1999-10-04 16:02     ` Benjamin C.R. LaHaise
  1999-10-04 16:11     ` Stephen C. Tweedie
  0 siblings, 2 replies; 27+ messages in thread
From: James Simmons @ 1999-10-04 15:52 UTC (permalink / raw)
  To: Stephen C. Tweedie; +Cc: Linux MM

On Mon, 4 Oct 1999, Stephen C. Tweedie wrote:

> Hi,
> 
> On Mon, 4 Oct 1999 10:38:13 -0400 (EDT), James Simmons
> <jsimmons@edgeglobal.com> said:
> 
> >    I noticed something for SMP machines with all the discussion about
> > concurrent access to memory regions. What happens when you have two
> > processes that have both mmapped the same MMIO region for some card?
> 
> The kernel doesn't impose any limits against this.  If you want to make
> this impossible, then you need to add locking to the driver itself to
> prevent multiple processes from conflicting.

And if the process holding the lock dies, then no other process can access
this resource. Also, if the program forgets to release the lock, you end up
with other processes never being able to access this piece of hardware.


* Re: MMIO regions
  1999-10-04 15:52   ` James Simmons
@ 1999-10-04 16:02     ` Benjamin C.R. LaHaise
  1999-10-04 17:27       ` James Simmons
  1999-10-04 16:11     ` Stephen C. Tweedie
  1 sibling, 1 reply; 27+ messages in thread
From: Benjamin C.R. LaHaise @ 1999-10-04 16:02 UTC (permalink / raw)
  To: James Simmons; +Cc: Linux MM

On Mon, 4 Oct 1999, James Simmons wrote:

> And if the process holding the lock dies, then no other process can access
> this resource. Also, if the program forgets to release the lock, you end up
> with other processes never being able to access this piece of hardware.

Eh?  That's simply not true -- it's easy enough to handle via a couple of
different means: in the release fop or munmap which both get called on
termination of a task.  Or in userspace from the SIGCHLD to the parent, or
if you're really paranoid, you can save the pid in an owner field in the
lock and periodically check that the process is still there.
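
For the first of those, a rough sketch of what the driver side could look
like (everything here is invented for illustration, not taken from any real
driver):

#include <linux/fs.h>
#include <linux/sched.h>

/* invented structure: one lock word per device */
struct mmio_dev {
        int   lock_held;
        pid_t lock_owner;
};

static int mmio_release(struct inode *inode, struct file *file)
{
        struct mmio_dev *dev = (struct mmio_dev *) file->private_data;

        /* ->release runs when the last reference to the fd goes away,
         * including when the process dies, so a stale lock can always
         * be dropped here even if the app never unlocked it */
        if (dev->lock_held && dev->lock_owner == current->pid) {
                dev->lock_held = 0;
                dev->lock_owner = 0;
        }
        return 0;
}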

		-ben


* Re: MMIO regions
  1999-10-04 15:52   ` James Simmons
  1999-10-04 16:02     ` Benjamin C.R. LaHaise
@ 1999-10-04 16:11     ` Stephen C. Tweedie
  1999-10-04 18:29       ` James Simmons
  1 sibling, 1 reply; 27+ messages in thread
From: Stephen C. Tweedie @ 1999-10-04 16:11 UTC (permalink / raw)
  To: James Simmons; +Cc: Stephen C. Tweedie, Linux MM

Hi,

On Mon, 4 Oct 1999 11:52:50 -0400 (EDT), James Simmons
<jsimmons@edgeglobal.com> said:

>> The kernel doesn't impose any limits against this.  If you want to make
>> this impossible, then you need to add locking to the driver itself to
>> prevent multiple processes from conflicting.

> And if the process holding the lock dies, then no other process can access
> this resource. Also, if the program forgets to release the lock, you end up
> with other processes never being able to access this piece of hardware.

There are any number of ways to recover from this.  SysV semaphores, for
example, allow you to specify UNDO when you down a semaphore, and the
semaphore will be restored automatically on process death.
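
In user space that looks roughly like this (a minimal sketch; the key value,
permissions and function names are made up, and error handling is omitted):

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

#define MMIO_SEM_KEY 0x4d4d494f                 /* arbitrary, made-up key */

/* create (or attach to) a single semaphore guarding the region */
static int mmio_sem_get(void)
{
        int id = semget(MMIO_SEM_KEY, 1, IPC_CREAT | IPC_EXCL | 0666);
        if (id >= 0)
                semctl(id, 0, SETVAL, 1);       /* we created it: unlocked */
        else
                id = semget(MMIO_SEM_KEY, 1, 0666);     /* already exists */
        return id;
}

static void mmio_lock(int id)
{
        struct sembuf op = { 0, -1, SEM_UNDO };
        semop(id, &op, 1);      /* SEM_UNDO: undone if we die holding it */
}

static void mmio_unlock(int id)
{
        struct sembuf op = { 0, +1, SEM_UNDO };
        semop(id, &op, 1);
}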

--Stephen

* Re: MMIO regions
  1999-10-04 14:38 MMIO regions James Simmons
  1999-10-04 15:31 ` Stephen C. Tweedie
@ 1999-10-04 16:58 ` Marcus Sundberg
  1999-10-04 18:27   ` James Simmons
  1 sibling, 1 reply; 27+ messages in thread
From: Marcus Sundberg @ 1999-10-04 16:58 UTC (permalink / raw)
  To: James Simmons; +Cc: linux-mm

James Simmons wrote:
> 
> Howdy again!!
> 
>    I noticed something for SMP machines with all the discussion about
> concurrent access to memory regions. What happens when you have two
> processes that have both mmapped the same MMIO region for some card?
> It doesn't have to be a video card. On an SMP machine it is possible that
> both processes could access the same region at the same time. This could
> cause the card to go into an indeterminate state, or even lock the machine.
> Does there exist a way to handle this? Also, some cards have multiple MMIO
> regions. What if one process mmaps one MMIO region of the card and another
> process mmaps a different MMIO region of the same card? Now process one could
> alter the card in such a way that it affects the results that process two
> is expecting. How is this dealt with? Is it dealt with? If not, what would
> be a good way to handle this?

AFAIK no drivers except fbcon drivers map any IO-region to userspace.

//Marcus
-- 
-------------------------------+------------------------------------
        Marcus Sundberg        | http://www.stacken.kth.se/~mackan/
 Royal Institute of Technology |       Phone: +46 707 295404
       Stockholm, Sweden       |   E-Mail: mackan@stacken.kth.se

* Re: MMIO regions
  1999-10-04 16:02     ` Benjamin C.R. LaHaise
@ 1999-10-04 17:27       ` James Simmons
  1999-10-04 17:56         ` Benjamin C.R. LaHaise
  1999-10-04 19:19         ` Stephen C. Tweedie
  0 siblings, 2 replies; 27+ messages in thread
From: James Simmons @ 1999-10-04 17:27 UTC (permalink / raw)
  To: Benjamin C.R. LaHaise; +Cc: Linux MM

On Mon, 4 Oct 1999, Benjamin C.R. LaHaise wrote:

> On Mon, 4 Oct 1999, James Simmons wrote:
> 
> > And if the process holding the lock dies, then no other process can access
> > this resource. Also, if the program forgets to release the lock, you end up
> > with other processes never being able to access this piece of hardware.
> 
> Eh?  That's simply not true -- it's easy enough to handle via a couple of
> different means: in the release fop or munmap which both get called on
> termination of a task.  

Which means only one application can ever have access to the MMIO. If
another process wanted it, this application would have to tell the
other application "hey, I want it, so unmap". Then the application demanding
it would have to mmap it.

> Or in userspace from the SIGCHLD to the parent, 

That's assuming it's always a child that has access to an MMIO region.

> or if you're really paranoid, you can save the pid in an owner field in the
> lock and periodically check that the process is still there.
 
How would you use this method?


* Re: MMIO regions
  1999-10-04 17:27       ` James Simmons
@ 1999-10-04 17:56         ` Benjamin C.R. LaHaise
  1999-10-04 18:26           ` James Simmons
  1999-10-04 19:19         ` Stephen C. Tweedie
  1 sibling, 1 reply; 27+ messages in thread
From: Benjamin C.R. LaHaise @ 1999-10-04 17:56 UTC (permalink / raw)
  To: James Simmons; +Cc: Linux MM

On Mon, 4 Oct 1999, James Simmons wrote:

> On Mon, 4 Oct 1999, Benjamin C.R. LaHaise wrote:
> 
> > On Mon, 4 Oct 1999, James Simmons wrote:
> > 
> > > And if the process holding the lock dies, then no other process can access
> > > this resource. Also, if the program forgets to release the lock, you end up
> > > with other processes never being able to access this piece of hardware.
> > 
> > Eh?  That's simply not true -- it's easy enough to handle via a couple of
> > different means: in the release fop or munmap which both get called on
> > termination of a task.  
> 
> Which means only one application can ever have access to the MMIO. If
> another process wanted it, this application would have to tell the
> other application "hey, I want it, so unmap". Then the application demanding
> it would have to mmap it.

GUG!  Think: you've got a file descriptor, if you implement an ioctl to
lock it, then your release op gets called when the app dies.  But
Stephen's suggestion of using the SYSV semaphores for the user code is
better. 

> > Or in userspace from the SIGCHLD to the parent, 
> 
> That's assuming it's always a child that has access to an MMIO region.

This is a commonly used trick in unix programming: at the start of the
program, you fork and perform all operations in the child so that the
parent can clean up after the child.
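
Roughly like this (the two helpers are stand-ins for whatever the real
program would do; a SIGCHLD handler works the same way as the waitpid here):

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

/* stand-ins for the real work and the real cleanup */
static void do_all_the_real_work(void) { /* mmap MMIO, take lock, render */ }
static void release_hardware_lock(void) { printf("parent: lock released\n"); }

int main(void)
{
        pid_t child = fork();

        if (child == 0) {               /* child does everything risky */
                do_all_the_real_work();
                exit(0);
        }

        waitpid(child, NULL, 0);        /* parent outlives the child ... */
        release_hardware_lock();        /* ... and cleans up no matter how
                                           the child exited */
        return 0;
}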

> > or if you're really paranoid, you can save the pid in an owner field in the
> > lock and periodically check that the process is still there.
>  
> How would you use this method?

man 2 kill.
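
That is, save the owner's pid and probe it with signal 0; a minimal sketch
(the helper name is made up):

#include <sys/types.h>
#include <signal.h>
#include <errno.h>

static int owner_is_alive(pid_t owner)
{
        if (kill(owner, 0) == 0)        /* signal 0: existence check only */
                return 1;
        return errno != ESRCH;          /* ESRCH means no such process */
}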

		-ben


* Re: MMIO regions
  1999-10-04 17:56         ` Benjamin C.R. LaHaise
@ 1999-10-04 18:26           ` James Simmons
  0 siblings, 0 replies; 27+ messages in thread
From: James Simmons @ 1999-10-04 18:26 UTC (permalink / raw)
  To: Benjamin C.R. LaHaise; +Cc: Linux MM

> > Which means only one application can ever have access to the MMIO. If
> > another process wanted it, this application would have to tell the
> > other application "hey, I want it, so unmap". Then the application demanding
> > it would have to mmap it.
> 
> GUG!  Think: you've got a file descriptor, if you implement an ioctl to
> lock it, then your release op gets called when the app dies.  But
> Stephen's suggestion of using the SYSV semaphores for the user code is
> better. 

Okay, that can be fixed. What about if the process goes to sleep? The most
important thing I was trying to get at was both processes wanting to
use the MMIO region at the same time. Okay, if we expand on the idea of a
semaphore, what we are doing is really recreating IPC shared memory.
Note that IPC shared memory does not handle simultaneous access to the memory;
usually a userland semaphore is used. Now a rogue app can access this
memory if it has the key and ignores the semaphore. Of course this is
only RAM, and this would only fubar the apps using this memory. What I'm
talking about is messing up the hardware and locking the machine.

> > How would you use this method?
> 
> man 2 kill.

Userland has to explicitly kill it or have the other process send a
kill signal from within the kernel. All I want to do is suspend the
processes that might access the MMIO region while one is using it. Of
course I could use signals in the kernel to do that.


* Re: MMIO regions
  1999-10-04 16:58 ` Marcus Sundberg
@ 1999-10-04 18:27   ` James Simmons
  0 siblings, 0 replies; 27+ messages in thread
From: James Simmons @ 1999-10-04 18:27 UTC (permalink / raw)
  To: Marcus Sundberg; +Cc: linux-mm

> AFAIK no drivers except fbcon drivers map any IO-region to userspace.

I want to be prepared for the future.


* Re: MMIO regions
  1999-10-04 16:11     ` Stephen C. Tweedie
@ 1999-10-04 18:29       ` James Simmons
  1999-10-04 19:35         ` Stephen C. Tweedie
  0 siblings, 1 reply; 27+ messages in thread
From: James Simmons @ 1999-10-04 18:29 UTC (permalink / raw)
  To: Stephen C. Tweedie; +Cc: Linux MM

On Mon, 4 Oct 1999, Stephen C. Tweedie wrote:

> Hi,
> 
> On Mon, 4 Oct 1999 11:52:50 -0400 (EDT), James Simmons
> <jsimmons@edgeglobal.com> said:
> 
> >> The kernel doesn't impose any limits against this.  If you want to make
> >> this impossible, then you need to add locking to the driver itself to
> >> prevent multiple processes from conflicting.
> 
> > And if the process holding the lock dies, then no other process can access
> > this resource. Also, if the program forgets to release the lock, you end up
> > with other processes never being able to access this piece of hardware.
>
> There are any number of ways to recover from this.  SysV semaphores, for
> example, allow you to specify UNDO when you down a semaphore, and the
> semaphore will be restored automatically on process death.
> 
> --Stephen
> 

Okay. But none of this prevents a rogue app from hosing your system. Such
a process doesn't have to bother with locks or semaphores. 
 



* Re: MMIO regions
  1999-10-04 17:27       ` James Simmons
  1999-10-04 17:56         ` Benjamin C.R. LaHaise
@ 1999-10-04 19:19         ` Stephen C. Tweedie
  1999-10-06 20:15           ` James Simmons
  1 sibling, 1 reply; 27+ messages in thread
From: Stephen C. Tweedie @ 1999-10-04 19:19 UTC (permalink / raw)
  To: James Simmons; +Cc: Benjamin C.R. LaHaise, Linux MM, Stephen Tweedie

Hi,

On Mon, 4 Oct 1999 13:27:43 -0400 (EDT), James Simmons
<jsimmons@edgeglobal.com> said:

> On Mon, 4 Oct 1999, Benjamin C.R. LaHaise wrote:
>> On Mon, 4 Oct 1999, James Simmons wrote:
>> 
>> > And if the process holding the lock dies, then no other process can access
>> > this resource. Also, if the program forgets to release the lock, you end up
>> > with other processes never being able to access this piece of hardware.
>> 
>> Eh?  That's simply not true -- it's easy enough to handle via a couple of
>> different means: in the release fop or munmap which both get called on
>> termination of a task.  

> Which means only one application can ever have access to the MMIO. If
> another process wanted it, this application would have to tell the
> other application "hey, I want it, so unmap". Then the application demanding
> it would have to mmap it.

Look, we've already been over this --- if you have multiple accessors,
you have all the locking problems we talked about before.  The kernel
doesn't do anything automatically for you.  Any locking you want to do
can be done in the driver.

The problem is, what locking do you want to do?  We've talked about this
--- the fact is, if the hardware sucks badly enough that multiple
accessors can crash the bus but you need multiple accessors to access
certain functionality, then what do you want to do about it?  Sorry, you
can't just shrug this off as an O/S implementation problem --- the
hardware in this case doesn't give the software any clean way out.  The
*only* solutions are either slow in the general case, enforcing access
control via expensive VM operations; or they are best-effort, relying on
cooperative locking but allowing good performance.

>> or if you're really paranoid, you can save the pid in an owner field
>> in the lock and periodically check that the process is still there.
 
> How would you use this method?

Look at http://www.precisioninsight.com/dr/locking.html for a
description of the cooperative lightweight locking used in the DRI in
2.3 kernels to solve this problem.  Basically you have a shared memory
segment which processes can mmap allowing them to determine if they
still hold the lock via a simple locked memory operation, and a kernel
syscall which lets processes which don't have the lock arbitrate for
access.
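
Very roughly, the shape is something like the following. To be clear, this
is not the actual DRI code: the structure, constants and ioctl numbers are
invented here, and the locked memory operation shown is x86-only.

#include <sys/ioctl.h>

#define HWLOCK_IOC_WAIT 0x6801          /* made-up ioctl numbers standing in */
#define HWLOCK_IOC_WAKE 0x6802          /* for the real arbitration syscall  */

#define LOCK_FREE      0
#define LOCK_CONTENDED 0x80000000       /* "someone is waiting" bit */

struct hw_lock {
        volatile unsigned int owner;    /* lives in the shared mapping */
};

/* the "locked memory operation": compare-and-exchange on *p */
static inline unsigned int cmpxchg_u32(volatile unsigned int *p,
                                       unsigned int old, unsigned int new)
{
        unsigned int prev;
        __asm__ __volatile__("lock; cmpxchgl %1,%2"
                             : "=a" (prev)
                             : "r" (new), "m" (*p), "0" (old)
                             : "memory");
        return prev;
}

static void hw_lock(int fd, struct hw_lock *l, unsigned int me)
{
        /* fast path: one locked instruction when nobody holds it */
        if (cmpxchg_u32(&l->owner, LOCK_FREE, me) == LOCK_FREE)
                return;
        /* slow path: ask the kernel to block us until the lock is free */
        ioctl(fd, HWLOCK_IOC_WAIT, me);
}

static void hw_unlock(int fd, struct hw_lock *l, unsigned int me)
{
        /* the fast path fails only if a waiter set the contention bit;
         * the kernel then clears the lock and wakes the waiter */
        if (cmpxchg_u32(&l->owner, me, LOCK_FREE) != me)
                ioctl(fd, HWLOCK_IOC_WAKE, me);
}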

--Stephen

* Re: MMIO regions
  1999-10-04 18:29       ` James Simmons
@ 1999-10-04 19:35         ` Stephen C. Tweedie
  1999-10-07 19:40           ` James Simmons
  0 siblings, 1 reply; 27+ messages in thread
From: Stephen C. Tweedie @ 1999-10-04 19:35 UTC (permalink / raw)
  To: James Simmons; +Cc: Stephen C. Tweedie, Linux MM

Hi,

On Mon, 4 Oct 1999 14:29:14 -0400 (EDT), James Simmons
<jsimmons@edgeglobal.com> said:

> Okay. But none of this prevents a rogue app from hosing your system. Such
> a process doesn't have to bother with locks or semaphores. 

And we talked about this before.  You _can_ make such a guarantee, but
it is hideously expensive especially on SMP.  You either protect the
memory or the CPU against access by the other app, and that requires
either scheduler or VM interrupts between CPUs.

--Stephen



* Re: MMIO regions
  1999-10-04 19:19         ` Stephen C. Tweedie
@ 1999-10-06 20:15           ` James Simmons
  1999-10-11 17:09             ` Stephen C. Tweedie
  0 siblings, 1 reply; 27+ messages in thread
From: James Simmons @ 1999-10-06 20:15 UTC (permalink / raw)
  To: Linux MM

> Look at http://www.precisioninsight.com/dr/locking.html for a
> description of the cooperative lightweight locking used in the DRI in
> 2.3 kernels to solve this problem.  Basically you have a shared memory
> segment which processes can mmap allowing them to determine if they
> still hold the lock via a simple locked memory operation, and a kernel
> syscall which lets processes which don't have the lock arbitrate for
> access.

I have read those papers. It's not compatible with fbcon. It would require
a massive rewrite which would break everything that works with fbcon. When
people start writing apps using DRI and it locks their machine or damages
the hardware, the Linux kernel mailing list will have to hear those
complaints. You know people will want to write their own stuff. Of course
Precision Insight should make a license stating it is illegal to write
your own code using their driver, or a warning, so they don't get their
asses sued. These are the kinds of people who will look for other
solutions like I am. So expect more like me.


* Re: MMIO regions
  1999-10-04 19:35         ` Stephen C. Tweedie
@ 1999-10-07 19:40           ` James Simmons
  1999-10-10 11:24             ` Rik Faith
  1999-10-11 17:22             ` Stephen C. Tweedie
  0 siblings, 2 replies; 27+ messages in thread
From: James Simmons @ 1999-10-07 19:40 UTC (permalink / raw)
  To: Stephen C. Tweedie; +Cc: Linux MM

On Mon, 4 Oct 1999, Stephen C. Tweedie wrote:

> Hi,
> 
> On Mon, 4 Oct 1999 14:29:14 -0400 (EDT), James Simmons
> <jsimmons@edgeglobal.com> said:
> 
> > Okay. But none of this prevents a rogue app from hosing your system. Such
> > a process doesn't have to bother with locks or semaphores. 
> 
> And we talked about this before.  You _can_ make such a guarantee, but
> it is hideously expensive especially on SMP.  You either protect the
> memory or the CPU against access by the other app, and that requires
> either scheduler or VM interrupts between CPUs.

No VM stuff. I think the better approach is with the scheduler. The nice
thing about the scheduler is the scheduler lock. I'm assuming that during
this lock no other process on any CPU can be rescheduled. It's during the lock
that I can test to see if a process is using an MMIO region that is already in
use by another process. If it is, then skip this process. If not, weigh
this process with the others. If a process is selected to be the next
executed process, then lock the MMIO region.
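
Purely to illustrate the idea, here it is as a user-space model (nothing
below is real scheduler code; all of the names are made up):

/* toy model: skip runnable tasks whose MMIO region is already held */
struct region { int busy; };
struct task   { struct region *mmio; int runnable; };

static struct task *pick_next(struct task *t, int n)
{
        int i;

        for (i = 0; i < n; i++) {
                if (!t[i].runnable)
                        continue;
                if (t[i].mmio && t[i].mmio->busy)
                        continue;               /* region held: skip this one */
                if (t[i].mmio)
                        t[i].mmio->busy = 1;    /* selected: hold the region
                                                   until it is descheduled */
                return &t[i];
        }
        return 0;                               /* nothing safe to run */
}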


* Re: MMIO regions
  1999-10-07 19:40           ` James Simmons
@ 1999-10-10 11:24             ` Rik Faith
  1999-10-10 14:03               ` Eric W. Biederman
  1999-10-10 14:21               ` James Simmons
  1999-10-11 17:22             ` Stephen C. Tweedie
  1 sibling, 2 replies; 27+ messages in thread
From: Rik Faith @ 1999-10-10 11:24 UTC (permalink / raw)
  To: James Simmons; +Cc: Linux MM

On 7 Oct 99 19:40:32 GMT, James Simmons <jsimmons@edgeglobal.com> wrote:
> On Mon, 4 Oct 1999, Stephen C. Tweedie wrote:
> > On Mon, 4 Oct 1999 14:29:14 -0400 (EDT), James Simmons
> > <jsimmons@edgeglobal.com> said:
> > 
> > > Okay. But none of this prevents a rogue app from hosing your system. Such
> > > a process doesn't have to bother with locks or semaphores. 
> > 
> > And we talked about this before.  You _can_ make such a guarantee, but
> > it is hideously expensive especially on SMP.  You either protect the
> > memory or the CPU against access by the other app, and that requires
> > either scheduler or VM interrupts between CPUs.
> 
> No VM stuff. I think the better approach is with the scheduler. The nice
> thing about the scheduler is the scheduler lock. I'm assuming that during
> this lock no other process on any CPU can be rescheduled. It's during the lock
> that I can test to see if a process is using an MMIO region that is already in
> use by another process. If it is, then skip this process. If not, weigh
> this process with the others. If a process is selected to be the next
> executed process, then lock the MMIO region.

If I understand what you are saying, there are serious performance
implications for direct-rendering clients (in addition to the added
scheduler overhead, which will negatively impact overall system
performance).

I believe you are saying:
    1) There are n processes, each of which has the MMIO region mmap'd.
    2) The scheduler will only schedule one of these processes at a time,
       even on an SMP system.  [I'm assuming this is what you mean by "in
       use", since the scheduler can't know about actual MMIO writes -- it
       has to assume that a mapped region is a region that is "in use",
       even if it isn't (e.g., a threaded program may have the MMIO region
       mapped in n-1 threads, but may only direct render in 1 thread).]

On MMIO-based graphics cards (i.e., those that do not use traditional DMA),
a direct-rendering client will intersperse relatively long periods of
computation with relatively short periods of MMIO writes.  In your scheme,
one of these clients will run for a whole time slice before the other one
runs (i.e., they will run in alternate time slices, even on an SMP system
with sufficient processors to run both simultaneously).  Because actual
MMIO writes take up a relatively small fraction of that time slice,
rendering performance will potentially decrease by a factor of 2 (or more,
if more CPUs are available).  This is significant, especially since many
high-end OpenGL applications are threaded and expect to be able to run
simultaneously on SMP systems.

The cooperative locking system used by the DRI (see
http://precisioninsight.com/dr/locking.html) allows direct-rendering
clients to perform fine-grain locking only when the MMIO region is actually
being written.  The overhead for this system is extremely low (about 2
instructions to lock, and 1 instruction to unlock).  Cooperative locking
like this allows several threads that all map the same MMIO region to run
simultaneously on an SMP system.

-- 
Rik Faith: faith@precisioninsight.com

* Re: MMIO regions
  1999-10-10 11:24             ` Rik Faith
@ 1999-10-10 14:03               ` Eric W. Biederman
  1999-10-10 18:46                 ` Rik Faith
  1999-10-10 14:21               ` James Simmons
  1 sibling, 1 reply; 27+ messages in thread
From: Eric W. Biederman @ 1999-10-10 14:03 UTC (permalink / raw)
  To: Rik Faith; +Cc: James Simmons, Linux MM

Rik Faith <faith@precisioninsight.com> writes:

> On 7 Oct 99 19:40:32 GMT, James Simmons <jsimmons@edgeglobal.com> wrote:
> > On Mon, 4 Oct 1999, Stephen C. Tweedie wrote:
> > > On Mon, 4 Oct 1999 14:29:14 -0400 (EDT), James Simmons
> > > <jsimmons@edgeglobal.com> said:
[snip]
> If I understand what you are saying, there are serious performance
> implications for direct-rendering clients (in addition to the added
> scheduler overhead, which will negatively impact overall system
> performance).
> 
> I believe you are saying:
>     1) There are n processes, each of which has the MMIO region mmap'd.
>     2) The scheduler will only schedule one of these processes at a time,
>        even on an SMP system.  [I'm assuming this is what you mean by "in
>        use", since the scheduler can't know about actual MMIO writes -- it
>        has to assume that a mapped region is a region that is "in use",
>        even if it isn't (e.g., a threaded program may have the MMIO region
>        mapped in n-1 threads, but may only direct render in 1 thread).]
>
That was one idea.

There is the other side here.  Software is buggy and hardware is buggy.
If some buggy software forgets to take the lock (or messes it up),
and two apps hit the MMIO region at the same time:

BOOOMM!!!! Your computer is toast.

The DRI approach looks good if:
   Your hardware is good enough that it won't bring down the box on a
   cooperation failure.  And hopefully it is good enough that after it gets
   scrambled by a cooperation failure you can reset it.

> 
> The cooperative locking system used by the DRI (see
> http://precisioninsight.com/dr/locking.html) allows direct-rendering
> clients to perform fine-grain locking only when the MMIO region is actually
> being written.  The overhead for this system is extremely low (about 2
> instructions to lock, and 1 instruction to unlock).  Cooperative locking
> like this allows several threads that all map the same MMIO region to run
> simultaneously on an SMP system.

The difficulty is that all threads need to be run as root.
Ouch!!!


Personally I see 3 functional ways of making this work on buggy single-threaded
hardware.

1) Allow only one process to have the MMIO/frame buffer regions faulted in
at a time, as simultaneous frame buffer and MMIO writes are reported to
have hardware-crashing side effects.

2) Convince user space to have dedicated drawing/rendering threads that
are created with fork rather than clone.  Then these threads can be cautiously
scheduled to work around buggy hardware.

3) Have a set of very low overhead syscalls that will manipulate MMIO,
etc.  This might work in conjunction with 2 and have a fast path that just
makes sure nothing else is running that could touch the frame buffer.
(With Linux's cheap syscalls this may be possible.)

The fundamental problems that make this hard are:
1) It is very desirable for this to work in a windowed environment with
many apps running simultaneously (the X server wants to hand off some of the work).

2) The hardware is buggy, so you must either:
    a) Have many trusted (SUID) clients.
    b) Have very clever workarounds that give high performance.
    c) Lose some performance.
         Either only the X server is trusted and you must tell it what to do,
         or some other way is found.

What someone (not me) needs to do is code up a multithreaded test application
that shoots pictures to the screen and needs these features, then run
tests with multiple copies of said test application running, on
various kernel configurations, to see if it will work and give
acceptable performance.

Extending the current architecture with just the X server needing to be
trusted doesn't much worry me.  But we really need to find
an alternative to encouraging SUID binary-only games (and other
intensive clients).

Eric

* Re: MMIO regions
  1999-10-10 11:24             ` Rik Faith
  1999-10-10 14:03               ` Eric W. Biederman
@ 1999-10-10 14:21               ` James Simmons
  1 sibling, 0 replies; 27+ messages in thread
From: James Simmons @ 1999-10-10 14:21 UTC (permalink / raw)
  To: Rik Faith; +Cc: Linux MM

> If I understand what you are saying, there are serious performance
> implications for direct-rendering clients (in addition to the added
> scheduler overhead, which will negatively impact overall system
> performance).
> 
> I believe you are saying:
>     1) There are n processes, each of which has the MMIO region mmap'd.
>     2) The scheduler will only schedule one of these processes at a time,
>        even on an SMP system.  [I'm assuming this is what you mean by "in
>        use", since the scheduler can't know about actual MMIO writes -- it
>        has to assume that a mapped region is a region that is "in use",
>        even if it isn't (e.g., a threaded program may have the MMIO region
>        mapped in n-1 threads, but may only direct render in 1 thread).]
> 
> On MMIO-based graphics cards (i.e., those that do not use traditional DMA),
> a direct-rendering client will intersperse relatively long periods of
> computation with relatively short periods of MMIO writes.  In your scheme,
> one of these clients will run for a whole time slice before the other one
> runs (i.e., they will run in alternate time slices, even on an SMP system
> with sufficient processors to run both simultaneously).  Because actual
> MMIO writes take up a relatively small fraction of that time slice,
> rendering performance will potentially decrease by a factor of 2 (or more,
> if more CPUs are available).  This is significant, especially since many
> high-end OpenGL applications are threaded and expect to be able to run
> simultaneously on SMP systems.
>

I noticed this when I was playing with my code. Also, I realized regular
kernel semaphores are not going to be able to give you the hard realtime
guarantees that are needed. Even the regular interrupt handling is just
not good enough. A good example is VBL: with ordinary interrupt handling
it takes an enormous amount of time to get to the interrupt handler. The
effect gets worse under a very highly loaded machine; the tearing effect
gets worse. It's not unusual for a graphics program to create a high load
either. So actually I'm designing a hard realtime scheduler that does
this. The regular scheduler is not going to cut the mustard. Plus this
gives an enormous performance boost no matter what the load. Someone
familiar with IRIX told me that's what SGI does to optimize their systems.
Also you can have the following:


Data-> accel engine
                  context switch 
                        other data->accel engine.

This would confuse most cards. With a realtime handler you can make sure
that an accel command is finished before allowing a context switch.

> The cooperative locking system used by the DRI (see
> http://precisioninsight.com/dr/locking.html) allows direct-rendering
> clients to perform fine-grain locking only when the MMIO region is actually
> being written.  The overhead for this system is extremely low (about 2
> instructions to lock, and 1 instruction to unlock).  Cooperative locking
> like this allows several threads that all map the same MMIO region to run
> simultaneously on an SMP system.

I'm familiar with the system.


* Re: MMIO regions
  1999-10-10 14:03               ` Eric W. Biederman
@ 1999-10-10 18:46                 ` Rik Faith
  1999-10-11  0:21                   ` James Simmons
  1999-10-11  3:38                   ` Eric W. Biederman
  0 siblings, 2 replies; 27+ messages in thread
From: Rik Faith @ 1999-10-10 18:46 UTC (permalink / raw)
  To: Eric W. Biederman; +Cc: Rik Faith, James Simmons, Linux MM

On     10 Oct 1999 09:03:11 -0500,
   Eric W. Biederman <ebiederm+eric@ccr.net> wrote:
> Rik Faith <faith@precisioninsight.com> writes:
> > The cooperative locking system used by the DRI (see
> > http://precisioninsight.com/dr/locking.html) allows direct-rendering
> > clients to perform fine-grain locking only when the MMIO region is actually
> > being written.  The overhead for this system is extremely low (about 2
> > instructions to lock, and 1 instruction to unlock).  Cooperative locking
> > like this allows several threads that all map the same MMIO region to run
> > simultaneously on an SMP system.
> 
> The difficulty is that all threads need to be run as root.
> Ouch!!!

No.  The DRI assumes that direct-rendering clients are running as non-root
users.  A direct-rendering client, with an open connection to the X server,
is allowed to mmap the MMIO region via a special device (additional
restrictions also apply).  For more information, please see "A Security
Analysis of the Direct Rendering Infrastructure"
(http://precisioninsight.com/dr/security.html).

> Personally I see 3 functional ways of making this work on buggy
> single-threaded hardware.
> 
> 1) Allow only one process to have the MMIO/frame buffer regions faulted in
> at a time, as simultaneous frame buffer and MMIO writes are reported to
> have hardware-crashing side effects.

Faulting doesn't work on low-end (e.g., any PC-class) hardware because two
clients cannot intermingle their MMIO writes.

> 2) Convince user space to have dedicated drawing/rendering threads that
> are created with fork rather than clone.  Then these threads can be
> cautiously scheduled to work around buggy hardware.

We don't want to require that large existing OpenGL applications be
re-written for Linux -- we'd like them to be easily ported to Linux.  In
any case, I don't see how using processes instead of threads makes this
problem any easier.

> 3) Have a set of very low overhead syscalls that will manipulate MMIO,
> etc.  This might work in conjunction with 2 and have a fast path that just
> makes sure nothing else is running that could touch the frame buffer.
> (With Linux's cheap syscalls this may be possible.)

One of the advantages of "direct rendering" is that the clients talk
directly to the hardware.  Adding a syscall interface for MMIO will create
a significant performance hit (the whole reason for providing direct
rendering is performance -- if you add significant overhead in the
direct-rendering pathway, then you might as well just implement an
indirect-rendering solution).

> What someone (not me) needs to do is code up a multithreaded test
> application that shoots pictures to the screen and needs these features,
> then run tests with multiple copies of said test application running, on
> various kernel configurations, to see if it will work and give acceptable
> performance.

The DRI has been implemented and is available in XFree86 3.9.15 (and up).
The DRI supports multiple simultaneous direct-rendering clients.

> Extending the current architecture with just the X server needing to be
> trusted doesn't much worry me.  But we really need to find
> an alternative to encouraging SUID binary-only games (and other
> intensive clients).

Just to clarify, the DRI does _not_ require that clients be SUID.



If you are interested in reading more about the DRI, there are several
high- and low-level design documents available from
http://precisioninsight.com/piinsights.html.

Those who are not familiar with the basic ideas and requirements for
direct-rendering should start with the following papers describing
implementations by SGI and HP:

[KBH95] Mark J. Kilgard, David Blythe, and Deanna Hohn. System Support for
OpenGL Direct Rendering.  Proceedings of Graphics Interface '95, Quebec
City, Quebec, May 1995. Available from
http://reality.sgi.com/mjk/direct/direct.html

[K97] Mark J. Kilgard.  Realizing OpenGL: Two Implementations of One
Architecture.  SIGGRAPH/Eurographics Workshop on Graphics Hardware, Los
Angeles, August 3-4, 1997. Available from
http://reality.sgi.com/opengl/twoimps/twoimps.html

[LCPGH98] Kevin T. Lefebvre, Robert J. Casey, Michael, J. Phelps, Courtney
D. Goeltzenleuchter, and Donley B. Hoffman.  An Overview of the HP
OpenGL&reg; Software Architecture.  The Hewlett-Packard Journal, May 1998,
49(2): 9-18.  Available from
http://www.hp.com/hpj/98may/ma98a2.pdf


* Re: MMIO regions
  1999-10-10 18:46                 ` Rik Faith
@ 1999-10-11  0:21                   ` James Simmons
  1999-10-11 10:59                     ` Rik Faith
  1999-10-11  3:38                   ` Eric W. Biederman
  1 sibling, 1 reply; 27+ messages in thread
From: James Simmons @ 1999-10-11  0:21 UTC (permalink / raw)
  To: Rik Faith; +Cc: Eric W. Biederman, Linux MM

> No.  The DRI assumes that direct-rendering clients are running as non-root
> users.  A direct-rendering client, with an open connection to the X server,
> is allowed to mmap the MMIO region via a special device (additional
> restrictions also apply).  For more information, please see "A Security
> Analysis of the Direct Rendering Infrastructure"
> (http://precisioninsight.com/dr/security.html).

> Just to clarify, the DRI does _not_ require that clients be SUID.

Oh my. Non-root and direct access to buggy hardware.

Yeah, since you're familiar with SGI, can you explain to me the use of
/dev/shmiq, /dev/qcntl and /dev/usemaclone? I have seen them used for the
X server on IRIX and was just interested to see if they could be of use on
other platforms. Yes, SGI Linux supports these.


* Re: MMIO regions
  1999-10-10 18:46                 ` Rik Faith
  1999-10-11  0:21                   ` James Simmons
@ 1999-10-11  3:38                   ` Eric W. Biederman
  1 sibling, 0 replies; 27+ messages in thread
From: Eric W. Biederman @ 1999-10-11  3:38 UTC (permalink / raw)
  To: Rik Faith; +Cc: Eric W. Biederman, James Simmons, Linux MM

Rik Faith <faith@precisioninsight.com> writes:

> (http://precisioninsight.com/dr/security.html).
References read and digested.

I am now convinced that (given buggy hardware) the software lock
is the only possible way to go.

The argument is that unless the hardware is well designed you cannot
save its state to do a context switch at arbitrary times.
A repeat of the old EGA problem.

The second part of the architecture is that OpenGL does the rendering
in the X server with the same libraries as in user space, with the addition
of a loop to fetch the commands to run from another process.

And OpenGL would be the only API programmed to, with dispatch logic
similar to that found in libggi for different hardware.  And it would
only be in the hardware-specific code that the lock would be taken, if needed.

The fact that in this interface the kernel will only expose safe
hardware registers makes this design not as spooky.  The spooky aspect
still remains in that incorrectly accessing the hardware (possibly
caused by a stray pointer) can cause a system crash.

The nice thing is that if you remove the SGID bit from the binaries, all
rendering will be indirect through the X server, allowing security to
be managed. 

The previous papers from SGI & HP suggest that with good hardware
a page-faulting technique may be preferred for fairness etc.
There are many issues relating to TLB flushes and multithreaded
programs that need to be resolved, but that is mostly irrelevant
as most hardware is too buggy to properly context switch :(

Eric

* Re: MMIO regions
  1999-10-11  0:21                   ` James Simmons
@ 1999-10-11 10:59                     ` Rik Faith
  0 siblings, 0 replies; 27+ messages in thread
From: Rik Faith @ 1999-10-11 10:59 UTC (permalink / raw)
  To: James Simmons; +Cc: Rik Faith, Eric W. Biederman, Linux MM

On Sun 10 Oct 1999 20:21:15 -0400,
   James Simmons <jsimmons@edgeglobal.com> wrote:
> 
> > No.  The DRI assumes that direct-rendering clients are running as non-root
> > users.  A direct-rendering client, with an open connection to the X server,
> > is allowed to mmap the MMIO region via a special device (additional
> > restrictions also apply).  For more information, please see "A Security
> > Analysis of the Direct Rendering Infrastructure"
> > (http://precisioninsight.com/dr/security.html).
> 
> > Just to clarify, the DRI does _not_ require that clients be SUID.
> 
> Oh my. Non root and direct access to buggy hardware.

For those on the list who haven't read the Security Analysis document, let
me summarize the DRI's security policy with respect to mapping the MMIO
region: A non-root direct-rendering client is allowed to map the MMIO
region if:

    1) the client already has an open connection to the X server (so, for
       example, all xauth authentication has already been performed), and

    2) the client is running with the appropriate access rights that allow
       it to open the DRM device (so the system administrator can easily
       restrict direct-rendering access to a certain group of users).

> Yeah, since you're familiar with SGI, can you explain to me the use of
> /dev/shmiq, /dev/qcntl and /dev/usemaclone? I have seen them used for the
> X server on IRIX and was just interested to see if they could be of use on
> other platforms. Yes, SGI Linux supports these.

I believe the shmiq (shared memory input queue) and qcntl devices are
mostly used by the input device drivers (e.g, keyboard, mouse, tablet,
dial/button box) to serialize input events destined for the X server.  For
Linux, much of the functionality of these devices has already been
implemented as pure user-space programs (e.g., gpm can serialize input from
multiple mice).

The usema device provides spinlocks and semaphores.  SGI uses these to
provide synchronization between X server threads for indirect-rendering
with their multi-rendering implementation.  This is discussed in:

  [KHLS94] Mark J. Kilgard, Simon Hui, Allen A Leinwand, and Dave
  Spalding.  X Server Multi-rendering for OpenGL and PEX.  8th Annual X
  Technical Conference, Boston, Mass., January 25, 1994.  Available from
  <http://reality.sgi.com/opengl/multirender/multirender.html>.

Note that the two-tiered lock that the DRM implements can be viewed as a
specially optimized "user-space" semaphore, and that the DRM already
provides other functionality that is described in this paper (although our
initial implementation has concentrated on the performance-critical case of 
direct rendering).

Because PC-class hardware is so different from SGI-class hardware, direct
rendering on non-SGI machines requires the implementation of interfaces
that SGI does not need (i.e., implementing all of the traditional SGI
interfaces is not sufficient to provide fast direct rendering on PC-class
hardware).  The DRI, however, has been designed for hardware ranging from
low-end PC-class hardware to high-end SGI-class hardware, so it can be used 
on hardware that requires cooperative locking as well on hardware that can
be virtualized.


* Re: MMIO regions
  1999-10-06 20:15           ` James Simmons
@ 1999-10-11 17:09             ` Stephen C. Tweedie
  1999-10-11 17:26               ` Jeff Garzik
  1999-10-11 17:57               ` James Simmons
  0 siblings, 2 replies; 27+ messages in thread
From: Stephen C. Tweedie @ 1999-10-11 17:09 UTC (permalink / raw)
  To: James Simmons; +Cc: Linux MM, Stephen Tweedie

Hi,

On Wed, 6 Oct 1999 16:15:59 -0400 (EDT), James Simmons
<jsimmons@edgeglobal.com> said:

>> Look at http://www.precisioninsight.com/dr/locking.html for a
>> description of the cooperative lightweight locking used in the DRI 

> I have read those papers. It's not compatible with fbcon. It would
> require a massive rewrite which would break everything that works with
> fbcon. 

Sure.  It requires that people cooperate in order to take advantage of
the locking protection.

> When people start writing apps using DRI and it locks their machine or
> damages the hardware, the Linux kernel mailing list will have to
> hear those complaints. You know people will want to write their own
> stuff. Of course Precision Insight should make a license stating it is
> illegal to write your own code using their driver, or a warning, so they
> don't get their asses sued. These are the kinds of people who will
> look for other solutions like I am. So expect more like me.

You seem to be looking for a solution which doesn't exist, though. :)

It is an unfortunate, but true, fact that the broken video hardware
doesn't let you provide memory mapped access which is (a) fast, (b)
totally safe, and (c) functional.  Choose which of a, b and c you are
willing to sacrifice and then we can look for solutions.  DRI sacrifices
(b), for example, by making the locking cooperative rather than
compulsory.  The basic unaccelerated fbcon sacrifices (c).  Using VM
protection would sacrifice (a).  It's not the ideal choice, sadly.

--Stephen


* Re: MMIO regions
  1999-10-07 19:40           ` James Simmons
  1999-10-10 11:24             ` Rik Faith
@ 1999-10-11 17:22             ` Stephen C. Tweedie
  1 sibling, 0 replies; 27+ messages in thread
From: Stephen C. Tweedie @ 1999-10-11 17:22 UTC (permalink / raw)
  To: James Simmons; +Cc: Stephen C. Tweedie, Linux MM

Hi,

On Thu, 7 Oct 1999 15:40:32 -0400 (EDT), James Simmons
<jsimmons@edgeglobal.com> said:

> No VM stuff. I think the better approach is with the scheduler. 

The big problem there is threads.  We simply cannot have different VM
setups for different threads of a given process --- threads are
_defined_ as being processes which share the same VM.  The only way to
achieve VM serialisation in a threaded application via the scheduler is
to serialise the threads, which is rather contrary to what you want on
an SMP machine.  CivCTP and Quake 3 are already threaded and SMP-capable
on Linux, for example, and we have a threaded version of the Mesa openGL
libraries too.

--Stephen

* Re: MMIO regions
  1999-10-11 17:09             ` Stephen C. Tweedie
@ 1999-10-11 17:26               ` Jeff Garzik
  1999-10-11 23:14                 ` James Simmons
  1999-10-11 17:57               ` James Simmons
  1 sibling, 1 reply; 27+ messages in thread
From: Jeff Garzik @ 1999-10-11 17:26 UTC (permalink / raw)
  To: Stephen C. Tweedie; +Cc: James Simmons, Linux MM

"Stephen C. Tweedie" wrote:
> You seem to be looking for a solution which doesn't exist, though. :)

He's working on it though :)  http://imperial.edgeglobal.com/~jsimmons


> It is an unfortunate, but true, fact that the broken video hardware
> doesn't let you provide memory mapped access which is (a) fast, (b)
> totally safe, and (c) functional.  Choose which of a, b and c you are
> willing to sacrifice and then we can look for solutions.  DRI sacrifices
> (b), for example, by making the locking cooperative rather than
> compulsory.  The basic unaccelerated fbcon sacrifices (c).  Using VM
> protection would sacrifice (a).  It's not the ideal choice, sadly.

Seems like it would make sense for an fbcon driver to specify the level
of safety (and thus the level of speed penalty).

For the older cards, "slow and safe" shouldn't be a big problem, because
the typical scenario involves a single fbdev application using the
entire screen.  The fbcon/GGI driver would specify a NEED_SLOW_SYNC flag
when it registers.

Newer cards get progressively better at having internal
consistency for reads/writes of various MMIO regions and DMAable
operations.   The fbcon/GGI driver for these could specify the FAST flag
because the card can handle concurrent operations.
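
Hypothetically, something like this (none of these names exist in fbcon or
GGI today; they are made up to show the shape of the idea):

/* driver advertises how much concurrency the hardware can take */
#define FB_ACCESS_NEED_SLOW_SYNC  0x1   /* serialize every accessor      */
#define FB_ACCESS_FAST            0x2   /* card tolerates concurrent use */

struct fb_mmio_policy {
        unsigned int access_flags;      /* set by the fbcon/GGI driver   */
};

static int fb_allow_another_mapper(struct fb_mmio_policy *p, int mappers)
{
        if (p->access_flags & FB_ACCESS_FAST)
                return 1;               /* newer card: concurrent mmaps ok */
        return mappers == 0;            /* old card: one accessor at a time */
}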

Regards,

	Jeff




-- 
Custom driver development	|    Never worry about theory as long
Open source programming		|    as the machinery does what it's
				|    supposed to do.  -- R. A. Heinlein

* Re: MMIO regions
  1999-10-11 17:09             ` Stephen C. Tweedie
  1999-10-11 17:26               ` Jeff Garzik
@ 1999-10-11 17:57               ` James Simmons
  1 sibling, 0 replies; 27+ messages in thread
From: James Simmons @ 1999-10-11 17:57 UTC (permalink / raw)
  To: Stephen C. Tweedie; +Cc: Linux MM

> You seem to be looking for a solution which doesn't exist, though. :)

Well, my next experiment is RTLinux with acceleration. From what I have
been learning, the SGI kernel has a special scheduler for its graphics to
ensure hard realtime performance. I want to see how much of an impact
RTLinux will have with acceleration.

> It is an unfortunate, but true, fact that the broken video hardware
> doesn't let you provide memory mapped access which is (a) fast, (b)
> totally safe, and (c) functional.  Choose which of a, b and c you are
> willing to sacrifice and then we can look for solutions.  DRI sacrifices
> (b), for example, by making the locking cooperative rather than
> compulsory.  The basic unaccelerated fbcon sacrifices (c).  Using VM
> protection would sacrifice (a).  It's not the ideal choice, sadly.

Well, I see SGI uses usema, which is its version of flock-style locking. If
SGI does it then it's the right way :) Yes, I think SGI hardware is the
greatest in the world.


* Re: MMIO regions
  1999-10-11 17:26               ` Jeff Garzik
@ 1999-10-11 23:14                 ` James Simmons
  0 siblings, 0 replies; 27+ messages in thread
From: James Simmons @ 1999-10-11 23:14 UTC (permalink / raw)
  To: Jeff Garzik; +Cc: Stephen C. Tweedie, Linux MM

On Mon, 11 Oct 1999, Jeff Garzik wrote:

> > It is an unfortunate, but true, fact that the broken video hardware
> > doesn't let you provide memory mapped access which is (a) fast, (b)
> > totally safe, and (c) functional.  Choose which of a, b and c you are
> > willing to sacrifice and then we can look for solutions.  DRI sacrifices
> > (b), for example, by making the locking cooperative rather than
> > compulsory.  The basic unaccelerated fbcon sacrifices (c).  Using VM
> > protection would sacrifice (a).  It's not the ideal choice, sadly.
> 
> Seems like it would make sense for an fbcon driver to specify the level
> of safety (and thus the level of speed penalty).

Well, the new system I have implemented has eliminated mapping MMIO regions
to userspace. This makes way for DRI or any other solutions that might
come down the road. Also, I have written fbcon to release the console
system on explicit opening of fbdev. This way no accels are running in the
kernel while something like X is running, especially if something like DRI
is running. This prevents any possible conflicts. Yes, I sacrificed some
functionality of the current fbcon for DRI. I hope DRI will in turn help
support fbcon and help us write drivers.



Thread overview: 27+ messages
1999-10-04 14:38 MMIO regions James Simmons
1999-10-04 15:31 ` Stephen C. Tweedie
1999-10-04 15:52   ` James Simmons
1999-10-04 16:02     ` Benjamin C.R. LaHaise
1999-10-04 17:27       ` James Simmons
1999-10-04 17:56         ` Benjamin C.R. LaHaise
1999-10-04 18:26           ` James Simmons
1999-10-04 19:19         ` Stephen C. Tweedie
1999-10-06 20:15           ` James Simmons
1999-10-11 17:09             ` Stephen C. Tweedie
1999-10-11 17:26               ` Jeff Garzik
1999-10-11 23:14                 ` James Simmons
1999-10-11 17:57               ` James Simmons
1999-10-04 16:11     ` Stephen C. Tweedie
1999-10-04 18:29       ` James Simmons
1999-10-04 19:35         ` Stephen C. Tweedie
1999-10-07 19:40           ` James Simmons
1999-10-10 11:24             ` Rik Faith
1999-10-10 14:03               ` Eric W. Biederman
1999-10-10 18:46                 ` Rik Faith
1999-10-11  0:21                   ` James Simmons
1999-10-11 10:59                     ` Rik Faith
1999-10-11  3:38                   ` Eric W. Biederman
1999-10-10 14:21               ` James Simmons
1999-10-11 17:22             ` Stephen C. Tweedie
1999-10-04 16:58 ` Marcus Sundberg
1999-10-04 18:27   ` James Simmons
