linux-mm.kvack.org archive mirror
* Re: [RFC] RSS guarantees and limits
@ 2000-06-23 14:01 frankeh
  2000-06-23 17:56 ` Stephen Tweedie
  0 siblings, 1 reply; 28+ messages in thread
From: frankeh @ 2000-06-23 14:01 UTC (permalink / raw)
  To: linux-mm

How is shared memory accounted for?

Options are:
(a) Creator is charged
(b) prorated per number of users

Any other options come to mind?

-- Hubertus Franke
    IBM T.J.Watson Research Center


Rik van Riel <riel@conectiva.com.br>@kvack.org on 06/23/2000 10:15:46 AM

Sent by:  owner-linux-mm@kvack.org


To:   Ed Tomlinson <tomlins@cam.org>
cc:   linux-mm@kvack.org
Subject:  Re: [RFC] RSS guarantees and limits



On Thu, 22 Jun 2000, Ed Tomlinson wrote:

> Just wondering what will happen with java applications?  These
> beasts typically have working sets of 16M or more and use 10-20
> threads.  When using native threads linux sees each one as a
> process.  They all share the same memory though.

Ahh, but these limits are of course applied per _MM_, not
per thread ;)

regards,

Rik
--
The Internet is not a network of computers. It is a network
of people. That is its real strength.

Wanna talk about the kernel?  irc.openprojects.net / #kernelnewbies
http://www.conectiva.com/          http://www.surriel.com/

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/




* Re: [RFC] RSS guarantees and limits
@ 2000-06-23 18:07 frankeh
  0 siblings, 0 replies; 28+ messages in thread
From: frankeh @ 2000-06-23 18:07 UTC (permalink / raw)
  To: linux-mm

John, ...

I thought Rik actually takes care of that.  He doesn't necessarily penalize
a process because it is big.
He penalizes a process if its working set size is substantially smaller
than its memory footprint.
His measure for that is the refault rate: if he takes away pages that are
shortly thereafter faulted back in, then, as he stated, he is being too
aggressive. Since refaulting is cheap while the page is still in the
cache, the overhead should be reasonably small.

-- Hubertus



"John Fremlin" <vii@penguinpowered.com>@kvack.org on 06/23/2000 11:52:59 AM

Sent by:  owner-linux-mm@kvack.org


To:   <linux-mm@kvack.org>
cc:
Subject:  Re: [RFC] RSS guarantees and limits



Rik van Riel <riel@conectiva.com.br> writes:

[...]

> > I agree completely. It was one of the reasons I suggested that a
> > syscall like nice but giving info to the mm layer would be
> > useful. In general, small apps (xeyes,biff,gpm) don't deserve
> > any special treatment.
>
> Why not?  In scheduling processes which use less CPU get
> a better response time. Why not do the same for memory
> use? The less memory you use, the less agressive we'll be
> in trying to take it away from you.

CPU != memory.

Quick reasons:
        (1) A sleeping process consumes no CPU, but it still takes memory.

        (2) Take away 10% of the CPU from a program and it runs at about
        90% of its former speed. Take away 10% of its memory and it might
        run at only 5-10% of its former speed, because it has to wait for
        disk IO.
>
> Of course a small app should be removed from memory when
> it's sleeping, but there's no reason to not apply some
> degree of fairness in memory allocation and memory stealing.

[...]

You say you can't see why small processes like shells etc. shouldn't
be specially treated (your first paragraph). Folding the double negative,
you say there should be positive discrimination for these processes,
i.e. a fairer distribution of memory (your second paragraph).

If you think I'm not qualified to disagree, reread what Matthew Dillon
said to you while discussing VM changes in May:

    Well, I have a pretty strong opinion on trying to rationalize
    penalizing big processes simply because they are big.  It's a bad
    idea for several reasons, not the least of which being that by
    making such a rationalization you are assuming a particular system
    topology -- you are assuming, for example, that the system may
    contain a few large less-important processes and a reasonable
    number of small processes.  But if the system contains hundreds of
    small processes or if some of the large processes turn out to be
    important, the rationalization fails.

    Also if the large process in question happens to really need the
    pages (is accessing them all the time), trying to page those pages
    out gratuitously does nothing but create a massive paging load on
    the system.  Unless you have a mechanism (such as FreeBSD has)
    to impose a 20-second forced sleep under extreme memory loads, any
    focus on large processes will simply result in thrashing (read:
    screw up the system).

[...]

> > The only general solution I can see is to give some process
> > (groups) a higher MM priority, by analogy with nice.
>
> That you can't see anything better doesn't mean it isn't possible ;)

Indeed, I wait anxiously for someone to propose a better solution.

[...]

--

     http://altern.org/vii

* Re: [RFC] RSS guarantees and limits
@ 2000-06-22 23:02 Mark_H_Johnson
  0 siblings, 0 replies; 28+ messages in thread
From: Mark_H_Johnson @ 2000-06-22 23:02 UTC (permalink / raw)
  To: riel; +Cc: linux-mm, sct

Controls on resident set size have been one of the items I really want to
see established. I have some concerns about what is suggested here, and a
few suggestions. I prefer user-settable RSS limits that are enforced by
the kernel, with automated methods used when the user doesn't set any such
limits.

My situation is this. I'm looking at deploying a large "real time"
simulation on a cluster of Linux machines. The main application will be
locked in memory and must have predictable execution patterns. To aid in
development, we will have a number of workstations. I want to be able to
run the main application at "slower than real time" on those workstations -
using paging & swapping as needed.
  [1] Our real time application(s) will lock lots [perhaps 1/2 to 3/4] of
physical memory.
    - The RSS for our application must be at least large enough to cover
the "locked" memory plus some additional space for TBD purposes.
    - The RSS for remaining processes must be "reasonable" - take into
consideration the locked memory as unavailable until released.
    - The transition from lots of memory is "free" to lots of memory is
"locked" has to be managed in some way.
    We know in advance what "reasonable" values are for RLIMIT_RSS & can
set them appropriately. I doubt an automatic system can do well in this
case.
  [2] On the workstation, we want good performance from the program under
test.
    - The RSS of our application must be large relative to the rest of the
system applications
    - There needs to be some balance between our application and other
applications - to run gdb, X, and other tools used during test
    This is a similar situation to above when I really do want a "memory
hog" to use most of the system memory. I think user settable RSS limits
would still be better than an automatic system.

Using the existing RSS limits would go a long way toward enabling us to set
the system up and meet these diverse needs. At this time, I absolutely prefer
to initiate swapping of tasks to preserve the RSS of the application we're
delivering to our customer. On our development machines, some automatic
tuning would be OK, but I don't see how it will run "better" (as measured
by page fault rates) than with carefully selected values based on the
applications being run. If there's plenty of space available, I don't mind
automatic methods letting a process have more than the RSS limit [if
swapping isn't necessary]. If all [or most] of the processes have
"unlimited" for the RSS limit, do something reasonable in an automated way
as well. But if the user has specified RSS limits [via the RLIMIT_RSS
setting in setrlimit(2)], please abide by them. Thanks.
--Mark H Johnson
  <mailto:Mark_H_Johnson@raytheon.com>


                                                                                                                    
Rik van Riel <riel@conectiva.com.br> on 06/21/00 05:29 PM

To:   linux-mm@kvack.org
cc:   "Stephen C. Tweedie" <sct@redhat.com>, (bcc: Mark H Johnson/RTS/Raytheon/US)
Subject:  [RFC] RSS guarantees and limits



Hi,

I think I have an idea to solve the following problems:
- RSS guarantees and limits to protect applications from
  each other
- make sure streaming IO doesn't cause the RSS of the application
  to grow too large
- protect smaller apps from bigger memory hogs


The idea revolves around two concepts. The first idea is to
have an RSS guarantee and an RSS limit per application, which
is recalculated periodically. A process' RSS will not be shrunk
to under the guarantee and cannot be grown to over the limit.
The ratio between the guarantee and the limit is fixed (eg.
limit = 4 x guarantee).

The second concept is the keeping of statistics per mm. We will
keep statistics of both the number of page steals per mm and the
number of re-faults per mm. A page steal is when we forcefully
shrink the RSS of the mm, by swap_out. A re-fault is pretty similar
to a page fault, with the difference that re-faults only count the
pages that are 1) faulted in  and 2) were just stolen from the
application (and are still in the lru cache).


Every second (??) we walk the list of all tasks (mms?) and do
something very much like this:

if (mm->refaults * 2 > mm->steals) {
           mm->rss_guarantee += (mm->rss_guarantee >> 4) + 1;
} else {
           mm->rss_guarantee -= (mm->rss_guarantee >> 4) + 1;
}
mm->refaults >>= 1;
mm->steals >>= 1;


This will have different effects on different kinds of tasks.
For example, an application which has a fixed working set will
fault *all* its pages back in and get a big rss_guarantee (and
rss_limit).

However, an application which is streaming tons of data (and
using the data only once) will find itself in the situation
where it does not reclaim most of the pages that get stolen from
it. This means that the RSS of a data streaming application will
remain limited to its working set. This should reduce the bad
effects this app has on the rest of the system. Also, when the
app hits its RSS limit and the page it releases from its VM is
dirty, we can apply write throttling.


One extra protection is needed in this scheme. We must make sure
that the RSS guarantees combined never get too big. We can do this
by simply making sure that all the RSS guarantees combined never
get bigger than 1/2 of physical memory. If we "need" more than that,
we can simply decrease the biggest RSS guarantees until we get below
1/2 of physical memory.


regards,

Rik
--
The Internet is not a network of computers. It is a network
of people. That is its real strength.

Wanna talk about the kernel?  irc.openprojects.net / #kernelnewbies
http://www.conectiva.com/                      http://www.surriel.com/






* Re: [RFC] RSS guarantees and limits
@ 2000-06-22 16:22 frankeh
  2000-06-22 16:38 ` Rik van Riel
  2000-06-22 19:48 ` Jamie Lokier
  0 siblings, 2 replies; 28+ messages in thread
From: frankeh @ 2000-06-22 16:22 UTC (permalink / raw)
  To: linux-mm

Now I understand this much better. The RSS guarantee is a function of the
refault rate <clever>.
This in principle implements a decay of the limit based on usage... I like
that approach.
Is there a hard-stop RSS limit below which you will not evict pages from a
process (e.g. mem_size / MAX_PROCESSES?), to preserve some interactivity
for processes that haven't executed for a while, or do you just let it go
down based on the refault rate?

-- Hubertus




Rik van Riel <riel@conectiva.com.br>@kvack.org on 06/22/2000 12:35:06 PM

Sent by:  owner-linux-mm@kvack.org


To:   Hubertus Franke/Watson/IBM@IBMUS
cc:   linux-mm@kvack.org
Subject:  Re: [RFC] RSS guarantees and limits



On Thu, 22 Jun 2000 frankeh@us.ibm.com wrote:

> I assume that in the <workstation> scenario, where there are
> limited number of processes, your approach will work just fine.
>
> In a server scenario where you might have lots of processes
> (with limited resource requirements) this might have different
> effects. This inevitably will happen when we move Linux to NUMA
> or large-scale SMP systems and we apply images like that to
> webhosting.

This is exactly why I want to have the RSS guarantees and
limits auto-tune themselves, depending on the ratio between
re-faults (where we have stolen a page from the working set
of a process) and page steals (these pages were not from the
working set).

If we steal a lot of pages from a process and the process
doesn't take these same pages back, we should continue stealing
from that process since obviously it isn't using all its pages.
(or it only uses the pages once)

Also, stolen pages will stay around in memory, outside of the
working set of the process, but in one of the various caches.
If they are faulted back very quickly no disk IO is needed at
all ... and faulting them back quickly is an indication that
we're stealing too many pages from the process.

> Do you think that the resulting RSS guarantees (function of
> <mem_size/2*process_count>) will be sufficient ?

The RSS guarantee is just that, a guarantee. We guarantee that
the RSS of the process will not be shrunk below its guarantee,
but that doesn't stop any process from having a larger RSS (up
to its RSS limit).

> Or is your assumption that, for this kind of server app with
> lots of running processes, you had better not overextend your
> memory and start paging (an acceptable assumption)?

If we recycle memory pages _before_ the application can re-fault
them in from the page/swap cache, it won't be able to make the
re-fault and its RSS guarantee and limit will be shrunk...

regards,

Rik
--
The Internet is not a network of computers. It is a network
of people. That is its real strength.

Wanna talk about the kernel?  irc.openprojects.net / #kernelnewbies
http://www.conectiva.com/          http://www.surriel.com/


* Re: [RFC] RSS guarantees and limits
@ 2000-06-22 15:49 frankeh
  2000-06-22 16:05 ` Rik van Riel
  0 siblings, 1 reply; 28+ messages in thread
From: frankeh @ 2000-06-22 15:49 UTC (permalink / raw)
  To: linux-mm

I assume that in the <workstation> scenario, where there are a limited
number of processes, your approach will work just fine.

In a server scenario where you might have lots of processes (with limited
resource requirements) this might have different effects.
This inevitably will happen when we move Linux to NUMA or large-scale SMP
systems and we apply images like that to webhosting.

Do you think that the resulting RSS guarantees (a function of
<mem_size/2*process_count>) will be sufficient? Or is your assumption
that, for this kind of server app with lots of running processes, you had
better not overextend your memory and start paging (an acceptable
assumption)?



-- Hubertus


Rik van Riel <riel@conectiva.com.br> on 06/22/2000 12:01:18 PM

To:   Hubertus Franke/Watson/IBM@IBMUS
cc:   linux-mm@kvack.org
Subject:  Re: [RFC] RSS guarantees and limits



On Thu, 22 Jun 2000 frankeh@us.ibm.com wrote:

> Seems like a good idea, for ensuring some decent response time.
> This seems similar to what WinNT is doing.

There's a big difference here. I plan on making the RSS limit system
such that most applications should be somewhere between their limit
and their guarantee when the system is under "normal" levels of
memory pressure.

That is, I want to keep global page replacement the primary page
replacement strategy and only use the RSS guarantees and limits to
guide global page replacement and limit the impact memory hogs have
on the system.

> Do you envision that the "RSS guarantees" decay over time? I am
> concerned that some daemons hanging out there which might be
> executed very rarely (e.g. inetd) might hog too much memory
> (cumulatively speaking).  I think NT at some point pages out the
> entire working set for such apps.

This is what I want to avoid. Of course, if a task is really
sleeping it should be completely removed from
memory, but a _periodic_ task like top or atd may as well be
protected a bit if memory pressure is low enough.

I know I will have to adjust my rough draft quite a bit to
achieve the wanted effects...

regards,

Rik
--
The Internet is not a network of computers. It is a network
of people. That is its real strength.

Wanna talk about the kernel?  irc.openprojects.net / #kernelnewbies
http://www.conectiva.com/          http://www.surriel.com/





* Re: [RFC] RSS guarantees and limits
@ 2000-06-22 14:41 frankeh
  2000-06-22 15:31 ` Rik van Riel
  0 siblings, 1 reply; 28+ messages in thread
From: frankeh @ 2000-06-22 14:41 UTC (permalink / raw)
  To: Rik van Riel; +Cc: linux-mm

Seems like a good idea, for ensuring some decent response time.
This seems similar to what WinNT is doing.

Do you envision that the "RSS guarantees" decay over time? I am concerned
that some daemons hanging out there which might be executed very rarely
(e.g. inetd) might hog too much memory (cumulatively speaking).  I think NT
at some point pages out the entire working set for such apps.




-- Hubertus Franke
IBM T.J.Watson Research Center


Rik van Riel <riel@conectiva.com.br>@kvack.org on 06/21/2000 06:59:44 PM

Sent by:  owner-linux-mm@kvack.org


To:   linux-mm@kvack.org
cc:   "Stephen C. Tweedie" <sct@redhat.com>
Subject:  [RFC] RSS guarantees and limits



Hi,

I think I have an idea to solve the following problems:
- RSS guarantees and limits to protect applications from
  each other
- make sure streaming IO doesn't cause the RSS of the application
  to grow too large
- protect smaller apps from bigger memory hogs


The idea revolves around two concepts. The first idea is to
have an RSS guarantee and an RSS limit per application, which
is recalculated periodically. A process' RSS will not be shrunk
to under the guarantee and cannot be grown to over the limit.
The ratio between the guarantee and the limit is fixed (eg.
limit = 4 x guarantee).

The second concept is the keeping of statistics per mm. We will
keep statistics of both the number of page steals per mm and the
number of re-faults per mm. A page steal is when we forcefully
shrink the RSS of the mm, by swap_out. A re-fault is pretty similar
to a page fault, with the difference that re-faults only count the
pages that are 1) faulted in  and 2) were just stolen from the
application (and are still in the lru cache).


Every second (??) we walk the list of all tasks (mms?) and do
something very much like this:

if (mm->refaults * 2 > mm->steals) {
     mm->rss_guarantee += (mm->rss_guarantee >> 4) + 1;
} else {
     mm->rss_guarantee -= (mm->rss_guarantee >> 4) + 1;
}
mm->refaults >>= 1;
mm->steals >>= 1;


This will have different effects on different kinds of tasks.
For example, an application which has a fixed working set will
fault *all* its pages back in and get a big rss_guarantee (and
rss_limit).

However, an application which is streaming tons of data (and
using the data only once) will find itself in the situation
where it does not reclaim most of the pages that get stolen from
it. This means that the RSS of a data streaming application will
remain limited to its working set. This should reduce the bad
effects this app has on the rest of the system. Also, when the
app hits its RSS limit and the page it releases from its VM is
dirty, we can apply write throttling.


One extra protection is needed in this scheme. We must make sure
that the RSS guarantees combined never get too big. We can do this
by simply making sure that all the RSS guarantees combined never
get bigger than 1/2 of physical memory. If we "need" more than that,
we can simply decrease the biggest RSS guarantees until we get below
1/2 of physical memory.


regards,

Rik
--
The Internet is not a network of computers. It is a network
of people. That is its real strength.

Wanna talk about the kernel?  irc.openprojects.net / #kernelnewbies
http://www.conectiva.com/          http://www.surriel.com/






* [RFC] RSS guarantees and limits
@ 2000-06-21 22:29 Rik van Riel
  2000-06-22 18:00 ` John Fremlin
  0 siblings, 1 reply; 28+ messages in thread
From: Rik van Riel @ 2000-06-21 22:29 UTC (permalink / raw)
  To: linux-mm; +Cc: Stephen C. Tweedie

Hi,

I think I have an idea to solve the following problems:
- RSS guarantees and limits to protect applications from
  each other
- make sure streaming IO doesn't cause the RSS of the application
  to grow too large
- protect smaller apps from bigger memory hogs


The idea revolves around two concepts. The first idea is to
have an RSS guarantee and an RSS limit per application, which
is recalculated periodically. A process' RSS will not be shrunk
to under the guarantee and cannot be grown to over the limit.
The ratio between the guarantee and the limit is fixed (eg.
limit = 4 x guarantee).

The second concept is the keeping of statistics per mm. We will
keep statistics of both the number of page steals per mm and the
number of re-faults per mm. A page steal is when we forcefully
shrink the RSS of the mm, by swap_out. A re-fault is pretty similar
to a page fault, with the difference that re-faults only count the
pages that are 1) faulted in  and 2) were just stolen from the
application (and are still in the lru cache).


Every second (??) we walk the list of all tasks (mms?) and do
something very much like this:

if (mm->refaults * 2 > mm->steals) {
	mm->rss_guarantee += (mm->rss_guarantee >> 4) + 1;
} else {
	mm->rss_guarantee -= (mm->rss_guarantee >> 4) + 1;
}
mm->refaults >>= 1;
mm->steals >>= 1;


This will have different effects on different kinds of tasks.
For example, an application which has a fixed working set will
fault *all* its pages back in and get a big rss_guarantee (and
rss_limit).

However, an application which is streaming tons of data (and
using the data only once) will find itself in the situation
where it does not reclaim most of the pages that get stolen from
it. This means that the RSS of a data streaming application will
remain limited to its working set. This should reduce the bad
effects this app has on the rest of the system. Also, when the
app hits its RSS limit and the page it releases from its VM is
dirty, we can apply write throttling.


One extra protection is needed in this scheme. We must make sure
that the RSS guarantees combined never get too big. We can do this
by simply making sure that all the RSS guarantees combined never
get bigger than 1/2 of physical memory. If we "need" more than that,
we can simply decrease the biggest RSS guarantees until we get below
1/2 of physical memory.


regards,

Rik
--
The Internet is not a network of computers. It is a network
of people. That is its real strength.

Wanna talk about the kernel?  irc.openprojects.net / #kernelnewbies
http://www.conectiva.com/		http://www.surriel.com/







end of thread, other threads:[~2000-06-23 18:07 UTC | newest]

Thread overview: 28+ messages
-- links below jump to the message on this page --
2000-06-23 14:01 [RFC] RSS guarantees and limits frankeh
2000-06-23 17:56 ` Stephen Tweedie
  -- strict thread matches above, loose matches on Subject: below --
2000-06-23 18:07 frankeh
2000-06-22 23:02 Mark_H_Johnson
2000-06-22 16:22 frankeh
2000-06-22 16:38 ` Rik van Riel
2000-06-22 19:48 ` Jamie Lokier
2000-06-22 19:52   ` Rik van Riel
2000-06-22 20:00     ` Jamie Lokier
2000-06-22 20:07       ` Rik van Riel
2000-06-22 15:49 frankeh
2000-06-22 16:05 ` Rik van Riel
2000-06-22 14:41 frankeh
2000-06-22 15:31 ` Rik van Riel
2000-06-21 22:29 Rik van Riel
2000-06-22 18:00 ` John Fremlin
2000-06-22 19:12   ` Rik van Riel
2000-06-22 21:19   ` Stephen Tweedie
2000-06-22 21:37     ` Rik van Riel
2000-06-22 22:48       ` John Fremlin
2000-06-22 23:59         ` Stephen Tweedie
2000-06-23 16:08           ` John Fremlin
2000-06-22 22:39     ` John Fremlin
2000-06-22 23:27       ` Rik van Riel
2000-06-23  0:49         ` Ed Tomlinson
2000-06-23 13:45           ` Rik van Riel
2000-06-23 15:36             ` volodya
2000-06-23 15:52         ` John Fremlin
