linux-mm.kvack.org archive mirror
* Fairness in love and swapping
@ 1998-02-25 20:32 Stephen C. Tweedie
  1998-02-25 21:02 ` Linus Torvalds
                   ` (2 more replies)
  0 siblings, 3 replies; 27+ messages in thread
From: Stephen C. Tweedie @ 1998-02-25 20:32 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Stephen C. Tweedie, Benjamin C.R. LaHaise, Rik van Riel,
	Itai Nahshon, Alan Cox, paubert, linux-kernel, Ingo Molnar,
	linux-mm

Hmm.

I've been continuing to test the swapper stuff, and Linus has a couple
of patches which will help with spurious warnings --- I'll make a fresh
patch against 89pre1 shortly unless he beats me to it.  While testing, I
discovered a rather nasty behaviour inherent in the swapper.

The test program I was using allocates a large heap of pages and writes
different signatures to each page (keeping a copy of each signature in a
separate, compressed array).  It then forks off a number of reader
processes which continually validate the signatures in the heap pages,
and writer processes which do the same except that every so often they
write a new signature to a page and to the pattern table.  If the total
heap size exceeds available memory, then the whole thing has to swap
shared pages both in and out to work, and the writer tasks perform COW
on the shared pages.
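For concreteness, the signature-validation core of such a test might look like the following minimal sketch (not Stephen's actual program; the page count, signature scheme, and names are invented, and the fork/swap-pressure machinery is omitted):

```c
/* Minimal sketch of the signature test's validation core (hypothetical
 * names; not Stephen's actual program).  Each page is filled with a
 * one-byte signature, and a compact table keeps the expected values. */
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096
#define NPAGES    64

static unsigned char *heap;            /* NPAGES * PAGE_SIZE bytes */
static uint8_t sig_table[NPAGES];      /* compressed signature copy */

/* Writer step: stamp a page and record the signature. */
static void write_signature(int page, uint8_t sig)
{
    memset(heap + (size_t)page * PAGE_SIZE, sig, PAGE_SIZE);
    sig_table[page] = sig;
}

/* Reader step: check every byte of a page against the table. */
static int validate_page(int page)
{
    const unsigned char *p = heap + (size_t)page * PAGE_SIZE;

    for (size_t i = 0; i < PAGE_SIZE; i++)
        if (p[i] != sig_table[page])
            return 0;
    return 1;
}
```

The real test makes the heap larger than physical memory and forks reader and writer processes over shared pages, so every validation pass forces swap traffic and the writers trigger COW.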

I noticed something rather unfortunate when starting up two of these
tests simultaneously, each test using a bit less than total physical
memory.  The first test gobbled up the whole of ram as expected, but the
second test did not.  What happened was that the contention for memory
was keeping swap active all the time, but the processes which were
already all in memory just kept running at full speed and so their pages
all remained fresh in the page age table.  The newcomer processes were
never able to keep a page in memory long enough for their age to compete
with the old process' pages, and so I had a number of identical
processes, half of which were fully swapped in and half of which were
swapping madly.

Needless to say, this is highly unfair, but I'm not sure whether there
is any easy way round it --- any clock algorithm will have the same
problem, unless we start implementing dynamic resident set size limits.

Just a thought..

Cheers,
 Stephen.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: Fairness in love and swapping
  1998-02-25 20:32 Fairness in love and swapping Stephen C. Tweedie
@ 1998-02-25 21:02 ` Linus Torvalds
  1998-02-25 21:44   ` Rik van Riel
  1998-02-25 21:39 ` Dr. Werner Fink
  1998-02-26  8:05 ` Rogier Wolff
  2 siblings, 1 reply; 27+ messages in thread
From: Linus Torvalds @ 1998-02-25 21:02 UTC (permalink / raw)
  To: Stephen C. Tweedie
  Cc: Benjamin C.R. LaHaise, Rik van Riel, Itai Nahshon, Alan Cox,
	paubert, linux-kernel, Ingo Molnar, linux-mm



On Wed, 25 Feb 1998, Stephen C. Tweedie wrote:
> 
> I noticed something rather unfortunate when starting up two of these
> tests simultaneously, each test using a bit less than total physical
> memory.  The first test gobbled up the whole of ram as expected, but the
> second test did not.  What happened was that the contention for memory
> was keeping swap active all the time, but the processes which were
> already all in memory just kept running at full speed and so their pages
> all remained fresh in the page age table.  The newcomer processes were
> never able to keep a page in memory long enough for their age to compete
> with the old process' pages, and so I had a number of identical
> processes, half of which were fully swapped in and half of which were
> swapping madly.
> 
> Needless to say, this is highly unfair, but I'm not sure whether there
> is any easy way round it --- any clock algorithm will have the same
> problem, unless we start implementing dynamic resident set size limits.

Yes. This is similar to what I observed when I (a long time ago) made the
swap-out a lot more strictly "least recently used": what that ended up
showing very clearly was that interactive processes got swapped out very
aggressively indeed, because they had tended to touch their pages much
less than the memory-hogging ones.. 

What I _think_ should be done is that every time the accessed bit is
cleared in a process during the clock scan, the "swap-out priority" of
that process is _increased_. Right now it works the other way around: 
having the accessed bit set _decreases_ the priority for swapping, because
the pager thinks that that page shouldn't be paged out. 

Note that these are two different priorities: you have a "per-page" 
priority and a "per-process" priority, and they should have a reverse
relationship: being accessed should obviously make the "per-page" thing
less likely to page out, but it should make the "per process" thing _more_
likely to page out. 

The per-page thing we already obviously have. And we currently have
something that comes close to being a "per process" priority, which is
the "p->swap_cnt" thing. But it is not updated from the accessed bits;
it is instead adjusted based on the rss, and there is precious little
interaction between the two: at some point we should make the comparison
"is the per-page priority lower than the per-process priority?"
Right now we have an "absolute" comparison of the per-page priority for
determining whether to throw the page out or not, which isn't associated
with the per-process priority at all. 

(Note: in this context "per-process" really is "per-page-table", ie it
should probably be in p->mm->swap_cnt rather than in p->swap_cnt..) 
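A toy model of that two-priority interaction (invented names and constants, not kernel code) might look like:

```c
/* Toy model of the two-priority idea: during a clock scan, a set
 * accessed bit protects the page (per-page age goes up) but raises
 * the owning mm's swap-out priority, and a page is evicted only
 * when its age falls below its process's swap priority.  All
 * structures and constants here are invented for illustration. */
#include <assert.h>

struct mm {
    int swap_priority;   /* higher => better swap-out candidate */
};

struct page {
    struct mm *mm;
    int age;             /* higher => more recently used */
    int accessed;        /* hardware accessed bit */
};

/* One clock-scan step: returns 1 if the page should be swapped out. */
static int clock_step(struct page *pg)
{
    if (pg->accessed) {
        pg->accessed = 0;
        pg->age += 3;                 /* protect a touched page... */
        pg->mm->swap_priority += 1;   /* ...but its process looks hungrier */
        return 0;
    }
    pg->age -= 1;                     /* untouched pages cool down */
    /* Evict once the page is colder than its process is greedy. */
    return pg->age < pg->mm->swap_priority;
}
```

The point of the model is the reversed relationship: the same accessed bit that makes the page less likely to go out makes its process more likely to be chosen.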

I think this is something to look at.. 

		Linus


* Re: Fairness in love and swapping
  1998-02-25 20:32 Fairness in love and swapping Stephen C. Tweedie
  1998-02-25 21:02 ` Linus Torvalds
@ 1998-02-25 21:39 ` Dr. Werner Fink
  1998-02-25 22:27   ` Rik van Riel
  1998-02-26  8:05 ` Rogier Wolff
  2 siblings, 1 reply; 27+ messages in thread
From: Dr. Werner Fink @ 1998-02-25 21:39 UTC (permalink / raw)
  To: sct
  Cc: torvalds, blah, H.H.vanRiel, nahshon, alan, paubert,
	linux-kernel, mingo, linux-mm

>
> I noticed something rather unfortunate when starting up two of these
> tests simultaneously, each test using a bit less than total physical
> memory.  The first test gobbled up the whole of ram as expected, but the
> second test did not.  What happened was that the contention for memory
> was keeping swap active all the time, but the processes which were
> already all in memory just kept running at full speed and so their pages
> all remained fresh in the page age table.  The newcomer processes were
> never able to keep a page in memory long enough for their age to compete
> with the old process' pages, and so I had a number of identical
> processes, half of which were fully swapped in and half of which were
> swapping madly.

Maybe my changes for 2.0.3x in ipc/shm.c: shm_swap_in()

                shm_rss++;

                /* Give the physical reallocated page a bigger start */
                if (shm_rss < (MAP_NR(high_memory) >> 3))
                        mem_map[MAP_NR(page)].age = (PAGE_INITIAL_AGE + PAGE_ADVANCE);

and mm/page_alloc.c: swap_in()

        vma->vm_mm->rss++;
        tsk->maj_flt++;

        /* Give the physical reallocated page a bigger start */
        if (vma->vm_mm->rss < (MAP_NR(high_memory) >> 2))
                mem_map[MAP_NR(page)].age = (PAGE_INITIAL_AGE + PAGE_ADVANCE);


would help a bit.  With these few lines a recently swapped-in page gets a
bigger start through an increased page age ... but only if the
corresponding process does not take over most of physical memory.  This
change is not very smart (e.g. it's not a real comparison by process swap
count or priority) ... nevertheless it works for 2.0.33.

> 
> Needless to say, this is highly unfair, but I'm not sure whether there
> is any easy way round it --- any clock algorithm will have the same
> problem, unless we start implementing dynamic resident set size limits.
> 


               Werner


* Re: Fairness in love and swapping
  1998-02-25 21:02 ` Linus Torvalds
@ 1998-02-25 21:44   ` Rik van Riel
  0 siblings, 0 replies; 27+ messages in thread
From: Rik van Riel @ 1998-02-25 21:44 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Stephen C. Tweedie, Benjamin C.R. LaHaise, Itai Nahshon,
	Alan Cox, paubert, linux-kernel, Ingo Molnar, linux-mm

On Wed, 25 Feb 1998, Linus Torvalds wrote:
> On Wed, 25 Feb 1998, Stephen C. Tweedie wrote:
> > 
> > I noticed something rather unfortunate when starting up two of these
> > tests simultaneously, each test using a bit less than total physical
> > memory.  The first test gobbled up the whole of ram as expected, but the
> > second test did not.  What happened was that the contention for memory
> > was keeping swap active all the time, but the processes which were
> > already all in memory just kept running at full speed and so their pages
> > all remained fresh in the page age table.  The newcomer processes were
> > never able to keep a page in memory long enough for their age to compete
> > with the old process' pages, and so I had a number of identical
> > processes, half of which were fully swapped in and half of which were
> > swapping madly.
> 
> What I _think_ should be done is that every time the accessed bit is
> cleared in a process during the clock scan, the "swap-out priority" of
> that process is _increased_. Right now it works the other way around: 
> having the accessed bit set _decreases_ the priority for swapping, because
> the pager thinks that that page shouldn't be paged out. 
> 
> (Note: in this context "per-process" really is "per-page-table", ie it
> should probably be in p->mm->swap_cnt rather than in p->swap_cnt..) 

In the *BSDs (the original ones?), the last n pages of a
process's memory were considered holy, and were never swapped
out (unless the process was suspended; then _everything_ was
swapped out, including wired structures).

Personally, I have found that aging pagecache pages helps
interactive processes very much (try my mmap-age patch to
see for yourself).

We could implement some balancing by limiting the maximum number
of pages a process can have when its number of pagefaults/second
is lower than 1/2 of the systemwide pagefaults/second.

Alternatively, we can use a dynamic pagefault/megabyte
DSIZE tuning system, i.e. when a process has less than
half of the average pagefault/megabyte number, it is using
too much memory, and further memory allocation should
be satisfied by a swap_out_process(self, __GFP_IO|__GFP_WAIT)
instead of an allocation from the global pool.
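Under those stated assumptions, the fair-share test could be sketched as follows (illustrative only; `over_fair_share` and its threshold are invented here, not kernel code):

```c
/* Sketch of the pagefault-per-megabyte fairness test: a process whose
 * fault rate per MB of resident memory is below half the system-wide
 * average is treated as over its fair share, so its new allocations
 * should come out of its own resident set rather than the global
 * pool.  Hypothetical helper, not kernel code. */
#include <assert.h>

/* Returns 1 when the process should steal pages from itself. */
static int over_fair_share(unsigned long proc_faults, unsigned long proc_mb,
                           unsigned long sys_faults, unsigned long sys_mb)
{
    if (proc_mb == 0 || sys_mb == 0)
        return 0;
    /* proc_faults/proc_mb < (sys_faults/sys_mb)/2, cross-multiplied
     * to avoid integer truncation. */
    return 2 * proc_faults * sys_mb < sys_faults * proc_mb;
}
```

A process faulting rarely despite a large resident set is exactly the "fully swapped in" winner from Stephen's experiment; this test would tax it first.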

The BSD solution seems to be hopelessly outdated, so
the choice is between the last two solutions, with the
latter one being my favorite (despite the more difficult
calculations).

Rik.
+-----------------------------+------------------------------+
| For Linux mm-patches, go to | "I'm busy managing memory.." |
| my homepage (via LinuxHQ).  | H.H.vanRiel@fys.ruu.nl       |
| ...submissions welcome...   | http://www.fys.ruu.nl/~riel/ |
+-----------------------------+------------------------------+


* Re: Fairness in love and swapping
  1998-02-25 21:39 ` Dr. Werner Fink
@ 1998-02-25 22:27   ` Rik van Riel
  1998-02-26 11:03     ` Dr. Werner Fink
  0 siblings, 1 reply; 27+ messages in thread
From: Rik van Riel @ 1998-02-25 22:27 UTC (permalink / raw)
  To: Dr. Werner Fink; +Cc: sct, torvalds, nahshon, alan, paubert, mingo, linux-mm

On Wed, 25 Feb 1998, Dr. Werner Fink wrote:

> > all remained fresh in the page age table.  The newcomer processes were
> > never able to keep a page in memory long enough for their age to compete
> > with the old process' pages, and so I had a number of identical
> > processes, half of which were fully swapped in and half of which were
> > swapping madly.
> 
>         /* Give the physical reallocated page a bigger start */
>         if (vma->vm_mm->rss < (MAP_NR(high_memory) >> 2))
>                 mem_map[MAP_NR(page)].age = (PAGE_INITIAL_AGE + PAGE_ADVANCE);
> 
> 
> would help a bit.  With these few lines a recently swapped-in page gets
> a bigger start through an increased page age ... but only if the
> corresponding process does not take over most of physical memory.  This
> change is not very smart (e.g. it's not a real comparison by process swap
> count or priority) ... nevertheless it works for 2.0.33.

It looks kinda valid, and I'll try and tune it RSN. If
it gives any improvement, I'll send it to Linus for
inclusion.

Rik.
+-----------------------------+------------------------------+
| For Linux mm-patches, go to | "I'm busy managing memory.." |
| my homepage (via LinuxHQ).  | H.H.vanRiel@fys.ruu.nl       |
| ...submissions welcome...   | http://www.fys.ruu.nl/~riel/ |
+-----------------------------+------------------------------+


* Re: Fairness in love and swapping
  1998-02-25 20:32 Fairness in love and swapping Stephen C. Tweedie
  1998-02-25 21:02 ` Linus Torvalds
  1998-02-25 21:39 ` Dr. Werner Fink
@ 1998-02-26  8:05 ` Rogier Wolff
  1998-02-26 13:00   ` Dr. Werner Fink
                     ` (2 more replies)
  2 siblings, 3 replies; 27+ messages in thread
From: Rogier Wolff @ 1998-02-26  8:05 UTC (permalink / raw)
  To: Stephen C. Tweedie
  Cc: torvalds, blah, H.H.vanRiel, nahshon, alan, paubert,
	linux-kernel, mingo, linux-mm

Stephen C. Tweedie wrote:
> I noticed something rather unfortunate when starting up two of these
> tests simultaneously, each test using a bit less than total physical
> memory.  The first test gobbled up the whole of ram as expected, but the
> second test did not.  What happened was that the contention for memory
> was keeping swap active all the time, but the processes which were
> already all in memory just kept running at full speed and so their pages
> all remained fresh in the page age table.  The newcomer processes were
> never able to keep a page in memory long enough for their age to compete
> with the old process' pages, and so I had a number of identical
> processes, half of which were fully swapped in and half of which were
> swapping madly.
> 
> Needless to say, this is highly unfair, but I'm not sure whether there
> is any easy way round it --- any clock algorithm will have the same
> problem, unless we start implementing dynamic resident set size limits.

[ Processes P1 and P2 both need the same amount of CPU time, I've noted
the "completion percentages" at the top. ]

If you run it like this, you'll get:

          0        50       100
      P1  <---- in memory ----> 

          0                   5         50      100
      P2  < swapping like mad ><---- in memory ---> 

If you'd have enough memory for two of them you'd get:

          0                  50                100
      P1  <--------------- in memory ------------> 

          0                  50                100
      P2  <--------------- in memory ------------> 


but if the system would be "fair" we would get: 

          0                  5                 10            15
      P1  <------ swapping --- like --- mad ------------------- ....

          0                  5                 10            15
      P2  <------ swapping --- like --- mad ------------------- ....


So.... In some cases, this behaviour is exactly what you want. What we
really need is some mechanism that actually determines in the
first and last case that the system is thrashing like hell, and that
"swapping" (as opposed to paging) is becoming a required
strategy. That would mean putting a "page-in" ban on each process for
relatively long stretches of time. These bans should become longer
each time they recur. That way, you will get:

          0        50           51      100
      P1  <in memory>...........<in memory> 

          0          1        50           51      100
      P2  ...........<in memory>...........<in memory> 


By making the periods longer, you will cater for larger machines where
getting the working set into main memory might take a long time (think
about a machine with 4G of core, and a disk subsystem that reaches 4MB (*)
per second on "random access paging". That's a quarter of an hour's
worth of swapping before that 3.6G process is swapped in....)
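The arithmetic behind that figure, as a quick sanity check (illustrative numbers only):

```c
/* Back-of-the-envelope check of the swap-in time for a large
 * resident set at a given random-access paging rate. */
#include <assert.h>

static int swapin_seconds(int set_mb, int mb_per_sec)
{
    return set_mb / mb_per_sec;
}
```

3600 MB at 4 MB/s is 900 seconds, i.e. fifteen minutes, so the suspension periods really do have to scale with machine size.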

Regards,

		Roger Wolff. 



(*) That's about 10 fast disks in parallel. (**)

(**) But keeping 10 disks busy in this case is impossible: your
process (which "knows" what the next block will be) blocks until the
block is paged in.... 

-- 
If it's there and you can see it, it's REAL      |___R.E.Wolff@BitWizard.nl  |
If it's there and you can't see it, it's TRANSPARENT |  Tel: +31-15-2137555  |
If it's not there and you can see it, it's VIRTUAL   |__FAX:_+31-15-2138217  |
If it's not there and you can't see it, it's GONE! -- Roy Wilks, 1983  |_____|


* Re: Fairness in love and swapping
  1998-02-25 22:27   ` Rik van Riel
@ 1998-02-26 11:03     ` Dr. Werner Fink
  1998-02-26 11:34       ` Rik van Riel
  0 siblings, 1 reply; 27+ messages in thread
From: Dr. Werner Fink @ 1998-02-26 11:03 UTC (permalink / raw)
  To: H.H.vanRiel; +Cc: sct, torvalds, nahshon, alan, paubert, mingo, linux-mm

[...]
> 
> It looks kinda valid, and I'll try and tune it RSN. If
> it gives any improvement, I'll send it to Linus for
> inclusion.

There is one more point which makes ageing a bit unfair.  In
include/linux/pagemap.h, PAGE_AGE_VALUE is defined as 16, which is used in
__add_page_to_hash_queue() to set the age of a hashed page ... IMHO only
touch_page() should be used.  Moreover, a static value of 16
breaks the dynamic manner of swap control via /proc/sys/vm/swapctl.


         Werner


* Re: Fairness in love and swapping
  1998-02-26 11:03     ` Dr. Werner Fink
@ 1998-02-26 11:34       ` Rik van Riel
  1998-02-26 18:57         ` Dr. Werner Fink
  1998-02-26 22:44         ` Stephen C. Tweedie
  0 siblings, 2 replies; 27+ messages in thread
From: Rik van Riel @ 1998-02-26 11:34 UTC (permalink / raw)
  To: Dr. Werner Fink; +Cc: sct, torvalds, nahshon, alan, paubert, mingo, linux-mm

On Thu, 26 Feb 1998, Dr. Werner Fink wrote:

> There is one more point which makes ageing a bit unfair.  In
> include/linux/pagemap.h, PAGE_AGE_VALUE is defined as 16, which is used in
> __add_page_to_hash_queue() to set the age of a hashed page ... IMHO only
> touch_page() should be used.  Moreover, a static value of 16
> breaks the dynamic manner of swap control via /proc/sys/vm/swapctl.

Without my mmap-age patch, page cache pages aren't aged
at all... They're just freed whenever they weren't referenced
since the last scan. The PAGE_AGE_VALUE is quite useless IMO
(but I could be wrong, Stephen?).

Rik.
+-----------------------------+------------------------------+
| For Linux mm-patches, go to | "I'm busy managing memory.." |
| my homepage (via LinuxHQ).  | H.H.vanRiel@fys.ruu.nl       |
| ...submissions welcome...   | http://www.fys.ruu.nl/~riel/ |
+-----------------------------+------------------------------+


* Re: Fairness in love and swapping
  1998-02-26  8:05 ` Rogier Wolff
@ 1998-02-26 13:00   ` Dr. Werner Fink
  1998-02-26 22:36     ` Stephen C. Tweedie
  1998-02-26 14:30   ` Rik van Riel
  1998-02-26 22:33   ` Stephen C. Tweedie
  2 siblings, 1 reply; 27+ messages in thread
From: Dr. Werner Fink @ 1998-02-26 13:00 UTC (permalink / raw)
  To: R.E.Wolff; +Cc: sct, torvalds



[...]

> 
> but if the system would be "fair" we would get: 
> 
>           0                  5                 10            15
>       P1  <------ swapping --- like --- mad ------------------- ....
> 
>           0                  5                 10            15
>       P2  <------ swapping --- like --- mad ------------------- ....
> 
> 
> So.... In some cases, this behaviour is exactly what you want. What we
> really need is some mechanism that actually determines in the
> first and last case that the system is thrashing like hell, and that
> "swapping" (as opposed to paging) is becoming a required
> strategy. That would mean putting a "page-in" ban on each process for
> relatively long stretches of time. These bans should become longer
> each time they recur. That way, you will get:
> 
>           0        50           51      100
>       P1  <in memory>...........<in memory> 
> 
>           0          1        50           51      100
>       P2  ...........<in memory>...........<in memory> 
> 
> 
> By making the periods longer, you will cater for larger machines where
> getting the working set into main memory might take a long time (think
> about a machine with 4G core, and a disk subsystem that reaches 4Mb (*)
> per second on "random access paging". That's a quarter of an hour
> worth of swapping before that 3.6G process is swapped in....)

In other words: the pages swapped in or cached into the swap cache should
get an initial age which is itself calculated from the current priority
of the corresponding process?


         Werner


* Re: Fairness in love and swapping
  1998-02-26  8:05 ` Rogier Wolff
  1998-02-26 13:00   ` Dr. Werner Fink
@ 1998-02-26 14:30   ` Rik van Riel
  1998-02-26 22:41     ` Stephen C. Tweedie
  1998-02-26 22:33   ` Stephen C. Tweedie
  2 siblings, 1 reply; 27+ messages in thread
From: Rik van Riel @ 1998-02-26 14:30 UTC (permalink / raw)
  To: Rogier Wolff
  Cc: Stephen C. Tweedie, torvalds, blah, nahshon, alan, paubert,
	linux-kernel, mingo, linux-mm

On Thu, 26 Feb 1998, Rogier Wolff wrote:

>           0        50           51      100
>       P1  <in memory>...........<in memory> 
> 
>           0          1        50           51      100
>       P2  ...........<in memory>...........<in memory> 

Now, how do we select which processes to suspend temporarily
and which to wake up again...
Suspending X wouldn't be too good, since then a lot of other
processes would block on it... But this gives us a good clue
as to what to do.

We could:
- force-swap out processes which have slept for some time
- suspend & force-swap out the largest process
- wake it up again when there are two processes waiting on
  it (to prevent X from being swapped out)
- wake up the suspended process after some time (2 seconds
  per megabyte size?) and mark the process as just-suspended
  (and don't swap it out again for a guaranteed 1 second *
  megabyte size period)
- if necessary, suspend & swap another large process when
  we're short on memory again
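Those heuristics might fit together roughly like this (a sketch only; every structure, threshold, and constant below is invented for illustration):

```c
/* Sketch of the suspend/force-swap selection from the list above:
 * suspend the largest process that nothing depends on, wake it after
 * ~2 s per MB, and exempt anything with waiters (e.g. X).  All names
 * and numbers are invented, not kernel code. */
#include <assert.h>
#include <stddef.h>

struct proc {
    unsigned long size_mb;
    unsigned long waiters;         /* procs blocked on this one */
    unsigned long resident_until;  /* no re-suspend before this time */
};

/* Choose the largest process with fewer than two waiters whose
 * post-wakeup grace period has expired; NULL if none qualifies. */
static struct proc *pick_victim(struct proc *p, int n, unsigned long now)
{
    struct proc *best = NULL;

    for (int i = 0; i < n; i++) {
        if (p[i].waiters >= 2 || now < p[i].resident_until)
            continue;
        if (!best || p[i].size_mb > best->size_mb)
            best = &p[i];
    }
    return best;
}

/* Wake the suspended process after 2 seconds per megabyte. */
static unsigned long wakeup_time(const struct proc *p, unsigned long now)
{
    return now + 2 * p->size_mb;
}
```

The `resident_until` stamp implements the guaranteed residency period after wakeup, so the same large process is not immediately suspended again.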

Doing this together with a dynamic RSS-limit strategy and
page cache page aging might give us quite an improvement
in VM performance.

Of course, I'm quite sure that I forgot something,
so please comment on how/what you want things changed.

Rik.
+-----------------------------+------------------------------+
| For Linux mm-patches, go to | "I'm busy managing memory.." |
| my homepage (via LinuxHQ).  | H.H.vanRiel@fys.ruu.nl       |
| ...submissions welcome...   | http://www.fys.ruu.nl/~riel/ |
+-----------------------------+------------------------------+


* Re: Fairness in love and swapping
  1998-02-26 11:34       ` Rik van Riel
@ 1998-02-26 18:57         ` Dr. Werner Fink
  1998-02-26 19:32           ` Rik van Riel
  1998-02-26 22:44         ` Stephen C. Tweedie
  1 sibling, 1 reply; 27+ messages in thread
From: Dr. Werner Fink @ 1998-02-26 18:57 UTC (permalink / raw)
  To: H.H.vanRiel; +Cc: sct, torvalds, nahshon, alan, paubert, mingo, linux-mm

> 
> > There is one more point which makes ageing a bit unfair.  In
> > include/linux/pagemap.h, PAGE_AGE_VALUE is defined as 16, which is used in
> > __add_page_to_hash_queue() to set the age of a hashed page ... IMHO only
> > touch_page() should be used.  Moreover, a static value of 16
> > breaks the dynamic manner of swap control via /proc/sys/vm/swapctl.
> 
> Without my mmap-age patch, page cache pages aren't aged
> at all... They're just freed whenever they weren't referenced
> since the last scan. The PAGE_AGE_VALUE is quite useless IMO
> (but I could be wrong, Stephen?).

The age of a page cache page isn't changed if a process took it (?).  IMHO
that means that this age is the starting age of such a process page, isn't
it?  Maybe it would be a win if the initial page age and the increase and
decrease amounts for the page age depended on the priority or on the
length of the time slice of the owner process(es).


          Werner


* Re: Fairness in love and swapping
  1998-02-26 18:57         ` Dr. Werner Fink
@ 1998-02-26 19:32           ` Rik van Riel
  0 siblings, 0 replies; 27+ messages in thread
From: Rik van Riel @ 1998-02-26 19:32 UTC (permalink / raw)
  To: Dr. Werner Fink; +Cc: sct, torvalds, nahshon, alan, paubert, mingo, linux-mm

On Thu, 26 Feb 1998, Dr. Werner Fink wrote:

> > Without my mmap-age patch, page cache pages aren't aged
> 
> The age of a page cache page isn't changed if a process took it (?). IMHO that
> means that this age is the starting age of such a process page, isn't it?

No, it means that page-cache pages are swapped out immediately,
without taking the usage pattern into account (except when it
got used just before kswapd did its scanning).

My mmap-age patch does something to alleviate this, and I'll
make a patch against 2.1.89-pre2 any moment. You can probably
expect it on linux-mm before 0000UT this evening...

Rik.
+-----------------------------+------------------------------+
| For Linux mm-patches, go to | "I'm busy managing memory.." |
| my homepage (via LinuxHQ).  | H.H.vanRiel@fys.ruu.nl       |
| ...submissions welcome...   | http://www.fys.ruu.nl/~riel/ |
+-----------------------------+------------------------------+


* Re: Fairness in love and swapping
  1998-02-26  8:05 ` Rogier Wolff
  1998-02-26 13:00   ` Dr. Werner Fink
  1998-02-26 14:30   ` Rik van Riel
@ 1998-02-26 22:33   ` Stephen C. Tweedie
  1998-02-26 22:49     ` Rik van Riel
  1998-02-27  2:56     ` Michael O'Reilly
  2 siblings, 2 replies; 27+ messages in thread
From: Stephen C. Tweedie @ 1998-02-26 22:33 UTC (permalink / raw)
  To: Rogier Wolff
  Cc: Stephen C. Tweedie, torvalds, blah, H.H.vanRiel, nahshon, alan,
	paubert, linux-kernel, mingo, linux-mm

Hi,

On Thu, 26 Feb 1998 09:05:55 +0100 (MET), R.E.Wolff@BitWizard.nl
(Rogier Wolff) said:

> [ Processes P1 and P2 both need the same amount of CPU time, I've noted
> the "completion percentages" at the top. ]

> If you run it like this, you'll get:

>           0        50       100
>       P1  <---- in memory ----> 

>           0                   5         50      100
>       P2  < swapping like mad ><---- in memory ---> 

> but if the system would be "fair" we would get: 

>           0                  5                 10            15
>       P1  <------ swapping --- like --- mad ------------------- ....

>           0                  5                 10            15
>       P2  <------ swapping --- like --- mad ------------------- ....


> So.... In some cases, this behaviour is exactly what you want. 

It's maybe not "exactly what you want", but it can certainly be better
than being purely fair, for exactly this reason.  That's why it's hard
to see how we can improve much on the current scheme except to tweak
around the edges --- there are cases where being completely fair
actually reduces overall throughput substantially.

> What we really need is some mechanism that actually determines
> in the first and last case that the system is thrashing like hell,
> and that "swapping" (as opposed to paging) is becoming a required
> strategy. 

True.  Any takers for this?  :)

--Stephen


* Re: Fairness in love and swapping
  1998-02-26 13:00   ` Dr. Werner Fink
@ 1998-02-26 22:36     ` Stephen C. Tweedie
  1998-02-26 23:20       ` Dr. Werner Fink
  0 siblings, 1 reply; 27+ messages in thread
From: Stephen C. Tweedie @ 1998-02-26 22:36 UTC (permalink / raw)
  To: Dr. Werner Fink
  Cc: R.E.Wolff, sct, torvalds, blah, H.H.vanRiel, nahshon, alan,
	paubert, mingo, linux-mm


On Thu, 26 Feb 1998 14:00:18 +0100, "Dr. Werner Fink" <werner@suse.de> said:

>> "swapping" (as opposed to paging) is becoming a required
>> strategy

> In other words: the pages swapped in or cached into the swap cache
> should get an initial age which is itself calculated from the
> current priority of the corresponding process?

No, the idea is that we stop paging one or more processes altogether
and suspend them for a while, flushing their entire resident set out
to disk for the duration.  It's something very valuable when you are
running big concurrent batch jobs, and essentially moves the fairness
problem out of the memory space and into the scheduler, where we _can_
make a reasonable stab at being fair.

--Stephen


* Re: Fairness in love and swapping
  1998-02-26 14:30   ` Rik van Riel
@ 1998-02-26 22:41     ` Stephen C. Tweedie
  1998-02-26 23:21       ` Rik van Riel
  0 siblings, 1 reply; 27+ messages in thread
From: Stephen C. Tweedie @ 1998-02-26 22:41 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Rogier Wolff, Stephen C. Tweedie, torvalds, blah, nahshon, alan,
	paubert, linux-kernel, mingo, linux-mm

Hi,

On Thu, 26 Feb 1998 15:30:25 +0100 (MET), Rik van Riel
<H.H.vanRiel@fys.ruu.nl> said:

> Now, how do we select which processes to suspend temporarily
> and which to wake up again...
> Suspending X wouldn't be to good, since then a lot of other
> procesess would block on it... But this gives us a good clue
> as to what to do.

> We could:
> - force-swap out processes which have slept for some time
> - suspend & force-swap out the largest process
> - wake it up again when there are two proceses waiting on
>   it (to prevent X from being swapped out)

How do we define the number of processes waiting on a given process?

Another way of making the distinction between batch and interactive
processes might be to observe that interactive processes spend some of
their time in "S" (interruptible sleep) state, whereas we expect
compute-bound jobs to be in "R" or "D" state most of the time.
However, that breaks down too when you consider batch jobs involving
pipelines, such as gcc -pipe.
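As a sketch, that observation could become a crude classifier (the 25% threshold here is invented; real tuning would differ, and the gcc -pipe case would still defeat it):

```c
/* Crude batch-vs-interactive classifier from sleep-state accounting:
 * call a task "interactive" if a meaningful share of its time is in
 * interruptible sleep ('S'), on the theory that compute-bound batch
 * jobs sit in 'R' or 'D'.  The 25% threshold is invented. */
#include <assert.h>

static int looks_interactive(unsigned long ticks_sleeping,
                             unsigned long ticks_total)
{
    if (ticks_total == 0)
        return 0;
    return 4 * ticks_sleeping >= ticks_total;   /* >= 25% in 'S' */
}
```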

> Doing this together with a dynamic RSS-limit strategy and
> page cache page aging might give us quite an improvement
> in VM performance.

Yes, and doing streamed writeahead and clustered swapin will up the
throughput to/from swap quite significantly too.
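Clustered swap-in amounts to widening each fault into a window of neighbouring swap slots; a minimal sketch of the window computation (hypothetical helper, not kernel code):

```c
/* Compute a read-ahead window of swap slots around a faulting slot,
 * on the theory that neighbouring slots were written out together.
 * Hypothetical helper, not actual kernel code. */
#include <assert.h>

static int cluster_range(unsigned long slot, unsigned long nslots,
                         unsigned long cluster, unsigned long *start)
{
    unsigned long begin = slot >= cluster / 2 ? slot - cluster / 2 : 0;
    unsigned long end = begin + cluster;

    if (end > nslots)
        end = nslots;               /* clamp to the swap area */
    *start = begin;
    return (int)(end - begin);      /* number of slots to read */
}
```

Issuing the whole window as one request is what turns many small random reads into fewer, larger ones.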

Cheers,
 Stephen.


* Re: Fairness in love and swapping
  1998-02-26 11:34       ` Rik van Riel
  1998-02-26 18:57         ` Dr. Werner Fink
@ 1998-02-26 22:44         ` Stephen C. Tweedie
  1998-02-26 23:34           ` Rik van Riel
  1 sibling, 1 reply; 27+ messages in thread
From: Stephen C. Tweedie @ 1998-02-26 22:44 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Dr. Werner Fink, sct, torvalds, nahshon, alan, paubert, mingo, linux-mm

Hi,

On Thu, 26 Feb 1998 12:34:40 +0100 (MET), Rik van Riel
<H.H.vanRiel@fys.ruu.nl> said:

> Without my mmap-age patch, page cache pages aren't aged
> at all... They're just freed whenever they weren't referenced
> since the last scan. The PAGE_AGE_VALUE is quite useless IMO
> (but I could be wrong, Stephen?).

They _are_ useful for mapped images such as binaries (which are swapped
out by vmscan.c, not filemap.c), but not for otherwise unused, pure
cached pages.

--Stephen


* Re: Fairness in love and swapping
  1998-02-26 22:33   ` Stephen C. Tweedie
@ 1998-02-26 22:49     ` Rik van Riel
  1998-02-27  2:56     ` Michael O'Reilly
  1 sibling, 0 replies; 27+ messages in thread
From: Rik van Riel @ 1998-02-26 22:49 UTC (permalink / raw)
  To: Stephen C. Tweedie
  Cc: Rogier Wolff, torvalds, blah, nahshon, alan, paubert,
	linux-kernel, mingo, linux-mm

On Thu, 26 Feb 1998, Stephen C. Tweedie wrote:

> > What we really need is some mechanism that actually determines
> > in the first and last case that the system is thrashing like hell,
> > and that "swapping" (as opposed to paging) is becoming a required
> > strategy. 
> 
> True.  Any takers for this?  :)

Yup. Here's one :-)

I've got the NetBSD source (with comments dating back
to '84 and possibly before :-) and parts of the Digital
Unix system administrator's tuning guide next to me, so
I have some idea as to what to do...

But still, we need to come up with a general idea of
the algorithms first (if you don't believe this, take
a look at my memory-limit patch earlier today..).

Rik.
+-----------------------------+------------------------------+
| For Linux mm-patches, go to | "I'm busy managing memory.." |
| my homepage (via LinuxHQ).  | H.H.vanRiel@fys.ruu.nl       |
| ...submissions welcome...   | http://www.fys.ruu.nl/~riel/ |
+-----------------------------+------------------------------+


* Re: Fairness in love and swapping
  1998-02-26 22:36     ` Stephen C. Tweedie
@ 1998-02-26 23:20       ` Dr. Werner Fink
  0 siblings, 0 replies; 27+ messages in thread
From: Dr. Werner Fink @ 1998-02-26 23:20 UTC (permalink / raw)
  To: sct
  Cc: R.E.Wolff, torvalds, blah, H.H.vanRiel, nahshon, alan, paubert,
	mingo, linux-mm

> >> "swapping" (as opposed to paging) is becoming a required
> >> strategy
> 
> > In other words: the pages swapped in or cached into the swap cache
> > should get their initial age, which itself is calculated from the
> > current priority of the corresponding process?
> 
> No, the idea is that we stop paging one or more processes altogether
> and suspend them for a while, flushing their entire resident set out
> to disk for the duration.  It's something very valuable when you are
> running big concurrent batch jobs, and essentially moves the fairness
> problem out of the memory space and into the scheduler, where we _can_
> make a reasonable stab at being fair.

Ohmm ... yes, but it's a pity, because Rogier's diagrams brought an old
idea of mine back to mind :)  The idea was simply to give a process an
advantage over the others within its time slice, by making touch_page(),
age_page(), and a new inline initial_age() depend on the amount of the
process's time slice.


            Werner


* Re: Fairness in love and swapping
  1998-02-26 22:41     ` Stephen C. Tweedie
@ 1998-02-26 23:21       ` Rik van Riel
  0 siblings, 0 replies; 27+ messages in thread
From: Rik van Riel @ 1998-02-26 23:21 UTC (permalink / raw)
  To: Stephen C. Tweedie
  Cc: Rogier Wolff, torvalds, blah, nahshon, alan, paubert,
	linux-kernel, mingo, linux-mm

On Thu, 26 Feb 1998, Stephen C. Tweedie wrote:

> > We could:
> > - force-swap out processes which have slept for some time
> > - suspend & force-swap out the largest process
> > - wake it up again when there are two processes waiting on
> >   it (to prevent X from being swapped out)
> 
> Define the number of processes waiting on a given process?
> 
> Another way of making the distinction between batch and interactive
> processes might be to observe that interactive processes spend some of
> their time in "S" (interruptible sleep) state, whereas we expect
> compute-bound jobs to be in "R" or "D" state most of the time.
> However, that breaks down too when you consider batch jobs involving
> pipelines, such as gcc -pipe.

I think we should give programs points based on several
things:
time_in + how long has it been in-core in seconds (300 max)
data_sz + RSS + DSIZE (#pages)
fil_dsc - number of file descriptors (if it has loads of
          file descriptors, it communicates a lot with the environment
          and is less likely a batch process)
slp_tim + how long has it been sleeping (to force-swap, but not
          suspend sleeping processes) in seconds (300 max)
run_tim + how long has it been running/blocking without 'interactive'
          syscalls or state changes in seconds (300 max)
is_root - euid = 0 (500 points)

The more (+) points a process has, the more likely it is
to be selected for swapout. Now we need to come up with a
nice formula to select the processes and the swapout time.

Maybe:

points = time_in + (data_sz / fil_dsc) + slp_tim + run_tim - is_root

or:

points = (time_in / fil_dsc) + data_sz + slp_tim + run_tim - is_root
          ^^^^^^^^^^^^^^^^^ max. 300 points total

When swapping is needed, we simply walk the process table
and swap out the process with the most points...
But we _need_ to be sure that we don't pick X for a 30-second
break... How do we do that?
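
For concreteness, the second formula above works out to something like
the sketch below. Everything here is a guess at Rik's intent from the
mail, not existing kernel code: the 300-second caps, the 500-point root
discount, and the clamp on fil_dsc (to avoid dividing by zero) are all
assumptions of this sketch.

```c
/* Clamp the time-based inputs at 300, per the list above. */
#define CAP300(x) ((x) > 300 ? 300 : (x))

/* Hypothetical scoring function: time_in, slp_tim, run_tim in
 * seconds; data_sz (RSS + DSIZE) in pages; fil_dsc the number of
 * file descriptors; is_root nonzero for euid 0.  Higher score =
 * better swapout candidate. */
long swap_points(long time_in, long data_sz, long fil_dsc,
		 long slp_tim, long run_tim, int is_root)
{
	if (fil_dsc < 1)
		fil_dsc = 1;	/* don't divide by zero */

	return CAP300(time_in) / fil_dsc + data_sz
	     + CAP300(slp_tim) + CAP300(run_tim)
	     - (is_root ? 500 : 0);
}
```

A long-running 100-page batch job with two descriptors scores 600 under
this sketch, while the same process running as root scores 100, which
shows how the root bonus alone is nowhere near enough to protect X if
its data size is large.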

Rik.
+-----------------------------+------------------------------+
| For Linux mm-patches, go to | "I'm busy managing memory.." |
| my homepage (via LinuxHQ).  | H.H.vanRiel@fys.ruu.nl       |
| ...submissions welcome...   | http://www.fys.ruu.nl/~riel/ |
+-----------------------------+------------------------------+


* Re: Fairness in love and swapping
  1998-02-26 22:44         ` Stephen C. Tweedie
@ 1998-02-26 23:34           ` Rik van Riel
  1998-02-27 19:41             ` Stephen C. Tweedie
  0 siblings, 1 reply; 27+ messages in thread
From: Rik van Riel @ 1998-02-26 23:34 UTC (permalink / raw)
  To: Stephen C. Tweedie
  Cc: Dr. Werner Fink, torvalds, nahshon, alan, paubert, mingo, linux-mm

On Thu, 26 Feb 1998, Stephen C. Tweedie wrote:

> > Without my mmap-age patch, page cache pages aren't aged
> > at all... They're just freed whenever they weren't referenced
> > since the last scan. The PAGE_AGE_VALUE is quite useless IMO
> > (but I could be wrong, Stephen?).
> 
> They _are_ useful for mapped images such as binaries (which are swapped
> out by vmscan.c, not filemap.c), but not for otherwise unused, pure
> cached pages.

AFAIK, mapped images aren't part of a process's RSS, but
are page-cached (page->inode type of RSS). And swapping
of those vma's _is_ done in shrink_mmap() in filemap.c.

Furthermore, it's quite useful if your read-ahead pages
stay in memory for a while so you don't read them two
or even three times before they're actually used.

But if I've overlooked something, I'd really like to
hear about it... A bit of a clue never hurts when
coding up new patches :-)

Rik.
+-----------------------------+------------------------------+
| For Linux mm-patches, go to | "I'm busy managing memory.." |
| my homepage (via LinuxHQ).  | H.H.vanRiel@fys.ruu.nl       |
| ...submissions welcome...   | http://www.fys.ruu.nl/~riel/ |
+-----------------------------+------------------------------+


* Re: Fairness in love and swapping
  1998-02-26 22:33   ` Stephen C. Tweedie
  1998-02-26 22:49     ` Rik van Riel
@ 1998-02-27  2:56     ` Michael O'Reilly
  1 sibling, 0 replies; 27+ messages in thread
From: Michael O'Reilly @ 1998-02-27  2:56 UTC (permalink / raw)
  To: Stephen C. Tweedie
  Cc: Rogier Wolff, torvalds, blah, H.H.vanRiel, nahshon, alan,
	paubert, linux-kernel, mingo, linux-mm

"Stephen C. Tweedie" <sct@dcs.ed.ac.uk> writes:
> > What we really need is some mechanism that actually determines
> > in the first and last case that the system is thrashing like hell,
> > and that "swapping" (as opposed to paging) is becoming a required
> > strategy. 
> 
> True.  Any takers for this?  :)
> 

That should be fairly easy. A stab. If the MIN(page in rate, page out
rate) over the last 30 seconds(?) is greater than X, and there are
more than 2(?) processes involved, then start swapping (instead of
paging).

Taking a relatively long baseline means that you need a lot of paging
to trigger. Taking the min of the in/out rates means that it isn't
just a growing process, but something with a working set that's larger
than available ram. Taking the dispersion into account means that you
ignore a single process running out of ram.

Comments?

The tricky bit there is working out how many processes are
involved. Maybe something as simple as a circular log, N elements
long, that records the PID associated with each recent page-out/in.

This is cheap for the paging case, and then you can regularly poll the
rates to check.

int pid_log[N];			/* N a power of two */
int pid_log_next;

page_out/page_in()
	.....

	pid_log[pid_log_next] = pid;
	pid_log_next = (pid_log_next + 1) & (N - 1);

	++page_rate_in;		/* or page_rate_out */
	....


check_page_rates()

	age page rates;
	dispersion = number of distinct PIDs in the log;

	if (MIN(page_rate_in, page_rate_out) > blah &&
	    dispersion > 3) {
		swapping = 1;
	} else {
		swapping = 0;
	}
	...


* Re: Fairness in love and swapping
  1998-02-26 23:34           ` Rik van Riel
@ 1998-02-27 19:41             ` Stephen C. Tweedie
  1998-03-02 16:19               ` Rik van Riel
  0 siblings, 1 reply; 27+ messages in thread
From: Stephen C. Tweedie @ 1998-02-27 19:41 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Stephen C. Tweedie, Dr. Werner Fink, torvalds, nahshon, alan,
	paubert, mingo, linux-mm

Hi,

> AFAIK, mapped images aren't part of a process's RSS, but
> are page-cached (page->inode type of RSS). And swapping
> of those vma's _is_ done in shrink_mmap() in filemap.c.

No, absolutely not.  These pages are certainly present in the page
cache, but they are not swapped out there, and filemap.c never deals
directly with vma scans.  shrink_mmap() refuses to touch any pages which
have a reference count not exactly equal to one, so it avoids memory
mapped pages like the plague.  Memory mapped images are referenced
directly by a process's page tables, so they count against its resident
set size (which is defined as the number of present user-mode pages in
the page tables).

vmscan.c::try_to_swap_out() unhooks these pages from the page tables
when it wants to.  The final swapout of these pages takes place at the
end of that function, where it calls filemap.c::page_unuse(), which
takes care of removing the page from the page cache as soon as the last
reference from the page tables is removed.

> Furthermore, it's quite useful if your read-ahead pages
> stay in memory for a while so you don't read them two
> or even three times before they're actually used.

We will never read more than once --- the pages are still in the page
cache, so whenever we try to swap them in, we can always find the
readahead copy there.  Memory-mapped pages have to be in the page cache
before we are allowed to link them into the page tables, so the pages
are shared by both in the page cache *and* the page tables.  It is the
swapper which is responsible for turfing shared pages.  shrink_mmap()
only ever looks for unshared cache pages and buffers.

> But if I've overlooked something, I'd really like to hear about
> it... A bit of a clue never hurts when coding up new patches :-)

You're welcome. :)

Cheers,
 Stephen.


* Re: Fairness in love and swapping
  1998-02-27 19:41             ` Stephen C. Tweedie
@ 1998-03-02 16:19               ` Rik van Riel
  1998-03-02 22:35                 ` Stephen C. Tweedie
  0 siblings, 1 reply; 27+ messages in thread
From: Rik van Riel @ 1998-03-02 16:19 UTC (permalink / raw)
  To: Stephen C. Tweedie
  Cc: Dr. Werner Fink, torvalds, nahshon, alan, paubert, mingo, linux-mm

On Fri, 27 Feb 1998, Stephen C. Tweedie wrote:

> > AFAIK, mapped images aren't part of a proces' RSS, but
> > are page-cached (page->inode type of RSS). And swapping
> > of those vma's _is_ done in shrink_mmap() in filemap.c.
> 
> No, absolutely not.  These pages are certainly present in the page
[snip]
> > But if I've overlooked something, I'd really like to hear about
> > it... A bit of a clue never hurts when coding up new patches :-)
> 
> You're welcome. :)

Nevertheless, the system seems to run smoother when the
page-cache pages aren't thrown away immediately, but aged
as normal pages are. Read-ahead pages _are_ sometimes
freed before they're actually used, so in this case the
system _will_ have to read them again. But maybe a 'true'
LRU implementation for the hardly-referenced pages might
be better (with a sysctl-tunable timing knob).

start:
	page->age |= (1 << lru_age_factor);
referenced:
	page->age >>= 1;
	page->age |= (1 << lru_age_factor);
not-referenced:
	page->age >>= 1;
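
Those three transitions boil down to an exponentially-decaying
referenced bit. A user-space model, for illustration only (the function
names, the unsigned char standing in for page->age, and the shift
constant of 2 are all choices of this sketch; the factor is the tunable
mentioned above):

```c
#define LRU_AGE_FACTOR 2	/* the tunable; 2 is arbitrary here */

/* Initial age for a freshly-inserted page. */
unsigned char age_start(void)
{
	return 1 << LRU_AGE_FACTOR;
}

/* Scan found the page referenced: decay, then set the top bit again. */
unsigned char age_referenced(unsigned char age)
{
	return (age >> 1) | (1 << LRU_AGE_FACTOR);
}

/* Scan found the page untouched: just decay. */
unsigned char age_unreferenced(unsigned char age)
{
	return age >> 1;
}
```

With a factor of 2 a fresh page starts at 4 and survives three
unreferenced scans before its age hits 0 and it becomes reclaimable,
while a page touched on every scan converges upward, which is the
"better chance of reuse" behaviour being argued for.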

grtz,

Rik.
+-----------------------------+------------------------------+
| For Linux mm-patches, go to | "I'm busy managing memory.." |
| my homepage (via LinuxHQ).  | H.H.vanRiel@fys.ruu.nl       |
| ...submissions welcome...   | http://www.fys.ruu.nl/~riel/ |
+-----------------------------+------------------------------+


* Re: Fairness in love and swapping
  1998-03-02 16:19               ` Rik van Riel
@ 1998-03-02 22:35                 ` Stephen C. Tweedie
  1998-03-02 23:14                   ` Rik van Riel
  0 siblings, 1 reply; 27+ messages in thread
From: Stephen C. Tweedie @ 1998-03-02 22:35 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Stephen C. Tweedie, Dr. Werner Fink, torvalds, nahshon, alan,
	paubert, mingo, linux-mm

Hi,

On Mon, 2 Mar 1998 17:19:41 +0100 (MET), Rik van Riel
<H.H.vanRiel@fys.ruu.nl> said:

> Nevertheless, the system seems to run smoother when the
> page-cache pages aren't thrown away immediately, but aged
> as normal pages are. Read-ahead pages _are_ sometimes
> freed before they're actually used, so in this case the
> system _will_ have to read them again. 

Absolutely.  The trouble is that

a) the kernel likes to keep reclaiming pages from a single source if
it is finding it easy to locate unused pages there, so when it starts
on the page cache it _can_ get overzealous in reaping those pages;
and

b) starting to find free pages from swap is inherently difficult due
to the initial age placed on pages.

I rather suspect that with those patches it's not simply the aging of
page cache pages which helps performance, but also the tuning of the
balance between page cache and data page reclamation.

--Stephen


* Re: Fairness in love and swapping
  1998-03-02 22:35                 ` Stephen C. Tweedie
@ 1998-03-02 23:14                   ` Rik van Riel
  1998-03-03 22:59                     ` Stephen C. Tweedie
  0 siblings, 1 reply; 27+ messages in thread
From: Rik van Riel @ 1998-03-02 23:14 UTC (permalink / raw)
  To: Stephen C. Tweedie
  Cc: Dr. Werner Fink, torvalds, nahshon, alan, paubert, mingo, linux-mm

On Mon, 2 Mar 1998, Stephen C. Tweedie wrote:

> a) the kernel likes to keep reclaiming pages from a single source if
> it is finding it easy to locate unused pages there, so when it starts
> on the page cache it _can_ get overzealous in reaping those pages;
> and

Correction: It _will_ get overzealous in reaping those pages.

> b) starting to find free pages from swap is inherently difficult due
> to the initial age placed on pages.
> 
> I rather suspect with those patches that it's not simply the aging of
> page cache pages which helps performance, but also the tuning of the
> balance between page cache and data page reclamation.

That's why I proposed the true LRU aging on those pages,
so they get a better chance of (re)use before they're
really freed and forgotten about (and need to be reread
in the case of readahead pages).

I might be working on this RSN.

Rik.
+-----------------------------+------------------------------+
| For Linux mm-patches, go to | "I'm busy managing memory.." |
| my homepage (via LinuxHQ).  | H.H.vanRiel@fys.ruu.nl       |
| ...submissions welcome...   | http://www.fys.ruu.nl/~riel/ |
+-----------------------------+------------------------------+


* Re: Fairness in love and swapping
  1998-03-02 23:14                   ` Rik van Riel
@ 1998-03-03 22:59                     ` Stephen C. Tweedie
  0 siblings, 0 replies; 27+ messages in thread
From: Stephen C. Tweedie @ 1998-03-03 22:59 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Stephen C. Tweedie, Dr. Werner Fink, torvalds, nahshon, alan,
	paubert, mingo, linux-mm

Hi,

On Tue, 3 Mar 1998 00:14:43 +0100 (MET), Rik van Riel <H.H.vanRiel@fys.ruu.nl> said:

>> I rather suspect with those patches that it's not simply the aging of
>> page cache pages which helps performance, but also the tuning of the
>> balance between page cache and data page reclamation.

> That's why I proposed the true LRU aging on those pages,
> so they get a better chance of (re)use before they're
> really freed and forgotten about (and need to be reread
> in the case of readahead pages).

That's exactly what all the work on being able to look up ptes from
the page address is about.  To get the balancing right, we really want
a single vmscan routine which deals with every single page fairly,
rather than skipping about between free page sources.  To do that, we
need to be able to look up the ptes from the physical address.

Given that functionality, whole new worlds open up. :)

There is one other big balancing problem right now --- if there are
insufficient free pages to instantly grow the buffer cache, then getting
a new buffer defaults to reusing the oldest buffer.  I'd like to nuke
that breakage, because it leaves the buffer cache at the mercy of the
other caches in a busy system, and stops us from caching useful stuff
such as commonly used indirect blocks and directories.

--Stephen


* Re: Fairness in love and swapping
       [not found] <199802270729.IAA00680@cave.BitWizard.nl>
@ 1998-02-27 11:26 ` Rik van Riel
  0 siblings, 0 replies; 27+ messages in thread
From: Rik van Riel @ 1998-02-27 11:26 UTC (permalink / raw)
  To: Rogier Wolff; +Cc: Stephen C. Tweedie, linux-mm

On Fri, 27 Feb 1998, Rogier Wolff wrote:

> Rik van Riel wrote:
> > fil_dsc - number of file descriptors (if it has loads of
> >           file descriptors, it communicates a lot with the environment
> >           and is less likely a batch process)
> 
> At shell they have 3D datasets. 
> 
> They store them in an "array of 2D files". That way you can do:
> 
>          (echo "P5";echo 230 500;cat file24) | xv -
> 
> A program processing these e.g. in 2D, but then along a different axis
> as over here, would have all 300 files open at the same time.......

OK, we could count only the file descriptors that don't
refer to regular files. The number of network connections
(to hosts other than itself) is usually a good indication
of program interactivity, and the number of network I/Os
is a good bonus signal too.

Rik.
+-----------------------------+------------------------------+
| For Linux mm-patches, go to | "I'm busy managing memory.." |
| my homepage (via LinuxHQ).  | H.H.vanRiel@fys.ruu.nl       |
| ...submissions welcome...   | http://www.fys.ruu.nl/~riel/ |
+-----------------------------+------------------------------+


end of thread, other threads:[~1998-03-03 23:00 UTC | newest]

Thread overview: 27+ messages
1998-02-25 20:32 Fairness in love and swapping Stephen C. Tweedie
1998-02-25 21:02 ` Linus Torvalds
1998-02-25 21:44   ` Rik van Riel
1998-02-25 21:39 ` Dr. Werner Fink
1998-02-25 22:27   ` Rik van Riel
1998-02-26 11:03     ` Dr. Werner Fink
1998-02-26 11:34       ` Rik van Riel
1998-02-26 18:57         ` Dr. Werner Fink
1998-02-26 19:32           ` Rik van Riel
1998-02-26 22:44         ` Stephen C. Tweedie
1998-02-26 23:34           ` Rik van Riel
1998-02-27 19:41             ` Stephen C. Tweedie
1998-03-02 16:19               ` Rik van Riel
1998-03-02 22:35                 ` Stephen C. Tweedie
1998-03-02 23:14                   ` Rik van Riel
1998-03-03 22:59                     ` Stephen C. Tweedie
1998-02-26  8:05 ` Rogier Wolff
1998-02-26 13:00   ` Dr. Werner Fink
1998-02-26 22:36     ` Stephen C. Tweedie
1998-02-26 23:20       ` Dr. Werner Fink
1998-02-26 14:30   ` Rik van Riel
1998-02-26 22:41     ` Stephen C. Tweedie
1998-02-26 23:21       ` Rik van Riel
1998-02-26 22:33   ` Stephen C. Tweedie
1998-02-26 22:49     ` Rik van Riel
1998-02-27  2:56     ` Michael O'Reilly
     [not found] <199802270729.IAA00680@cave.BitWizard.nl>
1998-02-27 11:26 ` Rik van Riel
