* pressuring dirty pages (2.3.99-pre6)
@ 2000-04-24 19:54 Rik van Riel
2000-04-24 21:27 ` Stephen C. Tweedie
0 siblings, 1 reply; 12+ messages in thread
From: Rik van Riel @ 2000-04-24 19:54 UTC (permalink / raw)
To: linux-mm
[-- Attachment #1: Type: TEXT/PLAIN, Size: 934 bytes --]
Hi,
I've been trying to fix the VM balance for a week or so now,
and things are mostly fixed except for one situation.
If there is a *heavy* write going on and the data is in the
page cache only .. ie. no buffer heads available, then the
page cache will grow almost without bounds and kswapd and
the rest of the system will basically spin in shrink_mmap()...
What mechanism do we use to flush back dirty pages from eg.
mmap()s? How could I push those pages to disk the way we
do with buffers (by waking up bdflush)?
(yes, this is a big bug, please try the attached program by
Juan Quintela and set the #defines as wanted ... it'll make it
painfully clear that this bug exists and should be fixed)
regards,
Rik
--
The Internet is not a network of computers. It is a network
of people. That is its real strength.
Wanna talk about the kernel? irc.openprojects.net / #kernelnewbies
http://www.conectiva.com/ http://www.surriel.com/
[-- Attachment #2: qmtest.c --]
[-- Type: TEXT/PLAIN, Size: 1153 bytes --]
/*
 * Memory tester by Quintela.
 *
 * Creates a sparse file, mmap()s it shared, and dirties every page
 * through the mapping, putting heavy write pressure on the page cache.
 * Compile with e.g.: gcc -O2 qmtest.c -o qmtest
 */
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>

#define FILENAME "/tmp/testing_file"

/* Set this to roughly twice your physical memory, or less */
#define SIZE (128 * 1024 * 1024)

void error_string(char *msg)
{
        perror(msg);
        exit(EXIT_FAILURE);
}

int main(int argc, char *argv[])
{
        char *array;
        int i;
        int fd = open(FILENAME, O_RDWR | O_CREAT, 0666);

        if (fd == -1)
                error_string("Problems opening the file");

        /* Extend the file to SIZE bytes by writing one byte at the end */
        if (lseek(fd, SIZE, SEEK_SET) != SIZE)
                error_string("Problems doing the lseek");
        if (write(fd, "\0", 1) != 1)
                error_string("Problems writing");

        array = mmap(0, SIZE, PROT_WRITE, MAP_SHARED, fd, 0);
        if (array == MAP_FAILED)
                error_string("The mmap has failed");

        /* Dirty every byte of the mapping */
        for (i = 0; i < SIZE; i++)
                array[i] = i;

        /* Force the dirty pages out to disk */
        if (msync(array, SIZE, MS_SYNC) == -1)
                error_string("The msync has failed");

        close(fd);
        exit(EXIT_SUCCESS);
}
^ permalink raw reply	[flat|nested] 12+ messages in thread
* Re: pressuring dirty pages (2.3.99-pre6)
2000-04-24 19:54 pressuring dirty pages (2.3.99-pre6) Rik van Riel
@ 2000-04-24 21:27 ` Stephen C. Tweedie
2000-04-24 22:42 ` Rik van Riel
0 siblings, 1 reply; 12+ messages in thread
From: Stephen C. Tweedie @ 2000-04-24 21:27 UTC (permalink / raw)
To: riel; +Cc: linux-mm
Hi,
On Mon, Apr 24, 2000 at 04:54:38PM -0300, Rik van Riel wrote:
>
> I've been trying to fix the VM balance for a week or so now,
> and things are mostly fixed except for one situation.
>
> If there is a *heavy* write going on and the data is in the
> page cache only .. ie. no buffer heads available, then the
> page cache will grow almost without bounds and kswapd and
> the rest of the system will basically spin in shrink_mmap()...
shrink_mmap is the problem then -- it should be giving up
sooner and letting try_to_swap_out() deal with the pages. mmap()ed
dirty pages can only be freed through swapper activity, not via
shrink_mmap().
--Stephen
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: pressuring dirty pages (2.3.99-pre6)
2000-04-24 21:27 ` Stephen C. Tweedie
@ 2000-04-24 22:42 ` Rik van Riel
2000-04-25 9:35 ` Stephen C. Tweedie
2000-04-25 13:58 ` Eric W. Biederman
0 siblings, 2 replies; 12+ messages in thread
From: Rik van Riel @ 2000-04-24 22:42 UTC (permalink / raw)
To: Stephen C. Tweedie; +Cc: linux-mm
On Mon, 24 Apr 2000, Stephen C. Tweedie wrote:
> On Mon, Apr 24, 2000 at 04:54:38PM -0300, Rik van Riel wrote:
> >
> > I've been trying to fix the VM balance for a week or so now,
> > and things are mostly fixed except for one situation.
> >
> > If there is a *heavy* write going on and the data is in the
> > page cache only .. ie. no buffer heads available, then the
> > page cache will grow almost without bounds and kswapd and
> > the rest of the system will basically spin in shrink_mmap()...
>
> shrink_mmap is the problem then -- it should be giving up sooner
> and letting try_to_swap_out() deal with the pages. mmap()ed
> dirty pages can only be freed through swapper activity, not via
> shrink_mmap().
That will not work. The problem isn't that kswapd eats cpu,
but the problem is that the dirty pages completely dominate
physical memory.
I've tried the "giving up earlier" option in shrink_mmap(),
but that leads to memory filling up just as badly and giving
us the same kind of trouble.
I guess what we want is the kind of callback that we do in
the direction of the buffer cache, using something like the
bdflush wakeup call done in try_to_free_buffers() ...
Maybe a "special" return value from shrink_mmap() telling
do_try_to_free_pages() to run swap_out() unconditionally
after this successful shrink_mmap() call? Maybe even with
severity levels?
Eg. more calls to swap_out() if we encountered a lot of
dirty pages in shrink_mmap() ???
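Roughly something like this, just as a sketch (the extra argument
and the scaling are invented here, this is not what shrink_mmap()
looks like today):

        /* hypothetical sketch, in do_try_to_free_pages() */
        int dirty_seen = 0, freed, i;

        freed = shrink_mmap(priority, gfp_mask, &dirty_seen);

        /*
         * Severity: the more dirty pages shrink_mmap() had to skip,
         * the more swap_out() passes we make, so those pages get
         * unmapped and queued for writeback instead of being
         * skipped again on the next scan.
         */
        for (i = 0; i <= dirty_seen / 32; i++)
                swap_out(priority, gfp_mask);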
regards,
Rik
--
The Internet is not a network of computers. It is a network
of people. That is its real strength.
Wanna talk about the kernel? irc.openprojects.net / #kernelnewbies
http://www.conectiva.com/ http://www.surriel.com/
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: pressuring dirty pages (2.3.99-pre6)
2000-04-24 22:42 ` Rik van Riel
@ 2000-04-25 9:35 ` Stephen C. Tweedie
2000-04-25 15:25 ` Rik van Riel
2000-04-25 13:58 ` Eric W. Biederman
1 sibling, 1 reply; 12+ messages in thread
From: Stephen C. Tweedie @ 2000-04-25 9:35 UTC (permalink / raw)
To: riel; +Cc: Stephen C. Tweedie, linux-mm
Hi,
On Mon, Apr 24, 2000 at 07:42:12PM -0300, Rik van Riel wrote:
>
> That will not work. The problem isn't that kswapd eats cpu,
> but the problem is that the dirty pages completely dominate
> physical memory.
That isn't a "problem". That's a state. Of _course_ memory usage
is going to be dominated by whichever sort of page is being
predominantly used.
So we need to identify the real problem. Is 2.3 much worse than
2.2 at this dirty-write-mmap test? Are we seeing swap fragmentation
reducing swap throughput? Is the VM simply keeping insufficient
memory available for tasks other than the highly paging one?
--Stephen
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: pressuring dirty pages (2.3.99-pre6)
2000-04-25 9:35 ` Stephen C. Tweedie
@ 2000-04-25 15:25 ` Rik van Riel
0 siblings, 0 replies; 12+ messages in thread
From: Rik van Riel @ 2000-04-25 15:25 UTC (permalink / raw)
To: Stephen C. Tweedie; +Cc: linux-mm
On Tue, 25 Apr 2000, Stephen C. Tweedie wrote:
> On Mon, Apr 24, 2000 at 07:42:12PM -0300, Rik van Riel wrote:
> >
> > That will not work. The problem isn't that kswapd eats cpu,
> > but the problem is that the dirty pages completely dominate
> > physical memory.
>
> That isn't a "problem". That's a state. Of _course_ memory
> usage is going to be dominated by whichever sort of page is
> being predominantly used.
>
> So we need to identify the real problem. Is 2.3 much worse than
> 2.2 at this dirty-write-mmap test? Are we seeing swap
> fragmentation reducing swap throughput? Is the VM simply
> keeping insufficient memory available for tasks other than the
> highly paging one?
The highly paging task is pushing other tasks out of memory, even
though it doesn't do the task itself any good. In fact, some of
the typical memory hogs are found to run *faster* when we age their
pages better...
The combination of the above "push harder" logic together with my
anti-hog code may work the way we want ... I've just compiled it and
will be testing it for a while now.
regards,
Rik
--
The Internet is not a network of computers. It is a network
of people. That is its real strength.
Wanna talk about the kernel? irc.openprojects.net / #kernelnewbies
http://www.conectiva.com/ http://www.surriel.com/
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: pressuring dirty pages (2.3.99-pre6)
2000-04-24 22:42 ` Rik van Riel
2000-04-25 9:35 ` Stephen C. Tweedie
@ 2000-04-25 13:58 ` Eric W. Biederman
1 sibling, 0 replies; 12+ messages in thread
From: Eric W. Biederman @ 2000-04-25 13:58 UTC (permalink / raw)
To: riel; +Cc: Stephen C. Tweedie, linux-mm
Rik van Riel <riel@conectiva.com.br> writes:
> On Mon, 24 Apr 2000, Stephen C. Tweedie wrote:
> > On Mon, Apr 24, 2000 at 04:54:38PM -0300, Rik van Riel wrote:
> > >
> > > I've been trying to fix the VM balance for a week or so now,
> > > and things are mostly fixed except for one situation.
> > >
> > > If there is a *heavy* write going on and the data is in the
> > > page cache only .. ie. no buffer heads available, then the
> > > page cache will grow almost without bounds and kswapd and
> > > the rest of the system will basically spin in shrink_mmap()...
> >
> > shrink_mmap is the problem then -- it should be giving up sooner
> > and letting try_to_swap_out() deal with the pages. mmap()ed
> > dirty pages can only be freed through swapper activity, not via
> > shrink_mmap().
>
> That will not work. The problem isn't that kswapd eats cpu,
> but the problem is that the dirty pages completely dominate
> physical memory.
>
> I've tried the "giving up earlier" option in shrink_mmap(),
> but that leads to memory filling up just as badly and giving
> us the same kind of trouble.
>
> I guess what we want is the kind of callback that we do in
> the direction of the buffer cache, using something like the
> bdflush wakeup call done in try_to_free_buffers() ...
>
> Maybe a "special" return value from shrink_mmap() telling
> do_try_to_free_pages() to run swap_out() unconditionally
> after this successful shrink_mmap() call? Maybe even with
> severity levels?
>
> Eg. more calls to swap_out() if we encountered a lot of
> dirty pages in shrink_mmap() ???
I suspect the simplest thing we could do would be to actually implement
an RSS limit per struct mm. Roughly: in handle_pte_fault, if the page isn't
present and we are at our RSS limit, call swap_out_mm until we are
below the limit.
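Very roughly, something like this in the fault path (untested sketch;
rss_limit is a made-up per-mm field and the swap_out_mm() details are
glossed over):

        /* sketch: early in handle_pte_fault(), before bringing the page in */
        struct mm_struct *mm = vma->vm_mm;

        /* rss_limit would be a new field, set from ulimit or a default */
        while (mm->rss >= mm->rss_limit) {
                if (!swap_out_mm(mm))   /* unmap some of our own pages */
                        break;          /* nothing left to push out */
        }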
This won't hurt much in the uncontended case, because the page
cache will still keep everything anyway, some dirty pages
will just get buffer_heads, and bdflush might clean those pages.
In the contended case, it removes some of the burden from swap_out,
and it should give shrink_mmap some pages to work with...
How we can approach the ideal of dynamically managed max RSS
sizes is another question...
Eric
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: pressuring dirty pages (2.3.99-pre6)
@ 2000-04-25 14:27 Mark_H_Johnson.RTS
2000-04-25 16:30 ` Stephen C. Tweedie
0 siblings, 1 reply; 12+ messages in thread
From: Mark_H_Johnson.RTS @ 2000-04-25 14:27 UTC (permalink / raw)
To: Eric W. Biederman; +Cc: linux-mm, riel, sct
Re: "RSS limits"
It would be great to have a dynamic max limit. However, I can see a lot of
complexity in doing so. May I make a few suggestions?
- take a few moments to model the system operation under load. If the model
says RSS limits would help, by all means let's do it. If not, fix what we have.
If RSS limits are what we need, then
- implement the RSS limit using the current mechanism [e.g., ulimit]
- use a simple page removal algorithm to start with [e.g., "oldest page first"
or "address space order"]. The only caution I might add on this is to check that
the page you are removing isn't the one w/ the instruction you are executing
[else you page fault again on returning to the process].
- get measurements under load to validate the model and determine if the
solution is "good enough"
Then add the bells & whistles once the basic capability is proven.
Yes, it would be nice to remove the "least recently used" page - however, for
many applications this is quite similar to "oldest page". If I remember correctly
from a DECUS meeting (a talk about VMS's virtual memory system), they saw perhaps
5-10% improvement using LRU, with a lot of extra overhead in the kernel. [you have
to remember that taking the "wrong page" out of the process will result in a
low-cost page fault - that page didn't actually go into the swap area]
Yes, a dynamic max limit would be good. But even with a highly dynamic load on
the system [cycles of a burst of activity, then a quiet period], small RSS sizes
may also be "good enough". You can't tell without a model of system performance
or real measurements.
If we get to the point of implementing a dynamic RSS limit, let's make sure it
gets done with the right information and at the "right time". I suggest it not
be done at page fault time - give it to a process like kswapd where you can
review page fault rates and memory sizes and make a global adjustment.
--Mark H Johnson
<mailto:Mark_H_Johnson@raytheon.com>
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: pressuring dirty pages (2.3.99-pre6)
2000-04-25 14:27 Mark_H_Johnson.RTS
@ 2000-04-25 16:30 ` Stephen C. Tweedie
2000-04-25 19:14 ` Eric W. Biederman
0 siblings, 1 reply; 12+ messages in thread
From: Stephen C. Tweedie @ 2000-04-25 16:30 UTC (permalink / raw)
To: Mark_H_Johnson.RTS; +Cc: Eric W. Biederman, linux-mm, riel, sct
Hi,
On Tue, Apr 25, 2000 at 09:27:57AM -0500, Mark_H_Johnson.RTS@raytheon.com wrote:
> It would be great to have a dynamic max limit. However I can see a lot of
> complexity in doing so. May I make a few suggestions.
> - take a few moments to model the system operation under load. If the model
> says RSS limits would help, by all means lets do it. If not, fix what we have.
> If RSS limits are what we need, then
> - implement the RSS limit using the current mechanism [e.g., ulimit]
> - use a simple page removal algorithm to start with [e.g.,"oldest page first"
> or "address space order"]. The only caution I might add on this is to check that
> the page you are removing isn't the one w/ the instruction you are executing
We already have simple page removal algorithms.
The reason for the dynamic RSS limit isn't to improve the throughput
under load. It is to protect innocent processes from the effects of a
large memory hog in the system. It's easy enough to see that any pageout
algorithm which treats all pages fairly will have trouble if you have a
memory hog paging rapidly through all of its pages --- the hog process's
pages will be treated the same as any other process's pages, which means
that since the hog process is thrashing, it forces other tasks to do
likewise.
Note that RSS upper bounds are not the only way to achieve this. In a
thrashing situation, giving processes a lower limit --- an RSS guarantee
--- will also help, by allowing processes which don't need that much
memory to continue to work without any paging pressure at all.
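As a sketch of the difference (field names invented for illustration,
not real kernel fields): a pageout scan honouring guarantees would
simply skip any task that is still below its guaranteed RSS:

        /* sketch: in the swap-out candidate selection loop */
        if (mm->rss <= mm->rss_guarantee)
                continue;       /* under its guarantee: never touched */

        /* only tasks above their guarantee feel any paging pressure */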
--Stephen
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: pressuring dirty pages (2.3.99-pre6)
2000-04-25 16:30 ` Stephen C. Tweedie
@ 2000-04-25 19:14 ` Eric W. Biederman
2000-04-25 19:47 ` Rik van Riel
2000-04-26 11:06 ` Stephen C. Tweedie
0 siblings, 2 replies; 12+ messages in thread
From: Eric W. Biederman @ 2000-04-25 19:14 UTC (permalink / raw)
To: Stephen C. Tweedie; +Cc: Mark_H_Johnson.RTS, linux-mm, riel
"Stephen C. Tweedie" <sct@redhat.com> writes:
> Hi,
>
> On Tue, Apr 25, 2000 at 09:27:57AM -0500, Mark_H_Johnson.RTS@raytheon.com wrote:
>
>
> > It would be great to have a dynamic max limit. However I can see a lot of
> > complexity in doing so. May I make a few suggestions.
Agreed; all I suggest for now is to implement a max limit.
The dynamic limit was just food for thought.
> > - take a few moments to model the system operation under load. If the model
> > says RSS limits would help, by all means lets do it. If not, fix what we have.
>
> > If RSS limits are what we need, then
> > - implement the RSS limit using the current mechanism [e.g., ulimit]
> > - use a simple page removal algorithm to start with [e.g., "oldest page first"
> > or "address space order"]. The only caution I might add on this is to check that
> > the page you are removing isn't the one w/ the instruction you are executing
>
> We already have simple page removal algorithms.
>
> The reason for the dynamic RSS limit isn't to improve the throughput
> under load. It is to protect innocent processes from the effects of a
> large memory hog in the system. It's easy enough to see that any pageout
> algorithm which treats all pages fairly will have trouble if you have a
> memory hog paging rapidly through all of its pages --- the hog process's
> pages will be treated the same as any other process's pages, which means
> that since the hog process is thrashing, it forces other tasks to do
> likewise.
>
> Note that RSS upper bounds are not the only way to achieve this. In a
> thrashing situation, giving processes a lower limit --- an RSS guarantee
> --- will also help, by allowing processes which don't need that much
> memory to continue to work without any paging pressure at all.
Right. An RSS guarantee sounds like it would make for easier tuning.
But a hard RSS max has the advantage of hitting a memory space hog
early, before it has a chance to get all of memory dirty, and simply
penalizes the hog.
Also, under heavy load an RSS guarantee and an RSS hard limit are the
same. Though the angle of an RSS guarantee does open new ideas,
the biggest being: if you can meet all of the RSS guarantees, do
you start actually swapping out whole processes rather than paging,
or do you just go about readjusting everyone's RSS guarantee lower...
Maybe for the dynamic case we should just call it ideal_rss...
If I'm lucky I'll have some time this weekend to play with it.
But no guarantees.
Eric
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: pressuring dirty pages (2.3.99-pre6)
2000-04-25 19:14 ` Eric W. Biederman
@ 2000-04-25 19:47 ` Rik van Riel
2000-04-26 11:43 ` Stephen C. Tweedie
2000-04-26 11:06 ` Stephen C. Tweedie
1 sibling, 1 reply; 12+ messages in thread
From: Rik van Riel @ 2000-04-25 19:47 UTC (permalink / raw)
To: Eric W. Biederman; +Cc: Stephen C. Tweedie, Mark_H_Johnson.RTS, linux-mm
On 25 Apr 2000, Eric W. Biederman wrote:
> "Stephen C. Tweedie" <sct@redhat.com> writes:
> > On Tue, Apr 25, 2000 at 09:27:57AM -0500, Mark_H_Johnson.RTS@raytheon.com wrote:
> >
> > > It would be great to have a dynamic max limit. However I can see a lot of
> > > complexity in doing so. May I make a few suggestions.
>
> Agreed; all I suggest for now is to implement a max limit.
> The dynamic limit was just food for thought.
I have a solution for this.
My current anti-hog code already looks at what the biggest
process is. Any process which is in the same size class will
get a special bit set and has to call swap_out() on allocation
of a new page.
This will:
1) slow down the hogs a little, but give most slowdown to the
hog that does most allocations
2) cause memory in processes to be unmapped, populating
the lru queue without the help of kswapd ...
3) ... this makes sure we have a whole bunch of easily freeable
memory around ...
4) ... which in turn makes it easy to keep up with the high IO
rates which some memory hogs require, because it's easier to
free memory
So in __alloc_pages():

        if (current->hog)
                swap_out();
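The marking side boils down to something like this (sketch only with
invented names, not the real code, just to show the idea):

        /* sketch: run periodically, e.g. from kswapd */
        unsigned long biggest = 0;
        struct task_struct *p;

        for_each_task(p)
                if (p->mm && p->mm->rss > biggest)
                        biggest = p->mm->rss;

        for_each_task(p)
                /* "same size class": within a factor of two of the biggest */
                p->hog = (p->mm && 2 * p->mm->rss >= biggest);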
Of course this won't penalise processes like bonnie, which just
do a lot of IO, but that *isn't needed* at all because the cache
memory used for these processes is not mapped and occupies a big
portion of the lru queue .. so it's quite likely that we'll free
memory from this process when we free something.
In fact, the MM code I'm playing with at the moment seems pretty
resistant against things like bonnie and tar ...
regards,
Rik
--
The Internet is not a network of computers. It is a network
of people. That is its real strength.
Wanna talk about the kernel? irc.openprojects.net / #kernelnewbies
http://www.conectiva.com/ http://www.surriel.com/
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: pressuring dirty pages (2.3.99-pre6)
2000-04-25 19:47 ` Rik van Riel
@ 2000-04-26 11:43 ` Stephen C. Tweedie
0 siblings, 0 replies; 12+ messages in thread
From: Stephen C. Tweedie @ 2000-04-26 11:43 UTC (permalink / raw)
To: riel; +Cc: Eric W. Biederman, Stephen C. Tweedie, Mark_H_Johnson.RTS, linux-mm
Hi,
On Tue, Apr 25, 2000 at 04:47:52PM -0300, Rik van Riel wrote:
>
> My current anti-hog code already looks at what the biggest
> process is. Any process which is in the same size class will
> get a special bit set
What clears the bit?
--Stephen
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: pressuring dirty pages (2.3.99-pre6)
2000-04-25 19:14 ` Eric W. Biederman
2000-04-25 19:47 ` Rik van Riel
@ 2000-04-26 11:06 ` Stephen C. Tweedie
1 sibling, 0 replies; 12+ messages in thread
From: Stephen C. Tweedie @ 2000-04-26 11:06 UTC (permalink / raw)
To: Eric W. Biederman; +Cc: Stephen C. Tweedie, Mark_H_Johnson.RTS, linux-mm, riel
Hi,
On Tue, Apr 25, 2000 at 02:14:30PM -0500, Eric W. Biederman wrote:
> Right. A RSS guarantee sounds like it would make for easier tuning.
> But a hard RSS max has the advantage of hitting a memory space hog
> early, before it has a chance to get all of memory dirty, and simply
> penalizes the hog.
Agreed --- RSS limits for the biggest processes in the system are
definitely needed.
> Also, under heavy load an RSS guarantee and an RSS hard limit are the
> same.
Not at all --- that's only the case if you only have one process
experiencing memory pressure, or if you are in equilibrium. It's the
bits in between, where we are under changing load, which are the most
interesting, and in that case you still want your smallest processes to
have the protection of the RSS guarantees while you start dynamically
reducing the RSS limit on the biggest processes.
--Stephen
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread
Thread overview: 12+ messages
2000-04-24 19:54 pressuring dirty pages (2.3.99-pre6) Rik van Riel
2000-04-24 21:27 ` Stephen C. Tweedie
2000-04-24 22:42   ` Rik van Riel
2000-04-25  9:35     ` Stephen C. Tweedie
2000-04-25 15:25       ` Rik van Riel
2000-04-25 13:58     ` Eric W. Biederman
2000-04-25 14:27 Mark_H_Johnson.RTS
2000-04-25 16:30 ` Stephen C. Tweedie
2000-04-25 19:14   ` Eric W. Biederman
2000-04-25 19:47     ` Rik van Riel
2000-04-26 11:43       ` Stephen C. Tweedie
2000-04-26 11:06     ` Stephen C. Tweedie