* Broad questions about the current design
From: Scott Kaplan @ 2002-08-09 15:12 UTC (permalink / raw)
  To: linux-mm

Hi folks,

I'm in the process of trying to do some experiments that require modifying 
the VM system to gather recency hit distribution statistics online.  I'm 
just beginning to get the hang of the code, so I need some help, 
particularly with the latest versions, which seem to me substantially 
different from the last versions that were (semi-)documented.  Some of 
these questions may be foolish, and the last one in particular rambles a 
bit as I think straight into the keyboard, but I am interested in your 
responses:

1) What happened to page ages?  I found them in 2.4.0, but they're
    gone by 2.4.19, and remain gone in 2.5.30.  The active list scan
    seems to start at the tail and work its way towards the head,
    demoting to the inactive list those pages whose reference bit is
    cleared.  This seems to be like some kind of hybrid in between a
    FIFO policy and a CLOCK algorithm.  Pages are inserted and scanned
    based on the FIFO ordering, but given a second chance much like a
    CLOCK.  Is a similar approach used for queuing pages for cleaning
    and for reclamation?  Am I interpreting this code in
    refill_inactive correctly?

2) Is there only one inactive list now?  Again, somewhere between
    2.4.0 and 2.4.19, inactive_dirty_list and the per-zone
    inactive_clean_lists disappeared.  How are the inactive_clean
    and inactive_dirty pages separated?  Or are they no longer kept
    separate in that way, and simply distinguished when trying to
    reclaim pages?

3) Does the scanning of pages (roughly every page within a minute)
    create a lot of avoidable overhead?  I can see that such scanning
    is necessary when page aging is used, as the ages must be updated
    to maintain this frequency-of-use information.  However, in the
    absence of page ages, scanning seems superfluous.  Some amount of
    scanning for the purpose of flushing groups of dirty pages seems
    appropriate, but that doesn't require the continual scanning of
    all pages.  Clearing reference bits on roughly the same time scale
    with which those bits are set could require regular and complete
    scanning, but the value of that reference-bit-clearing has not been
    clearly demonstrated (or has it?).

    How much overhead *does* this scanning introduce?  Does it really
    yield performance that is so much better than, say, a SEGQ
    (CLOCK->LRU) structure with a single-handed clock?  Is it worth
    raising this point when justifying rmap?  Specifically, we're
    already accustomed to some amount of overhead in VM bookkeeping in
    order to avoid bad memory management -- what fraction of the total
    overhead would be due to rmap in bad cases when compared to this
    overhead?

Many thanks for answers and thoughts that you can provide.  I do have one 
other question that is important to me:  How much should I expect this code to 
continue to change?  Is this basic structure likely to change, or will 
there only be tuning improvements and minor modifications?

Scott

* Re: Broad questions about the current design
From: William Lee Irwin III @ 2002-08-09 15:52 UTC (permalink / raw)
  To: Scott Kaplan; +Cc: linux-mm

On Fri, Aug 09, 2002 at 11:12:20AM -0400, Scott Kaplan wrote:
> 1) What happened to page ages?  I found them in 2.4.0, but they're
>    gone by 2.4.19, and remain gone in 2.5.30.  The active list scan
>    seems to start at the tail and work its way towards the head,
>    demoting to the inactive list those pages whose reference bit is
>    cleared.  This seems to be like some kind of hybrid in between a
>    FIFO policy and a CLOCK algorithm.  Pages are inserted and scanned
>    based on the FIFO ordering, but given a second chance much like a
>    CLOCK.  Is a similar approach used for queuing pages for cleaning
>    and for reclamation?  Am I interpreting this code in
>    refill_inactive correctly?

The cleaning and reclamation are done in the same pass AFAICT.
As (little as) I understand it, it's a highly unusual algorithm.


On Fri, Aug 09, 2002 at 11:12:20AM -0400, Scott Kaplan wrote:
> 2) Is there only one inactive list now?  Again, somewhere between
>    2.4.0 and 2.4.19, inactive_dirty_list and the per-zone
>    inactive_clean_lists disappeared.  How are the inactive_clean
>    and inactive_dirty pages separated?  Or are they no longer kept
>    separate in that way, and simply distinguished when trying to
>    reclaim pages?

Pending patches for 2.5.30 make it per-zone. 2.4.x will stay as it is.
The search problem created by ZONE_DMA/ZONE_NORMAL/ZONE_HIGHMEM
mixtures in queues can be severe.
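
To make the search problem concrete, here's a toy userspace model -- nothing
below is kernel code, and every name in it is invented for illustration.
When one global list mixes zones, a reclaim that can only use pages from a
particular zone ends up stepping over everything else first:

/*
 * Toy userspace model of the search problem a mixed-zone queue creates.
 * Not kernel code; the names are invented for illustration only.
 */
#include <stdio.h>
#include <stdlib.h>

enum zone { ZONE_DMA, ZONE_NORMAL, ZONE_HIGHMEM };

struct page {
    enum zone zone;
    struct page *next;              /* scan order: cold end towards hot end */
};

/* Walk from the cold end, counting the pages we must skip before finding
 * one that satisfies a request restricted to the wanted zone. */
static struct page *find_reclaimable(struct page *cold_end, enum zone wanted,
                                     int *skipped)
{
    *skipped = 0;
    for (struct page *p = cold_end; p; p = p->next) {
        if (p->zone == wanted)
            return p;
        (*skipped)++;               /* wasted work caused by zone mixing */
    }
    return NULL;
}

int main(void)
{
    enum { NPAGES = 1000 };
    struct page *pages = calloc(NPAGES, sizeof(*pages));
    int skipped;

    /* A queue that is mostly highmem, as on a large-memory machine. */
    for (int i = 0; i < NPAGES; i++) {
        pages[i].zone = (i % 100 == 99) ? ZONE_NORMAL : ZONE_HIGHMEM;
        pages[i].next = (i + 1 < NPAGES) ? &pages[i + 1] : NULL;
    }

    find_reclaimable(&pages[0], ZONE_NORMAL, &skipped);
    printf("skipped %d pages before finding a ZONE_NORMAL page\n", skipped);
    free(pages);
    return 0;
}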


On Fri, Aug 09, 2002 at 11:12:20AM -0400, Scott Kaplan wrote:
> 3) Does the scanning of pages (roughly every page within a minute)
>    create a lot of avoidable overhead?  I can see that such scanning
>    is necessary when page aging is used, as the ages must be updated
>    to maintain this frequency-of-use information.  However, in the
>    absence of page ages, scanning seems superfluous.  Some amount of
>    scanning for the purpose of flushing groups of dirty pages seems
>    appropriate, but that doesn't require the continual scanning of
>    all pages.  Clearing reference bits on roughly the same time scale
>    with which those bits are set could require regular and complete
>    scanning, but the value of that reference-bit-clearing has not been
>    clearly demonstrated (or has it?).

I suspect it is overzealous. The attack on the CPU consumption of the
page replacement algorithms has generally been on making the searches
more efficient, not on reducing the frequency of scanning. rmap *should*
be able to get away with a lot less scanning because it can get at the
pte's directly. Page replacement is not my primary focus, though.
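
The point, roughly, is that with a per-page pte chain the referenced bits
can be tested and cleared by walking a short list hanging off the page
itself, instead of scanning whole address spaces to find the mappings.
A heavily simplified userspace sketch -- the structures and names below
are inventions loosely modelled on the rmap idea, not the actual 2.5 code:

/*
 * Userspace sketch: reference detection with a per-page chain of ptes.
 * Everything here is a simplified invention, not the real rmap code.
 */
#include <stdbool.h>
#include <stdio.h>

#define PTE_YOUNG 0x1                     /* hardware-set "referenced" bit */

struct pte {
    unsigned long flags;
};

struct pte_chain {                        /* one link per mapping of the page */
    struct pte *ptep;
    struct pte_chain *next;
};

struct page {
    struct pte_chain *pte_chain;
};

/* Test-and-clear the young bit in every pte that maps this page.  The cost
 * is proportional to the number of mappings of this one page. */
static bool page_was_referenced(struct page *page)
{
    bool referenced = false;

    for (struct pte_chain *pc = page->pte_chain; pc; pc = pc->next) {
        if (pc->ptep->flags & PTE_YOUNG) {
            pc->ptep->flags &= ~PTE_YOUNG;
            referenced = true;
        }
    }
    return referenced;
}

int main(void)
{
    struct pte a = { PTE_YOUNG }, b = { 0 };
    struct pte_chain cb = { &b, NULL }, ca = { &a, &cb };
    struct page page = { &ca };

    /* The first check sees a young bit; the second does not, because the
     * first check cleared it -- which is what the replacement scan needs. */
    int first = page_was_referenced(&page);
    int second = page_was_referenced(&page);
    printf("referenced: %d then %d\n", first, second);
    return 0;
}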


On Fri, Aug 09, 2002 at 11:12:20AM -0400, Scott Kaplan wrote:
>    How much overhead *does* this scanning introduce?  Does it really
>    yield performance that is so much better than, say, a SEGQ
>    (CLOCK->LRU) structure with a single-handed clock?  Is it worth
>    raising this point when justifying rmap?  Specifically, we're
>    already accustomed to some amount of overhead in VM bookkeeping in
>    order to avoid bad memory management -- what fraction of the total
>    overhead would be due to rmap in bad cases when compared to this
>    overhead?

I haven't seen an implementation of it. Not sure if others have, either.
Might be worth checking out, but I'm tied up with superpages (yes,
Hubertus, I've got a diff or two for you after I finish this mail).


On Fri, Aug 09, 2002 at 11:12:20AM -0400, Scott Kaplan wrote:
> Many thanks for answers and thoughts that you can provide.  I do have one 
> other question that is important to me:  How much should I expect this code to 
> continue to change?  Is this basic structure likely to change, or will 
> there only be tuning improvements and minor modifications?

The page replacement bits in the VM are *ahem* frequently rewritten,
though some things (e.g. buddy system, software pagetable stuff) seem
to rarely be touched.


Cheers,
Bill

* Re: Broad questions about the current design
From: Rik van Riel @ 2002-08-09 15:53 UTC (permalink / raw)
  To: Scott Kaplan; +Cc: linux-mm

On Fri, 9 Aug 2002, Scott Kaplan wrote:

> 1) What happened to page ages?  I found them in 2.4.0, but they're
>     gone by 2.4.19, and remain gone in 2.5.30.  The active list scan
>     seems to start at the tail and work its way towards the head,
>     demoting to the inactive list those pages whose reference bit is
>     cleared.  This seems to be like some kind of hybrid in between a
>     FIFO policy and a CLOCK algorithm.  Pages are inserted and scanned
>     based on the FIFO ordering, but given a second chance much like a
>     CLOCK.  Is a similar approach used for queuing pages for cleaning
>     and for reclamation?  Am I interpreting this code in
>     refill_inactive correctly?

The thing is that there are now 4 different VMs for Linux:

- 2.4 mainline
- 2.4 -aa
- 2.4 -rmap
- 2.5

2.4 mainline and 2.4-aa are mostly the same, but 2.4 rmap has
the LRU lists completely per zone and uses page aging.

2.5 is halfway between the two when it comes to page replacement.

> 2) Is there only one inactive list now?  Again, somewhere between
>     2.4.0 and 2.4.19, inactive_dirty_list and the per-zone
>     inactive_clean_lists disappeared.  How are the inactive_clean
>     and inactive_dirty pages separated?  Or are they no longer kept
>     separate in that way, and simply distinguished when trying to
>     reclaim pages?

They are no longer separated out.

> 3) Does the scanning of pages (roughly every page within a minute)
>     create a lot of avoidable overhead?  I can see that such scanning
>     is necessary when page aging is used, as the ages must be updated
>     to maintain this frequency-of-use information.  However, in the
>     absence of page ages, scanning seems superfluous.  Some amount of
>     scanning for the purpose of flushing groups of dirty pages seems
>     appropriate, but that doesn't require the continual scanning of
>     all pages.  Clearing reference bits on roughly the same time scale
>     with which those bits are set could require regular and complete
>     scanning, but the value of that reference-bit-clearing has not been
>     clearly demonstrated (or has it?).
>
>     How much overhead *does* this scanning introduce?  Does it really
>     yield performance that is so much better than, say, a SEGQ
>     (CLOCK->LRU) structure with a single-handed clock?  Is it worth
>     raising this point when justifying rmap?  Specifically, we're
>     already accustomed to some amount of overhead in VM bookkeeping in
>     order to avoid bad memory management -- what fraction of the total
>     overhead would be due to rmap in bad cases when compared to this
>     overhead?

Good questions, I hope you'll be able to find answers because
I don't have them ;)

> Many thanks for answers and thoughts that you can provide.  I do have one
> other question that is important to me:  How much should I expect this code to
> continue to change?  Is this basic structure likely to change, or will
> there only be tuning improvements and minor modifications?

The code will probably keep changing on an almost monthly
basis until 2.6.0 is out. Your input in deciding what to
change would be very much welcome...

kind regards,

Rik
-- 
Bravely reimplemented by the knights who say "NIH".

http://www.surriel.com/		http://distro.conectiva.com/


* Re: Broad questions about the current design
From: Daniel Phillips @ 2002-08-12  9:13 UTC (permalink / raw)
  To: Scott Kaplan, linux-mm

On Friday 09 August 2002 17:12, Scott Kaplan wrote:
> 1) What happened to page ages?  I found them in 2.4.0, but they're
>     gone by 2.4.19, and remain gone in 2.5.30.

One day, around 2.4.9, Andrea showed up on lkml with a 'VM rewrite', which 
replaced the page aging with a simpler LRU mechanism (described below).  As 
we had never managed to get Rik's aging mechanism tuned so that it would 
behave predictably in corner cases, Linus decided to switch out the whole 
aging mechanism in favor of Andrea's patch.  Though the decision was 
controversial at the time, it turned out to be quite correct: VM-related 
complaints on lkml dropped off rapidly starting from that time.

The jury is still out as to whether aging or LRU is the better page 
replacement policy, and to date, no formal comparisons have been done.

>     The active list scan
>     seems to start at the tail and work its way towards the head,
>     demoting to the inactive list those pages whose reference bit is
>     cleared.  This seems to be like some kind of hybrid in between a
>     FIFO policy and a CLOCK algorithm.  Pages are inserted and scanned
>     based on the FIFO ordering, but given a second chance much like a
>     CLOCK.  Is a similar approach used for queuing pages for cleaning
>     and for reclamation?  Am I interpreting this code in
>     refill_inactive correctly?
>

This code implements the LRU on the active list:

http://lxr.linux.no/source/mm/vmscan.c?v=2.5.28#L349:

349                 if (page->pte.chain && page_referenced(page)) {
350                         list_del(&page->lru);
351                         list_add(&page->lru, &active_list);
352                         pte_chain_unlock(page);
353                         continue;
354                 }

Yes, it was supposed to be LRU but as you point out, it's merely a clock.
It would be an LRU if the list deletion and reinsertion occurred directly in 
try_to_swap_out, but there the page referenced bit is merely set.  I asked
Andrea why he did not do this and he wasn't sure, but he thought that maybe 
the way he did it was more efficient.

For any page that is explicitly touched, e.g., by file IO, we use 
activate_page, which moves the page to the head of the active list regardless 
of which list the page is currently on.  This is a classic LRU.

http://lxr.linux.no/source/mm/swap.c?v=2.5.28#L39:

39 static inline void activate_page_nolock(struct page * page)
40 {
41         if (PageLRU(page) && !PageActive(page)) {
42                 del_page_from_inactive_list(page);
43                 add_page_to_active_list(page);
44                 KERNEL_STAT_INC(pgactivate);
45         }
46 }

The inactive list is a FIFO queue.  So you have a (sort-of) LRU feeding pages
from its cold end into the FIFO, and if a page stays on the FIFO long 
enough to reach the cold end it gets evicted, or at least the eviction 
process starts.  It's a fairly effective arrangement, except for the part 
about not really implementing the LRU properly, and needing to find page 
referenced bits by virtual scanning.  The latter means that the referenced 
information at the cold end of the LRU and FIFO is unreliable.
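
In case a compressed model helps, the flow looks roughly like this -- a toy
userspace sketch of the idea only, with invented names (the refill function
merely mimics the role of the real refill_inactive), not the actual
vmscan.c code:

/*
 * Toy model of the two-list flow: a second-chance scan of the active
 * list's cold end feeds a FIFO-ish inactive list, and eviction happens
 * from the inactive list's cold end.  Sketch only, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX 16                              /* small fixed bound for the sketch */

struct page { int id; bool referenced; };

struct list { struct page p[MAX]; int n; }; /* p[0] is hot, p[n-1] is cold */

static void push_hot(struct list *l, struct page pg)
{
    memmove(&l->p[1], &l->p[0], l->n * sizeof(pg));   /* everyone gets colder */
    l->p[0] = pg;
    l->n++;
}

static struct page pop_cold(struct list *l)
{
    return l->p[--l->n];
}

/* One refill pass: demote unreferenced cold pages, second-chance the rest. */
static void refill_inactive_sketch(struct list *active, struct list *inactive,
                                   int nr)
{
    while (nr-- && active->n) {
        struct page pg = pop_cold(active);
        if (pg.referenced) {
            pg.referenced = false;          /* second chance: back to the hot end */
            push_hot(active, pg);
        } else {
            push_hot(inactive, pg);         /* now a candidate for cleaning/eviction */
        }
    }
}

int main(void)
{
    struct list active = { .n = 0 }, inactive = { .n = 0 };

    for (int i = 0; i < 6; i++)
        push_hot(&active, (struct page){ .id = i, .referenced = (i % 2 == 0) });

    refill_inactive_sketch(&active, &inactive, 6);

    while (inactive.n)                      /* eviction works from the cold end */
        printf("evict page %d\n", pop_cold(&inactive).id);
    return 0;
}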

The LRU behavior would be better, I suppose, if the page activation were done 
in try_to_swap_out.  I haven't tried this, because I think it's more 
important to get the reverse mapping work nailed down so that the page 
referenced information is reliable.  Otherwise, tuning the scanner is just 
too frustrating, and better left to those who, by instinct, can keep Fiats 
running ;-)

-- 
Daniel

* Re: Broad questions about the current design
From: Scott Kaplan @ 2002-08-12 17:58 UTC (permalink / raw)
  To: linux-mm

On Monday, August 12, 2002, at 05:13 AM, Daniel Phillips wrote:

> On Friday 09 August 2002 17:12, Scott Kaplan wrote:
>> 1) What happened to page ages?  I found them in 2.4.0, but they're
>>     gone by 2.4.19, and remain gone in 2.5.30.
>
> The jury is still out as to whether aging or LRU is the better page
> replacement policy, and to date, no formal comparisons have been done.

Okay, so *that* explains when and how it happened.  Thank you.  As to 
``which is better'':  At the least, it's a very hard question to answer, 
not only because testing VM policies is difficult, but also because this 
policy is managing pages that are being put to very different uses.  From 
a VM perspective, LRU tends to be the better idea, and frequency 
information, while always tempting, is generally a bad idea with one 
notable exception -- in those cases where LRU performs poorly, frequency 
allows the replacement policy to deviate from LRU.  As it happens, just 
about *anything* that isn't LRU performs better in those cases, so there's 
nothing laudable about using frequency information in such situations, as 
other non-LRU approaches work, too.  Since page aging is only partly 
frequency based, it may be that its benefits are exactly these cases where 
just about anything that deviates from LRU will help.

For filesystem caching, the picture is less clear.  Some studies have 
shown frequency to be a genuinely good idea, as file access patterns 
exhibit strong regularities for which LRU performs poorly.  While I think 
that even those studies are oversimplifying the problem, frequency could 
be a decent approach.  Since the Linux MM system manages both VM pages and 
filesystem cache pages together, it's hard to say how those pools compete, 
and which policy is a better choice.  I certainly think that something 
LRU-like is going to be more stable and predictable, failing in cases that 
people understand pretty well.

> This code implements the LRU on the active list:
>
> http://lxr.linux.no/source/mm/vmscan.c?v=2.5.28#L349:
>
> 349                 if (page->pte.chain && page_referenced(page)) {
> 350                         list_del(&page->lru);
> 351                         list_add(&page->lru, &active_list);
> 352                         pte_chain_unlock(page);
> 353                         continue;
> 354                 }
>
> Yes, it was supposed to be LRU but as you point out, it's merely a clock.
> It would be an LRU if the list deletion and reinsertion occurred directly
> in try_to_swap_out, but there the page referenced bit is merely set.  I
> asked Andrea why he did not do this and he wasn't sure, but he thought
> that maybe the way he did it was more efficient.

I'm a bit confused by these comments, so maybe you can help me out a bit.  
While I agree that it would be possible to move pages to the front of the 
active list in try_to_swap_out() rather than setting their reference bits, 
I don't think that change would make this an LRU policy.  There are only 
two ways to achieve a true LRU ordering:

1) Trap into the kernel on every reference, moving the referenced page to 
the front immediately.  (Obviously, the overhead here would be absurd.)

2) Use hardware that timestamps each page frame with the time of each 
reference, allowing you to discover the order of last reference.  No chip 
does this, of course, since it's not worth the hardware or the cycles to 
examine the timestamps.

In other words, by the time try_to_swap_out() runs, it is possible to 
discover which pages have been used lately and move them to the front, but 
the order of last reference among those pages is already lost.  It's not a 
true LRU ordering.

Critically, true LRU orderings aren't worth much.  That the active list is 
managed via CLOCK is totally appropriate and desirable.  The entire 
purpose of CLOCK is that it is an approximation of LRU  -- and, in fact, a 
very good one -- that doesn't incur the overhead needed for true LRU.  I 
can't think of any reason to try to make the active list more LRU-like, as 
there's no real benefit to be gained.
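
For concreteness, this is all a single-handed CLOCK amounts to -- a minimal
userspace sketch, nothing here is kernel code and the names are made up:

/*
 * Minimal single-handed CLOCK: on a miss, sweep the hand, giving each
 * referenced frame a second chance, and evict the first unreferenced one.
 */
#include <stdbool.h>
#include <stdio.h>

#define NFRAMES 4

static bool referenced[NFRAMES];        /* per-frame reference bits */
static int  resident[NFRAMES];          /* which page occupies each frame */
static int  hand;                       /* the clock hand */

static int clock_evict(void)
{
    for (;;) {
        if (!referenced[hand]) {
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        referenced[hand] = false;       /* second chance: clear and move on */
        hand = (hand + 1) % NFRAMES;
    }
}

static void touch(int page)
{
    for (int f = 0; f < NFRAMES; f++) {
        if (resident[f] == page) {
            referenced[f] = true;       /* a hit just sets the bit, as hardware does */
            return;
        }
    }
    int f = clock_evict();              /* a miss replaces a not-recently-used page */
    printf("page %d replaces page %d in frame %d\n", page, resident[f], f);
    resident[f] = page;
    referenced[f] = true;
}

int main(void)
{
    for (int f = 0; f < NFRAMES; f++)
        resident[f] = f;                /* frames start out holding pages 0..3 */

    int trace[] = { 0, 1, 2, 4, 0, 5 }; /* 4 and 5 force replacements */
    for (size_t i = 0; i < sizeof(trace) / sizeof(trace[0]); i++)
        touch(trace[i]);
    return 0;
}

Note that the recently touched page 0 survives while the untouched page 1 is
the second victim -- exactly the LRU-like behavior we want, for nothing more
than setting and clearing a bit.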

> For any page that is explicitly touched, e.g., by file IO, we use
> activate_page, which moves the page to the head of the active list 
> regardless of which list the page is currently on.  This is a classic LRU.

Okay, that sounds fine, although for the reasons I just mentioned, it's 
not clear that it's helping much to move the page to the front if it's 
already on the active list, which itself is a CLOCK that yields good 
LRU-like behavior.

> The inactive list is a FIFO queue.  So you have a (sort-of) LRU feeding 
> pages from its cold end into the FIFO, and if a page stays on the FIFO 
> long enough to reach the cold end it gets evicted, or at least the 
> eviction process starts.

Wait, this doesn't make sense to me.  I assume that there is some code 
that examines pages on the inactive list and, if they've been referenced, 
moves them to the front of the active list.  That would make the inactive 
list another CLOCK-like queue, not a FIFO queue.  (It would be FIFO only 
if pages were pulled only from the cold end of the FIFO queue and either 
(a) evicted if they've not been used or (b) moved to the front of the 
active list if they have been.)

Making this list an LRU queue makes this whole structure a classic 
segmented queue (SEGQ) arrangement.  The first queue is a FIFO or CLOCK -- 
a kind of queue where references to pages are not detected immediately, 
but only when something from that queue needs replacement for an incoming 
page.  Pages evicted from the first queue are inserted into the second one.  
The purpose of the structure is that pages in the second queue are 
referenced far less often, and so incurring the overhead of detecting 
their references when they occur -- that is, protecting the pages so that 
the reference causes a trap into the kernel -- is a low cost way to order 
the pages near eviction.  While keeping the inactive queue as a CLOCK-like 
structure is fine, it could be a true LRU queue, and there's little 
advantage to making it a FIFO queue.  (Again, though, it doesn't sound to 
me as though it *is* a FIFO queue -- or am I misunderstanding your 
comments?)

> [...] and needing to find page referenced bits by virtual scanning.  The 
> latter means that the referenced information at the cold end of the LRU 
> and FIFO is unreliable.

The cold end of the CLOCK actually tends to be quite reliable.  I agree 
that the cold end of a FIFO queue is not, so why not do away with the 
scanning and simply mark pages in the inactive queue as not present 
(although they are, of course)?  If referenced, they are immediately moved 
to the front.  It's not likely to happen often, so the overhead is modest 
(and, for many workloads, far less than with the continual scanning).  
Leave the first queue as a CLOCK, make the second queue a true LRU.
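
The mechanism can even be mocked up in user space, with mprotect() and a
SIGSEGV handler standing in for the kernel's minor-fault path.  This is
purely an illustration of the cost model, not a proposed implementation
(mprotect() isn't formally async-signal-safe, so don't copy the trick into
real code):

/*
 * Userspace sketch: "protect" second-queue pages so any reference traps,
 * and promote them in the handler.  A page pays the trap cost once, then
 * runs at full speed -- no background scanning needed.
 */
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static long page_size;
static volatile sig_atomic_t promotions;    /* "minor faults" in kernel terms */

static void on_fault(int sig, siginfo_t *si, void *ctx)
{
    uintptr_t addr = (uintptr_t)si->si_addr & ~((uintptr_t)page_size - 1);

    (void)sig; (void)ctx;
    /* "Promote" the page: make it accessible again and count the event.
     * The faulting instruction is then restarted and succeeds. */
    mprotect((void *)addr, page_size, PROT_READ | PROT_WRITE);
    promotions++;
}

int main(void)
{
    page_size = sysconf(_SC_PAGESIZE);

    struct sigaction sa = { 0 };
    sa.sa_sigaction = on_fault;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    /* Two "inactive" pages, protected so that any reference traps. */
    char *mem = mmap(NULL, 2 * page_size, PROT_NONE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED)
        return 1;

    mem[0] = 1;                  /* traps once, promoting the first page   */
    mem[0] = 2;                  /* already promoted: no trap, no overhead */
    mem[page_size] = 3;          /* the second page traps independently    */

    printf("simulated minor faults: %d\n", (int)promotions);
    return 0;
}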

> I haven't tried this, because I think it's more important to get the 
> reverse mapping work nailed down so that the page referenced information 
> is reliable.

Fair enough -- we can't solve all problems at once.  But, I do have a 
proposal, now that I've done all of this nit-picking:  For my own purposes, 
I want a simpler, non-scanning structure.  I want the CLOCK/LRU SEGQ 
structure that I described.  So I'll just go ahead and do that, as it will 
be the basis of some other experiments that I'm trying to do.  Once (if?) 
I've managed that, we can try some workloads to see what the overhead of 
scanning is vs. the overhead of minor (non-I/O) page faults for the 
inactive list references.  My prediction for the outcome is as follows:  
For workloads that are loop-like and require space near to the capacity of 
memory (that is, workloads that will hit the inactive list pages often), 
my approach will incur more overhead.  I think on other workloads, 
eliminating the scanning will be worthwhile, not only in the reduction of 
overhead, but in the elimination of one more factor to tune.  (Mind you, 
scanning will still be somewhat needed to batch together page-write 
operations for cleaning purposes, but that's yet another topic.)

Anyone think this is interesting?  Or am I just doing this for myself?  
(I'm happy to do it for myself, but if others want to know, I'll try to share 
the results.)  Also, does anyone think I'm nuts, and misunderstanding some 
of the issues?  (Always a possibility.)

Scott

* Re: Broad questions about the current design
From: Rik van Riel @ 2002-08-12 20:55 UTC (permalink / raw)
  To: Scott Kaplan; +Cc: linux-mm

On Mon, 12 Aug 2002, Scott Kaplan wrote:

> I want a simpler, non-scanning structure.  I want the CLOCK/LRU SEGQ
> structure that I described.  So I'll just go ahead and do that, as it will
> be the basis of some other experiments that I'm trying to do.  Once (if?)
> I've managed that, we can try some workloads to see what the overhead of
> scanning is vs. the overhead of minor (non-I/O) page faults for the
> inactive list references.  My prediction for the outcome is as follows:

> Anyone think this is interesting?

Absolutely.  One thing to keep in mind though is streaming
IO and things like 'find' that touch a LOT of pages once.

We probably want some kind of mechanism to prevent these
streaming IO pages from flushing out the whole working set
at once.
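
One common shape for such a mechanism -- a sketch only, not a claim about
what any current kernel does -- is to let new pages enter the inactive list
and promote them to the active list only on a second reference, so a single
sequential pass can never push the working set off the active list:

/*
 * Use-once guard, sketched in userspace C with invented names: a page
 * must be touched twice while on the inactive list before it is allowed
 * onto the active list.
 */
#include <stdbool.h>
#include <stdio.h>

enum which_list { NOWHERE, INACTIVE_LIST, ACTIVE_LIST };

struct page {
    enum which_list list;
    bool referenced;            /* set by the first touch on the inactive list */
};

/* Called when the page first enters the cache (e.g. readahead completion). */
static void add_to_cache(struct page *page)
{
    page->list = INACTIVE_LIST; /* start cold, not on the active list */
    page->referenced = false;
}

/* Called whenever the page is touched. */
static void note_access(struct page *page)
{
    if (page->list == INACTIVE_LIST && page->referenced)
        page->list = ACTIVE_LIST;   /* a second touch proves it is not use-once */
    else
        page->referenced = true;
}

int main(void)
{
    struct page streamed, reused;

    add_to_cache(&streamed);
    note_access(&streamed);     /* touched once by the streaming read */

    add_to_cache(&reused);
    note_access(&reused);
    note_access(&reused);       /* touched twice, so it really is in use */

    printf("streamed page active? %s\n",
           streamed.list == ACTIVE_LIST ? "yes" : "no");
    printf("reused page active?   %s\n",
           reused.list == ACTIVE_LIST ? "yes" : "no");
    return 0;
}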

kind regards,

Rik
-- 
Bravely reimplemented by the knights who say "NIH".

http://www.surriel.com/		http://distro.conectiva.com/


* Re: Broad questions about the current design
From: Martin J. Bligh @ 2002-08-12 21:07 UTC (permalink / raw)
  To: Scott Kaplan, linux-mm

Scott, have you seen the talk here?

ftp://ftp.suse.com/pub/people/andrea/talks/english/2001/

Might be worth reading. I remembered it was there somewhere,
but it took Randy to actually find it again ;-) A rare piece of
actual documentation ;-)

M.

