linux-mm.kvack.org archive mirror
From: Matthew Dillon <dillon@apollo.backplane.com>
To: Rik van Riel <riel@conectiva.com.br>
Cc: Jamie Lokier <lk@tantalophile.demon.co.uk>, linux-mm@kvack.org
Subject: Re: [RFC] 2.3/4 VM queues idea
Date: Thu, 25 May 2000 10:53:04 -0700 (PDT)
Message-ID: <200005251753.KAA83360@apollo.backplane.com>
In-Reply-To: <Pine.LNX.4.21.0005251405160.32434-100000@duckman.distro.conectiva>

    Another big difference is that when you scan by physical page, you
    can collect a whole lot of information together to help you make
    the decision on how to adjust the weight.

    When you scan by physical page, then locate the VM mappings for that
    page, you have:

	* a count of the number of mappings
	* a count of how many of those referenced the page since the
	  last check.
	* more determinism (see below)

    When you scan by virtual page, then locate the physical mapping:

	* you cannot tell how many other virtual mappings referenced the
	  page (short of checking, at which point you might as well be
	  scanning by physical page)

	* you have no way of figuring out how many discrete physical pages
	  your virtual page scan has covered.  For all you know you could
	  scan 500 virtual mappings and still only have gotten through a
	  handful of physical pages.  Big problem!

	* you have much less information available to make the decision on
	  how to adjust the weight.
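
    To make the contrast concrete, here is a minimal user-space sketch
    of the physical-side scan.  The struct names and the weight rule
    are hypothetical (not actual Linux or FreeBSD structures); the
    point is only that a single pass over a page's reverse-mapping
    list yields both counts at once:

	#include <stdbool.h>
	#include <stddef.h>

	/* Hypothetical reverse-mapping entry: one per pte mapping the page. */
	struct pv_entry {
		struct pv_entry *next;
		bool            *referenced;  /* stand-in for the pte's referenced bit */
	};

	struct phys_page {
		struct pv_entry *pv_list;     /* all mappings of this physical page */
		int              weight;      /* aging value the scan adjusts */
	};

	/* One physical-page scan step: walk the mappings once and learn both
	 * how many there are and how many used the page since the last pass. */
	static void age_physical_page(struct phys_page *pg)
	{
		int mappings = 0, referenced = 0;
		struct pv_entry *pv;

		for (pv = pg->pv_list; pv != NULL; pv = pv->next) {
			mappings++;
			if (*pv->referenced) {
				referenced++;
				*pv->referenced = false;  /* clear for the next pass */
			}
		}

		/* Illustrative policy only: credit actively used pages,
		 * decay idle ones toward zero. */
		if (referenced > 0)
			pg->weight += referenced;
		else if (pg->weight > 0)
			pg->weight--;

		(void)mappings;  /* also available to the policy, e.g. for shared pages */
	}

    The virtual-side scan sees only one pte at a time, so none of this
    per-page information is available without doing the physical-side
    walk anyway.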

:> How so?  You're only scanning currently mapped ptes, and one
:> goal is to keep that number small enough that you can gather
:> good LRU stats of page usage.
:
:Page aging may well be cheaper than continuously unmapping ptes
:(including tlb flushes and cache flushes of the page tables) and
:softfaulting them back in.

    It's definitely cheaper.  If you unmap a page and then have to
    take a page fault to get it back, the cost is going to be
    roughly 300 instructions plus other overhead.
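
    (As a hedged back-of-the-envelope, using that figure: at roughly
    300 instructions per soft fault, re-establishing 100,000 pages that
    were unmapped but still wanted costs on the order of 30 million
    instructions, whereas leaving them mapped and testing/clearing each
    pte's referenced bit during the scan is only a handful of
    instructions per pte per pass.  The numbers are illustrative
    assumptions, not measurements.)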

    Another example of why physical page scanning is better than
    virtual page scanning:  When there is memory pressure and you are
    scanning by physical page, and the weight reaches 0, you can then
    turn around and unmap ALL of its virtual pte's all at once (or mark
    them read-only for a dirty page to allow it to be flushed).  Sure
    you have to eat cpu to find those virtual pte's, but the end result
    is a page which is now cleanable or freeable.
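
    A hedged sketch of that step, reusing the hypothetical pv-entry
    idea from above but giving each entry a pointer to a made-up pte
    with present and writable bits (again, not a real kernel
    structure):

	#include <stdbool.h>
	#include <stddef.h>

	/* Hypothetical pte stand-in. */
	struct fake_pte {
		bool present, writable;
	};

	struct pv_map {
		struct pv_map   *next;
		struct fake_pte *pte;  /* the mapping this entry describes */
	};

	/* Weight reached zero: detach the page from every address space in
	 * one pass.  A clean page becomes freeable immediately; a dirty one
	 * is write-protected so it can be flushed without racing new stores. */
	static void reclaim_physical_page(struct pv_map *pv_list, bool dirty)
	{
		struct pv_map *pv;

		for (pv = pv_list; pv != NULL; pv = pv->next) {
			if (dirty)
				pv->pte->writable = false;  /* mark read-only, flush later */
			else
				pv->pte->present = false;   /* unmap outright */
			/* a real kernel would also invalidate the TLB entry here */
		}
	}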

    Now try this with a virtual scan:  You do a virtual scan, locate
    a page you decide is idle, and then... what?  Unmap just that one
    instance of the pte?  What about the others?  You would have to unmap
    them too, which would cost as much as it would when doing a physical
    page scan *EXCEPT* that you are running through a whole lot more virtual
    pages during the virtual page scan to get the same effect as with
    the physical page scan (when trying to locate idle pages).  It's
    the difference between O(N) and O(N^2).  If the physical page queues
    are reasonably well ordered, it's the difference between O(1) and O(N^2).
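
    (To put rough numbers on that, using the 500-mapping example above:
    the physical scan visits each of the handful of pages once per pass
    and walks a page's mapping list only when it decides to act, while
    the virtual scan visits all 500 ptes and, for every page it wants
    to reclaim, must still hunt down that page's remaining mappings --
    so a heavily shared page gets rescanned once per mapping instead of
    once per pass.  The figures are illustrative assumptions.)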

					-Matt
