From: Daniel Drake <ddrake@brontes3d.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Subject: speeding up swapoff
Date: Wed, 29 Aug 2007 09:29:32 -0400
Message-ID: <1188394172.22156.67.camel@localhost>

Hi,

I've spent some time trying to understand why swapoff is such a slow
operation.

My experiments show that when there is not much free physical memory,
swapoff moves pages out of swap at roughly 5 MB/s. When there is plenty
of free physical memory it is faster, but it remains a slow,
CPU-intensive operation, purging swap at about 20 MB/s.

I've read through the swap code, and I understand that this is an
expensive operation (and has to be). This page was very helpful and
also agrees:
http://kernel.org/doc/gorman/html/understand/understand014.html
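
For reference, my (possibly wrong) reading of the current code is that
try_to_unuse() in mm/swapfile.c has roughly the structure below, which
is where the cost comes from: every in-use swap slot triggers a walk of
every process's page tables. A simplified sketch, not the real code:

/*
 * Simplified sketch of the shape of try_to_unuse(); locking, retries
 * and the swap cache details are all omitted.
 */
static int try_to_unuse_sketch(unsigned int type)
{
	struct swap_info_struct *si = &swap_info[type];
	unsigned int i;

	for (i = 1; i < si->max; i++) {		/* every swap slot */
		swp_entry_t entry;

		if (!si->swap_map[i])		/* slot not in use */
			continue;

		entry = swp_entry(type, i);

		/*
		 * Read the page back into the swap cache (may block
		 * on I/O), then hunt for the entry in every process's
		 * page tables via unuse_mm() -- roughly
		 * O(nr_swap_slots * total_page_table_size) overall.
		 */
	}
	return 0;
}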

After reading that, I have an idea for a possible optimization. If we
were to create a system call to disable ALL swap partitions (or modify
the existing one to accept NULL for that purpose), could this process
be significantly less complex?
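
At the syscall level I imagine something like the following, where
swapoff_all() is a helper that does not exist today and swapoff_one()
stands for the existing body of sys_swapoff() factored out:

asmlinkage long sys_swapoff(const char __user *specialfile)
{
	/* Hypothetical: NULL means "disable all swap areas at once". */
	if (!specialfile)
		return swapoff_all();

	/* Existing single-swapfile path, unchanged. */
	return swapoff_one(specialfile);
}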

I'm thinking we could do something like this (rough sketch after the
list):
 1. Prevent any more pages from being swapped out from this point
 2. Iterate through all process page tables, paging all swapped
    pages back into physical memory and updating PTEs
 3. Clear all swap tables and caches
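
In pseudocode, step 2 could be driven by something like this (very
rough; the helpers named in the comments are the existing ones from
mm/swapfile.c, and all locking is hand-waved away):

/*
 * One pass over every mm that may hold swap entries.  These are
 * chained off init_mm.mmlist, which is the same list try_to_unuse()
 * currently walks once per swap slot.
 */
static int swapoff_all_walk(void)
{
	struct mm_struct *mm;

	list_for_each_entry(mm, &init_mm.mmlist, mmlist) {
		/*
		 * Walk this mm's page tables once.  For each PTE that
		 * holds a swap entry:
		 *
		 *   entry = pte_to_swp_entry(pte);
		 *   page  = read_swap_cache_async(entry, vma, addr);
		 *   (wait for the read to complete, then)
		 *   set_pte_at(mm, addr, ptep,
		 *              mk_pte(page, vma->vm_page_prot));
		 *
		 * i.e. the same PTE rewrite unuse_pte() does today,
		 * but driven by one page-table walk instead of one
		 * walk per swap slot.
		 */
	}
	return 0;
}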

Since the process page tables would be walked only once, does this
sound like it would improve performance non-trivially? Is it feasible?

I'm happy to spend a few more hours looking into implementing this, but
I would greatly appreciate advice from those in the know on whether my
ideas are broken to start with...

Thanks!
-- 
Daniel Drake
Brontes Technologies, A 3M Company
http://www.brontes3d.com/opensource

Thread overview: 20+ messages
2007-08-29 13:29 Daniel Drake [this message]
2007-08-29 14:30 ` Arjan van de Ven
2007-08-29 14:36   ` Oliver Neukum
2007-08-29 16:04     ` Hugh Dickins
2007-08-29 16:18       ` Oliver Neukum
2007-08-29 14:44   ` Daniel Drake
2007-08-29 15:12     ` Juergen Beisert
2007-08-30 15:57     ` Bill Davidsen
2007-09-01 22:20     ` Andi Kleen
2007-08-29 15:58   ` Hugh Dickins
2007-08-29 15:36 ` Hugh Dickins
2007-08-30  8:27   ` Eric W. Biederman
2007-08-30 10:36     ` Hugh Dickins
2007-08-30 15:05       ` Daniel Drake
2007-08-29 16:08 ` Lee Schermerhorn
     [not found] <fa.j/pO3mTWDugTdvZ3XNr9XpvgzPQ@ifi.uio.no>
     [not found] ` <fa.ed9fasZXOwVCrbffkPQTX7G3a7g@ifi.uio.no>
     [not found]   ` <fa./NZA3biuO1+qW5pW8ybdZMDWcZs@ifi.uio.no>
2007-08-30  1:37     ` Robert Hancock
2007-08-30 13:55       ` Helge Hafting
2007-08-30 14:06         ` Xavier Bestel
2007-08-30 14:06           ` Helge Hafting
2007-08-30 14:14             ` Xavier Bestel
