From: Rik van Riel <H.H.vanRiel@phys.uu.nl>
To: Linux MM <linux-mm@kvack.org>
Subject: [PATCH] swapin readahead
Date: Fri, 27 Nov 1998 00:23:33 +0100 (CET)
Message-ID: <Pine.LNX.3.96.981127001214.445A-100000@mirkwood.dummy.home>
Hi,
here is a very first, primitive version of a swapin readahead
patch. It seems to give much improved swap throughput, and
desktop switch times have decreased noticeably.
All three checks in the readahead loop are needed. The first two
are there to avoid annoying messages from swap_state.c :)) The
third makes sure we always keep at least as much swapout bandwidth
as swapin bandwidth; we need that to keep the system alive under
heavy load.
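In case the loop is hard to read in diff form, here is a small
stand-alone user-space sketch of the same three checks (every name
below is a made-up stand-in for the real kernel structures, not
the actual kernel API):

#include <stdio.h>

#define SWAP_MAP_SIZE	1024
#define SWAP_CLUSTER	32	/* stand-in for pager_daemon.swap_cluster */

static unsigned char swap_map[SWAP_MAP_SIZE];	/* 0 = slot holds no page */
static unsigned long swap_max = SWAP_MAP_SIZE;	/* stand-in for swapdev->max */
static int nr_async_pages;			/* swap pages currently in flight */

/* The three checks, in the order the patch applies them. */
static int may_readahead(unsigned long offset)
{
	if (!swap_map[offset])		/* 1: nothing stored in this slot */
		return 0;
	if (offset >= swap_max)		/* 2: we ran off the end of the swap area */
		return 0;
	if (nr_async_pages > SWAP_CLUSTER / 2)
		return 0;		/* 3: keep half the queue free for swapout */
	return 1;
}

int main(void)
{
	unsigned long offset = 100;

	swap_map[offset + 1] = 1;	/* pretend the next slot is in use */

	if (may_readahead(offset + 1))
		printf("would read ahead offset %lu\n", offset + 1);
	return 0;
}

Note that in this first version the loop does at most one readahead
per fault; the final, unconditional break is what keeps the request
queue from exploding.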
I am now testing the patch quite heavily (200+ swap I/Os per
second) without any errors showing up in my xconsole, so I guess
that means you can have fun too :)
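And for anyone who hasn't poked at the swap code before: the SWP_*
macros used in the patch pack the swap device number and the page
offset into one pte-sized word. Roughly, in user-space model form
(the real bit layout is per-architecture and lives in the pgtable
headers; the numbers below just mimic the i386 one):

#include <stdio.h>

/* Bit 0 stays clear because it doubles as the pte "present" bit. */
#define SWP_TYPE(entry)		(((entry) >> 1) & 0x3f)	/* which swap area */
#define SWP_OFFSET(entry)	((entry) >> 8)		/* page slot within it */
#define SWP_ENTRY(type, offset)	(((type) << 1) | ((offset) << 8))

int main(void)
{
	unsigned long entry = SWP_ENTRY(2UL, 1234UL);

	/* The readahead loop only ever bumps the offset part: */
	unsigned long next = SWP_ENTRY(SWP_TYPE(entry), SWP_OFFSET(entry) + 1);

	printf("type=%lu offset=%lu\n", SWP_TYPE(next), SWP_OFFSET(next));
	return 0;
}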
cheers,
Rik -- now completely used to dvorak kbd layout...
+-------------------------------------------------------------------+
| Linux memory management tour guide. H.H.vanRiel@phys.uu.nl |
| Scouting Vries cubscout leader. http://www.phys.uu.nl/~riel/ |
+-------------------------------------------------------------------+
--- mm/page_alloc.c.orig	Thu Nov 26 11:26:49 1998
+++ mm/page_alloc.c	Thu Nov 26 23:48:42 1998
@@ -370,9 +370,28 @@
 	pte_t * page_table, unsigned long entry, int write_access)
 {
 	unsigned long page;
+	int i;
 	struct page *page_map;
+	unsigned long offset = SWP_OFFSET(entry);
+	struct swap_info_struct *swapdev = SWP_TYPE(entry) + swap_info;
 
 	page_map = read_swap_cache(entry);
+
+	/*
+	 * Primitive swap readahead code. We simply read the
+	 * next 16 entries in the swap area. The break below
+	 * is needed or else the request queue will explode :)
+	 */
+	for (i = 1; i++ < 16;) {
+		offset++;
+		if (!swapdev->swap_map[offset] || offset >= swapdev->max
+				|| atomic_read(&nr_async_pages) >
+				pager_daemon.swap_cluster / 2)
+			break;
+		read_swap_cache_async(SWP_ENTRY(SWP_TYPE(entry), offset),
+			0);
+		break;
+	}
 
 	if (pte_val(*page_table) != entry) {
 		if (page_map)
--- mm/page_io.c.orig	Thu Nov 26 11:26:49 1998
+++ mm/page_io.c	Thu Nov 26 11:30:43 1998
@@ -60,7 +60,7 @@
 	}
 
 	/* Don't allow too many pending pages in flight.. */
-	if (atomic_read(&nr_async_pages) > SWAP_CLUSTER_MAX)
+	if (atomic_read(&nr_async_pages) > pager_daemon.swap_cluster)
 		wait = 1;
 
 	p = &swap_info[type];