From: Andrea Arcangeli <andrea@e-mind.com>
To: "Stephen C. Tweedie" <sct@redhat.com>
Cc: Rik van Riel <H.H.vanRiel@phys.uu.nl>,
Neil Conway <nconway.list@ukaea.org.uk>,
Linux MM <linux-mm@kvack.org>,
Linux Kernel <linux-kernel@vger.rutgers.edu>,
Alan Cox <number6@the-village.bc.nu>
Subject: Re: [PATCH] VM improvements for 2.1.131
Date: Wed, 9 Dec 1998 18:43:25 +0100 (CET)
Message-ID: <Pine.LNX.3.96.981209183310.3727A-100000@laser.bogus>
In-Reply-To: <199812072204.WAA01733@dax.scot.redhat.com>
On Mon, 7 Dec 1998, Stephen C. Tweedie wrote:
>Right: 2.1.131 + Rik's fixes + my fix to Rik's fixes (see below) has set
>a new record for my 8MB benchmarks. In 64MB, it is behaving much more
I think my state = 0 change in do_try_to_free_page() also contributed a lot
to the better kernel performance.
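
For reference, do_try_to_free_page() cycles through the freeing methods
with a static state machine, roughly like this (a simplified sketch of the
2.1.13x code from memory, not the exact 2.1.131 source):

static int do_try_to_free_page(int gfp_mask)
{
	static int state = 0;
	int i = 6;

	/* Always trim SLAB caches when memory gets low. */
	kmem_cache_reap(gfp_mask);

	if (buffer_over_borrow() || pgcache_over_borrow())
		state = 0;	/* restart from the cache-shrinking pass */

	switch (state) {
		do {
		case 0:
			if (shrink_mmap(i, gfp_mask))
				return 1;
			state = 1;
		case 1:
			if (shm_swap(i, gfp_mask))
				return 1;
			state = 2;
		case 2:
			if (swap_out(i, gfp_mask))
				return 1;
			state = 3;
		case 3:
			shrink_dcache_memory(i, gfp_mask);
			state = 0;
		i--;
		} while (i >= 0);
	}
	return 0;
}

Resetting state to 0 whenever the buffer or page cache is over its borrow
limit means the next pass starts again from shrink_mmap() instead of
resuming with swap_out(), so the caches get shrunk back first, before any
process memory is swapped out.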
>--- mm/vmscan.c.~1~	Mon Dec  7 12:05:54 1998
>+++ mm/vmscan.c	Mon Dec  7 18:55:55 1998
>@@ -432,6 +432,8 @@
> 
> 	if (buffer_over_borrow() || pgcache_over_borrow())
> 		state = 0;
>+	if (atomic_read(&nr_async_pages) > pager_daemon.swap_cluster / 2)
>+		shrink_mmap(i, gfp_mask);
> 
Doing that we risk shrinking the cache more than necessary, but this part of
the patch improves swapping performance a _lot_, even if I don't know why ;)
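
My best guess, and it is only a guess: nr_async_pages counts the pages
queued for asynchronous swap I/O, so the test fires once about half a
swap cluster is already in flight. At that point freeing clean page cache
pages costs no additional disk I/O, while queueing more swap-outs would:

	/*
	 * Guess at the mechanism: if ~half a swap cluster is already
	 * queued for async swap writes, reclaim clean page cache
	 * pages instead of piling up even more swap-out I/O.
	 */
	if (atomic_read(&nr_async_pages) > pager_daemon.swap_cluster / 2)
		shrink_mmap(i, gfp_mask);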
And why not use GFP_USER in the userspace swapping code?
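
For reference, in the 2.1.x include/linux/mm.h the two flags differ only in
allocation priority, roughly as below (definitions quoted from memory, so
double check them against the real header):

#define __GFP_WAIT	0x01
#define __GFP_LOW	0x02
#define __GFP_MED	0x04
#define __GFP_HIGH	0x08
#define __GFP_IO	0x10

#define GFP_USER	(__GFP_LOW | __GFP_WAIT | __GFP_IO)
#define GFP_KERNEL	(__GFP_MED | __GFP_WAIT | __GFP_IO)

A GFP_USER allocation gives up earlier under memory pressure instead of
eating into the last free pages, which looks like the right behaviour when
the page is being allocated on behalf of a user process, as in the swapin
path patched below: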
Index: linux/mm/swap_state.c
diff -u linux/mm/swap_state.c:1.1.3.2 linux/mm/swap_state.c:1.1.1.1.2.4
--- linux/mm/swap_state.c:1.1.3.2	Wed Dec  9 16:11:46 1998
+++ linux/mm/swap_state.c	Wed Dec  9 18:39:03 1998
@@ -261,7 +261,9 @@
 struct page * lookup_swap_cache(unsigned long entry)
 {
 	struct page *found;
+#ifdef SWAP_CACHE_INFO
 	swap_cache_find_total++;
+#endif
 
 	while (1) {
 		found = find_page(&swapper_inode, entry);
@@ -270,7 +272,9 @@
 		if (found->inode != &swapper_inode || !PageSwapCache(found))
 			goto out_bad;
 		if (!PageLocked(found)) {
+#ifdef SWAP_CACHE_INFO
 			swap_cache_find_success++;
+#endif
 			return found;
 		}
 		__free_page(found);
@@ -308,7 +336,7 @@
 	if (found_page)
 		goto out;
 
-	new_page_addr = __get_free_page(GFP_KERNEL);
+	new_page_addr = __get_free_page(GFP_USER);
 	if (!new_page_addr)
 		goto out;	/* Out of memory */
 	new_page = mem_map + MAP_NR(new_page_addr);
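
For completeness, the statistics the #ifdefs above refer to live at the top
of mm/swap_state.c, along these lines (quoted from memory, the exact wording
of the printk may differ):

#ifdef SWAP_CACHE_INFO
unsigned long swap_cache_add_total = 0;
unsigned long swap_cache_del_total = 0;
unsigned long swap_cache_find_total = 0;
unsigned long swap_cache_find_success = 0;

void show_swap_cache_info(void)
{
	printk("Swap cache: add %ld, delete %ld, find %ld/%ld\n",
		swap_cache_add_total, swap_cache_del_total,
		swap_cache_find_success, swap_cache_find_total);
}
#endif

With the counters themselves only defined under SWAP_CACHE_INFO, the
unconditional increments in lookup_swap_cache() would not even compile with
the option disabled, hence the added #ifdefs.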
Andrea Arcangeli