From: clameter@sgi.com
To: linux-mm@kvack.org
Cc: Mel Gorman <mel@skynet.ie>,
William Lee Irwin III <wli@holomorphy.com>,
Adam Litke <aglitke@gmail.com>, David Chinner <dgc@sgi.com>,
Jens Axboe <jens.axboe@oracle.com>, Avi Kivity <avi@argo.co.il>,
Dave Hansen <hansendc@us.ibm.com>,
Badari Pulavarty <pbadari@gmail.com>,
Maxim Levitsky <maximlevitsky@gmail.com>
Subject: [RFC 11/16] Variable Page Cache Size: Fix up reclaim counters
Date: Sun, 22 Apr 2007 23:21:18 -0700 [thread overview]
Message-ID: <20070423062130.785519484@sgi.com> (raw)
In-Reply-To: <20070423062107.843307112@sgi.com>
[-- Attachment #1: var_pc_reclaim --]
[-- Type: text/plain, Size: 2674 bytes --]
We can now reclaim larger pages. Adjust the VM counters
to deal with them.
Note that this does not currently make things work.
For some reason we keep losing pages off the active lists,
and reclaim stalls at some point attempting to remove
active pages from an empty active list.
It seems that the removal from the active lists happens
outside of reclaim?!
Signed-off-by: Christoph Lameter <clameter@sgi.com>
---
mm/vmscan.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
Index: linux-2.6.21-rc7/mm/vmscan.c
===================================================================
--- linux-2.6.21-rc7.orig/mm/vmscan.c 2007-04-22 06:50:03.000000000 -0700
+++ linux-2.6.21-rc7/mm/vmscan.c 2007-04-22 17:19:35.000000000 -0700
@@ -471,14 +471,14 @@ static unsigned long shrink_page_list(st
VM_BUG_ON(PageActive(page));
- sc->nr_scanned++;
+ sc->nr_scanned += base_pages(page);
if (!sc->may_swap && page_mapped(page))
goto keep_locked;
/* Double the slab pressure for mapped and swapcache pages */
if (page_mapped(page) || PageSwapCache(page))
- sc->nr_scanned++;
+ sc->nr_scanned += base_pages(page);
if (PageWriteback(page))
goto keep_locked;
@@ -581,7 +581,7 @@ static unsigned long shrink_page_list(st
free_it:
unlock_page(page);
- nr_reclaimed++;
+ nr_reclaimed += base_pages(page);
if (!pagevec_add(&freed_pvec, page))
__pagevec_release_nonlru(&freed_pvec);
continue;
@@ -627,7 +627,7 @@ static unsigned long isolate_lru_pages(u
struct page *page;
unsigned long scan;
- for (scan = 0; scan < nr_to_scan && !list_empty(src); scan++) {
+ for (scan = 0; scan < nr_to_scan && !list_empty(src); ) {
struct list_head *target;
page = lru_to_page(src);
prefetchw_prev_lru_page(page, src, flags);
@@ -644,10 +644,11 @@ static unsigned long isolate_lru_pages(u
*/
ClearPageLRU(page);
target = dst;
- nr_taken++;
+ nr_taken += base_pages(page);
} /* else it is being freed elsewhere */
list_add(&page->lru, target);
+ scan += base_pages(page);
}
*scanned = scan;
@@ -856,7 +857,7 @@ force_reclaim_mapped:
ClearPageActive(page);
list_move(&page->lru, &zone->inactive_list);
- pgmoved++;
+ pgmoved += base_pages(page);
if (!pagevec_add(&pvec, page)) {
__mod_zone_page_state(zone, NR_INACTIVE, pgmoved);
spin_unlock_irq(&zone->lru_lock);
@@ -884,7 +885,7 @@ force_reclaim_mapped:
SetPageLRU(page);
VM_BUG_ON(!PageActive(page));
list_move(&page->lru, &zone->active_list);
- pgmoved++;
+ pgmoved += base_pages(page);
if (!pagevec_add(&pvec, page)) {
__mod_zone_page_state(zone, NR_ACTIVE, pgmoved);
pgmoved = 0;
--
Thread overview: 19+ messages
2007-04-23 6:21 [RFC 00/16] Variable Order Page Cache Patchset V2 clameter
2007-04-23 6:21 ` [RFC 01/16] Free up page->private for compound pages clameter
2007-04-23 6:21 ` [RFC 02/16] vmstat.c: Support accounting " clameter
2007-04-23 6:21 ` [RFC 03/16] Variable Order Page Cache: Add order field in mapping clameter
2007-04-23 6:21 ` [RFC 04/16] Variable Order Page Cache: Add basic allocation functions clameter
2007-04-23 6:21 ` [RFC 05/16] Variable Order Page Cache: Add functions to establish sizes clameter
2007-04-23 6:21 ` [RFC 06/16] Variable Page Cache: Add VM_BUG_ONs to check for correct page order clameter
2007-04-23 6:21 ` [RFC 07/16] Variable Order Page Cache: Add clearing and flushing function clameter
2007-04-23 6:21 ` [RFC 08/16] Variable Order Page Cache: Fixup fallback functions clameter
2007-04-23 6:21 ` [RFC 09/16] Variable Order Page Cache: Fix up mm/filemap.c clameter
2007-04-23 6:21 ` [RFC 10/16] Variable Order Page Cache: Readahead fixups clameter
2007-04-23 6:21 ` clameter [this message]
2007-04-23 6:21 ` [RFC 12/16] Variable Order Page Cache: Fix up the writeback logic clameter
2007-04-23 6:21 ` [RFC 13/16] Variable Order Page Cache: Fixed to block layer clameter
2007-04-23 6:21 ` [RFC 14/16] Variable Order Page Cache: Add support to ramfs clameter
2007-04-23 6:21 ` [RFC 15/16] ext2: Add variable page size support clameter
2007-04-23 6:21 ` [RFC 16/16] Variable Order Page Cache: Alternate implementation of page cache macros clameter
2007-04-23 6:48 [RFC 00/16] Variable Order Page Cache Patchset V2 Christoph Lameter
2007-04-23 6:49 ` [RFC 11/16] Variable Page Cache Size: Fix up reclaim counters Christoph Lameter
2007-04-25 13:08 ` Mel Gorman