* vmscan-give-referenced-active-and-unmapped-pages-a-second-trip-around-the-lru.patch
From: Nick Piggin @ 2008-10-08 5:55 UTC
To: Andrew Morton, linux-mm
This patch, like I said when it was first merged, has the problem that
it can cause large stalls when reclaiming pages.
I actually myself tried a similar thing a long time ago. The problem is
that after a long period of no reclaiming, your file pages can all end
up being active and referenced. When the first guy wants to reclaim a
page, it might have to scan through gigabytes of file pages before being
able to reclaim a single one.
While it would be really nice to be able to just lazily set PageReferenced
and nothing else in mark_page_accessed, and then do file page aging based
on the referenced bit, the fact is that we virtually have O(1) reclaim
for file pages now, and this can make it much more like O(n) (in worst case,
especially).
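(For readers without the patch in front of them, the heuristic at issue
amounts to roughly the following inside the shrink_active_list() scan
loop -- a minimal sketch using the same shorthand as the snippets later
in this thread, not the literal patch:)

        while (!list_empty(&l_hold)) {
                cond_resched();
                page = lru_to_page(&l_hold);
                list_del(&page->lru);
                /* referenced but unmapped: second trip around the LRU */
                if (page_referenced(page) && !page_mapped(page)) {
                        list_add(&page->lru, &l_active);
                        continue;
                }
                list_add(&page->lru, &l_inactive);
        }

If every file page is active and referenced, each one takes that second
trip, and nothing reaches l_inactive until a full pass over the list has
cleared the referenced bits -- which is exactly the stall described above.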
I don't think it is right to say "we broke aging and this patch fixes it".
It's all a big crazy heuristic. Who's to say that the previous behaviour
wasn't better and this patch breaks it? :)
Anyway, I don't think it is exactly productive to keep patches like this in
the tree (which don't ever seem intended to be merged) while there are
other big changes to reclaim there.
Same for vm-dont-run-touch_buffer-during-buffercache-lookups.patch
* Re: vmscan-give-referenced-active-and-unmapped-pages-a-second-trip-around-the-lru.patch
From: KOSAKI Motohiro @ 2008-10-08 10:03 UTC
To: Nick Piggin; +Cc: kosaki.motohiro, Andrew Morton, linux-mm
Hi,

Nick, Andrew, thank you very much for the good advice.
Your help sped up my investigation.
> This patch, like I said when it was first merged, has the problem that
> it can cause large stalls when reclaiming pages.
>
> I actually myself tried a similar thing a long time ago. The problem is
> that after a long period of no reclaiming, your file pages can all end
> up being active and referenced. When the first guy wants to reclaim a
> page, it might have to scan through gigabytes of file pages before being
> able to reclaim a single one.
I fully agree with this opinion;
having all pages stay on the active list is awful.

In addition, my measurements show that this patch causes a latency
regression on a really heavy I/O workload:
2.6.27-rc8: Throughput 13.4231 MB/sec 4000 clients 4000 procs max_latency=1421988.159 ms
+ patch : Throughput 12.0953 MB/sec 4000 clients 4000 procs max_latency=1731244.847 ms
> While it would be really nice to be able to just lazily set PageReferenced
> and nothing else in mark_page_accessed, and then do file page aging based
> on the referenced bit, the fact is that we virtually have O(1) reclaim
> for file pages now, and this can make it much more like O(n) (in worst case,
> especially).
>
> I don't think it is right to say "we broke aging and this patch fixes it".
> It's all a big crazy heuristic. Who's to say that the previous behaviour
> wasn't better and this patch breaks it? :)
>
> Anyway, I don't think it is exactly productive to keep patches like this in
> the tree (which don't ever seem intended to be merged) while there are
> other big changes to reclaim there.
>
> Same for vm-dont-run-touch_buffer-during-buffercache-lookups.patch
I measured it too:
2.6.27-rc8: Throughput 13.4231 MB/sec 4000 clients 4000 procs max_latency=1421988.159 ms
+ patch : Throughput 11.8494 MB/sec 4000 clients 4000 procs max_latency=3463217.227 ms
dbench max latency increased by about 2.5x.

The patch description already describes this risk:
dropping metadata can decrease performance significantly.
That is just what happened here, IMHO.

I'll investigate more tomorrow.
Thanks!
* Re: vmscan-give-referenced-active-and-unmapped-pages-a-second-trip-around-the-lru.patch
From: Andrew Morton @ 2008-10-10 22:17 UTC
To: KOSAKI Motohiro; +Cc: nickpiggin, linux-mm
On Wed, 8 Oct 2008 19:03:07 +0900 (JST)
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> wrote:
> Hi,
>
> Nick, Andrew, thank you very much for the good advice.
> Your help sped up my investigation.
>
>
> > This patch, like I said when it was first merged, has the problem that
> > it can cause large stalls when reclaiming pages.
> >
> > I actually myself tried a similar thing a long time ago. The problem is
> > that after a long period of no reclaiming, your file pages can all end
> > up being active and referenced. When the first guy wants to reclaim a
> > page, it might have to scan through gigabytes of file pages before being
> > able to reclaim a single one.
>
> I fully agree with this opinion;
> having all pages stay on the active list is awful.
>
> In addition, my measurements show that this patch causes a latency
> regression on a really heavy I/O workload:
>
> 2.6.27-rc8: Throughput 13.4231 MB/sec 4000 clients 4000 procs max_latency=1421988.159 ms
> + patch : Throughput 12.0953 MB/sec 4000 clients 4000 procs max_latency=1731244.847 ms
>
>
> > While it would be really nice to be able to just lazily set PageReferenced
> > and nothing else in mark_page_accessed, and then do file page aging based
> > on the referenced bit, the fact is that we virtually have O(1) reclaim
> > for file pages now, and this can make it much more like O(n) (in worst case,
> > especially).
> >
> > I don't think it is right to say "we broke aging and this patch fixes it".
> > It's all a big crazy heuristic. Who's to say that the previous behaviour
> > wasn't better and this patch breaks it? :)
> >
> > Anyway, I don't think it is exactly productive to keep patches like this in
> > the tree (which don't ever seem intended to be merged) while there are
> > other big changes to reclaim there.
Well yes. I've been hanging onto these in the hope that someone would
work out whether they are changes which we should make.
> > Same for vm-dont-run-touch_buffer-during-buffercache-lookups.patch
>
> I measured it too:
>
> 2.6.27-rc8: Throughput 13.4231 MB/sec 4000 clients 4000 procs max_latency=1421988.159 ms
> + patch : Throughput 11.8494 MB/sec 4000 clients 4000 procs max_latency=3463217.227 ms
>
> dbench max latency increased by about 2.5x.
>
> The patch description already describes this risk:
> dropping metadata can decrease performance significantly.
> That is just what happened here, IMHO.
Oh well, that'll suffice, thanks - I'll drop them.
* Re: vmscan-give-referenced-active-and-unmapped-pages-a-second-trip-around-the-lru.patch
From: Andrew Morton @ 2008-10-10 22:25 UTC
To: kosaki.motohiro, nickpiggin, linux-mm; +Cc: Rik van Riel, Lee Schermerhorn
On Fri, 10 Oct 2008 15:17:01 -0700
Andrew Morton <akpm@linux-foundation.org> wrote:
> On Wed, 8 Oct 2008 19:03:07 +0900 (JST)
> KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> wrote:
>
> > Hi,
> >
> > Nick, Andrew, thank you very much for the good advice.
> > Your help sped up my investigation.
> >
> >
> > > This patch, like I said when it was first merged, has the problem that
> > > it can cause large stalls when reclaiming pages.
> > >
> > > I actually myself tried a similar thing a long time ago. The problem is
> > > that after a long period of no reclaiming, your file pages can all end
> > > up being active and referenced. When the first guy wants to reclaim a
> > > page, it might have to scan through gigabytes of file pages before being
> > > able to reclaim a single one.
> >
> > I fully agree with this opinion;
> > having all pages stay on the active list is awful.
> >
> > In addition, my measurements show that this patch causes a latency
> > regression on a really heavy I/O workload:
> >
> > 2.6.27-rc8: Throughput 13.4231 MB/sec 4000 clients 4000 procs max_latency=1421988.159 ms
> > + patch : Throughput 12.0953 MB/sec 4000 clients 4000 procs max_latency=1731244.847 ms
> >
> >
> > > While it would be really nice to be able to just lazily set PageReferenced
> > > and nothing else in mark_page_accessed, and then do file page aging based
> > > on the referenced bit, the fact is that we virtually have O(1) reclaim
> > > for file pages now, and this can make it much more like O(n) (in worst case,
> > > especially).
> > >
> > > I don't think it is right to say "we broke aging and this patch fixes it".
> > > It's all a big crazy heuristic. Who's to say that the previous behaviour
> > > wasn't better and this patch breaks it? :)
> > >
> > > Anyway, I don't think it is exactly productive to keep patches like this in
> > > the tree (which don't ever seem intended to be merged) while there are
> > > other big changes to reclaim there.
>
> Well yes. I've been hanging onto these in the hope that someone would
> work out whether they are changes which we should make.
>
>
> > > Same for vm-dont-run-touch_buffer-during-buffercache-lookups.patch
> >
> > I measured it too:
> >
> > 2.6.27-rc8: Throughput 13.4231 MB/sec 4000 clients 4000 procs max_latency=1421988.159 ms
> > + patch : Throughput 11.8494 MB/sec 4000 clients 4000 procs max_latency=3463217.227 ms
> >
> > dbench max latency increased by about 2.5x.
> >
> > The patch description already describes this risk:
> > dropping metadata can decrease performance significantly.
> > That is just what happened here, IMHO.
>
> Oh well, that'll suffice, thanks - I'll drop them.
Which means that after vmscan-split-lru-lists-into-anon-file-sets.patch,
shrink_active_list() simply does
        while (!list_empty(&l_hold)) {
                cond_resched();
                page = lru_to_page(&l_hold);
                list_del(&page->lru);
                list_add(&page->lru, &l_inactive);
        }
yes?
We might even be able to list_splice those pages..
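(A minimal sketch of that simplification -- hypothetical, and assuming no
per-page work such as flag clearing is needed:)

        /* splice the whole isolated batch across in O(1) */
        list_splice_init(&l_hold, &l_inactive);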
* Re: vmscan-give-referenced-active-and-unmapped-pages-a-second-trip-around-the-lru.patch
From: Andrew Morton @ 2008-10-10 22:33 UTC
To: kosaki.motohiro, nickpiggin, linux-mm, riel, lee.schermerhorn
On Fri, 10 Oct 2008 15:25:40 -0700
Andrew Morton <akpm@linux-foundation.org> wrote:
> On Fri, 10 Oct 2008 15:17:01 -0700
> Andrew Morton <akpm@linux-foundation.org> wrote:
>
> > On Wed, 8 Oct 2008 19:03:07 +0900 (JST)
> > KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> wrote:
> >
> > > Hi,
> > >
> > > Nick, Andrew, thank you very much for the good advice.
> > > Your help sped up my investigation.
> > >
> > >
> > > > This patch, like I said when it was first merged, has the problem that
> > > > it can cause large stalls when reclaiming pages.
> > > >
> > > > I actually myself tried a similar thing a long time ago. The problem is
> > > > that after a long period of no reclaiming, your file pages can all end
> > > > up being active and referenced. When the first guy wants to reclaim a
> > > > page, it might have to scan through gigabytes of file pages before being
> > > > able to reclaim a single one.
> > >
> > > I fully agree with this opinion;
> > > having all pages stay on the active list is awful.
> > >
> > > In addition, my measurements show that this patch causes a latency
> > > regression on a really heavy I/O workload:
> > >
> > > 2.6.27-rc8: Throughput 13.4231 MB/sec 4000 clients 4000 procs max_latency=1421988.159 ms
> > > + patch : Throughput 12.0953 MB/sec 4000 clients 4000 procs max_latency=1731244.847 ms
> > >
> > >
> > > > While it would be really nice to be able to just lazily set PageReferenced
> > > > and nothing else in mark_page_accessed, and then do file page aging based
> > > > on the referenced bit, the fact is that we virtually have O(1) reclaim
> > > > for file pages now, and this can make it much more like O(n) (in worst case,
> > > > especially).
> > > >
> > > > I don't think it is right to say "we broke aging and this patch fixes it".
> > > > It's all a big crazy heuristic. Who's to say that the previous behaviour
> > > > wasn't better and this patch breaks it? :)
> > > >
> > > > Anyway, I don't think it is exactly productive to keep patches like this in
> > > > the tree (which don't ever seem intended to be merged) while there are
> > > > other big changes to reclaim there.
> >
> > Well yes. I've been hanging onto these in the hope that someone would
> > work out whether they are changes which we should make.
> >
> >
> > > > Same for vm-dont-run-touch_buffer-during-buffercache-lookups.patch
> > >
> > > I measured it too:
> > >
> > > 2.6.27-rc8: Throughput 13.4231 MB/sec 4000 clients 4000 procs max_latency=1421988.159 ms
> > > + patch : Throughput 11.8494 MB/sec 4000 clients 4000 procs max_latency=3463217.227 ms
> > >
> > > dbench max latency increased by about 2.5x.
> > >
> > > The patch description already describes this risk:
> > > dropping metadata can decrease performance significantly.
> > > That is just what happened here, IMHO.
> >
> > Oh well, that'll suffice, thanks - I'll drop them.
>
> Which means that after vmscan-split-lru-lists-into-anon-file-sets.patch,
> shrink_active_list() simply does
>
>         while (!list_empty(&l_hold)) {
>                 cond_resched();
>                 page = lru_to_page(&l_hold);
>                 list_del(&page->lru);
>                 list_add(&page->lru, &l_inactive);
>         }
>
> yes?
>
> We might even be able to list_splice those pages..
OK, that wasn't a particularly good time to drop those patches.
Here's how shrink_active_list() ended up:
static void shrink_active_list(unsigned long nr_pages, struct zone *zone,
                        struct scan_control *sc, int priority, int file)
{
        unsigned long pgmoved;
        int pgdeactivate = 0;
        unsigned long pgscanned;
        LIST_HEAD(l_hold);      /* The pages which were snipped off */
        LIST_HEAD(l_active);
        LIST_HEAD(l_inactive);
        struct page *page;
        struct pagevec pvec;
        enum lru_list lru;

        lru_add_drain();
        spin_lock_irq(&zone->lru_lock);
        pgmoved = sc->isolate_pages(nr_pages, &l_hold, &pgscanned, sc->order,
                                        ISOLATE_ACTIVE, zone,
                                        sc->mem_cgroup, 1, file);
        /*
         * zone->pages_scanned is used for detect zone's oom
         * mem_cgroup remembers nr_scan by itself.
         */
        if (scan_global_lru(sc)) {
                zone->pages_scanned += pgscanned;
                zone->recent_scanned[!!file] += pgmoved;
        }

        if (file)
                __mod_zone_page_state(zone, NR_ACTIVE_FILE, -pgmoved);
        else
                __mod_zone_page_state(zone, NR_ACTIVE_ANON, -pgmoved);
        spin_unlock_irq(&zone->lru_lock);

        pgmoved = 0;
        while (!list_empty(&l_hold)) {
                cond_resched();
                page = lru_to_page(&l_hold);
                list_del(&page->lru);

                if (unlikely(!page_evictable(page, NULL))) {
                        putback_lru_page(page);
                        continue;
                }

                list_add(&page->lru, &l_inactive);

                if (!page_mapping_inuse(page)) {
                        /*
                         * Bypass use-once, make the next access count. See
                         * mark_page_accessed and shrink_page_list.
                         */
                        SetPageReferenced(page);
                }
        }

        /*
         * Count the referenced pages as rotated, even when they are moved
         * to the inactive list. This helps balance scan pressure between
         * file and anonymous pages in get_scan_ratio.
         */
        zone->recent_rotated[!!file] += pgmoved;

        /*
         * Now put the pages back on the appropriate [file or anon] inactive
         * and active lists.
         */
        pagevec_init(&pvec, 1);
        pgmoved = 0;
        lru = LRU_BASE + file * LRU_FILE;
        spin_lock_irq(&zone->lru_lock);
        while (!list_empty(&l_inactive)) {
                page = lru_to_page(&l_inactive);
                prefetchw_prev_lru_page(page, &l_inactive, flags);
                VM_BUG_ON(PageLRU(page));
                SetPageLRU(page);
                VM_BUG_ON(!PageActive(page));
                ClearPageActive(page);

                list_move(&page->lru, &zone->lru[lru].list);
                mem_cgroup_move_lists(page, lru);
                pgmoved++;
                if (!pagevec_add(&pvec, page)) {
                        __mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
                        spin_unlock_irq(&zone->lru_lock);
                        pgdeactivate += pgmoved;
                        pgmoved = 0;
                        if (buffer_heads_over_limit)
                                pagevec_strip(&pvec);
                        __pagevec_release(&pvec);
                        spin_lock_irq(&zone->lru_lock);
                }
        }
        __mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
        pgdeactivate += pgmoved;
        if (buffer_heads_over_limit) {
                spin_unlock_irq(&zone->lru_lock);
                pagevec_strip(&pvec);
                spin_lock_irq(&zone->lru_lock);
        }

        pgmoved = 0;
        lru = LRU_ACTIVE + file * LRU_FILE;
        while (!list_empty(&l_active)) {
                page = lru_to_page(&l_active);
                prefetchw_prev_lru_page(page, &l_active, flags);
                VM_BUG_ON(PageLRU(page));
                SetPageLRU(page);
                VM_BUG_ON(!PageActive(page));

                list_move(&page->lru, &zone->lru[lru].list);
                mem_cgroup_move_lists(page, lru);
                pgmoved++;
                if (!pagevec_add(&pvec, page)) {
                        __mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
                        pgmoved = 0;
                        spin_unlock_irq(&zone->lru_lock);
                        if (vm_swap_full())
                                pagevec_swap_free(&pvec);
                        __pagevec_release(&pvec);
                        spin_lock_irq(&zone->lru_lock);
                }
        }
        __mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);

        __count_zone_vm_events(PGREFILL, zone, pgscanned);
        __count_vm_events(PGDEACTIVATE, pgdeactivate);
        spin_unlock_irq(&zone->lru_lock);
        if (vm_swap_full())
                pagevec_swap_free(&pvec);

        pagevec_release(&pvec);
}
Note the first use of pgmoved there (the pgmoved = 0 before the l_hold
loop): nothing increments it any more, so the recent_rotated update below
adds zero. erk.
* Re: vmscan-give-referenced-active-and-unmapped-pages-a-second-trip-around-the-lru.patch
From: Rik van Riel @ 2008-10-10 23:56 UTC
To: Andrew Morton; +Cc: kosaki.motohiro, nickpiggin, linux-mm, Lee Schermerhorn
Andrew Morton wrote:
> Which means that after vmscan-split-lru-lists-into-anon-file-sets.patch,
> shrink_active_list() simply does
>
>         while (!list_empty(&l_hold)) {
>                 cond_resched();
>                 page = lru_to_page(&l_hold);
>                 list_del(&page->lru);
>                 list_add(&page->lru, &l_inactive);
>         }
>
> yes?
>
> We might even be able to list_splice those pages..
Not quite. We still need to clear the referenced bits.
In order to better balance the pressure between the file
and anon lists, we may also want to count the number of
referenced mapped file pages.
That would be roughly a 3-line change, which I could
either send against a recent mmotm (is the one on your
site recent enough?) or directly to Linus if you are
sending the split LRU code upstream.
Just let me know which you prefer.
--
All rights reversed.
* Re: vmscan-give-referenced-active-and-unmapped-pages-a-second-trip-around-the-lru.patch
From: Rik van Riel @ 2008-10-10 23:59 UTC
To: Andrew Morton; +Cc: kosaki.motohiro, nickpiggin, linux-mm, lee.schermerhorn
Andrew Morton wrote:
> OK, that wasn't a particularly good time to drop those patches.
>
> Here's how shrink_active_list() ended up:
You're close.
>         while (!list_empty(&l_hold)) {
>                 cond_resched();
>                 page = lru_to_page(&l_hold);
>                 list_del(&page->lru);
>
>                 if (unlikely(!page_evictable(page, NULL))) {
>                         putback_lru_page(page);
>                         continue;
>                 }

These three lines are needed here:

                /* page_referenced clears PageReferenced */
                if (page_mapping_inuse(page) && page_referenced(page))
                        pgmoved++;

>                 list_add(&page->lru, &l_inactive);

That allows us to drop these lines:

>                 if (!page_mapping_inuse(page)) {
>                         /*
>                          * Bypass use-once, make the next access count. See
>                          * mark_page_accessed and shrink_page_list.
>                          */
>                         SetPageReferenced(page);
>                 }

Other than that, it looks good.

>         }
>
>         /*
>          * Count the referenced pages as rotated, even when they are moved
>          * to the inactive list. This helps balance scan pressure between
>          * file and anonymous pages in get_scan_ratio.
>          */
>         zone->recent_rotated[!!file] += pgmoved;
This now automatically does the right thing.
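(For context: recent_rotated feeds the anon/file balancing in
get_scan_ratio(), roughly along these lines -- a simplified sketch of the
split-LRU logic, not the exact -mm code:)

        /*
         * Shift scan pressure toward whichever list had fewer of its
         * recently scanned pages re-referenced (rotated).
         */
        anon_prio = sc->swappiness;
        file_prio = 200 - sc->swappiness;
        ap = (anon_prio + 1) * (zone->recent_scanned[0] + 1) /
                                (zone->recent_rotated[0] + 1);
        fp = (file_prio + 1) * (zone->recent_scanned[1] + 1) /
                                (zone->recent_rotated[1] + 1);
        /* each list is then scanned in proportion to ap and fp */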
--
All rights reversed.
* Re: vmscan-give-referenced-active-and-unmapped-pages-a-second-trip-around-the-lru.patch
From: Andrew Morton @ 2008-10-11 1:42 UTC
To: Rik van Riel; +Cc: kosaki.motohiro, nickpiggin, linux-mm, lee.schermerhorn
On Fri, 10 Oct 2008 19:59:36 -0400 Rik van Riel <riel@redhat.com> wrote:
> Andrew Morton wrote:
>
> > OK, that wasn't a particularly good time to drop those patches.
> >
> > Here's how shrink_active_list() ended up:
>
> You're close.
>
> >         while (!list_empty(&l_hold)) {
> >                 cond_resched();
> >                 page = lru_to_page(&l_hold);
> >                 list_del(&page->lru);
> >
> >                 if (unlikely(!page_evictable(page, NULL))) {
> >                         putback_lru_page(page);
> >                         continue;
> >                 }
>
> These three lines are needed here:
>
>                 /* page_referenced clears PageReferenced */
>                 if (page_mapping_inuse(page) && page_referenced(page))
>                         pgmoved++;
>
> >                 list_add(&page->lru, &l_inactive);
>
> That allows us to drop these lines:
>
> >                 if (!page_mapping_inuse(page)) {
> >                         /*
> >                          * Bypass use-once, make the next access count. See
> >                          * mark_page_accessed and shrink_page_list.
> >                          */
> >                         SetPageReferenced(page);
> >                 }
>
> Other than that, it looks good.
>
> >         }
> >
> >         /*
> >          * Count the referenced pages as rotated, even when they are moved
> >          * to the inactive list. This helps balance scan pressure between
> >          * file and anonymous pages in get_scan_ratio.
> >          */
> >         zone->recent_rotated[!!file] += pgmoved;
>
> This now automatically does the right thing.
hm, OK.
I implemented this as a fix against
vmscan-fix-pagecache-reclaim-referenced-bit-check.patch, but that patch
says
  The -mm tree contains the patch
  vmscan-give-referenced-active-and-unmapped-pages-a-second-trip-around-the-lru.patch
  which gives referenced pagecache pages another trip around the active
  list.  This seems to help keep frequently accessed pagecache pages in
  memory.

  However, it means that pagecache pages that get moved to the
  inactive list do not have their referenced bit set, and the next
  access to the page will not get it moved back to the active list.

  This patch sets the referenced bit on pagecache pages that get
  deactivated, so the next access to the page will promote it back to
  the active list.

  This works because shrink_page_list() will reclaim unmapped pages
  with the referenced bit set.
which isn't true any more.
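(The shrink_page_list() use-once check that changelog relies on looked
roughly like this at the time -- a sketch, details vary between -mm
versions:)

        referenced = page_referenced(page, 1, sc->mem_cgroup);
        /* In active use or really unfreeable?  Activate it. */
        if (sc->order <= PAGE_ALLOC_COSTLY_ORDER &&
                        referenced && page_mapping_inuse(page))
                goto activate_locked;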
Sorry about this mess.
static void shrink_active_list(unsigned long nr_pages, struct zone *zone,
                        struct scan_control *sc, int priority, int file)
{
        unsigned long pgmoved;
        int pgdeactivate = 0;
        unsigned long pgscanned;
        LIST_HEAD(l_hold);      /* The pages which were snipped off */
        LIST_HEAD(l_active);
        LIST_HEAD(l_inactive);
        struct page *page;
        struct pagevec pvec;
        enum lru_list lru;

        lru_add_drain();
        spin_lock_irq(&zone->lru_lock);
        pgmoved = sc->isolate_pages(nr_pages, &l_hold, &pgscanned, sc->order,
                                        ISOLATE_ACTIVE, zone,
                                        sc->mem_cgroup, 1, file);
        /*
         * zone->pages_scanned is used for detect zone's oom
         * mem_cgroup remembers nr_scan by itself.
         */
        if (scan_global_lru(sc)) {
                zone->pages_scanned += pgscanned;
                zone->recent_scanned[!!file] += pgmoved;
        }

        if (file)
                __mod_zone_page_state(zone, NR_ACTIVE_FILE, -pgmoved);
        else
                __mod_zone_page_state(zone, NR_ACTIVE_ANON, -pgmoved);
        spin_unlock_irq(&zone->lru_lock);

        pgmoved = 0;
        while (!list_empty(&l_hold)) {
                cond_resched();
                page = lru_to_page(&l_hold);
                list_del(&page->lru);

                if (unlikely(!page_evictable(page, NULL))) {
                        putback_lru_page(page);
                        continue;
                }

                /* page_referenced clears PageReferenced */
                if (page_mapping_inuse(page) && page_referenced(page))
                        pgmoved++;

                list_add(&page->lru, &l_inactive);
        }

        /*
         * Count the referenced pages as rotated, even when they are moved
         * to the inactive list. This helps balance scan pressure between
         * file and anonymous pages in get_scan_ratio.
         */
        zone->recent_rotated[!!file] += pgmoved;

        /*
         * Now put the pages back on the appropriate [file or anon] inactive
         * and active lists.
         */
        pagevec_init(&pvec, 1);
        pgmoved = 0;
        lru = LRU_BASE + file * LRU_FILE;
        spin_lock_irq(&zone->lru_lock);
        while (!list_empty(&l_inactive)) {
                page = lru_to_page(&l_inactive);
                prefetchw_prev_lru_page(page, &l_inactive, flags);
                VM_BUG_ON(PageLRU(page));
                SetPageLRU(page);
                VM_BUG_ON(!PageActive(page));
                ClearPageActive(page);

                list_move(&page->lru, &zone->lru[lru].list);
                mem_cgroup_move_lists(page, lru);
                pgmoved++;
                if (!pagevec_add(&pvec, page)) {
                        __mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
                        spin_unlock_irq(&zone->lru_lock);
                        pgdeactivate += pgmoved;
                        pgmoved = 0;
                        if (buffer_heads_over_limit)
                                pagevec_strip(&pvec);
                        __pagevec_release(&pvec);
                        spin_lock_irq(&zone->lru_lock);
                }
        }
        __mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
        pgdeactivate += pgmoved;
        if (buffer_heads_over_limit) {
                spin_unlock_irq(&zone->lru_lock);
                pagevec_strip(&pvec);
                spin_lock_irq(&zone->lru_lock);
        }

        pgmoved = 0;
        lru = LRU_ACTIVE + file * LRU_FILE;
        while (!list_empty(&l_active)) {
                page = lru_to_page(&l_active);
                prefetchw_prev_lru_page(page, &l_active, flags);
                VM_BUG_ON(PageLRU(page));
                SetPageLRU(page);
                VM_BUG_ON(!PageActive(page));

                list_move(&page->lru, &zone->lru[lru].list);
                mem_cgroup_move_lists(page, lru);
                pgmoved++;
                if (!pagevec_add(&pvec, page)) {
                        __mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
                        pgmoved = 0;
                        spin_unlock_irq(&zone->lru_lock);
                        if (vm_swap_full())
                                pagevec_swap_free(&pvec);
                        __pagevec_release(&pvec);
                        spin_lock_irq(&zone->lru_lock);
                }
        }
        __mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);

        __count_zone_vm_events(PGREFILL, zone, pgscanned);
        __count_vm_events(PGDEACTIVATE, pgdeactivate);
        spin_unlock_irq(&zone->lru_lock);
        if (vm_swap_full())
                pagevec_swap_free(&pvec);

        pagevec_release(&pvec);
}
* Re: vmscan-give-referenced-active-and-unmapped-pages-a-second-trip-around-the-lru.patch
From: Rik van Riel @ 2008-10-11 1:53 UTC
To: Andrew Morton; +Cc: kosaki.motohiro, nickpiggin, linux-mm, lee.schermerhorn
Andrew Morton wrote:
> I implemented this as a fix against
> vmscan-fix-pagecache-reclaim-referenced-bit-check.patch, but that patch
> says
> which isn't true any more.
> Sorry about this mess.
I'm not sure what else is still in the -mm tree and what got
removed, so I'm not sure what the new comment for the patch
should be.
Maybe the patch could just be folded into an earlier split
LRU patch now since there no longer is a special case for
page cache pages?
Btw, a few more cleanups to shrink_active_list are possible
now that every page always goes to the inactive list.
> static void shrink_active_list(unsigned long nr_pages, struct zone *zone,
>                         struct scan_control *sc, int priority, int file)
> {
>         unsigned long pgmoved;
>         int pgdeactivate = 0;
>         unsigned long pgscanned;
>         LIST_HEAD(l_hold);      /* The pages which were snipped off */
>         LIST_HEAD(l_active);
We no longer need l_active.
>         /*
>          * Count the referenced pages as rotated, even when they are moved
>          * to the inactive list. This helps balance scan pressure between
>          * file and anonymous pages in get_scan_ratio.
>          */
>         zone->recent_rotated[!!file] += pgmoved;

This can be rewritten as:

        /*
         * Count referenced pages from currently used mappings as
         * rotated, even though they are moved to the inactive list.
         * This helps balance scan pressure between file and anonymous
         * pages in get_scan_ratio.
         */

>         /*
>          * Now put the pages back on the appropriate [file or anon] inactive
>          * and active lists.
>          */

        /*
         * Move the pages to the [file or anon] inactive list.
         */
We keep the code that moves pages from l_inactive to the inactive
list.
We can throw away the loop that moves pages from l_active to the
active list, because we no longer do that:
>         pgmoved = 0;
>         lru = LRU_ACTIVE + file * LRU_FILE;
>         while (!list_empty(&l_active)) {
>                 page = lru_to_page(&l_active);
>                 prefetchw_prev_lru_page(page, &l_active, flags);
>                 VM_BUG_ON(PageLRU(page));
>                 SetPageLRU(page);
>                 VM_BUG_ON(!PageActive(page));
>
>                 list_move(&page->lru, &zone->lru[lru].list);
>                 mem_cgroup_move_lists(page, lru);
>                 pgmoved++;
>                 if (!pagevec_add(&pvec, page)) {
>                         __mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
>                         pgmoved = 0;
>                         spin_unlock_irq(&zone->lru_lock);
>                         if (vm_swap_full())
>                                 pagevec_swap_free(&pvec);
>                         __pagevec_release(&pvec);
>                         spin_lock_irq(&zone->lru_lock);
>                 }
>         }
>         __mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
These last few lines are useful and should be kept:
>         __count_zone_vm_events(PGREFILL, zone, pgscanned);
>         __count_vm_events(PGDEACTIVATE, pgdeactivate);
>         spin_unlock_irq(&zone->lru_lock);
>         if (vm_swap_full())
>                 pagevec_swap_free(&pvec);
>
>         pagevec_release(&pvec);
> }
--
All rights reversed.
* Re: vmscan-give-referenced-active-and-unmapped-pages-a-second-trip-around-the-lru.patch
From: Andrew Morton @ 2008-10-11 2:21 UTC
To: Rik van Riel; +Cc: kosaki.motohiro, nickpiggin, linux-mm, lee.schermerhorn
On Fri, 10 Oct 2008 21:53:59 -0400 Rik van Riel <riel@redhat.com> wrote:
> Andrew Morton wrote:
>
> > I implemented this as a fix against
> > vmscan-fix-pagecache-reclaim-referenced-bit-check.patch, but that patch
> > says
>
> > which isn't true any more.
>
> > Sorry about this mess.
>
> I'm not sure what else is still in the -mm tree and what got
> removed, so I'm not sure what the new comment for the patch
> should be.
This is getting terrible.
Unfortunately I'm basically dead in the water over here because Stephen
shot through for a month and all the subsystem trees have gone rampant
all over the place.
Apparently mmotm does kinda-compile and kinda-run, but only by luck.
> Maybe the patch could just be folded into an earlier split
> LRU patch now since there no longer is a special case for
> page cache pages?
Yeah, I can do that. Fold all these:
vmscan-split-lru-lists-into-anon-file-sets.patch
vmscan-split-lru-lists-into-anon-file-sets-memcg-fix-handling-of-shmem-migrationv2.patch
vmscan-split-lru-lists-into-anon-file-sets-adjust-quicklists-field-of-proc-meminfo.patch
vmscan-split-lru-lists-into-anon-file-sets-adjust-hugepage-related-field-of-proc-meminfo.patch
vmscan-split-lru-lists-into-anon-file-sets-fix-style-issue-of-get_scan_ratio.patch
vmscan-second-chance-replacement-for-anonymous-pages.patch
vmscan-fix-pagecache-reclaim-referenced-bit-check.patch
vmscan-fix-pagecache-reclaim-referenced-bit-check-fix.patch
vmscan-fix-pagecache-reclaim-referenced-bit-check-fix-fix.patch
except vmscan-second-chance-replacement-for-anonymous-pages.patch isn't
appropriate for folding.
If I join
vmscan-fix-pagecache-reclaim-referenced-bit-check.patch
vmscan-fix-pagecache-reclaim-referenced-bit-check-fix.patch
vmscan-fix-pagecache-reclaim-referenced-bit-check-fix-fix.patch
then I get the below. Can we think of a plausible-sounding changelog for it?
--- a/mm/vmscan.c~vmscan-fix-pagecache-reclaim-referenced-bit-check
+++ a/mm/vmscan.c
@@ -1064,7 +1064,6 @@ static void shrink_active_list(unsigned
         int pgdeactivate = 0;
         unsigned long pgscanned;
         LIST_HEAD(l_hold);      /* The pages which were snipped off */
-        LIST_HEAD(l_active);
         LIST_HEAD(l_inactive);
         struct page *page;
         struct pagevec pvec;
@@ -1095,6 +1094,11 @@ static void shrink_active_list(unsigned
                 cond_resched();
                 page = lru_to_page(&l_hold);
                 list_del(&page->lru);
+
+                /* page_referenced clears PageReferenced */
+                if (page_mapping_inuse(page) && page_referenced(page))
+                        pgmoved++;
+
                 list_add(&page->lru, &l_inactive);
         }
@@ -1103,13 +1107,20 @@ static void shrink_active_list(unsigned
          * to the inactive list. This helps balance scan pressure between
          * file and anonymous pages in get_scan_ratio.
          */
+
+        /*
+         * Count referenced pages from currently used mappings as
+         * rotated, even though they are moved to the inactive list.
+         * This helps balance scan pressure between file and anonymous
+         * pages in get_scan_ratio.
+         */
         zone->recent_rotated[!!file] += pgmoved;

         /*
-         * Now put the pages back on the appropriate [file or anon] inactive
-         * and active lists.
+         * Move the pages to the [file or anon] inactive list.
          */
         pagevec_init(&pvec, 1);
+
         pgmoved = 0;
         lru = LRU_BASE + file * LRU_FILE;
         spin_lock_irq(&zone->lru_lock);
@@ -1142,31 +1153,6 @@ static void shrink_active_list(unsigned
                 pagevec_strip(&pvec);
                 spin_lock_irq(&zone->lru_lock);
         }
-
-        pgmoved = 0;
-        lru = LRU_ACTIVE + file * LRU_FILE;
-        while (!list_empty(&l_active)) {
-                page = lru_to_page(&l_active);
-                prefetchw_prev_lru_page(page, &l_active, flags);
-                VM_BUG_ON(PageLRU(page));
-                SetPageLRU(page);
-                VM_BUG_ON(!PageActive(page));
-
-                list_move(&page->lru, &zone->lru[lru].list);
-                mem_cgroup_move_lists(page, true);
-                pgmoved++;
-                if (!pagevec_add(&pvec, page)) {
-                        __mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
-                        pgmoved = 0;
-                        spin_unlock_irq(&zone->lru_lock);
-                        if (vm_swap_full())
-                                pagevec_swap_free(&pvec);
-                        __pagevec_release(&pvec);
-                        spin_lock_irq(&zone->lru_lock);
-                }
-        }
-        __mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
-
         __count_zone_vm_events(PGREFILL, zone, pgscanned);
         __count_vm_events(PGDEACTIVATE, pgdeactivate);
         spin_unlock_irq(&zone->lru_lock);
_
> We can throw away the loop that moves pages from l_active to the
> active list, because we no longer do that:
yup.
Latest version:
static void shrink_active_list(unsigned long nr_pages, struct zone *zone,
                        struct scan_control *sc, int priority, int file)
{
        unsigned long pgmoved;
        int pgdeactivate = 0;
        unsigned long pgscanned;
        LIST_HEAD(l_hold);      /* The pages which were snipped off */
        LIST_HEAD(l_inactive);
        struct page *page;
        struct pagevec pvec;
        enum lru_list lru;

        lru_add_drain();
        spin_lock_irq(&zone->lru_lock);
        pgmoved = sc->isolate_pages(nr_pages, &l_hold, &pgscanned, sc->order,
                                        ISOLATE_ACTIVE, zone,
                                        sc->mem_cgroup, 1, file);
        /*
         * zone->pages_scanned is used for detect zone's oom
         * mem_cgroup remembers nr_scan by itself.
         */
        if (scan_global_lru(sc)) {
                zone->pages_scanned += pgscanned;
                zone->recent_scanned[!!file] += pgmoved;
        }

        if (file)
                __mod_zone_page_state(zone, NR_ACTIVE_FILE, -pgmoved);
        else
                __mod_zone_page_state(zone, NR_ACTIVE_ANON, -pgmoved);
        spin_unlock_irq(&zone->lru_lock);

        pgmoved = 0;
        while (!list_empty(&l_hold)) {
                cond_resched();
                page = lru_to_page(&l_hold);
                list_del(&page->lru);

                if (unlikely(!page_evictable(page, NULL))) {
                        putback_lru_page(page);
                        continue;
                }

                /* page_referenced clears PageReferenced */
                if (page_mapping_inuse(page) && page_referenced(page))
                        pgmoved++;

                list_add(&page->lru, &l_inactive);
        }

        /*
         * Count the referenced pages as rotated, even when they are moved
         * to the inactive list. This helps balance scan pressure between
         * file and anonymous pages in get_scan_ratio.
         */

        /*
         * Count referenced pages from currently used mappings as
         * rotated, even though they are moved to the inactive list.
         * This helps balance scan pressure between file and anonymous
         * pages in get_scan_ratio.
         */
        zone->recent_rotated[!!file] += pgmoved;

        /*
         * Move the pages to the [file or anon] inactive list.
         */
        pagevec_init(&pvec, 1);

        pgmoved = 0;
        lru = LRU_BASE + file * LRU_FILE;
        spin_lock_irq(&zone->lru_lock);
        while (!list_empty(&l_inactive)) {
                page = lru_to_page(&l_inactive);
                prefetchw_prev_lru_page(page, &l_inactive, flags);
                VM_BUG_ON(PageLRU(page));
                SetPageLRU(page);
                VM_BUG_ON(!PageActive(page));
                ClearPageActive(page);

                list_move(&page->lru, &zone->lru[lru].list);
                mem_cgroup_move_lists(page, lru);
                pgmoved++;
                if (!pagevec_add(&pvec, page)) {
                        __mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
                        spin_unlock_irq(&zone->lru_lock);
                        pgdeactivate += pgmoved;
                        pgmoved = 0;
                        if (buffer_heads_over_limit)
                                pagevec_strip(&pvec);
                        __pagevec_release(&pvec);
                        spin_lock_irq(&zone->lru_lock);
                }
        }
        __mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
        pgdeactivate += pgmoved;
        if (buffer_heads_over_limit) {
                spin_unlock_irq(&zone->lru_lock);
                pagevec_strip(&pvec);
                spin_lock_irq(&zone->lru_lock);
        }

        __count_zone_vm_events(PGREFILL, zone, pgscanned);
        __count_vm_events(PGDEACTIVATE, pgdeactivate);
        spin_unlock_irq(&zone->lru_lock);
        if (vm_swap_full())
                pagevec_swap_free(&pvec);

        pagevec_release(&pvec);
}
ho hum. I'll do a mmotm right now.
My queue up to and including
mmap-handle-mlocked-pages-during-map-remap-unmap-mlock-update-locked_vm-on-munmap-of-mlocked-region.patch
(against 2.6.27-rc9) is at http://userweb.kernel.org/~akpm/rvr.gz
* Re: vmscan-give-referenced-active-and-unmapped-pages-a-second-trip-around-the-lru.patch
From: Rik van Riel @ 2008-10-11 20:46 UTC
To: Andrew Morton; +Cc: kosaki.motohiro, nickpiggin, linux-mm, lee.schermerhorn
Andrew Morton wrote:
> then I get the below. Can we think of a plausible-sounding changelog for it?
Does this sound reasonable?
  Moving referenced pages back to the head of the active list creates
  a huge scalability problem, because by the time a large memory system
  finally runs out of free memory, every single page in the system will
  have been referenced.

  Not only do we not have the time to scan every single page on the
  active list, but since they will all have the referenced bit set,
  that bit conveys no useful information.

  A more scalable solution is to just move every page that hits the
  end of the active list to the inactive list.

  We clear the referenced bit off of mapped pages, which need just one
  reference to be moved back onto the active list.

  Unmapped pages will be moved back to the active list after two
  references (see mark_page_accessed). We preserve the PG_referenced
  flag on unmapped pages to preserve accesses that were made while the
  page was on the active list.
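(For reference, mark_page_accessed() implemented that two-reference
promotion roughly as follows -- a sketch of the mm/swap.c code of that
era, so treat the details as approximate:)

        void mark_page_accessed(struct page *page)
        {
                if (!PageActive(page) && PageReferenced(page) &&
                                PageLRU(page)) {
                        /* second access: promote to the active list */
                        activate_page(page);
                        ClearPageReferenced(page);
                } else if (!PageReferenced(page)) {
                        /* first access: just remember it */
                        SetPageReferenced(page);
                }
        }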
> @@ -1103,13 +1107,20 @@ static void shrink_active_list(unsigned
>           * to the inactive list. This helps balance scan pressure between
>           * file and anonymous pages in get_scan_ratio.
>           */
> +
> +        /*
> +         * Count referenced pages from currently used mappings as
> +         * rotated, even though they are moved to the inactive list.
> +         * This helps balance scan pressure between file and anonymous
> +         * pages in get_scan_ratio.
> +         */
>          zone->recent_rotated[!!file] += pgmoved;
You might want to remove the obsoleted comment :)
--
All rights reversed.
* Re: vmscan-give-referenced-active-and-unmapped-pages-a-second-trip-around-the-lru.patch
From: KOSAKI Motohiro @ 2008-10-12 13:31 UTC
To: Rik van Riel
Cc: kosaki.motohiro, Andrew Morton, nickpiggin, linux-mm, lee.schermerhorn
Hi,

I measured mmotm-10-10 today, and it shows a very good result:
mainline: Throughput 13.4231 MB/sec 4000 clients 4000 procs max_latency=1421988.159 ms
mmotm-10-02: Throughput 7.0354 MB/sec 4000 clients 4000 procs max_latency=2369213.380 ms
mmotm-10-10: Throughput 14.2802 MB/sec 4000 clients 4000 procs max_latency=1564716.557 ms
Thanks!