* [PATCH] swap-prefetch, fix lru_cache_add_tail()
@ 2006-05-17 16:54 Peter Zijlstra
From: Peter Zijlstra @ 2006-05-17 16:54 UTC
To: Con Kolivas, Andrew Morton, linux-kernel, linux-mm
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
lru_cache_add_tail() uses the same per-cpu pagevec as the normal
inactive path. This can cause normal inactive inserts and inactive-tail
inserts to end up on the wrong end of the list.
When the pagevec is filled and flushed by lru_cache_add_tail() while it
still contains normal inactive pages, all of its pages are added to the
inactive tail, and vice versa.
Also, *add_drain*() always flushes this pagevec to the inactive head.
Add a third per-cpu pagevec to avoid this problem.
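For reference, the batching pattern involved looks roughly like the
sketch below (not the literal patched file; lru_cache_add_tail() and
__pagevec_lru_add_tail() come from the swap-prefetch patch set, not
mainline). With a separate lru_add_tail_pvecs, a pagevec flush can only
ever go to one end of the inactive list:

	/*
	 * Sketch: head and tail inserts now batch into separate per-cpu
	 * pagevecs, so each flush sends all of its pages to the same end
	 * of the inactive list.
	 */
	void fastcall lru_cache_add(struct page *page)
	{
		struct pagevec *pvec = &get_cpu_var(lru_add_pvecs);

		page_cache_get(page);
		if (!pagevec_add(pvec, page))		/* pagevec full -> flush */
			__pagevec_lru_add(pvec);	/* inserts at inactive head */
		put_cpu_var(lru_add_pvecs);
	}

	void fastcall lru_cache_add_tail(struct page *page)
	{
		struct pagevec *pvec = &get_cpu_var(lru_add_tail_pvecs);

		page_cache_get(page);
		if (!pagevec_add(pvec, page))
			__pagevec_lru_add_tail(pvec);	/* inserts at inactive tail */
		put_cpu_var(lru_add_tail_pvecs);
	}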
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
mm/swap.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
Index: 2.6-mm/mm/swap.c
===================================================================
--- 2.6-mm.orig/mm/swap.c 2006-05-17 15:39:59.000000000 +0200
+++ 2.6-mm/mm/swap.c 2006-05-17 18:29:42.000000000 +0200
@@ -138,6 +138,7 @@ EXPORT_SYMBOL(mark_page_accessed);
*/
static DEFINE_PER_CPU(struct pagevec, lru_add_pvecs) = { 0, };
static DEFINE_PER_CPU(struct pagevec, lru_add_active_pvecs) = { 0, };
+static DEFINE_PER_CPU(struct pagevec, lru_add_tail_pvecs) = { 0, };
void fastcall lru_cache_add(struct page *page)
{
@@ -159,6 +160,8 @@ void fastcall lru_cache_add_active(struc
put_cpu_var(lru_add_active_pvecs);
}
+static inline void __pagevec_lru_add_tail(struct pagevec *pvec);
+
static void __lru_add_drain(int cpu)
{
struct pagevec *pvec = &per_cpu(lru_add_pvecs, cpu);
@@ -169,6 +172,9 @@ static void __lru_add_drain(int cpu)
pvec = &per_cpu(lru_add_active_pvecs, cpu);
if (pagevec_count(pvec))
__pagevec_lru_add_active(pvec);
+ pvec = &per_cpu(lru_add_tail_pvecs, cpu);
+ if (pagevec_count(pvec))
+ __pagevec_lru_add_tail(pvec);
}
void lru_add_drain(void)
@@ -416,7 +422,7 @@ static inline void __pagevec_lru_add_tai
*/
void fastcall lru_cache_add_tail(struct page *page)
{
- struct pagevec *pvec = &get_cpu_var(lru_add_pvecs);
+ struct pagevec *pvec = &get_cpu_var(lru_add_tail_pvecs);
page_cache_get(page);
if (!pagevec_add(pvec, page))
--
* Re: [PATCH] swap-prefetch, fix lru_cache_add_tail()
2006-05-17 16:54 [PATCH] swap-prefetch, fix lru_cache_add_tail() Peter Zijlstra
@ 2006-05-18 6:39 ` Con Kolivas
From: Con Kolivas @ 2006-05-18 6:39 UTC
To: Peter Zijlstra; +Cc: Andrew Morton, linux-kernel, linux-mm
On Thursday 18 May 2006 02:54, Peter Zijlstra wrote:
> From: Peter Zijlstra <a.p.zijlstra@chello.nl>
>
> lru_cache_add_tail() uses the same per-cpu pagevec as the normal
> inactive path. This can cause normal inactive inserts and inactive-tail
> inserts to end up on the wrong end of the list.
>
> When the pagevec is filled and flushed by lru_cache_add_tail() while it
> still contains normal inactive pages, all of its pages are added to the
> inactive tail, and vice versa.
>
> Also, *add_drain*() always flushes this pagevec to the inactive head.
>
> Add a third per-cpu pagevec to avoid this problem.
Thanks!
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Con Kolivas <kernel@kolivas.org>
--
-ck
--