* [PATCH 0/3] Lumpy Reclaim V6
From: Andy Whitcroft @ 2007-04-20 15:03 UTC
To: Andrew Morton; +Cc: linux-mm, linux-kernel, Andy Whitcroft, Mel Gorman
Following this email are three patches to the lumpy reclaim
algorithm. These apply on top of the lumpy patches in 2.6.21-rc6-mm1
(lumpy V5), making lumpy V6. The first enables kswapd to apply
reclaim at the order of the allocations which trigger background
reclaim. The second increases pressure on the area at the end of
the inactive list. The last introduces a new symbolic constant
representing the boundary between easily reclaimed areas and those
where extra pressure is applicable. Andrew, please consider for -mm.
Comparative testing between lumpy-V5 and lumpy-V6 shows a
considerable improvement when under extreme load. lumpy-V5 relies on
the pages in an area being rotated onto the inactive list together
and remaining inactive long enough to be reclaimed from that list.
Under high load a significant portion of the pages return to the
active list or are referenced before this can occur. Lumpy-V6 targets
all LRU pages in the area, greatly increasing the chance of reclaiming
it completely.
kswapd-use-reclaim-order-in-background-reclaim: When an allocator
has to dip below the low water mark for a zone, kswapd is awoken
to start background reclaim. Make kswapd use the highest order
of these allocations to define the order at which it reclaims.
lumpy-increase-pressure-at-the-end-of-the-inactive-list: When
reclaiming at higher order, target all pages in the contiguous
area for reclaim, including active and recently referenced pages.
This increases the chances of that area becoming free.
introduce-HIGH_ORDER-delineating-easily-reclaimable-orders:
The memory allocator treats lower and higher order allocations
slightly differently. Lumpy reclaim also changes behaviour at
this same boundary. Pull out the magic numbers and replace them
with a symbolic constant.
Against: 2.6.21-rc6-mm1
-apw
Changes in lumpy V5:
Andy Whitcroft:
lumpy: back out removal of active check in isolate_lru_pages
lumpy: only count taken pages as scanned
Changes in lumpy V4:
Andy Whitcroft:
lumpy: isolate_lru_pages wants to specifically take active
or inactive pages
lumpy: ensure that we compare PageActive and active safely
lumpy: update commentary on subtle comparisons and rounding assumptions
lumpy: only check for valid pages when holes are present
Changes in lumpy V3:
Adrian Bunk:
lumpy-reclaim-cleanup
Andrew Morton:
lumpy-reclaim-v2-page_to_pfn-fix
lumpy-reclaim-v2-tidy
Andy Whitcroft:
lumpy: ensure we respect zone boundaries
lumpy: take the other active/inactive pages in the area
* [PATCH 1/3] kswapd: use reclaim order in background reclaim
From: Andy Whitcroft @ 2007-04-20 15:03 UTC
To: Andrew Morton; +Cc: linux-mm, linux-kernel, Andy Whitcroft, Mel Gorman
When an allocator has to dip below the low water mark for a
zone, kswapd is awoken to start background reclaim. The highest
order of these dipping allocations is accumulated on the zone.
With this patch kswapd uses this hint to force reclaim at that
order via balance_pgdat().
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Acked-by: Mel Gorman <mel@csn.ul.ie>
---
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 428da1a..466435f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1212,6 +1212,7 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order)
.may_swap = 1,
.swap_cluster_max = SWAP_CLUSTER_MAX,
.swappiness = vm_swappiness,
+ .order = order,
};
/*
* temp_priority is used to remember the scanning priority at which
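For reference, the hint mentioned above is gathered on the allocator's
wakeup path and consumed by kswapd's main loop. Roughly (a sketch from
memory of the surrounding code, not part of this patch; field and helper
names such as kswapd_max_order should be checked against the tree):

	/* Allocator side: remember the largest order that woke kswapd. */
	void wakeup_kswapd(struct zone *zone, int order)
	{
		pg_data_t *pgdat = zone->zone_pgdat;

		if (pgdat->kswapd_max_order < order)
			pgdat->kswapd_max_order = order;
		if (waitqueue_active(&pgdat->kswapd_wait))
			wake_up_interruptible(&pgdat->kswapd_wait);
	}

	/* kswapd side: pick up the recorded order and reclaim at it. */
	static int kswapd(void *p)
	{
		pg_data_t *pgdat = p;
		int order;

		for ( ; ; ) {
			/* ...sleep until wakeup_kswapd() wakes us... */
			order = pgdat->kswapd_max_order;
			pgdat->kswapd_max_order = 0;
			balance_pgdat(pgdat, order);
		}
	}

With the hunk above, the order recorded there now propagates into the
scan_control used throughout reclaim.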
* [PATCH 2/3] lumpy: increase pressure at the end of the inactive list
From: Andy Whitcroft @ 2007-04-20 15:04 UTC
To: Andrew Morton; +Cc: linux-mm, linux-kernel, Andy Whitcroft, Mel Gorman
Having selected an area at the end of the inactive list, reclaim is
attempted for all LRU pages within that contiguous area. Currently,
any pages in this area found to still be active or referenced are
rotated back to the active list as normal and the rest reclaimed.
At low orders there is a reasonable likelihood of finding contiguous
inactive areas for reclaim. However, when reclaiming at higher order
there is a very low chance of all pages in the area being inactive,
unreferenced and therefore reclaimable.
This patch modifies behaviour when reclaiming at higher order
(order >= 4). All LRU pages within the target area are reclaimed,
including both active and recently referenced pages.
[mel@csn.ul.ie: additionally apply pressure to referenced pages]
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Acked-by: Mel Gorman <mel@csn.ul.ie>
---
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 466435f..e5e77fb 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -472,7 +472,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
referenced = page_referenced(page, 1);
/* In active use or really unfreeable? Activate it. */
- if (referenced && page_mapping_inuse(page))
+ if (sc->order <= 3 && referenced && page_mapping_inuse(page))
goto activate_locked;
#ifdef CONFIG_SWAP
@@ -505,7 +505,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
}
if (PageDirty(page)) {
- if (referenced)
+ if (sc->order <= 3 && referenced)
goto keep_locked;
if (!may_enter_fs)
goto keep_locked;
@@ -599,6 +599,7 @@ keep:
*
* returns 0 on success, -ve errno on failure.
*/
+#define ISOLATE_BOTH -1 /* Isolate both active and inactive pages. */
static int __isolate_lru_page(struct page *page, int active)
{
int ret = -EINVAL;
@@ -608,7 +609,8 @@ static int __isolate_lru_page(struct page *page, int active)
* dealing with comparible boolean values. Take the logical not
* of each.
*/
- if (PageLRU(page) && (!PageActive(page) == !active)) {
+ if (PageLRU(page) && (active == ISOLATE_BOTH ||
+ (!PageActive(page) == !active))) {
ret = -EBUSY;
if (likely(get_page_unless_zero(page))) {
/*
@@ -729,6 +731,26 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
}
/*
+ * deactivate_pages() is a helper for shrink_active_list(), it deactivates
+ * all active pages on the passed list.
+ */
+static unsigned long deactivate_pages(struct list_head *page_list)
+{
+ int nr_active = 0;
+ struct list_head *entry;
+
+ list_for_each(entry, page_list) {
+ struct page *page = list_entry(entry, struct page, lru);
+ if (PageActive(page)) {
+ ClearPageActive(page);
+ nr_active++;
+ }
+ }
+
+ return nr_active;
+}
+
+/*
* shrink_inactive_list() is a helper for shrink_zone(). It returns the number
* of reclaimed pages
*/
@@ -749,11 +771,17 @@ static unsigned long shrink_inactive_list(unsigned long max_scan,
unsigned long nr_taken;
unsigned long nr_scan;
unsigned long nr_freed;
+ unsigned long nr_active;
nr_taken = isolate_lru_pages(sc->swap_cluster_max,
&zone->inactive_list,
- &page_list, &nr_scan, sc->order, 0);
- __mod_zone_page_state(zone, NR_INACTIVE, -nr_taken);
+ &page_list, &nr_scan, sc->order,
+ (sc->order > 3)? ISOLATE_BOTH : 0);
+ nr_active = deactivate_pages(&page_list);
+
+ __mod_zone_page_state(zone, NR_ACTIVE, -nr_active);
+ __mod_zone_page_state(zone, NR_INACTIVE,
+ -(nr_taken - nr_active));
zone->pages_scanned += nr_scan;
zone->total_scanned += nr_scan;
spin_unlock_irq(&zone->lru_lock);
* [PATCH 3/3] introduce HIGH_ORDER delineating easily reclaimable orders
From: Andy Whitcroft @ 2007-04-20 15:04 UTC
To: Andrew Morton; +Cc: linux-mm, linux-kernel, Andy Whitcroft, Mel Gorman
The memory allocator treats lower order (order <= 3) and higher order
(order >= 4) allocations in slightly different ways. As lower orders
are much more likely to be available and also more likely to be
simply reclaimed it is deemed reasonable to wait longer for those.
Lumpy reclaim also changes behaviour at this same boundary, more
aggressively targeting pages in reclaim at higher order.
This patch removes all these magic numbers and replaces them
with a constant, HIGH_ORDER.
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Acked-by: Mel Gorman <mel@csn.ul.ie>
---
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8c87d79..f9d2ced 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -25,6 +25,13 @@
#endif
#define MAX_ORDER_NR_PAGES (1 << (MAX_ORDER - 1))
+/*
+ * The boundary between small and large allocations. That is between
+ * allocation orders which should colesce naturally under reasonable
+ * reclaim pressure and those which will not.
+ */
+#define HIGH_ORDER 3
+
#ifdef CONFIG_PAGE_GROUP_BY_MOBILITY
#define MIGRATE_UNMOVABLE 0
#define MIGRATE_RECLAIMABLE 1
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d7e33cb..44786d9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1768,7 +1768,7 @@ nofail_alloc:
*/
do_retry = 0;
if (!(gfp_mask & __GFP_NORETRY)) {
- if ((order <= 3) || (gfp_mask & __GFP_REPEAT))
+ if ((order <= HIGH_ORDER) || (gfp_mask & __GFP_REPEAT))
do_retry = 1;
if (gfp_mask & __GFP_NOFAIL)
do_retry = 1;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index e5e77fb..79aedcb 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -472,7 +472,8 @@ static unsigned long shrink_page_list(struct list_head *page_list,
referenced = page_referenced(page, 1);
/* In active use or really unfreeable? Activate it. */
- if (sc->order <= 3 && referenced && page_mapping_inuse(page))
+ if (sc->order <= HIGH_ORDER &&
+ referenced && page_mapping_inuse(page))
goto activate_locked;
#ifdef CONFIG_SWAP
@@ -505,7 +506,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
}
if (PageDirty(page)) {
- if (sc->order <= 3 && referenced)
+ if (sc->order <= HIGH_ORDER && referenced)
goto keep_locked;
if (!may_enter_fs)
goto keep_locked;
@@ -774,9 +775,9 @@ static unsigned long shrink_inactive_list(unsigned long max_scan,
unsigned long nr_active;
nr_taken = isolate_lru_pages(sc->swap_cluster_max,
- &zone->inactive_list,
- &page_list, &nr_scan, sc->order,
- (sc->order > 3)? ISOLATE_BOTH : 0);
+ &zone->inactive_list,
+ &page_list, &nr_scan, sc->order,
+ (sc->order > HIGH_ORDER)? ISOLATE_BOTH : 0);
nr_active = deactivate_pages(&page_list);
__mod_zone_page_state(zone, NR_ACTIVE, -nr_active);
* Re: [PATCH 2/3] lumpy: increase pressure at the end of the inactive list
From: Andrew Morton @ 2007-04-21 8:24 UTC
To: Andy Whitcroft; +Cc: linux-mm, linux-kernel, Mel Gorman
On Fri, 20 Apr 2007 16:04:04 +0100 Andy Whitcroft <apw@shadowen.org> wrote:
>
> Having selected an area at the end of the inactive list, reclaim is
> attempted for all LRU pages within that contiguous area. Currently,
> any pages in this area found to still be active or referenced are
> rotated back to the active list as normal and the rest reclaimed.
> At low orders there is a reasonable likelihood of finding contiguous
> inactive areas for reclaim. However, when reclaiming at higher order
> there is a very low chance of all pages in the area being inactive,
> unreferenced and therefore reclaimable.
>
> This patch modifies behaviour when reclaiming at higher order
> (order >= 4). All LRU pages within the target area are reclaimed,
> including both active and recently referenced pages.
um, OK, I guess.
Should we use a value smaller than 4 if PAGE_SIZE > 4k? I mean, users of the
page allocator usually request a number of bytes, not a number of pages.
Order 3 allocations on 64k pagesize will be far less common than on 4k
pagesize, no?
And is there a relationship between this magic 4 and the magic 3 in
__alloc_pages()? (Which has the same PAGE_SIZE problem, btw)
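If we did want it PAGE_SIZE-relative, one way (purely illustrative, and
using the HIGH_ORDER name that patch 3/3 introduces) would be to express
the boundary in bytes and derive the order from PAGE_SHIFT:

	/*
	 * Treat allocations up to 32KB as "easily reclaimed" and derive
	 * the order cutoff from PAGE_SHIFT, so a 64KB-page system would
	 * get order 0 where a 4KB-page system gets order 3.
	 */
	#define EASY_RECLAIM_SHIFT	15	/* 32KB */
	#if EASY_RECLAIM_SHIFT > PAGE_SHIFT
	#define HIGH_ORDER	(EASY_RECLAIM_SHIFT - PAGE_SHIFT)
	#else
	#define HIGH_ORDER	0
	#endif

(EASY_RECLAIM_SHIFT is a made-up name here; the 32KB figure simply matches
the existing order-3 cutoff on 4KB pages.)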
I must say that this is a pretty grotty-looking patch.
> [mel@csn.ul.ie: additionally apply pressure to referenced pages]
> Signed-off-by: Andy Whitcroft <apw@shadowen.org>
> Acked-by: Mel Gorman <mel@csn.ul.ie>
> ---
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 466435f..e5e77fb 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -472,7 +472,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>
> referenced = page_referenced(page, 1);
> /* In active use or really unfreeable? Activate it. */
> - if (referenced && page_mapping_inuse(page))
> + if (sc->order <= 3 && referenced && page_mapping_inuse(page))
The oft-occurring magic "3" needs a #define.
> @@ -599,6 +599,7 @@ keep:
> *
> * returns 0 on success, -ve errno on failure.
> */
> +#define ISOLATE_BOTH -1 /* Isolate both active and inactive pages. */
> static int __isolate_lru_page(struct page *page, int active)
> {
> int ret = -EINVAL;
> @@ -608,7 +609,8 @@ static int __isolate_lru_page(struct page *page, int active)
> * dealing with comparible boolean values. Take the logical not
> * of each.
> */
> - if (PageLRU(page) && (!PageActive(page) == !active)) {
> + if (PageLRU(page) && (active == ISOLATE_BOTH ||
> + (!PageActive(page) == !active))) {
So we have a nice enumerated value but we only half-use it: sometimes we
implicitly assume that ISOLATE_BOTH has a non-zero value, which rather
takes away from the whole point of creating ISOLATE_BOTH in the first
place.
Cleaner to do:
	#define ISOLATE_INACTIVE	0
	#define ISOLATE_ACTIVE		1
	#define ISOLATE_BOTH		2

	if (!PageLRU(page))
		return;		/* save a tabstop! */
	if (active != ISOLATE_BOTH) {
		if (PageActive(page) && active != ISOLATE_ACTIVE)
			return;
		if (!PageActive(page) && active != ISOLATE_INACTIVE)
			return;
	}
	<isolate the page>
or some such. At present it is all very confused.
And the comment describing the `active' arg to __isolate_lru_page() needs
to be updated.
And the name `active' is now clearly inappropriate. It needs to be renamed
`mode' or something.
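Putting those together, the helper might end up looking something like
this (just a sketch of the shape, with the `mode' name and ISOLATE_*
values taken from the suggestions above rather than from any posted
patch):

	/* Which pages __isolate_lru_page() may take off the LRU. */
	#define ISOLATE_INACTIVE	0	/* inactive pages only */
	#define ISOLATE_ACTIVE		1	/* active pages only */
	#define ISOLATE_BOTH		2	/* both active and inactive */

	/*
	 * Attempt to take @page off the LRU.  @mode selects which pages
	 * are eligible.  Returns 0 on success, -EBUSY if the page could
	 * not be pinned, -EINVAL if it is not an eligible LRU page.
	 */
	static int __isolate_lru_page(struct page *page, int mode)
	{
		if (!PageLRU(page))
			return -EINVAL;

		if (mode != ISOLATE_BOTH) {
			if (PageActive(page) && mode != ISOLATE_ACTIVE)
				return -EINVAL;
			if (!PageActive(page) && mode != ISOLATE_INACTIVE)
				return -EINVAL;
		}

		if (!get_page_unless_zero(page))
			return -EBUSY;

		/* We hold a reference and the page is on the LRU: take it. */
		ClearPageLRU(page);
		return 0;
	}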
> ret = -EBUSY;
> if (likely(get_page_unless_zero(page))) {
> /*
> @@ -729,6 +731,26 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
> }
>
> /*
> + * deactivate_pages() is a helper for shrink_active_list(), it deactivates
> + * all active pages on the passed list.
> + */
> +static unsigned long deactivate_pages(struct list_head *page_list)
The phrase "deactivate a page" normally means "move it from the active list
to the inactive list". But that isn't what this function does. Something
like clear_active_flags(), maybe?
> +{
> + int nr_active = 0;
> + struct list_head *entry;
> +
> + list_for_each(entry, page_list) {
> + struct page *page = list_entry(entry, struct page, lru);
list_for_each_entry()?
> + if (PageActive(page)) {
> + ClearPageActive(page);
> + nr_active++;
> + }
> + }
> +
> + return nr_active;
> +}
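Folding those two comments together, the follow-up cleanup might look
roughly like this (a sketch; the clear_active_flags() name is only the
suggestion above, not what was posted):

	/*
	 * Clear PageActive on every active page in the isolated list and
	 * return how many were cleared, so the caller can adjust the
	 * NR_ACTIVE/NR_INACTIVE counters accordingly.
	 */
	static unsigned long clear_active_flags(struct list_head *page_list)
	{
		unsigned long nr_active = 0;
		struct page *page;

		list_for_each_entry(page, page_list, lru) {
			if (PageActive(page)) {
				ClearPageActive(page);
				nr_active++;
			}
		}

		return nr_active;
	}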
* Re: [PATCH 3/3] introduce HIGH_ORDER delineating easily reclaimable orders
From: Andrew Morton @ 2007-04-21 8:28 UTC
To: Andy Whitcroft; +Cc: linux-mm, linux-kernel, Mel Gorman
On Fri, 20 Apr 2007 16:04:36 +0100 Andy Whitcroft <apw@shadowen.org> wrote:
> The memory allocator treats lower order (order <= 3) and higher order
> (order >= 4) allocations in slightly different ways. As lower orders
> are much more likely to be available and also more likely to be
> simply reclaimed it is deemed reasonable to wait longer for those.
> Lumpy reclaim also changes behaviour at this same boundary, more
> aggressively targeting pages in reclaim at higher order.
>
> This patch removes all these magic numbers and replaces them
> with a constant, HIGH_ORDER.
oh, there we go.
It would have been better to have patched page_alloc.c independently, then
to have used HIGH_ORDER in "lumpy: increase pressure at the end of the inactive
list".
The name HIGH_ORDER is a bit squidgy. I'm not sure what would be better though.
PAGE_ALLOC_CLUSTER_MAX?
It'd be interesting to turn this into a runtime tunable, perhaps.
* Re: [PATCH 3/3] introduce HIGH_ORDER delineating easily reclaimable orders
From: Andrew Morton @ 2007-04-21 8:32 UTC
To: Andy Whitcroft, linux-mm, linux-kernel, Mel Gorman
On Sat, 21 Apr 2007 01:28:43 -0700 Andrew Morton <akpm@linux-foundation.org> wrote:
> It would have been better to have patched page_alloc.c independently, then
> to have used HIGH_ORDER in "lumpy: increase pressure at the end of the inactive
> list".
Actually that doesn't matter, because I plan on lumping all the lumpy patches
together into one lump.
I was going to duck patches #2 and #3, such was my outrage. But given that
it's all lined up to be a single patch, followup cleanup patches will fit in
OK. Please.
* Re: [PATCH 3/3] introduce HIGH_ORDER delineating easily reclaimable orders
From: Andy Whitcroft @ 2007-04-23 10:23 UTC
To: Andrew Morton; +Cc: linux-mm, linux-kernel, Mel Gorman
Andrew Morton wrote:
> On Sat, 21 Apr 2007 01:28:43 -0700 Andrew Morton <akpm@linux-foundation.org> wrote:
>
>> It would have been better to have patched page_alloc.c independently, then
>> to have used HIGH_ORDER in "lumpy: increase pressure at the end of the inactive
>> list".
>
> Actually that doesn't matter, because I plan on lumping all the lumpy patches
> together into one lump.
>
> I was going to duck patches #2 and #3, such was my outrage. But given that
> it's all lined up to be a single patch, followup cleanup patches will fit in
> OK. Please.
Yes. It's funny how you can get so close to a change that you can no
longer see the obvious warts on it.
I am actually travelling today, so it'll be tomorrow now. But I'll
roll the cleanups and get them to you. I can also offer you a clean
drop of the lumpy stack with the HIGH_ORDER change pulled out to the
top once you are happy.
-apw