* [PATCH RFC] higher order allocs 2.4.10-pre9-recycle-R1
@ 2001-09-15 22:53 Roger Larsson
2001-09-16 16:28 ` Rik van Riel
0 siblings, 1 reply; 2+ messages in thread
From: Roger Larsson @ 2001-09-15 22:53 UTC (permalink / raw)
To: Stephan von Krawczynski; +Cc: linux-mm
[-- Attachment #1: Type: text/plain, Size: 1086 bytes --]
Hi again,
Summary: Keep more pages free if lots of higher order pages have been used.
* Suppose freed higher order pages are not returned to the free list
but to a separate per-zone, per-order list.
Then they will not be missing when needed again later.
But there are drawbacks... such pages will not be merged, and they can't
be used for lower order allocs if they are never needed again...
So, what will happen if they are placed on the ordinary free list - but
not counted as free?
* We will free more pages, which makes it less likely that they will
be needed. But they are still there for fast allocs, and they can
merge into higher order pages.
I have made some test runs - results are close to plain 2.4.10-pre9,
actually slightly better for all but two of my test cases.
A diff of two files bigger than RAM got half the throughput - why?
mmap002 (an attempt to use all memory) took more than three times as long -
less memory to use at once, OK, and not necessarily a bad thing.
(I will be away from my computer for some days - back on Tuesday)
/RogerL
--
Roger Larsson
Skelleftea
Sweden
[-- Attachment #2: patch-2.4.10-pre9-recycle-R1 --]
[-- Type: text/x-diff, Size: 2232 bytes --]
*******************************************
Patch prepared by: roger.larsson@norran.net
Name of file: /home/roger/patches/patch-2.4.10-pre9-recycle-R1
--- linux/mm/page_alloc.c.orig Sat Sep 15 17:30:32 2001
+++ linux/mm/page_alloc.c Sun Sep 16 00:01:24 2001
@@ -104,7 +104,12 @@
spin_lock_irqsave(&zone->lock, flags);
- zone->free_pages -= mask;
+ area->recycled++;
+ if (area->recycled <= 0)
+ area->recycled=1;
+
+ if (!order || area->recycled < 0)
+ zone->free_pages -= mask;
while (mask + (1 << (MAX_ORDER-1))) {
struct page *buddy1, *buddy2;
@@ -193,9 +198,14 @@
index = page - zone->zone_mem_map;
if (curr_order != MAX_ORDER-1)
MARK_USED(index, curr_order, area);
- zone->free_pages -= 1 << order;
page = expand(zone, page, index, order, curr_order, area);
+ /* use initial area, requested order */
+ area=zone->free_area + order;
+ area->recycled--; /* might go neg, fixed in free */
+ if (!order || area->recycled < 0)
+ zone->free_pages -= 1 << order;
+
spin_unlock_irqrestore(&zone->lock, flags);
set_page_count(page, 1);
@@ -653,7 +663,8 @@
if (zone->size) {
spin_lock_irqsave(&zone->lock, flags);
for (order = 0; order < MAX_ORDER; order++) {
- head = &(zone->free_area + order)->free_list;
+ free_area_t *area = zone->free_area + order;
+ head = &area->free_list;
curr = head;
nr = 0;
for (;;) {
@@ -663,8 +674,9 @@
nr++;
}
total += nr * (1 << order);
- printk("%lu*%lukB ", nr,
- (PAGE_SIZE>>10) << order);
+ printk("%lu/%ld*%lukB ", nr,
+ area->recycled,
+ (PAGE_SIZE>>10) << order);
}
spin_unlock_irqrestore(&zone->lock, flags);
}
@@ -891,6 +903,7 @@
bitmap_size = LONG_ALIGN(bitmap_size+1);
zone->free_area[i].map =
(unsigned long *) alloc_bootmem_node(pgdat, bitmap_size);
+ zone->free_area[i].recycled = 0;
}
}
build_zonelists(pgdat);
--- linux/include/linux/mmzone.h.orig Sat Sep 15 21:58:47 2001
+++ linux/include/linux/mmzone.h Sat Sep 15 22:01:29 2001
@@ -21,6 +21,7 @@
typedef struct free_area_struct {
struct list_head free_list;
unsigned long *map;
+ long recycled;
} free_area_t;
struct pglist_data;
* Re: [PATCH RFC] higher order allocs 2.4.10-pre9-recycle-R1
2001-09-15 22:53 [PATCH RFC] higher order allocs 2.4.10-pre9-recycle-R1 Roger Larsson
@ 2001-09-16 16:28 ` Rik van Riel
0 siblings, 0 replies; 2+ messages in thread
From: Rik van Riel @ 2001-09-16 16:28 UTC (permalink / raw)
To: Roger Larsson; +Cc: Stephan von Krawczynski, linux-mm
[-- Attachment #1: Type: TEXT/PLAIN, Size: 720 bytes --]
On Sun, 16 Sep 2001, Roger Larsson wrote:
> I have made some test runs - results are close to plain 2.4.10-pre9,
> actually slightly better for all but two of my test cases.
> A diff of two files bigger than RAM got half the throughput - why?
> mmap002 (an attempt to use all memory) took more than three times as long -
> less memory to use at once, OK, and not necessarily a bad thing.
These "test runs" are completely unrelated to memory fragmentation.
I don't know what you're thinking, but you could at least test the
thing you're trying to fix ...
regards,
Rik
--
IA64: a worthy successor to i860.
http://www.surriel.com/ http://distro.conectiva.com/
Send all your spam to aardvark@nl.linux.org (spam digging piggy)
[-- Attachment #2: Type: TEXT/X-DIFF, Size: 2232 bytes --]