* [PATCH/RFC] 2.4.0-test10-pre3 vmfix?
@ 2000-10-17 1:11 Roger Larsson
2000-10-17 12:26 ` VM magic numbers Eric Lowe
0 siblings, 1 reply; 3+ messages in thread
From: Roger Larsson @ 2000-10-17 1:11 UTC (permalink / raw)
To: linux-mm, Rik van Riel
[-- Attachment #1: Type: text/plain, Size: 1858 bytes --]
Hi,
Attached is a patch that makes the change quoted below, plus some
questions for Riel and the addition of inactive_target to the
SysRq-M output.
Back to the quoted part. It is from __alloc_pages_limit. With the
original code, won't the test always fail until
free_pages == pages_min + 8, so that we only allocate from free
pages - no reclaims? That might be OK, but once that point is
reached we only ever reclaim_page() and the different limits have
little effect.
The patch below gives more interesting allocation behavior. When
water_mark is PAGES_HIGH, we first allocate from the free pages
until we reach pages_high; after that we reclaim directly from
inactive_clean if direct_reclaim is allowed, else we continue to
use free pages. If the freeable pages drop below pages_high, we
retry with a new limit...
It gives performance comparable to plain test10, but with more
pages free. The limits can be trimmed down...
Note: you could/should also remove the first loop in
__alloc_pages. I have not tried that, but it should really be done
together with this patch - the problem is where to start
kreclaimd...
/RogerL
--- linux/mm/page_alloc.c.orig Mon Oct 16 23:54:03 2000
+++ linux/mm/page_alloc.c Tue Oct 17 01:16:13 2000
@@ -264,7 +264,8 @@ static struct page * __alloc_pages_limit
if (z->free_pages + z->inactive_clean_pages >= water_mark) {
struct page *page = NULL;
/* If possible, reclaim a page directly. */
- if (direct_reclaim && z->free_pages < z->pages_min + 8)
+ /* Riel: the magical "+ 8" please explain */
+ if (direct_reclaim && z->free_pages < water_mark + 8)
page = reclaim_page(z);
/* If that fails, fall back to rmqueue. */
if (!page)
--
Home page:
http://www.norran.net/nra02596/
[-- Attachment #2: patch-2.4.0-test10-pre3-vmfix.rl --]
[-- Type: text/plain, Size: 1827 bytes --]
--- linux/mm/page_alloc.c.orig Mon Oct 16 23:54:03 2000
+++ linux/mm/page_alloc.c Tue Oct 17 01:16:13 2000
@@ -264,7 +264,8 @@ static struct page * __alloc_pages_limit
if (z->free_pages + z->inactive_clean_pages >= water_mark) {
struct page *page = NULL;
/* If possible, reclaim a page directly. */
- if (direct_reclaim && z->free_pages < z->pages_min + 8)
+ /* Riel: the magical "+ 8" please explain */
+ if (direct_reclaim && z->free_pages < water_mark + 8)
page = reclaim_page(z);
/* If that fails, fall back to rmqueue. */
if (!page)
@@ -340,6 +341,8 @@ try_again:
if (!z->size)
BUG();
+ /* Riel: what about using z->pages_min instead of low when
+ * !direct_reclaim or are they too common? */
if (z->free_pages >= z->pages_low) {
page = rmqueue(z, order);
if (page)
@@ -382,7 +385,7 @@ try_again:
* resolve this situation before memory gets tight.
*
* We also yield the CPU, because that:
- * - gives kswapd a chance to do something
+ * - gives kswapd/kreclaimd/bdflush a chance to do something
* - slows down allocations, in particular the
* allocations from the fast allocator that's
* causing the problems ...
@@ -666,14 +669,15 @@ void show_free_areas_core(pg_data_t *pgd
nr_free_pages() << (PAGE_SHIFT-10),
nr_free_highpages() << (PAGE_SHIFT-10));
- printk("( Active: %d, inactive_dirty: %d, inactive_clean: %d, free: %d (%d %d %d) )\n",
+ printk("( Active: %d, inactive_dirty: %d, inactive_clean: %d, free: %d (%d %d %d) inactive_target: %d)\n",
nr_active_pages,
nr_inactive_dirty_pages,
nr_inactive_clean_pages(),
nr_free_pages(),
freepages.min,
freepages.low,
- freepages.high);
+ freepages.high,
+ inactive_target);
for (type = 0; type < MAX_NR_ZONES; type++) {
struct list_head *head, *curr;
* VM magic numbers
2000-10-17 1:11 [PATCH/RFC] 2.4.0-test10-pre3 vmfix? Roger Larsson
@ 2000-10-17 12:26 ` Eric Lowe
2000-10-17 16:37 ` afei
0 siblings, 1 reply; 3+ messages in thread
From: Eric Lowe @ 2000-10-17 12:26 UTC (permalink / raw)
To: Roger Larsson; +Cc: linux-mm, Rik van Riel
Hi,
I'm interested in an explanation of the magic numbers in the code
as well. According to my notes, there are magic numbers in the
following places (all of which probably need to be tuned,
justified, or replaced with something better):
(note: my line numbers are against test9)
zone_balance_ratio -- I understand wanting to keep more DMA
pages around in case we need them, but the choice of
numbers seems quite arbitrary. Perhaps the demand per
zone for free vs inactive clean pages should determine
where this number goes over time?
ln 455 in page_alloc.c: when memory allocation fails
we do a memory_pressure++, effectively a magic number
of 1. Since this decays exponentially I would think
a failed allocation may want to kick things a little
harder?
ln 274 in page_alloc.c: pages_min + 8? I think I see what's going
on here: we want to make sure we keep 8 pages free for recursive
allocations.. But this doesn't guarantee that. Besides, we really
don't care how many free pages there are until the inactive_clean
list is empty, right? That's when we get into the danger of
deadlock..
ln 323 page_alloc.c: inactive_target / 3, was /2
in earlier rounds.. I think we're trying not to launder
too many pages at once here?
.. and inactive_target is a magic number itself, really.
By my calculations it's 1/64 of memory_pressure or 1/4
of physical memory, whichever is smaller. I know we do
this so we don't start laundering too many pages at once
when load increases, it smooths the curve out. Some
work probably needs to be done to tell if it's really
effective at that or not.. (if the idea was borrowed
from FreeBSD's VM design, how did Matt test that? and
what's the effect on streaming I/O performance under
increasing memory_pressure?)
Rik, can you bring out your flashlight and shed some
light on this? :)
--
Eric Lowe
Software Engineer, Systran Corporation
elowe@systran.com
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/