Subject: pagecache in highmem with 2.3.27
From: Ingo Molnar
Date: 1999-11-12 10:24 UTC
To: MM mailing list
[-- Attachment #1: Type: TEXT/PLAIN, Size: 2816 bytes --]
With the 2.3.27 kernel we have the pagecache in high memory. There is also
a new, schedulable and caching kmap implementation. [Note that the attached
small patch has to be applied as well to get it stable.]
Without this feature we cannot take more than 25 Netbench users in 'dbench'
simulations without running out of low memory. With the patch I'm running a
250-user simulation just fine, with only 30% of low memory used. The speed
of the 250-user run is identical (~230 MB/sec) to that of the 25-user run
with CONFIG_HIGHMEM turned off.
Here is the changelog; comments/suggestions welcome:
- The kmap interface has been redesigned along the lines of Linus' idea.
The two main functions are kmap(page) and kunmap(page); see the usage
sketch after the changelog. There is a current limit of 2MB of total maps,
but I never actually ran into it. Basically all code uses this new
kmap/kunmap variant, and I've seen no performance degradation. (In fact,
during a dbench run we have a cache hit ratio of 50%, and only about
100 'total flushes' in a 300-user dbench run.)
- There is a limited additional API: kmap_atomic()/kunmap_atomic(). Its use
is discouraged (comments warn people about this); the only user right now
is the bounce-buffer code, which has to copy to high memory from IRQ
contexts.
- exec.c now uses high memory to store argument pages.
- Bounce-buffer support in highmem.c. I kept it simple and stupid, but
it's fully functional and should behave well in low-memory and
allocation-deadlock situations. The only impact on the generic code is a
single #ifdef in ll_rw_blk.c.
- filemap.c: pagecache in high memory.
- This also prompted a cleanup of the page allocation APIs in
page_alloc.c. Fortunately, all functions that return 'struct page *' are
relatively young, so we could change them without impacting third-party
code this shortly before 2.4. The new page allocation interface is, I
believe, now pretty clean and intuitive (see the allocation sketch after
the changelog):
  __get_free_pages() & friends do what they always did.
  alloc_page(flag) and alloc_pages(flag, order) return 'struct page *'.
  The weird get_highmem_page-type confusing interfaces are now gone.
  All highmem-related code now uses the alloc_page() variants.
  alloc_page() can allocate non-highmem pages as well, and this is used in
  a couple of places.
- Cleaned up page_alloc.c a bit more and removed an oversight
(page_alloc_lock).
- arch/i386/mm/init.c needed some changes to get the kmap pagetable
right.
- Fixes to a few unrelated fs and architecture-specific places that
either use kmap or the 'struct page *' allocators, so this patch should
cause no breakage.
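
To make the new mapping interface concrete, here is a minimal usage sketch
(not part of the patch; the helper name is made up, and the header
locations and the cast of kmap()'s return value are my assumptions for
this tree -- the calls themselves are the kmap()/kunmap() described above):

#include <linux/mm.h>
#include <linux/highmem.h>	/* assumed location of the kmap declarations */
#include <linux/string.h>

/*
 * Sketch only: zero a page that may live in high memory.  kmap() returns
 * a kernel virtual address for the page (cached, schedulable), so this
 * must not be called from IRQ context -- that is what kmap_atomic() is for.
 */
static void zero_highmem_page(struct page *page)
{
	char *vaddr;

	vaddr = (char *) kmap(page);	/* map the page into low memory */
	memset(vaddr, 0, PAGE_SIZE);
	kunmap(page);			/* release the mapping */
}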
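
And the allocation side, as described in the page_alloc.c item above
(again only a sketch; GFP_HIGHUSER as the 'highmem allowed' mask is an
assumption of the example):

#include <linux/mm.h>
#include <linux/errno.h>

/*
 * Sketch only: the two styles of page allocation after the cleanup.
 * __get_free_page() still hands back a kernel virtual address and thus
 * low memory only; alloc_page() hands back a 'struct page *' and may
 * return a highmem page when the gfp mask allows it.
 */
static int allocation_example(void)
{
	unsigned long addr;
	struct page *page;

	addr = __get_free_page(GFP_KERNEL);	/* low memory, directly addressable */
	if (!addr)
		return -ENOMEM;

	page = alloc_page(GFP_HIGHUSER);	/* may come from the highmem zone */
	if (!page) {
		free_page(addr);
		return -ENOMEM;
	}

	/* a highmem page has to go through kmap()/kmap_atomic() to be touched */

	__free_page(page);
	free_page(addr);
	return 0;
}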
This should be the 'last' larger highmem-related patch; the Linux 64GB
feature is now pretty mature, and we can expect to scale to 32/64 GB RAM
just fine under typical server usage.
-- mingo
[-- Attachment #2: Type: TEXT/PLAIN, Size: 486 bytes --]
--- linux/mm/highmem.c.orig Fri Nov 12 00:31:13 1999
+++ linux/mm/highmem.c Fri Nov 12 00:32:30 1999
@@ -269,9 +269,9 @@
 	unsigned long vto;
 
 	p_to = to->b_page;
-	vto = kmap_atomic(p_to, KM_BOUNCE_WRITE);
+	vto = kmap_atomic(p_to, KM_BOUNCE_READ);
 	memcpy((char *)vto + bh_offset(to), from->b_data, to->b_size);
-	kunmap_atomic(vto, KM_BOUNCE_WRITE);
+	kunmap_atomic(vto, KM_BOUNCE_READ);
 }
 
 static inline void bounce_end_io (struct buffer_head *bh, int uptodate)