* Q: PAGE_CACHE_SIZE?
From: Eric W. Biederman @ 1999-05-18 14:03 UTC
To: linux-kernel; +Cc: linux-mm
Whose idea was it to start the work to make the granularity of the
page cache larger?
From what I can tell:
(a) It can save on finding multiple pages.
(b) It allows greater internal fragmentation of memory.
(c) It isn't needed if you just need a large chunk of the page
cache at a time (it isn't hard to tie 2 or more pages
together if you need to).
This is something I'm stumbling over while porting patches for large
files in the page cache from 2.2.5 to 2.3.3.
I guess if it's worth it I would like to talk with whoever is
responsible so we can coordinate our efforts.
Otherwise I would like this code dropped.
Non-page-cache-aligned mappings sound great until you
(a) squeeze the extra bits out of the vm_offset and make it an index
into the page cache, and
(b) realize you need more bits to say how far you are into a page.
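For concreteness, here is a standalone sketch of the index/offset
split Eric describes, assuming a 4k page cache granularity; the
macro names mirror the PAGE_CACHE_SIZE convention under discussion,
but the snippet itself is purely illustrative:

    /* Squeezing a byte offset down to a page cache index throws away
     * the low PAGE_CACHE_SHIFT bits, so addressing within a page needs
     * a separate sub-page offset -- points (a) and (b) above. */
    #include <stdio.h>

    #define PAGE_CACHE_SHIFT 12                     /* assume 4k granularity */
    #define PAGE_CACHE_SIZE  (1UL << PAGE_CACHE_SHIFT)

    int main(void)
    {
            unsigned long offset = 0x12345;         /* arbitrary file offset */
            unsigned long index  = offset >> PAGE_CACHE_SHIFT;
            unsigned long within = offset & (PAGE_CACHE_SIZE - 1);

            printf("offset 0x%lx -> cache index %lu, %lu bytes in\n",
                   offset, index, within);
            return 0;
    }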
Eric
* Re: Q: PAGE_CACHE_SIZE?
From: Andi Kleen @ 1999-05-18 15:04 UTC
To: Eric W. Biederman; +Cc: linux-kernel, linux-mm
On Tue, May 18, 1999 at 04:03:57PM +0200, Eric W. Biederman wrote:
> Who's idea was it start the work to make the granularity of the page
> cache larger?
I guess the main motivation comes from the ARM port, where some versions
have PAGE_SIZE=32k.
-Andi
* Re: Q: PAGE_CACHE_SIZE?
From: Chris Wedgwood @ 1999-05-19 23:29 UTC
To: Andi Kleen; +Cc: Eric W. Biederman, linux-kernel, linux-mm
> I guess the main motivation comes from the ARM port, where some
> versions have PAGE_SIZE=32k.
I've often wondered if it wouldn't be a good idea to do this on Intel
boxes too, especially as many machines routinely have 512MB of RAM,
so we could probably get away with merging 4 pages into one and
having pseudo-16k pages.
Presumably this might/will break existing stuff... but I think most
of those breakages could be worked around.
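If the page cache granularity were fully decoupled from the MMU page
size, the core of what Chris suggests could in principle be a tiny
redefinition. A hypothetical sketch, not what any 2.3.x tree actually
does:

    /* Hypothetical: merge 4 hardware pages into one 16k cache page on
     * x86 by widening the page cache shift.  Illustrative only. */
    #define PAGE_CACHE_SHIFT (PAGE_SHIFT + 2)  /* 4k pages -> 16k cache pages */
    #define PAGE_CACHE_SIZE  (1UL << PAGE_CACHE_SHIFT)
    #define PAGE_CACHE_MASK  (~(PAGE_CACHE_SIZE - 1))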
-Chris
* Re: Q: PAGE_CACHE_SIZE?
From: Andrea Arcangeli @ 1999-05-20 17:12 UTC
To: Andi Kleen; +Cc: Eric W. Biederman, linux-kernel, linux-mm
On Tue, 18 May 1999, Andi Kleen wrote:
>On Tue, May 18, 1999 at 04:03:57PM +0200, Eric W. Biederman wrote:
>> Who's idea was it start the work to make the granularity of the page
>> cache larger?
>
>I guess the main motivation comes from the ARM port, where some versions
>have PAGE_SIZE=32k.
Since they already have a PAGE_SIZE that is too large, they shouldn't
be interested in enlarging the page cache size.
Andrea Arcangeli
* Re: Q: PAGE_CACHE_SIZE?
From: Alan Cox @ 1999-05-25 16:29 UTC
To: Andi Kleen; +Cc: ebiederm+eric, linux-kernel, linux-mm
> > Who's idea was it start the work to make the granularity of the page
> > cache larger?
>
> I guess the main motivation comes from the ARM port, where some versions
> have PAGE_SIZE=32k.
For large amounts of memory on fast boxes you want a higher page size. Some
vendors even pick page size based on memory size at boot up.
Alan
* Re: Q: PAGE_CACHE_SIZE?
From: Rik van Riel @ 1999-05-25 20:16 UTC
To: Alan Cox; +Cc: Andi Kleen, ebiederm+eric, linux-kernel, linux-mm
On Tue, 25 May 1999, Alan Cox wrote:
> > > Who's idea was it start the work to make the granularity of the page
> > > cache larger?
> >
> > I guess the main motivation comes from the ARM port, where some versions
> > have PAGE_SIZE=32k.
>
> For large amounts of memory on fast boxes you want a higher page
> size. Some vendors even pick page size based on memory size at
> boot up.
This sounds suspiciously like the 'larger-blocks-for-larger-FSes'
tactic other systems have been using to hide the bad scalability
of their algorithms.
A larger page size is no compensation for the lack of a decent
read-{ahead,back,anywhere} I/O clustering algorithm in the OS.
I believe we should take the more appropriate path and build
a proper 'smart' algorithm. Once we're optimizing for I/O
minimization, CPU is relatively cheap anyway...
Rik -- Open Source: you deserve to be in control of your data.
+-------------------------------------------------------------------+
| Le Reseau netwerksystemen BV: http://www.reseau.nl/ |
| Linux Memory Management site: http://www.linux.eu.org/Linux-MM/ |
| Nederlandse Linux documentatie: http://www.nl.linux.org/ |
+-------------------------------------------------------------------+
* Re: Q: PAGE_CACHE_SIZE?
From: Matti Aarnio @ 1999-05-25 22:17 UTC
To: Rik van Riel; +Cc: alan, ak, ebiederm+eric, linux-kernel, linux-mm
Rik van Riel <riel@nl.linux.org> wrote:
...
> This sounds suspiciously like the 'larger-blocks-for-larger-FSes'
> tactic other systems have been using to hide the bad scalability
> of their algorithms.
... (read-ahead comments cut away) ...
I have the following table of EXT2 (and UFS, and SysVfs, and..)
maximum supported file sizes. These limits stem from block
addressability limitations in the classical triple-indirection schemes:
Block Size      File Size
   512             2 GB + epsilon
    1k            16 GB + epsilon
    2k           128 GB + epsilon
    4k          1024 GB + epsilon
    8k          8192 GB + epsilon   (not without PAGE_SIZE >= 8 kB)
And of course no single-partition filesystem in Linux (all of the
'local devices' filesystems right now) can exceed 4G blocks of
512 bytes, a limit which sits at the block device layer.
(This gives a maximum physical filesystem size of 2 TB for EXT2.)
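A rough model of both limits can be computed directly: with b-byte
blocks and 4-byte block pointers, a block holds p = b/4 pointers, the
classical layout reaches 12 + p + p^2 + p^3 blocks, and the result is
capped by the 4G x 512-byte block device limit. The table's own rows
fold in further per-filesystem details, so the figures below differ
from it in places; this only shows the shape of the argument:

    /* Raw triple-indirection addressability bound, capped by the
     * block-device layer's 4G x 512-byte (2 TB) limit. */
    #include <stdio.h>

    int main(void)
    {
            unsigned long long dev_cap = (1ULL << 32) * 512; /* 2 TB */
            unsigned long long bs;

            for (bs = 512; bs <= 8192; bs *= 2) {
                    unsigned long long p = bs / 4;  /* pointers per block */
                    unsigned long long blocks = 12 + p + p*p + p*p*p;
                    unsigned long long bytes = blocks * bs;

                    if (bytes > dev_cap)
                            bytes = dev_cap;
                    printf("%4llu-byte blocks: max file ~%6llu GB\n",
                           bs, bytes >> 30);
            }
            return 0;
    }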
So, in my opinion, any triply-indirected filesystem is at the end
of its life when it comes to truly massive datasets.
The EXT2FS family will soon get new ways to extend its life by having
an alternate block addressing structure to replace the classical
triple-indirection scheme it now uses. (Ted Ts'o is working on it.)
> Rik -- Open Source: you deserve to be in control of your data.
/Matti Aarnio <matti.aarnio@sonera.fi>
* Re: Q: PAGE_CACHE_SIZE?
From: Alan Cox @ 1999-05-27 22:06 UTC
To: Rik van Riel; +Cc: alan, ak, ebiederm+eric, linux-kernel, linux-mm
> A larger page size is no compensation for the lack of a decent
> read-{ahead,back,anywhere} I/O clustering algorithm in the OS.
It isn't compensating for that. If you have 4Gig of memory and a
high-performance I/O controller, the constant per-page cost of VM
management begins to dominate the equation. It's also a win for other
CPU-related reasons (reduced TLB misses and the like), and with 4Gig
of RAM the usual argument against a larger page size isn't a problem.
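The arithmetic behind that per-page constant cost is easy to see: the
VM's bookkeeping scales with the number of pages, not the number of
bytes, so 4Gig at 4k means about a million pages to track. A trivial
sketch:

    /* Per-page bookkeeping scales with page count, not RAM size:
     * 4 GB at 4k is ~1M pages; at 32k it is an eighth of that. */
    #include <stdio.h>

    int main(void)
    {
            unsigned long long ram = 4ULL << 30;    /* 4 GB of RAM */
            unsigned long sizes[] = { 4096, 16384, 32768 };
            int i;

            for (i = 0; i < 3; i++)
                    printf("%6lu-byte pages: %llu pages to manage\n",
                           sizes[i], ram / sizes[i]);
            return 0;
    }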
Alan
* Re: Q: PAGE_CACHE_SIZE?
From: Stephen C. Tweedie @ 1999-05-28 20:46 UTC
To: Alan Cox; +Cc: Rik van Riel, ak, ebiederm+eric, linux-kernel, linux-mm
Hi,
On Thu, 27 May 1999 23:06:48 +0100 (BST), Alan Cox
<alan@lxorguk.ukuu.org.uk> said:
>> A larger page size is no compensation for the lack of a decent
>> read-{ahead,back,anywhere} I/O clustering algorithm in the OS.
> It isn't compensating for that. If you have 4Gig of memory and a
> high-performance I/O controller, the constant per-page cost of VM
> management begins to dominate the equation. It's also a win for other
> CPU-related reasons (reduced TLB misses and the like), and with 4Gig
> of RAM the usual argument against a larger page size isn't a problem.
That's still only half of the issue. Remember, there isn't any one
block size in the kernel.
We can happily use 1k or 2k blocks in the buffer cache. We use 8k
chunks for stacks. Nothing is forcing us to use the hardware page size
for all of our operations.
In short, I think the real answer as far as the VM is concerned is
indeed to use clustering, not just for IO (we already do that for
mmap paging and for sequential reads), but for pageout too. I really
believe that the best unit for pageout is whatever unit the pagein
used in the first place.
This has a lot of really nice properties. If we record sequential
accesses when setting up data in the first place, then we can
automatically optimise for that when doing the pageout again. For swap,
it reduces fragmentation: we can allocate in multi-page chunks and keep
that allocation persistent.
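A hypothetical shape for that bookkeeping, with invented names
(nothing like this exists in 2.3.3): the order of the original pagein
cluster travels with the page and is reused at pageout.

    /* Hypothetical sketch, all names invented: page out the same
     * naturally aligned unit a page was originally read in with. */
    static unsigned long cluster_start(unsigned long index, unsigned int order)
    {
            return index & ~((1UL << order) - 1);   /* align down to cluster */
    }

    static unsigned long cluster_pages(unsigned int order)
    {
            return 1UL << order;            /* pages written per pageout */
    }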
There are very few places where doing such clustering falls short of
what we'd get by increasing the page size. COW is one: retaining
per-page COW granularity is an easy way to fragment swap, but increasing
the COW chunk size changes the semantics unless we actually export a
larger pagesize to user space (because we end up destroying the VM
backing store sharing for pages which haven't actually been touched by
the user).
Finally, there's one other thing worth considering: if we use a larger
chunk size for dividing up memory (say, set the buddy heap up in a basic
unit of 32k or more), then the slab allocator is actually a very good
way of getting page allocations out of that.
If we did in fact use the 4k minipage for all kernel get_free_page()
allocations as usual, but used the larger 32k buddy heap pages for all
VM allocations, then 8K kernel allocations (eg. stack allocations and
large NFS packets) become trivial to deal with.
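A userspace toy of that carving, assuming nothing about the real slab
internals: a 32k arena hands out 4k minipages from a one-byte bitmap,
so the underlying allocator only ever deals in 32k units.

    /* Toy illustration: carve 4k "minipages" out of a 32k arena with
     * a bitmap free list.  Not how the kernel slab actually works. */
    #include <stdio.h>
    #include <stdlib.h>

    #define ARENA_SIZE (32 * 1024)
    #define MINI_SIZE  (4 * 1024)
    #define MINIS      (ARENA_SIZE / MINI_SIZE)     /* 8 per arena */

    struct arena {
            char *base;                     /* one 32k "buddy page" */
            unsigned char free_map;         /* one bit per minipage */
    };

    static void arena_init(struct arena *a)
    {
            a->base = malloc(ARENA_SIZE);
            a->free_map = (1u << MINIS) - 1;        /* all free */
    }

    static void *mini_alloc(struct arena *a)
    {
            int i;

            for (i = 0; i < MINIS; i++) {
                    if (a->free_map & (1u << i)) {
                            a->free_map &= ~(1u << i);
                            return a->base + i * MINI_SIZE;
                    }
            }
            return NULL;                    /* arena exhausted */
    }

    static void mini_free(struct arena *a, void *p)
    {
            int i = (int)(((char *)p - a->base) / MINI_SIZE);

            a->free_map |= 1u << i;
    }

    int main(void)
    {
            struct arena a;
            void *p, *q;

            arena_init(&a);
            p = mini_alloc(&a);
            q = mini_alloc(&a);
            printf("minipages at %p and %p\n", p, q);
            mini_free(&a, p);
            mini_free(&a, q);
            free(a.base);
            return 0;
    }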
The biggest problem we have had with these multi-page allocations up to
now is fragmentation in the VM. If we populate the _entire_ VM in
multiples of 8k or more then we can never see such fragmentation at all.
8k might actually be a reasonable pagesize even on low memory
machines: we found in 2.2 that the increased size of the kernel was
compensated for by more efficient swapping, so that things still went
faster in low memory than under 2.0, and large pages may well have
the same tradeoff.
--Stephen
* Re: Q: PAGE_CACHE_SIZE?
From: Rik van Riel @ 1999-05-28 21:33 UTC
To: Stephen C. Tweedie; +Cc: Alan Cox, ak, ebiederm+eric, linux-kernel, linux-mm
On Fri, 28 May 1999, Stephen C. Tweedie wrote:
> In short, I think the real answer as far as the VM is concerned is
> indeed to use clustering, not just for IO (we already do that for
> mmap paging and for sequential reads), but for pageout too. I
> really believe that the best unit for pageout is whatever unit
> the pagein used in the first place.
Proper I/O and swap clustering isn't difficult. Just take
a look at the FreeBSD sources to see how simple it is (and
how much simpler it can be because the disk_seek:disk_transfer
ratio has changed).
> This has a lot of really nice properties. If we record sequential
> accesses when setting up data in the first place, then we can
> automatically optimise for that when doing the pageout again. For swap,
> it reduces fragmentation: we can allocate in multi-page chunks and keep
> that allocation persistent.
Since we keep pages in the page cache after swapping them out,
we can implement this optimization very cheaply.
> There are very few places where doing such clustering falls short of
> what we'd get by increasing the page size. COW is one: retaining
> per-page COW granularity is an easy way to fragment swap, but increasing
> the COW chunk size changes the semantics unless we actually export a
> larger pagesize to user space
If we're still COWing a task, it's probably sharing memory with
other _resident_ tasks as well so the I/O point becomes moot in
most cases (hopefully -- correct me if I'm wrong!).
> Finally, there's one other thing worth considering: if we use a larger
> chunk size for dividing up memory (say, set the buddy heap up in a basic
> unit of 32k or more), then the slab allocator is actually a very good
> way of getting page allocations out of that.
>
> If we did in fact use the 4k minipage for all kernel get_free_page()
> allocations as usual, but used the larger 32k buddy heap pages for all
> VM allocations, then 8K kernel allocations (eg. stack allocations and
> large NFS packets) become trivial to deal with.
It will also nicely solve the page colouring problem, giving
a 10 to 20% speed increase on at least my Intel Neptune chip
set. And similar increases for the number crunching folks...
It seems like an excellent idea to me, although I really
would like to keep the 4kB space efficiency for user VM.
(could be useful for low-memory machines like our company
web server)
regards,
Rik -- Open Source: you deserve to be in control of your data.
+-------------------------------------------------------------------+
| Le Reseau netwerksystemen BV: http://www.reseau.nl/ |
| Linux Memory Management site: http://www.linux.eu.org/Linux-MM/ |
| Nederlandse Linux documentatie: http://www.nl.linux.org/ |
+-------------------------------------------------------------------+
* Re: Q: PAGE_CACHE_SIZE?
From: Stephen C. Tweedie @ 1999-05-29 1:59 UTC
To: Rik van Riel
Cc: Stephen C. Tweedie, Alan Cox, ak, ebiederm+eric, linux-kernel, linux-mm
Hi,
On Fri, 28 May 1999 23:33:33 +0200 (CEST), Rik van Riel
<riel@nl.linux.org> said:
>> This has a lot of really nice properties. If we record sequential
>> accesses when setting up data in the first place, then we can
>> automatically optimise for that when doing the pageout again. For swap,
>> it reduces fragmentation: we can allocate in multi-page chunks and keep
>> that allocation persistent.
> Since we keep pages in the page cache after swapping them out,
> we can implement this optimization very cheaply.
It should be cheap, yes, but it will require a fundamental change in the
VM: currently, all swap cache is readonly. No exceptions. To keep the
allocation persistent, even over write()s to otherwise unshared pages
(and we need to do that to sustain good performance), we need to allow
dirty pages in the swap cache. The current PG_Dirty work impacts on this.
--Stephen
* Re: Q: PAGE_CACHE_SIZE?
From: Ralf Baechle @ 1999-05-29 15:07 UTC
To: Stephen C. Tweedie
Cc: Alan Cox, Rik van Riel, ak, ebiederm+eric, linux-kernel, linux-mm
On Fri, May 28, 1999 at 09:46:01PM +0100, Stephen C. Tweedie wrote:
> If we did in fact use the 4k minipage for all kernel get_free_page()
> allocations as usual, but used the larger 32k buddy heap pages for all
> VM allocations, then 8K kernel allocations (eg. stack allocations and
> large NFS packets) become trivial to deal with.
>
> The biggest problem we have had with these multi-page allocations up to
> now is fragmentation in the VM. If we populate the _entire_ VM in
> multiples of 8k or more then we can never see such fragmentation at all.
> 8k might actually be a reasonable pagesize even on low memory machines:
> we found in 2.2 that the increased size of the kernel was compensated
> for by more efficient swapping so that things still went faster in low
> memory than under 2.2, and large pages may well have the same tradeoff.
I'm working on Linux/MIPS64 and I intend to clean up the code so that
the kernel can be built for different page sizes. I intend to benchmark
things out of curiosity. Maybe it's a viable system tuning option, even
if a compile-time one.
Ralf
* Re: Q: PAGE_CACHE_SIZE?
From: Andrea Arcangeli @ 1999-05-30 23:12 UTC
To: Stephen C. Tweedie
Cc: Rik van Riel, Alan Cox, ak, ebiederm+eric, linux-kernel, linux-mm
On Sat, 29 May 1999, Stephen C. Tweedie wrote:
>It should be cheap, yes, but it will require a fundamental change in the
>VM: currently, all swap cache is readonly. No exceptions. To keep the
>allocation persistent, even over write()s to otherwise unshared pages
>(and we need to do that to sustain good performance), we need to allow
>dirty pages in the swap cache. The current PG_Dirty work impacts on this.
I am simply rewriting swapped-in pages to their previous location on
swap to avoid swap fragmentation. There is no need to have dirty pages
in the swap cache to handle that. We already have the information
cached in the page-map->offset field; we only need to know whether it
makes sense to use it or not. To handle that I simply added a
PG_swap_entry bitflag, set at swapin time and cleared after a swapout
to the old entry or at free_page_and_swap_cache() time. The thing runs
like a charm (swapin performance definitely improves a lot).
ftp://e-mind.com/pub/andrea/kernel/2.3.3_andrea9.bz2
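A heavily simplified sketch of that logic; the fragment below uses
stand-ins for the surrounding 2.3 VM, and the real code is in the
patch at the URL above:

    /* PG_swap_entry (Andrea's flag): while set, the page's old swap
     * location is still cached in its offset field. */

    /* swapin path: remember that the entry just read from may be
     * reusable at the next swapout. */
    set_bit(PG_swap_entry, &page->flags);

    /* swapout path: prefer the remembered slot to avoid fragmenting
     * swap; otherwise fall back to allocating a fresh entry. */
    if (test_and_clear_bit(PG_swap_entry, &page->flags))
            entry = page->offset;           /* rewrite to old location */
    else
            entry = get_swap_page();        /* allocate a new location */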
Andrea Arcangeli
* Re: Q: PAGE_CACHE_SIZE?
From: Stephen C. Tweedie @ 1999-06-01 0:01 UTC
To: Andrea Arcangeli
Cc: Stephen C. Tweedie, Rik van Riel, Alan Cox, ak, ebiederm+eric,
linux-kernel, linux-mm
Hi,
On Mon, 31 May 1999 01:12:43 +0200 (CEST), Andrea Arcangeli
<andrea@suse.de> said:
> I am simply rewriting swapped-in pages to their previous location on
> swap to avoid swap fragmentation. There is no need to have dirty pages
> in the swap cache to handle that. We already have the information
> cached in the page-map->offset field; we only need to know whether it
> makes sense to use it or not. To handle that I simply added a
> PG_swap_entry bitflag, set at swapin time and cleared after a swapout
> to the old entry or at free_page_and_swap_cache() time. The thing runs
> like a charm (swapin performance definitely improves a lot).
Cute! When, oh when, are you going to start releasing these things as
separate patches which I can look at? This is one simple optimisation
that I'd really like to see in 2.3 asap.
--Stephen
* Re: Q: PAGE_CACHE_SIZE?
From: Andrea Arcangeli @ 1999-06-01 14:23 UTC
To: Stephen C. Tweedie
Cc: Rik van Riel, Alan Cox, ak, ebiederm+eric, linux-kernel,
linux-mm, Linus Torvalds, MOLNAR Ingo
On Tue, 1 Jun 1999, Stephen C. Tweedie wrote:
>Cute! When, oh when, are you going to start releasing these things as
Happy to hear that :).
>separate patches which I can look at? This is one simple optimisation
ASAP. A few seconds ago I started playing with the program that locks
up the machine by recursing on the stack (which I have been able to
reproduce here too under some conditions). Once I understand what is
causing the lockup, I can theoretically start the merging stage.
If you are interested I can CC the separate patches to you as well.
>that I'd really like to see in 2.3 asap.
Linus asked me to wait for Ingo's page cache code to be included in
2.3.x before I start sending him patches. I am a bit worried about
extracting separate patches _now_, because if Ingo's page cache code
breaks my patches, I'll have to generate new patches tomorrow... So I
would like to have hints from Ingo. (Note: I am also fine waiting a
bit longer; just know that for my part, I would already have done the
merging.)
Andrea Arcangeli