* RFC: Bouncebuffer fixes
From: Arjan van de Ven @ 2001-04-28 21:06 UTC
To: linux-mm; +Cc: alan
Hi,
The following patch changes the emergency bounce-buffer pool as present in
2.4.3-ac to be 1) bigger and 2) half reserved for threads with PF_MEMALLOC.
2) is needed to make sure that the VM kernel threads can actually allocate
bounce buffers when they need to free memory. The original code gave out
all emergency bounce buffers to anyone, even to reads from "random" user
threads.
--- linux/mm/highmem.c.org Fri Apr 27 21:40:49 2001
+++ linux/mm/highmem.c Fri Apr 27 21:43:41 2001
@@ -160,7 +160,7 @@
spin_unlock(&kmap_lock);
}
-#define POOL_SIZE 32
+#define POOL_SIZE 64
/*
* This lock gets no contention at all, normally.
@@ -294,7 +294,7 @@
*/
tmp = &emergency_pages;
spin_lock_irq(&emergency_lock);
- if (!list_empty(tmp)) {
+ if (!list_empty(tmp) && ((current->flags&PF_MEMALLOC)||(nr_emergency_pages>POOL_SIZE/2))) {
page = list_entry(tmp->next, struct page, list);
list_del(tmp->next);
nr_emergency_pages--;
@@ -337,7 +337,7 @@
*/
tmp = &emergency_bhs;
spin_lock_irq(&emergency_lock);
- if (!list_empty(tmp)) {
+ if (!list_empty(tmp) && ((current->flags&PF_MEMALLOC)||(nr_emergency_bhs>POOL_SIZE/2))) {
bh = list_entry(tmp->next, struct buffer_head, b_inode_buffers);
list_del(tmp->next);
nr_emergency_bhs--;
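As a side note for readers skimming the hunks above: the new allocation
condition can be condensed into a minimal userspace sketch (the names
below are illustrative stand-ins, not kernel code):

#include <stdbool.h>

#define POOL_SIZE 64

/* Illustrative stand-in for current->flags & PF_MEMALLOC. */
static bool task_is_memalloc;
static int nr_emergency_pages = POOL_SIZE;

/*
 * May this caller take an emergency page?  PF_MEMALLOC tasks (the VM
 * threads) may drain the pool completely; everyone else must leave at
 * least POOL_SIZE/2 entries behind for them.
 */
static bool may_take_emergency_page(void)
{
        return nr_emergency_pages > 0 &&
               (task_is_memalloc || nr_emergency_pages > POOL_SIZE / 2);
}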
The following patch, incremental to the previous one, removes
flush_dirty_buffers() from alloc_bounce_buffers to prevent the following
recursion:
bdflush->flush_dirty_buffers->ll_rw_block->submit_bh->generic_make_request->
__make_request->create_bounce->alloc_bounce_page->flush_dirty_buffers
It also makes sure the tq_disk queue is run every time an emergency
bounce buffer is used, instead of only when we are out of emergency
buffers. If we are using emergency bounce buffers, we should start any
pending physical I/O as soon as possible.
--- linux/mm/highmem.c.org Sat Apr 28 18:40:52 2001
+++ linux/mm/highmem.c Sat Apr 28 19:02:54 2001
@@ -285,9 +285,9 @@
* No luck. First, try to flush some low memory buffers.
* This will throttle highmem writes when low memory gets full.
*/
- flush_dirty_buffers(0, 1);
wakeup_bdflush(0);
+ run_task_queue(&tq_disk);
/*
* Try to allocate from the emergency pool.
@@ -306,7 +306,6 @@
if (!buffer_warned++)
printk(KERN_WARNING "mm: critical shortage of bounce buffers.\n");
- run_task_queue(&tq_disk);
current->policy |= SCHED_YIELD;
__set_current_state(TASK_RUNNING);
@@ -328,9 +327,9 @@
* No luck. First, try to flush some low memory buffers.
* This will throttle highmem writes when low memory gets full.
*/
- flush_dirty_buffers(0, 1);
wakeup_bdflush(0);
+ run_task_queue(&tq_disk);
/*
* Try to allocate from the emergency pool.
@@ -349,7 +348,6 @@
if (!bh_warned++)
printk(KERN_WARNING "mm: critical shortage of bounce bh's.\n");
- run_task_queue(&tq_disk);
current->policy |= SCHED_YIELD;
__set_current_state(TASK_RUNNING);
* Re: RFC: Bouncebuffer fixes
From: Andrea Arcangeli @ 2001-04-29 0:07 UTC
To: Arjan van de Ven; +Cc: linux-mm, alan, Linus Torvalds
On Sat, Apr 28, 2001 at 05:06:48PM -0400, Arjan van de Ven wrote:
> Hi,
>
> The following patch changes the emergency bounce-buffer pool as present in
> 2.4.3-ac to be 1) bigger and 2) half reserved for threads with PF_MEMALLOC.
> 2) is needed to make sure that the VM kernel threads can actually allocate
> bounce buffers when they need to free memory. The original code gave out
it is _not_ needed. If an emergency entry was used, we also have the
guarantee that it will be released soon after we unplug tq_disk.
This is the *whole* point of the emergency pool and *why* it fixes the
deadlock. So it's perfectly OK to unplug tq_disk and wait for the entry
to be released, as we do right now.
> The following patch, incremental to the previous one, removes
> flush_dirty_buffers() from alloc_bounce_buffers to prevent the following
> recursion:
> bdflush->flush_dirty_buffers->ll_rw_block->submit_bh->generic_make_request->
> __make_request->create_bounce->alloc_bounce_page->flush_dirty_buffers
Hmm, I cannot remember any flush_dirty_buffers called by alloc_bounce_page in
any patch floating around; there certainly isn't any in my tree, so the
above recursion cannot happen here.
The OOM highmem bounce-buffer deadlock fix that I recommend Linus
merge is this one:
ftp://ftp.us.kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/2.4.4aa1/00_highmem-deadlock-3
Nothing else is necessary to fix this deadlock, as far as I can tell.
Most of the credit for the fix goes to Ingo (I only audited it and
fixed a few bugs in his original patch before merging it).
I inline it in this email for Linus:
diff -urN 2.4.4/mm/highmem.c highmem-deadlock/mm/highmem.c
--- 2.4.4/mm/highmem.c Sat Apr 28 05:24:48 2001
+++ highmem-deadlock/mm/highmem.c Sat Apr 28 18:21:24 2001
@@ -159,6 +159,19 @@
spin_unlock(&kmap_lock);
}
+#define POOL_SIZE 32
+
+/*
+ * This lock gets no contention at all, normally.
+ */
+static spinlock_t emergency_lock = SPIN_LOCK_UNLOCKED;
+
+int nr_emergency_pages;
+static LIST_HEAD(emergency_pages);
+
+int nr_emergency_bhs;
+static LIST_HEAD(emergency_bhs);
+
/*
* Simple bounce buffer support for highmem pages.
* This will be moved to the block layer in 2.5.
@@ -203,17 +216,72 @@
static inline void bounce_end_io (struct buffer_head *bh, int uptodate)
{
+ struct page *page;
struct buffer_head *bh_orig = (struct buffer_head *)(bh->b_private);
+ unsigned long flags;
bh_orig->b_end_io(bh_orig, uptodate);
- __free_page(bh->b_page);
+
+ page = bh->b_page;
+
+ spin_lock_irqsave(&emergency_lock, flags);
+ if (nr_emergency_pages >= POOL_SIZE)
+ __free_page(page);
+ else {
+ /*
+ * We are abusing page->list to manage
+ * the highmem emergency pool:
+ */
+ list_add(&page->list, &emergency_pages);
+ nr_emergency_pages++;
+ }
+
+ if (nr_emergency_bhs >= POOL_SIZE) {
#ifdef HIGHMEM_DEBUG
- /* Don't clobber the constructed slab cache */
- init_waitqueue_head(&bh->b_wait);
+ /* Don't clobber the constructed slab cache */
+ init_waitqueue_head(&bh->b_wait);
#endif
- kmem_cache_free(bh_cachep, bh);
+ kmem_cache_free(bh_cachep, bh);
+ } else {
+ /*
+ * Ditto in the bh case, here we abuse b_inode_buffers:
+ */
+ list_add(&bh->b_inode_buffers, &emergency_bhs);
+ nr_emergency_bhs++;
+ }
+ spin_unlock_irqrestore(&emergency_lock, flags);
}
+static __init int init_emergency_pool(void)
+{
+ spin_lock_irq(&emergency_lock);
+ while (nr_emergency_pages < POOL_SIZE) {
+ struct page * page = alloc_page(GFP_ATOMIC);
+ if (!page) {
+ printk("couldn't refill highmem emergency pages");
+ break;
+ }
+ list_add(&page->list, &emergency_pages);
+ nr_emergency_pages++;
+ }
+ while (nr_emergency_bhs < POOL_SIZE) {
+ struct buffer_head * bh = kmem_cache_alloc(bh_cachep, SLAB_ATOMIC);
+ if (!bh) {
+ printk("couldn't refill highmem emergency bhs");
+ break;
+ }
+ list_add(&bh->b_inode_buffers, &emergency_bhs);
+ nr_emergency_bhs++;
+ }
+ spin_unlock_irq(&emergency_lock);
+ printk("allocated %d pages and %d bhs reserved for the highmem bounces\n",
+ nr_emergency_pages, nr_emergency_bhs);
+
+ return 0;
+}
+
+__initcall(init_emergency_pool);
+
static void bounce_end_io_write (struct buffer_head *bh, int uptodate)
{
bounce_end_io(bh, uptodate);
@@ -228,6 +296,82 @@
bounce_end_io(bh, uptodate);
}
+struct page *alloc_bounce_page (void)
+{
+ struct list_head *tmp;
+ struct page *page;
+
+repeat_alloc:
+ page = alloc_page(GFP_BUFFER);
+ if (page)
+ return page;
+ /*
+ * No luck. First, kick the VM so it doesnt idle around while
+ * we are using up our emergency rations.
+ */
+ wakeup_bdflush(0);
+
+ /*
+ * Try to allocate from the emergency pool.
+ */
+ tmp = &emergency_pages;
+ spin_lock_irq(&emergency_lock);
+ if (!list_empty(tmp)) {
+ page = list_entry(tmp->next, struct page, list);
+ list_del(tmp->next);
+ nr_emergency_pages--;
+ }
+ spin_unlock_irq(&emergency_lock);
+ if (page)
+ return page;
+
+ /* we need to wait I/O completion */
+ run_task_queue(&tq_disk);
+
+ current->policy |= SCHED_YIELD;
+ __set_current_state(TASK_RUNNING);
+ schedule();
+ goto repeat_alloc;
+}
+
+struct buffer_head *alloc_bounce_bh (void)
+{
+ struct list_head *tmp;
+ struct buffer_head *bh;
+
+repeat_alloc:
+ bh = kmem_cache_alloc(bh_cachep, SLAB_BUFFER);
+ if (bh)
+ return bh;
+ /*
+ * No luck. First, kick the VM so it doesnt idle around while
+ * we are using up our emergency rations.
+ */
+ wakeup_bdflush(0);
+
+ /*
+ * Try to allocate from the emergency pool.
+ */
+ tmp = &emergency_bhs;
+ spin_lock_irq(&emergency_lock);
+ if (!list_empty(tmp)) {
+ bh = list_entry(tmp->next, struct buffer_head, b_inode_buffers);
+ list_del(tmp->next);
+ nr_emergency_bhs--;
+ }
+ spin_unlock_irq(&emergency_lock);
+ if (bh)
+ return bh;
+
+ /* we need to wait I/O completion */
+ run_task_queue(&tq_disk);
+
+ current->policy |= SCHED_YIELD;
+ __set_current_state(TASK_RUNNING);
+ schedule();
+ goto repeat_alloc;
+}
+
struct buffer_head * create_bounce(int rw, struct buffer_head * bh_orig)
{
struct page *page;
@@ -236,24 +380,15 @@
if (!PageHighMem(bh_orig->b_page))
return bh_orig;
-repeat_bh:
- bh = kmem_cache_alloc(bh_cachep, SLAB_BUFFER);
- if (!bh) {
- wakeup_bdflush(1); /* Sets task->state to TASK_RUNNING */
- goto repeat_bh;
- }
+ bh = alloc_bounce_bh();
/*
* This is wasteful for 1k buffers, but this is a stopgap measure
* and we are being ineffective anyway. This approach simplifies
* things immensly. On boxes with more than 4GB RAM this should
* not be an issue anyway.
*/
-repeat_page:
- page = alloc_page(GFP_BUFFER);
- if (!page) {
- wakeup_bdflush(1); /* Sets task->state to TASK_RUNNING */
- goto repeat_page;
- }
+ page = alloc_bounce_page();
+
set_bh_page(bh, page, 0);
bh->b_next = NULL;
Andrea
* Re: RFC: Bouncebuffer fixes
From: Arjan van de Ven @ 2001-04-29 7:56 UTC
To: Andrea Arcangeli; +Cc: linux-mm, alan, Linus Torvalds
On Sun, Apr 29, 2001 at 02:07:57AM +0200, Andrea Arcangeli wrote:
> Hmm, I cannot remember any flush_dirty_buffers called by alloc_bounce_page in
> any patch floating around; there certainly isn't any in my tree, so the
> above recursion cannot happen here.
It's in 2.4.3-acFoo
> ftp://ftp.us.kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/2.4.4aa1/00_highmem-deadlock-3
This looks like the code in Alan's tree around 2.4.3-ac7, and that is NOT
enough to fix the deadlock. With that patch, tests deadlock within 10 minutes....
One of the reasons it deadlocks is that GFP_BUFFER can sleep here
without any guarantee of progress. The very VM threads that should
guarantee progress end up sleeping here.
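Sketched as a call chain (illustrative; the intermediate frames are an
assumption, not lifted from any particular tree), the failure mode
described above looks like:

/*
 *  kswapd (PF_MEMALLOC, supposed to free memory)
 *    -> ... -> create_bounce()
 *         -> alloc_bounce_page()
 *              -> alloc_page(GFP_BUFFER)  -- sleeps waiting for free
 *                 pages, i.e. for the very memory this thread was
 *                 meant to reclaim, so no progress can be made.
 */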
Greetings,
Arjan van de Ven
* Re: RFC: Bouncebuffer fixes
From: Andrea Arcangeli @ 2001-04-29 13:17 UTC
To: Arjan van de Ven; +Cc: linux-mm, alan, Linus Torvalds
On Sun, Apr 29, 2001 at 03:56:26AM -0400, Arjan van de Ven wrote:
> This looks like the code in Alan's tree around 2.4.3-ac7, and that is NOT
> enough to fix the deadlock. With that patch, tests deadlock within 10 minutes....
>
> One of the reasons it deadlocks is that GFP_BUFFER can sleep here
> without any guarantee of progress. The very VM threads that should
GFP_BUFFER doesn't provide a guarantee of progress and that's fine; as long
as GFP_BUFFER allocations eventually return NULL there should be no
problem. The fact that some emergency buffer is in flight is precisely the
guarantee of progress, because after unplugging tq_disk we know those
emergency buffers will be released without the need for further memory
allocations.
If GFP_BUFFER allocations never return and deadlock inside the VM,
that's a completely unrelated bug and I think you shouldn't work around it
in highmem.c.
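In loop form, the progress argument above is roughly the following (a
simplified sketch of the alloc_bounce_page() logic from the inlined
patch; take_from_emergency_pool() is an illustrative stand-in for the
locked list_del() sequence, not a real function):

        for (;;) {
                page = alloc_page(GFP_BUFFER);     /* may return NULL */
                if (page)
                        return page;
                page = take_from_emergency_pool(); /* may be empty */
                if (page)
                        return page;
                /* Unplug: the in-flight emergency bounces complete and
                 * are put back into the pool, so we make progress. */
                run_task_queue(&tq_disk);
                schedule();
        }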
Andrea
* Re: RFC: Bouncebuffer fixes
From: Arjan van de Ven @ 2001-04-29 13:41 UTC
To: Andrea Arcangeli; +Cc: Arjan van de Ven, linux-mm, alan, Linus Torvalds
On Sun, Apr 29, 2001 at 03:17:11PM +0200, Andrea Arcangeli wrote:
> GFP_BUFFER doesn't provide a guarantee of progress and that's fine; as long
> as GFP_BUFFER allocations eventually return NULL there should be no
> problem. The fact that some emergency buffer is in flight is precisely the
> guarantee of progress, because after unplugging tq_disk we know those
> emergency buffers will be released without the need for further memory
> allocations.
This is NOT what is happening. Look at the code: it does a GFP_BUFFER
allocation before even attempting to use the bounce buffers! So there is no
guarantee of having emergency bounce buffers in flight.
Also, I'm not totally convinced that GFP_BUFFER will never sleep before
running tq_disk, but I agree that that can qualify as a separate bug.
Greetings,
Arjan van de Ven
* Re: RFC: Bouncebuffer fixes
From: Andrea Arcangeli @ 2001-04-29 13:42 UTC
To: Arjan van de Ven; +Cc: linux-mm, alan, Linus Torvalds
> On Sun, Apr 29, 2001 at 03:56:26AM -0400, Arjan van de Ven wrote:
> > This looks like the code in Alan's tree around 2.4.3-ac7, and that is NOT
> > enough to fix the deadlock. With that patch, tests deadlock within 10 minutes....
just in case, also make sure you merged my fixes as well; the first
patch I saw was buggy and could deadlock the machine for MM-unrelated
reasons.
Andrea
* Re: RFC: Bouncebuffer fixes
From: Andrea Arcangeli @ 2001-04-29 14:10 UTC
To: Arjan van de Ven; +Cc: linux-mm, alan, Linus Torvalds
On Sun, Apr 29, 2001 at 09:41:21AM -0400, Arjan van de Ven wrote:
> On Sun, Apr 29, 2001 at 03:17:11PM +0200, Andrea Arcangeli wrote:
>
> > GFP_BUFFER doesn't provide a guarantee of progress and that's fine; as long
> > as GFP_BUFFER allocations eventually return NULL there should be no
> > problem. The fact that some emergency buffer is in flight is precisely the
> > guarantee of progress, because after unplugging tq_disk we know those
> > emergency buffers will be released without the need for further memory
> > allocations.
>
> This is NOT what is happening. Look at the code: it does a GFP_BUFFER
> allocation before even attempting to use the bounce buffers! So there is no
> guarantee of having emergency bounce buffers in flight.
Of course, _the first time_ GFP_BUFFER fails, you have the guarantee
that the pool is _full_ of emergency bounce buffers.
Note that whether GFP_BUFFER fails or succeeds is absolutely not
interesting and is unrelated to the anti-deadlock logic. You could drop the
GFP_BUFFER allocation and the code should keep working (if that were not
the case, _that_ would be the real bug).
The only reason for the GFP_BUFFER allocation is to keep more I/O in
flight when normal memory is available.
The only "interesting" part of the algorithm I was talking about in the
last email is when the emergency pool is _empty_ (which in turn also
means GFP_BUFFER _just_ failed as we tried to allocate from the
emergency pool) and I wasn't even considering the case when the
emergency pool is not empty.
> Also, I'm not totally convinced that GFP_BUFFER will never sleep before
> running tq_disk, but I agree that that can qualify as a separate bug.
It is perfectly fine for GFP_BUFFER to sleep; the only things GFP_BUFFER
must _not_ do are start additional I/O (to avoid recursing on the fs locks)
and deadlock (and this second property is common to all the GFP_*
flags).
As far as I can tell, if you use my patch on top of vanilla 2.4.4 and you
still get a deadlock in highmem.c, it can only be because GFP_BUFFER
deadlocked, and that can only be unrelated to the code in highmem.c. I
also suggest verifying that GFP_BUFFER really deadlocks in vanilla 2.4.4
too, because I haven't reproduced that yet.
Andrea
* anti-deadlock logic (was Re: RFC: Bouncebuffer fixes)
From: Szabolcs Szakacsits @ 2001-04-30 13:53 UTC
To: Andrea Arcangeli
Cc: Arjan van de Ven, linux-mm, alan, Linus Torvalds, Jeff V. Merkey
On Sun, 29 Apr 2001, Andrea Arcangeli wrote:
> Note that whether GFP_BUFFER fails or succeeds is absolutely not
> interesting and is unrelated to the anti-deadlock logic.
May I ask what the anti-deadlock logic is? Because it does not work [see
all kinds of MM-related hangs and livelocks on lkml]. As of 2.4.4, only
GFP_ATOMIC, kswapd, reclaimd, mtdblockd, and OOM-killed process
allocations can return NULL [from __alloc_pages()]; all others will loop
until the requested free page(s) become available.
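A simplified sketch of the 2.4-era __alloc_pages() behaviour being
described (try_free_lists() is an illustrative stand-in and the OOM-kill
case is omitted; this is not the real source):

struct page *alloc_pages_sketch(unsigned int gfp_mask, unsigned int order)
{
        for (;;) {
                struct page *page = try_free_lists(gfp_mask, order);
                if (page)
                        return page;
                /* Atomic callers and the VM threads themselves
                 * (PF_MEMALLOC) are allowed to see failure... */
                if (!(gfp_mask & __GFP_WAIT) ||
                    (current->flags & PF_MEMALLOC))
                        return NULL;
                /* ...everyone else synchronously reclaims and retries,
                 * potentially forever. */
                try_to_free_pages(gfp_mask);
        }
}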
Szaka
* Re: RFC: Bouncebuffer fixes
From: Ingo Molnar @ 2001-05-01 8:14 UTC
To: Andrea Arcangeli; +Cc: Arjan van de Ven, linux-mm, alan, Linus Torvalds
On Sun, 29 Apr 2001, Andrea Arcangeli wrote:
> just in case, also make sure you merged my fixes as well; the first
> patch I saw was buggy and could deadlock the machine for MM-unrelated
> reasons.
all the deadlock-unrelated fixes were merged into the -ac tree long ago,
so that is certainly not the cause of the deadlock.
please re-check whether there are any fixes missing in the latest -ac
tree plus Arjan's fixes.
Ingo