* Selective swap out of processes
@ 2007-08-28 16:54 Javier Cabezas Rodríguez
2007-08-29 2:37 ` Nick Piggin
2007-08-29 5:18 ` Christoph Lameter
0 siblings, 2 replies; 14+ messages in thread
From: Javier Cabezas Rodríguez @ 2007-08-28 16:54 UTC (permalink / raw)
To: linux-mm
[-- Attachment #1: Type: text/plain, Size: 2444 bytes --]
Hi all,
I am trying to reduce main memory power consumption when the system
is idle. To achieve this, I want to freeze some (user-defined)
processes when the system enters a long idle period and swap them
out to disk. This frees memory, and the remaining in-use memory can
then be moved to a minimal set of memory ranks so that the rest of
the ranks can be switched off.
To the best of my knowledge, a process can own the following types of
memory pages:
- Mapped pages
· Executable and Read-only mapped pages that are backed by a
file in the disk. These pages can be directly unmapped (if they
are not shared) -> UNMAP
· Writable file mapped pages that must be flushed to disk
(synced) before they are unmapped -> SYNC + UNMAP
- Anonymous pages in User Mode address spaces -> SWAP
- Mapped pages of tmpfs filesystem -> SWAP
I have implemented the process selection mechanism (using an entry for
each PID in proc), and the process freezing/resume (using the
refrigerator function, like in the hibernation code).
Now I am implementing the memory freeing. The biggest problem here is
that the kernel's regular swap-out algorithm only frees memory when it
is needed, so I don't know how the standard routines behave in this
situation. I have looked at the standard swapping functions
(shrink_zones, shrink_zone, ...) and I think they handle all the
process page types I enumerated above. So, for each VMA of the
process, I build a list with all its pages and pass it as a parameter
to shrink_page_list (before that, I remove them from the LRU
active/inactive lists with del_page_from_lru).
First I have tried with the executable VMA (of a lynx process) mapped to
the executable file. However none of the pages is freed.
shrink_page_list skips each page due to this check:
	referenced = page_referenced(page, 1);
	/* In active use or really unfreeable? Activate it. */
	if (referenced && page_mapping_inuse(page))
		goto activate_locked;
It seems they are mapped somewhere else and they cannot be freed. So,
which operations should I perform on the pages (try_to_unmap,
pte_mkold, ...) before I call shrink_page_list?
I would be eternally grateful if someone could help me with this :-)
Thanks in advance.
Javi
--
Javier Cabezas Rodríguez
Phd. Student - DAC (UPC)
jcabezas@ac.upc.edu
[-- Attachment #2: This part of the message is digitally signed --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: Selective swap out of processes
2007-08-28 16:54 Selective swap out of processes Javier Cabezas Rodríguez
@ 2007-08-29 2:37 ` Nick Piggin
2007-08-29 10:37 ` Javier Cabezas Rodríguez
2007-08-29 5:18 ` Christoph Lameter
1 sibling, 1 reply; 14+ messages in thread
From: Nick Piggin @ 2007-08-29 2:37 UTC (permalink / raw)
To: Javier Cabezas Rodríguez; +Cc: linux-mm
Javier Cabezas Rodríguez wrote:
> Hi all,
>
> I am trying to reduce the main memory power consumption when the system
> is idle. In order to achieve it, I want to freeze some processes
> (user-defined) when the system enters a long idle period and swap them
> out to the disk. After that, more memory is free and then, the remaining
> used memory can be moved to a minimal set of memory ranks so the rest of
> ranks can be switched off.
>
> To the best of my knowledge, a process can own the following types of
> memory pages:
> - Mapped pages
> · Executable and Read-only mapped pages that are backed by a
> file in the disk. These pages can be directly unmapped (if they
> are not shared) -> UNMAP
> · Writable file mapped pages that must be flushed to disk
> (synced) before they are unmapped -> SYNC + UNMAP
> - Anonymous pages in User Mode address spaces -> SWAP
> - Mapped pages of tmpfs filesystem -> SWAP
>
> I have implemented the process selection mechanism (using an entry for
> each PID in proc), and the process freezing/resume (using the
> refrigerator function, like in the hibernation code).
>
> Now I am implementing the memory freeing. The biggest problem here is
> that the regular swapping out algorithm of the kernel only frees memory
> when it is needed, so I don't know which is the behaviour of the
> standard routines in this situation. I have looked at the standard
> swapping functions (shrink_zones, shrink_zone, ...) and I think they
> handle all the process page types I enumerated previously. So, for each
> VMA of the process, I build a page list with all the pages and pass it
> as a parameter to shrink_page_list (before that I remove them from the
> LRU active/inactive lists with del_page_from_lru).
>
> First I have tried with the executable VMA (of a lynx process) mapped to
> the executable file. However none of the pages is freed.
> shrink_page_list skips each page due to this check:
>
> referenced = page_referenced(page, 1);
> /* In active use or really unfreeable? Activate it. */
> if (referenced && page_mapping_inuse(page))
> goto activate_locked;
>
> It seems they are mapped somewhere else and they cannot be freed. So,
> which operations should I perform on the pages (try_to_unmap,
> pte_mkold, ...) before I call shrink_page_list?
Simplest will be just to set referenced to 0 right after calling
page_referenced, in the case where you want to forcefully swap out
the page.
try_to_unmap will get called later in the same function.
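In other words, something like this (a hypothetical fragment against a
2.6.22-era mm/vmscan.c; the force_swap flag is an invented illustration,
not an existing scan_control field):

	referenced = page_referenced(page, 1);
	/* Hypothetical: ignore recent references when forcing the
	 * page out, so the activation check below never fires. */
	if (sc->force_swap)
		referenced = 0;
	/* In active use or really unfreeable? Activate it. */
	if (referenced && page_mapping_inuse(page))
		goto activate_locked;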
Unmapped pagecache and other caches are going to take up a fair
bit of memory as well, and fragmentation might make it hard to
get large enough regions of contiguous memory to switch off chips,
though.
--
SUSE Labs, Novell Inc.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org
* Re: Selective swap out of processes
2007-08-28 16:54 Selective swap out of processes Javier Cabezas Rodríguez
2007-08-29 2:37 ` Nick Piggin
@ 2007-08-29 5:18 ` Christoph Lameter
1 sibling, 0 replies; 14+ messages in thread
From: Christoph Lameter @ 2007-08-29 5:18 UTC (permalink / raw)
To: Javier Cabezas Rodríguez; +Cc: linux-mm
[-- Attachment #1: Type: TEXT/PLAIN, Size: 1028 bytes --]
On Tue, 28 Aug 2007, Javier Cabezas Rodríguez wrote:
> Now I am implementing the memory freeing. The biggest problem here is
> that the regular swapping out algorithm of the kernel only frees memory
> when it is needed, so I don't know which is the behaviour of the
> standard routines in this situation. I have looked at the standard
> swapping functions (shrink_zones, shrink_zone, ...) and I think they
> handle all the process page types I enumerated previously. So, for each
> VMA of the process, I build a page list with all the pages and pass it
> as a parameter to shrink_page_list (before that I remove them from the
> LRU active/inactive lists with del_page_from_lru).
You may want to look at the page migration logic and in particular the
implementation of memory unplug in Andrew's tree. Memory unplug moves
memory to another node. You could use the same logic but instead of
moving pages reclaim them. Movable pages are reclaimable and much of the
page migration logic is based on reclaim.
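For illustration, a sketch of that approach using the 2.6-era migration
helpers (isolate_lru_page as found in mm/migrate.c around 2.6.22; exact
signatures vary between kernel versions, so treat this as a hedged
outline rather than working code):

	/* Pull each target page off its zone LRU the way
	 * migrate_pages() does, collecting the pages on a private
	 * list with a reference held on each. */
	LIST_HEAD(pagelist);

	if (isolate_lru_page(page, &pagelist) == 0) {
		/* Instead of handing the list to migrate_pages(),
		 * pass it to a reclaim routine such as
		 * shrink_page_list(), which will unmap, write back
		 * and free the pages. */
	}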
* Re: Selective swap out of processes
2007-08-29 2:37 ` Nick Piggin
@ 2007-08-29 10:37 ` Javier Cabezas Rodríguez
2007-08-29 18:06 ` Javier Cabezas Rodríguez
2007-08-30 7:09 ` Nick Piggin
0 siblings, 2 replies; 14+ messages in thread
From: Javier Cabezas Rodríguez @ 2007-08-29 10:37 UTC (permalink / raw)
To: Nick Piggin; +Cc: linux-mm
[-- Attachment #1: Type: text/plain, Size: 2362 bytes --]
On Wed, 2007-08-29 at 12:37 +1000, Nick Piggin wrote:
> Simplest will be just to set referenced to 0 right after calling
> page_referenced, in the case you want to forcefully swap out the
> page.
>
> try_to_unmap will get called later in the same function.
I have tried this solution, but 0 pages are freed...
- RO/EXEC pages mapped from the executable are now skipped due to this
check:
	if (!mapping || !remove_mapping(mapping, page))
		goto keep_locked;
The offender is this check in remove_mapping:
	if (unlikely(page_count(page) != 2))
		goto cannot_free;
- RW pages mapped from the executable are skipped because pageout
returns PAGE_KEEP.
- Other pages are skipped because try_to_unmap returns SWAP_FAIL.
I also added a call to ptep_clear_flush_young for each pte, to satisfy
this check in try_to_unmap_one:
	if (!migration && ((vma->vm_flags & VM_LOCKED) ||
			(ptep_clear_flush_young(vma, address, pte)))) {
		ret = SWAP_FAIL;
		goto out_unmap;
	}
My code calls the following function for each VMA of the process. Are
there errors in the function?:
int my_free_pages(struct vm_area_struct * vma, struct mm_struct * mm)
{
	LIST_HEAD(page_list);
	unsigned long nr_taken;
	struct zone * zone = NULL;
	int ret;
	pte_t *pte_k;
	pud_t *pud;
	pmd_t *pmd;
	unsigned long addr;
	struct page * p;
	struct scan_control sc;

	sc.gfp_mask = __GFP_FS;
	sc.may_swap = 1;
	sc.may_writepage = 1;

	for (addr = vma->vm_start, nr_taken = 0; addr < vma->vm_end;
	     addr += PAGE_SIZE, nr_taken++) {
		pgd_t *pgd = pgd_offset(mm, addr);
		if (pgd_none(*pgd))
			return;
		pud = pud_offset(pgd, addr);
		if (pud_none(*pud))
			return;
		pmd = pmd_offset(pud, addr);
		if (pmd_none(*pmd))
			return;
		if (pmd_large(*pmd))
			pte_k = (pte_t *)pmd;
		else
			pte_k = pte_offset_kernel(pmd, addr);

		if (pte_k && pte_present(*pte_k)) {
			p = pte_page(*pte_k);
			if (!zone)
				zone = page_zone(p);

			ptep_clear_flush_young(vma, addr, pte_k);
			del_page_from_lru(zone, p);
			list_add(&p->lru, &page_list);
		}
	}

	spin_lock_irq(&zone->lru_lock);
	__mod_zone_page_state(zone, NR_INACTIVE, -nr_taken);
	zone->pages_scanned += nr_taken;
	spin_unlock_irq(&zone->lru_lock);
}
Thanks
Javi
--
Javier Cabezas Rodríguez
Phd. Student - DAC (UPC)
jcabezas@ac.upc.edu
[-- Attachment #2: This part of the message is digitally signed --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
* Re: Selective swap out of processes
2007-08-29 10:37 ` Javier Cabezas Rodríguez
@ 2007-08-29 18:06 ` Javier Cabezas Rodríguez
2007-08-29 22:01 ` Dave Hansen
2007-08-30 7:13 ` Nick Piggin
2007-08-30 7:09 ` Nick Piggin
1 sibling, 2 replies; 14+ messages in thread
From: Javier Cabezas Rodríguez @ 2007-08-29 18:06 UTC (permalink / raw)
To: Nick Piggin; +Cc: linux-mm
[-- Attachment #1: Type: text/plain, Size: 1538 bytes --]
> My code calls the following function for each VMA of the process. Are
> there errors in the function?:
Sorry. I forgot some lines:
int my_free_pages(struct vm_area_struct * vma, struct mm_struct * mm)
{
	LIST_HEAD(page_list);
	unsigned long nr_taken;
	struct zone * zone = NULL;
	int ret;
	pte_t *pte_k;
	pud_t *pud;
	pmd_t *pmd;
	unsigned long addr;
	struct page * p;
	struct scan_control sc;

	sc.gfp_mask = __GFP_FS;
	sc.may_swap = 1;
	sc.may_writepage = 1;

	for (addr = vma->vm_start, nr_taken = 0; addr < vma->vm_end;
	     addr += PAGE_SIZE, nr_taken++) {
		pgd_t *pgd = pgd_offset(mm, addr);
		if (pgd_none(*pgd))
			return;
		pud = pud_offset(pgd, addr);
		if (pud_none(*pud))
			return;
		pmd = pmd_offset(pud, addr);
		if (pmd_none(*pmd))
			return;
		if (pmd_large(*pmd))
			pte_k = (pte_t *)pmd;
		else
			pte_k = pte_offset_kernel(pmd, addr);

		if (pte_k && pte_present(*pte_k)) {
			p = pte_page(*pte_k);
			if (!zone)
				zone = page_zone(p);

			ptep_clear_flush_young(vma, addr, pte_k);
			del_page_from_lru(zone, p);
			list_add(&p->lru, &page_list);
		}
	}

	spin_lock_irq(&zone->lru_lock);
	__mod_zone_page_state(zone, NR_INACTIVE, -nr_taken);
	zone->pages_scanned += nr_taken;
	spin_unlock_irq(&zone->lru_lock);

	printk("VMC: %lu pages set to be freed\n", nr_taken);
	printk("VMC: %d pages freed\n", ret =
	       shrink_page_list_vmswap(&page_list, &sc, PAGEOUT_IO_SYNC));
}
Javi
--
Javier Cabezas Rodríguez
Phd. Student - DAC (UPC)
jcabezas@ac.upc.edu
[-- Attachment #2: This part of the message is digitally signed --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
* Re: Selective swap out of processes
2007-08-29 18:06 ` Javier Cabezas Rodríguez
@ 2007-08-29 22:01 ` Dave Hansen
2007-08-30 7:13 ` Nick Piggin
1 sibling, 0 replies; 14+ messages in thread
From: Dave Hansen @ 2007-08-29 22:01 UTC (permalink / raw)
To: Javier Cabezas Rodríguez; +Cc: Nick Piggin, linux-mm
I need the same basic thing for process checkpoint/restart. I just have
a syscall to which I give a virtual address, and then have the kernel
try to swap it out. It uses follow_page(FOLL_GET) and find_vma() in the
higher-level function, but this appears to work just fine.
I meant this as a horrible hack to play with a couple of months ago, but
it hasn't quite broken on me, yet.
diff -puN mm/vmscan.c~ptrace-force-swap1 mm/vmscan.c
--- lxc/mm/vmscan.c~ptrace-force-swap1	2007-03-15 11:21:06.000000000 -0700
+++ lxc-dave/mm/vmscan.c	2007-03-15 13:03:57.000000000 -0700
@@ -614,6 +614,23 @@ static unsigned long shrink_page_list(st
 	return nr_reclaimed;
 }
 
+int try_to_put_page_in_swap(struct page *page)
+{
+
+	get_page(page);
+	if (page_count(page) == 1)
+		/* page was freed from under us. So we are done. */
+		return -EAGAIN;
+	lock_page(page);
+	if (PageWriteback(page))
+		wait_on_page_writeback(page);
+	try_to_unmap(page, 0);
+	//printk("page mapped: %d\n", page_mapped(page));
+	unlock_page(page);
+	put_page(page);
+	return 0;
+}
+
 /*
  * zone->lru_lock is heavily contended. Some of the functions that
  * shrink the lists perform better by taking out a batch of pages
--
Dave
* Re: Selective swap out of processes
2007-08-29 10:37 ` Javier Cabezas Rodríguez
2007-08-29 18:06 ` Javier Cabezas Rodríguez
@ 2007-08-30 7:09 ` Nick Piggin
1 sibling, 0 replies; 14+ messages in thread
From: Nick Piggin @ 2007-08-30 7:09 UTC (permalink / raw)
To: Javier Cabezas Rodríguez; +Cc: linux-mm
Javier Cabezas Rodríguez wrote:
> On Wed, 2007-08-29 at 12:37 +1000, Nick Piggin wrote:
>
>>Simplest will be just to set referenced to 0 right after calling
>>page_referenced, in the case you want to forcefully swap out the
>>page.
>>
>>try_to_unmap will get called later in the same function.
>
>
> I have tried this solution, but 0 pages are freed...
>
> - RO/EXEC pages mapped from the executable are now skipped due to this
> check:
>
> if (!mapping || !remove_mapping(mapping, page))
> goto keep_locked;
>
> The offender is this check in remove_mapping:
>
> if (unlikely(page_count(page) != 2))
> goto cannot_free;
>
> - RW pages mapped from the executable are skipped because pageout
> returns PAGE_KEEP.
>
> - Other pages are skipped because try_to_unmap returns SWAP_FAIL.
You still actually have to call page_referenced to clear the young
bits in the ptes, right? That should prevent try_to_unmap returning
SWAP_FAIL. It can be mapped by multiple processes, so just clearing
the young bit for one pte won't help (especially for exec pages,
which are very likely to be used by more than one process).
If your page_count is elevated after the page has been unmapped,
then there is something else using the page or your function isn't
doing the correct refcounting.
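The ordering Nick describes can be sketched as follows (a hypothetical
fragment; it assumes the 2.6.22-era APIs where page_referenced() takes
the page and an is_locked flag, and try_to_unmap() takes a migration
flag):

	lock_page(page);
	/* page_referenced() walks the rmap and clears the young bit
	 * in *every* pte mapping this page, across all processes --
	 * unlike a single ptep_clear_flush_young() on one mapping. */
	page_referenced(page, 1);
	if (try_to_unmap(page, 0) == SWAP_SUCCESS) {
		/* no pte maps the page any more; it can now pass the
		 * checks in shrink_page_list() and be reclaimed */
	}
	unlock_page(page);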
--
SUSE Labs, Novell Inc.
* Re: Selective swap out of processes
2007-08-29 18:06 ` Javier Cabezas Rodríguez
2007-08-29 22:01 ` Dave Hansen
@ 2007-08-30 7:13 ` Nick Piggin
2007-08-30 23:41 ` Javier Cabezas Rodríguez
1 sibling, 1 reply; 14+ messages in thread
From: Nick Piggin @ 2007-08-30 7:13 UTC (permalink / raw)
To: Javier Cabezas Rodríguez; +Cc: linux-mm
Javier Cabezas Rodríguez wrote:
>>My code calls the following function for each VMA of the process. Are
>>there errors in the function?:
>
>
> Sorry. I forgot some lines:
>
> int my_free_pages(struct vm_area_struct * vma, struct mm_struct * mm)
> {
> LIST_HEAD(page_list);
> unsigned long nr_taken;
> struct zone * zone = NULL;
> int ret;
> pte_t *pte_k;
> pud_t *pud;
> pmd_t *pmd;
> unsigned long addr;
> struct page * p;
> struct scan_control sc;
>
> sc.gfp_mask = __GFP_FS;
> sc.may_swap = 1;
> sc.may_writepage = 1;
>
> for (addr = vma->vm_start, nr_taken = 0; addr < vma->vm_end; addr +=
> PAGE_SIZE, nr_taken++) {
> pgd_t *pgd = pgd_offset(mm, addr);
> if (pgd_none(*pgd))
> return;
> pud = pud_offset(pgd, addr);
> if (pud_none(*pud))
> return;
> pmd = pmd_offset(pud, addr);
> if (pmd_none(*pmd))
> return;
> if (pmd_large(*pmd))
> pte_k = (pte_t *)pmd;
> else
> pte_k = pte_offset_kernel(pmd, addr);
>
> if (pte_k && pte_present(*pte_k)) {
> p = pte_page(*pte_k);
> if (!zone)
> zone = page_zone(p);
>
> ptep_clear_flush_young(vma, addr, pte_k);
> del_page_from_lru(zone, p);
> list_add(&p->lru, &page_list);
> }
> }
>
> spin_lock_irq(&zone->lru_lock);
> __mod_zone_page_state(zone, NR_INACTIVE, -nr_taken);
> zone->pages_scanned += nr_taken;
> spin_unlock_irq(&zone->lru_lock);
>
> printk("VMC: %lu pages set to be freed\n", nr_taken);
> printk("VMC: %d pages freed\n", ret =
> shrink_page_list_vmswap(&page_list, &sc, PAGEOUT_IO_SYNC));
> }
I don't know if that's right or not really, without more context,
but it doesn't look like you have the right page table walking
locking or page refcounting (and you probably don't want to simply
be returning when you encounter the first empty page table entry).
Anyway. I'd be inclined to not do your own page table walking at
this stage and begin by using get_user_pages() to do it for you.
Then if you get to the stage of wanting to optimise it, you could
copy the get_user_pages code, and use that as a starting point.
--
SUSE Labs, Novell Inc.
* Re: Selective swap out of processes
2007-08-30 7:13 ` Nick Piggin
@ 2007-08-30 23:41 ` Javier Cabezas Rodríguez
2007-08-30 23:47 ` Javier Cabezas Rodríguez
2007-08-30 23:50 ` Javier Cabezas Rodríguez
0 siblings, 2 replies; 14+ messages in thread
From: Javier Cabezas Rodríguez @ 2007-08-30 23:41 UTC (permalink / raw)
To: Nick Piggin, haveblue; +Cc: linux-mm
I have modified the code so it now uses get_user_pages. I'm also using
the function posted by Dave Hansen in this thread to free each page.
However my module is still not able to free any page. I inspect the
smaps entries of each process in /proc before/after executing my code
to check it. The full code is posted next; don't worry, it's quite
short. The entry point is free_procs, called from a procfs handler
when a number is written to "/proc/swapper" (created by my module).
The processes are in UNINTERRUPTIBLE_SLEEP state before I try to swap
them out.
Can someone find any obvious problem in the code?
Thanks.
Javi
int try_to_put_page_in_swap(struct page *page)
{
	get_page(page);
	if (page_count(page) == 1)
		/* page was freed from under us. So we are done. */
		return -EAGAIN;

	lock_page(page);
	if (PageWriteback(page))
		wait_on_page_writeback(page);

	try_to_unmap(page, 0);
	unlock_page(page);
	put_page(page);
	return 0;
}

int free_process(struct vm_area_struct * vma, struct task_struct * p)
{
	int write;
	int npages;
	struct page ** pages;
	int i;

	spin_lock(&p->mm->page_table_lock);
	write = (vma->vm_flags & VM_WRITE) != 0;
	npages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
	pages = kmalloc(npages * sizeof(struct page *), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;
	npages = get_user_pages(p, p->mm, vma->vm_start, npages, write, 0,
				pages, NULL);
	kfree(pages);
	spin_unlock(&p->mm->page_table_lock);

	for (i = 0; i < npages; i++)
		try_to_put_page_in_swap(pages[i]);

	return npages;
}

void free_procs(void)
{
	struct task_struct *g, *p;
	struct vm_area_struct * vma;
	int c, count;

	read_lock(&tasklist_lock);
	do_each_thread(g, p) {
		if (!p->pinned) { /* This process can be swapped out */
			down_read(&p->mm->mmap_sem);
			for (vma = p->mm->mmap, count = 0; vma; vma = vma->vm_next) {
				if ((c = free_process(vma, p)) == -ENOMEM) {
					printk("VMC: Out of Memory\n");
					up_read(&p->mm->mmap_sem);
					goto out;
				}
				count += c;
			}
			up_read(&p->mm->mmap_sem);
			printk("VMC: Process %d. %d pages freed\n", p->pid, count);
		}
	} while_each_thread(g, p);
out:
	read_unlock(&tasklist_lock);
}
On 8/30/07, Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> Javier Cabezas Rodriguez wrote:
> >>My code calls the following function for each VMA of the process. Are
> >>there errors in the function?:
> >
> >
> > Sorry. I forgot some lines:
> >
> > int my_free_pages(struct vm_area_struct * vma, struct mm_struct * mm)
> > {
> > LIST_HEAD(page_list);
> > unsigned long nr_taken;
> > struct zone * zone = NULL;
> > int ret;
> > pte_t *pte_k;
> > pud_t *pud;
> > pmd_t *pmd;
> > unsigned long addr;
> > struct page * p;
> > struct scan_control sc;
> >
> > sc.gfp_mask = __GFP_FS;
> > sc.may_swap = 1;
> > sc.may_writepage = 1;
> >
> > for (addr = vma->vm_start, nr_taken = 0; addr < vma->vm_end; addr +=
> > PAGE_SIZE, nr_taken++) {
> > pgd_t *pgd = pgd_offset(mm, addr);
> > if (pgd_none(*pgd))
> > return;
> > pud = pud_offset(pgd, addr);
> > if (pud_none(*pud))
> > return;
> > pmd = pmd_offset(pud, addr);
> > if (pmd_none(*pmd))
> > return;
> > if (pmd_large(*pmd))
> > pte_k = (pte_t *)pmd;
> > else
> > pte_k = pte_offset_kernel(pmd, addr);
> >
> > if (pte_k && pte_present(*pte_k)) {
> > p = pte_page(*pte_k);
> > if (!zone)
> > zone = page_zone(p);
> >
> > ptep_clear_flush_young(vma, addr, pte_k);
> > del_page_from_lru(zone, p);
> > list_add(&p->lru, &page_list);
> > }
> > }
> >
> > spin_lock_irq(&zone->lru_lock);
> > __mod_zone_page_state(zone, NR_INACTIVE, -nr_taken);
> > zone->pages_scanned += nr_taken;
> > spin_unlock_irq(&zone->lru_lock);
> >
> > printk("VMC: %lu pages set to be freed\n", nr_taken);
> > printk("VMC: %d pages freed\n", ret =
> > shrink_page_list_vmswap(&page_list, &sc, PAGEOUT_IO_SYNC));
> > }
>
> I don't know if that's right or not really, without more context,
> but it doesn't look like you have the right page table walking
> locking or page refcounting (and you probably don't want to simply
> be returning when you encounter the first empty page table entry).
>
> Anyway. I'd be inclined to not do your own page table walking at
> this stage and begin by using get_user_pages() to do it for you.
> Then if you get to the stage of wanting to optimise it, you could
> copy the get_user_pages code, and use that as a starting point.
>
> --
> SUSE Labs, Novell Inc.
>
--
Javi
* Re: Selective swap out of processes
2007-08-30 23:41 ` Javier Cabezas Rodríguez
@ 2007-08-30 23:47 ` Javier Cabezas Rodríguez
2007-08-30 23:50 ` Javier Cabezas Rodríguez
1 sibling, 0 replies; 14+ messages in thread
From: Javier Cabezas Rodríguez @ 2007-08-30 23:47 UTC (permalink / raw)
To: Nick Piggin, haveblue; +Cc: linux-mm
Sorry. It was an old version:
int try_to_put_page_in_swap(struct page *page)
{
	get_page(page);
	if (page_count(page) == 1)
		/* page was freed from under us. So we are done. */
		return -EAGAIN;

	lock_page(page);
	if (PageWriteback(page))
		wait_on_page_writeback(page);

	try_to_unmap(page, 0);
	unlock_page(page);
	put_page(page);
	return 0;
}

int free_process(struct vm_area_struct * vma, struct task_struct * p)
{
	int write;
	int npages;
	struct page ** pages;
	int i;

	spin_lock(&p->mm->page_table_lock);
	write = (vma->vm_flags & VM_WRITE) != 0;
	npages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
	pages = kmalloc(npages * sizeof(struct page *), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;
	npages = get_user_pages(p, p->mm, vma->vm_start, npages, write, 0,
				pages, NULL);
	spin_unlock(&p->mm->page_table_lock);

	for (i = 0; i < npages; i++)
		try_to_put_page_in_swap(pages[i]);

	kfree(pages);
	return npages;
}

void free_procs(void)
{
	struct task_struct *g, *p;
	struct vm_area_struct * vma;
	int c, count;

	read_lock(&tasklist_lock);
	do_each_thread(g, p) {
		if (!p->pinned) { /* This process can be swapped out */
			down_read(&p->mm->mmap_sem);
			for (vma = p->mm->mmap, count = 0; vma;
			     vma = vma->vm_next, count += c) {
				if ((c = free_process(vma, p)) == -ENOMEM) {
					printk("VMC: Out of Memory\n");
					up_read(&p->mm->mmap_sem);
					goto out;
				}
			}
			up_read(&p->mm->mmap_sem);
			printk("VMC: Process %d. %d pages freed\n", p->pid, count);
		}
	} while_each_thread(g, p);
out:
	read_unlock(&tasklist_lock);
}
On 8/31/07, Javier Cabezas Rodriguez <jcabezas@ac.upc.edu> wrote:
> I have modified the code so it now uses get_user_pages. I'm also using
> the function posted by Dave Hansen in this thread to free each page.
> However my module is still not able to free any page. I inspect the
> smaps entries of each process in /proc before/after executing my code
> to check it. The full code is posted next; don't worry, it's quite
> short. The entry point is free_procs, called from a procfs handler
> when a number is written to "/proc/swapper" (created by my module).
> The processes are in UNINTERRUPTIBLE_SLEEP state before I try to swap
> them out.
>
> Can someone find any obvious problem in the code?
>
> Thanks.
>
> Javi
>
>
> int try_to_put_page_in_swap(struct page *page)
> {
> get_page(page);
>
> if (page_count(page) == 1) /* page was freed from under us. So we are done. */
> return -EAGAIN;
>
> lock_page(page);
>
> if (PageWriteback(page))
> wait_on_page_writeback(page);
>
> try_to_unmap(page, 0);
>
> unlock_page(page);
> put_page(page);
> return 0;
> }
>
>
> int free_process(struct vm_area_struct * vma, struct task_struct * p)
> {
> int write;
> int npages;
> struct page ** pages;
> int i;
>
> spin_lock(&p->mm->page_table_lock);
> write = (vma->vm_flags & VM_WRITE) != 0;
> npages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
> pages = kmalloc(npages * sizeof(struct page *), GFP_KERNEL);
>
> if (!pages)
> return -ENOMEM;
>
> npages = get_user_pages(p, p->mm, vma->vm_start, npages, write, 0,
> pages, NULL);
>
> kfree(pages);
> spin_unlock(&p->mm->page_table_lock);
>
> for (i = 0; i < npages; i++)
> try_to_put_page_in_swap(pages[i]);
>
> return npages;
> }
>
> void free_procs(void)
> {
> struct task_struct *g, *p;
> struct vm_area_struct * vma;
> int c, count;
>
> read_lock(&tasklist_lock);
> do_each_thread(g, p) {
> if (!p->pinned) { /* This process can be swapped out */
> down_read(&p->mm->mmap_sem);
> for (vma = p->mm->mmap, count = 0; vma; vma = vma->vm_next) {
> if ((c = free_process(vma, p)) == -ENOMEM) {
> printk("VMC: Out of Memory\n");
> up_read(&p->mm->mmap_sem);
> goto out;
> }
> count += c;
> }
> up_read(&p->mm->mmap_sem);
> printk("VMC: Process %d. %d pages freed\n", p->pid, count);
> }
> } while_each_thread(g, p);
>
> out:
> read_unlock(&tasklist_lock);
> }
>
> On 8/30/07, Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> > Javier Cabezas Rodriguez wrote:
> > >>My code calls the following function for each VMA of the process. Are
> > >>there errors in the function?:
> > >
> > >
> > > Sorry. I forgot some lines:
> > >
> > > int my_free_pages(struct vm_area_struct * vma, struct mm_struct * mm)
> > > {
> > > LIST_HEAD(page_list);
> > > unsigned long nr_taken;
> > > struct zone * zone = NULL;
> > > int ret;
> > > pte_t *pte_k;
> > > pud_t *pud;
> > > pmd_t *pmd;
> > > unsigned long addr;
> > > struct page * p;
> > > struct scan_control sc;
> > >
> > > sc.gfp_mask = __GFP_FS;
> > > sc.may_swap = 1;
> > > sc.may_writepage = 1;
> > >
> > > for (addr = vma->vm_start, nr_taken = 0; addr < vma->vm_end; addr +=
> > > PAGE_SIZE, nr_taken++) {
> > > pgd_t *pgd = pgd_offset(mm, addr);
> > > if (pgd_none(*pgd))
> > > return;
> > > pud = pud_offset(pgd, addr);
> > > if (pud_none(*pud))
> > > return;
> > > pmd = pmd_offset(pud, addr);
> > > if (pmd_none(*pmd))
> > > return;
> > > if (pmd_large(*pmd))
> > > pte_k = (pte_t *)pmd;
> > > else
> > > pte_k = pte_offset_kernel(pmd, addr);
> > >
> > > if (pte_k && pte_present(*pte_k)) {
> > > p = pte_page(*pte_k);
> > > if (!zone)
> > > zone = page_zone(p);
> > >
> > > ptep_clear_flush_young(vma, addr, pte_k);
> > > del_page_from_lru(zone, p);
> > > list_add(&p->lru, &page_list);
> > > }
> > > }
> > >
> > > spin_lock_irq(&zone->lru_lock);
> > > __mod_zone_page_state(zone, NR_INACTIVE, -nr_taken);
> > > zone->pages_scanned += nr_taken;
> > > spin_unlock_irq(&zone->lru_lock);
> > >
> > > printk("VMC: %lu pages set to be freed\n", nr_taken);
> > > printk("VMC: %d pages freed\n", ret =
> > > shrink_page_list_vmswap(&page_list, &sc, PAGEOUT_IO_SYNC));
> > > }
> >
> > I don't know if that's right or not really, without more context,
> > but it doesn't look like you have the right page table walking
> > locking or page refcounting (and you probably don't want to simply
> > be returning when you encounter the first empty page table entry).
> >
> > Anyway. I'd be inclined to not do your own page table walking at
> > this stage and begin by using get_user_pages() to do it for you.
> > Then if you get to the stage of wanting to optimise it, you could
> > copy the get_user_pages code, and use that as a starting point.
> >
> > --
> > SUSE Labs, Novell Inc.
> >
>
>
> --
>
>
> Javi
>
--
Javi
* Re: Selective swap out of processes
2007-08-30 23:41 ` Javier Cabezas Rodríguez
2007-08-30 23:47 ` Javier Cabezas Rodríguez
@ 2007-08-30 23:50 ` Javier Cabezas Rodríguez
2007-08-31 0:34 ` Nick Piggin
2007-08-31 16:40 ` Dave Hansen
1 sibling, 2 replies; 14+ messages in thread
From: Javier Cabezas Rodríguez @ 2007-08-30 23:50 UTC (permalink / raw)
To: Nick Piggin, haveblue; +Cc: linux-mm
Sorry. It was an old version:
int try_to_put_page_in_swap(struct page *page)
{
	get_page(page);
	if (page_count(page) == 1)
		/* page was freed from under us. So we are done. */
		return -EAGAIN;

	lock_page(page);
	if (PageWriteback(page))
		wait_on_page_writeback(page);

	try_to_unmap(page, 0);
	unlock_page(page);
	put_page(page);
	return 0;
}

int free_process(struct vm_area_struct * vma, struct task_struct * p)
{
	int write;
	int npages;
	struct page ** pages;
	int i;

	spin_lock(&p->mm->page_table_lock);
	write = (vma->vm_flags & VM_WRITE) != 0;
	npages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
	pages = kmalloc(npages * sizeof(struct page *), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;
	npages = get_user_pages(p, p->mm, vma->vm_start, npages, write, 0,
				pages, NULL);
	spin_unlock(&p->mm->page_table_lock);

	for (i = 0; i < npages; i++)
		try_to_put_page_in_swap(pages[i]);

	kfree(pages);
	return npages;
}

void free_procs(void)
{
	struct task_struct *g, *p;
	struct vm_area_struct * vma;
	int c, count;

	read_lock(&tasklist_lock);
	do_each_thread(g, p) {
		if (!p->pinned) { /* This process can be swapped out */
			down_read(&p->mm->mmap_sem);
			for (vma = p->mm->mmap, count = 0; vma;
			     vma = vma->vm_next, count += c) {
				if ((c = free_process(vma, p)) == -ENOMEM) {
					printk("VMC: Out of Memory\n");
					up_read(&p->mm->mmap_sem);
					goto out;
				}
			}
			up_read(&p->mm->mmap_sem);
			printk("VMC: Process %d. %d pages freed\n", p->pid, count);
		}
	} while_each_thread(g, p);
out:
	read_unlock(&tasklist_lock);
}
On 8/31/07, Javier Cabezas Rodriguez <jcabezas@ac.upc.edu> wrote:
> I have modified the code so it now uses get_user_pages. I'm also using
> the function posted by Dave Hansen in this thread to free each page.
> However, my module is still not able to free any page. I inspect the
> smaps entries of each process in /proc before/after executing my code
> to check it. The full code is posted next; don't worry, it's quite
> short. The entry point is free_procs, called from a procfs handler
> when a number is written to "/proc/swapper" (created by my module).
> The processes are in UNINTERRUPTIBLE_SLEEP state before I try to swap
> them out.
>
> Can someone find any obvious problem in the code?
>
> Thanks.
>
> Javi
>
>
> int try_to_put_page_in_swap(struct page *page)
> {
>     get_page(page);
>
>     if (page_count(page) == 1) /* page was freed from under us. So we are done. */
>         return -EAGAIN;
>
>     lock_page(page);
>
>     if (PageWriteback(page))
>         wait_on_page_writeback(page);
>
>     try_to_unmap(page, 0);
>
>     unlock_page(page);
>     put_page(page);
>     return 0;
> }
>
>
> int free_process(struct vm_area_struct *vma, struct task_struct *p)
> {
>     int write;
>     int npages;
>     struct page **pages;
>     int i;
>
>     spin_lock(&p->mm->page_table_lock);
>     write = (vma->vm_flags & VM_WRITE) != 0;
>     npages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
>     pages = kmalloc(npages * sizeof(struct page *), GFP_KERNEL);
>
>     if (!pages)
>         return -ENOMEM;
>
>     npages = get_user_pages(p, p->mm, vma->vm_start, npages, write, 0,
>                             pages, NULL);
>
>     kfree(pages);
>     spin_unlock(&p->mm->page_table_lock);
>
>     for (i = 0; i < npages; i++)
>         try_to_put_page_in_swap(pages[i]);
>
>     return npages;
> }
>
> void free_procs(void)
> {
>     struct task_struct *g, *p;
>     struct vm_area_struct *vma;
>     int c, count;
>
>     read_lock(&tasklist_lock);
>     do_each_thread(g, p) {
>         if (!p->pinned) { /* This process can be swapped out */
>             down_read(&p->mm->mmap_sem);
>             for (vma = p->mm->mmap, count = 0; vma; vma = vma->vm_next) {
>                 if ((c = free_process(vma, p)) == -ENOMEM) {
>                     printk("VMC: Out of Memory\n");
>                     up_read(&p->mm->mmap_sem);
>                     goto out;
>                 }
>                 count += c;
>             }
>             up_read(&p->mm->mmap_sem);
>             printk("VMC: Process %d. %d pages freed\n", p->pid, count);
>         }
>     } while_each_thread(g, p);
>
> out:
>     read_unlock(&tasklist_lock);
> }
>
> On 8/30/07, Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> > Javier Cabezas Rodriguez wrote:
> > >>My code calls the following function for each VMA of the process. Are
> > >>there errors in the function?:
> > >
> > >
> > > Sorry. I forgot some lines:
> > >
> > > int my_free_pages(struct vm_area_struct *vma, struct mm_struct *mm)
> > > {
> > >     LIST_HEAD(page_list);
> > >     unsigned long nr_taken;
> > >     struct zone *zone = NULL;
> > >     int ret;
> > >     pte_t *pte_k;
> > >     pud_t *pud;
> > >     pmd_t *pmd;
> > >     unsigned long addr;
> > >     struct page *p;
> > >     struct scan_control sc;
> > >
> > >     sc.gfp_mask = __GFP_FS;
> > >     sc.may_swap = 1;
> > >     sc.may_writepage = 1;
> > >
> > >     for (addr = vma->vm_start, nr_taken = 0; addr < vma->vm_end;
> > >          addr += PAGE_SIZE, nr_taken++) {
> > >         pgd_t *pgd = pgd_offset(mm, addr);
> > >         if (pgd_none(*pgd))
> > >             return;
> > >         pud = pud_offset(pgd, addr);
> > >         if (pud_none(*pud))
> > >             return;
> > >         pmd = pmd_offset(pud, addr);
> > >         if (pmd_none(*pmd))
> > >             return;
> > >         if (pmd_large(*pmd))
> > >             pte_k = (pte_t *)pmd;
> > >         else
> > >             pte_k = pte_offset_kernel(pmd, addr);
> > >
> > >         if (pte_k && pte_present(*pte_k)) {
> > >             p = pte_page(*pte_k);
> > >             if (!zone)
> > >                 zone = page_zone(p);
> > >
> > >             ptep_clear_flush_young(vma, addr, pte_k);
> > >             del_page_from_lru(zone, p);
> > >             list_add(&p->lru, &page_list);
> > >         }
> > >     }
> > >
> > >     spin_lock_irq(&zone->lru_lock);
> > >     __mod_zone_page_state(zone, NR_INACTIVE, -nr_taken);
> > >     zone->pages_scanned += nr_taken;
> > >     spin_unlock_irq(&zone->lru_lock);
> > >
> > >     printk("VMC: %lu pages set to be freed\n", nr_taken);
> > >     printk("VMC: %d pages freed\n", ret =
> > >            shrink_page_list_vmswap(&page_list, &sc, PAGEOUT_IO_SYNC));
> > > }
> >
> > I don't know if that's right or not really, without more context,
> > but it doesn't look like you have the right page table walking
> > locking or page refcounting (and you probably don't want to simply
> > be returning when you encounter the first empty page table entry).
> >
> > Anyway. I'd be inclined to not do your own page table walking at
> > this stage and begin by using get_user_pages() to do it for you.
> > Then if you get to the stage of wanting to optimise it, you could
> > copy the get_user_pages code, and use that as a starting point.
> >
> > --
> > SUSE Labs, Novell Inc.
* Re: Selective swap out of processes
2007-08-30 23:50 ` Javier Cabezas Rodríguez
@ 2007-08-31 0:34 ` Nick Piggin
2007-08-31 16:40 ` Dave Hansen
1 sibling, 0 replies; 14+ messages in thread
From: Nick Piggin @ 2007-08-31 0:34 UTC (permalink / raw)
To: jcabezas; +Cc: haveblue, linux-mm
Javier Cabezas Rodriguez wrote:
> Sorry. It was an old version:
>
> int try_to_put_page_in_swap(struct page *page)
> {
>     get_page(page);
>
>     if (page_count(page) == 1) /* page was freed from under us. So we are done. */
>         return -EAGAIN;
>
>     lock_page(page);
>
>     if (PageWriteback(page))
>         wait_on_page_writeback(page);
>
>     try_to_unmap(page, 0);
>
>     unlock_page(page);
>     put_page(page);
>     return 0;
> }
You'd surely have to add_to_swap here, and at some point
will want to also free the swapcache after writing it out.
Look at how the code in mm/vmscan.c does it.
> int free_process(struct vm_area_struct *vma, struct task_struct *p)
> {
>     int write;
>     int npages;
>     struct page **pages;
>     int i;
>
>     spin_lock(&p->mm->page_table_lock);
You rather need down_read(&mm->mmap_sem);
>     write = (vma->vm_flags & VM_WRITE) != 0;
>     npages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
>     pages = kmalloc(npages * sizeof(struct page *), GFP_KERNEL);
>
>     if (!pages)
>         return -ENOMEM;
Careful of just returning while you're holding a spinlock or
other resources.
>
>     npages = get_user_pages(p, p->mm, vma->vm_start, npages, write, 0,
>                             pages, NULL);
>
>     spin_unlock(&p->mm->page_table_lock);
>
>     for (i = 0; i < npages; i++)
>         try_to_put_page_in_swap(pages[i]);
>
>     kfree(pages);
>     return npages;
You have to carefully keep track of what is happening with your
page refcounts and make sure you're doing the right thing here.
For example, get_user_pages increments the refcounts, and it looks
like you don't decrement them again -- this will leave the page
permanently pinned in memory.
> }
>
> void free_procs(void)
> {
>     struct task_struct *g, *p;
>     struct vm_area_struct *vma;
>     int c, count;
>
>     read_lock(&tasklist_lock);
>     do_each_thread(g, p) {
>         if (!p->pinned) { /* This process can be swapped out */
>             down_read(&p->mm->mmap_sem);
Ah, you have down_read here. So you don't need ptl above. Unfortunately,
down_read sleeps, while tasklist_lock is a spinlock, so you'll need to
take some other approach here.
>             for (vma = p->mm->mmap, count = 0; vma; vma = vma->vm_next, count += c) {
>                 if ((c = free_process(vma, p)) == -ENOMEM) {
>                     printk("VMC: Out of Memory\n");
>                     up_read(&p->mm->mmap_sem);
>                     goto out;
>                 }
>             }
>             up_read(&p->mm->mmap_sem);
>             printk("VMC: Process %d. %d pages freed\n", p->pid, count);
>         }
>     } while_each_thread(g, p);
>
> out:
>     read_unlock(&tasklist_lock);
I won't have time to help more as I'm heading overseas, good luck!
--
SUSE Labs, Novell Inc.
* Re: Selective swap out of processes
2007-08-30 23:50 ` Javier Cabezas Rodríguez
2007-08-31 0:34 ` Nick Piggin
@ 2007-08-31 16:40 ` Dave Hansen
2007-09-08 1:45 ` Nick Piggin
1 sibling, 1 reply; 14+ messages in thread
From: Dave Hansen @ 2007-08-31 16:40 UTC (permalink / raw)
To: jcabezas; +Cc: Nick Piggin, linux-mm
Isn't the whole point of get_user_pages() so that the kernel doesn't
mess with those pages, and the driver or whatever can have free rein?
Seems to me that you're pinning the pages with get_user_pages(), then
trying to get the kernel to swap them out. Not a good idea. ;)
-- Dave
* Re: Selective swap out of processes
2007-08-31 16:40 ` Dave Hansen
@ 2007-09-08 1:45 ` Nick Piggin
0 siblings, 0 replies; 14+ messages in thread
From: Nick Piggin @ 2007-09-08 1:45 UTC (permalink / raw)
To: Dave Hansen; +Cc: jcabezas, linux-mm
On Saturday 01 September 2007 02:40, Dave Hansen wrote:
> Isn't the whole point of get_user_pages() so that the kernel doesn't
> mess with those pages, and the driver or whatever can have free rein?
>
> Seems to me that you're pinning the pages with get_user_pages(), then
> trying to get the kernel to swap them out. Not a good idea. ;)
That's pretty much what it means... well, it is explicitly defined to simply
increment the refcount of each returned page, which happens to be
exactly what you want in this case.
Obviously your VM code that's doing the swapout has to account for
this refcount... but you'd need to do that anyway.
end of thread
Thread overview: 14+ messages
2007-08-28 16:54 Selective swap out of processes Javier Cabezas Rodríguez
2007-08-29 2:37 ` Nick Piggin
2007-08-29 10:37 ` Javier Cabezas Rodríguez
2007-08-29 18:06 ` Javier Cabezas Rodríguez
2007-08-29 22:01 ` Dave Hansen
2007-08-30 7:13 ` Nick Piggin
2007-08-30 23:41 ` Javier Cabezas Rodríguez
2007-08-30 23:47 ` Javier Cabezas Rodríguez
2007-08-30 23:50 ` Javier Cabezas Rodríguez
2007-08-31 0:34 ` Nick Piggin
2007-08-31 16:40 ` Dave Hansen
2007-09-08 1:45 ` Nick Piggin
2007-08-30 7:09 ` Nick Piggin
2007-08-29 5:18 ` Christoph Lameter