From: Nick Piggin <nickpiggin@yahoo.com.au>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Ingo Molnar <mingo@elte.hu>, Nick Piggin <npiggin@novell.com>,
	Hugh Dickins <hugh@veritas.com>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	linux-mm@kvack.org
Subject: Re: [aarcange@redhat.com: [PATCH] fork vs gup(-fast) fix]
Date: Fri, 13 Mar 2009 03:23:40 +1100
Message-ID: <200903130323.41193.nickpiggin@yahoo.com.au>
In-Reply-To: <200903121636.18867.nickpiggin@yahoo.com.au>

On Thursday 12 March 2009 16:36:18 Nick Piggin wrote:

> Assuming we want to try fixing it transparently... what about another
> approach: mark a vma as VM_DONTCOW and un-COW all existing pages in it
> if it ever has get_user_pages run on it. A big-hammer approach.
>
> fast gup would be a little harder because looking up the vma
> defeats the purpose. However, if we use another page bit to say the
> page belongs to a VM_DONTCOW vma, then we only need to check that
> once and fall back to slow gup if it is clear. So there would be no
> extra atomics in the repeat case. Yes, it would be slower, but apps
> that really care should know what they are doing and set
> MADV_DONTFORK or MADV_DONTCOW on the vma by hand before doing the
> zero-copy IO.
>
> Would this work? Does anyone see any holes? (I imagine someone might
> argue against the big hammer, but I would prefer it if it has a
> lighter impact on the VM and still allows good applications to avoid
> the hammer.)

OK, this is as far as I got tonight.

This passes Andrea's dma_thread test case. I haven't started on
hugepages, and it isn't quite right to drop mmap_sem and retake it for
write in get_user_pages (firstly, the caller might already hold
mmap_sem for write; secondly, it may not be able to tolerate mmap_sem
being dropped).
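
To make that concrete, the dance looks roughly like this (a simplified
sketch of the __get_user_pages hunk below, with error handling
omitted):

	/* caller holds mmap_sem for read */
	if (is_cow_mapping(vma->vm_flags) &&
			!(vma->vm_flags & VM_DONTCOW)) {
		up_read(&mm->mmap_sem);		/* window: the mm can change */
		down_write(&mm->mmap_sem);
		vma = find_vma(mm, start);	/* so revalidate the vma */
		if (vma && is_cow_mapping(vma->vm_flags) &&
				!(vma->vm_flags & VM_DONTCOW))
			make_vma_nocow(vma);	/* de-COW pages, set VM_DONTCOW */
		downgrade_write(&mm->mmap_sem);
	}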

It's annoying that it has to take mmap_sem for write just to add this
bit to vm_flags. Possibly we could use a different way to signal that
it is a "dontcow" vma... something in anon_vma, maybe?

Anyway, before worrying too much more about those details, I'll post
it. It is a different approach that I think might be worth consideration.
Comments?

Thanks,
Nick
--
Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h	2009-03-13 03:00:58.000000000 +1100
+++ linux-2.6/include/linux/mm.h	2009-03-13 03:05:00.000000000 +1100
@@ -104,6 +104,7 @@ extern unsigned int kobjsize(const void 
 #define VM_CAN_NONLINEAR 0x08000000	/* Has ->fault & does nonlinear pages */
 #define VM_MIXEDMAP	0x10000000	/* Can contain "struct page" and pure PFN pages */
 #define VM_SAO		0x20000000	/* Strong Access Ordering (powerpc) */
+#define VM_DONTCOW	0x40000000	/* Contains no COW pages (copies on fork) */
 
 #ifndef VM_STACK_DEFAULT_FLAGS		/* arch can override this */
 #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
@@ -789,7 +790,7 @@ int walk_page_range(unsigned long addr, 
 void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 		unsigned long end, unsigned long floor, unsigned long ceiling);
 int copy_page_range(struct mm_struct *dst, struct mm_struct *src,
-			struct vm_area_struct *vma);
+		struct vm_area_struct *dst_vma, struct vm_area_struct *vma);
 void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows);
 int follow_phys(struct vm_area_struct *vma, unsigned long address,
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c	2009-03-13 03:00:58.000000000 +1100
+++ linux-2.6/mm/memory.c	2009-03-13 03:07:52.000000000 +1100
@@ -580,7 +580,8 @@ copy_one_pte(struct mm_struct *dst_mm, s
 	 * in the parent and the child
 	 */
 	if (is_cow_mapping(vm_flags)) {
-		ptep_set_wrprotect(src_mm, addr, src_pte);
+		if (likely(!(vm_flags & VM_DONTCOW)))
+			ptep_set_wrprotect(src_mm, addr, src_pte);
 		pte = pte_wrprotect(pte);
 	}
 
@@ -594,6 +595,7 @@ copy_one_pte(struct mm_struct *dst_mm, s
 
 	page = vm_normal_page(vma, addr, pte);
 	if (page) {
+		VM_BUG_ON(PageDontCOW(page) && !(vm_flags & VM_DONTCOW));
 		get_page(page);
 		page_dup_rmap(page, vma, addr);
 		rss[!!PageAnon(page)]++;
@@ -696,8 +698,10 @@ static inline int copy_pud_range(struct 
 	return 0;
 }
 
+static int decow_page_range(struct mm_struct *mm, struct vm_area_struct *vma);
+
 int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
-		struct vm_area_struct *vma)
+		struct vm_area_struct *dst_vma, struct vm_area_struct *vma)
 {
 	pgd_t *src_pgd, *dst_pgd;
 	unsigned long next;
@@ -755,6 +759,15 @@ int copy_page_range(struct mm_struct *ds
 	if (is_cow_mapping(vma->vm_flags))
 		mmu_notifier_invalidate_range_end(src_mm,
 						  vma->vm_start, end);
+
+	WARN_ON(ret);
+	if (unlikely(vma->vm_flags & VM_DONTCOW) && !ret) {
+		if (decow_page_range(dst_mm, dst_vma))
+			ret = -ENOMEM;
+		/* child doesn't really need VM_DONTCOW after being de-COWed */
+		// dst_vma->vm_flags &= ~VM_DONTCOW;
+	}
+
 	return ret;
 }
 
@@ -1200,6 +1213,7 @@ static inline int use_zero_page(struct v
 }
 
 
+static int make_vma_nocow(struct vm_area_struct *vma);
 
 int __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		     unsigned long start, int len, int flags,
@@ -1273,6 +1287,23 @@ int __get_user_pages(struct task_struct 
 		    (!ignore && !(vm_flags & vma->vm_flags)))
 			return i ? : -EFAULT;
 
+		if (!(flags & GUP_FLAGS_STACK) &&
+				is_cow_mapping(vma->vm_flags) &&
+				!(vma->vm_flags & VM_DONTCOW)) {
+			up_read(&mm->mmap_sem);
+			down_write(&mm->mmap_sem);
+			vma = find_vma(mm, start);
+			if (vma && is_cow_mapping(vma->vm_flags) &&
+				!(vma->vm_flags & VM_DONTCOW)) {
+				if (make_vma_nocow(vma)) {
+					downgrade_write(&mm->mmap_sem);
+					return i ? : -ENOMEM;
+				}
+			}
+			downgrade_write(&mm->mmap_sem);
+			continue;
+		}
+
 		if (is_vm_hugetlb_page(vma)) {
 			i = follow_hugetlb_page(mm, vma, pages, vmas,
 						&start, &len, i, write);
@@ -1910,6 +1941,8 @@ static int do_wp_page(struct mm_struct *
 		goto gotten;
 	}
 
+	VM_BUG_ON(PageDontCOW(old_page));
+
 	/*
 	 * Take out anonymous pages first, anonymous shared vmas are
 	 * not dirty accountable.
@@ -2102,6 +2135,232 @@ unwritable_page:
 	return VM_FAULT_SIGBUS;
 }
 
+static int decow_one_pte(struct mm_struct *mm, pte_t *ptep, pmd_t *pmd,
+			spinlock_t *ptl, struct vm_area_struct *vma,
+			unsigned long address)
+{
+	pte_t pte = *ptep;
+	struct page *page, *new_page;
+	int ret = 0;
+
+	/* pte contains position in swap or file, so don't do anything */
+	if (unlikely(!pte_present(pte)))
+		return 0;
+	/* pte is writable, can't be COW */
+	if (pte_write(pte))
+		return 0;
+
+	page = vm_normal_page(vma, address, pte);
+	if (!page)
+		return 0;
+
+	if (!PageAnon(page))
+		return 0;
+
+	page_cache_get(page);
+
+	pte_unmap_unlock(pte, ptl);
+
+	if (unlikely(anon_vma_prepare(vma)))
+		goto oom;
+	VM_BUG_ON(page == ZERO_PAGE(0));
+	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
+	if (!new_page)
+		goto oom;
+	/*
+	 * Don't let another task, with possibly unlocked vma,
+	 * keep the mlocked page.
+	 */
+	if (vma->vm_flags & VM_LOCKED) {
+		lock_page(page);	/* for LRU manipulation */
+		clear_page_mlock(page);
+		unlock_page(page);
+	}
+	cow_user_page(new_page, page, address, vma);
+	__SetPageUptodate(new_page);
+	__SetPageDontCOW(new_page);
+
+	if (mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))
+		goto oom_free_new;
+
+	/*
+	 * Re-check the pte - we dropped the lock
+	 */
+	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
+	if (likely(pte_same(*ptep, pte))) {
+		pte_t entry;
+
+		flush_cache_page(vma, address, pte_pfn(pte));
+		entry = mk_pte(new_page, vma->vm_page_prot);
+		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+		/*
+		 * Clear the pte entry and flush it first, before updating the
+		 * pte with the new entry. This will avoid a race condition
+		 * seen in the presence of one thread doing SMC and another
+		 * thread doing COW.
+		 */
+		ptep_clear_flush_notify(vma, address, ptep);
+		page_add_new_anon_rmap(new_page, vma, address);
+		set_pte_at(mm, address, ptep, entry);
+
+		/* See comment in do_wp_page */
+		page_remove_rmap(page);
+	} else {
+		mem_cgroup_uncharge_page(new_page);
+		page_cache_release(new_page);
+		ret = -EAGAIN;
+	}
+
+	page_cache_release(page);
+
+	return ret;
+
+oom_free_new:
+	page_cache_release(new_page);
+oom:
+	page_cache_release(page);
+	return -ENOMEM;
+}
+
+static int decow_pte_range(struct mm_struct *mm,
+			pmd_t *pmd, struct vm_area_struct *vma,
+			unsigned long addr, unsigned long end)
+{
+	pte_t *pte;
+	spinlock_t *ptl;
+	int progress = 0;
+	int ret = 0;
+
+again:
+	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
+//	arch_enter_lazy_mmu_mode();
+
+	do {
+		/*
+		 * We are holding two locks at this point - either of them
+		 * could generate latencies in another task on another CPU.
+		 */
+		if (progress >= 32) {
+			progress = 0;
+			if (need_resched() || spin_needbreak(ptl))
+				break;
+		}
+		if (pte_none(*pte)) {
+			progress++;
+			continue;
+		}
+		ret = decow_one_pte(mm, pte, pmd, ptl, vma, addr);
+		if (ret) {
+			if (ret == -EAGAIN) { /* retry */
+				ret = 0;
+				break;
+			}
+			goto out;
+		}
+		progress += 8;
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+
+//	arch_leave_lazy_mmu_mode();
+	pte_unmap_unlock(pte - 1, ptl);
+	cond_resched();
+	if (addr != end)
+		goto again;
+out:
+	return ret;
+}
+
+static int decow_pmd_range(struct mm_struct *mm,
+			pud_t *pud, struct vm_area_struct *vma,
+			unsigned long addr, unsigned long end)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	pmd = pmd_offset(pud, addr);
+	do {
+		next = pmd_addr_end(addr, end);
+		if (pmd_none_or_clear_bad(pmd))
+			continue;
+		if (decow_pte_range(mm, pmd, vma, addr, next))
+			return -ENOMEM;
+	} while (pmd++, addr = next, addr != end);
+	return 0;
+}
+
+static int decow_pud_range(struct mm_struct *mm,
+			pgd_t *pgd, struct vm_area_struct *vma,
+			unsigned long addr, unsigned long end)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	pud = pud_offset(pgd, addr);
+	do {
+		next = pud_addr_end(addr, end);
+		if (pud_none_or_clear_bad(pud))
+			continue;
+		if (decow_pmd_range(mm, pud, vma, addr, next))
+			return -ENOMEM;
+	} while (pud++, addr = next, addr != end);
+	return 0;
+}
+
+static noinline int decow_page_range(struct mm_struct *mm, struct vm_area_struct *vma)
+{
+	pgd_t *pgd;
+	unsigned long next;
+	unsigned long addr = vma->vm_start;
+	unsigned long end = vma->vm_end;
+	int ret;
+
+	BUG_ON(!is_cow_mapping(vma->vm_flags));
+
+//	if (is_vm_hugetlb_page(vma))
+//		return decow_hugetlb_page_range(mm, vma);
+
+	mmu_notifier_invalidate_range_start(mm, addr, end);
+
+	ret = 0;
+	pgd = pgd_offset(mm, addr);
+	do {
+		next = pgd_addr_end(addr, end);
+		if (pgd_none_or_clear_bad(pgd))
+			continue;
+		if (unlikely(decow_pud_range(mm, pgd, vma, addr, next))) {
+			ret = -ENOMEM;
+			break;
+		}
+	} while (pgd++, addr = next, addr != end);
+
+	mmu_notifier_invalidate_range_end(mm, vma->vm_start, end);
+
+	return ret;
+}
+
+/*
+ * Turns the anonymous VMA into a "nocow" vma. De-cow existing COW pages.
+ * Must hold mmap_sem for write.
+ */
+static int make_vma_nocow(struct vm_area_struct *vma)
+{
+	static DEFINE_MUTEX(lock);
+	struct mm_struct *mm = vma->vm_mm;
+	int ret;
+
+	mutex_lock(&lock);
+	if (vma->vm_flags & VM_DONTCOW) {
+		mutex_unlock(&lock);
+		return 0;
+	}
+
+	ret = decow_page_range(mm, vma);
+	if (!ret)
+		vma->vm_flags |= VM_DONTCOW;
+	mutex_unlock(&lock);
+
+	return ret;
+}
+
 /*
  * Helper functions for unmap_mapping_range().
  *
@@ -2433,6 +2692,9 @@ static int do_swap_page(struct mm_struct
 		count_vm_event(PGMAJFAULT);
 	}
 
+	if (unlikely(vma->vm_flags & VM_DONTCOW))
+		SetPageDontCOW(page);
+
 	mark_page_accessed(page);
 
 	lock_page(page);
@@ -2530,6 +2792,8 @@ static int do_anonymous_page(struct mm_s
 	if (!page)
 		goto oom;
 	__SetPageUptodate(page);
+	if (unlikely(vma->vm_flags & VM_DONTCOW))
+		__SetPageDontCOW(page);
 
 	if (mem_cgroup_newpage_charge(page, mm, GFP_KERNEL))
 		goto oom_free_page;
@@ -2636,6 +2900,8 @@ static int __do_fault(struct mm_struct *
 				clear_page_mlock(vmf.page);
 			copy_user_highpage(page, vmf.page, address, vma);
 			__SetPageUptodate(page);
+			if (unlikely(vma->vm_flags & VM_DONTCOW))
+				__SetPageDontCOW(page);
 		} else {
 			/*
 			 * If the page will be shareable, see if the backing
@@ -2935,8 +3201,9 @@ int make_pages_present(unsigned long add
 	BUG_ON(addr >= end);
 	BUG_ON(end > vma->vm_end);
 	len = DIV_ROUND_UP(end, PAGE_SIZE) - addr/PAGE_SIZE;
-	ret = get_user_pages(current, current->mm, addr,
-			len, write, 0, NULL, NULL);
+	ret = __get_user_pages(current, current->mm, addr,
+			len, GUP_FLAGS_STACK | (write ? GUP_FLAGS_WRITE : 0),
+			NULL, NULL);
 	if (ret < 0)
 		return ret;
 	return ret == len ? 0 : -EFAULT;
@@ -3085,8 +3352,9 @@ int access_process_vm(struct task_struct
 		void *maddr;
 		struct page *page = NULL;
 
-		ret = get_user_pages(tsk, mm, addr, 1,
-				write, 1, &page, &vma);
+		ret = __get_user_pages(tsk, mm, addr, 1,
+				GUP_FLAGS_FORCE | GUP_FLAGS_STACK |
+				(write ? GUP_FLAGS_WRITE : 0), &page, &vma);
 		if (ret <= 0) {
 			/*
 			 * Check if this is a VM_IO | VM_PFNMAP VMA, which
Index: linux-2.6/arch/x86/mm/gup.c
===================================================================
--- linux-2.6.orig/arch/x86/mm/gup.c	2009-03-13 03:00:58.000000000 +1100
+++ linux-2.6/arch/x86/mm/gup.c	2009-03-13 03:01:03.000000000 +1100
@@ -83,11 +83,14 @@ static noinline int gup_pte_range(pmd_t 
 		struct page *page;
 
 		if ((pte_flags(pte) & (mask | _PAGE_SPECIAL)) != mask) {
+failed:
 			pte_unmap(ptep);
 			return 0;
 		}
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
+		if (unlikely(!PageDontCOW(page)))
+			goto failed;
 		get_page(page);
 		pages[*nr] = page;
 		(*nr)++;
Index: linux-2.6/include/linux/page-flags.h
===================================================================
--- linux-2.6.orig/include/linux/page-flags.h	2009-03-13 03:00:58.000000000 +1100
+++ linux-2.6/include/linux/page-flags.h	2009-03-13 03:01:03.000000000 +1100
@@ -94,6 +94,7 @@ enum pageflags {
 	PG_reclaim,		/* To be reclaimed asap */
 	PG_buddy,		/* Page is free, on buddy lists */
 	PG_swapbacked,		/* Page is backed by RAM/swap */
+	PG_dontcow,		/* PageAnon page in a VM_DONTCOW vma */
 #ifdef CONFIG_UNEVICTABLE_LRU
 	PG_unevictable,		/* Page is "unevictable"  */
 	PG_mlocked,		/* Page is vma mlocked */
@@ -208,6 +209,8 @@ __PAGEFLAG(SlubDebug, slub_debug)
  */
 TESTPAGEFLAG(Writeback, writeback) TESTSCFLAG(Writeback, writeback)
 __PAGEFLAG(Buddy, buddy)
+__PAGEFLAG(DontCOW, dontcow)
+SETPAGEFLAG(DontCOW, dontcow)
 PAGEFLAG(MappedToDisk, mappedtodisk)
 
 /* PG_readahead is only used for file reads; PG_reclaim is only for writes */
Index: linux-2.6/mm/page_alloc.c
===================================================================
--- linux-2.6.orig/mm/page_alloc.c	2009-03-13 03:00:58.000000000 +1100
+++ linux-2.6/mm/page_alloc.c	2009-03-13 03:01:03.000000000 +1100
@@ -1000,6 +1000,7 @@ static void free_hot_cold_page(struct pa
 	struct per_cpu_pages *pcp;
 	unsigned long flags;
 
+	__ClearPageDontCOW(page);
 	if (PageAnon(page))
 		page->mapping = NULL;
 	if (free_pages_check(page))
Index: linux-2.6/kernel/fork.c
===================================================================
--- linux-2.6.orig/kernel/fork.c	2009-03-13 03:04:33.000000000 +1100
+++ linux-2.6/kernel/fork.c	2009-03-13 03:05:00.000000000 +1100
@@ -353,7 +353,7 @@ static int dup_mmap(struct mm_struct *mm
 		rb_parent = &tmp->vm_rb;
 
 		mm->map_count++;
-		retval = copy_page_range(mm, oldmm, mpnt);
+		retval = copy_page_range(mm, oldmm, tmp, mpnt);
 
 		if (tmp->vm_ops && tmp->vm_ops->open)
 			tmp->vm_ops->open(tmp);
Index: linux-2.6/fs/exec.c
===================================================================
--- linux-2.6.orig/fs/exec.c	2009-03-13 03:04:33.000000000 +1100
+++ linux-2.6/fs/exec.c	2009-03-13 03:05:00.000000000 +1100
@@ -165,6 +165,13 @@ exit:
 
 #ifdef CONFIG_MMU
 
+#define GUP_FLAGS_WRITE                  0x01
+#define GUP_FLAGS_STACK                  0x10
+
+int __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
+		     unsigned long start, int len, int flags,
+		     struct page **pages, struct vm_area_struct **vmas);
+
 static struct page *get_arg_page(struct linux_binprm *bprm, unsigned long pos,
 		int write)
 {
@@ -178,8 +185,11 @@ static struct page *get_arg_page(struct 
 			return NULL;
 	}
 #endif
-	ret = get_user_pages(current, bprm->mm, pos,
-			1, write, 1, &page, NULL);
+	down_read(&bprm->mm->mmap_sem);
+	ret = __get_user_pages(current, bprm->mm, pos,
+			1, GUP_FLAGS_STACK | (write ? GUP_FLAGS_WRITE : 0),
+			&page, NULL);
+	up_read(&bprm->mm->mmap_sem);
 	if (ret <= 0)
 		return NULL;
 
Index: linux-2.6/mm/internal.h
===================================================================
--- linux-2.6.orig/mm/internal.h	2009-03-13 03:04:33.000000000 +1100
+++ linux-2.6/mm/internal.h	2009-03-13 03:05:00.000000000 +1100
@@ -273,10 +273,11 @@ static inline void mminit_validate_memmo
 }
 #endif /* CONFIG_SPARSEMEM */
 
-#define GUP_FLAGS_WRITE                  0x1
-#define GUP_FLAGS_FORCE                  0x2
-#define GUP_FLAGS_IGNORE_VMA_PERMISSIONS 0x4
-#define GUP_FLAGS_IGNORE_SIGKILL         0x8
+#define GUP_FLAGS_WRITE                  0x01
+#define GUP_FLAGS_FORCE                  0x02
+#define GUP_FLAGS_IGNORE_VMA_PERMISSIONS 0x04
+#define GUP_FLAGS_IGNORE_SIGKILL         0x08
+#define GUP_FLAGS_STACK                  0x10
 
 int __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		     unsigned long start, int len, int flags,