linux-mm.kvack.org archive mirror
* pagefault scalability patches
@ 2005-08-17 22:17 Andrew Morton
  2005-08-17 22:19 ` Christoph Lameter
                   ` (3 more replies)
  0 siblings, 4 replies; 47+ messages in thread
From: Andrew Morton @ 2005-08-17 22:17 UTC (permalink / raw)
  To: Linus Torvalds, Hugh Dickins; +Cc: Christoph Lameter, Nick Piggin, linux-mm

[-- Attachment #1: Type: text/plain, Size: 629 bytes --]


These are getting in the way now, and I need to make a go/no-go decision.

I have vague feelings of ickiness with the patches wrt:

a) general increase of complexity

b) the fact that they only partially address the problem: anonymous page
   faults are addressed, but lots of other places aren't.

c) the fact that they address one particular part of one particular
   workload on exceedingly rare machines.

I believe that Nick has plans to address b).

I'd like us to thrash this out (again), please.  Hugh, could you (for the
nth and final time) describe your concerns with these patches?

Thanks.

(Three patches attached)

[-- Attachment #2: page-fault-patches-introduce-pte_xchg-and-pte_cmpxchg.patch --]
[-- Type: application/octet-stream, Size: 12175 bytes --]


From: Christoph Lameter <christoph@lameter.com>

Updating a page table entry (pte) can be difficult since the MMU may modify
the pte concurrently.  The current approach is to first exchange the pte
contents with zero.  Clearing the pte by writing zero to it also resets the
present bit, which stops the MMU from modifying the pte and allows the bits
that were set to be processed.  The pte is then set to its new value.

While the present bit is not set, accesses to the page mapped by the pte
result in page faults, which may install a new pte over the non-present
entry.  To avoid that scenario the page_table_lock is held.  An access will
still result in a page fault, but the fault handler will also try to acquire
the page_table_lock.  The page_table_lock is released after the pte has been
set up by the first process.  The second process then acquires the
page_table_lock, finds that a pte has already been set up for this page, and
returns without having done anything.

This means that a useless page fault has been generated.

However, most architectures have the capability to exchange the value of a
pte atomically.  For those architectures, clearing the pte before setting it
to a new value is not necessary.  Using atomic exchanges avoids these
useless page faults.
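
As a minimal sketch (not part of the patch; variable names are illustrative),
the two update schemes for giving a pte a new value look roughly like this,
using the existing helpers plus the ptep_xchg introduced here:

	/*
	 * Old scheme: clear first, then set.  The pte is briefly not
	 * present, so the page_table_lock must be held to keep a fault on
	 * another CPU from installing a pte into the empty slot.
	 */
	old_pte = ptep_get_and_clear(mm, addr, ptep);
	set_pte_at(mm, addr, ptep, new_pte);

	/*
	 * New scheme: a single atomic exchange.  There is no window with a
	 * cleared pte, so no spurious fault can sneak in.
	 */
	old_pte = ptep_xchg(mm, addr, ptep, new_pte);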

The following patch introduces two new atomic operations, ptep_xchg and
ptep_cmpxchg, that may be provided by an architecture.  The fallback in
include/asm-generic/pgtable.h is to simulate both operations through the
existing ptep_get_and_clear function, so there is essentially no change if
atomic operations on ptes have not been defined.  Architectures that do not
support atomic operations on ptes may continue to use the clear-then-set
approach.

Atomic operations are enabled for i386, ia64 and x86_64 if a suitable CPU is
configured in SMP mode.  Generic atomic definitions for ptep_xchg and
ptep_cmpxchg have been provided based on the existing xchg() and cmpxchg()
functions that already work atomically on many platforms.

The provided generic atomic functions may be overridden as usual by defining
the appropriate __HAVE_ARCH_xxx constant and providing a different
implementation.
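
For example, an architecture that wants to supply its own cmpxchg-based
version would define the constant and the macro in its asm/pgtable.h before
including asm-generic/pgtable.h.  This is only a sketch mirroring the
generic fallback added below:

	#define __HAVE_ARCH_PTEP_CMPXCHG
	#define ptep_cmpxchg(__mm, __address, __ptep, __oldval, __newval)	\
		(cmpxchg(&pte_val(*(__ptep)),					\
			 pte_val(__oldval),					\
			 pte_val(__newval)) == pte_val(__oldval))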

This patch is part of my attempt to reduce the use of the page_table_lock in
the page fault handler through atomic operations.  This is only possible if
it can be ensured that an in-use pte is never cleared while the
page_table_lock is not held.  Clearing a pte before setting it to another
value could allow a fault on another CPU to install a pte that is then
immediately overwritten by the first CPU setting the pte to a valid value
again.  This patch is necessary for the other patches, which remove uses of
the page_table_lock, to work properly.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 arch/i386/Kconfig             |    5 ++
 arch/ia64/Kconfig             |    5 ++
 arch/x86_64/Kconfig           |    5 ++
 include/asm-generic/pgtable.h |   86 ++++++++++++++++++++++++++++++++++++++++++
 mm/memory.c                   |   14 ++++--
 mm/mprotect.c                 |   22 +++++-----
 mm/rmap.c                     |   22 +++++-----
 7 files changed, 133 insertions(+), 26 deletions(-)

diff -puN arch/i386/Kconfig~page-fault-patches-introduce-pte_xchg-and-pte_cmpxchg arch/i386/Kconfig
--- 25/arch/i386/Kconfig~page-fault-patches-introduce-pte_xchg-and-pte_cmpxchg	Wed Aug 17 14:53:01 2005
+++ 25-akpm/arch/i386/Kconfig	Wed Aug 17 15:09:24 2005
@@ -905,6 +905,11 @@ config HAVE_DEC_LOCK
 	depends on (SMP || PREEMPT) && X86_CMPXCHG
 	default y
 
+config ATOMIC_TABLE_OPS
+	bool
+	depends on SMP && X86_CMPXCHG && !X86_PAE
+	default y
+
 # turning this on wastes a bunch of space.
 # Summit needs it only when NUMA is on
 config BOOT_IOREMAP
diff -puN arch/ia64/Kconfig~page-fault-patches-introduce-pte_xchg-and-pte_cmpxchg arch/ia64/Kconfig
--- 25/arch/ia64/Kconfig~page-fault-patches-introduce-pte_xchg-and-pte_cmpxchg	Wed Aug 17 14:53:01 2005
+++ 25-akpm/arch/ia64/Kconfig	Wed Aug 17 15:09:24 2005
@@ -297,6 +297,11 @@ config PREEMPT
 
 source "mm/Kconfig"
 
+config ATOMIC_TABLE_OPS
+	bool
+	depends on SMP
+	default y
+
 config HAVE_DEC_LOCK
 	bool
 	depends on (SMP || PREEMPT)
diff -puN arch/x86_64/Kconfig~page-fault-patches-introduce-pte_xchg-and-pte_cmpxchg arch/x86_64/Kconfig
--- 25/arch/x86_64/Kconfig~page-fault-patches-introduce-pte_xchg-and-pte_cmpxchg	Wed Aug 17 14:53:01 2005
+++ 25-akpm/arch/x86_64/Kconfig	Wed Aug 17 15:09:24 2005
@@ -217,6 +217,11 @@ config SCHED_SMT
 	  cost of slightly increased overhead in some places. If unsure say
 	  N here.
 
+config ATOMIC_TABLE_OPS
+	bool
+	  depends on SMP
+	  default y
+
 source "kernel/Kconfig.preempt"
 
 config K8_NUMA
diff -puN include/asm-generic/pgtable.h~page-fault-patches-introduce-pte_xchg-and-pte_cmpxchg include/asm-generic/pgtable.h
--- 25/include/asm-generic/pgtable.h~page-fault-patches-introduce-pte_xchg-and-pte_cmpxchg	Wed Aug 17 14:53:01 2005
+++ 25-akpm/include/asm-generic/pgtable.h	Wed Aug 17 15:09:38 2005
@@ -111,6 +111,92 @@ do {				  					  \
 })
 #endif
 
+#ifdef CONFIG_ATOMIC_TABLE_OPS
+
+/*
+ * The architecture does support atomic table operations.
+ * We may be able to provide atomic ptep_xchg and ptep_cmpxchg using
+ * cmpxchg and xchg.
+ */
+#ifndef __HAVE_ARCH_PTEP_XCHG
+#define ptep_xchg(__mm, __address, __ptep, __pteval) \
+	__pte(xchg(&pte_val(*(__ptep)), pte_val(__pteval)))
+#endif
+
+#ifndef __HAVE_ARCH_PTEP_CMPXCHG
+#define ptep_cmpxchg(__mm, __address, __ptep,__oldval,__newval)		\
+	(cmpxchg(&pte_val(*(__ptep)),					\
+			pte_val(__oldval),				\
+			pte_val(__newval)				\
+		) == pte_val(__oldval)					\
+	)
+#endif
+
+#ifndef __HAVE_ARCH_PTEP_XCHG_FLUSH
+#define ptep_xchg_flush(__vma, __address, __ptep, __pteval)		\
+({									\
+	pte_t __pte = ptep_xchg((__vma)->vm_mm, __address, __ptep, __pteval);\
+	flush_tlb_page(__vma, __address);				\
+	__pte;								\
+})
+#endif
+
+#else
+
+/*
+ * No support for atomic operations on the page table.
+ * Exchanging of pte values is done by first swapping zeros into
+ * a pte and then putting new content into the pte entry.
+ * However, these functions will generate an empty pte for a
+ * short time frame. This means that the page_table_lock must be held
+ * to avoid a page fault that would install a new entry.
+ */
+#ifndef __HAVE_ARCH_PTEP_XCHG
+#define ptep_xchg(__mm, __address, __ptep, __pteval)			\
+({									\
+	pte_t __pte = ptep_get_and_clear(__mm, __address, __ptep);	\
+	set_pte_at(__mm, __address, __ptep, __pteval);			\
+	__pte;								\
+})
+#endif
+
+#ifndef __HAVE_ARCH_PTEP_XCHG_FLUSH
+#ifndef __HAVE_ARCH_PTEP_XCHG
+#define ptep_xchg_flush(__vma, __address, __ptep, __pteval)		\
+({									\
+	pte_t __pte = ptep_clear_flush(__vma, __address, __ptep);	\
+	set_pte_at((__vma)->vm_mm, __address, __ptep, __pteval);		\
+	__pte;								\
+})
+#else
+#define ptep_xchg_flush(__vma, __address, __ptep, __pteval)		\
+({									\
+	pte_t __pte = ptep_xchg((__vma)->vm_mm, __address, __ptep, __pteval);\
+	flush_tlb_page(__vma, __address);				\
+	__pte;								\
+})
+#endif
+#endif
+
+/*
+ * The fallback function for ptep_cmpxchg avoids any real use of cmpxchg
+ * since cmpxchg may not be available on certain architectures. Instead
+ * the clearing of a pte is used as a form of locking mechanism.
+ * This approach will only work if the page_table_lock is held to ensure
+ * that the pte is not populated by a page fault generated on another
+ * CPU.
+ */
+#ifndef __HAVE_ARCH_PTEP_CMPXCHG
+#define ptep_cmpxchg(__mm, __address, __ptep, __old, __new)		\
+({									\
+	pte_t prev = ptep_get_and_clear(__mm, __address, __ptep);	\
+	int r = pte_val(prev) == pte_val(__old);			\
+	set_pte_at(__mm, __address, __ptep, r ? (__new) : prev);	\
+	r;								\
+})
+#endif
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_SET_WRPROTECT
 static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long address, pte_t *ptep)
 {
diff -puN mm/memory.c~page-fault-patches-introduce-pte_xchg-and-pte_cmpxchg mm/memory.c
--- 25/mm/memory.c~page-fault-patches-introduce-pte_xchg-and-pte_cmpxchg	Wed Aug 17 14:53:01 2005
+++ 25-akpm/mm/memory.c	Wed Aug 17 15:09:33 2005
@@ -551,15 +551,19 @@ static void zap_pte_range(struct mmu_gat
 				     page->index > details->last_index))
 					continue;
 			}
-			ptent = ptep_get_and_clear(tlb->mm, addr, pte);
-			tlb_remove_tlb_entry(tlb, pte, addr);
-			if (unlikely(!page))
+			if (unlikely(!page)) {
+				ptent = ptep_get_and_clear(tlb->mm, addr, pte);
+				tlb_remove_tlb_entry(tlb, pte, addr);
 				continue;
+			}
 			if (unlikely(details) && details->nonlinear_vma
 			    && linear_page_index(details->nonlinear_vma,
 						addr) != page->index)
-				set_pte_at(tlb->mm, addr, pte,
-					   pgoff_to_pte(page->index));
+				ptent = ptep_xchg(tlb->mm, addr, pte,
+						  pgoff_to_pte(page->index));
+			else
+				ptent = ptep_get_and_clear(tlb->mm, addr, pte);
+			tlb_remove_tlb_entry(tlb, pte, addr);
 			if (pte_dirty(ptent))
 				set_page_dirty(page);
 			if (PageAnon(page))
diff -puN mm/mprotect.c~page-fault-patches-introduce-pte_xchg-and-pte_cmpxchg mm/mprotect.c
--- 25/mm/mprotect.c~page-fault-patches-introduce-pte_xchg-and-pte_cmpxchg	Wed Aug 17 14:53:01 2005
+++ 25-akpm/mm/mprotect.c	Wed Aug 17 14:53:01 2005
@@ -32,17 +32,19 @@ static void change_pte_range(struct mm_s
 
 	pte = pte_offset_map(pmd, addr);
 	do {
-		if (pte_present(*pte)) {
-			pte_t ptent;
+		pte_t ptent;
+redo:
+		ptent = *pte;
+		if (!pte_present(ptent))
+			continue;
 
-			/* Avoid an SMP race with hardware updated dirty/clean
-			 * bits by wiping the pte and then setting the new pte
-			 * into place.
-			 */
-			ptent = pte_modify(ptep_get_and_clear(mm, addr, pte), newprot);
-			set_pte_at(mm, addr, pte, ptent);
-			lazy_mmu_prot_update(ptent);
-		}
+		/* Deal with a potential SMP race with hardware/arch
+		 * interrupt updating dirty/clean bits through the use
+		 * of ptep_cmpxchg.
+		 */
+		if (!ptep_cmpxchg(mm, addr, pte, ptent, pte_modify(ptent, newprot)))
+				goto redo;
+		lazy_mmu_prot_update(ptent);
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 	pte_unmap(pte - 1);
 }
diff -puN mm/rmap.c~page-fault-patches-introduce-pte_xchg-and-pte_cmpxchg mm/rmap.c
--- 25/mm/rmap.c~page-fault-patches-introduce-pte_xchg-and-pte_cmpxchg	Wed Aug 17 14:53:01 2005
+++ 25-akpm/mm/rmap.c	Wed Aug 17 15:02:57 2005
@@ -539,11 +539,6 @@ static int try_to_unmap_one(struct page 
 
 	/* Nuke the page table entry. */
 	flush_cache_page(vma, address, page_to_pfn(page));
-	pteval = ptep_clear_flush(vma, address, pte);
-
-	/* Move the dirty bit to the physical page now the pte is gone. */
-	if (pte_dirty(pteval))
-		set_page_dirty(page);
 
 	if (PageAnon(page)) {
 		swp_entry_t entry = { .val = page->private };
@@ -558,10 +553,15 @@ static int try_to_unmap_one(struct page 
 			list_add(&mm->mmlist, &init_mm.mmlist);
 			spin_unlock(&mmlist_lock);
 		}
-		set_pte_at(mm, address, pte, swp_entry_to_pte(entry));
+		pteval = ptep_xchg_flush(vma, address, pte, swp_entry_to_pte(entry));
 		BUG_ON(pte_file(*pte));
 		dec_mm_counter(mm, anon_rss);
-	}
+	} else
+		pteval = ptep_clear_flush(vma, address, pte);
+
+	/* Move the dirty bit to the physical page now the pte is gone. */
+	if (pte_dirty(pteval))
+		set_page_dirty(page);
 
 	dec_mm_counter(mm, rss);
 	page_remove_rmap(page);
@@ -653,15 +653,15 @@ static void try_to_unmap_cluster(unsigne
 		if (ptep_clear_flush_young(vma, address, pte))
 			continue;
 
-		/* Nuke the page table entry. */
 		flush_cache_page(vma, address, pfn);
-		pteval = ptep_clear_flush(vma, address, pte);
 
 		/* If nonlinear, store the file page offset in the pte. */
 		if (page->index != linear_page_index(vma, address))
-			set_pte_at(mm, address, pte, pgoff_to_pte(page->index));
+			pteval = ptep_xchg_flush(vma, address, pte, pgoff_to_pte(page->index));
+		else
+			pteval = ptep_clear_flush(vma, address, pte);
 
-		/* Move the dirty bit to the physical page now the pte is gone. */
+		/* Move the dirty bit to the physical page now that the pte is gone. */
 		if (pte_dirty(pteval))
 			set_page_dirty(page);
 
_

[-- Attachment #3: page-fault-patches-optional-page_lock-acquisition-in.patch --]
[-- Type: application/octet-stream, Size: 25364 bytes --]


From: Christoph Lameter <christoph@lameter.com>

The page fault handler attempts to use the page_table_lock only for short
periods of time.  It repeatedly drops and reacquires the lock.  When the
lock is reacquired, it checks whether the underlying pte has changed before
replacing the pte value.  These locations are a good fit for the use of
ptep_cmpxchg.

The following patch allows the use of atomic operations to remove the first
acquisition of the page_table_lock.  A section using atomic pte operations is
begun with

	page_table_atomic_start(struct mm_struct *)

and ends with

	page_table_atomic_stop(struct mm_struct *)

Both of these become spin_lock(page_table_lock) and
spin_unlock(page_table_lock) if atomic page table operations are not
configured (CONFIG_ATOMIC_TABLE_OPS undefined).
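
Condensed to a sketch (not a literal excerpt; the real code is in
handle_pte_fault below), the resulting pattern in the fault path is:

	page_table_atomic_start(mm);
	entry = *pte;				/* snapshot the pte */
	new_entry = pte_mkyoung(entry);		/* compute the update */
	if (ptep_cmpxchg(mm, address, pte, entry, new_entry))
		update_mmu_cache(vma, address, new_entry);
	/* else another CPU changed the pte first; nothing left to do */
	page_table_atomic_stop(mm);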

Atomic pte operations using ptep_xchg and ptep_cmpxchg only work for the
lowest layer of the page table.  Higher layers may also be populated in an
atomic way by defining pmd_test_and_populate() etc.  The generic versions of
these functions fall back to the page_table_lock.  Populating higher level
page table entries is rare and therefore not likely to be performance
critical.  For ia64, definitions of the higher level atomic operations are
included.
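
For the higher levels the allocation pattern then becomes (sketch only; see
__handle_mm_fault in the patch below for the real code):

	if (unlikely(pud_none(*pud))) {
		pmd_t *new;

		page_table_atomic_stop(mm);
		new = pmd_alloc_one(mm, address);	/* may sleep */
		if (!new)
			return VM_FAULT_OOM;
		page_table_atomic_start(mm);
		/* Only one CPU wins the populate; the loser frees its copy. */
		if (!pud_test_and_populate(mm, pud, new))
			pmd_free(new);
	}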

This patch depends on the patch avoiding spurious page faults being applied
first, and it only removes the first acquisition of the page_table_lock in
the page fault handler.  This allows the following page table operations
without acquiring the page_table_lock:

1. Updating of access bits (handle_mm_fault)
2. Anonymous read faults (do_anonymous_page)

The page_table_lock is still acquired for creating a new pte for an anonymous
write fault and therefore the problems with atomic updates of rss do not yet
occur.

The patch also adds some diagnostic counters: the number of cmpxchg failures
(useful for verifying that this patch works correctly) and the number of
page faults that led to no change in the page table.  The statistics may be
accessed via /proc/meminfo.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 include/asm-generic/4level-fixup.h  |    1 
 include/asm-generic/pgtable-nopmd.h |    5 
 include/asm-generic/pgtable-nopud.h |    8 
 include/asm-generic/pgtable.h       |   99 +++++++++++
 include/asm-ia64/pgalloc.h          |   19 ++
 include/asm-ia64/pgtable.h          |    2 
 include/linux/page-flags.h          |    6 
 mm/memory.c                         |  299 ++++++++++++++++++++++++------------
 mm/page_alloc.c                     |    6 
 fs/proc/proc_misc.c                 |    0 
 10 files changed, 347 insertions(+), 98 deletions(-)

diff -puN fs/proc/proc_misc.c~page-fault-patches-optional-page_lock-acquisition-in fs/proc/proc_misc.c
diff -puN include/asm-generic/pgtable.h~page-fault-patches-optional-page_lock-acquisition-in include/asm-generic/pgtable.h
--- 25/include/asm-generic/pgtable.h~page-fault-patches-optional-page_lock-acquisition-in	Wed Aug 17 15:09:54 2005
+++ 25-akpm/include/asm-generic/pgtable.h	Wed Aug 17 15:09:54 2005
@@ -141,6 +141,65 @@ do {				  					  \
 })
 #endif
 
+/*
+ * page_table_atomic_start and page_table_atomic_stop may be used to
+ * define special measures that an arch needs to guarantee atomic
+ * operations outside of a spinlock. In the case that an arch does
+ * not support atomic page table operations we will fall back to the
+ * page table lock.
+ */
+#ifndef __HAVE_ARCH_PAGE_TABLE_ATOMIC_START
+#define page_table_atomic_start(mm) do { } while (0)
+#endif
+
+#ifndef __HAVE_ARCH_PAGE_TABLE_ATOMIC_STOP
+#define page_table_atomic_stop(mm) do { } while (0)
+#endif
+
+/*
+ * Fallback functions for atomic population of higher page table
+ * structures. These simply acquire the page_table_lock for
+ * synchronization. An architecture may override these generic
+ * functions to provide atomic populate functions to make these
+ * more effective.
+ */
+
+#ifndef __HAVE_ARCH_PGD_TEST_AND_POPULATE
+#define pgd_test_and_populate(__mm, __pgd, __pud)			\
+({									\
+	int __rc;							\
+	spin_lock(&mm->page_table_lock);				\
+	__rc = pgd_none(*(__pgd));					\
+	if (__rc) pgd_populate(__mm, __pgd, __pud);			\
+	spin_unlock(&mm->page_table_lock);				\
+	__rc;								\
+})
+#endif
+
+#ifndef __HAVE_ARCH_PUD_TEST_AND_POPULATE
+#define pud_test_and_populate(__mm, __pud, __pmd)			\
+({									\
+	int __rc;							\
+	spin_lock(&mm->page_table_lock);				\
+	__rc = pud_none(*(__pud));					\
+	if (__rc) pud_populate(__mm, __pud, __pmd);			\
+	spin_unlock(&mm->page_table_lock);				\
+	__rc;								\
+})
+#endif
+
+#ifndef __HAVE_ARCH_PMD_TEST_AND_POPULATE
+#define pmd_test_and_populate(__mm, __pmd, __page)			\
+({									\
+	int __rc;							\
+	spin_lock(&mm->page_table_lock);				\
+	__rc = !pmd_present(*(__pmd));					\
+	if (__rc) pmd_populate(__mm, __pmd, __page);			\
+	spin_unlock(&mm->page_table_lock);				\
+	__rc;								\
+})
+#endif
+
 #else
 
 /*
@@ -151,6 +210,11 @@ do {				  					  \
  * short time frame. This means that the page_table_lock must be held
  * to avoid a page fault that would install a new entry.
  */
+
+/* Fall back to the page table lock to synchronize page table access */
+#define page_table_atomic_start(mm)	spin_lock(&(mm)->page_table_lock)
+#define page_table_atomic_stop(mm)	spin_unlock(&(mm)->page_table_lock)
+
 #ifndef __HAVE_ARCH_PTEP_XCHG
 #define ptep_xchg(__mm, __address, __ptep, __pteval)			\
 ({									\
@@ -195,6 +259,41 @@ do {				  					  \
 	r;								\
 })
 #endif
+
+/*
+ * Fallback functions for atomic population of higher page table
+ * structures. These rely on the page_table_lock being held.
+ */
+#ifndef __HAVE_ARCH_PGD_TEST_AND_POPULATE
+#define pgd_test_and_populate(__mm, __pgd, __pud)			\
+({									\
+	int __rc;							\
+	__rc = pgd_none(*(__pgd));					\
+	if (__rc) pgd_populate(__mm, __pgd, __pud);			\
+	__rc;								\
+})
+#endif
+
+#ifndef __HAVE_ARCH_PUD_TEST_AND_POPULATE
+#define pud_test_and_populate(__mm, __pud, __pmd)			\
+({									\
+       int __rc;							\
+       __rc = pud_none(*(__pud));					\
+       if (__rc) pud_populate(__mm, __pud, __pmd);			\
+       __rc;								\
+})
+#endif
+
+#ifndef __HAVE_ARCH_PMD_TEST_AND_POPULATE
+#define pmd_test_and_populate(__mm, __pmd, __page)			\
+({									\
+       int __rc;							\
+       __rc = !pmd_present(*(__pmd));					\
+       if (__rc) pmd_populate(__mm, __pmd, __page);			\
+       __rc;								\
+})
+#endif
+
 #endif
 
 #ifndef __HAVE_ARCH_PTEP_SET_WRPROTECT
diff -puN include/asm-generic/pgtable-nopmd.h~page-fault-patches-optional-page_lock-acquisition-in include/asm-generic/pgtable-nopmd.h
--- 25/include/asm-generic/pgtable-nopmd.h~page-fault-patches-optional-page_lock-acquisition-in	Wed Aug 17 15:09:54 2005
+++ 25-akpm/include/asm-generic/pgtable-nopmd.h	Wed Aug 17 15:09:54 2005
@@ -31,6 +31,11 @@ static inline void pud_clear(pud_t *pud)
 #define pmd_ERROR(pmd)				(pud_ERROR((pmd).pud))
 
 #define pud_populate(mm, pmd, pte)		do { } while (0)
+#define __HAVE_ARCH_PUD_TEST_AND_POPULATE
+static inline int pud_test_and_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
+{
+	return 1;
+}
 
 /*
  * (pmds are folded into puds so this doesn't get actually called,
diff -puN include/asm-generic/pgtable-nopud.h~page-fault-patches-optional-page_lock-acquisition-in include/asm-generic/pgtable-nopud.h
--- 25/include/asm-generic/pgtable-nopud.h~page-fault-patches-optional-page_lock-acquisition-in	Wed Aug 17 15:09:54 2005
+++ 25-akpm/include/asm-generic/pgtable-nopud.h	Wed Aug 17 15:09:54 2005
@@ -27,8 +27,14 @@ static inline int pgd_bad(pgd_t pgd)		{ 
 static inline int pgd_present(pgd_t pgd)	{ return 1; }
 static inline void pgd_clear(pgd_t *pgd)	{ }
 #define pud_ERROR(pud)				(pgd_ERROR((pud).pgd))
-
 #define pgd_populate(mm, pgd, pud)		do { } while (0)
+
+#define __HAVE_ARCH_PGD_TEST_AND_POPULATE
+static inline int pgd_test_and_populate(struct mm_struct *mm, pgd_t *pgd, pud_t *pud)
+{
+	return 1;
+}
+
 /*
  * (puds are folded into pgds so this doesn't get actually called,
  * but the define is needed for a generic inline function.)
diff -puN include/asm-ia64/pgalloc.h~page-fault-patches-optional-page_lock-acquisition-in include/asm-ia64/pgalloc.h
--- 25/include/asm-ia64/pgalloc.h~page-fault-patches-optional-page_lock-acquisition-in	Wed Aug 17 15:09:54 2005
+++ 25-akpm/include/asm-ia64/pgalloc.h	Wed Aug 17 15:09:54 2005
@@ -1,6 +1,10 @@
 #ifndef _ASM_IA64_PGALLOC_H
 #define _ASM_IA64_PGALLOC_H
 
+/* Empty entries of PMD and PGD */
+#define PMD_NONE       0
+#define PUD_NONE       0
+
 /*
  * This file contains the functions and defines necessary to allocate
  * page tables.
@@ -86,6 +90,21 @@ static inline void pgd_free(pgd_t * pgd)
 	pgtable_quicklist_free(pgd);
 }
 
+/* Atomic populate */
+static inline int
+pud_test_and_populate (struct mm_struct *mm, pud_t *pud_entry, pmd_t *pmd)
+{
+	return ia64_cmpxchg8_acq(pud_entry,__pa(pmd), PUD_NONE) == PUD_NONE;
+}
+
+/* Atomic populate */
+static inline int
+pmd_test_and_populate (struct mm_struct *mm, pmd_t *pmd_entry, struct page *pte)
+{
+	return ia64_cmpxchg8_acq(pmd_entry, page_to_phys(pte), PMD_NONE) == PMD_NONE;
+}
+
+
 static inline void
 pud_populate(struct mm_struct *mm, pud_t * pud_entry, pmd_t * pmd)
 {
diff -puN include/asm-ia64/pgtable.h~page-fault-patches-optional-page_lock-acquisition-in include/asm-ia64/pgtable.h
--- 25/include/asm-ia64/pgtable.h~page-fault-patches-optional-page_lock-acquisition-in	Wed Aug 17 15:09:54 2005
+++ 25-akpm/include/asm-ia64/pgtable.h	Wed Aug 17 15:09:54 2005
@@ -565,6 +565,8 @@ do {											\
 #define __HAVE_ARCH_PTE_SAME
 #define __HAVE_ARCH_PGD_OFFSET_GATE
 #define __HAVE_ARCH_LAZY_MMU_PROT_UPDATE
+#define __HAVE_ARCH_PUD_TEST_AND_POPULATE
+#define __HAVE_ARCH_PMD_TEST_AND_POPULATE
 
 #include <asm-generic/pgtable-nopud.h>
 #include <asm-generic/pgtable.h>
diff -puN include/linux/page-flags.h~page-fault-patches-optional-page_lock-acquisition-in include/linux/page-flags.h
--- 25/include/linux/page-flags.h~page-fault-patches-optional-page_lock-acquisition-in	Wed Aug 17 15:09:54 2005
+++ 25-akpm/include/linux/page-flags.h	Wed Aug 17 15:09:59 2005
@@ -131,6 +131,12 @@ struct page_state {
 
 	unsigned long pgrotated;	/* pages rotated to tail of the LRU */
 	unsigned long nr_bounce;	/* pages for bounce buffers */
+	unsigned long spurious_page_faults;	/* Faults with no ops */
+	unsigned long cmpxchg_fail_flag_update;	/* cmpxchg failures for pte flag update */
+	unsigned long cmpxchg_fail_flag_reuse;	/* cmpxchg failures when cow reuse of pte */
+
+	unsigned long cmpxchg_fail_anon_read;	/* cmpxchg failures on anonymous read */
+	unsigned long cmpxchg_fail_anon_write;	/* cmpxchg failures on anonymous write */
 };
 
 extern void get_page_state(struct page_state *ret);
diff -puN mm/memory.c~page-fault-patches-optional-page_lock-acquisition-in mm/memory.c
--- 25/mm/memory.c~page-fault-patches-optional-page_lock-acquisition-in	Wed Aug 17 15:09:54 2005
+++ 25-akpm/mm/memory.c	Wed Aug 17 15:10:11 2005
@@ -36,6 +36,8 @@
  *		(Gerhard.Wichert@pdb.siemens.de)
  *
  * Aug/Sep 2004 Changed to four level page tables (Andi Kleen)
+ * Jan 2005 	Scalability improvement by reducing the use and the length of time
+ *		the page table lock is held (Christoph Lameter)
  */
 
 #include <linux/kernel_stat.h>
@@ -977,7 +979,7 @@ int get_user_pages(struct task_struct *t
 				 */
 				if (ret & VM_FAULT_WRITE)
 					write_access = 0;
-				
+
 				switch (ret & ~VM_FAULT_WRITE) {
 				case VM_FAULT_MINOR:
 					tsk->min_flt++;
@@ -1646,8 +1648,7 @@ void swapin_readahead(swp_entry_t entry,
 }
 
 /*
- * We hold the mm semaphore and the page_table_lock on entry and
- * should release the pagetable lock on exit..
+ * We hold the mm semaphore and have started atomic pte operations
  */
 static int do_swap_page(struct mm_struct * mm,
 	struct vm_area_struct * vma, unsigned long address,
@@ -1659,15 +1660,14 @@ static int do_swap_page(struct mm_struct
 	int ret = VM_FAULT_MINOR;
 
 	pte_unmap(page_table);
-	spin_unlock(&mm->page_table_lock);
+	page_table_atomic_stop(mm);
 	page = lookup_swap_cache(entry);
 	if (!page) {
  		swapin_readahead(entry, address, vma);
  		page = read_swap_cache_async(entry, vma, address);
 		if (!page) {
 			/*
-			 * Back out if somebody else faulted in this pte while
-			 * we released the page table lock.
+			 * Back out if somebody else faulted in this pte
 			 */
 			spin_lock(&mm->page_table_lock);
 			page_table = pte_offset_map(pmd, address);
@@ -1690,8 +1690,7 @@ static int do_swap_page(struct mm_struct
 	lock_page(page);
 
 	/*
-	 * Back out if somebody else faulted in this pte while we
-	 * released the page table lock.
+	 * Back out if somebody else faulted in this pte
 	 */
 	spin_lock(&mm->page_table_lock);
 	page_table = pte_offset_map(pmd, address);
@@ -1746,61 +1745,79 @@ out_nomap:
 }
 
 /*
- * We are called with the MM semaphore and page_table_lock
- * spinlock held to protect against concurrent faults in
- * multithreaded programs. 
+ * We are called with atomic operations started and the
+ * value of the pte that was read in orig_entry.
  */
 static int
 do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		pte_t *page_table, pmd_t *pmd, int write_access,
-		unsigned long addr)
+		unsigned long addr, pte_t orig_entry)
 {
 	pte_t entry;
-	struct page * page = ZERO_PAGE(addr);
+	struct page * page;
 
-	/* Read-only mapping of ZERO_PAGE. */
-	entry = pte_wrprotect(mk_pte(ZERO_PAGE(addr), vma->vm_page_prot));
+	if (unlikely(!write_access)) {
 
-	/* ..except if it's a write access */
-	if (write_access) {
-		/* Allocate our own private page. */
+		/* Read-only mapping of ZERO_PAGE. */
+		entry = pte_wrprotect(mk_pte(ZERO_PAGE(addr),
+					vma->vm_page_prot));
+
+		/*
+		 * If the cmpxchg fails then another cpu may
+		 * already have populated the entry
+		 */
+		if (ptep_cmpxchg(mm, addr, page_table, orig_entry, entry)) {
+			update_mmu_cache(vma, addr, entry);
+			lazy_mmu_prot_update(entry);
+		} else {
+			inc_page_state(cmpxchg_fail_anon_read);
+		}
 		pte_unmap(page_table);
-		spin_unlock(&mm->page_table_lock);
+		goto minor_fault;
+	}
 
-		if (unlikely(anon_vma_prepare(vma)))
-			goto no_mem;
-		page = alloc_zeroed_user_highpage(vma, addr);
-		if (!page)
-			goto no_mem;
+	/* This leaves the write case */
+	page_table_atomic_stop(mm);
+	if (unlikely(anon_vma_prepare(vma)))
+		goto oom;
 
-		spin_lock(&mm->page_table_lock);
-		page_table = pte_offset_map(pmd, addr);
+	page = alloc_zeroed_user_highpage(vma, addr);
+	if (!page)
+		goto oom;
 
-		if (!pte_none(*page_table)) {
-			pte_unmap(page_table);
-			page_cache_release(page);
-			spin_unlock(&mm->page_table_lock);
-			goto out;
-		}
-		inc_mm_counter(mm, rss);
-		entry = maybe_mkwrite(pte_mkdirty(mk_pte(page,
-							 vma->vm_page_prot)),
-				      vma);
-		lru_cache_add_active(page);
-		SetPageReferenced(page);
-		page_add_anon_rmap(page, vma, addr);
-	}
+	entry = maybe_mkwrite(pte_mkdirty(mk_pte(page,
+						vma->vm_page_prot)),
+				vma);
+	spin_lock(&mm->page_table_lock);
 
-	set_pte_at(mm, addr, page_table, entry);
-	pte_unmap(page_table);
+	if (!ptep_cmpxchg(mm, addr, page_table, orig_entry, entry)) {
+		pte_unmap(page_table);
+		page_cache_release(page);
+		inc_page_state(cmpxchg_fail_anon_write);
+		goto minor_fault_atomic;
+        }
 
-	/* No need to invalidate - it was non-present before */
+	/*
+	 * These two functions must come after the cmpxchg
+	 * because if the page is on the LRU then try_to_unmap may come
+	 * in and unmap the pte.
+	 */
+	page_add_anon_rmap(page, vma, addr);
+	lru_cache_add_active(page);
+	inc_mm_counter(mm, rss);
+	pte_unmap(page_table);
 	update_mmu_cache(vma, addr, entry);
 	lazy_mmu_prot_update(entry);
+
+minor_fault:
 	spin_unlock(&mm->page_table_lock);
-out:
 	return VM_FAULT_MINOR;
-no_mem:
+
+minor_fault_atomic:
+	page_table_atomic_stop(mm);
+	return VM_FAULT_MINOR;
+
+oom:
 	return VM_FAULT_OOM;
 }
 
@@ -1813,12 +1830,12 @@ no_mem:
  * As this is called only for pages that do not currently exist, we
  * do not need to flush old virtual caches or the TLB.
  *
- * This is called with the MM semaphore held and the page table
- * spinlock held. Exit with the spinlock released.
+ * This is called with the MM semaphore held and atomic pte operations started.
  */
 static int
 do_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
-	unsigned long address, int write_access, pte_t *page_table, pmd_t *pmd)
+	unsigned long address, int write_access, pte_t *page_table,
+        pmd_t *pmd, pte_t orig_entry)
 {
 	struct page * new_page;
 	struct address_space *mapping = NULL;
@@ -1829,9 +1846,9 @@ do_no_page(struct mm_struct *mm, struct 
 
 	if (!vma->vm_ops || !vma->vm_ops->nopage)
 		return do_anonymous_page(mm, vma, page_table,
-					pmd, write_access, address);
+					pmd, write_access, address, orig_entry);
 	pte_unmap(page_table);
-	spin_unlock(&mm->page_table_lock);
+	page_table_atomic_stop(mm);
 
 	if (vma->vm_file) {
 		mapping = vma->vm_file->f_mapping;
@@ -1938,7 +1955,7 @@ oom:
  * nonlinear vmas.
  */
 static int do_file_page(struct mm_struct * mm, struct vm_area_struct * vma,
-	unsigned long address, int write_access, pte_t *pte, pmd_t *pmd)
+	unsigned long address, int write_access, pte_t *pte, pmd_t *pmd, pte_t entry)
 {
 	unsigned long pgoff;
 	int err;
@@ -1951,13 +1968,13 @@ static int do_file_page(struct mm_struct
 	if (!vma->vm_ops->populate ||
 			(write_access && !(vma->vm_flags & VM_SHARED))) {
 		pte_clear(mm, address, pte);
-		return do_no_page(mm, vma, address, write_access, pte, pmd);
+		return do_no_page(mm, vma, address, write_access, pte, pmd, entry);
 	}
 
-	pgoff = pte_to_pgoff(*pte);
+	pgoff = pte_to_pgoff(entry);
 
 	pte_unmap(pte);
-	spin_unlock(&mm->page_table_lock);
+	page_table_atomic_stop(mm);
 
 	err = vma->vm_ops->populate(vma, address & PAGE_MASK, PAGE_SIZE, vma->vm_page_prot, pgoff, 0);
 	if (err == -ENOMEM)
@@ -1976,49 +1993,80 @@ static int do_file_page(struct mm_struct
  * with external mmu caches can use to update those (ie the Sparc or
  * PowerPC hashed page tables that act as extended TLBs).
  *
- * Note the "page_table_lock". It is to protect against kswapd removing
- * pages from under us. Note that kswapd only ever _removes_ pages, never
- * adds them. As such, once we have noticed that the page is not present,
- * we can drop the lock early.
- *
- * The adding of pages is protected by the MM semaphore (which we hold),
- * so we don't need to worry about a page being suddenly been added into
- * our VM.
- *
- * We enter with the pagetable spinlock held, we are supposed to
- * release it when done.
- */
+ * Note that kswapd only ever _removes_ pages, never adds them.
+ * We exploit that case if possible to avoid taking the
+ * page table lock.
+ */
 static inline int handle_pte_fault(struct mm_struct *mm,
 	struct vm_area_struct * vma, unsigned long address,
 	int write_access, pte_t *pte, pmd_t *pmd)
 {
 	pte_t entry;
+	pte_t new_entry;
 
 	entry = *pte;
 	if (!pte_present(entry)) {
 		/*
-		 * If it truly wasn't present, we know that kswapd
-		 * and the PTE updates will not touch it later. So
-		 * drop the lock.
+		 * Pass the value of the pte to do_no_page and do_file_page
+		 * This value may be used to verify that the pte is still
+		 * not present allowing atomic insertion of ptes.
 		 */
 		if (pte_none(entry))
-			return do_no_page(mm, vma, address, write_access, pte, pmd);
+			return do_no_page(mm, vma, address, write_access,
+						pte, pmd, entry);
 		if (pte_file(entry))
-			return do_file_page(mm, vma, address, write_access, pte, pmd);
-		return do_swap_page(mm, vma, address, pte, pmd, entry, write_access);
+			return do_file_page(mm, vma, address, write_access,
+						pte, pmd, entry);
+		return do_swap_page(mm, vma, address, pte, pmd,
+						entry, write_access);
 	}
 
+	new_entry = pte_mkyoung(entry);
 	if (write_access) {
-		if (!pte_write(entry))
+		if (!pte_write(entry)) {
+#ifdef CONFIG_ATOMIC_TABLE_OPS
+			/*
+			 * do_wp_page modifies a pte. We can add a pte without
+			 * the page_table_lock but not modify a pte since a
+			 * cmpxchg does not allow us to verify that the page
+			 * was not changed under us. So acquire the page table
+			 * lock.
+			 */
+			spin_lock(&mm->page_table_lock);
+			if (pte_same(entry, *pte))
+				return do_wp_page(mm, vma, address, pte,
+							pmd, entry);
+			/*
+			 * pte was changed under us. Another processor may have
+			 * done what we needed to do.
+			 */
+			pte_unmap(pte);
+			spin_unlock(&mm->page_table_lock);
+			return VM_FAULT_MINOR;
+#else
 			return do_wp_page(mm, vma, address, pte, pmd, entry);
-		entry = pte_mkdirty(entry);
+#endif
+		}
+		new_entry = pte_mkdirty(new_entry);
 	}
-	entry = pte_mkyoung(entry);
-	ptep_set_access_flags(vma, address, pte, entry, write_access);
-	update_mmu_cache(vma, address, entry);
-	lazy_mmu_prot_update(entry);
+
+	/*
+	 * If the cmpxchg fails then another processor may have done
+	 * the changes for us. If not then another fault will bring
+	 * another chance to do this again.
+	*/
+	if (ptep_cmpxchg(mm, address, pte, entry, new_entry)) {
+		flush_tlb_page(vma, address);
+		update_mmu_cache(vma, address, entry);
+		lazy_mmu_prot_update(entry);
+	} else {
+		inc_page_state(cmpxchg_fail_flag_update);
+	}
+
 	pte_unmap(pte);
-	spin_unlock(&mm->page_table_lock);
+	page_table_atomic_stop(mm);
+	if (pte_val(new_entry) == pte_val(entry))
+		inc_page_state(spurious_page_faults);
 	return VM_FAULT_MINOR;
 }
 
@@ -2037,33 +2085,90 @@ int __handle_mm_fault(struct mm_struct *
 
 	inc_page_state(pgfault);
 
-	if (is_vm_hugetlb_page(vma))
-		return VM_FAULT_SIGBUS;	/* mapping truncation does this. */
+	if (unlikely(is_vm_hugetlb_page(vma)))
+		goto sigbus;		/* mapping truncation does this. */
 
 	/*
-	 * We need the page table lock to synchronize with kswapd
-	 * and the SMP-safe atomic PTE updates.
+	 * We try to rely on the mmap_sem and the SMP-safe atomic PTE updates.
+	 * to synchronize with kswapd. However, the arch may fall back
+	 * in page_table_atomic_start to the page table lock.
+	 *
+	 * We may be able to avoid taking and releasing the page_table_lock
+	 * for the p??_alloc functions through atomic operations so we
+	 * duplicate the functionality of pmd_alloc, pud_alloc and
+	 * pte_alloc_map here.
 	 */
+	page_table_atomic_start(mm);
 	pgd = pgd_offset(mm, address);
-	spin_lock(&mm->page_table_lock);
+	if (unlikely(pgd_none(*pgd))) {
+#ifdef __ARCH_HAS_4LEVEL_HACK
+		/* The hack does not allow a clean fall back.
+		 * We need to insert a pmd entry into a pgd. pgd_test_and_populate is set
+		 * up to take a pmd entry. pud_none(pgd) == 0, therefore
+		 * the pud population branch will never be taken.
+		 */
+		pmd_t *new;
 
-	pud = pud_alloc(mm, pgd, address);
-	if (!pud)
-		goto oom;
+		page_table_atomic_stop(mm);
+		new = pmd_alloc_one(mm, address);
+#else
+		pud_t *new;
 
-	pmd = pmd_alloc(mm, pud, address);
-	if (!pmd)
-		goto oom;
+		page_table_atomic_stop(mm);
+		new = pud_alloc_one(mm, address);
+#endif
 
-	pte = pte_alloc_map(mm, pmd, address);
-	if (!pte)
-		goto oom;
-	
-	return handle_pte_fault(mm, vma, address, write_access, pte, pmd);
+		if (!new)
+			goto oom;
 
- oom:
-	spin_unlock(&mm->page_table_lock);
+		page_table_atomic_start(mm);
+		if (!pgd_test_and_populate(mm, pgd, new))
+			pud_free(new);
+	}
+
+	pud = pud_offset(pgd, address);
+	if (unlikely(pud_none(*pud))) {
+		pmd_t *new;
+
+		page_table_atomic_stop(mm);
+		new = pmd_alloc_one(mm, address);
+
+		if (!new)
+			goto oom;
+
+		page_table_atomic_start(mm);
+
+		if (!pud_test_and_populate(mm, pud, new))
+			pmd_free(new);
+	}
+
+	pmd = pmd_offset(pud, address);
+	if (unlikely(!pmd_present(*pmd))) {
+		struct page *new;
+
+		page_table_atomic_stop(mm);
+		new = pte_alloc_one(mm, address);
+
+		if (!new)
+			goto oom;
+
+		page_table_atomic_start(mm);
+
+		if (!pmd_test_and_populate(mm, pmd, new))
+			pte_free(new);
+		else {
+			inc_page_state(nr_page_table_pages);
+			mm->nr_ptes++;
+		}
+	}
+
+	pte = pte_offset_map(pmd, address);
+	return handle_pte_fault(mm, vma, address, write_access, pte, pmd);
+oom:
 	return VM_FAULT_OOM;
+
+sigbus:
+	return VM_FAULT_SIGBUS;
 }
 
 #ifndef __PAGETABLE_PUD_FOLDED
diff -puN mm/page_alloc.c~page-fault-patches-optional-page_lock-acquisition-in mm/page_alloc.c
--- 25/mm/page_alloc.c~page-fault-patches-optional-page_lock-acquisition-in	Wed Aug 17 15:09:59 2005
+++ 25-akpm/mm/page_alloc.c	Wed Aug 17 15:09:59 2005
@@ -2218,6 +2218,12 @@ static char *vmstat_text[] = {
 
 	"pgrotated",
 	"nr_bounce",
+	"spurious_page_faults",
+	"cmpxchg_fail_flag_update",
+	"cmpxchg_fail_flag_reuse",
+
+	"cmpxchg_fail_anon_read",
+	"cmpxchg_fail_anon_write",
 };
 
 static void *vmstat_start(struct seq_file *m, loff_t *pos)
diff -puN include/asm-generic/4level-fixup.h~page-fault-patches-optional-page_lock-acquisition-in include/asm-generic/4level-fixup.h
--- 25/include/asm-generic/4level-fixup.h~page-fault-patches-optional-page_lock-acquisition-in	Wed Aug 17 15:10:08 2005
+++ 25-akpm/include/asm-generic/4level-fixup.h	Wed Aug 17 15:10:08 2005
@@ -26,6 +26,7 @@
 #define pud_present(pud)		1
 #define pud_ERROR(pud)			do { } while (0)
 #define pud_clear(pud)			pgd_clear(pud)
+#define pud_populate			pgd_populate
 
 #undef pud_free_tlb
 #define pud_free_tlb(tlb, x)            do { } while (0)
_

[-- Attachment #4: page-fault-patches-no-pagetable-lock-in-do_anon_page.patch --]
[-- Type: application/octet-stream, Size: 4302 bytes --]


From: Christoph Lameter <christoph@lameter.com>

Do not use the page_table_lock in do_anonymous_page.  This will significantly
increase the parallelism of the page fault handler on SMP systems.  The patch
also modifies the definitions of the *_mm_counter macros so that rss and
anon_rss become atomic (using atomic64_t if available).
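
The practical effect (assuming CONFIG_ATOMIC_TABLE_OPS=y) is that a counter
update such as

	inc_mm_counter(mm, rss);

now expands to an atomic increment (atomic_inc or atomic64_inc) instead of a
plain increment, so do_anonymous_page can update rss without holding the
page_table_lock.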

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 include/linux/sched.h |   31 +++++++++++++++++++++++++++++++
 mm/memory.c           |   14 +++++---------
 2 files changed, 36 insertions(+), 9 deletions(-)

diff -puN include/linux/sched.h~page-fault-patches-no-pagetable-lock-in-do_anon_page include/linux/sched.h
--- 25/include/linux/sched.h~page-fault-patches-no-pagetable-lock-in-do_anon_page	Wed Aug 17 15:10:30 2005
+++ 25-akpm/include/linux/sched.h	Wed Aug 17 15:10:45 2005
@@ -204,12 +204,43 @@ arch_get_unmapped_area_topdown(struct fi
 extern void arch_unmap_area(struct mm_struct *, unsigned long);
 extern void arch_unmap_area_topdown(struct mm_struct *, unsigned long);
 
+#ifdef CONFIG_ATOMIC_TABLE_OPS
+/*
+ * No spinlock is held during atomic page table operations. The
+ * counters are not protected anymore and must also be
+ * incremented atomically.
+*/
+#ifdef ATOMIC64_INIT
+#define set_mm_counter(mm, member, value) atomic64_set(&(mm)->_##member, value)
+#define get_mm_counter(mm, member) ((unsigned long)atomic64_read(&(mm)->_##member))
+#define add_mm_counter(mm, member, value) atomic64_add(value, &(mm)->_##member)
+#define inc_mm_counter(mm, member) atomic64_inc(&(mm)->_##member)
+#define dec_mm_counter(mm, member) atomic64_dec(&(mm)->_##member)
+typedef atomic64_t mm_counter_t;
+#else
+/*
+ * This may limit process memory to 2^31 * PAGE_SIZE which may be around 8TB
+ * if using 4KB page size
+ */
+#define set_mm_counter(mm, member, value) atomic_set(&(mm)->_##member, value)
+#define get_mm_counter(mm, member) ((unsigned long)atomic_read(&(mm)->_##member))
+#define add_mm_counter(mm, member, value) atomic_add(value, &(mm)->_##member)
+#define inc_mm_counter(mm, member) atomic_inc(&(mm)->_##member)
+#define dec_mm_counter(mm, member) atomic_dec(&(mm)->_##member)
+typedef atomic_t mm_counter_t;
+#endif
+#else
+/*
+ * No atomic page table operations. Counters are protected by
+ * the page table lock
+ */
 #define set_mm_counter(mm, member, value) (mm)->_##member = (value)
 #define get_mm_counter(mm, member) ((mm)->_##member)
 #define add_mm_counter(mm, member, value) (mm)->_##member += (value)
 #define inc_mm_counter(mm, member) (mm)->_##member++
 #define dec_mm_counter(mm, member) (mm)->_##member--
 typedef unsigned long mm_counter_t;
+#endif
 
 struct mm_struct {
 	struct vm_area_struct * mmap;		/* list of VMAs */
diff -puN mm/memory.c~page-fault-patches-no-pagetable-lock-in-do_anon_page mm/memory.c
--- 25/mm/memory.c~page-fault-patches-no-pagetable-lock-in-do_anon_page	Wed Aug 17 15:10:30 2005
+++ 25-akpm/mm/memory.c	Wed Aug 17 15:10:48 2005
@@ -1772,12 +1772,12 @@ do_anonymous_page(struct mm_struct *mm, 
 		} else {
 			inc_page_state(cmpxchg_fail_anon_read);
 		}
-		pte_unmap(page_table);
 		goto minor_fault;
 	}
 
 	/* This leaves the write case */
 	page_table_atomic_stop(mm);
+	pte_unmap(page_table);
 	if (unlikely(anon_vma_prepare(vma)))
 		goto oom;
 
@@ -1788,13 +1788,13 @@ do_anonymous_page(struct mm_struct *mm, 
 	entry = maybe_mkwrite(pte_mkdirty(mk_pte(page,
 						vma->vm_page_prot)),
 				vma);
-	spin_lock(&mm->page_table_lock);
+	page_table = pte_offset_map(pmd, addr);
+	page_table_atomic_start(mm);
 
 	if (!ptep_cmpxchg(mm, addr, page_table, orig_entry, entry)) {
-		pte_unmap(page_table);
 		page_cache_release(page);
 		inc_page_state(cmpxchg_fail_anon_write);
-		goto minor_fault_atomic;
+		goto minor_fault;
         }
 
 	/*
@@ -1805,16 +1805,12 @@ do_anonymous_page(struct mm_struct *mm, 
 	page_add_anon_rmap(page, vma, addr);
 	lru_cache_add_active(page);
 	inc_mm_counter(mm, rss);
-	pte_unmap(page_table);
 	update_mmu_cache(vma, addr, entry);
 	lazy_mmu_prot_update(entry);
 
 minor_fault:
-	spin_unlock(&mm->page_table_lock);
-	return VM_FAULT_MINOR;
-
-minor_fault_atomic:
 	page_table_atomic_stop(mm);
+	pte_unmap(page_table);
 	return VM_FAULT_MINOR;
 
 oom:
_


end of thread, other threads:[~2005-08-22 20:31 UTC | newest]

Thread overview: 47+ messages
2005-08-17 22:17 pagefault scalability patches Andrew Morton
2005-08-17 22:19 ` Christoph Lameter
2005-08-17 22:36 ` Linus Torvalds
2005-08-17 22:51   ` Christoph Lameter
2005-08-17 23:01     ` Linus Torvalds
2005-08-17 23:12       ` Christoph Lameter
2005-08-17 23:23         ` Linus Torvalds
2005-08-17 23:31           ` Christoph Lameter
2005-08-17 23:30         ` Andrew Morton
2005-08-17 23:33           ` Christoph Lameter
2005-08-17 23:44             ` Andrew Morton
2005-08-17 23:52               ` Peter Chubb
2005-08-17 23:58                 ` Christoph Lameter
2005-08-18  0:47                   ` Andrew Morton
2005-08-18 16:09                     ` Christoph Lameter
2005-08-22  2:13     ` Benjamin Herrenschmidt
2005-08-18  0:43 ` Andrew Morton
2005-08-18 16:04   ` Christoph Lameter
2005-08-18 20:16   ` Hugh Dickins
2005-08-19  1:22     ` [PATCH] use mm_counter macros for nr_pte since its also under ptl Christoph Lameter
2005-08-19  3:17       ` Andrew Morton
2005-08-19  3:51         ` Christoph Lameter
2005-08-19  1:33     ` pagefault scalability patches Christoph Lameter
2005-08-19  3:53     ` [RFC] Concept for delayed counter updates in mm_struct Christoph Lameter
2005-08-19  4:29       ` Andrew Morton
2005-08-19  4:34         ` Andi Kleen
2005-08-19  4:49         ` Linus Torvalds
2005-08-19 16:06           ` Christoph Lameter
2005-08-20  7:33           ` [PATCH] mm_struct counter deltas in task_struct Christoph Lameter
2005-08-20  7:35           ` [PATCH] Use deltas to replace atomic inc Christoph Lameter
2005-08-20  7:58             ` Andrew Morton
2005-08-22  3:32               ` Christoph Lameter
2005-08-22  3:48                 ` Linus Torvalds
2005-08-22  4:06                   ` Christoph Lameter
2005-08-22  4:18                     ` Linus Torvalds
2005-08-22 13:23                       ` Christoph Lameter
2005-08-22 14:22                         ` Hugh Dickins
2005-08-22 15:24                           ` Christoph Lameter
2005-08-22 15:43                             ` Andi Kleen
2005-08-22 16:24                               ` Christoph Lameter
2005-08-22 20:30                           ` [PATCH] mm_struct counter deltas V2 Christoph Lameter
2005-08-22 20:31                           ` [PATCH] Use deltas to replace atomic inc V2 Christoph Lameter
2005-08-22  2:09   ` pagefault scalability patches Benjamin Herrenschmidt
2005-08-18  2:00 ` Nick Piggin
2005-08-18  8:38   ` Nick Piggin
2005-08-18 16:17     ` Christoph Lameter
2005-08-22  2:04       ` Benjamin Herrenschmidt
