* [ofa-general] [PATCH 0 of 8] mmu notifiers #v10
@ 2008-04-02 21:30 Andrea Arcangeli
2008-04-02 21:30 ` [ofa-general] [PATCH 1 of 8] Core of mmu notifiers Andrea Arcangeli
` (7 more replies)
0 siblings, 8 replies; 13+ messages in thread
From: Andrea Arcangeli @ 2008-04-02 21:30 UTC (permalink / raw)
To: akpm
Cc: Nick Piggin, Peter Zijlstra, linux-mm, Izik Eidus, Kanoj Sarcar,
Roland Dreier, Jack Steiner, linux-kernel, Avi Kivity, kvm-devel,
Robin Holt, general, Christoph Lameter
Hello,
this is the mmu notifier #v10. Patches 1 and 2 are the only difference between
this and EMM V2; the rest is the same as with Christoph's patches.
I think maximum priority should be given to merging patches 1 and 2 into -mm,
and into mainline as soon as possible.
Patches 3 to 8 can go into -mm for testing, but I'm not sure we should support
sleep-capable notifiers in mainline unless we make the VM locking conditional,
to avoid overscheduling on the extremely small critical sections of the common
case. I only rediffed Christoph's patches on top of the mmu notifier patches.
KVM's current plan is to depend heavily on mmu notifiers for swapping and to
optimize the spte faults, and we also need them for SMP guest ballooning with
madvise(MADV_DONTNEED) and for other optimizations and features.
Patches 3 to 8 are Christoph's work ported on top of #v10 to make the #v10 mmu
notifiers sleep capable (at least supposedly). I didn't test the scheduling
side, but I assume you'll quickly test XPMEM on top of this.
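To make the intended usage concrete, here is a minimal sketch of how a
secondary-MMU driver (a KVM-like user of this API) would hook in. Only
mmu_notifier, mmu_notifier_ops and mmu_notifier_register() come from patch 1
below; struct my_mmu and its callbacks are hypothetical placeholders for
driver-specific code, not part of the series.

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/mm_types.h>
#include <linux/rwsem.h>
#include <linux/mmu_notifier.h>

/* Hypothetical per-instance state of a secondary MMU (shadow pagetables). */
struct my_mmu {
	struct mmu_notifier mn;	/* embedded, linked into mm->mmu_notifier_list */
	/* ... driver-private shadow-MMU state ... */
};

static void my_mmu_invalidate_page(struct mmu_notifier *mn,
				   struct mm_struct *mm,
				   unsigned long address)
{
	struct my_mmu *m = container_of(mn, struct my_mmu, mn);

	/* "m" would be used here to drop the secondary mapping of
	 * "address" and to flush the secondary TLB */
}

static void my_mmu_invalidate_range_start(struct mmu_notifier *mn,
					  struct mm_struct *mm,
					  unsigned long start,
					  unsigned long end)
{
	/* tear down all secondary mappings of [start, end) before the
	 * VM frees or remaps the pages backing them */
}

static void my_mmu_invalidate_range_end(struct mmu_notifier *mn,
					struct mm_struct *mm,
					unsigned long start,
					unsigned long end)
{
	/* secondary faults may repopulate the range from the Linux ptes */
}

static const struct mmu_notifier_ops my_mmu_ops = {
	.invalidate_page	= my_mmu_invalidate_page,
	.invalidate_range_start	= my_mmu_invalidate_range_start,
	.invalidate_range_end	= my_mmu_invalidate_range_end,
};

/* Registration requires mmap_sem held for write (see patch 1). */
static void my_mmu_attach(struct my_mmu *m, struct mm_struct *mm)
{
	m->mn.ops = &my_mmu_ops;
	down_write(&mm->mmap_sem);
	mmu_notifier_register(&m->mn, mm);
	up_write(&mm->mmap_sem);
}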
* [ofa-general] [PATCH 1 of 8] Core of mmu notifiers
2008-04-02 21:30 [ofa-general] [PATCH 0 of 8] mmu notifiers #v10 Andrea Arcangeli
@ 2008-04-02 21:30 ` Andrea Arcangeli
2008-04-02 22:34 ` Christoph Lameter
2008-04-02 21:30 ` [PATCH 2 of 8] Moves all mmu notifier methods outside the PT lock (first and not last Andrea Arcangeli
` (6 subsequent siblings)
7 siblings, 1 reply; 13+ messages in thread
From: Andrea Arcangeli @ 2008-04-02 21:30 UTC (permalink / raw)
To: akpm
Cc: Nick Piggin, Peter Zijlstra, linux-mm, Izik Eidus, Kanoj Sarcar,
Roland Dreier, Jack Steiner, linux-kernel, Avi Kivity, kvm-devel,
Robin Holt, general, Christoph Lameter
# HG changeset patch
# User Andrea Arcangeli <andrea@qumranet.com>
# Date 1207158873 -7200
# Node ID a406c0cc686d0ca94a4d890d661cdfa48cfba09f
# Parent 249e077dc932a5322e04ac1d69326622ea4023b8
Core of mmu notifiers.
Signed-off-by: Andrea Arcangeli <andrea@qumranet.com>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Christoph Lameter <clameter@sgi.com>
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -10,6 +10,7 @@
#include <linux/rbtree.h>
#include <linux/rwsem.h>
#include <linux/completion.h>
+#include <linux/seqlock.h>
#include <asm/page.h>
#include <asm/mmu.h>
@@ -225,6 +226,10 @@
#ifdef CONFIG_CGROUP_MEM_RES_CTLR
struct mem_cgroup *mem_cgroup;
#endif
+#ifdef CONFIG_MMU_NOTIFIER
+ struct hlist_head mmu_notifier_list;
+ seqlock_t mmu_notifier_lock;
+#endif
};
#endif /* _LINUX_MM_TYPES_H */
diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
new file mode 100644
--- /dev/null
+++ b/include/linux/mmu_notifier.h
@@ -0,0 +1,181 @@
+#ifndef _LINUX_MMU_NOTIFIER_H
+#define _LINUX_MMU_NOTIFIER_H
+
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/mm_types.h>
+
+struct mmu_notifier;
+struct mmu_notifier_ops;
+
+#ifdef CONFIG_MMU_NOTIFIER
+
+struct mmu_notifier_ops {
+ /*
+	 * Called when nobody can register any more notifiers in the mm
+	 * and after the "mn" notifier has already been disarmed.
+ */
+ void (*release)(struct mmu_notifier *mn,
+ struct mm_struct *mm);
+
+ /*
+	 * clear_flush_young is called after the VM test-and-clears
+	 * the young/accessed bitflag in the pte. This way the VM
+	 * provides proper aging for accesses to the page through the
+	 * secondary MMUs and not only for those through the Linux pte.
+ */
+ int (*clear_flush_young)(struct mmu_notifier *mn,
+ struct mm_struct *mm,
+ unsigned long address);
+
+ /*
+	 * Before this is invoked any secondary MMU may still read/write
+	 * the page previously pointed to by the Linux pte, because the
+	 * old page hasn't been freed yet. If required, set_page_dirty
+	 * has to be called internally by this method.
+ */
+ void (*invalidate_page)(struct mmu_notifier *mn,
+ struct mm_struct *mm,
+ unsigned long address);
+
+ /*
+ * invalidate_range_start() and invalidate_range_end() must be
+ * paired. Multiple invalidate_range_start/ends may be nested
+ * or called concurrently.
+ */
+ void (*invalidate_range_start)(struct mmu_notifier *mn,
+ struct mm_struct *mm,
+ unsigned long start, unsigned long end);
+ void (*invalidate_range_end)(struct mmu_notifier *mn,
+ struct mm_struct *mm,
+ unsigned long start, unsigned long end);
+};
+
+struct mmu_notifier {
+ struct hlist_node hlist;
+ const struct mmu_notifier_ops *ops;
+};
+
+static inline int mm_has_notifiers(struct mm_struct *mm)
+{
+ return unlikely(!hlist_empty(&mm->mmu_notifier_list));
+}
+
+/*
+ * Must hold the mmap_sem for write.
+ *
+ * RCU is used to traverse the list.
+ */
+extern void mmu_notifier_register(struct mmu_notifier *mn,
+ struct mm_struct *mm);
+extern void __mmu_notifier_release(struct mm_struct *mm);
+extern int __mmu_notifier_clear_flush_young(struct mm_struct *mm,
+ unsigned long address);
+extern void __mmu_notifier_invalidate_page(struct mm_struct *mm,
+ unsigned long address);
+extern void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
+ unsigned long start, unsigned long end);
+extern void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
+ unsigned long start, unsigned long end);
+
+
+static inline void mmu_notifier_release(struct mm_struct *mm)
+{
+ if (mm_has_notifiers(mm))
+ __mmu_notifier_release(mm);
+}
+
+static inline int mmu_notifier_clear_flush_young(struct mm_struct *mm,
+ unsigned long address)
+{
+ if (mm_has_notifiers(mm))
+ return __mmu_notifier_clear_flush_young(mm, address);
+ return 0;
+}
+
+static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
+ unsigned long address)
+{
+ if (mm_has_notifiers(mm))
+ __mmu_notifier_invalidate_page(mm, address);
+}
+
+static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
+ unsigned long start, unsigned long end)
+{
+ if (mm_has_notifiers(mm))
+ __mmu_notifier_invalidate_range_start(mm, start, end);
+}
+
+static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
+ unsigned long start, unsigned long end)
+{
+ if (mm_has_notifiers(mm))
+ __mmu_notifier_invalidate_range_end(mm, start, end);
+}
+
+static inline void mmu_notifier_mm_init(struct mm_struct *mm)
+{
+ INIT_HLIST_HEAD(&mm->mmu_notifier_list);
+ seqlock_init(&mm->mmu_notifier_lock);
+}
+
+#define ptep_clear_flush_notify(__vma, __address, __ptep) \
+({ \
+ pte_t __pte; \
+ struct vm_area_struct *___vma = __vma; \
+ unsigned long ___address = __address; \
+ __pte = ptep_clear_flush(___vma, ___address, __ptep); \
+ mmu_notifier_invalidate_page(___vma->vm_mm, ___address); \
+ __pte; \
+})
+
+#define ptep_clear_flush_young_notify(__vma, __address, __ptep) \
+({ \
+ int __young; \
+ struct vm_area_struct *___vma = __vma; \
+ unsigned long ___address = __address; \
+ __young = ptep_clear_flush_young(___vma, ___address, __ptep); \
+ __young |= mmu_notifier_clear_flush_young(___vma->vm_mm, \
+ ___address); \
+ __young; \
+})
+
+#else /* CONFIG_MMU_NOTIFIER */
+
+static inline void mmu_notifier_release(struct mm_struct *mm)
+{
+}
+
+static inline int mmu_notifier_clear_flush_young(struct mm_struct *mm,
+ unsigned long address)
+{
+ return 0;
+}
+
+static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
+ unsigned long address)
+{
+}
+
+static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
+ unsigned long start, unsigned long end)
+{
+}
+
+static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
+ unsigned long start, unsigned long end)
+{
+}
+
+static inline void mmu_notifier_mm_init(struct mm_struct *mm)
+{
+}
+
+#define ptep_clear_flush_young_notify ptep_clear_flush_young
+#define ptep_clear_flush_notify ptep_clear_flush
+
+#endif /* CONFIG_MMU_NOTIFIER */
+
+#endif /* _LINUX_MMU_NOTIFIER_H */
diff --git a/kernel/fork.c b/kernel/fork.c
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -53,6 +53,7 @@
#include <linux/tty.h>
#include <linux/proc_fs.h>
#include <linux/blkdev.h>
+#include <linux/mmu_notifier.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
@@ -362,6 +363,7 @@
if (likely(!mm_alloc_pgd(mm))) {
mm->def_flags = 0;
+ mmu_notifier_mm_init(mm);
return mm;
}
diff --git a/mm/Kconfig b/mm/Kconfig
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -193,3 +193,7 @@
config VIRT_TO_BUS
def_bool y
depends on !ARCH_NO_VIRT_TO_BUS
+
+config MMU_NOTIFIER
+ def_bool y
+ bool "MMU notifier, for paging KVM/RDMA"
diff --git a/mm/Makefile b/mm/Makefile
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -33,4 +33,4 @@
obj-$(CONFIG_SMP) += allocpercpu.o
obj-$(CONFIG_QUICKLIST) += quicklist.o
obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o
-
+obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
diff --git a/mm/filemap_xip.c b/mm/filemap_xip.c
--- a/mm/filemap_xip.c
+++ b/mm/filemap_xip.c
@@ -194,7 +194,7 @@
if (pte) {
/* Nuke the page table entry. */
flush_cache_page(vma, address, pte_pfn(*pte));
- pteval = ptep_clear_flush(vma, address, pte);
+ pteval = ptep_clear_flush_notify(vma, address, pte);
page_remove_rmap(page, vma);
dec_mm_counter(mm, file_rss);
BUG_ON(pte_dirty(pteval));
diff --git a/mm/fremap.c b/mm/fremap.c
--- a/mm/fremap.c
+++ b/mm/fremap.c
@@ -15,6 +15,7 @@
#include <linux/rmap.h>
#include <linux/module.h>
#include <linux/syscalls.h>
+#include <linux/mmu_notifier.h>
#include <asm/mmu_context.h>
#include <asm/cacheflush.h>
@@ -214,7 +215,9 @@
spin_unlock(&mapping->i_mmap_lock);
}
+ mmu_notifier_invalidate_range_start(mm, start, start + size);
err = populate_range(mm, vma, start, size, pgoff);
+ mmu_notifier_invalidate_range_end(mm, start, start + size);
if (!err && !(flags & MAP_NONBLOCK)) {
if (unlikely(has_write_lock)) {
downgrade_write(&mm->mmap_sem);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -14,6 +14,7 @@
#include <linux/mempolicy.h>
#include <linux/cpuset.h>
#include <linux/mutex.h>
+#include <linux/mmu_notifier.h>
#include <asm/page.h>
#include <asm/pgtable.h>
@@ -799,6 +800,7 @@
BUG_ON(start & ~HPAGE_MASK);
BUG_ON(end & ~HPAGE_MASK);
+ mmu_notifier_invalidate_range_start(mm, start, end);
spin_lock(&mm->page_table_lock);
for (address = start; address < end; address += HPAGE_SIZE) {
ptep = huge_pte_offset(mm, address);
@@ -819,6 +821,7 @@
}
spin_unlock(&mm->page_table_lock);
flush_tlb_range(vma, start, end);
+ mmu_notifier_invalidate_range_end(mm, start, end);
list_for_each_entry_safe(page, tmp, &page_list, lru) {
list_del(&page->lru);
put_page(page);
diff --git a/mm/memory.c b/mm/memory.c
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -51,6 +51,7 @@
#include <linux/init.h>
#include <linux/writeback.h>
#include <linux/memcontrol.h>
+#include <linux/mmu_notifier.h>
#include <asm/pgalloc.h>
#include <asm/uaccess.h>
@@ -611,6 +612,9 @@
if (is_vm_hugetlb_page(vma))
return copy_hugetlb_page_range(dst_mm, src_mm, vma);
+ if (is_cow_mapping(vma->vm_flags))
+ mmu_notifier_invalidate_range_start(src_mm, addr, end);
+
dst_pgd = pgd_offset(dst_mm, addr);
src_pgd = pgd_offset(src_mm, addr);
do {
@@ -621,6 +625,11 @@
vma, addr, next))
return -ENOMEM;
} while (dst_pgd++, src_pgd++, addr = next, addr != end);
+
+ if (is_cow_mapping(vma->vm_flags))
+ mmu_notifier_invalidate_range_end(src_mm,
+ vma->vm_start, end);
+
return 0;
}
@@ -897,7 +906,9 @@
lru_add_drain();
tlb = tlb_gather_mmu(mm, 0);
update_hiwater_rss(mm);
+ mmu_notifier_invalidate_range_start(mm, address, end);
end = unmap_vmas(&tlb, vma, address, end, &nr_accounted, details);
+ mmu_notifier_invalidate_range_end(mm, address, end);
if (tlb)
tlb_finish_mmu(tlb, address, end);
return end;
@@ -1463,10 +1474,11 @@
{
pgd_t *pgd;
unsigned long next;
- unsigned long end = addr + size;
+ unsigned long start = addr, end = addr + size;
int err;
BUG_ON(addr >= end);
+ mmu_notifier_invalidate_range_start(mm, start, end);
pgd = pgd_offset(mm, addr);
do {
next = pgd_addr_end(addr, end);
@@ -1474,6 +1486,7 @@
if (err)
break;
} while (pgd++, addr = next, addr != end);
+ mmu_notifier_invalidate_range_end(mm, start, end);
return err;
}
EXPORT_SYMBOL_GPL(apply_to_page_range);
@@ -1675,7 +1688,7 @@
* seen in the presence of one thread doing SMC and another
* thread doing COW.
*/
- ptep_clear_flush(vma, address, page_table);
+ ptep_clear_flush_notify(vma, address, page_table);
set_pte_at(mm, address, page_table, entry);
update_mmu_cache(vma, address, entry);
lru_cache_add_active(new_page);
diff --git a/mm/mmap.c b/mm/mmap.c
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -26,6 +26,7 @@
#include <linux/mount.h>
#include <linux/mempolicy.h>
#include <linux/rmap.h>
+#include <linux/mmu_notifier.h>
#include <asm/uaccess.h>
#include <asm/cacheflush.h>
@@ -1747,11 +1748,13 @@
lru_add_drain();
tlb = tlb_gather_mmu(mm, 0);
update_hiwater_rss(mm);
+ mmu_notifier_invalidate_range_start(mm, start, end);
unmap_vmas(&tlb, vma, start, end, &nr_accounted, NULL);
vm_unacct_memory(nr_accounted);
free_pgtables(&tlb, vma, prev? prev->vm_end: FIRST_USER_ADDRESS,
next? next->vm_start: 0);
tlb_finish_mmu(tlb, start, end);
+ mmu_notifier_invalidate_range_end(mm, start, end);
}
/*
@@ -2037,6 +2040,7 @@
unsigned long end;
/* mm's last user has gone, and its about to be pulled down */
+ mmu_notifier_release(mm);
arch_exit_mmap(mm);
lru_add_drain();
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
new file mode 100644
--- /dev/null
+++ b/mm/mmu_notifier.c
@@ -0,0 +1,121 @@
+/*
+ * linux/mm/mmu_notifier.c
+ *
+ * Copyright (C) 2008 Qumranet, Inc.
+ * Copyright (C) 2008 SGI
+ * Christoph Lameter <clameter@sgi.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <linux/mmu_notifier.h>
+#include <linux/rcupdate.h>
+#include <linux/module.h>
+
+/*
+ * No synchronization. This function can only be called when only a single
+ * process remains that performs teardown.
+ */
+void __mmu_notifier_release(struct mm_struct *mm)
+{
+ struct mmu_notifier *mn;
+ unsigned seq;
+
+ seq = read_seqbegin(&mm->mmu_notifier_lock);
+ while (unlikely(!hlist_empty(&mm->mmu_notifier_list))) {
+ mn = hlist_entry(mm->mmu_notifier_list.first,
+ struct mmu_notifier,
+ hlist);
+ hlist_del(&mn->hlist);
+ if (mn->ops->release)
+ mn->ops->release(mn, mm);
+ BUG_ON(read_seqretry(&mm->mmu_notifier_lock, seq));
+ }
+}
+
+/*
+ * If no young bitflag is supported by the hardware, ->clear_flush_young can
+ * unmap the address and return 1 or 0 depending on whether the mapping
+ * previously existed or not.
+ */
+int __mmu_notifier_clear_flush_young(struct mm_struct *mm,
+ unsigned long address)
+{
+ struct mmu_notifier *mn;
+ struct hlist_node *n;
+ int young = 0;
+ unsigned seq;
+
+ seq = read_seqbegin(&mm->mmu_notifier_lock);
+ do {
+ hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_list, hlist) {
+ if (mn->ops->clear_flush_young)
+ young |= mn->ops->clear_flush_young(mn, mm,
+ address);
+ }
+ } while (read_seqretry(&mm->mmu_notifier_lock, seq));
+
+ return young;
+}
+
+void __mmu_notifier_invalidate_page(struct mm_struct *mm,
+ unsigned long address)
+{
+ struct mmu_notifier *mn;
+ struct hlist_node *n;
+ unsigned seq;
+
+ seq = read_seqbegin(&mm->mmu_notifier_lock);
+ do {
+ hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_list, hlist) {
+ if (mn->ops->invalidate_page)
+ mn->ops->invalidate_page(mn, mm, address);
+ }
+ } while (read_seqretry(&mm->mmu_notifier_lock, seq));
+}
+
+void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
+ unsigned long start, unsigned long end)
+{
+ struct mmu_notifier *mn;
+ struct hlist_node *n;
+ unsigned seq;
+
+ seq = read_seqbegin(&mm->mmu_notifier_lock);
+ do {
+ hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_list, hlist) {
+ if (mn->ops->invalidate_range_start)
+ mn->ops->invalidate_range_start(mn, mm,
+ start, end);
+ }
+ } while (read_seqretry(&mm->mmu_notifier_lock, seq));
+}
+
+void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
+ unsigned long start, unsigned long end)
+{
+ struct mmu_notifier *mn;
+ struct hlist_node *n;
+ unsigned seq;
+
+ seq = read_seqbegin(&mm->mmu_notifier_lock);
+ do {
+ hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_list, hlist) {
+ if (mn->ops->invalidate_range_end)
+ mn->ops->invalidate_range_end(mn, mm,
+ start, end);
+ }
+ } while (read_seqretry(&mm->mmu_notifier_lock, seq));
+}
+
+/*
+ * Must hold mmap_sem writably when calling registration functions.
+ */
+void mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
+{
+ write_seqlock(&mm->mmu_notifier_lock);
+ hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier_list);
+ write_sequnlock(&mm->mmu_notifier_lock);
+}
+EXPORT_SYMBOL_GPL(mmu_notifier_register);
diff --git a/mm/mprotect.c b/mm/mprotect.c
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -21,6 +21,7 @@
#include <linux/syscalls.h>
#include <linux/swap.h>
#include <linux/swapops.h>
+#include <linux/mmu_notifier.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
#include <asm/cacheflush.h>
@@ -198,10 +199,12 @@
dirty_accountable = 1;
}
+ mmu_notifier_invalidate_range_start(mm, start, end);
if (is_vm_hugetlb_page(vma))
hugetlb_change_protection(vma, start, end, vma->vm_page_prot);
else
change_protection(vma, start, end, vma->vm_page_prot, dirty_accountable);
+ mmu_notifier_invalidate_range_end(mm, start, end);
vm_stat_account(mm, oldflags, vma->vm_file, -nrpages);
vm_stat_account(mm, newflags, vma->vm_file, nrpages);
return 0;
diff --git a/mm/mremap.c b/mm/mremap.c
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -18,6 +18,7 @@
#include <linux/highmem.h>
#include <linux/security.h>
#include <linux/syscalls.h>
+#include <linux/mmu_notifier.h>
#include <asm/uaccess.h>
#include <asm/cacheflush.h>
@@ -74,7 +75,11 @@
struct mm_struct *mm = vma->vm_mm;
pte_t *old_pte, *new_pte, pte;
spinlock_t *old_ptl, *new_ptl;
+ unsigned long old_start;
+ old_start = old_addr;
+ mmu_notifier_invalidate_range_start(vma->vm_mm,
+ old_start, old_end);
if (vma->vm_file) {
/*
* Subtle point from Rajesh Venkatasubramanian: before
@@ -116,6 +121,7 @@
pte_unmap_unlock(old_pte - 1, old_ptl);
if (mapping)
spin_unlock(&mapping->i_mmap_lock);
+ mmu_notifier_invalidate_range_end(vma->vm_mm, old_start, old_end);
}
#define LATENCY_LIMIT (64 * PAGE_SIZE)
diff --git a/mm/rmap.c b/mm/rmap.c
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -49,6 +49,7 @@
#include <linux/module.h>
#include <linux/kallsyms.h>
#include <linux/memcontrol.h>
+#include <linux/mmu_notifier.h>
#include <asm/tlbflush.h>
@@ -287,7 +288,7 @@
if (vma->vm_flags & VM_LOCKED) {
referenced++;
*mapcount = 1; /* break early from loop */
- } else if (ptep_clear_flush_young(vma, address, pte))
+ } else if (ptep_clear_flush_young_notify(vma, address, pte))
referenced++;
/* Pretend the page is referenced if the task has the
@@ -456,7 +457,7 @@
pte_t entry;
flush_cache_page(vma, address, pte_pfn(*pte));
- entry = ptep_clear_flush(vma, address, pte);
+ entry = ptep_clear_flush_notify(vma, address, pte);
entry = pte_wrprotect(entry);
entry = pte_mkclean(entry);
set_pte_at(mm, address, pte, entry);
@@ -717,14 +718,14 @@
* skipped over this mm) then we should reactivate it.
*/
if (!migration && ((vma->vm_flags & VM_LOCKED) ||
- (ptep_clear_flush_young(vma, address, pte)))) {
+ (ptep_clear_flush_young_notify(vma, address, pte)))) {
ret = SWAP_FAIL;
goto out_unmap;
}
/* Nuke the page table entry. */
flush_cache_page(vma, address, page_to_pfn(page));
- pteval = ptep_clear_flush(vma, address, pte);
+ pteval = ptep_clear_flush_notify(vma, address, pte);
/* Move the dirty bit to the physical page now the pte is gone. */
if (pte_dirty(pteval))
@@ -849,12 +850,12 @@
page = vm_normal_page(vma, address, *pte);
BUG_ON(!page || PageAnon(page));
- if (ptep_clear_flush_young(vma, address, pte))
+ if (ptep_clear_flush_young_notify(vma, address, pte))
continue;
/* Nuke the page table entry. */
flush_cache_page(vma, address, pte_pfn(*pte));
- pteval = ptep_clear_flush(vma, address, pte);
+ pteval = ptep_clear_flush_notify(vma, address, pte);
/* If nonlinear, store the file page offset in the pte. */
if (page->index != linear_page_index(vma, address))
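As an illustration of the ->clear_flush_young semantics documented in the new
header and in mm/mmu_notifier.c above, a secondary MMU whose hardware has no
young/accessed bit could implement the callback roughly as follows. This is
only a sketch reusing the hypothetical struct my_mmu from the earlier note;
my_mmu_lookup() and my_mmu_zap() are made-up driver helpers, only the callback
signature is the one defined by this patch.

/* hypothetical driver helpers, assumed to exist in the driver */
static int my_mmu_lookup(struct my_mmu *m, unsigned long address);
static void my_mmu_zap(struct my_mmu *m, unsigned long address);

/*
 * Without a hardware accessed bit the secondary MMU can only report
 * "young" by zapping the mapping and letting the next secondary fault
 * re-establish it, as the comment in mm/mmu_notifier.c suggests.
 */
static int my_mmu_clear_flush_young(struct mmu_notifier *mn,
				    struct mm_struct *mm,
				    unsigned long address)
{
	struct my_mmu *m = container_of(mn, struct my_mmu, mn);

	if (!my_mmu_lookup(m, address))		/* was it mapped at all? */
		return 0;
	my_mmu_zap(m, address);			/* unmap + flush secondary TLB */
	return 1;				/* count it as recently used once */
}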
* [PATCH 2 of 8] Moves all mmu notifier methods outside the PT lock (first and not last
2008-04-02 21:30 [ofa-general] [PATCH 0 of 8] mmu notifiers #v10 Andrea Arcangeli
2008-04-02 21:30 ` [ofa-general] [PATCH 1 of 8] Core of mmu notifiers Andrea Arcangeli
@ 2008-04-02 21:30 ` Andrea Arcangeli
2008-04-02 22:03 ` [ofa-general] " Christoph Lameter
2008-04-02 21:30 ` [PATCH 3 of 8] Move the tlb flushing into free_pgtables. The conversion of the locks Andrea Arcangeli
` (5 subsequent siblings)
7 siblings, 1 reply; 13+ messages in thread
From: Andrea Arcangeli @ 2008-04-02 21:30 UTC (permalink / raw)
To: akpm
Cc: Nick Piggin, Steve Wise, Peter Zijlstra, linux-mm, Kanoj Sarcar,
Roland Dreier, Jack Steiner, linux-kernel, Avi Kivity, kvm-devel,
Robin Holt, general, Christoph Lameter
# HG changeset patch
# User Andrea Arcangeli <andrea@qumranet.com>
# Date 1207159010 -7200
# Node ID fe00cb9deeb31467396370c835cb808f4b85209a
# Parent a406c0cc686d0ca94a4d890d661cdfa48cfba09f
Moves all mmu notifier methods outside the PT lock (first and not last
step to make them sleep capable).
Signed-off-by: Andrea Arcangeli <andrea@qumranet.com>
diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -121,27 +121,6 @@
seqlock_init(&mm->mmu_notifier_lock);
}
-#define ptep_clear_flush_notify(__vma, __address, __ptep) \
-({ \
- pte_t __pte; \
- struct vm_area_struct *___vma = __vma; \
- unsigned long ___address = __address; \
- __pte = ptep_clear_flush(___vma, ___address, __ptep); \
- mmu_notifier_invalidate_page(___vma->vm_mm, ___address); \
- __pte; \
-})
-
-#define ptep_clear_flush_young_notify(__vma, __address, __ptep) \
-({ \
- int __young; \
- struct vm_area_struct *___vma = __vma; \
- unsigned long ___address = __address; \
- __young = ptep_clear_flush_young(___vma, ___address, __ptep); \
- __young |= mmu_notifier_clear_flush_young(___vma->vm_mm, \
- ___address); \
- __young; \
-})
-
#else /* CONFIG_MMU_NOTIFIER */
static inline void mmu_notifier_release(struct mm_struct *mm)
@@ -173,9 +152,6 @@
{
}
-#define ptep_clear_flush_young_notify ptep_clear_flush_young
-#define ptep_clear_flush_notify ptep_clear_flush
-
#endif /* CONFIG_MMU_NOTIFIER */
#endif /* _LINUX_MMU_NOTIFIER_H */
diff --git a/mm/filemap_xip.c b/mm/filemap_xip.c
--- a/mm/filemap_xip.c
+++ b/mm/filemap_xip.c
@@ -194,11 +194,13 @@
if (pte) {
/* Nuke the page table entry. */
flush_cache_page(vma, address, pte_pfn(*pte));
- pteval = ptep_clear_flush_notify(vma, address, pte);
+ pteval = ptep_clear_flush(vma, address, pte);
page_remove_rmap(page, vma);
dec_mm_counter(mm, file_rss);
BUG_ON(pte_dirty(pteval));
pte_unmap_unlock(pte, ptl);
+ /* must invalidate_page _before_ freeing the page */
+ mmu_notifier_invalidate_page(mm, address);
page_cache_release(page);
}
}
diff --git a/mm/memory.c b/mm/memory.c
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1626,9 +1626,10 @@
*/
page_table = pte_offset_map_lock(mm, pmd, address,
&ptl);
- page_cache_release(old_page);
+ new_page = NULL;
if (!pte_same(*page_table, orig_pte))
goto unlock;
+ page_cache_release(old_page);
page_mkwrite = 1;
}
@@ -1644,6 +1645,7 @@
if (ptep_set_access_flags(vma, address, page_table, entry,1))
update_mmu_cache(vma, address, entry);
ret |= VM_FAULT_WRITE;
+ old_page = new_page = NULL;
goto unlock;
}
@@ -1688,7 +1690,7 @@
* seen in the presence of one thread doing SMC and another
* thread doing COW.
*/
- ptep_clear_flush_notify(vma, address, page_table);
+ ptep_clear_flush(vma, address, page_table);
set_pte_at(mm, address, page_table, entry);
update_mmu_cache(vma, address, entry);
lru_cache_add_active(new_page);
@@ -1700,12 +1702,18 @@
} else
mem_cgroup_uncharge_page(new_page);
- if (new_page)
+unlock:
+ pte_unmap_unlock(page_table, ptl);
+
+ if (new_page) {
+ if (new_page == old_page)
+ /* cow happened, notify before releasing old_page */
+ mmu_notifier_invalidate_page(mm, address);
page_cache_release(new_page);
+ }
if (old_page)
page_cache_release(old_page);
-unlock:
- pte_unmap_unlock(page_table, ptl);
+
if (dirty_page) {
if (vma->vm_file)
file_update_time(vma->vm_file);
diff --git a/mm/rmap.c b/mm/rmap.c
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -275,7 +275,7 @@
unsigned long address;
pte_t *pte;
spinlock_t *ptl;
- int referenced = 0;
+ int referenced = 0, clear_flush_young = 0;
address = vma_address(page, vma);
if (address == -EFAULT)
@@ -288,8 +288,11 @@
if (vma->vm_flags & VM_LOCKED) {
referenced++;
*mapcount = 1; /* break early from loop */
- } else if (ptep_clear_flush_young_notify(vma, address, pte))
- referenced++;
+ } else {
+ clear_flush_young = 1;
+ if (ptep_clear_flush_young(vma, address, pte))
+ referenced++;
+ }
/* Pretend the page is referenced if the task has the
swap token and is in the middle of a page fault. */
@@ -299,6 +302,10 @@
(*mapcount)--;
pte_unmap_unlock(pte, ptl);
+
+ if (clear_flush_young)
+ referenced += mmu_notifier_clear_flush_young(mm, address);
+
out:
return referenced;
}
@@ -457,7 +464,7 @@
pte_t entry;
flush_cache_page(vma, address, pte_pfn(*pte));
- entry = ptep_clear_flush_notify(vma, address, pte);
+ entry = ptep_clear_flush(vma, address, pte);
entry = pte_wrprotect(entry);
entry = pte_mkclean(entry);
set_pte_at(mm, address, pte, entry);
@@ -465,6 +472,10 @@
}
pte_unmap_unlock(pte, ptl);
+
+ if (ret)
+ mmu_notifier_invalidate_page(mm, address);
+
out:
return ret;
}
@@ -717,15 +728,14 @@
* If it's recently referenced (perhaps page_referenced
* skipped over this mm) then we should reactivate it.
*/
- if (!migration && ((vma->vm_flags & VM_LOCKED) ||
- (ptep_clear_flush_young_notify(vma, address, pte)))) {
+ if (!migration && (vma->vm_flags & VM_LOCKED)) {
ret = SWAP_FAIL;
goto out_unmap;
}
/* Nuke the page table entry. */
flush_cache_page(vma, address, page_to_pfn(page));
- pteval = ptep_clear_flush_notify(vma, address, pte);
+ pteval = ptep_clear_flush(vma, address, pte);
/* Move the dirty bit to the physical page now the pte is gone. */
if (pte_dirty(pteval))
@@ -780,6 +790,8 @@
out_unmap:
pte_unmap_unlock(pte, ptl);
+ if (ret != SWAP_FAIL)
+ mmu_notifier_invalidate_page(mm, address);
out:
return ret;
}
@@ -818,7 +830,7 @@
spinlock_t *ptl;
struct page *page;
unsigned long address;
- unsigned long end;
+ unsigned long start, end;
address = (vma->vm_start + cursor) & CLUSTER_MASK;
end = address + CLUSTER_SIZE;
@@ -839,6 +851,8 @@
if (!pmd_present(*pmd))
return;
+ start = address;
+ mmu_notifier_invalidate_range_start(mm, start, end);
pte = pte_offset_map_lock(mm, pmd, address, &ptl);
/* Update high watermark before we lower rss */
@@ -850,12 +864,12 @@
page = vm_normal_page(vma, address, *pte);
BUG_ON(!page || PageAnon(page));
- if (ptep_clear_flush_young_notify(vma, address, pte))
+ if (ptep_clear_flush_young(vma, address, pte))
continue;
/* Nuke the page table entry. */
flush_cache_page(vma, address, pte_pfn(*pte));
- pteval = ptep_clear_flush_notify(vma, address, pte);
+ pteval = ptep_clear_flush(vma, address, pte);
/* If nonlinear, store the file page offset in the pte. */
if (page->index != linear_page_index(vma, address))
@@ -871,6 +885,7 @@
(*mapcount)--;
}
pte_unmap_unlock(pte - 1, ptl);
+ mmu_notifier_invalidate_range_end(mm, start, end);
}
static int try_to_unmap_anon(struct page *page, int migration)
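The reason for moving the calls after pte_unmap_unlock() is that, once the
following patches make the notifiers sleep capable, an implementation is free
to block inside the callback. A sketch of such a handler (struct
my_mmu_sleepable and its fields are hypothetical, not part of the series):

#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/mmu_notifier.h>

struct my_mmu_sleepable {
	struct mmu_notifier mn;
	struct mutex lock;		/* protects the secondary page tables */
};

static void my_mmu_invalidate_page(struct mmu_notifier *mn,
				   struct mm_struct *mm,
				   unsigned long address)
{
	struct my_mmu_sleepable *m =
		container_of(mn, struct my_mmu_sleepable, mn);

	/*
	 * mutex_lock() may sleep: legal only because the call site now
	 * runs after pte_unmap_unlock() and before the page is freed.
	 */
	mutex_lock(&m->lock);
	/* ... drop the secondary mapping of "address", flush the device TLB ... */
	mutex_unlock(&m->lock);
}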
* [PATCH 3 of 8] Move the tlb flushing into free_pgtables. The conversion of the locks
2008-04-02 21:30 [ofa-general] [PATCH 0 of 8] mmu notifiers #v10 Andrea Arcangeli
2008-04-02 21:30 ` [ofa-general] [PATCH 1 of 8] Core of mmu notifiers Andrea Arcangeli
2008-04-02 21:30 ` [PATCH 2 of 8] Moves all mmu notifier methods outside the PT lock (first and not last Andrea Arcangeli
@ 2008-04-02 21:30 ` Andrea Arcangeli
2008-04-02 21:30 ` [PATCH 4 of 8] The conversion to a rwsem allows callbacks during rmap traversal Andrea Arcangeli
` (4 subsequent siblings)
7 siblings, 0 replies; 13+ messages in thread
From: Andrea Arcangeli @ 2008-04-02 21:30 UTC (permalink / raw)
To: akpm
Cc: Nick Piggin, Steve Wise, Peter Zijlstra, linux-mm, Kanoj Sarcar,
Roland Dreier, Jack Steiner, linux-kernel, Avi Kivity, kvm-devel,
Robin Holt, general, Christoph Lameter
# HG changeset patch
# User Andrea Arcangeli <andrea@qumranet.com>
# Date 1207159010 -7200
# Node ID d880c227ddf345f5d577839d36d150c37b653bfd
# Parent fe00cb9deeb31467396370c835cb808f4b85209a
Move the tlb flushing into free_pgtables. The conversion of the locks
taken for reverse map scanning would require taking sleeping locks
in free_pgtables(). Moving the tlb flushing into free_pgtables allows
sleeping in parts of free_pgtables().
This means that we do a tlb_finish_mmu() before freeing the page tables.
Strictly speaking there may be no need to do another tlb flush after freeing
the tables, but it's the only way to free a series of page table pages from
the tlb list, and we do not want to call into the page allocator for
performance reasons. Aim9 numbers look okay after this patch.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
diff --git a/include/linux/mm.h b/include/linux/mm.h
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -751,8 +751,8 @@
void *private);
void free_pgd_range(struct mmu_gather **tlb, unsigned long addr,
unsigned long end, unsigned long floor, unsigned long ceiling);
-void free_pgtables(struct mmu_gather **tlb, struct vm_area_struct *start_vma,
- unsigned long floor, unsigned long ceiling);
+void free_pgtables(struct vm_area_struct *start_vma, unsigned long floor,
+ unsigned long ceiling);
int copy_page_range(struct mm_struct *dst, struct mm_struct *src,
struct vm_area_struct *vma);
void unmap_mapping_range(struct address_space *mapping,
diff --git a/mm/memory.c b/mm/memory.c
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -272,9 +272,11 @@
} while (pgd++, addr = next, addr != end);
}
-void free_pgtables(struct mmu_gather **tlb, struct vm_area_struct *vma,
- unsigned long floor, unsigned long ceiling)
+void free_pgtables(struct vm_area_struct *vma, unsigned long floor,
+ unsigned long ceiling)
{
+ struct mmu_gather *tlb;
+
while (vma) {
struct vm_area_struct *next = vma->vm_next;
unsigned long addr = vma->vm_start;
@@ -286,8 +288,10 @@
unlink_file_vma(vma);
if (is_vm_hugetlb_page(vma)) {
- hugetlb_free_pgd_range(tlb, addr, vma->vm_end,
+ tlb = tlb_gather_mmu(vma->vm_mm, 0);
+ hugetlb_free_pgd_range(&tlb, addr, vma->vm_end,
floor, next? next->vm_start: ceiling);
+ tlb_finish_mmu(tlb, addr, vma->vm_end);
} else {
/*
* Optimization: gather nearby vmas into one call down
@@ -299,8 +303,10 @@
anon_vma_unlink(vma);
unlink_file_vma(vma);
}
- free_pgd_range(tlb, addr, vma->vm_end,
+ tlb = tlb_gather_mmu(vma->vm_mm, 0);
+ free_pgd_range(&tlb, addr, vma->vm_end,
floor, next? next->vm_start: ceiling);
+ tlb_finish_mmu(tlb, addr, vma->vm_end);
}
vma = next;
}
diff --git a/mm/mmap.c b/mm/mmap.c
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1751,9 +1751,9 @@
mmu_notifier_invalidate_range_start(mm, start, end);
unmap_vmas(&tlb, vma, start, end, &nr_accounted, NULL);
vm_unacct_memory(nr_accounted);
- free_pgtables(&tlb, vma, prev? prev->vm_end: FIRST_USER_ADDRESS,
+ tlb_finish_mmu(tlb, start, end);
+ free_pgtables(vma, prev? prev->vm_end: FIRST_USER_ADDRESS,
next? next->vm_start: 0);
- tlb_finish_mmu(tlb, start, end);
mmu_notifier_invalidate_range_end(mm, start, end);
}
@@ -2050,8 +2050,8 @@
/* Use -1 here to ensure all VMAs in the mm are unmapped */
end = unmap_vmas(&tlb, vma, 0, -1, &nr_accounted, NULL);
vm_unacct_memory(nr_accounted);
- free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, 0);
tlb_finish_mmu(tlb, 0, end);
+ free_pgtables(vma, FIRST_USER_ADDRESS, 0);
/*
* Walk the list again, actually closing and freeing it,
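Condensed, the non-hugetlb branch of the new free_pgtables() now brackets
each freed region with its own gather/flush cycle. The sketch below is only
an illustration (free_pgtables_one is not a function in the patch); the real
loop above also coalesces adjacent vmas and handles the hugetlb case:

#include <linux/mm.h>
#include <asm/tlb.h>

static void free_pgtables_one(struct vm_area_struct *vma,
			      struct vm_area_struct *next,
			      unsigned long addr,
			      unsigned long floor, unsigned long ceiling)
{
	struct mmu_gather *tlb;

	/*
	 * The mmu_gather batches the freed page-table pages and
	 * tlb_finish_mmu() flushes the TLB and releases them, so the
	 * caller no longer owes a TLB flush after free_pgtables().
	 */
	tlb = tlb_gather_mmu(vma->vm_mm, 0);
	free_pgd_range(&tlb, addr, vma->vm_end,
		       floor, next ? next->vm_start : ceiling);
	tlb_finish_mmu(tlb, addr, vma->vm_end);
}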
* [PATCH 4 of 8] The conversion to a rwsem allows callbacks during rmap traversal
2008-04-02 21:30 [ofa-general] [PATCH 0 of 8] mmu notifiers #v10 Andrea Arcangeli
` (2 preceding siblings ...)
2008-04-02 21:30 ` [PATCH 3 of 8] Move the tlb flushing into free_pgtables. The conversion of the locks Andrea Arcangeli
@ 2008-04-02 21:30 ` Andrea Arcangeli
2008-04-02 21:30 ` [ofa-general] [PATCH 5 of 8] We no longer abort unmapping in unmap vmas because we can reschedule while Andrea Arcangeli
` (3 subsequent siblings)
7 siblings, 0 replies; 13+ messages in thread
From: Andrea Arcangeli @ 2008-04-02 21:30 UTC (permalink / raw)
To: akpm
Cc: Nick Piggin, Steve Wise, Peter Zijlstra, linux-mm, Kanoj Sarcar,
Roland Dreier, Jack Steiner, linux-kernel, Avi Kivity, kvm-devel,
Robin Holt, general, Christoph Lameter
# HG changeset patch
# User Andrea Arcangeli <andrea@qumranet.com>
# Date 1207159011 -7200
# Node ID 3c3787c496cab1fc590ba3f97e7904bdfaab5375
# Parent d880c227ddf345f5d577839d36d150c37b653bfd
The conversion to an rwsem allows callbacks during rmap traversal
for files in a non-atomic context. An rw-style lock also allows concurrent
walking of the reverse map. This is fairly straightforward if one removes
pieces of the resched checking.
[Restarting unmapping is an issue to be discussed.]
This slightly increases Aim9 performance results on an 8p.
Signed-off-by: Andrea Arcangeli <andrea@qumranet.com>
Signed-off-by: Christoph Lameter <clameter@sgi.com>
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -69,7 +69,7 @@
if (!vma_shareable(vma, addr))
return;
- spin_lock(&mapping->i_mmap_lock);
+ down_read(&mapping->i_mmap_sem);
vma_prio_tree_foreach(svma, &iter, &mapping->i_mmap, idx, idx) {
if (svma == vma)
continue;
@@ -94,7 +94,7 @@
put_page(virt_to_page(spte));
spin_unlock(&mm->page_table_lock);
out:
- spin_unlock(&mapping->i_mmap_lock);
+ up_read(&mapping->i_mmap_sem);
}
/*
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -454,10 +454,10 @@
pgoff = offset >> PAGE_SHIFT;
i_size_write(inode, offset);
- spin_lock(&mapping->i_mmap_lock);
+ down_read(&mapping->i_mmap_sem);
if (!prio_tree_empty(&mapping->i_mmap))
hugetlb_vmtruncate_list(&mapping->i_mmap, pgoff);
- spin_unlock(&mapping->i_mmap_lock);
+ up_read(&mapping->i_mmap_sem);
truncate_hugepages(inode, offset);
return 0;
}
diff --git a/fs/inode.c b/fs/inode.c
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -210,7 +210,7 @@
INIT_LIST_HEAD(&inode->i_devices);
INIT_RADIX_TREE(&inode->i_data.page_tree, GFP_ATOMIC);
rwlock_init(&inode->i_data.tree_lock);
- spin_lock_init(&inode->i_data.i_mmap_lock);
+ init_rwsem(&inode->i_data.i_mmap_sem);
INIT_LIST_HEAD(&inode->i_data.private_list);
spin_lock_init(&inode->i_data.private_lock);
INIT_RAW_PRIO_TREE_ROOT(&inode->i_data.i_mmap);
diff --git a/include/linux/fs.h b/include/linux/fs.h
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -503,7 +503,7 @@
unsigned int i_mmap_writable;/* count VM_SHARED mappings */
struct prio_tree_root i_mmap; /* tree of private and shared mappings */
struct list_head i_mmap_nonlinear;/*list VM_NONLINEAR mappings */
- spinlock_t i_mmap_lock; /* protect tree, count, list */
+ struct rw_semaphore i_mmap_sem; /* protect tree, count, list */
unsigned int truncate_count; /* Cover race condition with truncate */
unsigned long nrpages; /* number of total pages */
pgoff_t writeback_index;/* writeback starts here */
diff --git a/include/linux/mm.h b/include/linux/mm.h
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -716,7 +716,7 @@
struct address_space *check_mapping; /* Check page->mapping if set */
pgoff_t first_index; /* Lowest page->index to unmap */
pgoff_t last_index; /* Highest page->index to unmap */
- spinlock_t *i_mmap_lock; /* For unmap_mapping_range: */
+ struct rw_semaphore *i_mmap_sem; /* For unmap_mapping_range: */
unsigned long truncate_count; /* Compare vm_truncate_count */
};
diff --git a/kernel/fork.c b/kernel/fork.c
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -274,12 +274,12 @@
atomic_dec(&inode->i_writecount);
/* insert tmp into the share list, just after mpnt */
- spin_lock(&file->f_mapping->i_mmap_lock);
+ down_write(&file->f_mapping->i_mmap_sem);
tmp->vm_truncate_count = mpnt->vm_truncate_count;
flush_dcache_mmap_lock(file->f_mapping);
vma_prio_tree_add(tmp, mpnt);
flush_dcache_mmap_unlock(file->f_mapping);
- spin_unlock(&file->f_mapping->i_mmap_lock);
+ up_write(&file->f_mapping->i_mmap_sem);
}
/*
diff --git a/mm/filemap.c b/mm/filemap.c
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -61,16 +61,16 @@
/*
* Lock ordering:
*
- * ->i_mmap_lock (vmtruncate)
+ * ->i_mmap_sem (vmtruncate)
* ->private_lock (__free_pte->__set_page_dirty_buffers)
* ->swap_lock (exclusive_swap_page, others)
* ->mapping->tree_lock
*
* ->i_mutex
- * ->i_mmap_lock (truncate->unmap_mapping_range)
+ * ->i_mmap_sem (truncate->unmap_mapping_range)
*
* ->mmap_sem
- * ->i_mmap_lock
+ * ->i_mmap_sem
* ->page_table_lock or pte_lock (various, mainly in memory.c)
* ->mapping->tree_lock (arch-dependent flush_dcache_mmap_lock)
*
@@ -87,7 +87,7 @@
* ->sb_lock (fs/fs-writeback.c)
* ->mapping->tree_lock (__sync_single_inode)
*
- * ->i_mmap_lock
+ * ->i_mmap_sem
* ->anon_vma.lock (vma_adjust)
*
* ->anon_vma.lock
diff --git a/mm/filemap_xip.c b/mm/filemap_xip.c
--- a/mm/filemap_xip.c
+++ b/mm/filemap_xip.c
@@ -184,7 +184,7 @@
if (!page)
return;
- spin_lock(&mapping->i_mmap_lock);
+ down_read(&mapping->i_mmap_sem);
vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
mm = vma->vm_mm;
address = vma->vm_start +
@@ -204,7 +204,7 @@
page_cache_release(page);
}
}
- spin_unlock(&mapping->i_mmap_lock);
+ up_read(&mapping->i_mmap_sem);
}
/*
diff --git a/mm/fremap.c b/mm/fremap.c
--- a/mm/fremap.c
+++ b/mm/fremap.c
@@ -206,13 +206,13 @@
}
goto out;
}
- spin_lock(&mapping->i_mmap_lock);
+ down_write(&mapping->i_mmap_sem);
flush_dcache_mmap_lock(mapping);
vma->vm_flags |= VM_NONLINEAR;
vma_prio_tree_remove(vma, &mapping->i_mmap);
vma_nonlinear_insert(vma, &mapping->i_mmap_nonlinear);
flush_dcache_mmap_unlock(mapping);
- spin_unlock(&mapping->i_mmap_lock);
+ up_write(&mapping->i_mmap_sem);
}
mmu_notifier_invalidate_range_start(mm, start, start + size);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -790,7 +790,7 @@
struct page *page;
struct page *tmp;
/*
- * A page gathering list, protected by per file i_mmap_lock. The
+ * A page gathering list, protected by per file i_mmap_sem. The
* lock is used to avoid list corruption from multiple unmapping
* of the same page since we are using page->lru.
*/
@@ -840,9 +840,9 @@
* do nothing in this case.
*/
if (vma->vm_file) {
- spin_lock(&vma->vm_file->f_mapping->i_mmap_lock);
+ down_write(&vma->vm_file->f_mapping->i_mmap_sem);
__unmap_hugepage_range(vma, start, end);
- spin_unlock(&vma->vm_file->f_mapping->i_mmap_lock);
+ up_write(&vma->vm_file->f_mapping->i_mmap_sem);
}
}
@@ -1085,7 +1085,7 @@
BUG_ON(address >= end);
flush_cache_range(vma, address, end);
- spin_lock(&vma->vm_file->f_mapping->i_mmap_lock);
+ down_write(&vma->vm_file->f_mapping->i_mmap_sem);
spin_lock(&mm->page_table_lock);
for (; address < end; address += HPAGE_SIZE) {
ptep = huge_pte_offset(mm, address);
@@ -1100,7 +1100,7 @@
}
}
spin_unlock(&mm->page_table_lock);
- spin_unlock(&vma->vm_file->f_mapping->i_mmap_lock);
+ up_write(&vma->vm_file->f_mapping->i_mmap_sem);
flush_tlb_range(vma, start, end);
}
diff --git a/mm/memory.c b/mm/memory.c
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -838,7 +838,6 @@
unsigned long tlb_start = 0; /* For tlb_finish_mmu */
int tlb_start_valid = 0;
unsigned long start = start_addr;
- spinlock_t *i_mmap_lock = details? details->i_mmap_lock: NULL;
int fullmm = (*tlbp)->fullmm;
for ( ; vma && vma->vm_start < end_addr; vma = vma->vm_next) {
@@ -875,22 +874,12 @@
}
tlb_finish_mmu(*tlbp, tlb_start, start);
-
- if (need_resched() ||
- (i_mmap_lock && spin_needbreak(i_mmap_lock))) {
- if (i_mmap_lock) {
- *tlbp = NULL;
- goto out;
- }
- cond_resched();
- }
-
+ cond_resched();
*tlbp = tlb_gather_mmu(vma->vm_mm, fullmm);
tlb_start_valid = 0;
zap_work = ZAP_BLOCK_SIZE;
}
}
-out:
return start; /* which is now the end (or restart) address */
}
@@ -1752,7 +1741,7 @@
/*
* Helper functions for unmap_mapping_range().
*
- * __ Notes on dropping i_mmap_lock to reduce latency while unmapping __
+ * __ Notes on dropping i_mmap_sem to reduce latency while unmapping __
*
* We have to restart searching the prio_tree whenever we drop the lock,
* since the iterator is only valid while the lock is held, and anyway
@@ -1771,7 +1760,7 @@
* can't efficiently keep all vmas in step with mapping->truncate_count:
* so instead reset them all whenever it wraps back to 0 (then go to 1).
* mapping->truncate_count and vma->vm_truncate_count are protected by
- * i_mmap_lock.
+ * i_mmap_sem.
*
* In order to make forward progress despite repeatedly restarting some
* large vma, note the restart_addr from unmap_vmas when it breaks out:
@@ -1821,7 +1810,7 @@
restart_addr = zap_page_range(vma, start_addr,
end_addr - start_addr, details);
- need_break = need_resched() || spin_needbreak(details->i_mmap_lock);
+ need_break = need_resched();
if (restart_addr >= end_addr) {
/* We have now completed this vma: mark it so */
@@ -1835,9 +1824,9 @@
goto again;
}
- spin_unlock(details->i_mmap_lock);
+ up_write(details->i_mmap_sem);
cond_resched();
- spin_lock(details->i_mmap_lock);
+ down_write(details->i_mmap_sem);
return -EINTR;
}
@@ -1931,9 +1920,9 @@
details.last_index = hba + hlen - 1;
if (details.last_index < details.first_index)
details.last_index = ULONG_MAX;
- details.i_mmap_lock = &mapping->i_mmap_lock;
+ details.i_mmap_sem = &mapping->i_mmap_sem;
- spin_lock(&mapping->i_mmap_lock);
+ down_write(&mapping->i_mmap_sem);
/* Protect against endless unmapping loops */
mapping->truncate_count++;
@@ -1948,7 +1937,7 @@
unmap_mapping_range_tree(&mapping->i_mmap, &details);
if (unlikely(!list_empty(&mapping->i_mmap_nonlinear)))
unmap_mapping_range_list(&mapping->i_mmap_nonlinear, &details);
- spin_unlock(&mapping->i_mmap_lock);
+ up_write(&mapping->i_mmap_sem);
}
EXPORT_SYMBOL(unmap_mapping_range);
diff --git a/mm/migrate.c b/mm/migrate.c
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -211,12 +211,12 @@
if (!mapping)
return;
- spin_lock(&mapping->i_mmap_lock);
+ down_read(&mapping->i_mmap_sem);
vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff)
remove_migration_pte(vma, old, new);
- spin_unlock(&mapping->i_mmap_lock);
+ up_read(&mapping->i_mmap_sem);
}
/*
diff --git a/mm/mmap.c b/mm/mmap.c
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -187,7 +187,7 @@
}
/*
- * Requires inode->i_mapping->i_mmap_lock
+ * Requires inode->i_mapping->i_mmap_sem
*/
static void __remove_shared_vm_struct(struct vm_area_struct *vma,
struct file *file, struct address_space *mapping)
@@ -215,9 +215,9 @@
if (file) {
struct address_space *mapping = file->f_mapping;
- spin_lock(&mapping->i_mmap_lock);
+ down_write(&mapping->i_mmap_sem);
__remove_shared_vm_struct(vma, file, mapping);
- spin_unlock(&mapping->i_mmap_lock);
+ up_write(&mapping->i_mmap_sem);
}
}
@@ -440,7 +440,7 @@
mapping = vma->vm_file->f_mapping;
if (mapping) {
- spin_lock(&mapping->i_mmap_lock);
+ down_write(&mapping->i_mmap_sem);
vma->vm_truncate_count = mapping->truncate_count;
}
anon_vma_lock(vma);
@@ -450,7 +450,7 @@
anon_vma_unlock(vma);
if (mapping)
- spin_unlock(&mapping->i_mmap_lock);
+ up_write(&mapping->i_mmap_sem);
mm->map_count++;
validate_mm(mm);
@@ -537,7 +537,7 @@
mapping = file->f_mapping;
if (!(vma->vm_flags & VM_NONLINEAR))
root = &mapping->i_mmap;
- spin_lock(&mapping->i_mmap_lock);
+ down_write(&mapping->i_mmap_sem);
if (importer &&
vma->vm_truncate_count != next->vm_truncate_count) {
/*
@@ -621,7 +621,7 @@
if (anon_vma)
spin_unlock(&anon_vma->lock);
if (mapping)
- spin_unlock(&mapping->i_mmap_lock);
+ up_write(&mapping->i_mmap_sem);
if (remove_next) {
if (file)
@@ -2065,7 +2065,7 @@
/* Insert vm structure into process list sorted by address
* and into the inode's i_mmap tree. If vm_file is non-NULL
- * then i_mmap_lock is taken here.
+ * then i_mmap_sem is taken here.
*/
int insert_vm_struct(struct mm_struct * mm, struct vm_area_struct * vma)
{
diff --git a/mm/mremap.c b/mm/mremap.c
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -88,7 +88,7 @@
* and we propagate stale pages into the dst afterward.
*/
mapping = vma->vm_file->f_mapping;
- spin_lock(&mapping->i_mmap_lock);
+ down_write(&mapping->i_mmap_sem);
if (new_vma->vm_truncate_count &&
new_vma->vm_truncate_count != vma->vm_truncate_count)
new_vma->vm_truncate_count = 0;
@@ -120,7 +120,7 @@
pte_unmap_nested(new_pte - 1);
pte_unmap_unlock(old_pte - 1, old_ptl);
if (mapping)
- spin_unlock(&mapping->i_mmap_lock);
+ up_write(&mapping->i_mmap_sem);
mmu_notifier_invalidate_range_end(vma->vm_mm, old_start, old_end);
}
diff --git a/mm/rmap.c b/mm/rmap.c
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -24,7 +24,7 @@
* inode->i_alloc_sem (vmtruncate_range)
* mm->mmap_sem
* page->flags PG_locked (lock_page)
- * mapping->i_mmap_lock
+ * mapping->i_mmap_sem
* anon_vma->lock
* mm->page_table_lock or pte_lock
* zone->lru_lock (in mark_page_accessed, isolate_lru_page)
@@ -373,14 +373,14 @@
* The page lock not only makes sure that page->mapping cannot
* suddenly be NULLified by truncation, it makes sure that the
* structure at mapping cannot be freed and reused yet,
- * so we can safely take mapping->i_mmap_lock.
+ * so we can safely take mapping->i_mmap_sem.
*/
BUG_ON(!PageLocked(page));
- spin_lock(&mapping->i_mmap_lock);
+ down_read(&mapping->i_mmap_sem);
/*
- * i_mmap_lock does not stabilize mapcount at all, but mapcount
+ * i_mmap_sem does not stabilize mapcount at all, but mapcount
* is more likely to be accurate if we note it after spinning.
*/
mapcount = page_mapcount(page);
@@ -403,7 +403,7 @@
break;
}
- spin_unlock(&mapping->i_mmap_lock);
+ up_read(&mapping->i_mmap_sem);
return referenced;
}
@@ -489,12 +489,12 @@
BUG_ON(PageAnon(page));
- spin_lock(&mapping->i_mmap_lock);
+ down_read(&mapping->i_mmap_sem);
vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
if (vma->vm_flags & VM_SHARED)
ret += page_mkclean_one(page, vma);
}
- spin_unlock(&mapping->i_mmap_lock);
+ up_read(&mapping->i_mmap_sem);
return ret;
}
@@ -930,7 +930,7 @@
unsigned long max_nl_size = 0;
unsigned int mapcount;
- spin_lock(&mapping->i_mmap_lock);
+ down_read(&mapping->i_mmap_sem);
vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
ret = try_to_unmap_one(page, vma, migration);
if (ret == SWAP_FAIL || !page_mapped(page))
@@ -967,7 +967,6 @@
mapcount = page_mapcount(page);
if (!mapcount)
goto out;
- cond_resched_lock(&mapping->i_mmap_lock);
max_nl_size = (max_nl_size + CLUSTER_SIZE - 1) & CLUSTER_MASK;
if (max_nl_cursor == 0)
@@ -989,7 +988,6 @@
}
vma->vm_private_data = (void *) max_nl_cursor;
}
- cond_resched_lock(&mapping->i_mmap_lock);
max_nl_cursor += CLUSTER_SIZE;
} while (max_nl_cursor <= max_nl_size);
@@ -1001,7 +999,7 @@
list_for_each_entry(vma, &mapping->i_mmap_nonlinear, shared.vm_set.list)
vma->vm_private_data = NULL;
out:
- spin_unlock(&mapping->i_mmap_lock);
+ up_write(&mapping->i_mmap_sem);
return ret;
}
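The resulting locking rule, in sketch form: rmap walkers that only read the
i_mmap prio tree take the new i_mmap_sem shared (and may now sleep under it),
while paths that modify the tree take it exclusive. The two helpers below are
only illustrations; the lock calls and the iterator are the ones used in the
hunks above:

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/rwsem.h>

/* Reader side: concurrent rmap traversal, sleeping is now allowed inside. */
static void walk_file_rmap_sketch(struct address_space *mapping, pgoff_t pgoff)
{
	struct vm_area_struct *vma;
	struct prio_tree_iter iter;

	down_read(&mapping->i_mmap_sem);
	vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
		/* ... per-vma work such as page_mkclean_one() ... */
	}
	up_read(&mapping->i_mmap_sem);
}

/* Writer side: any insertion into or removal from the i_mmap tree. */
static void drop_from_rmap_sketch(struct address_space *mapping,
				  struct vm_area_struct *vma)
{
	down_write(&mapping->i_mmap_sem);
	vma_prio_tree_remove(vma, &mapping->i_mmap);
	up_write(&mapping->i_mmap_sem);
}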
* [ofa-general] [PATCH 5 of 8] We no longer abort unmapping in unmap vmas because we can reschedule while
2008-04-02 21:30 [ofa-general] [PATCH 0 of 8] mmu notifiers #v10 Andrea Arcangeli
` (3 preceding siblings ...)
2008-04-02 21:30 ` [PATCH 4 of 8] The conversion to a rwsem allows callbacks during rmap traversal Andrea Arcangeli
@ 2008-04-02 21:30 ` Andrea Arcangeli
2008-04-02 21:30 ` [PATCH 6 of 8] Convert the anon_vma spinlock to a rw semaphore. This allows concurrent Andrea Arcangeli
` (2 subsequent siblings)
7 siblings, 0 replies; 13+ messages in thread
From: Andrea Arcangeli @ 2008-04-02 21:30 UTC (permalink / raw)
To: akpm
Cc: Nick Piggin, Peter Zijlstra, linux-mm, Izik Eidus, Kanoj Sarcar,
Roland Dreier, Jack Steiner, linux-kernel, Avi Kivity, kvm-devel,
Robin Holt, general, Christoph Lameter
# HG changeset patch
# User Andrea Arcangeli <andrea@qumranet.com>
# Date 1207159055 -7200
# Node ID 316e5b1e4bf388ef0198c91b3067ed1e4171d7f6
# Parent 3c3787c496cab1fc590ba3f97e7904bdfaab5375
We no longer abort unmapping in unmap_vmas because we can reschedule while
unmapping, since we are holding a semaphore. This allows moving more of the
tlb flushing into unmap_vmas, reducing code in various places.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
diff --git a/include/linux/mm.h b/include/linux/mm.h
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -723,8 +723,7 @@
struct page *vm_normal_page(struct vm_area_struct *, unsigned long, pte_t);
unsigned long zap_page_range(struct vm_area_struct *vma, unsigned long address,
unsigned long size, struct zap_details *);
-unsigned long unmap_vmas(struct mmu_gather **tlb,
- struct vm_area_struct *start_vma, unsigned long start_addr,
+unsigned long unmap_vmas(struct vm_area_struct *start_vma, unsigned long start_addr,
unsigned long end_addr, unsigned long *nr_accounted,
struct zap_details *);
diff --git a/mm/memory.c b/mm/memory.c
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -805,7 +805,6 @@
/**
* unmap_vmas - unmap a range of memory covered by a list of vma's
- * @tlbp: address of the caller's struct mmu_gather
* @vma: the starting vma
* @start_addr: virtual address at which to start unmapping
* @end_addr: virtual address at which to end unmapping
@@ -817,20 +816,13 @@
* Unmap all pages in the vma list.
*
* We aim to not hold locks for too long (for scheduling latency reasons).
- * So zap pages in ZAP_BLOCK_SIZE bytecounts. This means we need to
- * return the ending mmu_gather to the caller.
+ * So zap pages in ZAP_BLOCK_SIZE bytecounts.
*
* Only addresses between `start' and `end' will be unmapped.
*
* The VMA list must be sorted in ascending virtual address order.
- *
- * unmap_vmas() assumes that the caller will flush the whole unmapped address
- * range after unmap_vmas() returns. So the only responsibility here is to
- * ensure that any thus-far unmapped pages are flushed before unmap_vmas()
- * drops the lock and schedules.
*/
-unsigned long unmap_vmas(struct mmu_gather **tlbp,
- struct vm_area_struct *vma, unsigned long start_addr,
+unsigned long unmap_vmas(struct vm_area_struct *vma, unsigned long start_addr,
unsigned long end_addr, unsigned long *nr_accounted,
struct zap_details *details)
{
@@ -838,7 +830,15 @@
unsigned long tlb_start = 0; /* For tlb_finish_mmu */
int tlb_start_valid = 0;
unsigned long start = start_addr;
- int fullmm = (*tlbp)->fullmm;
+ int fullmm;
+ struct mmu_gather *tlb;
+ struct mm_struct *mm = vma->vm_mm;
+
+ mmu_notifier_invalidate_range_start(mm, start_addr, end_addr);
+ lru_add_drain();
+ tlb = tlb_gather_mmu(mm, 0);
+ update_hiwater_rss(mm);
+ fullmm = tlb->fullmm;
for ( ; vma && vma->vm_start < end_addr; vma = vma->vm_next) {
unsigned long end;
@@ -865,7 +865,7 @@
(HPAGE_SIZE / PAGE_SIZE);
start = end;
} else
- start = unmap_page_range(*tlbp, vma,
+ start = unmap_page_range(tlb, vma,
start, end, &zap_work, details);
if (zap_work > 0) {
@@ -873,13 +873,15 @@
break;
}
- tlb_finish_mmu(*tlbp, tlb_start, start);
+ tlb_finish_mmu(tlb, tlb_start, start);
cond_resched();
- *tlbp = tlb_gather_mmu(vma->vm_mm, fullmm);
+ tlb = tlb_gather_mmu(vma->vm_mm, fullmm);
tlb_start_valid = 0;
zap_work = ZAP_BLOCK_SIZE;
}
}
+ tlb_finish_mmu(tlb, start_addr, end_addr);
+ mmu_notifier_invalidate_range_end(mm, start_addr, end_addr);
return start; /* which is now the end (or restart) address */
}
@@ -893,20 +895,10 @@
unsigned long zap_page_range(struct vm_area_struct *vma, unsigned long address,
unsigned long size, struct zap_details *details)
{
- struct mm_struct *mm = vma->vm_mm;
- struct mmu_gather *tlb;
unsigned long end = address + size;
unsigned long nr_accounted = 0;
- lru_add_drain();
- tlb = tlb_gather_mmu(mm, 0);
- update_hiwater_rss(mm);
- mmu_notifier_invalidate_range_start(mm, address, end);
- end = unmap_vmas(&tlb, vma, address, end, &nr_accounted, details);
- mmu_notifier_invalidate_range_end(mm, address, end);
- if (tlb)
- tlb_finish_mmu(tlb, address, end);
- return end;
+ return unmap_vmas(vma, address, end, &nr_accounted, details);
}
/*
diff --git a/mm/mmap.c b/mm/mmap.c
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1742,19 +1742,12 @@
unsigned long start, unsigned long end)
{
struct vm_area_struct *next = prev? prev->vm_next: mm->mmap;
- struct mmu_gather *tlb;
unsigned long nr_accounted = 0;
- lru_add_drain();
- tlb = tlb_gather_mmu(mm, 0);
- update_hiwater_rss(mm);
- mmu_notifier_invalidate_range_start(mm, start, end);
- unmap_vmas(&tlb, vma, start, end, &nr_accounted, NULL);
+ unmap_vmas(vma, start, end, &nr_accounted, NULL);
vm_unacct_memory(nr_accounted);
- tlb_finish_mmu(tlb, start, end);
free_pgtables(vma, prev? prev->vm_end: FIRST_USER_ADDRESS,
next? next->vm_start: 0);
- mmu_notifier_invalidate_range_end(mm, start, end);
}
/*
@@ -2034,7 +2027,6 @@
/* Release all mmaps. */
void exit_mmap(struct mm_struct *mm)
{
- struct mmu_gather *tlb;
struct vm_area_struct *vma = mm->mmap;
unsigned long nr_accounted = 0;
unsigned long end;
@@ -2045,12 +2037,9 @@
lru_add_drain();
flush_cache_mm(mm);
- tlb = tlb_gather_mmu(mm, 1);
- /* Don't update_hiwater_rss(mm) here, do_exit already did */
- /* Use -1 here to ensure all VMAs in the mm are unmapped */
- end = unmap_vmas(&tlb, vma, 0, -1, &nr_accounted, NULL);
+
+ end = unmap_vmas(vma, 0, -1, &nr_accounted, NULL);
vm_unacct_memory(nr_accounted);
- tlb_finish_mmu(tlb, 0, end);
free_pgtables(vma, FIRST_USER_ADDRESS, 0);
/*
* [PATCH 6 of 8] Convert the anon_vma spinlock to a rw semaphore. This allows concurrent
2008-04-02 21:30 [ofa-general] [PATCH 0 of 8] mmu notifiers #v10 Andrea Arcangeli
` (4 preceding siblings ...)
2008-04-02 21:30 ` [ofa-general] [PATCH 5 of 8] We no longer abort unmapping in unmap vmas because we can reschedule while Andrea Arcangeli
@ 2008-04-02 21:30 ` Andrea Arcangeli
2008-04-02 21:30 ` [PATCH 7 of 8] XPMEM would have used sys_madvise() except that madvise_dontneed() Andrea Arcangeli
2008-04-02 21:30 ` [ofa-general] [PATCH 8 of 8] This patch adds a lock ordering rule to avoid a potential deadlock when Andrea Arcangeli
7 siblings, 0 replies; 13+ messages in thread
From: Andrea Arcangeli @ 2008-04-02 21:30 UTC (permalink / raw)
To: akpm
Cc: Nick Piggin, Steve Wise, Peter Zijlstra, linux-mm, Kanoj Sarcar,
Roland Dreier, Jack Steiner, linux-kernel, Avi Kivity, kvm-devel,
Robin Holt, general, Christoph Lameter
# HG changeset patch
# User Andrea Arcangeli <andrea@qumranet.com>
# Date 1207159058 -7200
# Node ID dd918e267ce1d054e8364a53adcecf3c7439cff4
# Parent 316e5b1e4bf388ef0198c91b3067ed1e4171d7f6
Convert the anon_vma spinlock to a rw semaphore. This allows concurrent
traversal of reverse maps for try_to_unmap and page_mkclean. It also
allows the calling of sleeping functions from reverse map traversal.
An additional complication is that rcu is used in some contexts to guarantee
the presence of the anon_vma while we acquire the lock. We cannot take a
semaphore within an rcu critical section. Add a refcount to the anon_vma
structure which allows us to give an existence guarantee for the anon_vma
structure independent of the spinlock or the list contents.
The refcount can then be taken within the RCU section. If it has been
taken successfully then the refcount guarantees the existence of the
anon_vma. The refcount in anon_vma also allows us to fix a nasty
issue in page migration where we fudged by using rcu for a long code
path to guarantee the existence of the anon_vma.
The refcount in general allows a shortening of RCU critical sections, since
we can do an rcu_read_unlock after taking the refcount. This is particularly
relevant if the anon_vma chains contain hundreds of entries.
Issues:
- Atomic overhead increases in situations where a new reference
to the anon_vma has to be established or removed. Overhead also increases
when a speculative reference is used (try_to_unmap,
page_mkclean, page migration). There are also more frequent processor
switches because up_xxx lets waiting tasks run first.
This results in e.g. the Aim9 brk performance test going down by 10-15%.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
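To make the new lookup idiom concrete, here is a minimal sketch (not extra
patch content; it just restates what the page_lock_anon_vma()/
page_unlock_anon_vma() changes below do):

	struct anon_vma *anon_vma;

	/*
	 * grab_anon_vma() takes the refcount inside the RCU read side with
	 * atomic_inc_not_zero(), so the anon_vma cannot be freed under us;
	 * after that we are free to sleep on the rw_semaphore.
	 */
	anon_vma = grab_anon_vma(page);		/* may return NULL */
	if (anon_vma) {
		down_read(&anon_vma->sem);
		/* ... walk anon_vma->head, sleeping allowed ... */
		up_read(&anon_vma->sem);
		put_anon_vma(anon_vma);		/* frees it on the last reference */
	}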
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -25,7 +25,8 @@
* pointing to this anon_vma once its vma list is empty.
*/
struct anon_vma {
- spinlock_t lock; /* Serialize access to vma list */
+ atomic_t refcount; /* vmas on the list */
+ struct rw_semaphore sem;/* Serialize access to vma list */
struct list_head head; /* List of private "related" vmas */
};
@@ -43,18 +44,31 @@
kmem_cache_free(anon_vma_cachep, anon_vma);
}
+struct anon_vma *grab_anon_vma(struct page *page);
+
+static inline void get_anon_vma(struct anon_vma *anon_vma)
+{
+ atomic_inc(&anon_vma->refcount);
+}
+
+static inline void put_anon_vma(struct anon_vma *anon_vma)
+{
+ if (atomic_dec_and_test(&anon_vma->refcount))
+ anon_vma_free(anon_vma);
+}
+
static inline void anon_vma_lock(struct vm_area_struct *vma)
{
struct anon_vma *anon_vma = vma->anon_vma;
if (anon_vma)
- spin_lock(&anon_vma->lock);
+ down_write(&anon_vma->sem);
}
static inline void anon_vma_unlock(struct vm_area_struct *vma)
{
struct anon_vma *anon_vma = vma->anon_vma;
if (anon_vma)
- spin_unlock(&anon_vma->lock);
+ up_write(&anon_vma->sem);
}
/*
diff --git a/mm/migrate.c b/mm/migrate.c
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -235,15 +235,16 @@
return;
/*
- * We hold the mmap_sem lock. So no need to call page_lock_anon_vma.
+ * We hold either the mmap_sem lock or a reference on the
+ * anon_vma. So no need to call page_lock_anon_vma.
*/
anon_vma = (struct anon_vma *) (mapping - PAGE_MAPPING_ANON);
- spin_lock(&anon_vma->lock);
+ down_read(&anon_vma->sem);
list_for_each_entry(vma, &anon_vma->head, anon_vma_node)
remove_migration_pte(vma, old, new);
- spin_unlock(&anon_vma->lock);
+ up_read(&anon_vma->sem);
}
/*
@@ -623,7 +624,7 @@
int rc = 0;
int *result = NULL;
struct page *newpage = get_new_page(page, private, &result);
- int rcu_locked = 0;
+ struct anon_vma *anon_vma = NULL;
int charge = 0;
if (!newpage)
@@ -647,16 +648,14 @@
}
/*
* By try_to_unmap(), page->mapcount goes down to 0 here. In this case,
- * we cannot notice that anon_vma is freed while we migrates a page.
+ * we cannot notice that anon_vma is freed while we migrate a page.
* This rcu_read_lock() delays freeing anon_vma pointer until the end
* of migration. File cache pages are no problem because of page_lock()
* File Caches may use write_page() or lock_page() in migration, then,
* just care Anon page here.
*/
- if (PageAnon(page)) {
- rcu_read_lock();
- rcu_locked = 1;
- }
+ if (PageAnon(page))
+ anon_vma = grab_anon_vma(page);
/*
* Corner case handling:
@@ -674,10 +673,7 @@
if (!PageAnon(page) && PagePrivate(page)) {
/*
* Go direct to try_to_free_buffers() here because
- * a) that's what try_to_release_page() would do anyway
- * b) we may be under rcu_read_lock() here, so we can't
- * use GFP_KERNEL which is what try_to_release_page()
- * needs to be effective.
+ * that's what try_to_release_page() would do anyway
*/
try_to_free_buffers(page);
}
@@ -698,8 +694,8 @@
} else if (charge)
mem_cgroup_end_migration(newpage);
rcu_unlock:
- if (rcu_locked)
- rcu_read_unlock();
+ if (anon_vma)
+ put_anon_vma(anon_vma);
unlock:
diff --git a/mm/mmap.c b/mm/mmap.c
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -565,7 +565,7 @@
if (vma->anon_vma)
anon_vma = vma->anon_vma;
if (anon_vma) {
- spin_lock(&anon_vma->lock);
+ down_write(&anon_vma->sem);
/*
* Easily overlooked: when mprotect shifts the boundary,
* make sure the expanding vma has anon_vma set if the
@@ -619,7 +619,7 @@
}
if (anon_vma)
- spin_unlock(&anon_vma->lock);
+ up_write(&anon_vma->sem);
if (mapping)
up_write(&mapping->i_mmap_sem);
diff --git a/mm/rmap.c b/mm/rmap.c
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -69,7 +69,7 @@
if (anon_vma) {
allocated = NULL;
locked = anon_vma;
- spin_lock(&locked->lock);
+ down_write(&locked->sem);
} else {
anon_vma = anon_vma_alloc();
if (unlikely(!anon_vma))
@@ -81,6 +81,7 @@
/* page_table_lock to protect against threads */
spin_lock(&mm->page_table_lock);
if (likely(!vma->anon_vma)) {
+ get_anon_vma(anon_vma);
vma->anon_vma = anon_vma;
list_add_tail(&vma->anon_vma_node, &anon_vma->head);
allocated = NULL;
@@ -88,7 +89,7 @@
spin_unlock(&mm->page_table_lock);
if (locked)
- spin_unlock(&locked->lock);
+ up_write(&locked->sem);
if (unlikely(allocated))
anon_vma_free(allocated);
}
@@ -99,14 +100,17 @@
{
BUG_ON(vma->anon_vma != next->anon_vma);
list_del(&next->anon_vma_node);
+ put_anon_vma(vma->anon_vma);
}
void __anon_vma_link(struct vm_area_struct *vma)
{
struct anon_vma *anon_vma = vma->anon_vma;
- if (anon_vma)
+ if (anon_vma) {
+ get_anon_vma(anon_vma);
list_add_tail(&vma->anon_vma_node, &anon_vma->head);
+ }
}
void anon_vma_link(struct vm_area_struct *vma)
@@ -114,36 +118,32 @@
struct anon_vma *anon_vma = vma->anon_vma;
if (anon_vma) {
- spin_lock(&anon_vma->lock);
+ get_anon_vma(anon_vma);
+ down_write(&anon_vma->sem);
list_add_tail(&vma->anon_vma_node, &anon_vma->head);
- spin_unlock(&anon_vma->lock);
+ up_write(&anon_vma->sem);
}
}
void anon_vma_unlink(struct vm_area_struct *vma)
{
struct anon_vma *anon_vma = vma->anon_vma;
- int empty;
if (!anon_vma)
return;
- spin_lock(&anon_vma->lock);
+ down_write(&anon_vma->sem);
list_del(&vma->anon_vma_node);
-
- /* We must garbage collect the anon_vma if it's empty */
- empty = list_empty(&anon_vma->head);
- spin_unlock(&anon_vma->lock);
-
- if (empty)
- anon_vma_free(anon_vma);
+ up_write(&anon_vma->sem);
+ put_anon_vma(anon_vma);
}
static void anon_vma_ctor(struct kmem_cache *cachep, void *data)
{
struct anon_vma *anon_vma = data;
- spin_lock_init(&anon_vma->lock);
+ init_rwsem(&anon_vma->sem);
+ atomic_set(&anon_vma->refcount, 0);
INIT_LIST_HEAD(&anon_vma->head);
}
@@ -157,9 +157,9 @@
* Getting a lock on a stable anon_vma from a page off the LRU is
* tricky: page_lock_anon_vma rely on RCU to guard against the races.
*/
-static struct anon_vma *page_lock_anon_vma(struct page *page)
+struct anon_vma *grab_anon_vma(struct page *page)
{
- struct anon_vma *anon_vma;
+ struct anon_vma *anon_vma = NULL;
unsigned long anon_mapping;
rcu_read_lock();
@@ -170,17 +170,26 @@
goto out;
anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
- spin_lock(&anon_vma->lock);
- return anon_vma;
+ if (!atomic_inc_not_zero(&anon_vma->refcount))
+ anon_vma = NULL;
out:
rcu_read_unlock();
- return NULL;
+ return anon_vma;
+}
+
+static struct anon_vma *page_lock_anon_vma(struct page *page)
+{
+ struct anon_vma *anon_vma = grab_anon_vma(page);
+
+ if (anon_vma)
+ down_read(&anon_vma->sem);
+ return anon_vma;
}
static void page_unlock_anon_vma(struct anon_vma *anon_vma)
{
- spin_unlock(&anon_vma->lock);
- rcu_read_unlock();
+ up_read(&anon_vma->sem);
+ put_anon_vma(anon_vma);
}
/*
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH 7 of 8] XPMEM would have used sys_madvise() except that madvise_dontneed()
2008-04-02 21:30 [ofa-general] [PATCH 0 of 8] mmu notifiers #v10 Andrea Arcangeli
` (5 preceding siblings ...)
2008-04-02 21:30 ` [PATCH 6 of 8] Convert the anon_vma spinlock to a rw semaphore. This allows concurrent Andrea Arcangeli
@ 2008-04-02 21:30 ` Andrea Arcangeli
2008-04-02 21:30 ` [ofa-general] [PATCH 8 of 8] This patch adds a lock ordering rule to avoid a potential deadlock when Andrea Arcangeli
7 siblings, 0 replies; 13+ messages in thread
From: Andrea Arcangeli @ 2008-04-02 21:30 UTC (permalink / raw)
To: akpm
Cc: Nick Piggin, Steve Wise, Peter Zijlstra, linux-mm, Kanoj Sarcar,
Roland Dreier, Jack Steiner, linux-kernel, Avi Kivity, kvm-devel,
Robin Holt, general, Christoph Lameter
# HG changeset patch
# User Andrea Arcangeli <andrea@qumranet.com>
# Date 1207159059 -7200
# Node ID 31fc23193bd039cc595fba1ca149a9715f7d0fb2
# Parent dd918e267ce1d054e8364a53adcecf3c7439cff4
XPMEM would have used sys_madvise() except that madvise_dontneed()
returns an -EINVAL if VM_PFNMAP is set, which is always true for the pages
XPMEM imports from other partitions and is also true for uncached pages
allocated locally via the mspec allocator. XPMEM needs zap_page_range()
functionality for these types of pages as well as 'normal' pages.
Signed-off-by: Dean Nelson <dcn@sgi.com>
diff --git a/mm/memory.c b/mm/memory.c
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -900,6 +900,7 @@
return unmap_vmas(vma, address, end, &nr_accounted, details);
}
+EXPORT_SYMBOL_GPL(zap_page_range);
/*
* Do a quick page-table lookup for a single page.
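For illustration, a hypothetical out-of-tree user of the newly exported symbol
could look like the following sketch; the xpmem_zap_region() name is invented,
only zap_page_range() and its (vma, address, size, details) signature come
from the kernel:

#include <linux/mm.h>

/* Hypothetical helper: tear down the PTEs backing an imported region. */
static void xpmem_zap_region(struct vm_area_struct *vma,
			     unsigned long start, unsigned long len)
{
	/*
	 * Works for VM_PFNMAP vmas too, which is exactly why
	 * madvise(MADV_DONTNEED) (rejected there with -EINVAL) cannot be
	 * used instead.
	 */
	zap_page_range(vma, start, len, NULL);
}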
^ permalink raw reply [flat|nested] 13+ messages in thread
* [ofa-general] [PATCH 8 of 8] This patch adds a lock ordering rule to avoid a potential deadlock when
2008-04-02 21:30 [ofa-general] [PATCH 0 of 8] mmu notifiers #v10 Andrea Arcangeli
` (6 preceding siblings ...)
2008-04-02 21:30 ` [PATCH 7 of 8] XPMEM would have used sys_madvise() except that madvise_dontneed() Andrea Arcangeli
@ 2008-04-02 21:30 ` Andrea Arcangeli
7 siblings, 0 replies; 13+ messages in thread
From: Andrea Arcangeli @ 2008-04-02 21:30 UTC (permalink / raw)
To: akpm
Cc: Nick Piggin, Peter Zijlstra, linux-mm, Izik Eidus, Kanoj Sarcar,
Roland Dreier, Jack Steiner, linux-kernel, Avi Kivity, kvm-devel,
Robin Holt, general, Christoph Lameter
# HG changeset patch
# User Andrea Arcangeli <andrea@qumranet.com>
# Date 1207159059 -7200
# Node ID f3f119118b0abd9c4624263ef388dc7230d937fe
# Parent 31fc23193bd039cc595fba1ca149a9715f7d0fb2
This patch adds a lock ordering rule to avoid a potential deadlock when
multiple mmap_sems need to be locked.
Signed-off-by: Dean Nelson <dcn@sgi.com>
diff --git a/mm/filemap.c b/mm/filemap.c
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -79,6 +79,9 @@
*
* ->i_mutex (generic_file_buffered_write)
* ->mmap_sem (fault_in_pages_readable->do_page_fault)
+ *
+ * When taking multiple mmap_sems, one should lock the lowest-addressed
+ * one first, proceeding up to the highest-addressed one.
*
* ->i_mutex
* ->i_alloc_sem (various)
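As a sketch of what the rule means for a hypothetical caller that needs two
mmap_sems at once (the helper name is invented; down_write_nested() is assumed
to be available for the lockdep annotation):

#include <linux/sched.h>
#include <linux/rwsem.h>
#include <linux/lockdep.h>

/* Take both mmap_sems, lowest-addressed mm_struct first, per the rule above. */
static void lock_two_mmap_sems(struct mm_struct *mm1, struct mm_struct *mm2)
{
	if (mm1 == mm2) {
		down_write(&mm1->mmap_sem);
		return;
	}
	if (mm2 < mm1) {
		struct mm_struct *tmp = mm1;
		mm1 = mm2;
		mm2 = tmp;
	}
	down_write(&mm1->mmap_sem);
	/* Second same-class lock: annotate it so lockdep does not complain. */
	down_write_nested(&mm2->mmap_sem, SINGLE_DEPTH_NESTING);
}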
^ permalink raw reply [flat|nested] 13+ messages in thread
* [ofa-general] Re: [PATCH 2 of 8] Moves all mmu notifier methods outside the PT lock (first and not last
2008-04-02 21:30 ` [PATCH 2 of 8] Moves all mmu notifier methods outside the PT lock (first and not last Andrea Arcangeli
@ 2008-04-02 22:03 ` Christoph Lameter
0 siblings, 0 replies; 13+ messages in thread
From: Christoph Lameter @ 2008-04-02 22:03 UTC (permalink / raw)
To: Andrea Arcangeli
Cc: Nick Piggin, Peter Zijlstra, linux-mm, Izik Eidus, Kanoj Sarcar,
Roland Dreier, Jack Steiner, linux-kernel, Avi Kivity, kvm-devel,
Robin Holt, general, akpm
On Wed, 2 Apr 2008, Andrea Arcangeli wrote:
> diff --git a/mm/memory.c b/mm/memory.c
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1626,9 +1626,10 @@
> */
> page_table = pte_offset_map_lock(mm, pmd, address,
> &ptl);
> - page_cache_release(old_page);
> + new_page = NULL;
> if (!pte_same(*page_table, orig_pte))
> goto unlock;
> + page_cache_release(old_page);
>
> page_mkwrite = 1;
> }
This is deferring frees and not moving the callouts. KVM specific? What
exactly is this doing?
A significant portion of this seems to be undoing what the first patch
did.
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 1 of 8] Core of mmu notifiers
2008-04-02 21:30 ` [ofa-general] [PATCH 1 of 8] Core of mmu notifiers Andrea Arcangeli
@ 2008-04-02 22:34 ` Christoph Lameter
2008-04-03 0:42 ` Andrea Arcangeli
0 siblings, 1 reply; 13+ messages in thread
From: Christoph Lameter @ 2008-04-02 22:34 UTC (permalink / raw)
To: Andrea Arcangeli
Cc: Nick Piggin, Steve Wise, Peter Zijlstra, linux-mm, Kanoj Sarcar,
Roland Dreier, Jack Steiner, linux-kernel, Avi Kivity, kvm-devel,
Robin Holt, general, akpm
On Wed, 2 Apr 2008, Andrea Arcangeli wrote:
> + void (*invalidate_page)(struct mmu_notifier *mn,
> + struct mm_struct *mm,
> + unsigned long address);
> +
> + void (*invalidate_range_start)(struct mmu_notifier *mn,
> + struct mm_struct *mm,
> + unsigned long start, unsigned long end);
> + void (*invalidate_range_end)(struct mmu_notifier *mn,
> + struct mm_struct *mm,
> + unsigned long start, unsigned long end);
Still two methods ...
> +void __mmu_notifier_release(struct mm_struct *mm)
> +{
> + struct mmu_notifier *mn;
> + unsigned seq;
> +
> + seq = read_seqbegin(&mm->mmu_notifier_lock);
> + while (unlikely(!hlist_empty(&mm->mmu_notifier_list))) {
> + mn = hlist_entry(mm->mmu_notifier_list.first,
> + struct mmu_notifier,
> + hlist);
> + hlist_del(&mn->hlist);
> + if (mn->ops->release)
> + mn->ops->release(mn, mm);
> + BUG_ON(read_seqretry(&mm->mmu_notifier_lock, seq));
> + }
> +}
seqlock just taken for checking if everything is ok?
> +
> +/*
> + * If no young bitflag is supported by the hardware, ->clear_flush_young can
> + * unmap the address and return 1 or 0 depending if the mapping previously
> + * existed or not.
> + */
> +int __mmu_notifier_clear_flush_young(struct mm_struct *mm,
> + unsigned long address)
> +{
> + struct mmu_notifier *mn;
> + struct hlist_node *n;
> + int young = 0;
> + unsigned seq;
> +
> + seq = read_seqbegin(&mm->mmu_notifier_lock);
> + do {
> + hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_list, hlist) {
> + if (mn->ops->clear_flush_young)
> + young |= mn->ops->clear_flush_young(mn, mm,
> + address);
> + }
> + } while (read_seqretry(&mm->mmu_notifier_lock, seq));
> +
The critical section could be run multiple times for one callback which
could result in multiple callbacks to clear the young bit. Guess not that
big of an issue?
> +void __mmu_notifier_invalidate_page(struct mm_struct *mm,
> + unsigned long address)
> +{
> + struct mmu_notifier *mn;
> + struct hlist_node *n;
> + unsigned seq;
> +
> + seq = read_seqbegin(&mm->mmu_notifier_lock);
> + do {
> + hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_list, hlist) {
> + if (mn->ops->invalidate_page)
> + mn->ops->invalidate_page(mn, mm, address);
> + }
> + } while (read_seqretry(&mm->mmu_notifier_lock, seq));
> +}
Ok. Retry would try to invalidate the page a second time, which is not a
problem unless you drop the refcount or make other state changes
that require correspondence with the mapping. I guess this is the reason
you stopped adding a refcount?
> +void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> + unsigned long start, unsigned long end)
> +{
> + struct mmu_notifier *mn;
> + struct hlist_node *n;
> + unsigned seq;
> +
> + seq = read_seqbegin(&mm->mmu_notifier_lock);
> + do {
> + hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_list, hlist) {
> + if (mn->ops->invalidate_range_start)
> + mn->ops->invalidate_range_start(mn, mm,
> + start, end);
> + }
> + } while (read_seqretry(&mm->mmu_notifier_lock, seq));
> +}
Multiple invalidate_range_starts on the same range? This means the driver
needs to be able to deal with the situation and ignore the repeated
call?
> +void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> + unsigned long start, unsigned long end)
> +{
> + struct mmu_notifier *mn;
> + struct hlist_node *n;
> + unsigned seq;
> +
> + seq = read_seqbegin(&mm->mmu_notifier_lock);
> + do {
> + hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_list, hlist) {
> + if (mn->ops->invalidate_range_end)
> + mn->ops->invalidate_range_end(mn, mm,
> + start, end);
> + }
> + } while (read_seqretry(&mm->mmu_notifier_lock, seq));
> +}
Retry can lead to multiple invalidate_range callbacks with the same
parameters? Driver needs to ignore if the range is already clear?
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 1 of 8] Core of mmu notifiers
2008-04-02 22:34 ` Christoph Lameter
@ 2008-04-03 0:42 ` Andrea Arcangeli
2008-04-03 1:03 ` Christoph Lameter
0 siblings, 1 reply; 13+ messages in thread
From: Andrea Arcangeli @ 2008-04-03 0:42 UTC (permalink / raw)
To: Christoph Lameter
Cc: Nick Piggin, Steve Wise, Peter Zijlstra, linux-mm, Kanoj Sarcar,
Roland Dreier, Jack Steiner, linux-kernel, Avi Kivity, kvm-devel,
Robin Holt, general, akpm
On Wed, Apr 02, 2008 at 03:34:01PM -0700, Christoph Lameter wrote:
> Still two methods ...
Yes, invalidate_page is called with the core VM holding a
reference on the page, _after_ the tlb flush. invalidate_range_end is
called after the page has been freed already and after the tlb
flush. They have different semantics, and with invalidate_page there's no
need to block the kvm fault handler. But invalidate_page is the most
efficient only for operations that aren't creating holes in the
vma; for the rest, invalidate_range_start/end provides the best
performance by reducing the number of tlb flushes.
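To make the two flavours concrete, a secondary-MMU driver sketch wiring up
only the callbacks quoted earlier might look like this (the kvmish_* names are
invented; the ops members and their signatures are the ones from patch 1):

static void kvmish_invalidate_page(struct mmu_notifier *mn,
				   struct mm_struct *mm,
				   unsigned long address)
{
	/*
	 * Single page, already unmapped and flushed by the core VM, which
	 * still holds a reference: drop the corresponding spte, no need to
	 * stall secondary faults.
	 */
}

static void kvmish_invalidate_range_start(struct mmu_notifier *mn,
					  struct mm_struct *mm,
					  unsigned long start,
					  unsigned long end)
{
	/*
	 * Range teardown begins: shoot down sptes for [start, end) and hold
	 * off new secondary faults in that range until ..._range_end.
	 */
}

static void kvmish_invalidate_range_end(struct mmu_notifier *mn,
					struct mm_struct *mm,
					unsigned long start,
					unsigned long end)
{
	/* Primary ptes and tlb entries are gone; secondary faults may resume. */
}

static struct mmu_notifier_ops kvmish_ops = {
	.invalidate_page	= kvmish_invalidate_page,
	.invalidate_range_start	= kvmish_invalidate_range_start,
	.invalidate_range_end	= kvmish_invalidate_range_end,
};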
> seqlock just taken for checking if everything is ok?
Exactly.
> The critical section could be run multiple times for one callback which
> could result in multiple callbacks to clear the young bit. Guess not that
> big of an issue?
Yes, that's ok.
> Ok. Retry would try to invalidate the page a second time which is not a
> problem unless you would drop the refcount or make other state changes
> that require correspondence with mapping. I guess this is the reason
> that you stopped adding a refcount?
The current patch using mmu notifiers is already robust against
multiple invalidates. The refcounting represents an spte mapping: if we
already invalidated it, the spte will be non-present and there's no
page to unpin. The removal of the refcount is only a
micro-optimization.
> Multiple invalidate_range_starts on the same range? This means the driver
> needs to be able to deal with the situation and ignore the repeated
> call?
The driver would need to store current->pid in a list in range_start and
remove it in range_end, and range_end would need to do nothing at all if
the pid isn't found in the list.
But thinking about it more, I'm not convinced the driver is safe simply by
ignoring the case where range_end runs before range_start (pid not found
in the list), and I don't see a clear way to fix it either internally to
the device driver or externally. So the repeated call is easy for the
driver to handle. What is not trivial is blocking the secondary page
faults when mmu_notifier_register happens in the middle of a
range_start/end critical section: sptes can be established between
range_start and range_end, and that shouldn't happen. So the core problem
remains how to handle mmu_notifier_register happening in the middle of
range_start/range_end; dismissing it as a job for the driver doesn't seem
feasible (you have the same problem with EMM of course).
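A sketch of that bookkeeping, with every name invented for illustration (only
current->pid and the notifier callback signatures come from the patches):

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/slab.h>
#include <linux/sched.h>

struct inflight_range {
	struct list_head list;
	pid_t pid;
};

static LIST_HEAD(inflight_list);
static DEFINE_SPINLOCK(inflight_lock);

static void drv_range_start(struct mmu_notifier *mn, struct mm_struct *mm,
			    unsigned long start, unsigned long end)
{
	struct inflight_range *r = kmalloc(sizeof(*r), GFP_ATOMIC);

	if (!r)
		return;		/* sketch: real code must not fail silently */
	r->pid = current->pid;
	spin_lock(&inflight_lock);
	list_add(&r->list, &inflight_list);
	spin_unlock(&inflight_lock);
	/* ... block secondary faults for [start, end) ... */
}

static void drv_range_end(struct mmu_notifier *mn, struct mm_struct *mm,
			  unsigned long start, unsigned long end)
{
	struct inflight_range *r, *found = NULL;

	spin_lock(&inflight_lock);
	list_for_each_entry(r, &inflight_list, list) {
		if (r->pid == current->pid) {
			list_del(&r->list);
			found = r;
			break;
		}
	}
	spin_unlock(&inflight_lock);
	if (!found)
		return;		/* repeated/unmatched range_end: ignore */
	kfree(found);
	/* ... allow secondary faults in the range again ... */
}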
> Retry can lead to multiple invalidate_range callbacks with the same
> parameters? Driver needs to ignore if the range is already clear?
Mostly covered above.
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 1 of 8] Core of mmu notifiers
2008-04-03 0:42 ` Andrea Arcangeli
@ 2008-04-03 1:03 ` Christoph Lameter
0 siblings, 0 replies; 13+ messages in thread
From: Christoph Lameter @ 2008-04-03 1:03 UTC (permalink / raw)
To: Andrea Arcangeli
Cc: Nick Piggin, Steve Wise, Peter Zijlstra, linux-mm, Kanoj Sarcar,
Roland Dreier, Jack Steiner, linux-kernel, Avi Kivity, kvm-devel,
Robin Holt, general, akpm
Thinking about this adventurous locking some more: I think you are
misunderstanding what a seqlock is. It is *not* a spinlock.
The critical read section, with its reading of a version before and after,
gives you access to a certain version of memory as it is or was some
time ago (caching effect). It does not mean that the current state of
memory is fixed, and neither does it provide synchronization when an item
is added to the list.
So it could be that you are traversing a list that is missing one item
because it is not yet visible to this processor.
You may just be seeing a state from the past. I would think that you will
need a real lock in order to get the desired effect.
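For reference, the usual seqlock pattern looks like this (generic kernel
idiom, not code from these patches); the read side only detects that a writer
ran and retries, it never excludes the writer:

#include <linux/seqlock.h>

static DEFINE_SEQLOCK(demo_lock);
static unsigned long demo_lo, demo_hi;

static void demo_update(unsigned long lo, unsigned long hi)
{
	write_seqlock(&demo_lock);	/* writers exclude each other */
	demo_lo = lo;
	demo_hi = hi;
	write_sequnlock(&demo_lock);
}

static unsigned long demo_snapshot(void)
{
	unsigned long lo, hi;
	unsigned seq;

	do {
		seq = read_seqbegin(&demo_lock);
		lo = demo_lo;
		hi = demo_hi;
	} while (read_seqretry(&demo_lock, seq));
	/* lo/hi are a consistent pair, but possibly already stale. */
	return hi - lo;
}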
^ permalink raw reply [flat|nested] 13+ messages in thread
end of thread, other threads:[~2008-04-03 1:03 UTC | newest]
Thread overview: 13+ messages
-- links below jump to the message on this page --
2008-04-02 21:30 [ofa-general] [PATCH 0 of 8] mmu notifiers #v10 Andrea Arcangeli
2008-04-02 21:30 ` [ofa-general] [PATCH 1 of 8] Core of mmu notifiers Andrea Arcangeli
2008-04-02 22:34 ` Christoph Lameter
2008-04-03 0:42 ` Andrea Arcangeli
2008-04-03 1:03 ` Christoph Lameter
2008-04-02 21:30 ` [PATCH 2 of 8] Moves all mmu notifier methods outside the PT lock (first and not last Andrea Arcangeli
2008-04-02 22:03 ` [ofa-general] " Christoph Lameter
2008-04-02 21:30 ` [PATCH 3 of 8] Move the tlb flushing into free_pgtables. The conversion of the locks Andrea Arcangeli
2008-04-02 21:30 ` [PATCH 4 of 8] The conversion to a rwsem allows callbacks during rmap traversal Andrea Arcangeli
2008-04-02 21:30 ` [ofa-general] [PATCH 5 of 8] We no longer abort unmapping in unmap vmas because we can reschedule while Andrea Arcangeli
2008-04-02 21:30 ` [PATCH 6 of 8] Convert the anon_vma spinlock to a rw semaphore. This allows concurrent Andrea Arcangeli
2008-04-02 21:30 ` [PATCH 7 of 8] XPMEM would have used sys_madvise() except that madvise_dontneed() Andrea Arcangeli
2008-04-02 21:30 ` [ofa-general] [PATCH 8 of 8] This patch adds a lock ordering rule to avoid a potential deadlock when Andrea Arcangeli