* [RFC][PATCH 0/4] on do_page_fault() and *copy*_inatomic
@ 2006-10-19 10:17 Peter Zijlstra
2006-10-19 10:17 ` [RFC][PATCH 1/4] mm: arch do_page_fault() vs in_atomic() Peter Zijlstra
` (3 more replies)
0 siblings, 4 replies; 6+ messages in thread
From: Peter Zijlstra @ 2006-10-19 10:17 UTC (permalink / raw)
To: linux-arch, linux-kernel, linux-mm
Cc: Nick Piggin, Andrew Morton, Peter Zijlstra
In light of the recent work on fault handlers and generic_file_buffered_write(),
I've gone over some of the arch-specific code that supports this work.
The following four patches are the result...
Peter
* [RFC][PATCH 1/4] mm: arch do_page_fault() vs in_atomic()
2006-10-19 10:17 [RFC][PATCH 0/4] on do_page_fault() and *copy*_inatomic Peter Zijlstra
@ 2006-10-19 10:17 ` Peter Zijlstra
2006-10-19 10:54 ` Nick Piggin
2006-10-19 10:17 ` [RFC][PATCH 2/4] mm: pagefault_{disable,enable}() Peter Zijlstra
` (2 subsequent siblings)
3 siblings, 1 reply; 6+ messages in thread
From: Peter Zijlstra @ 2006-10-19 10:17 UTC (permalink / raw)
To: linux-arch, linux-kernel, linux-mm
Cc: Nick Piggin, Andrew Morton, Peter Zijlstra
[-- Attachment #1: inatomic_do_page_fault.patch --]
[-- Type: text/plain, Size: 4883 bytes --]
In light of the recent pagefault and filemap_copy_from_user work I've
gone through all the arch pagefault handlers to make sure the
inc_preempt_count() 'feature' works as expected.
Several sections of code (including the new filemap_copy_from_user) rely
on the fact that faults do not take locks under increased preempt count.
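For illustration, the check every handler ends up with looks like the
sketch below; the exact surrounding code differs per architecture, see
the diffs:

	/*
	 * in_interrupt() only reflects the hardirq/softirq counts, so
	 * it misses the preempt count raised by kmap_atomic() and the
	 * atomic usercopy helpers.  in_atomic() sees those too, making
	 * the handler bail straight to the exception fixup path
	 * instead of taking mmap_sem.
	 */
	if (in_atomic() || !mm)
		goto no_context;

The per-arch audit: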
arch/x86_64 - good
arch/powerpc - good
arch/cris - fixed
arch/i386 - good
arch/parisc - fixed
arch/sh - good
arch/sparc - good
arch/s390 - good
arch/m68k - fixed
arch/ppc - good
arch/alpha - fixed
arch/mips - good
arch/sparc64 - good
arch/ia64 - good
arch/arm - fixed
arch/um - NA
arch/avr32 - good
arch/h8300 - NA
arch/m32r - good
arch/v850 - good
arch/frv - fixed
arch/m68knommu - NA
arch/arm26 - fixed
arch/sh64 - fixed
arch/xtensa - good
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
arch/alpha/mm/fault.c | 2 +-
arch/arm/mm/fault.c | 2 +-
arch/arm26/mm/fault.c | 2 +-
arch/cris/mm/fault.c | 2 +-
arch/frv/mm/fault.c | 2 +-
arch/m68k/mm/fault.c | 2 +-
arch/parisc/mm/fault.c | 2 +-
arch/sh64/mm/fault.c | 2 +-
8 files changed, 8 insertions(+), 8 deletions(-)
Index: linux-2.6/arch/alpha/mm/fault.c
===================================================================
--- linux-2.6.orig/arch/alpha/mm/fault.c
+++ linux-2.6/arch/alpha/mm/fault.c
@@ -108,7 +108,7 @@ do_page_fault(unsigned long address, uns
/* If we're in an interrupt context, or have no user context,
we must not take the fault. */
- if (!mm || in_interrupt())
+ if (!mm || in_atomic())
goto no_context;
#ifdef CONFIG_ALPHA_LARGE_VMALLOC
Index: linux-2.6/arch/arm/mm/fault.c
===================================================================
--- linux-2.6.orig/arch/arm/mm/fault.c
+++ linux-2.6/arch/arm/mm/fault.c
@@ -230,7 +230,7 @@ do_page_fault(unsigned long addr, unsign
* If we're in an interrupt or have no user
* context, we must not take the fault..
*/
- if (in_interrupt() || !mm)
+ if (in_atomic() || !mm)
goto no_context;
/*
Index: linux-2.6/arch/arm26/mm/fault.c
===================================================================
--- linux-2.6.orig/arch/arm26/mm/fault.c
+++ linux-2.6/arch/arm26/mm/fault.c
@@ -215,7 +215,7 @@ int do_page_fault(unsigned long addr, un
* If we're in an interrupt or have no user
* context, we must not take the fault..
*/
- if (in_interrupt() || !mm)
+ if (in_atomic() || !mm)
goto no_context;
down_read(&mm->mmap_sem);
Index: linux-2.6/arch/cris/mm/fault.c
===================================================================
--- linux-2.6.orig/arch/cris/mm/fault.c
+++ linux-2.6/arch/cris/mm/fault.c
@@ -232,7 +232,7 @@ do_page_fault(unsigned long address, str
* context, we must not take the fault..
*/
- if (in_interrupt() || !mm)
+ if (in_atomic() || !mm)
goto no_context;
down_read(&mm->mmap_sem);
Index: linux-2.6/arch/frv/mm/fault.c
===================================================================
--- linux-2.6.orig/arch/frv/mm/fault.c
+++ linux-2.6/arch/frv/mm/fault.c
@@ -78,7 +78,7 @@ asmlinkage void do_page_fault(int datamm
* If we're in an interrupt or have no user
* context, we must not take the fault..
*/
- if (in_interrupt() || !mm)
+ if (in_atomic() || !mm)
goto no_context;
down_read(&mm->mmap_sem);
Index: linux-2.6/arch/m68k/mm/fault.c
===================================================================
--- linux-2.6.orig/arch/m68k/mm/fault.c
+++ linux-2.6/arch/m68k/mm/fault.c
@@ -99,7 +99,7 @@ int do_page_fault(struct pt_regs *regs,
* If we're in an interrupt or have no user
* context, we must not take the fault..
*/
- if (in_interrupt() || !mm)
+ if (in_atomic() || !mm)
goto no_context;
down_read(&mm->mmap_sem);
Index: linux-2.6/arch/parisc/mm/fault.c
===================================================================
--- linux-2.6.orig/arch/parisc/mm/fault.c
+++ linux-2.6/arch/parisc/mm/fault.c
@@ -152,7 +152,7 @@ void do_page_fault(struct pt_regs *regs,
const struct exception_table_entry *fix;
unsigned long acc_type;
- if (in_interrupt() || !mm)
+ if (in_atomic() || !mm)
goto no_context;
down_read(&mm->mmap_sem);
Index: linux-2.6/arch/sh64/mm/fault.c
===================================================================
--- linux-2.6.orig/arch/sh64/mm/fault.c
+++ linux-2.6/arch/sh64/mm/fault.c
@@ -154,7 +154,7 @@ asmlinkage void do_page_fault(struct pt_
* If we're in an interrupt or have no user
* context, we must not take the fault..
*/
- if (in_interrupt() || !mm)
+ if (in_atomic() || !mm)
goto no_context;
/* TLB misses upon some cache flushes get done under cli() */
* [RFC][PATCH 2/4] mm: pagefault_{disable,enable}()
2006-10-19 10:17 [RFC][PATCH 0/4] on do_page_fault() and *copy*_inatomic Peter Zijlstra
2006-10-19 10:17 ` [RFC][PATCH 1/4] mm: arch do_page_fault() vs in_atomic() Peter Zijlstra
@ 2006-10-19 10:17 ` Peter Zijlstra
2006-10-19 10:17 ` [RFC][PATCH 3/4] mm: k{,um}map_atomic() vs in_atomic() Peter Zijlstra
2006-10-19 10:17 ` [RFC][PATCH 4/4] mm: move pagefault_{disable,enable}() into __copy_{to,from}_user_inatomic() Peter Zijlstra
3 siblings, 0 replies; 6+ messages in thread
From: Peter Zijlstra @ 2006-10-19 10:17 UTC (permalink / raw)
To: linux-arch, linux-kernel, linux-mm
Cc: Nick Piggin, Andrew Morton, Peter Zijlstra
[-- Attachment #1: pagefault_disable.patch --]
[-- Type: text/plain, Size: 16258 bytes --]
Introduce pagefault_{disable,enable}() and use these where previously
we did manual preempt increments/decrements to make the pagefault handler
do the atomic thing.
Currently they still rely on the increased preempt count, but not on
preemption actually being disabled; that side effect might go away in the
future.
(NOTE: the extra barrier() in pagefault_disable might fix some holes on
machines which have too many registers for their own good)
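As a usage sketch -- this mirrors the get_futex_value_locked() conversion
in the patch below -- a caller that must not sleep brackets its atomic
user access like so:

	pagefault_disable();	/* faults now go straight to the fixup table */
	ret = __copy_from_user_inatomic(dest, from, sizeof(u32));
	pagefault_enable();	/* may also run a deferred reschedule */

	return ret ? -EFAULT : 0;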
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
arch/frv/kernel/futex.c | 4 ++--
arch/i386/mm/highmem.c | 10 ++++------
arch/mips/mm/highmem.c | 10 ++++------
arch/s390/lib/uaccess_std.c | 4 ++--
arch/sparc/mm/highmem.c | 8 +++-----
include/asm-frv/highmem.h | 5 ++---
include/asm-generic/futex.h | 4 ++--
include/asm-i386/futex.h | 4 ++--
include/asm-ia64/futex.h | 4 ++--
include/asm-mips/futex.h | 4 ++--
include/asm-parisc/futex.h | 4 ++--
include/asm-powerpc/futex.h | 4 ++--
include/asm-ppc/highmem.h | 8 +++-----
include/asm-sparc64/futex.h | 4 ++--
include/asm-x86_64/futex.h | 4 ++--
include/linux/uaccess.h | 39 +++++++++++++++++++++++++++++++++++++--
kernel/futex.c | 28 ++++++++++++++--------------
17 files changed, 87 insertions(+), 61 deletions(-)
Index: linux-2.6/arch/frv/kernel/futex.c
===================================================================
--- linux-2.6.orig/arch/frv/kernel/futex.c
+++ linux-2.6/arch/frv/kernel/futex.c
@@ -200,7 +200,7 @@ int futex_atomic_op_inuser(int encoded_o
if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
return -EFAULT;
- inc_preempt_count();
+ pagefault_disable();
switch (op) {
case FUTEX_OP_SET:
@@ -223,7 +223,7 @@ int futex_atomic_op_inuser(int encoded_o
break;
}
- dec_preempt_count();
+ pagefault_enable();
if (!ret) {
switch (cmp) {
Index: linux-2.6/arch/i386/mm/highmem.c
===================================================================
--- linux-2.6.orig/arch/i386/mm/highmem.c
+++ linux-2.6/arch/i386/mm/highmem.c
@@ -32,7 +32,7 @@ void *kmap_atomic(struct page *page, enu
unsigned long vaddr;
/* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */
- inc_preempt_count();
+ pagefault_disable();
if (!PageHighMem(page))
return page_address(page);
@@ -52,8 +52,7 @@ void kunmap_atomic(void *kvaddr, enum km
#ifdef CONFIG_DEBUG_HIGHMEM
if (vaddr >= PAGE_OFFSET && vaddr < (unsigned long)high_memory) {
- dec_preempt_count();
- preempt_check_resched();
+ pagefault_enable();
return;
}
@@ -68,8 +67,7 @@ void kunmap_atomic(void *kvaddr, enum km
*/
kpte_clear_flush(kmap_pte-idx, vaddr);
- dec_preempt_count();
- preempt_check_resched();
+ pagefault_enable();
}
/* This is the same as kmap_atomic() but can map memory that doesn't
@@ -80,7 +78,7 @@ void *kmap_atomic_pfn(unsigned long pfn,
enum fixed_addresses idx;
unsigned long vaddr;
- inc_preempt_count();
+ pagefault_disable();
idx = type + KM_TYPE_NR*smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
Index: linux-2.6/arch/mips/mm/highmem.c
===================================================================
--- linux-2.6.orig/arch/mips/mm/highmem.c
+++ linux-2.6/arch/mips/mm/highmem.c
@@ -39,7 +39,7 @@ void *__kmap_atomic(struct page *page, e
unsigned long vaddr;
/* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */
- inc_preempt_count();
+ pagefault_disable();
if (!PageHighMem(page))
return page_address(page);
@@ -62,8 +62,7 @@ void __kunmap_atomic(void *kvaddr, enum
enum fixed_addresses idx = type + KM_TYPE_NR*smp_processor_id();
if (vaddr < FIXADDR_START) { // FIXME
- dec_preempt_count();
- preempt_check_resched();
+ pagefault_enable();
return;
}
@@ -78,8 +77,7 @@ void __kunmap_atomic(void *kvaddr, enum
local_flush_tlb_one(vaddr);
#endif
- dec_preempt_count();
- preempt_check_resched();
+ pagefault_enable();
}
#ifndef CONFIG_LIMITED_DMA
@@ -92,7 +90,7 @@ void *kmap_atomic_pfn(unsigned long pfn,
enum fixed_addresses idx;
unsigned long vaddr;
- inc_preempt_count();
+ pagefault_disable();
idx = type + KM_TYPE_NR*smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
Index: linux-2.6/arch/s390/lib/uaccess_std.c
===================================================================
--- linux-2.6.orig/arch/s390/lib/uaccess_std.c
+++ linux-2.6/arch/s390/lib/uaccess_std.c
@@ -295,7 +295,7 @@ int futex_atomic_op(int op, int __user *
{
int oldval = 0, newval, ret;
- inc_preempt_count();
+ pagefault_disable();
switch (op) {
case FUTEX_OP_SET:
@@ -321,7 +321,7 @@ int futex_atomic_op(int op, int __user *
default:
ret = -ENOSYS;
}
- dec_preempt_count();
+ pagefault_enable();
*old = oldval;
return ret;
}
Index: linux-2.6/arch/sparc/mm/highmem.c
===================================================================
--- linux-2.6.orig/arch/sparc/mm/highmem.c
+++ linux-2.6/arch/sparc/mm/highmem.c
@@ -35,7 +35,7 @@ void *kmap_atomic(struct page *page, enu
unsigned long vaddr;
/* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */
- inc_preempt_count();
+ pagefault_disable();
if (!PageHighMem(page))
return page_address(page);
@@ -70,8 +70,7 @@ void kunmap_atomic(void *kvaddr, enum km
unsigned long idx = type + KM_TYPE_NR*smp_processor_id();
if (vaddr < FIXADDR_START) { // FIXME
- dec_preempt_count();
- preempt_check_resched();
+ pagefault_enable();
return;
}
@@ -97,8 +96,7 @@ void kunmap_atomic(void *kvaddr, enum km
#endif
#endif
- dec_preempt_count();
- preempt_check_resched();
+ pagefault_enable();
}
/* We may be fed a pagetable here by ptep_to_xxx and others. */
Index: linux-2.6/include/asm-frv/highmem.h
===================================================================
--- linux-2.6.orig/include/asm-frv/highmem.h
+++ linux-2.6/include/asm-frv/highmem.h
@@ -115,7 +115,7 @@ static inline void *kmap_atomic(struct p
{
unsigned long paddr;
- inc_preempt_count();
+ pagefault_disable();
paddr = page_to_phys(page);
switch (type) {
@@ -170,8 +170,7 @@ static inline void kunmap_atomic(void *k
default:
BUG();
}
- dec_preempt_count();
- preempt_check_resched();
+ pagefault_enable();
}
#endif /* !__ASSEMBLY__ */
Index: linux-2.6/include/asm-generic/futex.h
===================================================================
--- linux-2.6.orig/include/asm-generic/futex.h
+++ linux-2.6/include/asm-generic/futex.h
@@ -21,7 +21,7 @@ futex_atomic_op_inuser (int encoded_op,
if (! access_ok (VERIFY_WRITE, uaddr, sizeof(int)))
return -EFAULT;
- inc_preempt_count();
+ pagefault_disable();
switch (op) {
case FUTEX_OP_SET:
@@ -33,7 +33,7 @@ futex_atomic_op_inuser (int encoded_op,
ret = -ENOSYS;
}
- dec_preempt_count();
+ pagefault_enable();
if (!ret) {
switch (cmp) {
Index: linux-2.6/include/asm-i386/futex.h
===================================================================
--- linux-2.6.orig/include/asm-i386/futex.h
+++ linux-2.6/include/asm-i386/futex.h
@@ -56,7 +56,7 @@ futex_atomic_op_inuser (int encoded_op,
if (! access_ok (VERIFY_WRITE, uaddr, sizeof(int)))
return -EFAULT;
- inc_preempt_count();
+ pagefault_disable();
if (op == FUTEX_OP_SET)
__futex_atomic_op1("xchgl %0, %2", ret, oldval, uaddr, oparg);
@@ -88,7 +88,7 @@ futex_atomic_op_inuser (int encoded_op,
}
}
- dec_preempt_count();
+ pagefault_enable();
if (!ret) {
switch (cmp) {
Index: linux-2.6/include/asm-ia64/futex.h
===================================================================
--- linux-2.6.orig/include/asm-ia64/futex.h
+++ linux-2.6/include/asm-ia64/futex.h
@@ -59,7 +59,7 @@ futex_atomic_op_inuser (int encoded_op,
if (! access_ok (VERIFY_WRITE, uaddr, sizeof(int)))
return -EFAULT;
- inc_preempt_count();
+ pagefault_disable();
switch (op) {
case FUTEX_OP_SET:
@@ -83,7 +83,7 @@ futex_atomic_op_inuser (int encoded_op,
ret = -ENOSYS;
}
- dec_preempt_count();
+ pagefault_enable();
if (!ret) {
switch (cmp) {
Index: linux-2.6/include/asm-mips/futex.h
===================================================================
--- linux-2.6.orig/include/asm-mips/futex.h
+++ linux-2.6/include/asm-mips/futex.h
@@ -86,7 +86,7 @@ futex_atomic_op_inuser (int encoded_op,
if (! access_ok (VERIFY_WRITE, uaddr, sizeof(int)))
return -EFAULT;
- inc_preempt_count();
+ pagefault_disable();
switch (op) {
case FUTEX_OP_SET:
@@ -113,7 +113,7 @@ futex_atomic_op_inuser (int encoded_op,
ret = -ENOSYS;
}
- dec_preempt_count();
+ pagefault_enable();
if (!ret) {
switch (cmp) {
Index: linux-2.6/include/asm-parisc/futex.h
===================================================================
--- linux-2.6.orig/include/asm-parisc/futex.h
+++ linux-2.6/include/asm-parisc/futex.h
@@ -21,7 +21,7 @@ futex_atomic_op_inuser (int encoded_op,
if (! access_ok (VERIFY_WRITE, uaddr, sizeof(int)))
return -EFAULT;
- inc_preempt_count();
+ pagefault_disable();
switch (op) {
case FUTEX_OP_SET:
@@ -33,7 +33,7 @@ futex_atomic_op_inuser (int encoded_op,
ret = -ENOSYS;
}
- dec_preempt_count();
+ pagefault_enable();
if (!ret) {
switch (cmp) {
Index: linux-2.6/include/asm-powerpc/futex.h
===================================================================
--- linux-2.6.orig/include/asm-powerpc/futex.h
+++ linux-2.6/include/asm-powerpc/futex.h
@@ -43,7 +43,7 @@ static inline int futex_atomic_op_inuser
if (! access_ok (VERIFY_WRITE, uaddr, sizeof(int)))
return -EFAULT;
- inc_preempt_count();
+ pagefault_disable();
switch (op) {
case FUTEX_OP_SET:
@@ -65,7 +65,7 @@ static inline int futex_atomic_op_inuser
ret = -ENOSYS;
}
- dec_preempt_count();
+ pagefault_enable();
if (!ret) {
switch (cmp) {
Index: linux-2.6/include/asm-ppc/highmem.h
===================================================================
--- linux-2.6.orig/include/asm-ppc/highmem.h
+++ linux-2.6/include/asm-ppc/highmem.h
@@ -79,7 +79,7 @@ static inline void *kmap_atomic(struct p
unsigned long vaddr;
/* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */
- inc_preempt_count();
+ pagefault_disable();
if (!PageHighMem(page))
return page_address(page);
@@ -101,8 +101,7 @@ static inline void kunmap_atomic(void *k
unsigned int idx = type + KM_TYPE_NR*smp_processor_id();
if (vaddr < KMAP_FIX_BEGIN) { // FIXME
- dec_preempt_count();
- preempt_check_resched();
+ pagefault_enable();
return;
}
@@ -115,8 +114,7 @@ static inline void kunmap_atomic(void *k
pte_clear(&init_mm, vaddr, kmap_pte+idx);
flush_tlb_page(NULL, vaddr);
#endif
- dec_preempt_count();
- preempt_check_resched();
+ pagefault_enable();
}
static inline struct page *kmap_atomic_to_page(void *ptr)
Index: linux-2.6/include/asm-sparc64/futex.h
===================================================================
--- linux-2.6.orig/include/asm-sparc64/futex.h
+++ linux-2.6/include/asm-sparc64/futex.h
@@ -45,7 +45,7 @@ static inline int futex_atomic_op_inuser
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
- inc_preempt_count();
+ pagefault_disable();
switch (op) {
case FUTEX_OP_SET:
@@ -67,7 +67,7 @@ static inline int futex_atomic_op_inuser
ret = -ENOSYS;
}
- dec_preempt_count();
+ pagefault_enable();
if (!ret) {
switch (cmp) {
Index: linux-2.6/include/asm-x86_64/futex.h
===================================================================
--- linux-2.6.orig/include/asm-x86_64/futex.h
+++ linux-2.6/include/asm-x86_64/futex.h
@@ -55,7 +55,7 @@ futex_atomic_op_inuser (int encoded_op,
if (! access_ok (VERIFY_WRITE, uaddr, sizeof(int)))
return -EFAULT;
- inc_preempt_count();
+ pagefault_disable();
switch (op) {
case FUTEX_OP_SET:
@@ -78,7 +78,7 @@ futex_atomic_op_inuser (int encoded_op,
ret = -ENOSYS;
}
- dec_preempt_count();
+ pagefault_enable();
if (!ret) {
switch (cmp) {
Index: linux-2.6/include/linux/uaccess.h
===================================================================
--- linux-2.6.orig/include/linux/uaccess.h
+++ linux-2.6/include/linux/uaccess.h
@@ -1,8 +1,43 @@
#ifndef __LINUX_UACCESS_H__
#define __LINUX_UACCESS_H__
+#include <linux/preempt.h>
#include <asm/uaccess.h>
+/*
+ * These routines enable/disable the pagefault handler in that
+ * it will not take any locks and go straight to the fixup table.
+ *
+ * They have great resemblance to the preempt_disable/enable calls
+ * and in fact they are identical; this is because currently there is
+ * no other way to make the pagefault handlers do this. So we do
+ * disable preemption but we don't necessarily care about that.
+ */
+static inline void pagefault_disable(void)
+{
+ inc_preempt_count();
+ /*
+ * make sure to have issued the store before a pagefault
+ * can hit.
+ */
+ barrier();
+}
+
+static inline void pagefault_enable(void)
+{
+ /*
+ * make sure to issue those last loads/stores before enabling
+ * the pagefault handler again.
+ */
+ barrier();
+ dec_preempt_count();
+ /*
+ * make sure we do..
+ */
+ barrier();
+ preempt_check_resched();
+}
+
#ifndef ARCH_HAS_NOCACHE_UACCESS
static inline unsigned long __copy_from_user_inatomic_nocache(void *to,
@@ -35,9 +70,9 @@ static inline unsigned long __copy_from_
({ \
long ret; \
\
- inc_preempt_count(); \
+ pagefault_disable(); \
ret = __get_user(retval, addr); \
- dec_preempt_count(); \
+ pagefault_enable(); \
ret; \
})
Index: linux-2.6/kernel/futex.c
===================================================================
--- linux-2.6.orig/kernel/futex.c
+++ linux-2.6/kernel/futex.c
@@ -282,9 +282,9 @@ static inline int get_futex_value_locked
{
int ret;
- inc_preempt_count();
+ pagefault_disable();
ret = __copy_from_user_inatomic(dest, from, sizeof(u32));
- dec_preempt_count();
+ pagefault_enable();
return ret ? -EFAULT : 0;
}
@@ -585,9 +585,9 @@ static int wake_futex_pi(u32 __user *uad
if (!(uval & FUTEX_OWNER_DIED)) {
newval = FUTEX_WAITERS | new_owner->pid;
- inc_preempt_count();
+ pagefault_disable();
curval = futex_atomic_cmpxchg_inatomic(uaddr, uval, newval);
- dec_preempt_count();
+ pagefault_enable();
if (curval == -EFAULT)
return -EFAULT;
if (curval != uval)
@@ -618,9 +618,9 @@ static int unlock_futex_pi(u32 __user *u
* There is no waiter, so we unlock the futex. The owner died
* bit has not to be preserved here. We are the owner:
*/
- inc_preempt_count();
+ pagefault_disable();
oldval = futex_atomic_cmpxchg_inatomic(uaddr, uval, 0);
- dec_preempt_count();
+ pagefault_enable();
if (oldval == -EFAULT)
return oldval;
@@ -1158,9 +1158,9 @@ static int futex_lock_pi(u32 __user *uad
*/
newval = current->pid;
- inc_preempt_count();
+ pagefault_disable();
curval = futex_atomic_cmpxchg_inatomic(uaddr, 0, newval);
- dec_preempt_count();
+ pagefault_enable();
if (unlikely(curval == -EFAULT))
goto uaddr_faulted;
@@ -1183,9 +1183,9 @@ static int futex_lock_pi(u32 __user *uad
uval = curval;
newval = uval | FUTEX_WAITERS;
- inc_preempt_count();
+ pagefault_disable();
curval = futex_atomic_cmpxchg_inatomic(uaddr, uval, newval);
- dec_preempt_count();
+ pagefault_enable();
if (unlikely(curval == -EFAULT))
goto uaddr_faulted;
@@ -1215,10 +1215,10 @@ static int futex_lock_pi(u32 __user *uad
newval = current->pid |
FUTEX_OWNER_DIED | FUTEX_WAITERS;
- inc_preempt_count();
+ pagefault_disable();
curval = futex_atomic_cmpxchg_inatomic(uaddr,
uval, newval);
- dec_preempt_count();
+ pagefault_enable();
if (unlikely(curval == -EFAULT))
goto uaddr_faulted;
@@ -1390,9 +1390,9 @@ retry_locked:
* anyone else up:
*/
if (!(uval & FUTEX_OWNER_DIED)) {
- inc_preempt_count();
+ pagefault_disable();
uval = futex_atomic_cmpxchg_inatomic(uaddr, current->pid, 0);
- dec_preempt_count();
+ pagefault_enable();
}
if (unlikely(uval == -EFAULT))
* [RFC][PATCH 3/4] mm: k{,um}map_atomic() vs in_atomic()
2006-10-19 10:17 [RFC][PATCH 0/4] on do_page_fault() and *copy*_inatomic Peter Zijlstra
2006-10-19 10:17 ` [RFC][PATCH 1/4] mm: arch do_page_fault() vs in_atomic() Peter Zijlstra
2006-10-19 10:17 ` [RFC][PATCH 2/4] mm: pagefault_{disable,enable}() Peter Zijlstra
@ 2006-10-19 10:17 ` Peter Zijlstra
2006-10-19 10:17 ` [RFC][PATCH 4/4] mm: move pagefault_{disable,enable}() into __copy_{to,from}_user_inatomic() Peter Zijlstra
3 siblings, 0 replies; 6+ messages in thread
From: Peter Zijlstra @ 2006-10-19 10:17 UTC (permalink / raw)
To: linux-arch, linux-kernel, linux-mm
Cc: Nick Piggin, Andrew Morton, Peter Zijlstra
[-- Attachment #1: kmap_atomic_generic.patch --]
[-- Type: text/plain, Size: 2361 bytes --]
Make kmap_atomic()/kunmap_atomic() denote a pagefault-disabled scope. All
non-trivial implementations already do this anyway.
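That means a sequence like the one below -- a sketch of the pattern used
by the pagecache copy helpers, with illustrative variable names -- is
atomic by construction, even on configurations where kmap_atomic() would
otherwise compile down to a plain page_address():

	char *kaddr = kmap_atomic(page, KM_USER0);	/* disables pagefaults */
	left = __copy_from_user_inatomic(kaddr + offset, buf, bytes);
	kunmap_atomic(kaddr, KM_USER0);			/* re-enables them */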
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
include/asm-mips/highmem.h | 10 ++++++++--
include/linux/highmem.h | 8 +++++---
2 files changed, 13 insertions(+), 5 deletions(-)
Index: linux-2.6/include/asm-mips/highmem.h
===================================================================
--- linux-2.6.orig/include/asm-mips/highmem.h
+++ linux-2.6/include/asm-mips/highmem.h
@@ -21,6 +21,7 @@
#include <linux/init.h>
#include <linux/interrupt.h>
+#include <linux/uaccess.h>
#include <asm/kmap_types.h>
/* undef for production */
@@ -70,11 +71,16 @@ static inline void *kmap(struct page *pa
static inline void *kmap_atomic(struct page *page, enum km_type type)
{
+ pagefault_disable();
return page_address(page);
}
-static inline void kunmap_atomic(void *kvaddr, enum km_type type) { }
-#define kmap_atomic_pfn(pfn, idx) page_address(pfn_to_page(pfn))
+static inline void kunmap_atomic(void *kvaddr, enum km_type type)
+{
+ pagefault_enable();
+}
+
+#define kmap_atomic_pfn(pfn, idx) kmap_atomic(pfn_to_page(pfn), (idx))
#define kmap_atomic_to_page(ptr) virt_to_page(ptr)
Index: linux-2.6/include/linux/highmem.h
===================================================================
--- linux-2.6.orig/include/linux/highmem.h
+++ linux-2.6/include/linux/highmem.h
@@ -3,6 +3,7 @@
#include <linux/fs.h>
#include <linux/mm.h>
+#include <linux/uaccess.h>
#include <asm/cacheflush.h>
@@ -41,9 +42,10 @@ static inline void *kmap(struct page *pa
#define kunmap(page) do { (void) (page); } while (0)
-#define kmap_atomic(page, idx) page_address(page)
-#define kunmap_atomic(addr, idx) do { } while (0)
-#define kmap_atomic_pfn(pfn, idx) page_address(pfn_to_page(pfn))
+#define kmap_atomic(page, idx) \
+ ({ pagefault_disable(); page_address(page); })
+#define kunmap_atomic(addr, idx) do { pagefault_enable(); } while (0)
+#define kmap_atomic_pfn(pfn, idx) kmap_atomic(pfn_to_page(pfn), (idx))
#define kmap_atomic_to_page(ptr) virt_to_page(ptr)
#endif
* [RFC][PATCH 4/4] mm: move pagefault_{disable,enable}() into __copy_{to,from}_user_inatomic()
2006-10-19 10:17 [RFC][PATCH 0/4] on do_page_fault() and *copy*_inatomic Peter Zijlstra
` (2 preceding siblings ...)
2006-10-19 10:17 ` [RFC][PATCH 3/4] mm: k{,um}map_atomic() vs in_atomic() Peter Zijlstra
@ 2006-10-19 10:17 ` Peter Zijlstra
3 siblings, 0 replies; 6+ messages in thread
From: Peter Zijlstra @ 2006-10-19 10:17 UTC (permalink / raw)
To: linux-arch, linux-kernel, linux-mm
Cc: Nick Piggin, Andrew Morton, Peter Zijlstra
[-- Attachment #1: copy_from_user_inatomic.patch --]
[-- Type: text/plain, Size: 32491 bytes --]
Move the pagefault_{disable,enable}() calls into
__copy_{to,from}_user_inatomic().
This breaks NTFS.
Also, if we take the previous patch, where k{,un}map_atomic() creates an
atomic scope, this patch is not really needed, since all calls to the
_inatomic copies are made from within a k{,un}map_atomic() section or from
interrupt context (except for NTFS).
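The net effect at a call site like get_futex_value_locked() -- converted
at the end of this patch -- is:

	/* before: explicit bracketing around the copy */
	pagefault_disable();
	ret = __copy_from_user_inatomic(dest, from, sizeof(u32));
	pagefault_enable();

	/* after: the _inatomic variant disables pagefaults itself */
	ret = __copy_from_user_inatomic(dest, from, sizeof(u32));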
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
arch/x86_64/lib/copy_user.S | 2 +-
include/asm-alpha/uaccess.h | 20 ++++++++++++++++++--
include/asm-arm/uaccess.h | 21 +++++++++++++++++++--
include/asm-arm26/uaccess.h | 21 +++++++++++++++++++--
include/asm-avr32/uaccess.h | 21 +++++++++++++++++++--
include/asm-cris/uaccess.h | 23 +++++++++++++++++++++--
include/asm-frv/uaccess.h | 33 +++++++++++++++++++++++++++------
include/asm-h8300/uaccess.h | 22 ++++++++++++++++++++--
include/asm-i386/uaccess.h | 36 +++++++++++++++++++++++-------------
include/asm-ia64/uaccess.h | 22 ++++++++++++++++++++--
include/asm-m32r/uaccess.h | 21 +++++++++++++++++++--
include/asm-m68k/uaccess.h | 21 +++++++++++++++++++--
include/asm-m68knommu/uaccess.h | 22 ++++++++++++++++++++--
include/asm-mips/uaccess.h | 21 +++++++++++++++++++--
include/asm-parisc/uaccess.h | 22 ++++++++++++++++++++--
include/asm-powerpc/uaccess.h | 28 ++++++++++++++++++++++++----
include/asm-s390/uaccess.h | 21 +++++++++++++++++++--
include/asm-sh/uaccess.h | 20 ++++++++++++++++++--
include/asm-sh64/uaccess.h | 21 +++++++++++++++++++--
include/asm-sparc/uaccess.h | 21 +++++++++++++++++++--
include/asm-sparc64/uaccess.h | 22 ++++++++++++++++++++--
include/asm-um/uaccess.h | 21 +++++++++++++++++++--
include/asm-v850/uaccess.h | 21 +++++++++++++++++++--
include/asm-x86_64/uaccess.h | 18 ++++++++++++++++--
include/asm-xtensa/uaccess.h | 21 +++++++++++++++++++--
include/linux/uaccess.h | 3 ++-
kernel/futex.c | 2 --
27 files changed, 478 insertions(+), 69 deletions(-)
Index: linux-2.6/arch/x86_64/lib/copy_user.S
===================================================================
--- linux-2.6.orig/arch/x86_64/lib/copy_user.S
+++ linux-2.6/arch/x86_64/lib/copy_user.S
@@ -52,7 +52,7 @@ ENTRY(copy_user_generic)
ALTERNATIVE_JUMP X86_FEATURE_REP_GOOD,copy_user_generic_unrolled,copy_user_generic_string
CFI_ENDPROC
-ENTRY(__copy_from_user_inatomic)
+ENTRY(____copy_from_user_inatomic)
CFI_STARTPROC
xorl %ecx,%ecx /* clear zero flag */
ALTERNATIVE_JUMP X86_FEATURE_REP_GOOD,copy_user_generic_unrolled,copy_user_generic_string
Index: linux-2.6/include/asm-alpha/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-alpha/uaccess.h
+++ linux-2.6/include/asm-alpha/uaccess.h
@@ -390,9 +390,25 @@ __copy_tofrom_user(void *to, const void
__copy_tofrom_user_nocheck((to),(__force void *)(from),(n)); \
})
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
extern inline long
copy_to_user(void __user *to, const void *from, long n)
Index: linux-2.6/include/asm-arm/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-arm/uaccess.h
+++ linux-2.6/include/asm-arm/uaccess.h
@@ -411,8 +411,25 @@ static inline unsigned long copy_to_user
return n;
}
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
static inline unsigned long clear_user(void __user *to, unsigned long n)
{
Index: linux-2.6/include/asm-arm26/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-arm26/uaccess.h
+++ linux-2.6/include/asm-arm26/uaccess.h
@@ -247,8 +247,25 @@ static __inline__ unsigned long __copy_t
return n;
}
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
static __inline__ unsigned long clear_user (void *to, unsigned long n)
{
Index: linux-2.6/include/asm-avr32/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-avr32/uaccess.h
+++ linux-2.6/include/asm-avr32/uaccess.h
@@ -95,8 +95,25 @@ static inline __kernel_size_t __copy_fro
return __copy_user(to, (const void __force *)from, n);
}
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
/*
* put_user: - Write a simple value into user space.
Index: linux-2.6/include/asm-cris/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-cris/uaccess.h
+++ linux-2.6/include/asm-cris/uaccess.h
@@ -428,8 +428,27 @@ __generic_clear_user_nocheck(void *to, u
#define __copy_to_user(to,from,n) __generic_copy_to_user_nocheck((to),(from),(n))
#define __copy_from_user(to,from,n) __generic_copy_from_user_nocheck((to),(from),(n))
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
#define __clear_user(to,n) __generic_clear_user_nocheck((to),(n))
#define strlen_user(str) strnlen_user((str), 0x7ffffffe)
Index: linux-2.6/include/asm-frv/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-frv/uaccess.h
+++ linux-2.6/include/asm-frv/uaccess.h
@@ -266,29 +266,50 @@ extern long __memset_user(void *dst, uns
extern long __memcpy_user(void *dst, const void *src, unsigned long count);
#define clear_user(dst,count) __memset_user(____force(dst), (count))
-#define __copy_from_user_inatomic(to, from, n) __memcpy_user((to), ____force(from), (n))
-#define __copy_to_user_inatomic(to, from, n) __memcpy_user(____force(to), (from), (n))
+#define ____copy_from_user(to, from, n) __memcpy_user((to), ____force(from), (n))
+#define ____copy_to_user(to, from, n) __memcpy_user(____force(to), (from), (n))
#else
#define clear_user(dst,count) (memset(____force(dst), 0, (count)), 0)
-#define __copy_from_user_inatomic(to, from, n) (memcpy((to), ____force(from), (n)), 0)
-#define __copy_to_user_inatomic(to, from, n) (memcpy(____force(to), (from), (n)), 0)
+#define ____copy_from_user(to, from, n) (memcpy((to), ____force(from), (n)), 0)
+#define ____copy_to_user(to, from, n) (memcpy(____force(to), (from), (n)), 0)
#endif
+
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = ____copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = ____copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
static inline unsigned long __must_check
__copy_to_user(void __user *to, const void *from, unsigned long n)
{
might_sleep();
- return __copy_to_user_inatomic(to, from, n);
+ return ____copy_to_user(to, from, n);
}
static inline unsigned long
__copy_from_user(void *to, const void __user *from, unsigned long n)
{
might_sleep();
- return __copy_from_user_inatomic(to, from, n);
+ return ____copy_from_user(to, from, n);
}
static inline long copy_from_user(void *to, const void __user *from, unsigned long n)
Index: linux-2.6/include/asm-h8300/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-h8300/uaccess.h
+++ linux-2.6/include/asm-h8300/uaccess.h
@@ -118,8 +118,26 @@ extern int __get_user_bad(void);
#define __copy_from_user(to, from, n) copy_from_user(to, from, n)
#define __copy_to_user(to, from, n) copy_to_user(to, from, n)
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
#define copy_to_user_ret(to,from,n,retval) ({ if (copy_to_user(to,from,n)) return retval; })
Index: linux-2.6/include/asm-i386/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-i386/uaccess.h
+++ linux-2.6/include/asm-i386/uaccess.h
@@ -407,22 +407,25 @@ unsigned long __must_check __copy_from_u
static __always_inline unsigned long __must_check
__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
{
+ unsigned long ret;
+ pagefault_disable();
if (__builtin_constant_p(n)) {
- unsigned long ret;
-
switch (n) {
case 1:
__put_user_size(*(u8 *)from, (u8 __user *)to, 1, ret, 1);
- return ret;
+ goto out;
case 2:
__put_user_size(*(u16 *)from, (u16 __user *)to, 2, ret, 2);
- return ret;
+ goto out;
case 4:
__put_user_size(*(u32 *)from, (u32 __user *)to, 4, ret, 4);
- return ret;
+ goto out;
}
}
- return __copy_to_user_ll(to, from, n);
+ ret = __copy_to_user_ll(to, from, n);
+out:
+ pagefault_enable();
+ return ret;
}
/**
@@ -449,27 +452,30 @@ __copy_to_user(void __user *to, const vo
static __always_inline unsigned long
__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
{
+ unsigned long ret;
+ pagefault_disable();
/* Avoid zeroing the tail if the copy fails..
* If 'n' is constant and 1, 2, or 4, we do still zero on a failure,
* but as the zeroing behaviour is only significant when n is not
* constant, that shouldn't be a problem.
*/
if (__builtin_constant_p(n)) {
- unsigned long ret;
-
switch (n) {
case 1:
__get_user_size(*(u8 *)to, from, 1, ret, 1);
- return ret;
+ goto out;
case 2:
__get_user_size(*(u16 *)to, from, 2, ret, 2);
- return ret;
+ goto out;
case 4:
__get_user_size(*(u32 *)to, from, 4, ret, 4);
- return ret;
+ goto out;
}
}
- return __copy_from_user_ll_nozero(to, from, n);
+ ret = __copy_from_user_ll_nozero(to, from, n);
+out:
+ pagefault_enable();
+ return ret;
}
/**
@@ -543,7 +549,11 @@ static __always_inline unsigned long __c
static __always_inline unsigned long
__copy_from_user_inatomic_nocache(void *to, const void __user *from, unsigned long n)
{
- return __copy_from_user_ll_nocache_nozero(to, from, n);
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user_ll_nocache_nozero(to, from, n);
+ pagefault_enable();
+ return ret;
}
unsigned long __must_check copy_to_user(void __user *to,
Index: linux-2.6/include/asm-ia64/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-ia64/uaccess.h
+++ linux-2.6/include/asm-ia64/uaccess.h
@@ -249,8 +249,26 @@ __copy_from_user (void *to, const void _
return __copy_user((__force void __user *) to, from, count);
}
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
#define copy_to_user(to, from, n) \
({ \
void __user *__cu_to = (to); \
Index: linux-2.6/include/asm-m32r/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-m32r/uaccess.h
+++ linux-2.6/include/asm-m32r/uaccess.h
@@ -579,8 +579,25 @@ unsigned long __generic_copy_from_user(v
#define __copy_to_user(to,from,n) \
__generic_copy_to_user_nocheck((to),(from),(n))
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
/**
* copy_to_user: - Copy a block of data into user space.
Index: linux-2.6/include/asm-m68k/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-m68k/uaccess.h
+++ linux-2.6/include/asm-m68k/uaccess.h
@@ -352,8 +352,25 @@ __constant_copy_to_user(void __user *to,
__constant_copy_to_user(to, from, n) : \
__generic_copy_to_user(to, from, n))
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
#define copy_from_user(to, from, n) __copy_from_user(to, from, n)
#define copy_to_user(to, from, n) __copy_to_user(to, from, n)
Index: linux-2.6/include/asm-m68knommu/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-m68knommu/uaccess.h
+++ linux-2.6/include/asm-m68knommu/uaccess.h
@@ -129,8 +129,26 @@ extern int __get_user_bad(void);
#define __copy_from_user(to, from, n) copy_from_user(to, from, n)
#define __copy_to_user(to, from, n) copy_to_user(to, from, n)
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
#define copy_to_user_ret(to,from,n,retval) ({ if (copy_to_user(to,from,n)) return retval; })
Index: linux-2.6/include/asm-mips/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-mips/uaccess.h
+++ linux-2.6/include/asm-mips/uaccess.h
@@ -432,8 +432,25 @@ extern size_t __copy_user(void *__to, co
__cu_len; \
})
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
/*
* copy_to_user: - Copy a block of data into user space.
Index: linux-2.6/include/asm-parisc/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-parisc/uaccess.h
+++ linux-2.6/include/asm-parisc/uaccess.h
@@ -282,7 +282,25 @@ unsigned long copy_from_user(void *dst,
#define __copy_from_user copy_from_user
unsigned long copy_in_user(void __user *dst, const void __user *src, unsigned long len);
#define __copy_in_user copy_in_user
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
#endif /* __PARISC_UACCESS_H */
Index: linux-2.6/include/asm-powerpc/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-powerpc/uaccess.h
+++ linux-2.6/include/asm-powerpc/uaccess.h
@@ -348,7 +348,7 @@ extern unsigned long copy_in_user(void _
#endif /* __powerpc64__ */
-static inline unsigned long __copy_from_user_inatomic(void *to,
+static inline unsigned long ____copy_from_user(void *to,
const void __user *from, unsigned long n)
{
if (__builtin_constant_p(n) && (n <= 8)) {
@@ -374,7 +374,7 @@ static inline unsigned long __copy_from_
return __copy_tofrom_user((__force void __user *)to, from, n);
}
-static inline unsigned long __copy_to_user_inatomic(void __user *to,
+static inline unsigned long ____copy_to_user(void __user *to,
const void *from, unsigned long n)
{
if (__builtin_constant_p(n) && (n <= 8)) {
@@ -400,18 +400,38 @@ static inline unsigned long __copy_to_us
return __copy_tofrom_user(to, (__force const void __user *)from, n);
}
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = ____copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = ____copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
static inline unsigned long __copy_from_user(void *to,
const void __user *from, unsigned long size)
{
might_sleep();
- return __copy_from_user_inatomic(to, from, size);
+ return ____copy_from_user(to, from, size);
}
static inline unsigned long __copy_to_user(void __user *to,
const void *from, unsigned long size)
{
might_sleep();
- return __copy_to_user_inatomic(to, from, size);
+ return ____copy_to_user(to, from, size);
}
extern unsigned long __clear_user(void __user *addr, unsigned long size);
Index: linux-2.6/include/asm-s390/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-s390/uaccess.h
+++ linux-2.6/include/asm-s390/uaccess.h
@@ -210,8 +210,25 @@ __copy_to_user(void __user *to, const vo
return uaccess.copy_to_user(n, to, from);
}
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
/**
* copy_to_user: - Copy a block of data into user space.
Index: linux-2.6/include/asm-sh/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-sh/uaccess.h
+++ linux-2.6/include/asm-sh/uaccess.h
@@ -424,9 +424,25 @@ __copy_res; })
__copy_user((void *)(to), \
(void *)(from), n)
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
#define copy_from_user(to,from,n) ({ \
void *__copy_to = (void *) (to); \
Index: linux-2.6/include/asm-sh64/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-sh64/uaccess.h
+++ linux-2.6/include/asm-sh64/uaccess.h
@@ -251,8 +251,25 @@ if (__copy_from_user(to,from,n)) \
return retval; \
})
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
/* XXX: Not sure it works well..
should be such that: 4byte clear and the rest. */
Index: linux-2.6/include/asm-sparc/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-sparc/uaccess.h
+++ linux-2.6/include/asm-sparc/uaccess.h
@@ -271,8 +271,25 @@ static inline unsigned long __copy_from_
return __copy_user((__force void __user *) to, from, n);
}
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
static inline unsigned long __clear_user(void __user *addr, unsigned long size)
{
Index: linux-2.6/include/asm-sparc64/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-sparc64/uaccess.h
+++ linux-2.6/include/asm-sparc64/uaccess.h
@@ -265,8 +265,26 @@ extern long __strnlen_user(const char __
#define strlen_user __strlen_user
#define strnlen_user __strnlen_user
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
#endif /* __ASSEMBLY__ */
Index: linux-2.6/include/asm-um/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-um/uaccess.h
+++ linux-2.6/include/asm-um/uaccess.h
@@ -36,8 +36,25 @@
#define __copy_to_user(to, from, n) copy_to_user(to, from, n)
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
#define __get_user(x, ptr) \
({ \
Index: linux-2.6/include/asm-v850/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-v850/uaccess.h
+++ linux-2.6/include/asm-v850/uaccess.h
@@ -107,8 +107,25 @@ extern int bad_user_access_length (void)
#define __copy_from_user(to, from, n) (memcpy (to, from, n), 0)
#define __copy_to_user(to, from, n) (memcpy(to, from, n), 0)
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
#define copy_from_user(to, from, n) __copy_from_user (to, from, n)
#define copy_to_user(to, from, n) __copy_to_user(to, from, n)
Index: linux-2.6/include/asm-x86_64/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-x86_64/uaccess.h
+++ linux-2.6/include/asm-x86_64/uaccess.h
@@ -360,12 +360,26 @@ __must_check long strlen_user(const char
__must_check unsigned long clear_user(void __user *mem, unsigned long len);
__must_check unsigned long __clear_user(void __user *mem, unsigned long len);
-__must_check long __copy_from_user_inatomic(void *dst, const void __user *src, unsigned size);
+__must_check long ____copy_from_user_inatomic(void *dst, const void __user *src, unsigned size);
+
+static __must_check __always_inline long
+__copy_from_user_inatomic(void *dst, const void __user *src, unsigned size)
+{
+ long ret;
+ pagefault_disable();
+ ret = ____copy_from_user_inatomic(dst, src, size);
+ pagefault_enable();
+ return ret;
+}
static __must_check __always_inline int
__copy_to_user_inatomic(void __user *dst, const void *src, unsigned size)
{
- return copy_user_generic((__force void *)dst, src, size);
+ int ret;
+ pagefault_disable();
+ ret = copy_user_generic((__force void *)dst, src, size);
+ pagefault_enable();
+ return ret;
}
#endif /* __X86_64_UACCESS_H */
Index: linux-2.6/include/asm-xtensa/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-xtensa/uaccess.h
+++ linux-2.6/include/asm-xtensa/uaccess.h
@@ -419,9 +419,26 @@ __generic_copy_from_user(void *to, const
#define copy_from_user(to,from,n) __generic_copy_from_user((to),(from),(n))
#define __copy_to_user(to,from,n) __generic_copy_to_user_nocheck((to),(from),(n))
#define __copy_from_user(to,from,n) __generic_copy_from_user_nocheck((to),(from),(n))
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
+static __always_inline unsigned long __must_check
+__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_to_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
+
+static __always_inline unsigned long __must_check
+__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
+{
+ unsigned long ret;
+ pagefault_disable();
+ ret = __copy_from_user(to, from, n);
+ pagefault_enable();
+ return ret;
+}
/*
* We need to return the number of bytes not cleared. Our memset()
Index: linux-2.6/include/linux/uaccess.h
===================================================================
--- linux-2.6.orig/include/linux/uaccess.h
+++ linux-2.6/include/linux/uaccess.h
@@ -2,7 +2,6 @@
#define __LINUX_UACCESS_H__
#include <linux/preempt.h>
-#include <asm/uaccess.h>
/*
* These routines enable/disable the pagefault handler in that
@@ -38,6 +37,8 @@ static inline void pagefault_enable(void
preempt_check_resched();
}
+#include <asm/uaccess.h>
+
#ifndef ARCH_HAS_NOCACHE_UACCESS
static inline unsigned long __copy_from_user_inatomic_nocache(void *to,
Index: linux-2.6/kernel/futex.c
===================================================================
--- linux-2.6.orig/kernel/futex.c
+++ linux-2.6/kernel/futex.c
@@ -282,9 +282,7 @@ static inline int get_futex_value_locked
{
int ret;
- pagefault_disable();
ret = __copy_from_user_inatomic(dest, from, sizeof(u32));
- pagefault_enable();
return ret ? -EFAULT : 0;
}
* Re: [RFC][PATCH 1/4] mm: arch do_page_fault() vs in_atomic()
2006-10-19 10:17 ` [RFC][PATCH 1/4] mm: arch do_page_fault() vs in_atomic() Peter Zijlstra
@ 2006-10-19 10:54 ` Nick Piggin
0 siblings, 0 replies; 6+ messages in thread
From: Nick Piggin @ 2006-10-19 10:54 UTC (permalink / raw)
To: Peter Zijlstra
Cc: linux-arch, linux-kernel, linux-mm, Nick Piggin, Andrew Morton
Hi Peter,
This patchset looks pretty nice to me.
Acked-by: Nick Piggin <npiggin@suse.de>
One minor nit:
Peter Zijlstra wrote:
> In light of the recent pagefault and filemap_copy_from_user work I've
> gone through all the arch pagefault handlers to make sure the
> inc_preempt_count() 'feature' works as expected.
>
> Several sections of code (including the new filemap_copy_from_user) rely
> on the fact that faults do not take locks under increased preempt count.
>
> arch/x86_64 - good
> arch/powerpc - good
> arch/cris - fixed
> arch/i386 - good
> arch/parisc - fixed
> arch/sh - good
> arch/sparc - good
> arch/s390 - good
> arch/m68k - fixed
> arch/ppc - good
> arch/alpha - fixed
> arch/mips - good
> arch/sparc64 - good
> arch/ia64 - good
> arch/arm - fixed
> arch/um - NA
um does have a fault handler (in kernel/trap.c), but it gets the
in_atomic check correct.
Thanks for doing this.
Nick
--
SUSE Labs, Novell Inc.