linux-mm.kvack.org archive mirror
* [PATCH v10 0/6] arm64: add machine check safe support
@ 2024-01-29 13:46 Tong Tiangen
  2024-01-29 13:46 ` [PATCH v10 1/6] uaccess: add generic fallback version of copy_mc_to_user() Tong Tiangen
                   ` (5 more replies)
  0 siblings, 6 replies; 20+ messages in thread
From: Tong Tiangen @ 2024-01-29 13:46 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Robin Murphy, Andrey Ryabinin, Alexander Potapenko,
	Alexander Viro, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, Aneesh Kumar K.V,
	Naveen N. Rao, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin
  Cc: linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
	kasan-dev, Tong Tiangen, wangkefeng.wang, Guohanjun

With the increase of memory capacity and density, the probability of memory
errors also increases. The growing size and density of server RAM in data
centers and clouds has led to a corresponding rise in uncorrectable memory
errors.

More and more scenarios can now tolerate memory errors, such as CoW[1,2],
KSM copy[3], coredump copy[4], khugepaged[5,6], uaccess copy[7], etc.

This patchset introduces a new processing framework on arm64 that enables
error recovery in the above scenarios; more scenarios can be added on top
of it in the future.

On arm64, memory errors are handled in do_sea(), which distinguishes two cases:
 1. If the memory error is consumed in user mode, the solution is to kill
    the user process and isolate the error page.
 2. If the memory error is consumed in kernel mode, the solution is to
    panic.

For case 2, an undifferentiated panic may not be the optimal choice, as it
can be handled better. In some scenarios we can avoid the panic, such as
uaccess: if a uaccess operation fails due to a memory error, only the user
process is affected, so killing the user process and isolating the user
page with the hardware memory error is a better choice.
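
As an illustration of the pattern these scenarios rely on, a caller that can
tolerate poison in the source page looks roughly like the sketch below
(simplified and illustrative only; the function name is made up and the code
is not taken verbatim from any of the commits referenced below):

	static vm_fault_t example_cow_copy(struct page *dst, struct page *src,
					   unsigned long addr,
					   struct vm_area_struct *vma)
	{
		/*
		 * copy_mc_user_highpage() reports failure if an uncorrected
		 * memory error is consumed while reading the source page.
		 */
		if (copy_mc_user_highpage(dst, src, addr, vma)) {
			/*
			 * Abandon the copy: memory_failure() isolates the
			 * poisoned page and only the affected process is
			 * killed, instead of panicking the whole kernel.
			 */
			return VM_FAULT_HWPOISON;
		}

		return 0;
	}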

[1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
[2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
[3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
[4] commit 245f09226893 ("mm: hwpoison: coredump: support recovery from dump_user_range()")
[5] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
[6] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")
[7] commit 278b917f8cb9 ("x86/mce: Add _ASM_EXTABLE_CPY for copy user access")

Since V9:
 1. Rebase to latest kernel version 6.8-rc2.
 2. Add patch 6/6 to support copy_mc_to_kernel().

Since V8:
 1. Rebase to the latest kernel version and fix typos in some of the patches.
 2. Following Catalin's suggestion, I attempted to change the return value
    of copy_mc_[user]_highpage() to the number of bytes not copied. While
    making that change, I found it more reasonable to return -EFAULT when a
    copy error occurs (see the newly added patch 4).

    For arm64, the implementation of copy_mc_[user]_highpage() also needs
    to consider MTE. In the scenario where the data copy succeeds but the
    MTE tag copy fails, returning the number of bytes not copied is not
    reasonable either.
 3. Given the recent addition of machine check safe support for multiple
    scenarios, update the commit message of patch 5 (patch 4 in V8).

Since V7:
 There are now patches supporting recovery from poison consumption in the
 CoW scenario[1]. Therefore, supporting the CoW scenario on the arm64
 architecture only requires modifying the relevant code under arch/.
 [1]https://lore.kernel.org/lkml/20221031201029.102123-1-tony.luck@intel.com/

Since V6:
 Resend the patches from V6 that have not been merged into mainline.

Since V5:
 1. Add patches 2/3 to add uaccess assembly helpers.
 2. Optimize the implementation logic of arm64_do_kernel_sea() in patch 8.
 3. Remove the kernel access fixup in patch 9.
 All suggestions are from Mark.

Since V4:
 1. According to Michael's suggestion, add patch 5.
 2. According to Mark's suggestion, do some restructuring of the arm64
 extable, then base a new adaptation of machine check safe support on
 that.
 3. According to Mark's suggestion, support machine check safe in do_mte()
 in the CoW scenario.
 4. Two patches from V4 have been merged into -next, so V5 does not resend
 those two patches.

Since V3:
 1. According to Robin's suggestion, directly modify user_ldst and
 user_ldp in asm-uaccess.h and modify mte.S.
 2. Add the new macro USER_MC in asm-uaccess.h, used in copy_from_user.S
 and copy_to_user.S.
 3. According to Robin's suggestion, use a macro in copy_page_mc.S to
 simplify the code.
 4. According to KeFeng's suggestion, modify the powerpc code in patch 1.
 5. According to KeFeng's suggestion, modify mm/extable.c and do some code
 optimization.

Since V2:
 1. According to Mark's suggestion, all uaccesses can now be recovered
    from memory errors.
 2. The pagecache-reading scenario is also supported as part of uaccess
    (copy_to_user()), and the code duplication problem is solved.
    Thanks to Robin for the suggestion.
 3. According to Mark's suggestion, update the commit message of patch 2/5.
 4. According to Borislav's suggestion, update the commit message of patch 1/5.

Since V1:
 1. Consistent with PPC/x86, use CONFIG_ARCH_HAS_COPY_MC instead of
    ARM64_UCE_KERNEL_RECOVERY.
 2. Add two new scenarios: CoW and pagecache reading.
 3. Fix two small bugs (the first two patches).

V1 in here:
https://lore.kernel.org/lkml/20220323033705.3966643-1-tongtiangen@huawei.com/

Tong Tiangen (6):
  uaccess: add generic fallback version of copy_mc_to_user()
  arm64: add support for machine check error safe
  arm64: add uaccess to machine check safe
  mm/hwpoison: return -EFAULT when copy fail in
    copy_mc_[user]_highpage()
  arm64: support copy_mc_[user]_highpage()
  arm64: introduce copy_mc_to_kernel() implementation

 arch/arm64/Kconfig                   |   1 +
 arch/arm64/include/asm/asm-extable.h |  15 ++
 arch/arm64/include/asm/assembler.h   |   4 +
 arch/arm64/include/asm/extable.h     |   1 +
 arch/arm64/include/asm/mte.h         |   5 +
 arch/arm64/include/asm/page.h        |  10 ++
 arch/arm64/include/asm/string.h      |   5 +
 arch/arm64/include/asm/uaccess.h     |  21 +++
 arch/arm64/lib/Makefile              |   4 +-
 arch/arm64/lib/copy_from_user.S      |  10 +-
 arch/arm64/lib/copy_mc_page.S        |  78 ++++++++
 arch/arm64/lib/copy_to_user.S        |  10 +-
 arch/arm64/lib/memcpy_mc.S           | 257 +++++++++++++++++++++++++++
 arch/arm64/lib/mte.S                 |  27 +++
 arch/arm64/mm/copypage.c             |  66 ++++++-
 arch/arm64/mm/extable.c              |  21 ++-
 arch/arm64/mm/fault.c                |  29 ++-
 arch/powerpc/include/asm/uaccess.h   |   1 +
 arch/x86/include/asm/uaccess.h       |   1 +
 include/linux/highmem.h              |  16 +-
 include/linux/uaccess.h              |   9 +
 mm/kasan/shadow.c                    |  12 ++
 mm/khugepaged.c                      |   4 +-
 23 files changed, 581 insertions(+), 26 deletions(-)
 create mode 100644 arch/arm64/lib/copy_mc_page.S
 create mode 100644 arch/arm64/lib/memcpy_mc.S

-- 
2.25.1




* [PATCH v10 1/6] uaccess: add generic fallback version of copy_mc_to_user()
  2024-01-29 13:46 [PATCH v10 0/6]arm64: add machine check safe support Tong Tiangen
@ 2024-01-29 13:46 ` Tong Tiangen
  2024-01-29 13:46 ` [PATCH v10 2/6] arm64: add support for machine check error safe Tong Tiangen
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 20+ messages in thread
From: Tong Tiangen @ 2024-01-29 13:46 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Robin Murphy, Andrey Ryabinin, Alexander Potapenko,
	Alexander Viro, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, Aneesh Kumar K.V,
	Naveen N. Rao, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin
  Cc: linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
	kasan-dev, Tong Tiangen, wangkefeng.wang, Guohanjun

x86 and powerpc each have their own implementation of copy_mc_to_user(). Add
a generic fallback in include/linux/uaccess.h to prepare for other
architectures to enable CONFIG_ARCH_HAS_COPY_MC.
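
For reference, a caller treats it just like copy_to_user(); a minimal
illustrative sketch (not part of this patch; the function name is made up):

	static ssize_t example_read(void __user *ubuf, const void *kbuf,
				    size_t len)
	{
		/* Returns the number of bytes not copied, 0 on success. */
		if (copy_mc_to_user(ubuf, kbuf, len))
			return -EFAULT;

		return len;
	}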

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/uaccess.h | 1 +
 arch/x86/include/asm/uaccess.h     | 1 +
 include/linux/uaccess.h            | 9 +++++++++
 3 files changed, 11 insertions(+)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index f1f9890f50d3..4bfd1e6f0702 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -381,6 +381,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
 
 	return n;
 }
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 extern long __copy_from_user_flushcache(void *dst, const void __user *src,
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 5c367c1290c3..fd56282ee9a8 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -497,6 +497,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
 
 unsigned long __must_check
 copy_mc_to_user(void __user *to, const void *from, unsigned len);
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 /*
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 3064314f4832..550287c92990 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -205,6 +205,15 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
 }
 #endif
 
+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+	check_object_size(src, cnt, true);
+	return raw_copy_to_user(dst, src, cnt);
+}
+#endif
+
 static __always_inline void pagefault_disabled_inc(void)
 {
 	current->pagefault_disabled++;
-- 
2.25.1




* [PATCH v10 2/6] arm64: add support for machine check error safe
  2024-01-29 13:46 [PATCH v10 0/6]arm64: add machine check safe support Tong Tiangen
  2024-01-29 13:46 ` [PATCH v10 1/6] uaccess: add generic fallback version of copy_mc_to_user() Tong Tiangen
@ 2024-01-29 13:46 ` Tong Tiangen
  2024-01-29 17:51   ` Mark Rutland
  2024-01-29 13:46 ` [PATCH v10 3/6] arm64: add uaccess to machine check safe Tong Tiangen
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 20+ messages in thread
From: Tong Tiangen @ 2024-01-29 13:46 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Robin Murphy, Andrey Ryabinin, Alexander Potapenko,
	Alexander Viro, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, Aneesh Kumar K.V,
	Naveen N. Rao, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin
  Cc: linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
	kasan-dev, Tong Tiangen, wangkefeng.wang, Guohanjun

When the arm64 kernel processes hardware memory errors delivered as
synchronous notifications (do_sea()), the current behaviour is to panic if
the error is consumed within the kernel. However, this is not optimal.

Take uaccess as an example: if a uaccess operation fails due to a memory
error, only the user process is affected. Killing the user process and
isolating the corrupt page is a better choice.

This patch only enables the machine check safe framework and adds an
exception fixup before the kernel panic in do_sea().
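
For readers unfamiliar with the mechanism, an exception fixup reduced to a
conceptual sketch looks like this (simplified and illustrative only; the
real arm64 extable entries also carry a type and data fields, and the
struct/function names here are made up):

	/* Each entry maps a faulting instruction to a fixup address. */
	struct example_extable_entry {
		unsigned long insn;	/* address of the access that may fault */
		unsigned long fixup;	/* where to continue on a memory error */
	};

	static bool example_fixup_memory_error(struct pt_regs *regs,
				const struct example_extable_entry *ex)
	{
		if (!ex)
			return false;	/* no fixup registered: still panic */

		/* Continue at the fixup instead of re-executing the access. */
		instruction_pointer_set(regs, ex->fixup);
		return true;
	}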

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/extable.h |  1 +
 arch/arm64/mm/extable.c          | 16 ++++++++++++++++
 arch/arm64/mm/fault.c            | 29 ++++++++++++++++++++++++++++-
 4 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index aa7c1d435139..2cc34b5e7abb 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -20,6 +20,7 @@ config ARM64
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
+	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE
diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..f80ebd0addfd 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
 #endif /* !CONFIG_BPF_JIT */
 
 bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception_mc(struct pt_regs *regs);
 #endif
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 228d681a8715..478e639f8680 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -76,3 +76,19 @@ bool fixup_exception(struct pt_regs *regs)
 
 	BUG();
 }
+
+bool fixup_exception_mc(struct pt_regs *regs)
+{
+	const struct exception_table_entry *ex;
+
+	ex = search_exception_tables(instruction_pointer(regs));
+	if (!ex)
+		return false;
+
+	/*
+	 * This is not complete, More Machine check safe extable type can
+	 * be processed here.
+	 */
+
+	return false;
+}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 55f6455a8284..312932dc100b 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -730,6 +730,31 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
 	return 1; /* "fault" */
 }
 
+static bool arm64_do_kernel_sea(unsigned long addr, unsigned int esr,
+				     struct pt_regs *regs, int sig, int code)
+{
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
+		return false;
+
+	if (user_mode(regs))
+		return false;
+
+	if (apei_claim_sea(regs) < 0)
+		return false;
+
+	if (!fixup_exception_mc(regs))
+		return false;
+
+	if (current->flags & PF_KTHREAD)
+		return true;
+
+	set_thread_esr(0, esr);
+	arm64_force_sig_fault(sig, code, addr,
+		"Uncorrected memory error on access to user memory\n");
+
+	return true;
+}
+
 static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf;
@@ -755,7 +780,9 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 		 */
 		siaddr  = untagged_addr(far);
 	}
-	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
+
+	if (!arm64_do_kernel_sea(siaddr, esr, regs, inf->sig, inf->code))
+		arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
 
 	return 0;
 }
-- 
2.25.1




* [PATCH v10 3/6] arm64: add uaccess to machine check safe
  2024-01-29 13:46 [PATCH v10 0/6]arm64: add machine check safe support Tong Tiangen
  2024-01-29 13:46 ` [PATCH v10 1/6] uaccess: add generic fallback version of copy_mc_to_user() Tong Tiangen
  2024-01-29 13:46 ` [PATCH v10 2/6] arm64: add support for machine check error safe Tong Tiangen
@ 2024-01-29 13:46 ` Tong Tiangen
  2024-01-29 17:43   ` Mark Rutland
  2024-01-29 13:46 ` [PATCH v10 4/6] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage() Tong Tiangen
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 20+ messages in thread
From: Tong Tiangen @ 2024-01-29 13:46 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Robin Murphy, Andrey Ryabinin, Alexander Potapenko,
	Alexander Viro, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, Aneesh Kumar K.V,
	Naveen N. Rao, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin
  Cc: linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
	kasan-dev, Tong Tiangen, wangkefeng.wang, Guohanjun

If a user process's memory access fails due to a hardware memory error,
only the relevant process is affected, so it is more reasonable to kill the
user process and isolate the corrupt page than to panic the kernel.
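
The caller-visible effect is that an uncorrected memory error consumed inside
the copy surfaces the same way an ordinary uaccess fault does; an
illustrative sketch (not part of this patch; the function name is made up):

	static int example_copy_in(void *kbuf, const void __user *ubuf,
				   size_t len)
	{
		/*
		 * A non-zero return means some bytes were not copied,
		 * whether due to an unmapped user address or, with this
		 * patch, a memory error consumed during the copy.
		 */
		if (copy_from_user(kbuf, ubuf, len))
			return -EFAULT;

		return 0;
	}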

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/lib/copy_from_user.S | 10 +++++-----
 arch/arm64/lib/copy_to_user.S   | 10 +++++-----
 arch/arm64/mm/extable.c         |  8 ++++----
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 34e317907524..1bf676e9201d 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -25,7 +25,7 @@
 	.endm
 
 	.macro strb1 reg, ptr, val
-	strb \reg, [\ptr], \val
+	USER(9998f, strb \reg, [\ptr], \val)
 	.endm
 
 	.macro ldrh1 reg, ptr, val
@@ -33,7 +33,7 @@
 	.endm
 
 	.macro strh1 reg, ptr, val
-	strh \reg, [\ptr], \val
+	USER(9998f, strh \reg, [\ptr], \val)
 	.endm
 
 	.macro ldr1 reg, ptr, val
@@ -41,7 +41,7 @@
 	.endm
 
 	.macro str1 reg, ptr, val
-	str \reg, [\ptr], \val
+	USER(9998f, str \reg, [\ptr], \val)
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
@@ -49,7 +49,7 @@
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
-	stp \reg1, \reg2, [\ptr], \val
+	USER(9998f, stp \reg1, \reg2, [\ptr], \val)
 	.endm
 
 end	.req	x5
@@ -66,7 +66,7 @@ SYM_FUNC_START(__arch_copy_from_user)
 	b.ne	9998f
 	// Before being absolutely sure we couldn't copy anything, try harder
 USER(9998f, ldtrb tmp1w, [srcin])
-	strb	tmp1w, [dst], #1
+USER(9998f, strb	tmp1w, [dst], #1)
 9998:	sub	x0, end, dst			// bytes not copied
 	ret
 SYM_FUNC_END(__arch_copy_from_user)
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 802231772608..cc031bd87455 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -20,7 +20,7 @@
  *	x0 - bytes not copied
  */
 	.macro ldrb1 reg, ptr, val
-	ldrb  \reg, [\ptr], \val
+	USER(9998f, ldrb  \reg, [\ptr], \val)
 	.endm
 
 	.macro strb1 reg, ptr, val
@@ -28,7 +28,7 @@
 	.endm
 
 	.macro ldrh1 reg, ptr, val
-	ldrh  \reg, [\ptr], \val
+	USER(9998f, ldrh  \reg, [\ptr], \val)
 	.endm
 
 	.macro strh1 reg, ptr, val
@@ -36,7 +36,7 @@
 	.endm
 
 	.macro ldr1 reg, ptr, val
-	ldr \reg, [\ptr], \val
+	USER(9998f, ldr \reg, [\ptr], \val)
 	.endm
 
 	.macro str1 reg, ptr, val
@@ -44,7 +44,7 @@
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
-	ldp \reg1, \reg2, [\ptr], \val
+	USER(9998f, ldp \reg1, \reg2, [\ptr], \val)
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
@@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
 9997:	cmp	dst, dstin
 	b.ne	9998f
 	// Before being absolutely sure we couldn't copy anything, try harder
-	ldrb	tmp1w, [srcin]
+USER(9998f, ldrb	tmp1w, [srcin])
 USER(9998f, sttrb tmp1w, [dst])
 	add	dst, dst, #1
 9998:	sub	x0, end, dst			// bytes not copied
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 478e639f8680..28ec35e3d210 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -85,10 +85,10 @@ bool fixup_exception_mc(struct pt_regs *regs)
 	if (!ex)
 		return false;
 
-	/*
-	 * This is not complete, More Machine check safe extable type can
-	 * be processed here.
-	 */
+	switch (ex->type) {
+	case EX_TYPE_UACCESS_ERR_ZERO:
+		return ex_handler_uaccess_err_zero(ex, regs);
+	}
 
 	return false;
 }
-- 
2.25.1




* [PATCH v10 4/6] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage()
  2024-01-29 13:46 [PATCH v10 0/6]arm64: add machine check safe support Tong Tiangen
                   ` (2 preceding siblings ...)
  2024-01-29 13:46 ` [PATCH v10 3/6] arm64: add uaccess to machine check safe Tong Tiangen
@ 2024-01-29 13:46 ` Tong Tiangen
  2024-01-29 13:46 ` [PATCH v10 5/6] arm64: support copy_mc_[user]_highpage() Tong Tiangen
  2024-01-29 13:46 ` [PATCH v10 6/6] arm64: introduce copy_mc_to_kernel() implementation Tong Tiangen
  5 siblings, 0 replies; 20+ messages in thread
From: Tong Tiangen @ 2024-01-29 13:46 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Robin Murphy, Andrey Ryabinin, Alexander Potapenko,
	Alexander Viro, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, Aneesh Kumar K.V,
	Naveen N. Rao, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin
  Cc: linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
	kasan-dev, Tong Tiangen, wangkefeng.wang, Guohanjun

If a hardware memory error is encountered during page copying, returning
the number of bytes not copied is not meaningful: the caller cannot do any
processing on the remaining data. Returning -EFAULT, which indicates that a
hardware error was encountered during the copy, is more reasonable.
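
With this convention a caller only needs a zero/non-zero check; a minimal
illustrative sketch (not part of this patch; the function name is made up):

	static int example_collapse_copy(struct page *dst, struct page *src)
	{
		/* -EFAULT here means the source page is poisoned. */
		if (copy_mc_highpage(dst, src))
			return -EFAULT;

		return 0;
	}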

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 include/linux/highmem.h | 8 ++++----
 mm/khugepaged.c         | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 451c1dff0e87..c5ca1a1fc4f5 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -335,8 +335,8 @@ static inline void copy_highpage(struct page *to, struct page *from)
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
- * page with #MC in source page (@from) handled, and return the number
- * of bytes not copied if there was a #MC, otherwise 0 for success.
+ * page with #MC in source page (@from) handled, and return -EFAULT if there
+ * was a #MC, otherwise 0 for success.
  */
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 					unsigned long vaddr, struct vm_area_struct *vma)
@@ -352,7 +352,7 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 
 static inline int copy_mc_highpage(struct page *to, struct page *from)
@@ -368,7 +368,7 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 #else
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2b219acb528e..ba6743a54c86 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -797,7 +797,7 @@ static int __collapse_huge_page_copy(pte_t *pte,
 			continue;
 		}
 		src_page = pte_page(pteval);
-		if (copy_mc_user_highpage(page, src_page, _address, vma) > 0) {
+		if (copy_mc_user_highpage(page, src_page, _address, vma)) {
 			result = SCAN_COPY_MC;
 			break;
 		}
@@ -2053,7 +2053,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 			clear_highpage(hpage + (index % HPAGE_PMD_NR));
 			index++;
 		}
-		if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page) > 0) {
+		if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page)) {
 			result = SCAN_COPY_MC;
 			goto rollback;
 		}
-- 
2.25.1




* [PATCH v10 5/6] arm64: support copy_mc_[user]_highpage()
  2024-01-29 13:46 [PATCH v10 0/6]arm64: add machine check safe support Tong Tiangen
                   ` (3 preceding siblings ...)
  2024-01-29 13:46 ` [PATCH v10 4/6] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage() Tong Tiangen
@ 2024-01-29 13:46 ` Tong Tiangen
  2024-01-29 20:45   ` Andrey Konovalov
  2024-01-30 10:31   ` Mark Rutland
  2024-01-29 13:46 ` [PATCH v10 6/6] arm64: introduce copy_mc_to_kernel() implementation Tong Tiangen
  5 siblings, 2 replies; 20+ messages in thread
From: Tong Tiangen @ 2024-01-29 13:46 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Robin Murphy, Andrey Ryabinin, Alexander Potapenko,
	Alexander Viro, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, Aneesh Kumar K.V,
	Naveen N. Rao, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin
  Cc: linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
	kasan-dev, Tong Tiangen, wangkefeng.wang, Guohanjun

Currently, many scenarios that can tolerate memory errors while copying a
page are supported in the kernel[1][2][3], all of which are implemented via
copy_mc_[user]_highpage(). arm64 should also support this mechanism.

Due to MTE, arm64 needs its own architecture implementation of
copy_mc_[user]_highpage(); the macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
__HAVE_ARCH_COPY_MC_USER_HIGHPAGE are added to control it.

Add a new helper, copy_mc_page(), which provides a machine-check-safe page
copy implementation. copy_mc_page() in copy_mc_page.S largely borrows from
copy_page() in copy_page.S; the main difference is that copy_mc_page() adds
an extable entry for every load/store instruction to support machine check
safety.

Add a new extable type, EX_TYPE_COPY_MC_PAGE_ERR_ZERO, which is used in
copy_mc_page().

[1] a873dfe1032a ("mm, hwpoison: try to recover from copy-on write faults")
[2] 5f2500b93cc9 ("mm/khugepaged: recover from poisoned anonymous memory")
[3] 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/include/asm/asm-extable.h | 15 ++++++
 arch/arm64/include/asm/assembler.h   |  4 ++
 arch/arm64/include/asm/mte.h         |  5 ++
 arch/arm64/include/asm/page.h        | 10 ++++
 arch/arm64/lib/Makefile              |  2 +
 arch/arm64/lib/copy_mc_page.S        | 78 ++++++++++++++++++++++++++++
 arch/arm64/lib/mte.S                 | 27 ++++++++++
 arch/arm64/mm/copypage.c             | 66 ++++++++++++++++++++---
 arch/arm64/mm/extable.c              |  7 +--
 include/linux/highmem.h              |  8 +++
 10 files changed, 213 insertions(+), 9 deletions(-)
 create mode 100644 arch/arm64/lib/copy_mc_page.S

diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 980d1dd8e1a3..819044fefbe7 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -10,6 +10,7 @@
 #define EX_TYPE_UACCESS_ERR_ZERO	2
 #define EX_TYPE_KACCESS_ERR_ZERO	3
 #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
+#define EX_TYPE_COPY_MC_PAGE_ERR_ZERO	5
 
 /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
 #define EX_DATA_REG_ERR_SHIFT	0
@@ -51,6 +52,16 @@
 #define _ASM_EXTABLE_UACCESS(insn, fixup)				\
 	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
 
+#define _ASM_EXTABLE_COPY_MC_PAGE_ERR_ZERO(insn, fixup, err, zero)	\
+	__ASM_EXTABLE_RAW(insn, fixup, 					\
+			  EX_TYPE_COPY_MC_PAGE_ERR_ZERO,		\
+			  (						\
+			    EX_DATA_REG(ERR, err) |			\
+			    EX_DATA_REG(ZERO, zero)			\
+			  ))
+
+#define _ASM_EXTABLE_COPY_MC_PAGE(insn, fixup)				\
+	_ASM_EXTABLE_COPY_MC_PAGE_ERR_ZERO(insn, fixup, wzr, wzr)
 /*
  * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
  * when an unhandled fault is taken.
@@ -59,6 +70,10 @@
 	_ASM_EXTABLE_UACCESS(\insn, \fixup)
 	.endm
 
+	.macro          _asm_extable_copy_mc_page, insn, fixup
+	_ASM_EXTABLE_COPY_MC_PAGE(\insn, \fixup)
+	.endm
+
 /*
  * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
  * do nothing.
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 513787e43329..e1d8ce155878 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -154,6 +154,10 @@ lr	.req	x30		// link register
 #define CPU_LE(code...) code
 #endif
 
+#define CPY_MC(l, x...)		\
+9999:   x;			\
+	_asm_extable_copy_mc_page    9999b, l
+
 /*
  * Define a macro that constructs a 64-bit value by concatenating two
  * 32-bit registers. Note that on big endian systems the order of the
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 91fbd5c8a391..9cdded082dd4 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -92,6 +92,7 @@ static inline bool try_page_mte_tagging(struct page *page)
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t pte, unsigned int nr_pages);
 void mte_copy_page_tags(void *kto, const void *kfrom);
+int mte_copy_mc_page_tags(void *kto, const void *kfrom);
 void mte_thread_init_user(void);
 void mte_thread_switch(struct task_struct *next);
 void mte_cpu_setup(void);
@@ -128,6 +129,10 @@ static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages)
 static inline void mte_copy_page_tags(void *kto, const void *kfrom)
 {
 }
+static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom)
+{
+	return 0;
+}
 static inline void mte_thread_init_user(void)
 {
 }
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 2312e6ee595f..304cc86b8a10 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int copy_mc_page(void *to, const void *from);
+int copy_mc_highpage(struct page *to, struct page *from);
+#define __HAVE_ARCH_COPY_MC_HIGHPAGE
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
+#endif
+
 struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
 						unsigned long vaddr);
 #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 29490be2546b..a2fd865b816d 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -15,6 +15,8 @@ endif
 
 lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
 
+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+
 obj-$(CONFIG_CRC32) += crc32.o
 
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
new file mode 100644
index 000000000000..524534d26d86
--- /dev/null
+++ b/arch/arm64/lib/copy_mc_page.S
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#include <linux/linkage.h>
+#include <linux/const.h>
+#include <asm/assembler.h>
+#include <asm/page.h>
+#include <asm/cpufeature.h>
+#include <asm/alternative.h>
+#include <asm/asm-extable.h>
+
+/*
+ * Copy a page from src to dest (both are page aligned) with machine check
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ * Returns:
+ * 	x0 - Return 0 if copy success, or -EFAULT if anything goes wrong
+ *	     while copying.
+ */
+SYM_FUNC_START(__pi_copy_mc_page)
+CPY_MC(9998f, ldp	x2, x3, [x1])
+CPY_MC(9998f, ldp	x4, x5, [x1, #16])
+CPY_MC(9998f, ldp	x6, x7, [x1, #32])
+CPY_MC(9998f, ldp	x8, x9, [x1, #48])
+CPY_MC(9998f, ldp	x10, x11, [x1, #64])
+CPY_MC(9998f, ldp	x12, x13, [x1, #80])
+CPY_MC(9998f, ldp	x14, x15, [x1, #96])
+CPY_MC(9998f, ldp	x16, x17, [x1, #112])
+
+	add	x0, x0, #256
+	add	x1, x1, #128
+1:
+	tst	x0, #(PAGE_SIZE - 1)
+
+CPY_MC(9998f, stnp	x2, x3, [x0, #-256])
+CPY_MC(9998f, ldp	x2, x3, [x1])
+CPY_MC(9998f, stnp	x4, x5, [x0, #16 - 256])
+CPY_MC(9998f, ldp	x4, x5, [x1, #16])
+CPY_MC(9998f, stnp	x6, x7, [x0, #32 - 256])
+CPY_MC(9998f, ldp	x6, x7, [x1, #32])
+CPY_MC(9998f, stnp	x8, x9, [x0, #48 - 256])
+CPY_MC(9998f, ldp	x8, x9, [x1, #48])
+CPY_MC(9998f, stnp	x10, x11, [x0, #64 - 256])
+CPY_MC(9998f, ldp	x10, x11, [x1, #64])
+CPY_MC(9998f, stnp	x12, x13, [x0, #80 - 256])
+CPY_MC(9998f, ldp	x12, x13, [x1, #80])
+CPY_MC(9998f, stnp	x14, x15, [x0, #96 - 256])
+CPY_MC(9998f, ldp	x14, x15, [x1, #96])
+CPY_MC(9998f, stnp	x16, x17, [x0, #112 - 256])
+CPY_MC(9998f, ldp	x16, x17, [x1, #112])
+
+	add	x0, x0, #128
+	add	x1, x1, #128
+
+	b.ne	1b
+
+CPY_MC(9998f, stnp	x2, x3, [x0, #-256])
+CPY_MC(9998f, stnp	x4, x5, [x0, #16 - 256])
+CPY_MC(9998f, stnp	x6, x7, [x0, #32 - 256])
+CPY_MC(9998f, stnp	x8, x9, [x0, #48 - 256])
+CPY_MC(9998f, stnp	x10, x11, [x0, #64 - 256])
+CPY_MC(9998f, stnp	x12, x13, [x0, #80 - 256])
+CPY_MC(9998f, stnp	x14, x15, [x0, #96 - 256])
+CPY_MC(9998f, stnp	x16, x17, [x0, #112 - 256])
+
+	mov x0, #0
+	ret
+
+9998:	mov x0, #-EFAULT
+	ret
+
+SYM_FUNC_END(__pi_copy_mc_page)
+SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page)
+EXPORT_SYMBOL(copy_mc_page)
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 5018ac03b6bf..2b748e83f6cf 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -80,6 +80,33 @@ SYM_FUNC_START(mte_copy_page_tags)
 	ret
 SYM_FUNC_END(mte_copy_page_tags)
 
+/*
+ * Copy the tags from the source page to the destination one with machine check safe
+ *   x0 - address of the destination page
+ *   x1 - address of the source page
+ * Returns:
+ *   x0 - Return 0 if copy success, or
+ *        -EFAULT if anything goes wrong while copying.
+ */
+SYM_FUNC_START(mte_copy_mc_page_tags)
+	mov	x2, x0
+	mov	x3, x1
+	multitag_transfer_size x5, x6
+1:
+CPY_MC(2f, ldgm	x4, [x3])
+CPY_MC(2f, stgm	x4, [x2])
+	add	x2, x2, x5
+	add	x3, x3, x5
+	tst	x2, #(PAGE_SIZE - 1)
+	b.ne	1b
+
+	mov x0, #0
+	ret
+
+2:	mov x0, #-EFAULT
+	ret
+SYM_FUNC_END(mte_copy_mc_page_tags)
+
 /*
  * Read tags from a user buffer (one tag per byte) and set the corresponding
  * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index a7bb20055ce0..9765e40cde6c 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -14,6 +14,25 @@
 #include <asm/cpufeature.h>
 #include <asm/mte.h>
 
+static int do_mte(struct page *to, struct page *from, void *kto, void *kfrom, bool mc)
+{
+	int ret = 0;
+
+	if (system_supports_mte() && page_mte_tagged(from)) {
+		/* It's a new page, shouldn't have been tagged yet */
+		WARN_ON_ONCE(!try_page_mte_tagging(to));
+		if (mc)
+			ret = mte_copy_mc_page_tags(kto, kfrom);
+		else
+			mte_copy_page_tags(kto, kfrom);
+
+		if (!ret)
+			set_page_mte_tagged(to);
+	}
+
+	return ret;
+}
+
 void copy_highpage(struct page *to, struct page *from)
 {
 	void *kto = page_address(to);
@@ -24,12 +43,7 @@ void copy_highpage(struct page *to, struct page *from)
 	if (kasan_hw_tags_enabled())
 		page_kasan_tag_reset(to);
 
-	if (system_supports_mte() && page_mte_tagged(from)) {
-		/* It's a new page, shouldn't have been tagged yet */
-		WARN_ON_ONCE(!try_page_mte_tagging(to));
-		mte_copy_page_tags(kto, kfrom);
-		set_page_mte_tagged(to);
-	}
+	do_mte(to, from, kto, kfrom, false);
 }
 EXPORT_SYMBOL(copy_highpage);
 
@@ -40,3 +54,43 @@ void copy_user_highpage(struct page *to, struct page *from,
 	flush_dcache_page(to);
 }
 EXPORT_SYMBOL_GPL(copy_user_highpage);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Return -EFAULT if anything goes wrong while copying page or mte.
+ */
+int copy_mc_highpage(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+	int ret;
+
+	ret = copy_mc_page(kto, kfrom);
+	if (ret)
+		return -EFAULT;
+
+	if (kasan_hw_tags_enabled())
+		page_kasan_tag_reset(to);
+
+	ret = do_mte(to, from, kto, kfrom, true);
+	if (ret)
+		return -EFAULT;
+
+	return 0;
+}
+EXPORT_SYMBOL(copy_mc_highpage);
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+			unsigned long vaddr, struct vm_area_struct *vma)
+{
+	int ret;
+
+	ret = copy_mc_highpage(to, from);
+
+	if (!ret)
+		flush_dcache_page(to);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
+#endif
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 28ec35e3d210..bdc81518d207 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -16,7 +16,7 @@ get_ex_fixup(const struct exception_table_entry *ex)
 	return ((unsigned long)&ex->fixup + ex->fixup);
 }
 
-static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
+static bool ex_handler_fixup_err_zero(const struct exception_table_entry *ex,
 					struct pt_regs *regs)
 {
 	int reg_err = FIELD_GET(EX_DATA_REG_ERR, ex->data);
@@ -69,7 +69,7 @@ bool fixup_exception(struct pt_regs *regs)
 		return ex_handler_bpf(ex, regs);
 	case EX_TYPE_UACCESS_ERR_ZERO:
 	case EX_TYPE_KACCESS_ERR_ZERO:
-		return ex_handler_uaccess_err_zero(ex, regs);
+		return ex_handler_fixup_err_zero(ex, regs);
 	case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
 		return ex_handler_load_unaligned_zeropad(ex, regs);
 	}
@@ -87,7 +87,8 @@ bool fixup_exception_mc(struct pt_regs *regs)
 
 	switch (ex->type) {
 	case EX_TYPE_UACCESS_ERR_ZERO:
-		return ex_handler_uaccess_err_zero(ex, regs);
+	case EX_TYPE_COPY_MC_PAGE_ERR_ZERO:
+		return ex_handler_fixup_err_zero(ex, regs);
 	}
 
 	return false;
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index c5ca1a1fc4f5..a42470ca42f2 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -332,6 +332,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
 #endif
 
 #ifdef copy_mc_to_kernel
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
@@ -354,7 +355,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 
 	return ret ? -EFAULT : 0;
 }
+#endif
 
+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	unsigned long ret;
@@ -370,20 +373,25 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 
 	return ret ? -EFAULT : 0;
 }
+#endif
 #else
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 					unsigned long vaddr, struct vm_area_struct *vma)
 {
 	copy_user_highpage(to, from, vaddr, vma);
 	return 0;
 }
+#endif
 
+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	copy_highpage(to, from);
 	return 0;
 }
 #endif
+#endif
 
 static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 			       struct page *src_page, size_t src_off,
-- 
2.25.1




* [PATCH v10 6/6] arm64: introduce copy_mc_to_kernel() implementation
  2024-01-29 13:46 [PATCH v10 0/6]arm64: add machine check safe support Tong Tiangen
                   ` (4 preceding siblings ...)
  2024-01-29 13:46 ` [PATCH v10 5/6] arm64: support copy_mc_[user]_highpage() Tong Tiangen
@ 2024-01-29 13:46 ` Tong Tiangen
  2024-01-30 10:20   ` Mark Rutland
  5 siblings, 1 reply; 20+ messages in thread
From: Tong Tiangen @ 2024-01-29 13:46 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Robin Murphy, Andrey Ryabinin, Alexander Potapenko,
	Alexander Viro, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, Aneesh Kumar K.V,
	Naveen N. Rao, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin
  Cc: linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
	kasan-dev, Tong Tiangen, wangkefeng.wang, Guohanjun

The copy_mc_to_kernel() helper is a memory copy implementation that handles
exceptions on the source. It can be used in memory copy scenarios that
tolerate hardware memory errors (e.g. pmem_read/dax_copy_to_iter).

Currently, only x86 and ppc support this helper. Now that arm64 supports
the machine check safe framework, introduce an arm64 implementation of
copy_mc_to_kernel().
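
An illustrative sketch of the kind of caller this helper is intended for
(simplified; not the real pmem/dax code, and the function name is made up):

	static int example_pmem_read(void *dst, const void *pmem_src,
				     size_t len)
	{
		/*
		 * copy_mc_to_kernel() returns 0 on success and non-zero if
		 * an uncorrected memory error was consumed while reading
		 * the source.
		 */
		if (copy_mc_to_kernel(dst, pmem_src, len))
			return -EIO;

		return 0;
	}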

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/include/asm/string.h  |   5 +
 arch/arm64/include/asm/uaccess.h |  21 +++
 arch/arm64/lib/Makefile          |   2 +-
 arch/arm64/lib/memcpy_mc.S       | 257 +++++++++++++++++++++++++++++++
 mm/kasan/shadow.c                |  12 ++
 5 files changed, 296 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/lib/memcpy_mc.S

diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h
index 3a3264ff47b9..995b63c26e99 100644
--- a/arch/arm64/include/asm/string.h
+++ b/arch/arm64/include/asm/string.h
@@ -35,6 +35,10 @@ extern void *memchr(const void *, int, __kernel_size_t);
 extern void *memcpy(void *, const void *, __kernel_size_t);
 extern void *__memcpy(void *, const void *, __kernel_size_t);
 
+#define __HAVE_ARCH_MEMCPY_MC
+extern int memcpy_mcs(void *, const void *, __kernel_size_t);
+extern int __memcpy_mcs(void *, const void *, __kernel_size_t);
+
 #define __HAVE_ARCH_MEMMOVE
 extern void *memmove(void *, const void *, __kernel_size_t);
 extern void *__memmove(void *, const void *, __kernel_size_t);
@@ -57,6 +61,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt);
  */
 
 #define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memcpy_mcs(dst, src, len) __memcpy_mcs(dst, src, len)
 #define memmove(dst, src, len) __memmove(dst, src, len)
 #define memset(s, c, n) __memset(s, c, n)
 
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 14be5000c5a0..61e28ef2112a 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -425,4 +425,25 @@ static inline size_t probe_subpage_writeable(const char __user *uaddr,
 
 #endif /* CONFIG_ARCH_HAS_SUBPAGE_FAULTS */
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/**
+ * copy_mc_to_kernel - memory copy that handles source exceptions
+ *
+ * @dst:	destination address
+ * @src:	source address
+ * @len:	number of bytes to copy
+ *
+ * Return 0 for success, or #size if there was an exception.
+ */
+static inline unsigned long __must_check
+copy_mc_to_kernel(void *to, const void *from, unsigned long size)
+{
+	int ret;
+
+	ret = memcpy_mcs(to, from, size);
+	return (ret == -EFAULT) ? size : 0;
+}
+#define copy_mc_to_kernel copy_mc_to_kernel
+#endif
+
 #endif /* __ASM_UACCESS_H */
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index a2fd865b816d..899d6ae9698c 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -3,7 +3,7 @@ lib-y		:= clear_user.o delay.o copy_from_user.o		\
 		   copy_to_user.o copy_page.o				\
 		   clear_page.o csum.o insn.o memchr.o memcpy.o		\
 		   memset.o memcmp.o strcmp.o strncmp.o strlen.o	\
-		   strnlen.o strchr.o strrchr.o tishift.o
+		   strnlen.o strchr.o strrchr.o tishift.o memcpy_mc.o
 
 ifeq ($(CONFIG_KERNEL_MODE_NEON), y)
 obj-$(CONFIG_XOR_BLOCKS)	+= xor-neon.o
diff --git a/arch/arm64/lib/memcpy_mc.S b/arch/arm64/lib/memcpy_mc.S
new file mode 100644
index 000000000000..7076b500d154
--- /dev/null
+++ b/arch/arm64/lib/memcpy_mc.S
@@ -0,0 +1,257 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2012-2021, Arm Limited.
+ *
+ * Adapted from the original at:
+ * https://github.com/ARM-software/optimized-routines/blob/afd6244a1f8d9229/string/aarch64/memcpy.S
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+
+/* Assumptions:
+ *
+ * ARMv8-a, AArch64, unaligned accesses.
+ *
+ */
+
+#define L(label) .L ## label
+
+#define dstin	x0
+#define src	x1
+#define count	x2
+#define dst	x3
+#define srcend	x4
+#define dstend	x5
+#define A_l	x6
+#define A_lw	w6
+#define A_h	x7
+#define B_l	x8
+#define B_lw	w8
+#define B_h	x9
+#define C_l	x10
+#define C_lw	w10
+#define C_h	x11
+#define D_l	x12
+#define D_h	x13
+#define E_l	x14
+#define E_h	x15
+#define F_l	x16
+#define F_h	x17
+#define G_l	count
+#define G_h	dst
+#define H_l	src
+#define H_h	srcend
+#define tmp1	x14
+
+/* This implementation handles overlaps and supports both memcpy and memmove
+   from a single entry point.  It uses unaligned accesses and branchless
+   sequences to keep the code small, simple and improve performance.
+
+   Copies are split into 3 main cases: small copies of up to 32 bytes, medium
+   copies of up to 128 bytes, and large copies.  The overhead of the overlap
+   check is negligible since it is only required for large copies.
+
+   Large copies use a software pipelined loop processing 64 bytes per iteration.
+   The destination pointer is 16-byte aligned to minimize unaligned accesses.
+   The loop tail is handled by always copying 64 bytes from the end.
+*/
+
+SYM_FUNC_START(__pi_memcpy_mcs)
+	add	srcend, src, count
+	add	dstend, dstin, count
+	cmp	count, 128
+	b.hi	L(copy_long)
+	cmp	count, 32
+	b.hi	L(copy32_128)
+
+	/* Small copies: 0..32 bytes.  */
+	cmp	count, 16
+	b.lo	L(copy16)
+	CPY_MC(9998f, ldp	A_l, A_h, [src])
+	CPY_MC(9998f, ldp	D_l, D_h, [srcend, -16])
+	CPY_MC(9998f, stp	A_l, A_h, [dstin])
+	CPY_MC(9998f, stp	D_l, D_h, [dstend, -16])
+	mov x0, #0
+	ret
+
+	/* Copy 8-15 bytes.  */
+L(copy16):
+	tbz	count, 3, L(copy8)
+	CPY_MC(9998f, ldr	A_l, [src])
+	CPY_MC(9998f, ldr	A_h, [srcend, -8])
+	CPY_MC(9998f, str	A_l, [dstin])
+	CPY_MC(9998f, str	A_h, [dstend, -8])
+	mov x0, #0
+	ret
+
+	.p2align 3
+	/* Copy 4-7 bytes.  */
+L(copy8):
+	tbz	count, 2, L(copy4)
+	CPY_MC(9998f, ldr	A_lw, [src])
+	CPY_MC(9998f, ldr	B_lw, [srcend, -4])
+	CPY_MC(9998f, str	A_lw, [dstin])
+	CPY_MC(9998f, str	B_lw, [dstend, -4])
+	mov x0, #0
+	ret
+
+	/* Copy 0..3 bytes using a branchless sequence.  */
+L(copy4):
+	cbz	count, L(copy0)
+	lsr	tmp1, count, 1
+	CPY_MC(9998f, ldrb	A_lw, [src])
+	CPY_MC(9998f, ldrb	C_lw, [srcend, -1])
+	CPY_MC(9998f, ldrb	B_lw, [src, tmp1])
+	CPY_MC(9998f, strb	A_lw, [dstin])
+	CPY_MC(9998f, strb	B_lw, [dstin, tmp1])
+	CPY_MC(9998f, strb	C_lw, [dstend, -1])
+L(copy0):
+	mov x0, #0
+	ret
+
+	.p2align 4
+	/* Medium copies: 33..128 bytes.  */
+L(copy32_128):
+	CPY_MC(9998f, ldp	A_l, A_h, [src])
+	CPY_MC(9998f, ldp	B_l, B_h, [src, 16])
+	CPY_MC(9998f, ldp	C_l, C_h, [srcend, -32])
+	CPY_MC(9998f, ldp	D_l, D_h, [srcend, -16])
+	cmp	count, 64
+	b.hi	L(copy128)
+	CPY_MC(9998f, stp	A_l, A_h, [dstin])
+	CPY_MC(9998f, stp	B_l, B_h, [dstin, 16])
+	CPY_MC(9998f, stp	C_l, C_h, [dstend, -32])
+	CPY_MC(9998f, stp	D_l, D_h, [dstend, -16])
+	mov x0, #0
+	ret
+
+	.p2align 4
+	/* Copy 65..128 bytes.  */
+L(copy128):
+	CPY_MC(9998f, ldp	E_l, E_h, [src, 32])
+	CPY_MC(9998f, ldp	F_l, F_h, [src, 48])
+	cmp	count, 96
+	b.ls	L(copy96)
+	CPY_MC(9998f, ldp	G_l, G_h, [srcend, -64])
+	CPY_MC(9998f, ldp	H_l, H_h, [srcend, -48])
+	CPY_MC(9998f, stp	G_l, G_h, [dstend, -64])
+	CPY_MC(9998f, stp	H_l, H_h, [dstend, -48])
+L(copy96):
+	CPY_MC(9998f, stp	A_l, A_h, [dstin])
+	CPY_MC(9998f, stp	B_l, B_h, [dstin, 16])
+	CPY_MC(9998f, stp	E_l, E_h, [dstin, 32])
+	CPY_MC(9998f, stp	F_l, F_h, [dstin, 48])
+	CPY_MC(9998f, stp	C_l, C_h, [dstend, -32])
+	CPY_MC(9998f, stp	D_l, D_h, [dstend, -16])
+	mov x0, #0
+	ret
+
+	.p2align 4
+	/* Copy more than 128 bytes.  */
+L(copy_long):
+	/* Use backwards copy if there is an overlap.  */
+	sub	tmp1, dstin, src
+	cbz	tmp1, L(copy0)
+	cmp	tmp1, count
+	b.lo	L(copy_long_backwards)
+
+	/* Copy 16 bytes and then align dst to 16-byte alignment.  */
+
+	CPY_MC(9998f, ldp	D_l, D_h, [src])
+	and	tmp1, dstin, 15
+	bic	dst, dstin, 15
+	sub	src, src, tmp1
+	add	count, count, tmp1	/* Count is now 16 too large.  */
+	CPY_MC(9998f, ldp	A_l, A_h, [src, 16])
+	CPY_MC(9998f, stp	D_l, D_h, [dstin])
+	CPY_MC(9998f, ldp	B_l, B_h, [src, 32])
+	CPY_MC(9998f, ldp	C_l, C_h, [src, 48])
+	CPY_MC(9998f, ldp	D_l, D_h, [src, 64]!)
+	subs	count, count, 128 + 16	/* Test and readjust count.  */
+	b.ls	L(copy64_from_end)
+
+L(loop64):
+	CPY_MC(9998f, stp	A_l, A_h, [dst, 16])
+	CPY_MC(9998f, ldp	A_l, A_h, [src, 16])
+	CPY_MC(9998f, stp	B_l, B_h, [dst, 32])
+	CPY_MC(9998f, ldp	B_l, B_h, [src, 32])
+	CPY_MC(9998f, stp	C_l, C_h, [dst, 48])
+	CPY_MC(9998f, ldp	C_l, C_h, [src, 48])
+	CPY_MC(9998f, stp	D_l, D_h, [dst, 64]!)
+	CPY_MC(9998f, ldp	D_l, D_h, [src, 64]!)
+	subs	count, count, 64
+	b.hi	L(loop64)
+
+	/* Write the last iteration and copy 64 bytes from the end.  */
+L(copy64_from_end):
+	CPY_MC(9998f, ldp	E_l, E_h, [srcend, -64])
+	CPY_MC(9998f, stp	A_l, A_h, [dst, 16])
+	CPY_MC(9998f, ldp	A_l, A_h, [srcend, -48])
+	CPY_MC(9998f, stp	B_l, B_h, [dst, 32])
+	CPY_MC(9998f, ldp	B_l, B_h, [srcend, -32])
+	CPY_MC(9998f, stp	C_l, C_h, [dst, 48])
+	CPY_MC(9998f, ldp	C_l, C_h, [srcend, -16])
+	CPY_MC(9998f, stp	D_l, D_h, [dst, 64])
+	CPY_MC(9998f, stp	E_l, E_h, [dstend, -64])
+	CPY_MC(9998f, stp	A_l, A_h, [dstend, -48])
+	CPY_MC(9998f, stp	B_l, B_h, [dstend, -32])
+	CPY_MC(9998f, stp	C_l, C_h, [dstend, -16])
+	mov x0, #0
+	ret
+
+	.p2align 4
+
+	/* Large backwards copy for overlapping copies.
+	   Copy 16 bytes and then align dst to 16-byte alignment.  */
+L(copy_long_backwards):
+	CPY_MC(9998f, ldp	D_l, D_h, [srcend, -16])
+	and	tmp1, dstend, 15
+	sub	srcend, srcend, tmp1
+	sub	count, count, tmp1
+	CPY_MC(9998f, ldp	A_l, A_h, [srcend, -16])
+	CPY_MC(9998f, stp	D_l, D_h, [dstend, -16])
+	CPY_MC(9998f, ldp	B_l, B_h, [srcend, -32])
+	CPY_MC(9998f, ldp	C_l, C_h, [srcend, -48])
+	CPY_MC(9998f, ldp	D_l, D_h, [srcend, -64]!)
+	sub	dstend, dstend, tmp1
+	subs	count, count, 128
+	b.ls	L(copy64_from_start)
+
+L(loop64_backwards):
+	CPY_MC(9998f, stp	A_l, A_h, [dstend, -16])
+	CPY_MC(9998f, ldp	A_l, A_h, [srcend, -16])
+	CPY_MC(9998f, stp	B_l, B_h, [dstend, -32])
+	CPY_MC(9998f, ldp	B_l, B_h, [srcend, -32])
+	CPY_MC(9998f, stp	C_l, C_h, [dstend, -48])
+	CPY_MC(9998f, ldp	C_l, C_h, [srcend, -48])
+	CPY_MC(9998f, stp	D_l, D_h, [dstend, -64]!)
+	CPY_MC(9998f, ldp	D_l, D_h, [srcend, -64]!)
+	subs	count, count, 64
+	b.hi	L(loop64_backwards)
+
+	/* Write the last iteration and copy 64 bytes from the start.  */
+L(copy64_from_start):
+	CPY_MC(9998f, ldp	G_l, G_h, [src, 48])
+	CPY_MC(9998f, stp	A_l, A_h, [dstend, -16])
+	CPY_MC(9998f, ldp	A_l, A_h, [src, 32])
+	CPY_MC(9998f, stp	B_l, B_h, [dstend, -32])
+	CPY_MC(9998f, ldp	B_l, B_h, [src, 16])
+	CPY_MC(9998f, stp	C_l, C_h, [dstend, -48])
+	CPY_MC(9998f, ldp	C_l, C_h, [src])
+	CPY_MC(9998f, stp	D_l, D_h, [dstend, -64])
+	CPY_MC(9998f, stp	G_l, G_h, [dstin, 48])
+	CPY_MC(9998f, stp	A_l, A_h, [dstin, 32])
+	CPY_MC(9998f, stp	B_l, B_h, [dstin, 16])
+	CPY_MC(9998f, stp	C_l, C_h, [dstin])
+	mov x0, #0
+	ret
+
+9998:	mov x0, #-EFAULT
+	ret
+SYM_FUNC_END(__pi_memcpy_mcs)
+
+SYM_FUNC_ALIAS(__memcpy_mcs, __pi_memcpy_mcs)
+EXPORT_SYMBOL(__memcpy_mcs)
+SYM_FUNC_ALIAS_WEAK(memcpy_mcs, __memcpy_mcs)
+EXPORT_SYMBOL(memcpy_mcs)
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 9ef84f31833f..e6519fd329b2 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -79,6 +79,18 @@ void *memcpy(void *dest, const void *src, size_t len)
 }
 #endif
 
+#ifdef __HAVE_ARCH_MEMCPY_MC
+#undef memcpy_mcs
+int memcpy_mcs(void *dest, const void *src, size_t len)
+{
+	if (!check_memory_region((unsigned long)src, len, false, _RET_IP_) ||
+	    !check_memory_region((unsigned long)dest, len, true, _RET_IP_))
+		return (unsigned long)len;
+
+	return __memcpy_mcs(dest, src, len);
+}
+#endif
+
 void *__asan_memset(void *addr, int c, ssize_t len)
 {
 	if (!kasan_check_range(addr, len, true, _RET_IP_))
-- 
2.25.1




* Re: [PATCH v10 3/6] arm64: add uaccess to machine check safe
  2024-01-29 13:46 ` [PATCH v10 3/6] arm64: add uaccess to machine check safe Tong Tiangen
@ 2024-01-29 17:43   ` Mark Rutland
  2024-01-30 11:14     ` Tong Tiangen
  0 siblings, 1 reply; 20+ messages in thread
From: Mark Rutland @ 2024-01-29 17:43 UTC (permalink / raw)
  To: Tong Tiangen
  Cc: Catalin Marinas, Will Deacon, James Morse, Robin Murphy,
	Andrey Ryabinin, Alexander Potapenko, Alexander Viro,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
	Andrew Morton, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, linux-arm-kernel, linux-mm, linuxppc-dev,
	linux-kernel, kasan-dev, wangkefeng.wang, Guohanjun

On Mon, Jan 29, 2024 at 09:46:49PM +0800, Tong Tiangen wrote:
> If user process access memory fails due to hardware memory error, only the
> relevant processes are affected, so it is more reasonable to kill the user
> process and isolate the corrupt page than to panic the kernel.
> 
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> ---
>  arch/arm64/lib/copy_from_user.S | 10 +++++-----
>  arch/arm64/lib/copy_to_user.S   | 10 +++++-----
>  arch/arm64/mm/extable.c         |  8 ++++----
>  3 files changed, 14 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
> index 34e317907524..1bf676e9201d 100644
> --- a/arch/arm64/lib/copy_from_user.S
> +++ b/arch/arm64/lib/copy_from_user.S
> @@ -25,7 +25,7 @@
>  	.endm
>  
>  	.macro strb1 reg, ptr, val
> -	strb \reg, [\ptr], \val
> +	USER(9998f, strb \reg, [\ptr], \val)
>  	.endm

This is a store to *kernel* memory, not user memory. It should not be marked
with USER().

I understand that you *might* want to handle memory errors on these stores, but
the commit message doesn't describe that and the associated trade-off. For
example, consider that when a copy_from_user fails we'll try to zero the
remaining buffer via memset(); so if a STR* instruction in copy_from_user
faulted, upon handling the fault we'll immediately try to fix that up with some
more stores which will also fault, but won't get fixed up, leading to a panic()
anyway...

Further, this change will also silently fixup unexpected kernel faults if we
pass bad kernel pointers to copy_{to,from}_user, which will hide real bugs.

So NAK to this change as-is; likewise for the addition of USER() to the other
str* macros in copy_from_user.S and the addition of USER() to the ldr* macros
in copy_to_user.S.

If we want to handle memory errors on some kaccesses, we need a new EX_TYPE_*
separate from the usual EX_TYPE_KACCESS_ERR_ZERO that means "handle memory
errors, but treat other faults as fatal". That should come with a rationale and
explanation of why it's actually useful.

[...]

> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
> index 478e639f8680..28ec35e3d210 100644
> --- a/arch/arm64/mm/extable.c
> +++ b/arch/arm64/mm/extable.c
> @@ -85,10 +85,10 @@ bool fixup_exception_mc(struct pt_regs *regs)
>  	if (!ex)
>  		return false;
>  
> -	/*
> -	 * This is not complete, More Machine check safe extable type can
> -	 * be processed here.
> -	 */
> +	switch (ex->type) {
> +	case EX_TYPE_UACCESS_ERR_ZERO:
> +		return ex_handler_uaccess_err_zero(ex, regs);
> +	}

Please fold this part into the prior patch, and start off with *only* handling
errors on accesses already marked with EX_TYPE_UACCESS_ERR_ZERO. I think that
change would be relatively uncontroversial, and it would be much easier to
build atop that.

Thanks,
Mark.



* Re: [PATCH v10 2/6] arm64: add support for machine check error safe
  2024-01-29 13:46 ` [PATCH v10 2/6] arm64: add support for machine check error safe Tong Tiangen
@ 2024-01-29 17:51   ` Mark Rutland
  2024-01-30 10:57     ` Tong Tiangen
  0 siblings, 1 reply; 20+ messages in thread
From: Mark Rutland @ 2024-01-29 17:51 UTC (permalink / raw)
  To: Tong Tiangen
  Cc: Catalin Marinas, Will Deacon, James Morse, Robin Murphy,
	Andrey Ryabinin, Alexander Potapenko, Alexander Viro,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
	Andrew Morton, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, linux-arm-kernel, linux-mm, linuxppc-dev,
	linux-kernel, kasan-dev, wangkefeng.wang, Guohanjun

On Mon, Jan 29, 2024 at 09:46:48PM +0800, Tong Tiangen wrote:
> For the arm64 kernel, when it processes hardware memory errors for
> synchronize notifications(do_sea()), if the errors is consumed within the
> kernel, the current processing is panic. However, it is not optimal.
> 
> Take uaccess for example, if the uaccess operation fails due to memory
> error, only the user process will be affected. Killing the user process and
> isolating the corrupt page is a better choice.
> 
> This patch only enable machine error check framework and adds an exception
> fixup before the kernel panic in do_sea().
> 
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> ---
>  arch/arm64/Kconfig               |  1 +
>  arch/arm64/include/asm/extable.h |  1 +
>  arch/arm64/mm/extable.c          | 16 ++++++++++++++++
>  arch/arm64/mm/fault.c            | 29 ++++++++++++++++++++++++++++-
>  4 files changed, 46 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index aa7c1d435139..2cc34b5e7abb 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -20,6 +20,7 @@ config ARM64
>  	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
>  	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
>  	select ARCH_HAS_CACHE_LINE_SIZE
> +	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
>  	select ARCH_HAS_CURRENT_STACK_POINTER
>  	select ARCH_HAS_DEBUG_VIRTUAL
>  	select ARCH_HAS_DEBUG_VM_PGTABLE
> diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
> index 72b0e71cc3de..f80ebd0addfd 100644
> --- a/arch/arm64/include/asm/extable.h
> +++ b/arch/arm64/include/asm/extable.h
> @@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
>  #endif /* !CONFIG_BPF_JIT */
>  
>  bool fixup_exception(struct pt_regs *regs);
> +bool fixup_exception_mc(struct pt_regs *regs);
>  #endif
> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
> index 228d681a8715..478e639f8680 100644
> --- a/arch/arm64/mm/extable.c
> +++ b/arch/arm64/mm/extable.c
> @@ -76,3 +76,19 @@ bool fixup_exception(struct pt_regs *regs)
>  
>  	BUG();
>  }
> +
> +bool fixup_exception_mc(struct pt_regs *regs)

Can we please replace 'mc' with something like 'memory_error' ?

There's no "machine check" on arm64, and 'mc' is opaque regardless.

> +{
> +	const struct exception_table_entry *ex;
> +
> +	ex = search_exception_tables(instruction_pointer(regs));
> +	if (!ex)
> +		return false;
> +
> +	/*
> +	 * This is not complete, More Machine check safe extable type can
> +	 * be processed here.
> +	 */
> +
> +	return false;
> +}

As with my comment on the subsequent patch, I'd much prefer that we handle
EX_TYPE_UACCESS_ERR_ZERO from the outset.



> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 55f6455a8284..312932dc100b 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -730,6 +730,31 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
>  	return 1; /* "fault" */
>  }
>  
> +static bool arm64_do_kernel_sea(unsigned long addr, unsigned int esr,
> +				     struct pt_regs *regs, int sig, int code)
> +{
> +	if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
> +		return false;
> +
> +	if (user_mode(regs))
> +		return false;

This function is called "arm64_do_kernel_sea"; surely the caller should *never*
call this for a SEA taken from user mode?

> +
> +	if (apei_claim_sea(regs) < 0)
> +		return false;
> +
> +	if (!fixup_exception_mc(regs))
> +		return false;
> +
> +	if (current->flags & PF_KTHREAD)
> +		return true;

I think this needs a comment; why do we allow kthreads to go on, yet kill user
threads? What about helper threads (e.g. for io_uring)?

> +
> +	set_thread_esr(0, esr);

Why do we set the ESR to 0?

Mark.

> +	arm64_force_sig_fault(sig, code, addr,
> +		"Uncorrected memory error on access to user memory\n");
> +
> +	return true;
> +}
> +
>  static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
>  {
>  	const struct fault_info *inf;
> @@ -755,7 +780,9 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
>  		 */
>  		siaddr  = untagged_addr(far);
>  	}
> -	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
> +
> +	if (!arm64_do_kernel_sea(siaddr, esr, regs, inf->sig, inf->code))
> +		arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
>  
>  	return 0;
>  }
> -- 
> 2.25.1
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v10 5/6] arm64: support copy_mc_[user]_highpage()
  2024-01-29 13:46 ` [PATCH v10 5/6] arm64: support copy_mc_[user]_highpage() Tong Tiangen
@ 2024-01-29 20:45   ` Andrey Konovalov
  2024-01-30 10:31   ` Mark Rutland
  1 sibling, 0 replies; 20+ messages in thread
From: Andrey Konovalov @ 2024-01-29 20:45 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Robin Murphy, Andrey Ryabinin, Alexander Potapenko,
	Alexander Viro, Dmitry Vyukov, Vincenzo Frascino, Andrew Morton,
	Michael Ellerman, Nicholas Piggin, Christophe Leroy,
	Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
	kasan-dev, wangkefeng.wang, Guohanjun, Tong Tiangen

On Mon, Jan 29, 2024 at 2:47 PM Tong Tiangen <tongtiangen@huawei.com> wrote:
>
> Currently, many scenarios that can tolerate memory errors when copying a
> page have been supported in the kernel[1][2][3], all of which are implemented
> by copy_mc_[user]_highpage(). arm64 should also support this mechanism.
>
> Due to MTE, arm64 needs to have its own copy_mc_[user]_highpage()
> architecture implementation; the macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
> __HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control it.
>
> Add a new helper copy_mc_page() which provides a machine-check-safe page
> copy implementation. copy_mc_page() in copy_mc_page.S largely borrows from
> copy_page() in copy_page.S; the main difference is that copy_mc_page() adds
> an extable entry to every load/store insn to support machine check safety.
>
> Add a new extable type EX_TYPE_COPY_MC_PAGE_ERR_ZERO which is used in
> copy_mc_page().
>
> [1]a873dfe1032a ("mm, hwpoison: try to recover from copy-on write faults")
> [2]5f2500b93cc9 ("mm/khugepaged: recover from poisoned anonymous memory")
> [3]6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
>
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> ---
>  arch/arm64/include/asm/asm-extable.h | 15 ++++++
>  arch/arm64/include/asm/assembler.h   |  4 ++
>  arch/arm64/include/asm/mte.h         |  5 ++
>  arch/arm64/include/asm/page.h        | 10 ++++
>  arch/arm64/lib/Makefile              |  2 +
>  arch/arm64/lib/copy_mc_page.S        | 78 ++++++++++++++++++++++++++++
>  arch/arm64/lib/mte.S                 | 27 ++++++++++
>  arch/arm64/mm/copypage.c             | 66 ++++++++++++++++++++---
>  arch/arm64/mm/extable.c              |  7 +--
>  include/linux/highmem.h              |  8 +++
>  10 files changed, 213 insertions(+), 9 deletions(-)
>  create mode 100644 arch/arm64/lib/copy_mc_page.S
>
> diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
> index 980d1dd8e1a3..819044fefbe7 100644
> --- a/arch/arm64/include/asm/asm-extable.h
> +++ b/arch/arm64/include/asm/asm-extable.h
> @@ -10,6 +10,7 @@
>  #define EX_TYPE_UACCESS_ERR_ZERO       2
>  #define EX_TYPE_KACCESS_ERR_ZERO       3
>  #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
> +#define EX_TYPE_COPY_MC_PAGE_ERR_ZERO  5
>
>  /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
>  #define EX_DATA_REG_ERR_SHIFT  0
> @@ -51,6 +52,16 @@
>  #define _ASM_EXTABLE_UACCESS(insn, fixup)                              \
>         _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
>
> +#define _ASM_EXTABLE_COPY_MC_PAGE_ERR_ZERO(insn, fixup, err, zero)     \
> +       __ASM_EXTABLE_RAW(insn, fixup,                                  \
> +                         EX_TYPE_COPY_MC_PAGE_ERR_ZERO,                \
> +                         (                                             \
> +                           EX_DATA_REG(ERR, err) |                     \
> +                           EX_DATA_REG(ZERO, zero)                     \
> +                         ))
> +
> +#define _ASM_EXTABLE_COPY_MC_PAGE(insn, fixup)                         \
> +       _ASM_EXTABLE_COPY_MC_PAGE_ERR_ZERO(insn, fixup, wzr, wzr)
>  /*
>   * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
>   * when an unhandled fault is taken.
> @@ -59,6 +70,10 @@
>         _ASM_EXTABLE_UACCESS(\insn, \fixup)
>         .endm
>
> +       .macro          _asm_extable_copy_mc_page, insn, fixup
> +       _ASM_EXTABLE_COPY_MC_PAGE(\insn, \fixup)
> +       .endm
> +
>  /*
>   * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
>   * do nothing.
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index 513787e43329..e1d8ce155878 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -154,6 +154,10 @@ lr .req    x30             // link register
>  #define CPU_LE(code...) code
>  #endif
>
> +#define CPY_MC(l, x...)                \
> +9999:   x;                     \
> +       _asm_extable_copy_mc_page    9999b, l
> +
>  /*
>   * Define a macro that constructs a 64-bit value by concatenating two
>   * 32-bit registers. Note that on big endian systems the order of the
> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
> index 91fbd5c8a391..9cdded082dd4 100644
> --- a/arch/arm64/include/asm/mte.h
> +++ b/arch/arm64/include/asm/mte.h
> @@ -92,6 +92,7 @@ static inline bool try_page_mte_tagging(struct page *page)
>  void mte_zero_clear_page_tags(void *addr);
>  void mte_sync_tags(pte_t pte, unsigned int nr_pages);
>  void mte_copy_page_tags(void *kto, const void *kfrom);
> +int mte_copy_mc_page_tags(void *kto, const void *kfrom);
>  void mte_thread_init_user(void);
>  void mte_thread_switch(struct task_struct *next);
>  void mte_cpu_setup(void);
> @@ -128,6 +129,10 @@ static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages)
>  static inline void mte_copy_page_tags(void *kto, const void *kfrom)
>  {
>  }
> +static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom)
> +{
> +       return 0;
> +}
>  static inline void mte_thread_init_user(void)
>  {
>  }
> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
> index 2312e6ee595f..304cc86b8a10 100644
> --- a/arch/arm64/include/asm/page.h
> +++ b/arch/arm64/include/asm/page.h
> @@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
>  void copy_highpage(struct page *to, struct page *from);
>  #define __HAVE_ARCH_COPY_HIGHPAGE
>
> +#ifdef CONFIG_ARCH_HAS_COPY_MC
> +int copy_mc_page(void *to, const void *from);
> +int copy_mc_highpage(struct page *to, struct page *from);
> +#define __HAVE_ARCH_COPY_MC_HIGHPAGE
> +
> +int copy_mc_user_highpage(struct page *to, struct page *from,
> +               unsigned long vaddr, struct vm_area_struct *vma);
> +#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
> +#endif
> +
>  struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>                                                 unsigned long vaddr);
>  #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
> diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
> index 29490be2546b..a2fd865b816d 100644
> --- a/arch/arm64/lib/Makefile
> +++ b/arch/arm64/lib/Makefile
> @@ -15,6 +15,8 @@ endif
>
>  lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
>
> +lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
> +
>  obj-$(CONFIG_CRC32) += crc32.o
>
>  obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
> diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
> new file mode 100644
> index 000000000000..524534d26d86
> --- /dev/null
> +++ b/arch/arm64/lib/copy_mc_page.S
> @@ -0,0 +1,78 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Copyright (C) 2012 ARM Ltd.
> + */
> +
> +#include <linux/linkage.h>
> +#include <linux/const.h>
> +#include <asm/assembler.h>
> +#include <asm/page.h>
> +#include <asm/cpufeature.h>
> +#include <asm/alternative.h>
> +#include <asm/asm-extable.h>
> +
> +/*
> + * Copy a page from src to dest (both are page aligned) with machine check
> + *
> + * Parameters:
> + *     x0 - dest
> + *     x1 - src
> + * Returns:
> + *     x0 - Return 0 if copy success, or -EFAULT if anything goes wrong
> + *          while copying.
> + */
> +SYM_FUNC_START(__pi_copy_mc_page)
> +CPY_MC(9998f, ldp      x2, x3, [x1])
> +CPY_MC(9998f, ldp      x4, x5, [x1, #16])
> +CPY_MC(9998f, ldp      x6, x7, [x1, #32])
> +CPY_MC(9998f, ldp      x8, x9, [x1, #48])
> +CPY_MC(9998f, ldp      x10, x11, [x1, #64])
> +CPY_MC(9998f, ldp      x12, x13, [x1, #80])
> +CPY_MC(9998f, ldp      x14, x15, [x1, #96])
> +CPY_MC(9998f, ldp      x16, x17, [x1, #112])
> +
> +       add     x0, x0, #256
> +       add     x1, x1, #128
> +1:
> +       tst     x0, #(PAGE_SIZE - 1)
> +
> +CPY_MC(9998f, stnp     x2, x3, [x0, #-256])
> +CPY_MC(9998f, ldp      x2, x3, [x1])
> +CPY_MC(9998f, stnp     x4, x5, [x0, #16 - 256])
> +CPY_MC(9998f, ldp      x4, x5, [x1, #16])
> +CPY_MC(9998f, stnp     x6, x7, [x0, #32 - 256])
> +CPY_MC(9998f, ldp      x6, x7, [x1, #32])
> +CPY_MC(9998f, stnp     x8, x9, [x0, #48 - 256])
> +CPY_MC(9998f, ldp      x8, x9, [x1, #48])
> +CPY_MC(9998f, stnp     x10, x11, [x0, #64 - 256])
> +CPY_MC(9998f, ldp      x10, x11, [x1, #64])
> +CPY_MC(9998f, stnp     x12, x13, [x0, #80 - 256])
> +CPY_MC(9998f, ldp      x12, x13, [x1, #80])
> +CPY_MC(9998f, stnp     x14, x15, [x0, #96 - 256])
> +CPY_MC(9998f, ldp      x14, x15, [x1, #96])
> +CPY_MC(9998f, stnp     x16, x17, [x0, #112 - 256])
> +CPY_MC(9998f, ldp      x16, x17, [x1, #112])
> +
> +       add     x0, x0, #128
> +       add     x1, x1, #128
> +
> +       b.ne    1b
> +
> +CPY_MC(9998f, stnp     x2, x3, [x0, #-256])
> +CPY_MC(9998f, stnp     x4, x5, [x0, #16 - 256])
> +CPY_MC(9998f, stnp     x6, x7, [x0, #32 - 256])
> +CPY_MC(9998f, stnp     x8, x9, [x0, #48 - 256])
> +CPY_MC(9998f, stnp     x10, x11, [x0, #64 - 256])
> +CPY_MC(9998f, stnp     x12, x13, [x0, #80 - 256])
> +CPY_MC(9998f, stnp     x14, x15, [x0, #96 - 256])
> +CPY_MC(9998f, stnp     x16, x17, [x0, #112 - 256])
> +
> +       mov x0, #0
> +       ret
> +
> +9998:  mov x0, #-EFAULT
> +       ret
> +
> +SYM_FUNC_END(__pi_copy_mc_page)
> +SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page)
> +EXPORT_SYMBOL(copy_mc_page)
> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
> index 5018ac03b6bf..2b748e83f6cf 100644
> --- a/arch/arm64/lib/mte.S
> +++ b/arch/arm64/lib/mte.S
> @@ -80,6 +80,33 @@ SYM_FUNC_START(mte_copy_page_tags)
>         ret
>  SYM_FUNC_END(mte_copy_page_tags)
>
> +/*
> + * Copy the tags from the source page to the destination one with machine check safe
> + *   x0 - address of the destination page
> + *   x1 - address of the source page
> + * Returns:
> + *   x0 - Return 0 if copy success, or
> + *        -EFAULT if anything goes wrong while copying.
> + */
> +SYM_FUNC_START(mte_copy_mc_page_tags)
> +       mov     x2, x0
> +       mov     x3, x1
> +       multitag_transfer_size x5, x6
> +1:
> +CPY_MC(2f, ldgm        x4, [x3])
> +CPY_MC(2f, stgm        x4, [x2])
> +       add     x2, x2, x5
> +       add     x3, x3, x5
> +       tst     x2, #(PAGE_SIZE - 1)
> +       b.ne    1b
> +
> +       mov x0, #0
> +       ret
> +
> +2:     mov x0, #-EFAULT
> +       ret
> +SYM_FUNC_END(mte_copy_mc_page_tags)
> +
>  /*
>   * Read tags from a user buffer (one tag per byte) and set the corresponding
>   * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
> index a7bb20055ce0..9765e40cde6c 100644
> --- a/arch/arm64/mm/copypage.c
> +++ b/arch/arm64/mm/copypage.c
> @@ -14,6 +14,25 @@
>  #include <asm/cpufeature.h>
>  #include <asm/mte.h>
>
> +static int do_mte(struct page *to, struct page *from, void *kto, void *kfrom, bool mc)
> +{
> +       int ret = 0;
> +
> +       if (system_supports_mte() && page_mte_tagged(from)) {
> +               /* It's a new page, shouldn't have been tagged yet */
> +               WARN_ON_ONCE(!try_page_mte_tagging(to));
> +               if (mc)
> +                       ret = mte_copy_mc_page_tags(kto, kfrom);
> +               else
> +                       mte_copy_page_tags(kto, kfrom);
> +
> +               if (!ret)
> +                       set_page_mte_tagged(to);
> +       }
> +
> +       return ret;
> +}
> +
>  void copy_highpage(struct page *to, struct page *from)
>  {
>         void *kto = page_address(to);
> @@ -24,12 +43,7 @@ void copy_highpage(struct page *to, struct page *from)
>         if (kasan_hw_tags_enabled())
>                 page_kasan_tag_reset(to);
>
> -       if (system_supports_mte() && page_mte_tagged(from)) {
> -               /* It's a new page, shouldn't have been tagged yet */
> -               WARN_ON_ONCE(!try_page_mte_tagging(to));
> -               mte_copy_page_tags(kto, kfrom);
> -               set_page_mte_tagged(to);
> -       }
> +       do_mte(to, from, kto, kfrom, false);
>  }
>  EXPORT_SYMBOL(copy_highpage);
>
> @@ -40,3 +54,43 @@ void copy_user_highpage(struct page *to, struct page *from,
>         flush_dcache_page(to);
>  }
>  EXPORT_SYMBOL_GPL(copy_user_highpage);
> +
> +#ifdef CONFIG_ARCH_HAS_COPY_MC
> +/*
> + * Return -EFAULT if anything goes wrong while copying page or mte.
> + */
> +int copy_mc_highpage(struct page *to, struct page *from)
> +{
> +       void *kto = page_address(to);
> +       void *kfrom = page_address(from);
> +       int ret;
> +
> +       ret = copy_mc_page(kto, kfrom);
> +       if (ret)
> +               return -EFAULT;
> +
> +       if (kasan_hw_tags_enabled())
> +               page_kasan_tag_reset(to);
> +
> +       ret = do_mte(to, from, kto, kfrom, true);
> +       if (ret)
> +               return -EFAULT;
> +
> +       return 0;
> +}
> +EXPORT_SYMBOL(copy_mc_highpage);
> +
> +int copy_mc_user_highpage(struct page *to, struct page *from,
> +                       unsigned long vaddr, struct vm_area_struct *vma)
> +{
> +       int ret;
> +
> +       ret = copy_mc_highpage(to, from);
> +
> +       if (!ret)
> +               flush_dcache_page(to);
> +
> +       return ret;
> +}
> +EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
> +#endif
> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
> index 28ec35e3d210..bdc81518d207 100644
> --- a/arch/arm64/mm/extable.c
> +++ b/arch/arm64/mm/extable.c
> @@ -16,7 +16,7 @@ get_ex_fixup(const struct exception_table_entry *ex)
>         return ((unsigned long)&ex->fixup + ex->fixup);
>  }
>
> -static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
> +static bool ex_handler_fixup_err_zero(const struct exception_table_entry *ex,
>                                         struct pt_regs *regs)
>  {
>         int reg_err = FIELD_GET(EX_DATA_REG_ERR, ex->data);
> @@ -69,7 +69,7 @@ bool fixup_exception(struct pt_regs *regs)
>                 return ex_handler_bpf(ex, regs);
>         case EX_TYPE_UACCESS_ERR_ZERO:
>         case EX_TYPE_KACCESS_ERR_ZERO:
> -               return ex_handler_uaccess_err_zero(ex, regs);
> +               return ex_handler_fixup_err_zero(ex, regs);
>         case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
>                 return ex_handler_load_unaligned_zeropad(ex, regs);
>         }
> @@ -87,7 +87,8 @@ bool fixup_exception_mc(struct pt_regs *regs)
>
>         switch (ex->type) {
>         case EX_TYPE_UACCESS_ERR_ZERO:
> -               return ex_handler_uaccess_err_zero(ex, regs);
> +       case EX_TYPE_COPY_MC_PAGE_ERR_ZERO:
> +               return ex_handler_fixup_err_zero(ex, regs);
>         }
>
>         return false;
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index c5ca1a1fc4f5..a42470ca42f2 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -332,6 +332,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
>  #endif
>
>  #ifdef copy_mc_to_kernel
> +#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
>  /*
>   * If architecture supports machine check exception handling, define the
>   * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
> @@ -354,7 +355,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
>
>         return ret ? -EFAULT : 0;
>  }
> +#endif
>
> +#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
>  static inline int copy_mc_highpage(struct page *to, struct page *from)
>  {
>         unsigned long ret;
> @@ -370,20 +373,25 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
>
>         return ret ? -EFAULT : 0;
>  }
> +#endif
>  #else
> +#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
>  static inline int copy_mc_user_highpage(struct page *to, struct page *from,
>                                         unsigned long vaddr, struct vm_area_struct *vma)
>  {
>         copy_user_highpage(to, from, vaddr, vma);
>         return 0;
>  }
> +#endif
>
> +#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
>  static inline int copy_mc_highpage(struct page *to, struct page *from)
>  {
>         copy_highpage(to, from);
>         return 0;
>  }
>  #endif
> +#endif
>
>  static inline void memcpy_page(struct page *dst_page, size_t dst_off,
>                                struct page *src_page, size_t src_off,
> --
> 2.25.1
>

+Peter


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v10 6/6] arm64: introduce copy_mc_to_kernel() implementation
  2024-01-29 13:46 ` [PATCH v10 6/6] arm64: introduce copy_mc_to_kernel() implementation Tong Tiangen
@ 2024-01-30 10:20   ` Mark Rutland
  2024-01-30 13:56     ` Tong Tiangen
  0 siblings, 1 reply; 20+ messages in thread
From: Mark Rutland @ 2024-01-30 10:20 UTC (permalink / raw)
  To: Tong Tiangen
  Cc: Catalin Marinas, Will Deacon, James Morse, Robin Murphy,
	Andrey Ryabinin, Alexander Potapenko, Alexander Viro,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
	Andrew Morton, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, linux-arm-kernel, linux-mm, linuxppc-dev,
	linux-kernel, kasan-dev, wangkefeng.wang, Guohanjun

On Mon, Jan 29, 2024 at 09:46:52PM +0800, Tong Tiangen wrote:
> The copy_mc_to_kernel() helper is a memory copy implementation that handles
> source exceptions. It can be used in memory copy scenarios that tolerate
> hardware memory errors (e.g. pmem_read/dax_copy_to_iter).
> 
> Currently, only x86 and ppc support this helper. Now that arm64 supports the
> machine check safe framework, we introduce a copy_mc_to_kernel()
> implementation.
> 
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> ---
>  arch/arm64/include/asm/string.h  |   5 +
>  arch/arm64/include/asm/uaccess.h |  21 +++
>  arch/arm64/lib/Makefile          |   2 +-
>  arch/arm64/lib/memcpy_mc.S       | 257 +++++++++++++++++++++++++++++++
>  mm/kasan/shadow.c                |  12 ++
>  5 files changed, 296 insertions(+), 1 deletion(-)
>  create mode 100644 arch/arm64/lib/memcpy_mc.S

Looking at the diffstat and code, this duplicates arch/arm64/lib/memcpy.S with
a few annotations. Duplicating that code is not maintainable, and so we cannot
take this as-is.

If you want a version that can handle faults, that *must* be written such that
the code is shared with the regular memcpy. That could be done by using macros
to instantiate two copies (one with fault handling, the other without).
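
Sketching that idea only (the macro and file names here are invented): the
shared body would live in one template, with every load/store wrapped in a
helper that expands either to the bare instruction or to a CPY_MC()-style
annotated one, e.g.:

	/* in a shared template included by both memcpy.S and memcpy_mc.S */
	#ifdef COPY_MC
	#define ACC(label, insn...)	CPY_MC(label, insn)	/* with extable entry */
	#else
	#define ACC(label, insn...)	insn			/* plain access */
	#endif

so the ~250 lines of copy logic exist only once.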

It would also be very helpful to see *any* indication that this has been
tested, which is sorely lacking in the series as-is.

Mark.

> diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h
> index 3a3264ff47b9..995b63c26e99 100644
> --- a/arch/arm64/include/asm/string.h
> +++ b/arch/arm64/include/asm/string.h
> @@ -35,6 +35,10 @@ extern void *memchr(const void *, int, __kernel_size_t);
>  extern void *memcpy(void *, const void *, __kernel_size_t);
>  extern void *__memcpy(void *, const void *, __kernel_size_t);
>  
> +#define __HAVE_ARCH_MEMCPY_MC
> +extern int memcpy_mcs(void *, const void *, __kernel_size_t);
> +extern int __memcpy_mcs(void *, const void *, __kernel_size_t);
> +
>  #define __HAVE_ARCH_MEMMOVE
>  extern void *memmove(void *, const void *, __kernel_size_t);
>  extern void *__memmove(void *, const void *, __kernel_size_t);
> @@ -57,6 +61,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt);
>   */
>  
>  #define memcpy(dst, src, len) __memcpy(dst, src, len)
> +#define memcpy_mcs(dst, src, len) __memcpy_mcs(dst, src, len)
>  #define memmove(dst, src, len) __memmove(dst, src, len)
>  #define memset(s, c, n) __memset(s, c, n)
>  
> diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
> index 14be5000c5a0..61e28ef2112a 100644
> --- a/arch/arm64/include/asm/uaccess.h
> +++ b/arch/arm64/include/asm/uaccess.h
> @@ -425,4 +425,25 @@ static inline size_t probe_subpage_writeable(const char __user *uaddr,
>  
>  #endif /* CONFIG_ARCH_HAS_SUBPAGE_FAULTS */
>  
> +#ifdef CONFIG_ARCH_HAS_COPY_MC
> +/**
> + * copy_mc_to_kernel - memory copy that handles source exceptions
> + *
> + * @dst:	destination address
> + * @src:	source address
> + * @len:	number of bytes to copy
> + *
> + * Return 0 for success, or #size if there was an exception.
> + */
> +static inline unsigned long __must_check
> +copy_mc_to_kernel(void *to, const void *from, unsigned long size)
> +{
> +	int ret;
> +
> +	ret = memcpy_mcs(to, from, size);
> +	return (ret == -EFAULT) ? size : 0;
> +}
> +#define copy_mc_to_kernel copy_mc_to_kernel
> +#endif
> +
>  #endif /* __ASM_UACCESS_H */
> diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
> index a2fd865b816d..899d6ae9698c 100644
> --- a/arch/arm64/lib/Makefile
> +++ b/arch/arm64/lib/Makefile
> @@ -3,7 +3,7 @@ lib-y		:= clear_user.o delay.o copy_from_user.o		\
>  		   copy_to_user.o copy_page.o				\
>  		   clear_page.o csum.o insn.o memchr.o memcpy.o		\
>  		   memset.o memcmp.o strcmp.o strncmp.o strlen.o	\
> -		   strnlen.o strchr.o strrchr.o tishift.o
> +		   strnlen.o strchr.o strrchr.o tishift.o memcpy_mc.o
>  
>  ifeq ($(CONFIG_KERNEL_MODE_NEON), y)
>  obj-$(CONFIG_XOR_BLOCKS)	+= xor-neon.o
> diff --git a/arch/arm64/lib/memcpy_mc.S b/arch/arm64/lib/memcpy_mc.S
> new file mode 100644
> index 000000000000..7076b500d154
> --- /dev/null
> +++ b/arch/arm64/lib/memcpy_mc.S
> @@ -0,0 +1,257 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Copyright (c) 2012-2021, Arm Limited.
> + *
> + * Adapted from the original at:
> + * https://github.com/ARM-software/optimized-routines/blob/afd6244a1f8d9229/string/aarch64/memcpy.S
> + */
> +
> +#include <linux/linkage.h>
> +#include <asm/assembler.h>
> +
> +/* Assumptions:
> + *
> + * ARMv8-a, AArch64, unaligned accesses.
> + *
> + */
> +
> +#define L(label) .L ## label
> +
> +#define dstin	x0
> +#define src	x1
> +#define count	x2
> +#define dst	x3
> +#define srcend	x4
> +#define dstend	x5
> +#define A_l	x6
> +#define A_lw	w6
> +#define A_h	x7
> +#define B_l	x8
> +#define B_lw	w8
> +#define B_h	x9
> +#define C_l	x10
> +#define C_lw	w10
> +#define C_h	x11
> +#define D_l	x12
> +#define D_h	x13
> +#define E_l	x14
> +#define E_h	x15
> +#define F_l	x16
> +#define F_h	x17
> +#define G_l	count
> +#define G_h	dst
> +#define H_l	src
> +#define H_h	srcend
> +#define tmp1	x14
> +
> +/* This implementation handles overlaps and supports both memcpy and memmove
> +   from a single entry point.  It uses unaligned accesses and branchless
> +   sequences to keep the code small, simple and improve performance.
> +
> +   Copies are split into 3 main cases: small copies of up to 32 bytes, medium
> +   copies of up to 128 bytes, and large copies.  The overhead of the overlap
> +   check is negligible since it is only required for large copies.
> +
> +   Large copies use a software pipelined loop processing 64 bytes per iteration.
> +   The destination pointer is 16-byte aligned to minimize unaligned accesses.
> +   The loop tail is handled by always copying 64 bytes from the end.
> +*/
> +
> +SYM_FUNC_START(__pi_memcpy_mcs)
> +	add	srcend, src, count
> +	add	dstend, dstin, count
> +	cmp	count, 128
> +	b.hi	L(copy_long)
> +	cmp	count, 32
> +	b.hi	L(copy32_128)
> +
> +	/* Small copies: 0..32 bytes.  */
> +	cmp	count, 16
> +	b.lo	L(copy16)
> +	CPY_MC(9998f, ldp	A_l, A_h, [src])
> +	CPY_MC(9998f, ldp	D_l, D_h, [srcend, -16])
> +	CPY_MC(9998f, stp	A_l, A_h, [dstin])
> +	CPY_MC(9998f, stp	D_l, D_h, [dstend, -16])
> +	mov x0, #0
> +	ret
> +
> +	/* Copy 8-15 bytes.  */
> +L(copy16):
> +	tbz	count, 3, L(copy8)
> +	CPY_MC(9998f, ldr	A_l, [src])
> +	CPY_MC(9998f, ldr	A_h, [srcend, -8])
> +	CPY_MC(9998f, str	A_l, [dstin])
> +	CPY_MC(9998f, str	A_h, [dstend, -8])
> +	mov x0, #0
> +	ret
> +
> +	.p2align 3
> +	/* Copy 4-7 bytes.  */
> +L(copy8):
> +	tbz	count, 2, L(copy4)
> +	CPY_MC(9998f, ldr	A_lw, [src])
> +	CPY_MC(9998f, ldr	B_lw, [srcend, -4])
> +	CPY_MC(9998f, str	A_lw, [dstin])
> +	CPY_MC(9998f, str	B_lw, [dstend, -4])
> +	mov x0, #0
> +	ret
> +
> +	/* Copy 0..3 bytes using a branchless sequence.  */
> +L(copy4):
> +	cbz	count, L(copy0)
> +	lsr	tmp1, count, 1
> +	CPY_MC(9998f, ldrb	A_lw, [src])
> +	CPY_MC(9998f, ldrb	C_lw, [srcend, -1])
> +	CPY_MC(9998f, ldrb	B_lw, [src, tmp1])
> +	CPY_MC(9998f, strb	A_lw, [dstin])
> +	CPY_MC(9998f, strb	B_lw, [dstin, tmp1])
> +	CPY_MC(9998f, strb	C_lw, [dstend, -1])
> +L(copy0):
> +	mov x0, #0
> +	ret
> +
> +	.p2align 4
> +	/* Medium copies: 33..128 bytes.  */
> +L(copy32_128):
> +	CPY_MC(9998f, ldp	A_l, A_h, [src])
> +	CPY_MC(9998f, ldp	B_l, B_h, [src, 16])
> +	CPY_MC(9998f, ldp	C_l, C_h, [srcend, -32])
> +	CPY_MC(9998f, ldp	D_l, D_h, [srcend, -16])
> +	cmp	count, 64
> +	b.hi	L(copy128)
> +	CPY_MC(9998f, stp	A_l, A_h, [dstin])
> +	CPY_MC(9998f, stp	B_l, B_h, [dstin, 16])
> +	CPY_MC(9998f, stp	C_l, C_h, [dstend, -32])
> +	CPY_MC(9998f, stp	D_l, D_h, [dstend, -16])
> +	mov x0, #0
> +	ret
> +
> +	.p2align 4
> +	/* Copy 65..128 bytes.  */
> +L(copy128):
> +	CPY_MC(9998f, ldp	E_l, E_h, [src, 32])
> +	CPY_MC(9998f, ldp	F_l, F_h, [src, 48])
> +	cmp	count, 96
> +	b.ls	L(copy96)
> +	CPY_MC(9998f, ldp	G_l, G_h, [srcend, -64])
> +	CPY_MC(9998f, ldp	H_l, H_h, [srcend, -48])
> +	CPY_MC(9998f, stp	G_l, G_h, [dstend, -64])
> +	CPY_MC(9998f, stp	H_l, H_h, [dstend, -48])
> +L(copy96):
> +	CPY_MC(9998f, stp	A_l, A_h, [dstin])
> +	CPY_MC(9998f, stp	B_l, B_h, [dstin, 16])
> +	CPY_MC(9998f, stp	E_l, E_h, [dstin, 32])
> +	CPY_MC(9998f, stp	F_l, F_h, [dstin, 48])
> +	CPY_MC(9998f, stp	C_l, C_h, [dstend, -32])
> +	CPY_MC(9998f, stp	D_l, D_h, [dstend, -16])
> +	mov x0, #0
> +	ret
> +
> +	.p2align 4
> +	/* Copy more than 128 bytes.  */
> +L(copy_long):
> +	/* Use backwards copy if there is an overlap.  */
> +	sub	tmp1, dstin, src
> +	cbz	tmp1, L(copy0)
> +	cmp	tmp1, count
> +	b.lo	L(copy_long_backwards)
> +
> +	/* Copy 16 bytes and then align dst to 16-byte alignment.  */
> +
> +	CPY_MC(9998f, ldp	D_l, D_h, [src])
> +	and	tmp1, dstin, 15
> +	bic	dst, dstin, 15
> +	sub	src, src, tmp1
> +	add	count, count, tmp1	/* Count is now 16 too large.  */
> +	CPY_MC(9998f, ldp	A_l, A_h, [src, 16])
> +	CPY_MC(9998f, stp	D_l, D_h, [dstin])
> +	CPY_MC(9998f, ldp	B_l, B_h, [src, 32])
> +	CPY_MC(9998f, ldp	C_l, C_h, [src, 48])
> +	CPY_MC(9998f, ldp	D_l, D_h, [src, 64]!)
> +	subs	count, count, 128 + 16	/* Test and readjust count.  */
> +	b.ls	L(copy64_from_end)
> +
> +L(loop64):
> +	CPY_MC(9998f, stp	A_l, A_h, [dst, 16])
> +	CPY_MC(9998f, ldp	A_l, A_h, [src, 16])
> +	CPY_MC(9998f, stp	B_l, B_h, [dst, 32])
> +	CPY_MC(9998f, ldp	B_l, B_h, [src, 32])
> +	CPY_MC(9998f, stp	C_l, C_h, [dst, 48])
> +	CPY_MC(9998f, ldp	C_l, C_h, [src, 48])
> +	CPY_MC(9998f, stp	D_l, D_h, [dst, 64]!)
> +	CPY_MC(9998f, ldp	D_l, D_h, [src, 64]!)
> +	subs	count, count, 64
> +	b.hi	L(loop64)
> +
> +	/* Write the last iteration and copy 64 bytes from the end.  */
> +L(copy64_from_end):
> +	CPY_MC(9998f, ldp	E_l, E_h, [srcend, -64])
> +	CPY_MC(9998f, stp	A_l, A_h, [dst, 16])
> +	CPY_MC(9998f, ldp	A_l, A_h, [srcend, -48])
> +	CPY_MC(9998f, stp	B_l, B_h, [dst, 32])
> +	CPY_MC(9998f, ldp	B_l, B_h, [srcend, -32])
> +	CPY_MC(9998f, stp	C_l, C_h, [dst, 48])
> +	CPY_MC(9998f, ldp	C_l, C_h, [srcend, -16])
> +	CPY_MC(9998f, stp	D_l, D_h, [dst, 64])
> +	CPY_MC(9998f, stp	E_l, E_h, [dstend, -64])
> +	CPY_MC(9998f, stp	A_l, A_h, [dstend, -48])
> +	CPY_MC(9998f, stp	B_l, B_h, [dstend, -32])
> +	CPY_MC(9998f, stp	C_l, C_h, [dstend, -16])
> +	mov x0, #0
> +	ret
> +
> +	.p2align 4
> +
> +	/* Large backwards copy for overlapping copies.
> +	   Copy 16 bytes and then align dst to 16-byte alignment.  */
> +L(copy_long_backwards):
> +	CPY_MC(9998f, ldp	D_l, D_h, [srcend, -16])
> +	and	tmp1, dstend, 15
> +	sub	srcend, srcend, tmp1
> +	sub	count, count, tmp1
> +	CPY_MC(9998f, ldp	A_l, A_h, [srcend, -16])
> +	CPY_MC(9998f, stp	D_l, D_h, [dstend, -16])
> +	CPY_MC(9998f, ldp	B_l, B_h, [srcend, -32])
> +	CPY_MC(9998f, ldp	C_l, C_h, [srcend, -48])
> +	CPY_MC(9998f, ldp	D_l, D_h, [srcend, -64]!)
> +	sub	dstend, dstend, tmp1
> +	subs	count, count, 128
> +	b.ls	L(copy64_from_start)
> +
> +L(loop64_backwards):
> +	CPY_MC(9998f, stp	A_l, A_h, [dstend, -16])
> +	CPY_MC(9998f, ldp	A_l, A_h, [srcend, -16])
> +	CPY_MC(9998f, stp	B_l, B_h, [dstend, -32])
> +	CPY_MC(9998f, ldp	B_l, B_h, [srcend, -32])
> +	CPY_MC(9998f, stp	C_l, C_h, [dstend, -48])
> +	CPY_MC(9998f, ldp	C_l, C_h, [srcend, -48])
> +	CPY_MC(9998f, stp	D_l, D_h, [dstend, -64]!)
> +	CPY_MC(9998f, ldp	D_l, D_h, [srcend, -64]!)
> +	subs	count, count, 64
> +	b.hi	L(loop64_backwards)
> +
> +	/* Write the last iteration and copy 64 bytes from the start.  */
> +L(copy64_from_start):
> +	CPY_MC(9998f, ldp	G_l, G_h, [src, 48])
> +	CPY_MC(9998f, stp	A_l, A_h, [dstend, -16])
> +	CPY_MC(9998f, ldp	A_l, A_h, [src, 32])
> +	CPY_MC(9998f, stp	B_l, B_h, [dstend, -32])
> +	CPY_MC(9998f, ldp	B_l, B_h, [src, 16])
> +	CPY_MC(9998f, stp	C_l, C_h, [dstend, -48])
> +	CPY_MC(9998f, ldp	C_l, C_h, [src])
> +	CPY_MC(9998f, stp	D_l, D_h, [dstend, -64])
> +	CPY_MC(9998f, stp	G_l, G_h, [dstin, 48])
> +	CPY_MC(9998f, stp	A_l, A_h, [dstin, 32])
> +	CPY_MC(9998f, stp	B_l, B_h, [dstin, 16])
> +	CPY_MC(9998f, stp	C_l, C_h, [dstin])
> +	mov x0, #0
> +	ret
> +
> +9998:	mov x0, #-EFAULT
> +	ret
> +SYM_FUNC_END(__pi_memcpy_mcs)
> +
> +SYM_FUNC_ALIAS(__memcpy_mcs, __pi_memcpy_mcs)
> +EXPORT_SYMBOL(__memcpy_mcs)
> +SYM_FUNC_ALIAS_WEAK(memcpy_mcs, __memcpy_mcs)
> +EXPORT_SYMBOL(memcpy_mcs)
> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
> index 9ef84f31833f..e6519fd329b2 100644
> --- a/mm/kasan/shadow.c
> +++ b/mm/kasan/shadow.c
> @@ -79,6 +79,18 @@ void *memcpy(void *dest, const void *src, size_t len)
>  }
>  #endif
>  
> +#ifdef __HAVE_ARCH_MEMCPY_MC
> +#undef memcpy_mcs
> +int memcpy_mcs(void *dest, const void *src, size_t len)
> +{
> +	if (!check_memory_region((unsigned long)src, len, false, _RET_IP_) ||
> +	    !check_memory_region((unsigned long)dest, len, true, _RET_IP_))
> +		return (unsigned long)len;
> +
> +	return __memcpy_mcs(dest, src, len);
> +}
> +#endif
> +
>  void *__asan_memset(void *addr, int c, ssize_t len)
>  {
>  	if (!kasan_check_range(addr, len, true, _RET_IP_))
> -- 
> 2.25.1
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v10 5/6] arm64: support copy_mc_[user]_highpage()
  2024-01-29 13:46 ` [PATCH v10 5/6] arm64: support copy_mc_[user]_highpage() Tong Tiangen
  2024-01-29 20:45   ` Andrey Konovalov
@ 2024-01-30 10:31   ` Mark Rutland
  2024-01-30 13:50     ` Tong Tiangen
  1 sibling, 1 reply; 20+ messages in thread
From: Mark Rutland @ 2024-01-30 10:31 UTC (permalink / raw)
  To: Tong Tiangen
  Cc: Catalin Marinas, Will Deacon, James Morse, Robin Murphy,
	Andrey Ryabinin, Alexander Potapenko, Alexander Viro,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
	Andrew Morton, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, linux-arm-kernel, linux-mm, linuxppc-dev,
	linux-kernel, kasan-dev, wangkefeng.wang, Guohanjun

On Mon, Jan 29, 2024 at 09:46:51PM +0800, Tong Tiangen wrote:
> Currently, many scenarios that can tolerate memory errors when copying a
> page have been supported in the kernel[1][2][3], all of which are implemented
> by copy_mc_[user]_highpage(). arm64 should also support this mechanism.
>
> Due to MTE, arm64 needs to have its own copy_mc_[user]_highpage()
> architecture implementation; the macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
> __HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control it.
>
> Add a new helper copy_mc_page() which provides a machine-check-safe page
> copy implementation. copy_mc_page() in copy_mc_page.S largely borrows from
> copy_page() in copy_page.S; the main difference is that copy_mc_page() adds
> an extable entry to every load/store insn to support machine check safety.
>
> Add a new extable type EX_TYPE_COPY_MC_PAGE_ERR_ZERO which is used in
> copy_mc_page().
> 
> [1]a873dfe1032a ("mm, hwpoison: try to recover from copy-on write faults")
> [2]5f2500b93cc9 ("mm/khugepaged: recover from poisoned anonymous memory")
> [3]6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
> 
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> ---
>  arch/arm64/include/asm/asm-extable.h | 15 ++++++
>  arch/arm64/include/asm/assembler.h   |  4 ++
>  arch/arm64/include/asm/mte.h         |  5 ++
>  arch/arm64/include/asm/page.h        | 10 ++++
>  arch/arm64/lib/Makefile              |  2 +
>  arch/arm64/lib/copy_mc_page.S        | 78 ++++++++++++++++++++++++++++
>  arch/arm64/lib/mte.S                 | 27 ++++++++++
>  arch/arm64/mm/copypage.c             | 66 ++++++++++++++++++++---
>  arch/arm64/mm/extable.c              |  7 +--
>  include/linux/highmem.h              |  8 +++
>  10 files changed, 213 insertions(+), 9 deletions(-)
>  create mode 100644 arch/arm64/lib/copy_mc_page.S
> 
> diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
> index 980d1dd8e1a3..819044fefbe7 100644
> --- a/arch/arm64/include/asm/asm-extable.h
> +++ b/arch/arm64/include/asm/asm-extable.h
> @@ -10,6 +10,7 @@
>  #define EX_TYPE_UACCESS_ERR_ZERO	2
>  #define EX_TYPE_KACCESS_ERR_ZERO	3
>  #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
> +#define EX_TYPE_COPY_MC_PAGE_ERR_ZERO	5
>  
>  /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
>  #define EX_DATA_REG_ERR_SHIFT	0
> @@ -51,6 +52,16 @@
>  #define _ASM_EXTABLE_UACCESS(insn, fixup)				\
>  	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
>  
> +#define _ASM_EXTABLE_COPY_MC_PAGE_ERR_ZERO(insn, fixup, err, zero)	\
> +	__ASM_EXTABLE_RAW(insn, fixup, 					\
> +			  EX_TYPE_COPY_MC_PAGE_ERR_ZERO,		\
> +			  (						\
> +			    EX_DATA_REG(ERR, err) |			\
> +			    EX_DATA_REG(ZERO, zero)			\
> +			  ))
> +
> +#define _ASM_EXTABLE_COPY_MC_PAGE(insn, fixup)				\
> +	_ASM_EXTABLE_COPY_MC_PAGE_ERR_ZERO(insn, fixup, wzr, wzr)
>  /*
>   * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
>   * when an unhandled fault is taken.
> @@ -59,6 +70,10 @@
>  	_ASM_EXTABLE_UACCESS(\insn, \fixup)
>  	.endm
>  
> +	.macro          _asm_extable_copy_mc_page, insn, fixup
> +	_ASM_EXTABLE_COPY_MC_PAGE(\insn, \fixup)
> +	.endm
> +

This should share a common EX_TYPE_ with the other "kaccess where memory error
is handled but other faults are fatal" cases.

>  /*
>   * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
>   * do nothing.
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index 513787e43329..e1d8ce155878 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -154,6 +154,10 @@ lr	.req	x30		// link register
>  #define CPU_LE(code...) code
>  #endif
>  
> +#define CPY_MC(l, x...)		\
> +9999:   x;			\
> +	_asm_extable_copy_mc_page    9999b, l
> +
>  /*
>   * Define a macro that constructs a 64-bit value by concatenating two
>   * 32-bit registers. Note that on big endian systems the order of the
> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
> index 91fbd5c8a391..9cdded082dd4 100644
> --- a/arch/arm64/include/asm/mte.h
> +++ b/arch/arm64/include/asm/mte.h
> @@ -92,6 +92,7 @@ static inline bool try_page_mte_tagging(struct page *page)
>  void mte_zero_clear_page_tags(void *addr);
>  void mte_sync_tags(pte_t pte, unsigned int nr_pages);
>  void mte_copy_page_tags(void *kto, const void *kfrom);
> +int mte_copy_mc_page_tags(void *kto, const void *kfrom);
>  void mte_thread_init_user(void);
>  void mte_thread_switch(struct task_struct *next);
>  void mte_cpu_setup(void);
> @@ -128,6 +129,10 @@ static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages)
>  static inline void mte_copy_page_tags(void *kto, const void *kfrom)
>  {
>  }
> +static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom)
> +{
> +	return 0;
> +}
>  static inline void mte_thread_init_user(void)
>  {
>  }
> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
> index 2312e6ee595f..304cc86b8a10 100644
> --- a/arch/arm64/include/asm/page.h
> +++ b/arch/arm64/include/asm/page.h
> @@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
>  void copy_highpage(struct page *to, struct page *from);
>  #define __HAVE_ARCH_COPY_HIGHPAGE
>  
> +#ifdef CONFIG_ARCH_HAS_COPY_MC
> +int copy_mc_page(void *to, const void *from);
> +int copy_mc_highpage(struct page *to, struct page *from);
> +#define __HAVE_ARCH_COPY_MC_HIGHPAGE
> +
> +int copy_mc_user_highpage(struct page *to, struct page *from,
> +		unsigned long vaddr, struct vm_area_struct *vma);
> +#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
> +#endif
> +
>  struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>  						unsigned long vaddr);
>  #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
> diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
> index 29490be2546b..a2fd865b816d 100644
> --- a/arch/arm64/lib/Makefile
> +++ b/arch/arm64/lib/Makefile
> @@ -15,6 +15,8 @@ endif
>  
>  lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
>  
> +lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
> +
>  obj-$(CONFIG_CRC32) += crc32.o
>  
>  obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
> diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
> new file mode 100644
> index 000000000000..524534d26d86
> --- /dev/null
> +++ b/arch/arm64/lib/copy_mc_page.S
> @@ -0,0 +1,78 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Copyright (C) 2012 ARM Ltd.
> + */
> +
> +#include <linux/linkage.h>
> +#include <linux/const.h>
> +#include <asm/assembler.h>
> +#include <asm/page.h>
> +#include <asm/cpufeature.h>
> +#include <asm/alternative.h>
> +#include <asm/asm-extable.h>
> +
> +/*
> + * Copy a page from src to dest (both are page aligned) with machine check
> + *
> + * Parameters:
> + *	x0 - dest
> + *	x1 - src
> + * Returns:
> + * 	x0 - Return 0 if copy success, or -EFAULT if anything goes wrong
> + *	     while copying.
> + */
> +SYM_FUNC_START(__pi_copy_mc_page)
> +CPY_MC(9998f, ldp	x2, x3, [x1])
> +CPY_MC(9998f, ldp	x4, x5, [x1, #16])
> +CPY_MC(9998f, ldp	x6, x7, [x1, #32])
> +CPY_MC(9998f, ldp	x8, x9, [x1, #48])
> +CPY_MC(9998f, ldp	x10, x11, [x1, #64])
> +CPY_MC(9998f, ldp	x12, x13, [x1, #80])
> +CPY_MC(9998f, ldp	x14, x15, [x1, #96])
> +CPY_MC(9998f, ldp	x16, x17, [x1, #112])
> +
> +	add	x0, x0, #256
> +	add	x1, x1, #128
> +1:
> +	tst	x0, #(PAGE_SIZE - 1)
> +
> +CPY_MC(9998f, stnp	x2, x3, [x0, #-256])
> +CPY_MC(9998f, ldp	x2, x3, [x1])
> +CPY_MC(9998f, stnp	x4, x5, [x0, #16 - 256])
> +CPY_MC(9998f, ldp	x4, x5, [x1, #16])
> +CPY_MC(9998f, stnp	x6, x7, [x0, #32 - 256])
> +CPY_MC(9998f, ldp	x6, x7, [x1, #32])
> +CPY_MC(9998f, stnp	x8, x9, [x0, #48 - 256])
> +CPY_MC(9998f, ldp	x8, x9, [x1, #48])
> +CPY_MC(9998f, stnp	x10, x11, [x0, #64 - 256])
> +CPY_MC(9998f, ldp	x10, x11, [x1, #64])
> +CPY_MC(9998f, stnp	x12, x13, [x0, #80 - 256])
> +CPY_MC(9998f, ldp	x12, x13, [x1, #80])
> +CPY_MC(9998f, stnp	x14, x15, [x0, #96 - 256])
> +CPY_MC(9998f, ldp	x14, x15, [x1, #96])
> +CPY_MC(9998f, stnp	x16, x17, [x0, #112 - 256])
> +CPY_MC(9998f, ldp	x16, x17, [x1, #112])
> +
> +	add	x0, x0, #128
> +	add	x1, x1, #128
> +
> +	b.ne	1b
> +
> +CPY_MC(9998f, stnp	x2, x3, [x0, #-256])
> +CPY_MC(9998f, stnp	x4, x5, [x0, #16 - 256])
> +CPY_MC(9998f, stnp	x6, x7, [x0, #32 - 256])
> +CPY_MC(9998f, stnp	x8, x9, [x0, #48 - 256])
> +CPY_MC(9998f, stnp	x10, x11, [x0, #64 - 256])
> +CPY_MC(9998f, stnp	x12, x13, [x0, #80 - 256])
> +CPY_MC(9998f, stnp	x14, x15, [x0, #96 - 256])
> +CPY_MC(9998f, stnp	x16, x17, [x0, #112 - 256])
> +
> +	mov x0, #0
> +	ret
> +
> +9998:	mov x0, #-EFAULT
> +	ret
> +
> +SYM_FUNC_END(__pi_copy_mc_page)
> +SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page)
> +EXPORT_SYMBOL(copy_mc_page)

This is a duplicate of the existing copy_page logic; it should be refactored
such that the logic can be shared.

> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
> index 5018ac03b6bf..2b748e83f6cf 100644
> --- a/arch/arm64/lib/mte.S
> +++ b/arch/arm64/lib/mte.S
> @@ -80,6 +80,33 @@ SYM_FUNC_START(mte_copy_page_tags)
>  	ret
>  SYM_FUNC_END(mte_copy_page_tags)
>  
> +/*
> + * Copy the tags from the source page to the destination one with machine check safe
> + *   x0 - address of the destination page
> + *   x1 - address of the source page
> + * Returns:
> + *   x0 - Return 0 if copy success, or
> + *        -EFAULT if anything goes wrong while copying.
> + */
> +SYM_FUNC_START(mte_copy_mc_page_tags)
> +	mov	x2, x0
> +	mov	x3, x1
> +	multitag_transfer_size x5, x6
> +1:
> +CPY_MC(2f, ldgm	x4, [x3])
> +CPY_MC(2f, stgm	x4, [x2])
> +	add	x2, x2, x5
> +	add	x3, x3, x5
> +	tst	x2, #(PAGE_SIZE - 1)
> +	b.ne	1b
> +
> +	mov x0, #0
> +	ret
> +
> +2:	mov x0, #-EFAULT
> +	ret
> +SYM_FUNC_END(mte_copy_mc_page_tags)
> +
>  /*
>   * Read tags from a user buffer (one tag per byte) and set the corresponding
>   * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
> index a7bb20055ce0..9765e40cde6c 100644
> --- a/arch/arm64/mm/copypage.c
> +++ b/arch/arm64/mm/copypage.c
> @@ -14,6 +14,25 @@
>  #include <asm/cpufeature.h>
>  #include <asm/mte.h>
>  
> +static int do_mte(struct page *to, struct page *from, void *kto, void *kfrom, bool mc)
> +{
> +	int ret = 0;
> +
> +	if (system_supports_mte() && page_mte_tagged(from)) {
> +		/* It's a new page, shouldn't have been tagged yet */
> +		WARN_ON_ONCE(!try_page_mte_tagging(to));
> +		if (mc)
> +			ret = mte_copy_mc_page_tags(kto, kfrom);
> +		else
> +			mte_copy_page_tags(kto, kfrom);
> +
> +		if (!ret)
> +			set_page_mte_tagged(to);
> +	}
> +
> +	return ret;
> +}

The boolean 'mc' argument makes this painful to read, and I don't think it's
necessary to have this helper anyway.

It'd be clearer to have this expanded inline in the callers, e.g.

	// in copy_highpage(), as-is today
	if (system_supports_mte() && page_mte_tagged(from)) {
		/* It's a new page, shouldn't have been tagged yet */
		WARN_ON_ONCE(!try_page_mte_tagging(to));
		mte_copy_page_tags(kto, kfrom);
		set_page_mte_tagged(to);
	}

	// in copy_mc_highpage()
	if (system_supports_mte() && page_mte_tagged(from)) {
		/* It's a new page, shouldn't have been tagged yet */
		WARN_ON_ONCE(!try_page_mte_tagging(to));
		ret = mte_copy_mc_page_tags(kto, kfrom);
		if (ret)
			return -EFAULT;
		set_page_mte_tagged(to);
	}

Mark.

> +
>  void copy_highpage(struct page *to, struct page *from)
>  {
>  	void *kto = page_address(to);
> @@ -24,12 +43,7 @@ void copy_highpage(struct page *to, struct page *from)
>  	if (kasan_hw_tags_enabled())
>  		page_kasan_tag_reset(to);
>  
> -	if (system_supports_mte() && page_mte_tagged(from)) {
> -		/* It's a new page, shouldn't have been tagged yet */
> -		WARN_ON_ONCE(!try_page_mte_tagging(to));
> -		mte_copy_page_tags(kto, kfrom);
> -		set_page_mte_tagged(to);
> -	}
> +	do_mte(to, from, kto, kfrom, false);
>  }
>  EXPORT_SYMBOL(copy_highpage);
>  
> @@ -40,3 +54,43 @@ void copy_user_highpage(struct page *to, struct page *from,
>  	flush_dcache_page(to);
>  }
>  EXPORT_SYMBOL_GPL(copy_user_highpage);
> +
> +#ifdef CONFIG_ARCH_HAS_COPY_MC
> +/*
> + * Return -EFAULT if anything goes wrong while copying page or mte.
> + */
> +int copy_mc_highpage(struct page *to, struct page *from)
> +{
> +	void *kto = page_address(to);
> +	void *kfrom = page_address(from);
> +	int ret;
> +
> +	ret = copy_mc_page(kto, kfrom);
> +	if (ret)
> +		return -EFAULT;
> +
> +	if (kasan_hw_tags_enabled())
> +		page_kasan_tag_reset(to);
> +
> +	ret = do_mte(to, from, kto, kfrom, true);
> +	if (ret)
> +		return -EFAULT;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(copy_mc_highpage);
> +
> +int copy_mc_user_highpage(struct page *to, struct page *from,
> +			unsigned long vaddr, struct vm_area_struct *vma)
> +{
> +	int ret;
> +
> +	ret = copy_mc_highpage(to, from);
> +
> +	if (!ret)
> +		flush_dcache_page(to);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
> +#endif
> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
> index 28ec35e3d210..bdc81518d207 100644
> --- a/arch/arm64/mm/extable.c
> +++ b/arch/arm64/mm/extable.c
> @@ -16,7 +16,7 @@ get_ex_fixup(const struct exception_table_entry *ex)
>  	return ((unsigned long)&ex->fixup + ex->fixup);
>  }
>  
> -static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
> +static bool ex_handler_fixup_err_zero(const struct exception_table_entry *ex,
>  					struct pt_regs *regs)
>  {
>  	int reg_err = FIELD_GET(EX_DATA_REG_ERR, ex->data);
> @@ -69,7 +69,7 @@ bool fixup_exception(struct pt_regs *regs)
>  		return ex_handler_bpf(ex, regs);
>  	case EX_TYPE_UACCESS_ERR_ZERO:
>  	case EX_TYPE_KACCESS_ERR_ZERO:
> -		return ex_handler_uaccess_err_zero(ex, regs);
> +		return ex_handler_fixup_err_zero(ex, regs);
>  	case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
>  		return ex_handler_load_unaligned_zeropad(ex, regs);
>  	}
> @@ -87,7 +87,8 @@ bool fixup_exception_mc(struct pt_regs *regs)
>  
>  	switch (ex->type) {
>  	case EX_TYPE_UACCESS_ERR_ZERO:
> -		return ex_handler_uaccess_err_zero(ex, regs);
> +	case EX_TYPE_COPY_MC_PAGE_ERR_ZERO:
> +		return ex_handler_fixup_err_zero(ex, regs);
>  	}
>  
>  	return false;
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index c5ca1a1fc4f5..a42470ca42f2 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -332,6 +332,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
>  #endif
>  
>  #ifdef copy_mc_to_kernel
> +#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
>  /*
>   * If architecture supports machine check exception handling, define the
>   * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
> @@ -354,7 +355,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
>  
>  	return ret ? -EFAULT : 0;
>  }
> +#endif
>  
> +#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
>  static inline int copy_mc_highpage(struct page *to, struct page *from)
>  {
>  	unsigned long ret;
> @@ -370,20 +373,25 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
>  
>  	return ret ? -EFAULT : 0;
>  }
> +#endif
>  #else
> +#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
>  static inline int copy_mc_user_highpage(struct page *to, struct page *from,
>  					unsigned long vaddr, struct vm_area_struct *vma)
>  {
>  	copy_user_highpage(to, from, vaddr, vma);
>  	return 0;
>  }
> +#endif
>  
> +#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
>  static inline int copy_mc_highpage(struct page *to, struct page *from)
>  {
>  	copy_highpage(to, from);
>  	return 0;
>  }
>  #endif
> +#endif
>  
>  static inline void memcpy_page(struct page *dst_page, size_t dst_off,
>  			       struct page *src_page, size_t src_off,
> -- 
> 2.25.1
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v10 2/6] arm64: add support for machine check error safe
  2024-01-29 17:51   ` Mark Rutland
@ 2024-01-30 10:57     ` Tong Tiangen
  2024-01-30 13:07       ` Mark Rutland
  0 siblings, 1 reply; 20+ messages in thread
From: Tong Tiangen @ 2024-01-30 10:57 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, James Morse, Robin Murphy,
	Andrey Ryabinin, Alexander Potapenko, Alexander Viro,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
	Andrew Morton, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, linux-arm-kernel, linux-mm, linuxppc-dev,
	linux-kernel, kasan-dev, wangkefeng.wang, Guohanjun



On 2024/1/30 1:51, Mark Rutland wrote:
> On Mon, Jan 29, 2024 at 09:46:48PM +0800, Tong Tiangen wrote:
>> For the arm64 kernel, when it processes hardware memory errors for
>> synchronous notifications (do_sea()), if the error is consumed within the
>> kernel, the current handling is to panic. However, this is not optimal.
>>
>> Take uaccess for example: if the uaccess operation fails due to a memory
>> error, only the user process will be affected. Killing the user process and
>> isolating the corrupt page is a better choice.
>>
>> This patch only enables the machine check safe framework and adds an
>> exception fixup before the kernel panics in do_sea().
>>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>> ---
>>   arch/arm64/Kconfig               |  1 +
>>   arch/arm64/include/asm/extable.h |  1 +
>>   arch/arm64/mm/extable.c          | 16 ++++++++++++++++
>>   arch/arm64/mm/fault.c            | 29 ++++++++++++++++++++++++++++-
>>   4 files changed, 46 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index aa7c1d435139..2cc34b5e7abb 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -20,6 +20,7 @@ config ARM64
>>   	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
>>   	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
>>   	select ARCH_HAS_CACHE_LINE_SIZE
>> +	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
>>   	select ARCH_HAS_CURRENT_STACK_POINTER
>>   	select ARCH_HAS_DEBUG_VIRTUAL
>>   	select ARCH_HAS_DEBUG_VM_PGTABLE
>> diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
>> index 72b0e71cc3de..f80ebd0addfd 100644
>> --- a/arch/arm64/include/asm/extable.h
>> +++ b/arch/arm64/include/asm/extable.h
>> @@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
>>   #endif /* !CONFIG_BPF_JIT */
>>   
>>   bool fixup_exception(struct pt_regs *regs);
>> +bool fixup_exception_mc(struct pt_regs *regs);
>>   #endif
>> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
>> index 228d681a8715..478e639f8680 100644
>> --- a/arch/arm64/mm/extable.c
>> +++ b/arch/arm64/mm/extable.c
>> @@ -76,3 +76,19 @@ bool fixup_exception(struct pt_regs *regs)
>>   
>>   	BUG();
>>   }
>> +
>> +bool fixup_exception_mc(struct pt_regs *regs)
> 
> Can we please replace 'mc' with something like 'memory_error' ?
> 
> There's no "machine check" on arm64, and 'mc' is opaque regardless.

OK, it's more appropriate to use "memory_error" on arm64.

> 
>> +{
>> +	const struct exception_table_entry *ex;
>> +
>> +	ex = search_exception_tables(instruction_pointer(regs));
>> +	if (!ex)
>> +		return false;
>> +
>> +	/*
>> +	 * This is not complete, More Machine check safe extable type can
>> +	 * be processed here.
>> +	 */
>> +
>> +	return false;
>> +}
> 
> As with my comment on the subsequent patch, I'd much prefer that we handle
> EX_TYPE_UACCESS_ERR_ZERO from the outset.

OK, in the next version the two patches will be merged.

> 
> 
> 
>> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
>> index 55f6455a8284..312932dc100b 100644
>> --- a/arch/arm64/mm/fault.c
>> +++ b/arch/arm64/mm/fault.c
>> @@ -730,6 +730,31 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
>>   	return 1; /* "fault" */
>>   }
>>   
>> +static bool arm64_do_kernel_sea(unsigned long addr, unsigned int esr,
>> +				     struct pt_regs *regs, int sig, int code)
>> +{
>> +	if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
>> +		return false;
>> +
>> +	if (user_mode(regs))
>> +		return false;
> 
> This function is called "arm64_do_kernel_sea"; surely the caller should *never*
> call this for a SEA taken from user mode?

In do_sea(), the processing logic is as follows:
   do_sea()
   {
     [...]
     if (user_mode(regs) && apei_claim_sea(regs) == 0) {
        return 0;
     }
     [...]
     //[1]
     if (!arm64_do_kernel_sea()) {
        arm64_notify_die();
     }
   }

[1] It is still possible to get here in user_mode(); if that happens, it
  means the impact of the memory error could not be handled correctly by
  apei_claim_sea().

In this case, only arm64_notify_die() can be used, which also preserves the
original handling for user_mode().

> 
>> +
>> +	if (apei_claim_sea(regs) < 0)
>> +		return false;
>> +
>> +	if (!fixup_exception_mc(regs))
>> +		return false;
>> +
>> +	if (current->flags & PF_KTHREAD)
>> +		return true;
> 
> I think this needs a comment; why do we allow kthreads to go on, yet kill user
> threads? What about helper threads (e.g. for io_uring)?

If a memory error occurs in a kernel thread, the problem is more serious
than in a user thread: related kernel functionality, such as khugepaged, can
no longer run properly, so a kernel panic is the better choice in that case.

Therefore, the processing scope of this framework is limited to user
threads.

> 
>> +
>> +	set_thread_esr(0, esr);
> 
> Why do we set the ESR to 0?

The purpose is to reuse the logic of arm64_notify_die() and to set the
following fields before sending the signal to the user process:
   current->thread.fault_address = 0;
   current->thread.fault_code = err;

I looked at the git log and found that the logic was added by this
commit:


9141300a5884 ("arm64: Provide read/write fault information in compat
signal handlers")

According to the commit message, the purpose seems to be to support
aarch32 compat signal handlers.
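
For reference, the relevant part of arm64_notify_die() is roughly the
following (paraphrased and simplified, so treat it as a sketch rather
than the exact code):

	void arm64_notify_die(const char *str, struct pt_regs *regs,
			      int signo, int sicode, unsigned long far,
			      unsigned long err)
	{
		if (user_mode(regs)) {
			/* Fault info consumed by the compat signal handlers. */
			current->thread.fault_address = 0;
			current->thread.fault_code = err;
			arm64_force_sig_fault(signo, sicode, far, str);
		} else {
			die(str, regs, err);
		}
	}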

Many thanks.
Tong.


> 
> Mark.
> 
>> +	arm64_force_sig_fault(sig, code, addr,
>> +		"Uncorrected memory error on access to user memory\n");
>> +
>> +	return true;
>> +}
>> +
>>   static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
>>   {
>>   	const struct fault_info *inf;
>> @@ -755,7 +780,9 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
>>   		 */
>>   		siaddr  = untagged_addr(far);
>>   	}
>> -	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
>> +
>> +	if (!arm64_do_kernel_sea(siaddr, esr, regs, inf->sig, inf->code))
>> +		arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
>>   
>>   	return 0;
>>   }
>> -- 
>> 2.25.1
>>
> .



* Re: [PATCH v10 3/6] arm64: add uaccess to machine check safe
  2024-01-29 17:43   ` Mark Rutland
@ 2024-01-30 11:14     ` Tong Tiangen
  2024-01-30 12:01       ` Mark Rutland
  0 siblings, 1 reply; 20+ messages in thread
From: Tong Tiangen @ 2024-01-30 11:14 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, James Morse, Robin Murphy,
	Andrey Ryabinin, Alexander Potapenko, Alexander Viro,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
	Andrew Morton, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, linux-arm-kernel, linux-mm, linuxppc-dev,
	linux-kernel, kasan-dev, wangkefeng.wang, Guohanjun



On 2024/1/30 1:43, Mark Rutland wrote:
> On Mon, Jan 29, 2024 at 09:46:49PM +0800, Tong Tiangen wrote:
>> If user process access memory fails due to hardware memory error, only the
>> relevant processes are affected, so it is more reasonable to kill the user
>> process and isolate the corrupt page than to panic the kernel.
>>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>> ---
>>   arch/arm64/lib/copy_from_user.S | 10 +++++-----
>>   arch/arm64/lib/copy_to_user.S   | 10 +++++-----
>>   arch/arm64/mm/extable.c         |  8 ++++----
>>   3 files changed, 14 insertions(+), 14 deletions(-)
>>
>> diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
>> index 34e317907524..1bf676e9201d 100644
>> --- a/arch/arm64/lib/copy_from_user.S
>> +++ b/arch/arm64/lib/copy_from_user.S
>> @@ -25,7 +25,7 @@
>>   	.endm
>>   
>>   	.macro strb1 reg, ptr, val
>> -	strb \reg, [\ptr], \val
>> +	USER(9998f, strb \reg, [\ptr], \val)
>>   	.endm
> 
> This is a store to *kernel* memory, not user memory. It should not be marked
> with USER().

This is indeed misleading; my original idea was simply to reuse the
fixup capability of USER().

> 
> I understand that you *might* want to handle memory errors on these stores, but
> the commit message doesn't describe that and the associated trade-off. For
> example, consider that when a copy_from_user fails we'll try to zero the
> remaining buffer via memset(); so if a STR* instruction in copy_to_user
> faulted, upon handling the fault we'll immediately try to fix that up with some
> more stores which will also fault, but won't get fixed up, leading to a panic()
> anyway...

When copy_from_user() hits a memory error, there are two cases: a load
fault on poisoned user memory, and a store fault on poisoned kernel
memory. In the former case the remaining kernel buffer can still be
zeroed; in the latter it cannot, because the destination page itself is
poisoned.

The purpose of the memset() is to keep the kernel buffer (possibly
spanning several pages) consistent: any bytes that were not copied
should be set to 0. My view is that, since the ultimate goal is to kill
the thread that owns that kernel buffer, the "consistency" of the
buffer is not that important and only adds processing complexity.
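
For reference, the zeroing Mark mentions comes from the generic uaccess
path; simplified, it looks roughly like this (a paraphrased sketch of
_copy_from_user(), not the exact code):

	unsigned long _copy_from_user(void *to, const void __user *from,
				      unsigned long n)
	{
		unsigned long res = n;

		if (access_ok(from, n))
			res = raw_copy_from_user(to, from, n);

		/* Zero whatever was not copied so callers never see stale data. */
		if (unlikely(res))
			memset(to + (n - res), 0, res);

		return n - res;
	}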

The trade-offs do need to be added to the commit message once agreement
is reached :)
> 
> Further, this change will also silently fixup unexpected kernel faults if we
> pass bad kernel pointers to copy_{to,from}_user, which will hide real bugs.

I think this is better than panicking the kernel, because the real bug
belongs to the user process. Even if a wrong pointer is passed, the
page corresponding to that pointer has a memory error. In addition, the
panic output contains the information users need to investigate.

> 
> So NAK to this change as-is; likewise for the addition of USER() to other ldr*
> macros in copy_from_user.S and the addition of USER() str* macros in
> copy_to_user.S.
> 
> If we want to handle memory errors on some kaccesses, we need a new EX_TYPE_*
> separate from the usual EX_TYPE_KACESS_ERR_ZERO that means "handle memory
> errors, but treat other faults as fatal". That should come with a rationale and
> explanation of why it's actually useful.

This makes sense; I will add a kaccess extable type that can be handled properly.
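
Something along these lines, perhaps (sketch only; the
EX_TYPE_KACCESS_MEM_ERR name and its value are placeholders, not taken
from the posted series):

	/* A kernel-access type that is fixed up only on memory errors. */
	#define EX_TYPE_KACCESS_MEM_ERR		5

	#define _ASM_EXTABLE_KACCESS_MEM_ERR(insn, fixup)		\
		__ASM_EXTABLE_RAW(insn, fixup, EX_TYPE_KACCESS_MEM_ERR, 0)

	/*
	 * fixup_exception() would deliberately not handle this type, so
	 * ordinary faults stay fatal; only the memory error fixup path
	 * would accept it.
	 */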

> 
> [...]
> 
>> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
>> index 478e639f8680..28ec35e3d210 100644
>> --- a/arch/arm64/mm/extable.c
>> +++ b/arch/arm64/mm/extable.c
>> @@ -85,10 +85,10 @@ bool fixup_exception_mc(struct pt_regs *regs)
>>   	if (!ex)
>>   		return false;
>>   
>> -	/*
>> -	 * This is not complete, More Machine check safe extable type can
>> -	 * be processed here.
>> -	 */
>> +	switch (ex->type) {
>> +	case EX_TYPE_UACCESS_ERR_ZERO:
>> +		return ex_handler_uaccess_err_zero(ex, regs);
>> +	}
> 
> Please fold this part into the prior patch, and start off with *only* handling
> errors on accesses already marked with EX_TYPE_UACCESS_ERR_ZERO. I think that
> change would be relatively uncontroversial, and it would be much easier to
> build atop that.

OK, the two patches will be merged in the next release.

Many thanks.
Tong.

> 
> Thanks,
> Mark.
> .



* Re: [PATCH v10 3/6] arm64: add uaccess to machine check safe
  2024-01-30 11:14     ` Tong Tiangen
@ 2024-01-30 12:01       ` Mark Rutland
  2024-01-30 13:41         ` Tong Tiangen
  0 siblings, 1 reply; 20+ messages in thread
From: Mark Rutland @ 2024-01-30 12:01 UTC (permalink / raw)
  To: Tong Tiangen
  Cc: Catalin Marinas, Will Deacon, James Morse, Robin Murphy,
	Andrey Ryabinin, Alexander Potapenko, Alexander Viro,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
	Andrew Morton, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, linux-arm-kernel, linux-mm, linuxppc-dev,
	linux-kernel, kasan-dev, wangkefeng.wang, Guohanjun

On Tue, Jan 30, 2024 at 07:14:35PM +0800, Tong Tiangen wrote:
> On 2024/1/30 1:43, Mark Rutland wrote:
> > On Mon, Jan 29, 2024 at 09:46:49PM +0800, Tong Tiangen wrote:
> > Further, this change will also silently fixup unexpected kernel faults if we
> > pass bad kernel pointers to copy_{to,from}_user, which will hide real bugs.
> 
> I think this is better than the panic kernel, because the real bugs
> belongs to the user process. Even if the wrong pointer is
> transferred, the page corresponding to the wrong pointer has a memory
> error.

I think you have misunderstood my point; I'm talking about the case of a bad
kernel pointer *without* a memory error.

For example, consider some buggy code such as:

	void __user *uptr = some_valid_user_pointer;
	void *kptr = NULL; // or any other bad pointer

	ret = copy_to_user(uptr, kptr, size);
	if (ret)
		return -EFAULT;

Before this patch, when copy_to_user() attempted to load from NULL it would
fault, there would be no fixup handler for the LDR, and the kernel would die(),
reporting the bad kernel access.

After this patch (which adds fixup handlers to all the LDR*s in
copy_to_user()), the fault (which is *not* a memory error) would be handled by
the fixup handler, and copy_to_user() would return an error without *any*
indication of the horrible kernel bug.

This will hide kernel bugs, which will make those harder to identify and fix,
and will also potentially make it easier to exploit the kernel: if the user
somehow gains control of the kernel pointer, they can rely on the fixup handler
returning an error, and can scan through memory rather than dying as soon as
they pass a bad pointer.

> In addition, the panic information contains necessary information
> for users to check.

There is no panic() in the case I am describing.

> > So NAK to this change as-is; likewise for the addition of USER() to other ldr*
> > macros in copy_from_user.S and the addition of USER() str* macros in
> > copy_to_user.S.
> > 
> > If we want to handle memory errors on some kaccesses, we need a new EX_TYPE_*
> > separate from the usual EX_TYPE_KACESS_ERR_ZERO that means "handle memory
> > errors, but treat other faults as fatal". That should come with a rationale and
> > explanation of why it's actually useful.
> 
> This makes sense. Add kaccess types that can be processed properly.
> 
> > 
> > [...]
> > 
> > > diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
> > > index 478e639f8680..28ec35e3d210 100644
> > > --- a/arch/arm64/mm/extable.c
> > > +++ b/arch/arm64/mm/extable.c
> > > @@ -85,10 +85,10 @@ bool fixup_exception_mc(struct pt_regs *regs)
> > >   	if (!ex)
> > >   		return false;
> > > -	/*
> > > -	 * This is not complete, More Machine check safe extable type can
> > > -	 * be processed here.
> > > -	 */
> > > +	switch (ex->type) {
> > > +	case EX_TYPE_UACCESS_ERR_ZERO:
> > > +		return ex_handler_uaccess_err_zero(ex, regs);
> > > +	}
> > 
> > > Please fold this part into the prior patch, and start off with *only* handling
> > errors on accesses already marked with EX_TYPE_UACCESS_ERR_ZERO. I think that
> > change would be relatively uncontroversial, and it would be much easier to
> > build atop that.
> 
> OK, the two patches will be merged in the next release.

Thanks.

Mark.



* Re: [PATCH v10 2/6] arm64: add support for machine check error safe
  2024-01-30 10:57     ` Tong Tiangen
@ 2024-01-30 13:07       ` Mark Rutland
  2024-01-30 13:22         ` Tong Tiangen
  0 siblings, 1 reply; 20+ messages in thread
From: Mark Rutland @ 2024-01-30 13:07 UTC (permalink / raw)
  To: Tong Tiangen
  Cc: Catalin Marinas, Will Deacon, James Morse, Robin Murphy,
	Andrey Ryabinin, Alexander Potapenko, Alexander Viro,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
	Andrew Morton, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, linux-arm-kernel, linux-mm, linuxppc-dev,
	linux-kernel, kasan-dev, wangkefeng.wang, Guohanjun

On Tue, Jan 30, 2024 at 06:57:24PM +0800, Tong Tiangen wrote:
> On 2024/1/30 1:51, Mark Rutland wrote:
> > On Mon, Jan 29, 2024 at 09:46:48PM +0800, Tong Tiangen wrote:

> > > diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> > > index 55f6455a8284..312932dc100b 100644
> > > --- a/arch/arm64/mm/fault.c
> > > +++ b/arch/arm64/mm/fault.c
> > > @@ -730,6 +730,31 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
> > >   	return 1; /* "fault" */
> > >   }
> > > +static bool arm64_do_kernel_sea(unsigned long addr, unsigned int esr,
> > > +				     struct pt_regs *regs, int sig, int code)
> > > +{
> > > +	if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
> > > +		return false;
> > > +
> > > +	if (user_mode(regs))
> > > +		return false;
> > 
> > This function is called "arm64_do_kernel_sea"; surely the caller should *never*
> > call this for a SEA taken from user mode?
> 
> In do_sea(), the processing logic is as follows:
>   do_sea()
>   {
>     [...]
>     if (user_mode(regs) && apei_claim_sea(regs) == 0) {
>        return 0;
>     }
>     [...]
>     //[1]
>     if (!arm64_do_kernel_sea()) {
>        arm64_notify_die();
>     }
>   }
> 
> [1] user_mode() is still possible to go here,If user_mode() goes here,
>  it indicates that the impact caused by the memory error cannot be
>  processed correctly by apei_claim_sea().
> 
> 
> In this case, only arm64_notify_die() can be used, This also maintains
> the original logic of user_mode()'s processing.

My point is that either:

(a) The name means that this should *only* be called for SEAs from a kernel
    context, and the caller should be responsible for ensuring that.

(b) The name is misleading, and the 'kernel' part should be removed from the
    name.

I prefer (a), and if you head down that route it's clear that you can get rid
of a bunch of redundant logic and remove the need for do_kernel_sea(), anyway,
e.g.

| static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
| {
|         const struct fault_info *inf = esr_to_fault_info(esr);
|         bool claimed = apei_claim_sea(regs) == 0;
|         unsigned long siaddr;
| 
|         if (claimed) {
|                 if (user_mode(regs)) {
|                         /*  
|                          * APEI claimed this as a firmware-first notification.
|                          * Some processing deferred to task_work before ret_to_user().
|                          */
|                         return 0;
|                 } else {
|                         /*
|                          * TODO: explain why this is correct.
|                          */
|                         if ((current->flags & PF_KTHREAD) &&
|                             fixup_exception_mc(regs))
|                                 return 0;
|                 }
|         }
| 
|         if (esr & ESR_ELx_FnV) {
|                 siaddr = 0;
|         } else {
|                 /*  
|                  * The architecture specifies that the tag bits of FAR_EL1 are
|                  * UNKNOWN for synchronous external aborts. Mask them out now
|                  * so that userspace doesn't see them.
|                  */
|                 siaddr  = untagged_addr(far);
|         }   
|         arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
| 
|         return 0;
| }

> > > +
> > > +	if (apei_claim_sea(regs) < 0)
> > > +		return false;
> > > +
> > > +	if (!fixup_exception_mc(regs))
> > > +		return false;
> > > +
> > > +	if (current->flags & PF_KTHREAD)
> > > +		return true;
> > 
> > I think this needs a comment; why do we allow kthreads to go on, yet kill user
> > threads? What about helper threads (e.g. for io_uring)?
> 
> If a memory error occurs in the kernel thread, the problem is more
> serious than that of the user thread. As a result, related kernel
> functions, such as khugepaged, cannot run properly. kernel panic should
> be a better choice at this time.
> 
> Therefore, the processing scope of this framework is limited to the user
> thread.

That's reasonable, but needs to be explained in a comment.

Also, as above, I think you haven't considered helper threads (e.g. io_uring),
which don't have PF_KTHREAD set but do have PF_USER_WORKER set. I suspect those
need the same treatment as kthreads.

> > > +	set_thread_esr(0, esr);
> > 
> > Why do we set the ESR to 0?
> 
> The purpose is to reuse the logic of arm64_notify_die() and set the
> following parameters before sending signals to users:
>   current->thread.fault_address = 0;
>   current->thread.fault_code = err;

Ok, but there's no need to open-code that.

As per my above example, please continue to use the existing call to
arm64_notify_die() rather than open-coding bits of it.

Mark.



* Re: [PATCH v10 2/6] arm64: add support for machine check error safe
  2024-01-30 13:07       ` Mark Rutland
@ 2024-01-30 13:22         ` Tong Tiangen
  0 siblings, 0 replies; 20+ messages in thread
From: Tong Tiangen @ 2024-01-30 13:22 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, James Morse, Robin Murphy,
	Andrey Ryabinin, Alexander Potapenko, Alexander Viro,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
	Andrew Morton, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, linux-arm-kernel, linux-mm, linuxppc-dev,
	linux-kernel, kasan-dev, wangkefeng.wang, Guohanjun



On 2024/1/30 21:07, Mark Rutland wrote:
> On Tue, Jan 30, 2024 at 06:57:24PM +0800, Tong Tiangen wrote:
>> On 2024/1/30 1:51, Mark Rutland wrote:
>>> On Mon, Jan 29, 2024 at 09:46:48PM +0800, Tong Tiangen wrote:
> 
>>>> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
>>>> index 55f6455a8284..312932dc100b 100644
>>>> --- a/arch/arm64/mm/fault.c
>>>> +++ b/arch/arm64/mm/fault.c
>>>> @@ -730,6 +730,31 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
>>>>    	return 1; /* "fault" */
>>>>    }
>>>> +static bool arm64_do_kernel_sea(unsigned long addr, unsigned int esr,
>>>> +				     struct pt_regs *regs, int sig, int code)
>>>> +{
>>>> +	if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
>>>> +		return false;
>>>> +
>>>> +	if (user_mode(regs))
>>>> +		return false;
>>>
>>> This function is called "arm64_do_kernel_sea"; surely the caller should *never*
>>> call this for a SEA taken from user mode?
>>
>> In do_sea(), the processing logic is as follows:
>>    do_sea()
>>    {
>>      [...]
>>      if (user_mode(regs) && apei_claim_sea(regs) == 0) {
>>         return 0;
>>      }
>>      [...]
>>      //[1]
>>      if (!arm64_do_kernel_sea()) {
>>         arm64_notify_die();
>>      }
>>    }
>>
>> [1] user_mode() is still possible to go here,If user_mode() goes here,
>>   it indicates that the impact caused by the memory error cannot be
>>   processed correctly by apei_claim_sea().
>>
>>
>> In this case, only arm64_notify_die() can be used, This also maintains
>> the original logic of user_mode()'s processing.
> 
> My point is that either:
> 
> (a) The name means that this should *only* be called for SEAs from a kernel
>      context, and the caller should be responsible for ensuring that.
> 
> (b) The name is misleading, and the 'kernel' part should be removed from the
>      name.
> 
> I prefer (a), and if you head down that route it's clear that you can get rid
> of a bunch of redundant logic and remove the need for do_kernel_sea(), anyway,
> e.g.
> 
> | static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
> | {
> |         const struct fault_info *inf = esr_to_fault_info(esr);
> |         bool claimed = apei_claim_sea(regs) == 0;
> |         unsigned long siaddr;
> |
> |         if (claimed) {
> |                 if (user_mode(regs)) {
> |                         /*
> |                          * APEI claimed this as a firmware-first notification.
> |                          * Some processing deferred to task_work before ret_to_user().
> |                          */
> |                         return 0;
> |                 } else {
> |                         /*
> |                          * TODO: explain why this is correct.
> |                          */
> |                         if ((current->flags & PF_KTHREAD) &&
> |                             fixup_exception_mc(regs))
> |                                 return 0;
> |                 }
> |         }

This code is more concise and avoids the misleading function name; I'll
use it in the next version :)

> |
> |         if (esr & ESR_ELx_FnV) {
> |                 siaddr = 0;
> |         } else {
> |                 /*
> |                  * The architecture specifies that the tag bits of FAR_EL1 are
> |                  * UNKNOWN for synchronous external aborts. Mask them out now
> |                  * so that userspace doesn't see them.
> |                  */
> |                 siaddr  = untagged_addr(far);
> |         }
> |         arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
> |
> |         return 0;
> | }
> 
>>>> +
>>>> +	if (apei_claim_sea(regs) < 0)
>>>> +		return false;
>>>> +
>>>> +	if (!fixup_exception_mc(regs))
>>>> +		return false;
>>>> +
>>>> +	if (current->flags & PF_KTHREAD)
>>>> +		return true;
>>>
>>> I think this needs a comment; why do we allow kthreads to go on, yet kill user
>>> threads? What about helper threads (e.g. for io_uring)?
>>
>> If a memory error occurs in the kernel thread, the problem is more
>> serious than that of the user thread. As a result, related kernel
>> functions, such as khugepaged, cannot run properly. kernel panic should
>> be a better choice at this time.
>>
>> Therefore, the processing scope of this framework is limited to the user
>> thread.
> 
> That's reasonable, but needs to be explained in a comment.
> 
> Also, as above, I think you haven't considered helper threads (e.g. io_uring),
> which don't have PF_KTHREAD set but do have PF_USER_WORKER set. I suspect those
> need the same treatment as kthreads.

Okay, I'm going to investigate PF_USER_WORKER.
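
If they do need the same treatment, the test could simply cover both
flags, something like (sketch only; the helper name is made up):

	/*
	 * Hypothetical helper: treat io_uring/vhost user workers
	 * (PF_USER_WORKER) the same as kthreads for the purpose of
	 * memory error recovery.
	 */
	static bool sea_task_has_no_user_context(struct task_struct *tsk)
	{
		return !!(tsk->flags & (PF_KTHREAD | PF_USER_WORKER));
	}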

> 
>>>> +	set_thread_esr(0, esr);
>>>
>>> Why do we set the ESR to 0?
>>
>> The purpose is to reuse the logic of arm64_notify_die() and set the
>> following parameters before sending signals to users:
>>    current->thread.fault_address = 0;
>>    current->thread.fault_code = err;
> 
> Ok, but there's no need to open-code that.
> 
> As per my above example, please continue to use the existing call to
> arm64_notify_die() rather than open-coding bits of it.

OK.

Many thanks.
Tong.
> 
> Mark.
> .



* Re: [PATCH v10 3/6] arm64: add uaccess to machine check safe
  2024-01-30 12:01       ` Mark Rutland
@ 2024-01-30 13:41         ` Tong Tiangen
  0 siblings, 0 replies; 20+ messages in thread
From: Tong Tiangen @ 2024-01-30 13:41 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, James Morse, Robin Murphy,
	Andrey Ryabinin, Alexander Potapenko, Alexander Viro,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
	Andrew Morton, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, linux-arm-kernel, linux-mm, linuxppc-dev,
	linux-kernel, kasan-dev, wangkefeng.wang, Guohanjun



On 2024/1/30 20:01, Mark Rutland wrote:
> On Tue, Jan 30, 2024 at 07:14:35PM +0800, Tong Tiangen wrote:
>> On 2024/1/30 1:43, Mark Rutland wrote:
>>> On Mon, Jan 29, 2024 at 09:46:49PM +0800, Tong Tiangen wrote:
>>> Further, this change will also silently fixup unexpected kernel faults if we
>>> pass bad kernel pointers to copy_{to,from}_user, which will hide real bugs.
>>
>> I think this is better than the panic kernel, because the real bugs
>> belongs to the user process. Even if the wrong pointer is
>> transferred, the page corresponding to the wrong pointer has a memory
>> error.
> 
> I think you have misunderstood my point; I'm talking about the case of a bad
> kernel pointer *without* a memory error.
> 
> For example, consider some buggy code such as:
> 
> 	void __user *uptr = some_valid_user_pointer;
> 	void *kptr = NULL; // or any other bad pointer
> 
> 	ret = copy_to_user(uptr, kptr, size);
> 	if (ret)
> 		return -EFAULT;
> 
> Before this patch, when copy_to_user() attempted to load from NULL it would
> fault, there would be no fixup handler for the LDR, and the kernel would die(),
> reporting the bad kernel access.
> 
> After this patch (which adds fixup handlers to all the LDR*s in
> copy_to_user()), the fault (which is *not* a memory error) would be handled by
> the fixup handler, and copy_to_user() would return an error without *any*
> indication of the horrible kernel bug.
> 
> This will hide kernel bugs, which will make those harder to identify and fix,
> and will also potentially make it easier to exploit the kernel: if the user
> somehow gains control of the kernel pointer, they can rely on the fixup handler
> returning an error, and can scan through memory rather than dying as soon as
> they pass a bad pointer.

I should understand what you mean. I'll think about this and reply.

Many thanks.
Tong.

> 
>> In addition, the panic information contains necessary information
>> for users to check.
> 
> There is no panic() in the case I am describing.
> 
>>> So NAK to this change as-is; likewise for the addition of USER() to other ldr*
>>> macros in copy_from_user.S and the addition of USER() str* macros in
>>> copy_to_user.S.
>>>
>>> If we want to handle memory errors on some kaccesses, we need a new EX_TYPE_*
>>> separate from the usual EX_TYPE_KACESS_ERR_ZERO that means "handle memory
>>> errors, but treat other faults as fatal". That should come with a rationale and
>>> explanation of why it's actually useful.
>>
>> This makes sense. Add kaccess types that can be processed properly.
>>
>>>
>>> [...]
>>>
>>>> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
>>>> index 478e639f8680..28ec35e3d210 100644
>>>> --- a/arch/arm64/mm/extable.c
>>>> +++ b/arch/arm64/mm/extable.c
>>>> @@ -85,10 +85,10 @@ bool fixup_exception_mc(struct pt_regs *regs)
>>>>    	if (!ex)
>>>>    		return false;
>>>> -	/*
>>>> -	 * This is not complete, More Machine check safe extable type can
>>>> -	 * be processed here.
>>>> -	 */
>>>> +	switch (ex->type) {
>>>> +	case EX_TYPE_UACCESS_ERR_ZERO:
>>>> +		return ex_handler_uaccess_err_zero(ex, regs);
>>>> +	}
>>>
>>> Please fold this part into the prior patch, and start off with *only* handling
>>> errors on accesses already marked with EX_TYPE_UACCESS_ERR_ZERO. I think that
>>> change would be relatively uncontroversial, and it would be much easier to
>>> build atop that.
>>
>> OK, the two patches will be merged in the next release.
> 
> Thanks.
> 
> Mark.
> .



* Re: [PATCH v10 5/6] arm64: support copy_mc_[user]_highpage()
  2024-01-30 10:31   ` Mark Rutland
@ 2024-01-30 13:50     ` Tong Tiangen
  0 siblings, 0 replies; 20+ messages in thread
From: Tong Tiangen @ 2024-01-30 13:50 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, James Morse, Robin Murphy,
	Andrey Ryabinin, Alexander Potapenko, Alexander Viro,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
	Andrew Morton, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, linux-arm-kernel, linux-mm, linuxppc-dev,
	linux-kernel, kasan-dev, wangkefeng.wang, Guohanjun



On 2024/1/30 18:31, Mark Rutland wrote:
> On Mon, Jan 29, 2024 at 09:46:51PM +0800, Tong Tiangen wrote:
>> Currently, many scenarios that can tolerate memory errors when copying page
>> have been supported in the kernel[1][2][3], all of which are implemented by
>> copy_mc_[user]_highpage(). arm64 should also support this mechanism.
>>
>> Due to mte, arm64 needs to have its own copy_mc_[user]_highpage()
>> architecture implementation, macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
>> __HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control it.
>>
>> Add new helper copy_mc_page() which provide a page copy implementation with
>> machine check safe. The copy_mc_page() in copy_mc_page.S is largely borrows
>> from copy_page() in copy_page.S and the main difference is copy_mc_page()
>> add extable entry to every load/store insn to support machine check safe.
>>
>> Add new extable type EX_TYPE_COPY_MC_PAGE_ERR_ZERO which used in
>> copy_mc_page().
>>
>> [1]a873dfe1032a ("mm, hwpoison: try to recover from copy-on write faults")
>> [2]5f2500b93cc9 ("mm/khugepaged: recover from poisoned anonymous memory")
>> [3]6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
>>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>> ---
>>   arch/arm64/include/asm/asm-extable.h | 15 ++++++
>>   arch/arm64/include/asm/assembler.h   |  4 ++
>>   arch/arm64/include/asm/mte.h         |  5 ++
>>   arch/arm64/include/asm/page.h        | 10 ++++
>>   arch/arm64/lib/Makefile              |  2 +
>>   arch/arm64/lib/copy_mc_page.S        | 78 ++++++++++++++++++++++++++++
>>   arch/arm64/lib/mte.S                 | 27 ++++++++++
>>   arch/arm64/mm/copypage.c             | 66 ++++++++++++++++++++---
>>   arch/arm64/mm/extable.c              |  7 +--
>>   include/linux/highmem.h              |  8 +++
>>   10 files changed, 213 insertions(+), 9 deletions(-)
>>   create mode 100644 arch/arm64/lib/copy_mc_page.S
>>
>> diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
>> index 980d1dd8e1a3..819044fefbe7 100644
>> --- a/arch/arm64/include/asm/asm-extable.h
>> +++ b/arch/arm64/include/asm/asm-extable.h
>> @@ -10,6 +10,7 @@
>>   #define EX_TYPE_UACCESS_ERR_ZERO	2
>>   #define EX_TYPE_KACCESS_ERR_ZERO	3
>>   #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
>> +#define EX_TYPE_COPY_MC_PAGE_ERR_ZERO	5
>>   
>>   /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
>>   #define EX_DATA_REG_ERR_SHIFT	0
>> @@ -51,6 +52,16 @@
>>   #define _ASM_EXTABLE_UACCESS(insn, fixup)				\
>>   	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
>>   
>> +#define _ASM_EXTABLE_COPY_MC_PAGE_ERR_ZERO(insn, fixup, err, zero)	\
>> +	__ASM_EXTABLE_RAW(insn, fixup, 					\
>> +			  EX_TYPE_COPY_MC_PAGE_ERR_ZERO,		\
>> +			  (						\
>> +			    EX_DATA_REG(ERR, err) |			\
>> +			    EX_DATA_REG(ZERO, zero)			\
>> +			  ))
>> +
>> +#define _ASM_EXTABLE_COPY_MC_PAGE(insn, fixup)				\
>> +	_ASM_EXTABLE_COPY_MC_PAGE_ERR_ZERO(insn, fixup, wzr, wzr)
>>   /*
>>    * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
>>    * when an unhandled fault is taken.
>> @@ -59,6 +70,10 @@
>>   	_ASM_EXTABLE_UACCESS(\insn, \fixup)
>>   	.endm
>>   
>> +	.macro          _asm_extable_copy_mc_page, insn, fixup
>> +	_ASM_EXTABLE_COPY_MC_PAGE(\insn, \fixup)
>> +	.endm
>> +
> 
> This should share a common EX_TYPE_ with the other "kaccess where memory error
> is handled but other faults are fatal" cases.

OK, reasonable.
> 
>>   /*
>>    * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
>>    * do nothing.
>> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
>> index 513787e43329..e1d8ce155878 100644
>> --- a/arch/arm64/include/asm/assembler.h
>> +++ b/arch/arm64/include/asm/assembler.h
>> @@ -154,6 +154,10 @@ lr	.req	x30		// link register
>>   #define CPU_LE(code...) code
>>   #endif
>>   
>> +#define CPY_MC(l, x...)		\
>> +9999:   x;			\
>> +	_asm_extable_copy_mc_page    9999b, l
>> +
>>   /*
>>    * Define a macro that constructs a 64-bit value by concatenating two
>>    * 32-bit registers. Note that on big endian systems the order of the
>> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
>> index 91fbd5c8a391..9cdded082dd4 100644
>> --- a/arch/arm64/include/asm/mte.h
>> +++ b/arch/arm64/include/asm/mte.h
>> @@ -92,6 +92,7 @@ static inline bool try_page_mte_tagging(struct page *page)
>>   void mte_zero_clear_page_tags(void *addr);
>>   void mte_sync_tags(pte_t pte, unsigned int nr_pages);
>>   void mte_copy_page_tags(void *kto, const void *kfrom);
>> +int mte_copy_mc_page_tags(void *kto, const void *kfrom);
>>   void mte_thread_init_user(void);
>>   void mte_thread_switch(struct task_struct *next);
>>   void mte_cpu_setup(void);
>> @@ -128,6 +129,10 @@ static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages)
>>   static inline void mte_copy_page_tags(void *kto, const void *kfrom)
>>   {
>>   }
>> +static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom)
>> +{
>> +	return 0;
>> +}
>>   static inline void mte_thread_init_user(void)
>>   {
>>   }
>> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
>> index 2312e6ee595f..304cc86b8a10 100644
>> --- a/arch/arm64/include/asm/page.h
>> +++ b/arch/arm64/include/asm/page.h
>> @@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
>>   void copy_highpage(struct page *to, struct page *from);
>>   #define __HAVE_ARCH_COPY_HIGHPAGE
>>   
>> +#ifdef CONFIG_ARCH_HAS_COPY_MC
>> +int copy_mc_page(void *to, const void *from);
>> +int copy_mc_highpage(struct page *to, struct page *from);
>> +#define __HAVE_ARCH_COPY_MC_HIGHPAGE
>> +
>> +int copy_mc_user_highpage(struct page *to, struct page *from,
>> +		unsigned long vaddr, struct vm_area_struct *vma);
>> +#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
>> +#endif
>> +
>>   struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>>   						unsigned long vaddr);
>>   #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
>> diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
>> index 29490be2546b..a2fd865b816d 100644
>> --- a/arch/arm64/lib/Makefile
>> +++ b/arch/arm64/lib/Makefile
>> @@ -15,6 +15,8 @@ endif
>>   
>>   lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
>>   
>> +lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
>> +
>>   obj-$(CONFIG_CRC32) += crc32.o
>>   
>>   obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
>> diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
>> new file mode 100644
>> index 000000000000..524534d26d86
>> --- /dev/null
>> +++ b/arch/arm64/lib/copy_mc_page.S
>> @@ -0,0 +1,78 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +/*
>> + * Copyright (C) 2012 ARM Ltd.
>> + */
>> +
>> +#include <linux/linkage.h>
>> +#include <linux/const.h>
>> +#include <asm/assembler.h>
>> +#include <asm/page.h>
>> +#include <asm/cpufeature.h>
>> +#include <asm/alternative.h>
>> +#include <asm/asm-extable.h>
>> +
>> +/*
>> + * Copy a page from src to dest (both are page aligned) with machine check
>> + *
>> + * Parameters:
>> + *	x0 - dest
>> + *	x1 - src
>> + * Returns:
>> + * 	x0 - Return 0 if copy success, or -EFAULT if anything goes wrong
>> + *	     while copying.
>> + */
>> +SYM_FUNC_START(__pi_copy_mc_page)
>> +CPY_MC(9998f, ldp	x2, x3, [x1])
>> +CPY_MC(9998f, ldp	x4, x5, [x1, #16])
>> +CPY_MC(9998f, ldp	x6, x7, [x1, #32])
>> +CPY_MC(9998f, ldp	x8, x9, [x1, #48])
>> +CPY_MC(9998f, ldp	x10, x11, [x1, #64])
>> +CPY_MC(9998f, ldp	x12, x13, [x1, #80])
>> +CPY_MC(9998f, ldp	x14, x15, [x1, #96])
>> +CPY_MC(9998f, ldp	x16, x17, [x1, #112])
>> +
>> +	add	x0, x0, #256
>> +	add	x1, x1, #128
>> +1:
>> +	tst	x0, #(PAGE_SIZE - 1)
>> +
>> +CPY_MC(9998f, stnp	x2, x3, [x0, #-256])
>> +CPY_MC(9998f, ldp	x2, x3, [x1])
>> +CPY_MC(9998f, stnp	x4, x5, [x0, #16 - 256])
>> +CPY_MC(9998f, ldp	x4, x5, [x1, #16])
>> +CPY_MC(9998f, stnp	x6, x7, [x0, #32 - 256])
>> +CPY_MC(9998f, ldp	x6, x7, [x1, #32])
>> +CPY_MC(9998f, stnp	x8, x9, [x0, #48 - 256])
>> +CPY_MC(9998f, ldp	x8, x9, [x1, #48])
>> +CPY_MC(9998f, stnp	x10, x11, [x0, #64 - 256])
>> +CPY_MC(9998f, ldp	x10, x11, [x1, #64])
>> +CPY_MC(9998f, stnp	x12, x13, [x0, #80 - 256])
>> +CPY_MC(9998f, ldp	x12, x13, [x1, #80])
>> +CPY_MC(9998f, stnp	x14, x15, [x0, #96 - 256])
>> +CPY_MC(9998f, ldp	x14, x15, [x1, #96])
>> +CPY_MC(9998f, stnp	x16, x17, [x0, #112 - 256])
>> +CPY_MC(9998f, ldp	x16, x17, [x1, #112])
>> +
>> +	add	x0, x0, #128
>> +	add	x1, x1, #128
>> +
>> +	b.ne	1b
>> +
>> +CPY_MC(9998f, stnp	x2, x3, [x0, #-256])
>> +CPY_MC(9998f, stnp	x4, x5, [x0, #16 - 256])
>> +CPY_MC(9998f, stnp	x6, x7, [x0, #32 - 256])
>> +CPY_MC(9998f, stnp	x8, x9, [x0, #48 - 256])
>> +CPY_MC(9998f, stnp	x10, x11, [x0, #64 - 256])
>> +CPY_MC(9998f, stnp	x12, x13, [x0, #80 - 256])
>> +CPY_MC(9998f, stnp	x14, x15, [x0, #96 - 256])
>> +CPY_MC(9998f, stnp	x16, x17, [x0, #112 - 256])
>> +
>> +	mov x0, #0
>> +	ret
>> +
>> +9998:	mov x0, #-EFAULT
>> +	ret
>> +
>> +SYM_FUNC_END(__pi_copy_mc_page)
>> +SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page)
>> +EXPORT_SYMBOL(copy_mc_page)
> 
> This is a duplicate of the existing copy_page logic; it should be refactored
> such that the logic can be shared.

OK, I'll think about how to do it.

> 
>> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
>> index 5018ac03b6bf..2b748e83f6cf 100644
>> --- a/arch/arm64/lib/mte.S
>> +++ b/arch/arm64/lib/mte.S
>> @@ -80,6 +80,33 @@ SYM_FUNC_START(mte_copy_page_tags)
>>   	ret
>>   SYM_FUNC_END(mte_copy_page_tags)
>>   
>> +/*
>> + * Copy the tags from the source page to the destination one wiht machine check safe
>> + *   x0 - address of the destination page
>> + *   x1 - address of the source page
>> + * Returns:
>> + *   x0 - Return 0 if copy success, or
>> + *        -EFAULT if anything goes wrong while copying.
>> + */
>> +SYM_FUNC_START(mte_copy_mc_page_tags)
>> +	mov	x2, x0
>> +	mov	x3, x1
>> +	multitag_transfer_size x5, x6
>> +1:
>> +CPY_MC(2f, ldgm	x4, [x3])
>> +CPY_MC(2f, stgm	x4, [x2])
>> +	add	x2, x2, x5
>> +	add	x3, x3, x5
>> +	tst	x2, #(PAGE_SIZE - 1)
>> +	b.ne	1b
>> +
>> +	mov x0, #0
>> +	ret
>> +
>> +2:	mov x0, #-EFAULT
>> +	ret
>> +SYM_FUNC_END(mte_copy_mc_page_tags)
>> +
>>   /*
>>    * Read tags from a user buffer (one tag per byte) and set the corresponding
>>    * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
>> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
>> index a7bb20055ce0..9765e40cde6c 100644
>> --- a/arch/arm64/mm/copypage.c
>> +++ b/arch/arm64/mm/copypage.c
>> @@ -14,6 +14,25 @@
>>   #include <asm/cpufeature.h>
>>   #include <asm/mte.h>
>>   
>> +static int do_mte(struct page *to, struct page *from, void *kto, void *kfrom, bool mc)
>> +{
>> +	int ret = 0;
>> +
>> +	if (system_supports_mte() && page_mte_tagged(from)) {
>> +		/* It's a new page, shouldn't have been tagged yet */
>> +		WARN_ON_ONCE(!try_page_mte_tagging(to));
>> +		if (mc)
>> +			ret = mte_copy_mc_page_tags(kto, kfrom);
>> +		else
>> +			mte_copy_page_tags(kto, kfrom);
>> +
>> +		if (!ret)
>> +			set_page_mte_tagged(to);
>> +	}
>> +
>> +	return ret;
>> +}
> 
> The boolean 'mc' argument makes this painful to read, and I don't think it's
> necessary to have this helper anyway.
> 
> It'd be clearer to have this expanded inline in the callers, e.g.
> 
> 	// in copy_highpage(), as-is today
> 	if (system_supports_mte() && page_mte_tagged(from)) {
> 		/* It's a new page, shouldn't have been tagged yet */
> 		WARN_ON_ONCE(!try_page_mte_tagging(to));
> 		mte_copy_page_tags(kto, kfrom);
> 		set_page_mte_tagged(to);
> 	}
> 
> 	// in copy_mc_highpage()
> 	if (system_supports_mte() && page_mte_tagged(from)) {
> 		/* It's a new page, shouldn't have been tagged yet */
> 		WARN_ON_ONCE(!try_page_mte_tagging(to));
> 		ret = mte_copy_mc_page_tags(kto, kfrom);
> 		if (ret)
> 			return -EFAULT;
> 		set_page_mte_tagged(to);
> 	}

OK, I'll follow this idea in the next version.
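
With that folded in, copy_mc_highpage() would end up roughly as below
(a sketch assembled from the diff above plus your suggestion, untested):

	int copy_mc_highpage(struct page *to, struct page *from)
	{
		void *kto = page_address(to);
		void *kfrom = page_address(from);

		if (copy_mc_page(kto, kfrom))
			return -EFAULT;

		if (kasan_hw_tags_enabled())
			page_kasan_tag_reset(to);

		if (system_supports_mte() && page_mte_tagged(from)) {
			/* It's a new page, shouldn't have been tagged yet */
			WARN_ON_ONCE(!try_page_mte_tagging(to));
			if (mte_copy_mc_page_tags(kto, kfrom))
				return -EFAULT;
			set_page_mte_tagged(to);
		}

		return 0;
	}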

> 
> Mark.
> 
>> +
>>   void copy_highpage(struct page *to, struct page *from)
>>   {
>>   	void *kto = page_address(to);
>> @@ -24,12 +43,7 @@ void copy_highpage(struct page *to, struct page *from)
>>   	if (kasan_hw_tags_enabled())
>>   		page_kasan_tag_reset(to);
>>   
>> -	if (system_supports_mte() && page_mte_tagged(from)) {
>> -		/* It's a new page, shouldn't have been tagged yet */
>> -		WARN_ON_ONCE(!try_page_mte_tagging(to));
>> -		mte_copy_page_tags(kto, kfrom);
>> -		set_page_mte_tagged(to);
>> -	}
>> +	do_mte(to, from, kto, kfrom, false);
>>   }
>>   EXPORT_SYMBOL(copy_highpage);
>>   
>> @@ -40,3 +54,43 @@ void copy_user_highpage(struct page *to, struct page *from,
>>   	flush_dcache_page(to);
>>   }
>>   EXPORT_SYMBOL_GPL(copy_user_highpage);
>> +
>> +#ifdef CONFIG_ARCH_HAS_COPY_MC
>> +/*
>> + * Return -EFAULT if anything goes wrong while copying page or mte.
>> + */
>> +int copy_mc_highpage(struct page *to, struct page *from)
>> +{
>> +	void *kto = page_address(to);
>> +	void *kfrom = page_address(from);
>> +	int ret;
>> +
>> +	ret = copy_mc_page(kto, kfrom);
>> +	if (ret)
>> +		return -EFAULT;
>> +
>> +	if (kasan_hw_tags_enabled())
>> +		page_kasan_tag_reset(to);
>> +
>> +	ret = do_mte(to, from, kto, kfrom, true);
>> +	if (ret)
>> +		return -EFAULT;
>> +
>> +	return 0;
>> +}
>> +EXPORT_SYMBOL(copy_mc_highpage);
>> +
>> +int copy_mc_user_highpage(struct page *to, struct page *from,
>> +			unsigned long vaddr, struct vm_area_struct *vma)
>> +{
>> +	int ret;
>> +
>> +	ret = copy_mc_highpage(to, from);
>> +
>> +	if (!ret)
>> +		flush_dcache_page(to);
>> +
>> +	return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
>> +#endif
>> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
>> index 28ec35e3d210..bdc81518d207 100644
>> --- a/arch/arm64/mm/extable.c
>> +++ b/arch/arm64/mm/extable.c
>> @@ -16,7 +16,7 @@ get_ex_fixup(const struct exception_table_entry *ex)
>>   	return ((unsigned long)&ex->fixup + ex->fixup);
>>   }
>>   
>> -static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
>> +static bool ex_handler_fixup_err_zero(const struct exception_table_entry *ex,
>>   					struct pt_regs *regs)
>>   {
>>   	int reg_err = FIELD_GET(EX_DATA_REG_ERR, ex->data);
>> @@ -69,7 +69,7 @@ bool fixup_exception(struct pt_regs *regs)
>>   		return ex_handler_bpf(ex, regs);
>>   	case EX_TYPE_UACCESS_ERR_ZERO:
>>   	case EX_TYPE_KACCESS_ERR_ZERO:
>> -		return ex_handler_uaccess_err_zero(ex, regs);
>> +		return ex_handler_fixup_err_zero(ex, regs);
>>   	case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
>>   		return ex_handler_load_unaligned_zeropad(ex, regs);
>>   	}
>> @@ -87,7 +87,8 @@ bool fixup_exception_mc(struct pt_regs *regs)
>>   
>>   	switch (ex->type) {
>>   	case EX_TYPE_UACCESS_ERR_ZERO:
>> -		return ex_handler_uaccess_err_zero(ex, regs);
>> +	case EX_TYPE_COPY_MC_PAGE_ERR_ZERO:
>> +		return ex_handler_fixup_err_zero(ex, regs);
>>   	}
>>   
>>   	return false;
>> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
>> index c5ca1a1fc4f5..a42470ca42f2 100644
>> --- a/include/linux/highmem.h
>> +++ b/include/linux/highmem.h
>> @@ -332,6 +332,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
>>   #endif
>>   
>>   #ifdef copy_mc_to_kernel
>> +#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
>>   /*
>>    * If architecture supports machine check exception handling, define the
>>    * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
>> @@ -354,7 +355,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
>>   
>>   	return ret ? -EFAULT : 0;
>>   }
>> +#endif
>>   
>> +#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
>>   static inline int copy_mc_highpage(struct page *to, struct page *from)
>>   {
>>   	unsigned long ret;
>> @@ -370,20 +373,25 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
>>   
>>   	return ret ? -EFAULT : 0;
>>   }
>> +#endif
>>   #else
>> +#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
>>   static inline int copy_mc_user_highpage(struct page *to, struct page *from,
>>   					unsigned long vaddr, struct vm_area_struct *vma)
>>   {
>>   	copy_user_highpage(to, from, vaddr, vma);
>>   	return 0;
>>   }
>> +#endif
>>   
>> +#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
>>   static inline int copy_mc_highpage(struct page *to, struct page *from)
>>   {
>>   	copy_highpage(to, from);
>>   	return 0;
>>   }
>>   #endif
>> +#endif
>>   
>>   static inline void memcpy_page(struct page *dst_page, size_t dst_off,
>>   			       struct page *src_page, size_t src_off,
>> -- 
>> 2.25.1
>>
> .



* Re: [PATCH v10 6/6] arm64: introduce copy_mc_to_kernel() implementation
  2024-01-30 10:20   ` Mark Rutland
@ 2024-01-30 13:56     ` Tong Tiangen
  0 siblings, 0 replies; 20+ messages in thread
From: Tong Tiangen @ 2024-01-30 13:56 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, James Morse, Robin Murphy,
	Andrey Ryabinin, Alexander Potapenko, Alexander Viro,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
	Andrew Morton, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, linux-arm-kernel, linux-mm, linuxppc-dev,
	linux-kernel, kasan-dev, wangkefeng.wang, Guohanjun



On 2024/1/30 18:20, Mark Rutland wrote:
> On Mon, Jan 29, 2024 at 09:46:52PM +0800, Tong Tiangen wrote:
>> The copy_mc_to_kernel() helper is memory copy implementation that handles
>> source exceptions. It can be used in memory copy scenarios that tolerate
>> hardware memory errors(e.g: pmem_read/dax_copy_to_iter).
>>
>> Currently, only x86 and ppc support this helper, after arm64 support
>> machine check safe framework, we introduce copy_mc_to_kernel()
>> implementation.
>>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>> ---
>>   arch/arm64/include/asm/string.h  |   5 +
>>   arch/arm64/include/asm/uaccess.h |  21 +++
>>   arch/arm64/lib/Makefile          |   2 +-
>>   arch/arm64/lib/memcpy_mc.S       | 257 +++++++++++++++++++++++++++++++
>>   mm/kasan/shadow.c                |  12 ++
>>   5 files changed, 296 insertions(+), 1 deletion(-)
>>   create mode 100644 arch/arm64/lib/memcpy_mc.S
> 
> Looking at the diffstat and code, this duplicates arch/arm64/lib/memcpy.S with
> a few annotations. Duplicating that code is not maintainable, and so we cannot
> take this as-is.
> 
> If you want a version that can handle faults that *must* be written such that
> the code is shared with the regular memcpy. That could be done by using macros
> to instantiate two copies (one with fault handling, the other without).
> 
> It would also be very helpful to see *any* indication that this has been
> tested, which is sorely lacking in the series as-is.
> 
> Mark.

OK, that is exactly what I really want to solve right now: the large
amount of duplicated code. I'm going to think about how to deal with it.

Thank you very much for your good advice :)
Tong.

> 
>> diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h
>> index 3a3264ff47b9..995b63c26e99 100644
>> --- a/arch/arm64/include/asm/string.h
>> +++ b/arch/arm64/include/asm/string.h
>> @@ -35,6 +35,10 @@ extern void *memchr(const void *, int, __kernel_size_t);
>>   extern void *memcpy(void *, const void *, __kernel_size_t);
>>   extern void *__memcpy(void *, const void *, __kernel_size_t);
>>   
>> +#define __HAVE_ARCH_MEMCPY_MC
>> +extern int memcpy_mcs(void *, const void *, __kernel_size_t);
>> +extern int __memcpy_mcs(void *, const void *, __kernel_size_t);
>> +
>>   #define __HAVE_ARCH_MEMMOVE
>>   extern void *memmove(void *, const void *, __kernel_size_t);
>>   extern void *__memmove(void *, const void *, __kernel_size_t);
>> @@ -57,6 +61,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt);
>>    */
>>   
>>   #define memcpy(dst, src, len) __memcpy(dst, src, len)
>> +#define memcpy_mcs(dst, src, len) __memcpy_mcs(dst, src, len)
>>   #define memmove(dst, src, len) __memmove(dst, src, len)
>>   #define memset(s, c, n) __memset(s, c, n)
>>   
>> diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
>> index 14be5000c5a0..61e28ef2112a 100644
>> --- a/arch/arm64/include/asm/uaccess.h
>> +++ b/arch/arm64/include/asm/uaccess.h
>> @@ -425,4 +425,25 @@ static inline size_t probe_subpage_writeable(const char __user *uaddr,
>>   
>>   #endif /* CONFIG_ARCH_HAS_SUBPAGE_FAULTS */
>>   
>> +#ifdef CONFIG_ARCH_HAS_COPY_MC
>> +/**
>> + * copy_mc_to_kernel - memory copy that handles source exceptions
>> + *
>> + * @dst:	destination address
>> + * @src:	source address
>> + * @len:	number of bytes to copy
>> + *
>> + * Return 0 for success, or #size if there was an exception.
>> + */
>> +static inline unsigned long __must_check
>> +copy_mc_to_kernel(void *to, const void *from, unsigned long size)
>> +{
>> +	int ret;
>> +
>> +	ret = memcpy_mcs(to, from, size);
>> +	return (ret == -EFAULT) ? size : 0;
>> +}
>> +#define copy_mc_to_kernel copy_mc_to_kernel
>> +#endif
>> +
>>   #endif /* __ASM_UACCESS_H */
>> diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
>> index a2fd865b816d..899d6ae9698c 100644
>> --- a/arch/arm64/lib/Makefile
>> +++ b/arch/arm64/lib/Makefile
>> @@ -3,7 +3,7 @@ lib-y		:= clear_user.o delay.o copy_from_user.o		\
>>   		   copy_to_user.o copy_page.o				\
>>   		   clear_page.o csum.o insn.o memchr.o memcpy.o		\
>>   		   memset.o memcmp.o strcmp.o strncmp.o strlen.o	\
>> -		   strnlen.o strchr.o strrchr.o tishift.o
>> +		   strnlen.o strchr.o strrchr.o tishift.o memcpy_mc.o
>>   
>>   ifeq ($(CONFIG_KERNEL_MODE_NEON), y)
>>   obj-$(CONFIG_XOR_BLOCKS)	+= xor-neon.o
>> diff --git a/arch/arm64/lib/memcpy_mc.S b/arch/arm64/lib/memcpy_mc.S
>> new file mode 100644
>> index 000000000000..7076b500d154
>> --- /dev/null
>> +++ b/arch/arm64/lib/memcpy_mc.S
>> @@ -0,0 +1,257 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +/*
>> + * Copyright (c) 2012-2021, Arm Limited.
>> + *
>> + * Adapted from the original at:
>> + * https://github.com/ARM-software/optimized-routines/blob/afd6244a1f8d9229/string/aarch64/memcpy.S
>> + */
>> +
>> +#include <linux/linkage.h>
>> +#include <asm/assembler.h>
>> +
>> +/* Assumptions:
>> + *
>> + * ARMv8-a, AArch64, unaligned accesses.
>> + *
>> + */
>> +
>> +#define L(label) .L ## label
>> +
>> +#define dstin	x0
>> +#define src	x1
>> +#define count	x2
>> +#define dst	x3
>> +#define srcend	x4
>> +#define dstend	x5
>> +#define A_l	x6
>> +#define A_lw	w6
>> +#define A_h	x7
>> +#define B_l	x8
>> +#define B_lw	w8
>> +#define B_h	x9
>> +#define C_l	x10
>> +#define C_lw	w10
>> +#define C_h	x11
>> +#define D_l	x12
>> +#define D_h	x13
>> +#define E_l	x14
>> +#define E_h	x15
>> +#define F_l	x16
>> +#define F_h	x17
>> +#define G_l	count
>> +#define G_h	dst
>> +#define H_l	src
>> +#define H_h	srcend
>> +#define tmp1	x14
>> +
>> +/* This implementation handles overlaps and supports both memcpy and memmove
>> +   from a single entry point.  It uses unaligned accesses and branchless
>> +   sequences to keep the code small, simple and improve performance.
>> +
>> +   Copies are split into 3 main cases: small copies of up to 32 bytes, medium
>> +   copies of up to 128 bytes, and large copies.  The overhead of the overlap
>> +   check is negligible since it is only required for large copies.
>> +
>> +   Large copies use a software pipelined loop processing 64 bytes per iteration.
>> +   The destination pointer is 16-byte aligned to minimize unaligned accesses.
>> +   The loop tail is handled by always copying 64 bytes from the end.
>> +*/
>> +
>> +SYM_FUNC_START(__pi_memcpy_mcs)
>> +	add	srcend, src, count
>> +	add	dstend, dstin, count
>> +	cmp	count, 128
>> +	b.hi	L(copy_long)
>> +	cmp	count, 32
>> +	b.hi	L(copy32_128)
>> +
>> +	/* Small copies: 0..32 bytes.  */
>> +	cmp	count, 16
>> +	b.lo	L(copy16)
>> +	CPY_MC(9998f, ldp	A_l, A_h, [src])
>> +	CPY_MC(9998f, ldp	D_l, D_h, [srcend, -16])
>> +	CPY_MC(9998f, stp	A_l, A_h, [dstin])
>> +	CPY_MC(9998f, stp	D_l, D_h, [dstend, -16])
>> +	mov x0, #0
>> +	ret
>> +
>> +	/* Copy 8-15 bytes.  */
>> +L(copy16):
>> +	tbz	count, 3, L(copy8)
>> +	CPY_MC(9998f, ldr	A_l, [src])
>> +	CPY_MC(9998f, ldr	A_h, [srcend, -8])
>> +	CPY_MC(9998f, str	A_l, [dstin])
>> +	CPY_MC(9998f, str	A_h, [dstend, -8])
>> +	mov x0, #0
>> +	ret
>> +
>> +	.p2align 3
>> +	/* Copy 4-7 bytes.  */
>> +L(copy8):
>> +	tbz	count, 2, L(copy4)
>> +	CPY_MC(9998f, ldr	A_lw, [src])
>> +	CPY_MC(9998f, ldr	B_lw, [srcend, -4])
>> +	CPY_MC(9998f, str	A_lw, [dstin])
>> +	CPY_MC(9998f, str	B_lw, [dstend, -4])
>> +	mov x0, #0
>> +	ret
>> +
>> +	/* Copy 0..3 bytes using a branchless sequence.  */
>> +L(copy4):
>> +	cbz	count, L(copy0)
>> +	lsr	tmp1, count, 1
>> +	CPY_MC(9998f, ldrb	A_lw, [src])
>> +	CPY_MC(9998f, ldrb	C_lw, [srcend, -1])
>> +	CPY_MC(9998f, ldrb	B_lw, [src, tmp1])
>> +	CPY_MC(9998f, strb	A_lw, [dstin])
>> +	CPY_MC(9998f, strb	B_lw, [dstin, tmp1])
>> +	CPY_MC(9998f, strb	C_lw, [dstend, -1])
>> +L(copy0):
>> +	mov x0, #0
>> +	ret
>> +
>> +	.p2align 4
>> +	/* Medium copies: 33..128 bytes.  */
>> +L(copy32_128):
>> +	CPY_MC(9998f, ldp	A_l, A_h, [src])
>> +	CPY_MC(9998f, ldp	B_l, B_h, [src, 16])
>> +	CPY_MC(9998f, ldp	C_l, C_h, [srcend, -32])
>> +	CPY_MC(9998f, ldp	D_l, D_h, [srcend, -16])
>> +	cmp	count, 64
>> +	b.hi	L(copy128)
>> +	CPY_MC(9998f, stp	A_l, A_h, [dstin])
>> +	CPY_MC(9998f, stp	B_l, B_h, [dstin, 16])
>> +	CPY_MC(9998f, stp	C_l, C_h, [dstend, -32])
>> +	CPY_MC(9998f, stp	D_l, D_h, [dstend, -16])
>> +	mov x0, #0
>> +	ret
>> +
>> +	.p2align 4
>> +	/* Copy 65..128 bytes.  */
>> +L(copy128):
>> +	CPY_MC(9998f, ldp	E_l, E_h, [src, 32])
>> +	CPY_MC(9998f, ldp	F_l, F_h, [src, 48])
>> +	cmp	count, 96
>> +	b.ls	L(copy96)
>> +	CPY_MC(9998f, ldp	G_l, G_h, [srcend, -64])
>> +	CPY_MC(9998f, ldp	H_l, H_h, [srcend, -48])
>> +	CPY_MC(9998f, stp	G_l, G_h, [dstend, -64])
>> +	CPY_MC(9998f, stp	H_l, H_h, [dstend, -48])
>> +L(copy96):
>> +	CPY_MC(9998f, stp	A_l, A_h, [dstin])
>> +	CPY_MC(9998f, stp	B_l, B_h, [dstin, 16])
>> +	CPY_MC(9998f, stp	E_l, E_h, [dstin, 32])
>> +	CPY_MC(9998f, stp	F_l, F_h, [dstin, 48])
>> +	CPY_MC(9998f, stp	C_l, C_h, [dstend, -32])
>> +	CPY_MC(9998f, stp	D_l, D_h, [dstend, -16])
>> +	mov x0, #0
>> +	ret
>> +
>> +	.p2align 4
>> +	/* Copy more than 128 bytes.  */
>> +L(copy_long):
>> +	/* Use backwards copy if there is an overlap.  */
>> +	sub	tmp1, dstin, src
>> +	cbz	tmp1, L(copy0)
>> +	cmp	tmp1, count
>> +	b.lo	L(copy_long_backwards)
>> +
>> +	/* Copy 16 bytes and then align dst to 16-byte alignment.  */
>> +
>> +	CPY_MC(9998f, ldp	D_l, D_h, [src])
>> +	and	tmp1, dstin, 15
>> +	bic	dst, dstin, 15
>> +	sub	src, src, tmp1
>> +	add	count, count, tmp1	/* Count is now 16 too large.  */
>> +	CPY_MC(9998f, ldp	A_l, A_h, [src, 16])
>> +	CPY_MC(9998f, stp	D_l, D_h, [dstin])
>> +	CPY_MC(9998f, ldp	B_l, B_h, [src, 32])
>> +	CPY_MC(9998f, ldp	C_l, C_h, [src, 48])
>> +	CPY_MC(9998f, ldp	D_l, D_h, [src, 64]!)
>> +	subs	count, count, 128 + 16	/* Test and readjust count.  */
>> +	b.ls	L(copy64_from_end)
>> +
>> +L(loop64):
>> +	CPY_MC(9998f, stp	A_l, A_h, [dst, 16])
>> +	CPY_MC(9998f, ldp	A_l, A_h, [src, 16])
>> +	CPY_MC(9998f, stp	B_l, B_h, [dst, 32])
>> +	CPY_MC(9998f, ldp	B_l, B_h, [src, 32])
>> +	CPY_MC(9998f, stp	C_l, C_h, [dst, 48])
>> +	CPY_MC(9998f, ldp	C_l, C_h, [src, 48])
>> +	CPY_MC(9998f, stp	D_l, D_h, [dst, 64]!)
>> +	CPY_MC(9998f, ldp	D_l, D_h, [src, 64]!)
>> +	subs	count, count, 64
>> +	b.hi	L(loop64)
>> +
>> +	/* Write the last iteration and copy 64 bytes from the end.  */
>> +L(copy64_from_end):
>> +	CPY_MC(9998f, ldp	E_l, E_h, [srcend, -64])
>> +	CPY_MC(9998f, stp	A_l, A_h, [dst, 16])
>> +	CPY_MC(9998f, ldp	A_l, A_h, [srcend, -48])
>> +	CPY_MC(9998f, stp	B_l, B_h, [dst, 32])
>> +	CPY_MC(9998f, ldp	B_l, B_h, [srcend, -32])
>> +	CPY_MC(9998f, stp	C_l, C_h, [dst, 48])
>> +	CPY_MC(9998f, ldp	C_l, C_h, [srcend, -16])
>> +	CPY_MC(9998f, stp	D_l, D_h, [dst, 64])
>> +	CPY_MC(9998f, stp	E_l, E_h, [dstend, -64])
>> +	CPY_MC(9998f, stp	A_l, A_h, [dstend, -48])
>> +	CPY_MC(9998f, stp	B_l, B_h, [dstend, -32])
>> +	CPY_MC(9998f, stp	C_l, C_h, [dstend, -16])
>> +	mov x0, #0
>> +	ret
>> +
>> +	.p2align 4
>> +
>> +	/* Large backwards copy for overlapping copies.
>> +	   Copy 16 bytes and then align dst to 16-byte alignment.  */
>> +L(copy_long_backwards):
>> +	CPY_MC(9998f, ldp	D_l, D_h, [srcend, -16])
>> +	and	tmp1, dstend, 15
>> +	sub	srcend, srcend, tmp1
>> +	sub	count, count, tmp1
>> +	CPY_MC(9998f, ldp	A_l, A_h, [srcend, -16])
>> +	CPY_MC(9998f, stp	D_l, D_h, [dstend, -16])
>> +	CPY_MC(9998f, ldp	B_l, B_h, [srcend, -32])
>> +	CPY_MC(9998f, ldp	C_l, C_h, [srcend, -48])
>> +	CPY_MC(9998f, ldp	D_l, D_h, [srcend, -64]!)
>> +	sub	dstend, dstend, tmp1
>> +	subs	count, count, 128
>> +	b.ls	L(copy64_from_start)
>> +
>> +L(loop64_backwards):
>> +	CPY_MC(9998f, stp	A_l, A_h, [dstend, -16])
>> +	CPY_MC(9998f, ldp	A_l, A_h, [srcend, -16])
>> +	CPY_MC(9998f, stp	B_l, B_h, [dstend, -32])
>> +	CPY_MC(9998f, ldp	B_l, B_h, [srcend, -32])
>> +	CPY_MC(9998f, stp	C_l, C_h, [dstend, -48])
>> +	CPY_MC(9998f, ldp	C_l, C_h, [srcend, -48])
>> +	CPY_MC(9998f, stp	D_l, D_h, [dstend, -64]!)
>> +	CPY_MC(9998f, ldp	D_l, D_h, [srcend, -64]!)
>> +	subs	count, count, 64
>> +	b.hi	L(loop64_backwards)
>> +
>> +	/* Write the last iteration and copy 64 bytes from the start.  */
>> +L(copy64_from_start):
>> +	CPY_MC(9998f, ldp	G_l, G_h, [src, 48])
>> +	CPY_MC(9998f, stp	A_l, A_h, [dstend, -16])
>> +	CPY_MC(9998f, ldp	A_l, A_h, [src, 32])
>> +	CPY_MC(9998f, stp	B_l, B_h, [dstend, -32])
>> +	CPY_MC(9998f, ldp	B_l, B_h, [src, 16])
>> +	CPY_MC(9998f, stp	C_l, C_h, [dstend, -48])
>> +	CPY_MC(9998f, ldp	C_l, C_h, [src])
>> +	CPY_MC(9998f, stp	D_l, D_h, [dstend, -64])
>> +	CPY_MC(9998f, stp	G_l, G_h, [dstin, 48])
>> +	CPY_MC(9998f, stp	A_l, A_h, [dstin, 32])
>> +	CPY_MC(9998f, stp	B_l, B_h, [dstin, 16])
>> +	CPY_MC(9998f, stp	C_l, C_h, [dstin])
>> +	mov x0, #0
>> +	ret
>> +
>> +9998:	mov x0, #-EFAULT
>> +	ret
>> +SYM_FUNC_END(__pi_memcpy_mcs)
>> +
>> +SYM_FUNC_ALIAS(__memcpy_mcs, __pi_memcpy_mcs)
>> +EXPORT_SYMBOL(__memcpy_mcs)
>> +SYM_FUNC_ALIAS_WEAK(memcpy_mcs, __memcpy_mcs)
>> +EXPORT_SYMBOL(memcpy_mcs)
>> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
>> index 9ef84f31833f..e6519fd329b2 100644
>> --- a/mm/kasan/shadow.c
>> +++ b/mm/kasan/shadow.c
>> @@ -79,6 +79,18 @@ void *memcpy(void *dest, const void *src, size_t len)
>>   }
>>   #endif
>>   
>> +#ifdef __HAVE_ARCH_MEMCPY_MC
>> +#undef memcpy_mcs
>> +int memcpy_mcs(void *dest, const void *src, size_t len)
>> +{
>> +	if (!check_memory_region((unsigned long)src, len, false, _RET_IP_) ||
>> +	    !check_memory_region((unsigned long)dest, len, true, _RET_IP_))
>> +		return (unsigned long)len;
>> +
>> +	return __memcpy_mcs(dest, src, len);
>> +}
>> +#endif
>> +
>>   void *__asan_memset(void *addr, int c, ssize_t len)
>>   {
>>   	if (!kasan_check_range(addr, len, true, _RET_IP_))
>> -- 
>> 2.25.1
>>
> .



end of thread, other threads:[~2024-01-30 13:56 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
2024-01-29 13:46 [PATCH v10 0/6]arm64: add machine check safe support Tong Tiangen
2024-01-29 13:46 ` [PATCH v10 1/6] uaccess: add generic fallback version of copy_mc_to_user() Tong Tiangen
2024-01-29 13:46 ` [PATCH v10 2/6] arm64: add support for machine check error safe Tong Tiangen
2024-01-29 17:51   ` Mark Rutland
2024-01-30 10:57     ` Tong Tiangen
2024-01-30 13:07       ` Mark Rutland
2024-01-30 13:22         ` Tong Tiangen
2024-01-29 13:46 ` [PATCH v10 3/6] arm64: add uaccess to machine check safe Tong Tiangen
2024-01-29 17:43   ` Mark Rutland
2024-01-30 11:14     ` Tong Tiangen
2024-01-30 12:01       ` Mark Rutland
2024-01-30 13:41         ` Tong Tiangen
2024-01-29 13:46 ` [PATCH v10 4/6] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage() Tong Tiangen
2024-01-29 13:46 ` [PATCH v10 5/6] arm64: support copy_mc_[user]_highpage() Tong Tiangen
2024-01-29 20:45   ` Andrey Konovalov
2024-01-30 10:31   ` Mark Rutland
2024-01-30 13:50     ` Tong Tiangen
2024-01-29 13:46 ` [PATCH v10 6/6] arm64: introduce copy_mc_to_kernel() implementation Tong Tiangen
2024-01-30 10:20   ` Mark Rutland
2024-01-30 13:56     ` Tong Tiangen
