Date: Mon, 19 Aug 2024 18:29:21 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Tong Tiangen
Cc: Catalin Marinas, Will Deacon, Andrew Morton, James Morse, Robin Murphy,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Michael Ellerman,
	Nicholas Piggin, Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
	"Aneesh Kumar K.V", "Naveen N. Rao", Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin",
	linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
	linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org,
	wangkefeng.wang@huawei.com, Guohanjun
Subject: Re: [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC
References: <20240528085915.1955987-1-tongtiangen@huawei.com>
	<20240528085915.1955987-3-tongtiangen@huawei.com>
In-Reply-To: <20240528085915.1955987-3-tongtiangen@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Hi Tong,

On Tue, May 28, 2024 at 04:59:11PM +0800, Tong Tiangen wrote:
> For the arm64 kernel, when it processes hardware memory errors for
> synchronize notifications(do_sea()), if the errors is consumed within the
> kernel, the current processing is panic. However, it is not optimal.
>
> Take copy_from/to_user for example, If ld* triggers a memory error, even in
> kernel mode, only the associated process is affected. Killing the user
> process and isolating the corrupt page is a better choice.
>
> New fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE is added to identify insn
> that can recover from memory errors triggered by access to kernel memory.
>
> Signed-off-by: Tong Tiangen

Generally this looks ok, but I have a couple of comments below.
> ---
>  arch/arm64/Kconfig                   |  1 +
>  arch/arm64/include/asm/asm-extable.h | 31 +++++++++++++++++++++++-----
>  arch/arm64/include/asm/asm-uaccess.h |  4 ++++
>  arch/arm64/include/asm/extable.h     |  1 +
>  arch/arm64/lib/copy_to_user.S        | 10 ++++-----
>  arch/arm64/mm/extable.c              | 19 +++++++++++++++++
>  arch/arm64/mm/fault.c                | 27 +++++++++++++++++-------
>  7 files changed, 75 insertions(+), 18 deletions(-)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 5d91259ee7b5..13ca06ddf3dd 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -20,6 +20,7 @@ config ARM64
>  	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
>  	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
>  	select ARCH_HAS_CACHE_LINE_SIZE
> +	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
>  	select ARCH_HAS_CURRENT_STACK_POINTER
>  	select ARCH_HAS_DEBUG_VIRTUAL
>  	select ARCH_HAS_DEBUG_VM_PGTABLE
> diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
> index 980d1dd8e1a3..9c0664fe1eb1 100644
> --- a/arch/arm64/include/asm/asm-extable.h
> +++ b/arch/arm64/include/asm/asm-extable.h
> @@ -5,11 +5,13 @@
>  #include
>  #include
>
> -#define EX_TYPE_NONE			0
> -#define EX_TYPE_BPF			1
> -#define EX_TYPE_UACCESS_ERR_ZERO	2
> -#define EX_TYPE_KACCESS_ERR_ZERO	3
> -#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
> +#define EX_TYPE_NONE				0
> +#define EX_TYPE_BPF				1
> +#define EX_TYPE_UACCESS_ERR_ZERO		2
> +#define EX_TYPE_KACCESS_ERR_ZERO		3
> +#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD		4
> +/* kernel access memory error safe */
> +#define EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE	5

Could we please use 'MEM_ERR', and likewise for the macros below? That's
more obvious than 'ME_SAFE', and we wouldn't need the comment here.
Likewise elsewhere in this patch and the series.

To Jonathan's comment, I do prefer that these numbers be aligned, so
aside from the naming, the diff above looks good.
>
>  /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
>  #define EX_DATA_REG_ERR_SHIFT	0
> @@ -51,6 +53,17 @@
>  #define _ASM_EXTABLE_UACCESS(insn, fixup)			\
>  	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
>
> +#define _ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, err, zero)	\
> +	__ASM_EXTABLE_RAW(insn, fixup,					\
> +			  EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE,		\
> +			  (						\
> +			    EX_DATA_REG(ERR, err) |			\
> +			    EX_DATA_REG(ZERO, zero)			\
> +			  ))
> +
> +#define _ASM_EXTABLE_KACCESS_ME_SAFE(insn, fixup)			\
> +	_ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, wzr, wzr)
> +
>  /*
>   * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
>   * when an unhandled fault is taken.
> @@ -69,6 +82,14 @@
>  	.endif
>  	.endm
>
> +/*
> + * Create an exception table entry for kaccess me(memory error) safe `insn`, which
> + * will branch to `fixup` when an unhandled fault is taken.
> + */
> +	.macro		_asm_extable_kaccess_me_safe, insn, fixup
> +	_ASM_EXTABLE_KACCESS_ME_SAFE(\insn, \fixup)
> +	.endm
> +

With the naming above, I think this can be:

| /*
|  * Create an exception table entry for kaccess `insn`, which will branch to
|  * `fixup` when a memory error is taken
|  */
| 	.macro		_asm_extable_kaccess_mem_err, insn, fixup
| 	_ASM_EXTABLE_KACCESS_MEM_ERR(\insn, \fixup)
| 	.endm

>  #else /* __ASSEMBLY__ */
>
>  #include
> diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
> index 5b6efe8abeeb..7bbebfa5b710 100644
> --- a/arch/arm64/include/asm/asm-uaccess.h
> +++ b/arch/arm64/include/asm/asm-uaccess.h
> @@ -57,6 +57,10 @@ alternative_else_nop_endif
>  	.endm
>  #endif
>
> +#define KERNEL_ME_SAFE(l, x...)				\
> +9999:	x;						\
> +	_asm_extable_kaccess_me_safe	9999b, l
> +
>  #define USER(l, x...)					\
>  9999:	x;						\
>  	_asm_extable_uaccess	9999b, l
> diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
> index 72b0e71cc3de..bc49443bc502 100644
> --- a/arch/arm64/include/asm/extable.h
> +++ b/arch/arm64/include/asm/extable.h
> @@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
>  #endif /* !CONFIG_BPF_JIT */
>
>  bool fixup_exception(struct pt_regs *regs);
> +bool fixup_exception_me(struct pt_regs *regs);
>  #endif
> diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
> index 802231772608..2ac716c0d6d8 100644
> --- a/arch/arm64/lib/copy_to_user.S
> +++ b/arch/arm64/lib/copy_to_user.S
> @@ -20,7 +20,7 @@
>   *	x0 - bytes not copied
>   */
>  	.macro ldrb1 reg, ptr, val
> -	ldrb \reg, [\ptr], \val
> +	KERNEL_ME_SAFE(9998f, ldrb \reg, [\ptr], \val)
>  	.endm
>
>  	.macro strb1 reg, ptr, val
> @@ -28,7 +28,7 @@
>  	.endm
>
>  	.macro ldrh1 reg, ptr, val
> -	ldrh \reg, [\ptr], \val
> +	KERNEL_ME_SAFE(9998f, ldrh \reg, [\ptr], \val)
>  	.endm
>
>  	.macro strh1 reg, ptr, val
> @@ -36,7 +36,7 @@
>  	.endm
>
>  	.macro ldr1 reg, ptr, val
> -	ldr \reg, [\ptr], \val
> +	KERNEL_ME_SAFE(9998f, ldr \reg, [\ptr], \val)
>  	.endm
>
>  	.macro str1 reg, ptr, val
> @@ -44,7 +44,7 @@
>  	.endm
>
>  	.macro ldp1 reg1, reg2, ptr, val
> -	ldp \reg1, \reg2, [\ptr], \val
> +	KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr], \val)
>  	.endm
>
>  	.macro stp1 reg1, reg2, ptr, val

These changes mean that regular copy_to_user() will handle kernel
memory errors, rather than only doing that in copy_mc_to_user(). If
that's intentional, please call that out explicitly in the commit
message.

> @@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
>  9997:	cmp	dst, dstin
>  	b.ne	9998f
>  	// Before being absolutely sure we couldn't copy anything, try harder
> -	ldrb	tmp1w, [srcin]
> +KERNEL_ME_SAFE(9998f, ldrb	tmp1w, [srcin])
>  	USER(9998f, sttrb tmp1w, [dst])
>  	add	dst, dst, #1
>  9998:	sub	x0, end, dst			// bytes not copied

Same comment as above.
> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
> index 228d681a8715..8c690ae61944 100644
> --- a/arch/arm64/mm/extable.c
> +++ b/arch/arm64/mm/extable.c
> @@ -72,7 +72,26 @@ bool fixup_exception(struct pt_regs *regs)
>  		return ex_handler_uaccess_err_zero(ex, regs);
>  	case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
>  		return ex_handler_load_unaligned_zeropad(ex, regs);
> +	case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
> +		return false;
>  	}
>
>  	BUG();
>  }
> +
> +bool fixup_exception_me(struct pt_regs *regs)
> +{
> +	const struct exception_table_entry *ex;
> +
> +	ex = search_exception_tables(instruction_pointer(regs));
> +	if (!ex)
> +		return false;
> +
> +	switch (ex->type) {
> +	case EX_TYPE_UACCESS_ERR_ZERO:
> +	case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
> +		return ex_handler_uaccess_err_zero(ex, regs);
> +	}
> +
> +	return false;
> +}
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 451ba7cbd5ad..2dc65f99d389 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -708,21 +708,32 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
>  	return 1; /* "fault" */
>  }
>
> +/*
> + * APEI claimed this as a firmware-first notification.
> + * Some processing deferred to task_work before ret_to_user().
> + */
> +static bool do_apei_claim_sea(struct pt_regs *regs)
> +{
> +	if (user_mode(regs)) {
> +		if (!apei_claim_sea(regs))
> +			return true;
> +	} else if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
> +		if (fixup_exception_me(regs) && !apei_claim_sea(regs))
> +			return true;
> +	}
> +
> +	return false;
> +}

Hmm... that'll fix up the exception even if we don't manage to claim
the SEA. I suspect this should probably be:

	static bool do_apei_claim_sea(struct pt_regs *regs)
	{
		if (apei_claim_sea(regs))
			return false;
		if (user_mode(regs))
			return true;
		if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
			return !fixup_exception_mem_err(regs);
		return false;
	}

... unless we *don't* want to claim the SEA in the case we don't have a
fixup?

Mark.
> +
>  static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
>  {
>  	const struct fault_info *inf;
>  	unsigned long siaddr;
>
> -	inf = esr_to_fault_info(esr);
> -
> -	if (user_mode(regs) && apei_claim_sea(regs) == 0) {
> -		/*
> -		 * APEI claimed this as a firmware-first notification.
> -		 * Some processing deferred to task_work before ret_to_user().
> -		 */
> +	if (do_apei_claim_sea(regs))
>  		return 0;
> -	}
>
> +	inf = esr_to_fault_info(esr);
>  	if (esr & ESR_ELx_FnV) {
>  		siaddr = 0;
>  	} else {
> --
> 2.25.1
>