From: Mark Rutland
To: Tong Tiangen
Cc: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, x86@kernel.org, "H. Peter Anvin", linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH -next V2 5/7] arm64: add get_user to machine check safe
Date: Wed, 6 Apr 2022 12:22:43 +0100
References: <20220406091311.3354723-1-tongtiangen@huawei.com> <20220406091311.3354723-6-tongtiangen@huawei.com>
In-Reply-To: <20220406091311.3354723-6-tongtiangen@huawei.com>

On Wed, Apr 06, 2022 at 09:13:09AM +0000, Tong Tiangen wrote:
> Add get_user() to the machine check safe scenarios. The handling of
> EX_TYPE_UACCESS_ERR_ZERO and EX_TYPE_UACCESS_ERR_ZERO_MC is the same:
> both return -EFAULT.

Which uaccess cases do we expect to *not* be recoverable?

Naively, I would assume that if we're going to treat a memory error on a
uaccess as fatal to userspace, we should be able to do that for *any*
uaccess.

The commit message should explain why we need the distinction between a
recoverable uaccess and a non-recoverable uaccess.

Thanks,
Mark.
> 
> Signed-off-by: Tong Tiangen
> ---
>  arch/arm64/include/asm/asm-extable.h | 14 +++++++++++++-
>  arch/arm64/include/asm/uaccess.h     |  2 +-
>  arch/arm64/mm/extable.c              |  1 +
>  3 files changed, 15 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
> index 74d1db74fd86..bfc2d224cbae 100644
> --- a/arch/arm64/include/asm/asm-extable.h
> +++ b/arch/arm64/include/asm/asm-extable.h
> @@ -10,8 +10,11 @@
>  
>  /* _MC indicates that can fixup from machine check errors */
>  #define EX_TYPE_FIXUP_MC		5
> +#define EX_TYPE_UACCESS_ERR_ZERO_MC	6
>  
> -#define IS_EX_TYPE_MC(type)	(type == EX_TYPE_FIXUP_MC)
> +#define IS_EX_TYPE_MC(type)				\
> +	(type == EX_TYPE_FIXUP_MC ||			\
> +	 type == EX_TYPE_UACCESS_ERR_ZERO_MC)
>  
>  #ifdef __ASSEMBLY__
>  
> @@ -77,6 +80,15 @@
>  #define EX_DATA_REG(reg, gpr)						\
>  	"((.L__gpr_num_" #gpr ") << " __stringify(EX_DATA_REG_##reg##_SHIFT) ")"
>  
> +#define _ASM_EXTABLE_UACCESS_ERR_ZERO_MC(insn, fixup, err, zero)	\
> +	__DEFINE_ASM_GPR_NUMS						\
> +	__ASM_EXTABLE_RAW(#insn, #fixup,				\
> +			  __stringify(EX_TYPE_UACCESS_ERR_ZERO_MC),	\
> +			  "("						\
> +			    EX_DATA_REG(ERR, err) " | "			\
> +			    EX_DATA_REG(ZERO, zero)			\
> +			  ")")
> +
>  #define _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, zero)		\
>  	__DEFINE_ASM_GPR_NUMS						\
>  	__ASM_EXTABLE_RAW(#insn, #fixup,				\
> diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
> index e8dce0cc5eaa..24b662407fbd 100644
> --- a/arch/arm64/include/asm/uaccess.h
> +++ b/arch/arm64/include/asm/uaccess.h
> @@ -236,7 +236,7 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
>  	asm volatile(							\
>  	"1:	" load "	" reg "1, [%2]\n"			\
>  	"2:\n"								\
> -	_ASM_EXTABLE_UACCESS_ERR_ZERO(1b, 2b, %w0, %w1)			\
> +	_ASM_EXTABLE_UACCESS_ERR_ZERO_MC(1b, 2b, %w0, %w1)		\
>  	: "+r" (err), "=&r" (x)						\
>  	: "r" (addr))
>  
> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
> index f1134c88e849..7c05f8d2bce0 100644
> --- a/arch/arm64/mm/extable.c
> +++ b/arch/arm64/mm/extable.c
> @@ -95,6 +95,7 @@ bool fixup_exception(struct pt_regs *regs, unsigned int esr)
>  	case EX_TYPE_BPF:
>  		return ex_handler_bpf(ex, regs);
>  	case EX_TYPE_UACCESS_ERR_ZERO:
> +	case EX_TYPE_UACCESS_ERR_ZERO_MC:
>  		return ex_handler_uaccess_err_zero(ex, regs);
>  	case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
>  		return ex_handler_load_unaligned_zeropad(ex, regs);
> -- 
> 2.18.0.huawei.25
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
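[Editor's note: the dispatch being discussed above can be modeled in plain, user-space C. This is a minimal sketch, not kernel code: the enum values mirror the quoted asm-extable.h hunks, but the helper names (handle_uaccess_err_zero, the bool is_mc flag standing in for the ESR-based machine-check classification) are assumptions for illustration. It shows why both extable types share one handler (-EFAULT either way), while IS_EX_TYPE_MC() only gates whether a fixup is attempted at all when the fault is a memory error.]

```c
#include <assert.h>
#include <stdbool.h>

#define EFAULT 14  /* illustrative; the kernel takes this from errno-base.h */

enum ex_type {
	EX_TYPE_UACCESS_ERR_ZERO    = 3,  /* pre-existing type in asm-extable.h */
	EX_TYPE_FIXUP_MC            = 5,  /* added earlier in this series */
	EX_TYPE_UACCESS_ERR_ZERO_MC = 6,  /* added by this patch */
};

/* Mirrors IS_EX_TYPE_MC() from the quoted hunk: only _MC entries may be
 * fixed up when the fault is a machine check (memory error). */
static bool is_ex_type_mc(enum ex_type type)
{
	return type == EX_TYPE_FIXUP_MC ||
	       type == EX_TYPE_UACCESS_ERR_ZERO_MC;
}

/* Stand-in for ex_handler_uaccess_err_zero(): report -EFAULT in the
 * err register and zero the destination register. */
static int handle_uaccess_err_zero(int *err, long *dst)
{
	*err = -EFAULT;
	*dst = 0;
	return 1;  /* fixup applied */
}

/* Simplified fixup_exception(): is_mc stands in for "this fault was a
 * machine check".  Both uaccess types fall through to the same handler,
 * which is the behaviour the commit message describes. */
static int fixup_exception(enum ex_type type, bool is_mc, int *err, long *dst)
{
	if (is_mc && !is_ex_type_mc(type))
		return 0;  /* memory error, but entry is not MC-safe: no fixup */

	switch (type) {
	case EX_TYPE_UACCESS_ERR_ZERO:
	case EX_TYPE_UACCESS_ERR_ZERO_MC:
		return handle_uaccess_err_zero(err, dst);
	default:
		return 0;
	}
}
```

Under this model, Mark's question becomes concrete: after the patch, get_user() emits an _MC entry, so a machine check on it is fixed up, while any uaccess still using the plain EX_TYPE_UACCESS_ERR_ZERO entry takes the no-fixup path on a memory error, and the commit message does not say which uaccesses those are or why.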