From: Tong Tiangen <tongtiangen@huawei.com>
To: Jonathan Cameron
CC: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton, James Morse,
	Robin Murphy, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
	Michael Ellerman, Nicholas Piggin, Andrey Ryabinin, Alexander Potapenko,
	Christophe Leroy, Aneesh Kumar K.V, "Naveen N. Rao", Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin", Guohanjun
Subject: Re: [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC
Date: Tue, 20 Aug 2024 10:43:08 +0800
Message-ID: <1e5036b9-9e3f-e68d-ef09-6fa693a9c42c@huawei.com>
In-Reply-To: <20240819113032.000042af@Huawei.com>
References: <20240528085915.1955987-1-tongtiangen@huawei.com>
 <20240528085915.1955987-3-tongtiangen@huawei.com>
 <20240819113032.000042af@Huawei.com>

On 2024/8/19 18:30, Jonathan Cameron wrote:
> On Tue, 28 May 2024 16:59:11 +0800
> Tong Tiangen wrote:
>
>> For the arm64 kernel, when it processes hardware memory errors for
>> synchronous notifications (do_sea()), if the error is consumed within the
>> kernel, the current handling is to panic. However, that is not optimal.
>>
>> Take copy_from/to_user for example: if an ld* instruction triggers a
>> memory error, even in kernel mode, only the associated process is
>> affected. Killing the user process and isolating the corrupt page is a
>> better choice.
>>
>> A new fixup type, EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, is added to identify
>> instructions that can recover from memory errors triggered by access to
>> kernel memory.
>>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>
> Hi - this is going slow :(
>
> A few comments inline in the meantime, but this really needs the arm64
> maintainers to take a (hopefully final) look.
>
> Jonathan
>
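For anyone skimming the thread: the hunks quoted below only add the
KERNEL_ME_SAFE() annotations and the fault.c plumbing. The handler that
actually consumes the new EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE entries,
fixup_exception_me(), is not quoted here. Roughly, it would look like the
sketch below: a simplified guess modelled on the existing
EX_TYPE_KACCESS_ERR_ZERO handling, assuming it sits in
arch/arm64/mm/extable.c next to the existing static handlers. This is not
the exact code from the series.

/*
 * Sketch only: resolve the faulting PC against the exception table and
 * accept only entries tagged as "memory error safe", so a hardware
 * memory error on an annotated kernel access branches to its local
 * fixup label instead of panicking the machine.
 */
#include <linux/extable.h>	/* search_exception_tables() */
#include <asm/asm-extable.h>	/* EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE */
#include <asm/ptrace.h>

bool fixup_exception_me(struct pt_regs *regs)
{
	const struct exception_table_entry *ex;

	ex = search_exception_tables(instruction_pointer(regs));
	if (!ex)
		return false;

	switch (ex->type) {
	case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
		/* same err/zero register fixup as EX_TYPE_KACCESS_ERR_ZERO */
		return ex_handler_uaccess_err_zero(ex, regs);
	}

	return false;
}

On success, regs->pc ends up at the 9998 label used by the copy routines,
so __arch_copy_to_user() reports "bytes not copied" just as it would for an
ordinary fault.
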
>> diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
>> index 980d1dd8e1a3..9c0664fe1eb1 100644
>> --- a/arch/arm64/include/asm/asm-extable.h
>> +++ b/arch/arm64/include/asm/asm-extable.h
>> @@ -5,11 +5,13 @@
>>  #include
>>  #include
>>
>> -#define EX_TYPE_NONE			0
>> -#define EX_TYPE_BPF			1
>> -#define EX_TYPE_UACCESS_ERR_ZERO	2
>> -#define EX_TYPE_KACCESS_ERR_ZERO	3
>> -#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
>> +#define EX_TYPE_NONE				0
>> +#define EX_TYPE_BPF				1
>> +#define EX_TYPE_UACCESS_ERR_ZERO		2
>> +#define EX_TYPE_KACCESS_ERR_ZERO		3
>> +#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD		4
>> +/* kernel access memory error safe */
>> +#define EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE	5
>
> Does anyone care enough about the alignment to bother realigning for one
> long line? I'd be tempted not to bother, but up to the maintainers.
>
>
>> diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
>> index 802231772608..2ac716c0d6d8 100644
>> --- a/arch/arm64/lib/copy_to_user.S
>> +++ b/arch/arm64/lib/copy_to_user.S
>> @@ -20,7 +20,7 @@
>>   * x0 - bytes not copied
>>   */
>>  	.macro ldrb1 reg, ptr, val
>> -	ldrb \reg, [\ptr], \val
>> +	KERNEL_ME_SAFE(9998f, ldrb \reg, [\ptr], \val)
>>  	.endm
>>
>>  	.macro strb1 reg, ptr, val
>> @@ -28,7 +28,7 @@
>>  	.endm
>>
>>  	.macro ldrh1 reg, ptr, val
>> -	ldrh \reg, [\ptr], \val
>> +	KERNEL_ME_SAFE(9998f, ldrh \reg, [\ptr], \val)
>>  	.endm
>>
>>  	.macro strh1 reg, ptr, val
>> @@ -36,7 +36,7 @@
>>  	.endm
>>
>>  	.macro ldr1 reg, ptr, val
>> -	ldr \reg, [\ptr], \val
>> +	KERNEL_ME_SAFE(9998f, ldr \reg, [\ptr], \val)
>>  	.endm
>>
>>  	.macro str1 reg, ptr, val
>> @@ -44,7 +44,7 @@
>>  	.endm
>>
>>  	.macro ldp1 reg1, reg2, ptr, val
>> -	ldp \reg1, \reg2, [\ptr], \val
>> +	KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr], \val)
>>  	.endm
>>
>>  	.macro stp1 reg1, reg2, ptr, val
>> @@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
>>  9997:	cmp	dst, dstin
>>  	b.ne	9998f
>>  	// Before being absolutely sure we couldn't copy anything, try harder
>> -	ldrb	tmp1w, [srcin]
>> +KERNEL_ME_SAFE(9998f, ldrb tmp1w, [srcin])
>
> Alignment looks off?

Hi, Jonathan:

How about we change this in conjunction with Mark's suggestion? :)

>
>>  USER(9998f, sttrb tmp1w, [dst])
>>  	add	dst, dst, #1
>>  9998:	sub	x0, end, dst			// bytes not copied
>
>
>> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
>> index 451ba7cbd5ad..2dc65f99d389 100644
>> --- a/arch/arm64/mm/fault.c
>> +++ b/arch/arm64/mm/fault.c
>> @@ -708,21 +708,32 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
>>  	return 1; /* "fault" */
>>  }
>>
>> +/*
>> + * APEI claimed this as a firmware-first notification.
>> + * Some processing deferred to task_work before ret_to_user().
>> + */
>> +static bool do_apei_claim_sea(struct pt_regs *regs)
>> +{
>> +	if (user_mode(regs)) {
>> +		if (!apei_claim_sea(regs))
>
> I'd keep to the (apei_claim_sea(regs) == 0) used in the original code.
> That hints to the reader that we are interested here in an 'error' code
> rather than apei_claim_sea() returning a bool. I initially wondered why
> we return true when the code fails to claim it.
>
> Also, perhaps if you return 0 for success and an error code if not,
> you could just make this
>
> 	if (user_mode(regs))
> 		return apei_claim_sea(regs);
>
> 	if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
> 		if (fixup_exception_me(regs)) {
> 			return apei_claim_sea(regs);
> 		}
> 	}
>
> 	return false;
>
> or maybe even (I may have messed this up, but I think this logic
> works).
>
> 	if (!user_mode(regs) && IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
> 		if (!fixup_exception_me(regs))
> 			return false;
> 	}
> 	return apei_claim_sea(regs);
>
>
>> +			return true;
>> +	} else if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
>> +		if (fixup_exception_me(regs) && !apei_claim_sea(regs))
>
> Same here with using apei_claim_sea(regs) == 0, so it's obvious we
> are checking for an error, not a boolean.
>
>> +			return true;
>> +	}
>> +
>> +	return false;
>> +}
>> +
>>  static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
>>  {
>>  	const struct fault_info *inf;
>>  	unsigned long siaddr;
>>
>> -	inf = esr_to_fault_info(esr);
>> -
>> -	if (user_mode(regs) && apei_claim_sea(regs) == 0) {
>> -		/*
>> -		 * APEI claimed this as a firmware-first notification.
>> -		 * Some processing deferred to task_work before ret_to_user().
>> -		 */
>> +	if (do_apei_claim_sea(regs))
>
> It might have made sense to factor this out first, so that step could be
> reviewed as a no-op before the new behaviour is added. Still, it's not
> much code, so it doesn't really matter.
>
> Might be worth keeping to returning 0 for success and an error code
> otherwise, as per apei_claim_sea(regs).
>
> The bool-returning functions in the nearby code tend to be is_xxxx,
> not things that succeed or not.
>
> If you change it to return int, make this
> 	if (do_apei_claim_sea(regs) == 0)
> so it's obvious this is the no-error case.

My fault: treating the return value of apei_claim_sea() as a bool has
caused some confusion. Using "== 0" should reduce that confusion. Here's
the change:

static int do_apei_claim_sea(struct pt_regs *regs)
{
	if (!user_mode(regs) && IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
		if (!fixup_exception_me(regs))
			return -ENOENT;
	}

	return apei_claim_sea(regs);
}

static int do_sea(...)
{
	[...]
	if (do_apei_claim_sea(regs) == 0)
		return 0;
	[...]
}

I'll modify it later together with Mark's comments.

Thanks,
Tong

>>  		return 0;
>> -	}
>>
>> +	inf = esr_to_fault_info(esr);
>>  	if (esr & ESR_ELx_FnV) {
>>  		siaddr = 0;
>>  	} else {
>
> .
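P.S. The KERNEL_ME_SAFE() annotation used in the copy_to_user.S hunks above
is not defined anywhere in the quoted context. For readers without the full
series at hand, it would presumably parallel the existing USER() annotation,
just emitting the new extable type. The following is only a guess at its
shape, not the actual definition from the patch:

/* Emit an extable entry tagged with the new "memory error safe" type. */
	.macro	_asm_extable_kaccess_err_zero_me_safe, insn, fixup
	__ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, 0)
	.endm

/*
 * Wrap a kernel-memory access so that a hardware memory error taken on it
 * branches to the fixup label (in the copy routines, the 9998 "bytes not
 * copied" path) instead of being fatal.
 */
#define KERNEL_ME_SAFE(l, x...)				\
9999:	x;						\
	_asm_extable_kaccess_err_zero_me_safe	9999b, l

Usage is then exactly what the hunks show, e.g.
KERNEL_ME_SAFE(9998f, ldrb tmp1w, [srcin]).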