From: Jonathan Cameron <jonathan.cameron@huawei.com>
To: Tong Tiangen
CC: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton, James Morse,
 Robin Murphy, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
 Michael Ellerman, Nicholas Piggin, Andrey Ryabinin, Alexander Potapenko,
 Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin, Guohanjun
Subject: Re: [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC
Date: Mon, 19 Aug 2024 11:30:32 +0100
Message-ID: <20240819113032.000042af@Huawei.com>
In-Reply-To: <20240528085915.1955987-3-tongtiangen@huawei.com>
References: <20240528085915.1955987-1-tongtiangen@huawei.com>
 <20240528085915.1955987-3-tongtiangen@huawei.com>
Organization: Huawei Technologies Research and Development (UK) Ltd.
On Tue, 28 May 2024 16:59:11 +0800
Tong Tiangen wrote:

> For the arm64 kernel, when it processes hardware memory errors for
> synchronous notifications (do_sea()), if the error is consumed within
> the kernel, the current handling is to panic.  However, that is not
> optimal.
>
> Take copy_from/to_user for example: if ld* triggers a memory error,
> even in kernel mode, only the associated process is affected.  Killing
> the user process and isolating the corrupt page is a better choice.
>
> New fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE is added to identify
> insns that can recover from memory errors triggered by access to
> kernel memory.
>
> Signed-off-by: Tong Tiangen

Hi - this is going slow :(

A few comments inline in the meantime, but this really needs the ARM
maintainers to take a (hopefully final) look.
Jonathan

> diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
> index 980d1dd8e1a3..9c0664fe1eb1 100644
> --- a/arch/arm64/include/asm/asm-extable.h
> +++ b/arch/arm64/include/asm/asm-extable.h
> @@ -5,11 +5,13 @@
>  #include
>  #include
>
> -#define EX_TYPE_NONE			0
> -#define EX_TYPE_BPF			1
> -#define EX_TYPE_UACCESS_ERR_ZERO	2
> -#define EX_TYPE_KACCESS_ERR_ZERO	3
> -#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
> +#define EX_TYPE_NONE				0
> +#define EX_TYPE_BPF				1
> +#define EX_TYPE_UACCESS_ERR_ZERO		2
> +#define EX_TYPE_KACCESS_ERR_ZERO		3
> +#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD		4
> +/* kernel access memory error safe */
> +#define EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE	5

Does anyone care enough about the alignment to bother realigning for
one long line?  I'd be tempted not to bother, but up to the maintainers.

> diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
> index 802231772608..2ac716c0d6d8 100644
> --- a/arch/arm64/lib/copy_to_user.S
> +++ b/arch/arm64/lib/copy_to_user.S
> @@ -20,7 +20,7 @@
>   * x0 - bytes not copied
>   */
>  .macro ldrb1 reg, ptr, val
> -	ldrb \reg, [\ptr], \val
> +	KERNEL_ME_SAFE(9998f, ldrb \reg, [\ptr], \val)
>  .endm
>
>  .macro strb1 reg, ptr, val
> @@ -28,7 +28,7 @@
>  .endm
>
>  .macro ldrh1 reg, ptr, val
> -	ldrh \reg, [\ptr], \val
> +	KERNEL_ME_SAFE(9998f, ldrh \reg, [\ptr], \val)
>  .endm
>
>  .macro strh1 reg, ptr, val
> @@ -36,7 +36,7 @@
>  .endm
>
>  .macro ldr1 reg, ptr, val
> -	ldr \reg, [\ptr], \val
> +	KERNEL_ME_SAFE(9998f, ldr \reg, [\ptr], \val)
>  .endm
>
>  .macro str1 reg, ptr, val
> @@ -44,7 +44,7 @@
>  .endm
>
>  .macro ldp1 reg1, reg2, ptr, val
> -	ldp \reg1, \reg2, [\ptr], \val
> +	KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr], \val)
>  .endm
>
>  .macro stp1 reg1, reg2, ptr, val
> @@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
>  9997:	cmp	dst, dstin
>  	b.ne	9998f
>  	// Before being absolutely sure we couldn't copy anything, try harder
> -	ldrb	tmp1w, [srcin]
> +KERNEL_ME_SAFE(9998f, ldrb tmp1w, [srcin])

Alignment looks off?

>  USER(9998f, sttrb tmp1w, [dst])
>  	add	dst, dst, #1
>  9998:	sub	x0, end, dst		// bytes not copied
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 451ba7cbd5ad..2dc65f99d389 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -708,21 +708,32 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
>  	return 1; /* "fault" */
>  }
>
> +/*
> + * APEI claimed this as a firmware-first notification.
> + * Some processing deferred to task_work before ret_to_user().
> + */
> +static bool do_apei_claim_sea(struct pt_regs *regs)
> +{
> +	if (user_mode(regs)) {
> +		if (!apei_claim_sea(regs))

I'd keep to the (apei_claim_sea(regs) == 0) used in the original code.
That hints to the reader that we are interested here in an 'error' code
rather than apei_claim_sea() returning a bool.  I initially wondered
why we return true when the code fails to claim it.

Also, perhaps if you return 0 for success and an error code if not, you
could just make this

	if (user_mode(regs))
		return apei_claim_sea(regs);

	if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
		if (fixup_exception_me(regs))
			return apei_claim_sea(regs);
	}

	return false;

or maybe even (I may have messed this up, but I think this logic works):

	if (!user_mode(regs) && IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
		if (!fixup_exception_me(regs))
			return false;
	}

	return apei_claim_sea(regs);

> +		} else if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
> +		if (fixup_exception_me(regs) && !apei_claim_sea(regs))

Same here with using apei_claim_sea(regs) == 0, so it's obvious we are
checking for an error, not a boolean.
> +			return true;
> +	}
> +
> +	return false;
> +}
> +
>  static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
>  {
>  	const struct fault_info *inf;
>  	unsigned long siaddr;
>
> -	inf = esr_to_fault_info(esr);
> -
> -	if (user_mode(regs) && apei_claim_sea(regs) == 0) {
> -		/*
> -		 * APEI claimed this as a firmware-first notification.
> -		 * Some processing deferred to task_work before ret_to_user().
> -		 */
> +	if (do_apei_claim_sea(regs))

It might have made sense to factor this out first, so it could be
reviewed as a noop before the new stuff was added.  Still, it's not
much code, so it doesn't really matter.

Might be worth keeping to returning 0 for success, error code otherwise,
as per apei_claim_sea().  The bool-returning functions in the nearby
code tend to be is_xxxx, not things that succeed or fail.  If you
change it to return int, make this

	if (do_apei_claim_sea(regs) == 0)

so it's obvious this is the no-error case.

>  		return 0;
> -	}
>
> +	inf = esr_to_fault_info(esr);
>  	if (esr & ESR_ELx_FnV) {
>  		siaddr = 0;
>  	} else {