Date: Tue, 11 Apr 2023 17:45:18 +0100
From: Catalin Marinas
To: Tong Tiangen
Cc: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Will Deacon, Alexander Viro, x86@kernel.org, "H. Peter Anvin",
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Kefeng Wang, Guohanjun, Xie XiuQi
Subject: Re: [PATCH -next v8 4/4] arm64: add cow to machine check safe
References: <20221219120008.3818828-1-tongtiangen@huawei.com>
	<20221219120008.3818828-5-tongtiangen@huawei.com>
In-Reply-To: <20221219120008.3818828-5-tongtiangen@huawei.com>

On Mon, Dec 19, 2022 at 12:00:08PM +0000, Tong Tiangen wrote:
> At present, recovery from poison consumption in the copy-on-write path
> is already supported [1]; arm64 should support this mechanism too.
>
> Add a new helper, copy_mc_page(), which provides a machine-check-safe
> page copy implementation. For now it is only used in the CoW path, but
> it can be extended to more scenarios later: as long as the consequences
> of a page copy failure are not fatal (e.g. only a user process is
> affected), this helper can be used.
>
> copy_mc_page() in copy_mc_page.S is largely borrowed from copy_page()
> in copy_page.S; the main difference is that copy_mc_page() adds an
> extable entry to every load/store instruction to support machine check
> safe. This is done largely to keep the patch simple; if needed, those
> optimizations can be folded in later.
>
> Add a new extable type, EX_TYPE_COPY_MC_PAGE, used by copy_mc_page().
>
> [1] https://lore.kernel.org/lkml/20221031201029.102123-1-tony.luck@intel.com/
>
> Signed-off-by: Tong Tiangen

This series needs rebasing onto a newer kernel. Some random comments
below.
> diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
> new file mode 100644
> index 000000000000..03d657a182f6
> --- /dev/null
> +++ b/arch/arm64/lib/copy_mc_page.S
> @@ -0,0 +1,82 @@
[...]
> +SYM_FUNC_START(__pi_copy_mc_page)
> +alternative_if ARM64_HAS_NO_HW_PREFETCH
> +	// Prefetch three cache lines ahead.
> +	prfm	pldl1strm, [x1, #128]
> +	prfm	pldl1strm, [x1, #256]
> +	prfm	pldl1strm, [x1, #384]
> +alternative_else_nop_endif
> +
> +CPY_MC(9998f, ldp x2, x3, [x1])
> +CPY_MC(9998f, ldp x4, x5, [x1, #16])
> +CPY_MC(9998f, ldp x6, x7, [x1, #32])
> +CPY_MC(9998f, ldp x8, x9, [x1, #48])
> +CPY_MC(9998f, ldp x10, x11, [x1, #64])
> +CPY_MC(9998f, ldp x12, x13, [x1, #80])
> +CPY_MC(9998f, ldp x14, x15, [x1, #96])
> +CPY_MC(9998f, ldp x16, x17, [x1, #112])
[...]
[...]
> +9998:	ret

What I don't understand: is any error returned here, or the number of
bytes not copied? As far as I can see, the return value of this function
is never used in this series. Also, do we need to distinguish between a
fault on the source and a fault on the destination?

> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
> index 5018ac03b6bf..bf4dd861c41c 100644
> --- a/arch/arm64/lib/mte.S
> +++ b/arch/arm64/lib/mte.S
> @@ -80,6 +80,25 @@ SYM_FUNC_START(mte_copy_page_tags)
> 	ret
> SYM_FUNC_END(mte_copy_page_tags)
>
> +/*
> + * Copy the tags from the source page to the destination one with machine check safe
> + * x0 - address of the destination page
> + * x1 - address of the source page
> + */
> +SYM_FUNC_START(mte_copy_mc_page_tags)
> +	mov	x2, x0
> +	mov	x3, x1
> +	multitag_transfer_size x5, x6
> +1:
> +CPY_MC(2f, ldgm x4, [x3])
> +	stgm	x4, [x2]
> +	add	x2, x2, x5
> +	add	x3, x3, x5
> +	tst	x2, #(PAGE_SIZE - 1)
> +	b.ne	1b
> +2:	ret
> +SYM_FUNC_END(mte_copy_mc_page_tags)

While the data copy above handles errors on both the source and the
destination, here you skip the destination. Any reason?

> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
> index 8dd5a8fe64b4..005ee2a3cb4e 100644
> --- a/arch/arm64/mm/copypage.c
> +++ b/arch/arm64/mm/copypage.c
[...]
> +#ifdef CONFIG_ARCH_HAS_COPY_MC
> +void copy_mc_highpage(struct page *to, struct page *from)
> +{
> +	void *kto = page_address(to);
> +	void *kfrom = page_address(from);
> +
> +	copy_mc_page(kto, kfrom);
> +	do_mte(to, from, kto, kfrom, true);
> +}
> +EXPORT_SYMBOL(copy_mc_highpage);
> +
> +int copy_mc_user_highpage(struct page *to, struct page *from,
> +		unsigned long vaddr, struct vm_area_struct *vma)
> +{
> +	copy_mc_highpage(to, from);
> +	flush_dcache_page(to);
> +	return 0;
> +}

This one always returns 0. Does it actually catch any memory failures?

> +EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
> +#endif
> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
> index 28ec35e3d210..0fdab18f2f07 100644
> --- a/arch/arm64/mm/extable.c
> +++ b/arch/arm64/mm/extable.c
> @@ -16,6 +16,13 @@ get_ex_fixup(const struct exception_table_entry *ex)
> 	return ((unsigned long)&ex->fixup + ex->fixup);
> }
>
> +static bool ex_handler_fixup(const struct exception_table_entry *ex,
> +			     struct pt_regs *regs)
> +{
> +	regs->pc = get_ex_fixup(ex);
> +	return true;
> +}

Should we prepare some error here, like -EFAULT? That's done in
ex_handler_uaccess_err_zero().

-- 
Catalin
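
For reference, the existing ex_handler_uaccess_err_zero() in
arch/arm64/mm/extable.c already does what is being suggested above: it
writes -EFAULT into a register encoded in the extable data before branching
to the fixup. A machine-check variant of the page-copy fixup could follow
the same pattern, along the lines of the sketch below; the handler name and
the reuse of the EX_DATA_REG_ERR encoding for EX_TYPE_COPY_MC_PAGE entries
are assumptions, not something the patch as posted defines.

/*
 * Hypothetical sketch, modelled on ex_handler_uaccess_err_zero().
 * It assumes the EX_TYPE_COPY_MC_PAGE extable entries encode an error
 * register in ex->data the same way the uaccess entries do, which this
 * patch does not currently do.
 */
static bool ex_handler_copy_mc_page(const struct exception_table_entry *ex,
				    struct pt_regs *regs)
{
	int reg_err = FIELD_GET(EX_DATA_REG_ERR, ex->data);

	/* Tell the caller the copy failed rather than silently returning. */
	pt_regs_write_reg(regs, reg_err, -EFAULT);

	/* Resume at the fixup label (9998: in copy_mc_page.S). */
	regs->pc = get_ex_fixup(ex);
	return true;
}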