Subject: Re: [PATCH v10 5/6] arm64: support copy_mc_[user]_highpage()
Date: Tue, 30 Jan 2024 21:50:04 +0800
Message-ID: <5227661e-da3b-6cff-37c5-5ddb7825e7b8@huawei.com>
To: Mark Rutland
CC: Catalin Marinas, Will Deacon, James Morse, Robin Murphy, Andrey Ryabinin,
 Alexander Potapenko, Alexander Viro, Andrey Konovalov, Dmitry Vyukov,
 Vincenzo Frascino, Andrew Morton, Michael Ellerman, Nicholas Piggin,
 Christophe Leroy, Aneesh Kumar K.V, "Naveen N. Rao", Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin", Guohanjun
From: Tong Tiangen <tongtiangen@huawei.com>
References: <20240129134652.4004931-1-tongtiangen@huawei.com>
 <20240129134652.4004931-6-tongtiangen@huawei.com>

On 2024/1/30 18:31, Mark Rutland wrote:
> On Mon, Jan 29, 2024 at 09:46:51PM +0800, Tong Tiangen wrote:
>> Currently, many scenarios that can tolerate memory errors when copying a page
>> are supported in the kernel [1][2][3], all of which are implemented by
>> copy_mc_[user]_highpage(). arm64 should also support this mechanism.
>>
>> Due to MTE, arm64 needs its own architecture implementation of
>> copy_mc_[user]_highpage(); the macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
>> __HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control it.
>>
>> Add a new helper copy_mc_page() which provides a machine-check-safe page
>> copy implementation. copy_mc_page() in copy_mc_page.S largely borrows from
>> copy_page() in copy_page.S; the main difference is that copy_mc_page() adds
>> an extable entry to every load/store instruction so that memory errors
>> during the copy can be handled safely.
>>
>> Add a new extable type EX_TYPE_COPY_MC_PAGE_ERR_ZERO, which is used in
>> copy_mc_page().
>>
>> [1]a873dfe1032a ("mm, hwpoison: try to recover from copy-on write faults")
>> [2]5f2500b93cc9 ("mm/khugepaged: recover from poisoned anonymous memory")
>> [3]6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
>>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>> ---
>>  arch/arm64/include/asm/asm-extable.h | 15 ++++++
>>  arch/arm64/include/asm/assembler.h   |  4 ++
>>  arch/arm64/include/asm/mte.h         |  5 ++
>>  arch/arm64/include/asm/page.h        | 10 ++++
>>  arch/arm64/lib/Makefile              |  2 +
>>  arch/arm64/lib/copy_mc_page.S        | 78 ++++++++++++++++++++++++++++
>>  arch/arm64/lib/mte.S                 | 27 ++++++++++
>>  arch/arm64/mm/copypage.c             | 66 ++++++++++++++++++++---
>>  arch/arm64/mm/extable.c              |  7 +--
>>  include/linux/highmem.h              |  8 +++
>>  10 files changed, 213 insertions(+), 9 deletions(-)
>>  create mode 100644 arch/arm64/lib/copy_mc_page.S
>>
>> diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
>> index 980d1dd8e1a3..819044fefbe7 100644
>> --- a/arch/arm64/include/asm/asm-extable.h
>> +++ b/arch/arm64/include/asm/asm-extable.h
>> @@ -10,6 +10,7 @@
>>  #define EX_TYPE_UACCESS_ERR_ZERO 2
>>  #define EX_TYPE_KACCESS_ERR_ZERO 3
>>  #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
>> +#define EX_TYPE_COPY_MC_PAGE_ERR_ZERO 5
>>
>>  /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
>>  #define EX_DATA_REG_ERR_SHIFT 0
>> @@ -51,6 +52,16 @@
>>  #define _ASM_EXTABLE_UACCESS(insn, fixup) \
>>  	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
>>
>> +#define _ASM_EXTABLE_COPY_MC_PAGE_ERR_ZERO(insn, fixup, err, zero) \
>> +	__ASM_EXTABLE_RAW(insn, fixup, \
>> +			  EX_TYPE_COPY_MC_PAGE_ERR_ZERO, \
>> +			  ( \
>> +			    EX_DATA_REG(ERR, err) | \
>> +			    EX_DATA_REG(ZERO, zero) \
>> +			  ))
>> +
>> +#define _ASM_EXTABLE_COPY_MC_PAGE(insn, fixup) \
>> +	_ASM_EXTABLE_COPY_MC_PAGE_ERR_ZERO(insn, fixup, wzr, wzr)
>>  /*
>>   * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
>>   * when an unhandled fault is taken.
>> @@ -59,6 +70,10 @@
>>  	_ASM_EXTABLE_UACCESS(\insn, \fixup)
>>  .endm
>>
>> +	.macro _asm_extable_copy_mc_page, insn, fixup
>> +	_ASM_EXTABLE_COPY_MC_PAGE(\insn, \fixup)
>> +	.endm
>> +
>
> This should share a common EX_TYPE_ with the other "kaccess where memory error
> is handled but other faults are fatal" cases.

OK, reasonable.

>
>>  /*
>>   * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
>>   * do nothing.
>> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
>> index 513787e43329..e1d8ce155878 100644
>> --- a/arch/arm64/include/asm/assembler.h
>> +++ b/arch/arm64/include/asm/assembler.h
>> @@ -154,6 +154,10 @@ lr .req x30 // link register
>>  #define CPU_LE(code...) code
>>  #endif
>>
>> +#define CPY_MC(l, x...) \
>> +9999:	x; \
>> +	_asm_extable_copy_mc_page 9999b, l
>> +
>>  /*
>>   * Define a macro that constructs a 64-bit value by concatenating two
>>   * 32-bit registers. Note that on big endian systems the order of the
>> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
>> index 91fbd5c8a391..9cdded082dd4 100644
>> --- a/arch/arm64/include/asm/mte.h
>> +++ b/arch/arm64/include/asm/mte.h
>> @@ -92,6 +92,7 @@ static inline bool try_page_mte_tagging(struct page *page)
>>  void mte_zero_clear_page_tags(void *addr);
>>  void mte_sync_tags(pte_t pte, unsigned int nr_pages);
>>  void mte_copy_page_tags(void *kto, const void *kfrom);
>> +int mte_copy_mc_page_tags(void *kto, const void *kfrom);
>>  void mte_thread_init_user(void);
>>  void mte_thread_switch(struct task_struct *next);
>>  void mte_cpu_setup(void);
>> @@ -128,6 +129,10 @@ static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages)
>>  static inline void mte_copy_page_tags(void *kto, const void *kfrom)
>>  {
>>  }
>> +static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom)
>> +{
>> +	return 0;
>> +}
>>  static inline void mte_thread_init_user(void)
>>  {
>>  }
>> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
>> index 2312e6ee595f..304cc86b8a10 100644
>> --- a/arch/arm64/include/asm/page.h
>> +++ b/arch/arm64/include/asm/page.h
>> @@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
>>  void copy_highpage(struct page *to, struct page *from);
>>  #define __HAVE_ARCH_COPY_HIGHPAGE
>>
>> +#ifdef CONFIG_ARCH_HAS_COPY_MC
>> +int copy_mc_page(void *to, const void *from);
>> +int copy_mc_highpage(struct page *to, struct page *from);
>> +#define __HAVE_ARCH_COPY_MC_HIGHPAGE
>> +
>> +int copy_mc_user_highpage(struct page *to, struct page *from,
>> +		unsigned long vaddr, struct vm_area_struct *vma);
>> +#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
>> +#endif
>> +
>>  struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>>  		unsigned long vaddr);
>>  #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
>> diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
>> index 29490be2546b..a2fd865b816d 100644
>> --- a/arch/arm64/lib/Makefile
>> +++ b/arch/arm64/lib/Makefile
>> @@ -15,6 +15,8 @@ endif
>>
>>  lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
>>
>> +lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
>> +
>>  obj-$(CONFIG_CRC32) += crc32.o
>>
>>  obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
>> diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
>> new file mode 100644
>> index 000000000000..524534d26d86
>> --- /dev/null
>> +++ b/arch/arm64/lib/copy_mc_page.S
>> @@ -0,0 +1,78 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +/*
>> + * Copyright (C) 2012 ARM Ltd.
>> + */
>> +
>> +#include
>> +#include
>> +#include
>> +#include
>> +#include
>> +#include
>> +#include
>> +
>> +/*
>> + * Copy a page from src to dest (both are page aligned) with machine check
>> + *
>> + * Parameters:
>> + * x0 - dest
>> + * x1 - src
>> + * Returns:
>> + * x0 - Return 0 if copy success, or -EFAULT if anything goes wrong
>> + *      while copying.
>> + */
>> +SYM_FUNC_START(__pi_copy_mc_page)
>> +CPY_MC(9998f, ldp x2, x3, [x1])
>> +CPY_MC(9998f, ldp x4, x5, [x1, #16])
>> +CPY_MC(9998f, ldp x6, x7, [x1, #32])
>> +CPY_MC(9998f, ldp x8, x9, [x1, #48])
>> +CPY_MC(9998f, ldp x10, x11, [x1, #64])
>> +CPY_MC(9998f, ldp x12, x13, [x1, #80])
>> +CPY_MC(9998f, ldp x14, x15, [x1, #96])
>> +CPY_MC(9998f, ldp x16, x17, [x1, #112])
>> +
>> +	add x0, x0, #256
>> +	add x1, x1, #128
>> +1:
>> +	tst x0, #(PAGE_SIZE - 1)
>> +
>> +CPY_MC(9998f, stnp x2, x3, [x0, #-256])
>> +CPY_MC(9998f, ldp x2, x3, [x1])
>> +CPY_MC(9998f, stnp x4, x5, [x0, #16 - 256])
>> +CPY_MC(9998f, ldp x4, x5, [x1, #16])
>> +CPY_MC(9998f, stnp x6, x7, [x0, #32 - 256])
>> +CPY_MC(9998f, ldp x6, x7, [x1, #32])
>> +CPY_MC(9998f, stnp x8, x9, [x0, #48 - 256])
>> +CPY_MC(9998f, ldp x8, x9, [x1, #48])
>> +CPY_MC(9998f, stnp x10, x11, [x0, #64 - 256])
>> +CPY_MC(9998f, ldp x10, x11, [x1, #64])
>> +CPY_MC(9998f, stnp x12, x13, [x0, #80 - 256])
>> +CPY_MC(9998f, ldp x12, x13, [x1, #80])
>> +CPY_MC(9998f, stnp x14, x15, [x0, #96 - 256])
>> +CPY_MC(9998f, ldp x14, x15, [x1, #96])
>> +CPY_MC(9998f, stnp x16, x17, [x0, #112 - 256])
>> +CPY_MC(9998f, ldp x16, x17, [x1, #112])
>> +
>> +	add x0, x0, #128
>> +	add x1, x1, #128
>> +
>> +	b.ne 1b
>> +
>> +CPY_MC(9998f, stnp x2, x3, [x0, #-256])
>> +CPY_MC(9998f, stnp x4, x5, [x0, #16 - 256])
>> +CPY_MC(9998f, stnp x6, x7, [x0, #32 - 256])
>> +CPY_MC(9998f, stnp x8, x9, [x0, #48 - 256])
>> +CPY_MC(9998f, stnp x10, x11, [x0, #64 - 256])
>> +CPY_MC(9998f, stnp x12, x13, [x0, #80 - 256])
>> +CPY_MC(9998f, stnp x14, x15, [x0, #96 - 256])
>> +CPY_MC(9998f, stnp x16, x17, [x0, #112 - 256])
>> +
>> +	mov x0, #0
>> +	ret
>> +
>> +9998:	mov x0, #-EFAULT
>> +	ret
>> +
>> +SYM_FUNC_END(__pi_copy_mc_page)
>> +SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page)
>> +EXPORT_SYMBOL(copy_mc_page)
>
> This is a duplicate of the existing copy_page logic; it should be refactored
> such that the logic can be shared.

OK, I'll think about how to do it.

>
>> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
>> index 5018ac03b6bf..2b748e83f6cf 100644
>> --- a/arch/arm64/lib/mte.S
>> +++ b/arch/arm64/lib/mte.S
>> @@ -80,6 +80,33 @@ SYM_FUNC_START(mte_copy_page_tags)
>>  	ret
>>  SYM_FUNC_END(mte_copy_page_tags)
>>
>> +/*
>> + * Copy the tags from the source page to the destination one wiht machine check safe
>> + * x0 - address of the destination page
>> + * x1 - address of the source page
>> + * Returns:
>> + *   x0 - Return 0 if copy success, or
>> + *        -EFAULT if anything goes wrong while copying.
>> + */
>> +SYM_FUNC_START(mte_copy_mc_page_tags)
>> +	mov x2, x0
>> +	mov x3, x1
>> +	multitag_transfer_size x5, x6
>> +1:
>> +CPY_MC(2f, ldgm x4, [x3])
>> +CPY_MC(2f, stgm x4, [x2])
>> +	add x2, x2, x5
>> +	add x3, x3, x5
>> +	tst x2, #(PAGE_SIZE - 1)
>> +	b.ne 1b
>> +
>> +	mov x0, #0
>> +	ret
>> +
>> +2:	mov x0, #-EFAULT
>> +	ret
>> +SYM_FUNC_END(mte_copy_mc_page_tags)
>> +
>>  /*
>>   * Read tags from a user buffer (one tag per byte) and set the corresponding
>>   * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
>> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
>> index a7bb20055ce0..9765e40cde6c 100644
>> --- a/arch/arm64/mm/copypage.c
>> +++ b/arch/arm64/mm/copypage.c
>> @@ -14,6 +14,25 @@
>>  #include
>>  #include
>>
>> +static int do_mte(struct page *to, struct page *from, void *kto, void *kfrom, bool mc)
>> +{
>> +	int ret = 0;
>> +
>> +	if (system_supports_mte() && page_mte_tagged(from)) {
>> +		/* It's a new page, shouldn't have been tagged yet */
>> +		WARN_ON_ONCE(!try_page_mte_tagging(to));
>> +		if (mc)
>> +			ret = mte_copy_mc_page_tags(kto, kfrom);
>> +		else
>> +			mte_copy_page_tags(kto, kfrom);
>> +
>> +		if (!ret)
>> +			set_page_mte_tagged(to);
>> +	}
>> +
>> +	return ret;
>> +}
>
> The boolean 'mc' argument makes this painful to read, and I don't think it's
> necessary to have this helper anyway.
>
> It'd be clearer to have this expanded inline in the callers, e.g.
>
> 	// in copy_highpage(), as-is today
> 	if (system_supports_mte() && page_mte_tagged(from)) {
> 		/* It's a new page, shouldn't have been tagged yet */
> 		WARN_ON_ONCE(!try_page_mte_tagging(to));
> 		mte_copy_page_tags(kto, kfrom);
> 		set_page_mte_tagged(to);
> 	}
>
> 	// in copy_mc_highpage()
> 	if (system_supports_mte() && page_mte_tagged(from)) {
> 		/* It's a new page, shouldn't have been tagged yet */
> 		WARN_ON_ONCE(!try_page_mte_tagging(to));
> 		ret = mte_copy_mc_page_tags(kto, kfrom);
> 		if (ret)
> 			return -EFAULT;
> 		set_page_mte_tagged(to);
> 	}

OK, follow this idea in the next version.

>
> Mark.
>
>> +
>>  void copy_highpage(struct page *to, struct page *from)
>>  {
>>  	void *kto = page_address(to);
>> @@ -24,12 +43,7 @@ void copy_highpage(struct page *to, struct page *from)
>>  	if (kasan_hw_tags_enabled())
>>  		page_kasan_tag_reset(to);
>>
>> -	if (system_supports_mte() && page_mte_tagged(from)) {
>> -		/* It's a new page, shouldn't have been tagged yet */
>> -		WARN_ON_ONCE(!try_page_mte_tagging(to));
>> -		mte_copy_page_tags(kto, kfrom);
>> -		set_page_mte_tagged(to);
>> -	}
>> +	do_mte(to, from, kto, kfrom, false);
>>  }
>>  EXPORT_SYMBOL(copy_highpage);
>>
>> @@ -40,3 +54,43 @@ void copy_user_highpage(struct page *to, struct page *from,
>>  	flush_dcache_page(to);
>>  }
>>  EXPORT_SYMBOL_GPL(copy_user_highpage);
>> +
>> +#ifdef CONFIG_ARCH_HAS_COPY_MC
>> +/*
>> + * Return -EFAULT if anything goes wrong while copying page or mte.
>> + */
>> +int copy_mc_highpage(struct page *to, struct page *from)
>> +{
>> +	void *kto = page_address(to);
>> +	void *kfrom = page_address(from);
>> +	int ret;
>> +
>> +	ret = copy_mc_page(kto, kfrom);
>> +	if (ret)
>> +		return -EFAULT;
>> +
>> +	if (kasan_hw_tags_enabled())
>> +		page_kasan_tag_reset(to);
>> +
>> +	ret = do_mte(to, from, kto, kfrom, true);
>> +	if (ret)
>> +		return -EFAULT;
>> +
>> +	return 0;
>> +}
>> +EXPORT_SYMBOL(copy_mc_highpage);
>> +
>> +int copy_mc_user_highpage(struct page *to, struct page *from,
>> +		unsigned long vaddr, struct vm_area_struct *vma)
>> +{
>> +	int ret;
>> +
>> +	ret = copy_mc_highpage(to, from);
>> +
>> +	if (!ret)
>> +		flush_dcache_page(to);
>> +
>> +	return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
>> +#endif
>> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
>> index 28ec35e3d210..bdc81518d207 100644
>> --- a/arch/arm64/mm/extable.c
>> +++ b/arch/arm64/mm/extable.c
>> @@ -16,7 +16,7 @@ get_ex_fixup(const struct exception_table_entry *ex)
>>  	return ((unsigned long)&ex->fixup + ex->fixup);
>>  }
>>
>> -static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
>> +static bool ex_handler_fixup_err_zero(const struct exception_table_entry *ex,
>>  					struct pt_regs *regs)
>>  {
>>  	int reg_err = FIELD_GET(EX_DATA_REG_ERR, ex->data);
>> @@ -69,7 +69,7 @@ bool fixup_exception(struct pt_regs *regs)
>>  		return ex_handler_bpf(ex, regs);
>>  	case EX_TYPE_UACCESS_ERR_ZERO:
>>  	case EX_TYPE_KACCESS_ERR_ZERO:
>> -		return ex_handler_uaccess_err_zero(ex, regs);
>> +		return ex_handler_fixup_err_zero(ex, regs);
>>  	case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
>>  		return ex_handler_load_unaligned_zeropad(ex, regs);
>>  	}
>> @@ -87,7 +87,8 @@ bool fixup_exception_mc(struct pt_regs *regs)
>>
>>  	switch (ex->type) {
>>  	case EX_TYPE_UACCESS_ERR_ZERO:
>> -		return ex_handler_uaccess_err_zero(ex, regs);
>> +	case EX_TYPE_COPY_MC_PAGE_ERR_ZERO:
>> +		return ex_handler_fixup_err_zero(ex, regs);
>>  	}
>>
>>  	return false;
>> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
>> index c5ca1a1fc4f5..a42470ca42f2 100644
>> --- a/include/linux/highmem.h
>> +++ b/include/linux/highmem.h
>> @@ -332,6 +332,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
>>  #endif
>>
>>  #ifdef copy_mc_to_kernel
>> +#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
>>  /*
>>   * If architecture supports machine check exception handling, define the
>>   * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
>> @@ -354,7 +355,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
>>
>>  	return ret ? -EFAULT : 0;
>>  }
>> +#endif
>>
>> +#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
>>  static inline int copy_mc_highpage(struct page *to, struct page *from)
>>  {
>>  	unsigned long ret;
>> @@ -370,20 +373,25 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
>>
>>  	return ret ? -EFAULT : 0;
>>  }
>> +#endif
>>  #else
>> +#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
>>  static inline int copy_mc_user_highpage(struct page *to, struct page *from,
>>  		unsigned long vaddr, struct vm_area_struct *vma)
>>  {
>>  	copy_user_highpage(to, from, vaddr, vma);
>>  	return 0;
>>  }
>> +#endif
>>
>> +#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
>>  static inline int copy_mc_highpage(struct page *to, struct page *from)
>>  {
>>  	copy_highpage(to, from);
>>  	return 0;
>>  }
>>  #endif
>> +#endif
>>
>>  static inline void memcpy_page(struct page *dst_page, size_t dst_off,
>>  		struct page *src_page, size_t src_off,
>> --
>> 2.25.1
>>
> .
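
As a rough, untested sketch of what copy_mc_highpage() might look like in the
next version with the do_mte() helper expanded inline as suggested above
(leaving the shared EX_TYPE_ rework aside for now):

int copy_mc_highpage(struct page *to, struct page *from)
{
	void *kto = page_address(to);
	void *kfrom = page_address(from);
	int ret;

	/* Copy the page data; copy_mc_page() returns non-zero on a memory error. */
	ret = copy_mc_page(kto, kfrom);
	if (ret)
		return -EFAULT;

	if (kasan_hw_tags_enabled())
		page_kasan_tag_reset(to);

	/* Copy the MTE tags inline rather than through the do_mte() helper. */
	if (system_supports_mte() && page_mte_tagged(from)) {
		/* It's a new page, shouldn't have been tagged yet */
		WARN_ON_ONCE(!try_page_mte_tagging(to));
		if (mte_copy_mc_page_tags(kto, kfrom))
			return -EFAULT;
		set_page_mte_tagged(to);
	}

	return 0;
}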