From: Andrey Konovalov <andreyknvl@gmail.com>
Date: Mon, 29 Jan 2024 21:45:26 +0100
Subject: Re: [PATCH v10 5/6] arm64: support copy_mc_[user]_highpage()
To: Peter Collingbourne
Cc: Catalin Marinas, Will Deacon, Mark Rutland, James Morse, Robin Murphy,
 Andrey Ryabinin, Alexander Potapenko, Alexander Viro, Dmitry Vyukov,
 Vincenzo Frascino, Andrew Morton, Michael Ellerman, Nicholas Piggin,
 Christophe Leroy, "Aneesh Kumar K.V", "Naveen N. Rao", Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
 "H. Peter Anvin", linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org,
 kasan-dev@googlegroups.com, wangkefeng.wang@huawei.com, Guohanjun,
 Tong Tiangen
In-Reply-To: <20240129134652.4004931-6-tongtiangen@huawei.com>
References: <20240129134652.4004931-1-tongtiangen@huawei.com>
 <20240129134652.4004931-6-tongtiangen@huawei.com>
On Mon, Jan 29, 2024 at 2:47 PM Tong Tiangen wrote:
>
> Currently, many scenarios that can tolerate memory errors when copying a
> page are supported in the kernel [1][2][3], all of which are implemented
> by copy_mc_[user]_highpage(). arm64 should also support this mechanism.
>
> Due to MTE, arm64 needs its own copy_mc_[user]_highpage() architecture
> implementation; the macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
> __HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control it.
>
> Add a new helper, copy_mc_page(), which provides a machine-check-safe
> page copy implementation. copy_mc_page() in copy_mc_page.S largely
> borrows from copy_page() in copy_page.S; the main difference is that
> copy_mc_page() adds an extable entry for every load/store insn to
> support machine check safety.
>
> Add the new extable type EX_TYPE_COPY_MC_PAGE_ERR_ZERO, which is used in
> copy_mc_page().
>
> [1] a873dfe1032a ("mm, hwpoison: try to recover from copy-on write faults")
> [2] 5f2500b93cc9 ("mm/khugepaged: recover from poisoned anonymous memory")
> [3] 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
>
> Signed-off-by: Tong Tiangen
> ---
>  arch/arm64/include/asm/asm-extable.h | 15 ++++++
>  arch/arm64/include/asm/assembler.h   |  4 ++
>  arch/arm64/include/asm/mte.h         |  5 ++
>  arch/arm64/include/asm/page.h        | 10 ++++
>  arch/arm64/lib/Makefile              |  2 +
>  arch/arm64/lib/copy_mc_page.S        | 78 ++++++++++++++++++++++++++++
>  arch/arm64/lib/mte.S                 | 27 ++++++++++
>  arch/arm64/mm/copypage.c             | 66 ++++++++++++++++++++---
>  arch/arm64/mm/extable.c              |  7 +--
>  include/linux/highmem.h              |  8 +++
>  10 files changed, 213 insertions(+), 9 deletions(-)
>  create mode 100644 arch/arm64/lib/copy_mc_page.S
>
> diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
> index 980d1dd8e1a3..819044fefbe7 100644
> --- a/arch/arm64/include/asm/asm-extable.h
> +++ b/arch/arm64/include/asm/asm-extable.h
> @@ -10,6 +10,7 @@
>  #define EX_TYPE_UACCESS_ERR_ZERO       2
>  #define EX_TYPE_KACCESS_ERR_ZERO       3
>  #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
> +#define EX_TYPE_COPY_MC_PAGE_ERR_ZERO  5
>
>  /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
>  #define EX_DATA_REG_ERR_SHIFT  0
> @@ -51,6 +52,16 @@
>  #define _ASM_EXTABLE_UACCESS(insn, fixup)                              \
>         _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
>
> +#define _ASM_EXTABLE_COPY_MC_PAGE_ERR_ZERO(insn, fixup, err, zero)     \
> +       __ASM_EXTABLE_RAW(insn, fixup,                                  \
> +                         EX_TYPE_COPY_MC_PAGE_ERR_ZERO,                \
> +                         (                                             \
> +                           EX_DATA_REG(ERR, err) |                     \
> +                           EX_DATA_REG(ZERO, zero)                     \
> +                         ))
> +
> +#define _ASM_EXTABLE_COPY_MC_PAGE(insn, fixup)                         \
> +       _ASM_EXTABLE_COPY_MC_PAGE_ERR_ZERO(insn, fixup, wzr, wzr)
>  /*
>   * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
>   * when an unhandled fault is taken.
> @@ -59,6 +70,10 @@
>         _ASM_EXTABLE_UACCESS(\insn, \fixup)
>         .endm
>
> +       .macro          _asm_extable_copy_mc_page, insn, fixup
> +       _ASM_EXTABLE_COPY_MC_PAGE(\insn, \fixup)
> +       .endm
> +
>  /*
>   * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
>   * do nothing.
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index 513787e43329..e1d8ce155878 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -154,6 +154,10 @@ lr     .req    x30             // link register
>  #define CPU_LE(code...) code
>  #endif
>
> +#define CPY_MC(l, x...)                        \
> +9999:  x;                                      \
> +       _asm_extable_copy_mc_page       9999b, l
> +
>  /*
>   * Define a macro that constructs a 64-bit value by concatenating two
>   * 32-bit registers. Note that on big endian systems the order of the
> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
> index 91fbd5c8a391..9cdded082dd4 100644
> --- a/arch/arm64/include/asm/mte.h
> +++ b/arch/arm64/include/asm/mte.h
> @@ -92,6 +92,7 @@ static inline bool try_page_mte_tagging(struct page *page)
>  void mte_zero_clear_page_tags(void *addr);
>  void mte_sync_tags(pte_t pte, unsigned int nr_pages);
>  void mte_copy_page_tags(void *kto, const void *kfrom);
> +int mte_copy_mc_page_tags(void *kto, const void *kfrom);
>  void mte_thread_init_user(void);
>  void mte_thread_switch(struct task_struct *next);
>  void mte_cpu_setup(void);
> @@ -128,6 +129,10 @@ static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages)
>  static inline void mte_copy_page_tags(void *kto, const void *kfrom)
>  {
>  }
> +static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom)
> +{
> +       return 0;
> +}
>  static inline void mte_thread_init_user(void)
>  {
>  }
> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
> index 2312e6ee595f..304cc86b8a10 100644
> --- a/arch/arm64/include/asm/page.h
> +++ b/arch/arm64/include/asm/page.h
> @@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
>  void copy_highpage(struct page *to, struct page *from);
>  #define __HAVE_ARCH_COPY_HIGHPAGE
>
> +#ifdef CONFIG_ARCH_HAS_COPY_MC
> +int copy_mc_page(void *to, const void *from);
> +int copy_mc_highpage(struct page *to, struct page *from);
> +#define __HAVE_ARCH_COPY_MC_HIGHPAGE
> +
> +int copy_mc_user_highpage(struct page *to, struct page *from,
> +               unsigned long vaddr, struct vm_area_struct *vma);
> +#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
> +#endif
> +
>  struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>                                                 unsigned long vaddr);
>  #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
> diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
> index 29490be2546b..a2fd865b816d 100644
> --- a/arch/arm64/lib/Makefile
> +++ b/arch/arm64/lib/Makefile
> @@ -15,6 +15,8 @@ endif
>
>  lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
>
> +lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
> +
>  obj-$(CONFIG_CRC32) += crc32.o
>
>  obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
> diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
> new file mode 100644
> index 000000000000..524534d26d86
> --- /dev/null
> +++ b/arch/arm64/lib/copy_mc_page.S
> @@ -0,0 +1,78 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Copyright (C) 2012 ARM Ltd.
> + */
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +/*
> + * Copy a page from src to dest (both are page aligned) with machine check
> + *
> + * Parameters:
> + *     x0 - dest
> + *     x1 - src
> + * Returns:
> + *     x0 - Return 0 if copy success, or -EFAULT if anything goes wrong
> + *          while copying.
> + */
> +SYM_FUNC_START(__pi_copy_mc_page)
> +CPY_MC(9998f, ldp x2, x3, [x1])
> +CPY_MC(9998f, ldp x4, x5, [x1, #16])
> +CPY_MC(9998f, ldp x6, x7, [x1, #32])
> +CPY_MC(9998f, ldp x8, x9, [x1, #48])
> +CPY_MC(9998f, ldp x10, x11, [x1, #64])
> +CPY_MC(9998f, ldp x12, x13, [x1, #80])
> +CPY_MC(9998f, ldp x14, x15, [x1, #96])
> +CPY_MC(9998f, ldp x16, x17, [x1, #112])
> +
> +       add     x0, x0, #256
> +       add     x1, x1, #128
> +1:
> +       tst     x0, #(PAGE_SIZE - 1)
> +
> +CPY_MC(9998f, stnp x2, x3, [x0, #-256])
> +CPY_MC(9998f, ldp x2, x3, [x1])
> +CPY_MC(9998f, stnp x4, x5, [x0, #16 - 256])
> +CPY_MC(9998f, ldp x4, x5, [x1, #16])
> +CPY_MC(9998f, stnp x6, x7, [x0, #32 - 256])
> +CPY_MC(9998f, ldp x6, x7, [x1, #32])
> +CPY_MC(9998f, stnp x8, x9, [x0, #48 - 256])
> +CPY_MC(9998f, ldp x8, x9, [x1, #48])
> +CPY_MC(9998f, stnp x10, x11, [x0, #64 - 256])
> +CPY_MC(9998f, ldp x10, x11, [x1, #64])
> +CPY_MC(9998f, stnp x12, x13, [x0, #80 - 256])
> +CPY_MC(9998f, ldp x12, x13, [x1, #80])
> +CPY_MC(9998f, stnp x14, x15, [x0, #96 - 256])
> +CPY_MC(9998f, ldp x14, x15, [x1, #96])
> +CPY_MC(9998f, stnp x16, x17, [x0, #112 - 256])
> +CPY_MC(9998f, ldp x16, x17, [x1, #112])
> +
> +       add     x0, x0, #128
> +       add     x1, x1, #128
> +
> +       b.ne    1b
> +
> +CPY_MC(9998f, stnp x2, x3, [x0, #-256])
> +CPY_MC(9998f, stnp x4, x5, [x0, #16 - 256])
> +CPY_MC(9998f, stnp x6, x7, [x0, #32 - 256])
> +CPY_MC(9998f, stnp x8, x9, [x0, #48 - 256])
> +CPY_MC(9998f, stnp x10, x11, [x0, #64 - 256])
> +CPY_MC(9998f, stnp x12, x13, [x0, #80 - 256])
> +CPY_MC(9998f, stnp x14, x15, [x0, #96 - 256])
> +CPY_MC(9998f, stnp x16, x17, [x0, #112 - 256])
> +
> +       mov     x0, #0
> +       ret
> +
> +9998:  mov     x0, #-EFAULT
> +       ret
> +
> +SYM_FUNC_END(__pi_copy_mc_page)
> +SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page)
> +EXPORT_SYMBOL(copy_mc_page)
> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
> index 5018ac03b6bf..2b748e83f6cf 100644
> --- a/arch/arm64/lib/mte.S
> +++ b/arch/arm64/lib/mte.S
> @@ -80,6 +80,33 @@ SYM_FUNC_START(mte_copy_page_tags)
>         ret
>  SYM_FUNC_END(mte_copy_page_tags)
>
> +/*
> + * Copy the tags from the source page to the destination one with machine check safe
> + * x0 - address of the destination page
> + * x1 - address of the source page
> + * Returns:
> + *     x0 - Return 0 if copy success, or
> + *          -EFAULT if anything goes wrong while copying.
> + */
> +SYM_FUNC_START(mte_copy_mc_page_tags)
> +       mov     x2, x0
> +       mov     x3, x1
> +       multitag_transfer_size x5, x6
> +1:
> +CPY_MC(2f, ldgm x4, [x3])
> +CPY_MC(2f, stgm x4, [x2])
> +       add     x2, x2, x5
> +       add     x3, x3, x5
> +       tst     x2, #(PAGE_SIZE - 1)
> +       b.ne    1b
> +
> +       mov     x0, #0
> +       ret
> +
> +2:     mov     x0, #-EFAULT
> +       ret
> +SYM_FUNC_END(mte_copy_mc_page_tags)
> +
>  /*
>   * Read tags from a user buffer (one tag per byte) and set the corresponding
>   * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
> index a7bb20055ce0..9765e40cde6c 100644
> --- a/arch/arm64/mm/copypage.c
> +++ b/arch/arm64/mm/copypage.c
> @@ -14,6 +14,25 @@
>  #include
>  #include
>
> +static int do_mte(struct page *to, struct page *from, void *kto, void *kfrom, bool mc)
> +{
> +       int ret = 0;
> +
> +       if (system_supports_mte() && page_mte_tagged(from)) {
> +               /* It's a new page, shouldn't have been tagged yet */
> +               WARN_ON_ONCE(!try_page_mte_tagging(to));
> +               if (mc)
> +                       ret = mte_copy_mc_page_tags(kto, kfrom);
> +               else
> +                       mte_copy_page_tags(kto, kfrom);
> +
> +               if (!ret)
> +                       set_page_mte_tagged(to);
> +       }
> +
> +       return ret;
> +}
> +
>  void copy_highpage(struct page *to, struct page *from)
>  {
>         void *kto = page_address(to);
> @@ -24,12 +43,7 @@ void copy_highpage(struct page *to, struct page *from)
>         if (kasan_hw_tags_enabled())
>                 page_kasan_tag_reset(to);
>
> -       if (system_supports_mte() && page_mte_tagged(from)) {
> -               /* It's a new page, shouldn't have been tagged yet */
> -               WARN_ON_ONCE(!try_page_mte_tagging(to));
> -               mte_copy_page_tags(kto, kfrom);
> -               set_page_mte_tagged(to);
> -       }
> +       do_mte(to, from, kto, kfrom, false);
>  }
>  EXPORT_SYMBOL(copy_highpage);
>
> @@ -40,3 +54,43 @@ void copy_user_highpage(struct page *to, struct page *from,
>         flush_dcache_page(to);
>  }
>  EXPORT_SYMBOL_GPL(copy_user_highpage);
> +
> +#ifdef CONFIG_ARCH_HAS_COPY_MC
> +/*
> + * Return -EFAULT if anything goes wrong while copying page or mte.
> + */
> +int copy_mc_highpage(struct page *to, struct page *from)
> +{
> +       void *kto = page_address(to);
> +       void *kfrom = page_address(from);
> +       int ret;
> +
> +       ret = copy_mc_page(kto, kfrom);
> +       if (ret)
> +               return -EFAULT;
> +
> +       if (kasan_hw_tags_enabled())
> +               page_kasan_tag_reset(to);
> +
> +       ret = do_mte(to, from, kto, kfrom, true);
> +       if (ret)
> +               return -EFAULT;
> +
> +       return 0;
> +}
> +EXPORT_SYMBOL(copy_mc_highpage);
> +
> +int copy_mc_user_highpage(struct page *to, struct page *from,
> +               unsigned long vaddr, struct vm_area_struct *vma)
> +{
> +       int ret;
> +
> +       ret = copy_mc_highpage(to, from);
> +
> +       if (!ret)
> +               flush_dcache_page(to);
> +
> +       return ret;
> +}
> +EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
> +#endif
> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
> index 28ec35e3d210..bdc81518d207 100644
> --- a/arch/arm64/mm/extable.c
> +++ b/arch/arm64/mm/extable.c
> @@ -16,7 +16,7 @@ get_ex_fixup(const struct exception_table_entry *ex)
>         return ((unsigned long)&ex->fixup + ex->fixup);
>  }
>
> -static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
> +static bool ex_handler_fixup_err_zero(const struct exception_table_entry *ex,
>                                         struct pt_regs *regs)
>  {
>         int reg_err = FIELD_GET(EX_DATA_REG_ERR, ex->data);
> @@ -69,7 +69,7 @@ bool fixup_exception(struct pt_regs *regs)
>                 return ex_handler_bpf(ex, regs);
>         case EX_TYPE_UACCESS_ERR_ZERO:
>         case EX_TYPE_KACCESS_ERR_ZERO:
> -               return ex_handler_uaccess_err_zero(ex, regs);
> +               return ex_handler_fixup_err_zero(ex, regs);
>         case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
>                 return ex_handler_load_unaligned_zeropad(ex, regs);
>         }
> @@ -87,7 +87,8 @@ bool fixup_exception_mc(struct pt_regs *regs)
>
>         switch (ex->type) {
>         case EX_TYPE_UACCESS_ERR_ZERO:
> -               return ex_handler_uaccess_err_zero(ex, regs);
> +       case EX_TYPE_COPY_MC_PAGE_ERR_ZERO:
> +               return ex_handler_fixup_err_zero(ex, regs);
>         }
>
>         return false;
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index c5ca1a1fc4f5..a42470ca42f2 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -332,6 +332,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
>  #endif
>
>  #ifdef copy_mc_to_kernel
> +#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
>  /*
>   * If architecture supports machine check exception handling, define the
>   * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
> @@ -354,7 +355,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
>
>         return ret ? -EFAULT : 0;
>  }
> +#endif
>
> +#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
>  static inline int copy_mc_highpage(struct page *to, struct page *from)
>  {
>         unsigned long ret;
> @@ -370,20 +373,25 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
>
>         return ret ? -EFAULT : 0;
>  }
> +#endif
>  #else
> +#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
>  static inline int copy_mc_user_highpage(struct page *to, struct page *from,
>                                         unsigned long vaddr, struct vm_area_struct *vma)
>  {
>         copy_user_highpage(to, from, vaddr, vma);
>         return 0;
>  }
> +#endif
>
> +#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
>  static inline int copy_mc_highpage(struct page *to, struct page *from)
>  {
>         copy_highpage(to, from);
>         return 0;
>  }
>  #endif
> +#endif
>
>  static inline void memcpy_page(struct page *dst_page, size_t dst_off,
>                                struct page *src_page, size_t src_off,
> --
> 2.25.1
>

+Peter