Subject: Re: [RFC PATCH -next V3 3/6] arm64: add support for machine check error safe
From: Tong Tiangen <tongtiangen@huawei.com>
Date: Wed, 13 Apr 2022 22:41:23 +0800
To: Kefeng Wang, Mark Rutland, James Morse, Andrew Morton,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, Robin Murphy,
 Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro,
 H. Peter Anvin
Cc: Xie XiuQi
References: <20220412072552.2526871-1-tongtiangen@huawei.com>
 <20220412072552.2526871-4-tongtiangen@huawei.com>

On 2022/4/12 21:08, Kefeng Wang wrote:
[...]
>> +
>> +bool fixup_exception_mc(struct pt_regs *regs)
>> +{
>> +	const struct exception_table_entry *ex;
>> +
>> +	ex = search_exception_tables(instruction_pointer(regs));
>> +	if (!ex)
>> +		return false;
>> +
>> +	switch (ex->type) {
>> +	case EX_TYPE_UACCESS_MC:
>> +		return ex_handler_fixup(ex, regs);
>> +	}
>> +
>> +	return false;
>> +}
>
> The definition of EX_TYPE_UACCESS_MC is in patch4, please fix it,

OK, will fix in the next version.

> And if the arm64 exception table is sorted by exception type, we could
> drop fixup_exception_mc(), right?

In sort_relative_table_with_data(), the table seems to be sorted by insn
and data, not by exception type, so fixup_exception_mc() is still needed.

>> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
>> index 77341b160aca..56b13cf8bf1d 100644
>> --- a/arch/arm64/mm/fault.c
>> +++ b/arch/arm64/mm/fault.c
>> @@ -695,6 +695,30 @@ static int do_bad(unsigned long far, unsigned int esr, struct pt_regs *regs)
>>  	return 1; /* "fault" */
>>  }
>> +static bool arm64_process_kernel_sea(unsigned long addr, unsigned int esr,
>> +				     struct pt_regs *regs, int sig, int code)
>> +{
>> +	if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
>> +		return false;
>> +
>> +	if (user_mode(regs) || !current->mm)
>> +		return false;
>> +
>> +	if (apei_claim_sea(regs) < 0)
>> +		return false;
>> +
>> +	current->thread.fault_address = 0;
>> +	current->thread.fault_code = esr;
>> +
>
> Use set_thread_esr(0, esr) and move it after fixup_exception_mc();

>> +	if (!fixup_exception_mc(regs))
>> +		return false;
>> +
>> +	arm64_force_sig_fault(sig, code, addr,
>> +		"Uncorrected hardware memory error in kernel-access\n");
>> +
>> +	return true;
>> +}
>> +
>>  static int do_sea(unsigned long far, unsigned int esr, struct pt_regs *regs)
>>  {
>>  	const struct fault_info *inf;
>> @@ -720,6 +744,10 @@ static int do_sea(unsigned long far, unsigned int esr, struct pt_regs *regs)
>>  	 */
>>  	siaddr  = untagged_addr(far);
>>  	}
>> +
>> +	if (arm64_process_kernel_sea(siaddr, esr, regs, inf->sig, inf->code))
>> +		return 0;
>> +
>
> Rename arm64_process_kernel_sea() to arm64_do_kernel_sea()
>
> if (!arm64_do_kernel_sea())
> 	arm64_notify_die();

Agreed, will do next version.

>>  	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
>>  	return 0;
>> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
>> index 546179418ffa..dd952aeecdc1 100644
>> --- a/include/linux/uaccess.h
>> +++ b/include/linux/uaccess.h
>> @@ -174,6 +174,14 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
>>  }
>>  #endif
>> +#ifndef copy_mc_to_user
>> +static inline unsigned long __must_check
>> +copy_mc_to_user(void *dst, const void *src, size_t cnt)
>> +{
>
> Add check_object_size(src, cnt, true); which could make
> HARDENED_USERCOPY work.

Agreed, will do next version.

Thanks Kefeng,
Tong.

>> +	return raw_copy_to_user(dst, src, cnt);
>> +}
>> +#endif
>> +
>>  static __always_inline void pagefault_disabled_inc(void)
>>  {
>>  	current->pagefault_disabled++;
> .
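By the way, to make sure I read the comments the same way, here is a
rough (untested) sketch of how the next version could fold them in: the
rename to arm64_do_kernel_sea(), set_thread_esr() moved after the
fixup, and check_object_size() in the generic fallback.

/* arch/arm64/mm/fault.c: sketch only, not the final patch */
static bool arm64_do_kernel_sea(unsigned long addr, unsigned int esr,
				struct pt_regs *regs, int sig, int code)
{
	if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
		return false;

	if (user_mode(regs) || !current->mm)
		return false;

	if (apei_claim_sea(regs) < 0)
		return false;

	if (!fixup_exception_mc(regs))
		return false;

	/* Record the ESR only once the fixup has actually been applied. */
	set_thread_esr(0, esr);

	arm64_force_sig_fault(sig, code, addr,
		"Uncorrected hardware memory error in kernel-access\n");

	return true;
}

so that do_sea() ends with:

	if (!arm64_do_kernel_sea(siaddr, esr, regs, inf->sig, inf->code))
		arm64_notify_die(inf->name, regs, inf->sig, inf->code,
				 siaddr, esr);
	return 0;

and the generic fallback in include/linux/uaccess.h becomes:

#ifndef copy_mc_to_user
static inline unsigned long __must_check
copy_mc_to_user(void *dst, const void *src, size_t cnt)
{
	/* Let HARDENED_USERCOPY sanity-check the kernel-side source. */
	check_object_size(src, cnt, true);
	return raw_copy_to_user(dst, src, cnt);
}
#endif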