From: Kefeng Wang <wangkefeng.wang@huawei.com>
Date: Thu, 18 May 2023 10:03:09 +0800
Subject: Re: [PATCH] x86/mce: set MCE_IN_KERNEL_COPYIN for all MC-Safe Copy
To: Tony Luck, Borislav Petkov, Naoya Horiguchi
Cc: Thomas Gleixner, Ingo Molnar, Dave Hansen, Andrew Morton
In-Reply-To: <20230508022233.13890-1-wangkefeng.wang@huawei.com>
References: <20230508022233.13890-1-wangkefeng.wang@huawei.com>
Hi Tony and all x86 maintainers, kindly ping, thanks.

On 2023/5/8 10:22, Kefeng Wang wrote:
> Both the EX_TYPE_FAULT_MCE_SAFE and EX_TYPE_DEFAULT_MCE_SAFE exception
> fixup types are used to identify fixups that allow in-kernel #MC
> recovery, that is, Machine Check Safe Copy.
>
> For now, the MCE_IN_KERNEL_COPYIN flag is only set for EX_TYPE_COPY
> and EX_TYPE_UACCESS when copying from user space, and the corrupted
> page is isolated in that case. For MC-safe copy, memory_failure() is
> not always called; some places, such as __wp_page_copy_user(),
> copy_subpage(), copy_user_gigantic_page() and ksm_might_need_to_copy(),
> manually call memory_failure_queue() to cope with such unhandled error
> pages. Recently, coredump hwpoison recovery support[1] was asked to do
> the same thing, and some other existing MC-safe copy scenarios, e.g.
> nvdimm, dm-writecache and dax, have a similar issue.
>
> The best way to fix them is to set MCE_IN_KERNEL_COPYIN for the
> MCE_SAFE exception types as well; then kill_me_never() will be queued
> to call memory_failure() in do_machine_check() to isolate the
> corrupted page, which avoids calling memory_failure_queue() after
> every MC-safe copy returns.
>
> [1] https://lkml.kernel.org/r/20230417045323.11054-1-wangkefeng.wang@huawei.com
>
> Signed-off-by: Kefeng Wang
> ---
>  arch/x86/kernel/cpu/mce/severity.c |  3 +--
>  mm/ksm.c                           |  1 -
>  mm/memory.c                        | 12 +++---------
>  3 files changed, 4 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/mce/severity.c b/arch/x86/kernel/cpu/mce/severity.c
> index c4477162c07d..63e94484c5d6 100644
> --- a/arch/x86/kernel/cpu/mce/severity.c
> +++ b/arch/x86/kernel/cpu/mce/severity.c
> @@ -293,12 +293,11 @@ static noinstr int error_context(struct mce *m, struct pt_regs *regs)
>  	case EX_TYPE_COPY:
>  		if (!copy_user)
>  			return IN_KERNEL;
> -		m->kflags |= MCE_IN_KERNEL_COPYIN;
>  		fallthrough;
>
>  	case EX_TYPE_FAULT_MCE_SAFE:
>  	case EX_TYPE_DEFAULT_MCE_SAFE:
> -		m->kflags |= MCE_IN_KERNEL_RECOV;
> +		m->kflags |= MCE_IN_KERNEL_RECOV | MCE_IN_KERNEL_COPYIN;
>  		return IN_KERNEL_RECOV;
>
>  	default:
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 0156bded3a66..7abdf4892387 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2794,7 +2794,6 @@ struct page *ksm_might_need_to_copy(struct page *page,
>  	if (new_page) {
>  		if (copy_mc_user_highpage(new_page, page, address, vma)) {
>  			put_page(new_page);
> -			memory_failure_queue(page_to_pfn(page), 0);
>  			return ERR_PTR(-EHWPOISON);
>  		}
>  		SetPageDirty(new_page);
> diff --git a/mm/memory.c b/mm/memory.c
> index 5e2c6b1fc00e..c0f586257017 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2814,10 +2814,8 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
>  	unsigned long addr = vmf->address;
>
>  	if (likely(src)) {
> -		if (copy_mc_user_highpage(dst, src, addr, vma)) {
> -			memory_failure_queue(page_to_pfn(src), 0);
> +		if (copy_mc_user_highpage(dst, src, addr, vma))
>  			return -EHWPOISON;
> -		}
>  		return 0;
>  	}
>
> @@ -5852,10 +5850,8 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
>
>  		cond_resched();
>  		if (copy_mc_user_highpage(dst_page, src_page,
> -					  addr + i*PAGE_SIZE, vma)) {
> -			memory_failure_queue(page_to_pfn(src_page), 0);
> +					  addr + i*PAGE_SIZE, vma))
>  			return -EHWPOISON;
> -		}
>  	}
>  	return 0;
>  }
> @@ -5871,10 +5867,8 @@ static int copy_subpage(unsigned long addr, int idx, void *arg)
>  	struct copy_subpage_arg *copy_arg = arg;
>
>  	if (copy_mc_user_highpage(copy_arg->dst + idx, copy_arg->src + idx,
> -				  addr, copy_arg->vma)) {
> -		memory_failure_queue(page_to_pfn(copy_arg->src + idx), 0);
> +				  addr, copy_arg->vma))
>  		return -EHWPOISON;
> -	}
>  	return 0;
>  }
>