From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v3 2/2] ACPI: APEI: handle synchronous exceptions in task work
Date: Thu, 6 Apr 2023 20:39:26 +0800
From: Xiaofei Tan <tanxiaofei@huawei.com>
To: Shuai Xue
References: <20221027042445.60108-1-xueshuai@linux.alibaba.com> <20230317072443.3189-3-xueshuai@linux.alibaba.com>
In-Reply-To: <20230317072443.3189-3-xueshuai@linux.alibaba.com>
Content-Type: text/plain; charset="UTF-8"; format=flowed
Hi Shuai,

Thanks for this effort, it's great. Some comments below.

On 2023/3/17 15:24, Shuai Xue wrote:
> Hardware errors could be signaled by asynchronous interrupt, e.g. when an
> error is detected by a background scrubber, or signaled by synchronous
> exception, e.g. when an uncorrected error is consumed. Both synchronous and
> asynchronous errors are queued and handled by a dedicated kthread in a
> workqueue.
>
> commit 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for
> synchronous errors") keeps track of whether memory_failure() work was
> queued, and makes task_work pending to flush out the workqueue so that the
> work for a synchronous error is processed before returning to user-space.
> This trick ensures that the corrupted page is unmapped and poisoned. After
> returning to user-space, the task resumes at the current instruction, which
> triggers a page fault, in which the kernel will send SIGBUS to the current
> process due to VM_FAULT_HWPOISON.
>
> However, memory failure recovery for hwpoison-aware mechanisms does not
> work as expected. For example, hwpoison-aware user-space processes such as
> QEMU register their customized SIGBUS handler and enable early kill mode by
> setting PF_MCE_EARLY at initialization. The kernel will then directly
> notify the process by sending a SIGBUS signal in memory failure, but with
> the wrong si_code: the actual user-space process is accessing the corrupt
> memory location, yet its memory failure work is handled in a kthread
> context, so kill_proc() sends SIGBUS with si_code BUS_MCEERR_AO to the
> actual user-space process instead of BUS_MCEERR_AR.
>
> To this end, separate synchronous and asynchronous error handling into
> different paths, as the x86 platform does:
>
> - task work for synchronous errors.
> - workqueue for asynchronous errors.
>
> Then, for synchronous errors, the current context in memory failure belongs
> exactly to the task consuming the poison data, and it will send SIGBUS
> with the proper si_code.
>
> Fixes: 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue for synchronous errors")
> Signed-off-by: Shuai Xue
> ---
>  drivers/acpi/apei/ghes.c | 114 ++++++++++++++++++++++-----------------
>  include/acpi/ghes.h      |   3 --
>  mm/memory-failure.c      |  13 -----
>  3 files changed, 64 insertions(+), 66 deletions(-)
>
> diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
> index cccd96596efe..1901ee3498c4 100644
> --- a/drivers/acpi/apei/ghes.c
> +++ b/drivers/acpi/apei/ghes.c
> @@ -452,45 +452,79 @@ static void ghes_clear_estatus(struct ghes *ghes,
>  }
>
>  /*
> - * Called as task_work before returning to user-space.
> - * Ensure any queued work has been done before we return to the context that
> - * triggered the notification.
> + * struct sync_task_work - for synchronous RAS event
> + *
> + * @twork: callback_head for task work
> + * @pfn: page frame number of corrupted page
> + * @flags: fine tune action taken
> + *
> + * Structure to pass task work to be handled before
> + * ret_to_user via task_work_add().
>   */
> -static void ghes_kick_task_work(struct callback_head *head)
> +struct sync_task_work {
> +	struct callback_head twork;
> +	u64 pfn;
> +	int flags;
> +};
> +
> +static void memory_failure_cb(struct callback_head *twork)
>  {
> -	struct acpi_hest_generic_status *estatus;
> -	struct ghes_estatus_node *estatus_node;
> -	u32 node_len;
> +	int ret;
> +	struct sync_task_work *twcb =
> +		container_of(twork, struct sync_task_work, twork);
>
> -	estatus_node = container_of(head, struct ghes_estatus_node, task_work);
> -	if (IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
> -		memory_failure_queue_kick(estatus_node->task_work_cpu);
> +	ret = memory_failure(twcb->pfn, twcb->flags);
> +	kfree(twcb);
>
> -	estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
> -	node_len = GHES_ESTATUS_NODE_LEN(cper_estatus_len(estatus));
> -	gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node, node_len);
> +	if (!ret)
> +		return;
> +
> +	/*
> +	 * -EHWPOISON from memory_failure() means that it already sent SIGBUS
> +	 * to the current process with the proper error info,
> +	 * -EOPNOTSUPP means hwpoison_filter() filtered the error event,
> +	 *
> +	 * In both cases, no further processing is required.
> +	 */
> +	if (ret == -EHWPOISON || ret == -EOPNOTSUPP)
> +		return;
> +
> +	pr_err("Memory error not recovered");
> +	force_sig(SIGBUS);
>  }
>
> -static bool ghes_do_memory_failure(u64 physical_addr, int flags)
> +static void ghes_do_memory_failure(u64 physical_addr, int flags)
>  {
>  	unsigned long pfn;
> +	struct sync_task_work *twcb;
>
>  	if (!IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
> -		return false;
> +		return;
>
>  	pfn = PHYS_PFN(physical_addr);
>  	if (!pfn_valid(pfn) && !arch_is_platform_page(physical_addr)) {
>  		pr_warn_ratelimited(FW_WARN GHES_PFX
>  				    "Invalid address in generic error data: %#llx\n",
>  				    physical_addr);
> -		return false;
> +		return;

For synchronous errors, we need to send SIGBUS to the current task if the
error is not recovered, matching what this patch does in
memory_failure_cb(). Such abnormal branches should also be treated as not
recovered.

> +	}
> +
> +	if (flags == MF_ACTION_REQUIRED && current->mm) {
> +		twcb = kmalloc(sizeof(*twcb), GFP_ATOMIC);
> +		if (!twcb)
> +			return;

It's the same here.

> +
> +		twcb->pfn = pfn;
> +		twcb->flags = flags;
> +		init_task_work(&twcb->twork, memory_failure_cb);
> +		task_work_add(current, &twcb->twork, TWA_RESUME);
> +		return;
>  	}
>
>  	memory_failure_queue(pfn, flags);
> -	return true;
>  }
>
> -static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
> +static void ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
>  				       int sev, bool sync)
>  {
>  	int flags = -1;
> @@ -498,7 +532,7 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
>  	struct cper_sec_mem_err *mem_err = acpi_hest_get_payload(gdata);
>
>  	if (!(mem_err->validation_bits & CPER_MEM_VALID_PA))
> -		return false;
> +		return;

And here.

>
>  	/* iff following two events can be handled properly by now */
>  	if (sec_sev == GHES_SEV_CORRECTED &&
> @@ -508,16 +542,15 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
>  		flags = sync ? MF_ACTION_REQUIRED : 0;
>
>  	if (flags != -1)
> -		return ghes_do_memory_failure(mem_err->physical_addr, flags);
> +		ghes_do_memory_failure(mem_err->physical_addr, flags);
>
> -	return false;
> +	return;
>  }
>
> -static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata,
> +static void ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata,
>  				     int sev, bool sync)
>  {
>  	struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
> -	bool queued = false;
>  	int sec_sev, i;
>  	char *p;
>  	int flags = sync ? MF_ACTION_REQUIRED : 0;
>
> @@ -526,7 +559,7 @@ static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata,
>
>  	sec_sev = ghes_severity(gdata->error_severity);
>  	if (sev != GHES_SEV_RECOVERABLE || sec_sev != GHES_SEV_RECOVERABLE)
> -		return false;
> +		return;

And here.

>
>  	p = (char *)(err + 1);
>  	for (i = 0; i < err->err_info_num; i++) {
> @@ -542,7 +575,7 @@ static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata,
>  		 * and don't filter out 'corrected' error here.
>  		 */
>  		if (is_cache && has_pa) {
> -			queued = ghes_do_memory_failure(err_info->physical_fault_addr, flags);
> +			ghes_do_memory_failure(err_info->physical_fault_addr, flags);
>  			p += err_info->length;
>  			continue;
>  		}
> @@ -555,8 +588,6 @@ static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata,
>  			    error_type);
>  		p += err_info->length;
>  	}

And here, for the case where memory_failure() is not attempted because the
PA is invalid.
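
Something like the following in ghes_do_memory_failure() is what I have in
mind (just a rough, untested sketch to illustrate the point), so the task
still gets an action-required signal when the page cannot be handled:

```diff
 	pfn = PHYS_PFN(physical_addr);
 	if (!pfn_valid(pfn) && !arch_is_platform_page(physical_addr)) {
 		pr_warn_ratelimited(FW_WARN GHES_PFX
 				    "Invalid address in generic error data: %#llx\n",
 				    physical_addr);
+		if (flags == MF_ACTION_REQUIRED && current->mm)
+			force_sig(SIGBUS);
 		return;
 	}
```

The kmalloc() failure branch and the other early returns for synchronous
errors could be handled the same way.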
> -
> -	return queued;
>  }
>
>  /*
> @@ -654,7 +685,7 @@ static void ghes_defer_non_standard_event(struct acpi_hest_generic_data *gdata,
>  	schedule_work(&entry->work);
>  }
>
> -static bool ghes_do_proc(struct ghes *ghes,
> +static void ghes_do_proc(struct ghes *ghes,
>  			 const struct acpi_hest_generic_status *estatus)
>  {
>  	int sev, sec_sev;
> @@ -662,7 +693,6 @@ static bool ghes_do_proc(struct ghes *ghes,
>  	guid_t *sec_type;
>  	const guid_t *fru_id = &guid_null;
>  	char *fru_text = "";
> -	bool queued = false;
>  	bool sync = is_hest_sync_notify(ghes);
>
>  	sev = ghes_severity(estatus->error_severity);
> @@ -681,13 +711,13 @@ static bool ghes_do_proc(struct ghes *ghes,
>  			atomic_notifier_call_chain(&ghes_report_chain, sev, mem_err);
>
>  			arch_apei_report_mem_error(sev, mem_err);
> -			queued = ghes_handle_memory_failure(gdata, sev, sync);
> +			ghes_handle_memory_failure(gdata, sev, sync);
>  		}
>  		else if (guid_equal(sec_type, &CPER_SEC_PCIE)) {
>  			ghes_handle_aer(gdata);
>  		}
>  		else if (guid_equal(sec_type, &CPER_SEC_PROC_ARM)) {
> -			queued = ghes_handle_arm_hw_error(gdata, sev, sync);
> +			ghes_handle_arm_hw_error(gdata, sev, sync);
>  		} else {
>  			void *err = acpi_hest_get_payload(gdata);
>
> @@ -697,8 +727,6 @@ static bool ghes_do_proc(struct ghes *ghes,
>  					       gdata->error_data_length);
>  		}
>  	}
> -
> -	return queued;
>  }
>
>  static void __ghes_print_estatus(const char *pfx,
> @@ -1000,9 +1028,7 @@ static void ghes_proc_in_irq(struct irq_work *irq_work)
>  	struct ghes_estatus_node *estatus_node;
>  	struct acpi_hest_generic *generic;
>  	struct acpi_hest_generic_status *estatus;
> -	bool task_work_pending;
>  	u32 len, node_len;
> -	int ret;
>
>  	llnode = llist_del_all(&ghes_estatus_llist);
>  	/*
> @@ -1017,25 +1043,14 @@ static void ghes_proc_in_irq(struct irq_work *irq_work)
>  		estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
>  		len = cper_estatus_len(estatus);
>  		node_len = GHES_ESTATUS_NODE_LEN(len);
> -		task_work_pending = ghes_do_proc(estatus_node->ghes, estatus);
> +		ghes_do_proc(estatus_node->ghes, estatus);
>  		if (!ghes_estatus_cached(estatus)) {
>  			generic = estatus_node->generic;
>  			if (ghes_print_estatus(NULL, generic, estatus))
>  				ghes_estatus_cache_add(generic, estatus);
>  		}
> -
> -		if (task_work_pending && current->mm) {
> -			estatus_node->task_work.func = ghes_kick_task_work;
> -			estatus_node->task_work_cpu = smp_processor_id();
> -			ret = task_work_add(current, &estatus_node->task_work,
> -					    TWA_RESUME);
> -			if (ret)
> -				estatus_node->task_work.func = NULL;
> -		}
> -
> -		if (!estatus_node->task_work.func)
> -			gen_pool_free(ghes_estatus_pool,
> -				      (unsigned long)estatus_node, node_len);
> +		gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node,
> +			      node_len);
>
>  		llnode = next;
>  	}
> @@ -1096,7 +1111,6 @@ static int ghes_in_nmi_queue_one_entry(struct ghes *ghes,
>
>  	estatus_node->ghes = ghes;
>  	estatus_node->generic = ghes->generic;
> -	estatus_node->task_work.func = NULL;
>  	estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
>
>  	if (__ghes_read_estatus(estatus, buf_paddr, fixmap_idx, len)) {
> diff --git a/include/acpi/ghes.h b/include/acpi/ghes.h
> index 3c8bba9f1114..e5e0c308d27f 100644
> --- a/include/acpi/ghes.h
> +++ b/include/acpi/ghes.h
> @@ -35,9 +35,6 @@ struct ghes_estatus_node {
>  	struct llist_node llnode;
>  	struct acpi_hest_generic *generic;
>  	struct ghes *ghes;
> -
> -	int task_work_cpu;
> -	struct callback_head task_work;
>  };
>
>  struct ghes_estatus_cache {
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index fae9baf3be16..6ea8c325acb3 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -2355,19 +2355,6 @@ static void memory_failure_work_func(struct work_struct *work)
>  	}
>  }
>
> -/*
> - * Process memory_failure work queued on the specified CPU.
> - * Used to avoid return-to-userspace racing with the memory_failure workqueue.
> - */
> -void memory_failure_queue_kick(int cpu)
> -{
> -	struct memory_failure_cpu *mf_cpu;
> -
> -	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
> -	cancel_work_sync(&mf_cpu->work);
> -	memory_failure_work_func(&mf_cpu->work);
> -}
> -
>  static int __init memory_failure_init(void)
>  {
>  	struct memory_failure_cpu *mf_cpu;