From: Kefeng Wang <wangkefeng.wang@huawei.com>
Date: Fri, 9 Dec 2022 09:56:09 +0800
Subject: Re: [PATCH] mm: hwpoison: support recovery from ksm_might_need_to_copy()
Message-ID: <91ec9413-8045-428e-d7e6-9327d63685d1@huawei.com>
In-Reply-To: <20221209021041.192835-1-wangkefeng.wang@huawei.com>
References: <20221209021041.192835-1-wangkefeng.wang@huawei.com>
List-ID: linux-mm@kvack.org

sorry, please ignore it, will resend.

On 2022/12/9 10:10, Kefeng Wang wrote:
> When the kernel copies a page in ksm_might_need_to_copy() but runs into
> an uncorrectable memory error, it crashes, since the poisoned page is
> consumed by the kernel. Similar to copy-on-write poison recovery, return
> VM_FAULT_HWPOISON when an error is detected during the page copy, which
> helps us avoid a system crash. Note that memory failure handling on a
> KSM page is skipped, but memory_failure_queue() is still called, to stay
> consistent with the general memory failure process.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  mm/ksm.c      | 8 ++++++--
>  mm/memory.c   | 3 +++
>  mm/swapfile.c | 2 +-
>  3 files changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index f1e06b1d47f3..356e93b85287 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2629,8 +2629,12 @@ struct page *ksm_might_need_to_copy(struct page *page,
>  		new_page = NULL;
>  	}
>  	if (new_page) {
> -		copy_user_highpage(new_page, page, address, vma);
> -
> +		if (copy_mc_user_highpage(new_page, page, address, vma)) {
> +			put_page(new_page);
> +			new_page = ERR_PTR(-EHWPOISON);
> +			memory_failure_queue(page_to_pfn(page), 0);
> +			return new_page;
> +		}
>  		SetPageDirty(new_page);
>  		__SetPageUptodate(new_page);
>  		__SetPageLocked(new_page);
> diff --git a/mm/memory.c b/mm/memory.c
> index 2615fa615be4..bb7b35e42297 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3840,6 +3840,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		if (unlikely(!page)) {
>  			ret = VM_FAULT_OOM;
>  			goto out_page;
> +		} else if (unlikely(PTR_ERR(page) == -EHWPOISON)) {
> +			ret = VM_FAULT_HWPOISON;
> +			goto out_page;
>  		}
>  		folio = page_folio(page);
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index f670ffb7df7e..763ff6a8a576 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1767,7 +1767,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
>
>  	swapcache = page;
>  	page = ksm_might_need_to_copy(page, vma, addr);
> -	if (unlikely(!page))
> +	if (IS_ERR_OR_NULL(!page))

should be IS_ERR_OR_NULL(page)

>  		return -ENOMEM;
>
>  	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);