From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 9 Dec 2022 09:56:16 +0800
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: Re: [PATCH] mm: hwpoison: support recovery from ksm_might_need_to_copy()
References: <20221209021041.192835-1-wangkefeng.wang@huawei.com> <20221209021041.192835-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20221209021041.192835-2-wangkefeng.wang@huawei.com>
Content-Type: text/plain; charset="UTF-8"; format=flowed
List-ID: linux-mm@kvack.org

Sorry, please ignore it, will resend.

On 2022/12/9 10:10, Kefeng Wang wrote:
> When the kernel copies a page from ksm_might_need_to_copy() but runs
> into an uncorrectable error, it will crash, since the poisoned page is
> consumed by the kernel. This is similar to copy-on-write poison
> recovery: when an error is detected during the page copy, return
> VM_FAULT_HWPOISON, which helps us avoid a system crash. Note that
> memory failure on a KSM page will be skipped, but memory_failure_queue()
> is still called, to be consistent with the general memory failure
> process.
>
> Signed-off-by: Kefeng Wang
> ---
>  mm/ksm.c      | 8 ++++++--
>  mm/memory.c   | 3 +++
>  mm/swapfile.c | 2 +-
>  3 files changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index f1e06b1d47f3..356e93b85287 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2629,8 +2629,12 @@ struct page *ksm_might_need_to_copy(struct page *page,
>  		new_page = NULL;
>  	}
>  	if (new_page) {
> -		copy_user_highpage(new_page, page, address, vma);
> -
> +		if (copy_mc_user_highpage(new_page, page, address, vma)) {
> +			put_page(new_page);
> +			new_page = ERR_PTR(-EHWPOISON);
> +			memory_failure_queue(page_to_pfn(page), 0);
> +			return new_page;
> +		}
>  		SetPageDirty(new_page);
>  		__SetPageUptodate(new_page);
>  		__SetPageLocked(new_page);
> diff --git a/mm/memory.c b/mm/memory.c
> index 2615fa615be4..bb7b35e42297 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3840,6 +3840,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		if (unlikely(!page)) {
>  			ret = VM_FAULT_OOM;
>  			goto out_page;
> +		} else if (unlikely(PTR_ERR(page) == -EHWPOISON)) {
> +			ret = VM_FAULT_HWPOISON;
> +			goto out_page;
>  		}
>  		folio = page_folio(page);
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index f670ffb7df7e..763ff6a8a576 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1767,7 +1767,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
>
>  		swapcache = page;
>  		page = ksm_might_need_to_copy(page, vma, addr);
> -		if (unlikely(!page))
> +		if (IS_ERR_OR_NULL(page))
>  			return -ENOMEM;
>
>  		pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);