From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH -next v2] mm: hwposion: support recovery from ksm_might_need_to_copy()
To: Kefeng Wang
CC: HORIGUCHI NAOYA, Andrew Morton, Linux-MM
References: <20221209021525.196276-1-wangkefeng.wang@huawei.com>
 <20221209072801.193221-1-wangkefeng.wang@huawei.com>
From: Miaohe Lin
Message-ID: <342f4d3f-7347-1615-7d63-cbdef4872629@huawei.com>
Date: Mon, 12 Dec 2022 10:36:48 +0800
In-Reply-To: <20221209072801.193221-1-wangkefeng.wang@huawei.com>

On 2022/12/9 15:28, Kefeng Wang wrote:
> When the kernel copies a page from ksm_might_need_to_copy() but runs
> into an uncorrectable error, it will crash since the poisoned page is
> consumed by the kernel. This is similar to Copy-on-write poison recovery:
> when an error is detected during the page copy, return VM_FAULT_HWPOISON,
> which helps us avoid a system crash. Note that memory failure on a KSM
> page will be skipped, but memory_failure_queue() is still called to stay
> consistent with the general memory failure process.

Thanks for your patch.
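As context for the diff quoted below, the recovery path relies on the
kernel's ERR_PTR()/PTR_ERR() convention to hand -EHWPOISON back through
the struct page * return value. A minimal sketch of the caller-side
decoding, with a hypothetical helper name and the surrounding fault
handling omitted (an illustration, not code from the patch itself):

/*
 * Illustrative sketch only: how a fault-path caller can decode the
 * pointer returned by ksm_might_need_to_copy().  The helper name is
 * made up for this example; the real checks live inline in
 * do_swap_page(), as the quoted diff shows.
 */
#include <linux/err.h>
#include <linux/mm.h>

static vm_fault_t classify_ksm_copy_result(struct page *page)
{
	if (unlikely(!page))				/* target page allocation failed */
		return VM_FAULT_OOM;
	if (unlikely(PTR_ERR(page) == -EHWPOISON))	/* source page was hw-poisoned */
		return VM_FAULT_HWPOISON;
	return 0;					/* valid page, continue the fault */
}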
> 
> Signed-off-by: Kefeng Wang
> ---
> v2: fix type error
> 
>  mm/ksm.c      | 8 ++++++--
>  mm/memory.c   | 3 +++
>  mm/swapfile.c | 2 +-
>  3 files changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index dd02780c387f..83e2f74ae7da 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2629,8 +2629,12 @@ struct page *ksm_might_need_to_copy(struct page *page,
>  		new_page = NULL;
>  	}
>  	if (new_page) {
> -		copy_user_highpage(new_page, page, address, vma);
> -
> +		if (copy_mc_user_highpage(new_page, page, address, vma)) {
> +			put_page(new_page);
> +			new_page = ERR_PTR(-EHWPOISON);
> +			memory_failure_queue(page_to_pfn(page), 0);
> +			return new_page;
> +		}
>  		SetPageDirty(new_page);
>  		__SetPageUptodate(new_page);
>  		__SetPageLocked(new_page);
> diff --git a/mm/memory.c b/mm/memory.c
> index aad226daf41b..5b2c137dfb2a 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3840,6 +3840,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		if (unlikely(!page)) {
>  			ret = VM_FAULT_OOM;
>  			goto out_page;
> +		} else if (unlikely(PTR_ERR(page) == -EHWPOISON)) {
> +			ret = VM_FAULT_HWPOISON;
> +			goto out_page;
>  		}
>  		folio = page_folio(page);
> 
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 908a529bca12..d479811bc311 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1767,7 +1767,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
> 
>  	swapcache = page;
>  	page = ksm_might_need_to_copy(page, vma, addr);
> -	if (unlikely(!page))
> +	if (IS_ERR_OR_NULL(page))

IMHO, it might be better to install a hwpoison entry here. Otherwise a
later swapoff operation will trigger the uncorrectable error again?

Thanks,
Miaohe Lin
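To make the suggestion above concrete: installing a hwpoison entry would
mean replacing the swap PTE with a hwpoison swap entry, so that a later
access faults with VM_FAULT_HWPOISON instead of swapoff re-reading the
poisoned page. A rough sketch of what that could look like, built on the
existing make_hwpoison_entry()/swp_entry_to_pte() helpers; the wrapper
name is hypothetical, and the real unuse_pte() locking, rmap and
mm-counter updates are omitted (this is an illustration of the idea, not
the eventual fix):

/*
 * Illustrative sketch only -- not the patch under review.  When
 * ksm_might_need_to_copy() returns ERR_PTR(-EHWPOISON), a hwpoison swap
 * entry could be installed for the faulting address so that later
 * accesses report VM_FAULT_HWPOISON rather than touching the poisoned
 * page again.
 */
#include <linux/mm.h>
#include <linux/swapops.h>

static void install_hwpoison_swap_pte(struct vm_area_struct *vma,
				      unsigned long addr, pte_t *ptep,
				      struct page *swapcache)
{
	/* Encode the poisoned page as a hwpoison swap entry. */
	swp_entry_t entry = make_hwpoison_entry(swapcache);

	/* Replace the old swap PTE with the hwpoison entry. */
	set_pte_at(vma->vm_mm, addr, ptep, swp_entry_to_pte(entry));
}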