Subject: Re: [RFC PATCH v1 0/4] mm, hwpoison: improve handling workload related to hugetlb and memory_hotplug
From: Miaohe Lin <linmiaohe@huawei.com>
To: HORIGUCHI NAOYA(堀口 直也), David Hildenbrand
Cc: Naoya Horiguchi, linux-mm@kvack.org, Andrew Morton, Mike Kravetz, Yang Shi, Oscar Salvador, Muchun Song, linux-kernel@vger.kernel.org
Date: Mon, 9 May 2022 17:04:54 +0800
Message-ID: <6a5d31a3-c27f-f6d9-78bb-d6bf69547887@huawei.com>
In-Reply-To: <20220509072902.GB123646@hori.linux.bs1.fc.nec.co.jp>
References: <20220427042841.678351-1-naoya.horiguchi@linux.dev> <54399815-10fe-9d43-7ada-7ddb55e798cb@redhat.com> <20220427122049.GA3918978@hori.linux.bs1.fc.nec.co.jp> <20220509072902.GB123646@hori.linux.bs1.fc.nec.co.jp>

On 2022/5/9 15:29, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Thu, Apr 28, 2022 at 10:44:15AM +0200, David Hildenbrand wrote:
>>>> 2) It happens rarely (ever?),
>>>> so do we even care?
>>>
>>> I'm not certain of the rarity. Some cloud service providers who maintain
>>> lots of servers may care?
>>
>> About replacing broken DIMMs? I'm not so sure, especially because it
>> requires a special setup with ZONE_MOVABLE (i.e., movablecore) to be
>> somewhat reliable, and individual DIMMs can usually not get replaced at all.
>>
>>>> 3) Once the memory is offline, we can re-online it and lose HWPoison.
>>>> The memory can be happily used.
>>>>
>>>> 3) can happen easily if our DIMM consists of multiple memory blocks and
>>>> offlining of some memory block fails -> we'll re-online all already
>>>> offlined ones. We'll happily reuse previously HWPoisoned pages, which
>>>> feels more dangerous to me than just leaving the DIMM around (and
>>>> eventually hwpoisoning all pages on it such that it won't get used
>>>> anymore?).
>>>
>>> I see. This scenario can often happen.
>>>
>>>> So maybe we should just fail offlining once we stumble over a hwpoisoned
>>>> page?
>>>
>>> That could be one choice.
>>>
>>> Maybe another is like this: offlining can succeed, but HWPoison flags are
>>> kept over offline-reonline operations. If the system notices that the
>>> re-onlined blocks are backed by the original DIMMs or NUMA nodes, then the
>>> saved HWPoison flags are still effective, so keep using them. If the
>>> re-onlined blocks are backed by replaced DIMMs/NUMA nodes, then we can clear
>>> all HWPoison flags associated with the replaced physical address range. This
>>> can be done automatically during re-onlining if there's a way for the kernel
>>> to know whether the DIMMs/NUMA nodes were replaced with new ones. But if
>>> there isn't, system applications have to check the HW and explicitly reset
>>> the HWPoison flags.
>>
>> Offline memory sections have a stale memmap, so there is no trusting it.
>> And trying to work around that or adjusting memory onlining code
>> overcomplicates something we really don't care about supporting.
>
> OK, so I'll go forward and reduce the complexity of the hwpoison-specific
> code in memory offlining.
>
>> So if we continue allowing offlining memory blocks with poisoned pages,
>> we could simply remember that that memory block had any poisoned page
>> (either for the memory section or maybe better for the whole memory
>> block). We can then simply reject/fail memory onlining of these memory
>> blocks.
>
> It seems helpful also for other contexts (like hugetlb) to know whether
> there's any hwpoisoned page in a given range of physical addresses, so I'll
> think about this approach.
>
>> So that leaves us with either
>>
>> 1) Fail offlining -> no need to care about reonlining

Maybe failing offlining would be a better alternative, as we could get rid
of many races between memory failure and memory offline? But no strong
opinion. :)

Thanks!

>> 2) Succeed offlining but fail re-onlining
>
> Rephrasing in case I misread: memory offlining code should, in the end,
> never check for hwpoisoned pages, and memory onlining code would do a kind
> of range query to find hwpoisoned pages (without depending on the
> PageHWPoison flag).
>
> Thanks,
> Naoya Horiguchi
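
[Editorial note] As a rough illustration of option 2) above (succeed offlining but
fail re-onlining), here is a minimal kernel-style sketch. The struct, field, and
function names below are invented for this example and are not an existing kernel
interface; it only shows the shape of the idea discussed in the thread: count
poisoned pages per memory block when memory_failure() handles them, and reject
onlining of any block whose counter is non-zero, so the stale memmap of an offline
block never has to be consulted for PageHWPoison.

/*
 * Hypothetical sketch only -- names and hooks are made up for illustration,
 * not the interface that actually exists upstream.
 */
#include <linux/atomic.h>
#include <linux/errno.h>

/* Assumed per-memory-block bookkeeping; not a field of struct memory_block. */
struct memblk_hwpoison_state {
	atomic_long_t nr_hwpoison;	/* poisoned pages seen in this block */
};

/* Would be called from the memory_failure() path for the affected block. */
static void memblk_note_hwpoison(struct memblk_hwpoison_state *state)
{
	atomic_long_inc(&state->nr_hwpoison);
}

/*
 * Would be called at the start of a (hypothetical) onlining path: a block
 * that went offline while it still contained poisoned pages is rejected
 * outright, so the poisoned memory cannot be silently reused after
 * re-onlining.
 */
static int memblk_online_precheck(struct memblk_hwpoison_state *state)
{
	if (atomic_long_read(&state->nr_hwpoison))
		return -EHWPOISON;
	return 0;
}

Keeping the count per memory block (rather than per memory section) matches the
"maybe better for the whole memory block" suggestion above, and it sidesteps the
stale-memmap problem entirely because no PageHWPoison walk is needed at online
time.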