Subject: Re: [PATCH v2 2/2] mm/memory_hotplug: fix hwpoisoned large folio handling in do_migrate_range
From: Jinjiang Tu <tujinjiang@huawei.com>
Date: Tue, 8 Jul 2025 09:15:53 +0800
Message-ID: <4c5d4fd5-5582-11d8-9fee-24828ac1913d@huawei.com>
To: David Hildenbrand, Oscar Salvador
References: <20250627125747.3094074-1-tujinjiang@huawei.com>
 <20250627125747.3094074-3-tujinjiang@huawei.com>
 <373d02c5-2b62-8543-b786-8fd591ad56eb@huawei.com>
 <61325284-d1d6-a973-8aa7-c0f226db95fa@huawei.com>
 <7b2c054b-fc33-4127-aaa9-9edf6a63e142@redhat.com>
 <924d9d25-e53c-f159-6ec0-e1fd4e96d6e2@huawei.com>

On 2025/7/7 20:37, David Hildenbrand wrote:
> On 07.07.25 13:51, Jinjiang Tu wrote:
>>
>> On 2025/7/3 17:06, David Hildenbrand wrote:
>>> On 03.07.25 10:24, Jinjiang Tu wrote:
>>>>
>>>> On 2025/7/3 15:57, David Hildenbrand wrote:
>>>>> On 03.07.25 09:46, Jinjiang Tu wrote:
>>>>>>
>>>>>> On 2025/7/1 22:21, Oscar Salvador wrote:
>>>>>>> On Fri, Jun 27, 2025 at 08:57:47PM +0800, Jinjiang Tu wrote:
>>>>>>>> In do_migrate_range(), the hwpoisoned folio may be a large
>>>>>>>> folio, which can't be handled by unmap_poisoned_folio().
>>>>>>>>
>>>>>>>> I can reproduce this issue in qemu after adding a delay in
>>>>>>>> memory_failure():
>>>>>>>>
>>>>>>>> BUG: kernel NULL pointer dereference, address: 0000000000000000
>>>>>>>> Workqueue: kacpi_hotplug acpi_hotplug_work_fn
>>>>>>>> RIP: 0010:try_to_unmap_one+0x16a/0xfc0
>>>>>>>>
>>>>>>>>   rmap_walk_anon+0xda/0x1f0
>>>>>>>>   try_to_unmap+0x78/0x80
>>>>>>>>   ? __pfx_try_to_unmap_one+0x10/0x10
>>>>>>>>   ? __pfx_folio_not_mapped+0x10/0x10
>>>>>>>>   ? __pfx_folio_lock_anon_vma_read+0x10/0x10
>>>>>>>>   unmap_poisoned_folio+0x60/0x140
>>>>>>>>   do_migrate_range+0x4d1/0x600
>>>>>>>>   ? slab_memory_callback+0x6a/0x190
>>>>>>>>   ? notifier_call_chain+0x56/0xb0
>>>>>>>>   offline_pages+0x3e6/0x460
>>>>>>>>   memory_subsys_offline+0x130/0x1f0
>>>>>>>>   device_offline+0xba/0x110
>>>>>>>>   acpi_bus_offline+0xb7/0x130
>>>>>>>>   acpi_scan_hot_remove+0x77/0x290
>>>>>>>>   acpi_device_hotplug+0x1e0/0x240
>>>>>>>>   acpi_hotplug_work_fn+0x1a/0x30
>>>>>>>>   process_one_work+0x186/0x340
>>>>>>>>
>>>>>>>> In this case, just make offline_pages() fail.
>>>>>>>>
>>>>>>>> Besides, do_migrate_range() may be called between
>>>>>>>> memory_failure() setting the hwpoison flag and isolating the
>>>>>>>> folio from the LRU, so remove the WARN_ON(). In other places,
>>>>>>>> unmap_poisoned_folio() is called when the folio is isolated;
>>>>>>>> obey that in do_migrate_range() too.
>>>>>>>>
>>>>>>>> Fixes: b15c87263a69 ("hwpoison, memory_hotplug: allow
>>>>>>>> hwpoisoned pages to be offlined")
>>>>>>>> Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
>>>>>>> ...
>>>>>>>> @@ -2041,11 +2048,9 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
>>>>>>>>                  ret = scan_movable_pages(pfn, end_pfn, &pfn);
>>>>>>>>              if (!ret) {
>>>>>>>> -                /*
>>>>>>>> -                 * TODO: fatal migration failures should bail
>>>>>>>> -                 * out
>>>>>>>> -                 */
>>>>>>>> -                do_migrate_range(pfn, end_pfn);
>>>>>>>> +                ret = do_migrate_range(pfn, end_pfn);
>>>>>>>> +                if (ret)
>>>>>>>> +                    break;
>>>>>>> I am not really sure about this one.
>>>>>>> I get the reason you're adding it, but note that migrate_pages()
>>>>>>> can also return "fatal" errors and we don't propagate those.
>>>>>>>
>>>>>>> The motto has always been to migrate as much as possible, and
>>>>>>> this changes that behaviour.
>>>>>> If we just skip to the next pfn, offline_pages() will loop
>>>>>> forever to no purpose until it receives a signal.
>>>>>
>>>>> Yeah, that's also not good.
>>>>>
>>>>>> It seems there is no documented guarantee that memory offline has
>>>>>> to migrate as much as possible.
>>>>>
>>>>> We should try to offline as well as possible. But if there is
>>>>> something we just cannot possibly migrate, there is no sense in
>>>>> retrying.
>>>>>
>>>>> Now, could we run into this case here because we are racing with
>>>>> other code, and could retrying actually make it work?
>>>>>
>>>>> Remind me again: how exactly do we arrive at this point of having
>>>>> a large folio that is hwpoisoned but still mapped?
>>>>>
>>>>> In memory_failure(), we do on a large folio
>>>>>
>>>>> 1) folio_set_has_hwpoisoned
>>>>> 2) try_to_split_thp_page
>>>>> 3) if splitting fails, kill_procs_now
>>>> If 2) is executed after do_migrate_range() has incremented the
>>>> refcount of the folio, the split fails, and retrying is
>>>> meaningless.
>>>
>>> kill_procs_now will kill all processes, effectively unmapping the
>>> folio in that case?
>>>
>>> So retrying would later just ... get us an unmapped folio and we
>>> can make progress?
>>>
>> kill_procs_now()->collect_procs() collects the tasks to kill. But
>> not all tasks that map the folio will be collected:
>> collect_procs_anon()->task_early_kill()->find_early_kill_thread()
>> will not select a task (other than current) if PF_MCE_PROCESS isn't
>> set and sysctl_memory_failure_early_kill isn't enabled (this is the
>> default behaviour).
>
> I think you're right, that's rather nasty.
>
> We fail to split, but keep the folio mapped into some processes.
>
> And we can't unmap it because unmap_poisoned_folio() does not
> properly support large folios yet.
>
> We really should unmap the folio when splitting fails. :(

unmap_poisoned_folio() doesn't guarantee that the folio is unmapped
successfully, judging by its return value, although I don't know in
which case we would fail to unmap. A few sketches below, for reference.
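
As far as I can tell, the return value is only an after-the-fact
mapcount check. Roughly, the tail of unmap_poisoned_folio() in
mm/memory-failure.c looks like this (a simplified sketch from my
reading; the hugetlb and clean-pagecache special cases are elided):

	/* try_to_unmap() returns void, so success can only be checked
	 * after the fact. */
	try_to_unmap(folio, ttu);

	/* If the rmap walk left the folio mapped for whatever reason,
	 * report -EBUSY without saying what went wrong. */
	return folio_mapped(folio) ? -EBUSY : 0;

So the caller learns that unmapping failed, but not why.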
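
For the earlier point about kill_procs_now(): the thread selection is
roughly the following (again a simplified sketch of mm/memory-failure.c
as I read it, not verbatim):

static struct task_struct *find_early_kill_thread(struct task_struct *tsk)
{
	struct task_struct *t;

	for_each_thread(tsk, t) {
		if (t->flags & PF_MCE_PROCESS) {
			/* Task opted in via prctl(PR_MCE_KILL). */
			if (t->flags & PF_MCE_EARLY)
				return t;
		} else if (sysctl_memory_failure_early_kill) {
			/* System-wide early kill enabled via sysctl. */
			return t;
		}
	}
	/* Nothing selected: the task is not killed and keeps the
	 * poisoned folio mapped. */
	return NULL;
}

With both the prctl and the sysctl off (the default), only current ends
up selected, so the other mappers survive with the folio still mapped.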
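
And if failing the offline is the way to go, a purely hypothetical
shape for the check in do_migrate_range() (my sketch only, not the
actual patch; folio/pfn/put_folio follow the names in
mm/memory_hotplug.c):

	/* A mapped hwpoisoned large folio can neither be migrated nor
	 * be unmapped by unmap_poisoned_folio(), so retrying the loop
	 * in offline_pages() cannot make progress -- fail the offline. */
	if (folio_test_large(folio) && folio_test_has_hwpoisoned(folio) &&
	    folio_mapped(folio)) {
		ret = -EBUSY;
		goto put_folio;
	}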