From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH] mm/vmscan: fix hwpoisoned large folio handling in shrink_folio_list
Date: Tue, 17 Jun 2025 14:43:20 +0800
To: David Hildenbrand, Zi Yan
References:
 <20250611074643.250837-1-tujinjiang@huawei.com>
 <1f0c7d73-b7e2-4ee9-8050-f23c05e75e8b@redhat.com>
 <62e1f100-0e0e-40bc-9dc3-fcaf8f8d343f@redhat.com>
 <849e1901-82d3-4ba3-81ac-060fa16ed91e@redhat.com>
 <90112dc7-8f00-45ec-b742-2f4e551023ca@redhat.com>
 <839731C1-90AE-419E-A1A7-B41303E2F239@nvidia.com>
 <94438931-d78f-4d5d-be4e-86938225c7c8@redhat.com>
 <52d16469-8df6-4ee5-bc6f-97c5557f7aa1@redhat.com>
From: Jinjiang Tu
In-Reply-To: <52d16469-8df6-4ee5-bc6f-97c5557f7aa1@redhat.com>
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 8bit

On 2025/6/17 3:27, David Hildenbrand wrote:
> On 16.06.25 13:33, Jinjiang Tu wrote:
>>
>> On 2025/6/12 23:50, David Hildenbrand wrote:
>>> On 12.06.25 17:35, Zi Yan wrote:
>>>> On 12 Jun 2025, at 3:53, David Hildenbrand wrote:
>>>>
>>>>> On 11.06.25 19:52, Zi Yan wrote:
>>>>>> On 11 Jun 2025, at 13:34, David Hildenbrand wrote:
>>>>>>
>>>>>>>> So __folio_split() has an implicit rule that:
>>>>>>>> 1. if the given list is not NULL, the folio cannot be on LRU;
>>>>>>>> 2. if the given list is NULL, the folio is on LRU.
>>>>>>>>
>>>>>>>> And the rule is buried deeply in lru_add_split_folio().
>>>>>>>>
>>>>>>>> Should we add some checks in __folio_split()?
>>>>>>>>
>>>>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>>>>>> index d3e66136e41a..8ce2734c9ca0 100644
>>>>>>>> --- a/mm/huge_memory.c
>>>>>>>> +++ b/mm/huge_memory.c
>>>>>>>> @@ -3732,6 +3732,11 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>>>>>>          VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
>>>>>>>>          VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>>>>>>>>
>>>>>>>> +    if (list && folio_test_lru(folio))
>>>>>>>> +        return -EINVAL;
>>>>>>>> +    if (!list && !folio_test_lru(folio))
>>>>>>>> +        return -EINVAL;
>>>>>>>> +
>>>>>>>
>>>>>>> I guess we currently don't run into that, because whenever a folio
>>>>>>> is otherwise isolated, there is an additional reference or a page
>>>>>>> table mapping, so it cannot get split either way (e.g., freezing
>>>>>>> the refcount fails).
>>>>>>>
>>>>>>> So maybe these checks would be too early and they should happen
>>>>>>> after we froze the refcount?
>>>>>>
>>>>>> But if the caller does the isolation, the additional refcount is OK
>>>>>> and can_split_folio() will return true. In addition, __folio_split()
>>>>>> does not change folio LRU state, so these two checks are orthogonal
>>>>>> to the refcount check, right? The placement of them does not matter,
>>>>>> but the earlier the better, to avoid unnecessary work. I see these
>>>>>> as sanity checks for callers.
>>>>>
>>>>> In light of the discussion in this thread, if you have someone that
>>>>> takes the folio off the LRU concurrently, I think we could still run
>>>>> into a race here. Because that could happen just after we passed the
>>>>> test in __folio_split().
>>>>>
>>>>> That's why I think the test would have to happen when there are no
>>>>> such races possible anymore.
>>>>
>>>> Makes sense. Thanks for the explanation.
>>>>
>>>>>
>>>>> But the real question is if it is okay to remove the folio from the
>>>>> LRU as done in the patch discussed here ...
>>>>
>>>> I just read through the email thread. IIUC, when deferred_split_scan()
>>>> splits a THP, it expects the THP to be on the LRU list. I think it
>>>> makes sense, since all these THPs are on both the deferred_split_queue
>>>> and the LRU list. And deferred_split_scan() uses split_folio() without
>>>> providing a list to store the after-split folios.
>>>>
>>>> In terms of the patch, since unmap_poisoned_folio() does not handle
>>>> large folios, why not just split the large folios and add the
>>>> after-split folios to folio_list?
>>>
>>> That's what I raised, but apparently it might not be worth it for that
>>> corner case (splitting might fail).
>>>
>>>> Then, the while loop will go over all the after-split folios
>>>> one by one.
>>>>
>>>> BTW, unmap_poisoned_folio() is also used in do_migrate_range() from
>>>> memory_hotplug.c and there is no guard for large folios either. Does
>>>> that also need a fix?
>>>
>>> Yes, that was mentioned, and I was hoping we could let
>>> unmap_poisoned_folio() check+fail in that case.
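
For clarity, the "check+fail" idea would presumably boil down to something
like this near the top of unmap_poisoned_folio() (untested sketch only; the
exact error value, and how each caller reacts to it, would still need to be
decided):

	/* Rough sketch: refuse large folios inside unmap_poisoned_folio() */
	if (folio_test_large(folio))
		return -EBUSY;	/* let the caller skip or split the folio */

Then shrink_folio_list() and do_migrate_range() would simply see the failure
and keep the folio, instead of special-casing large folios themselves.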
>>
>> Maybe we could fix do_migrate_range() like below:
>>
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index 8305483de38b..5a6d869e6b56 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -1823,7 +1823,10 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>>                           pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
>>
>>                   if (folio_contain_hwpoisoned_page(folio)) {
>> -                       if (WARN_ON(folio_test_lru(folio)))
>> +                       if (folio_test_large(folio))
>> +                               goto put_folio;
>> +
>> +                       if (folio_test_lru(folio))
>
>                                   folio_isolate_lru(folio);
>
> Hm, what is supposed to happen if we fail folio_isolate_lru()?

unmap_poisoned_folio() does not seem to assume that the LRU flag is cleared.
But the other callers of unmap_poisoned_folio() guarantee the folio is
isolated, so maybe we should be consistent with them?

>
>>                           if (folio_mapped(folio)) {
>>                                   folio_lock(folio);
>>
>> The folio may still be on the LRU if the folio_test_lru() check happens
>> between setting the hwpoison flag and isolating the folio from the LRU
>> in memory_failure(). So I removed the WARN_ON.
>
> I guess this would work, although this special-casing on large folios
> in the caller of unmap_poisoned_folio() is rather weird.
>
> What is supposed to happen if unmap_poisoned_folio() failed for small
> folios? Why are we okay with having the LRU flag cleared and the folio
> isolated?

In hwpoison_user_mappings(), if unmap_poisoned_folio() fails, the small folio
is kept isolated too. IIUC, once the folio is kept isolated, other subsystems
cannot operate on it, which avoids introducing further issues.
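
Just to spell out what "keep it isolated, consistent with the other callers"
could look like in do_migrate_range(): a simplified, untested sketch based on
the hunk above (the unmap_poisoned_folio() argument list is assumed from the
existing caller, and bailing out when isolation fails is only one possible
answer to the question above):

	if (folio_contain_hwpoisoned_page(folio)) {
		/* unmap_poisoned_folio() cannot handle large folios */
		if (folio_test_large(folio))
			goto put_folio;

		/*
		 * May still be on the LRU: memory_failure() sets the
		 * hwpoison flag before it isolates the folio from the LRU.
		 */
		if (folio_test_lru(folio) && !folio_isolate_lru(folio))
			goto put_folio;

		if (folio_mapped(folio)) {
			folio_lock(folio);
			unmap_poisoned_folio(folio, pfn, false);
			folio_unlock(folio);
		}

		/*
		 * Whether or not the unmap succeeded, the folio stays
		 * isolated (no folio_putback_lru()), so reclaim, migration
		 * and compaction will not touch the poisoned memory again.
		 */
		goto put_folio;
	}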