From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <23255767-08b9-cda1-b93e-cf5675621504@huawei.com>
Date: Mon, 16 Jun 2025 19:34:27 +0800
From: Jinjiang Tu <tujinjiang@huawei.com>
Subject: Re: [PATCH] mm/vmscan: fix hwpoisoned large folio handling in shrink_folio_list
To: Zi Yan, David Hildenbrand
In-Reply-To: <45D4230C-F5CC-49B2-B672-C65E9D99BB3F@nvidia.com>
References: <20250611074643.250837-1-tujinjiang@huawei.com>
 <1f0c7d73-b7e2-4ee9-8050-f23c05e75e8b@redhat.com>
 <62e1f100-0e0e-40bc-9dc3-fcaf8f8d343f@redhat.com>
 <849e1901-82d3-4ba3-81ac-060fa16ed91e@redhat.com>
 <90112dc7-8f00-45ec-b742-2f4e551023ca@redhat.com>
 <839731C1-90AE-419E-A1A7-B41303E2F239@nvidia.com>
 <94438931-d78f-4d5d-be4e-86938225c7c8@redhat.com>
 <45D4230C-F5CC-49B2-B672-C65E9D99BB3F@nvidia.com>
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 8bit

On 2025/6/13 0:48, Zi Yan wrote:
> On 12 Jun 2025, at 11:50, David Hildenbrand wrote:
>
>> On 12.06.25 17:35, Zi Yan wrote:
>>> On 12 Jun 2025, at 3:53, David Hildenbrand wrote:
>>>
>>>> On 11.06.25 19:52, Zi Yan wrote:
>>>>> On 11 Jun 2025, at 13:34, David Hildenbrand wrote:
>>>>>
>>>>>>> So __folio_split() has an implicit rule that:
>>>>>>> 1. if the given list is not NULL, the folio cannot be on LRU;
>>>>>>> 2. if the given list is NULL, the folio is on LRU.
>>>>>>>
>>>>>>> And the rule is buried deeply in lru_add_split_folio().
>>>>>>>
>>>>>>> Should we add some checks in __folio_split()?
>>>>>>>
>>>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>>>>> index d3e66136e41a..8ce2734c9ca0 100644
>>>>>>> --- a/mm/huge_memory.c
>>>>>>> +++ b/mm/huge_memory.c
>>>>>>> @@ -3732,6 +3732,11 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>>>>>          VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
>>>>>>>          VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>>>>>>>
>>>>>>> +        if (list && folio_test_lru(folio))
>>>>>>> +                return -EINVAL;
>>>>>>> +        if (!list && !folio_test_lru(folio))
>>>>>>> +                return -EINVAL;
>>>>>>> +
>>>>>>
>>>>>> I guess we currently don't run into that, because whenever a folio is
>>>>>> otherwise isolated, there is an additional reference or a page table
>>>>>> mapping, so it cannot get split either way (e.g., freezing the refcount
>>>>>> fails).
>>>>>>
>>>>>> So maybe these checks would be too early and they should happen after
>>>>>> we froze the refcount?
>>>>>
>>>>> But if the caller does the isolation, the additional refcount is OK and
>>>>> can_split_folio() will return true. In addition, __folio_split() does not
>>>>> change folio LRU state, so these two checks are orthogonal to refcount
>>>>> check, right? The placement of them does not matter, but earlier the better
>>>>> to avoid unnecessary work. I see these are sanity checks for callers.
>>>>
>>>> In light of the discussion in this thread, if you have someone that takes
>>>> the folio off the LRU concurrently, I think we could still run into a race
>>>> here. Because that could happen just after we passed the test in
>>>> __folio_split().
>>>>
>>>> That's why I think the test would have to happen when there are no such
>>>> races possible anymore.
>>>
>>> Make sense. Thanks for the explanation.
>>>
>>>> But the real question is if it is okay to remove the folio from the LRU
>>>> as done in the patch discussed here ...
>>>
>>> I just read through the email thread. IIUC, when deferred_split_scan() split
>>> a THP, it expects the THP is on LRU list. I think it makes sense since
>>> all these THPs are in both the deferred_split_queue and LRU list.
>>> And deferred_split_scan() uses split_folio() without providing a list
>>> to store the after-split folios.
>>>
>>> In terms of the patch, since unmap_poisoned_folio() does not handle large
>>> folios, why not just split the large folios and add the after-split folios
>>> to folio_list?
>>
>> That's what I raised, but apparently it might not be worth it for that
>> corner case (splitting might fail).
>
> OK, the reason to not split is that handling split failures is too much work
> and it is better to be done in memory_failure().
>
> In terms of this patch, returning large poisoned folio back to LRU list
> seems to be the right thing to do. My thought is that the reclaim code
> is trying to free this folio, but it cannot reclaim a large poisoned folio,
> so it puts the folio back like it cannot reclaim an in-use folio.
>
>>> Then, the while loop will go over all the after-split folios
>>> one by one.
>>>
>>> BTW, unmap_poisoned_folio() is also used in do_migrate_range() from
>>> memory_hotplug.c and there is no guard for large folios either. That
>>> also needs a fix?
>>
>> Yes, that was mentioned, and I was hoping we could let
>> unmap_poisoned_folio() check+fail in that case.
>
> For this case, if unmap_poisoned_folio() fails, the whole range cannot
> be offline?
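By "check + fail" I assume something roughly like the following at the top
of unmap_poisoned_folio() (an untested sketch just to illustrate the idea,
not part of the posted patch; whether the failure is reported via the
return value, and the -EBUSY value itself, are placeholders):

        /*
         * Untested sketch: refuse large folios that unmap_poisoned_folio()
         * cannot handle, so callers such as do_migrate_range() see the
         * failure instead of silently proceeding.
         */
        if (folio_test_large(folio))
                return -EBUSY;

Whether the range can still be offlined then depends on how
do_migrate_range() reacts to that failure.
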
If we fix do_migrate_range() like below:

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 8305483de38b..5a6d869e6b56 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1823,7 +1823,10 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
                         pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
 
                 if (folio_contain_hwpoisoned_page(folio)) {
-                        if (WARN_ON(folio_test_lru(folio)))
+                        if (folio_test_large(folio))
+                                goto put_folio;
+
+                        if (folio_test_lru(folio))
                                 folio_isolate_lru(folio);
                         if (folio_mapped(folio)) {
                                 folio_lock(folio);

If memory_failure() calls try_to_split_thp_page() after do_migrate_range()
has put the folio, the THP can be handled by memory_failure(), and the
memory can be offlined once memory_failure() is done with it. However, if
memory_failure() fails to split the THP due to the extra reference, the
whole range can never be offlined.

>
> Best Regards,
> Yan, Zi
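
P.S. Regarding the earlier question about where to place the __folio_split()
checks, I read David's suggestion as roughly the following (untested sketch
for illustration only; "list" and "extra_pins" are the locals __folio_split()
already has, and the freeze call is modeled on the existing split path):

        /*
         * Untested sketch: once the refcount is frozen, concurrent LRU
         * isolation can no longer flip folio_test_lru() under us, so the
         * caller contract (list != NULL <=> caller already isolated the
         * folio) can be checked reliably here.
         */
        if (folio_ref_freeze(folio, 1 + extra_pins)) {
                VM_WARN_ON_ONCE_FOLIO(!!list == folio_test_lru(folio), folio);
                /* ... existing split work continues here ... */
        }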