Subject: Re: [PATCH] mm/vmscan: fix hwpoisoned large folio handling in shrink_folio_list
Date: Mon, 16 Jun 2025 19:33:42 +0800
To: David Hildenbrand, Zi Yan
From: Jinjiang Tu <tujinjiang@huawei.com>
In-Reply-To: <94438931-d78f-4d5d-be4e-86938225c7c8@redhat.com>
References: <20250611074643.250837-1-tujinjiang@huawei.com>
 <1f0c7d73-b7e2-4ee9-8050-f23c05e75e8b@redhat.com>
 <62e1f100-0e0e-40bc-9dc3-fcaf8f8d343f@redhat.com>
 <849e1901-82d3-4ba3-81ac-060fa16ed91e@redhat.com>
 <90112dc7-8f00-45ec-b742-2f4e551023ca@redhat.com>
 <839731C1-90AE-419E-A1A7-B41303E2F239@nvidia.com>
 <94438931-d78f-4d5d-be4e-86938225c7c8@redhat.com>

On 2025/6/12 23:50, David Hildenbrand wrote:
> On 12.06.25 17:35, Zi Yan wrote:
>> On 12 Jun 2025, at 3:53, David Hildenbrand wrote:
>>
>>> On 11.06.25 19:52, Zi Yan wrote:
>>>> On 11 Jun 2025, at 13:34, David Hildenbrand wrote:
>>>>
>>>>>> So __folio_split() has an implicit rule that:
>>>>>> 1. if the given list is not NULL, the folio cannot be on LRU;
>>>>>> 2. if the given list is NULL, the folio is on LRU.
>>>>>>
>>>>>> And the rule is buried deeply in lru_add_split_folio().
>>>>>>
>>>>>> Should we add some checks in __folio_split()?
>>>>>>
>>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>>>> index d3e66136e41a..8ce2734c9ca0 100644
>>>>>> --- a/mm/huge_memory.c
>>>>>> +++ b/mm/huge_memory.c
>>>>>> @@ -3732,6 +3732,11 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>>>>         VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
>>>>>>         VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>>>>>>
>>>>>> +    if (list && folio_test_lru(folio))
>>>>>> +        return -EINVAL;
>>>>>> +    if (!list && !folio_test_lru(folio))
>>>>>> +        return -EINVAL;
>>>>>> +
>>>>>
>>>>> I guess we currently don't run into that, because whenever a folio
>>>>> is otherwise isolated, there is an additional reference or a page
>>>>> table mapping, so it cannot get split either way (e.g., freezing
>>>>> the refcount fails).
>>>>>
>>>>> So maybe these checks would be too early and they should happen
>>>>> after we froze the refcount?
>>>>
>>>> But if the caller does the isolation, the additional refcount is OK and
>>>> can_split_folio() will return true. In addition, __folio_split() does not
>>>> change folio LRU state, so these two checks are orthogonal to refcount
>>>> check, right? The placement of them does not matter, but earlier the better
>>>> to avoid unnecessary work. I see these are sanity checks for callers.
>>>
>>> In light of the discussion in this thread, if you have someone that
>>> takes the folio off the LRU concurrently, I think we could still run
>>> into a race here. Because that could happen just after we passed the
>>> test in __folio_split().
>>>
>>> That's why I think the test would have to happen when there are no
>>> such races possible anymore.
>>
>> Make sense. Thanks for the explanation.
>>
>>>
>>> But the real question is if it is okay to remove the folio from the
>>> LRU as done in the patch discussed here ...
>>
>> I just read through the email thread. IIUC, when deferred_split_scan() split
>> a THP, it expects the THP is on LRU list. I think it makes sense since
>> all these THPs are in both the deferred_split_queue and LRU list.
>> And deferred_split_scan() uses split_folio() without providing a list
>> to store the after-split folios.
>>
>> In terms of the patch, since unmap_poisoned_folio() does not handle large
>> folios, why not just split the large folios and add the after-split folios
>> to folio_list?
>
> That's what I raised, but apparently it might not be worth it for that
> corner case (splitting might fail).
>
>> Then, the while loop will go over all the after-split folios
>> one by one.
>>
>> BTW, unmap_poisoned_folio() is also used in do_migrate_range() from
>> memory_hotplug.c and there is no guard for large folios either. That
>> also needs a fix?
>
> Yes, that was mentioned, and I was hoping we could let
> unmap_poisoned_folio() check+fail in that case.
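Just to make that check+fail idea concrete, something like the sketch below in
unmap_poisoned_folio() could work (only a rough sketch, not a real patch: the
three-argument, int-returning form and the -EBUSY value are assumptions on my
side, and I'm leaving aside how hugetlb folios are special-cased there):

/* mm/memory-failure.c -- sketch only, not the actual change */
int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill)
{
	/*
	 * Non-hugetlb large folios are not handled here yet, so fail and
	 * let the callers (shrink_folio_list(), do_migrate_range()) skip
	 * the folio instead of unmapping it as if it were a single page.
	 */
	if (WARN_ON_ONCE(folio_test_large(folio) && !folio_test_hugetlb(folio)))
		return -EBUSY;

	/* ... existing unmap logic unchanged ... */
	return 0;
}

Both call sites would then only need to check the return value.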
Maybe we could fix do_migrate_range() like below:

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 8305483de38b..5a6d869e6b56 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1823,7 +1823,10 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 			pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
 
 		if (folio_contain_hwpoisoned_page(folio)) {
-			if (WARN_ON(folio_test_lru(folio)))
+			if (folio_test_large(folio))
+				goto put_folio;
+
+			if (folio_test_lru(folio))
 				folio_isolate_lru(folio);
 			if (folio_mapped(folio)) {
 				folio_lock(folio);

The folio may still be on the LRU if the folio_test_lru() check happens between
setting the hwpoison flag and isolating the folio from the LRU in
memory_failure(). So I removed the WARN_ON. Large folios are skipped before we
try to isolate them from the LRU.
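For clarity, the window I mean is roughly the following (the memory_failure()
side is paraphrased here, not exact function names):

/*
 *   memory_failure()                     do_migrate_range()
 *   ----------------                     ------------------
 *   set the hwpoison flag
 *                                        folio_contain_hwpoisoned_page()
 *                                            -> true
 *                                        folio_test_lru() -> true
 *                                            (the old WARN_ON fires here)
 *   isolate the folio from the LRU
 *
 * Seeing the folio still on the LRU is therefore a legal transient state,
 * so we can just call folio_isolate_lru() without warning.
 */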