From: Usama Arif <usamaarif642@gmail.com>
To: Barry Song <21cnbao@gmail.com>, akpm@linux-foundation.org
Cc: baohua@kernel.org, cerasuolodomenico@gmail.com, corbet@lwn.net,
    david@redhat.com, hannes@cmpxchg.org, kernel-team@meta.com,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    riel@surriel.com, roman.gushchin@linux.dev, rppt@kernel.org,
    ryan.roberts@arm.com, shakeel.butt@linux.dev, willy@infradead.org,
    yuzhao@google.com
Subject: Re: [PATCH v3 4/6] mm: Introduce a pageflag for partially mapped folios
Date: Thu, 15 Aug 2024 16:25:09 +0100
Message-ID: <4acdf2b7-ed65-4087-9806-8f4a187b4eb5@gmail.com>
In-Reply-To: <20240814230533.54938-1-21cnbao@gmail.com>
References: <88d411c5-6d66-4d41-ae86-e0f943e5fb91@gmail.com>
 <20240814230533.54938-1-21cnbao@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 15/08/2024 00:05, Barry Song wrote:
>
> On Thu, Aug 15, 2024 at 12:37 AM Usama Arif wrote:
> [snip]
>>>>>>
>>>>>> -void deferred_split_folio(struct folio *folio)
>>>>>> +void deferred_split_folio(struct folio *folio, bool partially_mapped)
>>>>>>  {
>>>>>>         struct deferred_split *ds_queue = get_deferred_split_queue(folio);
>>>>>>  #ifdef CONFIG_MEMCG
>>>>>> @@ -3485,14 +3487,17 @@ void deferred_split_folio(struct folio *folio)
>>>>>>         if (folio_test_swapcache(folio))
>>>>>>                 return;
>>>>>>
>>>>>> -       if (!list_empty(&folio->_deferred_list))
>>>>>> -               return;
>>>>>> -
>>>>>>         spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
>>>>>> +       if (partially_mapped)
>>>>>> +               folio_set_partially_mapped(folio);
>>>>>> +       else
>>>>>> +               folio_clear_partially_mapped(folio);
>>>>>>         if (list_empty(&folio->_deferred_list)) {
>>>>>> -               if (folio_test_pmd_mappable(folio))
>>>>>> -                       count_vm_event(THP_DEFERRED_SPLIT_PAGE);
>>>>>> -               count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
>>>>>> +               if (partially_mapped) {
>>>>>> +                       if (folio_test_pmd_mappable(folio))
>>>>>> +                               count_vm_event(THP_DEFERRED_SPLIT_PAGE);
>>>>>> +                       count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
>>>>>
>>>>> This code completely broke MTHP_STAT_SPLIT_DEFERRED for PMD_ORDER. It
>>>>> added the folio to the deferred_list as entirely_mapped
>>>>> (partially_mapped == false).
>>>>> However, when partially_mapped becomes true, there's no opportunity to
>>>>> add it again
>>>>> as it has been there on the list. Are you consistently seeing the counter for
>>>>> PMD_ORDER as 0?
>>>>>
>>>>
>>>> Ah I see it, this should fix it?
>>>>
>>>> -void deferred_split_folio(struct folio *folio)
>>>> +/* partially_mapped=false won't clear PG_partially_mapped folio flag */
>>>> +void deferred_split_folio(struct folio *folio, bool partially_mapped)
>>>>  {
>>>>         struct deferred_split *ds_queue = get_deferred_split_queue(folio);
>>>>  #ifdef CONFIG_MEMCG
>>>> @@ -3485,14 +3488,14 @@ void deferred_split_folio(struct folio *folio)
>>>>         if (folio_test_swapcache(folio))
>>>>                 return;
>>>>
>>>> -       if (!list_empty(&folio->_deferred_list))
>>>> -               return;
>>>> -
>>>>         spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
>>>> -       if (list_empty(&folio->_deferred_list)) {
>>>> +       if (partially_mapped) {
>>>> +               folio_set_partially_mapped(folio);
>>>>                 if (folio_test_pmd_mappable(folio))
>>>>                         count_vm_event(THP_DEFERRED_SPLIT_PAGE);
>>>>                 count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
>>>> +       }
>>>> +       if (list_empty(&folio->_deferred_list)) {
>>>>                 list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
>>>>                 ds_queue->split_queue_len++;
>>>>  #ifdef CONFIG_MEMCG
>>>>
>>>
>>> not enough. as deferred_split_folio(true) won't be called if folio has been
>>> deferred_list in __folio_remove_rmap():
>>>
>>>         if (partially_mapped && folio_test_anon(folio) &&
>>>             list_empty(&folio->_deferred_list))
>>>                 deferred_split_folio(folio, true);
>>>
>>> so you will still see 0.
>>>
>>
>> ah yes, Thanks.
>>
>> So below diff over the current v3 series should work for all cases:
>>
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index b4d72479330d..482e3ab60911 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3483,6 +3483,7 @@ void __folio_undo_large_rmappable(struct folio *folio)
>>         spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
>>  }
>>
>> +/* partially_mapped=false won't clear PG_partially_mapped folio flag */
>>  void deferred_split_folio(struct folio *folio, bool partially_mapped)
>>  {
>>         struct deferred_split *ds_queue = get_deferred_split_queue(folio);
>> @@ -3515,16 +3516,16 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
>>                 return;
>>
>>         spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
>> -       if (partially_mapped)
>> +       if (partially_mapped) {
>>                 folio_set_partially_mapped(folio);
>> -       else
>> -               folio_clear_partially_mapped(folio);
>> +               if (folio_test_pmd_mappable(folio))
>> +                       count_vm_event(THP_DEFERRED_SPLIT_PAGE);
>> +               count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
>> +       } else {
>> +               /* partially mapped folios cannont become partially unmapped */
>> +               VM_WARN_ON_FOLIO(folio_test_partially_mapped(folio), folio);
>> +       }
>>         if (list_empty(&folio->_deferred_list)) {
>> -               if (partially_mapped) {
>> -                       if (folio_test_pmd_mappable(folio))
>> -                               count_vm_event(THP_DEFERRED_SPLIT_PAGE);
>> -                       count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
>> -               }
>>                 list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
>>                 ds_queue->split_queue_len++;
>>  #ifdef CONFIG_MEMCG
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 9ad558c2bad0..4c330635aa4e 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1578,7 +1578,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>>          * Check partially_mapped first to ensure it is a large folio.
>>          */
>>         if (partially_mapped && folio_test_anon(folio) &&
>> -           list_empty(&folio->_deferred_list))
>> +           !folio_test_partially_mapped(folio))
>>                 deferred_split_folio(folio, true);
>>
>>         __folio_mod_stat(folio, -nr, -nr_pmdmapped);
>>
>
> This is an improvement, but there's still an issue. Two or more threads can
> execute deferred_split_folio() simultaneously, which could lead to
> DEFERRED_SPLIT being added multiple times. We should double-check
> the status after acquiring the spinlock.
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index f401ceded697..3d247826fb95 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3607,10 +3607,12 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
>
>         spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
>         if (partially_mapped) {
> -               folio_set_partially_mapped(folio);
> -               if (folio_test_pmd_mappable(folio))
> -                       count_vm_event(THP_DEFERRED_SPLIT_PAGE);
> -               count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
> +               if (!folio_test_partially_mapped(folio)) {
> +                       folio_set_partially_mapped(folio);
> +                       if (folio_test_pmd_mappable(folio))
> +                               count_vm_event(THP_DEFERRED_SPLIT_PAGE);
> +                       count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
> +               }
>         } else {
>                 /* partially mapped folios cannont become partially unmapped */
>                 VM_WARN_ON_FOLIO(folio_test_partially_mapped(folio), folio);

Actually, the above is still not thread safe: multiple threads can test
partially_mapped, all see it as false at the same time, and each of them
would then increment the stats. I believe !folio_test_set_partially_mapped()
would be best (a small userspace illustration of the race is appended at the
end of this mail). Hopefully the diff below, over v3, covers all the fixes
that have come up so far.

Independent of this series, I also think it's a good idea to add a selftest
for this deferred_split counter. I will send a separate patch for it that
just maps a THP, unmaps a small part of it and checks the counter (a rough
sketch of the idea is also at the end of this mail). I think
split_huge_page_test.c is probably the right place for it.

If everyone is happy with it, Andrew could replace the original fix patch
in [1] with the patch below.

[1] https://lore.kernel.org/all/20240814200220.F1964C116B1@smtp.kernel.org/

commit c627655548fa09b59849e942da4decc84fa0b0f2
Author: Usama Arif
Date:   Thu Aug 15 16:07:20 2024 +0100

    mm: Introduce a pageflag for partially mapped folios fix

    Fixes the original commit by not clearing the partially mapped bit in
    hugeTLB folios and by fixing the deferred split THP stats.
    Signed-off-by: Usama Arif

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index cecc1bad7910..7bee743ede40 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -863,6 +863,7 @@ static inline void ClearPageCompound(struct page *page)
 }
 FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
 FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
+FOLIO_TEST_SET_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
 #else
 FOLIO_FLAG_FALSE(large_rmappable)
 FOLIO_FLAG_FALSE(partially_mapped)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c024ab0f745c..24371e4ef19b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3459,6 +3459,7 @@ void __folio_undo_large_rmappable(struct folio *folio)
         spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
 }
 
+/* partially_mapped=false won't clear PG_partially_mapped folio flag */
 void deferred_split_folio(struct folio *folio, bool partially_mapped)
 {
         struct deferred_split *ds_queue = get_deferred_split_queue(folio);
@@ -3488,16 +3489,17 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
                 return;
 
         spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
-        if (partially_mapped)
-                folio_set_partially_mapped(folio);
-        else
-                folio_clear_partially_mapped(folio);
-        if (list_empty(&folio->_deferred_list)) {
-                if (partially_mapped) {
+        if (partially_mapped) {
+                if (!folio_test_set_partially_mapped(folio)) {
                         if (folio_test_pmd_mappable(folio))
                                 count_vm_event(THP_DEFERRED_SPLIT_PAGE);
                         count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
                 }
+        } else {
+                /* partially mapped folios cannot become non-partially mapped */
+                VM_WARN_ON_FOLIO(folio_test_partially_mapped(folio), folio);
+        }
+        if (list_empty(&folio->_deferred_list)) {
                 list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
                 ds_queue->split_queue_len++;
 #ifdef CONFIG_MEMCG
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2ae2d9a18e40..1fdd9eab240c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1758,7 +1758,6 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
                 free_gigantic_folio(folio, huge_page_order(h));
         } else {
                 INIT_LIST_HEAD(&folio->_deferred_list);
-                folio_clear_partially_mapped(folio);
                 folio_put(folio);
         }
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index 9ad558c2bad0..4c330635aa4e 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1578,7 +1578,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
          * Check partially_mapped first to ensure it is a large folio.
          */
         if (partially_mapped && folio_test_anon(folio) &&
-            list_empty(&folio->_deferred_list))
+            !folio_test_partially_mapped(folio))
                 deferred_split_folio(folio, true);
 
         __folio_mod_stat(folio, -nr, -nr_pmdmapped);
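
As an aside, here is the tiny userspace analogue I mentioned above (plain
C11 atomics and pthreads, obviously not the kernel code itself) of why a
plain test followed by a set is not enough: every racing thread can observe
the flag as clear and each would bump the counter, whereas an atomic
test-and-set, which is what folio_test_set_partially_mapped() gives us,
lets exactly one of them do the counting.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag partially_mapped = ATOMIC_FLAG_INIT;
static atomic_int split_deferred_stat;

static void *unmap_part_of_folio(void *arg)
{
        (void)arg;
        /*
         * atomic_flag_test_and_set() returns the old value, like
         * folio_test_set_partially_mapped(); only the first caller
         * sees "false" and increments the stat.
         */
        if (!atomic_flag_test_and_set(&partially_mapped))
                atomic_fetch_add(&split_deferred_stat, 1);
        return NULL;
}

int main(void)
{
        pthread_t t[8];

        for (int i = 0; i < 8; i++)
                pthread_create(&t[i], NULL, unmap_part_of_folio, NULL);
        for (int i = 0; i < 8; i++)
                pthread_join(t[i], NULL);

        /* Always prints 1, no matter how the threads interleave. */
        printf("split_deferred counted %d time(s)\n",
               atomic_load(&split_deferred_stat));
        return 0;
}

Build with gcc -pthread; swapping the test-and-set for a separate test and
set is what reintroduces the double counting.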
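
And a very rough sketch of the selftest idea, assuming a 2M PMD size, THP
enabled on the system, and that thp_deferred_split_page in /proc/vmstat is
the counter we care about; the real test would reuse the helpers already in
split_huge_page_test.c rather than open-coding the THP setup like this:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define PMD_SIZE   (2UL << 20)
#define PAGE_SIZE  4096UL

/* Read thp_deferred_split_page from /proc/vmstat, or -1 on failure. */
static long read_deferred_split(void)
{
        char line[256];
        long val = -1;
        FILE *f = fopen("/proc/vmstat", "r");

        if (!f)
                return -1;
        while (fgets(line, sizeof(line), f))
                if (sscanf(line, "thp_deferred_split_page %ld", &val) == 1)
                        break;
        fclose(f);
        return val;
}

int main(void)
{
        long before, after;
        /* Over-allocate so the candidate region can be 2M-aligned. */
        char *raw = mmap(NULL, 2 * PMD_SIZE, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        char *thp;

        if (raw == MAP_FAILED)
                return 1;
        thp = (char *)(((unsigned long)raw + PMD_SIZE - 1) & ~(PMD_SIZE - 1));
        madvise(thp, PMD_SIZE, MADV_HUGEPAGE);
        memset(thp, 1, PMD_SIZE);          /* fault it in, hopefully as a THP */

        before = read_deferred_split();
        munmap(thp, PAGE_SIZE);            /* unmap a small part of the THP */
        after = read_deferred_split();

        printf("thp_deferred_split_page: %ld -> %ld\n", before, after);
        return after > before ? 0 : 1;
}

Whether the region really ends up backed by a THP depends on the system
configuration, so the actual selftest would need to verify that (or use the
existing split_huge_page_test.c helpers that already do).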