From: Yang Shi
Date: Mon, 28 Oct 2024 11:39:15 -0700
Subject: Re: [PATCH hotfix v2 2/2] mm/thp: fix deferred split unqueue naming and locking
To: Hugh Dickins
Cc: Andrew Morton, Usama Arif, Wei Yang, "Kirill A. Shutemov", Matthew Wilcox, David Hildenbrand, Johannes Weiner, Baolin Wang, Barry Song, Kefeng Wang, Ryan Roberts, Nhat Pham, Zi Yan, Chris Li, Shakeel Butt, linux-kernel@vger.kernel.org, linux-mm@kvack.org
In-Reply-To: <8dc111ae-f6db-2da7-b25c-7a20b1effe3b@google.com>
References: <81e34a8b-113a-0701-740e-2135c97eb1d7@google.com> <8dc111ae-f6db-2da7-b25c-7a20b1effe3b@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

On Sun, Oct 27, 2024 at 1:02 PM Hugh Dickins wrote:
>
> Recent changes are putting more pressure on THP deferred split queues:
> under load revealing long-standing races, causing list_del corruptions,
> "Bad page state"s and worse (I keep BUGs in both of those, so usually
> don't get to see how badly they end up without). The relevant recent
> changes being 6.8's mTHP, 6.10's mTHP swapout, and 6.12's mTHP swapin,
> improved swap allocation, and underused THP splitting.
>
> Before fixing locking: rename misleading folio_undo_large_rmappable(),
> which does not undo large_rmappable, to folio_unqueue_deferred_split(),
> which is what it does. But that and its out-of-line __callee are mm
> internals of very limited usability: add comment and WARN_ON_ONCEs to
> check usage; and return a bool to say if a deferred split was unqueued,
> which can then be used in WARN_ON_ONCEs around safety checks (sparing
> callers the arcane conditionals in __folio_unqueue_deferred_split()).
>
> Just omit the folio_unqueue_deferred_split() from free_unref_folios(),
> all of whose callers now call it beforehand (and if any forget then
> bad_page() will tell) - except for its caller put_pages_list(), which
> itself no longer has any callers (and will be deleted separately).
>
> Swapout: mem_cgroup_swapout() has been resetting folio->memcg_data 0
> without checking and unqueueing a THP folio from deferred split list;
> which is unfortunate, since the split_queue_lock depends on the memcg
> (when memcg is enabled); so swapout has been unqueueing such THPs later,
> when freeing the folio, using the pgdat's lock instead: potentially
> corrupting the memcg's list. __remove_mapping() has frozen refcount to
> 0 here, so no problem with calling folio_unqueue_deferred_split() before
> resetting memcg_data.
>
> That goes back to 5.4 commit 87eaceb3faa5 ("mm: thp: make deferred split
> shrinker memcg aware"): which included a check on swapcache before adding
> to deferred queue, but no check on deferred queue before adding THP to
> swapcache. That worked fine with the usual sequence of events in reclaim
> (though there were a couple of rare ways in which a THP on deferred queue
> could have been swapped out), but 6.12 commit dafff3f4c850 ("mm: split
> underused THPs") avoids splitting underused THPs in reclaim, which makes
> swapcache THPs on deferred queue commonplace.
>
> Keep the check on swapcache before adding to deferred queue? Yes: it is
> no longer essential, but preserves the existing behaviour, and is likely
> to be a worthwhile optimization (vmstat showed much more traffic on the
> queue under swapping load if the check was removed); update its comment.
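
For reference, the split_queue_lock dependency on memcg described above
comes from how the queue is picked. Roughly, for a CONFIG_MEMCG build,
mm/huge_memory.c's get_deferred_split_queue() looks like this simplified
sketch (not part of this patch):

        static struct deferred_split *get_deferred_split_queue(struct folio *folio)
        {
                struct mem_cgroup *memcg = folio_memcg(folio);
                struct pglist_data *pgdat = NODE_DATA(folio_nid(folio));

                /* a memcg-charged folio queues on the memcg's list and lock */
                if (memcg)
                        return &memcg->deferred_split_queue;
                /* otherwise fall back to the per-node queue */
                return &pgdat->deferred_split_queue;
        }

So once folio->memcg_data has been cleared, a later unqueue picks the
pgdat queue and takes the pgdat's split_queue_lock while the folio may
still sit on the memcg's list, which is the corruption described above;
hence unqueueing before resetting memcg_data.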
>
> Memcg-v1 move (deprecated): mem_cgroup_move_account() has been changing
> folio->memcg_data without checking and unqueueing a THP folio from the
> deferred list, sometimes corrupting "from" memcg's list, like swapout.
> Refcount is non-zero here, so folio_unqueue_deferred_split() can only be
> used in a WARN_ON_ONCE to validate the fix, which must be done earlier:
> mem_cgroup_move_charge_pte_range() first try to split the THP (splitting
> of course unqueues), or skip it if that fails. Not ideal, but moving
> charge has been requested, and khugepaged should repair the THP later:
> nobody wants new custom unqueueing code just for this deprecated case.
>
> The 87eaceb3faa5 commit did have the code to move from one deferred list
> to another (but was not conscious of its unsafety while refcount non-0);
> but that was removed by 5.6 commit fac0516b5534 ("mm: thp: don't need
> care deferred split queue in memcg charge move path"), which argued that
> the existence of a PMD mapping guarantees that the THP cannot be on a
> deferred list. As above, false in rare cases, and now commonly false.
>
> Backport to 6.11 should be straightforward. Earlier backports must take
> care that other _deferred_list fixes and dependencies are included.
> There is not a strong case for backports, but they can fix cornercases.
>
> Fixes: 87eaceb3faa5 ("mm: thp: make deferred split shrinker memcg aware")
> Fixes: dafff3f4c850 ("mm: split underused THPs")
> Signed-off-by: Hugh Dickins
> Cc:
> ---
> Based on 6.12-rc4
> v2: adjusted commit message following info from Yang and David
>     reinstated deferred_split_folio swapcache check, adjusting comment
>     omitted (mem_cgroup_disabled) unqueue from free_unref_folios

Reviewed-by: Yang Shi

>
>  mm/huge_memory.c   | 35 ++++++++++++++++++++++++++---------
>  mm/internal.h      | 10 +++++-----
>  mm/memcontrol-v1.c | 25 +++++++++++++++++++++++++
>  mm/memcontrol.c    |  8 +++++---
>  mm/migrate.c       |  4 ++--
>  mm/page_alloc.c    |  1 -
>  mm/swap.c          |  4 ++--
>  mm/vmscan.c        |  4 ++--
>  8 files changed, 67 insertions(+), 24 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index a1d345f1680c..03fd4bc39ea1 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3588,10 +3588,27 @@ int split_folio_to_list(struct folio *folio, struct list_head *list)
>          return split_huge_page_to_list_to_order(&folio->page, list, ret);
>  }
>
> -void __folio_undo_large_rmappable(struct folio *folio)
> +/*
> + * __folio_unqueue_deferred_split() is not to be called directly:
> + * the folio_unqueue_deferred_split() inline wrapper in mm/internal.h
> + * limits its calls to those folios which may have a _deferred_list for
> + * queueing THP splits, and that list is (racily observed to be) non-empty.
> + *
> + * It is unsafe to call folio_unqueue_deferred_split() until folio refcount is
> + * zero: because even when split_queue_lock is held, a non-empty _deferred_list
> + * might be in use on deferred_split_scan()'s unlocked on-stack list.
> + *
> + * If memory cgroups are enabled, split_queue_lock is in the mem_cgroup: it is
> + * therefore important to unqueue deferred split before changing folio memcg.
> + */
> +bool __folio_unqueue_deferred_split(struct folio *folio)
>  {
>          struct deferred_split *ds_queue;
>          unsigned long flags;
> +        bool unqueued = false;
> +
> +        WARN_ON_ONCE(folio_ref_count(folio));
> +        WARN_ON_ONCE(!mem_cgroup_disabled() && !folio_memcg(folio));
>
>          ds_queue = get_deferred_split_queue(folio);
>          spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
> @@ -3603,8 +3620,11 @@ void __folio_undo_large_rmappable(struct folio *folio)
>                                  MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
>                  }
>                  list_del_init(&folio->_deferred_list);
> +                unqueued = true;
>          }
>          spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
> +
> +        return unqueued;        /* useful for debug warnings */
>  }
>
>  /* partially_mapped=false won't clear PG_partially_mapped folio flag */
> @@ -3627,14 +3647,11 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
>                  return;
>
>          /*
> -         * The try_to_unmap() in page reclaim path might reach here too,
> -         * this may cause a race condition to corrupt deferred split queue.
> -         * And, if page reclaim is already handling the same folio, it is
> -         * unnecessary to handle it again in shrinker.
> -         *
> -         * Check the swapcache flag to determine if the folio is being
> -         * handled by page reclaim since THP swap would add the folio into
> -         * swap cache before calling try_to_unmap().
> +         * Exclude swapcache: originally to avoid a corrupt deferred split
> +         * queue. Nowadays that is fully prevented by mem_cgroup_swapout();
> +         * but if page reclaim is already handling the same folio, it is
> +         * unnecessary to handle it again in the shrinker, so excluding
> +         * swapcache here may still be a useful optimization.
>           */
>          if (folio_test_swapcache(folio))
>                  return;
> diff --git a/mm/internal.h b/mm/internal.h
> index 93083bbeeefa..16c1f3cd599e 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -639,11 +639,11 @@ static inline void folio_set_order(struct folio *folio, unsigned int order)
>  #endif
>  }
>
> -void __folio_undo_large_rmappable(struct folio *folio);
> -static inline void folio_undo_large_rmappable(struct folio *folio)
> +bool __folio_unqueue_deferred_split(struct folio *folio);
> +static inline bool folio_unqueue_deferred_split(struct folio *folio)
>  {
>          if (folio_order(folio) <= 1 || !folio_test_large_rmappable(folio))
> -                return;
> +                return false;
>
>          /*
>           * At this point, there is no one trying to add the folio to
> @@ -651,9 +651,9 @@ static inline void folio_undo_large_rmappable(struct folio *folio)
>           * to check without acquiring the split_queue_lock.
>           */
>          if (data_race(list_empty(&folio->_deferred_list)))
> -                return;
> +                return false;
>
> -        __folio_undo_large_rmappable(folio);
> +        return __folio_unqueue_deferred_split(folio);
>  }
>
>  static inline struct folio *page_rmappable_folio(struct page *page)
> diff --git a/mm/memcontrol-v1.c b/mm/memcontrol-v1.c
> index 81d8819f13cd..f8744f5630bb 100644
> --- a/mm/memcontrol-v1.c
> +++ b/mm/memcontrol-v1.c
> @@ -848,6 +848,8 @@ static int mem_cgroup_move_account(struct folio *folio,
>          css_get(&to->css);
>          css_put(&from->css);
>
> +        /* Warning should never happen, so don't worry about refcount non-0 */
> +        WARN_ON_ONCE(folio_unqueue_deferred_split(folio));
>          folio->memcg_data = (unsigned long)to;
>
>          __folio_memcg_unlock(from);
> @@ -1217,7 +1219,9 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
>          enum mc_target_type target_type;
>          union mc_target target;
>          struct folio *folio;
> +        bool tried_split_before = false;
>
> +retry_pmd:
>          ptl = pmd_trans_huge_lock(pmd, vma);
>          if (ptl) {
>                  if (mc.precharge < HPAGE_PMD_NR) {
> @@ -1227,6 +1231,27 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
>                  target_type = get_mctgt_type_thp(vma, addr, *pmd, &target);
>                  if (target_type == MC_TARGET_PAGE) {
>                          folio = target.folio;
> +                        /*
> +                         * Deferred split queue locking depends on memcg,
> +                         * and unqueue is unsafe unless folio refcount is 0:
> +                         * split or skip if on the queue? first try to split.
> +                         */
> +                        if (!list_empty(&folio->_deferred_list)) {
> +                                spin_unlock(ptl);
> +                                if (!tried_split_before)
> +                                        split_folio(folio);
> +                                folio_unlock(folio);
> +                                folio_put(folio);
> +                                if (tried_split_before)
> +                                        return 0;
> +                                tried_split_before = true;
> +                                goto retry_pmd;
> +                        }
> +                        /*
> +                         * So long as that pmd lock is held, the folio cannot
> +                         * be racily added to the _deferred_list, because
> +                         * __folio_remove_rmap() will find !partially_mapped.
> +                         */
>                          if (folio_isolate_lru(folio)) {
>                                  if (!mem_cgroup_move_account(folio, true,
>                                                               mc.from, mc.to)) {
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 2703227cce88..06df2af97415 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -4629,9 +4629,6 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
>          struct obj_cgroup *objcg;
>
>          VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
> -        VM_BUG_ON_FOLIO(folio_order(folio) > 1 &&
> -                        !folio_test_hugetlb(folio) &&
> -                        !list_empty(&folio->_deferred_list), folio);
>
>          /*
>           * Nobody should be changing or seriously looking at
> @@ -4678,6 +4675,7 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
>                  ug->nr_memory += nr_pages;
>                  ug->pgpgout++;
>
> +                WARN_ON_ONCE(folio_unqueue_deferred_split(folio));
>                  folio->memcg_data = 0;
>          }
>
> @@ -4789,6 +4787,9 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
>
>          /* Transfer the charge and the css ref */
>          commit_charge(new, memcg);
> +
> +        /* Warning should never happen, so don't worry about refcount non-0 */
> +        WARN_ON_ONCE(folio_unqueue_deferred_split(old));
>          old->memcg_data = 0;
>  }
>
> @@ -4975,6 +4976,7 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
>          VM_BUG_ON_FOLIO(oldid, folio);
>          mod_memcg_state(swap_memcg, MEMCG_SWAP, nr_entries);
>
> +        folio_unqueue_deferred_split(folio);
>          folio->memcg_data = 0;
>
>          if (!mem_cgroup_is_root(memcg))
> diff --git a/mm/migrate.c b/mm/migrate.c
> index df91248755e4..691f25ee2489 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -489,7 +489,7 @@ static int __folio_migrate_mapping(struct address_space *mapping,
>              folio_test_large_rmappable(folio)) {
>                  if (!folio_ref_freeze(folio, expected_count))
>                          return -EAGAIN;
> -                folio_undo_large_rmappable(folio);
> +                folio_unqueue_deferred_split(folio);
>                  folio_ref_unfreeze(folio, expected_count);
>          }
>
> @@ -514,7 +514,7 @@ static int __folio_migrate_mapping(struct address_space *mapping,
>          }
>
>          /* Take off deferred split queue while frozen and memcg set */
> -        folio_undo_large_rmappable(folio);
> +        folio_unqueue_deferred_split(folio);
>
>          /*
>           * Now we know that no one else is looking at the folio:
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4b21a368b4e2..815100a45b25 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2681,7 +2681,6 @@ void free_unref_folios(struct folio_batch *folios)
>                  unsigned long pfn = folio_pfn(folio);
>                  unsigned int order = folio_order(folio);
>
> -                folio_undo_large_rmappable(folio);
>                  if (!free_pages_prepare(&folio->page, order))
>                          continue;
>                  /*
> diff --git a/mm/swap.c b/mm/swap.c
> index 835bdf324b76..b8e3259ea2c4 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -121,7 +121,7 @@ void __folio_put(struct folio *folio)
>          }
>
>          page_cache_release(folio);
> -        folio_undo_large_rmappable(folio);
> +        folio_unqueue_deferred_split(folio);
>          mem_cgroup_uncharge(folio);
>          free_unref_page(&folio->page, folio_order(folio));
>  }
> @@ -988,7 +988,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
>                          free_huge_folio(folio);
>                          continue;
>                  }
> -                folio_undo_large_rmappable(folio);
> +                folio_unqueue_deferred_split(folio);
>                  __page_cache_release(folio, &lruvec, &flags);
>
>                  if (j != i)
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index eb4e8440c507..635d45745b73 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1475,7 +1475,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>                   */
>                  nr_reclaimed += nr_pages;
>
> -                folio_undo_large_rmappable(folio);
> +                folio_unqueue_deferred_split(folio);
>                  if (folio_batch_add(&free_folios, folio) == 0) {
>                          mem_cgroup_uncharge_folios(&free_folios);
>                          try_to_unmap_flush();
> @@ -1863,7 +1863,7 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
>                  if (unlikely(folio_put_testzero(folio))) {
>                          __folio_clear_lru_flags(folio);
>
> -                        folio_undo_large_rmappable(folio);
> +                        folio_unqueue_deferred_split(folio);
>                          if (folio_batch_add(&free_folios, folio) == 0) {
>                                  spin_unlock_irq(&lruvec->lru_lock);
>                                  mem_cgroup_uncharge_folios(&free_folios);
> --
> 2.35.3