From: Yu Zhao <yuzhao@google.com>
Date: Sat, 11 Jan 2025 15:12:47 -0700
Subject: Re: [PATCH v7 mm-unstable] mm: vmscan: retry folios written back while isolated for traditional LRU
To: Chen Ridong, Wei Xu
Cc: akpm@linux-foundation.org, mhocko@suse.com, hannes@cmpxchg.org,
	yosryahmed@google.com, david@redhat.com, willy@infradead.org,
	ryan.roberts@arm.com, baohua@kernel.org, 21cnbao@gmail.com,
	wangkefeng.wang@huawei.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, chenridong@huawei.com,
	wangweiyang2@huawei.com, xieym_ict@hotmail.com
In-Reply-To: <20250111091504.1363075-1-chenridong@huaweicloud.com>
References: <20250111091504.1363075-1-chenridong@huaweicloud.com>
On Sat, Jan 11, 2025 at 2:25 AM Chen Ridong <chenridong@huaweicloud.com> wrote:
>
> From: Chen Ridong <chenridong@huawei.com>
>
> As commit 359a5e1416ca ("mm: multi-gen LRU: retry folios written back
> while isolated") mentioned:
>
> "The page reclaim isolates a batch of folios from the tail of one of the
> LRU lists and works on those folios one by one. For a suitable
> swap-backed folio, if the swap device is async, it queues that folio for
> writeback. After the page reclaim finishes an entire batch, it puts back
> the folios it queued for writeback to the head of the original LRU list.
>
> In the meantime, the page writeback flushes the queued folios also by
> batches. Its batching logic is independent from that of the page
> reclaim. For each of the folios it writes back, the page writeback calls
> folio_rotate_reclaimable(), which tries to rotate a folio to the tail.
>
> folio_rotate_reclaimable() only works for a folio after the page reclaim
> has put it back. If an async swap device is fast enough, the page
> writeback can finish with that folio while the page reclaim is still
> working on the rest of the batch containing it. In this case, that folio
> will remain at the head and the page reclaim will not retry it before
> reaching there."
>
> Commit 359a5e1416ca ("mm: multi-gen LRU: retry folios written back
> while isolated") only fixed the issue for MGLRU. However, the issue
> also exists in the traditional active/inactive LRU and was found at [1].

The active/inactive LRU needs more careful thought due to its
complexity. Details below.

> It can be reproduced with the steps below:
>
> 1. Compile with CONFIG_TRANSPARENT_HUGEPAGE=y.
> 2. Mount memcg v1, create a memcg named test_memcg, and set
>    limit_in_bytes=1G and memsw.limit_in_bytes=2G for it.
> 3. Create a 1G swap file, and allocate 1.05G of anon memory in test_memcg.
>
> It was found that:
>
> cat memory.limit_in_bytes
> 1073741824
> cat memory.memsw.limit_in_bytes
> 2147483648
> cat memory.usage_in_bytes
> 1073664000
> cat memory.memsw.usage_in_bytes
> 1129840640
>
> free -h
>            total        used        free
> Mem:        31Gi       1.2Gi        28Gi
> Swap:      1.0Gi       1.0Gi       2.0Mi
>
> As shown above, test_memcg used only about 50M of swap, yet almost the
> whole 1G of swap space was consumed, which means 900M+ may be wasted
> because other memcgs cannot use that swap space.
>
> This issue should be fixed in the same way as for MGLRU. Therefore, the
> common logic was first extracted into the find_folios_written_back()
> function, which is then reused in shrink_inactive_list() to retry
> reclaiming those folios that may have missed the rotation for the
> traditional LRU.
>
> After the change, the same test case used only 54M of swap.
>
> cat memory.usage_in_bytes
> 1073463296
> cat memory.memsw.usage_in_bytes
> 1129828352
>
> free -h
>            total        used        free
> Mem:        31Gi       1.2Gi        28Gi
> Swap:      1.0Gi        54Mi       969Mi
>
> [1] https://lore.kernel.org/linux-kernel/20241010081802.290893-1-chenridong@huaweicloud.com/
> [2] https://lore.kernel.org/linux-kernel/CAGsJ_4zqL8ZHNRZ44o_CC69kE7DBVXvbZfvmQxMGiFqRxqHQdA@mail.gmail.com/
>
> Signed-off-by: Chen Ridong <chenridong@huawei.com>
> ---
>
> v6->v7:
> - fix conflict based on mm-unstable.
> - update the commit message (quote from Yu's commit message, and add
>   the improvements seen after the change).
> - restore 'is_retrying' to 'skip_retry' to keep the original semantics.
>
> v6: https://lore.kernel.org/linux-kernel/20241223082004.3759152-1-chenridong@huaweicloud.com/
>
>  mm/vmscan.c | 114 ++++++++++++++++++++++++++++++++++------------------
>  1 file changed, 76 insertions(+), 38 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 01dce6f26..6861b6937 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -183,6 +183,9 @@ struct scan_control {
>         struct reclaim_state reclaim_state;
>  };
>
> +static inline void find_folios_written_back(struct list_head *list,
> +               struct list_head *clean, struct lruvec *lruvec, int type, bool is_retrying);
> +
>  #ifdef ARCH_HAS_PREFETCHW
>  #define prefetchw_prev_lru_folio(_folio, _base, _field)                 \
>         do {                                                            \
> @@ -1960,14 +1963,18 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
>                 enum lru_list lru)
>  {
>         LIST_HEAD(folio_list);
> +       LIST_HEAD(clean_list);
>         unsigned long nr_scanned;
> -       unsigned int nr_reclaimed = 0;
> +       unsigned int nr_reclaimed, total_reclaimed = 0;
> +       unsigned int nr_pageout = 0;
> +       unsigned int nr_unqueued_dirty = 0;
>         unsigned long nr_taken;
>         struct reclaim_stat stat;
>         bool file = is_file_lru(lru);
>         enum vm_event_item item;
>         struct pglist_data *pgdat = lruvec_pgdat(lruvec);
>         bool stalled = false;
> +       bool skip_retry = false;
>
>         while (unlikely(too_many_isolated(pgdat, file, sc))) {
>                 if (stalled)
> @@ -2001,22 +2008,47 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
>         if (nr_taken == 0)
>                 return 0;
>
> +retry:
>         nr_reclaimed = shrink_folio_list(&folio_list, pgdat, sc, &stat, false);
>
> +       sc->nr.dirty += stat.nr_dirty;
> +       sc->nr.congested += stat.nr_congested;
> +       sc->nr.unqueued_dirty += stat.nr_unqueued_dirty;
> +       sc->nr.writeback += stat.nr_writeback;

I think this change breaks the tests on the stats above, e.g.,
wakeup_flusher_threads(), because the same dirty/writeback folio can be
counted twice. The reason is that folio_test_dirty/writeback() can't
account for dirty/writeback buffer heads, which can only be done by
folio_check_dirty_writeback().

For MGLRU, this has been broken since day one, and commit 1bc542c6a0d1
("mm/vmscan: wake up flushers conditionally to avoid cgroup OOM")
doesn't account for it either. I'll get around to that.
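To make the double count concrete, here is a userspace toy model of the
scenario as I understand it (not kernel code; struct toy_folio and its
helpers are made up for this sketch): dirtiness that lives only in
buffer heads is invisible to folio_test_dirty(), so the folio passes
the retry check, and the retry pass recounts it into the accumulated
per-pass stats.

/*
 * Toy model: one folio, two reclaim passes, sc->nr.dirty-style
 * accumulation. Build with: cc -std=c99 -o toy toy.c
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_folio {
        bool flag_dirty;        /* what folio_test_dirty() sees */
        bool bh_dirty;          /* dirtiness hidden in buffer heads */
};

/* stand-in for folio_check_dirty_writeback(): sees buffer heads too */
static bool pass_counts_dirty(const struct toy_folio *f)
{
        return f->flag_dirty || f->bh_dirty;
}

int main(void)
{
        struct toy_folio folio = { .flag_dirty = false, .bh_dirty = true };
        unsigned int accumulated_nr_dirty = 0;
        bool skip_retry = false;

        do {
                /* each shrink_folio_list()-like pass recounts the folio */
                if (pass_counts_dirty(&folio))
                        accumulated_nr_dirty++;

                /*
                 * The retry check consults only the folio flag, so this
                 * folio looks clean and gets a second pass.
                 */
                if (!folio.flag_dirty && !skip_retry) {
                        skip_retry = true;
                        continue;
                }
                break;
        } while (1);

        /* one folio, counted twice across the two passes */
        printf("accumulated nr_dirty = %u\n", accumulated_nr_dirty);
        return 0;
}

One isolated folio thus contributes twice to the accumulated counters,
so comparisons against nr_taken (e.g., the nr_unqueued_dirty == nr_taken
flusher-wakeup test) no longer match one folio to one count.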
> +       sc->nr.immediate += stat.nr_immediate;
> +       total_reclaimed += nr_reclaimed;
> +       nr_pageout += stat.nr_pageout;
> +       nr_unqueued_dirty += stat.nr_unqueued_dirty;
> +
> +       trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
> +                       nr_scanned, nr_reclaimed, &stat, sc->priority, file);
> +
> +       find_folios_written_back(&folio_list, &clean_list, lruvec, 0, skip_retry);
> +
>         spin_lock_irq(&lruvec->lru_lock);
>         move_folios_to_lru(lruvec, &folio_list);
>
>         __mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(),
>                                         stat.nr_demoted);
> -       __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
>         item = PGSTEAL_KSWAPD + reclaimer_offset();
>         if (!cgroup_reclaim(sc))
>                 __count_vm_events(item, nr_reclaimed);
>         __count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
>         __count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
> +
> +       if (!list_empty(&clean_list)) {
> +               list_splice_init(&clean_list, &folio_list);
> +               skip_retry = true;
> +               spin_unlock_irq(&lruvec->lru_lock);
> +               goto retry;
> +       }
> +       __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
>         spin_unlock_irq(&lruvec->lru_lock);
> +       sc->nr.taken += nr_taken;
> +       if (file)
> +               sc->nr.file_taken += nr_taken;
>
> -       lru_note_cost(lruvec, file, stat.nr_pageout, nr_scanned - nr_reclaimed);
> +       lru_note_cost(lruvec, file, nr_pageout, nr_scanned - total_reclaimed);
>
>         /*
>          * If dirty folios are scanned that are not queued for IO, it
> @@ -2029,7 +2061,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
>          * the flushers simply cannot keep up with the allocation
>          * rate. Nudge the flusher threads in case they are asleep.
>          */
> -       if (stat.nr_unqueued_dirty == nr_taken) {
> +       if (nr_unqueued_dirty == nr_taken) {
>                 wakeup_flusher_threads(WB_REASON_VMSCAN);
>                 /*
>                  * For cgroupv1 dirty throttling is achieved by waking up
> @@ -2044,18 +2076,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
>                         reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);
>         }
>
> -       sc->nr.dirty += stat.nr_dirty;
> -       sc->nr.congested += stat.nr_congested;
> -       sc->nr.unqueued_dirty += stat.nr_unqueued_dirty;
> -       sc->nr.writeback += stat.nr_writeback;
> -       sc->nr.immediate += stat.nr_immediate;
> -       sc->nr.taken += nr_taken;
> -       if (file)
> -               sc->nr.file_taken += nr_taken;
> -
> -       trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
> -                       nr_scanned, nr_reclaimed, &stat, sc->priority, file);
> -       return nr_reclaimed;
> +       return total_reclaimed;
>  }
>
>  /*
> @@ -4637,8 +4658,6 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
>         int reclaimed;
>         LIST_HEAD(list);
>         LIST_HEAD(clean);
> -       struct folio *folio;
> -       struct folio *next;
>         enum vm_event_item item;
>         struct reclaim_stat stat;
>         struct lru_gen_mm_walk *walk;
> @@ -4668,26 +4687,7 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
>                         scanned, reclaimed, &stat, sc->priority,
>                         type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
>
> -       list_for_each_entry_safe_reverse(folio, next, &list, lru) {
> -               DEFINE_MIN_SEQ(lruvec);
> -
> -               if (!folio_evictable(folio)) {
> -                       list_del(&folio->lru);
> -                       folio_putback_lru(folio);
> -                       continue;
> -               }
> -
> -               /* retry folios that may have missed folio_rotate_reclaimable() */
> -               if (!skip_retry && !folio_test_active(folio) && !folio_mapped(folio) &&
> -                   !folio_test_dirty(folio) && !folio_test_writeback(folio)) {
> -                       list_move(&folio->lru, &clean);
> -                       continue;
> -               }
> -
> -               /* don't add rejected folios to the oldest generation */
> -               if (lru_gen_folio_seq(lruvec, folio, false) == min_seq[type])
> -                       set_mask_bits(&folio->flags, LRU_REFS_FLAGS, BIT(PG_active));
> -       }
> +       find_folios_written_back(&list, &clean, lruvec, type, skip_retry);
>
>         spin_lock_irq(&lruvec->lru_lock);
>
> @@ -5706,6 +5706,44 @@ static void lru_gen_shrink_node(struct pglist_data *pgdat, struct scan_control *
>
>  #endif /* CONFIG_LRU_GEN */
>
> +/**
> + * find_folios_written_back - Find and move the written back folios to a new list.
> + * @list: folios list
> + * @clean: the written back folios list
> + * @lruvec: the lruvec
> + * @type: LRU_GEN_ANON/LRU_GEN_FILE, only for multi-gen LRU
> + * @skip_retry: whether to skip the retry.
> + */
> +static inline void find_folios_written_back(struct list_head *list,
> +               struct list_head *clean, struct lruvec *lruvec, int type, bool skip_retry)
> +{
> +       struct folio *folio;
> +       struct folio *next;
> +
> +       list_for_each_entry_safe_reverse(folio, next, list, lru) {
> +#ifdef CONFIG_LRU_GEN
> +               DEFINE_MIN_SEQ(lruvec);
> +#endif
> +               if (!folio_evictable(folio)) {
> +                       list_del(&folio->lru);
> +                       folio_putback_lru(folio);
> +                       continue;
> +               }
> +
> +               /* retry folios that may have missed folio_rotate_reclaimable() */
> +               if (!skip_retry && !folio_test_active(folio) && !folio_mapped(folio) &&
> +                   !folio_test_dirty(folio) && !folio_test_writeback(folio)) {

Have you verified that this condition also holds for the
active/inactive LRU, or did you just assume it? IOW, how do we know the
active/inactive LRU doesn't think this folio should be kept (and put
back to the head of the inactive LRU list)?
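For reference, here is the check in question restated as a standalone
predicate (a userspace toy with made-up types, not the kernel's code);
it is what a folio must pass before being moved to the clean list for
a retry:

#include <stdbool.h>
#include <stdio.h>

/* toy stand-ins for the folio flag tests used by the retry check */
struct toy_folio {
        bool active;            /* folio_test_active() */
        bool mapped;            /* folio_mapped() */
        bool dirty;             /* folio_test_dirty() */
        bool writeback;         /* folio_test_writeback() */
};

/* a folio is retried only when all four tests come back false */
static bool retry_candidate(const struct toy_folio *f, bool skip_retry)
{
        return !skip_retry && !f->active && !f->mapped &&
               !f->dirty && !f->writeback;
}

int main(void)
{
        /* writeback completed while the folio was isolated: retried */
        struct toy_folio written_back = { 0 };
        /* re-activated while isolated: not retried */
        struct toy_folio activated = { .active = true };

        printf("written_back: %d\n", retry_candidate(&written_back, false));
        printf("activated:    %d\n", retry_candidate(&activated, false));
        return 0;
}

These are the same four tests the MGLRU retry path has used; the open
question above is whether a folio that passes them on the
active/inactive LRU might still be one that LRU meant to keep.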