From mboxrd@z Thu Jan 1 00:00:00 1970
From: Barry Song <21cnbao@gmail.com>
Date: Tue, 10 Dec 2024 16:24:43 +0800
Subject: Re: [PATCH v4 1/1] mm: vmascan: retry folios written back while isolated
 for traditional LRU
To: chenridong
Cc: Chen Ridong, akpm@linux-foundation.org, mhocko@suse.com,
 hannes@cmpxchg.org, yosryahmed@google.com, yuzhao@google.com,
 david@redhat.com, willy@infradead.org, ryan.roberts@arm.com,
 wangkefeng.wang@huawei.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, wangweiyang2@huawei.com, xieym_ict@hotmail.com
In-Reply-To: <13223d50-6218-49db-8356-700a1907e224@huawei.com>
References: <20241209083618.2889145-1-chenridong@huaweicloud.com>
 <20241209083618.2889145-2-chenridong@huaweicloud.com>
 <13223d50-6218-49db-8356-700a1907e224@huawei.com>

On Tue, Dec 10, 2024 at 2:41 PM chenridong wrote:
>
>
>
> On 2024/12/10 12:54, Barry Song wrote:
> > On Mon, Dec 9, 2024 at 4:46 PM Chen Ridong wrote:
> >>
> >> From: Chen Ridong
> >>
> >> The commit 359a5e1416ca ("mm: multi-gen LRU: retry folios written back
> >> while isolated") only fixed the issue for mglru. However, this issue
> >> also exists in the traditional active/inactive LRU. This issue will be
> >> worse if THP is split, which makes the list longer and needs longer time
> >> to finish a batch of folios reclaim.
> >>
> >> This issue should be fixed in the same way for the traditional LRU.
> >> Therefore, the common logic was extracted to the 'find_folios_written_back'
> >> function firstly, which is then reused in the 'shrink_inactive_list'
> >> function. Finally, retry reclaiming those folios that may have missed the
> >> rotation for traditional LRU.
> >
> > let's drop the cover-letter and refine the changelog.
> >
> Will update.
>
> >>
> >> Signed-off-by: Chen Ridong
> >> ---
> >>  include/linux/mmzone.h |   3 +-
> >>  mm/vmscan.c            | 108 +++++++++++++++++++++++++++++------------
> >>  2 files changed, 77 insertions(+), 34 deletions(-)
> >>
> >> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> >> index b36124145a16..47c6e8c43dcd 100644
> >> --- a/include/linux/mmzone.h
> >> +++ b/include/linux/mmzone.h
> >> @@ -391,6 +391,7 @@ struct page_vma_mapped_walk;
> >>
> >>  #define LRU_GEN_MASK           ((BIT(LRU_GEN_WIDTH) - 1) << LRU_GEN_PGOFF)
> >>  #define LRU_REFS_MASK          ((BIT(LRU_REFS_WIDTH) - 1) << LRU_REFS_PGOFF)
> >> +#define LRU_REFS_FLAGS         (BIT(PG_referenced) | BIT(PG_workingset))
> >>
> >>  #ifdef CONFIG_LRU_GEN
> >>
> >> @@ -406,8 +407,6 @@ enum {
> >>         NR_LRU_GEN_CAPS
> >>  };
> >>
> >> -#define LRU_REFS_FLAGS         (BIT(PG_referenced) | BIT(PG_workingset))
> >> -
> >>  #define MIN_LRU_BATCH          BITS_PER_LONG
> >>  #define MAX_LRU_BATCH          (MIN_LRU_BATCH * 64)
> >>
> >> diff --git a/mm/vmscan.c b/mm/vmscan.c
> >> index 76378bc257e3..1f0d194f8b2f 100644
> >> --- a/mm/vmscan.c
> >> +++ b/mm/vmscan.c
> >> @@ -283,6 +283,48 @@ static void set_task_reclaim_state(struct task_struct *task,
> >>         task->reclaim_state = rs;
> >>  }
> >>
> >> +/**
> >> + * find_folios_written_back - Find and move the written back folios to a new list.
> >> + * @list: filios list
> >> + * @clean: the written back folios list
> >> + * @skip: whether skip to move the written back folios to clean list.
> >> + */
> >> +static inline void find_folios_written_back(struct list_head *list,
> >> +               struct list_head *clean, bool skip)
> >> +{
> >> +       struct folio *folio;
> >> +       struct folio *next;
> >> +
> >> +       list_for_each_entry_safe_reverse(folio, next, list, lru) {
> >> +               if (!folio_evictable(folio)) {
> >> +                       list_del(&folio->lru);
> >> +                       folio_putback_lru(folio);
> >> +                       continue;
> >> +               }
> >> +
> >> +               if (folio_test_reclaim(folio) &&
> >> +                   (folio_test_dirty(folio) || folio_test_writeback(folio))) {
> >> +                       /* restore LRU_REFS_FLAGS cleared by isolate_folio() */
> >> +                       if (lru_gen_enabled() && folio_test_workingset(folio))
> >> +                               folio_set_referenced(folio);
> >> +                       continue;
> >> +               }
> >> +
> >> +               if (skip || folio_test_active(folio) || folio_test_referenced(folio) ||
> >> +                   folio_mapped(folio) || folio_test_locked(folio) ||
> >> +                   folio_test_dirty(folio) || folio_test_writeback(folio)) {
> >> +                       /* don't add rejected folios to the oldest generation */
> >> +                       if (lru_gen_enabled())
> >> +                               set_mask_bits(&folio->flags, LRU_REFS_MASK | LRU_REFS_FLAGS,
> >> +                                             BIT(PG_active));
> >> +                       continue;
> >> +               }
> >> +
> >> +               /* retry folios that may have missed folio_rotate_reclaimable() */
> >> +               list_move(&folio->lru, clean);
> >> +       }
> >> +}
> >> +
> >>  /*
> >>   * flush_reclaim_state(): add pages reclaimed outside of LRU-based reclaim to
> >>   * scan_control->nr_reclaimed.
> >> @@ -1907,6 +1949,25 @@ static int current_may_throttle(void)
> >>         return !(current->flags & PF_LOCAL_THROTTLE);
> >>  }
> >>
> >> +static inline void acc_reclaimed_stat(struct reclaim_stat *stat,
> >> +                                     struct reclaim_stat *curr)
> >> +{
> >> +       int i;
> >> +
> >> +       stat->nr_dirty += curr->nr_dirty;
> >> +       stat->nr_unqueued_dirty += curr->nr_unqueued_dirty;
> >> +       stat->nr_congested += curr->nr_congested;
> >> +       stat->nr_writeback += curr->nr_writeback;
> >> +       stat->nr_immediate += curr->nr_immediate;
> >> +       stat->nr_pageout += curr->nr_pageout;
> >> +       stat->nr_ref_keep += curr->nr_ref_keep;
> >> +       stat->nr_unmap_fail += curr->nr_unmap_fail;
> >> +       stat->nr_lazyfree_fail += curr->nr_lazyfree_fail;
> >> +       stat->nr_demoted += curr->nr_demoted;
> >> +       for (i = 0; i < ANON_AND_FILE; i++)
> >> +               stat->nr_activate[i] = curr->nr_activate[i];
> >> +}
> >
> > you had no this before, what's the purpose of this?
> >
>
> We may call shrink_folio_list twice, and the 'stat curr' will reset in
> the shrink_folio_list function. We should accumulate the stats as a
> whole, which will then be used to calculate the cost and return it to
> the caller.

Does mglru have the same issue? If so, we may need to send a patch to
fix mglru's stat accounting as well.

By the way, the code is rather messy -- could it be implemented as
shown below instead?
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 1f0d194f8b2f..40d2ddde21f5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1094,7 +1094,6 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
        struct swap_iocb *plug = NULL;

        folio_batch_init(&free_folios);
-       memset(stat, 0, sizeof(*stat));
        cond_resched();
        do_demote_pass = can_demote(pgdat->node_id, sc);

@@ -1949,25 +1948,6 @@ static int current_may_throttle(void)
        return !(current->flags & PF_LOCAL_THROTTLE);
 }

-static inline void acc_reclaimed_stat(struct reclaim_stat *stat,
-                                     struct reclaim_stat *curr)
-{
-       int i;
-
-       stat->nr_dirty += curr->nr_dirty;
-       stat->nr_unqueued_dirty += curr->nr_unqueued_dirty;
-       stat->nr_congested += curr->nr_congested;
-       stat->nr_writeback += curr->nr_writeback;
-       stat->nr_immediate += curr->nr_immediate;
-       stat->nr_pageout += curr->nr_pageout;
-       stat->nr_ref_keep += curr->nr_ref_keep;
-       stat->nr_unmap_fail += curr->nr_unmap_fail;
-       stat->nr_lazyfree_fail += curr->nr_lazyfree_fail;
-       stat->nr_demoted += curr->nr_demoted;
-       for (i = 0; i < ANON_AND_FILE; i++)
-               stat->nr_activate[i] = curr->nr_activate[i];
-}
-
 /*
  * shrink_inactive_list() is a helper for shrink_node(). It returns the number
  * of reclaimed pages
@@ -1981,7 +1961,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
        unsigned long nr_scanned;
        unsigned int nr_reclaimed = 0;
        unsigned long nr_taken;
-       struct reclaim_stat stat, curr;
+       struct reclaim_stat stat;
        bool file = is_file_lru(lru);
        enum vm_event_item item;
        struct pglist_data *pgdat = lruvec_pgdat(lruvec);
@@ -2022,9 +2002,8 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,

        memset(&stat, 0, sizeof(stat));
 retry:
-       nr_reclaimed += shrink_folio_list(&folio_list, pgdat, sc, &curr, false);
+       nr_reclaimed += shrink_folio_list(&folio_list, pgdat, sc, &stat, false);
        find_folios_written_back(&folio_list, &clean_list, skip_retry);
-       acc_reclaimed_stat(&stat, &curr);

        spin_lock_irq(&lruvec->lru_lock);
        move_folios_to_lru(lruvec, &folio_list);

>
> Thanks,
> Ridong
>
> >> +
> >>  /*
> >>   * shrink_inactive_list() is a helper for shrink_node(). It returns the number
> >>   * of reclaimed pages
> >> @@ -1916,14 +1977,16 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
> >>                                      enum lru_list lru)
> >>  {
> >>         LIST_HEAD(folio_list);
> >> +       LIST_HEAD(clean_list);
> >>         unsigned long nr_scanned;
> >>         unsigned int nr_reclaimed = 0;
> >>         unsigned long nr_taken;
> >> -       struct reclaim_stat stat;
> >> +       struct reclaim_stat stat, curr;
> >>         bool file = is_file_lru(lru);
> >>         enum vm_event_item item;
> >>         struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> >>         bool stalled = false;
> >> +       bool skip_retry = false;
> >>
> >>         while (unlikely(too_many_isolated(pgdat, file, sc))) {
> >>                 if (stalled)
> >> @@ -1957,10 +2020,20 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
> >>         if (nr_taken == 0)
> >>                 return 0;
> >>
> >> -       nr_reclaimed = shrink_folio_list(&folio_list, pgdat, sc, &stat, false);
> >> +       memset(&stat, 0, sizeof(stat));
> >> +retry:
> >> +       nr_reclaimed += shrink_folio_list(&folio_list, pgdat, sc, &curr, false);
> >> +       find_folios_written_back(&folio_list, &clean_list, skip_retry);
> >> +       acc_reclaimed_stat(&stat, &curr);
> >>
> >>         spin_lock_irq(&lruvec->lru_lock);
> >>         move_folios_to_lru(lruvec, &folio_list);
> >> +       if (!list_empty(&clean_list)) {
> >> +               list_splice_init(&clean_list, &folio_list);
> >> +               skip_retry = true;
> >> +               spin_unlock_irq(&lruvec->lru_lock);
> >> +               goto retry;

This is rather confusing. We're still jumping to retry even though
skip_retry=true is set. Can we find a clearer approach for this?

It was somewhat acceptable before we introduced the extracted function
find_folios_written_back(). However, it has become harder to follow now
that skip_retry is passed across functions.

I find renaming skip_retry to is_retry more intuitive. The logic is that
since we are already retrying, find_folios_written_back() shouldn't move
folios to the clean list again. The intended semantics are: we have
retried, don't retry again.
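Roughly like below -- untested, and it is just the hunk above with the
flag renamed, only to illustrate the intent: is_retry says "this pass is
already the retry", so find_folios_written_back() collects nothing more
and the loop naturally runs at most twice.

       bool is_retry = false;
       ...
       memset(&stat, 0, sizeof(stat));
retry:
       nr_reclaimed += shrink_folio_list(&folio_list, pgdat, sc, &curr, false);
       /* on the retry pass, don't gather folios for yet another pass */
       find_folios_written_back(&folio_list, &clean_list, is_retry);
       acc_reclaimed_stat(&stat, &curr);

       spin_lock_irq(&lruvec->lru_lock);
       move_folios_to_lru(lruvec, &folio_list);
       if (!list_empty(&clean_list)) {
               /* retry folios that may have missed folio_rotate_reclaimable() */
               list_splice_init(&clean_list, &folio_list);
               is_retry = true;
               spin_unlock_irq(&lruvec->lru_lock);
               goto retry;
       }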
> >> +       }
> >>
> >>         __mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(),
> >>                            stat.nr_demoted);
> >> @@ -4567,8 +4640,6 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
> >>         int reclaimed;
> >>         LIST_HEAD(list);
> >>         LIST_HEAD(clean);
> >> -       struct folio *folio;
> >> -       struct folio *next;
> >>         enum vm_event_item item;
> >>         struct reclaim_stat stat;
> >>         struct lru_gen_mm_walk *walk;
> >> @@ -4597,34 +4668,7 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
> >>                            scanned, reclaimed, &stat, sc->priority,
> >>                            type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
> >>
> >> -       list_for_each_entry_safe_reverse(folio, next, &list, lru) {
> >> -               if (!folio_evictable(folio)) {
> >> -                       list_del(&folio->lru);
> >> -                       folio_putback_lru(folio);
> >> -                       continue;
> >> -               }
> >> -
> >> -               if (folio_test_reclaim(folio) &&
> >> -                   (folio_test_dirty(folio) || folio_test_writeback(folio))) {
> >> -                       /* restore LRU_REFS_FLAGS cleared by isolate_folio() */
> >> -                       if (folio_test_workingset(folio))
> >> -                               folio_set_referenced(folio);
> >> -                       continue;
> >> -               }
> >> -
> >> -               if (skip_retry || folio_test_active(folio) || folio_test_referenced(folio) ||
> >> -                   folio_mapped(folio) || folio_test_locked(folio) ||
> >> -                   folio_test_dirty(folio) || folio_test_writeback(folio)) {
> >> -                       /* don't add rejected folios to the oldest generation */
> >> -                       set_mask_bits(&folio->flags, LRU_REFS_MASK | LRU_REFS_FLAGS,
> >> -                                     BIT(PG_active));
> >> -                       continue;
> >> -               }
> >> -
> >> -               /* retry folios that may have missed folio_rotate_reclaimable() */
> >> -               list_move(&folio->lru, &clean);
> >> -       }
> >> -
> >> +       find_folios_written_back(&list, &clean, skip_retry);
> >>         spin_lock_irq(&lruvec->lru_lock);
> >>
> >>         move_folios_to_lru(lruvec, &list);
> >> --
> >> 2.34.1
> >>
>

Thanks
Barry