From: Barry Song <21cnbao@gmail.com>
Date: Fri, 20 Dec 2024 16:09:41 +1300
Subject: Re: [PATCH -next v5] mm: vmscan: retry folios written back while isolated for traditional LRU
To: Chen Ridong, cuibixuan@vivo.com
Cc: akpm@linux-foundation.org, mhocko@suse.com, hannes@cmpxchg.org,
	yosryahmed@google.com, yuzhao@google.com, david@redhat.com,
	willy@infradead.org, ryan.roberts@arm.com, wangkefeng.wang@huawei.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, chenridong@huawei.com,
	wangweiyang2@huawei.com, xieym_ict@hotmail.com
References: <20241220010931.3603111-1-chenridong@huaweicloud.com>

On Fri, Dec 20, 2024 at 3:30 PM Barry Song <21cnbao@gmail.com> wrote:
>
> On Fri, Dec 20, 2024 at 2:19 PM Chen Ridong wrote:
> >
> > From: Chen Ridong
> >
> > The page reclaim isolates a batch of folios from the tail of one of the
> > LRU lists and works on those folios one by one. For a suitable
> > swap-backed folio, if the swap device is async, it queues that folio for
> > writeback. After the page reclaim finishes an entire batch, it puts back
> > the folios it queued for writeback to the head of the original LRU list.
> >
> > In the meantime, the page writeback flushes the queued folios also by
> > batches. Its batching logic is independent from that of the page reclaim.
> > For each of the folios it writes back, the page writeback calls
> > folio_rotate_reclaimable() which tries to rotate a folio to the tail.
> >
> > folio_rotate_reclaimable() only works for a folio after the page reclaim
> > has put it back. If an async swap device is fast enough, the page
> > writeback can finish with that folio while the page reclaim is still
> > working on the rest of the batch containing it. In this case, that folio
> > will remain at the head and the page reclaim will not retry it before
> > reaching there.
> >
> > The commit 359a5e1416ca ("mm: multi-gen LRU: retry folios written back
> > while isolated") only fixed the issue for mglru. However, this issue
> > also exists in the traditional active/inactive LRU. This issue will be
> > worse if THP is split, which makes the list longer and needs longer time
> > to finish a batch of folios reclaim.
> >
> > This issue should be fixed in the same way for the traditional LRU.
> > Therefore, the common logic was extracted to the 'find_folios_written_back'
> > function firstly, which is then reused in the 'shrink_inactive_list'
> > function. Finally, retry reclaiming those folios that may have missed the
> > rotation for traditional LRU.
> >
> > Link: https://lore.kernel.org/linux-kernel/20241010081802.290893-1-chenridong@huaweicloud.com/
> > Link: https://lore.kernel.org/linux-kernel/CAGsJ_4zqL8ZHNRZ44o_CC69kE7DBVXvbZfvmQxMGiFqRxqHQdA@mail.gmail.com/
> > Signed-off-by: Chen Ridong
> > ---
> >  mm/vmscan.c | 108 ++++++++++++++++++++++++++++++++++------------------
> >  1 file changed, 70 insertions(+), 38 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 39886f435ec5..e67e446540ba 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -283,6 +283,39 @@ static void set_task_reclaim_state(struct task_struct *task,
> >         task->reclaim_state = rs;
> >  }
> >
> > +/**
> > + * find_folios_written_back - Find and move the written back folios to a new list.
> > + * @list: filios list
> > + * @clean: the written back folios list
> > + * @is_retried: whether the list has already been retried.
> > + */
> > +static inline void find_folios_written_back(struct list_head *list,
> > +               struct list_head *clean, bool is_retried)
> > +{
> > +       struct folio *folio;
> > +       struct folio *next;
> > +
> > +       list_for_each_entry_safe_reverse(folio, next, list, lru) {
> > +               if (!folio_evictable(folio)) {
> > +                       list_del(&folio->lru);
> > +                       folio_putback_lru(folio);
> > +                       continue;
> > +               }
> > +
> > +               /* retry folios that may have missed folio_rotate_reclaimable() */
> > +               if (!is_retried && !folio_test_active(folio) && !folio_mapped(folio) &&
> > +                   !folio_test_dirty(folio) && !folio_test_writeback(folio)) {
> > +                       list_move(&folio->lru, clean);
> > +                       continue;
> > +               }
> > +
> > +               /* don't add rejected folios to the oldest generation */
> > +               if (lru_gen_enabled() && !lru_gen_distance(folio, false))
> > +                       set_mask_bits(&folio->flags, LRU_REFS_FLAGS, BIT(PG_active));
> > +       }
> > +
> > +}
> > +
> >  /*
> >   * flush_reclaim_state(): add pages reclaimed outside of LRU-based reclaim to
> >   * scan_control->nr_reclaimed.
> > @@ -1959,14 +1992,18 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
> >                                          enum lru_list lru)
> >  {
> >         LIST_HEAD(folio_list);
> > +       LIST_HEAD(clean_list);
> >         unsigned long nr_scanned;
> > -       unsigned int nr_reclaimed = 0;
> > +       unsigned int nr_reclaimed, total_reclaimed = 0;
> > +       unsigned int nr_pageout = 0;
> > +       unsigned int nr_unqueued_dirty = 0;
> >         unsigned long nr_taken;
> >         struct reclaim_stat stat;
> >         bool file = is_file_lru(lru);
> >         enum vm_event_item item;
> >         struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> >         bool stalled = false;
> > +       bool is_retried = false;

The name is_retried is a bit confusing. It should be is_retry or is_retrying
since we are currently retrying, not that we have already retried.

> >
> >         while (unlikely(too_many_isolated(pgdat, file, sc))) {
> >                 if (stalled)
> > @@ -2000,22 +2037,47 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
> >         if (nr_taken == 0)
> >                 return 0;
> >
> > +retry:
> >         nr_reclaimed = shrink_folio_list(&folio_list, pgdat, sc, &stat, false);
> >
> > +       sc->nr.dirty += stat.nr_dirty;
> > +       sc->nr.congested += stat.nr_congested;
> > +       sc->nr.unqueued_dirty += stat.nr_unqueued_dirty;
> > +       sc->nr.writeback += stat.nr_writeback;
> > +       sc->nr.immediate += stat.nr_immediate;
> > +       total_reclaimed += nr_reclaimed;
> > +       nr_pageout += stat.nr_pageout;
> > +       nr_unqueued_dirty += stat.nr_unqueued_dirty;
> > +
> > +       trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
> > +                       nr_scanned, nr_reclaimed, &stat, sc->priority, file);
>
> This is a bit odd, as nr_scanned during a retry still uses the previous
> nr_scanned value. However, I find that mglru shows no difference.
>
> retry:
>         reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false);
>         sc->nr.unqueued_dirty += stat.nr_unqueued_dirty;
>         sc->nr_reclaimed += reclaimed;
>         trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
>                         scanned, reclaimed, &stat, sc->priority,
>                         type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
>
> Currently, the active/inactive state aligns with mglru in this trace.
> It seems that the userspace BPF should recognize that the nr_scanned
> during a retry doesn't indicate we are isolating new nr_scanned folios.
> Ideally, the is_retry flag should be passed to the trace, allowing
> userspace to identify that it's a retry and disregard the nr_scanned value.
>
> It might be worth addressing this in a separate patch. Add Bixuan to clarify
> how userspace depends on this trace and if "retry" will break his userspace
> BPF for both MGLRU and active/inactive cases.
>
> Otherwise, the patch looks good to me.
>
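
To illustrate the idea of passing the retry state into the trace (a rough
sketch only, not something this patch needs to do): the current
trace_mm_vmscan_lru_shrink_inactive() call takes no such argument, so the
extra "is_retry" parameter below is hypothetical, and a real follow-up would
also have to extend the mm_vmscan_lru_shrink_inactive tracepoint
(TP_PROTO/TP_ARGS in include/trace/events/vmscan.h) to match:

        /*
         * Hypothetical sketch only: "is_retry" is not an existing argument
         * of this tracepoint. It would let userspace tell a retry pass from
         * a fresh isolation and ignore nr_scanned accordingly.
         */
        trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
                        nr_scanned, nr_reclaimed, &stat, sc->priority,
                        file, is_retry);

With something like that in place, a BPF program attached to this tracepoint
could simply skip its nr_scanned accounting whenever the flag is set.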

By the way, it's completely clear that the trace was added after mglru's retry:
https://lore.kernel.org/linux-mm/20240105013607.2868-3-cuibixuan@vivo.com/

Therefore, I don't believe the potential confusion about nr_scanned in the
trace should prevent Ridong's fix for the missed rotation of written-back
folios from proceeding. If there is an issue with that, we should open a
separate thread to address the trace.

Please feel free to add the below in a future version after you fix
"is_retried":

Reviewed-by: Barry Song

> > +
> > +       find_folios_written_back(&folio_list, &clean_list, is_retried);
> > +
> >         spin_lock_irq(&lruvec->lru_lock);
> >         move_folios_to_lru(lruvec, &folio_list);
> >
> >         __mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(),
> >                            stat.nr_demoted);
> > -       __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
> >         item = PGSTEAL_KSWAPD + reclaimer_offset();
> >         if (!cgroup_reclaim(sc))
> >                 __count_vm_events(item, nr_reclaimed);
> >         __count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
> >         __count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
> > +
> > +       if (!list_empty(&clean_list)) {
> > +               list_splice_init(&clean_list, &folio_list);
> > +               is_retried = true;
> > +               spin_unlock_irq(&lruvec->lru_lock);
> > +               goto retry;
> > +       }
> > +       __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
> >         spin_unlock_irq(&lruvec->lru_lock);
> > +       sc->nr.taken += nr_taken;
> > +       if (file)
> > +               sc->nr.file_taken += nr_taken;
> >
> > -       lru_note_cost(lruvec, file, stat.nr_pageout, nr_scanned - nr_reclaimed);
> > +       lru_note_cost(lruvec, file, nr_pageout, nr_scanned - total_reclaimed);
> >
> >         /*
> >          * If dirty folios are scanned that are not queued for IO, it
> > @@ -2028,7 +2090,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
> >          * the flushers simply cannot keep up with the allocation
> >          * rate. Nudge the flusher threads in case they are asleep.
> >          */
> > -       if (stat.nr_unqueued_dirty == nr_taken) {
> > +       if (nr_unqueued_dirty == nr_taken) {
> >                 wakeup_flusher_threads(WB_REASON_VMSCAN);
> >                 /*
> >                  * For cgroupv1 dirty throttling is achieved by waking up
> > @@ -2043,18 +2105,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
> >                         reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);
> >         }
> >
> > -       sc->nr.dirty += stat.nr_dirty;
> > -       sc->nr.congested += stat.nr_congested;
> > -       sc->nr.unqueued_dirty += stat.nr_unqueued_dirty;
> > -       sc->nr.writeback += stat.nr_writeback;
> > -       sc->nr.immediate += stat.nr_immediate;
> > -       sc->nr.taken += nr_taken;
> > -       if (file)
> > -               sc->nr.file_taken += nr_taken;
> > -
> > -       trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
> > -                       nr_scanned, nr_reclaimed, &stat, sc->priority, file);
> > -       return nr_reclaimed;
> > +       return total_reclaimed;
> >  }
> >
> >  /*
> > @@ -4585,12 +4636,10 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
> >         int reclaimed;
> >         LIST_HEAD(list);
> >         LIST_HEAD(clean);
> > -       struct folio *folio;
> > -       struct folio *next;
> >         enum vm_event_item item;
> >         struct reclaim_stat stat;
> >         struct lru_gen_mm_walk *walk;
> > -       bool skip_retry = false;
> > +       bool is_retried = false;
> >         struct lru_gen_folio *lrugen = &lruvec->lrugen;
> >         struct mem_cgroup *memcg = lruvec_memcg(lruvec);
> >         struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> > @@ -4616,24 +4665,7 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
> >                                 scanned, reclaimed, &stat, sc->priority,
> >                                 type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
> >
> > -       list_for_each_entry_safe_reverse(folio, next, &list, lru) {
> > -               if (!folio_evictable(folio)) {
> > -                       list_del(&folio->lru);
> > -                       folio_putback_lru(folio);
> > -                       continue;
> > -               }
> > -
> > -               /* retry folios that may have missed folio_rotate_reclaimable() */
> > -               if (!skip_retry && !folio_test_active(folio) && !folio_mapped(folio) &&
> > -                   !folio_test_dirty(folio) && !folio_test_writeback(folio)) {
> > -                       list_move(&folio->lru, &clean);
> > -                       continue;
> > -               }
> > -
> > -               /* don't add rejected folios to the oldest generation */
> > -               if (!lru_gen_distance(folio, false))
> > -                       set_mask_bits(&folio->flags, LRU_REFS_FLAGS, BIT(PG_active));
> > -       }
> > +       find_folios_written_back(&list, &clean, is_retried);
> >
> >         spin_lock_irq(&lruvec->lru_lock);
> >
> > @@ -4656,7 +4688,7 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
> >                 list_splice_init(&clean, &list);
> >
> >                 if (!list_empty(&list)) {
> > -                       skip_retry = true;
> > +                       is_retried = true;
> >                         goto retry;
> >                 }
> >
> > --
> > 2.34.1
> >
>

Thanks
barry