From: chenridong <chenridong@huawei.com>
To: Barry Song <21cnbao@gmail.com>
Cc: Chen Ridong, linux-mm@kvack.org
Date: Tue, 10 Dec 2024 20:11:02 +0800
Subject: Re: [PATCH v4 1/1] mm: vmscan: retry folios written back while isolated for traditional LRU

On 2024/12/10 16:24, Barry Song wrote:
> On Tue, Dec 10, 2024 at 2:41 PM chenridong wrote:
>>
>> On 2024/12/10 12:54, Barry Song wrote:
>>> On Mon, Dec 9, 2024 at 4:46 PM Chen Ridong wrote:
>>>>
>>>> From: Chen Ridong
>>>>
>>>> Commit 359a5e1416ca ("mm: multi-gen LRU: retry folios written back
>>>> while isolated") fixed this issue only for mglru. However, the issue
>>>> also exists in the traditional active/inactive LRU, and it is worse
>>>> when a THP is split, since that makes the list longer and a batch of
>>>> folio reclaim takes longer to finish.
>>>>
>>>> The issue should be fixed in the same way for the traditional LRU.
>>>> Therefore, first extract the common logic into a new helper,
>>>> 'find_folios_written_back', then reuse it in 'shrink_inactive_list'.
>>>> Finally, retry reclaiming those folios that may have missed the
>>>> rotation, for the traditional LRU as well.
>>>
>>> Let's drop the cover-letter and refine the changelog.
>>>
>> Will update.
>>
>>>>
>>>> Signed-off-by: Chen Ridong
>>>> ---
>>>>  include/linux/mmzone.h |   3 +-
>>>>  mm/vmscan.c            | 108 +++++++++++++++++++++++++++++------------
>>>>  2 files changed, 77 insertions(+), 34 deletions(-)
>>>>
>>>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>>>> index b36124145a16..47c6e8c43dcd 100644
>>>> --- a/include/linux/mmzone.h
>>>> +++ b/include/linux/mmzone.h
>>>> @@ -391,6 +391,7 @@ struct page_vma_mapped_walk;
>>>>
>>>>  #define LRU_GEN_MASK	((BIT(LRU_GEN_WIDTH) - 1) << LRU_GEN_PGOFF)
>>>>  #define LRU_REFS_MASK	((BIT(LRU_REFS_WIDTH) - 1) << LRU_REFS_PGOFF)
>>>> +#define LRU_REFS_FLAGS	(BIT(PG_referenced) | BIT(PG_workingset))
>>>>
>>>>  #ifdef CONFIG_LRU_GEN
>>>>
>>>> @@ -406,8 +407,6 @@ enum {
>>>>  	NR_LRU_GEN_CAPS
>>>>  };
>>>>
>>>> -#define LRU_REFS_FLAGS	(BIT(PG_referenced) | BIT(PG_workingset))
>>>> -
>>>>  #define MIN_LRU_BATCH	BITS_PER_LONG
>>>>  #define MAX_LRU_BATCH	(MIN_LRU_BATCH * 64)
>>>>
>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>>> index 76378bc257e3..1f0d194f8b2f 100644
>>>> --- a/mm/vmscan.c
>>>> +++ b/mm/vmscan.c
>>>> @@ -283,6 +283,48 @@ static void set_task_reclaim_state(struct task_struct *task,
>>>>  	task->reclaim_state = rs;
>>>>  }
>>>>
>>>> +/**
>>>> + * find_folios_written_back - Find and move written-back folios to a new list.
>>>> + * @list: the folios list
>>>> + * @clean: the list of written-back folios
>>>> + * @skip: whether to skip moving the written-back folios to the clean list
>>>> + */
>>>> +static inline void find_folios_written_back(struct list_head *list,
>>>> +		struct list_head *clean, bool skip)
>>>> +{
>>>> +	struct folio *folio;
>>>> +	struct folio *next;
>>>> +
>>>> +	list_for_each_entry_safe_reverse(folio, next, list, lru) {
>>>> +		if (!folio_evictable(folio)) {
>>>> +			list_del(&folio->lru);
>>>> +			folio_putback_lru(folio);
>>>> +			continue;
>>>> +		}
>>>> +
>>>> +		if (folio_test_reclaim(folio) &&
>>>> +		    (folio_test_dirty(folio) || folio_test_writeback(folio))) {
>>>> +			/* restore LRU_REFS_FLAGS cleared by isolate_folio() */
>>>> +			if (lru_gen_enabled() && folio_test_workingset(folio))
>>>> +				folio_set_referenced(folio);
>>>> +			continue;
>>>> +		}
>>>> +
>>>> +		if (skip || folio_test_active(folio) || folio_test_referenced(folio) ||
>>>> +		    folio_mapped(folio) || folio_test_locked(folio) ||
>>>> +		    folio_test_dirty(folio) || folio_test_writeback(folio)) {
>>>> +			/* don't add rejected folios to the oldest generation */
>>>> +			if (lru_gen_enabled())
>>>> +				set_mask_bits(&folio->flags, LRU_REFS_MASK | LRU_REFS_FLAGS,
>>>> +					      BIT(PG_active));
>>>> +			continue;
>>>> +		}
>>>> +
>>>> +		/* retry folios that may have missed folio_rotate_reclaimable() */
>>>> +		list_move(&folio->lru, clean);
>>>> +	}
>>>> +}
>>>> +
>>>>  /*
>>>>   * flush_reclaim_state(): add pages reclaimed outside of LRU-based reclaim to
>>>>   * scan_control->nr_reclaimed.
>>>> @@ -1907,6 +1949,25 @@ static int current_may_throttle(void)
>>>>  	return !(current->flags & PF_LOCAL_THROTTLE);
>>>>  }
>>>>
>>>> +static inline void acc_reclaimed_stat(struct reclaim_stat *stat,
>>>> +		struct reclaim_stat *curr)
>>>> +{
>>>> +	int i;
>>>> +
>>>> +	stat->nr_dirty += curr->nr_dirty;
>>>> +	stat->nr_unqueued_dirty += curr->nr_unqueued_dirty;
>>>> +	stat->nr_congested += curr->nr_congested;
>>>> +	stat->nr_writeback += curr->nr_writeback;
>>>> +	stat->nr_immediate += curr->nr_immediate;
>>>> +	stat->nr_pageout += curr->nr_pageout;
>>>> +	stat->nr_ref_keep += curr->nr_ref_keep;
>>>> +	stat->nr_unmap_fail += curr->nr_unmap_fail;
>>>> +	stat->nr_lazyfree_fail += curr->nr_lazyfree_fail;
>>>> +	stat->nr_demoted += curr->nr_demoted;
>>>> +	for (i = 0; i < ANON_AND_FILE; i++)
>>>> +		stat->nr_activate[i] = curr->nr_activate[i];
>>>> +}
>>>
>>> You didn't have this before; what's the purpose of this?
>>>
>>
>> We may now call shrink_folio_list() twice, and it resets 'curr' on each
>> call, so we have to accumulate the stats as a whole. The totals are
>> then used to calculate the cost and are returned to the caller.
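
To make the trade-off concrete, here is a minimal userspace sketch; the
struct and function names are illustrative stand-ins, not the kernel code:

	#include <stdio.h>
	#include <string.h>

	/* Illustrative stand-in for struct reclaim_stat. */
	struct stat_sketch {
		unsigned long nr_pageout;
	};

	/* Current patch: the callee resets its stat argument on every call,
	 * so a retry loop needs a second struct plus an accumulate helper. */
	static void shrink_resets(struct stat_sketch *curr, unsigned long done)
	{
		memset(curr, 0, sizeof(*curr));
		curr->nr_pageout = done;
	}

	/* Suggested change: the caller memsets once and the callee only adds,
	 * so the stats accumulate in place across the retry. */
	static void shrink_accumulates(struct stat_sketch *stat, unsigned long done)
	{
		stat->nr_pageout += done;
	}

	int main(void)
	{
		struct stat_sketch stat = { 0 }, curr;

		/* Two passes (initial + retry) with a resetting callee. */
		shrink_resets(&curr, 3);
		stat.nr_pageout += curr.nr_pageout;	/* what acc_reclaimed_stat() does */
		shrink_resets(&curr, 2);
		stat.nr_pageout += curr.nr_pageout;
		printf("resetting callee:    %lu\n", stat.nr_pageout);	/* 5 */

		/* Same two passes with an accumulating callee. */
		memset(&stat, 0, sizeof(stat));
		shrink_accumulates(&stat, 3);
		shrink_accumulates(&stat, 2);
		printf("accumulating callee: %lu\n", stat.nr_pageout);	/* 5 */
		return 0;
	}

Both designs report the same totals; the difference is only where the
reset lives and whether a helper like acc_reclaimed_stat() is needed.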

>
> Does mglru have the same issue? If so, we may need to send a patch to
> fix mglru's stat accounting as well. By the way, the code is rather
> messy; could it be implemented as shown below instead?
>

I have checked the code (in the evict_folios function) again, and it
appears that 'reclaimed' should correspond to sc->nr_reclaimed, which
accumulates the results twice. Should I address this issue with a
separate patch?

	if (!cgroup_reclaim(sc))
		__count_vm_events(item, reclaimed);
	__count_memcg_events(memcg, item, reclaimed);
	__count_vm_events(PGSTEAL_ANON + type, reclaimed);

> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 1f0d194f8b2f..40d2ddde21f5 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1094,7 +1094,6 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  	struct swap_iocb *plug = NULL;
>
>  	folio_batch_init(&free_folios);
> -	memset(stat, 0, sizeof(*stat));
>  	cond_resched();
>  	do_demote_pass = can_demote(pgdat->node_id, sc);
>
> @@ -1949,25 +1948,6 @@ static int current_may_throttle(void)
>  	return !(current->flags & PF_LOCAL_THROTTLE);
>  }
>
> -static inline void acc_reclaimed_stat(struct reclaim_stat *stat,
> -		struct reclaim_stat *curr)
> -{
> -	int i;
> -
> -	stat->nr_dirty += curr->nr_dirty;
> -	stat->nr_unqueued_dirty += curr->nr_unqueued_dirty;
> -	stat->nr_congested += curr->nr_congested;
> -	stat->nr_writeback += curr->nr_writeback;
> -	stat->nr_immediate += curr->nr_immediate;
> -	stat->nr_pageout += curr->nr_pageout;
> -	stat->nr_ref_keep += curr->nr_ref_keep;
> -	stat->nr_unmap_fail += curr->nr_unmap_fail;
> -	stat->nr_lazyfree_fail += curr->nr_lazyfree_fail;
> -	stat->nr_demoted += curr->nr_demoted;
> -	for (i = 0; i < ANON_AND_FILE; i++)
> -		stat->nr_activate[i] = curr->nr_activate[i];
> -}
> -
>  /*
>   * shrink_inactive_list() is a helper for shrink_node(). It returns the number
>   * of reclaimed pages
> @@ -1981,7 +1961,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
>  	unsigned long nr_scanned;
>  	unsigned int nr_reclaimed = 0;
>  	unsigned long nr_taken;
> -	struct reclaim_stat stat, curr;
> +	struct reclaim_stat stat;
>  	bool file = is_file_lru(lru);
>  	enum vm_event_item item;
>  	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> @@ -2022,9 +2002,8 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
>
>  	memset(&stat, 0, sizeof(stat));
>  retry:
> -	nr_reclaimed += shrink_folio_list(&folio_list, pgdat, sc, &curr, false);
> +	nr_reclaimed += shrink_folio_list(&folio_list, pgdat, sc, &stat, false);
>  	find_folios_written_back(&folio_list, &clean_list, skip_retry);
> -	acc_reclaimed_stat(&stat, &curr);
>
>  	spin_lock_irq(&lruvec->lru_lock);
>  	move_folios_to_lru(lruvec, &folio_list);
>

This seems much better, but there is extra work to do:

1. In the shrink_folio_list function:

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1089,12 +1089,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 	LIST_HEAD(ret_folios);
 	LIST_HEAD(demote_folios);
 	unsigned int nr_reclaimed = 0;
-	unsigned int pgactivate = 0;
+	unsigned int pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
+	unsigned int nr_demote = 0;
 	bool do_demote_pass;
 	struct swap_iocb *plug = NULL;

 	folio_batch_init(&free_folios);
-	memset(stat, 0, sizeof(*stat));
 	cond_resched();
 	do_demote_pass = can_demote(pgdat->node_id, sc);

@@ -1558,7 +1558,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,

 	/* Migrate folios selected for demotion */
-	stat->nr_demoted = demote_folio_list(&demote_folios, pgdat);
-	nr_reclaimed += stat->nr_demoted;
+	nr_demote = demote_folio_list(&demote_folios, pgdat);
+	stat->nr_demoted += nr_demote;
+	nr_reclaimed += nr_demote;
 	/* Folios that could not be demoted are still in @demote_folios */
 	if (!list_empty(&demote_folios)) {
 		/* Folios which weren't demoted go back on @folio_list */
@@ -1586,7 +1587,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		}
 	}

-	pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
+	pgactivate = stat->nr_activate[0] + stat->nr_activate[1] - pgactivate;

 	mem_cgroup_uncharge_folios(&free_folios);
 	try_to_unmap_flush();

2. Outside of the shrink_folio_list function, the callers should memset
the stat.

If you think this is better, I will update it like this.
>>
>> Thanks,
>> Ridong
>>
>>>> +
>>>>  /*
>>>>   * shrink_inactive_list() is a helper for shrink_node(). It returns the number
>>>>   * of reclaimed pages
>>>> @@ -1916,14 +1977,16 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
>>>>  					 enum lru_list lru)
>>>>  {
>>>>  	LIST_HEAD(folio_list);
>>>> +	LIST_HEAD(clean_list);
>>>>  	unsigned long nr_scanned;
>>>>  	unsigned int nr_reclaimed = 0;
>>>>  	unsigned long nr_taken;
>>>> -	struct reclaim_stat stat;
>>>> +	struct reclaim_stat stat, curr;
>>>>  	bool file = is_file_lru(lru);
>>>>  	enum vm_event_item item;
>>>>  	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
>>>>  	bool stalled = false;
>>>> +	bool skip_retry = false;
>>>>
>>>>  	while (unlikely(too_many_isolated(pgdat, file, sc))) {
>>>>  		if (stalled)
>>>> @@ -1957,10 +2020,20 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
>>>>  	if (nr_taken == 0)
>>>>  		return 0;
>>>>
>>>> -	nr_reclaimed = shrink_folio_list(&folio_list, pgdat, sc, &stat, false);
>>>> +	memset(&stat, 0, sizeof(stat));
>>>> +retry:
>>>> +	nr_reclaimed += shrink_folio_list(&folio_list, pgdat, sc, &curr, false);
>>>> +	find_folios_written_back(&folio_list, &clean_list, skip_retry);
>>>> +	acc_reclaimed_stat(&stat, &curr);
>>>>
>>>>  	spin_lock_irq(&lruvec->lru_lock);
>>>>  	move_folios_to_lru(lruvec, &folio_list);
>>>> +	if (!list_empty(&clean_list)) {
>>>> +		list_splice_init(&clean_list, &folio_list);
>>>> +		skip_retry = true;
>>>> +		spin_unlock_irq(&lruvec->lru_lock);
>>>> +		goto retry;
>
> This is rather confusing. We're still jumping to retry even though
> skip_retry=true is set. Can we find a clearer approach for this?
>
> It was somewhat acceptable before we introduced the extracted
> function find_folios_written_back(). However, it has become
> harder to follow now that skip_retry is passed across functions.
>
> I find renaming skip_retry to is_retry more intuitive. The logic
> is that since we are already retrying, find_folios_written_back()
> shouldn't move folios to the clean list again. The intended semantics
> are: we have retried; don't retry again.
>

Reasonable. Will update.
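
To confirm the agreed semantics, here is a tiny runnable sketch of the
bounded retry (userspace and illustrative only, not the kernel code):

	#include <stdbool.h>
	#include <stdio.h>

	/* Stand-in for find_folios_written_back(): on the first pass it may
	 * queue folios that missed folio_rotate_reclaimable(); once is_retry
	 * is true it queues nothing, so the loop runs at most twice. */
	static int queue_retry_candidates(bool is_retry)
	{
		return is_retry ? 0 : 2;	/* pretend 2 folios missed rotation */
	}

	int main(void)
	{
		bool is_retry = false;
		int pass = 0;

	retry:
		pass++;
		printf("pass %d, is_retry=%d\n", pass, is_retry);
		if (queue_retry_candidates(is_retry) > 0) {
			is_retry = true;	/* the next pass is the one retry */
			goto retry;
		}
		printf("done after %d pass(es)\n", pass);
		return 0;
	}

The flag then names the state we are in ("this pass is the retry")
rather than an instruction ("skip the retry"), which keeps
find_folios_written_back() easier to read at its call sites.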

Thanks,
Ridong

>
>>>> +	}
>>>>
>>>>  	__mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(),
>>>>  			   stat.nr_demoted);
>>>> @@ -4567,8 +4640,6 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
>>>>  	int reclaimed;
>>>>  	LIST_HEAD(list);
>>>>  	LIST_HEAD(clean);
>>>> -	struct folio *folio;
>>>> -	struct folio *next;
>>>>  	enum vm_event_item item;
>>>>  	struct reclaim_stat stat;
>>>>  	struct lru_gen_mm_walk *walk;
>>>> @@ -4597,34 +4668,7 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
>>>>  			   scanned, reclaimed, &stat, sc->priority,
>>>>  			   type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
>>>>
>>>> -	list_for_each_entry_safe_reverse(folio, next, &list, lru) {
>>>> -		if (!folio_evictable(folio)) {
>>>> -			list_del(&folio->lru);
>>>> -			folio_putback_lru(folio);
>>>> -			continue;
>>>> -		}
>>>> -
>>>> -		if (folio_test_reclaim(folio) &&
>>>> -		    (folio_test_dirty(folio) || folio_test_writeback(folio))) {
>>>> -			/* restore LRU_REFS_FLAGS cleared by isolate_folio() */
>>>> -			if (folio_test_workingset(folio))
>>>> -				folio_set_referenced(folio);
>>>> -			continue;
>>>> -		}
>>>> -
>>>> -		if (skip_retry || folio_test_active(folio) || folio_test_referenced(folio) ||
>>>> -		    folio_mapped(folio) || folio_test_locked(folio) ||
>>>> -		    folio_test_dirty(folio) || folio_test_writeback(folio)) {
>>>> -			/* don't add rejected folios to the oldest generation */
>>>> -			set_mask_bits(&folio->flags, LRU_REFS_MASK | LRU_REFS_FLAGS,
>>>> -				      BIT(PG_active));
>>>> -			continue;
>>>> -		}
>>>> -
>>>> -		/* retry folios that may have missed folio_rotate_reclaimable() */
>>>> -		list_move(&folio->lru, &clean);
>>>> -	}
>>>> -
>>>> +	find_folios_written_back(&list, &clean, skip_retry);
>>>>  	spin_lock_irq(&lruvec->lru_lock);
>>>>
>>>>  	move_folios_to_lru(lruvec, &list);
>>>> --
>>>> 2.34.1
>>>>
>>>
>
> Thanks
> Barry