linux-mm.kvack.org archive mirror
From: wangzicheng <wangzicheng@honor.com>
To: Barry Song <21cnbao@gmail.com>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Barry Song <baohua@kernel.org>,
	Suren Baghdasaryan <surenb@google.com>,
	Lei Liu <liulei.rjpt@vivo.com>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Axel Rasmussen <axelrasmussen@google.com>,
	Yuanchu Xie <yuanchu@google.com>, Wei Xu <weixugc@google.com>,
	Kairui Song <kasong@tencent.com>,
	Tangquan Zheng <zhengtangquan@oppo.com>,
	wangtao <tao.wangtao@honor.com>
Subject: RE: [PATCH RFC] mm/mglru: lazily activate folios while folios are really mapped
Date: Thu, 26 Feb 2026 12:57:42 +0000	[thread overview]
Message-ID: <2558f7d82b9a482387960f45409e1b76@honor.com> (raw)
In-Reply-To: <20260225223712.3685-1-21cnbao@gmail.com>



> -----Original Message-----
> From: Barry Song <21cnbao@gmail.com>
> Sent: Thursday, February 26, 2026 6:37 AM
> To: akpm@linux-foundation.org; linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org; Barry Song <baohua@kernel.org>;
> wangzicheng <wangzicheng@honor.com>; Suren Baghdasaryan
> <surenb@google.com>; Lei Liu <liulei.rjpt@vivo.com>; Matthew Wilcox
> (Oracle) <willy@infradead.org>; Axel Rasmussen
> <axelrasmussen@google.com>; Yuanchu Xie <yuanchu@google.com>; Wei
> Xu <weixugc@google.com>; Kairui Song <kasong@tencent.com>; Tangquan
> Zheng <zhengtangquan@oppo.com>
> Subject: [PATCH RFC] mm/mglru: lazily activate folios while folios are really
> mapped
> 
> From: Barry Song <baohua@kernel.org>
> 
> MGLRU activates folios when a new folio is added and
> lru_gen_in_fault() returns true. The problem is that when a
> page fault occurs at address N, readahead may bring in many
> folios around N, and those folios are also activated even
> though many of them may never be accessed.
> 
> A previous attempt by Lei Liu proposed introducing a separate
> LRU for readahead[1], but that approach is likely over-designed.
> 
> This patch instead activates folios lazily, only when they are
> actually mapped, so that unused folios do not occupy higher-
> priority positions in the LRU and become harder to reclaim.
> 
> A similar optimization could also be applied to swapin readahead,
> but this RFC limits the change to file-backed folios for now.
> 
> Based on Tangquan's observations, this can significantly reduce
> file refaults on Android devices when using MGLRU.
> 
> BTW, it seems somewhat odd that all LRU APIs are defined in
> swap.c and swap.h.
> 
> [1] https://lore.kernel.org/linux-mm/20250916072226.220426-1-liulei.rjpt@vivo.com/
> 
> Cc: wangzicheng <wangzicheng@honor.com>
> Cc: Suren Baghdasaryan <surenb@google.com>
> Cc: Lei Liu <liulei.rjpt@vivo.com>
> Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: Axel Rasmussen <axelrasmussen@google.com>
> Cc: Yuanchu Xie <yuanchu@google.com>
> Cc: Wei Xu <weixugc@google.com>
> Cc: Kairui Song <kasong@tencent.com>
> Cc: Tangquan Zheng <zhengtangquan@oppo.com>
> Signed-off-by: Barry Song <baohua@kernel.org>
> ---
>  include/linux/swap.h |  1 +
>  mm/filemap.c         |  2 ++
>  mm/swap.c            | 16 +++++++++++++++-
>  3 files changed, 18 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 62fc7499b408..ce88ec560527 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -335,6 +335,7 @@ void folio_add_lru(struct folio *);
>  void folio_add_lru_vma(struct folio *, struct vm_area_struct *);
>  void mark_page_accessed(struct page *);
>  void folio_mark_accessed(struct folio *);
> +void folio_activate_on_mapped(struct folio *folio);
> 
>  static inline bool folio_may_be_lru_cached(struct folio *folio)
>  {
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 6cd7974d4ada..0b8f383facdb 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -3567,6 +3567,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>  		}
>  	}
> 
> +	folio_activate_on_mapped(folio);
>  	if (!lock_folio_maybe_drop_mmap(vmf, folio, &fpin))
>  		goto out_retry;
> 
> @@ -3926,6 +3927,7 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
>  					nr_pages, &rss, &mmap_miss, file_end);
> 
>  		folio_unlock(folio);
> +		folio_activate_on_mapped(folio);
> 	} while ((folio = next_uptodate_folio(&xas, mapping, end_pgoff)) != NULL);
>  	add_mm_counter(vma->vm_mm, folio_type, rss);
>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
> diff --git a/mm/swap.c b/mm/swap.c
> index bb19ccbece46..e50b1e794ef1 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -488,6 +488,19 @@ void folio_mark_accessed(struct folio *folio)
>  }
>  EXPORT_SYMBOL(folio_mark_accessed);
> 
> +void folio_activate_on_mapped(struct folio *folio)
> +{
> +	if (lru_gen_enabled() && lru_gen_in_fault() &&
> +			!(current->flags & PF_MEMALLOC) &&
> +			!folio_test_active(folio) &&
> +			!folio_test_unevictable(folio)) {
> +		if (folio_test_lru(folio))
> +			folio_activate(folio);
> +		else /* still in lru cache */
> +			__lru_cache_activate_folio(folio);
> +	}
> +}
> +
>  /**
>   * folio_add_lru - Add a folio to an LRU list.
>   * @folio: The folio to be added to the LRU.
> @@ -506,7 +519,8 @@ void folio_add_lru(struct folio *folio)
>  	/* see the comment in lru_gen_folio_seq() */
>  	if (lru_gen_enabled() && !folio_test_unevictable(folio) &&
>  	    lru_gen_in_fault() && !(current->flags & PF_MEMALLOC))
> -		folio_set_active(folio);
> +		if (!folio_is_file_lru(folio))
> +			folio_set_active(folio);
> 
>  	folio_batch_add_and_move(folio, lru_add);
>  }
> --
> 2.39.3 (Apple Git-146)

Hi Barry,

Activating only non-file-LRU folios in folio_add_lru() looks reasonable and
should help avoid over-protecting readahead pages that are never
actually accessed.

For our workloads, which already suffer from file under-protection, we see
two sides here: on the positive side, keeping only actually-accessed
readahead pages activated could improve performance; on the other hand,
it's not clear whether deferring activation might exacerbate the existing
under-protection and hurt performance instead.

We'll test this patch on our workloads and report back. We hope to have a
chance to discuss this topic at LSF/MM/BPF.

Thanks,
Zicheng

