From: Johannes Weiner <hannes@cmpxchg.org>
To: Matthew Wilcox <willy@infradead.org>
Cc: "Liam R. Howlett" <liam@infradead.org>,
lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org,
linux-fsdevel@vger.kernel.org,
David Hildenbrand <david@kernel.org>, Jan Kara <jack@suse.cz>,
Ryan Roberts <ryan.roberts@arm.com>,
Christian Brauner <brauner@kernel.org>
Subject: Re: [LSF/MM/BPF TOPIC] Page cache tracking with the Maple Tree
Date: Mon, 20 Apr 2026 16:18:11 -0400 [thread overview]
Message-ID: <aeaKAxrAUtfwM9ns@cmpxchg.org> (raw)
In-Reply-To: <aeKO6bKupHa2bgWt@casper.infradead.org>

On Fri, Apr 17, 2026 at 08:50:01PM +0100, Matthew Wilcox wrote:
> On Tue, Feb 24, 2026 at 12:10:26PM -0500, Liam R. Howlett wrote:
> > The Maple Tree needs some enhancements:
> > - Support for purging shadow entries from the page cache
>
> Liam and I had some preliminary discussions yesterday around this
> and we'd like some feedback if anyone has time before LSFMM.
>
> For those who aren't aware, when a folio falls off the end of the
> LRU list, we store a shadow entry in the page cache in its place
> so that if we access that page again, we know where to put its folio in
> the LRU list.
>
> But this creates a problem (documented in mm/workingset.c) where
> we can fill up memory with shadow entries. Currently, we embed a
> list_head in xa_node and add nodes which contain only shadow entries
> to a list which can be walked by a shrinker when we're low on memory.
> Ideally we wouldn't do that with the maple tree. There are a few
> options.
>
> The first question we have is whether it's best to keep nodes around to
> wait for a shrinker to kick in. Was any experimentation done to
> see whether eagerly freeing a node that contains only shadow entries
> has a bad effect on performance?
Hm, I'm not sure how that could work.
The LRU order created by readahead makes it highly likely that all the
folios in a cache node are reclaimed/made non-resident at once. Going
this route would destroy a large part of the non-resident cache the
moment it is created.
The goal is to garbage-collect the oldest shadow entries whose
distances are too long to be actionable at this point. Specifically,
their distance to lruvec->nonresident_age (per-cgroup, per-node).
In the current scheme, we just go in the order in which nodes became
all-shadow - oldest first. And we only do so lazily when the
non-resident cache is far into that territory (cache set vastly larger
than available memory). That gives us confidence that we're mostly
dropping very old entries without having to look at them one by one.
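To make the aging criterion concrete, here is a rough userspace model
of the distance test (names made up; the real code in mm/workingset.c
packs the eviction stamp into the shadow entry and handles cgroups,
NUMA nodes and counter wraparound, all of which this ignores):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Simplified model: a shadow entry records the lruvec's
 * nonresident_age at eviction time.  The entry is still actionable
 * if the distance travelled since eviction fits within the
 * workingset; anything beyond that is garbage-collection fodder.
 */
static bool shadow_still_recent(unsigned long eviction_age,
				unsigned long nonresident_age,
				unsigned long workingset_size)
{
	unsigned long distance = nonresident_age - eviction_age;

	return distance <= workingset_size;
}
```

Doing this per entry is the unpack_shadow() -> workingset_test_recent()
cost mentioned below, which is why the current scheme avoids looking at
entries individually.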
We don't have to stick with that design, but whatever replaces it
should meet the goal, or approximate it well enough.
> The second idea we talked about is that the maple tree is much more
> flexible than the radix tree. Having even a single folio in a node pins
> the entire node, so it's "free" to keep the shadow entries in that node
> around. But with the maple tree, we can be much more granular and
> delete shadow entries in arbitrary positions. So we could (for example)
> keep track of inodes which contain shadow entries and purge shadow
> entries when they reach, say, 10% of the number of pages. Or 1000
> entries, or some other threshold.
It's not the volume or the concentration of shadows, it's their age
that makes them good candidates for garbage collection.
Searching the tree for all-shadow nodes should be possible, but you'd
still have to filter for age. Naively, that would be unpack_shadow()
-> workingset_test_recent() for each one, but that's likely too heavy.
What might work is a monotonic ID counter for each node that becomes
all-shadow, then search the trees with quick comparisons for any IDs
below a certain cutoff?
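Roughly something like this (a sketch, all names hypothetical,
locking and ID wraparound ignored):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Stamp a node with a monotonic ID the moment it becomes all-shadow;
 * reclaim can then select old nodes with a single comparison instead
 * of unpacking and age-testing every entry in them.
 */
static unsigned long shadow_node_seq;	/* global stamp source */

struct shadow_node {
	unsigned long all_shadow_id;	/* 0 = not all-shadow */
};

static void node_became_all_shadow(struct shadow_node *node)
{
	node->all_shadow_id = ++shadow_node_seq;
}

static bool node_gc_candidate(const struct shadow_node *node,
			      unsigned long cutoff)
{
	return node->all_shadow_id && node->all_shadow_id <= cutoff;
}
```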
> The third idea is that instead of having an injected list_head that we
> keep a tree pointing to inodes (or even just maple tree nodes which
> contain a lot of shadow entries). That's not how list_lru works today,
> so a certain amount of development work would need to be done.
Ah, so you don't need storage inside the inode or maple tree node.
That could work well with the monotonic ID counter, you'd just scan
and reclaim through that tree, oldest node first.
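Something along these lines - a toy model where a sorted array stands
in for the external index (a real version would presumably be an
xarray or maple tree keyed by the ID; names are made up):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Oldest-first reclaim through an external index keyed by the
 * monotonic all-shadow ID.  Because IDs are assigned monotonically,
 * key order is age order: walk from the front and stop at the first
 * survivor above the cutoff.
 */
struct gc_index {
	unsigned long ids[16];	/* sorted ascending */
	size_t nr;
};

static size_t reclaim_oldest(struct gc_index *idx, unsigned long cutoff)
{
	size_t reclaimed = 0;

	while (reclaimed < idx->nr && idx->ids[reclaimed] <= cutoff)
		reclaimed++;

	/* compact the survivors to the front */
	for (size_t i = reclaimed; i < idx->nr; i++)
		idx->ids[i - reclaimed] = idx->ids[i];
	idx->nr -= reclaimed;

	return reclaimed;
}
```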
Thread overview: 4+ messages
2026-02-24 17:10 Liam R. Howlett
2026-04-17 19:50 ` Matthew Wilcox
2026-04-17 23:14 ` Liam R. Howlett
2026-04-20 20:18 ` Johannes Weiner [this message]