From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@kernel.org>,
Shakeel Butt <shakeel.butt@linux.dev>,
Yosry Ahmed <yosry.ahmed@linux.dev>, Zi Yan <ziy@nvidia.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Usama Arif <usama.arif@linux.dev>,
Kiryl Shutsemau <kas@kernel.org>,
Dave Chinner <david@fromorbit.com>,
Roman Gushchin <roman.gushchin@linux.dev>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 6/7] mm: list_lru: introduce folio_memcg_list_lru_alloc()
Date: Wed, 18 Mar 2026 15:53:24 -0400
Message-ID: <20260318200352.1039011-7-hannes@cmpxchg.org>
In-Reply-To: <20260318200352.1039011-1-hannes@cmpxchg.org>
memcg_list_lru_alloc() is called every time an object that may end up
on the list_lru is created. It needs to quickly check if the list_lru
heads for the memcg already exist, and allocate them when they don't.
Doing this with folio objects is tricky: folio_memcg() is not stable
and requires either RCU protection or pinning the cgroup. But it's
desirable to make the existence check lightweight under RCU, and only
pin the memcg when we need to allocate list_lru heads and may block.
In preparation for switching the THP shrinker to list_lru, add a
helper that resolves the memcg from a folio and allocates its
list_lru heads.
Reviewed-by: David Hildenbrand (Arm) <david@kernel.org>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
include/linux/list_lru.h | 12 ++++++++++++
mm/list_lru.c | 39 ++++++++++++++++++++++++++++++++++-----
2 files changed, 46 insertions(+), 5 deletions(-)
diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index 4afc02deb44d..4bd29b61c59a 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -81,6 +81,18 @@ static inline int list_lru_init_memcg_key(struct list_lru *lru, struct shrinker
int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
gfp_t gfp);
+
+#ifdef CONFIG_MEMCG
+int folio_memcg_list_lru_alloc(struct folio *folio, struct list_lru *lru,
+ gfp_t gfp);
+#else
+static inline int folio_memcg_list_lru_alloc(struct folio *folio,
+ struct list_lru *lru, gfp_t gfp)
+{
+ return 0;
+}
+#endif
+
void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *parent);
/**
diff --git a/mm/list_lru.c b/mm/list_lru.c
index b817c0f48f73..1ccdd45b1d14 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -537,17 +537,14 @@ static inline bool memcg_list_lru_allocated(struct mem_cgroup *memcg,
return idx < 0 || xa_load(&lru->xa, idx);
}
-int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
- gfp_t gfp)
+static int __memcg_list_lru_alloc(struct mem_cgroup *memcg,
+ struct list_lru *lru, gfp_t gfp)
{
unsigned long flags;
struct list_lru_memcg *mlru = NULL;
struct mem_cgroup *pos, *parent;
XA_STATE(xas, &lru->xa, 0);
- if (!list_lru_memcg_aware(lru) || memcg_list_lru_allocated(memcg, lru))
- return 0;
-
gfp &= GFP_RECLAIM_MASK;
/*
* Because the list_lru can be reparented to the parent cgroup's
@@ -588,6 +585,38 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
return xas_error(&xas);
}
+
+int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
+ gfp_t gfp)
+{
+ if (!list_lru_memcg_aware(lru) || memcg_list_lru_allocated(memcg, lru))
+ return 0;
+ return __memcg_list_lru_alloc(memcg, lru, gfp);
+}
+
+int folio_memcg_list_lru_alloc(struct folio *folio, struct list_lru *lru,
+ gfp_t gfp)
+{
+ struct mem_cgroup *memcg;
+ int res;
+
+ if (!list_lru_memcg_aware(lru))
+ return 0;
+
+ /* Fast path when list_lru heads already exist */
+ rcu_read_lock();
+ memcg = folio_memcg(folio);
+ res = memcg_list_lru_allocated(memcg, lru);
+ rcu_read_unlock();
+ if (likely(res))
+ return 0;
+
+ /* Allocation may block, pin the memcg */
+ memcg = get_mem_cgroup_from_folio(folio);
+ res = __memcg_list_lru_alloc(memcg, lru, gfp);
+ mem_cgroup_put(memcg);
+ return res;
+}
#else
static inline void memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
{
--
2.53.0