From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@kernel.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
Shakeel Butt <shakeel.butt@linux.dev>,
Muchun Song <muchun.song@linux.dev>, Zi Yan <ziy@nvidia.com>,
David Hildenbrand <david@redhat.com>
Subject: [PATCH 1/5] mm: Separate folio_split_memcg() from split_page_memcg()
Date: Thu, 13 Mar 2025 14:58:50 +0000
Message-ID: <20250313145856.4118428-2-willy@infradead.org>
In-Reply-To: <20250313145856.4118428-1-willy@infradead.org>

Folios always use memcg_data to refer to the mem_cgroup, while pages
allocated with __GFP_ACCOUNT have a pointer to the obj_cgroup.  Since
the caller already knows which of the two it has, split the function
in two so that neither variant needs to check.
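
For context (not part of this patch), a minimal sketch of how the two
encodings are distinguished today.  MEMCG_DATA_KMEM comes from
include/linux/memcontrol.h; the helper itself is hypothetical:

/* Hypothetical helper, for illustration only. */
static inline bool page_memcg_is_objcg(struct page *page)
{
	/*
	 * __GFP_ACCOUNT allocations store an obj_cgroup pointer in
	 * memcg_data with MEMCG_DATA_KMEM set; LRU folios store the
	 * mem_cgroup pointer directly.
	 */
	return page->memcg_data & MEMCG_DATA_KMEM;
}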
Move the assignment of the split folios' memcg_data to the point where
we set up the other parts of each new folio.  That leaves
folio_split_memcg() handling only the memcg reference accounting.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/memcontrol.h |  7 +++++++
 mm/huge_memory.c           | 16 ++++------------
 mm/memcontrol.c            | 17 +++++++++++++----
 3 files changed, 24 insertions(+), 16 deletions(-)
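
A quick sanity check of the arithmetic in the new folio_split_memcg()
below (sketch only, not part of the patch): splitting an order-9 folio
to order-0 creates 512 folios, 511 of them new, so 511 extra css
references are taken; the original folio keeps the reference it
already holds.

	/* Sketch: splitting order 9 down to order 0 */
	unsigned old_order = 9, new_order = 0;
	unsigned new_refs = (1 << (old_order - new_order)) - 1; /* 511 */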
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 57664e2a8fb7..2b4246dc4284 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1039,6 +1039,8 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
 }
 
 void split_page_memcg(struct page *head, int old_order, int new_order);
+void folio_split_memcg(struct folio *folio, unsigned old_order,
+		unsigned new_order);
 
 static inline u64 cgroup_id_from_mm(struct mm_struct *mm)
 {
@@ -1463,6 +1465,11 @@ static inline void split_page_memcg(struct page *head, int old_order, int new_order)
 {
 }
 
+static inline void folio_split_memcg(struct folio *folio, unsigned old_order,
+		unsigned new_order)
+{
+}
+
 static inline u64 cgroup_id_from_mm(struct mm_struct *mm)
 {
 	return 0;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 14b1963898a7..514db6a5eee7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3394,6 +3394,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
 			folio_set_young(new_folio);
 		if (folio_test_idle(folio))
 			folio_set_idle(new_folio);
+#ifdef CONFIG_MEMCG
+		new_folio->memcg_data = folio->memcg_data;
+#endif
 
 		folio_xchg_last_cpupid(new_folio, folio_last_cpupid(folio));
 	}
@@ -3525,18 +3528,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 			}
 		}
 
-		/*
-		 * Reset any memcg data overlay in the tail pages.
-		 * folio_nr_pages() is unreliable until prep_compound_page()
-		 * was called again.
-		 */
-#ifdef NR_PAGES_IN_LARGE_FOLIO
-		folio->_nr_pages = 0;
-#endif
-
-
-		/* complete memcg works before add pages to LRU */
-		split_page_memcg(&folio->page, old_order, split_order);
+		folio_split_memcg(folio, old_order, split_order);
 		split_page_owner(&folio->page, old_order, split_order);
 		pgalloc_tag_split(folio, old_order, split_order);
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 87544df4c3b8..102109d0ee87 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3101,10 +3101,19 @@ void split_page_memcg(struct page *head, int old_order, int new_order)
 	for (i = new_nr; i < old_nr; i += new_nr)
 		folio_page(folio, i)->memcg_data = folio->memcg_data;
 
-	if (folio_memcg_kmem(folio))
-		obj_cgroup_get_many(__folio_objcg(folio), old_nr / new_nr - 1);
-	else
-		css_get_many(&folio_memcg(folio)->css, old_nr / new_nr - 1);
+	obj_cgroup_get_many(__folio_objcg(folio), old_nr / new_nr - 1);
+}
+
+void folio_split_memcg(struct folio *folio, unsigned old_order,
+		unsigned new_order)
+{
+	unsigned new_refs;
+
+	if (mem_cgroup_disabled() || !folio_memcg_charged(folio))
+		return;
+
+	new_refs = (1 << (old_order - new_order)) - 1;
+	css_get_many(&__folio_memcg(folio)->css, new_refs);
 }
 
 unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
--
2.47.2