* [PATCH 0/3] Introduce acctmem
@ 2024-11-04 21:05 Matthew Wilcox (Oracle)
2024-11-04 21:05 ` [PATCH 1/3] mm: Opencode split_page_memcg() in __split_huge_page() Matthew Wilcox (Oracle)
` (3 more replies)
0 siblings, 4 replies; 8+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-11-04 21:05 UTC (permalink / raw)
To: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt
Cc: Matthew Wilcox (Oracle), Muchun Song, cgroups, linux-mm
As a step towards shrinking struct page, we need to remove all references
to page->memcg_data. The model I'm working towards is described at
https://kernelnewbies.org/MatthewWilcox/Memdescs
In working on this series, I'm dissatisfied with how much I've assumed
that every page belongs to a folio. There will need to be more changes
in order to split struct acctmem from struct folio in the future.
The first two patches take some steps in that direction, but I'm not
going to do any more than that in this series.
Matthew Wilcox (Oracle) (3):
mm: Opencode split_page_memcg() in __split_huge_page()
mm: Simplify split_page_memcg()
mm: Introduce acctmem
include/linux/memcontrol.h | 28 ++++++++++++++++++++++++++--
include/linux/mm_types.h | 6 +++---
mm/huge_memory.c | 11 +++++++++--
mm/memcontrol.c | 29 ++++++++++++++++-------------
mm/page_alloc.c | 8 ++++----
mm/page_owner.c | 2 +-
mm/slab.h | 2 +-
7 files changed, 60 insertions(+), 26 deletions(-)
--
2.43.0
* [PATCH 1/3] mm: Opencode split_page_memcg() in __split_huge_page()
2024-11-04 21:05 [PATCH 0/3] Introduce acctmem Matthew Wilcox (Oracle)
@ 2024-11-04 21:05 ` Matthew Wilcox (Oracle)
2024-11-05 17:30 ` David Hildenbrand
2024-11-04 21:05 ` [PATCH 2/3] mm: Simplify split_page_memcg() Matthew Wilcox (Oracle)
` (2 subsequent siblings)
3 siblings, 1 reply; 8+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-11-04 21:05 UTC (permalink / raw)
To: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt
Cc: Matthew Wilcox (Oracle), Muchun Song, cgroups, linux-mm
This is in preparation for only handling kmem pages in
__split_huge_page().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/huge_memory.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f92068864469..44d25a74b611 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3234,6 +3234,10 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
folio_set_large_rmappable(new_folio);
}
+#ifdef CONFIG_MEMCG
+ new_folio->memcg_data = folio->memcg_data;
+#endif
+
/* Finally unfreeze refcount. Additional reference from page cache. */
page_ref_unfreeze(page_tail,
1 + ((!folio_test_anon(folio) || folio_test_swapcache(folio)) ?
@@ -3267,8 +3271,11 @@ static void __split_huge_page(struct page *page, struct list_head *list,
int order = folio_order(folio);
unsigned int nr = 1 << order;
- /* complete memcg works before add pages to LRU */
- split_page_memcg(head, order, new_order);
+#ifdef CONFIG_MEMCG
+ if (folio_memcg_charged(folio))
+ css_get_many(&folio_memcg(folio)->css,
+ (1 << (order - new_order)) - 1);
+#endif
if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
offset = swap_cache_index(folio->swap);
--
2.43.0
* [PATCH 2/3] mm: Simplify split_page_memcg()
2024-11-04 21:05 [PATCH 0/3] Introduce acctmem Matthew Wilcox (Oracle)
2024-11-04 21:05 ` [PATCH 1/3] mm: Opencode split_page_memcg() in __split_huge_page() Matthew Wilcox (Oracle)
@ 2024-11-04 21:05 ` Matthew Wilcox (Oracle)
2024-11-05 17:34 ` David Hildenbrand
2024-11-08 15:06 ` Zi Yan
2024-11-04 21:06 ` [PATCH 3/3] mm: Introduce acctmem Matthew Wilcox (Oracle)
2024-11-06 0:09 ` [PATCH 0/3] " Roman Gushchin
3 siblings, 2 replies; 8+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-11-04 21:05 UTC (permalink / raw)
To: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt
Cc: Matthew Wilcox (Oracle), Muchun Song, cgroups, linux-mm
The last argument to split_page_memcg() is now always 0, so remove it,
effectively reverting commit b8791381d7ed.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
include/linux/memcontrol.h | 4 ++--
mm/memcontrol.c | 26 ++++++++++++++------------
mm/page_alloc.c | 4 ++--
3 files changed, 18 insertions(+), 16 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 5502aa8e138e..a787080f814f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1044,7 +1044,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
rcu_read_unlock();
}
-void split_page_memcg(struct page *head, int old_order, int new_order);
+void split_page_memcg(struct page *first, int order);
#else /* CONFIG_MEMCG */
@@ -1463,7 +1463,7 @@ void count_memcg_event_mm(struct mm_struct *mm, enum vm_event_item idx)
{
}
-static inline void split_page_memcg(struct page *head, int old_order, int new_order)
+static inline void split_page_memcg(struct page *first, int order)
{
}
#endif /* CONFIG_MEMCG */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5e44d6e7591e..506439a5dcfe 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3034,25 +3034,27 @@ void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
}
/*
- * Because folio_memcg(head) is not set on tails, set it now.
+ * The memcg data is only set on the first page, now transfer it to all the
+ * other pages.
*/
-void split_page_memcg(struct page *head, int old_order, int new_order)
+void split_page_memcg(struct page *first, int order)
{
- struct folio *folio = page_folio(head);
+ unsigned long memcg_data = first->memcg_data;
+ struct obj_cgroup *objcg;
int i;
- unsigned int old_nr = 1 << old_order;
- unsigned int new_nr = 1 << new_order;
+ unsigned int nr = 1 << order;
- if (mem_cgroup_disabled() || !folio_memcg_charged(folio))
+ if (!memcg_data)
return;
- for (i = new_nr; i < old_nr; i += new_nr)
- folio_page(folio, i)->memcg_data = folio->memcg_data;
+ VM_BUG_ON_PAGE((memcg_data & OBJEXTS_FLAGS_MASK) != MEMCG_DATA_KMEM,
+ first);
+ objcg = (void *)(memcg_data & ~OBJEXTS_FLAGS_MASK);
- if (folio_memcg_kmem(folio))
- obj_cgroup_get_many(__folio_objcg(folio), old_nr / new_nr - 1);
- else
- css_get_many(&folio_memcg(folio)->css, old_nr / new_nr - 1);
+ for (i = 1; i < nr; i++)
+ first[i].memcg_data = memcg_data;
+
+ obj_cgroup_get_many(objcg, nr - 1);
}
unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 47048b39b8ca..5523654c9759 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2823,7 +2823,7 @@ void split_page(struct page *page, unsigned int order)
set_page_refcounted(page + i);
split_page_owner(page, order, 0);
pgalloc_tag_split(page_folio(page), order, 0);
- split_page_memcg(page, order, 0);
+ split_page_memcg(page, order);
}
EXPORT_SYMBOL_GPL(split_page);
@@ -5020,7 +5020,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
split_page_owner(page, order, 0);
pgalloc_tag_split(page_folio(page), order, 0);
- split_page_memcg(page, order, 0);
+ split_page_memcg(page, order);
while (page < --last)
set_page_refcounted(last);
--
2.43.0
* [PATCH 3/3] mm: Introduce acctmem
2024-11-04 21:05 [PATCH 0/3] Introduce acctmem Matthew Wilcox (Oracle)
2024-11-04 21:05 ` [PATCH 1/3] mm: Opencode split_page_memcg() in __split_huge_page() Matthew Wilcox (Oracle)
2024-11-04 21:05 ` [PATCH 2/3] mm: Simplify split_page_memcg() Matthew Wilcox (Oracle)
@ 2024-11-04 21:06 ` Matthew Wilcox (Oracle)
2024-11-06 0:09 ` [PATCH 0/3] " Roman Gushchin
3 siblings, 0 replies; 8+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-11-04 21:06 UTC (permalink / raw)
To: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt
Cc: Matthew Wilcox (Oracle), Muchun Song, cgroups, linux-mm
struct acctmem is used for MEMCG_DATA_KMEM allocations. We're still a
bit loose with our casting to folios instead of acctmem, but that's a
problem to solve later. The build asserts ensure that this carelessness
doesn't cause any new bugs today.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
include/linux/memcontrol.h | 24 ++++++++++++++++++++++++
include/linux/mm_types.h | 6 +++---
mm/memcontrol.c | 7 ++++---
mm/page_alloc.c | 4 ++--
mm/page_owner.c | 2 +-
mm/slab.h | 2 +-
6 files changed, 35 insertions(+), 10 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index a787080f814f..19ee98abea0f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -30,6 +30,30 @@ struct page;
struct mm_struct;
struct kmem_cache;
+/*
+ * For now, this data structure overlays struct page. Eventually it
+ * will be separately allocated and become a memdesc type of its own
+ * like slab and ptdesc. memcg_data is only valid on the first page
+ * of an allocation, but that allocation might not be compound!
+ */
+struct acctmem {
+ unsigned long __page_flags;
+ unsigned long __padding[5];
+ unsigned int ___padding[2];
+ unsigned long memcg_data;
+};
+#ifdef CONFIG_MEMCG
+static_assert(offsetof(struct page, __acct_memcg_data) ==
+ offsetof(struct acctmem, memcg_data));
+static_assert(offsetof(struct folio, memcg_data) ==
+ offsetof(struct acctmem, memcg_data));
+static_assert(sizeof(struct acctmem) <= sizeof(struct page));
+#endif
+
+#define page_acctmem(_page) (_Generic((_page), \
+ const struct page *: (const struct acctmem *)(_page), \
+ struct page *: (struct acctmem *)(_page)))
+
/* Cgroup-specific page state, on top of universal node page state */
enum memcg_stat_item {
MEMCG_SWAP = NR_VM_NODE_STAT_ITEMS,
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 2b694f9a4518..274b125df0df 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -181,7 +181,7 @@ struct page {
atomic_t _refcount;
#ifdef CONFIG_MEMCG
- unsigned long memcg_data;
+ unsigned long __acct_memcg_data;
#elif defined(CONFIG_SLAB_OBJ_EXT)
unsigned long _unused_slab_obj_exts;
#endif
@@ -410,7 +410,7 @@ FOLIO_MATCH(private, private);
FOLIO_MATCH(_mapcount, _mapcount);
FOLIO_MATCH(_refcount, _refcount);
#ifdef CONFIG_MEMCG
-FOLIO_MATCH(memcg_data, memcg_data);
+FOLIO_MATCH(__acct_memcg_data, memcg_data);
#endif
#if defined(WANT_PAGE_VIRTUAL)
FOLIO_MATCH(virtual, virtual);
@@ -499,7 +499,7 @@ TABLE_MATCH(rcu_head, pt_rcu_head);
TABLE_MATCH(page_type, __page_type);
TABLE_MATCH(_refcount, __page_refcount);
#ifdef CONFIG_MEMCG
-TABLE_MATCH(memcg_data, pt_memcg_data);
+TABLE_MATCH(__acct_memcg_data, pt_memcg_data);
#endif
#undef TABLE_MATCH
static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 506439a5dcfe..89c9d206c209 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2661,6 +2661,7 @@ static int obj_cgroup_charge_pages(struct obj_cgroup *objcg, gfp_t gfp,
*/
int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
{
+ struct acctmem *acctmem = page_acctmem(page);
struct obj_cgroup *objcg;
int ret = 0;
@@ -2669,7 +2670,7 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
ret = obj_cgroup_charge_pages(objcg, gfp, 1 << order);
if (!ret) {
obj_cgroup_get(objcg);
- page->memcg_data = (unsigned long)objcg |
+ acctmem->memcg_data = (unsigned long)objcg |
MEMCG_DATA_KMEM;
return 0;
}
@@ -3039,7 +3040,7 @@ void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
*/
void split_page_memcg(struct page *first, int order)
{
- unsigned long memcg_data = first->memcg_data;
+ unsigned long memcg_data = page_acctmem(first)->memcg_data;
struct obj_cgroup *objcg;
int i;
unsigned int nr = 1 << order;
@@ -3052,7 +3053,7 @@ void split_page_memcg(struct page *first, int order)
objcg = (void *)(memcg_data & ~OBJEXTS_FLAGS_MASK);
for (i = 1; i < nr; i++)
- first[i].memcg_data = memcg_data;
+ page_acctmem(first + i)->memcg_data = memcg_data;
obj_cgroup_get_many(objcg, nr - 1);
}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5523654c9759..07d9302882b2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -870,7 +870,7 @@ static inline bool page_expected_state(struct page *page,
if (unlikely((unsigned long)page->mapping |
page_ref_count(page) |
#ifdef CONFIG_MEMCG
- page->memcg_data |
+ page_acctmem(page)->memcg_data |
#endif
#ifdef CONFIG_PAGE_POOL
((page->pp_magic & ~0x3UL) == PP_SIGNATURE) |
@@ -898,7 +898,7 @@ static const char *page_bad_reason(struct page *page, unsigned long flags)
bad_reason = "PAGE_FLAGS_CHECK_AT_FREE flag(s) set";
}
#ifdef CONFIG_MEMCG
- if (unlikely(page->memcg_data))
+ if (unlikely(page_acctmem(page)->memcg_data))
bad_reason = "page still charged to cgroup";
#endif
#ifdef CONFIG_PAGE_POOL
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 2d6360eaccbb..71e183f8988b 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -506,7 +506,7 @@ static inline int print_page_owner_memcg(char *kbuf, size_t count, int ret,
char name[80];
rcu_read_lock();
- memcg_data = READ_ONCE(page->memcg_data);
+ memcg_data = READ_ONCE(page_acctmem(page)->memcg_data);
if (!memcg_data)
goto out_unlock;
diff --git a/mm/slab.h b/mm/slab.h
index 632fedd71fea..ee9ab84f7c4d 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -103,7 +103,7 @@ SLAB_MATCH(flags, __page_flags);
SLAB_MATCH(compound_head, slab_cache); /* Ensure bit 0 is clear */
SLAB_MATCH(_refcount, __page_refcount);
#ifdef CONFIG_MEMCG
-SLAB_MATCH(memcg_data, obj_exts);
+SLAB_MATCH(__acct_memcg_data, obj_exts);
#elif defined(CONFIG_SLAB_OBJ_EXT)
SLAB_MATCH(_unused_slab_obj_exts, obj_exts);
#endif
--
2.43.0
* Re: [PATCH 1/3] mm: Opencode split_page_memcg() in __split_huge_page()
2024-11-04 21:05 ` [PATCH 1/3] mm: Opencode split_page_memcg() in __split_huge_page() Matthew Wilcox (Oracle)
@ 2024-11-05 17:30 ` David Hildenbrand
0 siblings, 0 replies; 8+ messages in thread
From: David Hildenbrand @ 2024-11-05 17:30 UTC (permalink / raw)
To: Matthew Wilcox (Oracle),
Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt
Cc: Muchun Song, cgroups, linux-mm
On 04.11.24 22:05, Matthew Wilcox (Oracle) wrote:
> This is in preparation for only handling kmem pages in
> __split_huge_page().
Did you mean "in split_page_memcg()"?
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
> mm/huge_memory.c | 11 +++++++++--
> 1 file changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index f92068864469..44d25a74b611 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3234,6 +3234,10 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
> folio_set_large_rmappable(new_folio);
> }
>
> +#ifdef CONFIG_MEMCG
> + new_folio->memcg_data = folio->memcg_data;
> +#endif
> +
> /* Finally unfreeze refcount. Additional reference from page cache. */
> page_ref_unfreeze(page_tail,
> 1 + ((!folio_test_anon(folio) || folio_test_swapcache(folio)) ?
> @@ -3267,8 +3271,11 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> int order = folio_order(folio);
> unsigned int nr = 1 << order;
>
> - /* complete memcg works before add pages to LRU */
> - split_page_memcg(head, order, new_order);
> +#ifdef CONFIG_MEMCG
> + if (folio_memcg_charged(folio))
Do we have the mem_cgroup_disabled() call here?
Apart from that LGTM.
--
Cheers,
David / dhildenb
* Re: [PATCH 2/3] mm: Simplify split_page_memcg()
2024-11-04 21:05 ` [PATCH 2/3] mm: Simplify split_page_memcg() Matthew Wilcox (Oracle)
@ 2024-11-05 17:34 ` David Hildenbrand
2024-11-08 15:06 ` Zi Yan
1 sibling, 0 replies; 8+ messages in thread
From: David Hildenbrand @ 2024-11-05 17:34 UTC (permalink / raw)
To: Matthew Wilcox (Oracle),
Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt
Cc: Muchun Song, cgroups, linux-mm
On 04.11.24 22:05, Matthew Wilcox (Oracle) wrote:
> The last argument to split_page_memcg() is now always 0, so remove it,
> effectively reverting commit b8791381d7ed.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
> include/linux/memcontrol.h | 4 ++--
> mm/memcontrol.c | 26 ++++++++++++++------------
> mm/page_alloc.c | 4 ++--
> 3 files changed, 18 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 5502aa8e138e..a787080f814f 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1044,7 +1044,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
> rcu_read_unlock();
> }
>
> -void split_page_memcg(struct page *head, int old_order, int new_order);
> +void split_page_memcg(struct page *first, int order);
>
> #else /* CONFIG_MEMCG */
>
> @@ -1463,7 +1463,7 @@ void count_memcg_event_mm(struct mm_struct *mm, enum vm_event_item idx)
> {
> }
>
> -static inline void split_page_memcg(struct page *head, int old_order, int new_order)
> +static inline void split_page_memcg(struct page *first, int order)
> {
> }
> #endif /* CONFIG_MEMCG */
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 5e44d6e7591e..506439a5dcfe 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3034,25 +3034,27 @@ void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
> }
>
> /*
> - * Because folio_memcg(head) is not set on tails, set it now.
> + * The memcg data is only set on the first page, now transfer it to all the
> + * other pages.
> */
> -void split_page_memcg(struct page *head, int old_order, int new_order)
> +void split_page_memcg(struct page *first, int order)
> {
> - struct folio *folio = page_folio(head);
> + unsigned long memcg_data = first->memcg_data;
> + struct obj_cgroup *objcg;
> int i;
> - unsigned int old_nr = 1 << old_order;
> - unsigned int new_nr = 1 << new_order;
> + unsigned int nr = 1 << order;
>
> - if (mem_cgroup_disabled() || !folio_memcg_charged(folio))
> + if (!memcg_data)
> return;
>
> - for (i = new_nr; i < old_nr; i += new_nr)
> - folio_page(folio, i)->memcg_data = folio->memcg_data;
> + VM_BUG_ON_PAGE((memcg_data & OBJEXTS_FLAGS_MASK) != MEMCG_DATA_KMEM,
> + first);
> + objcg = (void *)(memcg_data & ~OBJEXTS_FLAGS_MASK);
I'm not sure if dropping the
if (mem_cgroup_disabled() || !folio_memcg_charged(folio))
and adding that OBJEXTS_FLAGS_MASK magic here is the right thing to do.
It also isn't really part of a revert of b8791381d7ed.
--
Cheers,
David / dhildenb
* Re: [PATCH 0/3] Introduce acctmem
2024-11-04 21:05 [PATCH 0/3] Introduce acctmem Matthew Wilcox (Oracle)
` (2 preceding siblings ...)
2024-11-04 21:06 ` [PATCH 3/3] mm: Introduce acctmem Matthew Wilcox (Oracle)
@ 2024-11-06 0:09 ` Roman Gushchin
3 siblings, 0 replies; 8+ messages in thread
From: Roman Gushchin @ 2024-11-06 0:09 UTC (permalink / raw)
To: Matthew Wilcox (Oracle)
Cc: Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song,
cgroups, linux-mm
On Mon, Nov 04, 2024 at 09:05:57PM +0000, Matthew Wilcox wrote:
> As a step towards shrinking struct page, we need to remove all references
> to page->memcg_data. The model I'm working towards is described at
> https://kernelnewbies.org/MatthewWilcox/Memdescs
>
> In working on this series, I'm dissatisfied with how much I've assumed
> that every page belongs to a folio. There will need to be more changes
> in order to split struct acctmem from struct folio in the future.
> The first two patches take some steps in that direction, but I'm not
> going to do any more than that in this series.
Is this series supposed to be merged in its current form, or is it an RFC?
The code looks good to me and the approach makes sense, but the series
needs somewhat more formal commit logs and a series description.
Also, it would be great to have at least a short description of
the high-level design in place, without a link to an external site.
Thanks!
* Re: [PATCH 2/3] mm: Simplify split_page_memcg()
2024-11-04 21:05 ` [PATCH 2/3] mm: Simplify split_page_memcg() Matthew Wilcox (Oracle)
2024-11-05 17:34 ` David Hildenbrand
@ 2024-11-08 15:06 ` Zi Yan
1 sibling, 0 replies; 8+ messages in thread
From: Zi Yan @ 2024-11-08 15:06 UTC (permalink / raw)
To: Matthew Wilcox (Oracle)
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
Muchun Song, cgroups, linux-mm
On 4 Nov 2024, at 16:05, Matthew Wilcox (Oracle) wrote:
> The last argument to split_page_memcg() is now always 0, so remove it,
> effectively reverting commit b8791381d7ed.
You forgot to cc me about this.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
> include/linux/memcontrol.h | 4 ++--
> mm/memcontrol.c | 26 ++++++++++++++------------
> mm/page_alloc.c | 4 ++--
> 3 files changed, 18 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 5502aa8e138e..a787080f814f 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1044,7 +1044,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
> rcu_read_unlock();
> }
>
> -void split_page_memcg(struct page *head, int old_order, int new_order);
> +void split_page_memcg(struct page *first, int order);
>
> #else /* CONFIG_MEMCG */
>
> @@ -1463,7 +1463,7 @@ void count_memcg_event_mm(struct mm_struct *mm, enum vm_event_item idx)
> {
> }
>
> -static inline void split_page_memcg(struct page *head, int old_order, int new_order)
> +static inline void split_page_memcg(struct page *first, int order)
> {
> }
> #endif /* CONFIG_MEMCG */
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 5e44d6e7591e..506439a5dcfe 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3034,25 +3034,27 @@ void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
> }
>
> /*
> - * Because folio_memcg(head) is not set on tails, set it now.
> + * The memcg data is only set on the first page, now transfer it to all the
> + * other pages.
> */
> -void split_page_memcg(struct page *head, int old_order, int new_order)
> +void split_page_memcg(struct page *first, int order)
Since split_page_memcg() now only handles kmem pages, it would be better to
rename it to split_kmem_page_memcg() to avoid confusion. In particular, if
this patchset were merged before my folio_split() patchset and I did not
notice, my patchset would cause memcg issues, since it still uses
split_page_memcg() during folio split.
--
Best Regards,
Yan, Zi