* [PATCH] memcg: consolidate private id refcount get/put helpers
@ 2026-02-13 10:03 Kairui Song
2026-02-13 20:08 ` Johannes Weiner
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Kairui Song @ 2026-02-13 10:03 UTC (permalink / raw)
To: cgroups, linux-mm
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
Muchun Song, Andrew Morton, linux-kernel, Kairui Song
From: Kairui Song <kasong@tencent.com>
We currently have two different sets of helpers for getting and putting
the private ID refcount: one for order-0 folios and one for large
folios. This is redundant. Use a single set instead: always acquire
references equal to the swapout folio size (unless the count is already
zero), and put the same number of references if the charge fails. Since
the folio size can't change, there is no need to update the refcount
separately for the tail pages.

The same applies to freeing, so only one pair of get/put helpers is
needed now.

The performance might be slightly better, too: both "inc unless zero"
and "add unless zero" use the same cmpxchg implementation, so for large
folios this saves an atomic operation, and for both order-0 and large
folios it saves a branch.
Signed-off-by: Kairui Song <kasong@tencent.com>
---
mm/memcontrol-v1.c | 5 +----
mm/memcontrol-v1.h | 4 ++--
mm/memcontrol.c | 29 +++++++----------------------
3 files changed, 10 insertions(+), 28 deletions(-)
diff --git a/mm/memcontrol-v1.c b/mm/memcontrol-v1.c
index 0e3d972fad33..c28a060abc64 100644
--- a/mm/memcontrol-v1.c
+++ b/mm/memcontrol-v1.c
@@ -635,11 +635,8 @@ void memcg1_swapout(struct folio *folio, swp_entry_t entry)
* have an ID allocated to it anymore, charge the closest online
* ancestor for the swap instead and transfer the memory+swap charge.
*/
- swap_memcg = mem_cgroup_private_id_get_online(memcg);
nr_entries = folio_nr_pages(folio);
- /* Get references for the tail pages, too */
- if (nr_entries > 1)
- mem_cgroup_private_id_get_many(swap_memcg, nr_entries - 1);
+ swap_memcg = mem_cgroup_private_id_get_online(memcg, nr_entries);
mod_memcg_state(swap_memcg, MEMCG_SWAP, nr_entries);
swap_cgroup_record(folio, mem_cgroup_private_id(swap_memcg), entry);
diff --git a/mm/memcontrol-v1.h b/mm/memcontrol-v1.h
index 49933925b4ba..dbbd0e13d4ff 100644
--- a/mm/memcontrol-v1.h
+++ b/mm/memcontrol-v1.h
@@ -28,8 +28,8 @@ unsigned long memcg_events(struct mem_cgroup *memcg, int event);
unsigned long memcg_page_state_output(struct mem_cgroup *memcg, int item);
int memory_stat_show(struct seq_file *m, void *v);
-void mem_cgroup_private_id_get_many(struct mem_cgroup *memcg, unsigned int n);
-struct mem_cgroup *mem_cgroup_private_id_get_online(struct mem_cgroup *memcg);
+struct mem_cgroup *mem_cgroup_private_id_get_online(struct mem_cgroup *memcg,
+ unsigned int n);
/* Cgroup v1-specific declarations */
#ifdef CONFIG_MEMCG_V1
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 007413a53b45..4425ef51feae 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3564,13 +3564,7 @@ static void mem_cgroup_private_id_remove(struct mem_cgroup *memcg)
}
}
-void __maybe_unused mem_cgroup_private_id_get_many(struct mem_cgroup *memcg,
- unsigned int n)
-{
- refcount_add(n, &memcg->id.ref);
-}
-
-static void mem_cgroup_private_id_put_many(struct mem_cgroup *memcg, unsigned int n)
+static inline void mem_cgroup_private_id_put(struct mem_cgroup *memcg, unsigned int n)
{
if (refcount_sub_and_test(n, &memcg->id.ref)) {
mem_cgroup_private_id_remove(memcg);
@@ -3580,14 +3574,9 @@ static void mem_cgroup_private_id_put_many(struct mem_cgroup *memcg, unsigned in
}
}
-static inline void mem_cgroup_private_id_put(struct mem_cgroup *memcg)
+struct mem_cgroup *mem_cgroup_private_id_get_online(struct mem_cgroup *memcg, unsigned int n)
{
- mem_cgroup_private_id_put_many(memcg, 1);
-}
-
-struct mem_cgroup *mem_cgroup_private_id_get_online(struct mem_cgroup *memcg)
-{
- while (!refcount_inc_not_zero(&memcg->id.ref)) {
+ while (!refcount_add_not_zero(n, &memcg->id.ref)) {
/*
* The root cgroup cannot be destroyed, so it's refcount must
* always be >= 1.
@@ -3888,7 +3877,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
drain_all_stock(memcg);
- mem_cgroup_private_id_put(memcg);
+ mem_cgroup_private_id_put(memcg, 1);
}
static void mem_cgroup_css_released(struct cgroup_subsys_state *css)
@@ -5170,19 +5159,15 @@ int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry)
return 0;
}
- memcg = mem_cgroup_private_id_get_online(memcg);
+ memcg = mem_cgroup_private_id_get_online(memcg, nr_pages);
if (!mem_cgroup_is_root(memcg) &&
!page_counter_try_charge(&memcg->swap, nr_pages, &counter)) {
memcg_memory_event(memcg, MEMCG_SWAP_MAX);
memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
- mem_cgroup_private_id_put(memcg);
+ mem_cgroup_private_id_put(memcg, nr_pages);
return -ENOMEM;
}
-
- /* Get references for the tail pages, too */
- if (nr_pages > 1)
- mem_cgroup_private_id_get_many(memcg, nr_pages - 1);
mod_memcg_state(memcg, MEMCG_SWAP, nr_pages);
swap_cgroup_record(folio, mem_cgroup_private_id(memcg), entry);
@@ -5211,7 +5196,7 @@ void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
page_counter_uncharge(&memcg->swap, nr_pages);
}
mod_memcg_state(memcg, MEMCG_SWAP, -nr_pages);
- mem_cgroup_private_id_put_many(memcg, nr_pages);
+ mem_cgroup_private_id_put(memcg, nr_pages);
}
rcu_read_unlock();
}
---
base-commit: 9fff1ab283e0982c2b8e73f1d2246fd38caf40c8
change-id: 20260213-memcg-privid-6ba2773b5ca2
Best regards,
--
Kairui Song <kasong@tencent.com>
* Re: [PATCH] memcg: consolidate private id refcount get/put helpers
2026-02-13 10:03 [PATCH] memcg: consolidate private id refcount get/put helpers Kairui Song
@ 2026-02-13 20:08 ` Johannes Weiner
2026-02-14 0:56 ` Chen Ridong
2026-02-14 5:55 ` Shakeel Butt
2 siblings, 0 replies; 4+ messages in thread
From: Johannes Weiner @ 2026-02-13 20:08 UTC (permalink / raw)
To: Kairui Song
Cc: cgroups, linux-mm, Michal Hocko, Roman Gushchin, Shakeel Butt,
Muchun Song, Andrew Morton, linux-kernel, Kairui Song
On Fri, Feb 13, 2026 at 06:03:32PM +0800, Kairui Song wrote:
> From: Kairui Song <kasong@tencent.com>
>
> We currently have two different sets of helpers for getting and putting
> the private ID refcount: one for order-0 folios and one for large
> folios. This is redundant. Use a single set instead: always acquire
> references equal to the swapout folio size (unless the count is already
> zero), and put the same number of references if the charge fails. Since
> the folio size can't change, there is no need to update the refcount
> separately for the tail pages.
>
> The same applies to freeing, so only one pair of get/put helpers is
> needed now.
>
> The performance might be slightly better, too: both "inc unless zero"
> and "add unless zero" use the same cmpxchg implementation, so for large
> folios this saves an atomic operation, and for both order-0 and large
> folios it saves a branch.
>
> Signed-off-by: Kairui Song <kasong@tencent.com>
Nice improvement!
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
* Re: [PATCH] memcg: consolidate private id refcount get/put helpers
2026-02-13 10:03 [PATCH] memcg: consolidate private id refcount get/put helpers Kairui Song
2026-02-13 20:08 ` Johannes Weiner
@ 2026-02-14 0:56 ` Chen Ridong
2026-02-14 5:55 ` Shakeel Butt
2 siblings, 0 replies; 4+ messages in thread
From: Chen Ridong @ 2026-02-14 0:56 UTC (permalink / raw)
To: Kairui Song, cgroups, linux-mm
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
Muchun Song, Andrew Morton, linux-kernel, Kairui Song
On 2026/2/13 18:03, Kairui Song wrote:
> From: Kairui Song <kasong@tencent.com>
>
> We currently have two different sets of helpers for getting and putting
> the private ID refcount: one for order-0 folios and one for large
> folios. This is redundant. Use a single set instead: always acquire
> references equal to the swapout folio size (unless the count is already
> zero), and put the same number of references if the charge fails. Since
> the folio size can't change, there is no need to update the refcount
> separately for the tail pages.
>
> The same applies to freeing, so only one pair of get/put helpers is
> needed now.
>
> The performance might be slightly better, too: both "inc unless zero"
> and "add unless zero" use the same cmpxchg implementation, so for large
> folios this saves an atomic operation, and for both order-0 and large
> folios it saves a branch.
>
> Signed-off-by: Kairui Song <kasong@tencent.com>
> [...]
Reviewed-by: Chen Ridong <chenridong@huaweicloud.com>
--
Best regards,
Ridong
* Re: [PATCH] memcg: consolidate private id refcount get/put helpers
2026-02-13 10:03 [PATCH] memcg: consolidate private id refcount get/put helpers Kairui Song
2026-02-13 20:08 ` Johannes Weiner
2026-02-14 0:56 ` Chen Ridong
@ 2026-02-14 5:55 ` Shakeel Butt
2 siblings, 0 replies; 4+ messages in thread
From: Shakeel Butt @ 2026-02-14 5:55 UTC (permalink / raw)
To: Kairui Song
Cc: cgroups, linux-mm, Johannes Weiner, Michal Hocko, Roman Gushchin,
Muchun Song, Andrew Morton, linux-kernel, Kairui Song
On Fri, Feb 13, 2026 at 2:03 AM Kairui Song <ryncsn@gmail.com> wrote:
>
> From: Kairui Song <kasong@tencent.com>
>
> We currently have two different sets of helpers for getting and putting
> the private ID refcount: one for order-0 folios and one for large
> folios. This is redundant. Use a single set instead: always acquire
> references equal to the swapout folio size (unless the count is already
> zero), and put the same number of references if the charge fails. Since
> the folio size can't change, there is no need to update the refcount
> separately for the tail pages.
>
> The same applies to freeing, so only one pair of get/put helpers is
> needed now.
>
> The performance might be slightly better, too: both "inc unless zero"
> and "add unless zero" use the same cmpxchg implementation, so for large
> folios this saves an atomic operation, and for both order-0 and large
> folios it saves a branch.
>
> Signed-off-by: Kairui Song <kasong@tencent.com>
Acked-by: Shakeel Butt <shakeel.butt@gmail.com>