* [PATCH] memcg: optimize memcg_rstat_updated
@ 2025-04-09 23:49 Shakeel Butt
2025-04-10 1:20 ` Waiman Long
From: Shakeel Butt @ 2025-04-09 23:49 UTC (permalink / raw)
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
Yosry Ahmed, linux-mm, cgroups, linux-kernel, Meta kernel team
Currently the kernel maintains a per-memcg count of stats updates, which
is needed to implement the stats flushing threshold. On the update side,
the update is added to the per-cpu per-memcg counter of the given memcg
and all of its ancestors. However, since every update is also propagated
to the ancestors, once the given memcg has passed the flushing threshold,
all of its ancestors must have passed it as well. There is no need to
keep traversing up the memcg tree to maintain the stats updates.

Perf profiles collected from our fleet show that memcg_rstat_updated is
one of the most expensive memcg functions, i.e. a lot of cumulative CPU
time is spent in it, so even small micro-optimizations matter a lot.
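For reference, a memcg is considered flushable once its pending updates
cross the flush threshold; the check in mm/memcontrol.c amounts to roughly
the following (a sketch, the exact helper may differ in detail):

static bool memcg_vmstats_needs_flush(struct memcg_vmstats *vmstats)
{
	/* Pending updates across all CPUs have crossed the flush threshold. */
	return atomic64_read(&vmstats->stats_updates) >
	       MEMCG_CHARGE_BATCH * num_online_cpus();
}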
Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
---
mm/memcontrol.c | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 421740f1bcdc..ea3e40e589df 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -585,18 +585,20 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
cgroup_rstat_updated(memcg->css.cgroup, cpu);
statc = this_cpu_ptr(memcg->vmstats_percpu);
for (; statc; statc = statc->parent) {
+ /*
+ * If @memcg is already flushable then all its ancestors are
+ * flushable as well and also there is no need to increase
+ * stats_updates.
+ */
+ if (!memcg_vmstats_needs_flush(statc->vmstats))
+ break;
+
stats_updates = READ_ONCE(statc->stats_updates) + abs(val);
WRITE_ONCE(statc->stats_updates, stats_updates);
if (stats_updates < MEMCG_CHARGE_BATCH)
continue;
- /*
- * If @memcg is already flush-able, increasing stats_updates is
- * redundant. Avoid the overhead of the atomic update.
- */
- if (!memcg_vmstats_needs_flush(statc->vmstats))
- atomic64_add(stats_updates,
- &statc->vmstats->stats_updates);
+ atomic64_add(stats_updates, &statc->vmstats->stats_updates);
WRITE_ONCE(statc->stats_updates, 0);
}
}
--
2.47.1
* Re: [PATCH] memcg: optimize memcg_rstat_updated
2025-04-09 23:49 [PATCH] memcg: optimize memcg_rstat_updated Shakeel Butt
@ 2025-04-10 1:20 ` Waiman Long
2025-04-10 1:49 ` Shakeel Butt
From: Waiman Long @ 2025-04-10 1:20 UTC (permalink / raw)
To: Shakeel Butt, Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
Yosry Ahmed, linux-mm, cgroups, linux-kernel, Meta kernel team
On 4/9/25 7:49 PM, Shakeel Butt wrote:
> Currently the kernel maintains a per-memcg count of stats updates, which
> is needed to implement the stats flushing threshold. On the update side,
> the update is added to the per-cpu per-memcg counter of the given memcg
> and all of its ancestors. However, since every update is also propagated
> to the ancestors, once the given memcg has passed the flushing threshold,
> all of its ancestors must have passed it as well. There is no need to
> keep traversing up the memcg tree to maintain the stats updates.
>
> Perf profiles collected from our fleet show that memcg_rstat_updated is
> one of the most expensive memcg functions, i.e. a lot of cumulative CPU
> time is spent in it, so even small micro-optimizations matter a lot.
>
> Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
> ---
> mm/memcontrol.c | 16 +++++++++-------
> 1 file changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 421740f1bcdc..ea3e40e589df 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -585,18 +585,20 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
> cgroup_rstat_updated(memcg->css.cgroup, cpu);
> statc = this_cpu_ptr(memcg->vmstats_percpu);
> for (; statc; statc = statc->parent) {
> + /*
> + * If @memcg is already flushable then all its ancestors are
> + * flushable as well and also there is no need to increase
> + * stats_updates.
> + */
> + if (!memcg_vmstats_needs_flush(statc->vmstats))
> + break;
> +
Do you mean "if (memcg_vmstats_needs_flush(statc->vmstats))"?
Cheers,
Longman
> stats_updates = READ_ONCE(statc->stats_updates) + abs(val);
> WRITE_ONCE(statc->stats_updates, stats_updates);
> if (stats_updates < MEMCG_CHARGE_BATCH)
> continue;
>
> - /*
> - * If @memcg is already flush-able, increasing stats_updates is
> - * redundant. Avoid the overhead of the atomic update.
> - */
> - if (!memcg_vmstats_needs_flush(statc->vmstats))
> - atomic64_add(stats_updates,
> - &statc->vmstats->stats_updates);
> + atomic64_add(stats_updates, &statc->vmstats->stats_updates);
> WRITE_ONCE(statc->stats_updates, 0);
> }
> }
* Re: [PATCH] memcg: optimize memcg_rstat_updated
2025-04-10 1:20 ` Waiman Long
@ 2025-04-10 1:49 ` Shakeel Butt
From: Shakeel Butt @ 2025-04-10 1:49 UTC (permalink / raw)
To: Waiman Long
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Roman Gushchin,
Muchun Song, Yosry Ahmed, linux-mm, cgroups, linux-kernel,
Meta kernel team
On Wed, Apr 09, 2025 at 09:20:34PM -0400, Waiman Long wrote:
> On 4/9/25 7:49 PM, Shakeel Butt wrote:
> > Currently the kernel maintains a per-memcg count of stats updates, which
> > is needed to implement the stats flushing threshold. On the update side,
> > the update is added to the per-cpu per-memcg counter of the given memcg
> > and all of its ancestors. However, since every update is also propagated
> > to the ancestors, once the given memcg has passed the flushing threshold,
> > all of its ancestors must have passed it as well. There is no need to
> > keep traversing up the memcg tree to maintain the stats updates.
> >
> > Perf profiles collected from our fleet show that memcg_rstat_updated is
> > one of the most expensive memcg functions, i.e. a lot of cumulative CPU
> > time is spent in it, so even small micro-optimizations matter a lot.
> >
> > Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
> > ---
> > mm/memcontrol.c | 16 +++++++++-------
> > 1 file changed, 9 insertions(+), 7 deletions(-)
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 421740f1bcdc..ea3e40e589df 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -585,18 +585,20 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
> > cgroup_rstat_updated(memcg->css.cgroup, cpu);
> > statc = this_cpu_ptr(memcg->vmstats_percpu);
> > for (; statc; statc = statc->parent) {
> > + /*
> > + * If @memcg is already flushable then all its ancestors are
> > + * flushable as well and also there is no need to increase
> > + * stats_updates.
> > + */
> > + if (!memcg_vmstats_needs_flush(statc->vmstats))
> > + break;
> > +
>
> Do you mean "if (memcg_vmstats_needs_flush(statc->vmstats))"?
>
Yup you are right, thanks for catching this. I will send a v2.
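The fix is to simply drop the '!', i.e. break out of the walk once the
current level is already flushable. Roughly (a sketch of what v2 will
look like, not the final patch):

	for (; statc; statc = statc->parent) {
		/*
		 * If @memcg is already flushable then all its ancestors are
		 * flushable as well; stop the walk and skip the redundant
		 * updates.
		 */
		if (memcg_vmstats_needs_flush(statc->vmstats))
			break;

		stats_updates = READ_ONCE(statc->stats_updates) + abs(val);
		WRITE_ONCE(statc->stats_updates, stats_updates);
		if (stats_updates < MEMCG_CHARGE_BATCH)
			continue;

		atomic64_add(stats_updates, &statc->vmstats->stats_updates);
		WRITE_ONCE(statc->stats_updates, 0);
	}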