From: Leon Huang Fu <leon.huangfu@shopee.com>
To: mkoutny@suse.com
Cc: akpm@linux-foundation.org, cgroups@vger.kernel.org,
corbet@lwn.net, hannes@cmpxchg.org, jack@suse.cz,
joel.granados@kernel.org, kyle.meyer@hpe.com,
lance.yang@linux.dev, laoar.shao@gmail.com,
leon.huangfu@shopee.com, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
mclapinski@google.com, mhocko@kernel.org, muchun.song@linux.dev,
roman.gushchin@linux.dev, shakeel.butt@linux.dev, tj@kernel.org
Subject: Re: [PATCH mm-new v3] mm/memcontrol: Add memory.stat_refresh for on-demand stats flushing
Date: Tue, 11 Nov 2025 14:13:42 +0800
Message-ID: <20251111061343.71045-1-leon.huangfu@shopee.com>
In-Reply-To: <ewcsz3553cd6ooslgzwbubnbaxwmpd23d2k7pw5s4ckfvbb7sp@dffffjvohz5b>
On Mon, Nov 10, 2025 at 9:50 PM Michal Koutný <mkoutny@suse.com> wrote:
>
> Hello Leon.
Hi Michal,
>
> On Mon, Nov 10, 2025 at 06:19:48PM +0800, Leon Huang Fu <leon.huangfu@shopee.com> wrote:
> > Memory cgroup statistics are updated asynchronously with periodic
> > flushing to reduce overhead. The current implementation uses a flush
> > threshold calculated as MEMCG_CHARGE_BATCH * num_online_cpus() for
> > determining when to aggregate per-CPU memory cgroup statistics. On
> > systems with high core counts, this threshold can become very large
> > (e.g., 64 * 256 = 16,384 on a 256-core system), leading to stale
> > statistics when userspace reads memory.stat files.
> >
> > This is particularly problematic for monitoring and management tools
> > that rely on reasonably fresh statistics, as they may observe data
> > that is thousands of updates out of date.
> >
> > Introduce a new write-only file, memory.stat_refresh, that allows
> > userspace to explicitly trigger an immediate flush of memory statistics.
>
> I think it's worth thinking twice when introducing a new file like
> this...
>
> > Writing any value to this file forces a synchronous flush via
> > __mem_cgroup_flush_stats(memcg, true) for the cgroup and all its
> > descendants, ensuring that subsequent reads of memory.stat and
> > memory.numa_stat reflect current data.
> >
> > This approach follows the pattern established by /proc/sys/vm/stat_refresh
> > and memory.peak, where the written value is ignored, keeping the
> > interface simple and consistent with existing kernel APIs.
> >
> > Usage example:
> > echo 1 > /sys/fs/cgroup/mygroup/memory.stat_refresh
> > cat /sys/fs/cgroup/mygroup/memory.stat
> >
> > The feature is available in both cgroup v1 and v2 for consistency.
>
> First, I find the motivation by the testcase (not real world) weak when
> considering such an API change (e.g. real world would be confined to
> fewer CPUs or there'd be other "traffic" causing flushes making this a
> non-issue, we don't know here).

Fewer CPUs?
We are going to run these kernels on 224- and 256-core machines, where the
flush threshold reaches 16,384 on a 256-core system. That means we will see
stale statistics often and need a way to improve their accuracy.
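
For reference, the threshold check in question is roughly the following
(a paraphrased sketch of mm/memcontrol.c; names and details may differ
between kernel versions):

/* Flush only once enough per-CPU updates have accumulated. */
static bool memcg_vmstats_needs_flush(struct memcg_vmstats *vmstats)
{
	/* e.g. 64 * 256 online CPUs = 16,384 pending updates */
	return atomic64_read(&vmstats->stats_updates) >
	       MEMCG_CHARGE_BATCH * num_online_cpus();
}

So the staleness window grows linearly with the number of online CPUs.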
>
> Second, this is open to everyone (non-root) who mkdir's their cgroups.
> Then why not make it the default memory.stat behavior? (Tongue-in-cheek,
> but [*].)
>
> With this change, we admit the implementation (async flushing) and leak
> it to the users which is hard to take back. Why should we continue doing
> any implicit in-kernel flushing afterwards?

If the concern is that we're papering over a suboptimal flush path, I'm happy
to take a closer look. I'll review both the synchronous and asynchronous
flushing paths to see how they can be improved.
>
> Next, v1 and v2 haven't been consistent since introduction of v2 (unlike
> some other controllers that share code or even cftypes between v1 and
> v2). So I'd avoid introducing a new file to V1 API.
>
> When looking for analogies, I admittedly like memory.reclaim's
> O_NONBLOCK better (than /proc/sys/vm/stat_refresh). That would be an
> argument for flushing by default mentioned above [*].
>
> Also, this undercuts the hooking of rstat flushing into BPF. I think the
> attempts were given up too early (I read about the verifier vs
> seq_file). Have you tried bypassing bailout from
> __mem_cgroup_flush_stats via trace_memcg_flush_stats?
>
I tried "tp_btf/memcg_flush_stats", but it didn't work:
10: (85) call css_rstat_flush#80218
program must be sleepable to call sleepable kfunc css_rstat_flush
The bpf code and the error message are attached at last section.
>
> All in all, I'd like to have more backing data on insufficiency of (all
> the) rstat optimizations before opening explicit flushes like this
> (especially when it's meant to be exposed by BPF already).
>
It's proving non-trivial to capture a persuasive delta. The global worker
already flushes rstat every two seconds (2UL*HZ), so the window where
userspace can observe stale numbers is short.
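
For context, that periodic flush is driven by a delayed work item along these
lines (again a paraphrase of mm/memcontrol.c; exact names may vary by
version):

static void flush_memcg_stats_dwork(struct work_struct *w);
static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);

#define FLUSH_TIME (2UL * HZ)

static void flush_memcg_stats_dwork(struct work_struct *w)
{
	/* Flush the whole tree unconditionally, then re-arm in FLUSH_TIME. */
	__mem_cgroup_flush_stats(root_mem_cgroup, true);
	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
}

so the staleness observable from userspace is bounded by roughly two seconds
plus whatever accumulates under the per-CPU threshold.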
[...]
Thanks,
Leon
---
#include "vmlinux.h"
#include "bpf_helpers.h"
#include "bpf_tracing.h"
char _license[] SEC("license") = "GPL";
extern void css_rstat_flush(struct cgroup_subsys_state *css) __weak __ksym;
SEC("tp_btf/memcg_flush_stats")
int BPF_PROG(memcg_flush_stats, struct mem_cgroup *memcg, s64 stats_updates, bool force, bool needs_flush)
{
if (!force || !needs_flush) {
css_rstat_flush(&memcg->css);
__bpf_vprintk("memcg_flush_stats: memcg id=%d, stats_updates=%lld, force=%d, needs_flush=%d\n",
memcg->id.id, stats_updates, force, needs_flush);
}
return 0;
}
---
permission denied:
0: R1=ctx() R10=fp0
; int BPF_PROG(memcg_flush_stats, struct mem_cgroup *memcg, s64 stats_updates, bool force, bool needs_flush) @ memcg.c:13
0: (79) r6 = *(u64 *)(r1 +24) ; R1=ctx() R6_w=scalar()
1: (79) r9 = *(u64 *)(r1 +16) ; R1=ctx() R9_w=scalar()
; if (!force || !needs_flush) { @ memcg.c:15
2: (15) if r9 == 0x0 goto pc+1 ; R9_w=scalar(umin=1)
3: (55) if r6 != 0x0 goto pc+27 ; R6_w=0
4: (b7) r3 = 0 ; R3_w=0
; int BPF_PROG(memcg_flush_stats, struct mem_cgroup *memcg, s64 stats_updates, bool force, bool needs_flush) @ memcg.c:13
5: (79) r7 = *(u64 *)(r1 +0)
func 'memcg_flush_stats' arg0 has btf_id 623 type STRUCT 'mem_cgroup'
6: R1=ctx() R7_w=trusted_ptr_mem_cgroup()
6: (bf) r2 = r7 ; R2_w=trusted_ptr_mem_cgroup() R7_w=trusted_ptr_mem_cgroup()
7: (0f) r2 += r3 ; R2_w=trusted_ptr_mem_cgroup() R3_w=0
8: (79) r8 = *(u64 *)(r1 +8) ; R1=ctx() R8_w=scalar()
; css_rstat_flush(&memcg->css); @ memcg.c:16
9: (bf) r1 = r2 ; R1_w=trusted_ptr_mem_cgroup() R2_w=trusted_ptr_mem_cgroup()
10: (85) call css_rstat_flush#80218
program must be sleepable to call sleepable kfunc css_rstat_flush
processed 11 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
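
The rejection comes from tp_btf programs not being sleepable. A sleepable
attach type might get past this check; for example, a sleepable fentry
program ("fentry.s") is allowed to call sleepable kfuncs. A rough, untested
sketch (the mem_cgroup_flush_stats attach point is an assumption and may not
be traceable on every kernel/config):

#include "vmlinux.h"
#include "bpf_helpers.h"
#include "bpf_tracing.h"

char _license[] SEC("license") = "GPL";

extern void css_rstat_flush(struct cgroup_subsys_state *css) __weak __ksym;

/* Sleepable fentry: flush before the kernel's own ratelimited path runs. */
SEC("fentry.s/mem_cgroup_flush_stats")
int BPF_PROG(force_flush, struct mem_cgroup *memcg)
{
	css_rstat_flush(&memcg->css);
	return 0;
}

I haven't verified that this loads, but it at least avoids the non-sleepable
tp_btf context.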