Just checked the BPF kfuncs patch, nice, it speeds things up a lot. I
think it's better to use BPF to read these stats than a normal read().
The workload normally reads them about every 2s, but several different
services could be polling them at the same time. Yes, previously the
indent was hacked manually :)

On Fri, Jan 9, 2026 at 6:50 AM Roman Gushchin wrote:
> Andrew Morton writes:
>
> > On Thu, 8 Jan 2026 17:37:29 +0800 Jianyue Wu wrote:
> >
> >> From: Jianyue Wu
> >>
> >> Replace seq_printf/seq_buf_printf with lightweight helpers to avoid
> >> printf parsing in memcg stats output.
> >>
> >> Key changes:
> >> - Add memcg_seq_put_name_val() for seq_file "name value\n" formatting
> >> - Add memcg_seq_buf_put_name_val() for seq_buf "name value\n" formatting
> >> - Update __memory_events_show(), swap_events_show(),
> >>   memory_stat_format(), memory_numa_stat_show(), and related helpers
> >>
> >> Performance:
> >> - 1M reads of memory.stat + memory.numa_stat
> >> - Before: real 0m9.663s, user 0m4.840s, sys 0m4.823s
> >> - After: real 0m9.051s, user 0m4.775s, sys 0m4.275s (~11.4% sys drop)
> >>
> >> Tests:
> >> - Script:
> >>   for ((i=1; i<=1000000; i++)); do
> >>     : > /dev/null < /sys/fs/cgroup/memory.stat
> >>     : > /dev/null < /sys/fs/cgroup/memory.numa_stat
> >>   done
> >
> > I suspect there are workloads which read these files frequently.
> >
> > I'd be interested in learning "how frequently". Perhaps
> > ascii-through-sysfs simply isn't an appropriate API for this data?
>
> We just got a bpf interface for this data merged, exactly to speed
> things up: commit 99430ab8b804 ("mm: introduce BPF kfuncs to access
> memcg statistics and events") in bpf-next.
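
For anyone curious what "avoid printf parsing" means in practice, here
is a minimal sketch of how such a helper could look, built only on
existing seq_file primitives (seq_puts(), seq_put_decimal_ull(),
seq_putc()); the actual helper in the patch may differ:

#include <linux/seq_file.h>

/*
 * Illustrative sketch, not the patch's exact code: emit "name value\n"
 * directly, skipping the vsnprintf() format-string parsing that
 * seq_printf() does on every call.
 */
static void memcg_seq_put_name_val(struct seq_file *m, const char *name,
				   unsigned long long val)
{
	seq_puts(m, name);
	/* seq_put_decimal_ull() emits the delimiter, then the number. */
	seq_put_decimal_ull(m, " ", val);
	seq_putc(m, '\n');
}

The win comes from memory.stat emitting dozens of "name value" lines
per read, so skipping format parsing on each line adds up under
frequent polling.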