From: Andrew Morton <akpm@linux-foundation.org>
To: Jianyue Wu <wujianyue000@gmail.com>
Cc: jianyuew@nvidia.com, hannes@cmpxchg.org, mhocko@kernel.org,
roman.gushchin@linux.dev, shakeel.butt@linux.dev,
muchun.song@linux.dev, linux-mm@kvack.org,
cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: optimize stat output for 11% sys time reduce
Date: Thu, 8 Jan 2026 11:10:27 -0800 [thread overview]
Message-ID: <20260108111027.172f19a9a86667e8e0142042@linux-foundation.org> (raw)
In-Reply-To: <20260108093741.212333-1-jianyuew@nvidia.com>
On Thu, 8 Jan 2026 17:37:29 +0800 Jianyue Wu <wujianyue000@gmail.com> wrote:
> From: Jianyue Wu <wujianyue000@gmail.com>
>
> Replace seq_printf/seq_buf_printf with lightweight helpers to avoid
> printf parsing in memcg stats output.
>
> Key changes:
> - Add memcg_seq_put_name_val() for seq_file "name value\n" formatting
> - Add memcg_seq_buf_put_name_val() for seq_buf "name value\n" formatting
> - Update __memory_events_show(), swap_events_show(),
> memory_stat_format(), memory_numa_stat_show(), and related helpers
>
> Performance:
> - 1M reads of memory.stat+memory.numa_stat
> - Before: real 0m9.663s, user 0m4.840s, sys 0m4.823s
> - After: real 0m9.051s, user 0m4.775s, sys 0m4.275s (~11.4% sys drop)
>
> Tests:
> - Script:
> for ((i=1; i<=1000000; i++)); do
> : > /dev/null < /sys/fs/cgroup/memory.stat
> : > /dev/null < /sys/fs/cgroup/memory.numa_stat
> done
>
I suspect there are workloads which read these files frequently.
I'd be interested in learning "how frequently". Perhaps
ascii-through-sysfs simply isn't an appropriate API for this data?
> @@ -1795,25 +1795,33 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v)
> mem_cgroup_flush_stats(memcg);
>
> for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
> - seq_printf(m, "%s=%lu", stat->name,
> - mem_cgroup_nr_lru_pages(memcg, stat->lru_mask,
> - false));
> - for_each_node_state(nid, N_MEMORY)
> - seq_printf(m, " N%d=%lu", nid,
> - mem_cgroup_node_nr_lru_pages(memcg, nid,
> - stat->lru_mask, false));
> + seq_puts(m, stat->name);
> + seq_put_decimal_ull(m, "=",
> + (u64)mem_cgroup_nr_lru_pages(memcg, stat->lru_mask,
> + false));
> + for_each_node_state(nid, N_MEMORY) {
> + seq_put_decimal_ull(m, " N", nid);
> + seq_put_decimal_ull(m, "=",
> + (u64)mem_cgroup_node_nr_lru_pages(memcg, nid,
> + stat->lru_mask, false));
The indenting went wrong here.
The patch does do a lot of ugly tricks to constrain the number of
columns used. Perhaps introduce some new local variables to clean this
up?
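
[A user-space sketch of that cleanup, not kernel code: the seq_file struct and helpers below are stand-ins for the real fs/seq_file.c API just so this compiles outside the kernel, and memcg_numa_stat_line() with its hard-coded values is hypothetical. It shows both the "decimal conversion without a printf parse" idea from the patch and one way to hoist the long expressions into locals so each seq_* call fits on one line.]

```c
#include <string.h>

/* Stand-ins for the kernel's seq_file machinery, user-space only. */
struct seq_file {
	char buf[256];
	size_t count;
};

static void seq_puts(struct seq_file *m, const char *s)
{
	size_t len = strlen(s);

	memcpy(m->buf + m->count, s, len);
	m->count += len;
	m->buf[m->count] = '\0';
}

static void seq_put_decimal_ull(struct seq_file *m, const char *delimiter,
				unsigned long long num)
{
	char tmp[24];
	int i = 0;

	seq_puts(m, delimiter);
	/* Decimal conversion by hand, mirroring how the kernel helper
	 * avoids parsing a printf format string on every call. */
	do {
		tmp[i++] = '0' + (char)(num % 10);
		num /= 10;
	} while (num);
	while (i--)
		m->buf[m->count++] = tmp[i];
	m->buf[m->count] = '\0';
}

/* Hypothetical helper: in the kernel, total and node_vals[] would come
 * from mem_cgroup_nr_lru_pages() / mem_cgroup_node_nr_lru_pages(); a
 * local per iteration keeps every seq_* call on a single line. */
static void memcg_numa_stat_line(struct seq_file *m, const char *name,
				 unsigned long long total,
				 const unsigned long long *node_vals,
				 int nr_nodes)
{
	int nid;

	seq_puts(m, name);
	seq_put_decimal_ull(m, "=", total);
	for (nid = 0; nid < nr_nodes; nid++) {
		unsigned long long val = node_vals[nid];

		seq_put_decimal_ull(m, " N", nid);
		seq_put_decimal_ull(m, "=", val);
	}
	seq_puts(m, "\n");
}
```

[For example, memcg_numa_stat_line(&m, "anon", 42, (unsigned long long[]){30, 12}, 2) leaves "anon=42 N0=30 N1=12\n" in m.buf, the same "name=total N0=v0 N1=v1" shape memory.numa_stat emits.]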
Thread overview: 7+ messages
2026-01-08 9:37 Jianyue Wu
2026-01-08 19:10 ` Andrew Morton [this message]
2026-01-08 22:49 ` Roman Gushchin
2026-01-08 23:52 ` Jiany Wu
2026-01-08 23:56 ` Jiany Wu
2026-01-10 4:22 ` [PATCH v2] " Jianyue Wu