From: JP Kobryn <inwardvessel@gmail.com>
To: Hui Zhu <hui.zhu@linux.dev>,
Andrew Morton <akpm@linux-foundation.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@kernel.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
Shakeel Butt <shakeel.butt@linux.dev>,
Muchun Song <muchun.song@linux.dev>,
Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>,
Martin KaFai Lau <martin.lau@linux.dev>,
Eduard Zingerman <eddyz87@gmail.com>, Song Liu <song@kernel.org>,
Yonghong Song <yonghong.song@linux.dev>,
John Fastabend <john.fastabend@gmail.com>,
KP Singh <kpsingh@kernel.org>,
Stanislav Fomichev <sdf@fomichev.me>, Hao Luo <haoluo@google.com>,
Jiri Olsa <jolsa@kernel.org>, Shuah Khan <shuah@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Miguel Ojeda <ojeda@kernel.org>,
Nathan Chancellor <nathan@kernel.org>,
Kees Cook <kees@kernel.org>, Tejun Heo <tj@kernel.org>,
Jeff Xu <jeffxu@chromium.org>,
mkoutny@suse.com, Jan Hendrik Farr <kernel@jfarr.cc>,
Christian Brauner <brauner@kernel.org>,
Randy Dunlap <rdunlap@infradead.org>,
Brian Gerst <brgerst@gmail.com>,
Masahiro Yamada <masahiroy@kernel.org>,
davem@davemloft.net, Jakub Kicinski <kuba@kernel.org>,
Jesper Dangaard Brouer <hawk@kernel.org>,
Willem de Bruijn <willemb@google.com>,
Jason Xing <kerneljasonxing@gmail.com>,
Paul Chaignon <paul.chaignon@gmail.com>,
Anton Protopopov <a.s.protopopov@gmail.com>,
Amery Hung <ameryhung@gmail.com>,
Chen Ridong <chenridong@huaweicloud.com>,
Lance Yang <lance.yang@linux.dev>,
Jiayuan Chen <jiayuan.chen@linux.dev>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
cgroups@vger.kernel.org, bpf@vger.kernel.org,
netdev@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: Hui Zhu <zhuhui@kylinos.cn>, Geliang Tang <geliang@kernel.org>
Subject: Re: [RFC PATCH bpf-next v3 09/12] selftests/bpf: Add tests for memcg_bpf_ops
Date: Fri, 23 Jan 2026 12:47:02 -0800
Message-ID: <b90069a3-86b4-4fba-9ff3-fe5f6c4e425d@gmail.com>
In-Reply-To: <c44accaaaebfc32be13234f82b501a3852ba3f0f.1769157382.git.zhuhui@kylinos.cn>
Hi Hui,
On 1/23/26 1:00 AM, Hui Zhu wrote:
> From: Hui Zhu <zhuhui@kylinos.cn>
>
> Add a comprehensive selftest suite for the `memcg_bpf_ops`
> functionality. These tests validate that BPF programs can correctly
> influence memory cgroup throttling behavior by implementing the new
> hooks.
>
> The test suite is added in `prog_tests/memcg_ops.c` and covers
> several key scenarios:
>
> 1. `test_memcg_ops_over_high`:
> Verifies that a BPF program can trigger throttling on a low-priority
> cgroup by returning a delay from the `get_high_delay_ms` hook when a
> high-priority cgroup is under pressure.
>
> 2. `test_memcg_ops_below_low_over_high`:
> Tests the combination of the `below_low` and `get_high_delay_ms`
> hooks, ensuring they work together as expected.
>
> 3. `test_memcg_ops_below_min_over_high`:
> Validates the interaction between the `below_min` and
> `get_high_delay_ms` hooks.
>
> The test framework sets up a cgroup hierarchy with high and low
> priority groups, attaches BPF programs, runs memory-intensive
> workloads, and asserts that the observed throttling (measured by
> workload execution time) matches expectations.
>
> The BPF program (`progs/memcg_ops.c`) uses a tracepoint on
> `memcg:count_memcg_events` (specifically PGFAULT) to detect memory
> pressure and trigger the appropriate hooks in response. This test
> suite provides essential validation for the new memory control
> mechanisms.
>
> Signed-off-by: Geliang Tang <geliang@kernel.org>
> Signed-off-by: Hui Zhu <zhuhui@kylinos.cn>
> ---
[..]
> diff --git a/tools/testing/selftests/bpf/prog_tests/memcg_ops.c b/tools/testing/selftests/bpf/prog_tests/memcg_ops.c
> new file mode 100644
> index 000000000000..9a8d16296f2d
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/memcg_ops.c
> @@ -0,0 +1,537 @@
[..]
> +
> +static void
> +real_test_memcg_ops_child_work(const char *cgroup_path,
> + char *data_filename,
> + char *time_filename,
> + int read_times)
> +{
> + struct timeval start, end;
> + double elapsed;
> + FILE *fp;
> +
> + if (!ASSERT_OK(join_parent_cgroup(cgroup_path), "join_parent_cgroup"))
> + goto out;
> +
> + if (env.verbosity >= VERBOSE_NORMAL)
> + printf("%s %d begin\n", __func__, getpid());
> +
> + gettimeofday(&start, NULL);
> +
> + if (!ASSERT_OK(write_file(data_filename), "write_file"))
> + goto out;
> +
> + if (env.verbosity >= VERBOSE_NORMAL)
> + printf("%s %d write_file done\n", __func__, getpid());
> +
> + if (!ASSERT_OK(read_file(data_filename, read_times), "read_file"))
> + goto out;
> +
> + gettimeofday(&end, NULL);
> +
> + elapsed = (end.tv_sec - start.tv_sec) +
> + (end.tv_usec - start.tv_usec) / 1000000.0;
> +
> + if (env.verbosity >= VERBOSE_NORMAL)
> + printf("%s %d end %.6f\n", __func__, getpid(), elapsed);
> +
> + fp = fopen(time_filename, "w");
> + if (!ASSERT_OK_PTR(fp, "fopen"))
> + goto out;
> + fprintf(fp, "%.6f", elapsed);
> + fclose(fp);
> +
> +out:
> + exit(0);
> +}
> +
[..]
> +static void real_test_memcg_ops(int read_times)
> +{
> + int ret;
> + char data_file1[] = "/tmp/test_data_XXXXXX";
> + char data_file2[] = "/tmp/test_data_XXXXXX";
> + char time_file1[] = "/tmp/test_time_XXXXXX";
> + char time_file2[] = "/tmp/test_time_XXXXXX";
> + pid_t pid1, pid2;
> + double time1, time2;
> +
> + ret = mkstemp(data_file1);
> + if (!ASSERT_GT(ret, 0, "mkstemp"))
> + return;
> + close(ret);
> + ret = mkstemp(data_file2);
> + if (!ASSERT_GT(ret, 0, "mkstemp"))
> + goto cleanup_data_file1;
> + close(ret);
> + ret = mkstemp(time_file1);
> + if (!ASSERT_GT(ret, 0, "mkstemp"))
> + goto cleanup_data_file2;
> + close(ret);
> + ret = mkstemp(time_file2);
> + if (!ASSERT_GT(ret, 0, "mkstemp"))
> + goto cleanup_time_file1;
> + close(ret);
> +
> + pid1 = fork();
> + if (!ASSERT_GE(pid1, 0, "fork"))
> + goto cleanup;
> + if (pid1 == 0)
> + real_test_memcg_ops_child_work(CG_LOW_DIR,
> + data_file1,
> + time_file1,
> + read_times);
Would it be better to call exit() after real_test_memcg_ops_child_work()
instead of within it? That way the fork/exit/wait logic stays in the
same scope, making the process lifetimes easier to track. I had to go
back and search for the call to exit(), since at a glance this function
appears to go on to call fork() and waitpid() from within both the
parent and the child processes (though it really does not).
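
Something like this, roughly (untested sketch reusing the names from
your patch, with the exit(0) dropped from the end of
real_test_memcg_ops_child_work()):

	pid1 = fork();
	if (!ASSERT_GE(pid1, 0, "fork"))
		goto cleanup;
	if (pid1 == 0) {
		/* child: do the work, then exit so it never reaches the
		 * parent-only fork/waitpid logic below
		 */
		real_test_memcg_ops_child_work(CG_LOW_DIR,
					       data_file1,
					       time_file1,
					       read_times);
		exit(0);
	}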