linux-mm.kvack.org archive mirror
* [PATCH bpf-next v2 0/7] mm: bpf kfuncs to access memcg data
@ 2025-12-20  4:12 Roman Gushchin
  2025-12-20  4:12 ` [PATCH bpf-next v2 1/7] mm: declare memcg_page_state_output() in memcontrol.h Roman Gushchin
                   ` (6 more replies)
  0 siblings, 7 replies; 25+ messages in thread
From: Roman Gushchin @ 2025-12-20  4:12 UTC (permalink / raw)
  To: bpf, linux-mm, linux-kernel
  Cc: JP Kobryn, Alexei Starovoitov, Daniel Borkmann, Shakeel Butt,
	Michal Hocko, Johannes Weiner, Roman Gushchin

Introduce kfuncs to simplify access to memcg data. These kfuncs can
be used to accelerate monitoring use cases and to implement custom
OOM policies once BPF OOM lands.

This patchset was separated out from the BPF OOM patchset to simplify
the logistics and to accelerate the landing of the part that is useful
on its own. No functional changes since BPF OOM v2.
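
For illustration, here is a minimal sleepable cgroup iterator program
using these kfuncs (a sketch closely modeled on the selftest added in
patch 6; kfunc prototypes are assumed to be available from vmlinux.h):

    #include <vmlinux.h>
    #include <bpf/bpf_helpers.h>

    char _license[] SEC("license") = "GPL";

    unsigned long pgfaults; /* read by the userspace side */

    SEC("iter.s/cgroup")
    int memcg_query(struct bpf_iter__cgroup *ctx)
    {
            struct cgroup *cgrp = ctx->cgroup;
            struct mem_cgroup *memcg;

            if (!cgrp)
                    return 1;

            /* convert the cgroup's "self" css into a memcg pointer */
            memcg = bpf_get_mem_cgroup(&cgrp->self);
            if (!memcg)
                    return 1;

            bpf_mem_cgroup_flush_stats(memcg);
            pgfaults = bpf_mem_cgroup_vm_events(memcg, PGFAULT);

            bpf_put_mem_cgroup(memcg);
            return 0;
    }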

v2:
  - added mem_cgroup_disabled() checks (by Shakeel B.)
  - added special handling of the root memcg in bpf_mem_cgroup_usage()
  (by Shakeel B.)
  - minor fixes in the kselftest (by Shakeel B.)
  - added a MAINTAINERS entry (by Shakeel B.)

v1:
  https://lore.kernel.org/bpf/87ike29s5r.fsf@linux.dev/T/#t


JP Kobryn (2):
  mm: introduce BPF kfunc to access memory events
  bpf: selftests: selftests for memcg stat kfuncs

Roman Gushchin (5):
  mm: declare memcg_page_state_output() in memcontrol.h
  mm: introduce BPF kfuncs to deal with memcg pointers
  mm: introduce bpf_get_root_mem_cgroup() BPF kfunc
  mm: introduce BPF kfuncs to access memcg statistics and events
  MAINTAINERS: add an entry for MM BPF extensions

 MAINTAINERS                                   |   7 +
 include/linux/memcontrol.h                    |   3 +
 mm/Makefile                                   |   3 +
 mm/bpf_memcontrol.c                           | 179 ++++++++++++++
 mm/memcontrol-v1.h                            |   1 -
 .../testing/selftests/bpf/cgroup_iter_memcg.h |  18 ++
 .../bpf/prog_tests/cgroup_iter_memcg.c        | 223 ++++++++++++++++++
 .../selftests/bpf/progs/cgroup_iter_memcg.c   |  39 +++
 8 files changed, 472 insertions(+), 1 deletion(-)
 create mode 100644 mm/bpf_memcontrol.c
 create mode 100644 tools/testing/selftests/bpf/cgroup_iter_memcg.h
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_iter_memcg.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_iter_memcg.c

-- 
2.52.0




* [PATCH bpf-next v2 1/7] mm: declare memcg_page_state_output() in memcontrol.h
  2025-12-20  4:12 [PATCH bpf-next v2 0/7] mm: bpf kfuncs to access memcg data Roman Gushchin
@ 2025-12-20  4:12 ` Roman Gushchin
  2025-12-20  4:12 ` [PATCH bpf-next v2 2/7] mm: introduce BPF kfuncs to deal with memcg pointers Roman Gushchin
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 25+ messages in thread
From: Roman Gushchin @ 2025-12-20  4:12 UTC (permalink / raw)
  To: bpf, linux-mm, linux-kernel
  Cc: JP Kobryn, Alexei Starovoitov, Daniel Borkmann, Shakeel Butt,
	Michal Hocko, Johannes Weiner, Roman Gushchin, Michal Hocko

To use memcg_page_state_output() in bpf_memcontrol.c, move the
declaration from the v1-specific memcontrol-v1.h to memcontrol.h.

Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
---
 include/linux/memcontrol.h | 1 +
 mm/memcontrol-v1.h         | 1 -
 2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 6a48398a1f4e..b309d13110af 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -950,6 +950,7 @@ static inline void mod_memcg_page_state(struct page *page,
 }
 
 unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx);
+unsigned long memcg_page_state_output(struct mem_cgroup *memcg, int item);
 unsigned long lruvec_page_state(struct lruvec *lruvec, enum node_stat_item idx);
 unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 				      enum node_stat_item idx);
diff --git a/mm/memcontrol-v1.h b/mm/memcontrol-v1.h
index 6358464bb416..a304ad418cdf 100644
--- a/mm/memcontrol-v1.h
+++ b/mm/memcontrol-v1.h
@@ -27,7 +27,6 @@ unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap);
 void drain_all_stock(struct mem_cgroup *root_memcg);
 
 unsigned long memcg_events(struct mem_cgroup *memcg, int event);
-unsigned long memcg_page_state_output(struct mem_cgroup *memcg, int item);
 int memory_stat_show(struct seq_file *m, void *v);
 
 void mem_cgroup_id_get_many(struct mem_cgroup *memcg, unsigned int n);
-- 
2.52.0




* [PATCH bpf-next v2 2/7] mm: introduce BPF kfuncs to deal with memcg pointers
  2025-12-20  4:12 [PATCH bpf-next v2 0/7] mm: bpf kfuncs to access memcg data Roman Gushchin
  2025-12-20  4:12 ` [PATCH bpf-next v2 1/7] mm: declare memcg_page_state_output() in memcontrol.h Roman Gushchin
@ 2025-12-20  4:12 ` Roman Gushchin
  2025-12-20  5:20   ` Shakeel Butt
  2025-12-22  0:39   ` Alexei Starovoitov
  2025-12-20  4:12 ` [PATCH bpf-next v2 3/7] mm: introduce bpf_get_root_mem_cgroup() BPF kfunc Roman Gushchin
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 25+ messages in thread
From: Roman Gushchin @ 2025-12-20  4:12 UTC (permalink / raw)
  To: bpf, linux-mm, linux-kernel
  Cc: JP Kobryn, Alexei Starovoitov, Daniel Borkmann, Shakeel Butt,
	Michal Hocko, Johannes Weiner, Roman Gushchin

To operate effectively with memory cgroups in BPF, there is a need
to convert css pointers to memcg pointers. The simple container_of()
cast used in kernel code can't be used in BPF because, from the
verifier's point of view, it is an out-of-bounds memory access.

Introduce helper get/put kfuncs which can be used to get
a refcounted memcg pointer from the css pointer:
  - bpf_get_mem_cgroup,
  - bpf_put_mem_cgroup.

bpf_get_mem_cgroup() accepts both a memcg's css and the corresponding
cgroup's "self" css. This allows it to be used with the existing cgroup
iterator, which iterates over the cgroup tree, not the memcg tree.
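
For example (a sketch; see the selftest in patch 6 for a complete
program):

    struct mem_cgroup *memcg;

    /* works for both a memcg css and the cgroup's "self" css */
    memcg = bpf_get_mem_cgroup(&cgrp->self);
    if (!memcg)
            return 1;

    /* ... access memcg data or pass it to other kfuncs ... */

    bpf_put_mem_cgroup(memcg);      /* release the acquired reference */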

Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 mm/Makefile         |  3 ++
 mm/bpf_memcontrol.c | 88 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 91 insertions(+)
 create mode 100644 mm/bpf_memcontrol.c

diff --git a/mm/Makefile b/mm/Makefile
index 9175f8cc6565..79c39a98ff83 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -106,6 +106,9 @@ obj-$(CONFIG_MEMCG) += memcontrol.o vmpressure.o
 ifdef CONFIG_SWAP
 obj-$(CONFIG_MEMCG) += swap_cgroup.o
 endif
+ifdef CONFIG_BPF_SYSCALL
+obj-$(CONFIG_MEMCG) += bpf_memcontrol.o
+endif
 obj-$(CONFIG_CGROUP_HUGETLB) += hugetlb_cgroup.o
 obj-$(CONFIG_GUP_TEST) += gup_test.o
 obj-$(CONFIG_DMAPOOL_TEST) += dmapool_test.o
diff --git a/mm/bpf_memcontrol.c b/mm/bpf_memcontrol.c
new file mode 100644
index 000000000000..03d435fc4f10
--- /dev/null
+++ b/mm/bpf_memcontrol.c
@@ -0,0 +1,88 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Memory Controller-related BPF kfuncs and auxiliary code
+ *
+ * Author: Roman Gushchin <roman.gushchin@linux.dev>
+ */
+
+#include <linux/memcontrol.h>
+#include <linux/bpf.h>
+
+__bpf_kfunc_start_defs();
+
+/**
+ * bpf_get_mem_cgroup - Get a reference to a memory cgroup
+ * @css: pointer to the css structure
+ *
+ * Returns a pointer to a mem_cgroup structure after bumping
+ * the corresponding css's reference counter.
+ *
+ * It's fine to pass a css which belongs to any cgroup controller,
+ * e.g. the unified hierarchy's main css.
+ *
+ * Implements KF_ACQUIRE semantics.
+ */
+__bpf_kfunc struct mem_cgroup *
+bpf_get_mem_cgroup(struct cgroup_subsys_state *css)
+{
+	struct mem_cgroup *memcg = NULL;
+	bool rcu_unlock = false;
+
+	if (mem_cgroup_disabled() || !root_mem_cgroup)
+		return NULL;
+
+	if (root_mem_cgroup->css.ss != css->ss) {
+		struct cgroup *cgroup = css->cgroup;
+		int ssid = root_mem_cgroup->css.ss->id;
+
+		rcu_read_lock();
+		rcu_unlock = true;
+		css = rcu_dereference_raw(cgroup->subsys[ssid]);
+	}
+
+	if (css && css_tryget(css))
+		memcg = container_of(css, struct mem_cgroup, css);
+
+	if (rcu_unlock)
+		rcu_read_unlock();
+
+	return memcg;
+}
+
+/**
+ * bpf_put_mem_cgroup - Put a reference to a memory cgroup
+ * @memcg: memory cgroup to release
+ *
+ * Releases a previously acquired memcg reference.
+ * Implements KF_RELEASE semantics.
+ */
+__bpf_kfunc void bpf_put_mem_cgroup(struct mem_cgroup *memcg)
+{
+	css_put(&memcg->css);
+}
+
+__bpf_kfunc_end_defs();
+
+BTF_KFUNCS_START(bpf_memcontrol_kfuncs)
+BTF_ID_FLAGS(func, bpf_get_mem_cgroup, KF_TRUSTED_ARGS | KF_ACQUIRE | KF_RET_NULL | KF_RCU)
+BTF_ID_FLAGS(func, bpf_put_mem_cgroup, KF_TRUSTED_ARGS | KF_RELEASE)
+
+BTF_KFUNCS_END(bpf_memcontrol_kfuncs)
+
+static const struct btf_kfunc_id_set bpf_memcontrol_kfunc_set = {
+	.owner          = THIS_MODULE,
+	.set            = &bpf_memcontrol_kfuncs,
+};
+
+static int __init bpf_memcontrol_init(void)
+{
+	int err;
+
+	err = register_btf_kfunc_id_set(BPF_PROG_TYPE_UNSPEC,
+					&bpf_memcontrol_kfunc_set);
+	if (err)
+		pr_warn("error while registering bpf memcontrol kfuncs: %d", err);
+
+	return err;
+}
+late_initcall(bpf_memcontrol_init);
-- 
2.52.0




* [PATCH bpf-next v2 3/7] mm: introduce bpf_get_root_mem_cgroup() BPF kfunc
  2025-12-20  4:12 [PATCH bpf-next v2 0/7] mm: bpf kfuncs to access memcg data Roman Gushchin
  2025-12-20  4:12 ` [PATCH bpf-next v2 1/7] mm: declare memcg_page_state_output() in memcontrol.h Roman Gushchin
  2025-12-20  4:12 ` [PATCH bpf-next v2 2/7] mm: introduce BPF kfuncs to deal with memcg pointers Roman Gushchin
@ 2025-12-20  4:12 ` Roman Gushchin
  2025-12-20  5:21   ` Shakeel Butt
  2025-12-20  4:12 ` [PATCH bpf-next v2 4/7] mm: introduce BPF kfuncs to access memcg statistics and events Roman Gushchin
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 25+ messages in thread
From: Roman Gushchin @ 2025-12-20  4:12 UTC (permalink / raw)
  To: bpf, linux-mm, linux-kernel
  Cc: JP Kobryn, Alexei Starovoitov, Daniel Borkmann, Shakeel Butt,
	Michal Hocko, Johannes Weiner, Roman Gushchin

Introduce a BPF kfunc to get a trusted pointer to the root memory
cgroup. It's very handy for traversing the full memcg tree, e.g.
for handling a system-wide OOM.

It's possible to obtain this pointer by traversing the memcg tree
up from any known memcg, but that is sub-optimal and makes BPF
programs more complex and less efficient.

bpf_get_root_mem_cgroup() has KF_ACQUIRE | KF_RET_NULL semantics;
however, in reality it's not necessary to bump the corresponding
reference counter: the root memory cgroup is immortal and reference
counting is skipped for it, see css_get(). Once set, root_mem_cgroup
is always a valid memcg pointer. It's safe to call bpf_put_mem_cgroup()
on a pointer obtained with bpf_get_root_mem_cgroup(); it's effectively
a no-op.
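
A sketch of the intended usage:

    struct mem_cgroup *root;

    root = bpf_get_root_mem_cgroup();
    if (!root)
            return 0;               /* mem_cgroup_disabled() */

    /* ... walk the memcg tree starting from root ... */

    bpf_put_mem_cgroup(root);       /* safe, effectively a no-op */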

Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 mm/bpf_memcontrol.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/mm/bpf_memcontrol.c b/mm/bpf_memcontrol.c
index 03d435fc4f10..2d518ad2ad3f 100644
--- a/mm/bpf_memcontrol.c
+++ b/mm/bpf_memcontrol.c
@@ -10,6 +10,23 @@
 
 __bpf_kfunc_start_defs();
 
+/**
+ * bpf_get_root_mem_cgroup - Returns a pointer to the root memory cgroup
+ *
+ * The function has KF_ACQUIRE semantics, even though the root memory
+ * cgroup is never destroyed after being created and doesn't require
+ * reference counting. It's perfectly safe to pass the result to
+ * bpf_put_mem_cgroup().
+ */
+__bpf_kfunc struct mem_cgroup *bpf_get_root_mem_cgroup(void)
+{
+	if (mem_cgroup_disabled())
+		return NULL;
+
+	/* css_get() is not needed */
+	return root_mem_cgroup;
+}
+
 /**
  * bpf_get_mem_cgroup - Get a reference to a memory cgroup
  * @css: pointer to the css structure
@@ -64,6 +81,7 @@ __bpf_kfunc void bpf_put_mem_cgroup(struct mem_cgroup *memcg)
 __bpf_kfunc_end_defs();
 
 BTF_KFUNCS_START(bpf_memcontrol_kfuncs)
+BTF_ID_FLAGS(func, bpf_get_root_mem_cgroup, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_get_mem_cgroup, KF_TRUSTED_ARGS | KF_ACQUIRE | KF_RET_NULL | KF_RCU)
 BTF_ID_FLAGS(func, bpf_put_mem_cgroup, KF_TRUSTED_ARGS | KF_RELEASE)
 
-- 
2.52.0




* [PATCH bpf-next v2 4/7] mm: introduce BPF kfuncs to access memcg statistics and events
  2025-12-20  4:12 [PATCH bpf-next v2 0/7] mm: bpf kfuncs to access memcg data Roman Gushchin
                   ` (2 preceding siblings ...)
  2025-12-20  4:12 ` [PATCH bpf-next v2 3/7] mm: introduce bpf_get_root_mem_cgroup() BPF kfunc Roman Gushchin
@ 2025-12-20  4:12 ` Roman Gushchin
  2025-12-20  4:29   ` bot+bpf-ci
  2025-12-20  5:22   ` Shakeel Butt
  2025-12-20  4:12 ` [PATCH bpf-next v2 5/7] mm: introduce BPF kfunc to access memory events Roman Gushchin
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 25+ messages in thread
From: Roman Gushchin @ 2025-12-20  4:12 UTC (permalink / raw)
  To: bpf, linux-mm, linux-kernel
  Cc: JP Kobryn, Alexei Starovoitov, Daniel Borkmann, Shakeel Butt,
	Michal Hocko, Johannes Weiner, Roman Gushchin, Michal Hocko

Introduce BPF kfuncs to conveniently access memcg data:
  - bpf_mem_cgroup_vm_events(),
  - bpf_mem_cgroup_usage(),
  - bpf_mem_cgroup_page_state(),
  - bpf_mem_cgroup_flush_stats().

These functions are useful for implementing BPF OOM policies, but
can also be used to accelerate access to memcg data. Reading the
same data through cgroupfs is much more expensive, roughly 5x,
mostly because of the need to convert the data to text and back.
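
For example, a BPF program holding a trusted memcg pointer can read
the data directly (a sketch; byte units per the kerneldoc below):

    unsigned long usage, shmem, pgfaults;

    bpf_mem_cgroup_flush_stats(memcg);      /* sleepable context only */

    usage = bpf_mem_cgroup_usage(memcg);                    /* bytes */
    shmem = bpf_mem_cgroup_page_state(memcg, NR_SHMEM);     /* bytes */
    pgfaults = bpf_mem_cgroup_vm_events(memcg, PGFAULT);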

JP Kobryn:
An experiment was set up to compare the performance of a program that
uses the traditional method of reading memory.stat vs a program using
the new kfuncs. The control program opens up the root memory.stat file
and for 1M iterations reads, converts the string values to numeric data,
then seeks back to the beginning. The experimental program sets up the
requisite libbpf objects and for 1M iterations invokes a bpf program
which uses the kfuncs to fetch all available stats for node_stat_item,
memcg_stat_item, and vm_event_item types.

The results showed a significant perf benefit on the experimental side,
outperforming the control side by a margin of 93%. In kernel mode,
elapsed time was reduced by 80%, while in user mode, over 99% of time
was saved.

control: elapsed time
real    0m38.318s
user    0m25.131s
sys     0m13.070s

experiment: elapsed time
real    0m2.789s
user    0m0.187s
sys     0m2.512s

control: perf data
33.43% a.out libc.so.6         [.] __vfscanf_internal
 6.88% a.out [kernel.kallsyms] [k] vsnprintf
 6.33% a.out libc.so.6         [.] _IO_fgets
 5.51% a.out [kernel.kallsyms] [k] format_decode
 4.31% a.out libc.so.6         [.] __GI_____strtoull_l_internal
 3.78% a.out [kernel.kallsyms] [k] string
 3.53% a.out [kernel.kallsyms] [k] number
 2.71% a.out libc.so.6         [.] _IO_sputbackc
 2.41% a.out [kernel.kallsyms] [k] strlen
 1.98% a.out a.out             [.] main
 1.70% a.out libc.so.6         [.] _IO_getline_info
 1.51% a.out libc.so.6         [.] __isoc99_sscanf
 1.47% a.out [kernel.kallsyms] [k] memory_stat_format
 1.47% a.out [kernel.kallsyms] [k] memcpy_orig
 1.41% a.out [kernel.kallsyms] [k] seq_buf_printf

experiment: perf data
10.55% memcgstat bpf_prog_..._query [k] bpf_prog_16aab2f19fa982a7_query
 6.90% memcgstat [kernel.kallsyms]  [k] memcg_page_state_output
 3.55% memcgstat [kernel.kallsyms]  [k] _raw_spin_lock
 3.12% memcgstat [kernel.kallsyms]  [k] memcg_events
 2.87% memcgstat [kernel.kallsyms]  [k] __memcg_slab_post_alloc_hook
 2.73% memcgstat [kernel.kallsyms]  [k] kmem_cache_free
 2.70% memcgstat [kernel.kallsyms]  [k] entry_SYSRETQ_unsafe_stack
 2.25% memcgstat [kernel.kallsyms]  [k] __memcg_slab_free_hook
 2.06% memcgstat [kernel.kallsyms]  [k] get_page_from_freelist

Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
Co-developed-by: JP Kobryn <inwardvessel@gmail.com>
Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
 include/linux/memcontrol.h |  2 ++
 mm/bpf_memcontrol.c        | 56 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 58 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index b309d13110af..8c1ba4477d36 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -949,6 +949,8 @@ static inline void mod_memcg_page_state(struct page *page,
 	rcu_read_unlock();
 }
 
+unsigned long memcg_events(struct mem_cgroup *memcg, int event);
+unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap);
 unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx);
 unsigned long memcg_page_state_output(struct mem_cgroup *memcg, int item);
 unsigned long lruvec_page_state(struct lruvec *lruvec, enum node_stat_item idx);
diff --git a/mm/bpf_memcontrol.c b/mm/bpf_memcontrol.c
index 2d518ad2ad3f..d84fe6f3ed43 100644
--- a/mm/bpf_memcontrol.c
+++ b/mm/bpf_memcontrol.c
@@ -78,6 +78,57 @@ __bpf_kfunc void bpf_put_mem_cgroup(struct mem_cgroup *memcg)
 	css_put(&memcg->css);
 }
 
+/**
+ * bpf_mem_cgroup_vm_events - Read memory cgroup's vm event counter
+ * @memcg: memory cgroup
+ * @event: event id
+ *
+ * Allows reading memory cgroup event counters.
+ */
+__bpf_kfunc unsigned long bpf_mem_cgroup_vm_events(struct mem_cgroup *memcg,
+						enum vm_event_item event)
+{
+	return memcg_events(memcg, event);
+}
+
+/**
+ * bpf_mem_cgroup_usage - Read memory cgroup's usage
+ * @memcg: memory cgroup
+ *
+ * Returns current memory cgroup size in bytes.
+ * For the root memory cgroup it returns an approximate value.
+ */
+__bpf_kfunc unsigned long bpf_mem_cgroup_usage(struct mem_cgroup *memcg)
+{
+	return mem_cgroup_usage(memcg, false) * PAGE_SIZE;
+}
+
+/**
+ * bpf_mem_cgroup_page_state - Read memory cgroup's page state counter
+ * @memcg: memory cgroup
+ * @idx: counter idx
+ *
+ * Allows reading memory cgroup statistics. The output is in bytes.
+ */
+__bpf_kfunc unsigned long bpf_mem_cgroup_page_state(struct mem_cgroup *memcg, int idx)
+{
+	if (idx < 0 || idx >= MEMCG_NR_STAT)
+		return (unsigned long)-1;
+
+	return memcg_page_state_output(memcg, idx);
+}
+
+/**
+ * bpf_mem_cgroup_flush_stats - Flush memory cgroup's statistics
+ * @memcg: memory cgroup
+ *
+ * Propagate memory cgroup's statistics up the cgroup tree.
+ */
+__bpf_kfunc void bpf_mem_cgroup_flush_stats(struct mem_cgroup *memcg)
+{
+	mem_cgroup_flush_stats(memcg);
+}
+
 __bpf_kfunc_end_defs();
 
 BTF_KFUNCS_START(bpf_memcontrol_kfuncs)
@@ -85,6 +136,11 @@ BTF_ID_FLAGS(func, bpf_get_root_mem_cgroup, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_get_mem_cgroup, KF_TRUSTED_ARGS | KF_ACQUIRE | KF_RET_NULL | KF_RCU)
 BTF_ID_FLAGS(func, bpf_put_mem_cgroup, KF_TRUSTED_ARGS | KF_RELEASE)
 
+BTF_ID_FLAGS(func, bpf_mem_cgroup_vm_events, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_mem_cgroup_usage, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_mem_cgroup_page_state, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_mem_cgroup_flush_stats, KF_TRUSTED_ARGS | KF_SLEEPABLE)
+
 BTF_KFUNCS_END(bpf_memcontrol_kfuncs)
 
 static const struct btf_kfunc_id_set bpf_memcontrol_kfunc_set = {
-- 
2.52.0




* [PATCH bpf-next v2 5/7] mm: introduce BPF kfunc to access memory events
  2025-12-20  4:12 [PATCH bpf-next v2 0/7] mm: bpf kfuncs to access memcg data Roman Gushchin
                   ` (3 preceding siblings ...)
  2025-12-20  4:12 ` [PATCH bpf-next v2 4/7] mm: introduce BPF kfuncs to access memcg statistics and events Roman Gushchin
@ 2025-12-20  4:12 ` Roman Gushchin
  2025-12-20  4:29   ` bot+bpf-ci
  2025-12-22  0:49   ` Alexei Starovoitov
  2025-12-20  4:12 ` [PATCH bpf-next v2 6/7] bpf: selftests: selftests for memcg stat kfuncs Roman Gushchin
  2025-12-20  4:12 ` [PATCH bpf-next v2 7/7] MAINTAINERS: add an entry for MM BPF extensions Roman Gushchin
  6 siblings, 2 replies; 25+ messages in thread
From: Roman Gushchin @ 2025-12-20  4:12 UTC (permalink / raw)
  To: bpf, linux-mm, linux-kernel
  Cc: JP Kobryn, Alexei Starovoitov, Daniel Borkmann, Shakeel Butt,
	Michal Hocko, Johannes Weiner, Roman Gushchin

From: JP Kobryn <inwardvessel@gmail.com>

Introduce a BPF kfunc to access memory events, e.g.
MEMCG_LOW, MEMCG_MAX, MEMCG_OOM, MEMCG_OOM_KILL, etc.
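
For example (a sketch, assuming a trusted memcg pointer obtained via
bpf_get_mem_cgroup()):

    unsigned long oom_kills;

    oom_kills = bpf_mem_cgroup_memory_events(memcg, MEMCG_OOM_KILL);
    /* an out-of-range event id yields (unsigned long)-1 */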

Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
---
 mm/bpf_memcontrol.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/mm/bpf_memcontrol.c b/mm/bpf_memcontrol.c
index d84fe6f3ed43..858eb43766ce 100644
--- a/mm/bpf_memcontrol.c
+++ b/mm/bpf_memcontrol.c
@@ -103,6 +103,22 @@ __bpf_kfunc unsigned long bpf_mem_cgroup_usage(struct mem_cgroup *memcg)
 	return mem_cgroup_usage(memcg, false) * PAGE_SIZE;
 }
 
+/**
+ * bpf_mem_cgroup_memory_events - Read memory cgroup's memory event value
+ * @memcg: memory cgroup
+ * @event: memory event id
+ *
+ * Returns current memory event count.
+ */
+__bpf_kfunc unsigned long bpf_mem_cgroup_memory_events(struct mem_cgroup *memcg,
+						enum memcg_memory_event event)
+{
+	if (event >= MEMCG_NR_MEMORY_EVENTS)
+		return (unsigned long)-1;
+
+	return atomic_long_read(&memcg->memory_events[event]);
+}
+
 /**
  * bpf_mem_cgroup_page_state - Read memory cgroup's page state counter
  * @memcg: memory cgroup
@@ -137,6 +153,7 @@ BTF_ID_FLAGS(func, bpf_get_mem_cgroup, KF_TRUSTED_ARGS | KF_ACQUIRE | KF_RET_NUL
 BTF_ID_FLAGS(func, bpf_put_mem_cgroup, KF_TRUSTED_ARGS | KF_RELEASE)
 
 BTF_ID_FLAGS(func, bpf_mem_cgroup_vm_events, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_mem_cgroup_memory_events, KF_TRUSTED_ARGS)
 BTF_ID_FLAGS(func, bpf_mem_cgroup_usage, KF_TRUSTED_ARGS)
 BTF_ID_FLAGS(func, bpf_mem_cgroup_page_state, KF_TRUSTED_ARGS)
 BTF_ID_FLAGS(func, bpf_mem_cgroup_flush_stats, KF_TRUSTED_ARGS | KF_SLEEPABLE)
-- 
2.52.0




* [PATCH bpf-next v2 6/7] bpf: selftests: selftests for memcg stat kfuncs
  2025-12-20  4:12 [PATCH bpf-next v2 0/7] mm: bpf kfuncs to access memcg data Roman Gushchin
                   ` (4 preceding siblings ...)
  2025-12-20  4:12 ` [PATCH bpf-next v2 5/7] mm: introduce BPF kfunc to access memory events Roman Gushchin
@ 2025-12-20  4:12 ` Roman Gushchin
  2025-12-20  5:23   ` Shakeel Butt
  2025-12-20  4:12 ` [PATCH bpf-next v2 7/7] MAINTAINERS: add an entry for MM BPF extensions Roman Gushchin
  6 siblings, 1 reply; 25+ messages in thread
From: Roman Gushchin @ 2025-12-20  4:12 UTC (permalink / raw)
  To: bpf, linux-mm, linux-kernel
  Cc: JP Kobryn, Alexei Starovoitov, Daniel Borkmann, Shakeel Butt,
	Michal Hocko, Johannes Weiner, Roman Gushchin

From: JP Kobryn <inwardvessel@gmail.com>

Add test coverage for the kfuncs that fetch memcg stats. Using some
common stats, the test scenarios ensure that each stat increases by
some arbitrary amount. The selected stats cover the three categories
represented by the enums node_stat_item, memcg_stat_item, and
vm_event_item.

Since only a subset of all stats is queried, use a static struct made
up of a field for each stat. The bpf program writes the fetched values
into the struct when invoked, and the user mode program reads the
fields for verification.

Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 .../testing/selftests/bpf/cgroup_iter_memcg.h |  18 ++
 .../bpf/prog_tests/cgroup_iter_memcg.c        | 223 ++++++++++++++++++
 .../selftests/bpf/progs/cgroup_iter_memcg.c   |  39 +++
 3 files changed, 280 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/cgroup_iter_memcg.h
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_iter_memcg.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_iter_memcg.c

diff --git a/tools/testing/selftests/bpf/cgroup_iter_memcg.h b/tools/testing/selftests/bpf/cgroup_iter_memcg.h
new file mode 100644
index 000000000000..3f59b127943b
--- /dev/null
+++ b/tools/testing/selftests/bpf/cgroup_iter_memcg.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */
+#ifndef __CGROUP_ITER_MEMCG_H
+#define __CGROUP_ITER_MEMCG_H
+
+struct memcg_query {
+	/* some node_stat_item's */
+	unsigned long nr_anon_mapped;
+	unsigned long nr_shmem;
+	unsigned long nr_file_pages;
+	unsigned long nr_file_mapped;
+	/* some memcg_stat_item */
+	unsigned long memcg_kmem;
+	/* some vm_event_item */
+	unsigned long pgfault;
+};
+
+#endif /* __CGROUP_ITER_MEMCG_H */
diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_iter_memcg.c b/tools/testing/selftests/bpf/prog_tests/cgroup_iter_memcg.c
new file mode 100644
index 000000000000..a5afd16705f0
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_iter_memcg.c
@@ -0,0 +1,223 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */
+#include <test_progs.h>
+#include <bpf/libbpf.h>
+#include <bpf/btf.h>
+#include <fcntl.h>
+#include <sys/mman.h>
+#include <unistd.h>
+#include "cgroup_helpers.h"
+#include "cgroup_iter_memcg.h"
+#include "cgroup_iter_memcg.skel.h"
+
+static int read_stats(struct bpf_link *link)
+{
+	int fd, ret = 0;
+	ssize_t bytes;
+
+	fd = bpf_iter_create(bpf_link__fd(link));
+	if (!ASSERT_OK_FD(fd, "bpf_iter_create"))
+		return 1;
+
+	/*
+	 * Invoke iter program by reading from its fd. We're not expecting any
+	 * data to be written by the bpf program so the result should be zero.
+	 * Results will be read directly through the custom data section
+	 * accessible through skel->data_query.memcg_query.
+	 */
+	bytes = read(fd, NULL, 0);
+	if (!ASSERT_EQ(bytes, 0, "read fd"))
+		ret = 1;
+
+	close(fd);
+	return ret;
+}
+
+static void test_anon(struct bpf_link *link, struct memcg_query *memcg_query)
+{
+	void *map;
+	size_t len;
+
+	len = sysconf(_SC_PAGESIZE) * 1024;
+
+	/*
+	 * Increase memcg anon usage by mapping and writing
+	 * to a new anon region.
+	 */
+	map = mmap(NULL, len, PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	if (!ASSERT_NEQ(map, MAP_FAILED, "mmap anon"))
+		return;
+
+	memset(map, 1, len);
+
+	if (!ASSERT_OK(read_stats(link), "read stats"))
+		goto cleanup;
+
+	ASSERT_GT(memcg_query->nr_anon_mapped, 0, "final anon mapped val");
+
+cleanup:
+	munmap(map, len);
+}
+
+static void test_file(struct bpf_link *link, struct memcg_query *memcg_query)
+{
+	void *map;
+	size_t len;
+	char *path;
+	int fd;
+
+	len = sysconf(_SC_PAGESIZE) * 1024;
+	path = "/tmp/test_cgroup_iter_memcg";
+
+	/*
+	 * Increase memcg file usage by creating and writing
+	 * to a mapped file.
+	 */
+	fd = open(path, O_CREAT | O_RDWR, 0644);
+	if (!ASSERT_OK_FD(fd, "open fd"))
+		return;
+	if (!ASSERT_OK(ftruncate(fd, len), "ftruncate"))
+		goto cleanup_fd;
+
+	map = mmap(NULL, len, PROT_WRITE, MAP_SHARED, fd, 0);
+	if (!ASSERT_NEQ(map, MAP_FAILED, "mmap file"))
+		goto cleanup_fd;
+
+	memset(map, 1, len);
+
+	if (!ASSERT_OK(read_stats(link), "read stats"))
+		goto cleanup_map;
+
+	ASSERT_GT(memcg_query->nr_file_pages, 0, "final file value");
+	ASSERT_GT(memcg_query->nr_file_mapped, 0, "final file mapped value");
+
+cleanup_map:
+	munmap(map, len);
+cleanup_fd:
+	close(fd);
+	unlink(path);
+}
+
+static void test_shmem(struct bpf_link *link, struct memcg_query *memcg_query)
+{
+	size_t len;
+	int fd;
+
+	len = sysconf(_SC_PAGESIZE) * 1024;
+
+	/*
+	 * Increase memcg shmem usage by creating and writing
+	 * to a shmem object.
+	 */
+	fd = shm_open("/tmp_shmem", O_CREAT | O_RDWR, 0644);
+	if (!ASSERT_OK_FD(fd, "shm_open"))
+		return;
+
+	if (!ASSERT_OK(fallocate(fd, 0, 0, len), "fallocate"))
+		goto cleanup;
+
+	if (!ASSERT_OK(read_stats(link), "read stats"))
+		goto cleanup;
+
+	ASSERT_GT(memcg_query->nr_shmem, 0, "final shmem value");
+
+cleanup:
+	close(fd);
+	shm_unlink("/tmp_shmem");
+}
+
+#define NR_PIPES 64
+static void test_kmem(struct bpf_link *link, struct memcg_query *memcg_query)
+{
+	int fds[NR_PIPES][2], i;
+
+	/*
+	 * Increase kmem value by creating pipes which will allocate some
+	 * kernel buffers.
+	 */
+	for (i = 0; i < NR_PIPES; i++) {
+		if (!ASSERT_OK(pipe(fds[i]), "pipe"))
+			goto cleanup;
+	}
+
+	if (!ASSERT_OK(read_stats(link), "read stats"))
+		goto cleanup;
+
+	ASSERT_GT(memcg_query->memcg_kmem, 0, "kmem value");
+
+cleanup:
+	for (i = i - 1; i >= 0; i--) {
+		close(fds[i][0]);
+		close(fds[i][1]);
+	}
+}
+
+static void test_pgfault(struct bpf_link *link, struct memcg_query *memcg_query)
+{
+	void *map;
+	size_t len;
+
+	len = sysconf(_SC_PAGESIZE) * 1024;
+
+	/* Create region to use for triggering a page fault. */
+	map = mmap(NULL, len, PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	if (!ASSERT_NEQ(map, MAP_FAILED, "mmap anon"))
+		return;
+
+	/* Trigger page fault. */
+	memset(map, 1, len);
+
+	if (!ASSERT_OK(read_stats(link), "read stats"))
+		goto cleanup;
+
+	ASSERT_GT(memcg_query->pgfault, 0, "final pgfault val");
+
+cleanup:
+	munmap(map, len);
+}
+
+void test_cgroup_iter_memcg(void)
+{
+	char *cgroup_rel_path = "/cgroup_iter_memcg_test";
+	struct cgroup_iter_memcg *skel;
+	struct bpf_link *link;
+	int cgroup_fd;
+
+	cgroup_fd = cgroup_setup_and_join(cgroup_rel_path);
+	if (!ASSERT_OK_FD(cgroup_fd, "cgroup_setup_and_join"))
+		return;
+
+	skel = cgroup_iter_memcg__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "cgroup_iter_memcg__open_and_load"))
+		goto cleanup_cgroup_fd;
+
+	DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
+	union bpf_iter_link_info linfo = {
+		.cgroup.cgroup_fd = cgroup_fd,
+		.cgroup.order = BPF_CGROUP_ITER_SELF_ONLY,
+	};
+	opts.link_info = &linfo;
+	opts.link_info_len = sizeof(linfo);
+
+	link = bpf_program__attach_iter(skel->progs.cgroup_memcg_query, &opts);
+	if (!ASSERT_OK_PTR(link, "bpf_program__attach_iter"))
+		goto cleanup_skel;
+
+	if (test__start_subtest("cgroup_iter_memcg__anon"))
+		test_anon(link, &skel->data_query->memcg_query);
+	if (test__start_subtest("cgroup_iter_memcg__shmem"))
+		test_shmem(link, &skel->data_query->memcg_query);
+	if (test__start_subtest("cgroup_iter_memcg__file"))
+		test_file(link, &skel->data_query->memcg_query);
+	if (test__start_subtest("cgroup_iter_memcg__kmem"))
+		test_kmem(link, &skel->data_query->memcg_query);
+	if (test__start_subtest("cgroup_iter_memcg__pgfault"))
+		test_pgfault(link, &skel->data_query->memcg_query);
+
+	bpf_link__destroy(link);
+cleanup_skel:
+	cgroup_iter_memcg__destroy(skel);
+cleanup_cgroup_fd:
+	close(cgroup_fd);
+	cleanup_cgroup_environment();
+}
diff --git a/tools/testing/selftests/bpf/progs/cgroup_iter_memcg.c b/tools/testing/selftests/bpf/progs/cgroup_iter_memcg.c
new file mode 100644
index 000000000000..59fb70a3cc50
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/cgroup_iter_memcg.c
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */
+#include <vmlinux.h>
+#include <bpf/bpf_core_read.h>
+#include "cgroup_iter_memcg.h"
+
+char _license[] SEC("license") = "GPL";
+
+/* The latest values read are stored here. */
+struct memcg_query memcg_query SEC(".data.query");
+
+SEC("iter.s/cgroup")
+int cgroup_memcg_query(struct bpf_iter__cgroup *ctx)
+{
+	struct cgroup *cgrp = ctx->cgroup;
+	struct cgroup_subsys_state *css;
+	struct mem_cgroup *memcg;
+
+	if (!cgrp)
+		return 1;
+
+	css = &cgrp->self;
+	memcg = bpf_get_mem_cgroup(css);
+	if (!memcg)
+		return 1;
+
+	bpf_mem_cgroup_flush_stats(memcg);
+
+	memcg_query.nr_anon_mapped = bpf_mem_cgroup_page_state(memcg, NR_ANON_MAPPED);
+	memcg_query.nr_shmem = bpf_mem_cgroup_page_state(memcg, NR_SHMEM);
+	memcg_query.nr_file_pages = bpf_mem_cgroup_page_state(memcg, NR_FILE_PAGES);
+	memcg_query.nr_file_mapped = bpf_mem_cgroup_page_state(memcg, NR_FILE_MAPPED);
+	memcg_query.memcg_kmem = bpf_mem_cgroup_page_state(memcg, MEMCG_KMEM);
+	memcg_query.pgfault = bpf_mem_cgroup_vm_events(memcg, PGFAULT);
+
+	bpf_put_mem_cgroup(memcg);
+
+	return 0;
+}
-- 
2.52.0




* [PATCH bpf-next v2 7/7] MAINTAINERS: add an entry for MM BPF extensions
  2025-12-20  4:12 [PATCH bpf-next v2 0/7] mm: bpf kfuncs to access memcg data Roman Gushchin
                   ` (5 preceding siblings ...)
  2025-12-20  4:12 ` [PATCH bpf-next v2 6/7] bpf: selftests: selftests for memcg stat kfuncs Roman Gushchin
@ 2025-12-20  4:12 ` Roman Gushchin
  2025-12-20  5:26   ` Shakeel Butt
  6 siblings, 1 reply; 25+ messages in thread
From: Roman Gushchin @ 2025-12-20  4:12 UTC (permalink / raw)
  To: bpf, linux-mm, linux-kernel
  Cc: JP Kobryn, Alexei Starovoitov, Daniel Borkmann, Shakeel Butt,
	Michal Hocko, Johannes Weiner, Roman Gushchin

Let's create a separate entry for MM BPF extensions: these patches
often require attention from both the bpf and mm communities.

Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 MAINTAINERS | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index c0030e126fc8..59e3053e1122 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4798,6 +4798,13 @@ L:	bpf@vger.kernel.org
 S:	Maintained
 F:	tools/lib/bpf/
 
+BPF [MEMORY MANAGEMENT EXTENSIONS]
+M:	Roman Gushchin <roman.gushchin@linux.dev>
+L:	bpf@vger.kernel.org
+L:	linux-mm@kvack.org
+S:	Maintained
+F:	mm/bpf_memcontrol.c
+
 BPF [MISC]
 L:	bpf@vger.kernel.org
 S:	Odd Fixes
-- 
2.52.0




* Re: [PATCH bpf-next v2 4/7] mm: introduce BPF kfuncs to access memcg statistics and events
  2025-12-20  4:12 ` [PATCH bpf-next v2 4/7] mm: introduce BPF kfuncs to access memcg statistics and events Roman Gushchin
@ 2025-12-20  4:29   ` bot+bpf-ci
  2025-12-20  4:39     ` Roman Gushchin
  2025-12-20  5:22   ` Shakeel Butt
  1 sibling, 1 reply; 25+ messages in thread
From: bot+bpf-ci @ 2025-12-20  4:29 UTC (permalink / raw)
  To: roman.gushchin, bpf, linux-mm, linux-kernel
  Cc: inwardvessel, ast, daniel, shakeel.butt, mhocko, hannes,
	roman.gushchin, mhocko, ast, andrii, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai


> diff --git a/mm/bpf_memcontrol.c b/mm/bpf_memcontrol.c
> index 2d518ad2ad3f..d84fe6f3ed43 100644
> --- a/mm/bpf_memcontrol.c
> +++ b/mm/bpf_memcontrol.c
> @@ -78,6 +78,57 @@ __bpf_kfunc void bpf_put_mem_cgroup(struct mem_cgroup *memcg)
>  	css_put(&memcg->css);
>  }
>
> +/**
> + * bpf_mem_cgroup_vm_events - Read memory cgroup's vm event counter
> + * @memcg: memory cgroup
> + * @event: event id
> + *
> + * Allows reading memory cgroup event counters.
> + */
> +__bpf_kfunc unsigned long bpf_mem_cgroup_vm_events(struct mem_cgroup *memcg,
> +						enum vm_event_item event)
> +{
> +	return memcg_events(memcg, event);
> +}

Should this function validate the event parameter before passing it to
memcg_events()?

The call chain bpf_mem_cgroup_vm_events()->memcg_events()->memcg_events_index()
reaches an array access in memcg_events_index():

    static inline int memcg_events_index(enum vm_event_item idx)
    {
        return mem_cgroup_events_index[idx];
    }

The array mem_cgroup_events_index is declared with size NR_VM_EVENT_ITEMS.
If a BPF program passes an event value >= NR_VM_EVENT_ITEMS, this would
cause an out-of-bounds array read.

For comparison, bpf_mem_cgroup_page_state() below validates its idx
parameter before calling memcg_page_state_output(). A similar check here
would prevent the potential out-of-bounds access:

    if (event < 0 || event >= NR_VM_EVENT_ITEMS)
        return (unsigned long)-1;

> +
> +/**
> + * bpf_mem_cgroup_usage - Read memory cgroup's usage

[ ... ]

> +__bpf_kfunc unsigned long bpf_mem_cgroup_page_state(struct mem_cgroup *memcg, int idx)
> +{
> +	if (idx < 0 || idx >= MEMCG_NR_STAT)
> +		return (unsigned long)-1;
> +
> +	return memcg_page_state_output(memcg, idx);
> +}

[ ... ]


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/20389033088


* Re: [PATCH bpf-next v2 5/7] mm: introduce BPF kfunc to access memory events
  2025-12-20  4:12 ` [PATCH bpf-next v2 5/7] mm: introduce BPF kfunc to access memory events Roman Gushchin
@ 2025-12-20  4:29   ` bot+bpf-ci
  2025-12-20  4:41     ` Roman Gushchin
  2025-12-22  0:49   ` Alexei Starovoitov
  1 sibling, 1 reply; 25+ messages in thread
From: bot+bpf-ci @ 2025-12-20  4:29 UTC (permalink / raw)
  To: roman.gushchin, bpf, linux-mm, linux-kernel
  Cc: inwardvessel, ast, daniel, shakeel.butt, mhocko, hannes,
	roman.gushchin, ast, andrii, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai


> diff --git a/mm/bpf_memcontrol.c b/mm/bpf_memcontrol.c
> index d84fe6f3ed43..858eb43766ce 100644
> --- a/mm/bpf_memcontrol.c
> +++ b/mm/bpf_memcontrol.c
> @@ -103,6 +103,22 @@ __bpf_kfunc unsigned long bpf_mem_cgroup_usage(struct mem_cgroup *memcg)
>  	return mem_cgroup_usage(memcg, false) * PAGE_SIZE;
>  }
>
> +/**
> + * bpf_mem_cgroup_memory_events - Read memory cgroup's memory event value
> + * @memcg: memory cgroup
> + * @event: memory event id
> + *
> + * Returns current memory event count.
> + */
> +__bpf_kfunc unsigned long bpf_mem_cgroup_memory_events(struct mem_cgroup *memcg,
> +						enum memcg_memory_event event)
> +{
> +	if (event >= MEMCG_NR_MEMORY_EVENTS)
> +		return (unsigned long)-1;

Should this also check for negative values?  The enum type is typically
signed, so a BPF program passing a negative event value would pass this
check but cause an out-of-bounds read from memcg->memory_events[].

The similar bpf_mem_cgroup_page_state() in this same file uses:

    if (idx < 0 || idx >= MEMCG_NR_STAT)

which checks both bounds.  Could the check here be changed to:

    if (event < 0 || event >= MEMCG_NR_MEMORY_EVENTS)

to match that pattern?

> +
> +	return atomic_long_read(&memcg->memory_events[event]);
> +}

[ ... ]


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/20389033088


* Re: [PATCH bpf-next v2 4/7] mm: introduce BPF kfuncs to access memcg statistics and events
  2025-12-20  4:29   ` bot+bpf-ci
@ 2025-12-20  4:39     ` Roman Gushchin
  0 siblings, 0 replies; 25+ messages in thread
From: Roman Gushchin @ 2025-12-20  4:39 UTC (permalink / raw)
  To: bot+bpf-ci
  Cc: bpf, linux-mm, linux-kernel, inwardvessel, ast, daniel,
	shakeel.butt, mhocko, hannes, mhocko, andrii, martin.lau,
	eddyz87, yonghong.song, clm, ihor.solodrai

bot+bpf-ci@kernel.org writes:

>> diff --git a/mm/bpf_memcontrol.c b/mm/bpf_memcontrol.c
>> index 2d518ad2ad3f..d84fe6f3ed43 100644
>> --- a/mm/bpf_memcontrol.c
>> +++ b/mm/bpf_memcontrol.c
>> @@ -78,6 +78,57 @@ __bpf_kfunc void bpf_put_mem_cgroup(struct mem_cgroup *memcg)
>>  	css_put(&memcg->css);
>>  }
>>
>> +/**
>> + * bpf_mem_cgroup_vm_events - Read memory cgroup's vm event counter
>> + * @memcg: memory cgroup
>> + * @event: event id
>> + *
>> + * Allows reading memory cgroup event counters.
>> + */
>> +__bpf_kfunc unsigned long bpf_mem_cgroup_vm_events(struct mem_cgroup *memcg,
>> +						enum vm_event_item event)
>> +{
>> +	return memcg_events(memcg, event);
>> +}
>
> Should this function validate the event parameter before passing it to
> memcg_events()?
>
> The call chain bpf_mem_cgroup_vm_events()->memcg_events()->memcg_events_index()
> reaches an array access in memcg_events_index():
>
>     static inline int memcg_events_index(enum vm_event_item idx)
>     {
>         return mem_cgroup_events_index[idx];
>     }
>
> The array mem_cgroup_events_index is declared with size NR_VM_EVENT_ITEMS.
> If a BPF program passes an event value >= NR_VM_EVENT_ITEMS, this would
> cause an out-of-bounds array read.
>
> For comparison, bpf_mem_cgroup_page_state() below validates its idx
> parameter before calling memcg_page_state_output(). A similar check here
> would prevent the potential out-of-bounds access:
>
>     if (event < 0 || event >= NR_VM_EVENT_ITEMS)
>         return (unsigned long)-1;

No, it's a false positive, as explained in a reply to the similar
comment on v1.



* Re: [PATCH bpf-next v2 5/7] mm: introduce BPF kfunc to access memory events
  2025-12-20  4:29   ` bot+bpf-ci
@ 2025-12-20  4:41     ` Roman Gushchin
  2025-12-20 13:19       ` Chris Mason
  0 siblings, 1 reply; 25+ messages in thread
From: Roman Gushchin @ 2025-12-20  4:41 UTC (permalink / raw)
  To: bot+bpf-ci
  Cc: bpf, linux-mm, linux-kernel, inwardvessel, ast, daniel,
	shakeel.butt, mhocko, hannes, andrii, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai

bot+bpf-ci@kernel.org writes:

>> diff --git a/mm/bpf_memcontrol.c b/mm/bpf_memcontrol.c
>> index d84fe6f3ed43..858eb43766ce 100644
>> --- a/mm/bpf_memcontrol.c
>> +++ b/mm/bpf_memcontrol.c
>> @@ -103,6 +103,22 @@ __bpf_kfunc unsigned long bpf_mem_cgroup_usage(struct mem_cgroup *memcg)
>>  	return mem_cgroup_usage(memcg, false) * PAGE_SIZE;
>>  }
>>
>> +/**
>> + * bpf_mem_cgroup_memory_events - Read memory cgroup's memory event value
>> + * @memcg: memory cgroup
>> + * @event: memory event id
>> + *
>> + * Returns current memory event count.
>> + */
>> +__bpf_kfunc unsigned long bpf_mem_cgroup_memory_events(struct mem_cgroup *memcg,
>> +						enum memcg_memory_event event)
>> +{
>> +	if (event >= MEMCG_NR_MEMORY_EVENTS)
>> +		return (unsigned long)-1;
>
> Should this also check for negative values?  The enum type is typically
> signed, so a BPF program passing a negative event value would pass this
> check but cause an out-of-bounds read from memcg->memory_events[].
>
> The similar bpf_mem_cgroup_page_state() in this same file uses:
>
>     if (idx < 0 || idx >= MEMCG_NR_STAT)
>
> which checks both bounds.  Could the check here be changed to:
>
>     if (event < 0 || event >= MEMCG_NR_MEMORY_EVENTS)
>
> to match that pattern?

No, the bpf verifier enforces that event is non-negative.
It's a false positive.



* Re: [PATCH bpf-next v2 2/7] mm: introduce BPF kfuncs to deal with memcg pointers
  2025-12-20  4:12 ` [PATCH bpf-next v2 2/7] mm: introduce BPF kfuncs to deal with memcg pointers Roman Gushchin
@ 2025-12-20  5:20   ` Shakeel Butt
  2025-12-22  0:39   ` Alexei Starovoitov
  1 sibling, 0 replies; 25+ messages in thread
From: Shakeel Butt @ 2025-12-20  5:20 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: bpf, linux-mm, linux-kernel, JP Kobryn, Alexei Starovoitov,
	Daniel Borkmann, Michal Hocko, Johannes Weiner

On Fri, Dec 19, 2025 at 08:12:45PM -0800, Roman Gushchin wrote:
> To operate effectively with memory cgroups in BPF, there is a need
> to convert css pointers to memcg pointers. The simple container_of()
> cast used in kernel code can't be used in BPF because, from the
> verifier's point of view, it is an out-of-bounds memory access.
> 
> Introduce helper get/put kfuncs which can be used to get
> a refcounted memcg pointer from the css pointer:
>   - bpf_get_mem_cgroup,
>   - bpf_put_mem_cgroup.
> 
> bpf_get_mem_cgroup() can take both memcg's css and the corresponding
> cgroup's "self" css. It allows it to be used with the existing cgroup
> iterator which iterates over cgroup tree, not memcg tree.
> 
> Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>

Acked-by: Shakeel Butt <shakeel.butt@linux.dev>



* Re: [PATCH bpf-next v2 3/7] mm: introduce bpf_get_root_mem_cgroup() BPF kfunc
  2025-12-20  4:12 ` [PATCH bpf-next v2 3/7] mm: introduce bpf_get_root_mem_cgroup() BPF kfunc Roman Gushchin
@ 2025-12-20  5:21   ` Shakeel Butt
  0 siblings, 0 replies; 25+ messages in thread
From: Shakeel Butt @ 2025-12-20  5:21 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: bpf, linux-mm, linux-kernel, JP Kobryn, Alexei Starovoitov,
	Daniel Borkmann, Michal Hocko, Johannes Weiner

On Fri, Dec 19, 2025 at 08:12:46PM -0800, Roman Gushchin wrote:
> Introduce a BPF kfunc to get a trusted pointer to the root memory
> cgroup. It's very handy for traversing the full memcg tree, e.g.
> for handling a system-wide OOM.
> 
> It's possible to obtain this pointer by traversing the memcg tree
> up from any known memcg, but that is sub-optimal and makes BPF
> programs more complex and less efficient.
> 
> bpf_get_root_mem_cgroup() has KF_ACQUIRE | KF_RET_NULL semantics;
> however, in reality it's not necessary to bump the corresponding
> reference counter: the root memory cgroup is immortal and reference
> counting is skipped for it, see css_get(). Once set, root_mem_cgroup
> is always a valid memcg pointer. It's safe to call bpf_put_mem_cgroup()
> on a pointer obtained with bpf_get_root_mem_cgroup(); it's effectively
> a no-op.
> 
> Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>

Acked-by: Shakeel Butt <shakeel.butt@linux.dev>



* Re: [PATCH bpf-next v2 4/7] mm: introduce BPF kfuncs to access memcg statistics and events
  2025-12-20  4:12 ` [PATCH bpf-next v2 4/7] mm: introduce BPF kfuncs to access memcg statistics and events Roman Gushchin
  2025-12-20  4:29   ` bot+bpf-ci
@ 2025-12-20  5:22   ` Shakeel Butt
  1 sibling, 0 replies; 25+ messages in thread
From: Shakeel Butt @ 2025-12-20  5:22 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: bpf, linux-mm, linux-kernel, JP Kobryn, Alexei Starovoitov,
	Daniel Borkmann, Michal Hocko, Johannes Weiner, Michal Hocko

On Fri, Dec 19, 2025 at 08:12:47PM -0800, Roman Gushchin wrote:
> Introduce BPF kfuncs to conveniently access memcg data:
>   - bpf_mem_cgroup_vm_events(),
>   - bpf_mem_cgroup_usage(),
>   - bpf_mem_cgroup_page_state(),
>   - bpf_mem_cgroup_flush_stats().
> 
> These functions are useful for implementing BPF OOM policies, but
> can also be used to accelerate access to memcg data. Reading the
> same data through cgroupfs is much more expensive, roughly 5x,
> mostly because of the need to convert the data to text and back.
> 
> JP Kobryn:
> An experiment was set up to compare the performance of a program that
> uses the traditional method of reading memory.stat vs a program using
> the new kfuncs. The control program opens up the root memory.stat file
> and for 1M iterations reads, converts the string values to numeric data,
> then seeks back to the beginning. The experimental program sets up the
> requisite libbpf objects and for 1M iterations invokes a bpf program
> which uses the kfuncs to fetch all available stats for node_stat_item,
> memcg_stat_item, and vm_event_item types.
> 
> The results showed a significant perf benefit on the experimental side,
> outperforming the control side by a margin of 93%. In kernel mode,
> elapsed time was reduced by 80%, while in user mode, over 99% of time
> was saved.
> 
> control: elapsed time
> real    0m38.318s
> user    0m25.131s
> sys     0m13.070s
> 
> experiment: elapsed time
> real    0m2.789s
> user    0m0.187s
> sys     0m2.512s
> 
> control: perf data
> 33.43% a.out libc.so.6         [.] __vfscanf_internal
>  6.88% a.out [kernel.kallsyms] [k] vsnprintf
>  6.33% a.out libc.so.6         [.] _IO_fgets
>  5.51% a.out [kernel.kallsyms] [k] format_decode
>  4.31% a.out libc.so.6         [.] __GI_____strtoull_l_internal
>  3.78% a.out [kernel.kallsyms] [k] string
>  3.53% a.out [kernel.kallsyms] [k] number
>  2.71% a.out libc.so.6         [.] _IO_sputbackc
>  2.41% a.out [kernel.kallsyms] [k] strlen
>  1.98% a.out a.out             [.] main
>  1.70% a.out libc.so.6         [.] _IO_getline_info
>  1.51% a.out libc.so.6         [.] __isoc99_sscanf
>  1.47% a.out [kernel.kallsyms] [k] memory_stat_format
>  1.47% a.out [kernel.kallsyms] [k] memcpy_orig
>  1.41% a.out [kernel.kallsyms] [k] seq_buf_printf
> 
> experiment: perf data
> 10.55% memcgstat bpf_prog_..._query [k] bpf_prog_16aab2f19fa982a7_query
>  6.90% memcgstat [kernel.kallsyms]  [k] memcg_page_state_output
>  3.55% memcgstat [kernel.kallsyms]  [k] _raw_spin_lock
>  3.12% memcgstat [kernel.kallsyms]  [k] memcg_events
>  2.87% memcgstat [kernel.kallsyms]  [k] __memcg_slab_post_alloc_hook
>  2.73% memcgstat [kernel.kallsyms]  [k] kmem_cache_free
>  2.70% memcgstat [kernel.kallsyms]  [k] entry_SYSRETQ_unsafe_stack
>  2.25% memcgstat [kernel.kallsyms]  [k] __memcg_slab_free_hook
>  2.06% memcgstat [kernel.kallsyms]  [k] get_page_from_freelist
> 
> Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
> Co-developed-by: JP Kobryn <inwardvessel@gmail.com>
> Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
> Acked-by: Michal Hocko <mhocko@suse.com>

Acked-by: Shakeel Butt <shakeel.butt@linux.dev>



* Re: [PATCH bpf-next v2 6/7] bpf: selftests: selftests for memcg stat kfuncs
  2025-12-20  4:12 ` [PATCH bpf-next v2 6/7] bpf: selftests: selftests for memcg stat kfuncs Roman Gushchin
@ 2025-12-20  5:23   ` Shakeel Butt
  0 siblings, 0 replies; 25+ messages in thread
From: Shakeel Butt @ 2025-12-20  5:23 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: bpf, linux-mm, linux-kernel, JP Kobryn, Alexei Starovoitov,
	Daniel Borkmann, Michal Hocko, Johannes Weiner

On Fri, Dec 19, 2025 at 08:12:49PM -0800, Roman Gushchin wrote:
> From: JP Kobryn <inwardvessel@gmail.com>
> 
> Add test coverage for the kfuncs that fetch memcg stats. Using some
> common stats, the test scenarios ensure that each stat increases by
> some arbitrary amount. The selected stats cover the three categories
> represented by the enums node_stat_item, memcg_stat_item, and
> vm_event_item.
> 
> Since only a subset of all stats is queried, use a static struct made
> up of a field for each stat. The bpf program writes the fetched values
> into the struct when invoked, and the user mode program reads the
> fields for verification.
> 
> Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
> Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>

Acked-by: Shakeel Butt <shakeel.butt@linux.dev>



* Re: [PATCH bpf-next v2 7/7] MAINTAINERS: add an entry for MM BPF extensions
  2025-12-20  4:12 ` [PATCH bpf-next v2 7/7] MAINTAINERS: add an entry for MM BPF extensions Roman Gushchin
@ 2025-12-20  5:26   ` Shakeel Butt
  0 siblings, 0 replies; 25+ messages in thread
From: Shakeel Butt @ 2025-12-20  5:26 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: bpf, linux-mm, linux-kernel, JP Kobryn, Alexei Starovoitov,
	Daniel Borkmann, Michal Hocko, Johannes Weiner

On Fri, Dec 19, 2025 at 08:12:50PM -0800, Roman Gushchin wrote:
> Let's create a separate entry for MM BPF extensions: these patches
> often require an attention from both bpf and mm communities.
> 
> Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
> ---
>  MAINTAINERS | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index c0030e126fc8..59e3053e1122 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -4798,6 +4798,13 @@ L:	bpf@vger.kernel.org
>  S:	Maintained
>  F:	tools/lib/bpf/
>  
> +BPF [MEMORY MANAGEMENT EXTENSIONS]
> +M:	Roman Gushchin <roman.gushchin@linux.dev>

If you don't mind, put JP and me here as well.

Acked-by: Shakeel Butt <shakeel.butt@linux.dev>




* Re: [PATCH bpf-next v2 5/7] mm: introduce BPF kfunc to access memory events
  2025-12-20  4:41     ` Roman Gushchin
@ 2025-12-20 13:19       ` Chris Mason
  2025-12-20 18:43         ` Roman Gushchin
  0 siblings, 1 reply; 25+ messages in thread
From: Chris Mason @ 2025-12-20 13:19 UTC (permalink / raw)
  To: Roman Gushchin, bot+bpf-ci
  Cc: bpf, linux-mm, linux-kernel, inwardvessel, ast, daniel,
	shakeel.butt, mhocko, hannes, andrii, martin.lau, eddyz87,
	yonghong.song, ihor.solodrai

On 12/19/25 11:41 PM, Roman Gushchin wrote:
> bot+bpf-ci@kernel.org writes:
> 
>>> diff --git a/mm/bpf_memcontrol.c b/mm/bpf_memcontrol.c
>>> index d84fe6f3ed43..858eb43766ce 100644
>>> --- a/mm/bpf_memcontrol.c
>>> +++ b/mm/bpf_memcontrol.c
>>> @@ -103,6 +103,22 @@ __bpf_kfunc unsigned long bpf_mem_cgroup_usage(struct mem_cgroup *memcg)
>>>  	return mem_cgroup_usage(memcg, false) * PAGE_SIZE;
>>>  }
>>>
>>> +/**
>>> + * bpf_mem_cgroup_memory_events - Read memory cgroup's memory event value
>>> + * @memcg: memory cgroup
>>> + * @event: memory event id
>>> + *
>>> + * Returns current memory event count.
>>> + */
>>> +__bpf_kfunc unsigned long bpf_mem_cgroup_memory_events(struct mem_cgroup *memcg,
>>> +						enum memcg_memory_event event)
>>> +{
>>> +	if (event >= MEMCG_NR_MEMORY_EVENTS)
>>> +		return (unsigned long)-1;
>>
>> Should this also check for negative values?  The enum type is typically
>> signed, so a BPF program passing a negative event value would pass this
>> check but cause an out-of-bounds read from memcg->memory_events[].
>>
>> The similar bpf_mem_cgroup_page_state() in this same file uses:
>>
>>     if (idx < 0 || idx >= MEMCG_NR_STAT)
>>
>> which checks both bounds.  Could the check here be changed to:
>>
>>     if (event < 0 || event >= MEMCG_NR_MEMORY_EVENTS)
>>
>> to match that pattern?
> 
> No, the bpf verifier enforces event > 0.
> It's a false positive.

I'll add some words here to the bpf prompts, thanks Roman.

-chris




* Re: [PATCH bpf-next v2 5/7] mm: introduce BPF kfunc to access memory events
  2025-12-20 13:19       ` Chris Mason
@ 2025-12-20 18:43         ` Roman Gushchin
  2025-12-20 19:59           ` Chris Mason
  0 siblings, 1 reply; 25+ messages in thread
From: Roman Gushchin @ 2025-12-20 18:43 UTC (permalink / raw)
  To: Chris Mason
  Cc: bot+bpf-ci, bpf, linux-mm, linux-kernel, inwardvessel, ast,
	daniel, shakeel.butt, mhocko, hannes, andrii, martin.lau,
	eddyz87, yonghong.song, ihor.solodrai

Chris Mason <clm@meta.com> writes:

> On 12/19/25 11:41 PM, Roman Gushchin wrote:
>> bot+bpf-ci@kernel.org writes:
>> 
>>>> diff --git a/mm/bpf_memcontrol.c b/mm/bpf_memcontrol.c
>>>> index d84fe6f3ed43..858eb43766ce 100644
>>>> --- a/mm/bpf_memcontrol.c
>>>> +++ b/mm/bpf_memcontrol.c
>>>> @@ -103,6 +103,22 @@ __bpf_kfunc unsigned long bpf_mem_cgroup_usage(struct mem_cgroup *memcg)
>>>>  	return mem_cgroup_usage(memcg, false) * PAGE_SIZE;
>>>>  }
>>>>
>>>> +/**
>>>> + * bpf_mem_cgroup_memory_events - Read memory cgroup's memory event value
>>>> + * @memcg: memory cgroup
>>>> + * @event: memory event id
>>>> + *
>>>> + * Returns current memory event count.
>>>> + */
>>>> +__bpf_kfunc unsigned long bpf_mem_cgroup_memory_events(struct mem_cgroup *memcg,
>>>> +						enum memcg_memory_event event)
>>>> +{
>>>> +	if (event >= MEMCG_NR_MEMORY_EVENTS)
>>>> +		return (unsigned long)-1;
>>>
>>> Should this also check for negative values?  The enum type is typically
>>> signed, so a BPF program passing a negative event value would pass this
>>> check but cause an out-of-bounds read from memcg->memory_events[].
>>>
>>> The similar bpf_mem_cgroup_page_state() in this same file uses:
>>>
>>>     if (idx < 0 || idx >= MEMCG_NR_STAT)
>>>
>>> which checks both bounds.  Could the check here be changed to:
>>>
>>>     if (event < 0 || event >= MEMCG_NR_MEMORY_EVENTS)
>>>
>>> to match that pattern?
>> 
>> No, the bpf verifier enforces that event is non-negative.
>> It's a false positive.
>
> I'll add some words here to the bpf prompts, thanks Roman.

I'll try to play with it too, hopefully we can fix it.

Thanks!



* Re: [PATCH bpf-next v2 5/7] mm: introduce BPF kfunc to access memory events
  2025-12-20 18:43         ` Roman Gushchin
@ 2025-12-20 19:59           ` Chris Mason
  2025-12-22 22:23             ` Roman Gushchin
  0 siblings, 1 reply; 25+ messages in thread
From: Chris Mason @ 2025-12-20 19:59 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: bot+bpf-ci, bpf, linux-mm, linux-kernel, inwardvessel, ast,
	daniel, shakeel.butt, mhocko, hannes, andrii, martin.lau,
	eddyz87, yonghong.song, ihor.solodrai

On 12/20/25 1:43 PM, Roman Gushchin wrote:
> Chris Mason <clm@meta.com> writes:
> 
>> On 12/19/25 11:41 PM, Roman Gushchin wrote:
>>> bot+bpf-ci@kernel.org writes:
>>>
>>>>> diff --git a/mm/bpf_memcontrol.c b/mm/bpf_memcontrol.c
>>>>> index d84fe6f3ed43..858eb43766ce 100644
>>>>> --- a/mm/bpf_memcontrol.c
>>>>> +++ b/mm/bpf_memcontrol.c
>>>>> @@ -103,6 +103,22 @@ __bpf_kfunc unsigned long bpf_mem_cgroup_usage(struct mem_cgroup *memcg)
>>>>>  	return mem_cgroup_usage(memcg, false) * PAGE_SIZE;
>>>>>  }
>>>>>
>>>>> +/**
>>>>> + * bpf_mem_cgroup_memory_events - Read memory cgroup's memory event value
>>>>> + * @memcg: memory cgroup
>>>>> + * @event: memory event id
>>>>> + *
>>>>> + * Returns current memory event count.
>>>>> + */
>>>>> +__bpf_kfunc unsigned long bpf_mem_cgroup_memory_events(struct mem_cgroup *memcg,
>>>>> +						enum memcg_memory_event event)
>>>>> +{
>>>>> +	if (event >= MEMCG_NR_MEMORY_EVENTS)
>>>>> +		return (unsigned long)-1;
>>>>
>>>> Should this also check for negative values?  The enum type is typically
>>>> signed, so a BPF program passing a negative event value would pass this
>>>> check but cause an out-of-bounds read from memcg->memory_events[].
>>>>
>>>> The similar bpf_mem_cgroup_page_state() in this same file uses:
>>>>
>>>>     if (idx < 0 || idx >= MEMCG_NR_STAT)
>>>>
>>>> which checks both bounds.  Could the check here be changed to:
>>>>
>>>>     if (event < 0 || event >= MEMCG_NR_MEMORY_EVENTS)
>>>>
>>>> to match that pattern?
>>>
>>> No, the bpf verifier enforces event > 0.
>>> It's a false positive.
>>
>> I'll add some words here to the bpf prompts, thanks Roman.
> 
> I'll try to play with it too, hopefully we can fix it.
> 

https://github.com/masoncl/review-prompts/commit/fcc3bf704798f6be64cbb2e28b05a5c91eee9c7b

Seems to do the trick here:

[ ... ]

  6. Missing negative check:
    - The function only checks event >= MEMCG_NR_MEMORY_EVENTS
    - Unlike bpf_mem_cgroup_page_state() which checks idx < 0 || idx >= MEMCG_NR_STAT
    - However, event is an enum memcg_memory_event, not int
    - Enums in C are typically unsigned when all values are non-negative (starting from 0)
    - The enum starts at MEMCG_LOW = 0 (implicit) and all values are positive
    - A negative value passed to this enum parameter would be extremely unusual and would require explicit casting

  Let me verify the enum values are all non-negative:

  Looking at memcg_memory_event enum:
  enum memcg_memory_event {
  	MEMCG_LOW,        // 0
  	MEMCG_HIGH,       // 1
  	MEMCG_MAX,        // 2
  	MEMCG_OOM,        // 3
  	MEMCG_OOM_KILL,   // 4
  	MEMCG_OOM_GROUP_KILL, // 5
  	MEMCG_SWAP_HIGH,  // 6
  	MEMCG_SWAP_MAX,   // 7
  	MEMCG_SWAP_FAIL,  // 8
  	MEMCG_SOCK_THROTTLED, // 9
  	MEMCG_NR_MEMORY_EVENTS, // 10
  }

  All values are 0 or positive. Since the parameter is typed as enum memcg_memory_event (not int), and BPF programs use constants from vmlinux.h, the lack of a < 0 check is acceptable:

  1. BPF programs using enum constants can only pass valid values (0-9)
  2. Even if a computed value were used, the enum type constrains the interpretation
  3. If a negative value were somehow cast to this enum, the unsigned comparison event >= MEMCG_NR_MEMORY_EVENTS would catch it (negative becomes large positive in unsigned comparison)

  Wait, let me reconsider this. The function parameter is enum memcg_memory_event event. In C, enums are typically represented as int under the hood. If someone passes a negative value, the comparison event >=
  MEMCG_NR_MEMORY_EVENTS would be signed comparison if event is treated as signed int.

  Let me check if this is a concern:

● plugin:semcode:semcode - grep_functions (MCP)(pattern: "memcg_memory_event", path_pattern: "bpf_memcontrol")
  ⎿  Searching function bodies for pattern: memcg_memory_event (filtering paths matching: bpf_memcontrol, limit: 100) at git commit 0f92b93b6e42765a13c933e4e9cb268f8d77cf3c

     Found 1 matching line(s):
     … +3 lines (ctrl+o to expand)

● Let me think more carefully about the signed/unsigned comparison issue. The comparison event >= MEMCG_NR_MEMORY_EVENTS involves:
  - event - an enum (which in C is typically int, signed)
  - MEMCG_NR_MEMORY_EVENTS - an enum constant (value 10)

  If event were negative (e.g., -1), then -1 >= 10 is false, and the function would proceed to access memcg->memory_events[-1] - an out-of-bounds access.

  However, I need to consider the BPF context:
  1. Per bpf.md: BPF programs use enum constants from vmlinux.h, and the compiler substitutes valid values at compile time
  2. The verifier tracks scalar values: Even if computed at runtime, the verifier tracks value ranges
  3. KF_TRUSTED_ARGS: The memcg pointer is validated, but this doesn't affect the enum parameter

  Let me compare with bpf_mem_cgroup_page_state() which takes int idx and explicitly checks idx < 0:

  __bpf_kfunc unsigned long bpf_mem_cgroup_page_state(struct mem_cgroup *memcg, int idx)
  {
  	if (idx < 0 || idx >= MEMCG_NR_STAT)
  		return (unsigned long)-1;
  	...
  }

  The difference is:
  - bpf_mem_cgroup_page_state takes int idx - explicitly signed, needs both checks
  - bpf_mem_cgroup_memory_events takes enum memcg_memory_event event - enum typed

  For the enum case, per the BPF documentation loaded earlier:
  DO NOT report as bugs: Kfuncs that take enum-typed parameters and use constants from vmlinux.h without explicit bounds checks.

-chris



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v2 2/7] mm: introduce BPF kfuncs to deal with memcg pointers
  2025-12-20  4:12 ` [PATCH bpf-next v2 2/7] mm: introduce BPF kfuncs to deal with memcg pointers Roman Gushchin
  2025-12-20  5:20   ` Shakeel Butt
@ 2025-12-22  0:39   ` Alexei Starovoitov
  1 sibling, 0 replies; 25+ messages in thread
From: Alexei Starovoitov @ 2025-12-22  0:39 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: bpf, linux-mm, LKML, JP Kobryn, Alexei Starovoitov,
	Daniel Borkmann, Shakeel Butt, Michal Hocko, Johannes Weiner

On Fri, Dec 19, 2025 at 6:13 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> To effectively operate with memory cgroups in BPF there is a need
> to convert css pointers to memcg pointers. A simple container_of
> cast, which is used in the kernel code, can't be used in BPF because,
> from the verifier's point of view, that's an out-of-bounds memory access.
>
> Introduce helper get/put kfuncs which can be used to get
> a refcounted memcg pointer from the css pointer:
>   - bpf_get_mem_cgroup,
>   - bpf_put_mem_cgroup.
>
> bpf_get_mem_cgroup() can take both the memcg's css and the corresponding
> cgroup's "self" css. This allows it to be used with the existing cgroup
> iterator, which iterates over the cgroup tree, not the memcg tree.
>
> Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
> ---
>  mm/Makefile         |  3 ++
>  mm/bpf_memcontrol.c | 88 +++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 91 insertions(+)
>  create mode 100644 mm/bpf_memcontrol.c
>
> diff --git a/mm/Makefile b/mm/Makefile
> index 9175f8cc6565..79c39a98ff83 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -106,6 +106,9 @@ obj-$(CONFIG_MEMCG) += memcontrol.o vmpressure.o
>  ifdef CONFIG_SWAP
>  obj-$(CONFIG_MEMCG) += swap_cgroup.o
>  endif
> +ifdef CONFIG_BPF_SYSCALL
> +obj-$(CONFIG_MEMCG) += bpf_memcontrol.o
> +endif
>  obj-$(CONFIG_CGROUP_HUGETLB) += hugetlb_cgroup.o
>  obj-$(CONFIG_GUP_TEST) += gup_test.o
>  obj-$(CONFIG_DMAPOOL_TEST) += dmapool_test.o
> diff --git a/mm/bpf_memcontrol.c b/mm/bpf_memcontrol.c
> new file mode 100644
> index 000000000000..03d435fc4f10
> --- /dev/null
> +++ b/mm/bpf_memcontrol.c
> @@ -0,0 +1,88 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * Memory Controller-related BPF kfuncs and auxiliary code
> + *
> + * Author: Roman Gushchin <roman.gushchin@linux.dev>
> + */
> +
> +#include <linux/memcontrol.h>
> +#include <linux/bpf.h>
> +
> +__bpf_kfunc_start_defs();
> +
> +/**
> + * bpf_get_mem_cgroup - Get a reference to a memory cgroup
> + * @css: pointer to the css structure
> + *
> + * Returns a pointer to a mem_cgroup structure after bumping
> + * the corresponding css's reference counter.
> + *
> + * It's fine to pass a css which belongs to any cgroup controller,
> + * e.g. unified hierarchy's main css.
> + *
> + * Implements KF_ACQUIRE semantics.
> + */
> +__bpf_kfunc struct mem_cgroup *
> +bpf_get_mem_cgroup(struct cgroup_subsys_state *css)
> +{
> +       struct mem_cgroup *memcg = NULL;
> +       bool rcu_unlock = false;
> +
> +       if (mem_cgroup_disabled() || !root_mem_cgroup)
> +               return NULL;
> +
> +       if (root_mem_cgroup->css.ss != css->ss) {
> +               struct cgroup *cgroup = css->cgroup;
> +               int ssid = root_mem_cgroup->css.ss->id;
> +
> +               rcu_read_lock();
> +               rcu_unlock = true;
> +               css = rcu_dereference_raw(cgroup->subsys[ssid]);
> +       }
> +
> +       if (css && css_tryget(css))
> +               memcg = container_of(css, struct mem_cgroup, css);
> +
> +       if (rcu_unlock)
> +               rcu_read_unlock();
> +
> +       return memcg;
> +}
> +
> +/**
> + * bpf_put_mem_cgroup - Put a reference to a memory cgroup
> + * @memcg: memory cgroup to release
> + *
> + * Releases a previously acquired memcg reference.
> + * Implements KF_RELEASE semantics.
> + */
> +__bpf_kfunc void bpf_put_mem_cgroup(struct mem_cgroup *memcg)
> +{
> +       css_put(&memcg->css);
> +}
> +
> +__bpf_kfunc_end_defs();
> +
> +BTF_KFUNCS_START(bpf_memcontrol_kfuncs)
> +BTF_ID_FLAGS(func, bpf_get_mem_cgroup, KF_TRUSTED_ARGS | KF_ACQUIRE | KF_RET_NULL | KF_RCU)
> +BTF_ID_FLAGS(func, bpf_put_mem_cgroup, KF_TRUSTED_ARGS | KF_RELEASE)

This is an unusual combination of flags.
KF_RCU is a weaker form of KF_TRUSTED_ARGS, so just use KF_RCU.
We have an odd selftest kmod that specifies both,
but it's unnecessary there as well.
Just KF_ACQUIRE | KF_RET_NULL | KF_RCU will do.

Similarly, KF_RELEASE implies KF_TRUSTED_ARGS.
That's even documented in Documentation/bpf/kfuncs.rst,
so just use KF_RELEASE for bpf_put_mem_cgroup.
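
In other words, something like this (a minimal sketch of the
registration with the redundant flags dropped, assuming no other
changes to the kfunc set):

	BTF_KFUNCS_START(bpf_memcontrol_kfuncs)
	/* KF_RCU is sufficient; KF_TRUSTED_ARGS is redundant here */
	BTF_ID_FLAGS(func, bpf_get_mem_cgroup, KF_ACQUIRE | KF_RET_NULL | KF_RCU)
	/* KF_RELEASE already implies KF_TRUSTED_ARGS */
	BTF_ID_FLAGS(func, bpf_put_mem_cgroup, KF_RELEASE)
	BTF_KFUNCS_END(bpf_memcontrol_kfuncs)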

pw-bot: cr


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v2 5/7] mm: introduce BPF kfunc to access memory events
  2025-12-20  4:12 ` [PATCH bpf-next v2 5/7] mm: introduce BPF kfunc to access memory events Roman Gushchin
  2025-12-20  4:29   ` bot+bpf-ci
@ 2025-12-22  0:49   ` Alexei Starovoitov
  2025-12-22  0:51     ` Alexei Starovoitov
  1 sibling, 1 reply; 25+ messages in thread
From: Alexei Starovoitov @ 2025-12-22  0:49 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: bpf, linux-mm, LKML, JP Kobryn, Alexei Starovoitov,
	Daniel Borkmann, Shakeel Butt, Michal Hocko, Johannes Weiner

On Fri, Dec 19, 2025 at 6:13 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> From: JP Kobryn <inwardvessel@gmail.com>
>
> Introduce BPF kfunc to access memory events, e.g.:
> MEMCG_LOW, MEMCG_MAX, MEMCG_OOM, MEMCG_OOM_KILL etc.
>
> Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
> Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
> Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
> ---
>  mm/bpf_memcontrol.c | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
>
> diff --git a/mm/bpf_memcontrol.c b/mm/bpf_memcontrol.c
> index d84fe6f3ed43..858eb43766ce 100644
> --- a/mm/bpf_memcontrol.c
> +++ b/mm/bpf_memcontrol.c
> @@ -103,6 +103,22 @@ __bpf_kfunc unsigned long bpf_mem_cgroup_usage(struct mem_cgroup *memcg)
>         return mem_cgroup_usage(memcg, false) * PAGE_SIZE;
>  }
>
> +/**
> + * bpf_mem_cgroup_memory_events - Read memory cgroup's memory event value
> + * @memcg: memory cgroup
> + * @event: memory event id
> + *
> + * Returns current memory event count.
> + */
> +__bpf_kfunc unsigned long bpf_mem_cgroup_memory_events(struct mem_cgroup *memcg,
> +                                               enum memcg_memory_event event)
> +{
> +       if (event >= MEMCG_NR_MEMORY_EVENTS)
> +               return (unsigned long)-1;
> +
> +       return atomic_long_read(&memcg->memory_events[event]);
> +}

Why is patch 5 not squashed with patch 4?
I'd think placing bpf_mem_cgroup_memory_events()
right next to bpf_mem_cgroup_vm_events() in the same patch
would make the difference clearer.
For non-mm people the names are very close, and at first glance
it looks like a duplicate.


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v2 5/7] mm: introduce BPF kfunc to access memory events
  2025-12-22  0:49   ` Alexei Starovoitov
@ 2025-12-22  0:51     ` Alexei Starovoitov
  0 siblings, 0 replies; 25+ messages in thread
From: Alexei Starovoitov @ 2025-12-22  0:51 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: bpf, linux-mm, LKML, JP Kobryn, Alexei Starovoitov,
	Daniel Borkmann, Shakeel Butt, Michal Hocko, Johannes Weiner

On Sun, Dec 21, 2025 at 2:49 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Fri, Dec 19, 2025 at 6:13 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> >
> > From: JP Kobryn <inwardvessel@gmail.com>
> >
> > Introduce BPF kfunc to access memory events, e.g.:
> > MEMCG_LOW, MEMCG_MAX, MEMCG_OOM, MEMCG_OOM_KILL etc.
> >
> > Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
> > Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
> > Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
> > ---
> >  mm/bpf_memcontrol.c | 17 +++++++++++++++++
> >  1 file changed, 17 insertions(+)
> >
> > diff --git a/mm/bpf_memcontrol.c b/mm/bpf_memcontrol.c
> > index d84fe6f3ed43..858eb43766ce 100644
> > --- a/mm/bpf_memcontrol.c
> > +++ b/mm/bpf_memcontrol.c
> > @@ -103,6 +103,22 @@ __bpf_kfunc unsigned long bpf_mem_cgroup_usage(struct mem_cgroup *memcg)
> >         return mem_cgroup_usage(memcg, false) * PAGE_SIZE;
> >  }
> >
> > +/**
> > + * bpf_mem_cgroup_memory_events - Read memory cgroup's memory event value
> > + * @memcg: memory cgroup
> > + * @event: memory event id
> > + *
> > + * Returns current memory event count.
> > + */
> > +__bpf_kfunc unsigned long bpf_mem_cgroup_memory_events(struct mem_cgroup *memcg,
> > +                                               enum memcg_memory_event event)
> > +{
> > +       if (event >= MEMCG_NR_MEMORY_EVENTS)
> > +               return (unsigned long)-1;
> > +
> > +       return atomic_long_read(&memcg->memory_events[event]);
> > +}
>
> Why is patch 5 not squashed with patch 4?
> I'd think placing bpf_mem_cgroup_memory_events()
> right next to bpf_mem_cgroup_vm_events() in the same patch
> would make the difference clearer.
> For non-mm people the names are very close, and at first glance
> it looks like a duplicate.

Also see a bunch of kdoc warnings flagged by CI.


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v2 5/7] mm: introduce BPF kfunc to access memory events
  2025-12-20 19:59           ` Chris Mason
@ 2025-12-22 22:23             ` Roman Gushchin
  2025-12-23 14:09               ` Chris Mason
  0 siblings, 1 reply; 25+ messages in thread
From: Roman Gushchin @ 2025-12-22 22:23 UTC (permalink / raw)
  To: Chris Mason
  Cc: bot+bpf-ci, bpf, linux-mm, linux-kernel, inwardvessel, ast,
	daniel, shakeel.butt, mhocko, hannes, andrii, martin.lau,
	eddyz87, yonghong.song, ihor.solodrai

Chris Mason <clm@meta.com> writes:

> On 12/20/25 1:43 PM, Roman Gushchin wrote:
>> Chris Mason <clm@meta.com> writes:
>> 
>>> On 12/19/25 11:41 PM, Roman Gushchin wrote:
>>>> bot+bpf-ci@kernel.org writes:
>>>>
>>>>>> diff --git a/mm/bpf_memcontrol.c b/mm/bpf_memcontrol.c
>>>>>> index d84fe6f3ed43..858eb43766ce 100644
>>>>>> --- a/mm/bpf_memcontrol.c
>>>>>> +++ b/mm/bpf_memcontrol.c
>>>>>> @@ -103,6 +103,22 @@ __bpf_kfunc unsigned long bpf_mem_cgroup_usage(struct mem_cgroup *memcg)
>>>>>>  	return mem_cgroup_usage(memcg, false) * PAGE_SIZE;
>>>>>>  }
>>>>>>
>>>>>> +/**
>>>>>> + * bpf_mem_cgroup_memory_events - Read memory cgroup's memory event value
>>>>>> + * @memcg: memory cgroup
>>>>>> + * @event: memory event id
>>>>>> + *
>>>>>> + * Returns current memory event count.
>>>>>> + */
>>>>>> +__bpf_kfunc unsigned long bpf_mem_cgroup_memory_events(struct mem_cgroup *memcg,
>>>>>> +						enum memcg_memory_event event)
>>>>>> +{
>>>>>> +	if (event >= MEMCG_NR_MEMORY_EVENTS)
>>>>>> +		return (unsigned long)-1;
>>>>>
>>>>> Should this also check for negative values?  The enum type is typically
>>>>> signed, so a BPF program passing a negative event value would pass this
>>>>> check but cause an out-of-bounds read from memcg->memory_events[].
>>>>>
>>>>> The similar bpf_mem_cgroup_page_state() in this same file uses:
>>>>>
>>>>>     if (idx < 0 || idx >= MEMCG_NR_STAT)
>>>>>
>>>>> which checks both bounds.  Could the check here be changed to:
>>>>>
>>>>>     if (event < 0 || event >= MEMCG_NR_MEMORY_EVENTS)
>>>>>
>>>>> to match that pattern?
>>>>
>>>> No, the bpf verifier enforces event > 0.
>>>> It's a false positive.
>>>
>>> I'll add some words here to the bpf prompts, thanks Roman.
>> 
>> I'll try to play with it too, hopefully we can fix it.
>> 
>
> https://github.com/masoncl/review-prompts/commit/fcc3bf704798f6be64cbb2e28b05a5c91eee9c7b

Hi Chris!

I'm sorry, apparently I was dead wrong and overestimated the bpf
verifier (and the ai was correct, lol). Somebody told me, as feedback
to an earlier version, that enums are fully covered, and I didn't
check.

In reality the verifier doesn't guarantee the correctness of the value
passed as an enum, only that it's a u32. So we need to check the value.
I've added the necessary checks in v3 of my patchset. It passes the
local ai review without your latest change. Please revert it.
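
For reference, one way to write the check (a sketch only; the exact
v3 code may differ):

	__bpf_kfunc unsigned long
	bpf_mem_cgroup_memory_events(struct mem_cgroup *memcg,
				     enum memcg_memory_event event)
	{
		/*
		 * The verifier only guarantees a u32 here. The cast forces
		 * an unsigned comparison, so a negative bit pattern is
		 * rejected by the same upper-bound check.
		 */
		if ((u32)event >= MEMCG_NR_MEMORY_EVENTS)
			return (unsigned long)-1;

		return atomic_long_read(&memcg->memory_events[event]);
	}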

Thanks and sorry for the hassle


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v2 5/7] mm: introduce BPF kfunc to access memory events
  2025-12-22 22:23             ` Roman Gushchin
@ 2025-12-23 14:09               ` Chris Mason
  0 siblings, 0 replies; 25+ messages in thread
From: Chris Mason @ 2025-12-23 14:09 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: bot+bpf-ci, bpf, linux-mm, linux-kernel, inwardvessel, ast,
	daniel, shakeel.butt, mhocko, hannes, andrii, martin.lau,
	eddyz87, yonghong.song, ihor.solodrai

On 12/22/25 5:23 PM, Roman Gushchin wrote:
> Chris Mason <clm@meta.com> writes:
> 
>> On 12/20/25 1:43 PM, Roman Gushchin wrote:
>>>>> No, the bpf verifier enforces event > 0.
>>>>> It's a false positive.
>>>>
>>>> I'll add some words here to the bpf prompts, thanks Roman.
>>>
>>> I'll try to play with it too, hopefully we can fix it.
>>>
>>
>> https://github.com/masoncl/review-prompts/commit/fcc3bf704798f6be64cbb2e28b05a5c91eee9c7b 
> 
> Hi Chris!
> 
> I'm sorry, apparently I was dead wrong and overestimated the bpf
> verifier (and the ai was correct, lol). Somebody told me, as feedback
> to an earlier version, that enums are fully covered, and I didn't
> check.
> 
> In reality the verifier doesn't guarantee the correctness of the value
> passed as an enum, only that it's a u32. So we need to check the value.
> I've added the necessary checks in v3 of my patchset. It passes the
> local ai review without your latest change. Please revert it.
> 
> Thanks and sorry for the hassle

Thanks Roman, I adjusted the prompt changes and looked harder for proof
of exactly what checks are done.

-chris


^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2025-12-23 14:10 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-12-20  4:12 [PATCH bpf-next v2 0/7] mm: bpf kfuncs to access memcg data Roman Gushchin
2025-12-20  4:12 ` [PATCH bpf-next v2 1/7] mm: declare memcg_page_state_output() in memcontrol.h Roman Gushchin
2025-12-20  4:12 ` [PATCH bpf-next v2 2/7] mm: introduce BPF kfuncs to deal with memcg pointers Roman Gushchin
2025-12-20  5:20   ` Shakeel Butt
2025-12-22  0:39   ` Alexei Starovoitov
2025-12-20  4:12 ` [PATCH bpf-next v2 3/7] mm: introduce bpf_get_root_mem_cgroup() BPF kfunc Roman Gushchin
2025-12-20  5:21   ` Shakeel Butt
2025-12-20  4:12 ` [PATCH bpf-next v2 4/7] mm: introduce BPF kfuncs to access memcg statistics and events Roman Gushchin
2025-12-20  4:29   ` bot+bpf-ci
2025-12-20  4:39     ` Roman Gushchin
2025-12-20  5:22   ` Shakeel Butt
2025-12-20  4:12 ` [PATCH bpf-next v2 5/7] mm: introduce BPF kfunc to access memory events Roman Gushchin
2025-12-20  4:29   ` bot+bpf-ci
2025-12-20  4:41     ` Roman Gushchin
2025-12-20 13:19       ` Chris Mason
2025-12-20 18:43         ` Roman Gushchin
2025-12-20 19:59           ` Chris Mason
2025-12-22 22:23             ` Roman Gushchin
2025-12-23 14:09               ` Chris Mason
2025-12-22  0:49   ` Alexei Starovoitov
2025-12-22  0:51     ` Alexei Starovoitov
2025-12-20  4:12 ` [PATCH bpf-next v2 6/7] bpf: selftests: selftests for memcg stat kfuncs Roman Gushchin
2025-12-20  5:23   ` Shakeel Butt
2025-12-20  4:12 ` [PATCH bpf-next v2 7/7] MAINTAINERS: add an entry for MM BPF extensions Roman Gushchin
2025-12-20  5:26   ` Shakeel Butt

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox