From: Matt Bobrowski <mattbobrowski@google.com>
To: Roman Gushchin <roman.gushchin@linux.dev>
Cc: bpf@vger.kernel.org, Michal Hocko <mhocko@suse.com>,
Alexei Starovoitov <ast@kernel.org>,
Shakeel Butt <shakeel.butt@linux.dev>,
JP Kobryn <inwardvessel@gmail.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Suren Baghdasaryan <surenb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH bpf-next v3 09/17] mm: introduce bpf_out_of_memory() BPF kfunc
Date: Wed, 28 Jan 2026 20:21:54 +0000
Message-ID: <aXpv4r_3L0UWzAQn@google.com>
In-Reply-To: <20260127024421.494929-10-roman.gushchin@linux.dev>
On Mon, Jan 26, 2026 at 06:44:12PM -0800, Roman Gushchin wrote:
> Introduce the bpf_out_of_memory() BPF kfunc, which allows declaring
> an out-of-memory event and triggering the corresponding kernel OOM
> handling mechanism.
>
> It takes a trusted memcg pointer (or NULL for system-wide OOMs)
> as an argument, as well as the page order.
>
> If the BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK flag is not set, only one OOM
> can be declared and handled in the system at once, so if the function
> is called while another OOM is being handled, it bails out with
> -EBUSY. This mode is suited for global OOMs: any concurrent OOM will
> likely do the job and release some memory. In blocking mode (which is
> suited for memcg OOMs) the execution will wait on the oom_lock mutex.
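
Just to check that I'm reading the intended usage correctly, here's a
minimal sketch of how I'd expect a sleepable BPF program to call this
(includes and attach point elided; the helper names below are made up
purely for illustration):

        extern int bpf_out_of_memory(struct mem_cgroup *memcg__nullable,
                                     int order, u64 flags) __ksym;

        /* Blocking mode, suited for memcg-scoped OOMs as noted above. */
        static int declare_memcg_oom(struct mem_cgroup *memcg)
        {
                return bpf_out_of_memory(memcg, 0,
                                         BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK);
        }

        /* Non-blocking mode for a system-wide OOM: bails out with
         * -EBUSY if another OOM is already being handled.
         */
        static int declare_global_oom(void)
        {
                return bpf_out_of_memory(NULL, 0, 0);
        }
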
>
> The function is declared as sleepable, which guarantees that it won't
> be called from an atomic context. This is required by the OOM handling
> code, which shouldn't be called from a non-blocking context.
>
> Handling a memcg OOM almost always requires taking the css_set_lock
> spinlock. The fact that bpf_out_of_memory() is sleepable also
> guarantees that it can't be called with css_set_lock held, so the
> kernel can't deadlock on it.
>
> To avoid deadlocks on oom_lock, the function is filtered out for
> BPF OOM struct ops programs and all tracing programs.
>
> Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
> ---
> include/linux/oom.h | 5 +++
> mm/oom_kill.c | 85 +++++++++++++++++++++++++++++++++++++++++++--
> 2 files changed, 88 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/oom.h b/include/linux/oom.h
> index c2dce336bcb4..851dba9287b5 100644
> --- a/include/linux/oom.h
> +++ b/include/linux/oom.h
> @@ -21,6 +21,11 @@ enum oom_constraint {
> CONSTRAINT_MEMCG,
> };
>
> +enum bpf_oom_flags {
> + BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK = 1 << 0,
> + BPF_OOM_FLAGS_LAST = 1 << 1,
> +};
> +
> /*
> * Details of the page allocation that triggered the oom killer that are used to
> * determine what should be killed.
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 09897597907f..8f63a370b8f5 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -1334,6 +1334,53 @@ __bpf_kfunc int bpf_oom_kill_process(struct oom_control *oc,
> return 0;
> }
>
> +/**
> + * bpf_out_of_memory - declare Out Of Memory state and invoke OOM killer
> + * @memcg__nullable: memcg or NULL for system-wide OOMs
> + * @order: order of page which wasn't allocated
> + * @flags: flags
> + *
> + * Declares the Out Of Memory state and invokes the OOM killer.
> + *
> + * OOM handlers are synchronized using the oom_lock mutex. If wait_on_oom_lock
> + * is true, the function will wait on it. Otherwise it bails out with -EBUSY
> + * if oom_lock is contended.
> + *
> + * Generally it's advised to pass wait_on_oom_lock=false for global OOMs
> + * and wait_on_oom_lock=true for memcg-scoped OOMs.
> + *
> + * Returns 1 if the forward progress was achieved and some memory was freed.
> + * Returns a negative value if an error occurred.
> + */
> +__bpf_kfunc int bpf_out_of_memory(struct mem_cgroup *memcg__nullable,
> + int order, u64 flags)
> +{
> + struct oom_control oc = {
> + .memcg = memcg__nullable,
> + .gfp_mask = GFP_KERNEL,
> + .order = order,
> + };
> + int ret;
> +
> + if (flags & ~(BPF_OOM_FLAGS_LAST - 1))
> + return -EINVAL;
> +
> + if (oc.order < 0 || oc.order > MAX_PAGE_ORDER)
> + return -EINVAL;
> +
> + if (flags & BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK) {
> + ret = mutex_lock_killable(&oom_lock);
If oom_lock is contended and we end up waiting here, some forward
progress could have been made in the interim, possibly enough that the
pending OOM event initiated by this call into bpf_out_of_memory() is
no longer even warranted. What do you think about adding an escape
hatch here, which could simply take the form of a user-defined
callback?
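
Purely as an illustrative sketch (oom_still_needed_cb is a made-up
name, nothing like it exists in this series), I was thinking of
something along the lines of:

        ret = mutex_lock_killable(&oom_lock);
        if (ret)
                return ret;
        /*
         * Hypothetical escape hatch: now that oom_lock is finally
         * held, let the caller re-evaluate whether declaring the OOM
         * is still warranted, and back off gracefully if it isn't.
         */
        if (oom_still_needed_cb && !oom_still_needed_cb(&oc)) {
                mutex_unlock(&oom_lock);
                return 0;
        }
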
> + if (ret)
> + return ret;
> + } else if (!mutex_trylock(&oom_lock))
> + return -EBUSY;
> +
> + ret = out_of_memory(&oc);
> +
> + mutex_unlock(&oom_lock);
> + return ret;
> +}
> +
> __bpf_kfunc_end_defs();
>
> BTF_KFUNCS_START(bpf_oom_kfuncs)
> @@ -1356,14 +1403,48 @@ static const struct btf_kfunc_id_set bpf_oom_kfunc_set = {
> .filter = bpf_oom_kfunc_filter,
> };
>
> +BTF_KFUNCS_START(bpf_declare_oom_kfuncs)
> +BTF_ID_FLAGS(func, bpf_out_of_memory, KF_SLEEPABLE)
> +BTF_KFUNCS_END(bpf_declare_oom_kfuncs)
> +
> +static int bpf_declare_oom_kfunc_filter(const struct bpf_prog *prog, u32 kfunc_id)
> +{
> + if (!btf_id_set8_contains(&bpf_declare_oom_kfuncs, kfunc_id))
> + return 0;
> +
> + if (prog->type == BPF_PROG_TYPE_STRUCT_OPS &&
> + prog->aux->attach_btf_id == bpf_oom_ops_ids[0])
> + return -EACCES;
> +
> + if (prog->type == BPF_PROG_TYPE_TRACING)
> + return -EACCES;
> +
> + return 0;
> +}
> +
> +static const struct btf_kfunc_id_set bpf_declare_oom_kfunc_set = {
> + .owner = THIS_MODULE,
> + .set = &bpf_declare_oom_kfuncs,
> + .filter = bpf_declare_oom_kfunc_filter,
> +};
> +
> static int __init bpf_oom_init(void)
> {
> int err;
>
> err = register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS,
> &bpf_oom_kfunc_set);
> - if (err)
> - pr_warn("error while registering bpf oom kfuncs: %d", err);
> + if (err) {
> + pr_warn("error while registering struct_ops bpf oom kfuncs: %d", err);
> + return err;
> + }
> +
> + err = register_btf_kfunc_id_set(BPF_PROG_TYPE_UNSPEC,
> + &bpf_declare_oom_kfunc_set);
> + if (err) {
> + pr_warn("error while registering unspec bpf oom kfuncs: %d", err);
> + return err;
> + }
>
> return err;
> }
> --
> 2.52.0
>