From: Roman Gushchin <roman.gushchin@linux.dev>
To: Michal Hocko <mhocko@suse.com>
Cc: linux-kernel@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Shakeel Butt <shakeel.butt@linux.dev>,
	Suren Baghdasaryan <surenb@google.com>,
	David Rientjes <rientjes@google.com>,
	Josh Don <joshdon@google.com>,
	Chuyi Zhou <zhouchuyi@bytedance.com>,
	cgroups@vger.kernel.org, linux-mm@kvack.org, bpf@vger.kernel.org
Subject: Re: [PATCH rfc 10/12] mm: introduce bpf_out_of_memory() bpf kfunc
Date: Wed, 30 Apr 2025 14:53:50 +0000
Message-ID: <aBI5fh28P1Qgi2zZ@google.com>
In-Reply-To: <aBHQ69_rCqjnDaDl@tiehlicka>

On Wed, Apr 30, 2025 at 09:27:39AM +0200, Michal Hocko wrote:
> On Tue 29-04-25 21:31:35, Roman Gushchin wrote:
> > On Tue, Apr 29, 2025 at 01:46:07PM +0200, Michal Hocko wrote:
> > > On Mon 28-04-25 03:36:15, Roman Gushchin wrote:
> > > > Introduce the bpf_out_of_memory() bpf kfunc, which allows declaring
> > > > an out-of-memory event and triggering the corresponding kernel OOM
> > > > handling mechanism.
> > > > 
> > > > It takes a trusted memcg pointer (or NULL for system-wide OOMs)
> > > > as an argument, as well as the page order.
> > > > 
> > > > Only one OOM can be declared and handled in the system at a time,
> > > > so if the function is called while another OOM is being handled,
> > > > it bails out with -EBUSY.
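
(For illustration only: a minimal sketch of the semantics described above,
not the actual patch. The trylock on the global oom_lock and the error
values are assumptions based on this description.)

__bpf_kfunc int bpf_out_of_memory(struct mem_cgroup *memcg, int order)
{
	struct oom_control oc = {
		.memcg = memcg,	/* NULL means a system-wide OOM */
		.order = order,
	};
	int ret = 0;

	/* Only one OOM can be handled at a time: don't block, just
	 * report that somebody else is already dealing with it. */
	if (!mutex_trylock(&oom_lock))
		return -EBUSY;

	if (!out_of_memory(&oc))
		ret = -ENOMEM;	/* assumed error value */

	mutex_unlock(&oom_lock);
	return ret;
}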
> > > 
> > > This makes sense for the global OOM handler because concurrent handlers
> > > are cooperative. But is this really correct for memcg ooms which could
> > > happen for different hierarchies? Currently we do block on oom_lock in
> > > that case to make sure one oom doesn't starve others. Do we want the
> > > same behavior for custom OOM handlers?
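
(For context, the current memcg OOM path serializes on the global oom_lock
roughly like this; simplified from mm/memcontrol.c, with details elided:)

static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg,
				     gfp_t gfp_mask, int order)
{
	struct oom_control oc = {
		.memcg = memcg,
		.gfp_mask = gfp_mask,
		.order = order,
	};
	bool ret;

	/* Blocks until the lock is acquired, so concurrent memcg OOMs
	 * in unrelated hierarchies take turns instead of failing. */
	mutex_lock(&oom_lock);
	ret = out_of_memory(&oc);
	mutex_unlock(&oom_lock);
	return ret;
}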
> > 
> > It's a good point and I had similar thoughts when I was working on it.
> > But I think it's orthogonal to the customization of the oom handling.
> > Even for the existing oom killer it makes no sense to serialize memcg ooms
> > in independent memcg subtrees. But I'm worried about the dmesg reporting:
> > it can become really messy with 2+ concurrent OOMs.
> > 
> > Also, some memory can be shared, so one OOM can eliminate the need for
> > another OOM, even if they look independent.
> > 
> > So my conclusion here is to leave things as they are until we see signs
> > of real-world problems with the (lack of) concurrency between ooms.
> 
> How do we learn about that happening, though? I do not think we have any
> counters to watch that would indicate that some oom handlers cannot run.

The bpf program which declares an OOM can handle this: e.g. retry, or wait
and retry, etc. We can also try to mimic the existing behavior and wait
on oom_lock (potentially splitting it into multiple locks to support
concurrent ooms in different memcgs). Do you think that would be preferable?
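
For illustration, a minimal caller-side sketch of the retry idea as BPF
program code. Only the bpf_out_of_memory(memcg, order) signature comes from
the patch description; the attach point is elided, and the bounded retry
count and the helper name are made up:

#include <vmlinux.h>
#include <errno.h>
#include <bpf/bpf_helpers.h>

/* kfunc from this series; signature per the patch description */
extern int bpf_out_of_memory(struct mem_cgroup *memcg, int order) __ksym;

/* Called from whatever BPF hook declares the OOM; retries a few
 * times if another OOM is already being handled. */
static int declare_oom_with_retry(void)
{
	int i, err = -EBUSY;

	for (i = 0; i < 3 && err == -EBUSY; i++)
		err = bpf_out_of_memory(NULL /* system-wide */, 0);

	return err;
}

char _license[] SEC("license") = "GPL";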


Thread overview: 33+ messages
2025-04-28  3:36 [PATCH rfc 00/12] mm: BPF OOM Roman Gushchin
2025-04-28  3:36 ` [PATCH rfc 01/12] mm: introduce a bpf hook for OOM handling Roman Gushchin
2025-04-28  3:36 ` [PATCH rfc 02/12] bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL Roman Gushchin
2025-04-28  3:36 ` [PATCH rfc 03/12] bpf: treat fmodret tracing program's arguments as trusted Roman Gushchin
2025-04-28  3:36 ` [PATCH rfc 04/12] mm: introduce bpf_oom_kill_process() bpf kfunc Roman Gushchin
2025-04-28  3:36 ` [PATCH rfc 05/12] mm: introduce bpf kfuncs to deal with memcg pointers Roman Gushchin
2025-04-28  3:36 ` [PATCH rfc 06/12] mm: introduce bpf_get_root_mem_cgroup() bpf kfunc Roman Gushchin
2025-04-28  3:36 ` [PATCH rfc 07/12] bpf: selftests: introduce read_cgroup_file() helper Roman Gushchin
2025-04-28  3:36 ` [PATCH rfc 08/12] bpf: selftests: bpf OOM handler test Roman Gushchin
2025-04-28  3:36 ` [PATCH rfc 09/12] sched: psi: bpf hook to handle psi events Roman Gushchin
2025-04-28  6:11   ` kernel test robot
2025-04-30  0:28   ` Suren Baghdasaryan
2025-04-30  0:58     ` Roman Gushchin
2025-04-28  3:36 ` [PATCH rfc 10/12] mm: introduce bpf_out_of_memory() bpf kfunc Roman Gushchin
2025-04-29 11:46   ` Michal Hocko
2025-04-29 21:31     ` Roman Gushchin
2025-04-30  7:27       ` Michal Hocko
2025-04-30 14:53         ` Roman Gushchin [this message]
2025-05-05  8:08           ` Michal Hocko
2025-04-28  3:36 ` [PATCH rfc 11/12] bpf: selftests: introduce open_cgroup_file() helper Roman Gushchin
2025-04-28  3:36 ` [PATCH rfc 12/12] bpf: selftests: psi handler test Roman Gushchin
2025-04-28 10:43 ` [PATCH rfc 00/12] mm: BPF OOM Matt Bobrowski
2025-04-28 17:24   ` Roman Gushchin
2025-04-29  1:56     ` Kumar Kartikeya Dwivedi
2025-04-29 15:42       ` Roman Gushchin
2025-05-02 17:26       ` Song Liu
2025-04-29 11:42 ` Michal Hocko
2025-04-29 14:44   ` Roman Gushchin
2025-04-29 21:56     ` Suren Baghdasaryan
2025-04-29 22:17       ` Roman Gushchin
2025-04-29 23:01     ` Suren Baghdasaryan
2025-04-29 22:44 ` Suren Baghdasaryan
2025-04-29 23:01   ` Roman Gushchin
