Date: Tue, 29 Apr 2025 15:42:16 +0000
From: Roman Gushchin <roman.gushchin@linux.dev>
To: Kumar Kartikeya Dwivedi
Cc: Matt Bobrowski, linux-kernel@vger.kernel.org, Andrew Morton,
 Alexei Starovoitov, Johannes Weiner, Michal Hocko, Shakeel Butt,
 Suren Baghdasaryan, David Rientjes, Josh Don, Chuyi Zhou,
 cgroups@vger.kernel.org, linux-mm@kvack.org, bpf@vger.kernel.org
Subject: Re: [PATCH rfc 00/12] mm: BPF OOM
References: <20250428033617.3797686-1-roman.gushchin@linux.dev>

On Tue, Apr 29, 2025 at 03:56:54AM +0200, Kumar Kartikeya Dwivedi wrote:
> On Mon, 28 Apr 2025 at 19:24, Roman Gushchin wrote:
> >
> > On Mon, Apr 28, 2025 at 10:43:07AM +0000, Matt Bobrowski wrote:
> > > On Mon, Apr 28, 2025 at 03:36:05AM +0000, Roman Gushchin wrote:
> > > > This patchset adds the ability to customize the out-of-memory
> > > > handling using bpf.
> > > >
> > > > It focuses on two parts:
> > > > 1) OOM handling policy,
> > > > 2) PSI-based OOM invocation.
> > > >
> > > > The idea to use bpf for customizing the OOM handling is not new, but
> > > > unlike the previous proposal [1], which augmented the existing task
> > > > ranking-based policy, this one tries to be as generic as possible and
> > > > leverage the full power of modern bpf.
> > > >
> > > > It provides a generic hook which is called before the existing OOM
> > > > killer code and allows implementing any policy, e.g. picking a victim
> > > > task or memory cgroup, or potentially even releasing memory in other
> > > > ways, e.g. deleting tmpfs files (the last one might require some
> > > > additional but relatively simple changes).
> > > >
> > > > The past attempt to implement a memory-cgroup-aware policy [2] showed
> > > > that there are multiple opinions on what the best policy is. As it's
> > > > highly workload-dependent and specific to a concrete way of organizing
> > > > workloads, the structure of the cgroup tree, etc., a customizable
> > > > bpf-based implementation is preferable to an in-kernel implementation
> > > > with a dozen sysctls.
> > > >
> > > > The second part is related to the fundamental question of when to
> > > > declare the OOM event.
> > > > It's a trade-off between the risk of unnecessary OOM kills and
> > > > associated work losses and the risk of infinite thrashing and
> > > > effective soft lockups. In the last few years several PSI-based
> > > > userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]).
> > > > The common idea was to use userspace daemons to implement custom OOM
> > > > logic as well as rely on PSI monitoring to avoid stalls. In this
> > > > scenario the userspace daemon was supposed to handle the majority of
> > > > OOMs, while the in-kernel OOM killer worked as the last-resort measure
> > > > to guarantee that the system would never deadlock on memory. But this
> > > > approach creates additional infrastructure churn: a userspace OOM
> > > > daemon is a separate entity which needs to be deployed, updated and
> > > > monitored. A completely different pipeline needs to be built to
> > > > monitor both types of OOM events and collect associated logs. A
> > > > userspace daemon is more restricted in terms of what data is
> > > > available to it. Implementing a daemon which can work reliably under
> > > > heavy memory pressure in the system is also tricky.
> > > >
> > > > [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/
> > > > [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/
> > > > [3]: https://github.com/facebookincubator/oomd
> > > > [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html
> > > >
> > > > ----
> > > >
> > > > This is an RFC version, which is not intended to be merged in the
> > > > current form.
> > > > Open questions/TODOs:
> > > > 1) Program type/attachment type for the bpf_handle_out_of_memory() hook.
> > > > It has to be able to return a value, to be sleepable (to use cgroup
> > > > iterators) and to have trusted arguments to pass oom_control down to
> > > > bpf_oom_kill_process(). The current patchset has a workaround (patch
> > > > "bpf: treat fmodret tracing program's arguments as trusted"), which is
> > > > not safe. One option is to fake acquire/release semantics for the
> > > > oom_control pointer. The other option is to introduce a completely new
> > > > attachment or program type, similar to LSM hooks.
> > >
> > > Thinking out loud now, but rather than introducing a single
> > > BPF-specific function/interface (and BPF program, for that matter)
> > > which can effectively be used to short-circuit steps from within
> > > out_of_memory(), why not introduce a
> > > tcp_congestion_ops/sched_ext_ops-like interface which essentially
> > > provides a multifaceted interface for controlling OOM killing
> > > (->select_bad_process, ->oom_kill_process, etc.), optionally also from
> > > the context of a BPF program (BPF_PROG_TYPE_STRUCT_OPS)?
> >
> > It's certainly an option and I thought about it. I don't think we need
> > a bunch of hooks though. This patchset adds two, and they belong to
> > completely different subsystems (mm and sched/psi), so Idk how well
> > they can be gathered into a single struct ops. But maybe it's fine.
> >
> > The only potentially new hook I can envision now is one to customize
> > the oom reporting.
>
> If you're considering scoping it down to a particular cgroup (as you
> allude to in the TODO), or building a hierarchical interface, using
> struct_ops will be much better than fmod_ret etc., which is global in
> nature. Even if you don't support it now, I don't think a struct_ops
> is warranted only when you have more than a few callbacks.
> As an illustration, sched_ext started out without supporting
> hierarchical attachment, but will piggy-back on the struct_ops
> interface to do so in the near future.

Good point! I agree.

Thanks
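
For illustration only, here is a rough, purely hypothetical sketch of what the struct_ops direction discussed above could look like on the BPF side, written with libbpf in the style of existing struct_ops users (bpf tcp congestion control, sched_ext). The bpf_oom_ops type, its callback names and its return convention are invented for this sketch; only struct oom_control and the bpf_oom_kill_process() kfunc referenced in the comments come from the discussion above.

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical sketch: no bpf_oom_ops struct_ops exists in the kernel today.
 * The ops layout below is invented for illustration, loosely modeled on how
 * tcp_congestion_ops and sched_ext_ops are implemented as BPF struct_ops.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* Hypothetical ops table a BPF OOM framework might expose. */
struct bpf_oom_ops {
	/* Return non-zero if the OOM was handled (memory freed or a victim
	 * killed), 0 to fall back to the in-kernel OOM killer. */
	int (*handle_out_of_memory)(struct oom_control *oc);
	/* Optional hook to customize OOM reporting. */
	void (*report_oom)(struct oom_control *oc);
	char name[16];
};

/* Sleepable struct_ops program, since the cover letter notes the handler
 * needs to sleep in order to use cgroup iterators. */
SEC("struct_ops.s/handle_out_of_memory")
int BPF_PROG(pick_victim, struct oom_control *oc)
{
	/* A real policy would walk the relevant cgroup subtree here, choose a
	 * victim memcg or task, and kill it via a trusted kfunc such as the
	 * bpf_oom_kill_process() mentioned in the cover letter. */
	return 0; /* not handled: let the kernel OOM killer run */
}

SEC(".struct_ops.link")
struct bpf_oom_ops sample_oom_policy = {
	.handle_out_of_memory = (void *)pick_victim,
	.name = "sample_policy",
};

Scoping such an ops table to a memory cgroup (the hierarchical attachment Kumar mentions) could then become a property of how the struct_ops map is attached, rather than something every callback has to re-implement.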