linux-mm.kvack.org archive mirror
* [PATCH v2 0/7] static key support for error injection functions
@ 2024-06-19 22:48 Vlastimil Babka
  2024-06-19 22:48 ` [PATCH v2 1/7] fault-inject: add support for static keys around fault injection sites Vlastimil Babka
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Vlastimil Babka @ 2024-06-19 22:48 UTC (permalink / raw)
  To: Akinobu Mita, Christoph Lameter, David Rientjes,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Steven Rostedt, Mark Rutland
  Cc: Jiri Olsa, Roman Gushchin, Hyeonggon Yoo, linux-kernel, linux-mm,
	bpf, linux-trace-kernel, Vlastimil Babka

This should now be complete, but bpf programs attached to perf events
are untested (Patch 3).
The series is spread across several subsystems, but the easiest way to
merge it would be through a single tree, such as the mm tree.

As previously mentioned by myself [1] and others [2], the functions
designed for error injection can bring visible overhead in fast paths
such as slab or page allocation, because even if nothing hooks into them
at a given moment, they are noninline function calls regardless of
CONFIG_ options, since commits 4f6923fbb352 ("mm: make should_failslab
always available for fault injection") and af3b854492f3
("mm/page_alloc.c: allow error injection").

Live patching their callsites has also been suggested in both the [1]
and [2] threads, and this is an attempt to do that with static keys
that guard the call sites. When the key is disabled, the error
injection functions still exist and are noinline, but are not called.
Any of the existing mechanisms that can inject errors should make sure
to enable the respective static key; I have added that support to,
hopefully, all of the mechanisms that can be used today.
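
For illustration, the callsite guard used for the slab case in Patch 6
boils down to roughly the following (a simplified sketch of that patch):

    DEFINE_STATIC_KEY_FALSE(should_failslab_active);

    static __always_inline int should_failslab_wrapped(struct kmem_cache *s,
                                                       gfp_t gfp)
    {
            if (static_branch_unlikely(&should_failslab_active))
                    return should_failslab(s, gfp); /* noinline, injectable */
            return 0;
    }

    /* in slab_pre_alloc_hook(): */
    if (should_failslab_wrapped(s, flags))
            return NULL;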

- the legacy fault injection, i.e. CONFIG_FAILSLAB and
  CONFIG_FAIL_PAGE_ALLOC is handled in Patch 1, and can be passed the
  address of the static key if it exists. The key will be activated if the
  fault injection probability becomes non-zero, and deactivated in the
  opposite transition. This also removes the overhead of evaluating the
  fault attributes (on top of the noninline function call) when these
  mechanisms are configured in the kernel but currently unused. (How the
  key is wired up for each framework is sketched below, after this list.)

- the generic error injection using kretprobes with
  override_function_with_return is handled in Patch 2. The
  ALLOW_ERROR_INJECTION() annotation is extended so that a static key
  address can be passed, and the framework controls the key when error
  injection is enabled or disabled in debugfs for the function.

- bpf programs with prog->kprobe_override=1 can override return values
  of probed functions when CONFIG_BPF_KPROBE_OVERRIDE=y. They can be
  attached to a perf_event, which is handled in Patch 3, or via
  multi_link_attach, which is handled in Patch 4. I have tested the
  latter using a modified bcc program from the description of commit
  4f6923fbb352, but not Patch 3 using a perf_event - testing is welcome.

- ftrace seems to wrap override_function_with_return in a
  #define ftrace_override_function_with_return, but there appear to be
  no users, which was confirmed by Mark Rutland in the RFC thread.
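
As a concrete example of wiring up the key, the slab case (Patches 1, 2
and 6) passes the key's address at both annotation sites:

    /* legacy fault injection: mm/failslab.c ties the key to the fault_attr */
    .attr = FAULT_ATTR_INITIALIZER_KEY(&should_failslab_active.key),

    /* error injection and bpf: mm/slub.c ties the key to the annotation */
    ALLOW_ERROR_INJECTION_KEY(should_failslab, ERRNO, &should_failslab_active);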

If anyone was crazy enough to use multiple of the mechanisms above
simultaneously, the use of static_key_slow_inc/dec will do the right
thing and the key will be enabled iff at least one mechanism is active.
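
In other words, each mechanism takes a reference on the key; a minimal
sketch with two of the above mechanisms active at once:

    static_key_slow_inc(key);  /* failslab probability set to non-zero */
    static_key_slow_inc(key);  /* bpf prog with kprobe_override attached */
    /* guarded callsites now call the injectable function */
    static_key_slow_dec(key);  /* bpf prog detached - key stays enabled */
    static_key_slow_dec(key);  /* probability set back to zero - key disabled */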

In addition to the static key support, Patch 5 makes it possible to
stop making the fault injection functions noninline with
CONFIG_FUNCTION_ERROR_INJECTION=n, by compiling out the BTF_ID()
references for bpf_non_sleepable_error_inject, which are unnecessary in
that case.

Patches 6 and 7 implement the static keys for the two mm fault
injection sites in the slab and page allocators. I have measured the
improvement for the slab case, as described in Patch 6:

    To demonstrate the reduced overhead of calling an empty
    should_failslab() function, a kernel build with
    CONFIG_FUNCTION_ERROR_INJECTION enabled but CONFIG_FAILSLAB disabled,
    and CPU mitigations enabled, was used in a qemu-kvm (virtme-ng) on AMD
    Ryzen 7 2700 machine, and execution of a program trying to open() a
    non-existent file was measured 3 times:

        for (int i = 0; i < 10000000; i++) {
            open("non_existent", O_RDONLY);
        }

    After this patch, the measured real time was 4.3% smaller. Using perf
    profiling it was verified that should_failslab was gone from the
    profile.

    With CONFIG_FAILSLAB also enabled, the patched kernel's performance was
    unaffected, as expected, while the unpatched kernel's performance was
    worse, resulting in a relative speedup of 10.5%. This means it no longer
    needs to be an option suitable only for debug kernel builds.

There might be other such fault injection callsites in hot paths of
other subsystems, but I didn't search for them at this point. With all
the preparations in place, it should be simple to improve them now.

FAQ:

Q: Does this improve only config options nobody uses in production
anyway?

A: No, the error injection hooks are unconditionally noninline functions
even if they are empty. CONFIG_FUNCTION_ERROR_INJECTION=y is probably
rather common, with overrides done via bpf. The goal was to eliminate
this unnecessary overhead. As a secondary benefit, the legacy fault
injection options can now also be enabled in production kernels without
extra overhead.

Q: Should we remove the legacy fault injection framework?

A: Maybe? I didn't want to wait for that to happen, so it's just handled
as well (Patch 1). The generic error injection handling and bpf needed
the most effort anyway.

Q: Should there be a unified way to register the kprobes that override
return values, that would also handle the static key control?

A: Possibly, but I'm not familiar enough with the area to do that. I
found every case handled by patches 2-4 to be so different that I just
modified them all separately. If a unification comes later, it should
not change most of what's done by this patchset.

[1] https://lore.kernel.org/6d5bb852-8703-4abf-a52b-90816bccbd7f@suse.cz/
[2] https://lore.kernel.org/3j5d3p22ssv7xoaghzraa7crcfih3h2qqjlhmjppbp6f42pg2t@kg7qoicog5ye/

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
Changes in v2:
- Add error injection static key control for bpf programs with
  kprobe_override.
- Add separate get_injection_key() for querying (Masami Hiramatsu)
- Compile everything out with CONFIG_FUNCTION_ERROR_INJECTION=n
- Link to v1: https://lore.kernel.org/r/20240531-fault-injection-statickeys-v1-0-a513fd0a9614@suse.cz

---
Vlastimil Babka (7):
      fault-inject: add support for static keys around fault injection sites
      error-injection: support static keys around injectable functions
      bpf: support error injection static keys for perf_event attached progs
      bpf: support error injection static keys for multi_link attached progs
      bpf: do not create bpf_non_sleepable_error_inject list when unnecessary
      mm, slab: add static key for should_failslab()
      mm, page_alloc: add static key for should_fail_alloc_page()

 include/asm-generic/error-injection.h | 13 ++++++-
 include/asm-generic/vmlinux.lds.h     |  2 +-
 include/linux/error-injection.h       | 12 +++++--
 include/linux/fault-inject.h          | 14 ++++++--
 kernel/bpf/verifier.c                 | 15 ++++++++
 kernel/fail_function.c                | 10 ++++++
 kernel/trace/bpf_trace.c              | 65 +++++++++++++++++++++++++++++++----
 kernel/trace/trace_kprobe.c           | 30 ++++++++++++++--
 kernel/trace/trace_probe.h            |  5 +++
 lib/error-inject.c                    | 19 ++++++++++
 lib/fault-inject.c                    | 43 ++++++++++++++++++++++-
 mm/fail_page_alloc.c                  |  3 +-
 mm/failslab.c                         |  2 +-
 mm/internal.h                         |  2 ++
 mm/page_alloc.c                       | 30 ++++++++++++++--
 mm/slab.h                             |  3 ++
 mm/slub.c                             | 30 ++++++++++++++--
 17 files changed, 274 insertions(+), 24 deletions(-)
---
base-commit: 1613e604df0cd359cf2a7fbd9be7a0bcfacfabd0
change-id: 20240530-fault-injection-statickeys-66b7222e91b7

Best regards,
-- 
Vlastimil Babka <vbabka@suse.cz>




* [PATCH v2 1/7] fault-inject: add support for static keys around fault injection sites
  2024-06-19 22:48 [PATCH v2 0/7] static key support for error injection functions Vlastimil Babka
@ 2024-06-19 22:48 ` Vlastimil Babka
  2024-06-25 14:08   ` Steven Rostedt
  2024-06-19 22:48 ` [PATCH v2 2/7] error-injection: support static keys around injectable functions Vlastimil Babka
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Vlastimil Babka @ 2024-06-19 22:48 UTC (permalink / raw)
  To: Akinobu Mita, Christoph Lameter, David Rientjes,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Steven Rostedt, Mark Rutland
  Cc: Jiri Olsa, Roman Gushchin, Hyeonggon Yoo, linux-kernel, linux-mm,
	bpf, linux-trace-kernel, Vlastimil Babka

Some fault injection sites are placed in hotpaths and incur overhead
even if not enabled, due to one or more function calls leading up to
should_fail_ex() that returns false due to attr->probability == 0.

This overhead can be eliminated if the outermost call into the checks is
guarded with a static key, so add support for that. The framework should
be told that such static key exist for a fault_attr, by initializing
fault_attr->active with the static key address. When it's not NULL,
enable the static key from setup_fault_attr() when the fault probability
is non-zero.

Also wire up writing into debugfs "probability" file to enable or
disable the static key when transitioning between zero and non-zero
probability.

For now, do not add configfs interface support as the immediate plan is
to leverage this for should_failslab() and should_fail_alloc_page()
after other necessary preparatory changes, and not for any of the
configfs based fault injection users.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/fault-inject.h |  7 ++++++-
 lib/fault-inject.c           | 43 ++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/include/linux/fault-inject.h b/include/linux/fault-inject.h
index 6d5edef09d45..cfe75cc1bac4 100644
--- a/include/linux/fault-inject.h
+++ b/include/linux/fault-inject.h
@@ -9,6 +9,7 @@
 #include <linux/configfs.h>
 #include <linux/ratelimit.h>
 #include <linux/atomic.h>
+#include <linux/jump_label.h>
 
 /*
  * For explanation of the elements of this struct, see
@@ -30,13 +31,14 @@ struct fault_attr {
 	unsigned long count;
 	struct ratelimit_state ratelimit_state;
 	struct dentry *dname;
+	struct static_key *active;
 };
 
 enum fault_flags {
 	FAULT_NOWARN =	1 << 0,
 };
 
-#define FAULT_ATTR_INITIALIZER {					\
+#define FAULT_ATTR_INITIALIZER_KEY(_key) {				\
 		.interval = 1,						\
 		.times = ATOMIC_INIT(1),				\
 		.require_end = ULONG_MAX,				\
@@ -44,8 +46,11 @@ enum fault_flags {
 		.ratelimit_state = RATELIMIT_STATE_INIT_DISABLED,	\
 		.verbose = 2,						\
 		.dname = NULL,						\
+		.active = (_key),					\
 	}
 
+#define FAULT_ATTR_INITIALIZER		FAULT_ATTR_INITIALIZER_KEY(NULL)
+
 #define DECLARE_FAULT_ATTR(name) struct fault_attr name = FAULT_ATTR_INITIALIZER
 int setup_fault_attr(struct fault_attr *attr, char *str);
 bool should_fail_ex(struct fault_attr *attr, ssize_t size, int flags);
diff --git a/lib/fault-inject.c b/lib/fault-inject.c
index d608f9b48c10..de9552cb22d0 100644
--- a/lib/fault-inject.c
+++ b/lib/fault-inject.c
@@ -35,6 +35,9 @@ int setup_fault_attr(struct fault_attr *attr, char *str)
 	atomic_set(&attr->times, times);
 	atomic_set(&attr->space, space);
 
+	if (probability != 0 && attr->active)
+		static_key_slow_inc(attr->active);
+
 	return 1;
 }
 EXPORT_SYMBOL_GPL(setup_fault_attr);
@@ -166,6 +169,12 @@ EXPORT_SYMBOL_GPL(should_fail);
 
 #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
 
+/*
+ * Protect updating probability from debugfs as that may trigger static key
+ * changes when changing between zero and non-zero.
+ */
+static DEFINE_MUTEX(probability_mutex);
+
 static int debugfs_ul_set(void *data, u64 val)
 {
 	*(unsigned long *)data = val;
@@ -186,6 +195,38 @@ static void debugfs_create_ul(const char *name, umode_t mode,
 	debugfs_create_file(name, mode, parent, value, &fops_ul);
 }
 
+static int debugfs_prob_set(void *data, u64 val)
+{
+	struct fault_attr *attr = data;
+
+	mutex_lock(&probability_mutex);
+
+	if (attr->active) {
+		if (attr->probability != 0 && val == 0) {
+			static_key_slow_dec(attr->active);
+		} else if (attr->probability == 0 && val != 0) {
+			static_key_slow_inc(attr->active);
+		}
+	}
+
+	attr->probability = val;
+
+	mutex_unlock(&probability_mutex);
+
+	return 0;
+}
+
+static int debugfs_prob_get(void *data, u64 *val)
+{
+	struct fault_attr *attr = data;
+
+	*val = attr->probability;
+
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_prob, debugfs_prob_get, debugfs_prob_set, "%llu\n");
+
 #ifdef CONFIG_FAULT_INJECTION_STACKTRACE_FILTER
 
 static int debugfs_stacktrace_depth_set(void *data, u64 val)
@@ -218,7 +259,7 @@ struct dentry *fault_create_debugfs_attr(const char *name,
 	if (IS_ERR(dir))
 		return dir;
 
-	debugfs_create_ul("probability", mode, dir, &attr->probability);
+	debugfs_create_file("probability", mode, dir, attr, &fops_prob);
 	debugfs_create_ul("interval", mode, dir, &attr->interval);
 	debugfs_create_atomic_t("times", mode, dir, &attr->times);
 	debugfs_create_atomic_t("space", mode, dir, &attr->space);

-- 
2.45.2




* [PATCH v2 2/7] error-injection: support static keys around injectable functions
  2024-06-19 22:48 [PATCH v2 0/7] static key support for error injection functions Vlastimil Babka
  2024-06-19 22:48 ` [PATCH v2 1/7] fault-inject: add support for static keys around fault injection sites Vlastimil Babka
@ 2024-06-19 22:48 ` Vlastimil Babka
  2024-06-25 14:41   ` Steven Rostedt
  2024-06-19 22:48 ` [PATCH v2 3/7] bpf: support error injection static keys for perf_event attached progs Vlastimil Babka
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Vlastimil Babka @ 2024-06-19 22:48 UTC (permalink / raw)
  To: Akinobu Mita, Christoph Lameter, David Rientjes,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Steven Rostedt, Mark Rutland
  Cc: Jiri Olsa, Roman Gushchin, Hyeonggon Yoo, linux-kernel, linux-mm,
	bpf, linux-trace-kernel, Vlastimil Babka

Error-injectable functions cannot be inlined, and since some are called
from hot paths, this incurs overhead even when no error injection is
enabled for them.

To avoid this overhead when disabled, allow the callsites of error
injectable functions to put the calls behind a static key, which the
framework can control when error injection is enabled or disabled for
the function.

Introduce a new ALLOW_ERROR_INJECTION_KEY() macro that adds a parameter
with the static key's address, and store it in struct
error_injection_entry. This new field has caused a mismatch when
populating the injection list from the _error_injection_whitelist
section using the current STRUCT_ALIGN(), so change the alignment to 8.

During the population, also copy the key's address to struct ei_entry,
and make it possible to retrieve it via get_injection_key().

Finally, make the processing of writes to the debugfs inject file enable
the static key when the function is added to the injection list, and
disable it when the function is removed.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/asm-generic/error-injection.h | 13 ++++++++++++-
 include/asm-generic/vmlinux.lds.h     |  2 +-
 include/linux/error-injection.h       | 12 ++++++++++--
 kernel/fail_function.c                | 10 ++++++++++
 lib/error-inject.c                    | 19 +++++++++++++++++++
 5 files changed, 52 insertions(+), 4 deletions(-)

diff --git a/include/asm-generic/error-injection.h b/include/asm-generic/error-injection.h
index b05253f68eaa..eed2731f3820 100644
--- a/include/asm-generic/error-injection.h
+++ b/include/asm-generic/error-injection.h
@@ -12,6 +12,7 @@ enum {
 
 struct error_injection_entry {
 	unsigned long	addr;
+	unsigned long	static_key_addr;
 	int		etype;
 };
 
@@ -25,16 +26,26 @@ struct pt_regs;
  * 'Error Injectable Functions' section.
  */
 #define ALLOW_ERROR_INJECTION(fname, _etype)				\
-static struct error_injection_entry __used				\
+static struct error_injection_entry __used __aligned(8)			\
 	__section("_error_injection_whitelist")				\
 	_eil_addr_##fname = {						\
 		.addr = (unsigned long)fname,				\
 		.etype = EI_ETYPE_##_etype,				\
 	}
 
+#define ALLOW_ERROR_INJECTION_KEY(fname, _etype, key)			\
+static struct error_injection_entry __used __aligned(8)			\
+	__section("_error_injection_whitelist")				\
+	_eil_addr_##fname = {						\
+		.addr = (unsigned long)fname,				\
+		.static_key_addr = (unsigned long)key,			\
+		.etype = EI_ETYPE_##_etype,				\
+	}
+
 void override_function_with_return(struct pt_regs *regs);
 #else
 #define ALLOW_ERROR_INJECTION(fname, _etype)
+#define ALLOW_ERROR_INJECTION_KEY(fname, _etype, key)
 
 static inline void override_function_with_return(struct pt_regs *regs) { }
 #endif
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 5703526d6ebf..1b15a0af2a00 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -248,7 +248,7 @@
 
 #ifdef CONFIG_FUNCTION_ERROR_INJECTION
 #define ERROR_INJECT_WHITELIST()			\
-	STRUCT_ALIGN();					\
+	. = ALIGN(8);					\
 	BOUNDED_SECTION(_error_injection_whitelist)
 #else
 #define ERROR_INJECT_WHITELIST()
diff --git a/include/linux/error-injection.h b/include/linux/error-injection.h
index 20e738f4eae8..48da027c0302 100644
--- a/include/linux/error-injection.h
+++ b/include/linux/error-injection.h
@@ -6,10 +6,13 @@
 #include <linux/errno.h>
 #include <asm-generic/error-injection.h>
 
+struct static_key;
+
 #ifdef CONFIG_FUNCTION_ERROR_INJECTION
 
-extern bool within_error_injection_list(unsigned long addr);
-extern int get_injectable_error_type(unsigned long addr);
+bool within_error_injection_list(unsigned long addr);
+int get_injectable_error_type(unsigned long addr);
+struct static_key *get_injection_key(unsigned long addr);
 
 #else /* !CONFIG_FUNCTION_ERROR_INJECTION */
 
@@ -23,6 +26,11 @@ static inline int get_injectable_error_type(unsigned long addr)
 	return -EOPNOTSUPP;
 }
 
+static inline struct static_key *get_injection_key(unsigned long addr)
+{
+	return NULL;
+}
+
 #endif
 
 #endif /* _LINUX_ERROR_INJECTION_H */
diff --git a/kernel/fail_function.c b/kernel/fail_function.c
index d971a0189319..d39a9606a448 100644
--- a/kernel/fail_function.c
+++ b/kernel/fail_function.c
@@ -27,6 +27,7 @@ struct fei_attr {
 	struct list_head list;
 	struct kprobe kp;
 	unsigned long retval;
+	struct static_key *key;
 };
 static DEFINE_MUTEX(fei_lock);
 static LIST_HEAD(fei_attr_list);
@@ -67,6 +68,11 @@ static struct fei_attr *fei_attr_new(const char *sym, unsigned long addr)
 		attr->kp.pre_handler = fei_kprobe_handler;
 		attr->kp.post_handler = fei_post_handler;
 		attr->retval = adjust_error_retval(addr, 0);
+
+		attr->key = get_injection_key(addr);
+		if (IS_ERR(attr->key))
+			attr->key = NULL;
+
 		INIT_LIST_HEAD(&attr->list);
 	}
 	return attr;
@@ -218,6 +224,8 @@ static int fei_open(struct inode *inode, struct file *file)
 
 static void fei_attr_remove(struct fei_attr *attr)
 {
+	if (attr->key)
+		static_key_slow_dec(attr->key);
 	fei_debugfs_remove_attr(attr);
 	unregister_kprobe(&attr->kp);
 	list_del(&attr->list);
@@ -295,6 +303,8 @@ static ssize_t fei_write(struct file *file, const char __user *buffer,
 		fei_attr_free(attr);
 		goto out;
 	}
+	if (attr->key)
+		static_key_slow_inc(attr->key);
 	fei_debugfs_add_attr(attr);
 	list_add_tail(&attr->list, &fei_attr_list);
 	ret = count;
diff --git a/lib/error-inject.c b/lib/error-inject.c
index 887acd9a6ea6..982fbedd9ad5 100644
--- a/lib/error-inject.c
+++ b/lib/error-inject.c
@@ -17,6 +17,7 @@ struct ei_entry {
 	struct list_head list;
 	unsigned long start_addr;
 	unsigned long end_addr;
+	struct static_key *key;
 	int etype;
 	void *priv;
 };
@@ -54,6 +55,23 @@ int get_injectable_error_type(unsigned long addr)
 	return ei_type;
 }
 
+struct static_key *get_injection_key(unsigned long addr)
+{
+	struct ei_entry *ent;
+	struct static_key *ei_key = ERR_PTR(-EINVAL);
+
+	mutex_lock(&ei_mutex);
+	list_for_each_entry(ent, &error_injection_list, list) {
+		if (addr >= ent->start_addr && addr < ent->end_addr) {
+			ei_key = ent->key;
+			break;
+		}
+	}
+	mutex_unlock(&ei_mutex);
+
+	return ei_key;
+}
+
 /*
  * Lookup and populate the error_injection_list.
  *
@@ -86,6 +104,7 @@ static void populate_error_injection_list(struct error_injection_entry *start,
 		ent->start_addr = entry;
 		ent->end_addr = entry + size;
 		ent->etype = iter->etype;
+		ent->key = (struct static_key *) iter->static_key_addr;
 		ent->priv = priv;
 		INIT_LIST_HEAD(&ent->list);
 		list_add_tail(&ent->list, &error_injection_list);

-- 
2.45.2




* [PATCH v2 3/7] bpf: support error injection static keys for perf_event attached progs
  2024-06-19 22:48 [PATCH v2 0/7] static key support for error injection functions Vlastimil Babka
  2024-06-19 22:48 ` [PATCH v2 1/7] fault-inject: add support for static keys around fault injection sites Vlastimil Babka
  2024-06-19 22:48 ` [PATCH v2 2/7] error-injection: support static keys around injectable functions Vlastimil Babka
@ 2024-06-19 22:48 ` Vlastimil Babka
  2024-06-19 22:48 ` [PATCH v2 4/7] bpf: support error injection static keys for multi_link " Vlastimil Babka
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Vlastimil Babka @ 2024-06-19 22:48 UTC (permalink / raw)
  To: Akinobu Mita, Christoph Lameter, David Rientjes,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Steven Rostedt, Mark Rutland
  Cc: Jiri Olsa, Roman Gushchin, Hyeonggon Yoo, linux-kernel, linux-mm,
	bpf, linux-trace-kernel, Vlastimil Babka

Functions marked for error injection can have an associated static key
that guards the callsite(s) to avoid overhead of calling an empty
function when no error injection is in progress.

Outside of the error injection framework itself, bpf programs can be
attached to perf events and override results of error-injectable
functions. To make sure these functions are actually called, attaching
such bpf programs should control the static key accordingly.

Therefore, add the static key's address to struct trace_kprobe and fill
it in trace_kprobe_error_injectable(), using get_injection_key() instead
of within_error_injection_list(). Introduce
trace_kprobe_error_injection_control() to control the static key and
call the control function when attaching or detaching programs with
kprobe_override to perf events.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 kernel/trace/bpf_trace.c    |  6 ++++++
 kernel/trace/trace_kprobe.c | 30 ++++++++++++++++++++++++++++--
 kernel/trace/trace_probe.h  |  5 +++++
 3 files changed, 39 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index f5154c051d2c..944de1c41209 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2283,6 +2283,9 @@ int perf_event_attach_bpf_prog(struct perf_event *event,
 	rcu_assign_pointer(event->tp_event->prog_array, new_array);
 	bpf_prog_array_free_sleepable(old_array);
 
+	if (prog->kprobe_override)
+		trace_kprobe_error_injection_control(event->tp_event, true);
+
 unlock:
 	mutex_unlock(&bpf_event_mutex);
 	return ret;
@@ -2299,6 +2302,9 @@ void perf_event_detach_bpf_prog(struct perf_event *event)
 	if (!event->prog)
 		goto unlock;
 
+	if (event->prog->kprobe_override)
+		trace_kprobe_error_injection_control(event->tp_event, false);
+
 	old_array = bpf_event_rcu_dereference(event->tp_event->prog_array);
 	ret = bpf_prog_array_copy(old_array, event->prog, NULL, 0, &new_array);
 	if (ret == -ENOENT)
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 16383247bdbf..1c1ee95bd5de 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -61,6 +61,7 @@ struct trace_kprobe {
 	unsigned long __percpu *nhit;
 	const char		*symbol;	/* symbol name */
 	struct trace_probe	tp;
+	struct static_key	*ei_key;
 };
 
 static bool is_trace_kprobe(struct dyn_event *ev)
@@ -235,9 +236,34 @@ bool trace_kprobe_on_func_entry(struct trace_event_call *call)
 bool trace_kprobe_error_injectable(struct trace_event_call *call)
 {
 	struct trace_kprobe *tk = trace_kprobe_primary_from_call(call);
+	struct static_key *ei_key;
 
-	return tk ? within_error_injection_list(trace_kprobe_address(tk)) :
-	       false;
+	if (!tk)
+		return false;
+
+	ei_key = get_injection_key(trace_kprobe_address(tk));
+	if (IS_ERR(ei_key))
+		return false;
+
+	tk->ei_key = ei_key;
+	return true;
+}
+
+void trace_kprobe_error_injection_control(struct trace_event_call *call,
+					  bool enable)
+{
+	struct trace_kprobe *tk = trace_kprobe_primary_from_call(call);
+
+	if (!tk)
+		return;
+
+	if (!tk->ei_key)
+		return;
+
+	if (enable)
+		static_key_slow_inc(tk->ei_key);
+	else
+		static_key_slow_dec(tk->ei_key);
 }
 
 static int register_kprobe_event(struct trace_kprobe *tk);
diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
index 5803e6a41570..d9ddcabb9f97 100644
--- a/kernel/trace/trace_probe.h
+++ b/kernel/trace/trace_probe.h
@@ -212,6 +212,8 @@ DECLARE_BASIC_PRINT_TYPE_FUNC(symbol);
 #ifdef CONFIG_KPROBE_EVENTS
 bool trace_kprobe_on_func_entry(struct trace_event_call *call);
 bool trace_kprobe_error_injectable(struct trace_event_call *call);
+void trace_kprobe_error_injection_control(struct trace_event_call *call,
+					  bool enabled);
 #else
 static inline bool trace_kprobe_on_func_entry(struct trace_event_call *call)
 {
@@ -222,6 +224,9 @@ static inline bool trace_kprobe_error_injectable(struct trace_event_call *call)
 {
 	return false;
 }
+
+static inline void trace_kprobe_error_injection_control(struct trace_event_call *call,
+							bool enabled) { }
 #endif /* CONFIG_KPROBE_EVENTS */
 
 struct probe_arg {

-- 
2.45.2




* [PATCH v2 4/7] bpf: support error injection static keys for multi_link attached progs
  2024-06-19 22:48 [PATCH v2 0/7] static key support for error injection functions Vlastimil Babka
                   ` (2 preceding siblings ...)
  2024-06-19 22:48 ` [PATCH v2 3/7] bpf: support error injection static keys for perf_event attached progs Vlastimil Babka
@ 2024-06-19 22:48 ` Vlastimil Babka
  2024-06-19 22:48 ` [PATCH v2 5/7] bpf: do not create bpf_non_sleepable_error_inject list when unnecessary Vlastimil Babka
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Vlastimil Babka @ 2024-06-19 22:48 UTC (permalink / raw)
  To: Akinobu Mita, Christoph Lameter, David Rientjes,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Steven Rostedt, Mark Rutland
  Cc: Jiri Olsa, Roman Gushchin, Hyeonggon Yoo, linux-kernel, linux-mm,
	bpf, linux-trace-kernel, Vlastimil Babka

Functions marked for error injection can have an associated static key
that guards the callsite(s) to avoid overhead of calling an empty
function when no error injection is in progress.

Outside of the error injection framework itself, bpf programs can be
attached to kprobes and override results of error-injectable functions.
To make sure these functions are actually called, attaching such bpf
programs should control the static key accordingly.

Therefore, add an array of static keys to struct bpf_kprobe_multi_link
and fill it in addrs_check_error_injection_list() for programs with
kprobe_override enabled, using get_injection_key() instead of
within_error_injection_list(). Introduce bpf_kprobe_ei_keys_control() to
control the static keys and call the control function when doing
multi_link_attach and release.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 kernel/trace/bpf_trace.c | 59 +++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 53 insertions(+), 6 deletions(-)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 944de1c41209..ef0fadb76bfa 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2613,6 +2613,7 @@ struct bpf_kprobe_multi_link {
 	struct bpf_link link;
 	struct fprobe fp;
 	unsigned long *addrs;
+	struct static_key **ei_keys;
 	u64 *cookies;
 	u32 cnt;
 	u32 mods_cnt;
@@ -2687,11 +2688,30 @@ static void free_user_syms(struct user_syms *us)
 	kvfree(us->buf);
 }
 
+static void bpf_kprobe_ei_keys_control(struct bpf_kprobe_multi_link *link, bool enable)
+{
+	u32 i;
+
+	for (i = 0; i < link->cnt; i++) {
+		if (!link->ei_keys[i])
+			break;
+
+		if (enable)
+			static_key_slow_inc(link->ei_keys[i]);
+		else
+			static_key_slow_dec(link->ei_keys[i]);
+	}
+}
+
 static void bpf_kprobe_multi_link_release(struct bpf_link *link)
 {
 	struct bpf_kprobe_multi_link *kmulti_link;
 
 	kmulti_link = container_of(link, struct bpf_kprobe_multi_link, link);
+
+	if (kmulti_link->ei_keys)
+		bpf_kprobe_ei_keys_control(kmulti_link, false);
+
 	unregister_fprobe(&kmulti_link->fp);
 	kprobe_multi_put_modules(kmulti_link->mods, kmulti_link->mods_cnt);
 }
@@ -2703,6 +2723,7 @@ static void bpf_kprobe_multi_link_dealloc(struct bpf_link *link)
 	kmulti_link = container_of(link, struct bpf_kprobe_multi_link, link);
 	kvfree(kmulti_link->addrs);
 	kvfree(kmulti_link->cookies);
+	kvfree(kmulti_link->ei_keys);
 	kfree(kmulti_link->mods);
 	kfree(kmulti_link);
 }
@@ -2985,13 +3006,19 @@ static int get_modules_for_addrs(struct module ***mods, unsigned long *addrs, u3
 	return arr.mods_cnt;
 }
 
-static int addrs_check_error_injection_list(unsigned long *addrs, u32 cnt)
+static int addrs_check_error_injection_list(unsigned long *addrs, struct static_key **ei_keys,
+					    u32 cnt)
 {
-	u32 i;
+	struct static_key *ei_key;
+	u32 i, j = 0;
 
 	for (i = 0; i < cnt; i++) {
-		if (!within_error_injection_list(addrs[i]))
+		ei_key = get_injection_key(addrs[i]);
+		if (IS_ERR(ei_key))
 			return -EINVAL;
+
+		if (ei_key)
+			ei_keys[j++] = ei_key;
 	}
 	return 0;
 }
@@ -3000,6 +3027,7 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
 {
 	struct bpf_kprobe_multi_link *link = NULL;
 	struct bpf_link_primer link_primer;
+	struct static_key **ei_keys = NULL;
 	void __user *ucookies;
 	unsigned long *addrs;
 	u32 flags, cnt, size;
@@ -3075,9 +3103,24 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
 			goto error;
 	}
 
-	if (prog->kprobe_override && addrs_check_error_injection_list(addrs, cnt)) {
-		err = -EINVAL;
-		goto error;
+	if (prog->kprobe_override) {
+		ei_keys = kvcalloc(cnt, sizeof(*ei_keys), GFP_KERNEL);
+		if (!ei_keys) {
+			err = -ENOMEM;
+			goto error;
+		}
+
+		if (addrs_check_error_injection_list(addrs, ei_keys, cnt)) {
+			err = -EINVAL;
+			goto error;
+		}
+
+		if (ei_keys[0]) {
+			link->ei_keys = ei_keys;
+		} else {
+			kvfree(ei_keys);
+			ei_keys = NULL;
+		}
 	}
 
 	link = kzalloc(sizeof(*link), GFP_KERNEL);
@@ -3132,10 +3175,14 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
 		return err;
 	}
 
+	if (link->ei_keys)
+		bpf_kprobe_ei_keys_control(link, true);
+
 	return bpf_link_settle(&link_primer);
 
 error:
 	kfree(link);
+	kvfree(ei_keys);
 	kvfree(addrs);
 	kvfree(cookies);
 	return err;

-- 
2.45.2




* [PATCH v2 5/7] bpf: do not create bpf_non_sleepable_error_inject list when unnecessary
  2024-06-19 22:48 [PATCH v2 0/7] static key support for error injection functions Vlastimil Babka
                   ` (3 preceding siblings ...)
  2024-06-19 22:48 ` [PATCH v2 4/7] bpf: support error injection static keys for multi_link " Vlastimil Babka
@ 2024-06-19 22:48 ` Vlastimil Babka
  2024-06-20  1:18   ` Alexei Starovoitov
  2024-06-19 22:49 ` [PATCH v2 6/7] mm, slab: add static key for should_failslab() Vlastimil Babka
  2024-06-19 22:49 ` [PATCH v2 7/7] mm, page_alloc: add static key for should_fail_alloc_page() Vlastimil Babka
  6 siblings, 1 reply; 15+ messages in thread
From: Vlastimil Babka @ 2024-06-19 22:48 UTC (permalink / raw)
  To: Akinobu Mita, Christoph Lameter, David Rientjes,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Steven Rostedt, Mark Rutland
  Cc: Jiri Olsa, Roman Gushchin, Hyeonggon Yoo, linux-kernel, linux-mm,
	bpf, linux-trace-kernel, Vlastimil Babka

When CONFIG_FUNCTION_ERROR_INJECTION is disabled,
within_error_injection_list() will return false for any address, so the
check_non_sleepable_error_inject() denylist check is redundant. The
bpf_non_sleepable_error_inject list thus does not need to be constructed
at all, so #ifdef it out.

This will allow functions on the list to be inlined when
CONFIG_FUNCTION_ERROR_INJECTION is disabled, as there will be no BTF_ID()
references for them.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 kernel/bpf/verifier.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 77da1f438bec..5cd93de37d68 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -21044,6 +21044,8 @@ static int check_attach_modify_return(unsigned long addr, const char *func_name)
 	return -EINVAL;
 }
 
+#ifdef CONFIG_FUNCTION_ERROR_INJECTION
+
 /* list of non-sleepable functions that are otherwise on
  * ALLOW_ERROR_INJECTION list
  */
@@ -21061,6 +21063,19 @@ static int check_non_sleepable_error_inject(u32 btf_id)
 	return btf_id_set_contains(&btf_non_sleepable_error_inject, btf_id);
 }
 
+#else /* CONFIG_FUNCTION_ERROR_INJECTION */
+
+/*
+ * Pretend the denylist is empty, within_error_injection_list() will return
+ * false anyway.
+ */
+static int check_non_sleepable_error_inject(u32 btf_id)
+{
+	return 0;
+}
+
+#endif
+
 int bpf_check_attach_target(struct bpf_verifier_log *log,
 			    const struct bpf_prog *prog,
 			    const struct bpf_prog *tgt_prog,

-- 
2.45.2




* [PATCH v2 6/7] mm, slab: add static key for should_failslab()
  2024-06-19 22:48 [PATCH v2 0/7] static key support for error injection functions Vlastimil Babka
                   ` (4 preceding siblings ...)
  2024-06-19 22:48 ` [PATCH v2 5/7] bpf: do not create bpf_non_sleepable_error_inject list when unnecessary Vlastimil Babka
@ 2024-06-19 22:49 ` Vlastimil Babka
  2024-06-25 14:24   ` Vlastimil Babka
  2024-06-19 22:49 ` [PATCH v2 7/7] mm, page_alloc: add static key for should_fail_alloc_page() Vlastimil Babka
  6 siblings, 1 reply; 15+ messages in thread
From: Vlastimil Babka @ 2024-06-19 22:49 UTC (permalink / raw)
  To: Akinobu Mita, Christoph Lameter, David Rientjes,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Steven Rostedt, Mark Rutland
  Cc: Jiri Olsa, Roman Gushchin, Hyeonggon Yoo, linux-kernel, linux-mm,
	bpf, linux-trace-kernel, Vlastimil Babka

Since commit 4f6923fbb352 ("mm: make should_failslab always available for
fault injection") should_failslab() is unconditionally a noinline
function. This adds visible overhead to the slab allocation hotpath,
even if the function is empty. With CONFIG_FAILSLAB=y there's additional
overhead, even when the functionality is not activated by a boot
parameter or via debugfs.

The overhead can be eliminated with a static key around the callsite.
The fault injection and error injection frameworks, including bpf, can
now be told that this function has an associated static key, and are
able to enable and disable it accordingly.

Additionally, compile out all relevant code if neither CONFIG_FAILSLAB
nor CONFIG_FUNCTION_ERROR_INJECTION is enabled. When only the latter is
not enabled, make should_failslab() static inline instead of noinline.

To demonstrate the reduced overhead of calling an empty
should_failslab() function, a kernel build with
CONFIG_FUNCTION_ERROR_INJECTION enabled but CONFIG_FAILSLAB disabled,
and CPU mitigations enabled, was used in a qemu-kvm (virtme-ng) on AMD
Ryzen 7 2700 machine, and execution of a program trying to open() a
non-existent file was measured 3 times:

    for (int i = 0; i < 10000000; i++) {
        open("non_existent", O_RDONLY);
    }

After this patch, the measured real time was 4.3% smaller. Using perf
profiling it was verified that should_failslab was gone from the
profile.

With CONFIG_FAILSLAB also enabled, the patched kernel's performance was
unaffected, as expected, while the unpatched kernel's performance was
worse, resulting in a relative speedup of 10.5%. This means it no longer
needs to be an option suitable only for debug kernel builds.

Acked-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/fault-inject.h |  4 +++-
 mm/failslab.c                |  2 +-
 mm/slab.h                    |  3 +++
 mm/slub.c                    | 30 +++++++++++++++++++++++++++---
 4 files changed, 34 insertions(+), 5 deletions(-)

diff --git a/include/linux/fault-inject.h b/include/linux/fault-inject.h
index cfe75cc1bac4..0d0fa94dc1c8 100644
--- a/include/linux/fault-inject.h
+++ b/include/linux/fault-inject.h
@@ -107,9 +107,11 @@ static inline bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
 }
 #endif /* CONFIG_FAIL_PAGE_ALLOC */
 
+#ifdef CONFIG_FUNCTION_ERROR_INJECTION
 int should_failslab(struct kmem_cache *s, gfp_t gfpflags);
+#endif
 #ifdef CONFIG_FAILSLAB
-extern bool __should_failslab(struct kmem_cache *s, gfp_t gfpflags);
+bool __should_failslab(struct kmem_cache *s, gfp_t gfpflags);
 #else
 static inline bool __should_failslab(struct kmem_cache *s, gfp_t gfpflags)
 {
diff --git a/mm/failslab.c b/mm/failslab.c
index ffc420c0e767..878fd08e5dac 100644
--- a/mm/failslab.c
+++ b/mm/failslab.c
@@ -9,7 +9,7 @@ static struct {
 	bool ignore_gfp_reclaim;
 	bool cache_filter;
 } failslab = {
-	.attr = FAULT_ATTR_INITIALIZER,
+	.attr = FAULT_ATTR_INITIALIZER_KEY(&should_failslab_active.key),
 	.ignore_gfp_reclaim = true,
 	.cache_filter = false,
 };
diff --git a/mm/slab.h b/mm/slab.h
index 5f8f47c5bee0..792e19cb37b8 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -11,6 +11,7 @@
 #include <linux/memcontrol.h>
 #include <linux/kfence.h>
 #include <linux/kasan.h>
+#include <linux/jump_label.h>
 
 /*
  * Internal slab definitions
@@ -160,6 +161,8 @@ static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)
  */
 #define slab_page(s) folio_page(slab_folio(s), 0)
 
+DECLARE_STATIC_KEY_FALSE(should_failslab_active);
+
 /*
  * If network-based swap is enabled, sl*b must keep track of whether pages
  * were allocated from pfmemalloc reserves.
diff --git a/mm/slub.c b/mm/slub.c
index 0809760cf789..11980aa94631 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3874,13 +3874,37 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
 			0, sizeof(void *));
 }
 
-noinline int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
+#if defined(CONFIG_FUNCTION_ERROR_INJECTION) || defined(CONFIG_FAILSLAB)
+DEFINE_STATIC_KEY_FALSE(should_failslab_active);
+
+#ifdef CONFIG_FUNCTION_ERROR_INJECTION
+noinline
+#else
+static inline
+#endif
+int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
 {
 	if (__should_failslab(s, gfpflags))
 		return -ENOMEM;
 	return 0;
 }
-ALLOW_ERROR_INJECTION(should_failslab, ERRNO);
+ALLOW_ERROR_INJECTION_KEY(should_failslab, ERRNO, &should_failslab_active);
+
+static __always_inline int should_failslab_wrapped(struct kmem_cache *s,
+						   gfp_t gfp)
+{
+	if (static_branch_unlikely(&should_failslab_active))
+		return should_failslab(s, gfp);
+	else
+		return 0;
+}
+#else
+static __always_inline int should_failslab_wrapped(struct kmem_cache *s,
+						   gfp_t gfp)
+{
+	return false;
+}
+#endif
 
 static __fastpath_inline
 struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
@@ -3889,7 +3913,7 @@ struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
 
 	might_alloc(flags);
 
-	if (unlikely(should_failslab(s, flags)))
+	if (should_failslab_wrapped(s, flags))
 		return NULL;
 
 	return s;

-- 
2.45.2




* [PATCH v2 7/7] mm, page_alloc: add static key for should_fail_alloc_page()
  2024-06-19 22:48 [PATCH v2 0/7] static key support for error injection functions Vlastimil Babka
                   ` (5 preceding siblings ...)
  2024-06-19 22:49 ` [PATCH v2 6/7] mm, slab: add static key for should_failslab() Vlastimil Babka
@ 2024-06-19 22:49 ` Vlastimil Babka
  6 siblings, 0 replies; 15+ messages in thread
From: Vlastimil Babka @ 2024-06-19 22:49 UTC (permalink / raw)
  To: Akinobu Mita, Christoph Lameter, David Rientjes,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Steven Rostedt, Mark Rutland
  Cc: Jiri Olsa, Roman Gushchin, Hyeonggon Yoo, linux-kernel, linux-mm,
	bpf, linux-trace-kernel, Vlastimil Babka

Similarly to should_failslab(), remove the overhead of calling the
noinline function should_fail_alloc_page() with a static key that guards
the callsite in the page allocator hotpath, and is controlled by the
fault and error injection frameworks and bpf.

Additionally, compile out all relevant code if neither
CONFIG_FAIL_PAGE_ALLOC nor CONFIG_FUNCTION_ERROR_INJECTION is enabled.
When only the latter is not enabled, make should_fail_alloc_page()
static inline instead of noinline.

No measurement was done other than verifying that should_fail_alloc_page
is gone from the perf profile. A measurement with the analogous change
for should_failslab() suggests that for a page allocator intensive
workload there might be a noticeable improvement. It also makes
CONFIG_FAIL_PAGE_ALLOC an option suitable not only for debug kernels.

Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/fault-inject.h |  3 ++-
 mm/fail_page_alloc.c         |  3 ++-
 mm/internal.h                |  2 ++
 mm/page_alloc.c              | 30 +++++++++++++++++++++++++++---
 4 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/include/linux/fault-inject.h b/include/linux/fault-inject.h
index 0d0fa94dc1c8..1a782042ae80 100644
--- a/include/linux/fault-inject.h
+++ b/include/linux/fault-inject.h
@@ -96,8 +96,9 @@ static inline void fault_config_init(struct fault_config *config,
 
 struct kmem_cache;
 
+#ifdef CONFIG_FUNCTION_ERROR_INJECTION
 bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order);
-
+#endif
 #ifdef CONFIG_FAIL_PAGE_ALLOC
 bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order);
 #else
diff --git a/mm/fail_page_alloc.c b/mm/fail_page_alloc.c
index b1b09cce9394..0906b76d78e8 100644
--- a/mm/fail_page_alloc.c
+++ b/mm/fail_page_alloc.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <linux/fault-inject.h>
 #include <linux/mm.h>
+#include "internal.h"
 
 static struct {
 	struct fault_attr attr;
@@ -9,7 +10,7 @@ static struct {
 	bool ignore_gfp_reclaim;
 	u32 min_order;
 } fail_page_alloc = {
-	.attr = FAULT_ATTR_INITIALIZER,
+	.attr = FAULT_ATTR_INITIALIZER_KEY(&should_fail_alloc_page_active.key),
 	.ignore_gfp_reclaim = true,
 	.ignore_gfp_highmem = true,
 	.min_order = 1,
diff --git a/mm/internal.h b/mm/internal.h
index b2c75b12014e..8539e39b02e6 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -410,6 +410,8 @@ extern char * const zone_names[MAX_NR_ZONES];
 /* perform sanity checks on struct pages being allocated or freed */
 DECLARE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
 
+DECLARE_STATIC_KEY_FALSE(should_fail_alloc_page_active);
+
 extern int min_free_kbytes;
 
 void setup_per_zone_wmarks(void);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2e22ce5675ca..b6e246acb4aa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3008,11 +3008,35 @@ struct page *rmqueue(struct zone *preferred_zone,
 	return page;
 }
 
-noinline bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
+#if defined(CONFIG_FUNCTION_ERROR_INJECTION) || defined(CONFIG_FAIL_PAGE_ALLOC)
+DEFINE_STATIC_KEY_FALSE(should_fail_alloc_page_active);
+
+#ifdef CONFIG_FUNCTION_ERROR_INJECTION
+noinline
+#else
+static inline
+#endif
+bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
 {
 	return __should_fail_alloc_page(gfp_mask, order);
 }
-ALLOW_ERROR_INJECTION(should_fail_alloc_page, TRUE);
+ALLOW_ERROR_INJECTION_KEY(should_fail_alloc_page, TRUE, &should_fail_alloc_page_active);
+
+static __always_inline bool
+should_fail_alloc_page_wrapped(gfp_t gfp_mask, unsigned int order)
+{
+	if (static_branch_unlikely(&should_fail_alloc_page_active))
+		return should_fail_alloc_page(gfp_mask, order);
+
+	return false;
+}
+#else
+static __always_inline bool
+should_fail_alloc_page_wrapped(gfp_t gfp_mask, unsigned int order)
+{
+	return false;
+}
+#endif
 
 static inline long __zone_watermark_unusable_free(struct zone *z,
 				unsigned int order, unsigned int alloc_flags)
@@ -4430,7 +4454,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 
 	might_alloc(gfp_mask);
 
-	if (should_fail_alloc_page(gfp_mask, order))
+	if (should_fail_alloc_page_wrapped(gfp_mask, order))
 		return false;
 
 	*alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, *alloc_flags);

-- 
2.45.2




* Re: [PATCH v2 5/7] bpf: do not create bpf_non_sleepable_error_inject list when unnecessary
  2024-06-19 22:48 ` [PATCH v2 5/7] bpf: do not create bpf_non_sleepable_error_inject list when unnecessary Vlastimil Babka
@ 2024-06-20  1:18   ` Alexei Starovoitov
  2024-06-20  8:15     ` Vlastimil Babka
  0 siblings, 1 reply; 15+ messages in thread
From: Alexei Starovoitov @ 2024-06-20  1:18 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Akinobu Mita, Christoph Lameter, David Rientjes,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Steven Rostedt, Mark Rutland, Jiri Olsa,
	Roman Gushchin, Hyeonggon Yoo, LKML, linux-mm, bpf,
	linux-trace-kernel

On Wed, Jun 19, 2024 at 3:49 PM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> When CONFIG_FUNCTION_ERROR_INJECTION is disabled,
> within_error_injection_list() will return false for any address and the
> result of check_non_sleepable_error_inject() denylist is thus redundant.
> The bpf_non_sleepable_error_inject list thus does not need to be
> constructed at all, so #ifdef it out.
>
> This will allow to inline functions on the list when
> CONFIG_FUNCTION_ERROR_INJECTION is disabled as there will be no BTF_ID()
> reference for them.
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> ---
>  kernel/bpf/verifier.c | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 77da1f438bec..5cd93de37d68 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -21044,6 +21044,8 @@ static int check_attach_modify_return(unsigned long addr, const char *func_name)
>         return -EINVAL;
>  }
>
> +#ifdef CONFIG_FUNCTION_ERROR_INJECTION
> +
>  /* list of non-sleepable functions that are otherwise on
>   * ALLOW_ERROR_INJECTION list
>   */
> @@ -21061,6 +21063,19 @@ static int check_non_sleepable_error_inject(u32 btf_id)
>         return btf_id_set_contains(&btf_non_sleepable_error_inject, btf_id);
>  }
>
> +#else /* CONFIG_FUNCTION_ERROR_INJECTION */
> +
> +/*
> + * Pretend the denylist is empty, within_error_injection_list() will return
> + * false anyway.
> + */
> +static int check_non_sleepable_error_inject(u32 btf_id)
> +{
> +       return 0;
> +}
> +
> +#endif

The comment reads like this is an optimization, but it's a mandatory
ifdef since should_failslab() might not be found by resolve_btfid
during the build.
Please make it clear in the comment.



* Re: [PATCH v2 5/7] bpf: do not create bpf_non_sleepable_error_inject list when unnecessary
  2024-06-20  1:18   ` Alexei Starovoitov
@ 2024-06-20  8:15     ` Vlastimil Babka
  0 siblings, 0 replies; 15+ messages in thread
From: Vlastimil Babka @ 2024-06-20  8:15 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Akinobu Mita, Christoph Lameter, David Rientjes,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Steven Rostedt, Mark Rutland, Jiri Olsa,
	Roman Gushchin, Hyeonggon Yoo, LKML, linux-mm, bpf,
	linux-trace-kernel

On 6/20/24 3:18 AM, Alexei Starovoitov wrote:
> On Wed, Jun 19, 2024 at 3:49 PM Vlastimil Babka <vbabka@suse.cz> wrote:
>>
>> When CONFIG_FUNCTION_ERROR_INJECTION is disabled,
>> within_error_injection_list() will return false for any address and the
>> result of check_non_sleepable_error_inject() denylist is thus redundant.
>> The bpf_non_sleepable_error_inject list thus does not need to be
>> constructed at all, so #ifdef it out.
>>
>> This will allow to inline functions on the list when
>> CONFIG_FUNCTION_ERROR_INJECTION is disabled as there will be no BTF_ID()
>> reference for them.
>>
>> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>> ---
>>  kernel/bpf/verifier.c | 15 +++++++++++++++
>>  1 file changed, 15 insertions(+)
>>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 77da1f438bec..5cd93de37d68 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -21044,6 +21044,8 @@ static int check_attach_modify_return(unsigned long addr, const char *func_name)
>>         return -EINVAL;
>>  }
>>
>> +#ifdef CONFIG_FUNCTION_ERROR_INJECTION
>> +
>>  /* list of non-sleepable functions that are otherwise on
>>   * ALLOW_ERROR_INJECTION list
>>   */
>> @@ -21061,6 +21063,19 @@ static int check_non_sleepable_error_inject(u32 btf_id)
>>         return btf_id_set_contains(&btf_non_sleepable_error_inject, btf_id);
>>  }
>>
>> +#else /* CONFIG_FUNCTION_ERROR_INJECTION */
>> +
>> +/*
>> + * Pretend the denylist is empty, within_error_injection_list() will return
>> + * false anyway.
>> + */
>> +static int check_non_sleepable_error_inject(u32 btf_id)
>> +{
>> +       return 0;
>> +}
>> +
>> +#endif
> 
> The comment reads like this is an optimization, but it's a mandatory
> ifdef since should_failslab() might not be found by resolve_btfid
> during the build.
> Please make it clear in the comment.

The comment just tried to explain why the return value is 0 and not 1 (which
would also be somewhat logical), but ok, will make it more clear.



* Re: [PATCH v2 1/7] fault-inject: add support for static keys around fault injection sites
  2024-06-19 22:48 ` [PATCH v2 1/7] fault-inject: add support for static keys around fault injection sites Vlastimil Babka
@ 2024-06-25 14:08   ` Steven Rostedt
  0 siblings, 0 replies; 15+ messages in thread
From: Steven Rostedt @ 2024-06-25 14:08 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Akinobu Mita, Christoph Lameter, David Rientjes,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Mark Rutland, Jiri Olsa, Roman Gushchin,
	Hyeonggon Yoo, linux-kernel, linux-mm, bpf, linux-trace-kernel

On Thu, 20 Jun 2024 00:48:55 +0200
Vlastimil Babka <vbabka@suse.cz> wrote:

> +static int debugfs_prob_set(void *data, u64 val)
> +{
> +	struct fault_attr *attr = data;
> +
> +	mutex_lock(&probability_mutex);
> +
> +	if (attr->active) {
> +		if (attr->probability != 0 && val == 0) {
> +			static_key_slow_dec(attr->active);
> +		} else if (attr->probability == 0 && val != 0) {
> +			static_key_slow_inc(attr->active);
> +		}
> +	}

So basically the above is testing whether the probability is changing
between zero and non-zero via val. For such cases, I find it less
confusing to have:

	if (attr->active) {
		if (!!attr->probability != !!val) {
			if (val)
				static_key_slow_inc(attr->active);
			else
				static_key_slow_dec(attr->active);
		}
	}

This does add a layer of nested ifs, but IMO it's a bit clearer about
what is happening, and it gets rid of the "else if".

Not saying you need to change it. This is more of an FYI.

-- Steve


> +
> +	attr->probability = val;
> +
> +	mutex_unlock(&probability_mutex);
> +
> +	return 0;
> +}



* Re: [PATCH v2 6/7] mm, slab: add static key for should_failslab()
  2024-06-19 22:49 ` [PATCH v2 6/7] mm, slab: add static key for should_failslab() Vlastimil Babka
@ 2024-06-25 14:24   ` Vlastimil Babka
  2024-06-25 17:12     ` Alexei Starovoitov
  0 siblings, 1 reply; 15+ messages in thread
From: Vlastimil Babka @ 2024-06-25 14:24 UTC (permalink / raw)
  To: Akinobu Mita, Christoph Lameter, David Rientjes,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Steven Rostedt, Mark Rutland
  Cc: Jiri Olsa, Roman Gushchin, Hyeonggon Yoo, linux-kernel, linux-mm,
	bpf, linux-trace-kernel

On 6/20/24 12:49 AM, Vlastimil Babka wrote:
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3874,13 +3874,37 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
>  			0, sizeof(void *));
>  }
>  
> -noinline int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
> +#if defined(CONFIG_FUNCTION_ERROR_INJECTION) || defined(CONFIG_FAILSLAB)
> +DEFINE_STATIC_KEY_FALSE(should_failslab_active);
> +
> +#ifdef CONFIG_FUNCTION_ERROR_INJECTION
> +noinline
> +#else
> +static inline
> +#endif
> +int should_failslab(struct kmem_cache *s, gfp_t gfpflags)

Note that it has been found that (regardless of this series) gcc may clone
this to a should_failslab.constprop.0 in case the function is empty because
__should_failslab is compiled out (CONFIG_FAILSLAB=n). The "noinline"
doesn't help - the original function stays but only the clone is actually
being called, thus overriding the original function achieves nothing, see:
https://github.com/bpftrace/bpftrace/issues/3258

So we could use __noclone to prevent that, and I was thinking of adding
something like this to error-injection.h:

#ifdef CONFIG_FUNCTION_ERROR_INJECTION
#define __error_injectable(alternative)		noinline __noclone
#else
#define __error_injectable(alternative)		alternative
#endif

and the usage here would be:

__error_injectable(static inline) int should_failslab(...)

Does that look acceptable, or is it too confusing that "static inline" is
specified there as the storage class to use when error injection is actually
disabled?

>  {
>  	if (__should_failslab(s, gfpflags))
>  		return -ENOMEM;
>  	return 0;
>  }
> -ALLOW_ERROR_INJECTION(should_failslab, ERRNO);
> +ALLOW_ERROR_INJECTION_KEY(should_failslab, ERRNO, &should_failslab_active);
> +
> +static __always_inline int should_failslab_wrapped(struct kmem_cache *s,
> +						   gfp_t gfp)
> +{
> +	if (static_branch_unlikely(&should_failslab_active))
> +		return should_failslab(s, gfp);
> +	else
> +		return 0;
> +}
> +#else
> +static __always_inline int should_failslab_wrapped(struct kmem_cache *s,
> +						   gfp_t gfp)
> +{
> +	return false;
> +}
> +#endif
>  
>  static __fastpath_inline
>  struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
> @@ -3889,7 +3913,7 @@ struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
>  
>  	might_alloc(flags);
>  
> -	if (unlikely(should_failslab(s, flags)))
> +	if (should_failslab_wrapped(s, flags))
>  		return NULL;
>  
>  	return s;
> 




* Re: [PATCH v2 2/7] error-injection: support static keys around injectable functions
  2024-06-19 22:48 ` [PATCH v2 2/7] error-injection: support static keys around injectable functions Vlastimil Babka
@ 2024-06-25 14:41   ` Steven Rostedt
  0 siblings, 0 replies; 15+ messages in thread
From: Steven Rostedt @ 2024-06-25 14:41 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Akinobu Mita, Christoph Lameter, David Rientjes,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Mark Rutland, Jiri Olsa, Roman Gushchin,
	Hyeonggon Yoo, linux-kernel, linux-mm, bpf, linux-trace-kernel

On Thu, 20 Jun 2024 00:48:56 +0200
Vlastimil Babka <vbabka@suse.cz> wrote:

> @@ -86,6 +104,7 @@ static void populate_error_injection_list(struct error_injection_entry *start,
>  		ent->start_addr = entry;
>  		ent->end_addr = entry + size;
>  		ent->etype = iter->etype;
> +		ent->key = (struct static_key *) iter->static_key_addr;

Nit, should there be a space between the typecast and the "iter"?

>  		ent->priv = priv;
>  		INIT_LIST_HEAD(&ent->list);
>  		list_add_tail(&ent->list, &error_injection_list);

Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>

-- Steve



* Re: [PATCH v2 6/7] mm, slab: add static key for should_failslab()
  2024-06-25 14:24   ` Vlastimil Babka
@ 2024-06-25 17:12     ` Alexei Starovoitov
  2024-06-25 17:53       ` Vlastimil Babka
  0 siblings, 1 reply; 15+ messages in thread
From: Alexei Starovoitov @ 2024-06-25 17:12 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Akinobu Mita, Christoph Lameter, David Rientjes,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Steven Rostedt, Mark Rutland, Jiri Olsa,
	Roman Gushchin, Hyeonggon Yoo, LKML, linux-mm, bpf,
	linux-trace-kernel

On Tue, Jun 25, 2024 at 7:24 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 6/20/24 12:49 AM, Vlastimil Babka wrote:
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3874,13 +3874,37 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
> >                       0, sizeof(void *));
> >  }
> >
> > -noinline int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
> > +#if defined(CONFIG_FUNCTION_ERROR_INJECTION) || defined(CONFIG_FAILSLAB)
> > +DEFINE_STATIC_KEY_FALSE(should_failslab_active);
> > +
> > +#ifdef CONFIG_FUNCTION_ERROR_INJECTION
> > +noinline
> > +#else
> > +static inline
> > +#endif
> > +int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
>
> Note that it has been found that (regardless of this series) gcc may clone
> this to a should_failslab.constprop.0 in case the function is empty because
> __should_failslab is compiled out (CONFIG_FAILSLAB=n). The "noinline"
> doesn't help - the original function stays but only the clone is actually
> being called, thus overriding the original function achieves nothing, see:
> https://github.com/bpftrace/bpftrace/issues/3258
>
> So we could use __noclone to prevent that, and I was thinking by adding
> something this to error-injection.h:
>
> #ifdef CONFIG_FUNCTION_ERROR_INJECTION
> #define __error_injectable(alternative)         noinline __noclone

To prevent such compiler transformations we typically use
__used noinline

We didn't have a need for __noclone yet. If __used is enough I'd stick to that.



* Re: [PATCH v2 6/7] mm, slab: add static key for should_failslab()
  2024-06-25 17:12     ` Alexei Starovoitov
@ 2024-06-25 17:53       ` Vlastimil Babka
  0 siblings, 0 replies; 15+ messages in thread
From: Vlastimil Babka @ 2024-06-25 17:53 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Akinobu Mita, Christoph Lameter, David Rientjes,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Steven Rostedt, Mark Rutland, Jiri Olsa,
	Roman Gushchin, Hyeonggon Yoo, LKML, linux-mm, bpf,
	linux-trace-kernel

On 6/25/24 7:12 PM, Alexei Starovoitov wrote:
> On Tue, Jun 25, 2024 at 7:24 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>>
>> On 6/20/24 12:49 AM, Vlastimil Babka wrote:
>> > --- a/mm/slub.c
>> > +++ b/mm/slub.c
>> > @@ -3874,13 +3874,37 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
>> >                       0, sizeof(void *));
>> >  }
>> >
>> > -noinline int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
>> > +#if defined(CONFIG_FUNCTION_ERROR_INJECTION) || defined(CONFIG_FAILSLAB)
>> > +DEFINE_STATIC_KEY_FALSE(should_failslab_active);
>> > +
>> > +#ifdef CONFIG_FUNCTION_ERROR_INJECTION
>> > +noinline
>> > +#else
>> > +static inline
>> > +#endif
>> > +int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
>>
>> Note that it has been found that (regardless of this series) gcc may clone
>> this to a should_failslab.constprop.0 in case the function is empty because
>> __should_failslab is compiled out (CONFIG_FAILSLAB=n). The "noinline"
>> doesn't help - the original function stays but only the clone is actually
>> being called, thus overriding the original function achieves nothing, see:
>> https://github.com/bpftrace/bpftrace/issues/3258
>>
>> So we could use __noclone to prevent that, and I was thinking by adding
>> something this to error-injection.h:
>>
>> #ifdef CONFIG_FUNCTION_ERROR_INJECTION
>> #define __error_injectable(alternative)         noinline __noclone
> 
> To prevent such compiler transformations we typically use
> __used noinline
> 
> We didn't have a need for __noclone yet. If __used is enough I'd stick to that.

__used made no difference here (gcc 13.3), __noclone did


