* [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes
@ 2025-01-14 5:34 Jiao, Joey
2025-01-14 5:34 ` [PATCH 1/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_PC mode Jiao, Joey
` (7 more replies)
0 siblings, 8 replies; 16+ messages in thread
From: Jiao, Joey @ 2025-01-14 5:34 UTC (permalink / raw)
To: Dmitry Vyukov, Andrey Konovalov, Jonathan Corbet, Andrew Morton,
Dennis Zhou, Tejun Heo, Christoph Lameter, Catalin Marinas,
Will Deacon
Cc: kasan-dev, linux-kernel, workflows, linux-doc, linux-mm,
linux-arm-kernel, kernel
Hi,
This patch series introduces new kcov unique modes,
`KCOV_TRACE_UNIQ_[PC|EDGE|CMP]`, which collect unique PC, edge, and
CMP information.
Background
----------
In the current kcov implementation, when `__sanitizer_cov_trace_pc` is hit,
the instruction pointer (IP) is stored sequentially in a shared area.
Userspace programs then read this area to record covered PCs and calculate
covered edges. However, recent syzkaller runs show that many syscalls
overflow the buffer (`pos > t->kcov_size`), so coverage is silently
dropped. To address this issue, we introduce new kcov unique modes.
Solution Overview
-----------------
1. [P 1] Introduce `KCOV_TRACE_UNIQ_PC` Mode:
- Export `KCOV_TRACE_UNIQ_PC` to userspace.
- Add `kcov_map` struct to manage memory during the KCOV lifecycle.
- Add a `kcov_entry` struct as the hashtable entry holding a unique PC.
- Use hashtable buckets to link `kcov_entry`.
- Preallocate memory using genpool during KCOV initialization.
- Move `area` inside `kcov_map` for easier management.
- Use `jhash` for hash key calculation to support `KCOV_TRACE_UNIQ_CMP`
mode.
2. [P 2-3] Introduce `KCOV_TRACE_UNIQ_EDGE` Mode:
- Save `prev_pc` to calculate edges with the current IP.
- Add unique edges to the hashmap.
- Use a lower 12-bit mask to make hash independent of module offsets.
- Distinguish areas for `KCOV_TRACE_UNIQ_PC` and `KCOV_TRACE_UNIQ_EDGE`
modes using `offset` during mmap.
- Support enabling `KCOV_TRACE_UNIQ_PC` and `KCOV_TRACE_UNIQ_EDGE`
together.
3. [P 4] Introduce `KCOV_TRACE_UNIQ_CMP` Mode:
- Shares the area with `KCOV_TRACE_UNIQ_PC`, making these two modes
mutually exclusive.
4. [P 5] Add Example Code Documentation:
- Provide examples for testing different modes:
- `KCOV_TRACE_PC`: `./kcov` or `./kcov 0`
- `KCOV_TRACE_CMP`: `./kcov 1`
- `KCOV_TRACE_UNIQ_PC`: `./kcov 2`
- `KCOV_TRACE_UNIQ_EDGE`: `./kcov 4`
- `KCOV_TRACE_UNIQ_PC|KCOV_TRACE_UNIQ_EDGE`: `./kcov 6`
- `KCOV_TRACE_UNIQ_CMP`: `./kcov 8`
5. [P 6-7] Disable KCOV Instrumentation:
- Disable instrumentation of helpers such as genalloc and bitmap to
prevent recursive calls into kcov.
Caveats
-------
The userspace program has been tested on Qemu x86_64 and on two real
Android phones with different ARM64 chips. Additional syzkaller-compatible
tests have also been run. However, due to limited access to other
platforms, assistance from people with other systems is needed.
Results and Analysis
--------------------
1. KMEMLEAK Test on Qemu x86_64:
- No memory leaks found during the `kcov` program run.
2. KCSAN Test on Qemu x86_64:
- No KCSAN issues found during the `kcov` program run.
3. Existing Syzkaller on Qemu x86_64 and Real ARM64 Device:
- Syzkaller can fuzz, show coverage, and find bugs. Adjusting `procs`
and `vm mem` settings avoids OOM issues caused by genpool in the
patches, so `procs:4 + vm:2GB` is used for Qemu x86_64.
- `procs:8` is kept on Real ARM64 Device with 12GB/16GB mem.
4. Modified Syzkaller to Support New KCOV Unique Modes:
- Syzkaller runs fine on both Qemu x86_64 and ARM64 real devices.
Only limited `Cover overflows` and `Comps overflows` were observed.
5. Modified Syzkaller + Upstream Kernel Without Patch Series:
- Not tested. The modified syzkaller will fall back to `KCOV_TRACE_PC`
or `KCOV_TRACE_CMP` if `ioctl` fails for Unique mode.
Possible Further Enhancements
-----------------------------
1. Test more cases and setups, including those in syzbot.
2. Ensure the `hash_for_each_possible_rcu` lookup/insert path is safe
against reentrancy and remains atomic.
3. Find a simpler and more efficient way to store unique coverage.
Conclusion
----------
These patches add new kcov unique modes to mitigate the kcov overflow
issue and remain compatible with both existing and new syzkaller versions.
Thanks,
Joey Jiao
---
Jiao, Joey (7):
kcov: introduce new kcov KCOV_TRACE_UNIQ_PC mode
kcov: introduce new kcov KCOV_TRACE_UNIQ_EDGE mode
kcov: allow using KCOV_TRACE_UNIQ_[PC|EDGE] modes together
kcov: introduce new kcov KCOV_TRACE_UNIQ_CMP mode
kcov: add the new KCOV uniq modes example code
kcov: disable instrumentation for genalloc and bitmap
arm64: disable kcov instrument in header files
Documentation/dev-tools/kcov.rst | 243 ++++++++++++++--------------
arch/arm64/include/asm/percpu.h | 2 +-
arch/arm64/include/asm/preempt.h | 2 +-
include/linux/kcov.h | 10 +-
include/uapi/linux/kcov.h | 6 +
kernel/kcov.c | 333 +++++++++++++++++++++++++++++++++------
lib/Makefile | 2 +
7 files changed, 423 insertions(+), 175 deletions(-)
---
base-commit: 9b2ffa6148b1e4468d08f7e0e7e371c43cac9ffe
change-id: 20250114-kcov-95cedece4654
Best regards,
--
<Jiao, Joey> <quic_jiangenj@quicinc.com>
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH 1/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_PC mode
2025-01-14 5:34 [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes Jiao, Joey
@ 2025-01-14 5:34 ` Jiao, Joey
2025-01-14 5:34 ` [PATCH 2/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_EDGE mode Jiao, Joey
` (6 subsequent siblings)
7 siblings, 0 replies; 16+ messages in thread
From: Jiao, Joey @ 2025-01-14 5:34 UTC (permalink / raw)
To: Dmitry Vyukov, Andrey Konovalov, Jonathan Corbet, Andrew Morton,
Dennis Zhou, Tejun Heo, Christoph Lameter, Catalin Marinas,
Will Deacon
Cc: kasan-dev, linux-kernel, workflows, linux-doc, linux-mm,
linux-arm-kernel, kernel
The current kcov KCOV_TRACE_PC mode stores PCs sequentially.
Introduce KCOV_TRACE_UNIQ_PC mode to store only unique PC info.
In unique PC mode,
- use a hashmap to store unique PCs as kcov_entry items
- use gen_pool_alloc in __sanitizer_cov_trace_pc to avoid kmalloc,
which can sleep
Signed-off-by: Jiao, Joey <quic_jiangenj@quicinc.com>
---
include/linux/kcov.h | 6 +-
include/uapi/linux/kcov.h | 2 +
kernel/kcov.c | 190 ++++++++++++++++++++++++++++++++++++++++------
3 files changed, 173 insertions(+), 25 deletions(-)
diff --git a/include/linux/kcov.h b/include/linux/kcov.h
index 75a2fb8b16c32917817b8ec7f5e45421793431ae..aafd9f88450cb8672c701349300b54662bc38079 100644
--- a/include/linux/kcov.h
+++ b/include/linux/kcov.h
@@ -20,9 +20,11 @@ enum kcov_mode {
*/
KCOV_MODE_TRACE_PC = 2,
/* Collecting comparison operands mode. */
- KCOV_MODE_TRACE_CMP = 3,
+ KCOV_MODE_TRACE_CMP = 4,
/* The process owns a KCOV remote reference. */
- KCOV_MODE_REMOTE = 4,
+ KCOV_MODE_REMOTE = 8,
+ /* COllecting uniq pc mode. */
+ KCOV_MODE_TRACE_UNIQ_PC = 16,
};
#define KCOV_IN_CTXSW (1 << 30)
diff --git a/include/uapi/linux/kcov.h b/include/uapi/linux/kcov.h
index ed95dba9fa37e291e9e9e0109eb8481bb7a5e9da..d2a2bff36f285a5e3a03395f8890fcb716cf3f07 100644
--- a/include/uapi/linux/kcov.h
+++ b/include/uapi/linux/kcov.h
@@ -35,6 +35,8 @@ enum {
KCOV_TRACE_PC = 0,
/* Collecting comparison operands mode. */
KCOV_TRACE_CMP = 1,
+ /* Collecting uniq PC mode. */
+ KCOV_TRACE_UNIQ_PC = 2,
};
/*
diff --git a/kernel/kcov.c b/kernel/kcov.c
index 28a6be6e64fdd721d49c4040ed10ce33f9d890a1..bbd7b7503206fe595976458ab685b95f784607d7 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -9,9 +9,11 @@
#include <linux/types.h>
#include <linux/file.h>
#include <linux/fs.h>
+#include <linux/genalloc.h>
#include <linux/hashtable.h>
#include <linux/init.h>
#include <linux/jiffies.h>
+#include <linux/jhash.h>
#include <linux/kmsan-checks.h>
#include <linux/mm.h>
#include <linux/preempt.h>
@@ -32,6 +34,29 @@
/* Number of 64-bit words written per one comparison: */
#define KCOV_WORDS_PER_CMP 4
+struct kcov_entry {
+ unsigned long ent;
+
+ struct hlist_node node;
+};
+
+/* Min gen pool alloc order. */
+#define MIN_POOL_ALLOC_ORDER ilog2(roundup_pow_of_two(sizeof(struct kcov_entry)))
+
+/*
+ * kcov hashmap to store uniq pc, prealloced mem for kcov_entry
+ * and area shared between kernel and userspace.
+ */
+struct kcov_map {
+ /* 15 bits fit most cases for hash collision, memory and performance. */
+ DECLARE_HASHTABLE(buckets, 15);
+ struct gen_pool *pool;
+ /* Prealloced memory added to pool to be used as kcov_entry. */
+ void *mem;
+ /* Buffer shared with user space. */
+ void *area;
+};
+
/*
* kcov descriptor (one per opened debugfs file).
* State transitions of the descriptor:
@@ -60,6 +85,8 @@ struct kcov {
unsigned int size;
/* Coverage buffer shared with user space. */
void *area;
+ /* Coverage hashmap for unique pc. */
+ struct kcov_map *map;
/* Task for which we collect coverage, or NULL. */
struct task_struct *t;
/* Collecting coverage from remote (background) threads. */
@@ -171,7 +198,7 @@ static inline bool in_softirq_really(void)
return in_serving_softirq() && !in_hardirq() && !in_nmi();
}
-static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)
+static notrace unsigned int check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)
{
unsigned int mode;
@@ -191,7 +218,94 @@ static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_stru
* kcov_start().
*/
barrier();
- return mode == needed_mode;
+ return mode & needed_mode;
+}
+
+static int kcov_map_init(struct kcov *kcov, unsigned long size)
+{
+ struct kcov_map *map;
+ void *area;
+ unsigned long flags;
+
+ map = kzalloc(sizeof(*map), GFP_KERNEL);
+ if (!map)
+ return -ENOMEM;
+
+ area = vmalloc_user(size * sizeof(unsigned long));
+ if (!area) {
+ kfree(map);
+ return -ENOMEM;
+ }
+
+ spin_lock_irqsave(&kcov->lock, flags);
+ map->area = area;
+
+ kcov->map = map;
+ kcov->area = area;
+ spin_unlock_irqrestore(&kcov->lock, flags);
+
+ hash_init(map->buckets);
+
+ map->pool = gen_pool_create(MIN_POOL_ALLOC_ORDER, -1);
+ if (!map->pool)
+ return -ENOMEM;
+
+ map->mem = vmalloc(size * (1 << MIN_POOL_ALLOC_ORDER));
+ if (!map->mem) {
+ vfree(area);
+ gen_pool_destroy(map->pool);
+ kfree(map);
+ return -ENOMEM;
+ }
+
+ if (gen_pool_add(map->pool, (unsigned long)map->mem, size *
+ (1 << MIN_POOL_ALLOC_ORDER), -1)) {
+ vfree(area);
+ vfree(map->mem);
+ gen_pool_destroy(map->pool);
+ kfree(map);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static inline u32 hash_key(const struct kcov_entry *k)
+{
+ return jhash((u32 *)k, offsetof(struct kcov_entry, node), 0);
+}
+
+static notrace inline void kcov_map_add(struct kcov_map *map, struct kcov_entry *ent,
+ struct task_struct *t)
+{
+ struct kcov *kcov;
+ struct kcov_entry *entry;
+ unsigned int key = hash_key(ent);
+ unsigned long pos, *area;
+
+ kcov = t->kcov;
+
+ hash_for_each_possible_rcu(map->buckets, entry, node, key) {
+ if (entry->ent == ent->ent)
+ return;
+ }
+
+ entry = (struct kcov_entry *)gen_pool_alloc(map->pool, 1 << MIN_POOL_ALLOC_ORDER);
+ if (unlikely(!entry))
+ return;
+
+ barrier();
+ memcpy(entry, ent, sizeof(*entry));
+ hash_add_rcu(map->buckets, &entry->node, key);
+
+ area = t->kcov_area;
+
+ pos = READ_ONCE(area[0]) + 1;
+ if (likely(pos < t->kcov_size)) {
+ WRITE_ONCE(area[0], pos);
+ barrier();
+ area[pos] = ent->ent;
+ }
}
static notrace unsigned long canonicalize_ip(unsigned long ip)
@@ -212,25 +326,34 @@ void notrace __sanitizer_cov_trace_pc(void)
unsigned long *area;
unsigned long ip = canonicalize_ip(_RET_IP_);
unsigned long pos;
+ struct kcov_entry entry = {0};
+ unsigned int mode;
t = current;
- if (!check_kcov_mode(KCOV_MODE_TRACE_PC, t))
+ if (!check_kcov_mode(KCOV_MODE_TRACE_PC | KCOV_MODE_TRACE_UNIQ_PC, t))
return;
area = t->kcov_area;
- /* The first 64-bit word is the number of subsequent PCs. */
- pos = READ_ONCE(area[0]) + 1;
- if (likely(pos < t->kcov_size)) {
- /* Previously we write pc before updating pos. However, some
- * early interrupt code could bypass check_kcov_mode() check
- * and invoke __sanitizer_cov_trace_pc(). If such interrupt is
- * raised between writing pc and updating pos, the pc could be
- * overitten by the recursive __sanitizer_cov_trace_pc().
- * Update pos before writing pc to avoid such interleaving.
- */
- WRITE_ONCE(area[0], pos);
- barrier();
- area[pos] = ip;
+ mode = t->kcov_mode;
+ if (mode == KCOV_MODE_TRACE_PC) {
+ area = t->kcov_area;
+ /* The first 64-bit word is the number of subsequent PCs. */
+ pos = READ_ONCE(area[0]) + 1;
+ if (likely(pos < t->kcov_size)) {
+ /* Previously we write pc before updating pos. However, some
+ * early interrupt code could bypass check_kcov_mode() check
+ * and invoke __sanitizer_cov_trace_pc(). If such interrupt is
+ * raised between writing pc and updating pos, the pc could be
+ * overitten by the recursive __sanitizer_cov_trace_pc().
+ * Update pos before writing pc to avoid such interleaving.
+ */
+ WRITE_ONCE(area[0], pos);
+ barrier();
+ area[pos] = ip;
+ }
+ } else {
+ entry.ent = ip;
+ kcov_map_add(t->kcov->map, &entry, t);
}
}
EXPORT_SYMBOL(__sanitizer_cov_trace_pc);
@@ -432,11 +555,33 @@ static void kcov_get(struct kcov *kcov)
refcount_inc(&kcov->refcount);
}
+static void kcov_map_free(struct kcov *kcov)
+{
+ int bkt;
+ struct hlist_node *tmp;
+ struct kcov_entry *entry;
+ struct kcov_map *map;
+
+ map = kcov->map;
+ if (!map)
+ return;
+ rcu_read_lock();
+ hash_for_each_safe(map->buckets, bkt, tmp, entry, node) {
+ hash_del_rcu(&entry->node);
+ gen_pool_free(map->pool, (unsigned long)entry, 1 << MIN_POOL_ALLOC_ORDER);
+ }
+ rcu_read_unlock();
+ vfree(map->area);
+ vfree(map->mem);
+ gen_pool_destroy(map->pool);
+ kfree(map);
+}
+
static void kcov_put(struct kcov *kcov)
{
if (refcount_dec_and_test(&kcov->refcount)) {
kcov_remote_reset(kcov);
- vfree(kcov->area);
+ kcov_map_free(kcov);
kfree(kcov);
}
}
@@ -546,6 +691,8 @@ static int kcov_get_mode(unsigned long arg)
#else
return -ENOTSUPP;
#endif
+ else if (arg == KCOV_TRACE_UNIQ_PC)
+ return KCOV_MODE_TRACE_UNIQ_PC;
else
return -EINVAL;
}
@@ -698,7 +845,6 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
unsigned int remote_num_handles;
unsigned long remote_arg_size;
unsigned long size, flags;
- void *area;
kcov = filep->private_data;
switch (cmd) {
@@ -713,16 +859,14 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
size = arg;
if (size < 2 || size > INT_MAX / sizeof(unsigned long))
return -EINVAL;
- area = vmalloc_user(size * sizeof(unsigned long));
- if (area == NULL)
- return -ENOMEM;
+ res = kcov_map_init(kcov, size);
+ if (res)
+ return res;
spin_lock_irqsave(&kcov->lock, flags);
if (kcov->mode != KCOV_MODE_DISABLED) {
spin_unlock_irqrestore(&kcov->lock, flags);
- vfree(area);
return -EBUSY;
}
- kcov->area = area;
kcov->size = size;
kcov->mode = KCOV_MODE_INIT;
spin_unlock_irqrestore(&kcov->lock, flags);
--
2.47.1
* [PATCH 2/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_EDGE mode
2025-01-14 5:34 [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes Jiao, Joey
2025-01-14 5:34 ` [PATCH 1/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_PC mode Jiao, Joey
@ 2025-01-14 5:34 ` Jiao, Joey
2025-01-14 5:34 ` [PATCH 3/7] kcov: allow using KCOV_TRACE_UNIQ_[PC|EDGE] modes together Jiao, Joey
` (5 subsequent siblings)
7 siblings, 0 replies; 16+ messages in thread
From: Jiao, Joey @ 2025-01-14 5:34 UTC (permalink / raw)
To: Dmitry Vyukov, Andrey Konovalov, Jonathan Corbet, Andrew Morton,
Dennis Zhou, Tejun Heo, Christoph Lameter, Catalin Marinas,
Will Deacon
Cc: kasan-dev, linux-kernel, workflows, linux-doc, linux-mm,
linux-arm-kernel, kernel
KCOV_TRACE_UNIQ_EDGE stores unique edge info, computed as the bitwise
XOR of prev_pc and the current PC.
Only the lower 12 bits are hashed, so the hash is independent of any
module offsets.
Signed-off-by: Jiao, Joey <quic_jiangenj@quicinc.com>
---
include/linux/kcov.h | 4 ++-
include/uapi/linux/kcov.h | 2 ++
kernel/kcov.c | 73 ++++++++++++++++++++++++++++++++++++-----------
3 files changed, 61 insertions(+), 18 deletions(-)
diff --git a/include/linux/kcov.h b/include/linux/kcov.h
index aafd9f88450cb8672c701349300b54662bc38079..56b858205ba16c47fc72bda9938c98f034503c8c 100644
--- a/include/linux/kcov.h
+++ b/include/linux/kcov.h
@@ -23,8 +23,10 @@ enum kcov_mode {
KCOV_MODE_TRACE_CMP = 4,
/* The process owns a KCOV remote reference. */
KCOV_MODE_REMOTE = 8,
- /* COllecting uniq pc mode. */
+ /* Collecting uniq pc mode. */
KCOV_MODE_TRACE_UNIQ_PC = 16,
+ /* Collecting uniq edge mode. */
+ KCOV_MODE_TRACE_UNIQ_EDGE = 32,
};
#define KCOV_IN_CTXSW (1 << 30)
diff --git a/include/uapi/linux/kcov.h b/include/uapi/linux/kcov.h
index d2a2bff36f285a5e3a03395f8890fcb716cf3f07..9b2019f0ab8b8cb5426d2d6b74472fa1a7293817 100644
--- a/include/uapi/linux/kcov.h
+++ b/include/uapi/linux/kcov.h
@@ -37,6 +37,8 @@ enum {
KCOV_TRACE_CMP = 1,
/* Collecting uniq PC mode. */
KCOV_TRACE_UNIQ_PC = 2,
+ /* Collecting uniq edge mode. */
+ KCOV_TRACE_UNIQ_EDGE = 4,
};
/*
diff --git a/kernel/kcov.c b/kernel/kcov.c
index bbd7b7503206fe595976458ab685b95f784607d7..5a0ead92729294d99db80bb4e0f5b04c8b025dba 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -83,10 +83,14 @@ struct kcov {
enum kcov_mode mode;
/* Size of arena (in long's). */
unsigned int size;
+ /* Previous PC. */
+ unsigned long prev_pc;
/* Coverage buffer shared with user space. */
void *area;
/* Coverage hashmap for unique pc. */
struct kcov_map *map;
+ /* Edge hashmap for unique edge. */
+ struct kcov_map *map_edge;
/* Task for which we collect coverage, or NULL. */
struct task_struct *t;
/* Collecting coverage from remote (background) threads. */
@@ -221,7 +225,7 @@ static notrace unsigned int check_kcov_mode(enum kcov_mode needed_mode, struct t
return mode & needed_mode;
}
-static int kcov_map_init(struct kcov *kcov, unsigned long size)
+static int kcov_map_init(struct kcov *kcov, unsigned long size, bool edge)
{
struct kcov_map *map;
void *area;
@@ -240,8 +244,12 @@ static int kcov_map_init(struct kcov *kcov, unsigned long size)
spin_lock_irqsave(&kcov->lock, flags);
map->area = area;
- kcov->map = map;
- kcov->area = area;
+ if (edge) {
+ kcov->map_edge = map;
+ } else {
+ kcov->map = map;
+ kcov->area = area;
+ }
spin_unlock_irqrestore(&kcov->lock, flags);
hash_init(map->buckets);
@@ -276,7 +284,7 @@ static inline u32 hash_key(const struct kcov_entry *k)
}
static notrace inline void kcov_map_add(struct kcov_map *map, struct kcov_entry *ent,
- struct task_struct *t)
+ struct task_struct *t, unsigned int mode)
{
struct kcov *kcov;
struct kcov_entry *entry;
@@ -298,7 +306,10 @@ static notrace inline void kcov_map_add(struct kcov_map *map, struct kcov_entry
memcpy(entry, ent, sizeof(*entry));
hash_add_rcu(map->buckets, &entry->node, key);
- area = t->kcov_area;
+ if (mode == KCOV_MODE_TRACE_UNIQ_PC)
+ area = t->kcov_area;
+ else
+ area = kcov->map_edge->area;
pos = READ_ONCE(area[0]) + 1;
if (likely(pos < t->kcov_size)) {
@@ -327,13 +338,15 @@ void notrace __sanitizer_cov_trace_pc(void)
unsigned long ip = canonicalize_ip(_RET_IP_);
unsigned long pos;
struct kcov_entry entry = {0};
+ /* Only hash the lower 12 bits so the hash is independent of any module offsets. */
+ unsigned long mask = (1 << 12) - 1;
unsigned int mode;
t = current;
- if (!check_kcov_mode(KCOV_MODE_TRACE_PC | KCOV_MODE_TRACE_UNIQ_PC, t))
+ if (!check_kcov_mode(KCOV_MODE_TRACE_PC | KCOV_MODE_TRACE_UNIQ_PC |
+ KCOV_MODE_TRACE_UNIQ_EDGE, t))
return;
- area = t->kcov_area;
mode = t->kcov_mode;
if (mode == KCOV_MODE_TRACE_PC) {
area = t->kcov_area;
@@ -352,8 +365,15 @@ void notrace __sanitizer_cov_trace_pc(void)
area[pos] = ip;
}
} else {
- entry.ent = ip;
- kcov_map_add(t->kcov->map, &entry, t);
+ if (mode & KCOV_MODE_TRACE_UNIQ_PC) {
+ entry.ent = ip;
+ kcov_map_add(t->kcov->map, &entry, t, KCOV_MODE_TRACE_UNIQ_PC);
+ }
+ if (mode & KCOV_MODE_TRACE_UNIQ_EDGE) {
+ entry.ent = (hash_long(t->kcov->prev_pc & mask, BITS_PER_LONG) & mask) ^ ip;
+ t->kcov->prev_pc = ip;
+ kcov_map_add(t->kcov->map_edge, &entry, t, KCOV_MODE_TRACE_UNIQ_EDGE);
+ }
}
}
EXPORT_SYMBOL(__sanitizer_cov_trace_pc);
@@ -555,14 +575,17 @@ static void kcov_get(struct kcov *kcov)
refcount_inc(&kcov->refcount);
}
-static void kcov_map_free(struct kcov *kcov)
+static void kcov_map_free(struct kcov *kcov, bool edge)
{
int bkt;
struct hlist_node *tmp;
struct kcov_entry *entry;
struct kcov_map *map;
- map = kcov->map;
+ if (edge)
+ map = kcov->map_edge;
+ else
+ map = kcov->map;
if (!map)
return;
rcu_read_lock();
@@ -581,7 +604,8 @@ static void kcov_put(struct kcov *kcov)
{
if (refcount_dec_and_test(&kcov->refcount)) {
kcov_remote_reset(kcov);
- kcov_map_free(kcov);
+ kcov_map_free(kcov, false);
+ kcov_map_free(kcov, true);
kfree(kcov);
}
}
@@ -636,18 +660,27 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
unsigned long size, off;
struct page *page;
unsigned long flags;
+ void *area;
spin_lock_irqsave(&kcov->lock, flags);
size = kcov->size * sizeof(unsigned long);
- if (kcov->area == NULL || vma->vm_pgoff != 0 ||
- vma->vm_end - vma->vm_start != size) {
+ if (!vma->vm_pgoff) {
+ area = kcov->area;
+ } else if (vma->vm_pgoff == size >> PAGE_SHIFT) {
+ area = kcov->map_edge->area;
+ } else {
+ spin_unlock_irqrestore(&kcov->lock, flags);
+ return -EINVAL;
+ }
+
+ if (!area || vma->vm_end - vma->vm_start != size) {
res = -EINVAL;
goto exit;
}
spin_unlock_irqrestore(&kcov->lock, flags);
vm_flags_set(vma, VM_DONTEXPAND);
for (off = 0; off < size; off += PAGE_SIZE) {
- page = vmalloc_to_page(kcov->area + off);
+ page = vmalloc_to_page(area + off);
res = vm_insert_page(vma, vma->vm_start + off, page);
if (res) {
pr_warn_once("kcov: vm_insert_page() failed\n");
@@ -693,6 +726,8 @@ static int kcov_get_mode(unsigned long arg)
#endif
else if (arg == KCOV_TRACE_UNIQ_PC)
return KCOV_MODE_TRACE_UNIQ_PC;
+ else if (arg == KCOV_TRACE_UNIQ_EDGE)
+ return KCOV_MODE_TRACE_UNIQ_EDGE;
else
return -EINVAL;
}
@@ -747,7 +782,8 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
* at task exit or voluntary by KCOV_DISABLE. After that it can
* be enabled for another task.
*/
- if (kcov->mode != KCOV_MODE_INIT || !kcov->area)
+ if (kcov->mode != KCOV_MODE_INIT || !kcov->area ||
+ !kcov->map_edge->area)
return -EINVAL;
t = current;
if (kcov->t != NULL || t->kcov != NULL)
@@ -859,7 +895,10 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
size = arg;
if (size < 2 || size > INT_MAX / sizeof(unsigned long))
return -EINVAL;
- res = kcov_map_init(kcov, size);
+ res = kcov_map_init(kcov, size, false);
+ if (res)
+ return res;
+ res = kcov_map_init(kcov, size, true);
if (res)
return res;
spin_lock_irqsave(&kcov->lock, flags);
--
2.47.1
* [PATCH 3/7] kcov: allow using KCOV_TRACE_UNIQ_[PC|EDGE] modes together
2025-01-14 5:34 [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes Jiao, Joey
2025-01-14 5:34 ` [PATCH 1/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_PC mode Jiao, Joey
2025-01-14 5:34 ` [PATCH 2/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_EDGE mode Jiao, Joey
@ 2025-01-14 5:34 ` Jiao, Joey
2025-01-14 5:34 ` [PATCH 4/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_CMP mode Jiao, Joey
` (4 subsequent siblings)
7 siblings, 0 replies; 16+ messages in thread
From: Jiao, Joey @ 2025-01-14 5:34 UTC (permalink / raw)
To: Dmitry Vyukov, Andrey Konovalov, Jonathan Corbet, Andrew Morton,
Dennis Zhou, Tejun Heo, Christoph Lameter, Catalin Marinas,
Will Deacon
Cc: kasan-dev, linux-kernel, workflows, linux-doc, linux-mm,
linux-arm-kernel, kernel
KCOV_TRACE_UNIQ_PC and KCOV_TRACE_UNIQ_EDGE modes can be used
separately; now they can also be enabled together to approximate the
current KCOV_TRACE_PC mode without the sequence info.
Signed-off-by: Jiao, Joey <quic_jiangenj@quicinc.com>
---
kernel/kcov.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/kernel/kcov.c b/kernel/kcov.c
index 5a0ead92729294d99db80bb4e0f5b04c8b025dba..c04bbec9ac3186a5145240de8ac609ad8a7ca733 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -716,6 +716,8 @@ static int kcov_close(struct inode *inode, struct file *filep)
static int kcov_get_mode(unsigned long arg)
{
+ int mode = 0;
+
if (arg == KCOV_TRACE_PC)
return KCOV_MODE_TRACE_PC;
else if (arg == KCOV_TRACE_CMP)
@@ -724,12 +726,14 @@ static int kcov_get_mode(unsigned long arg)
#else
return -ENOTSUPP;
#endif
- else if (arg == KCOV_TRACE_UNIQ_PC)
- return KCOV_MODE_TRACE_UNIQ_PC;
- else if (arg == KCOV_TRACE_UNIQ_EDGE)
- return KCOV_MODE_TRACE_UNIQ_EDGE;
- else
+ if (arg & KCOV_TRACE_UNIQ_PC)
+ mode |= KCOV_MODE_TRACE_UNIQ_PC;
+ if (arg & KCOV_TRACE_UNIQ_EDGE)
+ mode |= KCOV_MODE_TRACE_UNIQ_EDGE;
+ if (!mode)
return -EINVAL;
+
+ return mode;
}
/*
--
2.47.1
* [PATCH 4/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_CMP mode
2025-01-14 5:34 [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes Jiao, Joey
` (2 preceding siblings ...)
2025-01-14 5:34 ` [PATCH 3/7] kcov: allow using KCOV_TRACE_UNIQ_[PC|EDGE] modes together Jiao, Joey
@ 2025-01-14 5:34 ` Jiao, Joey
2025-01-24 2:11 ` kernel test robot
2025-01-24 12:26 ` kernel test robot
2025-01-14 5:34 ` [PATCH 5/7] kcov: add the new KCOV uniq modes example code Jiao, Joey
` (3 subsequent siblings)
7 siblings, 2 replies; 16+ messages in thread
From: Jiao, Joey @ 2025-01-14 5:34 UTC (permalink / raw)
To: Dmitry Vyukov, Andrey Konovalov, Jonathan Corbet, Andrew Morton,
Dennis Zhou, Tejun Heo, Christoph Lameter, Catalin Marinas,
Will Deacon
Cc: kasan-dev, linux-kernel, workflows, linux-doc, linux-mm,
linux-arm-kernel, kernel
Similar to KCOV_TRACE_CMP mode, KCOV_TRACE_UNIQ_CMP stores only unique
CMP data in the area.
Signed-off-by: Jiao, Joey <quic_jiangenj@quicinc.com>
---
include/linux/kcov.h | 2 +
include/uapi/linux/kcov.h | 2 +
kernel/kcov.c | 112 ++++++++++++++++++++++++++++++++--------------
3 files changed, 83 insertions(+), 33 deletions(-)
diff --git a/include/linux/kcov.h b/include/linux/kcov.h
index 56b858205ba16c47fc72bda9938c98f034503c8c..a78d78164bf75368c71a958a5438fc3ee68c95ca 100644
--- a/include/linux/kcov.h
+++ b/include/linux/kcov.h
@@ -27,6 +27,8 @@ enum kcov_mode {
KCOV_MODE_TRACE_UNIQ_PC = 16,
/* Collecting uniq edge mode. */
KCOV_MODE_TRACE_UNIQ_EDGE = 32,
+ /* Collecting uniq cmp mode. */
+ KCOV_MODE_TRACE_UNIQ_CMP = 64,
};
#define KCOV_IN_CTXSW (1 << 30)
diff --git a/include/uapi/linux/kcov.h b/include/uapi/linux/kcov.h
index 9b2019f0ab8b8cb5426d2d6b74472fa1a7293817..08abfca273c9624dc54a2c70b12a4a9302700f26 100644
--- a/include/uapi/linux/kcov.h
+++ b/include/uapi/linux/kcov.h
@@ -39,6 +39,8 @@ enum {
KCOV_TRACE_UNIQ_PC = 2,
/* Collecting uniq edge mode. */
KCOV_TRACE_UNIQ_EDGE = 4,
+ /* Collecting uniq CMP mode. */
+ KCOV_TRACE_UNIQ_CMP = 8,
};
/*
diff --git a/kernel/kcov.c b/kernel/kcov.c
index c04bbec9ac3186a5145240de8ac609ad8a7ca733..af73c40114d23adedab8318e8657d24bf36ae865 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -36,6 +36,11 @@
struct kcov_entry {
unsigned long ent;
+#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
+ unsigned long type;
+ unsigned long arg1;
+ unsigned long arg2;
+#endif
struct hlist_node node;
};
@@ -44,7 +49,7 @@ struct kcov_entry {
#define MIN_POOL_ALLOC_ORDER ilog2(roundup_pow_of_two(sizeof(struct kcov_entry)))
/*
- * kcov hashmap to store uniq pc, prealloced mem for kcov_entry
+ * kcov hashmap to store uniq pc|edge|cmp, prealloced mem for kcov_entry
* and area shared between kernel and userspace.
*/
struct kcov_map {
@@ -87,7 +92,7 @@ struct kcov {
unsigned long prev_pc;
/* Coverage buffer shared with user space. */
void *area;
- /* Coverage hashmap for unique pc. */
+ /* Coverage hashmap for unique pc|cmp. */
struct kcov_map *map;
/* Edge hashmap for unique edge. */
struct kcov_map *map_edge;
@@ -289,14 +294,23 @@ static notrace inline void kcov_map_add(struct kcov_map *map, struct kcov_entry
struct kcov *kcov;
struct kcov_entry *entry;
unsigned int key = hash_key(ent);
- unsigned long pos, *area;
+ unsigned long pos, start_index, end_pos, max_pos, *area;
kcov = t->kcov;
- hash_for_each_possible_rcu(map->buckets, entry, node, key) {
- if (entry->ent == ent->ent)
- return;
- }
+ if ((mode == KCOV_MODE_TRACE_UNIQ_PC ||
+ mode == KCOV_MODE_TRACE_UNIQ_EDGE))
+ hash_for_each_possible_rcu(map->buckets, entry, node, key) {
+ if (entry->ent == ent->ent)
+ return;
+ }
+ else
+ hash_for_each_possible_rcu(map->buckets, entry, node, key) {
+ if (entry->ent == ent->ent && entry->type == ent->type &&
+ entry->arg1 == ent->arg1 && entry->arg2 == ent->arg2) {
+ return;
+ }
+ }
entry = (struct kcov_entry *)gen_pool_alloc(map->pool, 1 << MIN_POOL_ALLOC_ORDER);
if (unlikely(!entry))
@@ -306,16 +320,31 @@ static notrace inline void kcov_map_add(struct kcov_map *map, struct kcov_entry
memcpy(entry, ent, sizeof(*entry));
hash_add_rcu(map->buckets, &entry->node, key);
- if (mode == KCOV_MODE_TRACE_UNIQ_PC)
+ if (mode == KCOV_MODE_TRACE_UNIQ_PC || mode == KCOV_MODE_TRACE_UNIQ_CMP)
area = t->kcov_area;
else
area = kcov->map_edge->area;
pos = READ_ONCE(area[0]) + 1;
- if (likely(pos < t->kcov_size)) {
- WRITE_ONCE(area[0], pos);
- barrier();
- area[pos] = ent->ent;
+ if (mode == KCOV_MODE_TRACE_UNIQ_PC || mode == KCOV_MODE_TRACE_UNIQ_EDGE) {
+ if (likely(pos < t->kcov_size)) {
+ WRITE_ONCE(area[0], pos);
+ barrier();
+ area[pos] = ent->ent;
+ }
+ } else {
+ start_index = 1 + (pos - 1) * KCOV_WORDS_PER_CMP;
+ max_pos = t->kcov_size * sizeof(unsigned long);
+ end_pos = (start_index + KCOV_WORDS_PER_CMP) * sizeof(u64);
+ if (likely(end_pos <= max_pos)) {
+ /* See comment in __sanitizer_cov_trace_pc(). */
+ WRITE_ONCE(area[0], pos);
+ barrier();
+ area[start_index] = ent->type;
+ area[start_index + 1] = ent->arg1;
+ area[start_index + 2] = ent->arg2;
+ area[start_index + 3] = ent->ent;
+ }
}
}
@@ -384,33 +413,44 @@ static void notrace write_comp_data(u64 type, u64 arg1, u64 arg2, u64 ip)
struct task_struct *t;
u64 *area;
u64 count, start_index, end_pos, max_pos;
+ struct kcov_entry entry = {0};
+ unsigned int mode;
t = current;
- if (!check_kcov_mode(KCOV_MODE_TRACE_CMP, t))
+ if (!check_kcov_mode(KCOV_MODE_TRACE_CMP | KCOV_MODE_TRACE_UNIQ_CMP, t))
return;
+ mode = t->kcov_mode;
ip = canonicalize_ip(ip);
- /*
- * We write all comparison arguments and types as u64.
- * The buffer was allocated for t->kcov_size unsigned longs.
- */
- area = (u64 *)t->kcov_area;
- max_pos = t->kcov_size * sizeof(unsigned long);
-
- count = READ_ONCE(area[0]);
-
- /* Every record is KCOV_WORDS_PER_CMP 64-bit words. */
- start_index = 1 + count * KCOV_WORDS_PER_CMP;
- end_pos = (start_index + KCOV_WORDS_PER_CMP) * sizeof(u64);
- if (likely(end_pos <= max_pos)) {
- /* See comment in __sanitizer_cov_trace_pc(). */
- WRITE_ONCE(area[0], count + 1);
- barrier();
- area[start_index] = type;
- area[start_index + 1] = arg1;
- area[start_index + 2] = arg2;
- area[start_index + 3] = ip;
+ if (mode == KCOV_MODE_TRACE_CMP) {
+ /*
+ * We write all comparison arguments and types as u64.
+ * The buffer was allocated for t->kcov_size unsigned longs.
+ */
+ area = (u64 *)t->kcov_area;
+ max_pos = t->kcov_size * sizeof(unsigned long);
+
+ count = READ_ONCE(area[0]);
+
+ /* Every record is KCOV_WORDS_PER_CMP 64-bit words. */
+ start_index = 1 + count * KCOV_WORDS_PER_CMP;
+ end_pos = (start_index + KCOV_WORDS_PER_CMP) * sizeof(u64);
+ if (likely(end_pos <= max_pos)) {
+ /* See comment in __sanitizer_cov_trace_pc(). */
+ WRITE_ONCE(area[0], count + 1);
+ barrier();
+ area[start_index] = type;
+ area[start_index + 1] = arg1;
+ area[start_index + 2] = arg2;
+ area[start_index + 3] = ip;
+ }
+ } else {
+ entry.type = type;
+ entry.arg1 = arg1;
+ entry.arg2 = arg2;
+ entry.ent = ip;
+ kcov_map_add(t->kcov->map, &entry, t, KCOV_MODE_TRACE_UNIQ_CMP);
}
}
@@ -730,6 +770,12 @@ static int kcov_get_mode(unsigned long arg)
mode |= KCOV_MODE_TRACE_UNIQ_PC;
if (arg & KCOV_TRACE_UNIQ_EDGE)
mode |= KCOV_MODE_TRACE_UNIQ_EDGE;
+ if (arg == KCOV_TRACE_UNIQ_CMP)
+#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
+ mode = KCOV_MODE_TRACE_UNIQ_CMP;
+#else
+ return -EOPNOTSUPP;
+#endif
if (!mode)
return -EINVAL;
--
2.47.1
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH 5/7] kcov: add the new KCOV uniq modes example code
2025-01-14 5:34 [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes Jiao, Joey
` (3 preceding siblings ...)
2025-01-14 5:34 ` [PATCH 4/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_CMP mode Jiao, Joey
@ 2025-01-14 5:34 ` Jiao, Joey
2025-01-14 5:34 ` [PATCH 6/7] kcov: disable instrumentation for genalloc and bitmap Jiao, Joey
` (2 subsequent siblings)
7 siblings, 0 replies; 16+ messages in thread
From: Jiao, Joey @ 2025-01-14 5:34 UTC (permalink / raw)
To: Dmitry Vyukov, Andrey Konovalov, Jonathan Corbet, Andrew Morton,
Dennis Zhou, Tejun Heo, Christoph Lameter, Catalin Marinas,
Will Deacon
Cc: kasan-dev, linux-kernel, workflows, linux-doc, linux-mm,
linux-arm-kernel, kernel
- Use a single program to select among the different modes.
- Modes [0|1|2|4|8] map to KCOV_TRACE_[PC|CMP|UNIQ_PC|UNIQ_EDGE|UNIQ_CMP].
- Mode 6 maps to KCOV_TRACE_UNIQ_PC | KCOV_TRACE_UNIQ_EDGE.
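For illustration, the mode values would be passed to the example program
roughly as follows (the binary name `kcov` is an assumption; mode 6 is simply
the bitwise OR of modes 2 and 4):

```shell
# Hypothetical invocations of the example program (binary name assumed):
#   ./kcov 0   # KCOV_TRACE_PC (default)
#   ./kcov 1   # KCOV_TRACE_CMP
#   ./kcov 2   # KCOV_TRACE_UNIQ_PC
#   ./kcov 4   # KCOV_TRACE_UNIQ_EDGE
#   ./kcov 8   # KCOV_TRACE_UNIQ_CMP
# Mode 6 is the bitwise OR of the two unique modes that may be combined:
echo $(( 2 | 4 ))   # prints 6
```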
Signed-off-by: Jiao, Joey <quic_jiangenj@quicinc.com>
---
Documentation/dev-tools/kcov.rst | 243 ++++++++++++++++++++-------------------
1 file changed, 122 insertions(+), 121 deletions(-)
diff --git a/Documentation/dev-tools/kcov.rst b/Documentation/dev-tools/kcov.rst
index 6611434e2dd247c6c40afcbf1e6c4e22e0562176..061ae20b867fd9e68b447b86719733278ee6b86f 100644
--- a/Documentation/dev-tools/kcov.rst
+++ b/Documentation/dev-tools/kcov.rst
@@ -40,11 +40,12 @@ Coverage data only becomes accessible once debugfs has been mounted::
mount -t debugfs none /sys/kernel/debug
-Coverage collection
--------------------
+Coverage collection for different modes
+---------------------------------------
The following program demonstrates how to use KCOV to collect coverage for a
-single syscall from within a test program:
+single syscall from within a test program; argv[1] selects which mode to
+enable:
.. code-block:: c
@@ -60,55 +61,130 @@ single syscall from within a test program:
#include <fcntl.h>
#include <linux/types.h>
- #define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
+ #define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
#define KCOV_ENABLE _IO('c', 100)
- #define KCOV_DISABLE _IO('c', 101)
+ #define KCOV_DISABLE _IO('c', 101)
#define COVER_SIZE (64<<10)
#define KCOV_TRACE_PC 0
#define KCOV_TRACE_CMP 1
+ #define KCOV_TRACE_UNIQ_PC 2
+ #define KCOV_TRACE_UNIQ_EDGE 4
+ #define KCOV_TRACE_UNIQ_CMP 8
+
+ /* Number of 64-bit words per record. */
+ #define KCOV_WORDS_PER_CMP 4
+
+ /*
+ * The format for the types of collected comparisons.
+ *
+ * Bit 0 shows whether one of the arguments is a compile-time constant.
+ * Bits 1 & 2 contain log2 of the argument size, up to 8 bytes.
+ */
+
+ #define KCOV_CMP_CONST (1 << 0)
+ #define KCOV_CMP_SIZE(n) ((n) << 1)
+ #define KCOV_CMP_MASK KCOV_CMP_SIZE(3)
int main(int argc, char **argv)
{
- int fd;
- unsigned long *cover, n, i;
-
- /* A single fd descriptor allows coverage collection on a single
- * thread.
- */
- fd = open("/sys/kernel/debug/kcov", O_RDWR);
- if (fd == -1)
- perror("open"), exit(1);
- /* Setup trace mode and trace size. */
- if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
- perror("ioctl"), exit(1);
- /* Mmap buffer shared between kernel- and user-space. */
- cover = (unsigned long*)mmap(NULL, COVER_SIZE * sizeof(unsigned long),
- PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
- if ((void*)cover == MAP_FAILED)
- perror("mmap"), exit(1);
- /* Enable coverage collection on the current thread. */
- if (ioctl(fd, KCOV_ENABLE, KCOV_TRACE_PC))
- perror("ioctl"), exit(1);
- /* Reset coverage from the tail of the ioctl() call. */
- __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
- /* Call the target syscall call. */
- read(-1, NULL, 0);
- /* Read number of PCs collected. */
- n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
- for (i = 0; i < n; i++)
- printf("0x%lx\n", cover[i + 1]);
- /* Disable coverage collection for the current thread. After this call
- * coverage can be enabled for a different thread.
- */
- if (ioctl(fd, KCOV_DISABLE, 0))
- perror("ioctl"), exit(1);
- /* Free resources. */
- if (munmap(cover, COVER_SIZE * sizeof(unsigned long)))
- perror("munmap"), exit(1);
- if (close(fd))
- perror("close"), exit(1);
- return 0;
+ int fd;
+ unsigned long *cover, *edge, n, n1, i, type, arg1, arg2, is_const, size;
+ unsigned int mode = KCOV_TRACE_PC;
+
+ /* argv[1] controls which mode to use, defaulting to KCOV_TRACE_PC.
+ * Supported modes include:
+ * KCOV_TRACE_PC
+ * KCOV_TRACE_CMP
+ * KCOV_TRACE_UNIQ_PC
+ * KCOV_TRACE_UNIQ_EDGE
+ * KCOV_TRACE_UNIQ_PC | KCOV_TRACE_UNIQ_EDGE
+ * KCOV_TRACE_UNIQ_CMP
+ */
+ if (argc > 1)
+ mode = (unsigned int)strtoul(argv[1], NULL, 10);
+ printf("The mode is: %u\n", mode);
+ if (mode != KCOV_TRACE_PC && mode != KCOV_TRACE_CMP &&
+ !(mode & (KCOV_TRACE_UNIQ_PC | KCOV_TRACE_UNIQ_EDGE | KCOV_TRACE_UNIQ_CMP))) {
+ printf("Unsupported mode!\n");
+ exit(1);
+ }
+ /* A single fd descriptor allows coverage collection on a single
+ * thread.
+ */
+ fd = open("/sys/kernel/debug/kcov", O_RDWR);
+ if (fd == -1)
+ perror("open"), exit(1);
+ /* Setup trace mode and trace size. */
+ if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
+ perror("ioctl"), exit(1);
+ /* Mmap buffer shared between kernel- and user-space. */
+ cover = (unsigned long*)mmap(NULL, COVER_SIZE * sizeof(unsigned long),
+ PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+ if ((void*)cover == MAP_FAILED)
+ perror("mmap"), exit(1);
+ if (mode & KCOV_TRACE_UNIQ_EDGE) {
+ edge = (unsigned long*)mmap(NULL, COVER_SIZE * sizeof(unsigned long),
+ PROT_READ | PROT_WRITE, MAP_SHARED, fd, COVER_SIZE * sizeof(unsigned long));
+ if ((void*)edge == MAP_FAILED)
+ perror("mmap"), exit(1);
+ }
+ /* Enable coverage collection on the current thread. */
+ if (ioctl(fd, KCOV_ENABLE, mode))
+ perror("ioctl"), exit(1);
+ /* Reset coverage from the tail of the ioctl() call. */
+ __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
+ if (mode & KCOV_TRACE_UNIQ_EDGE)
+ __atomic_store_n(&edge[0], 0, __ATOMIC_RELAXED);
+ /* Call the target syscall call. */
+ read(-1, NULL, 0);
+ /* Read number of PCs collected. */
+ n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
+ if (mode & KCOV_TRACE_UNIQ_EDGE)
+ n1 = __atomic_load_n(&edge[0], __ATOMIC_RELAXED);
+ if (mode & (KCOV_TRACE_CMP | KCOV_TRACE_UNIQ_CMP)) {
+ for (i = 0; i < n; i++) {
+ uint64_t ip;
+
+ type = cover[i * KCOV_WORDS_PER_CMP + 1];
+ /* arg1 and arg2 - operands of the comparison. */
+ arg1 = cover[i * KCOV_WORDS_PER_CMP + 2];
+ arg2 = cover[i * KCOV_WORDS_PER_CMP + 3];
+ /* ip - caller address. */
+ ip = cover[i * KCOV_WORDS_PER_CMP + 4];
+ /* size of the operands. */
+ size = 1 << ((type & KCOV_CMP_MASK) >> 1);
+ /* is_const - true if either operand is a compile-time constant.*/
+ is_const = type & KCOV_CMP_CONST;
+ printf("ip: 0x%lx type: 0x%lx, arg1: 0x%lx, arg2: 0x%lx, "
+ "size: %lu, %s\n",
+ ip, type, arg1, arg2, size,
+ is_const ? "const" : "non-const");
+ }
+ } else {
+ for (i = 0; i < n; i++)
+ printf("0x%lx\n", cover[i + 1]);
+ if (mode & KCOV_TRACE_UNIQ_EDGE) {
+ printf("======edge======\n");
+ for (i = 0; i < n1; i++)
+ printf("0x%lx\n", edge[i + 1]);
+ }
+ }
+ /* Disable coverage collection for the current thread. After this call
+ * coverage can be enabled for a different thread.
+ */
+ if (ioctl(fd, KCOV_DISABLE, 0))
+ perror("ioctl"), exit(1);
+ /* Free resources. */
+ if (munmap(cover, COVER_SIZE * sizeof(unsigned long)))
+ perror("munmap"), exit(1);
+ if (mode & KCOV_TRACE_UNIQ_EDGE) {
+ if (munmap(edge, COVER_SIZE * sizeof(unsigned long)))
+ perror("munmap"), exit(1);
+ }
+ if (close(fd))
+ perror("close"), exit(1);
+ return 0;
}
After piping through ``addr2line`` the output of the program looks as follows::
@@ -137,85 +213,10 @@ mmaps coverage buffer, and then forks child processes in a loop. The child
processes only need to enable coverage (it gets disabled automatically when
a thread exits).
-Comparison operands collection
-------------------------------
-
-Comparison operands collection is similar to coverage collection:
-
-.. code-block:: c
-
- /* Same includes and defines as above. */
-
- /* Number of 64-bit words per record. */
- #define KCOV_WORDS_PER_CMP 4
-
- /*
- * The format for the types of collected comparisons.
- *
- * Bit 0 shows whether one of the arguments is a compile-time constant.
- * Bits 1 & 2 contain log2 of the argument size, up to 8 bytes.
- */
-
- #define KCOV_CMP_CONST (1 << 0)
- #define KCOV_CMP_SIZE(n) ((n) << 1)
- #define KCOV_CMP_MASK KCOV_CMP_SIZE(3)
-
- int main(int argc, char **argv)
- {
- int fd;
- uint64_t *cover, type, arg1, arg2, is_const, size;
- unsigned long n, i;
-
- fd = open("/sys/kernel/debug/kcov", O_RDWR);
- if (fd == -1)
- perror("open"), exit(1);
- if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
- perror("ioctl"), exit(1);
- /*
- * Note that the buffer pointer is of type uint64_t*, because all
- * the comparison operands are promoted to uint64_t.
- */
- cover = (uint64_t *)mmap(NULL, COVER_SIZE * sizeof(unsigned long),
- PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
- if ((void*)cover == MAP_FAILED)
- perror("mmap"), exit(1);
- /* Note KCOV_TRACE_CMP instead of KCOV_TRACE_PC. */
- if (ioctl(fd, KCOV_ENABLE, KCOV_TRACE_CMP))
- perror("ioctl"), exit(1);
- __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
- read(-1, NULL, 0);
- /* Read number of comparisons collected. */
- n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
- for (i = 0; i < n; i++) {
- uint64_t ip;
-
- type = cover[i * KCOV_WORDS_PER_CMP + 1];
- /* arg1 and arg2 - operands of the comparison. */
- arg1 = cover[i * KCOV_WORDS_PER_CMP + 2];
- arg2 = cover[i * KCOV_WORDS_PER_CMP + 3];
- /* ip - caller address. */
- ip = cover[i * KCOV_WORDS_PER_CMP + 4];
- /* size of the operands. */
- size = 1 << ((type & KCOV_CMP_MASK) >> 1);
- /* is_const - true if either operand is a compile-time constant.*/
- is_const = type & KCOV_CMP_CONST;
- printf("ip: 0x%lx type: 0x%lx, arg1: 0x%lx, arg2: 0x%lx, "
- "size: %lu, %s\n",
- ip, type, arg1, arg2, size,
- is_const ? "const" : "non-const");
- }
- if (ioctl(fd, KCOV_DISABLE, 0))
- perror("ioctl"), exit(1);
- /* Free resources. */
- if (munmap(cover, COVER_SIZE * sizeof(unsigned long)))
- perror("munmap"), exit(1);
- if (close(fd))
- perror("close"), exit(1);
- return 0;
- }
-
Note that the KCOV modes (collection of code coverage or comparison operands)
-are mutually exclusive.
+are mutually exclusive, except that KCOV_TRACE_UNIQ_PC and
+KCOV_TRACE_UNIQ_EDGE can be enabled together.
+
Remote coverage collection
--------------------------
--
2.47.1
* [PATCH 6/7] kcov: disable instrumentation for genalloc and bitmap
2025-01-14 5:34 [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes Jiao, Joey
` (4 preceding siblings ...)
2025-01-14 5:34 ` [PATCH 5/7] kcov: add the new KCOV uniq modes example code Jiao, Joey
@ 2025-01-14 5:34 ` Jiao, Joey
2025-01-14 5:34 ` [PATCH 7/7] arm64: disable kcov instrument in header files Jiao, Joey
2025-01-14 10:43 ` [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes Marco Elver
7 siblings, 0 replies; 16+ messages in thread
From: Jiao, Joey @ 2025-01-14 5:34 UTC (permalink / raw)
To: Dmitry Vyukov, Andrey Konovalov, Jonathan Corbet, Andrew Morton,
Dennis Zhou, Tejun Heo, Christoph Lameter, Catalin Marinas,
Will Deacon
Cc: kasan-dev, linux-kernel, workflows, linux-doc, linux-mm,
linux-arm-kernel, kernel
gen_pool_alloc() called from kcov_map_add() recurses back into the KCOV
hooks, eventually hitting:
BUG: TASK stack guard page was hit at ffffc9000451ff38.
Disable KCOV instrumentation for genalloc and bitmap to avoid the recursion.
Signed-off-by: Jiao, Joey <quic_jiangenj@quicinc.com>
---
lib/Makefile | 2 ++
1 file changed, 2 insertions(+)
diff --git a/lib/Makefile b/lib/Makefile
index a8155c972f02856fcc61ee949ddda436cfe211ff..7a110a9a4a527b881ca3a0239d0b60511cb6e38b 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -15,6 +15,8 @@ KCOV_INSTRUMENT_debugobjects.o := n
KCOV_INSTRUMENT_dynamic_debug.o := n
KCOV_INSTRUMENT_fault-inject.o := n
KCOV_INSTRUMENT_find_bit.o := n
+KCOV_INSTRUMENT_genalloc.o := n
+KCOV_INSTRUMENT_bitmap.o := n
# string.o implements standard library functions like memset/memcpy etc.
# Use -ffreestanding to ensure that the compiler does not try to "optimize"
--
2.47.1
* [PATCH 7/7] arm64: disable kcov instrument in header files
2025-01-14 5:34 [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes Jiao, Joey
` (5 preceding siblings ...)
2025-01-14 5:34 ` [PATCH 6/7] kcov: disable instrumentation for genalloc and bitmap Jiao, Joey
@ 2025-01-14 5:34 ` Jiao, Joey
2025-01-14 10:43 ` [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes Marco Elver
7 siblings, 0 replies; 16+ messages in thread
From: Jiao, Joey @ 2025-01-14 5:34 UTC (permalink / raw)
To: Dmitry Vyukov, Andrey Konovalov, Jonathan Corbet, Andrew Morton,
Dennis Zhou, Tejun Heo, Christoph Lameter, Catalin Marinas,
Will Deacon
Cc: kasan-dev, linux-kernel, workflows, linux-doc, linux-mm,
linux-arm-kernel, kernel
Disable instrumentation of helpers that cause recursive calls to
__sanitizer_cov_trace_pc().
Signed-off-by: Jiao, Joey <quic_jiangenj@quicinc.com>
---
arch/arm64/include/asm/percpu.h | 2 +-
arch/arm64/include/asm/preempt.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/percpu.h b/arch/arm64/include/asm/percpu.h
index 9abcc8ef3087b7066c82db983ae2753f30607f7f..a40ff8168151bb481756d0f6cb341aa8dc52a121 100644
--- a/arch/arm64/include/asm/percpu.h
+++ b/arch/arm64/include/asm/percpu.h
@@ -29,7 +29,7 @@ static inline unsigned long __hyp_my_cpu_offset(void)
return read_sysreg(tpidr_el2);
}
-static inline unsigned long __kern_my_cpu_offset(void)
+static __no_sanitize_coverage inline unsigned long __kern_my_cpu_offset(void)
{
unsigned long off;
diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
index 0159b625cc7f0e7d6996b34b4de8e71b04ca32e5..a8742a57481a8bf7f1e35b9cd8b0fd9a37f0ba78 100644
--- a/arch/arm64/include/asm/preempt.h
+++ b/arch/arm64/include/asm/preempt.h
@@ -8,7 +8,7 @@
#define PREEMPT_NEED_RESCHED BIT(32)
#define PREEMPT_ENABLED (PREEMPT_NEED_RESCHED)
-static inline int preempt_count(void)
+static __no_sanitize_coverage inline int preempt_count(void)
{
return READ_ONCE(current_thread_info()->preempt.count);
}
--
2.47.1
* Re: [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes
2025-01-14 5:34 [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes Jiao, Joey
` (6 preceding siblings ...)
2025-01-14 5:34 ` [PATCH 7/7] arm64: disable kcov instrument in header files Jiao, Joey
@ 2025-01-14 10:43 ` Marco Elver
2025-01-14 11:02 ` Dmitry Vyukov
2025-01-14 12:59 ` Joey Jiao
7 siblings, 2 replies; 16+ messages in thread
From: Marco Elver @ 2025-01-14 10:43 UTC (permalink / raw)
To: Jiao, Joey
Cc: Dmitry Vyukov, Andrey Konovalov, Jonathan Corbet, Andrew Morton,
Dennis Zhou, Tejun Heo, Christoph Lameter, Catalin Marinas,
Will Deacon, kasan-dev, linux-kernel, workflows, linux-doc,
linux-mm, linux-arm-kernel, kernel
On Tue, 14 Jan 2025 at 06:35, Jiao, Joey <quic_jiangenj@quicinc.com> wrote:
>
> Hi,
>
> This patch series introduces new kcov unique modes:
> `KCOV_TRACE_UNIQ_[PC|EDGE|CMP]`, which are used to collect unique PC, EDGE,
> CMP information.
>
> Background
> ----------
>
> In the current kcov implementation, when `__sanitizer_cov_trace_pc` is hit,
> the instruction pointer (IP) is stored sequentially in an area. Userspace
> programs then read this area to record covered PCs and calculate covered
> edges. However, recent syzkaller runs show that many syscalls likely have
> `pos > t->kcov_size`, leading to kcov overflow. To address this issue, we
> introduce new kcov unique modes.
Overflow by how much? How much space is missing?
> Solution Overview
> -----------------
>
> 1. [P 1] Introduce `KCOV_TRACE_UNIQ_PC` Mode:
> - Export `KCOV_TRACE_UNIQ_PC` to userspace.
> - Add `kcov_map` struct to manage memory during the KCOV lifecycle.
> - `kcov_entry` struct as a hashtable entry containing unique PCs.
> - Use hashtable buckets to link `kcov_entry`.
> - Preallocate memory using genpool during KCOV initialization.
> - Move `area` inside `kcov_map` for easier management.
> - Use `jhash` for hash key calculation to support `KCOV_TRACE_UNIQ_CMP`
> mode.
>
> 2. [P 2-3] Introduce `KCOV_TRACE_UNIQ_EDGE` Mode:
> - Save `prev_pc` to calculate edges with the current IP.
> - Add unique edges to the hashmap.
> - Use a lower 12-bit mask to make hash independent of module offsets.
> - Distinguish areas for `KCOV_TRACE_UNIQ_PC` and `KCOV_TRACE_UNIQ_EDGE`
> modes using `offset` during mmap.
> - Support enabling `KCOV_TRACE_UNIQ_PC` and `KCOV_TRACE_UNIQ_EDGE`
> together.
>
> 3. [P 4] Introduce `KCOV_TRACE_UNIQ_CMP` Mode:
> - Shares the area with `KCOV_TRACE_UNIQ_PC`, making these modes
> exclusive.
>
> 4. [P 5] Add Example Code Documentation:
> - Provide examples for testing different modes:
> - `KCOV_TRACE_PC`: `./kcov` or `./kcov 0`
> - `KCOV_TRACE_CMP`: `./kcov 1`
> - `KCOV_TRACE_UNIQ_PC`: `./kcov 2`
> - `KCOV_TRACE_UNIQ_EDGE`: `./kcov 4`
> - `KCOV_TRACE_UNIQ_PC|KCOV_TRACE_UNIQ_EDGE`: `./kcov 6`
> - `KCOV_TRACE_UNIQ_CMP`: `./kcov 8`
>
> 5. [P 6-7] Disable KCOV Instrumentation:
> - Disable instrumentation like genpool to prevent recursive calls.
>
> Caveats
> -------
>
> The userspace program has been tested on Qemu x86_64 and two real Android
> phones with different ARM64 chips. More syzkaller-compatible tests have
> been conducted. However, due to limited knowledge of other platforms,
> assistance from those with access to other systems is needed.
>
> Results and Analysis
> --------------------
>
> 1. KMEMLEAK Test on Qemu x86_64:
> - No memory leaks found during the `kcov` program run.
>
> 2. KCSAN Test on Qemu x86_64:
> - No KCSAN issues found during the `kcov` program run.
>
> 3. Existing Syzkaller on Qemu x86_64 and Real ARM64 Device:
> - Syzkaller can fuzz, show coverage, and find bugs. Adjusting `procs`
> and `vm mem` settings can avoid OOM issues caused by genpool in the
> patches, so `procs:4 + vm:2GB` or `procs:4 + vm:2GB` are used for
> Qemu x86_64.
> - `procs:8` is kept on Real ARM64 Device with 12GB/16GB mem.
>
> 4. Modified Syzkaller to Support New KCOV Unique Modes:
> - Syzkaller runs fine on both Qemu x86_64 and ARM64 real devices.
> Limited `Cover overflows` and `Comps overflows` observed.
>
> 5. Modified Syzkaller + Upstream Kernel Without Patch Series:
> - Not tested. The modified syzkaller will fall back to `KCOV_TRACE_PC`
> or `KCOV_TRACE_CMP` if `ioctl` fails for Unique mode.
>
> Possible Further Enhancements
> -----------------------------
>
> 1. Test more cases and setups, including those in syzbot.
> 2. Ensure `hash_for_each_possible_rcu` is protected for reentrance
> and atomicity.
> 3. Find a simpler and more efficient way to store unique coverage.
>
> Conclusion
> ----------
>
> These patches add new kcov unique modes to mitigate the kcov overflow
> issue, compatible with both existing and new syzkaller versions.
Thanks for the analysis, it's clearer now.
However, the new design you introduce here adds lots of complexity.
Answering the question of how much overflow is happening, might give
better clues if this is the best design or not. Because if the
overflow amount is relatively small, a better design (IMHO) might be
simply implementing a compression scheme, e.g. a simple delta
encoding.
Thanks,
-- Marco
* Re: [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes
2025-01-14 10:43 ` [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes Marco Elver
@ 2025-01-14 11:02 ` Dmitry Vyukov
2025-01-14 12:39 ` Joey Jiao
2025-01-14 12:59 ` Joey Jiao
1 sibling, 1 reply; 16+ messages in thread
From: Dmitry Vyukov @ 2025-01-14 11:02 UTC (permalink / raw)
To: Marco Elver
Cc: Jiao, Joey, Andrey Konovalov, Jonathan Corbet, Andrew Morton,
Dennis Zhou, Tejun Heo, Christoph Lameter, Catalin Marinas,
Will Deacon, kasan-dev, linux-kernel, workflows, linux-doc,
linux-mm, linux-arm-kernel, kernel
On Tue, 14 Jan 2025 at 11:43, Marco Elver <elver@google.com> wrote:
> On Tue, 14 Jan 2025 at 06:35, Jiao, Joey <quic_jiangenj@quicinc.com> wrote:
> >
> > Hi,
> >
> > This patch series introduces new kcov unique modes:
> > `KCOV_TRACE_UNIQ_[PC|EDGE|CMP]`, which are used to collect unique PC, EDGE,
> > CMP information.
> >
> > Background
> > ----------
> >
> > In the current kcov implementation, when `__sanitizer_cov_trace_pc` is hit,
> > the instruction pointer (IP) is stored sequentially in an area. Userspace
> > programs then read this area to record covered PCs and calculate covered
> > edges. However, recent syzkaller runs show that many syscalls likely have
> > `pos > t->kcov_size`, leading to kcov overflow. To address this issue, we
> > introduce new kcov unique modes.
>
> Overflow by how much? How much space is missing?
>
> > Solution Overview
> > -----------------
> >
> > 1. [P 1] Introduce `KCOV_TRACE_UNIQ_PC` Mode:
> > - Export `KCOV_TRACE_UNIQ_PC` to userspace.
> > - Add `kcov_map` struct to manage memory during the KCOV lifecycle.
> > - `kcov_entry` struct as a hashtable entry containing unique PCs.
> > - Use hashtable buckets to link `kcov_entry`.
> > - Preallocate memory using genpool during KCOV initialization.
> > - Move `area` inside `kcov_map` for easier management.
> > - Use `jhash` for hash key calculation to support `KCOV_TRACE_UNIQ_CMP`
> > mode.
> >
> > 2. [P 2-3] Introduce `KCOV_TRACE_UNIQ_EDGE` Mode:
> > - Save `prev_pc` to calculate edges with the current IP.
> > - Add unique edges to the hashmap.
> > - Use a lower 12-bit mask to make hash independent of module offsets.
> > - Distinguish areas for `KCOV_TRACE_UNIQ_PC` and `KCOV_TRACE_UNIQ_EDGE`
> > modes using `offset` during mmap.
> > - Support enabling `KCOV_TRACE_UNIQ_PC` and `KCOV_TRACE_UNIQ_EDGE`
> > together.
> >
> > 3. [P 4] Introduce `KCOV_TRACE_UNIQ_CMP` Mode:
> > - Shares the area with `KCOV_TRACE_UNIQ_PC`, making these modes
> > exclusive.
> >
> > 4. [P 5] Add Example Code Documentation:
> > - Provide examples for testing different modes:
> > - `KCOV_TRACE_PC`: `./kcov` or `./kcov 0`
> > - `KCOV_TRACE_CMP`: `./kcov 1`
> > - `KCOV_TRACE_UNIQ_PC`: `./kcov 2`
> > - `KCOV_TRACE_UNIQ_EDGE`: `./kcov 4`
> > - `KCOV_TRACE_UNIQ_PC|KCOV_TRACE_UNIQ_EDGE`: `./kcov 6`
> > - `KCOV_TRACE_UNIQ_CMP`: `./kcov 8`
> >
> > 5. [P 6-7] Disable KCOV Instrumentation:
> > - Disable instrumentation like genpool to prevent recursive calls.
> >
> > Caveats
> > -------
> >
> > The userspace program has been tested on Qemu x86_64 and two real Android
> > phones with different ARM64 chips. More syzkaller-compatible tests have
> > been conducted. However, due to limited knowledge of other platforms,
> > assistance from those with access to other systems is needed.
> >
> > Results and Analysis
> > --------------------
> >
> > 1. KMEMLEAK Test on Qemu x86_64:
> > - No memory leaks found during the `kcov` program run.
> >
> > 2. KCSAN Test on Qemu x86_64:
> > - No KCSAN issues found during the `kcov` program run.
> >
> > 3. Existing Syzkaller on Qemu x86_64 and Real ARM64 Device:
> > - Syzkaller can fuzz, show coverage, and find bugs. Adjusting `procs`
> > and `vm mem` settings can avoid OOM issues caused by genpool in the
> > patches, so `procs:4 + vm:2GB` or `procs:4 + vm:2GB` are used for
> > Qemu x86_64.
> > - `procs:8` is kept on Real ARM64 Device with 12GB/16GB mem.
> >
> > 4. Modified Syzkaller to Support New KCOV Unique Modes:
> > - Syzkaller runs fine on both Qemu x86_64 and ARM64 real devices.
> > Limited `Cover overflows` and `Comps overflows` observed.
> >
> > 5. Modified Syzkaller + Upstream Kernel Without Patch Series:
> > - Not tested. The modified syzkaller will fall back to `KCOV_TRACE_PC`
> > or `KCOV_TRACE_CMP` if `ioctl` fails for Unique mode.
> >
> > Possible Further Enhancements
> > -----------------------------
> >
> > 1. Test more cases and setups, including those in syzbot.
> > 2. Ensure `hash_for_each_possible_rcu` is protected for reentrance
> > and atomicity.
> > 3. Find a simpler and more efficient way to store unique coverage.
> >
> > Conclusion
> > ----------
> >
> > These patches add new kcov unique modes to mitigate the kcov overflow
> > issue, compatible with both existing and new syzkaller versions.
>
> Thanks for the analysis, it's clearer now.
>
> However, the new design you introduce here adds lots of complexity.
> Answering the question of how much overflow is happening, might give
> better clues if this is the best design or not. Because if the
> overflow amount is relatively small, a better design (IMHO) might be
> simply implementing a compression scheme, e.g. a simple delta
> encoding.
Joey, do you have corresponding patches for syzkaller? I wonder how
the integration looks like, in particular when/how these maps are
cleared.
* Re: [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes
2025-01-14 11:02 ` Dmitry Vyukov
@ 2025-01-14 12:39 ` Joey Jiao
0 siblings, 0 replies; 16+ messages in thread
From: Joey Jiao @ 2025-01-14 12:39 UTC (permalink / raw)
To: Dmitry Vyukov
Cc: Marco Elver, Andrey Konovalov, Jonathan Corbet, Andrew Morton,
Dennis Zhou, Tejun Heo, Christoph Lameter, Catalin Marinas,
Will Deacon, kasan-dev, linux-kernel, workflows, linux-doc,
linux-mm, linux-arm-kernel, kernel
On Tue, Jan 14, 2025 at 12:02:31PM +0100, Dmitry Vyukov wrote:
> On Tue, 14 Jan 2025 at 11:43, Marco Elver <elver@google.com> wrote:
> > On Tue, 14 Jan 2025 at 06:35, Jiao, Joey <quic_jiangenj@quicinc.com> wrote:
> > >
> > > Hi,
> > >
> > > This patch series introduces new kcov unique modes:
> > > `KCOV_TRACE_UNIQ_[PC|EDGE|CMP]`, which are used to collect unique PC, EDGE,
> > > CMP information.
> > >
> > > Background
> > > ----------
> > >
> > > In the current kcov implementation, when `__sanitizer_cov_trace_pc` is hit,
> > > the instruction pointer (IP) is stored sequentially in an area. Userspace
> > > programs then read this area to record covered PCs and calculate covered
> > > edges. However, recent syzkaller runs show that many syscalls likely have
> > > `pos > t->kcov_size`, leading to kcov overflow. To address this issue, we
> > > introduce new kcov unique modes.
> >
> > Overflow by how much? How much space is missing?
> >
> > > Solution Overview
> > > -----------------
> > >
> > > 1. [P 1] Introduce `KCOV_TRACE_UNIQ_PC` Mode:
> > > - Export `KCOV_TRACE_UNIQ_PC` to userspace.
> > > - Add `kcov_map` struct to manage memory during the KCOV lifecycle.
> > > - `kcov_entry` struct as a hashtable entry containing unique PCs.
> > > - Use hashtable buckets to link `kcov_entry`.
> > > - Preallocate memory using genpool during KCOV initialization.
> > > - Move `area` inside `kcov_map` for easier management.
> > > - Use `jhash` for hash key calculation to support `KCOV_TRACE_UNIQ_CMP`
> > > mode.
> > >
> > > 2. [P 2-3] Introduce `KCOV_TRACE_UNIQ_EDGE` Mode:
> > > - Save `prev_pc` to calculate edges with the current IP.
> > > - Add unique edges to the hashmap.
> > > - Use a lower 12-bit mask to make hash independent of module offsets.
> > > - Distinguish areas for `KCOV_TRACE_UNIQ_PC` and `KCOV_TRACE_UNIQ_EDGE`
> > > modes using `offset` during mmap.
> > > - Support enabling `KCOV_TRACE_UNIQ_PC` and `KCOV_TRACE_UNIQ_EDGE`
> > > together.
> > >
> > > 3. [P 4] Introduce `KCOV_TRACE_UNIQ_CMP` Mode:
> > > - Shares the area with `KCOV_TRACE_UNIQ_PC`, making these modes
> > > exclusive.
> > >
> > > 4. [P 5] Add Example Code Documentation:
> > > - Provide examples for testing different modes:
> > > - `KCOV_TRACE_PC`: `./kcov` or `./kcov 0`
> > > - `KCOV_TRACE_CMP`: `./kcov 1`
> > > - `KCOV_TRACE_UNIQ_PC`: `./kcov 2`
> > > - `KCOV_TRACE_UNIQ_EDGE`: `./kcov 4`
> > > - `KCOV_TRACE_UNIQ_PC|KCOV_TRACE_UNIQ_EDGE`: `./kcov 6`
> > > - `KCOV_TRACE_UNIQ_CMP`: `./kcov 8`
> > >
> > > 5. [P 6-7] Disable KCOV Instrumentation:
> > > - Disable instrumentation like genpool to prevent recursive calls.
> > >
> > > Caveats
> > > -------
> > >
> > > The userspace program has been tested on Qemu x86_64 and two real Android
> > > phones with different ARM64 chips. More syzkaller-compatible tests have
> > > been conducted. However, due to limited knowledge of other platforms,
> > > assistance from those with access to other systems is needed.
> > >
> > > Results and Analysis
> > > --------------------
> > >
> > > 1. KMEMLEAK Test on Qemu x86_64:
> > > - No memory leaks found during the `kcov` program run.
> > >
> > > 2. KCSAN Test on Qemu x86_64:
> > > - No KCSAN issues found during the `kcov` program run.
> > >
> > > 3. Existing Syzkaller on Qemu x86_64 and Real ARM64 Device:
> > > - Syzkaller can fuzz, show coverage, and find bugs. Adjusting `procs`
> > > and `vm mem` settings can avoid OOM issues caused by genpool in the
> > > patches, so `procs:4 + vm:2GB` or `procs:4 + vm:2GB` are used for
> > > Qemu x86_64.
> > > - `procs:8` is kept on Real ARM64 Device with 12GB/16GB mem.
> > >
> > > 4. Modified Syzkaller to Support New KCOV Unique Modes:
> > > - Syzkaller runs fine on both Qemu x86_64 and ARM64 real devices.
> > > Limited `Cover overflows` and `Comps overflows` observed.
> > >
> > > 5. Modified Syzkaller + Upstream Kernel Without Patch Series:
> > > - Not tested. The modified syzkaller will fall back to `KCOV_TRACE_PC`
> > > or `KCOV_TRACE_CMP` if `ioctl` fails for Unique mode.
> > >
> > > Possible Further Enhancements
> > > -----------------------------
> > >
> > > 1. Test more cases and setups, including those in syzbot.
> > > 2. Ensure `hash_for_each_possible_rcu` is protected for reentrance
> > > and atomicity.
> > > 3. Find a simpler and more efficient way to store unique coverage.
> > >
> > > Conclusion
> > > ----------
> > >
> > > These patches add new kcov unique modes to mitigate the kcov overflow
> > > issue, compatible with both existing and new syzkaller versions.
> >
> > Thanks for the analysis, it's clearer now.
> >
> > However, the new design you introduce here adds lots of complexity.
> > Answering the question of how much overflow is happening, might give
> > better clues if this is the best design or not. Because if the
> > overflow amount is relatively small, a better design (IMHO) might be
> > simply implementing a compression scheme, e.g. a simple delta
> > encoding.
>
> Joey, do you have corresponding patches for syzkaller? I wonder how
> the integration looks like, in particular when/how these maps are
> cleared.
Uploaded in https://github.com/google/syzkaller/pull/5673
* Re: [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes
2025-01-14 10:43 ` [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes Marco Elver
2025-01-14 11:02 ` Dmitry Vyukov
@ 2025-01-14 12:59 ` Joey Jiao
2025-01-15 15:16 ` Alexander Potapenko
1 sibling, 1 reply; 16+ messages in thread
From: Joey Jiao @ 2025-01-14 12:59 UTC (permalink / raw)
To: Marco Elver
Cc: Dmitry Vyukov, Andrey Konovalov, Jonathan Corbet, Andrew Morton,
Dennis Zhou, Tejun Heo, Christoph Lameter, Catalin Marinas,
Will Deacon, kasan-dev, linux-kernel, workflows, linux-doc,
linux-mm, linux-arm-kernel, kernel
On Tue, Jan 14, 2025 at 11:43:08AM +0100, Marco Elver wrote:
> On Tue, 14 Jan 2025 at 06:35, Jiao, Joey <quic_jiangenj@quicinc.com> wrote:
> >
> > Hi,
> >
> > This patch series introduces new kcov unique modes:
> > `KCOV_TRACE_UNIQ_[PC|EDGE|CMP]`, which are used to collect unique PC, EDGE,
> > CMP information.
> >
> > Background
> > ----------
> >
> > In the current kcov implementation, when `__sanitizer_cov_trace_pc` is hit,
> > the instruction pointer (IP) is stored sequentially in an area. Userspace
> > programs then read this area to record covered PCs and calculate covered
> > edges. However, recent syzkaller runs show that many syscalls likely have
> > `pos > t->kcov_size`, leading to kcov overflow. To address this issue, we
> > introduce new kcov unique modes.
>
> Overflow by how much? How much space is missing?
Ideally we should report the actual pos, but the test in syzkaller only counts
how many times the overflow occurs. I suspect pos is much larger than the cover
size: with the original 64KB cover size the overflow happens; syzkaller now
sets it to 1MB, yet the `ioctl$DMA_HEAP_IOCTL_ALLOC` syscall, which has only 19
inputs, still overflows 3535 times. The mmap syscall also overflows 10873 times
with 181 inputs in my case. Internally I also tried a 64MB cover size, but I
still see overflows. Running syz-execprog with the -cover option shows that
many PCs are hit frequently, but disabling instrumentation for each of these
PCs is less efficient and sometimes not enough to fix the overflow problem.
I think the overflow happens more frequently on the arm64 device, as I found
that functions in header files are hit frequently.
I'm not able to access the syzbot backend syz-manager data; perhaps the qemu
x86_64 setup has more info.
>
> > Solution Overview
> > -----------------
> >
> > 1. [P 1] Introduce `KCOV_TRACE_UNIQ_PC` Mode:
> > - Export `KCOV_TRACE_UNIQ_PC` to userspace.
> > - Add `kcov_map` struct to manage memory during the KCOV lifecycle.
> > - `kcov_entry` struct as a hashtable entry containing unique PCs.
> > - Use hashtable buckets to link `kcov_entry`.
> > - Preallocate memory using genpool during KCOV initialization.
> > - Move `area` inside `kcov_map` for easier management.
> > - Use `jhash` for hash key calculation to support `KCOV_TRACE_UNIQ_CMP`
> > mode.
> >
> > 2. [P 2-3] Introduce `KCOV_TRACE_UNIQ_EDGE` Mode:
> > - Save `prev_pc` to calculate edges with the current IP.
> > - Add unique edges to the hashmap.
> > - Use a lower 12-bit mask to make hash independent of module offsets.
> > - Distinguish areas for `KCOV_TRACE_UNIQ_PC` and `KCOV_TRACE_UNIQ_EDGE`
> > modes using `offset` during mmap.
> > - Support enabling `KCOV_TRACE_UNIQ_PC` and `KCOV_TRACE_UNIQ_EDGE`
> > together.
> >
> > 3. [P 4] Introduce `KCOV_TRACE_UNIQ_CMP` Mode:
> > - Shares the area with `KCOV_TRACE_UNIQ_PC`, making these modes
> > exclusive.
> >
> > 4. [P 5] Add Example Code Documentation:
> > - Provide examples for testing different modes:
> > - `KCOV_TRACE_PC`: `./kcov` or `./kcov 0`
> > - `KCOV_TRACE_CMP`: `./kcov 1`
> > - `KCOV_TRACE_UNIQ_PC`: `./kcov 2`
> > - `KCOV_TRACE_UNIQ_EDGE`: `./kcov 4`
> > - `KCOV_TRACE_UNIQ_PC|KCOV_TRACE_UNIQ_EDGE`: `./kcov 6`
> > - `KCOV_TRACE_UNIQ_CMP`: `./kcov 8`
> >
> > 5. [P 6-7] Disable KCOV Instrumentation:
> > - Disable instrumentation like genpool to prevent recursive calls.
> >
> > Caveats
> > -------
> >
> > The userspace program has been tested on Qemu x86_64 and two real Android
> > phones with different ARM64 chips. More syzkaller-compatible tests have
> > been conducted. However, due to limited knowledge of other platforms,
> > assistance from those with access to other systems is needed.
> >
> > Results and Analysis
> > --------------------
> >
> > 1. KMEMLEAK Test on Qemu x86_64:
> > - No memory leaks found during the `kcov` program run.
> >
> > 2. KCSAN Test on Qemu x86_64:
> > - No KCSAN issues found during the `kcov` program run.
> >
> > 3. Existing Syzkaller on Qemu x86_64 and Real ARM64 Device:
> > - Syzkaller can fuzz, show coverage, and find bugs. Adjusting `procs`
> > and `vm mem` settings can avoid OOM issues caused by genpool in the
> > patches, so `procs:4 + vm:2GB` or `procs:4 + vm:2GB` are used for
> > Qemu x86_64.
> > - `procs:8` is kept on Real ARM64 Device with 12GB/16GB mem.
> >
> > 4. Modified Syzkaller to Support New KCOV Unique Modes:
> > - Syzkaller runs fine on both Qemu x86_64 and ARM64 real devices.
> > Limited `Cover overflows` and `Comps overflows` observed.
> >
> > 5. Modified Syzkaller + Upstream Kernel Without Patch Series:
> > - Not tested. The modified syzkaller will fall back to `KCOV_TRACE_PC`
> > or `KCOV_TRACE_CMP` if `ioctl` fails for Unique mode.
> >
> > Possible Further Enhancements
> > -----------------------------
> >
> > 1. Test more cases and setups, including those in syzbot.
> > 2. Ensure `hash_for_each_possible_rcu` is protected for reentrance
> > and atomicity.
> > 3. Find a simpler and more efficient way to store unique coverage.
> >
> > Conclusion
> > ----------
> >
> > These patches add new kcov unique modes to mitigate the kcov overflow
> > issue, compatible with both existing and new syzkaller versions.
>
> Thanks for the analysis, it's clearer now.
>
> However, the new design you introduce here adds lots of complexity.
> Answering the question of how much overflow is happening, might give
> better clues if this is the best design or not. Because if the
> overflow amount is relatively small, a better design (IMHO) might be
> simply implementing a compression scheme, e.g. a simple delta
> encoding.
I tried many ways to store the unique info, such as a bitmap, a segmented
bitmap, and a customized allocator plus an allocation index, and also
considered rhashtable, but a hashmap (maybe rhashtable) seems best.
I also tried a full bitmap recording all PCs from all threads, which showed
that syzkaller failed to find new coverage even though the full bitmap had
recorded it. If I replay the syzkaller log (or prog), kernel GCOV also shows
these functions/lines being hit (not because of flakiness or interrupts), yet
the syzkaller coverage doesn't have that data, which is further evidence of the
kcov overflow.
>
> Thanks,
> -- Marco
* Re: [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes
2025-01-14 12:59 ` Joey Jiao
@ 2025-01-15 15:16 ` Alexander Potapenko
2025-01-16 1:16 ` Joey Jiao
0 siblings, 1 reply; 16+ messages in thread
From: Alexander Potapenko @ 2025-01-15 15:16 UTC (permalink / raw)
To: Joey Jiao
Cc: Marco Elver, Dmitry Vyukov, Andrey Konovalov, Jonathan Corbet,
Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter,
Catalin Marinas, Will Deacon, kasan-dev, linux-kernel, workflows,
linux-doc, linux-mm, linux-arm-kernel, kernel
On Tue, Jan 14, 2025 at 2:00 PM Joey Jiao <quic_jiangenj@quicinc.com> wrote:
>
> On Tue, Jan 14, 2025 at 11:43:08AM +0100, Marco Elver wrote:
> > On Tue, 14 Jan 2025 at 06:35, Jiao, Joey <quic_jiangenj@quicinc.com> wrote:
> > >
> > > Hi,
> > >
> > > This patch series introduces new kcov unique modes:
> > > `KCOV_TRACE_UNIQ_[PC|EDGE|CMP]`, which are used to collect unique PC, EDGE,
> > > CMP information.
> > >
> > > Background
> > > ----------
> > >
> > > In the current kcov implementation, when `__sanitizer_cov_trace_pc` is hit,
> > > the instruction pointer (IP) is stored sequentially in an area. Userspace
> > > programs then read this area to record covered PCs and calculate covered
> > > edges. However, recent syzkaller runs show that many syscalls likely have
> > > `pos > t->kcov_size`, leading to kcov overflow. To address this issue, we
> > > introduce new kcov unique modes.
Hi Joey,
Sorry for not responding earlier, I thought I'd come with a working
proposal, but it is taking a while.
You are right that kcov is prone to overflows, and we might be missing
interesting coverage because of that.
Recently we've been discussing the applicability of
-fsanitize-coverage=trace-pc-guard to this problem, and it is almost
working already.
The idea is as follows:
- -fsanitize-coverage=trace-pc-guard instruments basic blocks with
calls to `__sanitizer_cov_trace_pc_guard(u32 *guard)`, each taking a
unique 32-bit global in the __sancov_guards section;
- these globals are zero-initialized, but upon the first call to
__sanitizer_cov_trace_pc_guard() from each callsite, the corresponding
global will receive a unique consecutive number;
- now we have a mapping of PCs into indices, which we can use to
deduplicate the coverage:
-- storing PCs by their index taken from *guard directly in the
user-supplied buffer (whose size will not exceed several megabytes in
practice);
-- using a per-task bitmap (at most hundreds of kilobytes) to mark
visited basic blocks, and appending newly encountered PCs to the
user-supplied buffer like it's done now.
I think this approach is more promising than using hashmaps in kcov:
- direct mapping should be way faster than a hashmap (and the overhead
of index allocation is amortized, because they are persistent between
program runs);
- there cannot be collisions;
- no additional complexity from pool allocations, RCU synchronization.
The above approach will naturally break edge coverage, as there will
be no notion of a program trace anymore.
But it is still a question whether edges are helping the fuzzer, and
correctly deduplicating them may not be worth the effort.
If you don't object, I would like to finish prototyping coverage
guards for kcov before proceeding with this review.
Alex
> > > 2. [P 2-3] Introduce `KCOV_TRACE_UNIQ_EDGE` Mode:
> > > - Save `prev_pc` to calculate edges with the current IP.
> > > - Add unique edges to the hashmap.
> > > - Use a lower 12-bit mask to make hash independent of module offsets.
Note that on ARM64 this will be effectively using bits 11:2, so if I
am understanding correctly more than a million coverage callbacks will
be mapped into one of 1024 buckets.
* Re: [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes
2025-01-15 15:16 ` Alexander Potapenko
@ 2025-01-16 1:16 ` Joey Jiao
0 siblings, 0 replies; 16+ messages in thread
From: Joey Jiao @ 2025-01-16 1:16 UTC (permalink / raw)
To: Alexander Potapenko
Cc: Marco Elver, Dmitry Vyukov, Andrey Konovalov, Jonathan Corbet,
Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter,
Catalin Marinas, Will Deacon, kasan-dev, linux-kernel, workflows,
linux-doc, linux-mm, linux-arm-kernel, kernel
On Wed, Jan 15, 2025 at 04:16:57PM +0100, Alexander Potapenko wrote:
> On Tue, Jan 14, 2025 at 2:00 PM Joey Jiao <quic_jiangenj@quicinc.com> wrote:
> >
> > On Tue, Jan 14, 2025 at 11:43:08AM +0100, Marco Elver wrote:
> > > On Tue, 14 Jan 2025 at 06:35, Jiao, Joey <quic_jiangenj@quicinc.com> wrote:
> > > >
> > > > Hi,
> > > >
> > > > This patch series introduces new kcov unique modes:
> > > > `KCOV_TRACE_UNIQ_[PC|EDGE|CMP]`, which are used to collect unique PC, EDGE,
> > > > CMP information.
> > > >
> > > > Background
> > > > ----------
> > > >
> > > > In the current kcov implementation, when `__sanitizer_cov_trace_pc` is hit,
> > > > the instruction pointer (IP) is stored sequentially in an area. Userspace
> > > > programs then read this area to record covered PCs and calculate covered
> > > > edges. However, recent syzkaller runs show that many syscalls likely have
> > > > `pos > t->kcov_size`, leading to kcov overflow. To address this issue, we
> > > > introduce new kcov unique modes.
>
> Hi Joey,
>
> Sorry for not responding earlier, I thought I'd come with a working
> proposal, but it is taking a while.
> You are right that kcov is prone to overflows, and we might be missing
> interesting coverage because of that.
>
> Recently we've been discussing the applicability of
> -fsanitize-coverage=trace-pc-guard to this problem, and it is almost
> working already.
Can you share the patch? I tried trace-pc-guard but ran into the same
unique-info problem.
> The idea is as follows:
> - -fsanitize-coverage=trace-pc-guard instruments basic blocks with
> calls to `__sanitizer_cov_trace_pc_guard(u32 *guard)`, each taking a
> unique 32-bit global in the __sancov_guards section;
> - these globals are zero-initialized, but upon the first call to
> __sanitizer_cov_trace_pc_guard() from each callsite, the corresponding
> global will receive a unique consequent number;
> - now we have a mapping of PCs into indices, which can we use to
> deduplicate the coverage:
> -- storing PCs by their index taken from *guard directly in the
> user-supplied buffer (which size will not exceed several megabytes in
> practice);
> -- using a per-task bitmap (at most hundreds of kilobytes) to mark
> visited basic blocks, and appending newly encountered PCs to the
> user-supplied buffer like it's done now.
Why at most hundreds of kilobytes? Is it still stored sequentially? If we
assume 2GB of kernel text, wouldn't the bitmap need 64MB to cover the unique
basic blocks?
>
> I think this approach is more promising than using hashmaps in kcov:
> - direct mapping should be way faster than a hashmap (and the overhead
> of index allocation is amortized, because they are persistent between
> program runs);
> - there cannot be collisions;
> - no additional complexity from pool allocations, RCU synchronization.
>
> The above approach will naturally break edge coverage, as there will
> be no notion of a program trace anymore.
I think the guard value is equivalent in effect to an edge? Could we use the
guard value in syzkaller as edge info?
> But it is still a question whether edges are helping the fuzzer, and
> correctly deduplicating them may not be worth the effort.
>
> If you don't object, I would like to finish prototyping coverage
> guards for kcov before proceeding with this review.
>
> Alex
Thanks Alex, sure, please continue with the guards patches.
Also I think we can still store the covered PC inside
__sanitizer_cov_trace_pc_guard, right?
+void notrace __sanitizer_cov_trace_pc_guard(u32 *guard) {
+ struct task_struct *t;
+ struct kcov *kcov;
+ unsigned long ip = canonicalize_ip(_RET_IP_);
+
+ if (!*guard)
+ return;
>
> > > > 2. [P 2-3] Introduce `KCOV_TRACE_UNIQ_EDGE` Mode:
> > > > - Save `prev_pc` to calculate edges with the current IP.
> > > > - Add unique edges to the hashmap.
> > > > - Use a lower 12-bit mask to make hash independent of module offsets.
>
> Note that on ARM64 this will be effectively using bits 11:2, so if I
> am understanding correctly more than a million coverage callbacks will
> be mapped into one of 1024 buckets.
* Re: [PATCH 4/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_CMP mode
2025-01-14 5:34 ` [PATCH 4/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_CMP mode Jiao, Joey
@ 2025-01-24 2:11 ` kernel test robot
2025-01-24 12:26 ` kernel test robot
1 sibling, 0 replies; 16+ messages in thread
From: kernel test robot @ 2025-01-24 2:11 UTC (permalink / raw)
To: Jiao, Joey, Dmitry Vyukov, Andrey Konovalov, Jonathan Corbet,
Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter,
Catalin Marinas, Will Deacon
Cc: oe-kbuild-all, Linux Memory Management List, kasan-dev,
linux-kernel, workflows, linux-doc, linux-arm-kernel, kernel
Hi Joey,
kernel test robot noticed the following build errors:
[auto build test ERROR on 9b2ffa6148b1e4468d08f7e0e7e371c43cac9ffe]
url: https://github.com/intel-lab-lkp/linux/commits/Jiao-Joey/kcov-introduce-new-kcov-KCOV_TRACE_UNIQ_PC-mode/20250114-133713
base: 9b2ffa6148b1e4468d08f7e0e7e371c43cac9ffe
patch link: https://lore.kernel.org/r/20250114-kcov-v1-4-004294b931a2%40quicinc.com
patch subject: [PATCH 4/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_CMP mode
config: mips-randconfig-r073-20250124 (https://download.01.org/0day-ci/archive/20250124/202501240959.61XLxBYF-lkp@intel.com/config)
compiler: mips-linux-gcc (GCC) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250124/202501240959.61XLxBYF-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202501240959.61XLxBYF-lkp@intel.com/
All errors (new ones prefixed by >>):
kernel/kcov.c: In function 'kcov_map_add':
>> kernel/kcov.c:309:60: error: 'struct kcov_entry' has no member named 'type'
309 | if (entry->ent == ent->ent && entry->type == ent->type &&
| ^~
kernel/kcov.c:309:73: error: 'struct kcov_entry' has no member named 'type'
309 | if (entry->ent == ent->ent && entry->type == ent->type &&
| ^~
>> kernel/kcov.c:310:34: error: 'struct kcov_entry' has no member named 'arg1'
310 | entry->arg1 == ent->arg1 && entry->arg2 == ent->arg2) {
| ^~
kernel/kcov.c:310:47: error: 'struct kcov_entry' has no member named 'arg1'
310 | entry->arg1 == ent->arg1 && entry->arg2 == ent->arg2) {
| ^~
>> kernel/kcov.c:310:62: error: 'struct kcov_entry' has no member named 'arg2'
310 | entry->arg1 == ent->arg1 && entry->arg2 == ent->arg2) {
| ^~
kernel/kcov.c:310:75: error: 'struct kcov_entry' has no member named 'arg2'
310 | entry->arg1 == ent->arg1 && entry->arg2 == ent->arg2) {
| ^~
kernel/kcov.c:343:48: error: 'struct kcov_entry' has no member named 'type'
343 | area[start_index] = ent->type;
| ^~
kernel/kcov.c:344:52: error: 'struct kcov_entry' has no member named 'arg1'
344 | area[start_index + 1] = ent->arg1;
| ^~
kernel/kcov.c:345:52: error: 'struct kcov_entry' has no member named 'arg2'
345 | area[start_index + 2] = ent->arg2;
| ^~
vim +309 kernel/kcov.c
290
291 static notrace inline void kcov_map_add(struct kcov_map *map, struct kcov_entry *ent,
292 struct task_struct *t, unsigned int mode)
293 {
294 struct kcov *kcov;
295 struct kcov_entry *entry;
296 unsigned int key = hash_key(ent);
297 unsigned long pos, start_index, end_pos, max_pos, *area;
298
299 kcov = t->kcov;
300
301 if ((mode == KCOV_MODE_TRACE_UNIQ_PC ||
302 mode == KCOV_MODE_TRACE_UNIQ_EDGE))
303 hash_for_each_possible_rcu(map->buckets, entry, node, key) {
304 if (entry->ent == ent->ent)
305 return;
306 }
307 else
308 hash_for_each_possible_rcu(map->buckets, entry, node, key) {
> 309 if (entry->ent == ent->ent && entry->type == ent->type &&
> 310 entry->arg1 == ent->arg1 && entry->arg2 == ent->arg2) {
311 return;
312 }
313 }
314
315 entry = (struct kcov_entry *)gen_pool_alloc(map->pool, 1 << MIN_POOL_ALLOC_ORDER);
316 if (unlikely(!entry))
317 return;
318
319 barrier();
320 memcpy(entry, ent, sizeof(*entry));
321 hash_add_rcu(map->buckets, &entry->node, key);
322
323 if (mode == KCOV_MODE_TRACE_UNIQ_PC || mode == KCOV_MODE_TRACE_UNIQ_CMP)
324 area = t->kcov_area;
325 else
326 area = kcov->map_edge->area;
327
328 pos = READ_ONCE(area[0]) + 1;
329 if (mode == KCOV_MODE_TRACE_UNIQ_PC || mode == KCOV_MODE_TRACE_UNIQ_EDGE) {
330 if (likely(pos < t->kcov_size)) {
331 WRITE_ONCE(area[0], pos);
332 barrier();
333 area[pos] = ent->ent;
334 }
335 } else {
336 start_index = 1 + (pos - 1) * KCOV_WORDS_PER_CMP;
337 max_pos = t->kcov_size * sizeof(unsigned long);
338 end_pos = (start_index + KCOV_WORDS_PER_CMP) * sizeof(u64);
339 if (likely(end_pos <= max_pos)) {
340 /* See comment in __sanitizer_cov_trace_pc(). */
341 WRITE_ONCE(area[0], pos);
342 barrier();
343 area[start_index] = ent->type;
344 area[start_index + 1] = ent->arg1;
345 area[start_index + 2] = ent->arg2;
346 area[start_index + 3] = ent->ent;
347 }
348 }
349 }
350
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH 4/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_CMP mode
2025-01-14 5:34 ` [PATCH 4/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_CMP mode Jiao, Joey
2025-01-24 2:11 ` kernel test robot
@ 2025-01-24 12:26 ` kernel test robot
1 sibling, 0 replies; 16+ messages in thread
From: kernel test robot @ 2025-01-24 12:26 UTC (permalink / raw)
To: Jiao, Joey, Dmitry Vyukov, Andrey Konovalov, Jonathan Corbet,
Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter,
Catalin Marinas, Will Deacon
Cc: llvm, oe-kbuild-all, Linux Memory Management List, kasan-dev,
linux-kernel, workflows, linux-doc, linux-arm-kernel, kernel
Hi Joey,
kernel test robot noticed the following build errors:
[auto build test ERROR on 9b2ffa6148b1e4468d08f7e0e7e371c43cac9ffe]
url: https://github.com/intel-lab-lkp/linux/commits/Jiao-Joey/kcov-introduce-new-kcov-KCOV_TRACE_UNIQ_PC-mode/20250114-133713
base: 9b2ffa6148b1e4468d08f7e0e7e371c43cac9ffe
patch link: https://lore.kernel.org/r/20250114-kcov-v1-4-004294b931a2%40quicinc.com
patch subject: [PATCH 4/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_CMP mode
config: x86_64-randconfig-001-20250124 (https://download.01.org/0day-ci/archive/20250124/202501242043.KmrFufhL-lkp@intel.com/config)
compiler: clang version 19.1.3 (https://github.com/llvm/llvm-project ab51eccf88f5321e7c60591c5546b254b6afab99)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250124/202501242043.KmrFufhL-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202501242043.KmrFufhL-lkp@intel.com/
All errors (new ones prefixed by >>):
>> kernel/kcov.c:309:41: error: no member named 'type' in 'struct kcov_entry'
309 | if (entry->ent == ent->ent && entry->type == ent->type &&
| ~~~~~ ^
kernel/kcov.c:309:54: error: no member named 'type' in 'struct kcov_entry'
309 | if (entry->ent == ent->ent && entry->type == ent->type &&
| ~~~ ^
>> kernel/kcov.c:310:15: error: no member named 'arg1' in 'struct kcov_entry'
310 | entry->arg1 == ent->arg1 && entry->arg2 == ent->arg2) {
| ~~~~~ ^
kernel/kcov.c:310:28: error: no member named 'arg1' in 'struct kcov_entry'
310 | entry->arg1 == ent->arg1 && entry->arg2 == ent->arg2) {
| ~~~ ^
>> kernel/kcov.c:310:43: error: no member named 'arg2' in 'struct kcov_entry'
310 | entry->arg1 == ent->arg1 && entry->arg2 == ent->arg2) {
| ~~~~~ ^
kernel/kcov.c:310:56: error: no member named 'arg2' in 'struct kcov_entry'
310 | entry->arg1 == ent->arg1 && entry->arg2 == ent->arg2) {
| ~~~ ^
kernel/kcov.c:343:29: error: no member named 'type' in 'struct kcov_entry'
343 | area[start_index] = ent->type;
| ~~~ ^
kernel/kcov.c:344:33: error: no member named 'arg1' in 'struct kcov_entry'
344 | area[start_index + 1] = ent->arg1;
| ~~~ ^
kernel/kcov.c:345:33: error: no member named 'arg2' in 'struct kcov_entry'
345 | area[start_index + 2] = ent->arg2;
| ~~~ ^
9 errors generated.
vim +309 kernel/kcov.c
290
291 static notrace inline void kcov_map_add(struct kcov_map *map, struct kcov_entry *ent,
292 struct task_struct *t, unsigned int mode)
293 {
294 struct kcov *kcov;
295 struct kcov_entry *entry;
296 unsigned int key = hash_key(ent);
297 unsigned long pos, start_index, end_pos, max_pos, *area;
298
299 kcov = t->kcov;
300
301 if ((mode == KCOV_MODE_TRACE_UNIQ_PC ||
302 mode == KCOV_MODE_TRACE_UNIQ_EDGE))
303 hash_for_each_possible_rcu(map->buckets, entry, node, key) {
304 if (entry->ent == ent->ent)
305 return;
306 }
307 else
308 hash_for_each_possible_rcu(map->buckets, entry, node, key) {
> 309 if (entry->ent == ent->ent && entry->type == ent->type &&
> 310 entry->arg1 == ent->arg1 && entry->arg2 == ent->arg2) {
311 return;
312 }
313 }
314
315 entry = (struct kcov_entry *)gen_pool_alloc(map->pool, 1 << MIN_POOL_ALLOC_ORDER);
316 if (unlikely(!entry))
317 return;
318
319 barrier();
320 memcpy(entry, ent, sizeof(*entry));
321 hash_add_rcu(map->buckets, &entry->node, key);
322
323 if (mode == KCOV_MODE_TRACE_UNIQ_PC || mode == KCOV_MODE_TRACE_UNIQ_CMP)
324 area = t->kcov_area;
325 else
326 area = kcov->map_edge->area;
327
328 pos = READ_ONCE(area[0]) + 1;
329 if (mode == KCOV_MODE_TRACE_UNIQ_PC || mode == KCOV_MODE_TRACE_UNIQ_EDGE) {
330 if (likely(pos < t->kcov_size)) {
331 WRITE_ONCE(area[0], pos);
332 barrier();
333 area[pos] = ent->ent;
334 }
335 } else {
336 start_index = 1 + (pos - 1) * KCOV_WORDS_PER_CMP;
337 max_pos = t->kcov_size * sizeof(unsigned long);
338 end_pos = (start_index + KCOV_WORDS_PER_CMP) * sizeof(u64);
339 if (likely(end_pos <= max_pos)) {
340 /* See comment in __sanitizer_cov_trace_pc(). */
341 WRITE_ONCE(area[0], pos);
342 barrier();
343 area[start_index] = ent->type;
344 area[start_index + 1] = ent->arg1;
345 area[start_index + 2] = ent->arg2;
346 area[start_index + 3] = ent->ent;
347 }
348 }
349 }
350
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
end of thread, other threads:[~2025-01-24 12:27 UTC | newest]
Thread overview: 16+ messages
-- links below jump to the message on this page --
2025-01-14 5:34 [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes Jiao, Joey
2025-01-14 5:34 ` [PATCH 1/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_PC mode Jiao, Joey
2025-01-14 5:34 ` [PATCH 2/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_EDGE mode Jiao, Joey
2025-01-14 5:34 ` [PATCH 3/7] kcov: allow using KCOV_TRACE_UNIQ_[PC|EDGE] modes together Jiao, Joey
2025-01-14 5:34 ` [PATCH 4/7] kcov: introduce new kcov KCOV_TRACE_UNIQ_CMP mode Jiao, Joey
2025-01-24 2:11 ` kernel test robot
2025-01-24 12:26 ` kernel test robot
2025-01-14 5:34 ` [PATCH 5/7] kcov: add the new KCOV uniq modes example code Jiao, Joey
2025-01-14 5:34 ` [PATCH 6/7] kcov: disable instrumentation for genalloc and bitmap Jiao, Joey
2025-01-14 5:34 ` [PATCH 7/7] arm64: disable kcov instrument in header files Jiao, Joey
2025-01-14 10:43 ` [PATCH 0/7] kcov: Introduce New Unique PC|EDGE|CMP Modes Marco Elver
2025-01-14 11:02 ` Dmitry Vyukov
2025-01-14 12:39 ` Joey Jiao
2025-01-14 12:59 ` Joey Jiao
2025-01-15 15:16 ` Alexander Potapenko
2025-01-16 1:16 ` Joey Jiao