From: Alexei Starovoitov <alexei.starovoitov@gmail.com>
To: davem@davemloft.net
Cc: daniel@iogearbox.net, andrii@kernel.org, tj@kernel.org,
memxor@gmail.com, delyank@fb.com, linux-mm@kvack.org,
bpf@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v5 bpf-next 04/15] samples/bpf: Reduce syscall overhead in map_perf_test.
Date: Thu, 1 Sep 2022 09:15:36 -0700
Message-ID: <20220901161547.57722-5-alexei.starovoitov@gmail.com>
In-Reply-To: <20220901161547.57722-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov <ast@kernel.org>

Make map_perf_test for preallocated and non-preallocated hash maps
spend more time inside the bpf program, so that performance analysis
focuses on the speed of the update/lookup/delete operations performed
by the bpf program rather than on syscall entry/exit overhead.
With this change, 'perf report' of the bpf_mem_alloc tests looks like:
11.76% map_perf_test [k] _raw_spin_lock_irqsave
11.26% map_perf_test [k] htab_map_update_elem
9.70% map_perf_test [k] _raw_spin_lock
9.47% map_perf_test [k] htab_map_delete_elem
8.57% map_perf_test [k] memcpy_erms
5.58% map_perf_test [k] alloc_htab_elem
4.09% map_perf_test [k] __htab_map_lookup_elem
3.44% map_perf_test [k] syscall_exit_to_user_mode
3.13% map_perf_test [k] lookup_nulls_elem_raw
3.05% map_perf_test [k] migrate_enable
3.04% map_perf_test [k] memcmp
2.67% map_perf_test [k] unit_free
2.39% map_perf_test [k] lookup_elem_raw
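For context, each stress_*() program in this sample is kprobe-attached
to a syscall, and the user-space driver simply issues that syscall in a
tight loop, max_cnt times per test. A minimal sketch of the driving
loop follows; the function name and the getuid() choice are
illustrative, not taken from this patch, and map_perf_test_user.c
actually spreads its tests across several different cheap syscalls:

#define _GNU_SOURCE
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Each iteration enters the kernel once and fires the kprobe-attached
 * bpf program. With the in-program loop added below, every such entry
 * now amortizes 10 update/lookup/delete rounds instead of one, so
 * syscall entry/exit overhead stops dominating the profile.
 */
static void run_one_test(uint32_t max_cnt)
{
	uint32_t i;

	for (i = 0; i < max_cnt; i++)
		syscall(__NR_getuid);
}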
Also reduce the default iteration count (max_cnt: 1000000 -> 10000) so
that 'map_perf_test' completes quickly even on debug kernels.
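Back-of-the-envelope accounting for the two changes combined, derived
from the diff below rather than measured:

	before: 1,000,000 syscalls x  1 update/lookup/delete round each
	after:     10,000 syscalls x 10 update/lookup/delete rounds each

i.e. roughly 100x fewer user/kernel transitions and 10x fewer total
map operations per test.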
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
samples/bpf/map_perf_test_kern.c | 44 ++++++++++++++++++++------------
samples/bpf/map_perf_test_user.c | 2 +-
2 files changed, 29 insertions(+), 17 deletions(-)
diff --git a/samples/bpf/map_perf_test_kern.c b/samples/bpf/map_perf_test_kern.c
index 8773f22b6a98..7342c5b2f278 100644
--- a/samples/bpf/map_perf_test_kern.c
+++ b/samples/bpf/map_perf_test_kern.c
@@ -108,11 +108,14 @@ int stress_hmap(struct pt_regs *ctx)
 	u32 key = bpf_get_current_pid_tgid();
 	long init_val = 1;
 	long *value;
+	int i;
 
-	bpf_map_update_elem(&hash_map, &key, &init_val, BPF_ANY);
-	value = bpf_map_lookup_elem(&hash_map, &key);
-	if (value)
-		bpf_map_delete_elem(&hash_map, &key);
+	for (i = 0; i < 10; i++) {
+		bpf_map_update_elem(&hash_map, &key, &init_val, BPF_ANY);
+		value = bpf_map_lookup_elem(&hash_map, &key);
+		if (value)
+			bpf_map_delete_elem(&hash_map, &key);
+	}
 
 	return 0;
 }
@@ -123,11 +126,14 @@ int stress_percpu_hmap(struct pt_regs *ctx)
 	u32 key = bpf_get_current_pid_tgid();
 	long init_val = 1;
 	long *value;
+	int i;
 
-	bpf_map_update_elem(&percpu_hash_map, &key, &init_val, BPF_ANY);
-	value = bpf_map_lookup_elem(&percpu_hash_map, &key);
-	if (value)
-		bpf_map_delete_elem(&percpu_hash_map, &key);
+	for (i = 0; i < 10; i++) {
+		bpf_map_update_elem(&percpu_hash_map, &key, &init_val, BPF_ANY);
+		value = bpf_map_lookup_elem(&percpu_hash_map, &key);
+		if (value)
+			bpf_map_delete_elem(&percpu_hash_map, &key);
+	}
 
 	return 0;
 }
@@ -137,11 +143,14 @@ int stress_hmap_alloc(struct pt_regs *ctx)
 	u32 key = bpf_get_current_pid_tgid();
 	long init_val = 1;
 	long *value;
+	int i;
 
-	bpf_map_update_elem(&hash_map_alloc, &key, &init_val, BPF_ANY);
-	value = bpf_map_lookup_elem(&hash_map_alloc, &key);
-	if (value)
-		bpf_map_delete_elem(&hash_map_alloc, &key);
+	for (i = 0; i < 10; i++) {
+		bpf_map_update_elem(&hash_map_alloc, &key, &init_val, BPF_ANY);
+		value = bpf_map_lookup_elem(&hash_map_alloc, &key);
+		if (value)
+			bpf_map_delete_elem(&hash_map_alloc, &key);
+	}
 
 	return 0;
 }
@@ -151,11 +160,14 @@ int stress_percpu_hmap_alloc(struct pt_regs *ctx)
 	u32 key = bpf_get_current_pid_tgid();
 	long init_val = 1;
 	long *value;
+	int i;
 
-	bpf_map_update_elem(&percpu_hash_map_alloc, &key, &init_val, BPF_ANY);
-	value = bpf_map_lookup_elem(&percpu_hash_map_alloc, &key);
-	if (value)
-		bpf_map_delete_elem(&percpu_hash_map_alloc, &key);
+	for (i = 0; i < 10; i++) {
+		bpf_map_update_elem(&percpu_hash_map_alloc, &key, &init_val, BPF_ANY);
+		value = bpf_map_lookup_elem(&percpu_hash_map_alloc, &key);
+		if (value)
+			bpf_map_delete_elem(&percpu_hash_map_alloc, &key);
+	}
 
 	return 0;
 }
diff --git a/samples/bpf/map_perf_test_user.c b/samples/bpf/map_perf_test_user.c
index b6fc174ab1f2..1bb53f4b29e1 100644
--- a/samples/bpf/map_perf_test_user.c
+++ b/samples/bpf/map_perf_test_user.c
@@ -72,7 +72,7 @@ static int test_flags = ~0;
 static uint32_t num_map_entries;
 static uint32_t inner_lru_hash_size;
 static int lru_hash_lookup_test_entries = 32;
-static uint32_t max_cnt = 1000000;
+static uint32_t max_cnt = 10000;
 
 static int check_test_flags(enum test_type t)
 {
--
2.30.2