From: Dmitry Ilvokhin <d@ilvokhin.com>
To: Dennis Zhou <dennis@kernel.org>, Tejun Heo <tj@kernel.org>,
Christoph Lameter <cl@gentwo.org>,
Steven Rostedt <rostedt@goodmis.org>,
Masami Hiramatsu <mhiramat@kernel.org>,
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>, Will Deacon <will@kernel.org>,
Boqun Feng <boqun@kernel.org>, Waiman Long <longman@redhat.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-trace-kernel@vger.kernel.org, kernel-team@meta.com,
Dmitry Ilvokhin <d@ilvokhin.com>
Subject: [PATCH RFC 0/3] locking: contended_release tracepoint instrumentation
Date: Wed, 4 Mar 2026 16:56:14 +0000
Message-ID: <cover.1772642407.git.d@ilvokhin.com>

The existing contention_begin/contention_end tracepoints fire on the
waiter side. The lock holder's identity and stack can be captured at
contention_begin time (e.g. perf lock contention --lock-owner), but
this reflects the holder's state when a waiter arrives, not when the
lock is actually released.
This series adds a contended_release tracepoint that fires on the
holder side when a lock with waiters is released. This provides:
- Hold time estimation: when the holder's own acquisition was
contended, its contention_end (acquisition) and contended_release
can be correlated to measure how long the lock was held under
contention.
- The holder's stack at release time, which may differ from what
perf lock contention --lock-owner captures if the holder does
significant work between the waiter's arrival and the unlock.
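As an illustrative sketch of the hold-time correlation described above
(not part of this series; the event tuple layout is an assumption about
how a parsed trace might look, not the actual tracepoint format): pair a
task's contention_end (acquisition) with its later contended_release on
the same lock.

```python
# Sketch only: estimate contended hold times from a time-ordered event
# stream. Each event is (timestamp, name, lock_addr, pid); this shape is
# a hypothetical post-processing representation, not the raw trace ABI.

def contended_hold_times(events):
    """Return (lock_addr, pid, hold_duration) for each contended hold."""
    acquired = {}  # (lock_addr, pid) -> timestamp of contention_end
    holds = []
    for ts, name, lock, pid in events:
        key = (lock, pid)
        if name == "contention_end":
            # Contended acquisition completed: the task now holds the lock.
            acquired[key] = ts
        elif name == "contended_release" and key in acquired:
            # Holder released with waiters present: close the interval.
            holds.append((lock, pid, ts - acquired.pop(key)))
    return holds

trace = [
    (100, "contention_begin",  0x1000, 1),
    (150, "contention_end",    0x1000, 1),  # task 1 acquires after waiting
    (160, "contention_begin",  0x1000, 2),  # task 2 starts waiting
    (300, "contended_release", 0x1000, 1),  # task 1 releases with waiters
]
print(contended_hold_times(trace))  # [(4096, 1, 150)]
```

This only covers holds whose own acquisition was contended, matching the
limitation noted above: an uncontended acquisition emits no
contention_end to anchor the interval.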
The tracepoint is placed exclusively in slowpath unlock paths, so
there is no performance impact on the uncontended fast path and only
minimal expected impact on binary size.
Dmitry Ilvokhin (3):
locking: Add contended_release tracepoint
locking/percpu-rwsem: Extract __percpu_up_read_slowpath()
locking: Wire up contended_release tracepoint
include/linux/percpu-rwsem.h | 15 +++------------
include/trace/events/lock.h | 17 +++++++++++++++++
kernel/locking/mutex.c | 1 +
kernel/locking/percpu-rwsem.c | 21 +++++++++++++++++++++
kernel/locking/rtmutex.c | 1 +
kernel/locking/rwbase_rt.c | 8 +++++++-
kernel/locking/rwsem.c | 9 +++++++--
kernel/locking/semaphore.c | 4 +++-
8 files changed, 60 insertions(+), 16 deletions(-)
--
2.47.3