From: Will Deacon <will@kernel.org>
To: cl@gentwo.org
Cc: Catalin Marinas <catalin.marinas@arm.com>,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>, Waiman Long <longman@redhat.com>,
Boqun Feng <boqun.feng@gmail.com>,
Linus Torvalds <torvalds@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, linux-arch@vger.kernel.org
Subject: Re: [PATCH v2] Avoid memory barrier in read_seqcount() through load acquire
Date: Fri, 23 Aug 2024 11:32:11 +0100
Message-ID: <20240823103205.GA31866@willie-the-truck>
In-Reply-To: <20240819-seq_optimize-v2-1-9d0da82b022f@gentwo.org>
On Mon, Aug 19, 2024 at 11:30:15AM -0700, Christoph Lameter via B4 Relay wrote:
> diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
> index d90d8ee29d81..353fcf32b800 100644
> --- a/include/linux/seqlock.h
> +++ b/include/linux/seqlock.h
> @@ -176,6 +176,28 @@ __seqprop_##lockname##_sequence(const seqcount_##lockname##_t *s) \
> return seq; \
> } \
> \
> +static __always_inline unsigned \
> +__seqprop_##lockname##_sequence_acquire(const seqcount_##lockname##_t *s) \
> +{ \
> + unsigned seq = smp_load_acquire(&s->seqcount.sequence); \
> + \
> + if (!IS_ENABLED(CONFIG_PREEMPT_RT)) \
> + return seq; \
> + \
> + if (preemptible && unlikely(seq & 1)) { \
> + __SEQ_LOCK(lockbase##_lock(s->lock)); \
> + __SEQ_LOCK(lockbase##_unlock(s->lock)); \
> + \
> + /* \
> + * Re-read the sequence counter since the (possibly \
> + * preempted) writer made progress. \
> + */ \
> + seq = smp_load_acquire(&s->seqcount.sequence); \
We could probably do even better with LDAPR here, as that should be
sufficient for this. It's a can of worms though, as it's not implemented
on all CPUs and relaxing smp_load_acquire() might introduce subtle
breakage in places where it's used to build other types of lock. Maybe
you can hack something to see if there's any performance left behind
without it?
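
For reference, a rough and untested sketch of such a hack on arm64 might
look like the below. The __seq_load_ldapr() helper name is made up purely
for illustration, it assumes FEAT_LRCPC (ARMv8.3+) is present, and it is
deliberately not a drop-in replacement for smp_load_acquire():

/*
 * Untested benchmarking hack only: a 32-bit LDAPR load, assuming
 * FEAT_LRCPC is available. LDAPR is RCpc rather than RCsc, so this is
 * weaker than LDAR and not a general smp_load_acquire() replacement.
 */
static __always_inline unsigned __seq_load_ldapr(const unsigned *p)
{
	unsigned seq;

	asm volatile(".arch_extension rcpc\n"
		     "ldapr	%w0, %1"
		     : "=r" (seq)
		     : "Q" (*p)
		     : "memory");

	return seq;
}
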
> + } \
> + \
> + return seq; \
> +} \
> + \
> static __always_inline bool \
> __seqprop_##lockname##_preemptible(const seqcount_##lockname##_t *s) \
> { \
> @@ -211,6 +233,11 @@ static inline unsigned __seqprop_sequence(const seqcount_t *s)
> return READ_ONCE(s->sequence);
> }
>
> +static inline unsigned __seqprop_sequence_acquire(const seqcount_t *s)
> +{
> + return smp_load_acquire(&s->sequence);
> +}
> +
> static inline bool __seqprop_preemptible(const seqcount_t *s)
> {
> return false;
> @@ -259,6 +286,7 @@ SEQCOUNT_LOCKNAME(mutex, struct mutex, true, mutex)
> #define seqprop_ptr(s) __seqprop(s, ptr)(s)
> #define seqprop_const_ptr(s) __seqprop(s, const_ptr)(s)
> #define seqprop_sequence(s) __seqprop(s, sequence)(s)
> +#define seqprop_sequence_acquire(s) __seqprop(s, sequence_acquire)(s)
> #define seqprop_preemptible(s) __seqprop(s, preemptible)(s)
> #define seqprop_assert(s) __seqprop(s, assert)(s)
>
> @@ -293,6 +321,18 @@ SEQCOUNT_LOCKNAME(mutex, struct mutex, true, mutex)
> *
> * Return: count to be passed to read_seqcount_retry()
> */
> +#ifdef CONFIG_ARCH_HAS_ACQUIRE_RELEASE
> +#define raw_read_seqcount_begin(s) \
> +({ \
> + unsigned _seq; \
> + \
> + while ((_seq = seqprop_sequence_acquire(s)) & 1) \
> + cpu_relax(); \
It would also be interesting to see whether smp_cond_load_acquire()
performs any better than this loop in the !RT case.
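
Roughly what I have in mind, as an untested sketch for the plain
seqcount_t helper only (the seqcount_LOCKNAME_t variants and the
PREEMPT_RT lock/unlock dance would need the same treatment):

/*
 * Untested sketch: let smp_cond_load_acquire() wait for an even
 * sequence instead of the open-coded cpu_relax() loop in
 * raw_read_seqcount_begin(). On arm64 this can sit in WFE via
 * __cmpwait rather than spinning.
 */
static inline unsigned __seqprop_sequence_acquire(const seqcount_t *s)
{
	/* Cast away const for the benefit of the arch wait primitives. */
	return smp_cond_load_acquire(&((seqcount_t *)s)->sequence,
				     !(VAL & 1));
}
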
Both things to look at separately though, so:
Acked-by: Will Deacon <will@kernel.org>
I assume this will go via -tip.
Will