From: Suren Baghdasaryan <surenb@google.com>
To: Wei Yang <richard.weiyang@gmail.com>
Cc: akpm@linux-foundation.org, peterz@infradead.org,
	andrii@kernel.org,  jannh@google.com, Liam.Howlett@oracle.com,
	lorenzo.stoakes@oracle.com,  vbabka@suse.cz, mhocko@kernel.org,
	shakeel.butt@linux.dev, hannes@cmpxchg.org,  david@redhat.com,
	willy@infradead.org, brauner@kernel.org, oleg@redhat.com,
	 arnd@arndb.de, zhangpeng.00@bytedance.com, linmiaohe@huawei.com,
	 viro@zeniv.linux.org.uk, hca@linux.ibm.com, linux-mm@kvack.org,
	 linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 3/3] mm: introduce mmap_lock_speculate_{try_begin|retry}
Date: Mon, 25 Nov 2024 08:18:38 -0800
Message-ID: <CAJuCfpF+ZdD3-gTSLr1iwpa=fefUyL5dLoy8vGpv=v7LABnjNw@mail.gmail.com>
In-Reply-To: <20241125005804.libwzfcz6d5zeyi4@master>

On Sun, Nov 24, 2024 at 4:58 PM Wei Yang <richard.weiyang@gmail.com> wrote:
>
> On Fri, Nov 22, 2024 at 09:44:16AM -0800, Suren Baghdasaryan wrote:
> >Add helper functions to speculatively perform operations without
> >read-locking mmap_lock, expecting that mmap_lock will not be
> >write-locked and mm is not modified from under us.
> >
> >Suggested-by: Peter Zijlstra <peterz@infradead.org>
> >Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> >Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
> >---
> >Changes since v2 [1]
> >- Added SOB, per Liam Howlett
> >
> >[1] https://lore.kernel.org/all/20241121162826.987947-3-surenb@google.com/
> >
> > include/linux/mmap_lock.h | 33 +++++++++++++++++++++++++++++++--
> > 1 file changed, 31 insertions(+), 2 deletions(-)
> >
> >diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
> >index 9715326f5a85..8ac3041df053 100644
> >--- a/include/linux/mmap_lock.h
> >+++ b/include/linux/mmap_lock.h
> >@@ -71,6 +71,7 @@ static inline void mmap_assert_write_locked(const struct mm_struct *mm)
> > }
> >
> > #ifdef CONFIG_PER_VMA_LOCK
> >+
> > static inline void mm_lock_seqcount_init(struct mm_struct *mm)
> > {
> >       seqcount_init(&mm->mm_lock_seq);
> >@@ -87,11 +88,39 @@ static inline void mm_lock_seqcount_end(struct mm_struct *mm)
> >       do_raw_write_seqcount_end(&mm->mm_lock_seq);
> > }
> >
> >-#else
> >+static inline bool mmap_lock_speculate_try_begin(struct mm_struct *mm, unsigned int *seq)
> >+{
> >+      /*
> >+       * Since mmap_lock is a sleeping lock, and waiting for it to become
> >+       * unlocked is more or less equivalent to taking it ourselves, don't
> >+       * bother with the speculative path if mmap_lock is already write-locked
> >+       * and take the slow path, which takes the lock.
> >+       */
> >+      return raw_seqcount_try_begin(&mm->mm_lock_seq, *seq);
> >+}
> >+
> >+static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int seq)
> >+{
> >+      return do_read_seqcount_retry(&mm->mm_lock_seq, seq);
>
> Just curious why we don't use read_seqcount_retry().
>
> Looks this is the only user outside seqlock.h.

Ah, good eye! read_seqcount_retry() would be better.

Peter, do you want me to post a new patchset, or can you patch it when
picking it up?
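
For reference, the amended helper would then look roughly like the
sketch below (assuming the substitution is simply folded in when the
patch is applied; read_seqcount_retry() is the public seqlock
interface, while the do_* variants are seqlock-internal helpers):

static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int seq)
{
	/* public wrapper instead of the internal do_read_seqcount_retry() */
	return read_seqcount_retry(&mm->mm_lock_seq, seq);
}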

>
> >+}
> >+
> >+#else /* CONFIG_PER_VMA_LOCK */
> >+
> > static inline void mm_lock_seqcount_init(struct mm_struct *mm) {}
> > static inline void mm_lock_seqcount_begin(struct mm_struct *mm) {}
> > static inline void mm_lock_seqcount_end(struct mm_struct *mm) {}
> >-#endif
> >+
> >+static inline bool mmap_lock_speculate_try_begin(struct mm_struct *mm, unsigned int *seq)
> >+{
> >+      return false;
> >+}
> >+
> >+static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int seq)
> >+{
> >+      return true;
> >+}
> >+
> >+#endif /* CONFIG_PER_VMA_LOCK */
> >
> > static inline void mmap_init_lock(struct mm_struct *mm)
> > {
> >--
> >2.47.0.371.ga323438b13-goog
>
> --
> Wei Yang
> Help you, Help me
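
For anyone following the thread, below is a minimal sketch of the
caller pattern these helpers are meant for (the caller and the field it
reads are hypothetical, chosen only so the example compiles; they are
not part of this patch):

/* Hypothetical caller, purely illustrative -- not from the patch. */
static unsigned long read_mm_flags_speculatively(struct mm_struct *mm)
{
	unsigned long flags;
	unsigned int seq;

	if (mmap_lock_speculate_try_begin(mm, &seq)) {
		/* lockless read; must be a single, tearing-safe access */
		flags = READ_ONCE(mm->flags);	/* example field only */
		/* keep the result only if no writer raced with us */
		if (!mmap_lock_speculate_retry(mm, seq))
			return flags;
	}

	/* speculation refused or failed: fall back to the read lock */
	mmap_read_lock(mm);
	flags = mm->flags;
	mmap_read_unlock(mm);
	return flags;
}

With !CONFIG_PER_VMA_LOCK, the stubs in the patch make try_begin()
return false and retry() return true, so such a caller always ends up
on the locked path.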


Thread overview: 7+ messages
2024-11-22 17:44 [PATCH v3 1/3] seqlock: add raw_seqcount_try_begin Suren Baghdasaryan
2024-11-22 17:44 ` [PATCH v3 2/3] mm: convert mm_lock_seq to a proper seqcount Suren Baghdasaryan
2024-11-22 17:44 ` [PATCH v3 3/3] mm: introduce mmap_lock_speculate_{try_begin|retry} Suren Baghdasaryan
2024-11-25  0:58   ` Wei Yang
2024-11-25 16:18     ` Suren Baghdasaryan [this message]
2024-11-25 16:53       ` Peter Zijlstra
2024-12-01  6:44         ` Andrew Morton
