From: Vlastimil Babka <vbabka@suse.cz>
To: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@kernel.org>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
Shakeel Butt <shakeel.butt@linux.dev>,
Jann Horn <jannh@google.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-rt-devel@lists.linux.dev,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>, Will Deacon <will@kernel.org>,
Boqun Feng <boqun.feng@gmail.com>,
Waiman Long <longman@redhat.com>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Clark Williams <clrkwllms@kernel.org>,
Steven Rostedt <rostedt@goodmis.org>
Subject: Re: [PATCH v4 07/10] mm/vma: introduce helper struct + thread through exclusive lock fns
Date: Mon, 26 Jan 2026 12:16:07 +0100
Message-ID: <ba27d41a-2109-4c89-83ce-00ba8b78d249@suse.cz>
In-Reply-To: <7d3084d596c84da10dd374130a5055deba6439c0.1769198904.git.lorenzo.stoakes@oracle.com>
On 1/23/26 21:12, Lorenzo Stoakes wrote:
> It is confusing to have __vma_enter_exclusive_locked() return 0, 1 or an
It's now __vma_start_exclude_readers()
> error (but only when waiting for readers in TASK_KILLABLE state), and
> having the return value be stored in a stack variable called 'locked' is
> further confusion.
>
> More generally, we are doing a lock of rather finnicky things during the
^ lot?
> acquisition of a state in which readers are excluded and moving out of this
> state, including tracking whether we are detached or not or whether an
> error occurred.
>
> We are implementing logic in __vma_enter_exclusive_locked() that
again __vma_start_exclude_readers()
> effectively acts as if 'if one caller calls us do X, if another then do Y',
> which is very confusing from a control flow perspective.
>
> Introducing the shared helper object state helps us avoid this, as we can
> now handle the 'an error arose but we're detached' condition correctly in
> both callers - a warning if not detaching, and treating the situation as if
> no error arose in the case of a VMA detaching.
>
> This also acts to help document what's going on and allows us to add some
> more logical debug asserts.
>
> Also update vma_mark_detached() to add a guard clause for the likely
> 'already detached' state (given we hold the mmap write lock), and add a
> comment about ephemeral VMA read lock reference count increments to clarify
> why we are entering/exiting an exclusive locked state here.
>
> Finally, separate vma_mark_detached() into its fast-path component and make
> it inline, then place the slow path for excluding readers in mmap_lock.c.
>
> No functional change intended.
>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Great improvement, thanks.
Just some more nits wrt naming.
> ---
> include/linux/mm_types.h | 14 ++--
> include/linux/mmap_lock.h | 23 +++++-
> mm/mmap_lock.c | 152 +++++++++++++++++++++-----------------
> 3 files changed, 112 insertions(+), 77 deletions(-)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 12281a1128c9..ca47a5d3d71e 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -1011,15 +1011,15 @@ struct vm_area_struct {
> * decrementing it again.
> *
> * VM_REFCNT_EXCLUDE_READERS_FLAG - Detached, pending
> - * __vma_exit_locked() completion which will decrement the reference
> - * count to zero. IMPORTANT - at this stage no further readers can
> - * increment the reference count. It can only be reduced.
> + * __vma_exit_exclusive_locked() completion which will decrement the
__vma_end_exclude_readers()
> + * reference count to zero. IMPORTANT - at this stage no further readers
> + * can increment the reference count. It can only be reduced.
> *
> * VM_REFCNT_EXCLUDE_READERS_FLAG + 1 - A thread is either write-locking
> - * an attached VMA and has yet to invoke __vma_exit_locked(), OR a
> - * thread is detaching a VMA and is waiting on a single spurious reader
> - * in order to decrement the reference count. IMPORTANT - as above, no
> - * further readers can increment the reference count.
> + * an attached VMA and has yet to invoke __vma_exit_exclusive_locked(),
__vma_end_exclude_readers()
(also strictly speaking, these would belong to the previous patch, but not
worth the trouble of moving them)
> + * OR a thread is detaching a VMA and is waiting on a single spurious
> + * reader in order to decrement the reference count. IMPORTANT - as
> + * above, no further readers can increment the reference count.
> *
> * > VM_REFCNT_EXCLUDE_READERS_FLAG + 1 - A thread is either
> * write-locking or detaching a VMA is waiting on readers to
> diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
> index d6df6aad3e24..678f90080fa6 100644
> --- a/include/linux/mmap_lock.h
> +++ b/include/linux/mmap_lock.h
> @@ -358,7 +358,28 @@ static inline void vma_mark_attached(struct vm_area_struct *vma)
> refcount_set_release(&vma->vm_refcnt, 1);
> }
>
> -void vma_mark_detached(struct vm_area_struct *vma);
> +void __vma_exclude_readers_for_detach(struct vm_area_struct *vma);
> +
> +static inline void vma_mark_detached(struct vm_area_struct *vma)
> +{
> + vma_assert_write_locked(vma);
> + vma_assert_attached(vma);
> +
> + /*
> + * The VMA still being attached (refcnt > 0) - is unlikely, because the
> + * vma has been already write-locked and readers can increment vm_refcnt
> + * only temporarily before they check vm_lock_seq, realize the vma is
> + * locked and drop back the vm_refcnt. That is a narrow window for
> + * observing a raised vm_refcnt.
> + *
> + * See the comment describing the vm_area_struct->vm_refcnt field for
> + * details of possible refcnt values.
> + */
> + if (likely(!__vma_refcount_put_return(vma)))
> + return;
> +
> + __vma_exclude_readers_for_detach(vma);
> +}
>
> struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> unsigned long address);
> diff --git a/mm/mmap_lock.c b/mm/mmap_lock.c
> index 72f15f606093..b523a3fe110c 100644
> --- a/mm/mmap_lock.c
> +++ b/mm/mmap_lock.c
> @@ -46,20 +46,38 @@ EXPORT_SYMBOL(__mmap_lock_do_trace_released);
> #ifdef CONFIG_MMU
> #ifdef CONFIG_PER_VMA_LOCK
>
> +/* State shared across __vma_[enter, exit]_exclusive_locked(). */
__vma_[start,end]_exclude_readers
> +struct vma_exclude_readers_state {
> + /* Input parameters. */
> + struct vm_area_struct *vma;
> + int state; /* TASK_KILLABLE or TASK_UNINTERRUPTIBLE. */
> + bool detaching;
> +
> + /* Output parameters. */
> + bool detached;
> + bool exclusive; /* Are we exclusively locked? */
> +};
> +
> /*
> * Now that all readers have been evicted, mark the VMA as being out of the
> * 'exclude readers' state.
> - *
> - * Returns true if the VMA is now detached, otherwise false.
> */
> -static bool __must_check __vma_end_exclude_readers(struct vm_area_struct *vma)
> +static void __vma_end_exclude_readers(struct vma_exclude_readers_state *ves)
> {
> - bool detached;
> + struct vm_area_struct *vma = ves->vma;
>
> - detached = refcount_sub_and_test(VM_REFCNT_EXCLUDE_READERS_FLAG,
> - &vma->vm_refcnt);
> + VM_WARN_ON_ONCE(ves->detached);
> +
> + ves->detached = refcount_sub_and_test(VM_REFCNT_EXCLUDE_READERS_FLAG,
> + &vma->vm_refcnt);
> __vma_lockdep_release_exclusive(vma);
> - return detached;
> +}
> +
> +static unsigned int get_target_refcnt(struct vma_exclude_readers_state *ves)
> +{
> + const unsigned int tgt = ves->detaching ? 0 : 1;
> +
> + return tgt | VM_REFCNT_EXCLUDE_READERS_FLAG;
> }
>
> /*
> @@ -69,32 +87,29 @@ static bool __must_check __vma_end_exclude_readers(struct vm_area_struct *vma)
> * Note that this function pairs with vma_refcount_put() which will wake up this
> * thread when it detects that the last reader has released its lock.
> *
> - * The state parameter ought to be set to TASK_UNINTERRUPTIBLE in cases where we
> - * wish the thread to sleep uninterruptibly or TASK_KILLABLE if a fatal signal
> - * is permitted to kill it.
> + * The ves->state parameter ought to be set to TASK_UNINTERRUPTIBLE in cases
> + * where we wish the thread to sleep uninterruptibly or TASK_KILLABLE if a fatal
> + * signal is permitted to kill it.
> *
> - * The function will return 0 immediately if the VMA is detached, or wait for
> - * readers and return 1 once they have all exited, leaving the VMA exclusively
> - * locked.
> + * The function sets the ves->exclusive parameter to true if readers were
> + * excluded, or false if the VMA was detached or an error arose on wait.
> *
> - * If the function returns 1, the caller is required to invoke
> - * __vma_end_exclude_readers() once the exclusive state is no longer required.
> + * If the function indicates an exclusive lock was acquired via ves->exclusive
> + * the caller is required to invoke __vma_end_exclude_readers() once the
> + * exclusive state is no longer required.
> *
> - * If state is set to something other than TASK_UNINTERRUPTIBLE, the function
> - * may also return -EINTR to indicate a fatal signal was received while waiting.
> + * If ves->state is set to something other than TASK_UNINTERRUPTIBLE, the
> + * function may also return -EINTR to indicate a fatal signal was received while
> + * waiting.
It says "may also return -EINTR" but now doesn't say anywhere that
otherwise the return value is always 0.
> */
> -static int __vma_start_exclude_readers(struct vm_area_struct *vma,
> - bool detaching, int state)
> +static int __vma_start_exclude_readers(struct vma_exclude_readers_state *ves)