linux-mm.kvack.org archive mirror
From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@kernel.org>,
	"Liam R . Howlett" <Liam.Howlett@oracle.com>,
	Vlastimil Babka <vbabka@suse.cz>, Mike Rapoport <rppt@kernel.org>,
	Suren Baghdasaryan <surenb@google.com>,
	Michal Hocko <mhocko@suse.com>,
	Shakeel Butt <shakeel.butt@linux.dev>,
	Jann Horn <jannh@google.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-rt-devel@lists.linux.dev,
	Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>, Will Deacon <will@kernel.org>,
	Boqun Feng <boqun.feng@gmail.com>,
	Waiman Long <longman@redhat.com>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Clark Williams <clrkwllms@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>
Subject: [PATCH RESEND v3 07/10] mm/vma: introduce helper struct + thread through exclusive lock fns
Date: Thu, 22 Jan 2026 13:01:59 +0000	[thread overview]
Message-ID: <4f95671feac6b6d4cea3c53426c875f3fd8a8855.1769086312.git.lorenzo.stoakes@oracle.com> (raw)
In-Reply-To: <cover.1769086312.git.lorenzo.stoakes@oracle.com>

It is confusing to have __vma_enter_exclusive_locked() return 0, 1 or an
error (and the error only when waiting for readers in TASK_KILLABLE state),
and storing the return value in a stack variable called 'locked' adds
further confusion.

More generally, we do a number of rather finicky things while acquiring the
state in which readers are excluded and while moving out of that state,
including tracking whether the VMA is detached and whether an error
occurred.

We are implementing logic in __vma_enter_exclusive_locked() that
effectively says 'if one caller invokes us do X, if another do Y', which is
very confusing from a control flow perspective.

Introducing a shared helper object for this state helps us avoid this, as
we can now handle the 'an error arose but we're detached' condition
correctly in both callers - a warning if we are not detaching, and treating
the situation as if no error arose when detaching a VMA.

This also acts to help document what's going on and allows us to add some
more logical debug asserts.

Also update vma_mark_detached() to add a guard clause for the likely
'already detached' state (given we hold the mmap write lock), and add a
comment about ephemeral VMA read lock reference count increments to clarify
why we are entering/exiting an exclusive locked state here.

No functional change intended.

Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 mm/mmap_lock.c | 144 +++++++++++++++++++++++++++++++------------------
 1 file changed, 91 insertions(+), 53 deletions(-)

diff --git a/mm/mmap_lock.c b/mm/mmap_lock.c
index f73221174a8b..75166a43ffa4 100644
--- a/mm/mmap_lock.c
+++ b/mm/mmap_lock.c
@@ -46,20 +46,40 @@ EXPORT_SYMBOL(__mmap_lock_do_trace_released);
 #ifdef CONFIG_MMU
 #ifdef CONFIG_PER_VMA_LOCK

+/* State shared across __vma_[enter, exit]_exclusive_locked(). */
+struct vma_exclude_readers_state {
+	/* Input parameters. */
+	struct vm_area_struct *vma;
+	int state; /* TASK_KILLABLE or TASK_UNINTERRUPTIBLE. */
+	bool detaching;
+
+	bool detached;
+	bool exclusive; /* Are we exclusively locked? */
+};
+
 /*
  * Now that all readers have been evicted, mark the VMA as being out of the
  * 'exclude readers' state.
  *
  * Returns true if the VMA is now detached, otherwise false.
  */
-static bool __must_check __vma_exit_exclusive_locked(struct vm_area_struct *vma)
+static void __vma_exit_exclusive_locked(struct vma_exclude_readers_state *ves)
 {
-	bool detached;
+	struct vm_area_struct *vma = ves->vma;
+
+	VM_WARN_ON_ONCE(ves->detached);
+	VM_WARN_ON_ONCE(!ves->exclusive);

-	detached = refcount_sub_and_test(VM_REFCNT_EXCLUDE_READERS_FLAG,
-					 &vma->vm_refcnt);
+	ves->detached = refcount_sub_and_test(VM_REFCNT_EXCLUDE_READERS_FLAG,
+					      &vma->vm_refcnt);
 	__vma_lockdep_release_exclusive(vma);
-	return detached;
+}
+
+static unsigned int get_target_refcnt(struct vma_exclude_readers_state *ves)
+{
+	const unsigned int tgt = ves->detaching ? 0 : 1;
+
+	return tgt | VM_REFCNT_EXCLUDE_READERS_FLAG;
 }

 /*
@@ -69,30 +89,31 @@ static bool __must_check __vma_exit_exclusive_locked(struct vm_area_struct *vma)
  * Note that this function pairs with vma_refcount_put() which will wake up this
  * thread when it detects that the last reader has released its lock.
  *
- * The state parameter ought to be set to TASK_UNINTERRUPTIBLE in cases where we
- * wish the thread to sleep uninterruptibly or TASK_KILLABLE if a fatal signal
- * is permitted to kill it.
+ * The ves->state parameter ought to be set to TASK_UNINTERRUPTIBLE in cases
+ * where we wish the thread to sleep uninterruptibly or TASK_KILLABLE if a fatal
+ * signal is permitted to kill it.
  *
- * The function will return 0 immediately if the VMA is detached, and 1 once the
- * VMA has evicted all readers, leaving the VMA exclusively locked.
+ * The function sets ves->exclusive to true if an exclusive lock was
+ * acquired, or false if the VMA was detached or an error arose on wait.
  *
- * If the function returns 1, the caller is required to invoke
- * __vma_exit_exclusive_locked() once the exclusive state is no longer required.
+ * If the function indicates an exclusive lock was acquired via ves->exclusive
+ * (or equivalently, returning 0 with !ves->detached), the caller is required to
+ * invoke __vma_exit_exclusive_locked() once the exclusive state is no longer
+ * required.
  *
- * If state is set to something other than TASK_UNINTERRUPTIBLE, the function
- * may also return -EINTR to indicate a fatal signal was received while waiting.
+ * If ves->state is set to something other than TASK_UNINTERRUPTIBLE, the
+ * function may also return -EINTR to indicate a fatal signal was received while
+ * waiting.
  */
-static int __vma_enter_exclusive_locked(struct vm_area_struct *vma,
-		bool detaching, int state)
+static int __vma_enter_exclusive_locked(struct vma_exclude_readers_state *ves)
 {
-	int err;
-	unsigned int tgt_refcnt = VM_REFCNT_EXCLUDE_READERS_FLAG;
+	struct vm_area_struct *vma = ves->vma;
+	unsigned int tgt_refcnt = get_target_refcnt(ves);
+	int err = 0;

 	mmap_assert_write_locked(vma->vm_mm);
-
-	/* Additional refcnt if the vma is attached. */
-	if (!detaching)
-		tgt_refcnt++;
+	VM_WARN_ON_ONCE(ves->detached);
+	VM_WARN_ON_ONCE(ves->exclusive);

 	/*
 	 * If vma is detached then only vma_mark_attached() can raise the
@@ -101,37 +122,39 @@ static int __vma_enter_exclusive_locked(struct vm_area_struct *vma,
 	 * See the comment describing the vm_area_struct->vm_refcnt field for
 	 * details of possible refcnt values.
 	 */
-	if (!refcount_add_not_zero(VM_REFCNT_EXCLUDE_READERS_FLAG, &vma->vm_refcnt))
+	if (!refcount_add_not_zero(VM_REFCNT_EXCLUDE_READERS_FLAG, &vma->vm_refcnt)) {
+		ves->detached = true;
 		return 0;
+	}

 	__vma_lockdep_acquire_exclusive(vma);
 	err = rcuwait_wait_event(&vma->vm_mm->vma_writer_wait,
 		   refcount_read(&vma->vm_refcnt) == tgt_refcnt,
-		   state);
+		   ves->state);
 	if (err) {
-		if (__vma_exit_exclusive_locked(vma)) {
-			/*
-			 * The wait failed, but the last reader went away
-			 * as well. Tell the caller the VMA is detached.
-			 */
-			WARN_ON_ONCE(!detaching);
-			err = 0;
-		}
+		__vma_exit_exclusive_locked(ves);
 		return err;
 	}
-	__vma_lockdep_stat_mark_acquired(vma);

-	return 1;
+	__vma_lockdep_stat_mark_acquired(vma);
+	ves->exclusive = true;
+	return 0;
 }

 int __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq,
 		int state)
 {
-	int locked;
+	int err;
+	struct vma_exclude_readers_state ves = {
+		.vma = vma,
+		.state = state,
+	};

-	locked = __vma_enter_exclusive_locked(vma, false, state);
-	if (locked < 0)
-		return locked;
+	err = __vma_enter_exclusive_locked(&ves);
+	if (err) {
+		WARN_ON_ONCE(ves.detached);
+		return err;
+	}

 	/*
 	 * We should use WRITE_ONCE() here because we can have concurrent reads
@@ -141,9 +164,11 @@ int __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq,
 	 */
 	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);

-	/* vma should remain attached. */
-	if (locked)
-		WARN_ON_ONCE(__vma_exit_exclusive_locked(vma));
+	if (!ves.detached) {
+		__vma_exit_exclusive_locked(&ves);
+		/* VMA should remain attached. */
+		WARN_ON_ONCE(ves.detached);
+	}

 	return 0;
 }
@@ -151,7 +176,12 @@ EXPORT_SYMBOL_GPL(__vma_start_write);

 void vma_mark_detached(struct vm_area_struct *vma)
 {
-	bool detached;
+	struct vma_exclude_readers_state ves = {
+		.vma = vma,
+		.state = TASK_UNINTERRUPTIBLE,
+		.detaching = true,
+	};
+	int err;

 	vma_assert_write_locked(vma);
 	vma_assert_attached(vma);
@@ -160,18 +190,26 @@ void vma_mark_detached(struct vm_area_struct *vma)
 	 * See the comment describing the vm_area_struct->vm_refcnt field for
 	 * details of possible refcnt values.
 	 */
-	detached = __vma_refcount_put(vma, NULL);
-	if (unlikely(!detached)) {
-		/* Wait until vma is detached with no readers. */
-		if (__vma_enter_exclusive_locked(vma, true, TASK_UNINTERRUPTIBLE)) {
-			/*
-			 * Once this is complete, no readers can increment the
-			 * reference count, and the VMA is marked detached.
-			 */
-			detached = __vma_exit_exclusive_locked(vma);
-			WARN_ON_ONCE(!detached);
-		}
+	if (likely(__vma_refcount_put(vma, NULL)))
+		return;
+
+	/*
+	 * Wait until the VMA is detached with no readers. Since we hold the VMA
+	 * write lock, the only read locks that might be present are those from
+	 * threads trying to acquire the read lock and incrementing the
+	 * reference count before realising the write lock is held and
+	 * decrementing it.
+	 */
+	err = __vma_enter_exclusive_locked(&ves);
+	if (!err && !ves.detached) {
+		/*
+		 * Once this is complete, no readers can increment the
+		 * reference count, and the VMA is marked detached.
+		 */
+		__vma_exit_exclusive_locked(&ves);
 	}
+	/* If an error arose but we were detached anyway, we don't care. */
+	WARN_ON_ONCE(!ves.detached);
 }

 /*
--
2.52.0



Thread overview: 73+ messages
2026-01-22 13:01 [PATCH RESEND v3 00/10] mm: add and use vma_assert_stabilised() helper Lorenzo Stoakes
2026-01-22 13:01 ` [PATCH RESEND v3 01/10] mm/vma: rename VMA_LOCK_OFFSET to VM_REFCNT_EXCLUDE_READERS_FLAG Lorenzo Stoakes
2026-01-22 16:26   ` Vlastimil Babka
2026-01-22 16:29     ` Lorenzo Stoakes
2026-01-23 13:52       ` Lorenzo Stoakes
2026-01-22 16:37   ` Suren Baghdasaryan
2026-01-23 13:26     ` Lorenzo Stoakes
2026-01-22 13:01 ` [PATCH RESEND v3 02/10] mm/vma: document possible vma->vm_refcnt values and reference comment Lorenzo Stoakes
2026-01-22 16:48   ` Vlastimil Babka
2026-01-22 17:28     ` Suren Baghdasaryan
2026-01-23 15:06       ` Lorenzo Stoakes
2026-01-23 13:45     ` Lorenzo Stoakes
2026-01-22 13:01 ` [PATCH RESEND v3 03/10] mm/vma: rename is_vma_write_only(), separate out shared refcount put Lorenzo Stoakes
2026-01-22 17:36   ` Vlastimil Babka
2026-01-22 19:31     ` Suren Baghdasaryan
2026-01-23  8:24       ` Vlastimil Babka
2026-01-23 14:52         ` Lorenzo Stoakes
2026-01-23 15:05           ` Vlastimil Babka
2026-01-23 15:07             ` Lorenzo Stoakes
2026-01-23 14:41       ` Lorenzo Stoakes
2026-01-26 10:04         ` Lorenzo Stoakes
2026-01-23 14:02     ` Lorenzo Stoakes
2026-01-22 13:01 ` [PATCH RESEND v3 04/10] mm/vma: add+use vma lockdep acquire/release defines Lorenzo Stoakes
2026-01-22 19:32   ` Suren Baghdasaryan
2026-01-22 19:41     ` Suren Baghdasaryan
2026-01-23  8:41       ` Vlastimil Babka
2026-01-23 15:08         ` Lorenzo Stoakes
2026-01-23 15:00     ` Lorenzo Stoakes
2026-01-23  8:48   ` Vlastimil Babka
2026-01-23 15:10     ` Lorenzo Stoakes
2026-01-22 13:01 ` [PATCH RESEND v3 05/10] mm/vma: de-duplicate __vma_enter_locked() error path Lorenzo Stoakes
2026-01-22 19:39   ` Suren Baghdasaryan
2026-01-23 15:11     ` Lorenzo Stoakes
2026-01-23  8:54   ` Vlastimil Babka
2026-01-23 15:10     ` Lorenzo Stoakes
2026-01-22 13:01 ` [PATCH v3 06/10] mm/vma: clean up __vma_enter/exit_locked() Lorenzo Stoakes
2026-01-22 13:08   ` Lorenzo Stoakes
2026-01-22 20:15   ` Suren Baghdasaryan
2026-01-22 20:55     ` Andrew Morton
2026-01-23 16:15       ` Lorenzo Stoakes
2026-01-23 16:33     ` Lorenzo Stoakes
2026-01-23  9:16   ` Vlastimil Babka
2026-01-23 16:17     ` Lorenzo Stoakes
2026-01-23 16:28       ` Lorenzo Stoakes
2026-01-22 13:01 ` Lorenzo Stoakes [this message]
2026-01-22 21:41   ` [PATCH RESEND v3 07/10] mm/vma: introduce helper struct + thread through exclusive lock fns Suren Baghdasaryan
2026-01-23 17:59     ` Lorenzo Stoakes
2026-01-23 19:34       ` Suren Baghdasaryan
2026-01-23 20:04         ` Lorenzo Stoakes
2026-01-23 22:07           ` Suren Baghdasaryan
2026-01-24  8:54             ` Lorenzo Stoakes
2026-01-26  6:09               ` Suren Baghdasaryan
2026-01-23 10:02   ` Vlastimil Babka
2026-01-23 18:18     ` Lorenzo Stoakes
2026-01-22 13:02 ` [PATCH RESEND v3 08/10] mm/vma: improve and document __is_vma_write_locked() Lorenzo Stoakes
2026-01-22 21:55   ` Suren Baghdasaryan
2026-01-23 16:21     ` Vlastimil Babka
2026-01-23 17:42       ` Suren Baghdasaryan
2026-01-23 18:44       ` Lorenzo Stoakes
2026-01-22 13:02 ` [PATCH RESEND v3 09/10] mm/vma: update vma_assert_locked() to use lockdep Lorenzo Stoakes
2026-01-22 22:02   ` Suren Baghdasaryan
2026-01-23 18:45     ` Lorenzo Stoakes
2026-01-23 16:55   ` Vlastimil Babka
2026-01-23 18:49     ` Lorenzo Stoakes
2026-01-22 13:02 ` [PATCH RESEND v3 10/10] mm/vma: add and use vma_assert_stabilised() Lorenzo Stoakes
2026-01-22 22:12   ` Suren Baghdasaryan
2026-01-23 18:54     ` Lorenzo Stoakes
2026-01-23 17:10   ` Vlastimil Babka
2026-01-23 18:51     ` Lorenzo Stoakes
2026-01-23 23:35   ` Hillf Danton
2026-01-22 15:48 ` [PATCH RESEND v3 00/10] mm: add and use vma_assert_stabilised() helper Andrew Morton
2026-01-22 15:57   ` Lorenzo Stoakes
2026-01-22 16:01     ` Lorenzo Stoakes
