From: Vlastimil Babka <vbabka@suse.cz>
To: Suren Baghdasaryan <surenb@google.com>, akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org,
	liam.howlett@oracle.com, lorenzo.stoakes@oracle.com,
	mhocko@suse.com, hannes@cmpxchg.org, mjguzik@gmail.com,
	oliver.sang@intel.com, mgorman@techsingularity.net,
	david@redhat.com, peterx@redhat.com, oleg@redhat.com,
	dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
	dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
	lokeshgidra@google.com, minchan@google.com, jannh@google.com,
	shakeel.butt@linux.dev, souravpanda@google.com,
	pasha.tatashin@soleen.com, klarasmodin@gmail.com, corbet@lwn.net,
	linux-doc@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH v7 12/17] mm: replace vm_lock and detached flag with a reference count
Date: Wed, 8 Jan 2025 12:52:50 +0100	[thread overview]
Message-ID: <ec71eaa7-a5e5-4d83-a405-782d63cf5c53@suse.cz> (raw)
In-Reply-To: <20241226170710.1159679-13-surenb@google.com>

On 12/26/24 18:07, Suren Baghdasaryan wrote:
> rw_semaphore is a sizable structure of 40 bytes and consumes
> considerable space for each vm_area_struct. However, vma_lock has
> two important specifics which can be used to replace rw_semaphore
> with a simpler structure:
> 1. Readers never wait. They try to take the vma_lock and fall back to
> mmap_lock if that fails.
> 2. Only one writer at a time will ever try to write-lock a vma_lock
> because writers first take mmap_lock in write mode.
> Because of these requirements, full rw_semaphore functionality is not
> needed and we can replace rw_semaphore and the vma->detached flag with
> a refcount (vm_refcnt).
> When vma is in detached state, vm_refcnt is 0 and only a call to
> vma_mark_attached() can take it out of this state. Note that unlike
> before, now we enforce both vma_mark_attached() and vma_mark_detached()
> to be done only after vma has been write-locked. vma_mark_attached()
> changes vm_refcnt to 1 to indicate that it has been attached to the vma
> tree. When a reader takes read lock, it increments vm_refcnt, unless the
> top usable bit of vm_refcnt (0x40000000) is set, indicating presence of
> a writer. When writer takes write lock, it both increments vm_refcnt and
> sets the top usable bit to indicate its presence. If there are readers,
> writer will wait using newly introduced mm->vma_writer_wait. Since all
> writers take mmap_lock in write mode first, there can be only one writer
> at a time. The last reader to release the lock will signal the writer
> to wake up.
> The refcount might overflow if there are many competing readers, in which case
> read-locking will fail. Readers are expected to handle such failures.
> 
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Suggested-by: Matthew Wilcox <willy@infradead.org>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
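
Just to check that I read the scheme right, the reader side boils down to
something like the following toy user-space model (my own sketch with made-up
names, not code from this patch):

#include <stdatomic.h>
#include <stdbool.h>

#define WRITER_BIT   0x40000000        /* the "top usable bit" from the changelog */
#define READER_LIMIT (WRITER_BIT - 1)  /* readers capped below the writer bit */

struct toy_vma { _Atomic int refcnt; };  /* 0 = detached, 1 = attached and unlocked */

/* The read lock is only ever a trylock; callers fall back to mmap_lock on failure. */
static bool toy_read_trylock(struct toy_vma *vma)
{
	int old = atomic_load(&vma->refcnt);

	for (;;) {
		/* detached (0), writer present (top bit set) or too many readers */
		if (old == 0 || old >= READER_LIMIT)
			return false;
		if (atomic_compare_exchange_weak(&vma->refcnt, &old, old + 1))
			return true;
	}
}

static void toy_read_unlock(struct toy_vma *vma)
{
	/* the real code wakes mm->vma_writer_wait when the last reader leaves */
	atomic_fetch_sub(&vma->refcnt, 1);
}

The writer then increments the count and sets the top bit, and sleeps on
mm->vma_writer_wait until the readers have drained; since it already holds
mmap_lock for write, there is only ever one of them at a time. If that's the
intended model, the comments below are about details.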

>   */
>  static inline bool vma_start_read(struct vm_area_struct *vma)
>  {
> +	int oldcnt;
> +
>  	/*
>  	 * Check before locking. A race might cause false locked result.
>  	 * We can use READ_ONCE() for the mm_lock_seq here, and don't need
> @@ -720,13 +745,20 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
>  	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
>  		return false;
>  
> -	if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
> +
> +	rwsem_acquire_read(&vma->vmlock_dep_map, 0, 0, _RET_IP_);

I don't know much about lockdep, but I see that down_read() does

rwsem_acquire_read(&sem->dep_map, 0, 0, _RET_IP_);

down_read_trylock() does

rwsem_acquire_read(&sem->dep_map, 0, 1, _RET_IP_);

This passes the down_read()-like annotation (trylock argument 0), but the code here behaves like a trylock, no?
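IOW, unless I'm misreading the lockdep annotations, I'd have expected the
trylock-style variant here, i.e. something like:

	rwsem_acquire_read(&vma->vmlock_dep_map, 0, 1, _RET_IP_);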

> +	/* Limit at VMA_REF_LIMIT to leave one count for a writer */

It's mainly so the count can't grow to the point where the VMA_LOCK_OFFSET bit
would be set falsely by readers, right? The "leave one count" wording sounds
like an implementation detail of VMA_REF_LIMIT and would change if Liam's
suggestion proves feasible?
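
(Spelling out my assumption: if VMA_REF_LIMIT is defined relative to the
writer bit, i.e. something like

	#define VMA_LOCK_OFFSET	0x40000000		/* "top usable bit" from the changelog */
	#define VMA_REF_LIMIT	(VMA_LOCK_OFFSET - 1)	/* my assumption, not quoted from the patch */

then the cap is really there so that reader increments can never reach the
writer bit, rather than to "leave one count for a writer".)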

> +	if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
> +						      VMA_REF_LIMIT))) {
> +		rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
>  		return false;
> +	}
> +	lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
>  
>  	/*
> -	 * Overflow might produce false locked result.
> +	 * Overflow of vm_lock_seq/mm_lock_seq might produce false locked result.
>  	 * False unlocked result is impossible because we modify and check
> -	 * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq
> +	 * vma->vm_lock_seq under vma->vm_refcnt protection and mm->mm_lock_seq
>  	 * modification invalidates all existing locks.
>  	 *
>  	 * We must use ACQUIRE semantics for the mm_lock_seq so that if we are
> @@ -734,10 +766,12 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
>  	 * after it has been unlocked.
>  	 * This pairs with RELEASE semantics in vma_end_write_all().
>  	 */
> -	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
> -		up_read(&vma->vm_lock.lock);
> +	if (unlikely(oldcnt & VMA_LOCK_OFFSET ||
> +		     vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
> +		vma_refcount_put(vma);
>  		return false;
>  	}
> +
>  	return true;
>  }
>  
> @@ -749,8 +783,17 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
>   */
>  static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
>  {
> +	int oldcnt;
> +
>  	mmap_assert_locked(vma->vm_mm);
> -	down_read_nested(&vma->vm_lock.lock, subclass);
> +	rwsem_acquire_read(&vma->vmlock_dep_map, subclass, 0, _RET_IP_);

Same as above?

> +	/* Limit at VMA_REF_LIMIT to leave one count for a writer */

Same comment about the "leave one count" wording applies here.

> +	if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
> +						      VMA_REF_LIMIT))) {
> +		rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
> +		return false;
> +	}
> +	lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
>  	return true;
>  }
>  

