* [PATCH 1/1] mm: remove extra check for VMA_LOCK_OFFSET when read-locking a vma
@ 2025-01-07 3:04 Suren Baghdasaryan
2025-01-07 12:28 ` Wei Yang
0 siblings, 1 reply; 3+ messages in thread
From: Suren Baghdasaryan @ 2025-01-07 3:04 UTC (permalink / raw)
To: akpm
Cc: richard.weiyang, peterz, willy, liam.howlett, lorenzo.stoakes,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, corbet, linux-doc, linux-mm,
linux-kernel, kernel-team, surenb
Since vm_refcnt is limited to VMA_REF_LIMIT, which is smaller than
VMA_LOCK_OFFSET, there is no need to check again whether the
VMA_LOCK_OFFSET bit is set. Remove the extra check and add a
clarifying comment.
Fixes: e8f32ff00a66 ("mm: replace vm_lock and detached flag with a reference count")
Suggested-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
Applies over mm-unstable
include/linux/mm.h | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 486638d22fc6..b5f262fc7dc5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -747,7 +747,11 @@ static inline bool vma_start_read(struct mm_struct *mm, struct vm_area_struct *v


 rwsem_acquire_read(&vma->vmlock_dep_map, 0, 0, _RET_IP_);
- /* Limit at VMA_REF_LIMIT to leave one count for a writer */
+ /*
+ * Limit at VMA_REF_LIMIT to leave one count for a writer.
+ * If VMA_LOCK_OFFSET is set, __refcount_inc_not_zero_limited() will fail
+ * because VMA_REF_LIMIT is less than VMA_LOCK_OFFSET.
+ */
if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
VMA_REF_LIMIT))) {
rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
@@ -766,8 +770,7 @@ static inline bool vma_start_read(struct mm_struct *mm, struct vm_area_struct *v
* after it has been unlocked.
* This pairs with RELEASE semantics in vma_end_write_all().
*/
- if (unlikely(oldcnt & VMA_LOCK_OFFSET ||
- vma->vm_lock_seq == raw_read_seqcount(&mm->mm_lock_seq))) {
+ if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&mm->mm_lock_seq))) {
vma_refcount_put(vma);
return false;
}
base-commit: f349e79bfbf3abfade8011797ff6d0d47b67dab7
--
2.47.1.613.gc27f4b7a9f-goog
* Re: [PATCH 1/1] mm: remove extra check for VMA_LOCK_OFFSET when read-locking a vma
2025-01-07 3:04 [PATCH 1/1] mm: remove extra check for VMA_LOCK_OFFSET when read-locking a vma Suren Baghdasaryan
@ 2025-01-07 12:28 ` Wei Yang
2025-01-07 23:22 ` Suren Baghdasaryan
0 siblings, 1 reply; 3+ messages in thread
From: Wei Yang @ 2025-01-07 12:28 UTC (permalink / raw)
To: Suren Baghdasaryan
Cc: akpm, richard.weiyang, peterz, willy, liam.howlett,
lorenzo.stoakes, mhocko, vbabka, hannes, mjguzik, oliver.sang,
mgorman, david, peterx, oleg, dave, paulmck, brauner, dhowells,
hdanton, hughd, lokeshgidra, minchan, jannh, shakeel.butt,
souravpanda, pasha.tatashin, klarasmodin, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team
On Mon, Jan 06, 2025 at 07:04:15PM -0800, Suren Baghdasaryan wrote:
>Since vm_refcnt is limited to VMA_REF_LIMIT, which is smaller than
>VMA_LOCK_OFFSET, there is no need to check again whether the
>VMA_LOCK_OFFSET bit is set. Remove the extra check and add a
>clarifying comment.
>
>Fixes: e8f32ff00a66 ("mm: replace vm_lock and detached flag with a reference count")
>Suggested-by: Wei Yang <richard.weiyang@gmail.com>
>Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
--
Wei Yang
Help you, Help me
* Re: [PATCH 1/1] mm: remove extra check for VMA_LOCK_OFFSET when read-locking a vma
2025-01-07 12:28 ` Wei Yang
@ 2025-01-07 23:22 ` Suren Baghdasaryan
0 siblings, 0 replies; 3+ messages in thread
From: Suren Baghdasaryan @ 2025-01-07 23:22 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, peterz, willy, liam.howlett, lorenzo.stoakes, mhocko,
vbabka, hannes, mjguzik, oliver.sang, mgorman, david, peterx,
oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, corbet, linux-doc, linux-mm,
linux-kernel, kernel-team
On Tue, Jan 7, 2025 at 4:28 AM Wei Yang <richard.weiyang@gmail.com> wrote:
>
> On Mon, Jan 06, 2025 at 07:04:15PM -0800, Suren Baghdasaryan wrote:
> >Since vm_refcnt is limited to VMA_REF_LIMIT, which is smaller than
> >VMA_LOCK_OFFSET, there is no need to check again whether the
> >VMA_LOCK_OFFSET bit is set. Remove the extra check and add a
> >clarifying comment.
> >
> >Fixes: e8f32ff00a66 ("mm: replace vm_lock and detached flag with a reference count")
> >Suggested-by: Wei Yang <richard.weiyang@gmail.com>
> >Signed-off-by: Suren Baghdasaryan <surenb@google.com>
>
> Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Since I have to respin v8, I'll fold this fix into the original patch.