From: Suren Baghdasaryan <surenb@google.com>
To: "Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Suren Baghdasaryan <surenb@google.com>,
	 akpm@linux-foundation.org, torvalds@linux-foundation.org,
	jannh@google.com,  willy@infradead.org, david@redhat.com,
	peterx@redhat.com,  ldufour@linux.ibm.com, vbabka@suse.cz,
	michel@lespinasse.org,  jglisse@google.com, mhocko@suse.com,
	hannes@cmpxchg.org, dave@stgolabs.net,  hughd@google.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	 stable@vger.kernel.org,
	Linus Torvalds <torvalds@linuxfoundation.org>
Subject: Re: [PATCH v3 6/6] mm: move vma locking out of vma_prepare and dup_anon_vma
Date: Thu, 3 Aug 2023 12:14:58 -0700
Message-ID: <CAJuCfpHUp5xVV-p=pKXp6javYq+GmUx_3cDKr9mmTnHYxsg0Mw@mail.gmail.com>
In-Reply-To: <20230803183228.zreczwv3g3qp4kux@revolver>

On Thu, Aug 3, 2023 at 11:32 AM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
>
> * Suren Baghdasaryan <surenb@google.com> [230803 13:27]:
> > vma_prepare() is currently the central place where vmas are being locked
> > before vma_complete() applies changes to them. While this is convenient,
> > it also obscures vma locking and makes it harder to follow the locking
> > rules. Move vma locking out of vma_prepare() and take vma locks
> > explicitly at the locations where vmas are being modified. Replace the
> > locking inside dup_anon_vma() with an assertion to further clarify
> > the locking pattern inside vma_merge().
> >
> > Suggested-by: Linus Torvalds <torvalds@linuxfoundation.org>
> > Suggested-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > ---
> >  mm/mmap.c | 29 ++++++++++++++++++-----------
> >  1 file changed, 18 insertions(+), 11 deletions(-)
> >
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index 850a39dee075..ae28d6f94c34 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -476,16 +476,6 @@ static inline void init_vma_prep(struct vma_prepare *vp,
> >   */
> >  static inline void vma_prepare(struct vma_prepare *vp)
> >  {
> > -     vma_start_write(vp->vma);
> > -     if (vp->adj_next)
> > -             vma_start_write(vp->adj_next);
> > -     if (vp->insert)
> > -             vma_start_write(vp->insert);
> > -     if (vp->remove)
> > -             vma_start_write(vp->remove);
> > -     if (vp->remove2)
> > -             vma_start_write(vp->remove2);
> > -
> >       if (vp->file) {
> >               uprobe_munmap(vp->vma, vp->vma->vm_start, vp->vma->vm_end);
> >
> > @@ -618,7 +608,7 @@ static inline int dup_anon_vma(struct vm_area_struct *dst,
> >        * anon pages imported.
> >        */
> >       if (src->anon_vma && !dst->anon_vma) {
> > -             vma_start_write(dst);
> > +             vma_assert_write_locked(dst);
> >               dst->anon_vma = src->anon_vma;
> >               return anon_vma_clone(dst, src);
> >       }
> > @@ -650,10 +640,12 @@ int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma,
> >       bool remove_next = false;
> >       struct vma_prepare vp;
> >
> > +     vma_start_write(vma);
> >       if (next && (vma != next) && (end == next->vm_end)) {
> >               int ret;
> >
> >               remove_next = true;
> > +             vma_start_write(next);
> >               ret = dup_anon_vma(vma, next);
> >               if (ret)
> >                       return ret;
> > @@ -708,6 +700,8 @@ int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma,
> >       if (vma_iter_prealloc(vmi))
> >               return -ENOMEM;
> >
> > +     vma_start_write(vma);
> > +
> >       init_vma_prep(&vp, vma);
> >       vma_prepare(&vp);
> >       vma_adjust_trans_huge(vma, start, end, 0);
> > @@ -940,16 +934,21 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
> >       if (!merge_prev && !merge_next)
> >               return NULL; /* Not mergeable. */
> >
> > +     if (prev)
>
> Maybe if (merge_prev) instead of prev?  With the current check we
> write-lock prev whenever it exists, even when it won't change (cases 3
> and 8, specifically).  With this change, case 4 will need to lock prev
> separately, since it shifts prev->vm_end lower.

Ah, I see. I was trying to make sure we don't miss any locks and
ended up over-locking for cases 3 and 8.
Ok, I'll change the check to if (merge_prev) and add separate locking
for case 4, something like the sketch below. I think that's what you meant?
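
For concreteness, the adjusted locking could look something like the
following (my sketch of the planned change, not the final patch; only
the relevant hunks of vma_merge() are shown):

	if (merge_prev)
		vma_start_write(prev);	/* lock prev only when it will change */

	res = vma = prev;
	...
	} else { /* merge_next */
		vma_start_write(next);
		res = next;
		if (prev && addr < prev->vm_end) {	/* case 4 */
			vma_start_write(prev);	/* prev->vm_end shifts lower */
			vma_end = addr;
	...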

>
> > +             vma_start_write(prev);
> > +
> >       res = vma = prev;
> >       remove = remove2 = adjust = NULL;
> >
> >       /* Can we merge both the predecessor and the successor? */
> >       if (merge_prev && merge_next &&
> >           is_mergeable_anon_vma(prev->anon_vma, next->anon_vma, NULL)) {
> > +             vma_start_write(next);
> >               remove = next;                          /* case 1 */
> >               vma_end = next->vm_end;
> >               err = dup_anon_vma(prev, next);
> >               if (curr) {                             /* case 6 */
> > +                     vma_start_write(curr);
> >                       remove = curr;
> >                       remove2 = next;
> >                       if (!next->anon_vma)
> > @@ -957,6 +956,7 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
> >               }
> >       } else if (merge_prev) {                        /* case 2 */
> >               if (curr) {
> > +                     vma_start_write(curr);
> >                       err = dup_anon_vma(prev, curr);
> >                       if (end == curr->vm_end) {      /* case 7 */
> >                               remove = curr;
> > @@ -966,6 +966,7 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
> >                       }
> >               }
> >       } else { /* merge_next */
> > +             vma_start_write(next);
> >               res = next;
> >               if (prev && addr < prev->vm_end) {      /* case 4 */
> >                       vma_end = addr;
> > @@ -983,6 +984,7 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
> >                       vma_pgoff = next->vm_pgoff - pglen;
> >                       if (curr) {                     /* case 8 */
> >                               vma_pgoff = curr->vm_pgoff;
> > +                             vma_start_write(curr);
> >                               remove = curr;
> >                               err = dup_anon_vma(next, curr);
> >                       }
> > @@ -2373,6 +2375,9 @@ int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
> >       if (new->vm_ops && new->vm_ops->open)
> >               new->vm_ops->open(new);
> >
> > +     vma_start_write(vma);
> > +     vma_start_write(new);
> > +
> >       init_vma_prep(&vp, vma);
> >       vp.insert = new;
> >       vma_prepare(&vp);
> > @@ -3078,6 +3083,8 @@ static int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
> >               if (vma_iter_prealloc(vmi))
> >                       goto unacct_fail;
> >
> > +             vma_start_write(vma);
> > +
> >               init_vma_prep(&vp, vma);
> >               vma_prepare(&vp);
> >               vma_adjust_trans_huge(vma, vma->vm_start, addr + len, 0);
> > --
> > 2.41.0.585.gd2178a4bd4-goog
> >
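
Zooming out, the pattern the patch establishes is: write-lock every vma
that will be touched, then let vma_prepare()/vma_complete() do the
bookkeeping, with dup_anon_vma() merely asserting that the destination
is already locked. Below is a condensed sketch of vma_expand() assembled
from the hunks above ('...' marks elided code; not a verbatim copy of
mm/mmap.c):

	int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma,
		       unsigned long start, unsigned long end, pgoff_t pgoff,
		       struct vm_area_struct *next)
	{
		...
		vma_start_write(vma);		/* vma itself will change */
		if (next && (vma != next) && (end == next->vm_end)) {
			int ret;

			remove_next = true;
			vma_start_write(next);	/* next will be removed */
			ret = dup_anon_vma(vma, next);	/* asserts vma is locked */
			if (ret)
				return ret;
		}
		...
		init_vma_prep(&vp, vma);
		vma_prepare(&vp);		/* no longer takes any vma locks */
		...
	}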


Thread overview: 15+ messages
2023-08-03 17:26 [PATCH v3 0/6] make vma locking more obvious Suren Baghdasaryan
2023-08-03 17:26 ` [PATCH v3 1/6] mm: enable page walking API to lock vmas during the walk Suren Baghdasaryan
2023-08-03 17:26 ` [PATCH v3 2/6] mm: for !CONFIG_PER_VMA_LOCK equate write lock assertion for vma and mmap Suren Baghdasaryan
2023-08-03 17:26 ` [PATCH v3 3/6] mm: replace mmap with vma write lock assertions when operating on a vma Suren Baghdasaryan
2023-08-03 17:26 ` [PATCH v3 4/6] mm: lock vma explicitly before doing vm_flags_reset and vm_flags_reset_once Suren Baghdasaryan
2023-08-03 17:26 ` [PATCH v3 5/6] mm: always lock new vma before inserting into vma tree Suren Baghdasaryan
2023-08-03 18:01   ` Linus Torvalds
2023-08-03 18:15     ` Liam R. Howlett
2023-08-03 18:26       ` Suren Baghdasaryan
2023-08-03 18:34         ` Suren Baghdasaryan
2023-08-03 17:26 ` [PATCH v3 6/6] mm: move vma locking out of vma_prepare and dup_anon_vma Suren Baghdasaryan
2023-08-03 18:32   ` Liam R. Howlett
2023-08-03 19:14     ` Suren Baghdasaryan [this message]
2023-08-03 19:20       ` Liam R. Howlett
2023-08-04 15:29 ` [PATCH v3 0/6] make vma locking more obvious Suren Baghdasaryan
