linux-mm.kvack.org archive mirror
From: Suren Baghdasaryan <surenb@google.com>
To: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	 Vlastimil Babka <vbabka@suse.cz>,
	Lorenzo Stoakes <lstoakes@gmail.com>,
	 Matthew Wilcox <willy@infradead.org>,
	sidhartha.kumar@oracle.com,
	 "Paul E . McKenney" <paulmck@kernel.org>,
	Bert Karwatzki <spasswolf@web.de>, Jiri Olsa <olsajiri@gmail.com>,
	 linux-kernel@vger.kernel.org, Kees Cook <kees@kernel.org>
Subject: Re: [PATCH v3 04/16] mm/mmap: Extract the gathering of vmas from do_vmi_align_munmap()
Date: Wed, 10 Jul 2024 09:07:09 -0700	[thread overview]
Message-ID: <CAJuCfpFHK+Bw8H1r5691yjgB+OjUEd1cedDydsfqp8zJwfRb_w@mail.gmail.com> (raw)
In-Reply-To: <20240704182718.2653918-5-Liam.Howlett@oracle.com>

On Thu, Jul 4, 2024 at 11:27 AM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
>
> Create vmi_gather_munmap_vmas() to handle the gathering of vmas into a
> detached maple tree for removal later.  Part of the gathering is the
> splitting of vmas that span the boundary.
>
> Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> ---
>  mm/mmap.c | 82 +++++++++++++++++++++++++++++++++++++++----------------
>  1 file changed, 58 insertions(+), 24 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 411798f46932..8dc8ffbf9d8d 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2656,32 +2656,29 @@ vmi_complete_munmap_vmas(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  }
>
>  /*
> - * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
> + * vmi_gather_munmap_vmas() - Put all VMAs within a range into a maple tree
> + * for removal at a later date.  Handles splitting first and last if necessary
> + * and marking the vmas as isolated.
> + *
>   * @vmi: The vma iterator
>   * @vma: The starting vm_area_struct
>   * @mm: The mm_struct
>   * @start: The aligned start address to munmap.
>   * @end: The aligned end address to munmap.
>   * @uf: The userfaultfd list_head
> - * @unlock: Set to true to drop the mmap_lock.  unlocking only happens on
> - * success.
> + * @mas_detach: The maple state tracking the detached tree
>   *
> - * Return: 0 on success and drops the lock if so directed, error and leaves the
> - * lock held otherwise.
> + * Return: 0 on success
>   */
>  static int
> -do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> +vmi_gather_munmap_vmas(struct vma_iterator *vmi, struct vm_area_struct *vma,
>                     struct mm_struct *mm, unsigned long start,
> -                   unsigned long end, struct list_head *uf, bool unlock)
> +                   unsigned long end, struct list_head *uf,
> +                   struct ma_state *mas_detach, unsigned long *locked_vm)
>  {
>         struct vm_area_struct *next = NULL;
> -       struct maple_tree mt_detach;
>         int count = 0;
>         int error = -ENOMEM;
> -       unsigned long locked_vm = 0;
> -       MA_STATE(mas_detach, &mt_detach, 0, 0);
> -       mt_init_flags(&mt_detach, vmi->mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
> -       mt_on_stack(mt_detach);
>
>         /*
>          * If we need to split any vma, do it now to save pain later.
> @@ -2720,15 +2717,14 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>                                 goto end_split_failed;
>                 }
>                 vma_start_write(next);
> -               mas_set(&mas_detach, count);
> -               error = mas_store_gfp(&mas_detach, next, GFP_KERNEL);
> +               mas_set(mas_detach, count++);
> +               if (next->vm_flags & VM_LOCKED)
> +                       *locked_vm += vma_pages(next);

Uh, this was confusing. You moved the locked_vm/count accounting before
mas_store_gfp(), so if mas_store_gfp() fails they end up off by one:
we incremented them but failed to insert the element. Only later did I
realize that if mas_store_gfp() fails, we never use these counters, so
the code is still correct. But I'm wondering whether this move was
necessary. We wouldn't use the wrong values, but why make them wrong in
the first place?
In later patches you account for more things here, and all of that is
also done before mas_store_gfp(). Would moving all of it after
mas_store_gfp(), so the counters stay correct at all times, be an issue?

> +
> +               error = mas_store_gfp(mas_detach, next, GFP_KERNEL);
>                 if (error)
>                         goto munmap_gather_failed;
>                 vma_mark_detached(next, true);
> -               if (next->vm_flags & VM_LOCKED)
> -                       locked_vm += vma_pages(next);
> -
> -               count++;
>                 if (unlikely(uf)) {
>                         /*
>                          * If userfaultfd_unmap_prep returns an error the vmas
> @@ -2753,7 +2749,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  #if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
>         /* Make sure no VMAs are about to be lost. */
>         {
> -               MA_STATE(test, &mt_detach, 0, 0);
> +               MA_STATE(test, mas_detach->tree, 0, 0);
>                 struct vm_area_struct *vma_mas, *vma_test;
>                 int test_count = 0;
>
> @@ -2773,6 +2769,48 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>         while (vma_iter_addr(vmi) > start)
>                 vma_iter_prev_range(vmi);
>
> +       return 0;
> +
> +userfaultfd_error:
> +munmap_gather_failed:
> +end_split_failed:
> +       abort_munmap_vmas(mas_detach);
> +start_split_failed:
> +map_count_exceeded:
> +       return error;
> +}
> +
> +/*
> + * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
> + * @vmi: The vma iterator
> + * @vma: The starting vm_area_struct
> + * @mm: The mm_struct
> + * @start: The aligned start address to munmap.
> + * @end: The aligned end address to munmap.
> + * @uf: The userfaultfd list_head
> + * @unlock: Set to true to drop the mmap_lock.  unlocking only happens on
> + * success.
> + *
> + * Return: 0 on success and drops the lock if so directed, error and leaves the
> + * lock held otherwise.
> + */
> +static int
> +do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> +                   struct mm_struct *mm, unsigned long start,
> +                   unsigned long end, struct list_head *uf, bool unlock)
> +{
> +       struct maple_tree mt_detach;
> +       MA_STATE(mas_detach, &mt_detach, 0, 0);
> +       mt_init_flags(&mt_detach, vmi->mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
> +       mt_on_stack(mt_detach);
> +       int error;
> +       unsigned long locked_vm = 0;
> +
> +       error = vmi_gather_munmap_vmas(vmi, vma, mm, start, end, uf,
> +                                      &mas_detach, &locked_vm);
> +       if (error)
> +               goto gather_failed;
> +
>         error = vma_iter_clear_gfp(vmi, start, end, GFP_KERNEL);
>         if (error)
>                 goto clear_tree_failed;
> @@ -2783,12 +2821,8 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>         return 0;
>
>  clear_tree_failed:
> -userfaultfd_error:
> -munmap_gather_failed:
> -end_split_failed:
>         abort_munmap_vmas(&mas_detach);
> -start_split_failed:
> -map_count_exceeded:
> +gather_failed:
>         validate_mm(mm);
>         return error;
>  }
> --
> 2.43.0
>

