From: Jeff Xu <jeffxu@chromium.org>
To: Pedro Falcato <pedro.falcato@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@suse.cz>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Shuah Khan <shuah@kernel.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-kselftest@vger.kernel.org, oliver.sang@intel.com,
torvalds@linux-foundation.org,
Michael Ellerman <mpe@ellerman.id.au>,
Kees Cook <kees@kernel.org>
Subject: Re: [PATCH v3 2/7] mm/munmap: Replace can_modify_mm with can_modify_vma
Date: Wed, 21 Aug 2024 09:15:52 -0700
Message-ID: <CABi2SkUBH+utreVMTd1qEBSvXTPM7Qc1GiwNKjVa9+EeZK8WWA@mail.gmail.com>
In-Reply-To: <20240817-mseal-depessimize-v3-2-d8d2e037df30@gmail.com>
On Fri, Aug 16, 2024 at 5:18 PM Pedro Falcato <pedro.falcato@gmail.com> wrote:
>
> We were doing an extra mmap tree traversal just to check if the entire
> range is modifiable. This can be done when we iterate through the VMAs
> instead.
>
> Signed-off-by: Pedro Falcato <pedro.falcato@gmail.com>
> ---
> mm/mmap.c | 11 +----------
> mm/vma.c | 19 ++++++++++++-------
> 2 files changed, 13 insertions(+), 17 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 3af256bacef3..30ae4cb5cec9 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -1740,16 +1740,7 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> unsigned long start, unsigned long end, struct list_head *uf,
> bool unlock)
> {
> - struct mm_struct *mm = vma->vm_mm;
> -
> - /*
> - * Check if memory is sealed, prevent unmapping a sealed VMA.
> - * can_modify_mm assumes we have acquired the lock on MM.
> - */
> - if (unlikely(!can_modify_mm(mm, start, end)))
> - return -EPERM;
Another approach to improving perf would be to clone the vmi (since it
already points to the first VMA) and pass the cloned vmi/vma into the
can_modify_mm check; that removes the cost of re-finding the first VMA.

can_modify_mm would then continue from the cloned vmi/vma to the end of
the address range, which still has some cost. However, most address
ranges in the real world fall within a single VMA, so in practice the
cost is the same as checking a single VMA in 99.9% of cases.

This would help preserve the nice sealing property: if any VMA in the
range is sealed, the entire address range is left unmodified.
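
Roughly what I have in mind (untested sketch, just to illustrate; the
can_modify_range() name and the by-value iterator copy are made up
here, I'm not aware of a dedicated vmi clone helper):

static bool can_modify_range(struct vma_iterator *vmi,
			     struct vm_area_struct *first,
			     unsigned long end)
{
	/* Copy the iterator so the caller's position is untouched. */
	struct vma_iterator tmp = *vmi;
	struct vm_area_struct *vma = first;

	do {
		/* Bail out early if any VMA in the range is sealed. */
		if (!can_modify_vma(vma))
			return false;
		vma = vma_next(&tmp);
	} while (vma && vma->vm_start < end);

	return true;
}

Since the first VMA is already in hand, the common single-VMA case does
one can_modify_vma() call and no extra tree walk; only ranges spanning
multiple VMAs pay for the extra iteration.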
> -
> - return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
> + return do_vmi_align_munmap(vmi, vma, vma->vm_mm, start, end, uf, unlock);
> }
>
> /*
> diff --git a/mm/vma.c b/mm/vma.c
> index 84965f2cd580..5850f7c0949b 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -712,6 +712,12 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> if (end < vma->vm_end && mm->map_count >= sysctl_max_map_count)
> goto map_count_exceeded;
>
> + /* Don't bother splitting the VMA if we can't unmap it anyway */
> + if (!can_modify_vma(vma)) {
> + error = -EPERM;
> + goto start_split_failed;
> + }
> +
> error = __split_vma(vmi, vma, start, 1);
> if (error)
> goto start_split_failed;
> @@ -723,6 +729,11 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> */
> next = vma;
> do {
> + if (!can_modify_vma(next)) {
> + error = -EPERM;
> + goto modify_vma_failed;
> + }
> +
> /* Does it split the end? */
> if (next->vm_end > end) {
> error = __split_vma(vmi, next, end, 0);
> @@ -815,6 +826,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> __mt_destroy(&mt_detach);
> return 0;
>
> +modify_vma_failed:
> clear_tree_failed:
> userfaultfd_error:
> munmap_gather_failed:
> @@ -860,13 +872,6 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
> if (end == start)
> return -EINVAL;
>
> - /*
> - * Check if memory is sealed, prevent unmapping a sealed VMA.
> - * can_modify_mm assumes we have acquired the lock on MM.
> - */
> - if (unlikely(!can_modify_mm(mm, start, end)))
> - return -EPERM;
> -
> /* Find the first overlapping VMA */
> vma = vma_find(vmi, end);
> if (!vma) {
>
> --
> 2.46.0
>