linux-mm.kvack.org archive mirror
From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Suren Baghdasaryan <surenb@google.com>,
	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	Matthew Wilcox <willy@infradead.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	sidhartha.kumar@oracle.com, Bert Karwatzki <spasswolf@web.de>,
	Jiri Olsa <olsajiri@gmail.com>, Kees Cook <kees@kernel.org>,
	"Paul E . McKenney" <paulmck@kernel.org>,
	"Liam R. Howlett" <Liam.Howlett@Oracle.com>
Subject: [PATCH v6 18/20] ipc/shm, mm: Drop do_vma_munmap()
Date: Tue, 20 Aug 2024 19:57:27 -0400	[thread overview]
Message-ID: <20240820235730.2852400-19-Liam.Howlett@oracle.com> (raw)
In-Reply-To: <20240820235730.2852400-1-Liam.Howlett@oracle.com>

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

The do_vma_munmap() wrapper existed for callers that didn't have a vma
iterator and needed to check the vma mseal status prior to calling the
underlying munmap().  All callers now use a vma iterator, and since the
mseal check has been moved into do_vmi_align_munmap() and the vmas are
already aligned, do_vmi_align_munmap() can be called directly instead.

do_vmi_align_munmap() can no longer be static, as ipc/shm now uses it;
its declaration is therefore added to the mm.h header.

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
---
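Note (not part of the patch): a minimal sketch of the caller-side
substitution, assuming the caller already holds the mmap write lock and
has a vma iterator positioned at an aligned vma.  The helper name
example_unmap_vma() is made up for illustration only; the
do_vmi_align_munmap() signature used is the one declared in mm.h below.

	/* Illustrative sketch only; not part of this patch. */
	static int example_unmap_vma(struct mm_struct *mm,
				     struct vm_area_struct *vma)
	{
		/*
		 * Real callers (brk, ksys_shmdt) already have the iterator
		 * positioned at @vma, e.g. via vma_find() or vma_next().
		 */
		VMA_ITERATOR(vmi, mm, vma->vm_start);

		/*
		 * Previously this went through the do_vma_munmap() wrapper,
		 * which derived the mm from vma->vm_mm.  Now the aligned
		 * helper is called directly with mm passed explicitly.
		 */
		return do_vmi_align_munmap(&vmi, vma, mm, vma->vm_start,
					   vma->vm_end, NULL,
					   /* unlock = */ false);
	}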
 include/linux/mm.h |  6 +++---
 ipc/shm.c          |  8 ++++----
 mm/mmap.c          | 29 ++++-------------------------
 3 files changed, 11 insertions(+), 32 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b1eed30fdc06..c5a83d9d1110 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3292,14 +3292,14 @@ extern unsigned long do_mmap(struct file *file, unsigned long addr,
 extern int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 			 unsigned long start, size_t len, struct list_head *uf,
 			 bool unlock);
+extern int do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
+		    struct mm_struct *mm, unsigned long start,
+		    unsigned long end, struct list_head *uf, bool unlock);
 extern int do_munmap(struct mm_struct *, unsigned long, size_t,
 		     struct list_head *uf);
 extern int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int behavior);
 
 #ifdef CONFIG_MMU
-extern int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
-			 unsigned long start, unsigned long end,
-			 struct list_head *uf, bool unlock);
 extern int __mm_populate(unsigned long addr, unsigned long len,
 			 int ignore_errors);
 static inline void mm_populate(unsigned long addr, unsigned long len)
diff --git a/ipc/shm.c b/ipc/shm.c
index 3e3071252dac..99564c870084 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -1778,8 +1778,8 @@ long ksys_shmdt(char __user *shmaddr)
 			 */
 			file = vma->vm_file;
 			size = i_size_read(file_inode(vma->vm_file));
-			do_vma_munmap(&vmi, vma, vma->vm_start, vma->vm_end,
-				      NULL, false);
+			do_vmi_align_munmap(&vmi, vma, mm, vma->vm_start,
+					    vma->vm_end, NULL, false);
 			/*
 			 * We discovered the size of the shm segment, so
 			 * break out of here and fall through to the next
@@ -1803,8 +1803,8 @@ long ksys_shmdt(char __user *shmaddr)
 		if ((vma->vm_ops == &shm_vm_ops) &&
 		    ((vma->vm_start - addr)/PAGE_SIZE == vma->vm_pgoff) &&
 		    (vma->vm_file == file)) {
-			do_vma_munmap(&vmi, vma, vma->vm_start, vma->vm_end,
-				      NULL, false);
+			do_vmi_align_munmap(&vmi, vma, mm, vma->vm_start,
+					    vma->vm_end, NULL, false);
 		}
 
 		vma = vma_next(&vmi);
diff --git a/mm/mmap.c b/mm/mmap.c
index 2a4f1df96f94..49d9e95f42f5 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -169,11 +169,12 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
 			goto out; /* mapping intersects with an existing non-brk vma. */
 		/*
 		 * mm->brk must be protected by write mmap_lock.
-		 * do_vma_munmap() will drop the lock on success,  so update it
-		 * before calling do_vma_munmap().
+		 * do_vmi_align_munmap() will drop the lock on success, so
+		 * update it before calling do_vmi_align_munmap().
 		 */
 		mm->brk = brk;
-		if (do_vma_munmap(&vmi, brkvma, newbrk, oldbrk, &uf, true))
+		if (do_vmi_align_munmap(&vmi, brkvma, mm, newbrk, oldbrk, &uf,
+					/* unlock = */ true))
 			goto out;
 
 		goto success_unlocked;
@@ -1742,28 +1743,6 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
 	return ret;
 }
 
-/*
- * do_vma_munmap() - Unmap a full or partial vma.
- * @vmi: The vma iterator pointing at the vma
- * @vma: The first vma to be munmapped
- * @start: the start of the address to unmap
- * @end: The end of the address to unmap
- * @uf: The userfaultfd list_head
- * @unlock: Drop the lock on success
- *
- * unmaps a VMA mapping when the vma iterator is already in position.
- * Does not handle alignment.
- *
- * Return: 0 on success drops the lock of so directed, error on failure and will
- * still hold the lock.
- */
-int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
-		unsigned long start, unsigned long end, struct list_head *uf,
-		bool unlock)
-{
-	return do_vmi_align_munmap(vmi, vma, vma->vm_mm, start, end, uf, unlock);
-}
-
 /*
  * do_brk_flags() - Increase the brk vma if the flags match.
  * @vmi: The vma iterator
-- 
2.43.0




Thread overview: 32+ messages
2024-08-20 23:57 [PATCH v6 00/20] Avoid MAP_FIXED gap exposure Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 01/20] mm/vma: Correctly position vma_iterator in __split_vma() Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 02/20] mm/vma: Introduce abort_munmap_vmas() Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 03/20] mm/vma: Introduce vmi_complete_munmap_vmas() Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 04/20] mm/vma: Extract the gathering of vmas from do_vmi_align_munmap() Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 05/20] mm/vma: Introduce vma_munmap_struct for use in munmap operations Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 06/20] mm/vma: Change munmap to use vma_munmap_struct() for accounting and surrounding vmas Liam R. Howlett
2024-08-21  9:59   ` Lorenzo Stoakes
2024-08-21 13:17     ` Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 07/20] mm/vma: Extract validate_mm() from vma_complete() Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 08/20] mm/vma: Inline munmap operation in mmap_region() Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 09/20] mm/vma: Expand mmap_region() munmap call Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 10/20] mm/vma: Support vma == NULL in init_vma_munmap() Liam R. Howlett
2024-08-21 10:07   ` Lorenzo Stoakes
2024-08-21 13:18     ` Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 11/20] mm/mmap: Reposition vma iterator in mmap_region() Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 12/20] mm/vma: Track start and end for munmap in vma_munmap_struct Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 13/20] mm: Clean up unmap_region() argument list Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 14/20] mm/mmap: Avoid zeroing vma tree in mmap_region() Liam R. Howlett
2024-08-21 11:02   ` Lorenzo Stoakes
2024-08-21 15:09     ` Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 15/20] mm: Change failure of MAP_FIXED to restoring the gap on failure Liam R. Howlett
2024-08-21 11:56   ` Lorenzo Stoakes
2024-08-21 13:31     ` Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 16/20] mm/mmap: Use PHYS_PFN in mmap_region() Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 17/20] mm/mmap: Use vms accounted pages " Liam R. Howlett
2024-08-21 16:35   ` Paul Moore
2024-08-21 17:15     ` Liam R. Howlett
2024-08-20 23:57 ` Liam R. Howlett [this message]
2024-08-21 11:59   ` [PATCH v6 18/20] ipc/shm, mm: Drop do_vma_munmap() Lorenzo Stoakes
2024-08-20 23:57 ` [PATCH v6 19/20] mm: Move may_expand_vm() check in mmap_region() Liam R. Howlett
2024-08-20 23:57 ` [PATCH v6 20/20] mm/vma: Drop incorrect comment from vms_gather_munmap_vmas() Liam R. Howlett
