linux-mm.kvack.org archive mirror
* [PATCH] mm/mmap: Introduce unlock_range() for code cleanup
@ 2021-05-10 19:50 Liam Howlett
  2021-05-10 19:56 ` Matthew Wilcox
  2021-05-11 21:11 ` Davidlohr Bueso
  0 siblings, 2 replies; 4+ messages in thread
From: Liam Howlett @ 2021-05-10 19:50 UTC (permalink / raw)
  To: maple-tree, linux-mm, linux-kernel, Andrew Morton
  Cc: Song Liu, Davidlohr Bueso, Paul E . McKenney, Matthew Wilcox,
	Laurent Dufour, David Rientjes, Axel Rasmussen,
	Suren Baghdasaryan, Vlastimil Babka, Michel Lespinasse,
	Liam Howlett

Both __do_munmap() and exit_mmap() unlock a range of VMAs using almost
identical code blocks.  Replace both blocks by a static inline function.

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
---
 mm/mmap.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 81f5595a8490..ea556fc795d2 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2801,6 +2801,21 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
 	return __split_vma(mm, vma, addr, new_below);
 }
 
+static inline void unlock_range(struct vm_area_struct *start, unsigned long limit)
+{
+	struct mm_struct *mm = start->vm_mm;
+	struct vm_area_struct *tmp = start;
+
+	while (tmp && tmp->vm_start < limit) {
+		if (tmp->vm_flags & VM_LOCKED) {
+			mm->locked_vm -= vma_pages(tmp);
+			munlock_vma_pages_all(tmp);
+		}
+
+		tmp = tmp->vm_next;
+	}
+}
+
 /* Munmap is split into 2 main parts -- this part which finds
  * what needs doing, and the areas themselves, which do the
  * work.  This now handles partial unmappings.
@@ -2889,17 +2904,8 @@ int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	/*
 	 * unlock any mlock()ed ranges before detaching vmas
 	 */
-	if (mm->locked_vm) {
-		struct vm_area_struct *tmp = vma;
-		while (tmp && tmp->vm_start < end) {
-			if (tmp->vm_flags & VM_LOCKED) {
-				mm->locked_vm -= vma_pages(tmp);
-				munlock_vma_pages_all(tmp);
-			}
-
-			tmp = tmp->vm_next;
-		}
-	}
+	if (mm->locked_vm)
+		unlock_range(vma, end);
 
 	/* Detach vmas from rbtree */
 	if (!detach_vmas_to_be_unmapped(mm, vma, prev, end))
@@ -3184,14 +3190,8 @@ void exit_mmap(struct mm_struct *mm)
 		mmap_write_unlock(mm);
 	}
 
-	if (mm->locked_vm) {
-		vma = mm->mmap;
-		while (vma) {
-			if (vma->vm_flags & VM_LOCKED)
-				munlock_vma_pages_all(vma);
-			vma = vma->vm_next;
-		}
-	}
+	if (mm->locked_vm)
+		unlock_range(mm->mmap, ULONG_MAX);
 
 	arch_exit_mmap(mm);
 
-- 
2.30.2



* Re: [PATCH] mm/mmap: Introduce unlock_range() for code cleanup
  2021-05-10 19:50 [PATCH] mm/mmap: Introduce unlock_range() for code cleanup Liam Howlett
@ 2021-05-10 19:56 ` Matthew Wilcox
  2021-05-10 21:01   ` Liam Howlett
  2021-05-11 21:11 ` Davidlohr Bueso
  1 sibling, 1 reply; 4+ messages in thread
From: Matthew Wilcox @ 2021-05-10 19:56 UTC (permalink / raw)
  To: Liam Howlett
  Cc: maple-tree, linux-mm, linux-kernel, Andrew Morton, Song Liu,
	Davidlohr Bueso, Paul E . McKenney, Laurent Dufour,
	David Rientjes, Axel Rasmussen, Suren Baghdasaryan,
	Vlastimil Babka, Michel Lespinasse

On Mon, May 10, 2021 at 07:50:22PM +0000, Liam Howlett wrote:
> Both __do_munmap() and exit_mmap() unlock a range of VMAs using almost
> identical code blocks.  Replace both blocks by a static inline function.
> 
> Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>

> +static inline void unlock_range(struct vm_area_struct *start, unsigned long limit)

Seems like an unnecessary >80 column line ...

static inline
void unlock_range(struct vm_area_struct *start, unsigned long limit)




* Re: [PATCH] mm/mmap: Introduce unlock_range() for code cleanup
  2021-05-10 19:56 ` Matthew Wilcox
@ 2021-05-10 21:01   ` Liam Howlett
  0 siblings, 0 replies; 4+ messages in thread
From: Liam Howlett @ 2021-05-10 21:01 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: maple-tree, linux-mm, linux-kernel, Andrew Morton, Song Liu,
	Davidlohr Bueso, Paul E . McKenney, Laurent Dufour,
	David Rientjes, Axel Rasmussen, Suren Baghdasaryan,
	Vlastimil Babka, Michel Lespinasse

* Matthew Wilcox <willy@infradead.org> [210510 15:57]:
> On Mon, May 10, 2021 at 07:50:22PM +0000, Liam Howlett wrote:
> > Both __do_munmap() and exit_mmap() unlock a range of VMAs using almost
> > identical code blocks.  Replace both blocks by a static inline function.
> > 
> > Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
> 
> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> 
> > +static inline void unlock_range(struct vm_area_struct *start, unsigned long limit)
> 
> Seems like an unnecessary >80 column line ...
> 
> static inline
> void unlock_range(struct vm_area_struct *start, unsigned long limit)
> 

Sorry about that; checkpatch did not catch this either.  I will send a v2.


* Re: [PATCH] mm/mmap: Introduce unlock_range() for code cleanup
  2021-05-10 19:50 [PATCH] mm/mmap: Introduce unlock_range() for code cleanup Liam Howlett
  2021-05-10 19:56 ` Matthew Wilcox
@ 2021-05-11 21:11 ` Davidlohr Bueso
  1 sibling, 0 replies; 4+ messages in thread
From: Davidlohr Bueso @ 2021-05-11 21:11 UTC (permalink / raw)
  To: Liam Howlett
  Cc: maple-tree, linux-mm, linux-kernel, Andrew Morton, Song Liu,
	Paul E . McKenney, Matthew Wilcox, Laurent Dufour,
	David Rientjes, Axel Rasmussen, Suren Baghdasaryan,
	Vlastimil Babka, Michel Lespinasse

On Mon, 10 May 2021, Liam Howlett wrote:

>Both __do_munmap() and exit_mmap() unlock a range of VMAs using almost
>identical code blocks.  Replace both blocks by a static inline function.
>
>Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>

Reviewed-by: Davidlohr Bueso <dbueso@suse.de>


