From: Chen Gang <xili_gchen_5257@hotmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	"mhocko@suse.cz" <mhocko@suse.cz>
Cc: Linux Memory <linux-mm@kvack.org>,
	kernel mailing list <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] mm/mmap.c: Only call vma_unlock_anon_vma() when failure occurs in expand_upwards() and expand_downwards()
Date: Tue, 1 Sep 2015 21:49:15 +0800
Message-ID: <COL130-W4895D78CDAEA273AB88C53B96A0@phx.gbl>
In-Reply-To: <55E5AD17.6060901@hotmail.com>

Sorry for the incorrect format of the patch. I have put the patch in the
attachment instead, generated with "git format-patch -M HEAD^". Please
help check it, thanks.

Next, I shall try to find another mail address that works both from
China and with our mailing list.

Thanks.

On 9/1/15 04:54, Chen Gang wrote:
> When a failure occurs, there is no need to call khugepaged_enter_vma_merge()
> or validate_mm().
>
> Also simplify do_munmap(): declare 'error' once instead of twice in
> sub-blocks.
>
> Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
> [...]
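
To make the shape of the change easier to see: the patch converts the
nested "do everything inside if (!error)" success path into the usual
kernel early-exit pattern, where every failure jumps to a single label
that drops the lock. A minimal sketch of that pattern (hypothetical
names like grow_area() and account_growth(), not the actual mm/mmap.c
code):

	int grow_area(struct area *a, unsigned long addr)
	{
		int error;

		lock_area(a);		/* taken on entry, like vma_lock_anon_vma() */

		if (!range_ok(a, addr)) {	/* each check jumps to one exit */
			error = -ENOMEM;
			goto err;
		}
		error = account_growth(a, addr);
		if (error)
			goto err;

		commit_growth(a, addr);	/* success-only work, no longer nested */

		unlock_area(a);		/* success path: unlock, then the calls */
		notify_others(a);	/* that a failure must not make */
		return 0;
	err:
		unlock_area(a);		/* failure path: only drop the lock */
		return error;
	}

With this shape, khugepaged_enter_vma_merge() and validate_mm() are
reached only on success, which is the point of the patch.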

--
Chen Gang

Open, share, and attitude like air, water, and life which God blessed

[-- Attachment #2: 0001-mm-mmap.c-Only-call-vma_unlock_anon_vm-for-failure-i.patch --]
[-- Type: text/plain, Size: 6565 bytes --]

From 87b8632f50e6ff3d09289ae6a76bd71fc2ecb074 Mon Sep 17 00:00:00 2001
From: Chen Gang <gang.chen.5i5j@gmail.com>
Date: Tue, 1 Sep 2015 04:40:56 +0800
Subject: [PATCH] mm/mmap.c: Only call vma_unlock_anon_vma() for failure in
 expand_upwards() and expand_downwards()

When a failure occurs, there is no need to call khugepaged_enter_vma_merge()
or validate_mm().

Also simplify do_munmap(): declare 'error' once instead of twice in
sub-blocks.

Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
 mm/mmap.c | 116 +++++++++++++++++++++++++++++++-------------------------------
 1 file changed, 58 insertions(+), 58 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index df6d5f0..d32199a 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2182,10 +2182,9 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 	if (address < PAGE_ALIGN(address+4))
 		address = PAGE_ALIGN(address+4);
 	else {
-		vma_unlock_anon_vma(vma);
-		return -ENOMEM;
+		error = -ENOMEM;
+		goto err;
 	}
-	error = 0;
 
 	/* Somebody else might have raced and expanded it already */
 	if (address > vma->vm_end) {
@@ -2194,38 +2193,39 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 		size = address - vma->vm_start;
 		grow = (address - vma->vm_end) >> PAGE_SHIFT;
 
-		error = -ENOMEM;
-		if (vma->vm_pgoff + (size >> PAGE_SHIFT) >= vma->vm_pgoff) {
-			error = acct_stack_growth(vma, size, grow);
-			if (!error) {
-				/*
-				 * vma_gap_update() doesn't support concurrent
-				 * updates, but we only hold a shared mmap_sem
-				 * lock here, so we need to protect against
-				 * concurrent vma expansions.
-				 * vma_lock_anon_vma() doesn't help here, as
-				 * we don't guarantee that all growable vmas
-				 * in a mm share the same root anon vma.
-				 * So, we reuse mm->page_table_lock to guard
-				 * against concurrent vma expansions.
-				 */
-				spin_lock(&vma->vm_mm->page_table_lock);
-				anon_vma_interval_tree_pre_update_vma(vma);
-				vma->vm_end = address;
-				anon_vma_interval_tree_post_update_vma(vma);
-				if (vma->vm_next)
-					vma_gap_update(vma->vm_next);
-				else
-					vma->vm_mm->highest_vm_end = address;
-				spin_unlock(&vma->vm_mm->page_table_lock);
-
-				perf_event_mmap(vma);
-			}
+		if (vma->vm_pgoff + (size >> PAGE_SHIFT) < vma->vm_pgoff) {
+			error = -ENOMEM;
+			goto err;
 		}
+		error = acct_stack_growth(vma, size, grow);
+		if (error)
+			goto err;
+		/*
+		 * vma_gap_update() doesn't support concurrent updates, but we
+		 * only hold a shared mmap_sem lock here, so we need to protect
+		 * against concurrent vma expansions. vma_lock_anon_vma()
+		 * doesn't help here, as we don't guarantee that all growable
+		 * vmas in a mm share the same root anon vma. So, we reuse mm->
+		 * page_table_lock to guard against concurrent vma expansions.
+		 */
+		spin_lock(&vma->vm_mm->page_table_lock);
+		anon_vma_interval_tree_pre_update_vma(vma);
+		vma->vm_end = address;
+		anon_vma_interval_tree_post_update_vma(vma);
+		if (vma->vm_next)
+			vma_gap_update(vma->vm_next);
+		else
+			vma->vm_mm->highest_vm_end = address;
+		spin_unlock(&vma->vm_mm->page_table_lock);
+
+		perf_event_mmap(vma);
 	}
 	vma_unlock_anon_vma(vma);
 	khugepaged_enter_vma_merge(vma, vma->vm_flags);
 	validate_mm(vma->vm_mm);
+	return 0;
+err:
+	vma_unlock_anon_vma(vma);
 	return error;
 }
 #endif /* CONFIG_STACK_GROWSUP || CONFIG_IA64 */
@@ -2265,36 +2265,37 @@ int expand_downwards(struct vm_area_struct *vma,
 		size = vma->vm_end - address;
 		grow = (vma->vm_start - address) >> PAGE_SHIFT;
 
-		error = -ENOMEM;
-		if (grow <= vma->vm_pgoff) {
-			error = acct_stack_growth(vma, size, grow);
-			if (!error) {
-				/*
-				 * vma_gap_update() doesn't support concurrent
-				 * updates, but we only hold a shared mmap_sem
-				 * lock here, so we need to protect against
-				 * concurrent vma expansions.
-				 * vma_lock_anon_vma() doesn't help here, as
-				 * we don't guarantee that all growable vmas
-				 * in a mm share the same root anon vma.
-				 * So, we reuse mm->page_table_lock to guard
-				 * against concurrent vma expansions.
-				 */
-				spin_lock(&vma->vm_mm->page_table_lock);
-				anon_vma_interval_tree_pre_update_vma(vma);
-				vma->vm_start = address;
-				vma->vm_pgoff -= grow;
-				anon_vma_interval_tree_post_update_vma(vma);
-				vma_gap_update(vma);
-				spin_unlock(&vma->vm_mm->page_table_lock);
-
-				perf_event_mmap(vma);
-			}
+		if (grow > vma->vm_pgoff) {
+			error = -ENOMEM;
+			goto err;
 		}
+		error = acct_stack_growth(vma, size, grow);
+		if (error)
+			goto err;
+		/*
+		 * vma_gap_update() doesn't support concurrent updates, but we
+		 * only hold a shared mmap_sem lock here, so we need to protect
+		 * against concurrent vma expansions. vma_lock_anon_vma()
+		 * doesn't help here, as we don't guarantee that all growable
+		 * vmas in a mm share the same root anon vma. So, we reuse mm->
+		 * page_table_lock to guard against concurrent vma expansions.
+		 */
+		spin_lock(&vma->vm_mm->page_table_lock);
+		anon_vma_interval_tree_pre_update_vma(vma);
+		vma->vm_start = address;
+		vma->vm_pgoff -= grow;
+		anon_vma_interval_tree_post_update_vma(vma);
+		vma_gap_update(vma);
+		spin_unlock(&vma->vm_mm->page_table_lock);
+
+		perf_event_mmap(vma);
 	}
 	vma_unlock_anon_vma(vma);
 	khugepaged_enter_vma_merge(vma, vma->vm_flags);
 	validate_mm(vma->vm_mm);
+	return 0;
+err:
+	vma_unlock_anon_vma(vma);
 	return error;
 }
 
@@ -2542,6 +2543,7 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len)
 {
 	unsigned long end;
 	struct vm_area_struct *vma, *prev, *last;
+	int error;
 
 	if ((start & ~PAGE_MASK) || start > TASK_SIZE || len > TASK_SIZE-start)
 		return -EINVAL;
@@ -2570,8 +2572,6 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len)
 	 * places tmp vma above, and higher split_vma places tmp vma below.
 	 */
 	if (start > vma->vm_start) {
-		int error;
-
 		/*
 		 * Make sure that map_count on return from munmap() will
 		 * not exceed its limit; but let map_count go just above
@@ -2589,7 +2589,7 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len)
 	/* Does it split the last one? */
 	last = find_vma(mm, end);
 	if (last && end > last->vm_start) {
-		int error = __split_vma(mm, last, end, 1);
+		error = __split_vma(mm, last, end, 1);
 		if (error)
 			return error;
 	}
-- 
1.9.3

