From: Roman Gushchin <guro@fb.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Roman Gushchin <guroan@gmail.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Kernel Team <Kernel-team@fb.com>,
	"Matthew Wilcox" <willy@infradead.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	"Vlastimil Babka" <vbabka@suse.cz>
Subject: Re: [PATCH v4 1/2] mm: refactor __vunmap() to avoid duplicated call to find_vm_area()
Date: Wed, 17 Apr 2019 23:02:25 +0000
Message-ID: <20190417230219.GA5538@tower.DHCP.thefacebook.com>
In-Reply-To: <20190417145827.8b1c83bf22de8ba514f157e3@linux-foundation.org>

On Wed, Apr 17, 2019 at 02:58:27PM -0700, Andrew Morton wrote:
> On Wed, 17 Apr 2019 12:40:01 -0700 Roman Gushchin <guroan@gmail.com> wrote:
> 
> > __vunmap() calls find_vm_area() twice without an obvious reason:
> > first directly to get the area pointer, second indirectly by calling
> > remove_vm_area(), which is again searching for the area.
> > 
> > To remove this redundancy, let's split remove_vm_area() into
> > __remove_vm_area(struct vmap_area *), which performs the actual area
> > removal, and a remove_vm_area(const void *addr) wrapper, which can
> > be used everywhere it was used before.
> > 
> > On my test setup, I've got a 5-10% speedup when vfree()'ing 1000000
> > 4-page vmalloc blocks.
> > 
> > Perf report before:
> >   22.64%  cat      [kernel.vmlinux]  [k] free_pcppages_bulk
> >   10.30%  cat      [kernel.vmlinux]  [k] __vunmap
> >    9.80%  cat      [kernel.vmlinux]  [k] find_vmap_area
> >    8.11%  cat      [kernel.vmlinux]  [k] vunmap_page_range
> >    4.20%  cat      [kernel.vmlinux]  [k] __slab_free
> >    3.56%  cat      [kernel.vmlinux]  [k] __list_del_entry_valid
> >    3.46%  cat      [kernel.vmlinux]  [k] smp_call_function_many
> >    3.33%  cat      [kernel.vmlinux]  [k] kfree
> >    3.32%  cat      [kernel.vmlinux]  [k] free_unref_page
> > 
> > Perf report after:
> >   23.01%  cat      [kernel.kallsyms]  [k] free_pcppages_bulk
> >    9.46%  cat      [kernel.kallsyms]  [k] __vunmap
> >    9.15%  cat      [kernel.kallsyms]  [k] vunmap_page_range
> >    6.17%  cat      [kernel.kallsyms]  [k] __slab_free
> >    5.61%  cat      [kernel.kallsyms]  [k] kfree
> >    4.86%  cat      [kernel.kallsyms]  [k] bad_range
> >    4.67%  cat      [kernel.kallsyms]  [k] free_unref_page_commit
> >    4.24%  cat      [kernel.kallsyms]  [k] __list_del_entry_valid
> >    3.68%  cat      [kernel.kallsyms]  [k] free_unref_page
> >    3.65%  cat      [kernel.kallsyms]  [k] __list_add_valid
> >    3.19%  cat      [kernel.kallsyms]  [k] __purge_vmap_area_lazy
> >    3.10%  cat      [kernel.kallsyms]  [k] find_vmap_area
> >    3.05%  cat      [kernel.kallsyms]  [k] rcu_cblist_dequeue
> > 
> > ...
> >
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -2068,6 +2068,24 @@ struct vm_struct *find_vm_area(const void *addr)
> >  	return NULL;
> >  }
> >  
> > +static struct vm_struct *__remove_vm_area(struct vmap_area *va)
> > +{
> > +	struct vm_struct *vm = va->vm;
> > +
> > +	might_sleep();
> 
> Where might __remove_vm_area() sleep?
> 
> From a quick scan I'm only seeing vfree(), and that has the
> might_sleep_if(!in_interrupt()).
> 
> So perhaps we can remove this...
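
For context, the check Andrew refers to sits at the top of vfree()
itself. An abbreviated sketch, reconstructed from the upstream
mm/vmalloc.c of that era rather than quoted from this thread:

	void vfree(const void *addr)
	{
		BUG_ON(in_nmi());

		kmemleak_free(addr);

		/* the check that makes might_sleep() downstream redundant */
		might_sleep_if(!in_interrupt());

		if (!addr)
			return;

		if (unlikely(in_interrupt()))
			__vfree_deferred(addr);	/* freed later via workqueue */
		else
			__vunmap(addr, 1);	/* reaches __remove_vm_area() */
	}

Every sleepable path into __remove_vm_area() goes through vfree() and
therefore through this check, so a second might_sleep() further down
the chain adds nothing.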

Agree. Here is the patch.

Thank you!

--

From 4adf58e4d3ffe45a542156ca0bce3dc9f6679939 Mon Sep 17 00:00:00 2001
From: Roman Gushchin <guro@fb.com>
Date: Wed, 17 Apr 2019 15:55:49 -0700
Subject: [PATCH] mm: remove might_sleep() in __remove_vm_area()

__remove_vm_area() has a redundant might_sleep() call: the only
caller that can sleep is vfree(), and it already contains
might_sleep_if(!in_interrupt()).

Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Roman Gushchin <guro@fb.com>
---
 mm/vmalloc.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 69a5673c4cd3..4a91acce4b5f 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2079,8 +2079,6 @@ static struct vm_struct *__remove_vm_area(struct vmap_area *va)
 {
 	struct vm_struct *vm = va->vm;
 
-	might_sleep();
-
 	spin_lock(&vmap_area_lock);
 	va->vm = NULL;
 	va->flags &= ~VM_VM_AREA;
-- 
2.20.1
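
For reference, here is roughly what the function pair from the quoted
refactoring looks like once the hunk above is applied. This is a sketch
reconstructed from the quoted hunks and the commit message, not the
verbatim v4 code; the page-table unmapping and vmap_area teardown are
elided:

	static struct vm_struct *__remove_vm_area(struct vmap_area *va)
	{
		struct vm_struct *vm = va->vm;

		/* no might_sleep() here any more; vfree() already checks */
		spin_lock(&vmap_area_lock);
		va->vm = NULL;
		va->flags &= ~VM_VM_AREA;
		spin_unlock(&vmap_area_lock);

		/* (unmapping and freeing of the vmap_area elided) */
		return vm;
	}

	struct vm_struct *remove_vm_area(const void *addr)
	{
		struct vmap_area *va;

		va = find_vmap_area((unsigned long)addr);
		if (va && va->flags & VM_VM_AREA)
			return __remove_vm_area(va);

		return NULL;
	}

With this split, __vunmap() can look up the vmap_area once and call
__remove_vm_area() directly instead of searching twice, which is what
shrinks find_vmap_area() in the second perf profile above.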


Thread overview: 9+ messages
2019-04-17 19:40 [PATCH v4 0/2] vmalloc enhancements Roman Gushchin
2019-04-17 19:40 ` [PATCH v4 1/2] mm: refactor __vunmap() to avoid duplicated call to find_vm_area() Roman Gushchin
2019-04-17 21:58   ` Andrew Morton
2019-04-17 23:02     ` Roman Gushchin [this message]
2019-04-18 11:18     ` Matthew Wilcox
2019-04-18 22:24       ` Andrew Morton
2019-04-18 23:17         ` Eric Dumazet
2019-04-19 19:08         ` Al Viro
2019-04-17 19:40 ` [PATCH v4 2/2] mm: show number of vmalloc pages in /proc/meminfo Roman Gushchin
