From: Davidlohr Bueso <davidlohr@hp.com>
To: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Hugh Dickins <hughd@google.com>,
Michel Lespinasse <walken@google.com>,
Ingo Molnar <mingo@kernel.org>, Mel Gorman <mgorman@suse.de>,
Rik van Riel <riel@redhat.com>, Guan Xuetao <gxt@mprc.pku.edu.cn>,
aswin@hp.com, LKML <linux-kernel@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: [PATCH] mm: cache largest vma
Date: Sun, 03 Nov 2013 20:22:51 -0800
Message-ID: <1383538971.2373.25.camel@buesod1.americas.hpqcorp.net>
In-Reply-To: <CAHGf_=okr7mFx3j=gRfkETS21KZXkdo4XevF1KQM+gbXkTabgg@mail.gmail.com>
On Sun, 2013-11-03 at 18:57 -0500, KOSAKI Motohiro wrote:
> >> I'm slightly surprised this cache gives a 15% hit-rate improvement. Which
> >> applications benefit? You listed a lot of applications, but I'm not sure
> >> which ones depend heavily on the largest vma.
> >
> > Well, I chose the largest vma because it gives us a greater chance of
> > it already being cached when we do the lookup for the faulted address.
> >
> > The 15% improvement was with Hadoop. According to my notes it was at
> > ~48% with the baseline kernel and increased to ~63% with this patch.
> >
> > In any case, I didn't measure the rates at per-task granularity, but at
> > a general system level. When a system is first booted I can see that the
> > mmap_cache access rate becomes the determining factor, and adding a
> > workload doesn't change it much. One exception to this was a kernel
> > build, where we go from a ~50% to a ~89% hit rate on a vanilla kernel.
>
> I looked at this patch a bit. Its value is that it improves the cache
> hit ratio for the heap.
>
> 1) For single-threaded applications, the heap is frequently the largest
> mapping in the process.
Right.
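FWIW, this is easy to see from user space. A minimal sketch of my own
(not from the patch), assuming Linux and glibc; note that glibc may
satisfy a large malloc() with an anonymous mmap instead of growing the
brk heap, so the largest mapping is not always [heap]:

/* Toy check: scan /proc/self/maps and report the largest mapping. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	size_t sz = 64UL << 20;			/* 64 MB */
	char *p = malloc(sz);
	char line[512], largest[512] = "";
	unsigned long start, end, best = 0;
	FILE *f;

	if (!p)
		return 1;
	memset(p, 1, sz);			/* fault the pages in */

	f = fopen("/proc/self/maps", "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "%lx-%lx", &start, &end) != 2)
			continue;
		if (end - start > best) {
			best = end - start;
			strcpy(largest, line);
		}
	}
	fclose(f);

	printf("largest mapping (%lu kB): %s", best >> 10, largest);
	free(p);
	return 0;
}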
> 2) For a Java VM, "java -Xms1000m -Xmx1000m HelloWorld" produces the
> following /proc/<pid>/smaps entry. That is, the JVM allocates a single
> heap even if the application is multi-threaded.
Oh, this is new to me and nicely explains why I see the most benefit in
Java-related workloads.
>
> c1800000-100000000 rw-p 00000000 00:00 0
> Size:            1024000 kB
> Rss:                 244 kB
> Pss:                 244 kB
> Shared_Clean:          0 kB
> Shared_Dirty:          0 kB
> Private_Clean:         0 kB
> Private_Dirty:       244 kB
> Referenced:          244 kB
> Anonymous:           244 kB
> AnonHugePages:         0 kB
> Swap:                  0 kB
> KernelPageSize:        4 kB
> MMUPageSize:           4 kB
>
> That's good.
>
> However, we know there is a situation where this patch doesn't work:
> glibc creates per-thread heaps (arenas) by default, so the patch cannot
> be expected to work well for multi-threaded glibc programs. That's a
> fairly big limitation.
I think this is what Linus was referring to.
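The arena behavior is easy to demonstrate. A toy example of my own
(assumes default glibc malloc; the allocation stays under the default
128 kB mmap threshold so it really comes from the per-thread arena;
build with -pthread):

/*
 * With glibc's per-thread arenas, each thread's malloc() is typically
 * served from a different mapping, so the printed addresses usually
 * land far apart and no single "largest vma" covers them all.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void *worker(void *arg)
{
	void *p = malloc(64 << 10);	/* 64 kB from this thread's arena */

	printf("thread %ld: %p\n", (long)arg, p);
	free(p);
	return NULL;
}

int main(void)
{
	pthread_t t[4];
	long i;

	for (i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, worker, (void *)i);
	for (i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	return 0;
}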
>
> Anyway, I haven't observed a real performance difference, because the
> biggest penalty in find_vma comes from taking mmap_sem, not from the
> rb-tree search.
Yes, undoubtedly, which is why I'm reporting hit/miss rates rather than
workload throughput.
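For anyone skimming the thread, the shape of the idea is roughly the
following (a simplified sketch with hypothetical field names, not the
actual patch; the real find_vma() also returns the next vma when addr
falls in a gap between mappings, and assumes mmap_sem is held):

struct vm_area_struct {
	unsigned long vm_start, vm_end;
	/* rb-tree linkage etc. elided */
};

struct mm_struct {
	struct vm_area_struct *mmap_cache;	/* last find_vma() hit */
	struct vm_area_struct *mmap_largest;	/* hypothetical: largest vma,
						   updated at map/unmap time */
};

/* stand-in for the rb-tree walk in mm/mmap.c */
struct vm_area_struct *rb_search(struct mm_struct *mm, unsigned long addr);

static int vma_contains(struct vm_area_struct *vma, unsigned long addr)
{
	return vma && vma->vm_start <= addr && addr < vma->vm_end;
}

struct vm_area_struct *find_vma_sketch(struct mm_struct *mm,
				       unsigned long addr)
{
	if (vma_contains(mm->mmap_cache, addr))		/* existing cache */
		return mm->mmap_cache;
	if (vma_contains(mm->mmap_largest, addr))	/* the new check */
		return mm->mmap_largest;
	return rb_search(mm, addr);			/* O(log n) fallback */
}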
Thanks,
Davidlohr