From: Nick Piggin <npiggin@suse.de>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>,
linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
linux-kernel@vger.kernel.org,
Marcelo Tosatti <mtosatti@redhat.com>,
Adam Litke <agl@us.ibm.com>, Avi Kivity <avi@redhat.com>,
Izik Eidus <ieidus@redhat.com>,
Hugh Dickins <hugh.dickins@tiscali.co.uk>,
Rik van Riel <riel@redhat.com>, Mel Gorman <mel@csn.ul.ie>,
Dave Hansen <dave@linux.vnet.ibm.com>,
Benjamin Herrenschmidt <benh@kernel.crashing.org>,
Ingo Molnar <mingo@elte.hu>, Mike Travis <travis@sgi.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Christoph Lameter <cl@linux-foundation.org>,
Chris Wright <chrisw@sous-sol.org>,
bpicco@redhat.com,
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
Balbir Singh <balbir@linux.vnet.ibm.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>,
Chris Mason <chris.mason@oracle.com>,
Borislav Petkov <bp@alien8.de>
Subject: Re: Transparent Hugepage Support #25
Date: Fri, 21 May 2010 15:13:02 +1000 [thread overview]
Message-ID: <20100521051302.GK2516@laptop> (raw)
In-Reply-To: <1274412373.4977.8.camel@edumazet-laptop>
On Fri, May 21, 2010 at 05:26:13AM +0200, Eric Dumazet wrote:
> On Friday, 21 May 2010 at 02:05 +0200, Andrea Arcangeli wrote:
> > If you're running scientific applications, JVM or large gcc builds
> > (see attached patch for gcc), and you want to run from 2.5% faster for
> > kernel build (on bare metal), or 8% faster in translate.o of qemu (on
> > bare metal), 15% faster or more with virt and Intel EPT/ AMD NPT
> > (depending on the workload), you should apply and run the transparent
> > hugepage support on your systems.
> >
> > Awesome results have already been posted on lkml, if you test and
> > benchmark it, please provide any positive/negative real-life result on
> > lkml (or privately to me if you prefer). The more testing the better.
> >
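A large part of the wins quoted above comes down to TLB reach: one 2 MiB huge page maps the same address space as 512 base 4 KiB pages, so the same working set needs far fewer TLB entries. A quick back-of-the-envelope sketch, using standard x86-64 page sizes (the 1 GiB working set is an illustrative assumption, not a number from this thread):

```python
# TLB-reach arithmetic for standard x86-64 page sizes (illustrative only).
BASE_PAGE = 4 * 1024         # 4 KiB base page
HUGE_PAGE = 2 * 1024 * 1024  # 2 MiB transparent huge page

def tlb_entries_needed(footprint_bytes, page_size):
    """TLB entries required to map a footprint, assuming no entry reuse."""
    return -(-footprint_bytes // page_size)  # ceiling division

footprint = 1 << 30  # a hypothetical 1 GiB working set
small = tlb_entries_needed(footprint, BASE_PAGE)   # 262144 entries
huge = tlb_entries_needed(footprint, HUGE_PAGE)    # 512 entries
print(small, huge, small // huge)
```

The 512x reduction in required TLB entries (and the correspondingly shorter page-table walks on a miss) is where most of the THP speedup comes from.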
>
> Interesting !
>
> Did you try to change alloc_large_system_hash() to use hugepages for
> very large allocations? We currently use vmalloc() on NUMA machines...
>
> Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes)
> Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes)
> IP route cache hash table entries: 524288 (order: 10, 4194304 bytes)
> TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
Different (easier) kind of problem there.
We should indeed start using hugepages for special vmalloc cases like
this eventually. Last time I checked, we didn't quite have enough memory
per node to do this (ie. it does not end up being interleaved over all
nodes). It probably starts becoming realistic to do this soon with the
rate of memory size increases.
Probably for tuned servers where various hashes are sized very large,
it already makes sense.
It's on my TODO list.
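To make the "enough memory per node" concern concrete: interleaving one of these hashes at huge-page granularity means each node must supply whole 2 MiB pages. A rough sketch (the 2 MiB huge-page size and the two-node layout are assumptions taken from x86-64 and the dmesg below, not anything Nick computed):

```python
HUGE_PAGE = 2 * 1024 * 1024  # assumed 2 MiB huge pages (x86-64)

def huge_pages_per_node(table_bytes, nodes):
    """Huge pages each node must contribute to interleave one table."""
    total = -(-table_bytes // HUGE_PAGE)  # ceiling: huge pages overall
    return -(-total // nodes)             # ceiling: spread across nodes

# Dentry cache hash from the dmesg below: 16777216 bytes, 2 nodes.
print(huge_pages_per_node(16777216, 2))  # 4 huge pages (8 MiB) per node
```

On small-memory nodes, pinning several unmovable 2 MiB pages per node for each large hash is what made this unattractive in the past; with growing node sizes the fixed cost matters less.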
>
>
> 0xffffc90000003000-0xffffc90001004000 16781312 alloc_large_system_hash+0x1d8/0x280 pages=4096 vmalloc vpages N0=2048 N1=2048
> 0xffffc9000100f000-0xffffc90001810000 8392704 alloc_large_system_hash+0x1d8/0x280 pages=2048 vmalloc vpages N0=1024 N1=1024
> 0xffffc90005882000-0xffffc90005c83000 4198400 alloc_large_system_hash+0x1d8/0x280 pages=1024 vmalloc vpages N0=512 N1=512
> 0xffffc90005c84000-0xffffc90006485000 8392704 alloc_large_system_hash+0x1d8/0x280 pages=2048 vmalloc vpages N0=1024 N1=1024
>
>
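The ranges in the dump above can be cross-checked: /proc/vmallocinfo ranges include vmalloc's one-page guard area, so each range should be exactly one 4 KiB page larger than `pages * PAGE_SIZE`. Checking the first (dentry hash) line:

```python
PAGE = 4096  # base page size on this x86-64 box

# First line of the dump above: dentry hash, pages=4096.
start, end = 0xffffc90000003000, 0xffffc90001004000
range_bytes = end - start      # size as reported: 16781312
alloc_bytes = 4096 * PAGE      # pages=4096 -> 16777216 bytes of data
print(range_bytes, alloc_bytes, range_bytes - alloc_bytes)
```

The 4096-byte difference is the trailing guard page, which is also why the reported sizes (16781312, 8392704, 4198400) are each one page over a power of two.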
Thread overview: 9+ messages
2010-05-21 0:05 Andrea Arcangeli
2010-05-21 3:26 ` Eric Dumazet
2010-05-21 5:13 ` Nick Piggin [this message]
2010-06-02 5:44 ` [RFC][BUGFIX][PATCH 0/2] transhuge-memcg: some fixes (Re: Transparent Hugepage Support #25) Daisuke Nishimura
2010-06-02 5:45 ` [RFC][BUGFIX][PATCH 1/2] transhuge-memcg: fix for memcg compound Daisuke Nishimura
2010-06-02 5:46 ` [RFC][BUGFIX][PATCH 2/2] transhuge-memcg: commit tail pages at charge Daisuke Nishimura
2010-06-18 1:08 ` [RFC][BUGFIX][PATCH 0/2] transhuge-memcg: some fixes (Re: Transparent Hugepage Support #25) Andrea Arcangeli
2010-06-18 4:28 ` Daisuke Nishimura
2010-07-09 16:48 ` Andrea Arcangeli