From: Brian Twichell <tbrian@us.ibm.com>
To: Dave McCracken <dmccr@us.ibm.com>
Cc: Hugh Dickins <hugh@veritas.com>,
	Linux Memory Management <linux-mm@kvack.org>,
	Linux Kernel <linux-kernel@vger.kernel.org>,
	slpratt@us.ibm.com
Subject: Re: [PATCH 0/2][RFC] New version of shared page tables
Date: Fri, 05 May 2006 14:25:38 -0500
Message-ID: <445BA6B2.4030807@us.ibm.com>
In-Reply-To: <1146671004.24422.20.camel@wildcat.int.mccr.org>

Hi,

We reevaluated shared pagetables with recent patches from Dave.  As with 
our previous evaluation, a database transaction-processing workload was 
used.  This time our evaluation focused on a 4-way x86-64 configuration 
with 8 GB of memory.

When the bufferpools were in small pages, shared pagetables provided a
27% improvement in transaction throughput.  The performance increase is
attributable to multiple factors.  First, pagetable memory consumption
was reduced from 1.65 GB to 51 MB, freeing up 20% of the system's
memory (a rough sketch of this arithmetic follows the list below).
This memory was devoted to enlarging the database bufferpools, which
allowed more database data to be cached in memory.  The effect of this
was to reduce the number of disk I/Os per transaction by 23%, which
contributed to a similar reduction in the context switch rate.  A
second major component of the performance improvement is reduced TLB
and cache miss rates, due to the smaller pagetable footprint.  To
isolate this benefit, we performed an experiment where pagetables were
shared but the database bufferpools were not enlarged.  In this
configuration, shared pagetables provided a 9% increase in database
transaction throughput.  Analysis of processor performance counters
revealed the following benefits from pagetable sharing:

- ITLB and DTLB page walks were reduced by 27% and 26%, respectively.
- L1 and L2 cache misses were reduced by 5%.  This is due to fewer 
pagetable entries crowding the caches.
- Front-side bus traffic was reduced by approximately 10%.
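
To make the memory arithmetic concrete: the message above doesn't give
the bufferpool size or the number of database processes, so the figures
below are illustrative assumptions (a ~6 GB pool and ~100 processes),
not measurements.  A back-of-the-envelope sketch in C, assuming 8-byte
pagetable entries as on x86-64:

#include <stdio.h>

/* Bottom-level pagetable bytes needed for one process to map a region
 * of 'region' bytes using pages of 'page_size' bytes, assuming 8-byte
 * pagetable entries as on x86-64. */
static unsigned long long pt_bytes(unsigned long long region,
				   unsigned long long page_size)
{
	return (region / page_size) * 8;
}

int main(void)
{
	const unsigned long long GB = 1024ULL * 1024 * 1024;
	const unsigned long long pool = 6 * GB;	/* assumed bufferpool size */
	const int nproc = 100;			/* assumed process count */

	/* Private pagetables: every process carries its own copy. */
	printf("4 KB pages: %llu MB per process, %llu MB for %d processes\n",
	       pt_bytes(pool, 4096) >> 20,
	       (nproc * pt_bytes(pool, 4096)) >> 20, nproc);

	/* 2 MB hugepages eliminate the PTE level entirely; the region
	 * is mapped by PMD entries, so the per-process footprint is
	 * tiny to begin with. */
	printf("2 MB pages: %llu KB per process\n",
	       pt_bytes(pool, 2 * 1024 * 1024) >> 10);
	return 0;
}

With those assumptions, 100 private copies of the bottom-level
pagetables come to roughly 1.2 GB, the same order as the 1.65 GB we
measured (the remainder being pagetables for text, heap, stacks, and
other mappings); sharing collapses the bufferpool portion to a single
copy.  The 2 MB case also shows why the hugepage configuration
discussed next starts from a much smaller pagetable footprint.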

When the bufferpools were in hugepages, shared pagetables provided a 3% 
increase in database transaction throughput.  Some of the underlying 
benefits of pagetable sharing were as follows:

- Pagetable memory consumption was reduced from 53 MB to 37 MB.
- ITLB and DTLB page walks were reduced by 28% and 10%, respectively.
- L1 and L2 cache misses were reduced by 2% and 6.5%, respectively.
- Front-side bus traffic was reduced by approximately 4%.
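
For reference, the message doesn't say how the bufferpools were placed
in hugepages.  On kernels of this vintage the usual route for a
database was SysV shared memory with the SHM_HUGETLB flag; a minimal
sketch (segment size assumed to be a multiple of the hugepage size):

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#ifndef SHM_HUGETLB
#define SHM_HUGETLB 04000	/* from <linux/shm.h> */
#endif

int main(void)
{
	/* Size must be a multiple of the hugepage size (2 MB on x86-64). */
	size_t size = 256UL * 1024 * 1024;

	/* Fails with ENOMEM unless enough hugepages were reserved up
	 * front, e.g. via /proc/sys/vm/nr_hugepages. */
	int shmid = shmget(IPC_PRIVATE, size,
			   SHM_HUGETLB | IPC_CREAT | SHM_R | SHM_W);
	if (shmid < 0) {
		perror("shmget(SHM_HUGETLB)");
		exit(1);
	}

	char *pool = shmat(shmid, NULL, 0);	/* attach the bufferpool */
	if (pool == (void *)-1) {
		perror("shmat");
		exit(1);
	}
	pool[0] = 1;				/* touch the first hugepage */

	shmdt(pool);
	shmctl(shmid, IPC_RMID, NULL);		/* remove the segment */
	return 0;
}

This up-front reservation (and the fact that hugetlb pages are not
swappable) is one of the hugepage limitations referred to below.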

The database transaction throughput achieved using small pages with 
shared pagetables (with bufferpools enlarged) was within 3% of the 
transaction throughput achieved using hugepages without shared 
pagetables.  Thus shared pagetables provided nearly all the benefit of
hugepages, without requiring applications to deal with the limitations
of hugepages.  We believe this would be a significant benefit to
customers running these types of workloads.

We also measured the benefit of shared pagetables on our larger setups.  
On our 4-way x86-64 setup with 64 GB of memory, using small pages for the
bufferpools, shared pagetables provided a 33% increase in transaction 
throughput.  Using hugepages for the bufferpools, shared pagetables 
provided a 3% increase.  Performance with small pages and shared 
pagetables was within 4% of the performance using hugepages without 
shared pagetables.

On our ppc64 setups we used both Oracle and DB2 to evaluate the benefit 
of shared pagetables.  When database bufferpools were in small pages, 
shared pagetables provided an increase in database transaction 
throughput in the range of 60-65%, while in the hugepage case the 
improvement was up to 2.4%.

We thank Kshitij Doshi and Ken Chen from Intel for their assistance in 
analyzing the x86-64 data.

Cheers,
Brian


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

Thread overview: 16+ messages
2006-05-03 15:43 Dave McCracken
2006-05-03 15:56 ` Hugh Dickins
2006-05-03 16:06   ` Dave McCracken
2006-05-06 15:25     ` Hugh Dickins
2006-05-08 19:32       ` Ray Bryant
2006-05-16 21:09         ` Dave McCracken
2006-05-19 16:55           ` Ray Bryant
2006-05-22 18:00           ` Ray Bryant
2006-05-08 19:49       ` Brian Twichell
2006-05-09  3:42         ` Nick Piggin
2006-05-10  2:07           ` Chen, Kenneth W
2006-05-10 19:45           ` Brian Twichell
2006-05-12  5:17             ` Nick Piggin
2006-05-09 19:22         ` Hugh Dickins
2006-05-05 19:25 ` Brian Twichell [this message]
2006-05-06  3:37   ` Chen, Kenneth W
