From: Brian Twichell <tbrian@us.ibm.com>
To: Hugh Dickins <hugh@veritas.com>
Cc: Dave McCracken <dmccr@us.ibm.com>, Andrew Morton <akpm@osdl.org>,
Linux Kernel <linux-kernel@vger.kernel.org>,
Linux Memory Management <linux-mm@kvack.org>
Subject: Re: [PATCH/RFC] Shared page tables
Date: Fri, 27 Jan 2006 16:50:49 -0600
Message-ID: <43DAA3C9.9070105@us.ibm.com>
In-Reply-To: <Pine.LNX.4.61.0601202020001.8821@goblin.wat.veritas.com>
Hugh Dickins wrote:
>On Thu, 5 Jan 2006, Dave McCracken wrote:
>
>
>>Here's a new version of my shared page tables patch.
>>
>>The primary purpose of sharing page tables is improved performance for
>>large applications that share big memory areas between multiple processes.
>>It eliminates the redundant page tables and significantly reduces the
>>number of minor page faults. Tests show significant performance
>>improvement for large database applications, including those using large
>>pages. There is no measurable performance degradation for small processes.
>>
>>This version of the patch uses Hugh's new locking mechanism, extending it
>>up the page table tree as far as necessary for proper concurrency control.
>>
>>The patch also includes the proper locking for following the vma chains.
>>
>>Hugh, I believe I have all the lock points nailed down. I'd appreciate
>>your input on any I might have missed.
>>
>>The architectures supported are i386 and x86_64. I'm working on 64 bit
>>ppc, but there are still some issues around proper segment handling that
>>need more testing. This will be available in a separate patch once it's
>>solid.
>>
>>Dave McCracken
>>
>>
>
>The locking looks much better now, and I like the way i_mmap_lock seems
>to fall naturally into place where the pte lock doesn't work. But still
>some raciness noted in comments on patch below.
>
>The main thing I dislike is the
> 16 files changed, 937 insertions(+), 69 deletions(-)
>(with just i386 and x86_64 included): it's adding more complexity than
>I can welcome, and too many unavoidable "if (shared) ... else ..."s.
>With significant further change needed, not just adding architectures.
>
>Worthwhile additional complexity? I'm not the one to judge that.
>Brian has posted dramatic improvements (25%, 49%) for the non-huge OLTP,
>and yes, it's sickening the amount of memory we're wasting on pagetables
>in that particular kind of workload. Less dramatic (3%, 4%) in the
>hugetlb case: and as yet (since last summer even) no profiles to tell
>where that improvement actually comes from.
>
>
>
Hi,

We collected more granular performance data for the ppc64/hugepage case.
CPI decreased by 3% when shared pagetables were used.  Underlying this was
a 7% decrease in the overall TLB miss rate.  The TLB miss rate for hugepages
decreased 39%.  TLB miss rates are calculated per instruction executed.

We didn't collect a profile per se, as we would expect a CPI improvement
of this nature to be spread over a significant number of functions,
mostly in user-space.
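
For reference, here is a minimal sketch of how a misses-per-instruction
figure can be derived from hardware counters.  It is illustrative only: it
uses the perf_event_open(2) interface, which appeared in much later kernels
than the ones discussed in this thread, the dTLB event encoding varies by
CPU, and error checking is omitted.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* Open one hardware counter for this process, on any CPU. */
static int open_counter(uint32_t type, uint64_t config)
{
        struct perf_event_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = type;
        attr.config = config;
        attr.disabled = 1;
        return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}

int main(void)
{
        /* Instructions retired, and dTLB load misses. */
        int insns = open_counter(PERF_TYPE_HARDWARE,
                                 PERF_COUNT_HW_INSTRUCTIONS);
        int dtlb  = open_counter(PERF_TYPE_HW_CACHE,
                                 PERF_COUNT_HW_CACHE_DTLB |
                                 (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                                 (PERF_COUNT_HW_CACHE_RESULT_MISS << 16));
        uint64_t n_insns, n_misses;

        ioctl(insns, PERF_EVENT_IOC_ENABLE, 0);
        ioctl(dtlb, PERF_EVENT_IOC_ENABLE, 0);

        /* ... run the workload to be measured here ... */

        ioctl(insns, PERF_EVENT_IOC_DISABLE, 0);
        ioctl(dtlb, PERF_EVENT_IOC_DISABLE, 0);
        read(insns, &n_insns, sizeof(n_insns));
        read(dtlb, &n_misses, sizeof(n_misses));

        /* Miss rate per instruction, as quoted above. */
        printf("dTLB read misses per instruction: %g\n",
               (double)n_misses / (double)n_insns);
        return 0;
}
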
Cheers,
Brian