From: Christoph Lameter <cl@linux-foundation.org>
To: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
"hugh.dickins@tiscali.co.uk" <hugh.dickins@tiscali.co.uk>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
akpm@linux-foundation.org, Tejun Heo <tj@kernel.org>,
Andi Kleen <andi@firstfloor.org>
Subject: Re: [MM] Make mm counters per cpu instead of atomic
Date: Tue, 24 Nov 2009 09:17:40 -0600 (CST) [thread overview]
Message-ID: <alpine.DEB.2.00.0911240914190.14045@router.home> (raw)
In-Reply-To: <1259049753.29789.49.camel@localhost>
On Tue, 24 Nov 2009, Zhang, Yanmin wrote:
> > True.... We need to find some alternative to per cpu data to scale mmap
> > sem then.
> I ran lots of benchmarks such as specjbb2005/hackbench/tbench/dbench/iozone
> /sysbench_oltp(mysql)/aim7 against the percpu tree (based on 2.6.32-rc7) on a 4*8*2 logical
> cpu machine, and didn't find a big difference in results with your patch versus without
> your patch.
This affects loads that heavily use mmap_sem. You won't find many
issues in tests that do not run processes with a large thread count
causing lots of page faults or calls to get_user_pages(). The tests you
list are not of that nature.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
Thread overview: 33+ messages
2009-11-04 19:14 Christoph Lameter
2009-11-04 19:17 ` [MM] Remove rss batching from copy_page_range() Christoph Lameter
2009-11-04 21:02 ` Andi Kleen
2009-11-04 22:02 ` Christoph Lameter
2009-11-05 8:27 ` Andi Kleen
2009-11-04 21:01 ` [MM] Make mm counters per cpu instead of atomic Andi Kleen
2009-11-04 23:49 ` Dave Jones
2009-11-05 15:04 ` Christoph Lameter
2009-11-05 15:36 ` [MM] Make mm counters per cpu instead of atomic V2 Christoph Lameter
2009-11-06 1:11 ` KAMEZAWA Hiroyuki
2009-11-06 3:23 ` KAMEZAWA Hiroyuki
2009-11-06 17:32 ` Christoph Lameter
2009-11-06 19:03 ` KAMEZAWA Hiroyuki
2009-11-06 19:13 ` Christoph Lameter
2009-11-06 19:20 ` KAMEZAWA Hiroyuki
2009-11-06 19:47 ` Christoph Lameter
2009-11-10 22:44 ` Andrew Morton
2009-11-10 23:20 ` Christoph Lameter
2009-11-06 4:08 ` KAMEZAWA Hiroyuki
2009-11-06 4:15 ` KAMEZAWA Hiroyuki
2009-11-05 1:16 ` [MM] Make mm counters per cpu instead of atomic KAMEZAWA Hiroyuki
2009-11-05 15:10 ` Christoph Lameter
2009-11-05 23:42 ` KAMEZAWA Hiroyuki
2009-11-17 6:48 ` Zhang, Yanmin
2009-11-17 7:31 ` Zhang, Yanmin
2009-11-17 9:34 ` Zhang, Yanmin
2009-11-17 17:25 ` Christoph Lameter
2009-11-19 0:48 ` Zhang, Yanmin
2009-11-23 8:51 ` Zhang, Yanmin
2009-11-23 14:31 ` Christoph Lameter
2009-11-24 8:02 ` Zhang, Yanmin
2009-11-24 15:17 ` Christoph Lameter [this message]
2009-11-25 1:23 ` Zhang, Yanmin
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=alpine.DEB.2.00.0911240914190.14045@router.home \
--to=cl@linux-foundation.org \
--cc=akpm@linux-foundation.org \
--cc=andi@firstfloor.org \
--cc=hugh.dickins@tiscali.co.uk \
--cc=kamezawa.hiroyu@jp.fujitsu.com \
--cc=linux-kernel@vger.kernel.org \
--cc=linux-mm@kvack.org \
--cc=tj@kernel.org \
--cc=yanmin_zhang@linux.intel.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
Be sure your reply has a Subject: header at the top and a blank line
before the message body.