From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: balbir@linux.vnet.ibm.com
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
"nishimura@mxp.nes.nec.co.jp" <nishimura@mxp.nes.nec.co.jp>
Subject: Re: [RFC][PATCH 0/4][mmotm] memcg: reduce lock contention v3
Date: Thu, 10 Sep 2009 09:20:17 +0900
Message-ID: <20090910092017.3d550d5a.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <20090909203042.GA4473@balbir.in.ibm.com>
On Thu, 10 Sep 2009 02:00:42 +0530
Balbir Singh <balbir@linux.vnet.ibm.com> wrote:
> * KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> [2009-09-09 17:39:03]:
>
> > This patch series is for reducing memcg's lock contention on res_counter, v3.
> > (I'm sending it today just to report the current status of my stack.)
> >
> > It has been reported that memcg's res_counter can cause heavy false sharing /
> > lock contention, and that scalability is not good. This series relaxes that.
> > No terrible bugs have been found; I'll maintain/update this until the end of
> > the next merge window. Tests on big SMP machines and new good ideas are welcome.
> >
> > This series is on top of mmotm + Nishimura's fix + Hugh's get_user_pages()
> > patch, but I think it can be applied directly against mmotm.
> >
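[To make the batching idea above concrete, here is a minimal userspace sketch of the general technique. It is NOT code from the patches themselves; all names and sizes are hypothetical. Instead of taking the shared counter's lock for every page, each thread pre-charges a batch and then consumes from a local "stock", so the lock is touched only once per CHARGE_BATCH pages. The series applies the same idea to memcg's charge/uncharge paths against res_counter.]

/*
 * Illustrative sketch only -- NOT the actual memcg patch code.  A shared,
 * lock-protected counter is charged in batches: each thread keeps a local
 * "stock" of pre-charged pages and touches the shared lock only when the
 * stock is empty.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define CHARGE_BATCH 32                 /* pages charged to the shared counter at once */

static struct {
    pthread_mutex_t lock;
    uint64_t usage;                     /* pages currently charged */
    uint64_t limit;
} counter = { PTHREAD_MUTEX_INITIALIZER, 0, 1UL << 20 };

static __thread unsigned int stock;     /* this thread's pre-charged pages */

/* Charge one page; the shared lock is taken only once per CHARGE_BATCH pages. */
static int charge_one_page(void)
{
    if (stock == 0) {
        int ok = 0;

        pthread_mutex_lock(&counter.lock);
        if (counter.usage + CHARGE_BATCH <= counter.limit) {
            counter.usage += CHARGE_BATCH;
            stock = CHARGE_BATCH;
            ok = 1;
        }
        pthread_mutex_unlock(&counter.lock);
        if (!ok)
            return -1;                  /* over limit: a real memcg would reclaim here */
    }
    stock--;                            /* lock-free fast path */
    return 0;
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        if (charge_one_page())
            break;
    return NULL;
}

int main(void)
{
    pthread_t t[8];

    for (int i = 0; i < 8; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 8; i++)
        pthread_join(t[i], NULL);
    printf("charged %llu pages\n", (unsigned long long)counter.usage);
    return 0;
}

[A real implementation also has to drain any unused stock back to the shared counter, for example when a task exits or the limit changes; the sketch omits that.]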
> > numbers:
> >
> > I used an 8-CPU x86-64 box and ran "make -j 12" on a kernel tree.
> > Before each make, I did "make clean" and dropped caches (drop_caches).
> >
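[As an aside on the "drop_caches" step above: it amounts to syncing and then writing 3 to /proc/sys/vm/drop_caches as root, so each run starts with a cold page cache. A tiny helper that would do this is below; the exact commands used for these runs are not shown in the mail.]

/* Flush dirty data and ask the kernel to drop page cache, dentries and
 * inodes before a benchmark run.  Equivalent to:
 *     sync; echo 3 > /proc/sys/vm/drop_caches
 * Must be run as root. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    FILE *f;

    sync();                                   /* write back dirty pages first */
    f = fopen("/proc/sys/vm/drop_caches", "w");
    if (!f) {
        perror("/proc/sys/vm/drop_caches");
        return 1;
    }
    fputs("3\n", f);                          /* 1=pagecache, 2=slab, 3=both */
    return fclose(f) ? 1 : 0;
}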
>
> Kamezawa-San
>
> I was able to test on a 24-way machine using my parallel page-fault test
> program, and here is what I see:
>
Thank you.
> Performance counter stats for '/home/balbir/parallel_pagefault' (3 runs):
>
> 7191673.834385 task-clock-msecs # 23.953 CPUs ( +- 0.001% )
> 427765 context-switches # 0.000 M/sec ( +- 0.106% )
> 234 CPU-migrations # 0.000 M/sec ( +- 20.851% )
> 87975343 page-faults # 0.012 M/sec ( +- 0.347% )
> 5962193345280 cycles # 829.041 M/sec ( +- 0.012% )
> 1009132401195 instructions # 0.169 IPC ( +- 0.059% )
> 10068652670 cache-references # 1.400 M/sec ( +- 2.581% )
> 2053688394 cache-misses # 0.286 M/sec ( +- 0.481% )
>
> 300.238748326 seconds time elapsed ( +- 0.001% )
>
> Without the patch I saw
>
> Performance counter stats for '/home/balbir/parallel_pagefault' (3 runs):
>
> 7198364.596593 task-clock-msecs # 23.959 CPUs ( +- 0.004% )
> 425104 context-switches # 0.000 M/sec ( +- 0.244% )
> 157 CPU-migrations # 0.000 M/sec ( +- 13.291% )
> 28964117 page-faults # 0.004 M/sec ( +- 0.106% )
> 5786854402292 cycles # 803.912 M/sec ( +- 0.013% )
> 835828892399 instructions # 0.144 IPC ( +- 0.073% )
> 6240606753 cache-references # 0.867 M/sec ( +- 1.058% )
> 2068445332 cache-misses # 0.287 M/sec ( +- 1.844% )
>
> 300.443366784 seconds time elapsed ( +- 0.005% )
>
>
> This does look like a very good improvement.
>
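[Balbir's parallel_pagefault program is not posted in this thread, so the following is only a hypothetical sketch of a stressor in the same spirit; the thread count, mapping size and duration are made up. Each thread repeatedly maps an anonymous region, touches every page to force a minor fault (and hence a memcg charge), then unmaps it (the uncharge side), for a fixed duration.]

/* Hypothetical parallel page-fault stressor, NOT Balbir's actual program. */
#include <pthread.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define NTHREADS  24
#define MAP_BYTES (64UL << 20)         /* 64 MB mapped per iteration */
#define DURATION  300                  /* seconds, as in the runs above */

static void *fault_loop(void *arg)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    time_t end = time(NULL) + DURATION;

    (void)arg;
    while (time(NULL) < end) {
        char *buf = mmap(NULL, MAP_BYTES, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            break;
        for (size_t off = 0; off < MAP_BYTES; off += pagesz)
            buf[off] = 1;              /* each touch: minor fault + charge */
        munmap(buf, MAP_BYTES);        /* exercises the uncharge path */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    int i;

    for (i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, fault_loop, NULL);
    for (i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}

[Build with something like "gcc -O2 -pthread" and run it inside a memory cgroup so that every fault goes through memcg's charge path.]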
Seems good.
BTW, why is the number of page faults after the patch three times higher than
before the patch? Does the difference in the number of instructions account for it?

Thanks,
-Kame
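[A back-of-the-envelope reading of the quoted numbers, derived from them rather than measured separately: with the patch, 1,009,132,401,195 instructions / 87,975,343 faults is roughly 11,500 instructions per fault; without it, 835,828,892,399 / 28,964,117 is roughly 28,900. So in the same ~300 seconds the patched kernel serviced about 3x as many faults while executing only about 21% more instructions overall, i.e. each fault became much cheaper on average.]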