From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"balbir@linux.vnet.ibm.com" <balbir@linux.vnet.ibm.com>,
"nishimura@mxp.nes.nec.co.jp" <nishimura@mxp.nes.nec.co.jp>
Subject: Re: [PATCH 0/2] memcg: improving scalability by reducing lock contention at charge/uncharge
Date: Fri, 2 Oct 2009 17:53:10 +0900 [thread overview]
Message-ID: <20091002175310.0991139c.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <20091002135531.3b5abf5c.kamezawa.hiroyu@jp.fujitsu.com>
[-- Attachment #1: Type: text/plain, Size: 3158 bytes --]
On Fri, 2 Oct 2009 13:55:31 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> The following is the result of a continuous page-fault test on my 8cpu box (x86-64).
>
> A loop like this runs on all cpus in parallel for 60secs.
> ==
> while (1) {
>         x = mmap(NULL, MEGA, PROT_READ|PROT_WRITE,
>                  MAP_PRIVATE|MAP_ANONYMOUS, 0, 0);
>
>         for (off = 0; off < MEGA; off += PAGE_SIZE)
>                 x[off] = 0;
>         munmap(x, MEGA);
> }
> ==
> Please see the number of page faults. I think this is a good improvement.
>
>
> [Before]
> Performance counter stats for './runpause.sh' (5 runs):
>
> 474539.756944 task-clock-msecs # 7.890 CPUs ( +- 0.015% )
> 10284 context-switches # 0.000 M/sec ( +- 0.156% )
> 12 CPU-migrations # 0.000 M/sec ( +- 0.000% )
> 18425800 page-faults # 0.039 M/sec ( +- 0.107% )
> 1486296285360 cycles # 3132.080 M/sec ( +- 0.029% )
> 380334406216 instructions # 0.256 IPC ( +- 0.058% )
> 3274206662 cache-references # 6.900 M/sec ( +- 0.453% )
> 1272947699 cache-misses # 2.682 M/sec ( +- 0.118% )
>
> 60.147907341 seconds time elapsed ( +- 0.010% )
>
> [After]
> Performance counter stats for './runpause.sh' (5 runs):
>
> 474658.997489 task-clock-msecs # 7.891 CPUs ( +- 0.006% )
> 10250 context-switches # 0.000 M/sec ( +- 0.020% )
> 11 CPU-migrations # 0.000 M/sec ( +- 0.000% )
> 33177858 page-faults # 0.070 M/sec ( +- 0.152% )
> 1485264748476 cycles # 3129.120 M/sec ( +- 0.021% )
> 409847004519 instructions # 0.276 IPC ( +- 0.123% )
> 3237478723 cache-references # 6.821 M/sec ( +- 0.574% )
> 1182572827 cache-misses # 2.491 M/sec ( +- 0.179% )
>
> 60.151786309 seconds time elapsed ( +- 0.014% )
>
BTW, this is the score in the root cgroup (same test, no child memcg):
473811.590852 task-clock-msecs # 7.878 CPUs ( +- 0.006% )
10257 context-switches # 0.000 M/sec ( +- 0.049% )
10 CPU-migrations # 0.000 M/sec ( +- 0.000% )
36418112 page-faults # 0.077 M/sec ( +- 0.195% )
1482880352588 cycles # 3129.684 M/sec ( +- 0.011% )
410948762898 instructions # 0.277 IPC ( +- 0.123% )
3182986911 cache-references # 6.718 M/sec ( +- 0.555% )
1147144023 cache-misses # 2.421 M/sec ( +- 0.137% )
Then,
36418112 / 33177858 = 1.098, i.e. the child cgroup is about 10% slower than root.
But this test is an extreme case (60 seconds of continuous page faults on all cpus).
We may be able to do more, but I think this score itself is not so bad.
Results on more cpus are welcome. Programs I used are attached.
Thanks,
-Kame
[-- Attachment #2: pagefault.c --]
[-- Type: text/x-csrc, Size: 453 bytes --]
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#include <signal.h>

#define PAGE_SIZE (4096)
#define MEGA (1024 * 1024)

/* Empty handler: SIGALRM is only used to wake the process from pause(). */
void sigalarm_handler(int sig)
{
}

int main(int argc, char *argv[])
{
	char *x;
	int off;

	signal(SIGALRM, sigalarm_handler);
	/* Wait for the driver script's SIGALRM so all cpus start together. */
	pause();
	while (1) {
		/* fd is ignored with MAP_ANONYMOUS on Linux; -1 is the portable value. */
		x = mmap(NULL, MEGA, PROT_READ|PROT_WRITE,
			 MAP_PRIVATE|MAP_ANONYMOUS, 0, 0);
		if (x == MAP_FAILED)
			exit(1);
		/* Touch every page to take one minor fault per page. */
		for (off = 0; off < MEGA; off += PAGE_SIZE)
			x[off] = 0;
		munmap(x, MEGA);
	}
}
[-- Attachment #3: runpause.sh --]
[-- Type: text/x-sh, Size: 128 bytes --]
#!/bin/sh
# Pin one pagefault worker to each of the 8 cpus.
for i in 0 1 2 3 4 5 6 7 ; do
	taskset -c $i ./pagefault &
done
# Wake all workers at roughly the same time, run 60s, then stop them.
pkill -ALRM pagefault
sleep 60
pkill -HUP pagefault