linux-mm.kvack.org archive mirror
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: balbir@linux.vnet.ibm.com, "xemul@openvz.org" <xemul@openvz.org>,
	"hugh@veritas.com" <hugh@veritas.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	menage@google.com
Subject: Re: [RFC] [PATCH 0/9]  remove page_cgroup pointer (with some enhancements)
Date: Fri, 12 Sep 2008 18:35:40 +0900	[thread overview]
Message-ID: <20080912183540.6e7d2468.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <20080911200855.94d33d3b.kamezawa.hiroyu@jp.fujitsu.com>

On Thu, 11 Sep 2008 20:08:55 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> Performance comparison is below.
> ==
> rc5-mm1
> ==
> Execl Throughput                           3006.5 lps   (29.8 secs, 3 samples)
> C Compiler Throughput                      1006.7 lpm   (60.0 secs, 3 samples)
> Shell Scripts (1 concurrent)               4863.7 lpm   (60.0 secs, 3 samples)
> Shell Scripts (8 concurrent)                943.7 lpm   (60.0 secs, 3 samples)
> Shell Scripts (16 concurrent)               482.7 lpm   (60.0 secs, 3 samples)
> Dc: sqrt(2) to 99 decimal places         124804.9 lpm   (30.0 secs, 3 samples)
> 
> After this series
> ==
> Execl Throughput                           3003.3 lps   (29.8 secs, 3 samples)
> C Compiler Throughput                      1008.0 lpm   (60.0 secs, 3 samples)
> Shell Scripts (1 concurrent)               4580.6 lpm   (60.0 secs, 3 samples)
> Shell Scripts (8 concurrent)                913.3 lpm   (60.0 secs, 3 samples)
> Shell Scripts (16 concurrent)               569.0 lpm   (60.0 secs, 3 samples)
> Dc: sqrt(2) to 99 decimal places         124918.7 lpm   (30.0 secs, 3 samples)
> 
> Hmm.. no loss? But maybe I should look for what I can do to improve this further.
> 
These are the latest numbers.
 - added a "Used" flag, as in Balbir's patch.
 - rewrote and optimized the uncharge() path.
 - moved bit_spinlock() (lock_page_cgroup()) into the header file as an inlined function.

Execl Throughput                           3064.9 lps   (29.8 secs, 3 samples)
C Compiler Throughput                       998.0 lpm   (60.0 secs, 3 samples)
Shell Scripts (1 concurrent)               4717.0 lpm   (60.0 secs, 3 samples)
Shell Scripts (8 concurrent)                928.3 lpm   (60.0 secs, 3 samples)
Shell Scripts (16 concurrent)               474.3 lpm   (60.0 secs, 3 samples)
Dc: sqrt(2) to 99 decimal places         127184.0 lpm   (30.0 secs, 3 samples)

Hmm.. something looks bad in the concurrent shell test.
(But this -mm kernel's shell test is not trustworthy; it is 15% slower than rc4's.)
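As a rough sanity check, the relative change of each metric against the rc5-mm1 baseline quoted above can be computed directly from the two result tables:

```python
# Percentage change of each UnixBench metric in the latest run
# relative to the rc5-mm1 baseline quoted earlier in the thread.
baseline = {
    "Execl Throughput": 3006.5,
    "C Compiler Throughput": 1006.7,
    "Shell Scripts (1 concurrent)": 4863.7,
    "Shell Scripts (8 concurrent)": 943.7,
    "Shell Scripts (16 concurrent)": 482.7,
    "Dc: sqrt(2) to 99 places": 124804.9,
}
latest = {
    "Execl Throughput": 3064.9,
    "C Compiler Throughput": 998.0,
    "Shell Scripts (1 concurrent)": 4717.0,
    "Shell Scripts (8 concurrent)": 928.3,
    "Shell Scripts (16 concurrent)": 474.3,
    "Dc: sqrt(2) to 99 places": 127184.0,
}

for name, base in baseline.items():
    delta = (latest[name] - base) / base * 100.0
    print(f"{name:32s} {delta:+.2f}%")
```

This shows the concurrent shell numbers down by roughly 1.6-1.7%, within the noise the author suspects, while Execl and Dc are up by about 1.9%.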

I also tried to avoid mz->lru_lock (that was in my earlier set), but I found I can't,
so I'm postponing it. (Maybe removing mz->lru_lock and depending on zone->lock
instead is a choice; that would keep memcg's LRU synchronized with the global LRU.)

Unfortunately, I'll be offline for 2 or 3 days. I'm sorry if I can't respond quickly.

Thanks,
-Kame




--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>


Thread overview: 27+ messages
2008-09-11 11:08 KAMEZAWA Hiroyuki
2008-09-11 11:11 ` [RFC] [PATCH 1/9] memcg:make root no limit KAMEZAWA Hiroyuki
2008-09-11 11:13 ` [RFC] [PATCH 2/9] memcg: atomic page_cgroup flags KAMEZAWA Hiroyuki
2008-09-11 11:14 ` [RFC] [PATCH 3/9] memcg: move_account between groups KAMEZAWA Hiroyuki
2008-09-12  4:36   ` KAMEZAWA Hiroyuki
2008-09-11 11:16 ` [RFC] [PATCH 4/9] memcg: new force empty KAMEZAWA Hiroyuki
2008-09-11 11:17 ` [RFC] [PATCH 5/9] memcg: set mapping null before uncharge KAMEZAWA Hiroyuki
2008-09-11 11:18 ` [RFC] [PATCH 6/9] memcg: optimize stat KAMEZAWA Hiroyuki
2008-09-11 11:20 ` [RFC] [PATCH 7/9] memcg: charge likely success KAMEZAWA Hiroyuki
2008-09-11 11:22 ` [RFC] [PATCH 8/9] memcg: remove page_cgroup pointer from memmap KAMEZAWA Hiroyuki
2008-09-11 14:00   ` Nick Piggin
2008-09-11 14:38   ` kamezawa.hiroyu
2008-09-11 15:01   ` kamezawa.hiroyu
2008-09-12 16:12   ` Balbir Singh
2008-09-12 16:19     ` Dave Hansen
2008-09-12 16:23       ` Dave Hansen
2008-09-16 12:13     ` memcg: lazy_lru (was Re: [RFC] [PATCH 8/9] memcg: remove page_cgroup pointer from memmap) KAMEZAWA Hiroyuki
2008-09-16 12:17       ` [RFC][PATCH 10/9] get/put page at charge/uncharge KAMEZAWA Hiroyuki
2008-09-16 12:19       ` [RFC][PATCH 11/9] lazy lru free vector for memcg KAMEZAWA Hiroyuki
2008-09-16 12:23         ` Pavel Emelyanov
2008-09-16 13:02         ` kamezawa.hiroyu
2008-09-16 12:21       ` [RFC] [PATCH 12/9] lazy lru add vie per cpu " KAMEZAWA Hiroyuki
2008-09-11 11:24 ` [RFC] [PATCH 9/9] memcg: percpu page cgroup lookup cache KAMEZAWA Hiroyuki
2008-09-11 11:31   ` Nick Piggin
2008-09-11 12:49   ` kamezawa.hiroyu
2008-09-12  9:35 ` KAMEZAWA Hiroyuki [this message]
2008-09-12 10:18   ` [RFC] [PATCH 0/9] remove page_cgroup pointer (with some enhancements) KAMEZAWA Hiroyuki
