linux-mm.kvack.org archive mirror
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Ying Han <yinghan@google.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	Minchan Kim <minchan.kim@gmail.com>,
	Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>,
	Balbir Singh <balbir@linux.vnet.ibm.com>,
	Tejun Heo <tj@kernel.org>, Pavel Emelyanov <xemul@openvz.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Li Zefan <lizf@cn.fujitsu.com>,
	Suleiman Souhlal <suleiman@google.com>,
	Mel Gorman <mel@csn.ul.ie>, Christoph Lameter <cl@linux.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Rik van Riel <riel@redhat.com>, Hugh Dickins <hughd@google.com>,
	Michal Hocko <mhocko@suse.cz>,
	Dave Hansen <dave@linux.vnet.ibm.com>,
	Zhu Yanhai <zhu.yanhai@gmail.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: [PATCH V3] memcg: add reclaim pgfault latency histograms
Date: Tue, 21 Jun 2011 09:02:50 +0900	[thread overview]
Message-ID: <20110621090250.97c5abe2.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <BANLkTi=fj8xaThqSVFtzX1WGuzykkqSwpQ@mail.gmail.com>

On Sun, 19 Jun 2011 23:08:52 -0700
Ying Han <yinghan@google.com> wrote:

> On Sunday, June 19, 2011, KAMEZAWA Hiroyuki
> <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> > On Fri, 17 Jun 2011 16:53:48 -0700
> > Ying Han <yinghan@google.com> wrote:
> >
> >> This adds a histogram to capture page fault latencies on a per-memcg basis. I used
> >> this patch for the memcg background reclaim test, and figured there could be more
> >> usecases to monitor/debug application performance.
> >>
> >> The histogram is composed of 8 buckets in us. The last one is "rest", which counts
> >> everything beyond the last boundary. To be more flexible, the buckets can be reset
> >> and each bucket boundary is configurable at runtime.
> >>
> >> memory.pgfault_histogram: exports the histogram on a per-memcg basis and can also
> >> be reset by echoing "-1". Meanwhile, all the bucket boundaries are writable by echoing
> >> the range into the API. See the example below.
> >>
> >> changes from v2 to v3:
> >> no change except rebasing the patch to 3.0-rc3 and retesting.
> >>
> >> changes from v1 to v2:
> >> 1. record only the page faults that involve reclaim, and change the unit to us.
> >> 2. rename "inf" to "rest".
> >> 3. removed the global tunable to turn the recording on/off. This is ok since
> >> no overhead was measured while collecting the data.
> >> 4. changed resetting the history to echoing "-1".
> >>
> >> Functional Test:
> >> $ cat /dev/cgroup/memory/D/memory.pgfault_histogram
> >> page reclaim latency histogram (us):
> >> < 150            22
> >> < 200            17434
> >> < 250            69135
> >> < 300            17182
> >> < 350            4180
> >> < 400            3179
> >> < 450            2644
> >> < rest           29840
> >>
> >> $ echo -1 >/dev/cgroup/memory/D/memory.pgfault_histogram
> >> $ cat /dev/cgroup/memory/B/memory.pgfault_histogram
> >> page reclaim latency histogram (us):
> >> < 150            0
> >> < 200            0
> >> < 250            0
> >> < 300            0
> >> < 350            0
> >> < 400            0
> >> < 450            0
> >> < rest           0
> >>
> >> $ echo 500 520 540 580 600 1000 5000 >/dev/cgroup/memory/D/memory.pgfault_histogram
> >> $ cat /dev/cgroup/memory/B/memory.pgfault_histogram
> >> page reclaim latency histogram (us):
> >> < 500            0
> >> < 520            0
> >> < 540            0
> >> < 580            0
> >> < 600            0
> >> < 1000           0
> >> < 5000           0
> >> < rest           0
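
For anyone who wants to consume this interface from a monitoring script, here is a
minimal, hypothetical parsing sketch for the two-column output shown above; the cgroup
path comes from the example, the helper name is purely illustrative.

#!/usr/bin/env python
# Hypothetical helper: parse the "< BOUND   COUNT" lines of
# memory.pgfault_histogram (layout as in the example output above).

def parse_pgfault_histogram(path):
    buckets = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line.startswith('<'):
                continue  # skip the "page reclaim latency histogram (us):" header
            _, bound, count = line.split()  # e.g. "< 250   69135" or "< rest   29840"
            buckets.append((bound, int(count)))
    return buckets

if __name__ == '__main__':
    for bound, count in parse_pgfault_histogram(
            '/dev/cgroup/memory/D/memory.pgfault_histogram'):
        print('%6s %10d' % (bound, count))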
> >>
> >> Performance Test:
> >> I ran the PageFaultTest (pft) benchmark to measure the overhead of
> >> recording the histogram. No overhead was observed on either "flt/cpu/s"
> >> or "fault/wsec".
> >>
> >> $ mkdir /dev/cgroup/memory/A
> >> $ echo 16g >/dev/cgroup/memory/A/memory.limit_in_bytes
> >> $ echo $$ >/dev/cgroup/memory/A/tasks
> >> $ ./pft -m 15g -t 8 -T a
> >>
> >> Result:
> >> $ ./ministat no_histogram histogram
> >>
> >> "fault/wsec"
> >> x fault_wsec/no_histogram
> >> + fault_wsec/histogram
> >> +-------------------------------------------------------------------------+
> >>     N           Min           Max        Median           Avg        Stddev
> >> x   5     864432.44     880840.81     879707.95     874606.51     7687.9841
> >> +   5     861986.57     877867.25      870823.9     870901.38     6413.8821
> >> No difference proven at 95.0% confidence
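
For readers unfamiliar with ministat: the "No difference proven at 95.0% confidence"
line is a pooled two-sample Student's t test over the five runs of each configuration.
It can be roughly re-derived from just the summary statistics quoted above; this is a
sketch only, and ministat's exact computation may differ.

#!/usr/bin/env python
# Rough re-derivation of ministat's 95% confidence check for "fault/wsec",
# using only the N/Avg/Stddev columns quoted above.
from math import sqrt

n1, avg1, sd1 = 5, 874606.51, 7687.9841   # no_histogram
n2, avg2, sd2 = 5, 870901.38, 6413.8821   # histogram

# pooled variance and t statistic for two independent samples
sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
t = abs(avg1 - avg2) / sqrt(sp2 * (1.0 / n1 + 1.0 / n2))

t_crit = 2.306   # two-tailed Student's t, 8 degrees of freedom, 95% confidence
print('t = %.3f, critical value = %.3f' % (t, t_crit))
print('difference proven' if t > t_crit
      else 'no difference proven at 95.0% confidence')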
> >>
> >> "flt/cpu/s"
> >> x flt_cpu_s/no_histogram
> >> + flt_cpu_s/histogram
> >> +-------------------------------------------------------------------------+
> >
> > I'll never ack this.
> 
> The patch was created as part of the effort of testing the per-memcg bg
> reclaim patch. I don't have a strong opinion that we indeed need to merge
> it, but I found it to be a useful testing and monitoring tool.
> 
> Meantime, can you help clarify your concern, in case I missed
> something here?
> 

I want to see the numbers via 'perf' because of its flexibility.
For this kind of thing, I like dumping the "raw" data and parsing it with
tools, because we can change our view of a single data set without
running multiple experiments to collect multiple data sets.

I like your idea of a histogram. So, I'd like to try writing the
perf side once my memory.vmscan_stat is merged (it's a good trace
point, I think) and see what we can get.
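
Something along these lines might work (an untested sketch, in the spirit of
"dump raw data and parse it with tools"; it assumes the existing
vmscan:mm_vmscan_memcg_reclaim_begin/end tracepoints and the default
"perf script" line format, and all script names are illustrative):

#!/usr/bin/env python
# Untested sketch: build a reclaim latency histogram from raw trace data.
#
#   perf record -e vmscan:mm_vmscan_memcg_reclaim_begin \
#               -e vmscan:mm_vmscan_memcg_reclaim_end -a -- ./pft -m 15g -t 8
#   perf script | python reclaim_hist.py
import re
import sys

BOUNDS_US = [150, 200, 250, 300, 350, 400, 450]   # same buckets as the patch
counts = [0] * (len(BOUNDS_US) + 1)               # last slot is "rest"
begin = {}                                        # pid -> begin timestamp (seconds)

# "<comm> <pid> [<cpu>] <time>: vmscan:mm_vmscan_memcg_reclaim_<begin|end>: ..."
line_re = re.compile(r'^\s*\S+\s+(\d+)\s+.*?\s([0-9]+\.[0-9]+):\s+'
                     r'vmscan:mm_vmscan_memcg_reclaim_(begin|end):')

for line in sys.stdin:
    m = line_re.match(line)
    if not m:
        continue
    pid, ts, kind = int(m.group(1)), float(m.group(2)), m.group(3)
    if kind == 'begin':
        begin[pid] = ts
    elif pid in begin:
        lat_us = (ts - begin.pop(pid)) * 1e6
        for i, bound in enumerate(BOUNDS_US):
            if lat_us < bound:
                counts[i] += 1
                break
        else:
            counts[-1] += 1

for label, n in zip([str(b) for b in BOUNDS_US] + ['rest'], counts):
    print('< %-5s %d' % (label, n))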

Thanks,
-Kame

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
Don't email: email@kvack.org


Thread overview: 5+ messages
2011-06-17 23:53 Ying Han
2011-06-19 23:45 ` KAMEZAWA Hiroyuki
2011-06-20  6:08   ` Ying Han
2011-06-21  0:02     ` KAMEZAWA Hiroyuki [this message]
2011-06-21  0:33       ` Ying Han
